Welcome, everyone, to the session by Misut on a tester's guide to quality. Misut joins us from Tokyo and brings rich experience in the quality assurance engineering domain. He will share his story of bringing quality to a project that had huge room for improvement. We are excited to hear it, Misut.

Thanks for the kind introduction. Let me share my screen and then we can start.

Hello again, everyone. I assume for most of you it is still morning, so good morning; for me it is already afternoon, since, as I said at the beginning, I am joining from Japan. I am a quality assurance engineer, and in this session we will talk about the initiatives we can take to improve quality: both the quality of the product and the quality of the processes we follow.

As a starter, my name is Misut and I have 13 years of experience across many different systems and domains, including robotic systems, IoT platforms, cloud systems, microservice architectures, and both API and UI testing. Nowadays I mostly do test automation, executing our test cases in CI pipelines, which means we are doing continuous testing: continuously ensuring that updates or changes applied to the system do not break any behavior. We will talk about the importance of all these testing activities, test automation, test execution, and everything else we can improve in our work.

We perform several stages across the whole software testing lifecycle. We start with analysis of the system under test: first we try to understand the requirements, and then we design test cases to cover the specifications. Then, of course, we implement the test cases so they can be executed automatically, because nowadays test automation is very important. Without it, we would have to allocate a lot of resources to manual testing. That does not mean we can forget about manual testing entirely; test automation still needs support from manual testing, and we will discuss that too. After implementing the test cases, the next stage is executing them, whether manually or in automated ways. And eventually we have the maintenance stage, which is continuous improvement and continuous maintenance. We will go through these stages one by one and discuss what we can improve in each.

Let's start with the first one: analyzing the system under test, the system we are testing and developing. This is very important, because if we do not know the system, or even the users of the system, very well, we will probably have coverage gaps. We will miss things that the end users, the clients, the customers actually do in their execution environments: which platforms they use, which devices they have. If we test only in certain execution environments with only certain data ranges, scenarios may execute differently in our test environments than in the real production environments.

Starting with the practices we follow: nowadays we mostly use agile practices, which are very open to changes and updates.
So we can receive change requests, or requests for additional features or behaviors, from the customers. If they need something else, or are not satisfied with the current flows, they may ask for changes, and that is very welcome in our workflows. But if we do not keep our test cases up to date, meaning a behavior has already been updated but we did not update the assertions for the expected results, we will see discrepancies: the feature now produces a different expected result, but we are still testing against the previous one. So keeping test cases up to date is very important.

In agile practices we know that working software is valued over comprehensive documentation, but sometimes we misinterpret this. It does not mean we need no documentation at all. We still need a basic, shared understanding of the features: what scenarios the users will run and what they expect to see in their environments. When you talk to different people in a project, they all have a basic understanding of the features, but the details are often unclear. Some people say that when you enter a negative value, a pop-up window should appear; other people say that, no, in that situation the request should simply be ignored. If such details are not well documented, we will have misunderstandings, and bugs or vulnerabilities may surface in the production environment.

One improvement action we can take is to sit together with the product owners, the product team, or whoever is responsible for the product or its features, go through all the features, and do a cleanup. If there are obsolete features that no one uses in production anymore, why are we still maintaining them? They raise additional costs: we keep executing test cases against features nobody uses, and we keep updating them along with their dependencies. So the first thing we can do is clean up the obsolete features. For the remaining features, we can build a common understanding: if there are unclear points, we discuss what the expected behavior should be and document it in whichever tool or platform we use to manage our specifications.

The next thing we can do is encourage early QA involvement. If we do not start our test preparation while the design work is happening, we will be too late to execute our test cases, which means we will not be able to give early feedback. If QA members are involved in the discussions from the requirements analysis onwards, we can already give feedback on the testability of the requirements, and maybe even on the usability of the proposed behaviors. And on the development side, as soon as features are developed, we can already start our test executions.
And if tests fail at the development stage, we can already block deployment to the later stages, like the QA or PROD stage, because we have found problems in the development environment.

After the first stage, analyzing the system under test, the next thing we do is design test cases. Designing test cases properly is very important: we may design and execute test cases, but if they do not cover the whole range of use cases and scenario aspects, we will have issues in the later stages. For this purpose we can embrace many different test design techniques, not just one or two. We can use equivalence partitioning to cover different data ranges: not only positive values, but also negative ones, different numbers of digits, different amounts of test data. We can use boundary value analysis to figure out the corner cases: what are the maximum and minimum values supported by the system or the specification? (I will show a small sketch of this in a moment.) And in addition to all this, we can do exploratory testing to come up with scenarios beyond those already defined.

Beyond testing techniques, we can also utilize other verification and validation methods, such as design reviews or simulation, because sometimes we will need them: sometimes we will have testability issues, and for some reason we will not be able to execute tests at all. In that case, maybe we can mathematically argue that the algorithm is correct and efficiently designed, or do a review of the design. Or, if we have hardware modules or components that we cannot test or automate against directly, we may need simulation support, a test harness. So we can utilize many different test design techniques to improve coverage, because coverage gaps are exactly where bugs and vulnerabilities come from.

After we design the tests, documentation matters again, because it lets us build traceability matrices. What are traceability matrices? We map the tests to the features, so we can see which test case exercises the scenario of which feature or requirement, and the other way around: which feature is covered by which test cases. If the matrix reveals that some features are not mapped to any test case, it means they are not tested yet, so we have a coverage gap there. And why are they not mapped to any test case? Testability issues are one common reason. I want to emphasize this specifically, because I think it is very important: if for some reason we are unable to execute, or even design, our test cases, coverage goes down.
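Coming back to boundary value analysis for a moment, here is the small sketch I promised. It is a minimal data-driven test in TypeScript (Mocha/Chai style); the quantity field and its 1-to-100 specification are hypothetical, just to illustrate testing on and around the boundaries.

```typescript
import { expect } from 'chai';

// Hypothetical validator standing in for the real system under test:
// the (assumed) specification says quantity must be an integer from 1 to 100.
const isValidQuantity = (n: number): boolean =>
  Number.isInteger(n) && n >= 1 && n <= 100;

describe('quantity field - boundary value analysis', () => {
  // Values on and just inside the boundaries should be accepted...
  [1, 2, 99, 100].forEach((value) => {
    it(`accepts ${value}`, () => {
      expect(isValidQuantity(value)).to.equal(true);
    });
  });

  // ...and values just outside the boundaries should be rejected.
  [0, -1, 101].forEach((value) => {
    it(`rejects ${value}`, () => {
      expect(isValidQuantity(value)).to.equal(false);
    });
  });
});
```

The same data-driven shape works for equivalence partitioning: each array is simply one partition of the input space.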
So what kinds of testability issues might we have? For example, suppose I am testing a subsystem within a larger system, and I have a dependency: whenever new data is created, my subsystem should automatically pull the update from a dependent subsystem, which is an external module from my system under test's point of view. So when new data is created, my system should update itself, right? How can I test this? First of all, I need a test case that checks whether the update mechanism works. But if I have no control over the dependent system, no rights to create data there, and no public interface, API or UI, for creating data, how can I simulate the scenario? I have to reach a state in which new data exists, but I cannot create it. I have no testability.

In such situations, one thing I can do is use mock data, although it will never be quite the same as real usage; I will show a small sketch of this shortly. Beyond that, I can communicate with the development teams and discuss ways to improve testability. Sometimes I will request additional interfaces that will never be used in production, that exist purely for testing purposes. So close communication with the other teams is very important here. I will repeat myself a few times on this: individual effort is not sufficient for a quality mindset. There should be a holistic approach; everyone on the team should be on the same page and concerned about quality. Of course we have quality teams or quality team members in our projects, but they are not the only people responsible for quality. Everyone is responsible for quality. Then why do we have dedicated quality members or quality teams at all? Because sometimes we act as quality coaches: we are the people who start these initiatives, but the initiatives should be carried out together, in collaboration with the other teams. Discussing with everyone and getting support from the product and development teams is very important; otherwise we may not achieve our quality goals.

The next stage in the software testing lifecycle is implementation, and again it is very important. If we automate our test cases and develop our test code in the right way, we get good outcomes. But if we fall into anti-patterns and the test code itself has quality issues, we get into trouble. What kinds of quality issues? Quality has many aspects beyond functionality; maintainability is one. If there is duplication, then whenever I need to change something I have to touch many different places. Or I may have fragile test cases, or flakiness: tests that sometimes pass and sometimes fail. Or efficiency issues: test executions that take too much time. Usability, maintainability, portability, compatibility... there are a lot of "-ilities" to quality.
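To make the dependency example above concrete, here is the sketch I mentioned. It assumes a Cypress test; the endpoint, page, and payload are all illustrative, not from a real project. The idea is that when we cannot create real data in the dependent subsystem, we stub its interface so the update scenario becomes reachable, keeping in mind that a stub is never quite the same as the real integration.

```typescript
describe('subsystem pull-update mechanism', () => {
  it('picks up newly created data from the dependent subsystem', () => {
    // Stub the external dependency: pretend it just created a new record.
    cy.intercept('GET', '/api/updates', {
      statusCode: 200,
      body: [{ id: 'record-001', createdAt: '2024-01-01T00:00:00Z' }],
    }).as('updates');

    cy.visit('/dashboard');    // hypothetical page of the system under test
    cy.wait('@updates');       // our stub answered instead of the real system
    cy.contains('record-001'); // the subsystem rendered the "new" data
  });
});
```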
So let me go through at least a few of these quality attributes and discuss why they matter for testing. Starting with reliability: what does reliability mean here? Whether I can rely on the test results, the reports coming from the test executions. Consider two situations. In the first, the feature works as expected, but my test case fails. It is not a real bug; it is a false alarm. Why is this a problem, why is it a challenge? Because there is a failure, there is a notification: the test automation framework tells me there is a failing test, and I have to dig into the root cause to figure out whether it is a real failure, a real bug, or a false alarm. That is extra analysis effort. If the test had passed, I would not have spent that time, so there is an extra cost in time and resources.

On the other hand, consider a feature that is not working, there is a bug, but my test case passes and reports no failure. What does that mean? I have silent failures: there are bugs, but I am not aware of them. I will not catch them in time; I will see them in later stages, maybe in the production environment, maybe reported by the end users. That becomes a reputation problem. So getting rid of these reliability issues is very important for the quality of the test results.

But why do reliability issues happen? There can be many different reasons; we call them test smells. There may be many different test smells in our test automation framework, and by doing root cause analysis we can understand why we have them. Maybe we are not waiting for the expected results properly. If the services we are testing are microservices, working independently of each other and asynchronously, then waiting for expected results the proper way is crucial. After I send my request, I may immediately get a 200 response saying it succeeded, but the whole transaction may not be finished yet in the other backend services; part of the operation is still in flight. If I check for the expected result immediately after the 200, the check may fail, because the result simply is not there yet. What I can do instead is adopt a polling mechanism: if the expected result is not there, poll again periodically, up to a maximum acceptable time, say a timeout of one or two minutes, whatever fits. Only after that do I fail the test case; until then, I keep checking for the expected result. Or, if there is some other notification or signal I can use to be sure the whole transaction has completed, I first wait for that and only then check the expected result. I will show a small sketch of such a polling check below.

Asynchronous waits are only one example. There can be many other test smells: fragility issues, dependency issues, or scope-related issues, like test cases that are too eager and try to cover too many different things at once, which makes them really difficult to maintain.
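Here is the polling sketch I mentioned. It is a minimal, generic TypeScript helper; the getOrderStatus call in the usage comment is hypothetical, and the timeout and interval values are just examples.

```typescript
// Poll an asynchronous result until it matches the expectation,
// or fail once the maximum acceptable time is exceeded.
async function pollUntil<T>(
  fetchValue: () => Promise<T>,
  isExpected: (value: T) => boolean,
  timeoutMs = 60_000,  // maximum acceptable time, e.g. one minute
  intervalMs = 2_000,  // how long to wait between polls
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const value = await fetchValue();
    if (isExpected(value)) return value; // expected result has arrived
    if (Date.now() >= deadline) {
      // Only now does the test fail: the result never appeared in time.
      throw new Error(`Expected result not reached within ${timeoutMs} ms`);
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// Usage sketch: the request already returned 200, but the backend
// finishes the transaction asynchronously.
// await pollUntil(() => getOrderStatus(orderId), (s) => s === 'COMPLETED');
```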
So what we can do is list all the test smells we have, by analyzing the test results, and get rid of them one by one.

Another way to improve quality is, of course, having strong quality gates. How can we improve the quality gates? We can embrace state-of-the-art static code analysis tools; one very commonly used example is SonarQube. We can set up the server, define our quality rules, and if there are any violations, notify people to fix them. Additionally, we can embrace peer review: whenever we develop code, we share it with a colleague on the team and get feedback. And on the reviewing side of the coin, doing code reviews well matters too: not just skimming quickly, leaving a couple of comments, and marking everything resolved, but doing a comprehensive code review. We should check not only that all the needed steps are there, but also the way they are implemented; if there is a more efficient way to implement those steps, we can give that feedback and again act as a quality coach, improving not only the functionality but also the efficiency and reliability of the test code.

One example from my project is the maintenance of locators. After some time I realized the UI test cases were breaking too often, because the UI pages were updated frequently and the locators kept changing: class names and class paths were updated, or the XPaths changed. So we sat down with the development teams and made an agreement: whenever they add a new page element or update an existing one, they attach a unique data-test ID. That way, even if the class names or class paths change, the unique ID stays the same, so my test cases do not break after such updates. And beyond robustness, maintenance also improved, because instead of deciphering long, complicated XPaths, I can use the test IDs directly, which are very easy to find among the elements.

After all these improvements we can make while implementing our test cases, there are further improvements we can make at the execution stage. While executing test cases, how can we improve further? First of all, reusability is very important. What do I mean by reusability? On this slide I put a real-life scenario with essentially two lines of code; apart from the logs, it really is two lines. The first line performs an operation: it finds an element and types a query into the text field. The second line just checks the expected result: after I type my text into the field, the system should automatically suggest completions. These two lines make a very simple test case. But by introducing configuration variables, separating the environmental values out as configuration, I can execute the very same test case under many different configurations. I can change the browser I am testing on, and in this way do compatibility testing, or I can change the platform I am testing on.
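Here is a rough sketch of what I mean, assuming Cypress, which is what I use. Cypress.env() and cy.viewport() are real Cypress APIs; the variable names (stage, executionPlatform), the URL scheme, and the data-testid values are assumptions for illustration.

```typescript
// Read environment-specific values from configuration rather than
// hard-coding them, so one test can run on many configurations.
const stage = Cypress.env('stage') ?? 'qa';                     // qa | prod
const platform = Cypress.env('executionPlatform') ?? 'desktop'; // desktop | mobile

describe(`autocomplete on ${stage} (${platform})`, () => {
  beforeEach(() => {
    if (platform === 'mobile') {
      cy.viewport('iphone-6'); // emulate a mobile window size
    }
    // Build the URL dynamically from the stage parameter.
    cy.visit(`https://${stage}.example.com/search`);
  });

  it('suggests completions for the typed text', () => {
    cy.get('[data-testid="search-input"]').type('qual');  // the operation
    cy.get('[data-testid="suggestions"]')                 // the expected result
      .should('contain.text', 'quality');
  });
});
```

The browser can be switched from outside as well, for example with `cypress run --browser firefox --env stage=prod,executionPlatform=mobile`.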
So I can run this test case in the desktop version or the mobile version of the browser: I have an execution platform variable. By introducing it as a configuration parameter, the test case knows whether the execution platform is set to desktop or mobile, and in the before-test hooks it can, for example, add the appropriate user-agent headers, or change the viewport, the window size of the application, to match mobile platform conditions. Similarly, I can change the stage parameter to execute against the QA stage or the PROD stage, and even the application itself: the URLs I navigate to can be generated dynamically from these configuration parameters. The best practice here is, of course, getting rid of embedded values. If I hard-code these environmental values into the test code, I cannot execute the same test case on different configurations; so I separate them out as configuration as much as possible, and in this way I improve reusability.

And of course, I can build test suites to increase efficiency, because for regression I do not have to execute the whole set every time. I can execute only the relevant test cases, or at least the highest-priority ones, to ensure that no high-priority scenario is broken. Executing all the test cases before or after every merge would not be feasible; these executions would consume a lot of resources. Instead, I can narrow the test scope by selecting the correct subsets. In my test automation framework I can introduce tags or annotations, and thanks to these annotations I can select the relevant test cases and execute the correct subsets, the correct test suites, after my development activities.

And eventually, maintenance, the continuous improvement stage, is the last link in the whole software testing lifecycle chain. What can we improve continuously? First of all, robustness. As we discussed under reliability, the robustness and accuracy of the test results is very important. If we have shared reports, we can introduce dashboards, or use open tools; one very common example is Allure. If we utilize any of these reporting tools, we get the chance to analyze previous executions and figure out which test cases were fragile or flaky, sometimes passing and sometimes failing. Then I can understand the root causes, provided I am collecting the evidence. If all I see is that a test case failed, I may have no clue why it failed. But if during the execution I collect evidence, the screenshots, maybe even a video of the execution, I can work it out. Of course this is a trade-off between the resources spent and the benefit these evidence activities bring, so I decide what kinds of evidence to collect. And by figuring out the root causes of the flakiness issues, I can resolve them one by one.
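As a small illustration of evidence collection, here is a sketch of a Cypress/Mocha hook that grabs extra evidence when a test fails. Cypress already saves a screenshot on failure in run mode; this just shows the general idea, and the naming scheme is illustrative.

```typescript
// Runs after every test; a `function` (not an arrow) so `this` is the
// Mocha context and we can inspect the test that just finished.
afterEach(function () {
  if (this.currentTest?.state === 'failed') {
    const name = `evidence-${this.currentTest.title.replace(/\s+/g, '-')}`;
    cy.screenshot(name);                                     // failing UI state
    cy.url().then((url) => cy.log(`Failed at URL: ${url}`)); // context for triage
  }
});
```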
This example is again from my project, and after we started doing this kind of analysis, we reduced the number of flaky tests. Maybe it is a little difficult to read here, but for the same number of executions there were, I guess, at least seven test cases failing before. After we started fixing the root causes, we got rid of all the flakiness issues. And it indirectly contributed to the execution durations too: the tests no longer needed to be retried, because they already passed on the first attempt.

And the execution duration itself is very important. To improve execution durations, if we have the chance, we can separate and split the whole suite into parallel runs: introduce some virtual machines, allocate some test cases to one machine and others to the rest, and start all the executions at the same time. With a collection of parallel runs, we can complete the whole test execution in much less time.

And one last thing, last but not least: collecting metrics and doing monitoring in production, to understand what kinds of issues the customers and end users are facing. For example, if the page is not responding, or the response time shows peaks, we can try to understand in which time ranges the response times peak, or after which operations the pages become unresponsive, or whether we are returning 5xx server error codes or 4xx client error codes. There are many different quality-related metrics, so we decide which metrics represent our goals best, and then we start monitoring them in the production environment to get insight into different aspects of the quality of our product, and of our processes as well.

So, to wrap up: there are many different initiatives we can take to improve the quality of the product and of the processes. Revealing bugs is very important: fixing the bugs found in the product directly improves its quality. But how about avoiding them in the first place? By improving the processes we follow: encouraging QA involvement in the early stages, improving the non-functional tests in addition to the functional ones, with chaos testing activities for example, closing coverage gaps, embracing different test design techniques. At every stage we can improve quality, both in the product itself and in the processes we follow to develop it. And this was the main idea: a holistic team approach, with everyone on the team on the same page with a quality mindset. That is how we can achieve our quality goals, rather than through individual efforts that just execute test cases. Thanks for listening, and if there are any questions I will be more than glad to answer them. Thank you.

Thank you for a wonderful session on test automation. Participants, please post your questions, if any, in the Q&A chat. And while people type their questions, I had one question about... okay, there is a question. Yes, we have a question asking about your experience with quality tools: what are good quality tools that we can use?

Yes. Starting from the management tools: first of all, we can use different tools to manage our tasks and the roadmap.
One very common example is JIRA, or we can use some other open-source tools. Beyond the issue tracking system, for managing the test cases themselves we can use dedicated test management tools; some examples are TestRail or TestLink. There are also JIRA plugins, such as Xray, that we can use to manage test cases. In this way we increase the visibility of the test cases; we can document all the test step definitions and test scenarios, and share them by exporting to external formats, spreadsheets or Excel sheets, and print them out. Other than that, there are the static code analysis tools; as I explained in the slides, SonarQube is a good example, and we can also use linters to check static code quality. Eventually, regarding the test results, we can utilize Allure reporting, or other dashboards; it depends on the tool or framework we are using. For example, I am currently using Cypress as my test automation framework, and Cypress has its own dashboard, where you can already see execution durations, failing test ratios, and flaky-test tracking.

Sure. Thank you, Misut. There is another question about what to do if you are a scrum master and you want to learn test automation. Somebody who is working as a scrum master wants to learn the test automation process; what is the best way?

Yeah, I think the biggest responsibility here is on the test automation engineers themselves, because our job is not only automating the test cases but also letting everyone know what we are doing. We can run demo sessions, or introductory meetings, and invite different people to introduce the activities we are performing. In this way, people get an idea of what we are doing and how we are implementing the automation. And this is a very important and good question, because everyone needs a basic understanding of automation. Not everyone has to do the automation, but everyone should at least understand the basics, because test automation is an important part of our quality activities. Without test automation, manual testing alone would make things really difficult, which is why most teams have automation engineers. If the scrum master or the product owner knows nothing about test automation, communication becomes a little difficult; you cannot speak the same language. If you explain your tasks, your daily activities, they will not follow. If you share the obstacles you are struggling with, they may not be able to help, although normally the scrum master or product owner should support you with your obstacles. So what we can do is collaborative work, and pair programming as well. Instead of just assigning tasks, we do the first tasks together: pair programming, starting with the easy tasks, then assigning them, and little by little introducing people and easing their way into the automation activities.

Sure. Yeah.
Thank you. There is one more small question that maybe we can take first: Tanuji is asking for the link to the dashboard you suggested.

Okay, Tanuji, if it is possible, maybe I will jump into the breakout session and share the tool I am using there.

Sure, sure. That makes sense, I think. Tanuji, please join the Hangouts, and Misut can share more information about the dashboard there. There is one more question; it is a little longer and a little technical, so I will try to summarize. I think the question is about the problem you spoke about: locators changing too often from release to release. It becomes a problem that we do not know which locator we are testing, and then we get an "unable to locate element" error. You had suggested some changes to be made by the developers there. I cannot see who wrote it, it says anonymous, but is that understanding correct, and do you have any more suggestions?

Yeah, that is correct. I requested the development team to add some unique IDs, because originally they only had locators like a class path or a class name. When they change the design of the page, for example move a button from the bottom of the page to the top, the path has already changed, so my test case would break. So the first important thing is a close communication channel between the development teams and the quality teams. When they make such a change, they should let me know; otherwise I would not be aware of it. Whenever they plan to change a UI page, they should tell me in the first place. Again, early QA involvement is very important: I can already give feedback, "if you make this change, maybe it will not be very usable for the end users anymore," and in the meantime I am notified, I am aware of the change, so I can already start preparations and begin updating the locators in my test automation framework. That is one thing. The second thing is adding the unique IDs: if you have these unique IDs, even if you move the button from the bottom to the top, the unique ID stays the same, so the change has no effect on the test case. Those were the two suggestions I made.
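To put the second suggestion in code: a tiny sketch, with illustrative selector and attribute values, of the difference between a structure-dependent locator and a unique test ID.

```typescript
// Before: the locator is tied to page structure and styling,
// so it breaks when the button moves or the classes change.
cy.get('div.header > div:nth-child(2) > button.btn-primary').click();

// After: the developers attach a stable, test-only attribute,
// e.g. <button data-testid="submit-order">Submit</button>,
// and the locator survives layout and styling changes.
cy.get('[data-testid="submit-order"]').click();
```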
Sure. Yeah. Thank you. There is one more question, I think, and then we can probably wrap up. The question is: can you suggest how we can deal with the maintenance issues caused by false alerts?

Oh, yeah. Again, we have to find the root causes: why are we getting false alarms? The test case is failing, but everything actually looks correct, so why does it fail? We have to understand the root cause, and mostly it is a waiting issue. We want to see the expected result, but it is not there yet. For example, after I run my query, I expect to see, let's say, three results on the page. But when I check, there are two, because at the moment I find the elements on the page, it has not been updated yet. If I waited a little longer, there would be three; but at the instant of the check there were two, and since I expect three, my test case fails, because the actual result and the expected result do not match.

So what I can do is, before checking, ensure that the operation has completed on the system side: the system has finished everything it has to do, and only then is it ready to be checked. If, after ensuring this, my check still finds two, then it really would fail, and it really should fail. But if it finds three, it passes. So, first of all, list all the root causes, whether it is a waiting issue or some other reason, and then take the relevant action and fix the test case.

Thank you. Thank you, Misut, for your patience in answering all the questions, and thank you for a wonderful session and for sharing your experience with us today.