Please raise your hand. Yes, completely, 100%. And how much does the entire QA department of yours trust your automation? No hands? One hand. And how much does your management trust your automation? Yeah? OK, that's nice. Without trust, we don't really collaborate. We merely coordinate, or at best cooperate. It is trust that transforms a group of people into a team. I like this quote because it points out the importance of not just going to work. You spend nine to five at work, and you want to add some meaning to the work that you do, along with the people you spend most of your time with. You don't want to just go there, listen to someone, and do something, merely cooperating, without liking it. You want to have that team-building experience with everyone and enjoy your time at work, because you go to work because you're passionate about the work, and you want everyone to know the kind of work you're doing. So I welcome you all to Building the Blocks of Trust in Automation. I am Sneha Vishwalingam, and a little bit about myself: I'm a test automation engineer, and I have set up automation frameworks at two companies so far, and I'm moving on to the third one when I get back from here. This is my, thank you. This is my LinkedIn profile. I don't have a Twitter handle yet; I need to work on that. But this is how I connect with people, so feel free to connect with me, and if you have any questions, just send me a message and I will definitely respond to you. I'm happy to see so many of you here, and I'm very humbled. What are your expectations for this talk? What did you have in mind when you came in? What do you want to take away with you from this talk? No expectations at all? Then how do I live up to expectations you don't have? You would have read the proposal, the summary of the talk. What came to your mind when you read that, and what made you come to this room to attend this talk? I'm sure you're getting something out of it.
I'm sure you'll take that something away with you. Moving on. Having followed some best practices from previous Selenium Conf talks, my team was able to shift from flaky tests to stable, reliable automated tests. And during that time, I learned the importance of actually trusting your test suite to unite the team as a whole. And once you start trusting your test suite, it becomes crucial to the software development life cycle as well. So this talk is going to be about our journey in building that trust, and how we involved the entire organization in the process. We're going to talk about building trust among the automation engineers, among the QA team, and getting management to actually believe in us and trust us. So, a little bit of background on what the company wanted. The company that I joined was looking to embrace automation coming from a fully manual QA environment; they did not have any automation in place. The goal was to provide an easily maintainable and extensible framework that would enable the team to contribute to automation. We wanted a framework that was easy to adopt even for the non-technical people on the team, so that we could involve all of them in the automation. So the journey began. We went ahead with the BDD approach using SpecFlow and Selenium WebDriver with C#. If you haven't heard of SpecFlow before, it's Cucumber for .NET, so the concepts are very similar to Cucumber. And it's "an open-source solution trusted by .NET devs around the world" (that's what the website says). Ours was a Microsoft shop, so we used Team Foundation Server for continuous integration and Gemini for project management. If you're not aware of Gemini, it's like the younger brother of Jira. Then we had nightly scheduled builds, and we had screenshots, reports, logs, and data management. We had everything that was needed for a decent automation framework to be good.
And we trained two of the manual testers who had a good programming background to help us with automation, so we were a team of three working on it. Everything was going well. We developed some tests and everything was fancy; everyone was happy with how the automation was going, until we started getting Christmas lights. Not these lights (I would have been very happy if I got these), but we started getting these instead: red and green test results. So what helped? We knew we had to do something about it and come up with some strategies to deal with it. We started by attacking the flaky tests. Flaky tests are the fastest way to lose trust; all it takes is just one flaky test to spoil it. So what we did was separate the failing tests from the successful tests. We kept them under observation, so that we knew all the tests in the build running in continuous integration were going to succeed, and if one failed, it was because there was a genuine defect in the app, not because of flaky tests. It's not because of us, it's because of them. So we kept the flaky tests under observation and kept working on them, and we started adding stable identifiers and intelligent waits. Basically, we worked on them within our team so that the others could trust the automation, because we weren't giving them bad tests, we were giving them good tests. The main aim was to keep the build green unless there was a change in the app; that way everyone trusts the feedback. And while working on the flaky tests, if we had locator issues and weren't able to find the right elements and locators, we would just talk to the developers and ask them to help us out, because it's a team effort; they should help us help them as well. So we didn't shy away from talking to the developers. They probably hate us now, but it helps us trust the automation even more.
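The "intelligent waits" idea, polling for a condition instead of sleeping for a fixed time, can be sketched in a language-neutral way. The team's framework was C# with Selenium WebDriver; this is a minimal Python sketch of the same polling pattern, with hypothetical names, not their actual implementation:

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    This mirrors the idea behind Selenium's explicit waits: instead of a
    fixed sleep, keep checking for the element or state you need and move
    on as soon as it appears, which cuts both flakiness and wasted time.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError(f"condition not met within {timeout} seconds")
```

In a real Selenium test, `condition` would be something like a lambda that looks up the element by its stable identifier; Selenium's own `WebDriverWait` provides this behavior out of the box.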
And then: don't use Selenium unnecessarily just because you can use it. We were using Selenium to set up prerequisite data as well. It was data that was supposed to already be in the app, but just because we could use Selenium, we wiped everything away and used Selenium to build the data from scratch, and all the tests were dependent on that. So there was a lot of dependency that we were adding. What we did instead was add the data to the database and just restore the database whenever we wanted to run those tests against it. This is something that worked for us; maybe you could use REST API calls instead, whatever works for you in your organization. For us, restoring the database worked best. But the idea is to not use Selenium just because you can; use it where you really have to, where it helps you the most. And then, direct URLs. We had a lot of places where we had to click through a lot of buttons to reach a page. That was also causing a lot of flakiness because of timeout issues, and it was just leading to unnecessary complications. So we added direct URLs. We obviously parametrized them and all that, but we added direct URLs in places where we had to navigate a lot unnecessarily, because the navigation is definitely not something we are interested in testing. We're interested in getting to the page and testing what's on the page. Then, keep the tests short. You don't want tests in which each step depends on another test, right? So you break your flow down into small, manageable, and independent cases so that there's less dependency. That way, if one test fails, it won't bring the whole test run to a halt. You can still proceed, you still get feedback, and you get faster feedback.
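The parametrized direct URLs can be sketched as a small helper that builds a deep link from a base URL, a path, and query parameters. This is an illustrative Python sketch with a hypothetical app URL, not the team's actual C# code:

```python
from urllib.parse import urlencode

def direct_url(base_url, path, **params):
    """Build a parametrized deep link so a test can jump straight to the
    page under test instead of clicking through menus and buttons, a
    common source of timeouts and flakiness.
    """
    url = base_url.rstrip("/") + "/" + path.lstrip("/")
    if params:
        # Sort parameters so the generated URL is deterministic.
        url += "?" + urlencode(sorted(params.items()))
    return url
```

A test would then call something like `driver.get(direct_url(base, "orders/search", status="open"))` and verify only the behavior of that page.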
And that way you can effectively increase your test coverage with each execution of your test suite. So yeah, the aim is to keep the build green always, unless there's a change in the app. That was the goal we had in mind, and it really helped us. And this is essentially what we did: we disabled the flaky tests, but not completely. We put them in a different test suite where we worked on them continuously. The idea was to work on a test until it succeeded consistently for 10 to 20 runs, and only when we were very confident about it would we move it back. I mean, I know flakiness happens, but if it runs successfully for 20 runs, I'm sure the flakiness is solved, and we can trust those tests after that. This is a tip I got from a presentation at one of the Selenium Confs. There are so many things we picked up from those talks that really helped us. They might be just small, small tips, but they make a huge difference when you implement them at work. Code reviews. We set some standards for code reviews. Initially, we weren't very strict about them; we would just see if everything looked fine and merge, which was a big fault. Later, when we had all these issues, we set standards for code reviews and followed in the footsteps of the devs: they had templates, so we adopted those, and we had our own rule set for our code reviews as well. It started with the basics: keeping all the common values in a configuration file. We used a config file to store commonly needed values across the project. Then we extracted common logic into separate functions that others could invoke, and we made sure we avoided repeating the same steps in multiple test cases, which definitely helped avoid redundancy.
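The quarantine-and-promote rule described above (keep a flaky test in a separate suite until it has passed 10 to 20 consecutive runs) can be sketched as a small tracker. This is a minimal illustrative Python sketch; the class and method names are hypothetical:

```python
class QuarantineTracker:
    """Track quarantined (flaky) tests and flag them for promotion back
    to the main suite only after a streak of consecutive green runs.
    """

    def __init__(self, required_streak=20):
        self.required_streak = required_streak
        self.streaks = {}  # test name -> consecutive passing runs

    def record(self, test_name, passed):
        """Record one run; return True if the test is now promotable."""
        if passed:
            self.streaks[test_name] = self.streaks.get(test_name, 0) + 1
        else:
            # Any failure resets the streak: one red run means the
            # flakiness is not yet solved.
            self.streaks[test_name] = 0
        return self.ready_to_promote(test_name)

    def ready_to_promote(self, test_name):
        return self.streaks.get(test_name, 0) >= self.required_streak
```

The key design point is the reset on failure: a test earns its way back into the main (always-green) suite only with an unbroken streak.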
These are small things to take care of, but they become necessary when the test suite starts getting bigger, so it can be maintained in a very efficient way. And then: no documentation is a big no. This is something we really stressed, because people are constantly moving on, and when someone new comes in, if there's no documentation, especially around test cases and flaky tests, it's very difficult to figure out what works. So we made sure all of us put in documentation, maybe not a huge amount, but enough that even someone completely new could read it and understand what it is. And then, relative paths, always. That is something we also follow and fight for, and if you're not able to get relative paths, again, go and bug your developers to help fix that issue. And then, README files. I didn't understand the importance of README files before, but one of my colleagues took great pains to make a beautiful README file, and ever since, it has been so good for us. Each project should have a README file, and it should include any external dependencies, URLs, setup instructions, and configurations and values that need to be changed for a local machine, and all that. And decorate your code with thoughtful comments to make it more readable and maintainable in the long run. Basically, think of coding as an art: you decorate it and make it as legible and beautiful as you can. And this book is basically our code Bible. If you haven't read it before, please add it to your reading list. It really helped my team clean up the code, and it was very useful. I cannot stress enough: please read it. If you want to. Now let's move on to building trust between the two teams. First, communication. We noticed that our manual testers did not have a clear picture of what was automated. Sometimes we would see them regression-testing things that were already automated.
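Keeping common values in one configuration file, as described above, might look like the following. The team used a C# config file; this is an illustrative Python sketch with a hypothetical `settings.ini` whose section and key names are made up for the example:

```python
import configparser

# Hypothetical shared settings file: URLs and timeouts live in one
# place, so changing an environment means editing one file, not many
# test cases.
SAMPLE_INI = """
[environment]
base_url = https://staging.example.com
default_timeout_seconds = 10
"""

def load_settings(text):
    """Parse the shared settings and return them as a plain dict."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    env = parser["environment"]
    return {
        "base_url": env["base_url"],
        "timeout": env.getint("default_timeout_seconds"),
    }
```

In practice the text would come from a file checked in next to the test project, and a setup step would load it once for the whole run.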
And that is useless. It's wasting time, wasting effort, and it's not trusting the automation, and that's not what we want. And that was partly because we didn't communicate well enough; if they weren't trusting the automation, that's something we caused as well, I'm sorry to say. So we addressed the communication gap. As communication is key, we scheduled sharing sessions into our weekly routine. Both the automation and the manual testing teams should be active participants and jointly plan the testing tasks of each project. We would run all the automation, they would take a look at it, and they'd tell us if we were missing anything, if we were missing some test cases. They are the subject matter experts, and they know the ins and outs of the software, so it was necessary even for us to run it by them and make sure they felt all the tests were covered correctly. And then, the right tools for the job. Having an integrated test management environment was the best way to keep the test coverage in check. We've used TestRail and we've even used Excel sheets; just make sure you mark everything that is automated as automated. That way they have a clear idea of which tests are automated and which tests still need manual regression. And TestRail, like all of these tools, has some cool ways to pull metrics that you can use before the release. Then, collaboration with manual testers. As I said, they are the ones who know the ins and outs of the system; they know the business-specific details, they know what kind of test data would be needed for which test, et cetera. So we wanted to channel their expertise into the automation, and we had the manual testers write feature files and attach them to the tickets. Feature files, if you're not aware, are part of SpecFlow: the BDD scenarios. So, the training: how did we train our manual testers to write feature files?
We basically explained how feature files fit into our automation structure. We held workshops where we trained them on our automation framework. We would show them the entire flow of how we create a test, how we generate step files, and how we write pages for those step files. Then, when they wrote feature files and added them as attachments to a ticket, we would take the feature files, generate the steps, and write the pages. We trained them in workshops just so they understood how everything fits into place and had a detailed picture of what they were doing. Tools like SpecFlow were created to enable non-technical people to participate in development directly, but expecting the same people to use Visual Studio or Eclipse to write feature files is not a fair expectation, in my opinion. Sometimes they know more about the domain, and they're not necessarily interested in learning about IDEs and tools. So instead of asking them to learn the IDEs, it seemed like a better option to look for ways to serve the same purpose of enabling the team to write feature files. We found this app called Tidy Gherkin. It's a Chrome app, very easy to install, and you don't need an IDE for it. It is free and very, very lightweight, so it was very easy for the manual testers to use it to write feature files. And honestly, not all manual testers are open to learning automation. They are excellent in their field and they're happy with that; they don't want to move into automation. But there are some people who are very excited and really want to move into it. If you want to get something out of someone, you need to give them easier ways to listen to you and help you with what you want. So that was the reason we used Tidy Gherkin while training the manual testers.
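For reference, a SpecFlow feature file is plain Gherkin text. The sketch below shows the kind of file a manual tester might produce in Tidy Gherkin and attach to a ticket; the feature, steps, and data here are entirely hypothetical, not from the talk:

```gherkin
Feature: Order search
  As a support agent
  I want to filter orders by status
  So that I can find open orders quickly

  Scenario Outline: Filter orders by status
    Given I am logged in as a support agent
    When I search for orders with status "<status>"
    Then every order in the results has status "<status>"

    Examples:
      | status  |
      | open    |
      | shipped |
```

The automation engineers then generate step definitions from a file like this and implement the page interactions behind each step.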
I will show you how Tidy Gherkin looks after the talk; we'll go through a small demo of it. Documentation. Technical documentation is basically an invaluable resource. Everyone knows about it, and no one really uses it unless they're really interested, but it is still an invaluable resource. You can use anything for documentation, but it's important to have easy access to it. For example, you have options like Confluence, or you can make your own website and maintain documentation there, or you can even have Word documents in a common folder that everyone accesses. Whatever works for you, but make sure you keep it up to date, and we tried to do that. I wouldn't say we did it every week or anything, but we tried our best to keep our documents up to date. I've used Confluence before, and it is so much better to collaborate online and work on documentation with set templates. When something makes it that easy to collaborate, I think it encourages the team to go in and update all the documentation. So that is how we collaborated with the manual testers and helped the manual testers help us, and we also had stand-ups in the morning where we would share report updates and all that. It was a good collaboration for us. Now, how do you get your management to trust you? Yeah, marketing your automation is kind of a big deal, because if you don't market it, no one knows about it and all your effort goes to waste. The benefits of automation are basically faster testing cycles, improved test coverage, and the ability to catch bugs earlier. You know about this, but how do you get it across to management? Because all they need is proof; without proof, they won't believe you. So what sells to management is basically money, metrics, and visibility. With the money part of it, I'm not sure how we can help.
Maybe "faster", maybe; you'll have to figure that one out, I really don't know how we can help there. But with the metrics and visibility part, we can surely help. Collect metrics. What kind of metrics do you collect? Test metrics: basically test coverage, the test progress curve. That's the measure of the breadth of the functional coverage you have so far, determined by comparing the number of successful tests to the total number of tests that need to be completed for a given project. If your coverage is poor, you have to address that immediately, and they want to know that; they want to make strategies depending on the situation, so metrics really help them. Defect metrics: defects in the final product versus defects reported, and defect turnaround time. There's also the ratio of the number of pre-delivery defects to the number of post-delivery defects; that is a metric you really need to know to understand how well the team is performing. Next, we talk about visibility: how to increase the visibility of automation in the organization. How many of you already use wallboards in your company? That's a good number. For those of you who don't know or don't use wallboards, a wallboard is a visual aid that displays the real-time metrics you're interested in. You can configure whatever widgets you want to see on a day-to-day basis and put them on the wallboard. It's a large, continuously updated display of information, usually located where the whole team can see it, right? So why do we have wallboards? You can just walk into the office and see the state of the build. No more digging into email reports, no more trying to figure out, from the junk in your inbox, which one is the actual report you're looking for. If you know what I mean: there are so many emails you get from TFS, and you have to figure out which one you're interested in, and all that.
You just need to look over your monitor to see the metrics, right? And it is kind of a double-edged sword if your automation is not well established, because it's like openly saying, oh, so many tests failed, and everyone's going to be on your back the entire day about why the tests failed. Whereas if it's just in your email, it's just your headache to figure out why it failed. With a wallboard, there's a lot of pressure coming from the entire company, because the whole office knows something has failed and something's wrong. So be careful when you do that. And if you don't want the entire organization coming after you, maybe start with a wallboard only for your team, then for your QA team, and then, if you're very confident, go and put it in your CEO's office or wherever you want. But be very careful as you take that step. Then, brown bag sessions. How many of you have brown bag sessions regularly in your company? What topics do you cover in them? I'm very curious to know. Hands somewhere over there. Okay, that's nice. For those who don't know what brown bag sessions are, or don't have them in your company, they're just another form of communication used within the group for visibility purposes: discussions, presentations, workshops, and so on. The presenter at a brown bag session is aware that the participants will be eating their lunches. If you know the history of brown bags, that's why they're called brown bag sessions: people bring their lunches in brown paper bags, sandwiches and Pepsi and all that, and they eat while the presenter presents something. It's a very informal session.
So the topics at brown bag sessions are typically things that are nice to know, but not things you really, really need to know. Right? It's an informal meeting and all that. And why do we have brown bag sessions? Again, for visibility. The topic itself doesn't matter so much; it's about your personal preference, and obviously the entire organization doesn't want a technical deep dive on Selenium WebDriver. They're not going to gain anything out of it, but they would be open to hearing about the latest developments in the field of automation, right? And about how you plan on implementing them at work. Mentioning some technical aspects, like Selenium Grid or the unit test framework you used and how you're going to implement them, is fine, but deep technical dives on general topics are something they're not going to like. They're going to get bored very easily, and they're just going to say this automation team is so full of themselves. So be careful when you choose your topics; pick some nice, gentle topics that they'd be interested in listening to, and that also relate to your automation. Then, attending networking events. Attend meetups, conferences, workshops, et cetera. It's good all around, because you will no doubt return from an event, workshop, or conference with an abundance of fresh knowledge and, likely, a burning desire to implement all of what you've learned at the conference. And your organization is going to like that. Who doesn't like positive energy? Who doesn't like people who bring in fresh ideas from what they've learned? I understand that sometimes attending conferences and workshops can be expensive, but attending meetups is free, and you'd be surprised at the kinds of topics you can actually learn about at meetups. You can have wonderful brainstorming sessions out of those and inject inspiration into the work culture.
So there are so many advantages to attending networking events, and I'm sure your workplace would also encourage such activities. So, putting all these blocks together. Oh, and just in case you haven't noticed, we had color schemes in the presentation, so this is literally putting all those blocks together. Putting these blocks, these strategies, together has helped my team gain confidence not only in the tests we wrote but also in each other, because there was a lot of collaboration, a lot of working together, and a lot of transparency throughout. When you're transparent with each other, you actually trust each other. So I hope you could relate to at least a few strategies from this talk that you could actually implement within your team to help build trust. Even if not exactly what we did, you could map what we did onto something that fits your company and benefits you. Thank you so much. Do you have any questions? What about the demo? Remember that everything always fails in a live demo. So this is how Tidy Gherkin looks. It's very easy to download: just go to this page and install it. I've already installed it, so I just launch the app and it opens up. This is how it looks, and you can easily insert the template and write your feature here. So I'm just writing something, then you write your scenarios here, and then you insert your scenario outline. Don't worry about how the template output looks at first, because you can see it tidies it up; that's the beauty of Tidy Gherkin. Then you just keep adding however many scenarios you want and write your test cases here. And then you save this; it saves as a .feature file. That way you can just attach the feature file to your ticket, write whatever you want in the ticket, and then it comes to the automation engineer.
We just use the feature file as it is, or sometimes make changes according to our automation standards, and then work with it. See how much time it saves: without Tidy Gherkin, you have to read all the specifications in the ticket, try to figure out what they mean, and then write your own test cases out of them, but instead you have living documentation here that you can use right away. Can I? What was that? Can you make it a little bit bigger? Probably; let me change the resolution. So, do you have any questions for me? You want a scenario outline? You got the scenario, but how do you get the scenario outline? Over here, with Examples. Do we need to mention it? Yes: a scenario is for single-time execution, a scenario outline is for data-driven execution. Oh yeah. And then you can just insert your tables over here; this is how you add data to it, and it automatically becomes all pretty because it tidies it up, and then you save your feature file, and that's how it saves. Is the "Examples:" line not required? No, you could just remove it; it depends on you. If you want to give examples, you can add examples. Actually, we are using Eclipse with the Cucumber plugin, so we definitely need to mention "Examples:" before giving data. Okay. Thank you. I wanted to know how you handle code reviews when you're producing these tests. The feature file is written by manual testers? Yes. Okay, it basically just comes to us and we change it according to our automation. We don't have code reviews for that, because they attach it to the tickets, and we take that and make the necessary changes to implement it in the automation. Are there changes to the step definitions that need to undergo some sort of review? The step definitions are made by us. Okay. It's just the feature files that they send to us. I have another question.
So building trust in automation may be a starting exercise when you begin, and it builds up, but did you find that it's a continuous process? Because by the time you have trust in the automation for one release, the next release is getting ready, and if it involves lots of UI changes, it's going to change everything and the tests are going to become flaky again. So what tips would you suggest for communicating with developers in the initial phase itself, so that we don't have to rework everything again? If there are a lot of changes, we would already know that the release is going to have a lot of changes, so be prepared for that. We try to figure out where we can expect the changes, and we keep ourselves alert in those areas and work on them. That starts when development starts, but then it keeps changing until it's actually released, so how do you keep up? That is the thing with automation, right? You automate something, then it changes, and then you change it. So what we do is automate something only when we know it's completely done and ready for testing, and the manual testers have already taken a look at it once; only then does it come to automation. That's one more reason it was nice for the manual testers to write feature files: they've already tested it once while writing test cases for us. So when it comes to us, we know these things are working, and we are automating something that's already working, and we'll be able to catch it if it stops working. Yeah, thanks. Any more questions? We can probably take one more last question. All right, Sneha, thanks a lot. The brown bag session is something I'm going to take away and implement in my workplace as well, so thanks for that too. Thank you.