So I'm Prasad. I've been working with ThoughtWorks for the last six and a half years now, and prior to that I was with Persistent Systems, again a Pune-based firm. I've always been a QA, doing whatever comes my way; functional testing has been my forte. Particularly in ThoughtWorks, the role leans more towards the consultant side, and that's why being aware of best practices, being able to tell people what the right things are, what the problems are, raising concerns, is sort of our major work, you could say our bread and butter. So today I'm going to talk about how a lot of us are doing automation. Can I have a raise of hands? How many of us do automation? Great, all of you. How many of us are PMs, project managers, or tech leads or something like that? Great, great. So I think in all of our careers, a lot of times we get into a situation where either we have to start automation for a legacy system, or we have to take over a legacy automation suite and carry it forward. And we always come across problems, and then we apply either hacks or whatever historical solutions we know. They save us, they do help us for some time; however, in the longer run they do fail. And at least from my experience, yes, I have seen such problems within our projects as well. So what I'm going to do is put my case study in front of you, and also talk about the kind of thought process I have kept in mind, which has helped us build a scalable product in terms of automation. I wanted to know from you guys, is there anything else you expect from this? What do you want to hear? I can see if I can manage that, or I can park it. Anyone? When you came here, what were you expecting? Something, right? Something, no? Blank slate, I can fill in anything? Great, I'll take that as golden silence. Cool. So we had a raise of hands, and a lot of us do automate.
So can anyone tell me why we automate in any of our projects, why we choose the path of automation? I think we have heard enough in the morning, but someone who can give me... sorry? Perfect. Maybe not boring for everyone, but yes: reduce time, save time, stop doing the redundant work, faster feedback, right? We have heard terminologies like continuous integration, and a lot of us are going into that mode of building automation; we need to run those tests. I think even Sargass was telling us the same. So yes, fast feedback, definitely. We want a mechanism which keeps running with different inputs, because that's what an automation script is: you give different inputs and you should get the expected outputs. If there is a discrepancy, that's a failure. So that's the fast feedback we want in our fast-moving life, actually, and our fast-moving projects. Second, I think every organization has to sustain its business, and for sustaining business they definitely need good case studies, and that's why delivering on time is important. For the same reason, we want time to perform exploratory testing. Manual testing can be boring, but what if I can do what one of our speakers told us: put analytics into testing and utilize that for exploratory testing? I can see where my market is, analyze it, and do manual testing around it. For that I need time. Most of the time, the friends and people I have met are under pressure to complete, say, 100 QC test cases or something like that; there is a test plan which needs to get completed. Just imagine you don't have that life, and you are told: this is your domain, go exploratory. I'm pretty sure we'll find more bugs. So that is one place where we need time. Second, yes, when we say QA needs to start first, QA needs to start right from the analysis of the product.
We also need to be forerunners in what is going to come next. Right now I'm on functionality one, I'm testing that, but what is going to come in functionality two or three? I need to know that, I need to analyze that. It might happen that I'm not familiar with that domain; I might need to do that homework. All these things require time. So automation, yes, gives us faster feedback, and since you have taken care of those scenarios, you can rely on them and utilize your valuable time on the other required stuff. Then why do we fail? I mean, I'm not saying that we have failed miserably, but there are at least some places where we look back and say: whatever I was doing to date is not giving me the benefits I thought of. I'm not getting that time, right? That happens with all of us; it happened with me. For me, too much unwanted automation is one reason we fail. We get into a situation where I had a series of manual test cases, I had a goal to get them automated, and I spent three months getting all of them automated. But one day I see that they are actually not covering my useful test cases, the areas I need to rely on to say: okay, this build which has come out of this automated testing is ready to go to production. That is what my goal is. So, too much unwanted automation; we will look at what I mean by that and why we end up with it. Another reason I think automation, or we, fail is when automation is done as an afterthought. What I mean is, I have seen a lot of projects where the development is already done, and then we come into the picture and say: okay, now I need to write tests for it, because a lot of industry standards are going that way, or yes, we want to write an automated regression suite. So it's like: fine, we have agreed to automate, let me write some tests.
But if you have a product in hand which is stable and in production, I would rather not write any tests for it. The whole point is: the thought process we keep in mind when we are building products, right from gathering requirements from the client, why can't we apply that same thought process to automation? I feel that automation code has to be of the same quality as our production code, because it's going to test your production code. What if it's not maintainable? What if it's not scalable? You're going to slow down your fast-paced movement; you started at maybe a rabbit's pace and now you're going to move at a turtle's pace. So keeping this context in mind, I want to set the stage for what we are going to talk about next. I'll take my case study, tell you my journey of challenges, and then we can try to relate to the commandments I will give out. Let's see if they work for us; that's the takeaway. In my nine years of experience, I've been a bit lucky to have worked on both mobile application testing and web application testing. That has given me the flavor of testing and an understanding of the complexity each of these brings. Plus, if I keep the functional domain in mind, I have worked in three major domains: e-commerce, train travel, and now recently airline travel. When I say e-commerce, I've actually worked on projects where we have sold small products, from toys to paints; to that level we have worked on this. Now when it comes to e-commerce, definitely all the analytics stuff comes to mind, right? What promotions are we running? What things do I need to sell? All those things. E-commerce also involves payment scenarios, so there is a lot of complexity which comes with payments, and that brings its own non-functional requirements. Train travel and airline travel are huge domains, trust me.
Even a single delayed flight can mean there are actually 200 to 300 different combinations you need to test: multi-segment, single-segment. This is just one example I'm giving. It might happen that one of your flights got delayed but the other got diverted. But yes, this is a scenario which needs to come to mind, right? So all of this has given me a good opportunity to judge what can be better for the next projects, and I just wanted to share that with you. For the rest of the presentation, I want to focus on the most recent of my expeditions, I would say. It's in the airline domain. We are building a mobile application for Android and iOS as well as their mobile website. So this is my area, this is my product under test. It's a big organization, trust me. They are around the third or fourth leading airline in the US. Counting all the ground crew and everyone, they are around 80,000 employees. Plus, the mobile development team, the ThoughtWorks team working on this app, is the only team in there which is working in agile. People here know agile, Scrum, right? Great, so not an alien word. Mobile app development is very different from web application development. Mobile app development brings its own challenges, starting with the life cycles. Browsers are very well tested and, I would say, stable in that respect. However, if you really follow mobile, how the progression is happening, Google and Apple are sort of, I should not say in a rat race, but they are in a race to bring out new functionalities. Now, say you are building a mobile app and, for example, they recently launched something called Apple Pay. What if my application doesn't support Apple Pay for the next two years? It is going to be obsolete by the time I get it into development. In a web application you can at least prioritize or have a slower rollout of functionalities, but in mobile, the pace is very rapid.
And that is the reason even the automation tools and frameworks that are available are changing very rapidly. Call it Appium, call it other tools, whatever is out there. We are using Calabash, correct? Yeah. A mixed-skill team: that has been another of our realities. In my team we have developers, business analysts, iteration managers, because we are in agile, and testers. And among testers there are two types of skill sets, I would say: those who are completely manual, focused on the domain, and others who do manual as well as automation. We do not have dedicated automation testers; we have both. Now, how do you cope with this, right? They are all your team. You need to move along together; you cannot have one left behind and another running ahead. We are located at four different locations. However, the complexity is that the app development is done in one repository contributed to by all four locations. So it's not like they are working on separate components; it's the same code base. And when I say agile, it goes without saying that we are into CI. That is the reason, even if, say, the fourth location is in the US, if they push in some code, that's going to trigger the CI for me. So keeping all these things in mind, what did it mean to me as an agile QA? I have to be very swift, very fast-paced, adopting the new technologies as well as running and writing my automation, or doing my manual testing, at the right time, not late in the cycle. And hence, today my focus is only automation; I'm not going to cover the strategic planning on the manual side. Automation should serve as a help and not an overhead for me. Just imagine, in such a team, where we are rushing to get Apple Pay out, your automation is failing for some flakiness because of scroll issues. That is the common case, right? And that is just going to slow you down.
You would say: I'm not going to touch that, let me manually test it. Automation could have given you more confidence and made you release-ready. So keeping all this context in mind, I have come up with six commandments which I feel we can all try to relate to and customize based on our own company situations, whatever your projects have. It might be a live service, like a stock exchange, with some other circumstances. But let's see if there are some generic principles we can apply and take home. The first one: let automation run in the blood. What it means is, automation should not be a QA responsibility only; it should be everyone's responsibility. We'll get further into the details later, but it should be taken care of by every team member. It is done to help you get production-ready faster; your CI becomes more reliable and effective. That is what automation is for. How do you do that? Be thoughtful while automating. Even if someone says, go ahead, automate these 100, do your own pass, go through them and say: no, not these 100, but I think these five are going to add value, let me create those. Just to point out: when we are doing app development, developers are thoughtful. They choose the right design patterns, they refactor. Why can't we as QAs keep refactoring our code? So, clean coding practices: go ahead, read about them. Why do reader-friendly variable names help? Why is relying on comments a bad practice in your code? All these clean coding practices need to be understood. Design and tools: as I said, choose the right tool. I mean, it's a Selenium conference, but I would say that if your product under test is not compatible with Selenium, don't go for it just because one of your friends has suggested it. Try to analyze what the right one is; do the POCs or spikes, whatever is required. QA builds from CI.
Most of our manual testers wait for a build to be out, right? And we need to give them a build which has gone through CI. So be thoughtful about that: whatever automated tests are going to run really need to smoke it out, or do the sanity check of it, all those things. How much and where to automate is again an aspect I'm going to talk about, but always keep in mind: what are you automating? Where are you automating? Yes, break the silos. A lot of our ways of working are in silos. A lot of the time the development team sits somewhere, the manual team sits somewhere else, and they send us tests in, say, Excel sheets, mails and different mechanisms. Let's not have that; let's get everyone into the same boat so that every QA contributes to automation. With this example I want to point out, as I said, that I had a mixed-skill team. I had team members who had never done a git push or used any version control. Now, how do you help them cope? If you really put yourself into their shoes, they get intimidated even listening to talk of automation. It's something like your business owners, right? Generally they are very functionality-focused people. If you talk to them about onResume and Activity lifecycles and all these things, they will not understand; they'll say, I don't understand that. That's what happens with manual testers if you keep telling them: you know what, tell me the manual test case, I'll automate it for you. Let's break the ice there. So you need to find a mechanism in your project so that they can also come together. Maybe they do not need to implement your test cases, but at least they provide you the test cases in the same repository where you are going to automate them. I'll come to how we can do that. And I think it is best when your business refers to a test case; that is like an inherent feedback loop you keep getting.
A lot of times in agile, particularly, you keep doing showcases, you keep checking with the product owner: are we going right? In those cases, why don't you provide the report, or the repo access which has the test case layer, to your product owner and say: okay, these are the test cases I have written. There could be a challenge if they are written in a technical fashion, and I'll come to that, to give you an insight into how, or what tool, you can choose. Cool. Second, as I just said: keep optimizing as you proceed. Do not stop. Do retrospectives; see where you have gone wrong, what the things are which are pricking you, pricking you small or pricking you big. That is going to give you input; take out action items, and do the iteration planning in such a way that all those action items get scheduled in the right fashion. It might happen that you cannot do a big-bang change and cleanup, but do it in an incremental way; do not pile it up for one year. I think Sargass also said the same thing: keep looking at your automation, keep running it. When I say refactor, I think I have spoken enough; also delete and merge. A lot of the time what happens is we keep adding automation and we reach a situation, and I have heard it even today, where someone says: I have 10,000 test cases. But actually, if you look, a lot of those test cases will be duplicates of each other, differing in just one conditional situation: this test case expects true, this one expects false. Why can't we think of a flow-based test case which combines these? A lower number of tests helps you maintain them better. Basically, fail fast, grow right. Do not worry about your failures; do the right retrospectives and get the right action items out. The second commandment: aim for the test pyramid. Have people heard about the test pyramid? Yes, great, thank you. So, for those who don't know:
This is what a test pyramid looks like. The test pyramid is a triangle where the base is nothing but the unit tests, which developers should be writing. If a developer is building a calculator application and implementing an addition operation, then "a plus b should give you c" needs to be tested at the unit level, so that with different values, two plus five gives seven. These are mock-based tests, but they at least test the functionality or the logic in mind. So these are the unit tests your developers need to write. Unit tests are generally written at a class level; whatever module people write, even if it's functional programming, the unit tests are at that level. Second, these modules do not work in isolation; they work with each other. One module accepts input and gives some output, and some other module takes that as an input. So this integration also needs to get tested, and you have to think of it at a different level. If you really dig down, you will find different frameworks available. For example, for unit tests, JUnit is already there; if people are developing .NET applications, NUnit is there; TestNG is there. All of these provide you the APIs and provisions to write these unit tests. In case your project involves UI, you will have JavaScript. Why would you want to keep a test for a JavaScript error message in an E2E test? If, on invalid input, your JavaScript should show a red message saying the input is invalid, write a JavaScript test; there are Jasmine tests we can write for that. Just Google around your area of focus and you will find a lot of frameworks. So what the test pyramid particularly tells you is that all the test cases at the bottom are technology-facing tests.
And at the top, you have the UI or web service tests. The final product is not your code; the final product is the product, the web app or the mobile app. It needs to work on an Android phone, it needs to work in a Chrome browser. So you need to see that the browser is able to load all your classes; even if a plus b correctly gives c, it needs to come up on the UI. That entire integration needs to be checked. And I need to do an end-to-end flow so that I do not check only one page. Say, if I keep IRCTC in mind, I don't just want to check the journey page, whether I can search a journey from one place to another; I want to test whether I can get all the way to the checkout. Second, if you are thoughtful in this manner, and you are very clear that you have a lot of unit tests covered, you will remove the redundancy at the E2E level. You will not write an E2E test that says: on no input and clicking the button, you get these red messages. Those tests are covered at the JavaScript level; you are not going to repeat them as E2E tests. And because all these tests are running on your CI and your build is coming out of it, your manual testing load is also lowered. Automation should always be written to complement your manual testing; whatever time and confidence I gain from automated testing, I am going to put into exploratory testing. Also, as you go up the pyramid, the cost to run, the time to run, and the impact a test makes all go higher. Even if you write 10,000 unit tests, they might take, say, 3 or 4 minutes to run. But if you write 10,000 end-to-end tests using Selenium, they are going to take at least 2 to 3 hours. Then there is the time to diagnose: if an E2E test breaks, you have to look at whether it is an app problem or something else. But if a unit test breaks, you know that a plus b has broken; nothing else has broken. So the time to fix those is small as well.
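To make that base of the pyramid concrete, here is the calculator addition check as a minimal JUnit-style unit test. The class and method names are illustrative; plain Java assertions are used so the sketch is self-contained, where a real project would use JUnit `@Test` methods.

```java
// A unit check for the a-plus-b example: fast, no browser, no UI involved.
class CalculatorTest {
    public static void main(String[] args) {
        // 2 + 5 must give 7; a failure here points straight at add().
        if (Calculator.add(2, 5) != 7) throw new AssertionError("add is broken");
        if (Calculator.add(-1, 1) != 0) throw new AssertionError("add is broken");
        System.out.println("all unit checks passed");
    }
}

// The tiny unit under test (illustrative).
class Calculator {
    static int add(int a, int b) {
        return a + b;
    }
}
```

When this fails, nothing else needs investigating: the fault is in `add()` itself, which is exactly the fast-diagnosis property described above.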
The impact is the flip side: yes, a plus b works in isolation, but that doesn't tell you it works as a product, whereas an E2E test tells me that my entire product is working fine, go ahead, go to production, don't worry. So when we say thoughtfulness, this is what I meant; if we are not aware of this term, let's get into it. I have taken this from one of Anand's sites, where he has come up with such a good representation. If you just Google around the test pyramid, you will find a lot of documentation and different ways your thought process can be focused. Cool? How have we done this in my project? As I said, it was a mobile app project. We have unit tests: for whatever user story we are implementing, the developers write unit tests, for iOS, Android, and mobile web. Devs own them; the ownership needs to be very clear. A unit test is nothing that you and I would write as QAs. Integration tests: devs own these as well, via TDD. We have a practice of test-driven development at ThoughtWorks; however, even if you are not there yet, you can try to bring it into practice, or at least agree that before a user story is called developer-done, the integration tests are an important part and need to be written. That's what I'm trying to say: each of us needs to retrospect on where our project stands, what our circumstances are, and how we can apply this. Functional tests: we focus on E2E, and QAs and devs both own them. It shouldn't be that a developer changes some locators, pushes them, and it's my job to clear up the mess; no, that's not how the team works. So there is an attitude, or what I could call a practice, change which we need to keep in mind as well. However, if I'm testing something and a developer has written the happy scenario, and I think adding one more scenario would add value, I should go ahead and write that functional test.
Particularly in my project, because it's a mobile app, and you know that with a mobile app, the way it looks drives a lot of the user ratings. If you cram a lot of information onto a mobile screen, people are going to say it's a very messy app. The UI is very important in mobile apps, right? On a website, on an eBay or a Flipkart, you can have a lot of promotions, an offer zone and whatnot, but on mobile you need to be very precise and concise about what you present. So for that reason we have GUI tests as well. Here the objective of the GUI tests is to check what the colors are, what the padding given to elements is; these are essentially image comparison tests. There are different tools for this; PhantomJS is one, and there are a lot of others. These exist in our project. Third, in our project we have strongly depended on OOP concepts, object-oriented programming. Aware of it? Perfect. Object-oriented programming tells you: be modular, keep abstracting your code. If a class has been defined, know what its responsibilities are; do not clutter it with a lot of services. These are the object-oriented principles every one of us needs to be very well versed with, and that's what we have kept saying; how it has helped will come up in the next slides. We use page objects. I think we got enough information today, and from our own experience a lot of us know page objects. For example, the home page of a banking website would show user profile information and some account links, and you can perform three operations: check your account balance, open a Demat account, apply for a loan. So we have used page objects in our project, where we are doing airline ticketing; we also have a series of pages which come up, like our journey search page is going to give me a results page.
The results page is going to give me more details about the flight, and so on; the final page is the booking confirmation. And then there are different cases: for the check-in flow it's going to be different; for a user management flow, like my profile and account updates, it's going to be different. But each comes up as a series of pages. So I just want to quickly go through the points we should keep in mind for page objects. Only required services need to be defined. For example, on this home page I should not be doing anything more than what I can see on it: I can check my account balance, but maybe I cannot add a payee or add a new account. So I should not be defining those when I write the home page class as a page object. We need to be very clear about which user services we define in the class. Second, any page in itself knows what web elements or mobile elements it contains, so it should know only about the locators which that page contains. A lot of the time we get into the habit of saying, oh, this continue button is there everywhere, let me define it in one common place. No: if it is on that page, define it there; if it is on the other page, let it be there also. Then we can see where it should live: if you have an inheritance hierarchy defined, we can move it up, but if not, it's okay to have the continue button in two places. Just define all the locators which are relevant to that page only. Navigation: in any application, even in manual testing, we know the flow, that after the journey search you will get a results screen. Will you jump straight to a booking confirmation? No. So we need to define what the navigations are. The page object should clearly say that on triggering an action, like the click of a continue or submit button, it is going to go to a deterministic state: it is only going to go to the results screen, not to the booking confirmation.
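Putting those page-object points together, here is a minimal sketch. The class and method names are illustrative, not the project's real code, and the driver calls are elided as comments so the sketch stays self-contained.

```java
// The flow reads like the manual flow: search -> results -> booking.
class PageObjectSketch {
    public static void main(String[] args) {
        ResultsPage results = new JourneySearchPage().searchFlights("PNQ", "DEL");
        BookingConfirmationPage booking = results.bookFirstFlight();
        System.out.println("reached " + booking.getClass().getSimpleName());
    }
}

class JourneySearchPage {
    // Locators live with the page that owns them, nowhere else.
    static final String ORIGIN_FIELD = "origin";
    static final String SEARCH_BUTTON = "search";

    // Only the services this page really offers, and the navigation is
    // deterministic: a search always lands on the results page.
    ResultsPage searchFlights(String origin, String destination) {
        // driver.enterText(ORIGIN_FIELD, origin); driver.click(SEARCH_BUTTON);
        return new ResultsPage();
    }
}

class ResultsPage {
    BookingConfirmationPage bookFirstFlight() {
        return new BookingConfirmationPage();
    }
}

class BookingConfirmationPage { }
```

Because each service method returns the one page it can navigate to, the compiler itself stops you from chaining search straight into a booking confirmation.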
By defining this, half of your work is done. Even if the developer has introduced a bug where the app goes to a wrong page, your stack trace is going to tell you that the expected page has not come, and that tells you: hey, you have broken some navigation flow in the app. You don't even have to go and check the automation code. So we get better debugging when a test fails; it should be that way. Third, for any action a page object wants to perform, we come to the concept of a driver: the page object needs to delegate to someone else. Page objects need not know how to click; they just need to know that there is something called click in the world. A page object has a continue button; this continue button can be clicked. If a page object has a text box for the username, it knows it cannot be clicked, but text can be entered into it. But how is that done? Should a keyboard come up, or should I use JavaScript or something? The page object should be agnostic about that. Who should do it? The driver; I'll come to it. Registered with a page registry: the page registry is like an accountant who knows what pages exist in my project. So tomorrow, if someone tries to add a page which is not relevant to my project, or adds a page and tries to refer to it without the page registry knowing, you will get an error. These are the sort of skeletons you need to build into your framework, so that a lot of your guidelines do not always have to go through a knowledge-sharing session: the skeleton of the framework itself says, this is what I need you to do. So when a new joiner comes, or even if you roll off from the project, it's not a problem; you don't have to be in that project forever. So, the driver. It needs to be hard-working. What is a driver? Basically, as I said, the driver is a class which knows how to perform those actions. So if I say click, in the web application world it will be findElement and then click.
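A skeletal "hard-working driver" along these lines might look as follows. The method names are illustrative; a real implementation would delegate to Selenium's WebDriver or to Appium, but here the actions are just recorded as strings so the sketch runs anywhere.

```java
import java.util.ArrayList;
import java.util.List;

class DriverSketch {
    public static void main(String[] args) {
        AppDriver driver = new AppDriver();
        driver.click("continue");
        // One call from the page object produced the click AND the wait.
        System.out.println(driver.performedActions());
    }
}

class AppDriver {
    private final List<String> actions = new ArrayList<>();

    // click() is a complete action: it clicks and then waits for the
    // loader to disappear, so page objects never repeat that wait.
    void click(String locator) {
        actions.add("click:" + locator);   // e.g. findElement(...).click()
        waitForLoaderToDisappear();
    }

    private void waitForLoaderToDisappear() {
        actions.add("wait:loader-gone");   // e.g. a WebDriverWait condition
    }

    List<String> performedActions() {
        return actions;
    }
}
```

The payoff is that the project-specific wait lives in exactly one place; no page object ever writes "click, then wait for loader" as two steps.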
And right now I'm going to rely on my Selenium API for click; in the mobile world I might use Appium, which again builds on Selenium, and so on. Correct? In all projects, I'm pretty sure, we always have a customization: a lot of the time, after a click, either a loader comes up or it takes some time to navigate to the page. So whatever wait is required for your project, the driver should do it as part of a complete action. Whenever a page object says continue dot click, it will click as well as wait for the loader to disappear; you do not have to write in the page object, again and again, click and then the next step, wait for the page to load or wait for the loader to disappear. Your driver should take care of that. It may have setup methods: a lot of the time in web application automation we have to launch browsers, we have to create the browser object instantiation. The driver should do that, so that the page object can simply say: give me a browser object, maximized to this screen, and then run my test. It can take care of customizations, as I just said. So, to give a gist of the ecosystem on my project: because of this, we were able to write a single test layer. The test layer calls the step implementations, the step implementations call the relevant page objects, and the page objects call the respective libraries to perform the action. What I mean is: if I'm running my test for the Android app, it needs to use a driver which performs the click, or tap, or touch, for Android. If I'm running my test for iOS, it needs to do the same action for iOS. If I'm running the same test for the website, and it is relevant, then I need to use Selenium, or Watir, or whatnot. So this is what our ecosystem is. As I said in my first commandment, let your business look into the scenarios. So how did we get that resolved in our project?
If you write a JUnit test, the business is not going to look at it; they will discard you, they'll call you mad. So what we have tried is behavior-driven scenarios. I think people have heard of behavior-driven development; Cucumber, yep, cool. What is Cucumber? It gives you a provision to write your tests in plain English; even in Java we have JBehave, and Cucumber JVM is also there. The point here is that your test case layer becomes completely friendly to a non-technical persona. That person can immediately look at a test case and say: this is what is covered, and wow, this is the coverage of this functionality, and I'm very happy with it. So, try to have behavior-driven test cases. To go through it again in bullet points: for a scenario, I need a clear given, which is a precondition; I need to know what actions I perform; and I need to know what assertions I make. Obviously, in an E2E scenario it's not going to be this simple three-line sentence; it's going to be a series of steps. However, it's going to be clear that this scenario covers my domestic coverage, or an international route, or a single-segment flight. So it's going to be very clear for the business, too, to understand what this scenario covers. They might even add value: hey, could you add a test case around multi-segment? And we could have a very healthy conversation: no, don't worry, we already have a unit test or integration test covering that. That sort of healthy conversation needs to happen. The entire goal is to have a suite that gives confidence. How have Cukes helped us? The scenarios are E2E, and Cucumber has acted as the framework. Just to give more information on Cucumber: Selenium and WebDriver are tools which help you do user action simulation.
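A behavior-driven scenario of the kind described might read like this; the route, data, and step wording are invented for illustration, not taken from the real project's feature files.

```gherkin
Feature: Journey search and booking

  Scenario: Book a one-way domestic single-segment flight
    Given I am logged in as a frequent flyer
    When I search for a one-way flight from Pune to Delhi
    And I select the first available flight
    And I pay with a saved credit card
    Then I should see a booking confirmation with a booking reference
```

A product owner can read this, see immediately that domestic single-segment coverage exists, and ask for a multi-segment variant in the same plain language.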
But we need, say, JUnit or any xUnit as a framework because it provides you a mechanism to execute these tests and get reports out of them, which is more important. Cucumber provides you that. Cucumber even provides a lot of different features where you can dry-run and look at technical problems and whatnot. So Cucumber serves as a whole framework for you. You can have reports, and then, based on your customer requirements, you can make them better. You can use the different report beautification tools that are available, integrate them in your project, and create pie charts, histograms, whatnot. Yeah, so user acceptance tests, whatever they write. So that's the whole idea when we are going for that. What I'm trying to say is, whatever state our project is in right now, let's try to aim for the test pyramid, which means your functional tests come at the top, which means you have fewer functional tests, and valuable ones. If you're going to write fewer functional tests and you want your manual testers also contributing and giving you ideas, they need the test cases in a mode friendly to them. If you want business to look into them, this is your friend. In all these cases, behavior-driven user acceptance tests do help us. They help gel the team. Now, in case you're not using Cukes: build independent test scenarios grouped into functional checks. All your test scenarios should be atomic; they should be very independent of each other. It should not happen that if one test case fails, your entire suite has gone for a toss. Try to have very clear setup and teardown methods. Second, always provide sensible names to the tests. This really helps to understand what the test failed on; it clearly tells you the area of failure. A check-in related test has failed, or a journey search related test has failed. It clearly does.
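Those two rules -- atomic tests with clear setup/teardown, and names that point straight at the failing area -- look like this in any xUnit framework. A minimal sketch using Python's `unittest` (the test bodies and data are invented placeholders):

```python
import unittest


class JourneySearchTests(unittest.TestCase):
    """Each test is atomic: setUp builds fresh state, tearDown cleans it,
    and the method names say exactly which area failed."""

    def setUp(self):
        # fresh state per test -- no test depends on another having run
        self.session = {"user": "frequent_flyer_123", "results": []}

    def tearDown(self):
        self.session.clear()

    def test_domestic_one_way_search_returns_results(self):
        # placeholder for a real search; the name alone tells a reader
        # "journey search, domestic, one-way" if it goes red
        self.session["results"] = ["BOM-DEL 09:00"]
        self.assertTrue(self.session["results"])

    def test_checkin_rejects_missing_booking_reference(self):
        # a failure here immediately reads as a check-in problem
        with self.assertRaises(KeyError):
            _ = self.session["booking_reference"]
```

When `test_checkin_rejects_missing_booking_reference` fails, nobody has to dig through logs to know which area broke; the report says it in the name.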
A lot of times I've seen we duplicate the names of tests, or create big names, and we get the wrong impression when a test fails; we don't understand it, and we have to dig in and look into the problem more. Avoid setup and assertions in the test case layer. Even if it is not Cukes, do not write "the value should be equal to true" there, and do not write a setup method like "let me instantiate a browser" in your test class. Let all of that go down to the driver, so that tomorrow, if you change from Selenium to WebDriver, there is exactly one class which needs to get updated. If you write it in every test case, you might have, say, 10 or 20 feature files, or 20 test classes, and you need to update every one of those places. Perfect. Fifth one, last two to go: maintainable test data. If you have test data, it should be maintainable; you should be able to use your own test data in a way which is easy to maintain, easy to use, easy to understand. For example, if this is a batch of users and this is my test data for different logged-in users, how can I keep it very understandable? How can I say that this username field is the username, nothing else, with no ambiguity? Keep it close to the domain. A lot of the time, having your test data close to your domain helps. Say, in the case of a banking project, be very specific about which known fields you need to put in. If you are in the airline domain, put in all the fields which are related to airlines, like a frequent flyer number and all those things. Do not call it something like user ID. If it is a frequent flyer number, name it frequent flyer number, so that it also builds a culture in your project, so that everyone who reads that test says that the frequent flyer number has gone wrong. That really helps in the longer run: all of your team talks the same language.
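Keeping the data close to the domain can be as simple as naming the fields with domain vocabulary. A small sketch (personas and values are made up for illustration): the test data is keyed by persona, and the field is `frequent_flyer_number`, never a vague `user_id`.

```python
# Test data for different logged-in users, keyed by persona.
# Field names come from the airline domain itself, so a failing test
# reads as "the frequent flyer number has gone wrong", not "user ID".
LOGGED_IN_USERS = {
    "gold_tier_member": {
        "username": "asha.k",
        "frequent_flyer_number": "FF123456",   # illustrative value
        "tier": "gold",
    },
    "first_time_guest": {
        "username": "guest01",
        "frequent_flyer_number": None,         # guests have no FF number
        "tier": None,
    },
}


def frequent_flyer_number(persona):
    """Look up a persona's frequent flyer number by its domain name."""
    return LOGGED_IN_USERS[persona]["frequent_flyer_number"]
```

A scenario can now say "Given a gold tier member" and every reader, QA or business, knows exactly which data that pulls in.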
And that really helps even when you have a lot of client calls, because the clients are very close to the domain and they are going to talk in that language. One of your team members would say "the user ID has gone wrong", and the business would say, "what are you talking about? I don't know that." Keep it simple; do not complicate it. There are some ideas which we have implemented to maintain test data and keep it simple. Whenever a scenario is very simple and I can provide the data in the scenario itself, we have gone ahead and provided the test data in the scenario. For example, this one: when I search a flight, origin Bombay, destination Delhi, type one-way. Here, in the scenario, I get my test data. I do not have any complexity reading it; I understand it's a one-way flight, Bombay to Delhi. Also, for example, in the case of a seat-map related thing, I should get 13A as an aisle seat, or it could be a window seat. All of this is going to help you maintain your test data in a very simple manner; your scenario, as you read it, is itself telling you all these things. What if the complexity of the test data increases? You can't write all the test data there; your scenario would look bad. What do you do in that case? You can use a hash. A lot of the time we do payments, right? We have Visa, we have Amex, we have PayPal, we have different methods, and the details are going to be different for each. So are you going to say "I submit my credit card" with all 10 fields spelled out? Then your scenario is going to look bad. What if your scenario was just "when I enter Visa card details", and this "Visa card" is nothing but a key into a hash, which is going to provide all the related information that is required? Can you relate to it? Yeah? Test data objects: in case your project is very complex and even a test data hash is not helping you, what do you do? A lot of the time your test data comes from, say, JSON, or it comes from some database. Correct?
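The "Visa card as a key to a hash" idea can be sketched like this (structure and field names are illustrative; the card numbers are the widely published Visa/Amex test numbers, not real cards):

```python
# One readable key in the scenario ("Visa card") expands to the full
# hash of payment fields, so the scenario itself stays one clean line.
PAYMENT_CARDS = {
    "Visa card": {
        "number": "4111111111111111",   # well-known Visa test number
        "expiry": "12/30",
        "cvv": "123",
        "holder": "TEST USER",
    },
    "Amex card": {
        "number": "378282246310005",    # well-known Amex test number
        "expiry": "11/29",
        "cvv": "1234",
        "holder": "TEST USER",
    },
}


def card_details(key):
    """Resolve a scenario-friendly key to the full payment data."""
    return PAYMENT_CARDS[key]
```

The step implementation for "when I enter Visa card details" calls `card_details("Visa card")` and fills all ten fields, while the scenario never mentions a single one of them.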
It might happen in such cases. Are you going to duplicate that data again into a hash? No, don't do that. If you have read it from a database, if you have read it from JSON, use objects. Say I have a "delayed multi-passenger PNR". Now this key goes to a factory, like a builder pattern, and says: give me PNR data with all these delayed, multi-passenger details. This might query the database, or it might already have a JSON object created out of it. Okay? So the point here is: based on the complexity of your project, choose a test data strategy, but try to keep it simple, because otherwise maintaining it is going to take a lot of time away from the domain work. As the last disclaimer: avoid sheets and CSVs. A lot of the time we go for keyword-driven, hybrid, whatnot, with sheets and CSVs and text files. Why? In the end, we have to write code in Java or Python or something, and all of these languages provide data structures. So why can't we leverage the stack we are already using? You will actually have fewer folders to maintain. I have heard of projects where a script runs at the start which sets up the database, and then you query it. What if the DB connection fails? Even if your automation script was right, even if your app is right, because of that connection failure your test is going to go red. It's just a hassle. Okay. Done? Okay, cool. Last one: in case you write a framework, document it in detail. And keep all your tests running in CI; use some CI tool. And one thing: every experience, no matter how bad it seems, is a blessing. So let it be bad, but retrospect, get the right action items, and fix it. Thank you.