I've seen a lot of people try to automate tests, and a lot of the time the tests get abandoned, which makes me very sad. The reason test automation projects get abandoned is usually not that the team isn't disciplined; it's that the team is resource-constrained and the QA, or whoever is doing the testing, is not able to justify the value of testing. So the reason I'm giving this talk is to help you prioritize your tests based on how much value each test would give. That's why we are here today.

Let's start with the first question people keep asking me: where should I start? There are a lot of ways to start testing, and a lot of teams get stuck here debating: what is the shortest path to 100% test coverage? Should we start from here or from there? Should we use this tool or that tool, this technique or that technique? Should we start from the back end or the front end? And so on and so forth.

This conversation is very familiar to me, and in my experience a lot of the time people look at the test pyramid. For me, the test pyramid is mostly a description of what the structure of most test projects looks like: you have a lot of unit tests, because they are small and light and they test your components, and you have a few UI or integration tests, which also exercise the components but, more importantly, cover across your whole system.
So I have a very simple interpretation of the test pyramid, but the mistake a lot of teams make is that they look at the pyramid and think it's a guide to how to do testing: that you're supposed to climb up the pyramid, starting from the bottom with the simplest unit tests, and only write your integration tests as you need them. That's one school of thought, one camp, and I see a lot of teams debating this. This was one of the debates at one of my previous companies: "We should start with unit tests for our back end first, because they're faster to write and we can get coverage of our core functionality quickly."

A lot of teams get stuck here. Some say we should start with unit tests first because they're faster and lighter to create. Others say no, we should start with UI or integration tests, because we only need a few of them and they cover a broad swath: whatever the unit tests cover, the integration tests will also cover, and more. And the thing is, both camps are right. That's why a lot of teams get stuck in this debate: they get analysis paralysis and nothing actually happens for a few months. This really does happen a lot. It happened at my old company: the senior developers had a debate, we made no decision, and nothing happened.

The reason people get stuck in this conversation is that it's the wrong conversation to have right now. When you argue about unit tests versus integration tests, you're trying to figure out how to test. The problem is that's putting the cart before the horse: you're trying to figure out how to test before you know what to test. So what you should actually be
talking about, and how we should reframe the question of "where should I start?", is not how to test but what to test.

So let's look at how to figure out what to test. The way we do that is with priorities: we prioritize which features we want to test first, for two reasons. First, we want to make sure that whatever we do in test automation delivers value as quickly as possible, because I don't want to see test automation projects get cancelled. I always want the QA team, or the developers if they are writing the tests, to be able to show management the value testing delivers and why resources should be allocated to it. Second, once you have priority levels set on all your features, you can categorize your tests into tiers by the level of monitoring and attention each tier needs.

What we want at the end of the day is something that looks like this: a tiered system, almost like a disaster severity scale. We group our tests into tiers so that the most important tests get the most monitoring, and the most attention if any failures happen within that tier. Once I reframe the question from how to test to what to test, a lot of teams quickly develop an intuition for which features they want to test first. My rule of thumb is: start with your scariest feature, because you probably have a very good reason why you think that feature is scary. For me, prioritization comes down to how much trouble I get into if this
component doesn't work. So I have a "scare-o-meter". It's a very simple framework, just four questions that I pose for each component I want to test: What is the business impact if this component fails? How frequently is the component used? How complex is the component? And how much domain expertise do you need to understand how this component works, whether you want to develop it or test it? We rate each question on a scale of one to five and use the total to categorize our tests later.

Let's go through each question; they're very simple. First, business impact: how much would a failure affect business operations and revenue? If there's very little impact, for example if our "email customer" feature broke, it would just annoy people a little, so we give it a score of one. But if the checkout feature fails, we call that high impact, because we're going to lose customers and revenue. I usually ask my product managers or business stakeholders to quickly list the ten most important features where they want to make sure no failures happen.

Frequency of use is pretty obvious and is related to business impact: the more often users use a feature, the more complaints your customer support team will get if it fails. I recommend using analytics, if you have them, to get more objective data about frequency of use, because how you imagine or design for users to use your product is usually not how they actually use it; they will use it in ways you didn't think of. So get analytics if you can.

Now, complexity. Complexity is measured by how many logical paths there are through the software; for us testers, we usually
call this test cases so usually the programmers they like to call this psychometric complexity if it's really straightforward you can explain it to a five year old we just give you a score of one if you need a little bit of knowledge to figure out how to test this feature for some maybe it's like as complex as assembling a subway sandwich then give a score of three if it's really really complex then there's a very good chance that if you get a new developer to add a new feature to this component you're gonna the developer is going to introduce a new bug because it's such a complex flow there's a lot if and else there's small loops a lot of cases and usually what I like to do is I like to go around asking the developer which code are you most afraid of touching and just get a list of 10 10 answers and that's usually a good place for you to start writing your test for and then the third one the last one is domain expertise so usually people overlook this domain expertise is do you need to get a specialist to explain to you how or to understand the requirements of this component itself so the reason why I ask this is because we want to make sure that if the original developers who worked on these components are no longer around they are automated tests so that if someone new comes on board to maintain a feature or add new functionality he's not going to screw up and introduce new bug so we can give you a score of one if it's very simple to understand like anyone who can understand how this software works give you a score of three if you need to know a little bit about the industry like for example accounting software you might need to know like why is accrual why is credit why is debit and and so and so forth if you need an industry specialist like for CAD software you might need to get the silver engineer sitting beside you to explain to you okay this is how I think the software should work then you can give you a rating of five so this is our four questions so and 
to put it in business terms, this is what they mean to the business. The first two questions, business impact and frequency, are simply: how much would a failure cost the business? The complexity of the component is more like: how likely is the component to fail if we get a new programmer to make changes to it? If it's very complex, there's a good chance of a new failure. Domain expertise is somewhat similar: it might be very expensive to always have to invite your civil engineer to sit down with you for UAT to make sure the component is working, so you want to automate that.

What we want to do now is put our tests into tiers. I usually add all four scores up, and I keep it simple with just three tiers: a sanity/smoke test suite, a critical test suite, and a full test suite. The sanity/smoke suite I monitor daily, and I automate it first. The critical suite I run on every release. The full suite I run only for major releases, because it's a really long-running suite; it might take a full day to complete, so you might skip it for hotfixes and keep it on a less frequent monitoring schedule, weekly for example.

Now let's try a very short exercise. Imagine we have a transaction management system for real estate agents. We're going to look at a few components of this system and give them scores. First we have "log in as an agent". Then we're going to add a property to our listings, create a transaction as a salesperson, create a lease agreement as well, generate the invoice PDF and email it, and calculate how much commission the agent receives every month. So let's take
a look at "log in as an agent". If the login feature fails, how big is the business impact? We give it a five, because if agents cannot log in, the system is completely useless. I'll give frequency a five too, because agents log in quite often. But it's not a very complex feature, and it doesn't require a lot of domain expertise, so overall it scores 12.

Next, "add a property listing". If an agent is not able to add a listing today, it might annoy him a little and slow down his sales progress, but he could do it tomorrow, so it's not such a big impact. He might add a new listing once or twice a week. It's a somewhat complex feature to test, and to build, because you need to test the number of bedrooms, the number of bathrooms, and if it's a rental, whether the property allows pets, so there are a lot of knobs to test. But it doesn't require a lot of domain expertise to test or develop. Overall it scores 10.

Now the third one, creating a transaction. I'd give this a five, because a one-day delay might cost the agent his sale. He might create a transaction once, twice, maybe three times a week. It's fairly complex, because there might be stamp duty, taxes, or other line items depending on what kind of transaction it is, and you might need a little domain expertise, just to understand the local regulations, to be able to develop and test this feature.

Now, generating the invoice PDF: I'd say it's pretty important, you do it as often as you create transactions, it's somewhat complex, and it doesn't require a lot of expertise. Overall, 13. Sorry, we are running out of time now.
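The scoring walkthrough above can be sketched in code. This is a minimal Python sketch, not anything shown in the talk: the tier cut-offs (12 and 9) and the per-question breakdowns marked "assumed" are my illustrative assumptions; the talk only gives the totals for login (12), add listing (10), and invoice PDF (13).

```python
def scare_score(business_impact, frequency, complexity, domain_expertise):
    """Sum the four 1-5 'scare-o-meter' ratings into one priority score (4-20)."""
    ratings = (business_impact, frequency, complexity, domain_expertise)
    assert all(1 <= r <= 5 for r in ratings), "each question is rated 1-5"
    return sum(ratings)

def tier(score):
    """Map a total score to a test tier; the cut-offs (12 and 9) are assumed."""
    if score >= 12:
        return "sanity/smoke"   # automate first, monitor daily
    if score >= 9:
        return "critical"       # run on every release
    return "full"               # run for major releases only

# Components from the exercise; breakdowns are assumed where the talk
# gives only a total (or no score at all).
features = {
    "log in as agent":             (5, 5, 1, 1),  # total 12 (from the talk)
    "add property listing":        (2, 3, 4, 1),  # total 10 (from the talk)
    "create transaction":          (5, 3, 4, 3),  # assumed breakdown
    "generate invoice PDF":        (4, 3, 4, 2),  # total 13 (from the talk)
    "email invoice":               (2, 3, 1, 1),  # assumed: low impact, simple
    "calculate monthly commission": (3, 3, 3, 2), # assumed breakdown
}

for name, ratings in features.items():
    score = scare_score(*ratings)
    print(f"{score:>2}  {tier(score):<12}  {name}")
```

With these assumed cut-offs the output matches the tiering in the talk: login, create transaction, and the invoice PDF land in sanity/smoke, the listing and commission features in critical, and emailing the invoice in the full suite.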
Actually, okay, so I'm going to skip toward the end and go quickly through the last few features. We've now rated all of our features, so what I would do is put them into our tiers. Based on the points, login goes into the smoke and sanity suite; our critical suite gets "add a property listing" and "calculate monthly sales commission"; and our full suite gets just "email invoice", because it's not that critical if it fails; we'll just send it again. And this is how we'll start: we'll automate tests from our sanity suite first, then the critical suite, then the full suite.

Once you have figured out what you want to test, it's actually pretty easy to figure out how to test it. For login, some teams might want to go at it from an API perspective or from a UI testing point of view; but for calculating the commissions, for example, unit testing might make more sense. So it's pretty easy to decide how to test each component once you know what you want to test.

Anyway, this is just a guideline, and you can change it according to what your company or your team needs. You can add more questions, like how much time it takes to write the test. You can change the weights, or use a different scale; maybe you just want a yes/no scale, or an A/B/C/D scale with an S class for extreme difficulty, or you might want to add more tiers. But my rule of thumb is: don't make it too complicated. The framework should let you decide quickly.

The last thing I want to say is that a big project will always seem very scary and overwhelming to start with, but focus on one thing at a time, focus on one
feature at a time, and if you start with the thing that is most scary, it actually gets very easy. So that's all, thank you. Are there any questions?

Thank you so much, Li. We are actually sitting in between everyone's coffee break, so people will be wanting their coffee, but some are still hanging around, sipping coffee in their chairs, and they'd like to hear the questions answered, so maybe we can take one or two. I'll read them back to you. There's a question from Kathike: how do you assess the feasibility of achieving 100 percent automation of an application's test cases?

I would actually say it is very difficult to achieve 100 percent automation. In fact, I would ask the team: which features are stable and which are not yet stable? Prioritize automating the stable features first, because with unstable features you're going to spend a lot of time going back and forth with the marketing team and the design team, reworking the tests as the design changes. So try not to get too fixated on 100 percent test automation; I think the key objective for your test automation should be: what is the highest value I can give to the company?

The next question is how to decide the priority between manual and automated tests during short release cycles. Well, with very short release cycles, sometimes manual testing can't be helped, because frankly there are times when writing the automated test just takes longer than doing the manual test. But if it's not going to be a one-off manual test, if it's going to happen every week, all that manual testing time is going to add up, and I
would strongly suggest negotiating with your manager to give you more time, say two weeks, to sit down and automate it, so that you can save more time in the future. Try to convince your management to invest the time, to allocate you time to sit down and do the automation, rather than forcing you to stick to the schedule.

So that's where we end now. Thanks a lot, Ray; I could truly relate to your slides, from how to start to what to actually start with. Loved your talk. I'm sure people have personal questions for you, so she will be available in the VIP section: search her name and hit enter, and we will all be meeting with her again. This stage will now be occupied by the next speaker. Also, this talk was brought to us by Gage, so we would like to thank them for sponsoring us. You can also go to the lounge section to meet other people, talk to them, and learn more from them. Thanks a lot for joining in, and before you go, don't forget the polls; please fill them in so that we can improve our sessions. Thank you. Hey, thank you, Puja. Thank you, everyone.