Thank you very much for having me over here. I love the inclusivity of this conference: at an Appium conference, they wanted an exploratory tester to speak. How much more inclusive can a conference get, that they want a tester at an automation conference? I really love that aspect of it, and I love speaking to people who have been testing and who have been doing automation. So, just a show of hands: how many of you do exploratory testing? And how many of you don't? There are a few people who don't. How many of you write automation code? How many of you automate tests? OK, great.

For me, having been an exploratory tester, here is how I understand the testing spectrum. Whatever your understanding of testing is, it has these components: understanding the context; discovering things beyond that context; setting up, be it a lab or an environment; coming up with experiments; running those experiments; and then using the reports to influence people. Do you broadly agree that this is the spectrum of testing? This is a good time to stop and say, "I disagree." Any disagreements? Yes? No? No. OK, awesome, we are in sync.

From the outside, this looks like a smooth, phased flow: somebody understands the context at the start of the sprint, translates that into a setup, translates the setup into a set of tests, and then starts running those tests. In my experience as an exploratory tester, here is one thing I discovered: it is not a sequence of phases. It is a continuous activity. When you are running tests, you could be thinking about what to influence. When you are running tests, you could be thinking about setup. When you are running tests, you could be thinking, "Oh, I missed understanding this part of the context."

So this is what happens in my mind as an exploratory tester, and now I ask: what do we automate? Every time I've heard the term "automation" in the testing space, most people tend to mean automating the run part of the tests, because that forms a large part of the work. Why? Because there is repeatability in what we need to do, there is human fatigue if humans had to keep doing it, and there are checks, and checks can be automated. All of this addresses one pain: if those tests were done by machines instead of humans, we would move faster. So there are multiple reasons why people think about automating this part.

I've asked this question multiple times on Twitter recently, partly as prep for this talk: who does automation help? Whose pain does automation solve? I want to hear your answers; the community has already given me some on Twitter. Any answers? The testers? OK. The release cycles? OK. The developers? The product owners and business analysts?
OK. So it seems automation addresses everybody in the company; that's what it's supposed to solve? Yes or no? OK. Now, how does automation help a tester? I've been an exploratory tester. When our team decided to do automation, and the automation was done and started running, how different was my life? How different do you think it was? My life got busier. I was told automation was going to set me free. You know why it didn't? Everybody on my team wanted to go and do automation, so all the backlog fell on my head. Automation never made me free. It made me busier and busier as an exploratory tester.

When I started my career in 2003 and was discovering what testing and automation are, I was told automation would come and help me. Some of you said automation helps the tester. How has it really helped the tester? I've been asking these questions, and for me it has only made me busier, because, as Anand Bagmar pointed out, you can't automate everything. Some of it has to be done by humans, and the people interested in doing that part aren't there. The testers need help. People like me need help. Who is helping me?

Suppose you're writing something with Appium, automating some of the checks. How does that help me as an exploratory tester? Can somebody answer that? What? Some tools will help me? What kind of tools are you talking about? How many of those tools are built by the testing community for testers? And as a follow-up: who should care about the tester's pain? The tester should care about the tester's pain. But how many testers do you know who really want to do testing? In your companies, I see a few people smiling. Nice. In your companies, this is the reality, right? Everybody who is a tester wants to get out of testing as quickly as possible. This is the culture we have built: we have made automation the sexy thing and testing the non-sexy thing. All those people who advocate automation, including people like Anand Bagmar, say automation is not everything. Good, Anand. But who does the other aspect of it? What are we doing as a community to support those testers? Who is building for them? Who is working towards removing the tester's pain?

This is the question I embarked on, and I started to focus on building tools, because one thing I realized for sure: nobody is going to come and help me as a tester. Now, I know some of you are thinking, "What is this guy really saying?" And I'm very clear: I have gone and asked for help. I need help with testability. I don't know the culture of the companies you come from, but most often, developers are so busy that they can't add a testability layer for me. I've got to build that for myself if I really want to do a good job. So I started to focus on removing the tester's pain, my own pain, as part of my journey. And I'm going to share with you a few stories of the kind of pain I experienced and what I did to remove it.
The first story: I joined McAfee in 2004 and was part of a team called GroupShield Domino. We had Lotus Notes with a virus scanner attached to it, and we had to run through a lot of tests: there was a huge virus collection to run, it had to be benchmarked against the command-line scanner and the GUI scanner, and then a report had to be generated and validated. In terms of complexity, people had come up with a five-page script to follow just to generate and validate one report; here are those five pages (suitably obfuscated). It took us eight hours to generate every report.

For me, I want to find more bugs. As an exploratory tester, I want to do two things: improve my coverage and find more bugs, and spend time influencing people to make better decisions. I want to provide a mirror; I want to influence people to think and rethink the product and engineering decisions they've been making. But I was spending eight hours just trying to generate a report. At that point I didn't know programming. So I learned Perl, because when I went around looking for help, people were busy. I'm not saying they ignored me; I'm saying they were busy. So I took this up as a personal thing to solve, because I really wanted to find more bugs. I'm passionate about testing; I never wanted to move over to the sexy side of testing, which is, let's say, automation. I know what my game is, and I want to play it really well. So I learned Perl, tried my own implementation, and we had a tool, and it saved my time. I solved my own problem, and then the tool was rolled out to the other testers. That's how I built my first tool.
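To make the shape of that first tool concrete, here is a minimal modern sketch of the same idea in Python. Everything in it is an assumption for illustration: the scanner command and flags, the sample paths, and the report format are hypothetical stand-ins, not McAfee's actual tooling.

```python
import csv
import subprocess
from datetime import datetime
from pathlib import Path

# Hypothetical stand-ins: the real virus collection, scanner binary,
# and flags would come from the product under test.
SAMPLES_DIR = Path("virus_collection")
SCANNER_CMD = ["clscanner", "--quiet"]  # hypothetical command-line scanner
REPORT_PATH = Path("scan_report.csv")

def scan_file(sample: Path) -> bool:
    """Run the command-line scanner on one sample.

    Assumes the scanner follows the common convention of a non-zero
    exit code when an infection is detected.
    """
    result = subprocess.run(SCANNER_CMD + [str(sample)], capture_output=True)
    return result.returncode != 0

def generate_report() -> None:
    """Scan every sample and write a pass/fail benchmark report.

    Replaces the five-page manual procedure: every sample in the
    collection is expected to be detected, so a miss is a bug.
    """
    rows = []
    for sample in sorted(SAMPLES_DIR.glob("*")):
        detected = scan_file(sample)
        rows.append({
            "sample": sample.name,
            "detected": detected,
            "verdict": "OK" if detected else "MISSED",
        })

    with REPORT_PATH.open("w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["sample", "detected", "verdict"])
        writer.writeheader()
        writer.writerows(rows)

    missed = sum(1 for r in rows if not r["detected"])
    print(f"{datetime.now():%Y-%m-%d %H:%M} scanned {len(rows)} samples, {missed} missed")

if __name__ == "__main__":
    generate_report()
```

The point is not the code; it is that eight hours of report drudgery collapses into a loop, and the tester's day goes back to finding bugs.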
The second story: in 2009, I was consulting for Tesco. They have stores in Hungary, and we in India were testing a scheduler application that schedules which employee will be at which store at what time, based on their contract and so on. And here's the thing: imagine a bunch of Indians testing labor-law compliance for Hungary. The context is this: if the scheduler schedules in a way that violates an employee's contract, that employee can sue Tesco and claim several million dollars. Do you think we as Indians are sensitive to this? We slog; we don't know labor laws. Do you know labor laws? Do you even know Indian labor laws? We don't, and here were a bunch of testers validating Hungary's business labor laws.

I went in as a consultant; actually, I should say contractor. I was calling myself a consultant, but what they said was, "Somebody is going on six weeks' leave. Can you come and fill in?" So I said OK, and I thought, "I'm going to use my brain." But then this is what they gave me: a spreadsheet, plus a rule sheet, the labor-law sheet. And there were testers with at least three monitors in front of them, looking back and forth: this labor law, this employee, this contract. It was like watching a tennis match.

By this time, I was code-handicapped. You understand what that is? I knew a little Perl, but that didn't mean I could create magic. Here's what I did at Tesco to solve this problem. I said, we probably don't need to do these things by hand; we can probably build a tool around this. They asked, "Do you know how to build it?" I said, "I know what tool to build, but I don't exactly know how to write the code." It involved some stored procedures, some Java, some Perl, a combination of all of these, plus a little bit of Excel macros. Then a new developer joined, and the good news about some Indian companies is that you don't get a laptop for the first month, because there are a lot of processes that need to happen. I gave this developer my laptop and said, "I will watch you code. Here is the problem statement. Can you help me?" We built a good partnership, this developer helped me build the tool, and the problem was solved. We shipped the tool back to the Hungary business team and said, "Don't disturb the testing team to watch a tennis match."

I'm not saying this from an ego perspective. I'm saying that this is the business value we as testers need to provide. When we talk about automation, when we talk about testing, how does the business understand its value? If the business had really understood it, the way businesses think about QA today would be very different. First of all, they took the word "tester" and started calling any human doing testing "QA". Why? Because they wanted to call the non-developer coders SDETs. For me, call me QA, call me SDET, whatever; this is the partnership we built in order to get that tool out.

Then, in 2012, we were testing a bunch of e-commerce apps, and, as Anand Bagmar was also mentioning, I need feedback on what the users are saying. In many organization cultures, testers are not given access to analytics. They're not given access to the product code either. But I don't want to be limited by the organization's culture; I still want to provide value. So another developer and I partnered to build something we called the Twitter-driven exploratory testing tool. The moment a release happens, we start running this tool, which keeps capturing tweets made to a certain handle and keeps analyzing what people are tweeting. We get live, instant feedback: are people tweeting negative things? Are people talking about crashes? What search terms are they using? How can we improve the test data we use based on that?
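As a rough illustration of what such a tool's core loop might look like, here is a minimal sketch in Python. It assumes you already have tweet texts in hand, however you fetch them (the Twitter API, an export, whatever); the handle, keywords, and categories are illustrative assumptions, not the actual tool.

```python
from collections import Counter

# Illustrative keyword buckets; the real tool's taxonomy is an assumption here.
CATEGORIES = {
    "crash": ["crash", "crashes", "crashed", "force close"],
    "negative": ["terrible", "worst", "refund", "uninstall", "broken"],
    "search": ["can't find", "search", "no results"],
}

def triage(tweets: list[str]) -> Counter:
    """Bucket raw tweet texts into pain categories for the release watch."""
    counts: Counter = Counter()
    for text in tweets:
        lowered = text.lower()
        for category, keywords in CATEGORIES.items():
            if any(k in lowered for k in keywords):
                counts[category] += 1
    return counts

if __name__ == "__main__":
    # Sample data standing in for a live stream of mentions of the app's handle.
    sample = [
        "@shopapp the app crashed twice during checkout today",
        "@shopapp search shows no results for 'red saree'",
        "@shopapp love the new release!",
    ]
    for category, count in triage(sample).most_common():
        print(f"{category}: {count}")
```

Even something this crude, run continuously right after a release, turns Twitter into a live test oracle: a spike in the "crash" bucket tells the exploratory tester where to dig next.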
OK, so that's one of those things. Next: condition testing. As an exploratory tester, if I want to find bugs — and Anand touched on this beautiful point — if you're testing mobile apps, things break in the wild, while in your lab everything works. So where do you want to test? In the wild. How can you simulate wild conditions? I know of companies in the US that, when they wanted to enter India, had to test something as crazy as the 2G network; they had probably never tested their recent apps on it. The 2G network here is a quasi-state: you can neither say there is no network, nor say there is a working internet connection. That's 2G. How do you simulate all of that?

So we built a tool that helps simulate the different conditions a mobile app actually goes through. If we simulate a low-RAM condition, we can run the same functional test that passed earlier, now under low RAM: what happens? Then run it in a different orientation. You write a functional test; it could be automated or human-driven. If you have the capability to run that same test under low RAM, or while the orientation is changing, your functional test will fail. But you want to know why it failed. That's why we built it as an SDK.
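To illustrate the shape of that idea, here is a minimal sketch in Python of running one functional check across a matrix of device conditions. The condition names and the `simulate` hook are hypothetical stand-ins; a real implementation would drive them through a device SDK rather than these no-op placeholders.

```python
import contextlib
from typing import Callable, Iterator

# Hypothetical conditions; a real SDK would toggle these on the device.
CONDITIONS = ["baseline", "low_ram", "landscape", "flaky_2g"]

@contextlib.contextmanager
def simulate(condition: str) -> Iterator[None]:
    """Apply a device condition for the duration of a test.

    Placeholder: a real tool would call into the device or emulator here
    (e.g. constrain memory, rotate the screen, shape the network).
    """
    print(f"[setup] simulating {condition}")
    try:
        yield
    finally:
        print(f"[teardown] resetting {condition}")

def run_under_conditions(test: Callable[[], None]) -> dict[str, str]:
    """Run the same functional test under every condition and keep the
    per-condition verdict, so a failure says *which* condition broke it."""
    results = {}
    for condition in CONDITIONS:
        with simulate(condition):
            try:
                test()
                results[condition] = "PASS"
            except AssertionError as exc:
                results[condition] = f"FAIL: {exc}"
    return results

def checkout_flow() -> None:
    """Stand-in for a real functional test (automated or human-driven)."""
    assert True, "checkout completed"

if __name__ == "__main__":
    for condition, verdict in run_under_conditions(checkout_flow).items():
        print(f"{condition}: {verdict}")
```

The value of packaging this as an SDK, as the talk describes, is that the test itself stays unchanged; only the surrounding conditions vary, and the report tells you which condition made it fail.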
Now, bug reporting is a distraction. Every bug I find becomes a distraction, especially when I'm testing mobile apps. What do you all do when you find a bug? You take a screenshot. If it's a crash, you try to connect to Android Studio or something and download the logs. Then you have to combine all of this and put it into Jira. Look at how much time you spend on every bug you find; every bug becomes a speed breaker on the way to finding more bugs. And as a tester, I am obsessed with finding more and more high-value bugs. So we built an app that captures the flow of whatever the tester is doing, whatever inputs the tester is providing, along with the screenshots and the logs, including the performance-profiling aspect. This way, the majority of a tester's time goes into finding bugs, not into reporting them.

Another question, from a lot of the developers we've interacted with, at least in India — which is why I say indie developers — is, "What should I test my app for?" A lot of developers out there cannot afford to get their app tested by somebody. So for them, we built a full-fledged checklist: we took the iOS guidelines and the Android guidelines and converted them into a checklist.

Now, here's a phenomenon some of you can relate to; those of you who have come from outside India may not. Have you heard of this beautiful thing called "come and reproduce it in front of me"? You have not? Here's what happens in India: the moment a tester files a bug and the developer gets the notification that a bug is assigned to them, the developer picks up the phone, calls the tester, and says, "Come and reproduce it in front of me." The first time I heard it, I thought, "Wow, really?" Why does that happen? It happens because we write pathetic, horrible bug reports. I've seen bug reports that said, "I did this, I did this, I did this, and then I got an error message." Wait a minute: how can anybody attach credibility to this tester? The tester is trying to add value, but the report destroys it. I've been frustrated hearing this, and I really want to fix this problem, because it pains me a lot. Every tester whose credibility goes down takes my credibility down too, because I am part of that community. Why should I let other testers' credibility go down? What am I doing about it? Am I affected by this pain? I am.

So right now there is work in progress: we are building a Jira plugin that, as you type, tries to catch as many obvious mistakes as it can and warns the tester. It's like compiling your code: the moment the code compiles, it tells you so many warnings, so many errors. That's what should happen when a tester reports a bug. You say there is an attachment, but there is no attachment? Warn me. These things are pretty obvious; they're already there in Gmail. We just have to bring those familiar checks into what we do. And here's what I found: I went to the Atlassian plugin marketplace and searched through all the Jira plugins there. Most of them are test-management plugins. There are no tester-savvy plugins. Think about it: a lot of testers breathe Jira in and out, so the pain is plenty.
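Here is a minimal sketch of that "compiler for bug reports" idea in Python. The rules are assumptions drawn from the talk (the missing attachment being the one it names explicitly); a real Jira plugin would run similar checks inside Jira itself, but the rule logic reads the same.

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    title: str
    steps: list[str]
    expected: str
    actual: str
    attachments: list[str] = field(default_factory=list)

def lint(report: BugReport) -> list[str]:
    """Return compiler-style warnings for obvious bug-report mistakes."""
    warnings = []
    if len(report.title) < 15:
        warnings.append("warning: title is too short to be searchable")
    if len(report.steps) < 2:
        warnings.append("warning: fewer than two reproduction steps")
    if not report.expected or not report.actual:
        warnings.append("error: expected/actual results missing")
    body = " ".join(report.steps) + report.actual
    if "attach" in body.lower() and not report.attachments:
        warnings.append("error: report mentions an attachment but none is attached")
    return warnings

if __name__ == "__main__":
    draft = BugReport(
        title="App error",
        steps=["Did this, see attached screenshot"],
        expected="",
        actual="Got an error message",
    )
    for w in lint(draft):
        print(w)
```

Run on the draft above, this prints four findings; that is exactly the "so many warnings, so many errors" moment the talk wants a tester to get before the developer ever sees the report.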
My life is full of pain; I have not seen the pleasure yet, because all these problems bother me quite a lot. And I think those who want to add value will see pain. Here's one thing I've understood from working with a lot of people: people who get adjusted to pain don't solve problems. People who can't get adjusted to the pain are the ones who make the decisions to solve it. If you are sitting at an Appium conference, it's because somebody decided they couldn't live with a problem and had to solve it, and look at the beautiful community that created. In the same way, there are multiple opportunities for us to create multiple testing communities by solving these problems. Here's a very simple example: a lot of testers don't know how much time they've spent, where they've spent it, and why they haven't been able to get to everything. So we're building self-reflective dashboards that plug into the calendar and try to analyze: where have you spent your time? Is this where you want to spend it?

The way I see it, the whole world wants to solve one problem. Why? Because there are a lot of jobs in that space. Ninety-eight percent of the world wants to automate the tests; everybody wants to automate the tests. But there are so many other pain areas, and they get very, very little focus. So here is my prayer. We all understand what a prayer means, whatever your religion; even if you're an atheist, you'll understand this. My prayer to this community: build complementary things. In your company, don't let everybody jump up and build the same thing. There are plenty of people out there you can hire today to run tests and automate tests. You have an opportunity to build complementary things, to look at the whole problem space and say, "These pain points have never been addressed by anybody." There are things beyond tests to automate. I call myself an exploratory tester. I've also built some tools, written some tools, and I still call myself an exploratory tester. For me, this is automation too. It's not only automating the tests that counts as automation; all of this is automation.

For me, that app which captures the whole screenshot flow and makes my reporting easy: that is beautiful automation. It also deserves a huge community, and I'm not saying that just because I built it. In terms of problem statements, I can give you a hundred today, having been a tester, because of the pain. Go talk to the testers in your company as well; they'll give you a huge laundry list. One thing I've understood is that testers have pain but don't know how to communicate it. Testers need help but don't know how to ask for it. This is also one of the reasons they haven't got the help they really need. There are not enough builders. People need to become sensitive to a pain to solve the problem. You are all committed to Appium because you are excited about solving that problem, and what I'm saying is that there are other problems for you to think about too. So if you have some free time, if you write code, and once in a while you want a change from writing Appium code, you could think about writing these little tools, and, just as an earlier speaker mentioned, please make them public. Please let others use them too. If you're not a builder, let's partner. From this talk, and from the series of talks I'm doing, it is important for us to form a community where people come up with ideas: just as there is an app store, there should be a test store. If this is your problem, here's the tool for it. We have been rewriting a whole lot of stuff in the space of testing. Does anybody here not write code? Wow, OK, everybody writes code except me; that's good. If you can't build it on your own, find a partner. If you can build, and you're interested in addressing a pain point, great: pick an unaddressed pain point and let us begin to solve it.

So, can you tell me some of the pain points you see your testers going through where no tools exist? Anybody? Do testers in your companies struggle without the necessary tools? Yes? No? OK, what are the pain points? I couldn't hear that. An impact analysis tool: excellent. This is very important for us to influence people. Here is something we have experimented with: we do impact analysis, and we have automated triggering emails based on the risks. This also ties into estimation. Is estimation working for you? Yes or no? No? OK, it doesn't work anywhere in the world, I suppose. Now, here's something we have done about estimation. People ask for an estimate because they want to hear the number they already have in their mind. It's like playing a game: "How long would you take to test this?" Whatever number you say, they're going to say, "No, that's not the number; there's another number in my mind. I'll give you three more chances." That's not how it works.

We estimate based on mitigating certain kinds of risks, and to mitigate those risks we have to do risk profiling. So, again, there is a tool we have built for that. We look at the stories or epics or features, look at their complexity, look at their revenue impact, look at the depth of testing required, and then say: for this depth of testing, this complexity, and this revenue impact, we need this much time per browser, per mobile phone. If somebody wants to override that, they make the change in the tool, and they see the risk level going up. And if the risk level goes super high, an email is automatically triggered saying, "This is a risk." The reason we built this tool is that testers are supposed to be courageous, but they are not. So we need a system that talks to the people who understand systems but don't understand testers.
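Here is a minimal sketch of that risk-profiling calculation in Python. The scales, weights, and the time-per-browser formula are all illustrative assumptions; the talk describes only the tool's behavior (an override lowers the estimate, the risk goes up, and a super-high risk triggers an email), not its actual model.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    complexity: int      # 1 (trivial) .. 5 (gnarly) -- assumed scale
    revenue_impact: int  # 1 .. 5
    test_depth: int      # 1 (smoke) .. 5 (deep exploration)

HOURS_PER_POINT = 2.0        # assumed: hours per risk point, per browser/device
RISK_ALERT_THRESHOLD = 0.5   # assumed: above this, notify stakeholders

def risk_points(f: Feature) -> int:
    return f.complexity + f.revenue_impact + f.test_depth

def estimate_hours(f: Feature, browsers: int, devices: int) -> float:
    """Time needed to cover this feature at this depth, across platforms."""
    return risk_points(f) * HOURS_PER_POINT * (browsers + devices)

def residual_risk(f: Feature, granted_hours: float, browsers: int, devices: int) -> float:
    """How much of the needed coverage an override throws away (0 = none)."""
    needed = estimate_hours(f, browsers, devices)
    return max(0.0, 1.0 - granted_hours / needed)

if __name__ == "__main__":
    checkout = Feature("checkout", complexity=4, revenue_impact=5, test_depth=4)
    print(f"needed: {estimate_hours(checkout, browsers=3, devices=2):.0f} hours")

    # A manager overrides the estimate down to 40 hours:
    risk = residual_risk(checkout, granted_hours=40, browsers=3, devices=2)
    print(f"residual risk: {risk:.0%}")
    if risk > RISK_ALERT_THRESHOLD:
        print("risk super high -> trigger the alert email to stakeholders")
```

The design point is the one the talk makes: the number is produced by a system, and arguing the number down is made visibly expensive, so the tester never has to be the lone courageous voice.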
So that's on the impact analysis. Any other problem statement you want to talk about? Yes, go ahead. "Say you have a traceability matrix that tells you everything to test based on the impact of a particular feature, for regression. If you had a tool with some kind of AI that gives you a sense of what to test for a particular feature, and a code mind map that tells you when the flow doesn't even reach what you're testing, you wouldn't retest the basic functionality unnecessarily. That would automate things for the manual tester, so they already know which flows to test manually and don't repeat the task."

OK, great. Here is a tool I can think of along those lines; I've seen a tester write it. He wrote a tool where you upload an APK and it gives you a mind map of the whole menu, the submenus, the subfolders, and so on. That becomes very, very helpful. There are a lot of things like that to automate, such as setup, where testers keep doing the same thing over and over. And here's another thing to understand about the traceability-matrix part: instead of that, we could look at the impact part and the risk part and ask, "You've been maintaining a traceability matrix for a long time. What has it resulted in? What kind of user feedback have we got?" Then tie that up to test coverage. If people see that data, they might actually chuck the concept of the traceability matrix and focus on user coverage, user-profile coverage, and all those other types of coverage they haven't been doing in the name of a traceability matrix.
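For the APK mind-map tool mentioned above, here is a rough sketch of how the extraction step might start, assuming the open-source androguard library (a real Python APK analyzer); turning this flat list into an actual mind map is left out.

```python
# Sketch: list an APK's activities as a starting point for a menu mind map.
# Assumes `pip install androguard`; AnalyzeAPK parses the APK and its dex code.
from androguard.misc import AnalyzeAPK

def activity_tree(apk_path: str) -> dict[str, list[str]]:
    """Group activity class names by package, a crude first cut at a mind map."""
    apk, _dex, _analysis = AnalyzeAPK(apk_path)
    tree: dict[str, list[str]] = {}
    for activity in apk.get_activities():
        package, _, leaf = activity.rpartition(".")
        tree.setdefault(package, []).append(leaf)
    return tree

if __name__ == "__main__":
    for package, screens in activity_tree("app-release.apk").items():
        print(package)
        for screen in sorted(screens):
            print(f"  └─ {screen}")
```

Activity names are only a proxy for the menu structure the tool drew, but they give the exploratory tester an instant map of screens to plan a session around.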
Any other problem statement you want to talk about? Yes, please. "When we get iterative builds — build one, build two — and the delta from build one to build two is large, a bug may leak from build one into build two. The risk would have been lower if we had found it in build one itself. This attracts a lot of criticism and also reduces the tester's confidence, because the bug was carried forward; maybe it was an edge case, but it resulted in a bug leaking to production, or something that got raised as a showstopper. In this case, what would help us find it earlier?"

OK, let me quickly try to rephrase that. You're saying there is a build one and a build two, and build one had a bug which sort of slipped into build two. And your question is: how do we find it in build one itself? OK. So this is a question of test coverage, and I want to go back to what we said about exploratory testing. First, there is something called unlearning and learning: you can't necessarily find every bug you want to find; you can only keep learning. Second, you could focus on test coverage, and you could focus on modeling. That reminds me of another tool I've seen, which shows all the different kinds of models that can be applied to a certain feature. Now, we implemented this with a lot of testers, and the reason it did not work is that testers are not used to modeling. For them, it was intimidating to see how many test models they didn't know about. But that's where you also need to bring in training. So this can partly be addressed by a tool, by showcasing the models, but it cannot be completely addressed by tools alone; it also goes back to some level of training the human beings. Any other pain points? Yes?

"We say think like a user, step into the user's shoes, and do it. And there are concepts like personas: you create four or five personas. But what I feel is that testers don't have the psychological side of it. Suppose the age range is 18 to 60: different age groups use the app in different ways. We don't have that psychological picture of how they might use it, and nobody is training the tester on that front."

Right. So there is a solution for that, and there are already plenty of tools available; we don't need to rebuild anything. When we built a web application, we integrated something called Lucky Orange. I've sat and watched hours and hours of actual users coming in, signing up, and doing everything else they do with the application. There are a lot of such tools already available. But that's not the problem I'm seeing today. The problem I'm seeing is that testers are not influential enough to sell these things into the company. And here is one of the big reasons — and I know I'm talking to a largely open-source community here. At least in India, a lot of people ask, "Is it open source?", and what they really mean is, "Is it free?" It's not that they're going to go touch the code, tweak it, and make something work. It's all to validate that it's free, so that convincing the manager becomes much easier.
So if you take the example of Lucky Orange, this is where I think testers need a bit of education in order to be able to say, "Hey, here is the value I can contribute back." Now, I am looking to build a tool where testers can communicate to the management while getting past their fears and their obligations to their employer. That could be an interesting problem to solve. Thanks. Yes, anything else? Yes, please. "How do you put together test evidence, without copying logs, taking screenshots, and writing out all those steps?"

The test evidence. OK. Like the example I showed: the one app I mentioned that records the whole flow and reports it, that is the evidence. That's one. Second, there are a few problems which are cultural problems in organizations, and cultural problems cannot be solved through tools; they can only be addressed through people. In such cases, as a tester, what I have done is try to fix the cultural problem itself. I'll give you an example where the best testing I ever did was no testing at all. We had a legacy product where nobody knew what the architecture was, and every time they fixed a bug, they would add ten more, which we would find three builds later. I said the best thing to do on this product is no testing. Don't fix any bugs either. As long as you can sell it, try to re-architect it and make a web version of it. We don't say these things; that's the problem we are facing today. We say yes where we want to say no, which is something you all know; it's not like I have to tell you. But what is exciting, what is interesting for me as a tester right now, is: how do I build the tool that helps people build that confidence, that transparency, that visibility?

On this, I can tell you just one more thing. You all know they ask the tester, "Should we release or not?" Do they in your companies? Yes, no? I want to hear it loud. Good, OK. That's not for me; that's for the people who have come from outside India, so they know this is what happens here. They ask that question, and you say no, but they still go live. Why even ask? It's just for the sake of it, just like the estimation question. These problems are plaguing our industry. There are enough smart people out there solving the test automation problem, so don't worry about that one. It's these complementary problems that are plaguing our industry, and we need to do something about them. Who is doing something? I'm affected. And I hope you are too, because you smiled, you laughed at some of those things, and you acknowledged these pain points. Come, let's form this community. Let's build things that help testers improve their credibility and improve their influence, and as a software community we'll succeed through that. With that, my time is done. Thank you very much. Thanks a lot for your time. Thank you.