Good morning everyone. I'm Maaret, and yesterday when I talked to people here I realized that I need to introduce myself in a very uncomfortable way today. I am Maaret, and I am what you might call a manual tester. That means that instead of just using my fingers to do the testing, I use my brains for the testing; the test automation is more of an afterthought for me. Today, in this talk, I'm going to show you a bit of the world that I live in. We'll talk about the intersection of automation and exploratory testing. For the world that I come from, the world that I live in, I prefer not to use the words manual testing; I prefer the words exploratory testing. In that world I look at testing as if it were stage art, a performance: you get on stage, you interact with the program, you hear out what it has to say, and you do all kinds of things that are right for the situation so that information reveals itself. So it's testing as a performance, that's the idea, whereas I look at the automation side of testing more as testing as artifact creation: leaving something for the future, leaving something for reuse at a later point. With these two sides, I don't think it's either-or, I think it's both-and, but I definitely approach things more from the performance side. I really like the things that this exploratory type of testing can give me. It doesn't give me the absolute answer, it doesn't give me the spec like the automation and unit testing side usually gives me. It gives me guidance, that direction: better that way, more that way. It gives me understanding, from basically spending a lot of time with the application. I'm usually one of those people that everyone comes to and asks: oh, I don't quite remember, why did we decide to do it this way?
Is there some reason why we decided to accept this, or is there some reason why things are the way they are? I often have that kind of historical perspective, and when I don't have it, and I end up with something where someone should have it, I've learned to build up that historical perspective really, really quickly. I also usually focus more on the model side, drawing pictures around what the application does and what the world around the application looks like. And one of my favorite things in exploratory testing is the word serendipity: a lucky accident. There are days when I do not intend to find a problem, but when I have time alone, private time with the application, it just lets me know of things that I should know that I didn't expect to see. So the unknown unknowns come to life when spending enough time with the application. But I also respect the other side of all of this: the ability to have automation that gives a specific case and specific feedback. Did these things succeed or not? It gives us the regression side, the continuous way of looking at things, but also granularity: when things fail in a long chain, we know exactly where we are. At least that's the good part of automation that I see. I also have automation experiences where the granularity is not quite in place. So this is the dual world that I live in. But I approach all of this from the perspective of an exploratory tester, and I look at the products as if they were my external imagination. I'm more creative with hands on the keyboard, with the software, looking at whatever it's speaking back to me. So it's my external imagination, and I find that a lot of times that's what I do working in real software teams. I test with my developers, test with my test automation colleagues. I am my developers' and my colleagues' external imagination.
I look at them and go, hmm, should we be testing today? They come back to me saying, oh, you'd want me to do this, you'd want me to do that. And that's the kind of relationship I have with the application, and other people often have with me. I also, of course, need to go into some of my credentials. This is one credential that I'm particularly proud of. I was testing for one particular developer. They had this little project that they were creating, and I needed something to show that you can do exploratory testing on an API as well, so I needed to spend a few hours testing it. In that time, at first, I couldn't get it to work at all, because it turned out that if you tried to take this little library into use so that you had two different unit testing frameworks on the same computer, it wouldn't work. One was okay, but two wasn't. So, environmental problems. And then I also found out things around the discoverability of the API: how easy it was to learn, how the documentation was set up, and how easy it was to use. And this is basically the thing that is on the slide, the idea that I destroyed it in like an hour and a half. I actually think it took me more like two and a half hours, but I did destroy it, in a way. What I actually destroyed was the developer's idea that it would work in different kinds of environments for different kinds of people. And he did accept, after the fact, that there was a lot of rework needed based on that. So what I usually provide is information, information to break illusions around the code. And maybe the proudest illusion-breaking stuff is usually around business models. It's not just about the code. Sometimes we waste years and years of our time building something that nobody's actually willing to pay for.
And the best test you can do is to try to make someone pay 5%, 10%, whatever number it is, in advance for that thing that they're claiming to be so valuable. When it turns out that they don't want to put any of the money out, even a few months in advance, it might hint that maybe the money isn't available a few months later either, even though you'd have to keep a big question mark on that. So, illusions about everything around the system: that's what I work on. But instead of me explaining all of this for the whole talk, I thought we would do a little bit of a demonstration of what exploratory testing looks like, right? Let's get out of here. So I have this little test target here. And as usual, I have access to the code. Well, not as usual, but I also have access to a specification. This little piece of code is called the Gilded Rose. It's created by Emily Bache, and it's usually used as a refactoring exercise, but today I just want to use it as a target for exploratory testing. The Gilded Rose comes with source code that isn't very easy to read. But it also comes with something that helps me get started with testing and exploring it: I've had a very nice developer who gives me a single unit test that I can use as part of my exploration. And the single unit test seems to be passing. So looking at the specification, I can figure out some things that I need to know about the application. I've made it a little easier for you to read, so I will give the keywords here. This is a shop system with goods that are constantly degrading in quality as they approach a sell-by date. So there are these kinds of concepts. And it also updates the inventory for us. All of the items have a sellIn value and a quality value, and at the end of the day these change. So looking at this from the code perspective, I've been provided this checkItem method here. And it takes three things in.
There's the name of whatever the item is, there's the sellIn, how many days there are left, and there's the quality. So as an exploratory tester, I have all these sources available. I also have the possibility of running it under coverage, so that I can see whatever is going on in the code. And obviously, all of this information is useful. So as an exploratory tester, my job now is to decide what I actually want to spend my time on. This is all the information that I have available. There are also the things that I already have in my head, understanding that, right now, I am stuck on my Mac. I'm not going to be testing this on a Windows computer right now, because I'm not going to be moving anywhere else. Or there might be other components around this that might have relevance on how this works, but I do not have access to those right now. So whatever I have is this, right here, right now. So as an explorer, I can start playing with things and saying, okay, what if I give it a different name? Let's say it's called foo now. Does anything change? So it seems that the name, at least, doesn't have anything to do with the quality. And I have a lot of hints on what I could try if I read the specification. But before I go there, I want to move into a more comfortable way of exploring. I hate copy-pasting code, and I also hate when things don't stay around for later. Because even though I am an exploratory tester, I don't mind if the automation, the things that I'm learning, are preserved for the future. They are just not my main focus. So I want to use a thing called combination approvals, just so that I can keep my focus not on copy-pasting the code, but on generating the different inputs. So, combination approvals. And I have the same method here. It seems to have three inputs, so I just know that I'm going to use a function of three inputs, and just call that function here.
What was it called? checkItem. And I'll have three parameters that I need to feed into it. I need to have different kinds of items, I need to have different values of the sellIns, and I need to have different values of qualities, right? So similarly, I can quick-fix these to have the local variables available, so that I just get to the point where I have what I had before. The first one takes in strings, the second one takes in integers, and the third one takes in integers. So I just need to do a bit of setup. This is very routine, and I'm just modeling whatever my developer already gave me with the method. For the items, I've tried a couple of values so far: I tried any and foo, and I realized that they were not any different, so I'm going to use the value any. For sellIns, I haven't tried any other values except for the zero, so I'm just going to use that. And for the qualities, similarly, I'm going to use the value zero, which was basically whatever my developer already gave me. So I'm just introducing a way of doing this in a combination fashion. Running this, it's going to pop up a diff tool, and it shows things in a rather small format, so I'm just going to tell you what it has. ApprovalTests is what I'm using here. It generates a file that says that if I give an input of any, zero and zero, I get an output of any, minus one and zero. Okay, so now I can see what's going on. The middle value was the sellIn: how many days are left. Going from zero days to minus one days makes sense. If the quality was zero in the beginning, it's quality zero after. This was also something the developer said was okay; they thought that this was fine. So I'm just going to visually verify and think around this, because I am exploring, and the thinking is my main thing to do. So I'm going to just accept it, save it.
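What the tooling does here can be sketched without the library. Below is a minimal, library-free approximation of what ApprovalTests' combination approvals do, written in Python for readability (the demo's own code may be in a different language); `fake_check_item` is a stand-in name, not the real Gilded Rose code:

```python
from itertools import product

# A minimal sketch of combination approvals: call the function under test
# with every combination of the given inputs, and record one
# "input => output" line per combination, to be visually approved.
def verify_all_combinations(fn, *input_lists):
    return "\n".join(f"{combo} => {fn(*combo)}" for combo in product(*input_lists))

# Stand-in for the demo's checkItem; the real target is the Gilded Rose code.
def fake_check_item(name, sell_in, quality):
    return (name, sell_in - 1, quality)

# With single-value lists this reproduces the first approved file:
print(verify_all_combinations(fake_check_item, ["any"], [0], [0]))
# ('any', 0, 0) => ('any', -1, 0)
```

The point of the design is that growing any one input list multiplies the recorded cases, which is exactly what makes it comfortable for exploration.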
And I have the starting point where I got into this; I just have a different kind of unit test in many ways here. But now I can start actually playing with the thing. So if I look at my specification, I can see that there are all kinds of promises here, like: once the sell-by date has passed, quality degrades twice as fast. So clearly the selling date had passed, but the quality was zero. I need some other values. I need a quality that is bigger than zero. Let's say five; I just feel like five sounds like a good number right now. But I could also decide that 100 sounds like a great number, and I can add whatever I want to add as I'm learning. But also, for that promise that once the selling date has passed quality degrades twice as fast, I don't have any selling dates that are above zero. So I'm again going to use the value five here, just to see what kinds of things happen. So I just have an interface that I can play with, and that's what I do as an exploratory tester. I play with an interface, I let the application speak to me, and I give it ideas of what kinds of things might be right. So, okay, for the five days case, it goes to four days. That sounds good. And for the five quality, it goes to four. So yes, it goes down by one. That's maybe half of what I was looking for with that specification item. And here, with zero days in the beginning, the quality goes from five to three. So it looks like this might actually be as specified. So I'm going to again accept whatever I had here. And I have different ways of running this, so I can just as well run it under coverage, so that I can see where I am with my specification and with my code. So right now I can again make choices, as an exploratory tester, on what I want to spend my time on. I know for a fact, having spent time on this application, that the specification is not completely right.
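The promises being probed here can be written down as a sketch. The following is a hypothetical Python rendition of the Gilded Rose rules as commonly specified; the name `check_item` mirrors the helper from the demo, but this is an assumption for illustration, not the kata's actual (much messier) code, and the "Aged Brie improves twice as fast past the date" detail is the legacy code's behavior rather than an explicit line of the short spec:

```python
# Hypothetical sketch of the Gilded Rose rules.
# check_item(name, sell_in, quality) returns the state after one day.
def check_item(name, sell_in, quality):
    if name == "Sulfuras, Hand of Ragnaros":
        return (name, sell_in, quality)   # legendary item: never changes

    def inc(q, by):   # quality only climbs while below the 50 cap
        for _ in range(by):
            if q < 50:
                q += 1
        return q

    def dec(q, by):   # quality never drops below zero
        for _ in range(by):
            if q > 0:
                q -= 1
        return q

    if name == "Aged Brie":
        quality = inc(quality, 2 if sell_in <= 0 else 1)   # improves with age
    elif name.startswith("Backstage passes"):
        if sell_in <= 0:
            quality = 0                    # worthless after the concert
        elif sell_in <= 5:
            quality = inc(quality, 3)
        elif sell_in <= 10:
            quality = inc(quality, 2)
        else:
            quality = inc(quality, 1)
    else:
        quality = dec(quality, 2 if sell_in <= 0 else 1)   # twice as fast past date

    return (name, sell_in - 1, quality)

print(check_item("any", 0, 5))   # ('any', -1, 3): quality five drops to three
```

Note that, like the legacy code, this sketch does no input validation: a quality of 51 is accepted and handled, which is exactly the kind of thing to flag for a discussion.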
So I definitely would want to carefully pinpoint every single claim in the specification and figure out the things that I want to have a discussion on. Like things related to, let's say, the quality of an item is never more than 50. So let's give it a quality that is more than 50: 51. Now it's more than that. Based on that specification item, I kind of would assume that I don't get to do this; there should be some kind of input validation. That's at least my idea. But then again, I'm also told that this is legacy code, working in production. So whatever the code is doing right now, whatever I'm finding out before anyone changes it, might also be relevant information. So I'll make my personal note of having a discussion on: did we really want no input validation here? But I'm also making a note of the fact that this is what the code was doing before it was changed. And if it was working in production, maybe, just maybe, we don't want to change it, because someone might be relying on whatever implementation we have right now. So there are all kinds of things in the specification that I can think of and look around. But also, if I come to the conclusion that I really don't care so much right now about the specification, I can just try this from the perspective of the code. I could just get to a point where all of the code gets covered. I really like the fact that it's much easier to get the exact namings from the code. So if I want to include things like Aged Brie, I can avoid a typo by not just going and writing it. The specification mentions backstage passes, but in the code it actually has a more specific name. So unless I do exactly what was in the code, here's again a thing that I need to discuss with someone: is our specification inaccurate, or is it just shorthand that we talk about backstage passes? So let's put it again here on our list.
And there's also this thing called Sulfuras that I can just add into my test cases. So again, now I get a lot more combinations, and again I can visually verify these: Aged Brie seems to be getting better. That was one of the things in the specification; you need to trust me on that one, I'm not going to go in there right now. Backstage passes had this idea that when the selling date has passed, they lose their value completely, so something with a zero sellIn and a big quality number might be confirming that. And the Hand of Ragnaros was not supposed to go forward in time at all, and it was not supposed to change in quality. Visually I can again verify all of this as well. I have again generated a couple more test cases here, for my thing to remember for the future, but my focus is on the exploration, not on the creation of the automation here. So if I now run this again under coverage, I have a little less of the red, but I still clearly have things here that I don't cover. I see numbers like 11 and 50, and I see Sulfuras somehow mentioned. And I could try figuring out everything in the code, but I'm feeling really, really particularly lazy as an exploratory tester today. So I'm just going to set a range, let's say from minus one, because I like negative numbers as well, to something that is a little above 50, because that's what I've seen in there. And I'm going to give the same range for the other one as well. So again I'm running it, and I'm getting generated whatever things I want to visually verify. And now I'm feeling particularly lazy, because by now I've ended up with 11,236 things. I could sample some of these, but I've already sampled some of them, and right now I do not want to spend time on sampling a lot more. So let me just move all of this over here and see where I am with my coverage. I seem to be having everything green. But I am definitely, because I skipped a lot of the specification,
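The 11,236 figure is just the size of the cross product: four item names, times 53 sellIn values, times 53 quality values. Assuming ranges from minus one to a little above 50, it works out like this:

```python
from itertools import product

# Four item names from the demo; the generic "any" stands in for
# an ordinary item with no special rules.
names = ["any", "Aged Brie",
         "Backstage passes to a TAFKAL80ETC concert",
         "Sulfuras, Hand of Ragnaros"]
sell_ins = range(-1, 52)    # minus one up to a little above 50: 53 values
qualities = range(-1, 52)   # same range again: 53 values

combos = list(product(names, sell_ins, qualities))
print(len(combos))          # 4 * 53 * 53 = 11236
```

This is why combination-style tooling gets unwieldy fast: widening each range only slightly multiplies the approved file's size, which is exactly when sampling, rather than reading every line, becomes the sensible move.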
I'm still missing a lot of the information, but I wanted to keep this demo a short one. So this is what I usually do as an exploratory tester. There is nothing in exploratory testing that says automation wouldn't be included; it's just not the main thing that I think of when I'm exploring. The learning about the application, the understanding of the specification, the understanding of the real world: those are the things that I pay attention to. So what did I just demo? I think I was trying to demo exploratory unit testing of legacy code. Legacy code meaning that I can accept whatever the code is doing as right, but also treat the specification, and my understanding of things that might not be right right now, as things that we can still bring to whatever is going on with that application. I demoed combining my ideas, whatever I would know about inventory management systems, the specification, and the code coverage, while I was testing. All of these are things that an exploratory tester can use. It doesn't make sense to use them on all applications, on all problems, and on all levels, but they are tools that are available to us when the scope is right. I was also trying to show you, within the time frame available today, more of a shallow testing. If I spent an hour more on that application, I would probably figure out more aspects that are not quite the way they should be, that we should have a discussion on. And that's when we get to the deeper testing, seeing things in the context of the whole use. The difference here is in learning. You do not learn everything immediately. You have to spend time with the application, spend time with whatever you've been building. And when you learn in layers, you get deeper every round, as long as you have the learning mindset. I also demoed what I think of as disposable test automation. I could keep that. I could check it in.
But I also could throw it away after I'm sure that things are okay after whatever changes. This particular type I probably might want to keep, except for the ranges and the generating of all the different options; I am not sure that I wouldn't want to clean that up a little if I wanted to keep it for the future. But mainly, what the test automation here was for me was documentation. The main thing that I was doing is still exploratory testing. So the way I look at this is that exploratory testing is not the other side of testing, but the box within which test automation fits. That's how I look at it. But I also have absolutely wonderful colleagues, and one of the colleagues that I really, really appreciate is Angie Jones, who is a test automation specialist from the US. When I talk to her, I feel like I'm looking into a weird mirror. She looks at test automation as the main thing, within which exploratory testing fits, and we are so much in agreement, except that we decide to use different words for the same thing. So it's a really funny thing, finding someone who loves testing just as much as you do, and they look at it from a completely different perspective, and yet the only thing we're not in unison on is the words we decided to use. A lot of times I find that when I approach things as an exploratory tester, it's really the focus that I talk about. I focus first on learning, on the first execution, on making it different every time. I can try five now, I can try ten later. I don't have to commit to any number, I can change my mind, and that increases serendipity: the lucky accident that maybe five and ten are different, and I just didn't know that. Whereas people with an automation specialty often focus on the idea of leaving something behind, like a legacy that people can use afterwards. So that's where we're different.
So I often end up asking myself the question of where the time that I use would be most valuable. And in particular, the question that I ask is: personally, as an exploratory tester, with the skills that I have, as someone who still identifies more with the group of manual testers than with the automation testers, how valuable am I, like this, for my organization? Well, I know that I am valuable, but I still keep on asking this, because it's a chance for any one of us to take us forward. So I wanted to give you a couple of stories. The first one is about me working in a job for four years and leaving it about two years ago. You can see there's some small text there, saying that in the first two and a half years, and this is what I put in my CV at some point, I had personally reported, and gotten fixed, 2,261 issues. I only counted the ones that were closed and fixed. It meant, for those two and a half years, that I usually found more than one bug for every hour that I was at the office. And usually also for the times when I wasn't at the office, because I've always been speaking at conferences, and I do not tend to find so many bugs in our own application while I'm at conferences. So from that perspective, I guess I provided value. They hired me in that organization because they had a group of developers who knew from metrics in production that 18% of users would see a visible pop-up screen saying something is off, something crashed. And they had no idea how to get to those problems. They didn't know how to get the repro steps, because none of the users called in. None. They didn't tell what to click to get to that problem. And the developers didn't have those ideas for the repro steps in that particular organization.
But in those four years I also spent some time with my group there, as an exploratory tester, as a manual tester, doing some mob programming: sitting with my developers, as someone who usually doesn't spend so much time on the code, and just contributing my ideas in real time, so that whatever I knew that might be breaking, I could speak up about as we were implementing it in the code. It was kind of nice to have people around me say things like "that would have been an expensive mistake", and then forget that the mistake ever happened half an hour later when we were having a retrospective. So mob programming, working on a single computer in a particular style, was something that I was also doing in that job. That job then made me think of how I was doing on two different perspectives. On the results perspective: were my results more on the side of productive, the things that I contributed to the work, or generative, making other people more productive because I was around? And then, on another axis, the availability timeline of those results: were they available right now, short-term, within a few weeks of whatever I was doing, or were they available long-term? Looking at those four years of work, I had to realize that the main thing I contributed was being productive as a tester on a short-term scale. I found those problems, they got fixed, and in the long term they're probably still fixed, but they could be reintroduced. They didn't usually get reintroduced, so the whole fear of huge amounts of regression wasn't a thing for that particular product. But on the generative side, I realized that the thing I was mostly doing was holding space for developers to test better and care for quality better. A lot of the work on making things better from the testing and quality perspective was done in that team by developers: 16 developers and only one of me, that was the ratio.
So it was kind of a natural thing to happen. On the long-term and productive side, by sitting in the mob I remembered that I've been programming since the age of 12, and nowadays it's kind of a part of my everyday life, even though I still identify it as a sidetrack; it's not the main thing that I think of. So I remembered that I've always been a programmer. And on the generative and long-term side, I worked with that team on getting to daily releases, or even multiple releases a day, even without test automation. That alone already helped us, and it was then the foundation on which we started building some of the test automation with the developers. But looking at this, I had to reflect: I invested four of my years into this project. How well was I doing, really? Opportunity cost is this idea that, at least in hindsight, you can always second-guess your decisions. Opportunity cost implies the idea that you could have chosen differently. The two hours you spent on manual testing, you could have spent automating. The two hours you spent trying to figure out why the hell the automation isn't working at all, you could have spent manually testing, with information available right away. And in hindsight, it's always great to second-guess your choices. But in the moment, you made whatever choices you made, and you can only reflect on your past choices while looking at whatever you want to do differently in the future. So I realized that the big thing about that project was that when I left it, whatever I had in me left with me. And I went into a new job with that realization. So right now, for two years, I've been at F-Secure. I actually rejoined F-Secure after ten years away, so I'm on my second stint there. In total I'm now somewhere about five and a half years with that company.
And it's really interesting going back to a company that has been without you for ten years. One of the things that I realized had changed in the ten years I was away is that they had found a way of doing test automation, in the way that we talk about in various conferences, where it was a major part of the way we were working. And I realized, maybe even more in hindsight: maybe the energy that I had used in talking against automation, in trying to remind people that there's this other thing as well, that there are actually always these two things, maybe that was preventing the organization, while I was there, from moving into the test automation. Because now that I went back, it felt like a different kind of organization. There was a lot of automation in place. And there are not that many people like me anymore working in that organization, those who identify mainly as exploratory testers. There are a few, but the numbers have clearly gone down in ten years. And it's okay. The automation people, some of them, have accrued some of the information that those so-called manual testers used to have, the history information. But also there are people growing who will eventually be good at both kinds of things. So one of the things that I did in this job that I have right now is that I got to explore a lot of features. I took one example of the things that I've been spending time on in the last two years, and one of the things that I got to test is the remotely managed Windows firewall. We decided we don't need our own firewall anymore, so we wanted to use the Windows firewall. And someone out there, far away, might need to be able to manage that remotely. It didn't take long for the developers to come up with the code for that. They created all the unit tests related to that.
They figured out whatever kind of test automation they felt they needed for that. And I came in as an exploratory tester at a point where everyone is basically saying that it's been 100 percent tested already. So I make this little mind map, coming up with all kinds of ideas that I want to test. I focus on different kinds of rules, and I figure out that it's very easy for us to put wrong things in. Garbage in means garbage out, and there's no way for us to check whether the things that the real users put in are actually real and working things, other than testing in the environment. Similarly, playing with all kinds of rules, I figured out that there are some rules that overrule others. Information that we didn't have, because of some other organization's decisions we didn't know about; we just put our own rules there. And also that there would be third parties, other people, with for example things like domain rules, that could overrule whatever we were trying to do, and just get the whole computer into a weird state where nobody could do anything anymore. So I come to the conclusion that the fully automated testing that we had didn't give us all the information that we needed to have, but whatever I could figure out while I was exploring could add to that. That's the overall thing. What I realized in this project is that my colleagues' creation of automation in that project was always handy, because I could take it and just run it on my machine too. And having that little script that created that one rule, I could make it create thousands of rules with just a couple of lines of code. And I could make it create rules of different lengths, and I didn't have to go and manually change all those things. I could generate more opportunities for me to notice that things were going wrong.
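Bending a one-rule script into a bulk generator can look roughly like this. Everything here is hypothetical: `create_rule` is a stand-in for the colleague's script, which talked to the real remote-management interface; the point is only the couple of lines that turn one rule into thousands, with names of varying lengths:

```python
# Hypothetical stand-in for a script that created one firewall rule
# through the real management interface.
def create_rule(name, port):
    return {"name": name, "port": port, "action": "block"}

# A couple of lines turn one rule into thousands, with rule names of
# growing length, generating more opportunities to notice things breaking.
rules = [create_rule("rule-" + "x" * (i % 200), 1024 + i) for i in range(5000)]
print(len(rules), max(len(r["name"]) for r in rules))   # 5000 rules, names up to 204 chars
```

This is the automation-as-a-starting-point-for-exploration idea: the script itself is not the test, it is the lever that makes interesting states cheap to reach.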
But again, my focus was still on learning new things about the application. So the existing automation was great for me to use as a starting point for exploration. I also realized, when I then got into these ideas of what I would want to see in automation and paired with my colleagues who were mostly focused on the automation side, that when we were automating, we were finding details that were wrong, details to which I would not necessarily pay that much attention without the flashlight of automation. There's no way anyone can create automation without exploring the application in a lot of detail. So that was also something that I really appreciated realizing. And I realized also that whenever the automation was failing, for all of us it was always just an invitation to explore. This is something Richard Bradshaw says quite often. It's just the beginning point of trying to understand why it is failing, and, if it's not failing for a reason that we like, whether there is something we can do to make it more reliable in the future. So after two years I'm again looking at the types of things that I'm contributing, and how things are different on that scale. So yes, I still do a lot of this "testing done, illusions broken" type of stuff, and I still do that a lot on the business level. It often means that I get to be somewhat popular amongst managers; popular enough that they just decided that I don't need to be a tester anymore. I got demoted to being a manager two and a half weeks ago. But that's a sidetrack. I also realized that I had now contributed to adding test automation, so that was something that I had not done personally so much before. On the generative side, I was teaching others how to not make particular types of mistakes, by refusing to use Jira.
So I don't write bug reports when I find a problem; I go sit next to a developer and we fix it. Well, I leave momentarily, but I come back so that we get that thing fixed. Some of them are really, really annoyed with me and some of them really, really appreciate the way I work with them, mostly on the appreciative side. But it's an acquired taste, let's say it that way. The scripts I've created are now generating reports and alerts for others, so I'm contributing more holistically in my projects. And on the long-term side, the side that maybe isn't paying off every day, I'm right now very much into C++ and Python. I'm also very much into test automation architectures, and I really, really hate how our test automation is sometimes hard to read. Actually, almost always hard to read. I pair with people to get that fixed so that it's easier to read and more maintainable, and with my quality mindfulness that seems to be quite a nice way of contributing. Also on the long-term side, I still think I'm almost half obsessed with innovative practices for shorter release cycles. I was just bragging yesterday that from five days to make a release we're now down to two hours, and I will get it to less than 30 minutes with my team, because that's a way of changing how the whole team works. And I'm also still teaching people who are very hesitant about pairing that showing up might be something that I do. So to sum this up: after 25 years in the industry, this is where I am. But not all of us have 25 years in the industry. There's a thing called the five-year rule, saying that the software industry doubles every five years, which means that if this room were representative of the industry, half of you would have less than five years of experience. You wouldn't have had the same time available for the continuous learning that I have had in 25 years.
So you might need to make choices, and you are making choices. You may need to choose what you focus on, and my advice is to focus on learning. It's not about whether you focus on programming and automation or whether you focus on testing; you will need both of these, but focus on learning. Think of every day you come to work as a day that makes you better by tomorrow. That's the idea. If I look at myself, I started off spending a good 20 years, with long days and intensive learning, on the exploratory testing side. It's a huge area. There's so much to learn; I'm by no means done right now. But in the latest five years I started adding more of the automated stuff, and I still think of all of that as testing. I have colleagues who came from a different point of view: they started off spending their first years on the automation side, then added the stuff around how to become a better tester, how to actually think about testing properly. And we're pretty much in the same place. So we need to remember the five-year rule, and that sometimes we will need people coming together, two different bodies, let's say it that way, pairing to get the same kind of testing done. And this whole pairing thing, it's absolutely something you should be doing, pairing and mobbing, not only because it helps you learn other things but also because the collaboration itself is really important. So spend time learning collaboration, spend time learning exploratory testing, and spend time learning programming. It doesn't really matter what you learn, but remember: if you spend one hour every single day not doing the so-called "productive work", one hour away from the work you were "hired for", and I'm using the quotes very strongly here, and that one hour makes you one percent better each day, you're already clearly ahead of your past self in twenty-eight days.
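The compounding arithmetic behind these claims can be checked in a few lines. The one-percent-per-day figure is the talk's assumption, not a measured number; the code just follows it through.

```python
# Improving 1% per day compounds: after 28 days you're about a third better.
after_28_days = 1.01 ** 28
print(f"{after_28_days:.2f}")  # about 1.32

# Giving up 1 of 8 working hours, daily output is (7/8) * 1.01**n.
# It passes the old full-day output once 1.01**n exceeds 8/7,
# which happens in about two weeks.
n = 0
while (7 / 8) * 1.01 ** n <= 1:
    n += 1
print(n)  # 14

# Even the extreme case of 6.5 learning hours out of 8 wins within a year:
year_output = (1.5 / 8) * 1.01 ** 365
print(f"{year_output:.1f}")  # about 7 times the old daily output
```

The exact numbers matter less than the shape of the curve: any steady daily improvement repays a fixed daily time investment surprisingly quickly.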
That's a ridiculously small investment, and you should be paying attention to making yourself better. And interestingly, you can spend as much as six and a half hours out of an eight-hour day on achieving that one percent improvement and still be better off in a year than you are today. So learning is the number one thing, and focusing on learning is what you need to be doing. Learn testing, learn programming, learn collaboration, but whatever you do, learn. Hopefully this brings you to a place where you can break illusions in a timely fashion, capture notes in test automation, help people with the test automation, and leave something of yourself for the organization for when you're gone. On the long-term generative or productive side, you can improve your skills and learn to pair and mob, and on the generative side, make sure it's not just about you: improve the skills of everyone around you. I believe that's what testers actually do nowadays. We are a productivity tool for developers, helping them too to focus on things that might be relevant for us to learn together. So it's not just this one thing; it's learning together. Thank you.

Awesome. Thank you very much, Marit, that was really insightful. I think one of the main reasons we wanted you to come and do the keynote at a Selenium conference is to break some of these illusions around what exploratory testing is and its importance even in automation. So I appreciate you taking the time and coming all the way to do that. Thank you very much. What I would want to do is quickly have everyone stand up, turn to the person next to you, and talk about your biggest takeaway from this talk. We'll do questions after that; I just want to capture you sharing your learning with the other person. All right, I think that's a good time to finish exchanging your takeaway from the talk with the person next to you. We have 10 more minutes for questions.
I'm going to start with the first question and then pass over the mic. So Marit, what is your view on Michael Bolton's definition of checking versus testing?

Not all of you might know what he was just asking. There's this big thing going on in the testing world where some people like to rename what a lot of you might call testing into checking: the part you can automate is the thing they want to call checking, and the thinking around it is called testing. Those are just words; I don't really care about that so much. But I also don't like the approach a lot of people have of feeling the freedom to correct other people's words, because I feel that's a silencing technique. It's a way of saying the way you think and the way you speak isn't good enough, and I am very strongly opposed to that kind of idea. So I refuse to give up on the phrase test automation, because it was always intended to include thinking, and I refuse to move to the word checking myself. I'm able to hear what other people mean when they say checking, but I will still say testing and test automation, and not "automation in testing", which is the most recent popular way of rephrasing things so that we just don't quite understand what all these words mean. I believe in more words rather than fewer, and I suggest you try adding meaning by having experiences, examples, and discussions around those words rather than focusing on the right terminology.

OK, thank you. Questions? Right. Hello. Hey, thank you for your presentation. I'd like to ask a question. Assume I'm a QA who has to take care of end-to-end activities, from a functional and non-functional point of view, all the things. And I have a brand new product right now, so I want to explore that application from every point of view. So what are the data points you'll gather? You don't know anything about the application.
It's a brand new application, and being in QA you take care of all functional and non-functional activities, whether manual or test automation. So what will you go and ask, which teams will you meet, what kind of requirements will you gather, in order to start exploratory testing on a brand new product?

If I try to sum up what I heard, the question as I understand it is: I don't know anything about the application, but I need to be responsible for all aspects of quality; what would I ask before I start exploring? I would probably first ask for access to a person who knows the application, along with access to the application itself, so that we can spend some time with it. I usually try to get my hands on the application sooner rather than later, and having a guide who is more versed in whatever it is supposed to do is what I would do. I don't think it's so much about what questions I would ask beforehand; it's about the million questions I will ask while I'm doing it, and those questions really depend on whether I'm going to be doing this for the next 10 minutes, the next two hours, or the next two years. In two years I'll probably get through the million questions. So again it's about opportunity cost: if I ask everything in advance, I will never get to the doing part. One of the things I've learned in my career is that with less planning, just an overall vision, and getting on with doing the stuff, you're actually surprisingly much more successful, and you get a lot more done. So I wouldn't ask all the questions in advance, but there are definitely all kinds of heuristic checklists in my mind for how I would approach the overall functional and non-functional testing at the different levels; those are the things I've been learning for the last 25 years. So I don't think I can put it in one short answer.
Hi Marit, thanks a lot for breaking down the myths of exploratory testing. I work with big data and I have two questions regarding that. First, does exploratory testing have a boundary, or do you just keep on going, exploring and adding cases? Second, if there are no boundaries, how do I know when to stop? I could just continue doing exploratory testing, finding scenarios which may or may not make much sense. So how do I know when to stop? Is there a boundary?

I already kind of mentioned this in my previous answer: I look at testing as if it were an investment. I can invest half an hour, I can invest a day, I can invest a week, and depending on how critical the thing is and what kind of a team I'm working with, I end up investing more or less. My current team is a team of 12 developers. There's me as an exploratory tester, there are a couple of people doing test-automation-focused work, and then there are the general-purpose developers who are basically doing all the stuff everyone else is doing as well. That already sets a boundary on the investments in my team, on what people can do within those skill sets and abilities. And continuous releasing kind of reopens the budget every day; there's a new budget available for every one of us. So I would look at it more from the perspective of how much time you want to spend on it. And how do you know if you're spending too much time? If you're not learning anything new, you're not finding any information you can convey to others, and you think you don't have the creativity to find new perspectives, then maybe you're done.

Thanks. Hello, this is Jagdish from Hexagon. Can you tell me the major challenges you faced while doing mob programming, especially convincing the management and keeping everyone on the same page?
He's asking if I had any challenges while mobbing. For the long answer you need to go to my blog; I've been blogging my memoirs of how we did it and what it felt like, before and after, for the last years, sometimes almost on a daily basis, so there's a lot to read. But I think the main thing was that the managers were not hard to convince for me; the developers were hard to convince. They felt they were so busy and so valuable doing their own things that coming together was something they didn't want to do. I tried for almost six months and they wouldn't do it. And then I told them that I felt a little lonely: you say you like me, please, please, please, a couple of hours just for my benefit. And we spent the best two hours ever having fun refactoring our code together, and after that it wasn't hard to convince them to spend two hours learning anymore. So frame it as learning rather than working together. That's my best advice.

Thank you. Hi Marit, thank you so much for the talk; I feel like I learned a lot, and I loved the demo of the exploratory API testing. I was wondering, do you ever do exploratory integration testing too, at different levels? And is that useful as well?

Short answer: yes I do, and yes, that is useful. There is nothing that you can't do in an exploratory fashion. The idea is just that in exploration the focus is on doing, seeing what you learned, and using that learning in whatever you're doing next. You are actively learning while doing; that's exploring.

Hi. You mentioned the pairing of automation engineers and exploratory testers. Suppose I have a really bad application, bad in the sense that it's not very testable, so I need to focus on one thing at a time. Is it possible for that pair to work in parallel?
What I mean is, what's the dependency between the exploratory tester and the automation engineer? Because in my mind I'm thinking that unless the exploratory tester tells me what exactly the test cases and scenarios are, I can't really automate. Or can I? I can't automate those specific modules that have not been explored yet, obviously. But is there any idea or strategy so that an exploratory tester could work in parallel with an automation engineer, without depending on a sequence?

A way of working in parallel without pairing, is that what you're asking, or while pairing?

Yeah. No, in a pair. What I'm trying to ask is, wouldn't the test automation engineer have to wait for the exploratory tester to finish exploring?

No, because usually you can explore for half an hour and already learn from that. When you have small pieces that you're working on, they intertwine quite nicely. But I do prefer the idea of actually having them pair, and the way that I pair with people is called strong-style pairing. The idea is that if you're pairing on, let's say, an automation task, the person who knows more about automation isn't allowed to touch the keyboard; the person who knows less is always on the keyboard. So the pace at which things go forward is set by the person on the keyboard, and the person who knows what is going to be done has to use words: for an idea to get from my head to the computer, it must go through someone else's hands. So again, if you want to create an even tighter collaboration, how about you as an automation engineer explore with that tester, and as soon as you see something that could be turned into automation, something they might not even know to identify as automatable, you say: let's switch modes now, from you telling me what to do to explore.
I'll drive now, or rather I'll navigate now, and I'll tell you what to do so that we turn it into automation. So switch as soon as you have an idea of how to drive this forward best. Thank you.

Hello ma'am, I have two questions actually. The first one: can you give me some insight on how to test non-functional requirements using exploratory testing? And second: are there any metrics that you publish after completing your tests, and are you maintaining some documentation as well?

So the first question was about non-functional testing and exploratory testing. Let's take performance testing as an example. I find that whenever people are trying to figure out performance scenarios, it's very exploratory in nature: they're trying to figure out what kinds of things they need to do and how many things need to happen in parallel, and that learning is very much an exploratory activity. Security testing is very similar: you look at an application from the point of view of someone who wants to do harm with it, and you use your creativity not from the point of view of the good people but from the point of view of the bad people. So I don't see why that would be an issue. I actually don't think those people have ever been able to do their job really well unless they had a bit of an exploratory mindset, and they get to the exploratory mindset by failing and learning, failing and learning, and eventually succeeding; that's how all of us ended up getting to success eventually. On the other question, whether I report in some way: it really depends on where I work. I worked in an insurance company and did exploratory work there, and there I reported high-level test cases as passed. In my current company, nobody needs to report anything other than: we released a new feature into production today, and this is what the feature does. So whether you need to try reporting quality metrics really depends on how often you release.
If your organization is forcing you into giving quality metrics, bug and test-case metrics, how many automated scripts are passing and so on, you're not releasing often enough. That's my rule number one.

But how different are your test cases from the functional test cases? I mean, what exactly do you do extra when you're doing exploratory testing?

In my organization we don't have functional test cases. We have test automation that runs whatever we have captured into it by that time. There are no separate test cases; all the things worth documenting are worth documenting in test automation format. But I do know there are organizations that are somewhere in between, and I suggest faster releases; they will force you to think in ways that help you get away from that type of thinking.

Thank you. I'm a vendor from Fidelity. You need to move the microphone a little closer. Yeah, sure. So first of all, thanks for choosing such a topic, which is very close to the heart of all testers, whether exploratory or automation testers. For most of my career I have played both roles, exploratory as well as automation tester. But with time I feel there is more focus on automation testing; people are always in a rush to achieve 90 or 100 percent automation. So my question to you is: what is your suggestion, how can I prioritize exploratory testing myself, and how can I convince the people around me that it's as important as automation testing?

So supposing you have a limited amount of time, that's kind of what you're asking: how do you make sure you have time for both? The way I usually do it is I use my calendar to mark that on certain days I do certain things. So you need to time-box things; you need to make sure that you have availability for both kinds of activities.
When you are very, very fluent in automation, fluent in the ways of doing things at the level of the API, and you don't run into the same kinds of problems getting your automation done as I often see in end-to-end automation, then it's usually a non-issue to explore through the API and combine the two. But when getting the code written in the first place is a major effort, the only way you can make time for thinking with the application is to schedule it: two hours a day, or whatever you feel you need, and make it visible for yourself how much you wanted for each of those boxes. And if you're not doing what you promised yourself, who else is going to keep you accountable except yourself? A lot of times when I'm learning new technologies I make a note, like: I promised myself I'm learning this thing about my IDE today. At the end of the day I look at that paper and it says you were supposed to learn this thing about your IDE today, and I ended up doing something completely different. None of my managers comes and tells me I did something wrong; I need to do that myself. And again, tomorrow is another day. Every day is a chance to do things differently.

Right. Thank you very much. I think we'll wrap up the keynote. I would appreciate it if everyone could give a standing applause for such a great talk.