So welcome everyone to the session, Patterns of Whole Team Test Automation Transformation by Maaret Pyhäjärvi. We are glad Maaret can join us today. So without any further delay, over to you Maaret.

Good afternoon from my part. I've been spending this morning thinking about things that preparing and rewriting this talk in particular gave me, around the fact that I currently work in an organization where I have multiple teams. I can roughly say that each of the teams costs about one million euros a year to exist. And I've been thinking a lot in terms of what's the value that we're providing, what's the role of test automation in all of that, and how we can create a way of testing, including test automation, that gets so intertwined into the DNA of what we're paying for that it's almost like eating a really good steak: the fat is marbled into the grain of the meat. That's considered good, in some circles at least. Having testing that you can't really separate from the rest of it — that's the goal I find myself working towards.

So I wanted to talk today about the experiences we've had in moving towards the whole-team idea of test automation: intertwining it so that you can't say that testing is something done by testers, or that test automation is done by one particular person, but that it's something the entire team contributes to. Obviously different people have different skills and different interests. And I'm assuming that of you watching this video or this stream right now, some of you will have a developer or test automation development background, some of you a testing background, and some of you a management background. I personally shift between these different kinds of roles rather than sticking to a single one, doing improvement work from all of these different positions.

I currently work at Vaisala, and at Vaisala a lot of the projects and products we're putting out are embedded systems, plus some kind of cloud or software backend where you collect a lot of data, with functionality living either on the embedded piece or on the online software piece. And obviously, because of that, we've built our own tooling around test automation, which is called Plexus. It's our own fancy name within the company — you can Google it and find it. It's built in-house, and it's basically a particular Python library, or set of Python libraries, and one of its special aspects is that it's able to drive hardware interfaces. You can turn off an embedded device, you can do long presses and short presses on physical buttons, you can control whether there's power or not. All the things that are hardware-related are also automatable for us. That's the core of our test automation and a lot of the considerations around it. We use Robot Framework with it, because someone built this hardware-driving library into Robot Framework, and when we have user interfaces, it's typically Selenium. That's the baseline for most of the organization, and there are very few teams anymore that don't do test automation as a default.
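To make the hardware-driving idea concrete, here is a minimal sketch of what such a Python keyword library might look like. Plexus itself is Vaisala-internal, so every name and the relay protocol below are invented for illustration; the point is only the shape: a plain Python class that Robot Framework can load as a keyword library, or that pytest can import directly.

```python
# Hypothetical sketch only: the class, the commands, and the serial protocol
# are invented to illustrate the kind of hardware-driving library described.
import time

import serial  # pyserial, assumed here as the transport to a relay test rig


class HardwareRig:
    """Keywords for power and physical-button control of a device under test."""

    def __init__(self, port: str = "/dev/ttyUSB0"):
        self.link = serial.Serial(port, baudrate=115200, timeout=1)

    def power_on(self):
        self.link.write(b"RELAY PWR 1\n")

    def power_off(self):
        self.link.write(b"RELAY PWR 0\n")

    def short_press(self, button: str):
        self._press(button, hold_seconds=0.2)

    def long_press(self, button: str, hold_seconds: float = 3.0):
        self._press(button, hold_seconds)

    def _press(self, button: str, hold_seconds: float):
        # Close the relay wired across the physical button, hold, release.
        self.link.write(f"RELAY {button} 1\n".encode())
        time.sleep(hold_seconds)
        self.link.write(f"RELAY {button} 0\n".encode())
```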
If you have someone specializing in testing in a team, it's very typical nowadays that they were hired to do test automation first and testing second — both of those are expected, of course, but it's very rare now that we hire those people without test automation skills. We have a few of them around still, building the hardware systems in particular, but most of the people are actually driving things with automation.

However, the environment we work in is not very static. We've also been replacing some of the Robot Framework, because we've noticed that Robot Framework really isn't, technically, the most inviting foundation for the whole team to participate in automation. It's a new language for all of the developers in the team. And if the developers are already working in C or Python, or some of them Java or JavaScript, they are usually much happier going into what they consider a real, full programming language for creating the test automation. We found that when we moved some of our test automation — not replacing it completely, but moving some of it — over to pytest, again using Selenium for driving the user interfaces, we had the whole team joining in to contribute. And obviously, when the whole team joins in to contribute, there are more voices, which also means we've started to experiment a little more with Playwright as well.

In addition to this, we also have teams that by their DNA and identity don't think of embedded systems as part of their world at all. They don't need the hardware interfaces that you can drive with Python and a library built on top of Robot Framework. So we also have fully software teams. They might be in the JavaScript world, but typically they are very much in the JavaScript and Java world, and some of them are doing Kotlin — again, a wide variety of languages. And there the tool choices we usually go for nowadays seem to be either Cypress or Playwright.

So overall, all of these things exist in the organization in a good and friendly manner. They're all doing the work of testing that needs to be done as of today, and there's no particular interest in getting rid of any of them. People are simply building different things with different frameworks. I wanted to start from the point of view of what technologies we're using, because that's usually what people come and ask about in the end if I don't start with it.

But I did want to go back to my previous organization, and the couple of years where I started looking at the success of test automation from the perspective that I've been doing testing for 25 years. I've been part of projects that had test automation for maybe almost 20 of those years. So there's always been some level of test automation, and for a lot of those years I felt like the test automation existed in a small silo. It wasn't really bringing the benefits. It was somebody's hobby project that we were financing, and looking at it year over year, it typically vanished when that one person left the organization. Nobody else was willing to continue with whatever the organization had invested in, and I don't consider that success.
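As an illustration of why the threshold felt lower, here is a minimal pytest-plus-Selenium test in the style described — the URL and the assertion are placeholders, not anything from a real Vaisala product:

```python
# Minimal sketch of the pytest + Selenium style: a fixture owns the browser
# lifecycle, and the test itself reads like ordinary Python.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def browser():
    driver = webdriver.Chrome()
    yield driver
    driver.quit()


def test_front_page_shows_product_name(browser):
    browser.get("https://example.com/")  # placeholder application URL
    heading = browser.find_element(By.TAG_NAME, "h1")
    assert "Example" in heading.text
```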
But I was working at F-Secure a couple of years ago, and coming out of that project I wrote this article — you can read the details of that organization in it. What we were looking at, basically, is that we had built a very different kind of system, and we were really, really happy with the benefits and the results it was giving us. We had had nose as the test runner — it was a Python world — and we replaced it with pytest. We had very, very little Selenium, and when people were thinking about why we were successful, they were actually quoting the idea of having almost no user interface tests, and being able to test pretty much anything and everything we needed on the API level. That was the general understanding technology-wise.

But for the successes we were seeing, people were also pointing at the services we had. We could have a Windows virtual machine at our fingertips, ready to be used and operated to run tests, for manual and automated testing — but for automation in particular — in less than five seconds. That is something we're still struggling to do with the publicly available virtualization services in the cloud, so replacing that in-house-built system over the years actually made the testing a little slower. That fast feedback, the fast availability of the environment, really drew the whole teams, and the developers in those teams, into contributing to building this automation. And it was like Lego blocks: you add a piece when you realize you need a piece, you can replace a piece, and you had multiple parties, multiple business lines in the organization, contributing to the same technological platform.

So I thought this was amazing and successful. For the purposes of the article we wanted to write, I started looking at what we did particularly well with that test automation, because it wasn't just that we did this great technical architecture. It wasn't the choices of technology — it was never the tools — it was something else in the organization. We wanted to understand what those things in the organization were that made it worthwhile to pay for all the people doing that testing, so that there were never these considerations of how much we're investing, or whether we're investing too much, or whether it's worth paying for this automation. It was evident at various levels of the organization that things were going well.

And then, comparing that back to my current organization: definitely different architectures, different things. But with the same criteria of success and failure, I've been around my organization in different projects and different teams responsible for various things. We've had automation in every single one of them, and I'm calling three of the teams I've been in a success. Four of those teams I would call failures, in the sense that the business isn't getting the benefit and the value. The automation looks perfect, and if I ask any of the testers in those teams, they think they are doing well. But against my criteria of success in automation — which means we keep the things around even when the people leave — that's how I divided them into the three and the four. So success and failure, it's not just a snapshot.
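A rough sketch of what that API-level style looks like in pytest — the service and the payload here are stand-ins (httpbin.org as a public echo service), not the actual F-Secure backend:

```python
# Sketch of API-level testing: exercise the service interface directly,
# no browser involved. Endpoint and payload are invented placeholders.
import requests

BASE_URL = "https://httpbin.org"  # stand-in for the service under test


def test_posted_payload_is_echoed_back():
    payload = {"name": "scan-task", "priority": 1}
    response = requests.post(f"{BASE_URL}/post", json=payload, timeout=10)
    assert response.status_code == 200
    # The whole request/response cycle is assertable without any UI.
    assert response.json()["json"] == payload
```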
Well, it is a snapshot in time, in the sense that it's not something that stays the same over a longer period. So let's talk about what doing this particularly well means, and what it looks like.

Particularly well, for me, usually looks like this. I call this whole team test automation, and one of the things I've done in recent years to illustrate it is to use tools that help visualize the reality that is happening. So I took a video with a tool called Gource that visualizes one of our code bases that I consider successful. The success looks like many differently colored people moving around different places, willing to make changes pretty much continuously. I have similar videos of the non-successful teams: they usually look like one person making careful changes in one area, or one person making a lot of changes in one area, keeping that testing available and up to date as a service. But the real success is where it stays even when the people leave the organization, where the automation gives the benefit for the organization in the long term, even after we are gone from doing it. Those are the ones where we've managed to make test automation the whole team's asset, and when one person leaves there are always others who will continue with it. So I suggest running Gource against your own repo — just from that visualization you can probably see whether you fit some of my criteria of success.

Looking at the reasons with the research we did in my previous organization, there were some things we noticed that we definitely didn't do the way the literature on excellent test automation says. The article was written so that there were people like myself from within the organization, but we also had two people from a research institute in Finland participating. We did the research so that within the organization we wrote down how we worked — whatever questions they asked, we would respond in writing — then all of that text on how we did things was analyzed, and we reviewed the conclusions and markings that came out of it. These conflicts with the literature were not something I personally identified; they were things the researchers, looking at things from the outside, identified. And they all hold true in both of my organizations.

We had no explicit test automation strategy. That was weird for the researchers, because they had grown accustomed to most of the published advice saying you need to start with a really good strategy, and we had nothing that was considered a strategy, at least not on paper. It wasn't formalized, but there were relaxed, verbally communicated ideas without any strict rules, and collaboratively we were changing all of the rules we had. We had an idea of what good would look like — that was the thing driving us — so you could say the ideas existed. We had no careful tool selection. We had had that in the past in both organizations, but it was more like: if you feel you have energy for this tool, you're welcome to use it, but you need to consider that the whole team needs to own that tool, rather than just a single person in that team. And we weren't measuring the quality and performance of this automation.
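If you want to try that yourself, running Gource takes little more than pointing it at a working copy. The flags below are standard Gource options; the repo path is a placeholder, and the subprocess wrapper is only there to keep the examples in this write-up in one language:

```python
# Run Gource against a local repository to visualize who changes what.
import subprocess

subprocess.run(
    [
        "gource",
        "--seconds-per-day", "0.2",   # compress the history
        "--auto-skip-seconds", "1",   # skip idle periods
        "--key",                      # show the file-extension legend
        "path/to/your/repo",          # placeholder: your working copy
    ],
    check=True,
)
```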
So they were asking these percentage questions, like how many of your manual test cases have you automated. We had no manual test cases in either of these organizations, so we can't say a percentage — that feels like a conversation we shouldn't even be having. And the overall performance of the test automation: well, in F-Secure's case, while we were doing the research, we didn't have any measures of that. In more recent years I've introduced some measures again, mostly for visualization and management support purposes. And we had no explicit guidelines. So all of these were weirdnesses in the way we worked.

The results we also reported in the 2019 article. The pink ones here are from Vaisala and the light brown ones are from F-Secure, just to give you some idea of what it looks like when automation is treated as part of the whole team's DNA, and what the impact of it being good can be.

Speed to release: that wasn't really about the automation itself, it was more about the other practices around it — basically deciding that no matter how much automation we have, we're going to release based on that automation now, and if it's missing things, we can add them later. We learned that most of the things that had been thought of as gates before a release were work that never needed to be done, by automation or anything else. In both organizations there was quite a drastic change in release speed in those terms. For the decreasing times, automation definitely had a lot to do with it: if you can press a button and the pipeline does most of it for you, and the manual activities are minimized, you don't have the wait times — and the wait times are usually the big thing there. Team productivity — naturally going and making the changes without all the handoffs — was definitely something we followed in both organizations. We considered sharing and reusing code that somebody else had created to be working well. Looking at test automation maintenance effort, in that team of 11 people we didn't even notice that we had to maintain test automation, even though — I think the number is there — we had high test efficiency: 213,708 tests run in a single working day. It still didn't feel like maintenance was a problem. So all of these aspects — how it shows to the customers, finding relevant issues, not causing pain in the team, and not being separated from the rest of the team — are all aspects of the success we considered.
So that then takes us to the why. I've now had a few years of thinking about why, and about what the right things are that I believe in right now. One thing I've learned is that whatever works for me in one organization might not even work for me in another organization. The other thing is that even if it works for me, it might not always work for everyone else. There's a certain level of — I would call it maybe clout, or prestige, or fame, or maybe even intimidation status — that you get after 25 years of being around, doing this stuff consistently and speaking about it all the time. It means that when I ask for something, I usually get it more easily than an average person: more easily than most of the developers, more easily than most of the testers. And sometimes, working together with all the different parties, we can come to conclusions very quickly about wanting to do, or at least experiment with, something. Framing things in terms of experiments has helped me. But this why question has really made me wonder.

In the research we did, this is the why we identified at F-Secure as an organization, and again the little stars I overlaid show that these patterns followed me from F-Secure to Vaisala. The whole-team effort, the human aspect of it, needs to be shared so that the automation has a future even when I am gone — and not just me, but anyone who was creating it and continuing with it. It's not worth the investment for the organization unless that's true. It's still the central one, and that's why I raised it into the center of this talk today.

The researchers looking at things with me said we had expert team members. This always made me laugh, in the sense that yes, we had expert team members — including the 15-year-old boy, who was 15 and then 16, and was considered an expert after one year of working in the industry, without actually going to any particular schools or having any particular hobbies in the space of software creation. So definitely, yes, we were expert team members — but I think we were a learning team, so I would prefer phrasing it differently. When you have someone, you really try to keep them around, rather than expecting to find the perfect people in the first place. None of us is perfect; even with years of experience there's always more to learn, and staying expert requires continuous learning.

We definitely were self-motivated and self-organized, in the sense that we would go around the organization and have the conversations. If you wanted to make a change, you knew you would coordinate with the others, and you didn't expect someone else to do it — you don't just talk to a PO so that they do the running for you; you do the running yourself when you're driving something forward. So motivated and organized in a way where you feel you own things and can take them forward. On organizing time: definitely taking the time, and accepting that results are not perfect at first. And there was this idea that we were not just a single team — we were multiple teams working on a shared code base — which was definitely a big part of the good results. This is the one we're still struggling with most, I think, in my current organization; that's why the color differs there. We are moving in that direction, but it's still very easy for people to think "my component, your component" rather than "our component" as the mindset.
The technical choices — great infrastructure, good tool choices, being able to replace tools — and the testability of the product were definitely all important for the success. I would maybe even emphasize the testability of the product: there was always the question of whether we would test something in a particular way, or change the product so that we can test it more easily. And almost always, in these cases of success, changing the product so that testing is easier, more flexible, and possible — coming from the whole team feeling the ownership and the pain of the test automation — helped us get to the good results. The small incremental changes, the processes, the way we built automation: those were important too.

Out of all of these, I raised five things that I think today are core to success.

The first one is the idea that the language we create our test automation in matters. Selenium is not a language; Playwright is not a language. Languages are things like the Robot Framework language, which is a separate, English-like way of writing code, with the limitations of a specialist language — limitations that are not necessarily the limitations of a generalist language. Python, on the other hand, is a generalist language: there are a lot of people around, there's a big community, there's availability of tools that aren't just for testing purposes but for other purposes as well. It opens up a whole different world of taking whatever libraries you might need — to run things in the cloud, or maybe to do some model-based testing on top of your UI automation. That's something we can do with a generalist language and the openly available libraries, which we couldn't do with the specialist language. And in particular, it sets the mindset for where people need to grow. It's not that they have to be there today, but where they need to go is to grow into fully fledged programmers who can contribute something today and something more, on a wider scale, tomorrow. Sharing the language, and getting feedback on our programming skills across the entire team, would typically grow us into generalizing specialists, instead of having just a few people in the testing team focusing on what the best-looking, most useful test cases look like in automation.

So this is core for us. Looking at all the failed teams in Vaisala — well, they're not failed teams, they're still doing useful automation, but in my terms not the best teams in terms of where we're growing with whole-team continuation — the difference typically comes from the generalist language allowing the developers to contribute without having to learn yet another language. It's not that developers can't learn the Robot Framework language, for example; they do learn it, and quite many of them actually have. But there's also a large section of the developers I've been working with who basically say they're so busy with other things that they have the excuse of not going into that other language. Making the threshold as small as possible is definitely a pattern for our success.
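As one small example of what the generalist language opens up, here is a sketch mixing the standard library's random module with Selenium into a crude model-based random walk over links. The site (example.com) and the five-step walk are arbitrary illustration choices, not anything from a real project:

```python
# Crude model-based sketch: treat "click any link, land on a titled page,
# go back" as the model, and let random choose the transitions.
import random

from selenium import webdriver
from selenium.webdriver.common.by import By


def test_random_walk_over_links():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/")  # placeholder start state
        for _ in range(5):
            links = driver.find_elements(By.TAG_NAME, "a")
            if not links:
                break
            random.choice(links).click()  # pick any transition the model allows
            assert driver.title, "every reachable page should have a title"
            driver.back()  # return to the known start state
    finally:
        driver.quit()
```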
Another pattern we were really using a lot, I think, is visualizing testing depth and coverage. I call this one more of a theme: how to get management off your back so that you can do good work. A lot of times, product owners and different levels of managers deciding on the investment have all these detailed questions: why does it take so long for you to write a single test case; how much time did it take; did you also have time to do the release testing while you were doing this test automation thing. They tend to think in terms of how much we invest in manual testing and how much we invest in automated testing. If we can get away from those conversations, and get to conversations about how we can do the testing today so that some of it — and continuously more of it — is getting automated and supported by these tools, changing the DNA of how we talk in the organization, then that helps.

In terms of giving some kind of visuals: we show the number of epics and requirements. We usually think of epics as coming from product owners, and they include, or are linked to, the requirements. We track how much of that we cover with our automation — not how much of the manual tests we cover, but how many of our epics have at least some automation related to them — and we can say we have a percentage and it's growing. 39% of our epics is actually a really small number, but it's already a number that's helping this particular team I took the numbers from do a really good job of continuously releasing products, in a type of business including embedded software where they used to think they couldn't do frequent releases.

We also describe test automation reliability: how much green are we seeing. That seems to be the thing we need management support on the most. Have them care about whether it's green, instead of thinking about how many tests we have — from the management point of view, don't really care about that. Care about whether it's green, whether it's causing a lot of maintenance work for you or actually just quietly doing its job, and whether red actually means something other than having to do surprise work. Caring about that definitely helped us.

And then the lead time to reliable: building ways of working within the team so that when you make a breaking change that you know will break the test automation, maybe you fix the test automation as you go, or maybe you announce that something that breaks is coming in soon, so that you don't have to start from wondering what broke and then trying to fix it while other things are breaking. On lead time to reliable, in the last year, with the team I had the visualization from, it used to take a month to get the test automation fixed, by a major effort in that team. Now it's typically hours — still, even after I am gone from that team. Again, moving around is a great way of seeing whether things stick after someone leaves.

I showed the video earlier; I just took it as a background picture here, showcasing progress made of a continuous flow of small changes. Teaching managers to care a little less about exact numbers, showing them the progress in a visual way, showing that there are new branches we're working on, and having a more meaningful conversation than just looking at the numbers, usually helps in getting things forward.
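The epic-coverage number itself is simple arithmetic once you can link tests to epics. A toy sketch with an invented data shape — in practice the links would come from your issue tracker and test metadata:

```python
# Share of epics that have at least one linked automated test.
epics = {
    "EPIC-101": ["test_login_flow", "test_logout"],
    "EPIC-102": [],                          # no automation linked yet
    "EPIC-103": ["test_sensor_calibration"],
}

covered = sum(1 for tests in epics.values() if tests)
coverage = 100 * covered / len(epics)
print(f"{covered}/{len(epics)} epics have linked automation ({coverage:.0f}%)")
# -> 2/3 epics have linked automation (67%)
```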
Then there are the choices — not just about automation, but what I would maybe call, in hindsight, the strategic thinking. It starts from the idea that you can choose not to test manually when you're making a release. You can choose to test while you're implementing the features, trying to think about what the changes have an impact on, exploring them, and maybe writing automation for the new features too, so that you can see they work in the context of the product you're working on. Then at the time you decide to make a release — you move that product of yours from one environment to another, or maybe you handpick certain changes and build a build that is somehow separate, ready for that release — at the time you do that activity, moving from feature-time testing to release-time testing, you can take what used to be three months of work and make it one day, by choosing not to do things. I've done this now with three different organizations and more teams than I care to count, and it's actually the number one pattern that has proven useful to me in getting teams thinking about test automation and contributing to it in a good way: taking away the time spent catching things later on, and having to build that in — introducing test automation, and introducing other ways of exploring and figuring out whether the changes have an impact. So: moving the work elsewhere. Maybe five or six years ago, I used to talk about the idea of doing continuous deployment without test automation, and that's basically this idea: you can choose to release with whatever automation you have. The 39% automation we talked about with one of my teams is definitely already a really good — maybe even better — level of regression testing that we run on the releases now, by moving the whole focus to feature-testing time, following the changes, and continuously introducing a little more automation, incrementally.

Finally, my fifth idea on the patterns that have really helped me: getting to the whole team is not that straightforward. You need to ask people who normally don't spend time on automation to spend time on it. I would usually have these conversations with product owners and make sure that every single developer had a week or a couple of weeks of doing some kind of contributions or tasks related to test automation. By doing, you learn the practices of how you would go about doing it as part of your regular work — getting the chance to learn by doing. You need to make space for that. But you also need — well, I at least need — to grow the individual competencies by doing the work with people. It's not just management; it's actually doing the work, either in what we call an ensemble testing format or in a pair testing format. I mentioned the 15-year-old: the way they grew in six months, and in a year, is that we regularly paired with them — sitting next to them when we were not remote, and now by sharing a screen — so that whoever has less experience and more need of learning is the person on the keyboard, and all the commands, all the ideas of what we're doing, and the decisions on priorities, are held by the person not on the keyboard. So it's not watching the junior code; it's watching the junior be the hands of the more senior, who doesn't get to do anything other than use their words to get the coding to happen. I've had multiple teams learn to do Selenium tests — teams that had a hard time figuring out how to get the whole team doing them — just by running a couple of these ensemble testing sessions, where we would create our first tests together, struggle together, grow together, and then know how to do those things individually after we had seen how someone else was doing them.
So having one single person who knows means you can teach a group of people how that's done, and it's a really powerful thing. You can also do it on a regular basis whenever you get stuck: instead of writing "I get this error message" to someone more senior than you — or not even more senior, another senior in your organization — and going into this ping-pong back and forth in messages, just call the person, share the screen, and work together. A really powerful way of doing things.

I took some screenshots from yesterday, because we were doing this style of ensemble testing in one of the sessions and somebody shared this on Twitter. This is what learning to do Selenium 4 in an ensemble looked like yesterday — we had four very lovely volunteers. And what I find interesting is that sometimes in ensembles what we need to learn is not how to write the code, but how to contribute to the code: speaking up about what we want done next, knowing when to correct the others, and clarifying some of the things we've learned that we think are essential — taking a pause for the important conversations, not explaining everything, but figuring out how to do things. I took this as an example: personally, I'm looking at lines 50 and 51 here, the latitude and longitude printed so that we can see them. I personally much prefer just putting a breakpoint in the code and looking at it with the development tools — and I didn't know how to do that seven years ago, before I started ensemble testing. I knew how to write this print stuff, but I didn't know how to do the debugging. Working with the developers in the same ensembles taught me that there's a better way of doing pretty much anything I knew how to do. And again, interestingly, doing things in a group is usually more difficult, because there's both the learning and the contributing; when you do things alone, it all feels so much simpler.

I did the same, or a similar, kind of thing on my own machine in Python — a different language, the one I prefer to use nowadays because of what I'm living in right now; I used to live in different worlds, and now I'm in a Python world. And again, when I take a screenshot even of my own example that I just wrote, I have this need to explain: yes, I know about the semantic selectors; yes, they exist in various libraries; they are also in Selenium, and they're actually really lovely things in Selenium 4, but maybe we shouldn't be using them — I just tried them here, so yes, I know I shouldn't be, and I had the other way of implementing it as well. And if I didn't want to take screenshots of everything — there are always these weirdnesses of your IDE, for example where red should mean it's not passing but the icon says it's passing, so there are bugs that also cause you some of these weirdnesses.

All of these patterns have really driven me to this improved understanding of what success in test automation looks like. The success is not writing a hundred, a thousand, or ten thousand lines of code, or having some number of test cases running. Success is being able to move fast, being able to rely on whatever we've built, and being able to use test automation not just for regression purposes, but also for extending the testing. Sometimes my main use of automation is that I create a Selenium script that monitors whatever is happening on the user interface, where new information is coming in usually every five seconds — that's the type of streaming application I work with right now — and then I look for patterns in the logs that I created: are they changing, over my whole working day, in the way I thought I wanted them to change? So instead of sitting and watching things for exploratory purposes, I have automation collecting the things that I want, and then I can look at whatever the automation produced for me.
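A rough sketch of that monitoring style of Selenium use — poll on the same rhythm as the stream, log what you see, and read the log at the end of the day. The URL, the selector, and the loop length are all invented placeholders:

```python
# Exploratory monitoring: sample a streaming UI every five seconds and
# append the readings to a log for later pattern-spotting.
import time
from datetime import datetime

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/streaming-view")  # placeholder URL

with open("observations.log", "a") as log:
    for _ in range(12):  # one minute of samples; raise for a full working day
        reading = driver.find_element(By.CSS_SELECTOR, ".latest-value").text
        log.write(f"{datetime.now().isoformat()} {reading}\n")
        time.sleep(5)  # matches the five-second update cadence of the UI

driver.quit()
```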
So for me, the success with the whole team thing has meant that we no longer have a separation between manual and automated testing. This style of testing — expecting test specialists, test automation specialists, and manual testers to intertwine into just doing different kinds of tasks at different times — is what I call contemporary exploratory testing. And I don't think in terms of manual and automated: exploratory is not manual, exploratory is actually both of these, and that's the style of working that we need to learn. So some of the "expert team members" is really about this mindset of building things.

So that's what I had to share on our experiences, and I'm happy to take some questions, and at any time to have any conversations. There's some of my contact information here. I prefer Twitter DMs — that's my fast-tracked way of getting to talk with me. Also LinkedIn; I collect people professionally there, so if you want to link with me, I'd be happy to do so. So I guess it's time for questions.

The first question here is: for the projects and teams I considered failed, or not successful, in this whole-team test automation, what first step would I suggest to bring the change and move towards success? I still work with these teams, and I have made the suggestions: pairing developers and testers — so not making it a single person's, the tester's, responsibility to maintain that test automation, but sharing the responsibility. And it's usually something where just asking the product owner to allow for it is enough; the developers are more than happy to jump in. So as long as you make space within the expectations, this happens very easily.

Okay, so it looks like we have one more. This question is on pair or ensemble testing: is it a valid concept when the number of billable resources is greater than one, and there might be client-consultant confidentiality? In a way you're right, but I actually don't do ensemble testing for learning purposes only on the real code that we work on. I also do this online with random people from the internet, and it's not that I would share the confidential thing we're building with them — I would look for some other application that has some of the challenges that we have. I think one of my great examples of this is that I meet about every two weeks with two wonderful ladies, Alex Schladebeck and Elizabeth Zagroba. We're currently doing Playwright learning together; we've now decided our next focus is on learning to do shadow DOMs; and we did some work on this location-based stuff that we also ensembled on yesterday in this conference. We choose different things that we have to do in our real projects, and then practice together. We're at different places, we're all seniors, and we are in a great position to teach one another.
It might not be your billable hours, but what I've learned from the community way of working hands-on on real projects — that's the reason why I'm paid considerably more than any of the developers I work with. In terms of one move for your personal career: you're learning, and somehow turning that into something, not just accepting that if your company isn't investing in you, that's where you are. I would go and figure out a way. There are people like myself, people like Lisi Hocke — we do these tours where we go around to different people, work with them, and learn with them. And I'm convinced that the community is open to everyone for some way of doing this.

Thanks Maaret for sharing your experience with us today.