Welcome, everyone, to the session Stuck in Limbo with Magical Solutions by Isabel. It's going to be an interesting talk, and we are glad Isabel could join us today. Thank you very much. Thank you for the lovely hosting, and thank you for having me. So this is a session called Stuck in Limbo with Magical Solutions, and I'm Isabel Evans. Who am I? Well, I'm somebody who has been in IT since the 1970s. I did my computer science degree in the 1970s and worked for a while as a programmer. During the 1980s I came into testing; I was a tester, a test manager, a quality manager, and from 1990 onwards I was still working as a practitioner, but I was also contracting, consulting, teaching, and speaking. Towards the end of that time, that included a visit to Agile India in 2017, which I have to say was one of the loveliest conferences I've ever been to, and I found a couple of pictures. There's me chatting in an interview with somebody about Agile processes, and another picture from a session where somebody came and taught us all drumming. That's me attempting to drum, and you can see a look of slightly worried concentration on my face as I learn something new. I'm not going to say a skill, it didn't get that far, but it was very enjoyable. A lovely conference, and it's a real pleasure to be back. What also happened around 2017 was that I'd started to reflect on my career, and on some of the things I was noticing in my consultancy, contract, and teaching work. In the 1980s, in the testing work I was doing at that stage, we were using test tools to help us execute tests. In that particular company that was the standard way of doing things, and the test tool was built alongside, and as part of, the software that was under test.
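As a loose illustration of that pattern — a test tool living in the same codebase as the software it exercises — here is a minimal sketch in Python. All names here are invented for this example: `normalise_whitespace` stands in for a unit of the software under test, and the checks beside it can be re-run unattended whenever the software changes.

```python
# Hypothetical sketch: the test harness is built alongside the software
# under test, so the checks travel with the code they exercise.

def normalise_whitespace(raw: str) -> str:
    """The 'software under test': collapse runs of whitespace into single spaces."""
    return " ".join(raw.split())

# Expected behaviours, captured as (input, expected) pairs so they can be
# replayed automatically as a background regression pack.
REGRESSION_PACK = [
    ("  Ada   Lovelace ", "Ada Lovelace"),
    ("one\ttwo\nthree", "one two three"),
    ("", ""),
]

def run_regression_pack() -> list[str]:
    """Re-run every frozen check; return a message for each case that has drifted."""
    failures = []
    for raw, expected in REGRESSION_PACK:
        actual = normalise_whitespace(raw)
        if actual != expected:
            failures.append(f"{raw!r}: expected {expected!r}, got {actual!r}")
    return failures
```

An empty list from `run_regression_pack()` means no behaviour has drifted since the checks were captured; in a real project the pack would grow as testing uncovers new behaviours worth pinning down.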
The two were very closely linked. We did a lot of exploratory testing, and when the software was stable enough that the tests were passing, we would automate those tests so we could keep running them as regression packs in the background. Then, through the 1990s and 2010s, in other organisations, I saw people struggling with test automation: with tools they'd bought in, with scripting of different sorts, and not just with automating test execution but also with test management tools and other tools. I was seeing a lot of debate about what automation was, whether it was possible to automate tests, and so on, and a lot of problems, and I wondered what the causes were. I started researching this, and therefore became a student again. I went back to university, and I'm now a student at the University of Malta, doing postgraduate research into the human aspects of test tools, with an expected completion date of 2025, by which time I will be 70; there's a thing to think about. So that's who I am. Sometimes people ask me, why the University of Malta? Here's a picture I took in February 2020, which, with the pandemic, was of course the last time I got a chance to go out to Malta. I actually live in Scotland, in the UK, and I go out to Malta occasionally; this was my last visit before the pandemic. I think that picture of the beach in Malta tells you why, if you live in Scotland, which is cold in February, going to Malta is a nice idea. But also, importantly, they have a big ICT faculty, they're already doing a lot of research into software testing, and into the human aspects of software and of software testing, so it fitted really nicely with what I was interested in looking at. And it's been a really big learning curve
for me, learning to become an academic researcher after so many years in industry. So, back to the beginning: what was my motivation? As I said, I was noticing the challenges people were having with test automation, and one of the debates going on around 2015 to 2017, which I think is still going on, was: is this because the testers don't have good enough skill sets, or is it that the tools need to be more usable, that they've not been designed very well? Because of my background, which included quite a lot of usability work, I was inclined towards "the tools need to support the people", and I was thinking: what can we do to make tools more usable? And that little cartoon there sums up many of the problems we have in the IT industry. Are the users stupid? No, they're not; there are problems with the software, and this is perhaps true for test automation tools and test management tools as well. So there was this debate, I was interested in it, and I wanted to know what evidence there was. I decided to become a student and research this academically, so that there would be some rigour behind my research. I was interested in getting that academic rigour, not simply having an opinion. One of the reasons that's interesting is that if you've worked in something for decades, if people see you as an expert, and if you're doing training and so on, it's very easy to think that your opinion is the thing worth hearing. Of course, as soon as you get into academia, evidence is required. It's no good me just having an opinion; "in my experience this happens" is not good enough. I have to go and look for other evidence; my opinion is not the thing driving it, and I think that's really valuable for this research, that it's about gathering evidence. So if we think
about the purpose of the software industry: for all of its history, for as long as people have been building software (and where we put the beginning of that history is a whole other discussion), it has been about automating tasks and activities. We want to save time and money, we want to do things we can't do ourselves, we want to remove repetitive tasks, because human beings aren't very good at those, and we want to be safe. There are lots of benefits, and there are also effects on people: benefits and disadvantages. Even if you go back to the first and second industrial revolutions, back in the Victorian era, in the 19th century, you've got people automating things like hand looms becoming power looms. It affects people: there are benefits, because you can do things faster, and disadvantages, because of the effect on people's jobs. That's the very quick summary. So if we think about our own activities within the software industry, we start asking whether those can be automated, because we want to save time and money and so on, and remove repetition. But is it possible to do that, and what is the effect on people? There are benefits and there are disadvantages. Interestingly, if you look at the history of the IT industry, in testing we talk a lot about test tools and the automation of test execution and whether that's possible, but there's also a long history of people trying to automate programming. I can remember, back in the 70s and 80s, people saying we'd quite soon reach a point where we wouldn't need people doing programming, because we could replace them with tools, and people are still saying those sorts of things. So it's interesting, isn't it: we can see we could get benefits if we use more tooling and more automation, but is that possible, and what's the effect on people? As an example, here's a picture of, well, not actually my first washing machine, but the model of washing machine I first had. Very, very simple: you filled it up with
a hose; it had a hose on the side that you dropped to let the water out, just a hole in the bottom of it, and a simple on-off switch. It was an amazing tool to possess after I'd been washing sheets and so on by hand, but it was simple, and it wasn't automating the process of doing the laundry. So let's look at the one on the right, a modern machine. It works off your mobile phone: you offer your phone up to it, and it switches on and off and programs itself and so on. "A clever solution that works as easily as a contactless payment system", that's what it says in the advert. It's called an automatic washing machine, but it's not an automatic laundry machine. It doesn't pick up and sort the dirty clothes, it doesn't do the ironing, it doesn't fold the clothes and put them away, and it doesn't make decisions about what needs washing, what can be washed together, and what needs separating. It can do one little bit of the whole process of laundry, that's all. Okay, so automation has benefits, and this is a quote from Julian Harty, who I know is an old friend of this conference: no matter how valuable in-person testing is, effective automation is able to increase the value of overall testing by increasing its range. What he's saying is that there are some things we can do with tools to help us that we wouldn't be able to do without them. Even with a really simple tool, say a screwdriver: it would be really hard to put a screw into a piece of wood without a screwdriver. Tools help us; that's their purpose, and some things, like the automatic washing machine, we can automate, and then we can do things we perhaps couldn't do before. However, if you look at James Bach and Michael Bolton, they have a paper, A Context-Driven Approach to Automation in Testing, in which they talk about a shallow, narrow, and ritualistic approach to tool use, and about the idea that there's
a false belief that testing is a mechanical, repetitive process, when instead it's a challenging intellectual process. I remember talking to Michael about this once, saying to him, you know, people are talking about automating programming, and he just rolled his eyes: yeah, that's going to happen. And that's the thing: which are the tasks that require a human brain, and which are the tasks we can actually automate? There's a lot to debate there, and I think there are interesting things happening with machine learning and AI, but even with those, how do we know they're going to work? Where are the mistakes going to be? Is it really driven by the AI, or by the people who are building the AI? Lots of things to talk about there that are beyond the scope of this talk, but interesting debates. Going to do this as academic research forced me to start asking some really fundamental questions, and one of those is: what is testing? Is it a repetitive, mechanical activity, or a highly cognitive, human activity? Is it defined by its activities or by its goals? And do you know what, all four of those things are true. It's really complicated; just defining testing is not easy. So how can we talk about automating testing if we can't easily define what it is? That was the first set of questions going through my mind: what am I really talking about here? If it's mechanical and repetitive, it could be automated; but if it's highly cognitive, does that mean it can't be? What happens when it goes wrong, and why does it go wrong? Is that a problem with usability? My academic supervisor said to me: you're claiming it's a problem with usability; do you know it's a problem with usability? I thought, well, I think it is, but I haven't really got the evidence. I do know that we want to improve it in some way, but how do we improve it? Can we pick off
some activities to automate and not automate others, if it's a highly cognitive human activity? Is this one reason we end up with shelfware, in other words, tools we buy and then find don't quite do what we wanted? What are the reasons for shelfware? Again my supervisors pushed me: you're saying there's shelfware; is there really? Do you know what the reasons for it are? And suddenly you realise that the questions you're trying to ask already contain their answers, so they're not good research questions. They pushed me and pushed me, and in the end it came back to a simple open question that didn't contain its own answers: what are the experiences of testers with their tools and automation? Because they might not be the same as my experiences; lots of people might be having quite different experiences. Starting with that really open question, I went and asked testers to tell me their stories: just tell me a story about an experience you've had with test tools and automation. This was a piece of qualitative research, so you don't look for a very large number of participants, but for very in-depth answers that you spend a long time analysing. I had people from across the globe: multiple countries, multiple business domains. Over a hundred people took part, getting on for 200, though for various reasons not all of those came into the final data, and I'll explain that as we go through. They had a very wide range of backgrounds and experience, and they very often had complex roles with multiple responsibilities. They weren't simply doing testing; they were doing a whole range of activities. They might be a project manager who's also doing testing, or a developer who's also doing testing, or a tester who's also a product owner; there were all sorts of different roles, and real complexity of roles and responsibilities. And I
just wanted to thank everybody who took part. You can see the range of conferences: in Europe; ANZTB, which is Australia and New Zealand; StarWest, StarEast, and Canada, which are North America; Odin Days, which is Scandinavia. People from all over the world took part, and the contributions were really significant in shaping the reports I came out with, and they were surprising. That was the thing: the surprising results. I interviewed a number of people, experts; I ran a number of workshops; and I also ran anonymous surveys. Then I had some testing experts review the findings and discuss them with me after I'd done the analysis. I mentioned that not all of the participants' data was used in all of the analysis. I used the interviews and the workshop attendees' information for some of the quotes I've used and some of the backing-up of the analysis, but I took my main results, the numbers, from the survey respondents, who were anonymous. The reason is that some of the people in the workshops privately told me afterwards that they weren't comfortable expressing what they really felt their experiences had been in an open forum. So I went to an anonymous survey, so that people could really say what their experiences had been, and that's where the bulk of the reporting and analysis comes from. My findings have come out in three academic papers: Stuck in Limbo with Magical Solutions; Scared, Frustrated and Quietly Proud; and An Illusion of Usability. You can find these papers on Google Scholar if you search under those names. So what were the key findings? The first is the same old problems and challenges. If you look, for example, at the Test Automation Patterns wiki that Dorothy Graham and Seretta Gamba put together, people have been talking about some of these problems for decades, like management support and maintenance of the tests and the tools. The shelfware problem is still
there: people are still buying and acquiring tools and then, in the end, not using them. And the basic problems — management support, not being able to install the tools successfully, maintenance of the tests, security, usability, seamfulness (i.e. groups of tools not giving you a seamless experience, being very clunky to work together) — those are still there, decades after they were first reported. That's worrying, I think. To give you examples, here are some quotes from people who filled in the surveys: something "looks cool but took time to set up", it was difficult to configure; people getting "stuck in limbo" when they were asked to do automated testing, and we'll come back to that one; "the difficult part is maintaining the tests". Those are the sorts of quotes that came up over and over again. Usability was a concern, so there was my initial hypothesis that usability was a problem: certainly, in the frequency-of-comments column you can see the usability comments coming out at about 511 across the surveys, followed by technical problems, a lot of those around portability, performance, and maintainability, the three most critical areas where people had problems or concerns, and then management and organisational issues, with, interestingly, fewer comments, although they were sometimes bigger problems. And of the 111 people who filled in the survey, 82 had issues and challenges with their tools and automation. Eighty-two. So the first new finding is that people are stuck in limbo with magical solutions, and that they're quite often scared and frightened. This is about people's lived experiences: how they feel about the tools, and what the tools are doing to them in their real lives. The critical thing here is the level of emotion in people's responses. Bearing in mind this was an anonymous online survey, 35 percent of people showed emotion, sometimes very high levels of emotion, and I wasn't asking them about their
emotions in the questions. The questions were unemotional; I didn't ask about that, and I didn't even couch it as "tell me about problems you've had". You can see the questions there. The only question that didn't get an emotional response was SQ7, where most people simply put "see question six's answer", which actually taught me something about how you design a survey. I analysed this by survey participant and by question, to see whether it was just one or two people who were very emotional, or a particular question, and no: it ranged across the people and across the questions. Some of those emotions were very positive, but a lot of them were quite negative. A lot of people were really — "distraught" is not too strong a word — people were upset. And here are just some of the many, many quotes. You can see people are actually so angry, so frustrated, so upset. People saying "I'm just going to leave, I'm just not going to do testing anymore, it's too frustrating"; "these tools are making my life difficult"; "it's confusing"; "it's a jungle out there". If you get the papers you'll see all these quotes, but the point here is person after person expressing these high levels of emotion, which were actually having a knock-on effect on how they were living their lives. People said they wanted to give up completely. And this was backed up by some of the interviews I did and some of the conversations I had with people, not so much in the workshops but around workshops and at conferences around this time, where people were in tears — in tears about the effect the tools were having on their working life. It was quite extraordinary. I simply wasn't expecting that; it was not what I set out to look for. The "stuck in limbo" is a direct quote, and the "magical solutions" is a direct quote, and that is, in a way, quite a
calm set of quotes, but they show an underlying set of problems. The security problem certainly came up several times, and it has come up several times since, as I've discussed these results with people: people are mandated to use a particular tool, and then the security set-up in their organisation doesn't allow it, and the level of frustration — I mean, it's crazy stuff. Why are we doing this to ourselves, or letting it happen to ourselves? The second big finding is that usability is a problem, but the way we go about resolving usability problems is actually illusory: what we do to try to solve them doesn't work, and then we think we've "done usability", and we've actually made the situation worse. Three things about this. One is that in some of these tools, the people designing them had focused on the attractiveness of the interface over its usefulness. The tool looked cool, but you couldn't actually do anything useful with it; it wasn't supporting people's workflow, it was simply looking good. And this is something I think is really interesting: you can see demos of tools, and they look really good, but are they actually going to support you to do the work you want to do? And who is being beguiled by that attractiveness? Is it a purse-holder, rather than the person who's actually going to do the work? The second thing was that it seemed all of these tools focused on just one user group. You've got the concept of learnability, which is part of usability, and the concept of flexibility, which is also part of usability, and they were sometimes in opposition. Either you had people with strong coding backgrounds who couldn't accomplish what they wanted because the attractive interface was holding them back, the options weren't there; or you had a situation where the tool had been built for people with very strong development backgrounds, and the testers felt excluded
and felt it was too hard to learn. Now, if you're doing usability design, or user experience design even more so, one of the things you look at is the personas who are going to use a particular tool. When you design a tool, you look at the personas, what their needs are, what the different personas are, and which ones you're serving. But when you buy a tool, if you don't think about who is going to use it, and that there might be a range of people, you might try to get a tool that's one-size-fits-all, and it simply isn't. The third thing was that some of these tools didn't support change and growth in the personas and their requirements. The people using the tools are going to change, and their requirements are going to change, and if the tool has been built simply to meet their needs at one particular stage of their career or their project, it's not going to last over time. My analogy for this is the difference between a pianola, or player piano, and a concert grand. You see, if you don't know how to play the piano and I put you in Carnegie Hall with a concert grand, you are not going to be able to play it, and it's going to take a long time to build up the skill to become that concert pianist. But once you've mastered it, you're working at the sort of Angie Jones level; there's somebody who's a concert pianist of test automation. If you're me, no, I can't play that piano, it's too complicated; I can't play the concertos and so on. But if you look at the pianola, or player piano, you've got a roll that's going to play the music. You just slot in the roll, it plays that music, the keys go up and down, but you don't have to touch them. The point is, it's only going to play that tune. You've got no flexibility, and you're not going to learn how to play a piano. And somewhere between those two there's a
set of experiences, and types of keyboard, that would enable people to move from the pianola — not all the way to being the concert pianist at Carnegie Hall, but as far along that route as they need to go. And I think there would be a way, with test tools and test automation, to sit between those two extremes: to have things that allow people's knowledge and experience to grow, the way they use the tool to change, and also to allow for growth in their requirements, in terms of the software they're testing and the tests they need to use, so you're not confined to something that's automated in a way that can only do one thing. So usability is sometimes an illusory goal. The other thing is — and I think I have evidence for this from the research — that it's necessary for people designing test tools to think about usability, but it's not sufficient. You also need to think about maintainability, performance, security, portability, reliability, and so on. Key: it's not just attractive interfaces. Just because something looks nice doesn't mean it's the thing you need to use, so don't be beguiled. And key: it's not just about making it simple. Usability is not ease of use; that's a tiny, tiny part of it. It's about making it appropriate, about making it meet people's workflows. So, people reporting back about installation: "you do an update, which looked cool, but everything's hard to find". I love this one: "I like how the feature works, but getting the information back out once it's been entered is not easy". Isn't this brilliant? A store-only information system: you put your data in, and you can't get the data back out again. Hmm, something wrong there, I think. So the key thing out of the findings so far, and this is really important: test automation does have benefits, test tools do have benefits, but they're affecting motivation, and testers are becoming dissociated from their
roles, because their job-task mix is changing and because the tools aren't supporting them properly. One of the effects on motivation is around the job-task mix, and I've put some references at the bottom there. The earliest of those, Hackman and Oldham, talked about how you design people's jobs: people need a mix of tasks, some challenging and some more mundane, because our brains need that mix. For different people the mix is slightly different, but if you have a job that's just mundane, you're not being stimulated enough and it's demotivating, and if you have a job with no mundane parts in it at all, it's overstimulating and over-stressful. And Warden and Nicholson, in their motivational study of IT staff back in the mid-90s, commented that the testing role was simultaneously the most boring and the most stressful role, because of what we're doing: a lot of repetitive work, with a lot of fine detail within that repetition, which feels like a candidate for automating or for using tools, while also dealing with things going wrong, people getting angry, deadlines, and a lot of stress about having to give people bad news. So, one of the things here: if you automate all the mundane bits, you end up with somebody whose role is just the stressful bits, and what's the effect of that going to be? That sense of job-task mix is really important. But simultaneously, you've got testers who are worried about being made redundant, who are feeling dissociated from their role because they're being told that the testing is being done by the tools, when they know that's not the case. And here's a key thing: guess what, more things need testing than the software under test. The picture on the left is the worm Ouroboros, which eats its own tail. We have software that's under test, and that requires testing; we have to design tests
for it, and we design those tests even if we're just thinking them up, even if we're just sketching things down. We might have a really formal process here, or a really informal one, but our tests themselves are artifacts that we have built, and as human beings we make mistakes. So our tests require reviewing and testing: have we done those tests right, have we fully understood what we're doing here? And if we build automation scripts or tools, that is software, and software needs testing. So now the software under test is the automation script, and we're going to have to build tests for that, which will require testing, and maybe we should automate the testing of the automation scripts, and oh my goodness, we've got more software we're going to have to test. We go around in circles. When do we stop? Because at every stage of everything we do, we will be making mistakes. There's no point at which you can say "we can trust this absolutely", yet we have to do that all the time. Where do we break this cycle? And that in itself is a frustration: one of the things that was frustrating for people was seeing test tools and automation being trusted when they were flawed, and then having to pick up the mess when things went wrong. I think this is very demotivating, when people trust the tool more than the human. Which brings me back to artificial intelligence. People are talking about the singularity, aren't they, that artificial intelligence will start building more artificial intelligence, each time getting more and more intelligent. But at the bottom of that heap are human beings making their mistakes; those mistakes are going to get built into the AI, and we know, as human beings, that the cleverer you are, the bigger the mistakes you make. Hmm, something's going to go wrong here, I think. Anyway, that's speculation. One of the comments, from an interview with somebody, was that the manager hadn't realised software is a bloody
difficult thing to build. This guy had built an automation tool, and it took much longer than had been planned: the project to build the automation was planned at three months, I think, and it actually took three years to get it working, because it was so complex. The automation was actually more complex than the software under test, and that's another interesting lesson: testing is complex, and automating testing may be impossible because of the level of complexity. And again, what does this do to people, to their motivation and their ability to deliver? And one of the things there — this is from a friend who's a historian, he's not in IT at all. I was telling him about the research, and he emailed me and said, you know, people change to new products, but quite often they prefer not to change, not to upgrade, because it's going to be bad; all the software out there is bad. The choice you've got is not between problematic and ideal, but between different sorts of problems. So why change your software? There's a certain despair in all of that. It's a lesson for all of us, isn't it: we're struggling with messes, and our customers are struggling with messes. So, pragmatically, what can you do now? Two things: better people management — watch out for the effect of tools and automation on people — and better tool selection. I'm not going to read that slide out to you, but think about the things I've talked about. It's not as simple as just buying a tool and telling people to use it; it's not as simple as just giving them a training course. You have to think about the effect of the tool on people's jobs and what it's going to do to them, and you also have to think about the attributes of the tool in terms of its usability and workflow support, but also things like portability, performance, and maintainability. Think about taking a UX
approach to tool selection itself, and think about the change and growth that's going to happen over time. Now, what am I going to do? I've got to finish my PhD; I'm about halfway through, and having done those three papers it was time to take stock and ask what the rest of my PhD is going to be about. I could have looked further at testers' emotions and lived experience, which is incredibly interesting, but it's a huge, multidisciplinary area, so I've parked that for the moment; it's too big for a PhD. However, some aspects of the illusions of usability are possible to look at, and the first step is to think about who is doing testing. Because if we don't know who's doing the testing and how they're doing it, we don't know what tools they need, and we don't know what the tools need to be like. So the next stage of my research, which I'm in the middle of at the moment, is to collect information about who is doing testing and how they are doing it. To do that, I've got a survey open; could you pop the link into the chat for me? That would be kind. I would urge you all to fill it in. That survey is like an online interview, so it's not a short three-or-four-question survey; it'll take you about 30 to 45 minutes to fill in. I'm asking for a lot of detail, because I need that detail in order to do the analysis that comes next, and I would urge you to take part. My new hypothesis is that people aren't at the centre of test tool design, and that if we could put testers at the centre of test tool design, we could support a wider range of people to do a better job in testing. That's a hypothesis; I haven't demonstrated it yet. It's my current starting point, based on looking at the previous participants — who, as I said before, had a wide range of backgrounds and experience and multiple responsibilities — and I want to find out if it's true in a wider group of people than that
initial 100 industry practitioners. I'm interested in hearing from you if you do testing as any part of your role: to find out your background, your qualifications, what you do in your work, how you go about testing, and what sort of person you are, to help build an understanding of what the persona groups are within the testing industry and within IT. Once I've got that data, I'm going to start mapping it and going through an iterative process, a bit like an affinity diagram: I'm going to circle around, and as I get the data I'm going to try to synthesise a tool-design model, then trial it, review the results, and come back and collect more data. There's the link again for the survey. Your experiences and stories really matter for this to work. You're going to help academic researchers understand more about software testing, where I think there are gaps; you're going to help me design a tool-design model; and actually, you're going to help your future self, because if I can make this work, we have a chance of actually getting better tools in the future, though it's going to take a while. So, three key points: I've given you some insights from the research so far about people's experiences; I've given you an understanding that there are usability and human blockers to successful test tool implementation, and indeed that the tools themselves block people; and I've asked you to help me with the future research, so please complete the survey. Thank you for listening. I'm going to stop sharing my screen now, and we'll go to a Q&A. There we go. Thank you as well, thank you very much for sharing the insights of this case study. Thanks for having me.