So, hi all, thanks for joining again. Today we have Anne-Marie Charrett. Anne-Marie advocates a whole-team approach to quality. She says we can't predict the future, but we can prepare ourselves to deal with change, and develop strategies that facilitate rapid learning. So, let's understand how. Over to you, Anne-Marie, the stage is yours. Thank you so much, Pooja. Welcome everyone. It's so fantastic that we finally made it here, right? I mean, my goodness. I want to thank all the hardworking organisers. It's certainly a huge thing to shift: you're expecting to go down one path, suddenly your whole world changes, and you're having to organise a conference online. They've done an amazing job, and I just want to thank them all for helping us get ready for this. So, how crazy is this? When I wrote the abstract for this talk, I was thinking of 2019, and what a crazy year that was. We had gone through these terrible bushfires in Australia, and we had all the geopolitical uncertainties and changes happening in the world. And I thought, this is a really good time to be talking about leadership, and in particular test leadership. And then, bam, the next thing comes out of the blue: a pandemic. And we're all reeling from this. How do we actually deal with it? There's so much change happening, and we've all responded in different ways. We've discovered remote working, right? We can all walk around in our pajamas and work at the same time. That's on the positive side, but it's turning out there are downsides too. I discovered TikTok. I became obsessed for about three weeks, and then realized I was wasting way too much time on it and had to delete it from my phone.
And most of us in tech, fortunately, have managed to keep working, and we keep working in a remote way. For those of you who deal with anxiety normally, maybe you're thinking: you know what, I feel like I know how to deal with this, because I've worked so hard to deal with my anxiety, and I can apply some of those tactics to the situation right here. That's certainly how I feel, a little bit. But, as I said, it's been an absolutely crazy year. Climate upheaval: we had the bushfires in Australia, with 12.6 million hectares burned. I mean, this is a huge amount. One billion animals killed. And we see this happening in California at the moment. Floods in Indonesia, droughts, a water crisis in Chennai. Certainly things are changing so rapidly, with extreme weather conditions happening more and more. But in other ways as well, our cultures and how we work have changed so significantly because of the pandemic, right? Yes, we've adjusted really well to working remotely, and it's amazing, considering how much we've had to shift and how rapidly, how well everyone is coping. But it is creating new problems: even logistical problems like onboarding new people remotely, something we would never have considered previously. And it's requiring a different type of leadership, isn't it? We need a different style. The situation is bringing out different things in different people, and stresses are coming to the fore. It's proving very difficult for a lot of people to work remotely; their normal day-to-day life and their work life have suddenly all come together. That's creating a lot of challenges, with mental health issues skyrocketing, and certainly there's an emphasis on asking: are you okay? So we need to think about our leadership in a different way.
How can we help? How can we create social contracts about working remotely? Fortunately, there are a lot of people working in this space to help us through that. We're also seeing distributed teams becoming more and more of a thing. Certainly when we look at Australia, with people working remotely now, companies are going: well, if everyone's going to be working remotely, why don't we just have a whole team in a different country? Why don't we have distributed teams all around the world? Let's start really seriously exploring that. So those sorts of things are having significant impacts on us. Atlassian, one of our key tech companies, has basically told all their employees: you never have to come back to the office. Even when the pandemic is over, if you want to work remotely, you can continue to do that. That's a huge change, right? But there are other things too: the technical changes that are happening at the moment. Cheap hardware has dramatically changed how we deliver software. It's become easier for us to use trial and error, to test things out and perform little experiments, hasn't it? The cost of failure has been significantly reduced, making it easier for us to deploy in small increments, try out new things, and respond to market demands. Mesh networks, machine learning, AI, it's all happening, isn't it? And some of us are probably already in contexts where we're trying to figure out: how are we going to test this stuff? What are the risks in these areas, how are we going to discover them, and how are we going to develop a strategy? So there are a lot of unknowns coming upon us, and we're having to deal with them. And I think because of that, the demand for leadership is just so much higher.
And there are demands we've all got to face, but I feel that testing in itself has its own particular set of demands. For example, I just want you to have a think about this. Here's a typical user story, and we're all familiar with user stories, right? But I like to create user stories for quality. One of my favourites is to write a user story as if I were a CTO, a C-level person: why would they want software testing? So: as a CTO or head of engineering, I want software testing performed so that... what do you think that answer might be? I feel this is something we don't give ourselves enough time to think about. Why do people want software testing performed? Now, you could say: well, it's to ensure quality. And I think that's a good answer. But why do they want to ensure quality? Yes, we want happy customers, but is that the only story going on here? Yes, it could be to find bugs, but why do we want to find bugs? So if we dig a little deeper, we can explore this area and find out what it is our CTOs are looking for. There's an interesting statement by Dan North. He said that we do software testing to increase confidence, for stakeholders, through evidence. So by performing the act of software testing, we're creating information to help our stakeholders be confident about a release. Now, I have to say, when I first came across this, I was like: oh, really? Is that really what software testers are meant to be doing? I mean, I'm a skeptic, I've got my skeptical tester's hat on. Aren't I meant to be a bit skeptical about the product, to test it and try to look for flaws? That's not really increasing confidence, is it? And then I realized I was thinking about this from my own perspective, my own role. I wasn't putting myself into the shoes of a C-level person or a head of engineering. Their perspective is different.
They want us to do software testing because it helps them increase confidence. They're looking for predictability. They want predictable, boring releases, time after time. They're looking for consistency, and they hate surprises, especially ones that come right before release time. So they look to software testing to provide that information and give them the confidence that it's okay to release. And in response to that, what do we do? We try to help them with that, don't we? We want to give them that certainty because, to some degree, we want to be helpful. We want to add value. And so we generate some type of plan. We might be asked for estimates. We might be asked for a scope, or for the sorts of things we should be focusing on. Especially if the release is maybe three months away, we might be asked to develop a plan of how we're going to get to the end. You could be working in a SAFe-type environment; or if you're not, and you're doing multiple deploys a day, you might have less of a formal plan, but you still have a mental plan in your head: a big-picture idea of how much time things are going to take and how you're allocating your time in order to do some testing. So you do have this idea, this beautiful plan, and then what happens? Reality hits, right? We're in a situation where stuff goes wrong, and we see this time and time again. It could be that our software gets delayed because there's more complexity than we had initially understood. It could be that when we first thought about a story, we didn't consider another part of the system, or a stakeholder was overlooked, and because of that we've omitted a whole part of the system that we should have either designed for or tested, and we're going to have to go back and refactor.
And all these surprises, all this uncertainty, all these unknowns create a situation where our plan just isn't really applicable anymore. This is what I call the rubber hitting the road: reality sets in, and we just have to make do with what we have. We respond, we react, we cut out some testing, we do some more testing in a different place; we respond to the situation. That's okay when those risks and changes are within our control. But sometimes, as we've seen from this pandemic, things are outside our control. Our market is disappearing, especially if we're working in travel. How can we respond to a market which we don't even know is going to exist? We know it will change at some point, it will come back, but for now it's just: well, no one's travelling. What are we going to do? What sort of software should we develop? We might find these companies having to totally pivot into different areas. So some of these surprises and uncertainties are outside our control, and we just have to be able to respond to them. So what do we do with these? And what do people generally do? Now, my experience is that we try to create order. We try to make sense of the situation. There's chaos and change everywhere, people running around trying to get things developed as well as we can and just get them out the door, right? It's not a very pleasant situation to be in, and it's very stressful. We don't like being in this space. We want to create some order out of this chaos. We create a story in our heads in order to work through it. That's how we try to handle all this change: we seek order where there is chaos; we try to impose some sort of order so we can handle it.
In software testing and in software development, we've developed some strategies that help us do that, right? We've reduced our batch sizes. This is a brilliant strategy: in a situation with so much more uncertainty, we reduce the batch size. And this is something we should be doing: the greater the uncertainty, the smaller the steps we should be taking, because we really don't know what the output or the outcome is going to be. If we take small steps, we can respond to those changes a lot more rapidly. We also try to impose a test process, and I've seen this time and time again, when things get a bit crazy. It could be that your company is expanding rapidly: you're hiring maybe seven developers a month, and the amount of work has just escalated. Where you were having conversations with people and knew everyone in the organization, suddenly you don't know anyone, and those conversations, that collaboration you relied on, have disappeared. And so people naturally start to think: okay, we need to impose a process here, because things are not working; we need a different way. The challenge, of course, is that if we rely too much on the process, we forget why we had the process in the first place, and the process becomes the work rather than a tool to support the work. But we knuckle down and apply the process even harder, as if we're the ones doing something wrong: the process is right, we're just not following it; if we just follow it strictly enough, we'll get it right, and then we'll get everyone to adhere to it. Another common response is: we need to test more. This is really hard, because testing more is really, really expensive, and we need to maintain all that test code. You know this yourselves: the cost of testing is high.
There's a cost to designing a test, a cost to implementing it, executing it, and responding to the information you get from it. Every test has a cost. So just saying let's do exhaustive test coverage to reduce risk is a really expensive strategy to adopt, and not necessarily a useful one for everyone. We also try to get better at predicting estimates. We're asked to provide an estimate, things change dramatically, and so clearly we have to get better at estimating; the focus becomes how we can make our estimates more accurate than before. And we try to put metrics in place, right? We obviously don't know what's happening, so we put metrics in to make sure things are working the way they're supposed to, and we start benchmarking things so we know whether they're okay and we can control them. Really, this is all a form of micromanagement, isn't it? When things go beyond our ability to control, the tendency is to hunker down and try to control them anyway. At least that's my tendency; maybe you respond in a different way. But certainly that desire to control things more and more is something I've seen. We've got to try to avoid this stuff, right? We want to stop chasing false predictability and embrace uncertainty instead, because we can't control the outcome of everything, but we can respond, and we can learn from these situations. We've got to learn to embrace this rather than trying to control it. So here are some approaches I've thrown together. Rather than giving people absolutes, think about estimates: instead of providing one number, consider giving people ranges. Think about sharing information about the nature and unpredictability of software development.
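The "ranges, not absolutes" idea can be made concrete with a little arithmetic. Here is a minimal sketch, not from the talk itself: the task names and numbers are invented, and it uses a simple three-point, PERT-style calculation to turn optimistic/likely/pessimistic guesses into a range instead of a single figure.

```python
# Illustrative sketch (tasks and numbers invented): report an estimate
# as a range using a three-point (PERT-style) calculation.

def pert_range(optimistic: float, likely: float, pessimistic: float):
    """Return (low, expected, high) for one task, in days."""
    expected = (optimistic + 4 * likely + pessimistic) / 6
    spread = (pessimistic - optimistic) / 6  # PERT standard-deviation approximation
    return (expected - spread, expected, expected + spread)

tasks = {
    "login flow": (1, 2, 5),              # (optimistic, likely, pessimistic) days
    "payment API": (3, 5, 12),
    "exploratory test charter": (0.5, 1, 3),
}

low = expected = high = 0.0
for name, (o, m, p) in tasks.items():
    lo_t, exp_t, hi_t = pert_range(o, m, p)
    low, expected, high = low + lo_t, expected + exp_t, high + hi_t

print(f"Estimate: {low:.1f}-{high:.1f} days (expected ~{expected:.1f})")
# → Estimate: 6.8-12.0 days (expected ~9.4)
```

The point is not the formula (any reasonable one will do); it is that the stakeholder receives a spread that honestly communicates uncertainty, rather than a single number that will almost certainly be wrong.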
Software development, as we all know, is really, really hard. And this idea that we can accurately predict and measure something incredibly complex and constantly changing is not always realistic. It might work for some projects, in contexts that are very stable and not changing much; there, it may be much easier to make an estimate. But when you've got high change and a lot of unknowns, it really doesn't make sense to predict far ahead. We need to think about predicting in much shorter terms, or focus more on learning rather than providing absolutes. I like to talk about providing information as opposed to pass/fail. So talk about potential risks and what we learned from the testing, as opposed to how many of 600 test cases passed or failed. We don't want 600 test cases failing, that would be bad, but think about how you can provide value in terms of information rather than absolutes. I also find it valuable to take a step back and hold workshops about what quality is. Start thinking about what we're actually trying to achieve here. Take a step back and think about why we're doing software testing. I encourage people to do this thought exercise: if, for whatever reason, you were not able to do any software testing, how would you know that you have quality? Because there are alternative ways we can think about obtaining quality. It could be that we think about it at different levels of abstraction, or that we look for other ways of knowing we have quality apart from formally testing for it. Again, think about what we learned over whose fault it was.
Look to learn from situations rather than just providing information, and use thought experiments. And finally, I can't help but give a little nudge to exploratory testing, because I really think that when you have a lot of uncertainty and a lot of unknowns around you, exploratory testing is amazing, because it's so rapid in its response. You can change your test design on the fly and respond to the situation based on the information you've just learned. That doesn't mean we can't use any tooling; of course we can. But in my experience, when I've been in situations where I'm having to respond really, really rapidly, I've found exploratory testing amazing for handling that uncertainty and those unknowns. Now, enough about that. There's one more thing I want to talk about, and we're going back to one of our favourite topics: boxes. Because let's face it, we love boxes in testing, right? We love black boxes. We have white boxes. We have gray boxes. We love our boxes. They help us explain what we do, to explain testing, different types of testing, and to convey information to people who don't necessarily know much about testing. But there's a problem with boxes: by describing the box, we're also fixating people on it. So if you describe testing in terms of a black box, then that's what people think testing is, and sometimes it's hard to think beyond that. Now, there are lots of other types of boxes in testing too. For example, we might see ourselves as a specific type of tester: I'm an exploratory tester, or I'm a test automation tester. That's a kind of box too, right? It's a very useful box, because you can explain to someone else what you do. But on the other hand, it's also limiting you.
Because you're putting your identity into that type of box. And then when people see you, they'll say: well, you're that type of thing. The challenge is that when you create this mental model of the type of testing you do, or the type of tester you are, people start limiting what they think you can do because of it. Now, you know you can do way more than test automation. You know you can do way more than exploration; if you're a black-box tester, you know you can do gray-box testing too. But the trouble is that other people don't see that, because it's easier for them not to. They're stressed, they're busy, they're rushed. They don't want to think about the nuances of what else could be done. They're like: oh, you're this type of person, this type of tester, and I'm a really busy person, I don't have time to think about this differently. And so you'll often find that people narrow your ability, or your approach, based on this. They'll be thinking maybe you'll cover the three acceptance criteria of a story, and that's your role, not realizing you could do much more testing beyond that: you could be doing exploratory testing of whole systems, or test automation of a whole system. We want to help people shift out of this, right? We want to avoid this type of fixation from other people, but also within ourselves, because I think sometimes we allow these boxes to limit our own ability to think differently. And sometimes we're limited because we think: oh, I'm not good enough, or I'm not confident enough, or I don't know what to say. These are our boxes; we've put ourselves in a box in that way too. So we want to think outside the box. We want to shift people's mental models, right? So we can have an open discussion and explore different opportunities that might help us embrace the uncertainty. I want to shift the conversation.
One of the reasons why I have personally started talking less about testing and more about quality is exactly this. Because what I see is that when I talk about testing, people think test automation, functional testing, and that's it. But the reality is that I'm talking about a lot more, and I want to be able to explore those things. By talking about quality, and exploring what quality means, I'm opening up the conversation to areas people might not have thought about before, things like speed as a quality attribute, which we'll come to in a minute. So here are some examples of how I try to shift the conversation. One of the things I like talking about is outcome over output. Now, you might hear this quite a bit, actually; a lot of people are talking about outcomes, especially in the product space. What I mean by this is: rather than thinking about whether I've covered three acceptance criteria or completed my tests, think about the purpose of doing that. What is my C-level hoping to achieve from those outputs? And how do I know I'm achieving those outcomes, instead of focusing only on the outputs? For example, here are some typical testing outputs: test coverage, bugs found, bugs fixed. We might even measure these, and they might even be KPIs for you. These are things commonly used to measure or understand how well we're doing. Rather than focusing on those, we want to focus on the outcomes. As we talked about previously: how confident are we to release? What are the threats to quality, rather than the bugs found? What other threats exist out there? Is it just functional? How are we doing on our security testing, anyway? What other ways could we be doing testing? Are we really testing early enough?
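To make the outcome-over-output idea a bit more tangible, here is a hedged sketch. The metrics and numbers below are invented for illustration, not from the talk; the point is reporting a lower-is-better quality signal as a direction of travel across releases rather than as a raw pass/fail count at one moment.

```python
# Illustrative sketch (metrics and data invented): report quality as a
# trend across releases instead of a single pass/fail snapshot.

releases = [
    {"version": "2.1", "escaped_defects": 9, "mean_time_to_recover_h": 12},
    {"version": "2.2", "escaped_defects": 6, "mean_time_to_recover_h": 5},
    {"version": "2.3", "escaped_defects": 4, "mean_time_to_recover_h": 2},
]

def trend(values):
    """Return 'improving', 'worsening', or 'flat' for a lower-is-better metric."""
    if values[-1] < values[0]:
        return "improving"
    if values[-1] > values[0]:
        return "worsening"
    return "flat"

defects = [r["escaped_defects"] for r in releases]
recovery = [r["mean_time_to_recover_h"] for r in releases]
print(f"Escaped defects: {trend(defects)} ({defects[0]} -> {defects[-1]})")
print(f"Recovery time: {trend(recovery)} ({recovery[0]}h -> {recovery[-1]}h)")
# → Escaped defects: improving (9 -> 4)
# → Recovery time: improving (12h -> 2h)
```

A report like "escaped defects are trending down release over release" speaks directly to a C-level's desire for confidence and predictability in a way that "600 test cases passed" does not.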
And then: what is the state of quality at this point in time? Even better, talk about the state of quality not at one point in time but as a trend. That would be really interesting too. Another area I like to talk about is recovery over perfection. Again, there's this idea that we're focused on one release, but in reality we're going to have release after release after release. And trying to get perfect software out the door every time, when you've got massive uncertainty and loads of unknowns, is really just keeping your fingers crossed and hoping things will work. We want to start thinking about how we can recover, as opposed to getting perfect software out the door. Now, that doesn't mean we don't care about what happens beforehand, or that we stop testing; we still want to do really great testing. But the complexity in our systems, and you can think about microservices, mesh networks, and all the different technologies, is only making our systems more complex rather than simpler, and that complexity carries over into our testing. So rather than only trying to ensure great quality up front, how about we focus on being able to recover from faults really quickly? That means shifting our thinking towards testing in production, and also having greater observability, greater visibility into how our systems work. If we can rapidly find the root cause of a failure, we can rapidly fix it. Now, we all know end-to-end bugs are notoriously difficult to root-cause, so how about designing our systems to make it easier to identify where that root cause is? Immediately we've opened up the conversation: rather than getting our software testing perfect, we're thinking more about designing our systems to enable us to recover quickly. Another area I've been exploring is what I call fix over find.
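Before moving on, the recovery-over-perfection idea above can be sketched in code. This is an illustrative toy, all class and function names invented: it pairs a deploy with a health check and an automatic rollback, instead of betting everything on the release being perfect.

```python
# Toy sketch of "recovery over perfection" (all names invented):
# deploy, watch a health signal, and roll back automatically rather
# than assuming the release is flawless.

class Release:
    def __init__(self, version: str):
        self.active_version = version
        self.previous_version = None

    def deploy(self, version: str) -> None:
        self.previous_version = self.active_version
        self.active_version = version

    def rollback(self) -> None:
        # Recover quickly by reverting to the last known-good version.
        if self.previous_version is not None:
            self.active_version = self.previous_version

def monitored_deploy(release: Release, version: str, error_rate,
                     threshold: float = 0.05) -> bool:
    """Deploy `version`; roll back and report failure if errors spike."""
    release.deploy(version)
    if error_rate() > threshold:   # observability signal, e.g. from monitoring
        release.rollback()
        return False
    return True

service = Release("1.0")
ok = monitored_deploy(service, "1.1", error_rate=lambda: 0.20)  # unhealthy release
print(ok, service.active_version)  # → False 1.0
```

In real systems this role is played by feature flags, canary deploys, and monitoring; the design point is that investing in the rollback path and the observability signal is itself a quality strategy.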
And this came from a situation a long time ago, when I was trying to explain to somebody what I did in my profession. Now, this person was not technical at all. And I was trying to explain: I'm a software tester, and I find bugs. And he just went: okay, hang on a second. So it's like this. I have a bicycle and it's got a puncture, and I take it down to the garage and say: I've got a puncture. You take the bicycle, you take the tire, and you do a lot of work finding that hole. And then, when you find the hole, you walk away. And it really made me think. At first I was like: well, there's a lot more to it than that, software is really complex. But then I thought: you know what, that's the reality. Our role, to some degree, is only about finding. Now, this is a really important and really difficult role to perform. Finding bugs is hard. It's hard work. But the actual reality is that we find bugs in order to have them fixed. We want to focus on how we're delivering value to our end customers, on how we can get value out the door to them as quickly as possible. So think about it: could we possibly find and fix the bug, with the technologies we have in place? We've got change control, folks; we can always revert if it doesn't work. We can pair with developers if we're not confident about fixing. We could start with fixing simple bugs and then move into more and more complex areas. I would love to see people exploring this a little more and finding out how they can incorporate fixing. Because I can tell you, I can't think of many C-level people who wouldn't think that's a good idea. Okay, I know I promised to talk about leadership and I've hardly talked about it once, but this stuff, I think, is important. So, as I said, leadership in this time is more important than ever before.
We need leaders to help us through this uncertainty. We need leaders to help prepare people, organizations, and systems so we can respond to the uncertainty in a productive way, rather than just react to it. And we are the quality people to do it. We have the most experience with testing software. We know the complexities involved in delivering quality software out the door at speed. So we are in a prime position to help in this space. How are we going to do that? Well, first let's look at the difference between test leadership and test management, because I think it's hard to describe what a test leader is. Now, personally, I think leadership is a very individual thing, and that's one of the reasons why it's difficult to explain. But I love this quote from John Kotter: managers promote stability, leaders press for change. Because there is a real difference between test management and test leadership. And we've pretty much seen test management fall by the wayside, haven't we? Large test teams have mostly disappeared. But there's still a huge need for test leadership. Yeah, hi, sorry to interrupt, we're just into the last 10 minutes. Okay, lovely, thank you. So this is where I talk twice as fast in order to get through the rest of the slides. So there's a huge difference between the two. Management and stability is about getting things back to normal, whereas a leader is looking to embrace that uncertainty, embrace that change. And that's the mindset we need today. Because, as I said, uncertainty is the new black. Things are not going back to stability. We are going to have constant change, constant. And we need to be able to constantly respond to it. Now, the key thing for me here is that we really have to start with acceptance.
Before we can do anything about embracing change, moving forward, and having a vision for the future, we need to recognize the reality and acknowledge what it is: we don't know. We don't have all the answers. And I don't think leadership is about knowing what to do. It's actually about recognizing that you don't have all the answers, being okay with that, and being open about it. So if you're thinking about test leadership, I'd encourage you to accept that. Accept that maybe it's okay not to know everything. Now, we might feel a little discomfort over this, right? It can be hard to accept that we don't know, and it might not feel easy. We strive to have answers, to give ourselves answers and give other people answers, because it makes us feel a lot better. But I really believe that acceptance is the first step. Because then we know where we're at. We can see the reality for what it is, and once we see that reality, we can start working towards the future. And this can be hard for us in testing too, because I think as testers we like our pass/fails, our blacks and whites, and it means we have to start embracing a bit of gray, right? And I'm not talking about hair colour here; I'm talking about embracing uncertainty. One way we can do this is by getting curious. Curious about everything around us. How are our colleagues doing? How are they coping? How is that co-worker who's just had a baby, now two months old? Getting to know the people around us, and people outside our organizations, is massively important, especially outside of tech. Getting to know your business people, reaching out to them, not because you need something, but to get a better understanding of where they're coming from and their mindsets: these are all going to be hugely beneficial things for you. And try to let go of that feeling of losing control. I know, I find this scary.
It's hard sometimes, especially when we're used to owning the testing strategy or owning quality. And maybe for a lot of us, we're finding that we actually have to let a lot of that go, because the team is now responsible for everything, and some of the things we used to be in charge of we're actually handing over to other people. Again, it's hard stuff to do. And that's why we need a big dose of courage. Courage to say: maybe let's test less, not more. Or let's do more exploratory testing, or let's test earlier. Courage is trying something new without knowing whether you're going to succeed or not. Courage is acknowledging mistakes. Courage is speaking out. Courage is acknowledging your humanity, your imperfections, and your limitations; I'm talking to myself here. Courage is letting people help you. Courage is taking time to breathe and say: enough. These are hard times. I think we should have the courage to be kind to ourselves as well. On a more practical level, vision, strategy, and alignment are some of the key things we want to be looking at: having a vision of where you want to be, having a strategy that supports it, and then alignment, by which I mean making sure that everybody knows what your strategy is, and also how it's changing. Now, when I talk about strategy, I'm not necessarily talking about a testing strategy. Again, I avoid talking about testing strategies; instead, I like to talk about quality strategies. Quality strategies, for me, open up a whole field of things I can discuss: quality of the product, quality of how we deliver that product, quality in production, quality in ops, quality of infrastructure, our processes, our people. There are so many different things that feed into quality. It allows us to suddenly have a much broader field to discuss. One of the things I really like is Kent Beck's discussion of the three X's: Explore, Expand, Extract.
Explore is when we have so many unknowns. We've got loads of uncertainties; we don't know what to do. So we run tiny little experiments, and we learn from them, until we reach a point where we've learned enough to know that, hey, this is the feature we want to go with. Then suddenly we want to grow that feature. We choose that one product, that one feature, and we want to scale up, right? So we have this chaotic time where everything's expanding: we're trying to scale the number of subscribers, we're trying to scale the number of developers, we're in rapid growth. And finally, once we've actually managed to scale, we get to the point of efficiencies, of economies of scale, and we're looking at how we can get efficiencies out of our system. I'll just leave this with you to think about: what sort of quality strategy would you implement for each of those different stages? For me, they would be completely different, because of the amount of uncertainty in each situation and because of what our clients or our businesses are looking for in each of them. Again, think about emergent quality: quality as something that emerges rather than something in itself. And don't focus only on the product; look at infrastructure, process, people and outcomes as well. Think about things like quality coaching: handing over a lot of our testing to the developers and providing input as a coach, as opposed to being the one doing the testing. This is proving to be a very popular approach with many people. So we are in new territory, right? We've got new risks, things we hadn't even considered at the start of the year: stress and burnout. These have a big impact on quality. Psychological safety, in my opinion, is one of the most important things when it comes to quality and understanding quality.
For me, I need to feel safe to speak up and speak out when I'm finding bugs. If I don't feel safe raising concerns, it's going to be much harder for me to do so. So immediately we can start seeing the impact of psychological safety on the quality of a product. We need all these factors, all these pistons firing, in order to have a great quality product, and you could consider looking at all these different elements in order to build one. Optionality is something else I'd have a think about. So often we get fixated on one solution. I'd encourage you to use Jerry Weinberg's Rule of Three: think of three solutions before you actually decide on your final one. Explore them, and find out whether there are things you're biased about or fixated on that are preventing you from thinking outside the box or looking at things in a different way. And finally, habitat. I think this is one of the crucial things: as test leaders, we want to be creating an environment where others can thrive and grow. When I think about thriving and growing, I think about being able to self-learn, right? To be able to learn about new things and to own that learning. It's really important to have the safety to be able to make mistakes in a considered way, to grow, to try out new things. To encourage collaboration, people working together to solve problems as opposed to trying to solve them by themselves. To give people autonomy. And as you can see, safety is so important it's there twice. When a flower doesn't bloom, you fix the environment in which it grows, not the flower, right? So often we're fixated on trying to fix people, but what if instead we created a habitat where people would naturally thrive and grow?
And maybe, if we create that environment, people will be able to naturally grow and become more confident in what they do. A huge part of this, for me, is coaching. Coaching is an enormous part of my test leadership strategy. Not mentoring, not training, not consulting, but coaching. The way I see coaching is helping people to self-realize and self-learn. That means putting people in situations where they don't have all the answers and they have to figure it out themselves. That's part of it. But it's also being there alongside them, encouraging them, helping them, or pointing them in a different direction if they seem to be going off track. Not providing them with all the answers, but helping them learn how to learn. I don't really have time to go through this, but I'd encourage you to look at the Satir Change Model, by Virginia Satir. Jerry Weinberg used to work with Virginia, and it's just a great model for how people respond to change. Look at the foreign element: something happens that you're not expecting. Maybe we'll call the foreign element a pandemic, right? And this puts people into chaos. The thing about the chaos is that it's a crucial part of people learning how to do things better. At some point there has to be a transforming idea that helps them see that, you know what, maybe there's a way through this. I'm sure each of you has had experiences like this yourselves. And gradually, by owning that learning and going through the struggle, you get to a new status quo where things are better, where things have improved. As Michael Bolton said, we can expect the unpredictable. We can anticipate it to some degree, we manage it as best we can, but ultimately we learn from the experience.
And my learning about being a test leader in these uncertain times is this: constantly reinvent your strategy. Create feedback loops in your strategy that allow you to refactor based on the information coming back from what you're doing. Are we on the right track? Do we need to change? Do we need to pivot our strategy? Do we need to do something different? Be open to that feedback. Make risk your guiding star: if you're not sure what to do, think about what the biggest risk around you is, and go for that. Coaching, as I said, is massively important. Push decision-making down to other people. Don't feel like you have to own everything; hand it out to others and let them grow and thrive by doing that. Build external relationships with people outside of tech in your businesses. Learn from them. Try to understand what they want. This will help your vision expand considerably. And finally, it's okay to have a little bit of healthy tension and conflict. We can expect that; these times are difficult. And one that's not on the list: be kind to each other. Thank you very much for your time. Thanks, Anne, thanks. That's a beautiful learning. My favorite was the whole team collaborating on fixing the bugs. So thank you so much for sharing these minutes of wisdom with us.