All right, thank you. Do we have the mic? We're good? Can you hear me? Yeah, OK, great. Thank you. So I have a talk today that I wanted to give you, and I call it Checking as a Service. It's about how my thinking has evolved recently about how some of the traditional ways we've thought about testing have changed with recent changes in technology, in architecture, and so forth.

I'll tell you first a little bit about myself. I work for HomeAway. HomeAway was recently acquired by Expedia, so I'm part of the Expedia family now. I work as a test architect, where I develop strategies for GUI, API, and integration testing, all types of automation, including Selenium. We're big users of Watir WebDriver. I've been doing automated testing for over 20 years. I started working with a lot of the commercial tools; I worked for several of the tool vendors years ago. I started publishing criticisms of the commercial tools and what I thought was lacking in them, and when I felt like those needs weren't being addressed, I started getting involved in open source. That's what brought me to the Selenium project. That was when I was at ThoughtWorks, in 2004, which was also when I met Duresh. At that point we launched the Watir project, which was something I did, and then there was the original release of Selenium that was sponsored by ThoughtWorks. I helped get that sponsored and released as open source. I think that was one of the first things officially open sourced from ThoughtWorks, back when that was still a new strategy. Now you see a lot of companies with open source strategies, but it was a new thing back then. Before that, in 2001, I co-wrote a book called Lessons Learned in Software Testing, which some of you may have read. Over time, the Watir project, which I'd been working on, and the Selenium project were on parallel tracks, and we brought them together in 2009. Jari Bakken did that implementation. So that was the Watir API, a Ruby API, running on the WebDriver backend, and that's what I've been using since then as my primary GUI automation driver. I also wanted to mention that I have a background, a degree, in philosophy and mathematics, and that may show up in some of the ways I think. So much of my career has really been about looking at how we can do a better job of automated testing, looking at what's happening and seeing where we can improve. I've been thinking systematically about this kind of thing for over 20 years.

So I wanted to start with this paper. This is a ThoughtWorks paper that was recently published that I thought was very good. The title's on the bottom; you can find it if you want to read it. And this graphic comes from that paper. The basic premise of the paper is that 10 years ago, this was the type of application that many of us were developing and testing. You had a browser hooked up to an app server, which could be Java or .NET, and you've got SQL behind it and some kind of reporting service. That was a fairly traditional architecture; in fact, the architectures 10 years before that weren't much different. The main difference in that time was moving things onto the web, but the back ends were still fairly similar. Also, a lot of the applications back then were still software as a product. That's what I was working with. That is to say, we would develop the software,
and then we would release it to the customers, who would install it on their site. With software as a product, QA would take it, test it, say yes, good, and then it would get shipped out. So it was still the traditional kind of shipping system, where you had a fairly long development process. And it was fairly important that you got it right. You wanted QA involved; you wanted to make sure, because reshipping was expensive and difficult.

What I've seen in my work is that I've certainly moved along from that type of architecture to this, which is much more aligned with what I'm working on nowadays at HomeAway, and I think with what a lot of other people are seeing. Maybe just a show of hands: who feels like this is something they're seeing as well, moving from that type of simplicity to something like this? So some of you have been seeing this kind of change too. And there's a lot of complexity here. Before, this was stuff where I could just install it myself: I could get it, I could update the SQL, I'd drop in the new Java code or whatever. Here we have microservices with multiple front ends. We've got different data stores behind them. We've got queuing services. There's a data lake, and then there are various analytics engines running off that data lake. So that, to me, is a more typical architecture, both for what I've been doing in my day-to-day work and, I think, for what a lot of other people are doing. I wanted to say that because, to me, that's the situation we're in.

And the big challenge with this type of change is that we're releasing more frequently. At my company now, we regularly have components that are released every week or every two weeks, and we're being pushed to go much further than that and start doing daily releases. So how do you do your automation, how do you do your QA, when you have that very rapid turnaround? That, to me, is the challenge we're facing today.

The traditional approach I've seen for dividing up the QA roles is that you've got the manual testers, and then you've got the automators, the people who are developing the tests and the scripts. I've been doing this for a long, long time, and I always said this was great, because with a manual tester, you're doing testing, right? So what's the output of manual testing? It's the bug reports. It's a coverage report. You'll do some planning in advance. But the point is, the testing only happens if the tester's around. What's great about automation is you can write the tests so that anybody can run them, so you're not tied to a single person. There's a new release now; we need someone to test it. Part of the job of management is figuring out how to line up the resources needed to get the testing done so we can keep to the schedule. With manual testing, you've got a certain amount of time you need, and you need to make sure you have the right people. So that, to me, is what made me really fall in love with automation. I've been doing automation, like I said, for a long time. Ideally, anyone can run the tests, and ideally, anyone can figure out what's going on. But what I've seen a lot of times is that we really get something more like this.
The tests run automatically, maybe overnight in a CI system, and then in the morning it's the job of the test developers to go look through the results and figure out if there are any bugs in there. There are a lot of false alarms, a lot of red, and you have to sort it out. Does anyone have this kind of challenge? Okay, so this is something I see too. Part of what I'm trying to work on with my teams is how we get out of this. This is what I've been calling manual automated testing: automated testing that still requires an automator to get involved to interpret what's coming out.

It's still better than plain manual testing, in the sense that you can get a lot more coverage, faster. You can get a full run overnight, and in the morning you come in and figure out what's right and wrong. That's better than having to do all of it by hand. But from a manager's perspective, and I've been in both roles, it's kind of frustrating, because you come in and ask, well, what do we have, and how long does it take? And what happens when your automators go on vacation? That's been a challenge for us this summer. It's summertime, people are away, and then, okay, the guy who wrote the test suite is on vacation. We can run his tests while he's not here, but it's red, and we don't know if these errors are real errors or false alarms, something we've seen before or something new. Sorting that out takes time, and you need someone who's familiar with the tests. It can get tricky for the manager in that situation, because not only do you need someone who knows the product, as with a manual tester, now you need someone who knows the test suite. So that's an even narrower pool. That's the situation we're in, the one I'm in, and the one I think a lot of people are in.

I've been thinking through several new ideas that I think give us some leverage to change the situation, and that's what I want to go through here. The first thing I want to talk about is this distinction between testing versus checking. Has anyone heard about this before? Yeah? Can someone tell me what you think that means? What's the difference? Right, so he said that with manual, it's more exploratory. And checking, would you say? Right, so it's looking for your expectation: you have an idea of what you expect, and you check whether you find it. Are there any other ideas about the difference between these? Does that sound right? Yeah. So these are the words I've put together for it. When it first came out, I actually thought this was kind of a silly distinction. But then I realized it's actually interesting, because I think this distinction was first put in place as a way of focusing on testing, on manual testing and all the things involved in it, so we didn't undersell the manual testing. But I realized that checking is actually what we really want from automation. And with that manual automated testing, we're not really getting checking; we're still getting testing. So that's what I've been pushing lately on the automation side: do we have checking?
Do we know right away whether there's an issue and what the issue is?

Another thing behind this is A/B testing. Who does A/B testing? I know we had some talks about that yesterday. With A/B testing, I think a lot of QA people wonder: what's our role in A/B testing? Should we be involved? Are we doing that? And I feel like part of that testing idea, exploring whether a product is fit for use, which is what I put here, is exactly what A/B testing is doing. So testing is kind of coming out of QA, coming out of that role, and now it's something the whole team is embracing. I think this is really good. I remember 15 years ago a lot of QA folks were saying, why are we the only ones who care about testing? Why are we the only ones caring about quality? And now what I see is teams being organized around testing: how are we going to test this? How do we know if this is good or not? I used to be someone who would report usability bugs. I would say, oh, this is hard to use, or whatever. I was good at it, and it was an important part of my job, especially in that old software-as-a-product world, where it's going to get shipped out, it's hard to send updates, and we want something people can use. But with A/B testing, and with software delivered as a service so you can deploy more frequently, it becomes easier for the whole team to take charge of the testing. So it seems to me, because of this, that the QA role, the dedicated focus, needs to be more on the checking, because we've got other people looking at the testing of the whole thing.

The other thing I've noticed is a change in focus, and this one took longer to really sink in: we're focusing more on APIs and services instead of libraries. For a long time, I wrote code and developed test code. I developed test libraries; the tests themselves were a type of library. And I was focused on delivering that, on making sure my code was good and the people who wanted to use it could use it. But when we shift the focus to APIs and services, then in particular the reliability of the system becomes our responsibility. We have to think about not just whether the code itself is good, but how it's being deployed, how it's being run, how it's being supported. And I realized we could say the same thing about our tests. Instead of thinking of our tests as just artifacts, things sitting there that anybody can use or run, we see them as things that are happening, things that need to be supported and operated.

The other thing I've been spending a lot of time on lately is learning about DevOps, learning about Docker, learning about the technologies that help us support automated deployments, which is really what this is about, and about pulling the QA, Dev, and Ops roles together. I feel like part of what the QA role used to be was that we would protect the Ops team from the developers. The Ops team is responsible for the reliability of the service, of the product. And they knew, statistically, that something like a quarter of production issues in general are caused by the introduction of new software. That's just a general number I've seen in different places.
So they want to make sure that this new software being deployed isn't buggy, and they were counting on the QA team to go through it and say, yes, this is good, this is ready for deployment. So the QA team was the middleman in many cases, the gatekeeper between Dev and Ops. And I think what DevOps is all about is short-circuiting that and saying, okay, we're going to have the developers working directly with the Ops folks. It's both a set of tools and technologies, containerization and so on, but it's also a different practice, a different sense of ownership. Instead of saying, well, I am responsible for the bits and the code, and you figure out how to deploy and manage it, that becomes a team responsibility. We have to work together to make sure the whole thing works well. Who's doing this kind of DevOps? Okay, I don't see a lot of hands on this one, but it's definitely something I'm seeing. I'm from Austin, Texas, and not only do I see it at HomeAway, I talk to a lot of my friends in town, and it's something we've seen there as well.

So those are some ideas. And as I put them together, I've been thinking: how does this change how we think about QA, what our roles are, and how we set our priorities? Years ago, I was put on a team solo. It was kind of a funny situation; there was a corporate shuffle. I had been brought in to coach a bunch of people, as an automation lead. I've been doing automation for a long time, so that's usually what companies hire me to do. But because of the way things moved around, I became the sole QA on a project, kind of an internal startup. It was exciting, and I had to take on a lot of different roles with a much broader set of responsibilities. I had to not only write the automation but run it, do the bug reports, figure out what the test plan was, and I was even going outside the QA role, because it was such a small team and we had this new product to launch. So I had this sticky note on my desk: what's the business value of what I'm doing today? And that was something I thought about every day. What am I doing that's valuable to the business? Because it's very easy to get into a rut of saying, okay, it's my job to make sure all of these regression tests are passing. But what's the value of that? What's the value of the things we're doing? How is it meaningful to the business?

I'll tell another story. At another job, a different one from that, I was brought in and they said, okay, you're a Selenium expert, you know a lot about automation, we want a lot of automation. They interviewed me for the job and I talked to all these folks. And so I came in, and even though I was hired to do the automation, this was about eight years ago, when Agile was still pretty new, so I ended up doing a lot of Agile coaching. I had been with ThoughtWorks; I'd seen a lot of effective Agile teams, and that was a time when Agile was still pretty new in a lot of places. So I was doing a lot of Agile coaching, and again, I was trying to be helpful in whatever way I could.
But then there was a point where I really wanted to do the automation. That's still, in my heart, what I like to do, and it's what makes me feel good. So there was a certain point where I said, can I just really focus on that? I'd like to get these tests running. And I remember a particular conversation with an executive there, the one, I think, who had sponsored my hiring and signed off on it. I talked to him briefly and told him, well, you've got me doing all these different things; I was doing audits and trying to make sure we had good quality software. But I said, I'd really like to focus on the automation role I was hired to do. And he said to me: don't rewrite history. I thought that was an interesting comment, because I realized then that there were conversations that had happened when they hired me that I hadn't been a part of. I think this is true for all of us. Whatever we're doing, we're being paid out of a certain budget, and that budget is being justified at the executive level for certain reasons. And I realized I was really being paid out of a QA budget, which in many ways is a risk management budget. At a certain level, people aren't saying, oh, we want more testing because testing is good. What they're saying is: we want fewer bugs, fewer support calls, fewer customer complaints. And the general belief is, okay, better testing will give us that. So it's really a risk management thing. That got me thinking more about what the value is, what this is really doing here, because if the automation isn't leading to those things, it isn't delivering that value.

And so I really wanted to know: what does the money want? That's the phrase I came up with. (I'm sorry, I'm not used to having paparazzi.) And I see this financially right now, too. When we see these economic events and how they change things, you try to figure out what is driving it all. It's not even me and the executives; it's how my company is getting its money and how it all flows through. There's an expectation, I realized, that the money has about what it's getting, and I found that a good way to think about this.

So this is where I feel this needs to go: we need to start thinking about checking as a service. We need to focus less on testing and more on checking, because I think that's what the money wants. And this is really what was originally sold, right? When they came in and said, we want these automated tests, I think what they were asking for was checking: an automatic way that any developer, or anyone in the company, can know whether the code is good or bad, with a very quick turnaround. And what we gave them instead was this: really, your manual automated testing. So here's a phrase my friend Mark, a Ruby tester, came up with: a check ops engineer. How can we come in and think of checking as a service? How can we come in and say: we will give you, as much as possible, that reliable thumbs up or thumbs down about whether things are good? And so that makes me think of this now as checking as a service.
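To make that thumbs up or thumbs down concrete, here's a minimal sketch of the idea in Ruby. It's an assumption on my part, not something from the talk: it supposes an RSpec suite with examples tagged :smoke, and the command and tag are placeholders for whatever runner a team actually uses.

```ruby
#!/usr/bin/env ruby
# A minimal "checking as a service" gate: run a fast smoke suite and
# reduce the result to a single thumbs up / thumbs down that anyone
# on the team can read, with no automator needed to interpret it.
# Assumes an RSpec suite with examples tagged :smoke (placeholder).
ok = system('bundle exec rspec --tag smoke')

if ok
  puts 'CHECK PASSED: build looks good'
  exit 0
else
  puts 'CHECK FAILED: do not promote this build'
  exit 1
end
```

The design point is that the output is a single, trustworthy signal; the moment a human has to dig through red results to decide what it means, you're back to manual automated testing.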
Now, in my more traditional role, when I thought of myself as a test developer, someone creating tests and adding to the test suite, I was generally focused on coverage. And I think that's how a lot of people think about this. They say, okay, how many tests do we have? We start counting our test cases. How many of our scenarios are covered? And what I found on some of my projects is that I'd run people's tests, look at their test suites, and say, well, it's never running green. How come? And they'd say, oh yeah, those tests fail a lot; we see that problem a lot. Or, and this is one that gets my goat, I'll ask, well, what's the issue here? And someone will say, well, there's a timing issue. As if that's an answer, as if that means something. All it really means is another way of saying "I don't know" or "we're confused." So when you're focused on coverage: how reliable are your tests? How often do your tests correctly report defects, and how often do they give false alarms?

I realized we need to start thinking of the tests as a service. As a check ops engineer, you ask: what's the reliability of our service? In operations, you're always looking at your reliability numbers. You're measuring them. What's your uptime? How often is it correct? And yet I hadn't seen that kind of thing happening in QA. So I say you need to look at this. I sometimes call it a health rating: how often are your tests running green? What percentage of your runs are green? People will say, well, there's a test that's failing a lot, and you go in, and it's because the developers didn't fix the bug. So I ask: why are we running that test if we know it will fail until the bug is fixed? I think sometimes the feeling is, if we keep running that test over and over again and it keeps failing, that will force the devs to fix it. I've actually never seen that happen. Has anyone seen that happen, where they fix it just because it keeps running red? You have? Okay, great. If it works, it works, but a lot of times it doesn't. A lot of times the devs aren't looking at the results at all, because there are so many false alarms and other issues. That's what we want to get past. We want to get to a point where they respect the outputs and the whole team wants to keep things green. That's why this health rating is so important to me: it shows the health of the whole project. And when someone says, well, that's an environment issue, or that's a config issue, or some other issue, they're not thinking of it from this check ops perspective. They're thinking: my code's good; there's something else out there, some kind of operational issue, that's the problem.

And of course speed becomes the big one too, because if you have a lot of tests that take a long time, it's really inconvenient. So we try to figure out how to run things in parallel, how to scale horizontally, so we get a faster turnaround time.
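Before moving on, here's a rough illustration of that health rating idea, a minimal sketch only: the run data below is invented for the example, and in practice you'd pull pass/fail counts from your CI server rather than hard-coding them.

```ruby
# Health rating sketch: over the last N runs of a suite, what
# fraction came back fully green? Run data is made up; in practice
# you'd pull these counts from your CI system.
runs = [
  { passed: 50, failed: 0 },
  { passed: 48, failed: 2 },
  { passed: 50, failed: 0 },
  { passed: 49, failed: 1 },
]

green_runs = runs.count { |r| r[:failed].zero? }
health = 100.0 * green_runs / runs.size
puts format('Health rating: %.0f%% of runs fully green', health)
# => Health rating: 50% of runs fully green
```

Note the operations framing: the unit of measurement is the run, not the test case. A suite where every run has one random red test scores zero, which is exactly the point.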
On the speed side, sometimes it's about getting your tests to run faster and not putting lots of long hard-coded sleeps in them, because that wastes time. But it's also about being judicious. Once you focus more on reliability and speed, you realize you need to be smarter about coverage, and that just adding test cases all the time may not be the right thing to do. In fact, sometimes you want to start retiring test cases, because you realize that what we want is a reliable signal of whether this is good or bad, and having lots of tests, especially if many of them are unreliable, is actually going to undermine our ability to get that.

So these are the types of reliability I've seen come up: the reliability of the tests, of the framework, of the environment. To me, these are all part of the QA role now, or whatever you want to call the role, the automation role or the check ops engineer: figuring out how to take care of all these things. This is one of the reasons why, in the old software-as-a-product world, I would frequently install the software myself, and I knew how to do that. On top of that, it was still part of what I was testing, because we would ship something that other people should be able to install. But as we've moved to software as a service, it's become harder and harder to figure out how to install these things, and the configuration is so much more complicated. So we just kind of live with what we have. That's what we have at HomeAway: a bunch of test environments, and people say, well, that's an environment issue; we're seeing an environment issue again. And I feel like we have to own that. We have to make that right. Otherwise we're just like the developer who says, it works on my machine. Oh, well, my tests work on my machine. We want them to work on everyone's machine. We want them working in all of our test environments.

So those are the ideas I have. I would love to hear your thoughts on this. I've given this talk several times, and this is the first time I haven't been interrupted 17 times by the time I got to this point. So you're either very tired, or half awake because it's early morning, or just very polite. Anyway, can we get a mic or something for questions? Do we have any questions or comments? Yeah.

Okay, the question is about checking versus testing, right? So I think the idea of distinguishing between testing and checking was originally introduced as a way of saying: we don't want to just manually go through checklists and do things that could easily be automated. Let's think about all the things involved in coming up with a good test plan and a good test strategy, and we're going to give that the word "testing." The terminology, I think, comes from Michael Bolton, and I think it was intended as a way to help glamorize the tester role, especially the manual tester role, and to kind of dismiss some of the role of automation. Because some places do work that way: you have just a manual script that says here are the things you need to go through, right?
And then the thinking is, well, that's not really a good use of a human being's time. We want to automate that and have the human doing the exploratory testing: thinking about the risks, the scenarios, coming up with new ideas about what can be really effective. And what I realized recently is that I can flip that around. This terminology was originally introduced as a way of kind of diminishing checking, as in, checking is the stuff that can just be automated, so let's automate it, as if automation were somehow easy, as if we could just write out the steps of a script. But I've been doing automation for a long time, and I find it difficult. I find getting a reliable test to be a difficult job. You get false alarms: there are delays on the network for some reason, or maybe you're testing on a system that's under load at certain points, you get slower response times, and your tests start failing. So getting checking to be reliable, I realized, is actually a difficult but honorable role. That's what I'm trying to say: let's embrace this idea of checking. Let's not accept "oh, you're just doing some checking, you're just doing this easy thing." I don't see it that way. Thank you.

And can you go to your last slide? Sorry? The reliability slide. So, test reliability: do you have any tools to measure the reliability of the tests, the test framework, or the environment? How do you measure it? Can people hear her question? Can you please speak into the mic? You're talking about reliability right here: test reliability, test framework reliability, as well as environment reliability. How do you measure it? Do you have any tools or anything you know about for it?

What I can measure easily is overall reliability, because every time I run a test suite, I can say what percentage of the tests are red and what percentage are green, what passed and what failed. That's what I've been pushing for; I've been calling it a health rating or a health check. The problem is that that number includes all of these. How do you segregate them? Sorry, no one can hear you. The question is: we know a test failure happens due to either a test issue, a framework issue, or an environment failure. How do you distinguish between them and present that to stakeholders?

So to me, as a check ops engineer, I don't really care what the problem is. I hear what you're saying; you're asking, how do I let them know what the source of the problem is. But I think we need to own all of them, and we need to say it doesn't matter. The reason we ask this question is that we say, well, I'm doing the tests, someone else owns the framework, and someone else is in charge of the environment, and I want to figure out whose finger to point at. That's the problem: the question of which of these it is and who we point our finger at. The answer, I think, is that we have to own all of it. As a tester, as an automation engineer, if my framework is the source of unreliability, then I need a better framework. If my environment is the source of unreliability, I need a better environment. I have to own all of these.
Now, obviously, on a case-by-case basis we can figure out what the problem is when we look at it. But when I come back and say, oh, it's a timing issue, that doesn't answer the question. The goal is to get this number high overall, and that means we need to systematically own all of the things that are keeping it low. That's the claim I'm trying to make here.

Hi. This may sound like more of a philosophical question, but it's related to the "what does the money want" thing you touched on. Could you throw more light on that? How do we really correlate our work to the business, the value it adds to the business? If you can elaborate, that would be helpful. You're asking how we can be more effective at helping the business? Not exactly being more effective, but how we really understand the impact our day-to-day work creates for the business. So, for everybody's benefit, if you could elaborate on that.

I think part of this means we need to be talking with the people we work with about what they want. It's easy to get into a mindset of, okay, it's my responsibility to own the quality, or it's my responsibility to write these tests. This is part of the reason a lot of what I'm pushing for is that we need to focus more on reliability and less on coverage: because I think that's normally what the money wants. But you have to ask. You have to go talk to the people you're working with and say: right now I have a test suite with 50 tests in it, and every night five of them fail for random reasons. Should I be writing more tests, or should I be trying to figure out how to make this more reliable? That's a question you can ask the people you work with, and they will tell you. And it's partly also about working with the managers: what do they want? What are their expectations? I think sometimes what happens is people get frustrated and say, well, I don't know how to make this more reliable; my skills are set up so that I can write a lot of tests, so I'm going to focus not on what the money wants, but on what I'm good at. And I think that's the problem. We have to figure out, well, maybe I need to learn more. This is why, personally, I feel I have to be more involved in DevOps and be more of an operations guy: because I'm realizing that in my environment, in the situation I'm in, that's what they want. They want reliable test suites. They want test suites that give a clear signal. We have cases, and we have this a lot, where test suites are between 90 and 95% green every day. I think we accept that as QA engineers, but to an operations person, that's horrible. In operations, you're looking at 99%-plus uptime; you're looking for a very high level of reliability. In QA, we accept pretty poor reliability, and I think part of it is because we just say, well, I don't know how to fix that, or this framework doesn't work. Or, like I said, there are these timing issues.
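On those "timing issues": one of the most common causes in GUI suites is hard-coded sleeps, which the talk calls out earlier. Here's a hedged Watir sketch of the fix, replacing a sleep with an explicit wait on the condition you actually care about. It assumes Watir 6+ syntax, and the URL and element locators are placeholders of mine, not anything from the talk.

```ruby
require 'watir'

browser = Watir::Browser.new :chrome
browser.goto 'https://example.com/search'  # placeholder URL

browser.text_field(name: 'q').set 'vacation rentals'
browser.button(type: 'submit').click

# Fragile: a hard-coded sleep wastes time on fast runs and still
# flakes on slow ones.
# sleep 30

# Better: wait for the specific condition the test depends on.
results = browser.div(id: 'results')  # placeholder locator
results.wait_until(&:present?)
puts results.text

browser.close
```

Fixing waits like this is unglamorous, but it's exactly the kind of work that moves a suite's health rating from "mostly green" toward a signal people actually trust.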
Another thing I didn't put on the slide here, but I think is related, is the environments. I think part of what's facilitating this is the move to the cloud, making it easier for us to build environments again. There was a period, 10 years ago or so, when I could install the entire software stack, the entire application, on a machine I had. So it was within my scope to own all of that. Then, as we got this more complex architecture, I couldn't do that anymore. What's happened is that now I have to depend on shared team environments, and I've just kind of allowed myself to not own the whole environment anymore. So this is a challenge for me as much as for the rest of you: how can we get back to that? How do we build that? I think the cloud is one way to get there, where we can say, okay, now I can spin up my own environments again, and I don't always have to use some environment other people put together that's pretty good, but not great. Did that answer your question? Yeah, definitely. Thank you.

I've been listening to a lot of this about QA merging with ops, tech ops. The thing is, I'm a QA, not an automation tester. Do you really think we should be learning a new skill, adding another responsibility to our already big set of responsibilities? We already need to know front-end code, back-end code, the business, the database. Do you really think we should add another responsibility?

That's a good question. I think that's the challenge we face, this responsibility of owning the environment. But like I said, while we have all these other things to own, I also feel like we've let go of some responsibilities. That's why I was talking about A/B testing. I used to feel like usability testing was part of my role as a QA. I mostly don't anymore. I mostly find that I look at something and say, well, this doesn't look the greatest to me, and I may suspect a problem, but I'm realizing we have a methodology that can really find out whether something is going to confuse people or not, and whether it's going to make sense to them. So that, to me, is an example of one piece of responsibility that has moved out, one of the things I'm encouraging us to let go of so we have more space to own some of these operational responsibilities. But you're right, it is taking on a new role. The problem is, if we don't, then it comes back to the two questions: what does the money want, and what's the business value of what I'm doing? If the value of what I'm doing is being undermined by things I'm just going to accept, saying, well, there's a timing issue, I don't know what the issue is, that's why it's failing, it's not a real bug, then I don't feel like we're going to be able to move to a new level. And I guess for me, maybe it's different for you, but like I said, I've been looking at this for a long time. When I felt like the commercial tools were crap, I said, let's use open source as a way of making better tools. Personally, that's how I've always been. Now, does everybody take on that responsibility?
No. But I think that, as a community, that's where we have to move: figuring out how we can own the reliability, the health, of our test suites, because the value of our work is the value of the signal it provides. We have to accept that. If we want to accept a certain amount of background noise in what we have, maybe that's what we have to do; maybe we're in a situation where that's what we have to do. For me, I'm trying to find ways to get past that.

And personally, do you think it's better for a QA, or anyone working in IT, to know a little about a lot of skill sets, or to know a lot about just one? Sorry, a lot of what? Whether a person should have knowledge across a lot of skill sets, for example QA, backend code, JavaScript code, anything, or just focus on one thing, for example automated testing or ops. Just your personal opinion.

I think, in general, what we've seen is that the skills we expect of people are broader. Ten or fifteen years ago, we would have: okay, I'm a database programmer, I'm a front-end programmer, I'm a middleware programmer. We accepted a lot more specialization than we accept today. I feel like in general everybody, not just QA, is being expected to have what they call T-shaped skills: a broader understanding of the different pieces they're working with, and then certain areas where they can go deep. I think that's true of everybody, and I think it needs to be true of QA as well. Do we have another question? Yes, sir. Yeah, in the back.

I have a question on DevOps. You mentioned how QA can get involved with the development and operations teams. How can QA make a difference for both the development team and the operations team? I'm trying to understand that with an example, if you have one. That would be helpful.

Okay. I worked with one team where we really didn't have the DevOps stuff. This was maybe five years ago, even less than that. It was still a mostly manual deployment process. We had a staging environment; we'd manually deploy to it, and QA would check it out. We had a protocol: here's the testing we'll do in this environment to make sure it's all ready to go. Once that had happened, we would give it the thumbs up, and then it would be manually deployed into production. So that's not DevOps; that's the traditional approach. And in that world, there was a lot of pressure and focus on QA. I realized at one point, in an environment like that, that we did not have the ability to do rollbacks. We had a really big QA team, and it was clear to me that the company was spending a lot of money on QA. I realized part of the reason why was that a deployment was such a big risk, and we were part of the way of mitigating that risk. Then I worked with other teams where it was either easier to do a rollback or easier to do a patch.
In those situations, there's less risk from a failed deployment, because we can either pull it back or patch it and get the fix out quickly. And in those cases, and this is what I've been seeing with some teams, the QA people sometimes get upset, because they feel like people don't care about them as much anymore, that they're not as important anymore. And I think that's kind of true. So I feel like what we have to do, instead of saying, hey, we need a formal handoff, is ask: how can we provide the tools? This comes back to that original idea I had years and years ago, that we're creating artifacts other people can use. How can we create a test suite, an automated system, so that anybody can run it at any time and know reliably whether something is good or bad? I want to get back to that world. And to do that, we need the ability to easily and reliably spin up systems, run the tests on them against a given code build, and then give accurate information, through CI or whatever, saying here's what the problems are, while watching the reliability of that whole system. The traditional setup, where we have this manual automated testing and an automator who has to be there to interpret the results, slows that down enormously. That's why I'm saying we need to pull out of that, and the way we do it is by focusing, again, less on coverage and more on the reliability of the tests and the environments, and learning all about them. Like I said, this is where I think we need to go. I don't have all of it figured out, how we get there. That's part of what I'm here to share. Yes, sir. Thank you so much.

Hi, morning. My question is more about the roles. You've kind of answered it in previous questions, but I want to be more specific. You said there are two different roles for a QA engineer, the test developer and the check ops engineer. I want to know where each fits in the software development cycle. If I understand correctly, the check ops engineer is mostly there to make sure everything is working correctly at the end, that everything is green, and correct me if I'm wrong, the test dev engineer would review all the failures. But who would be responsible for writing new tests and fixing them in the end? How do both roles fit in the ideal world?

Yeah, that's a good question. I guess I don't see them as two different roles, but more as an evolution. I'm trying to evolve from being that test developer who's focused on just writing tests, in the same way that DevOps asks a traditional developer to evolve. You could say, well, there's the developer and there's DevOps, but to me DevOps is not a role. I know some people think it's a role, but to me it's more of a philosophy. In a DevOps philosophy, developers need to own, understand, and get involved in the operations of the systems they're developing, more than they used to. This goes back to the earlier question about additional responsibilities: I think that's what developers who embrace DevOps are doing, embracing more responsibilities. And that's what I mean when I say check ops engineer: the test developer needs to also embrace more responsibilities there.
Now, maybe there are some different roles; maybe we do still have some more traditional test developers and some other people who look at the broader picture. I'm still figuring this out for myself and for my team; this is something I'm working out. Those are good questions. Do we have other questions?

Hi, I want to know, in your experience, what is the average lifespan of an automation test case? The average lifetime of an automated test case? Yeah, like how long we have it before we delete it, how long it's good for. How long is the lifespan of a test case before we have to retire it? I'm a bit confused about the question. Every sprint we have to retire some test cases, so in your experience, what would the average lifespan of a test case be? You said that every sprint you have to retire some test cases. Is that because of something I said, or something you want to do? I just want to know your opinion, from your experience.

I'm not sure I want to answer the question directly, but let me say something here; maybe this will take care of it. I find that it's generally easier for automators to write new test cases than to analyze the suites they have. And I feel like that's part of what we have to look at: something like a time budget. We want to be able to provide feedback quickly to our teams, so we want runs to finish within a certain amount of time, and usually you have different test suites. We have some test suites that run in five minutes, or 15 minutes, or an hour. But I was recently checking our results, and we had a lot of test suites that took between two and five hours to run. I thought that was kind of crazy, because it undermines your ability to improve and develop. I want to break those into smaller pieces. And then we want to look at: is there duplication happening in those tests? That, I think, is a hard problem, and I don't have an easy solution to it. Do we need to test every scenario every time? What's the right level of coverage? Part of it comes back to the signal: what are the tests we really need? So I've been trying to push teams to focus on getting a green smoke test first. What's the subset of the tests we have that we can get to run very quickly and very reliably? Then add to that and work from there. I'm still thinking of different ideas about how to get there; that's one of them.

I'll quickly jump in. There's actually a talk from Julian right after this, where he's going to talk about using analytics from the field to determine which tests you should keep and which you can do away with. So I would recommend having a look at that talk. It happens right after this one. Thank you.

There are companies that have a separate automation framework team, where the test-scripting QAs are different from the automation framework QAs. If we cannot distinguish between all the factors, how do you know where the problem is occurring? Well, I think we can distinguish between them. I think the question asked before was whether we can measure it.
As I said, I think that's difficult to measure, but on a case-by-case basis we can look at the failures and say, okay, is this a failure in the test, or is this a failure in the framework?

Yeah, one other thing: if we have to own all of those, well, the automation framework is not built by the scripting team. So why should the scripting team own those things when they didn't build them?

Right. So, to some degree, you have to have a one-team philosophy or none of this works. You've got to be able to say, as a team, we have to own these problems. If you have two separate teams each doing their own thing, then you're going to have exactly these types of reliability problems. I've been in charge of building a framework and providing it to a bunch of people many times over my career, so I understand that role. But if my job isn't reliability, just giving them a framework with features while they own the reliability, that's going to fail. We have to have a cooperative relationship, and in my view, the framework people should be finding ways of making everything more reliable. If you're providing general code for other folks, that's part of your responsibility. You can get some finger-pointing in this sometimes, but the point is, someone has to have a vision for how we're going to get through this, or it's not going to work.

I like that triangle where you said reliability and speed are more important than coverage. I support that too, but I want to hear about your experiences. I'll tell you our situation. We have to add a lot of end-to-end test cases to increase our coverage, because we have almost no unit or integration tests. The developers have a mentality of saying, okay, testing is the testers' job, so they should be writing the tests. So in order to have coverage, we have to write a lot of test cases. And I've met a lot of other people with the same experience: very few unit tests, with most of the test cases being end-to-end, UI, or API tests. So I want to know from your experience, over 20 years: have you been in such a situation, and did you lead efforts to do a transformation, basically un-inverting the pyramid, pushing the tests down to lower layers and getting more unit coverage compared to end-to-end coverage?

I completely agree with what you're saying. That's exactly what you have to do, because of these reliability problems. To some degree, if you have a large number of end-to-end UI tests, you're going to have to accept that you'll have reliability issues, and if you don't have reliability issues, you'll have speed issues. That's absolutely true. Several people here have already talked about the pyramid and how you want to move down it: more unit tests, more integration tests. I completely agree. That's what I'm working on as well. I think that's a key part of the solution, and the things I'm talking about are just more reasons to do it.

So, from your experience, are there approaches, some way we can actually drive that? Because it's hard; the developers are used to working in the mode of, okay, I'm just writing the code, you test it.
So how do you drive those things down? Can you share some tips or experiences on that?

Yeah, I don't have magic bullets. I think we do it by working with the developers, by crossing boundaries, by saying, okay, as a team we need more integration tests, as a team we need more unit tests, and then doing it. I agree, it's a challenge. It's a challenge for me; this is part of my daily job, figuring out ways to do this. But I think these are more reasons why we have to, because otherwise we're just going to stay in a situation that is very frustrating for a lot of people.

Hey, Bret, okay. Bret's going to be around. Yeah, so I'll be around to chat. Thank you for your questions; I really appreciate them. And I will be around for more conversation. So I'll give this to you to pass on. Thank you. Thanks, Bret. Thank you.