I lead the technology practice for a small agile transformation company called Lightspeed, based in Washington, DC, and I'm here to share some of my experiences with exploratory testing. You've all heard the term exploratory testing?

A little about my background. I've been in the software development arena for about 20 years, and for the past 12 of those I've been primarily using agile methods. My background is development; my introduction to agile was through XP in 2002, the early days. Since then I've worked mostly in development and leadership positions, using Scrum, XP, and similar methods. More recently I don't do much hands-on development anymore, unfortunately, because I'm a coach. How many coaches are here? And how many developers? Testers? Oh, good. Nice. I do a lot of technical coaching in the US federal space, on a large transformation, and I'm here to share some of the things we're trying there in terms of exploratory testing.

The outline: I'll give a brief introduction to what exploratory testing is. We'll take a quick survey so I can get a sense of your knowledge of exploratory testing and adjust the content based on that. Then we'll look at the different aspects of exploratory testing, and I'll share my adoption experience, some of the failures and some of the successes. The outcome, I hope, is that I demystify the practice of exploratory testing, which has become a fashionable term these days, and maybe provide some nuggets of ideas for you to go back and try. That's basically it.

How many of you are aware of the testing pyramid? Everybody knows it. But the reality, I'd guess, is that most organizations have the inverted version, the ice cream cone. It's certainly true in my organization. Only one of you? It's true in lots of places. Lots of scripted, pre-designed test cases executed manually seems to be the predominant testing paradigm in many organizations. Then there's automation; of course you have coaches saying automation is important for agility and so forth, but a lot of our automation endeavors are squarely about UI automation, with probably very little unit testing at all. Forget about TDD; that's a long shot. Who here does TDD? Good, I'm glad to hear that somewhere. Anyway, this is not a desirable pattern, for sure. Coming back to the suggested testing paradigm based on the pyramid, our focus is going to be exploratory testing, the nice little white cloud at the top.

At this point let me ask you: what does exploratory testing mean to you? Who's using exploratory testing within their organization? Maybe it's called something different where you are. I'll write these answers down. "Just a fancy word for something random." Something random. Anybody else? "Smoke testing." Random is a prevalent theme. Anything else? Let's take a few more. "Session-based test management." Ooh, I like that. "Discovering how to test." Can I use that in some future presentation?
Discovering how to test. Let's leave it at that. Some interesting answers: random, session-based test management, discovering how to test. There's truth, in varying degrees, in all of those.

Let me start with what exploratory testing is not. Exploratory testing is not doing random stuff; I have to debunk that a little. There is a certain level of, I won't call it randomness, but we'll get to that in a second. There's structure, and there's test discipline that you bring to the practice. It's definitely not madly flailing at the keyboard to see what happens; it's not that unstructured. It's also not repetitive. What do I mean by that? You're not creating long test cases up front, based on requirements, for reuse at a later time, the kind you'd run as a smoke test or repeat manually: click this button, see that result, done. It's not about being repetitive. And, these two go hand in hand, it's not about your QA organization producing comprehensive test cases for use later. Finally, it's definitely not about replacing the practices you already have in place. A disciplined approach to exploratory testing can augment your current test practices, and perhaps eliminate a little of the randomness. If what you have is working for you, that's fine. But perhaps there are ideas within the practice of exploratory testing that you can use to augment your own testing practices.

So what is exploratory testing? The definition is quite simple, and it's easy to put up in a PowerPoint: it's simultaneously learning about the system or functionality you're testing, discovering, as the gentleman said, what exactly you're trying to test, designing the tests, and executing the tests, all happening at the same time. That's the true definition. You run a scenario of some sort, you look at the results, and then you either adjust your tests or you report an issue or a finding. It all happens dynamically and adaptively; it's not a staged, repetitive activity.

Having said that, why do we care? Any thoughts? You're not bringing in a new discipline; your testers still use their existing testing practices and the cognitive capabilities they already have, and they bring all of that to the table in exploratory testing. But why do we care about doing these things simultaneously rather than as a staged execution? "A living test"? Can you elaborate? OK, so it's more action-oriented: the designing of the tests happens as you execute, instead of only when you're planning. That's a good answer; there's a bias towards action. Fantastic answer, because I'm not here to bash scripted testing. It has its place: it helps you verify that things you have produce a certain expected outcome. But that's all it's good at. When you do that, you don't necessarily look at the bigger picture, at the things happening in the vicinity. You're so focused on that specific expected outcome that you can be oblivious to obvious issues lurking nearby, because they're not your focus.
Which leads us to a little demonstration. Give it a minute; I want to share a video that illustrates exactly what you just mentioned. Psychologists call this phenomenon inattentional blindness. Without giving too much away, I'm going to play a short snippet of a video. How many of you have seen this one? Good. Your job is to count the number of times the white team passes the basketball. That's the game. Ready? Can you all see the video?

What's the answer? 14. 13. 12. 13. 11. All in the vicinity. You're thinking: what a useless game, what's the point? Some of you are right; the answer is 13. Now let me ask you: how many of you saw a dancing bear? One. Two. Three. Better than when I've run this before. There is an actual bear in there; well, a man in a bear costume. I'll play it again.

The point is exactly what you said: it's easy to miss something you're not looking for. And what's the connection with exploratory testing? You already answered it. When you're doing scripted testing, which has its place, you're focused on finding the expected outcome, and you may miss the things in the periphery because you're not focused on them. That's what psychologists call inattentional blindness, and using a technique such as exploratory testing to augment your test procedures can help surface those surrounding issues.

How many of you have read this book, or heard of Elisabeth Hendrickson? I love the work she produces. When I read her book Explore It!, which came out a year or two ago, it was something of a revelation. Her hypothesis, and she's not the only one; session-based testing, which James Bach came up with, is based on the same principle, so this is not a new idea, though lately it has become fashionable to say "exploratory testing", is that testing includes more than verification. By verification I mean scripted testing, manual or automated. Even automated tests will find the same things and fail for the same reasons every time; they're not intelligent enough to do otherwise. So testing encompasses verification plus a bit of exploration, which means going off the script a little. As she says, no matter how many scripted tests you write, some interesting observations and issues are only found when you go off the script. That's at the heart of exploratory testing. Does that mean it's fully random? Not quite. Here's a depiction contrasting the left side, verification, checking through scripts, whether automated or manual, with exploration on the right. It essentially illustrates what we've just talked about.
On the left-hand side, everything is documented up front, either as automated scripts or comprehensive test cases, and its purpose is to verify that what was working is still working. All the test design happens in advance. On the testing continuum, exploratory testing doesn't automatically mean a complete lack of structure, freestyle testing. It's a continuum: there are aspects of structure and rigor that you can build into your exploration, which is what I'd like to talk about. You're still learning, designing, and testing, but simultaneously rather than in stages, and you're probing the boundaries, the peripheral aspects. There's a chance you won't be able to replicate some issues, because the documentation can be sparse. There's no prescription for how much documentation you need to produce; it depends on your organization. Typical consultant answer: it depends.

All right, the nuts and bolts. The process for exploratory testing, at least as I gleaned it from reading the material and as we tried applying it in the federal government, is a flow of four steps, and we'll dissect and walk through each of them. As I mentioned, unscripted testing doesn't mean completely random; there is some structure, and you have to prepare for it. The four steps are: focus the exploration; learn about the system's capabilities, which is the "discovering how to test" the gentleman mentioned; design tests with interesting variations, which is where your existing testing techniques come to bear; and then actually attack the system and report your findings. You do all of these simultaneously within a session-based test. We'll talk about session-based testing in a moment.

Let's start with what it means to focus the exploration. As with most things in agile, a nice little template is available. This one comes from Elisabeth Hendrickson, and for those of you not familiar with her work, I'd urge you to check out her website, testobsessed.com. She doesn't maintain it much anymore, but there's a wealth of information there that you can mine and try in your organization. The template is called a test charter. A test charter is a simple little statement that says: I'm exploring a certain target area within my system, with certain resources, to discover certain information. Resources could be the environment you're running in: is it your mock demo environment, is it UAT? What test strategies are you going to apply? And "to discover what information" is what gives it focus; you don't want it completely open-ended. That's what a test charter does.

I'll talk about how we execute our charters in a few minutes, but here's an example of some of the charters we've created, in a large federal enterprise dealing with immigration forms. We have multiple teams, and they all create their test charters in this format: explore the G-28 functionality, using different browsers, to discover UI discrepancies. That gives a team enough focus to run a specific session.
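To make the template concrete, here's a minimal sketch of a charter captured as a structured record. The explore/with/to-discover slots follow Hendrickson's template from Explore It!; the dataclass, the field names, and the browser list are illustrative assumptions on my part, not a format anyone prescribes.

```python
# A minimal sketch of a test charter as a structured record. The three
# slots follow Hendrickson's template; field names and the example
# browser list are illustrative assumptions, not a prescribed format.
from dataclasses import dataclass

@dataclass
class TestCharter:
    target: str       # the area of the system to explore
    resources: str    # environment, tools, data, test techniques
    information: str  # what you hope to discover

    def render(self) -> str:
        return (f"Explore {self.target} with {self.resources} "
                f"to discover {self.information}.")

g28_charter = TestCharter(
    target="the G-28 form functionality",
    resources="different browsers (e.g. Chrome, Firefox, IE)",
    information="cross-browser UI discrepancies",
)
print(g28_charter.render())
```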
So that's an example of a test charter: very simple, but enough guidance for a team to execute on. Good charters provide just that level of guidance without being too prescriptive. You don't want it to read like a test script or a lengthy Word document; it's not a test case. It should be completable in one session. In our case, sessions are time-boxed at three hours, because the theory is that beyond about two hours of focused exploration at a time you're not going to be effective anyway.

The other aspect I'd like to mention is something we've tried because we want to bring the test charters to the fore. As you all know, the role of the tester can become nebulous; where does it really fit? Too often, developers crank out code for the entire sprint, fling it over at or near the sprint boundary, and say "go ahead and test it." Does that happen in your organizations? Right. Maybe you're doing the three amigos, maybe you're using Cucumber, Gherkin, and all of that. The point is, we have the BAs and the testers, before the session, actually talk to our stakeholders and product owners to identify which areas they think carry more risk. We also look at risk parameters, which I'll show in a second, to design our charters. The testers bring these up front, design the charters, and then we get into the test session. So those are some good, and some less desirable, practices for writing test charters. Any questions about focusing the exploration? Simple enough?

Next, learning about the system. What are you really trying to test? You have this focused area; how do you go about testing it? Even before you start designing your tests and variations, you need to understand the system's capability. If you already know it, maybe you don't spend much time on the learning part. But think about a team that's about to embark on some refactoring. That's not about capability they've already built; you can have a test charter that asks, what are the implications of this refactoring? That's a worthy charter to spend a couple of hours on, and we use that quite a bit. Or if I don't understand how a certain component works, I can use a test charter to focus on exactly that. Don't think of exploratory testing as only about finding issues; it's about discovering and learning. The learning aspect is about understanding the general shape of the focused area you're going to work on. And depending on how you execute, you can have multiple charters that groups of people work on in parallel; it's not one charter per session, absolutely not.

Typically, you're not launching a spaceship; we're dealing with inputs, outputs, manipulation, and some reports, more or less. I'm a big fan of mind maps; I don't know if you use them. We typically create a mind map for the component we're working on, and it hangs in our team rooms. It's not a big architectural diagram; it just gives the team some idea of what the various interactions are going to be. Then we mark the spots we're going to explore, and we may rate them based on risk, which is what the different colors are. Again, nothing out of the ordinary: a simple, low-cost mind map to orient our exploratory sessions.

The different colors you saw there were based on the risk profile of a given module. We have a heavily interfaced system on a shared code base, so we have a lot of issues where one team does something and it breaks something else. And this is a large transformation: 10 to 12 teams of about 10 people each, so roughly 120 developers. We often can't just blindly pick an area to explore, so we break up our software components by risk profile: high frequency of use, high impact, the pieces where if this goes down, everything else goes down. That gives us a sense of priority as we define our charters. Again, a very simple idea; it has nothing to do with agile specifically. These ideas have come from the testing community over many, many years. Who uses risk-based test case prioritization right now? Same thing: you can leverage those ideas when creating your charters.
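Our risk rating on the mind map is informal, but if you want a starting point, here's a sketch assuming a simple usage-frequency times failure-impact score. The module names and ratings are made up for illustration, not our real system.

```python
# A sketch of risk-based charter prioritization, assuming a simple
# usage-frequency x failure-impact score on a 1-5 scale. Module names
# and ratings are illustrative, not from a real system.
modules = {
    # module: (usage_frequency, failure_impact)
    "form_intake":    (5, 5),  # shared entry point; if it breaks, everything breaks
    "case_search":    (4, 3),
    "pdf_generation": (2, 4),
    "admin_reports":  (1, 2),
}

# Rank modules by frequency x impact, highest risk first.
ranked = sorted(modules.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)

for name, (freq, impact) in ranked:
    print(f"{name}: risk score {freq * impact}")
# Charter the highest-scoring areas first, then work down the list.
```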
The third aspect is designing tests with interesting variations. This is where the tester's creativity and skill really come to the fore. It's not about blindly executing, click this button, do that; that's not your strength. The real strength a tester brings to the table is this: how best do you design tests to attack the system? To that end, and again I'll refer back to Elisabeth Hendrickson's material, we use heuristics. Heuristics are simple guidelines you don't have to think hard about while you're running your session. You have a focus area; in our case, say, cross-browser discrepancies. Heuristics are simple prompts: if there's a form field, what are the different things you can explore? Enter a lot; enter nothing. Lots of practical suggestions for varying your tests on the fly, because remember, you're doing all of this within the session. What I've depicted here is just a small subset of what you'll find on the cheat sheet we print and hang in our team rooms, which lists the typical heuristics. It's platform-agnostic; you can apply it to whatever solution you're working on. I'd highly encourage you to download it; it's free for consumption, and it gives you a mental model for how to explore. Another great resource is Whittaker and Jorgensen's list of 17 software attacks; just search for it and you'll find more ideas for varying your tests on the fly. Questions about that? Straightforward so far, right? Nothing out of the ordinary; you were probably expecting some newfangled thing.
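To give you a flavor of what those cheat-sheet heuristics look like when turned into quick probes, here's a sketch using pytest. The validate function is a toy stand-in I wrote for this example, since in a real session you'd be driving the actual form by hand; the list of variations is the point.

```python
# A sketch of data-type heuristics (empty, whitespace, very long,
# quotes, non-ASCII, markup/SQL-ish injection) applied as quick probes.
# validate_applicant_name is a toy stand-in for the real form field.
import pytest

def validate_applicant_name(value: str) -> dict:
    """Toy stand-in for the field logic you would explore by hand."""
    ok = bool(value.strip()) and len(value) <= 128
    return {"ok": ok, "error": None if ok else "invalid name"}

HEURISTIC_INPUTS = [
    "",                               # nothing at all
    "   ",                            # whitespace only
    "A" * 10_000,                     # far too long
    "O'Connor",                       # embedded quote
    "Zoë Ångström",                   # non-ASCII characters
    "<script>alert(1)</script>",      # markup injection
    "Robert'); DROP TABLE cases;--",  # SQL-ish injection
]

@pytest.mark.parametrize("value", HEURISTIC_INPUTS)
def test_name_field_fails_gracefully(value):
    # In exploration you watch what actually happens; here we only pin
    # down "no crash, no silent failure" as the minimal expectation.
    result = validate_applicant_name(value)
    assert result["ok"] or result["error"]
```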
And finally, attacking and recording. You've explored the space, you've created your charter, you've learned the intention of the specific component, you've started varying your tests based on heuristics, and now you're playing with the system, trying different things and recording your observations. Some guidelines; again, there's no prescription for exactly how to do this, but here's what has worked for us. We wanted to move away from end-of-the-game independent verification running through manual scripts, so we run these sessions early and often. Time-box the session; that's the premise session-based testing is built on. And encourage pairing: for us, every charter must have at least two people exploring that space. Nobody cares if you go off the script, because honestly there's no script to follow; you just stay within the charter's scope. As for documentation, I'll show you what we do, but maybe you have your own way of recording findings. In our case we run these sessions once a sprint with a whole-team approach, which I'll get to, and we just use stickies and such for observations. Then we come together and decide: which of these are real issues? Which are nice-to-haves? Which aren't issues at all? That's a collective decision before we enter any tickets into JIRA. Whatever works for you, but low documentation is key here.

So that's really the mechanics of running exploratory testing. Any questions about the mechanics? [Audience: what if you stumble onto something somewhere else entirely?] Right, that happens all the time, and the idea is not to be too prescriptive about where to stop. We give people free rein, but we don't want the session to go completely off on a tangent, so maybe we quickly jot down what path was taken and say, let's revisit that. Or maybe we design a new charter on the fly; that happens too. If it's significant enough, we'll find it again.
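To make the low-documentation recording side concrete, here's a minimal sketch of what a session record might hold. In practice ours are sticky notes or a Confluence page; the field names and tags are illustrative assumptions, not a prescribed format.

```python
# A minimal sketch of a session record for the debrief. In practice these
# are sticky notes or a Confluence page; field names and tags here are
# illustrative assumptions, not a prescribed format.
from dataclasses import dataclass, field

TAGS = {"BUG", "QUESTION", "IDEA", "WORKS_AS_INTENDED"}

@dataclass
class SessionRecord:
    charter: str
    explorers: tuple        # at least two people per charter
    duration_minutes: int   # time-boxed
    findings: list = field(default_factory=list)

    def note(self, tag: str, text: str) -> None:
        assert tag in TAGS, f"unknown tag: {tag}"
        self.findings.append((tag, text))

session = SessionRecord(
    charter="Explore the G-28 functionality with different browsers "
            "to discover UI discrepancies",
    explorers=("tester", "developer"),
    duration_minutes=120,
)
session.note("BUG", "Signature block overlaps the footer in Firefox")
session.note("QUESTION", "Should the save-draft banner persist after submit?")
# At the debrief the team triages each note: real defect, enhancement,
# or question for the product owner, before anything goes into JIRA.
```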
No other questions on the mechanics of the four steps? So now that we understand how exploratory testing is conducted, with the focusing, the learning, the design heuristics, and the attacking, how do you run it in a time-boxed manner? That's what I'll share next; this is the story of what we're trying to do. We started eight months to a year ago, and we've already made changes to the approach, but I'll share where we are.

As I mentioned, this is in the federal space: 120 people, 10 to 12 teams, and highly interfaced systems. It's not 10 teams with 10 different product lines; it's a shared product line, which makes it particularly challenging to know, as code is added and removed, how it affects everything else. Before we started the exploratory testing sessions, we had some level of automation, still somewhat of an ice cream cone, plus lots of manual scripted testing, but it was all done toward the release. We release every three months, so it's not a one-year release, but it still felt too long. We also have IV&V teams; how many of you have independent verification teams, versus testers embedded within your teams? We had both: testers within the teams running what I call bug bashes, and then, because we're in the federal space, compliance, with an independent test group that does all of its work at the end. The point is, a lot of the testing was being done at the tail end. So the hypothesis was: make everybody responsible for executing exploratory tests and participating in ET sessions, including the developers, the BAs, and the testers, and do it on a cadence, early and often. The hope was that we'd surface earlier the things that were being found late, and gain higher confidence in our software.

To that end, our approach was a synchronized cadence across all 10 teams. We already run the teams on synchronized two-week sprints, and we wanted the exploratory test sessions on the same cadence, so that all the interactions across these highly interdependent teams could be discovered as they come up. That's what I mean by synchronized cadence: everybody does it. We typically run the session on the first or second day of the sprint, right after the planning sessions, and it's three hours; I'll get to the details in a second.

The second aspect is that we give the team complete autonomy to come up with the test charters, encouraging our testers and BAs to come forward and be more involved up front, driving quality concerns upstream instead of catching them at the tail end. And then we time-box the session, which I'll talk about in a moment. These slides, by the way, I'm happy to share if they're useful to you, but let me keep moving in the interest of time and take questions at the end.

On the synchronized cadence: three hours per sprint, no more than half a day, and that was a negotiation. If you go back to your leadership, will they give the entire team three or four hours per sprint for this? I don't know. For us it was a challenge, because the attitude was that developers should just be cranking out code; that's their job, that's what they're paid for. So it may be a challenge for you too. But we got the commitment: three hours per sprint for the entire team of 10 to participate, synchronized across all teams.

On charter autonomy: it's about empowerment and collaboration. Each team decides what areas to focus on, talking with the product owners and identifying the risks. The charters are created by the team; specifically, we're moving toward having the testers create the charters in preparation for the exploratory testing session. There is a little bit of upfront planning; you don't walk into the session and decide the charters there. The BAs and testers talk to the stakeholders about the risky areas so they can create the test charters beforehand.
This matters because, if you go back to the charter template, it covers both the area you're focusing on and the resources you need. What environment are you going to run in? What data sets do you need? These all have implications; if you're doing performance testing, the data set needs are different. So the testers help prep the environment as well, so the team doesn't come into the session cold, because you've got the entire team participating. It's easier said than done: creating the focus and the charters is the easy part; getting your environments ready can take some time. So, a little upfront preparation. That's charter autonomy.

Now that we have the undivided attention of the team, here's what we do with the three hours leadership granted us. We break it into two parts. First, two hours of execution and recording: all 10 teams go into their own spaces, or sometimes we do it collectively, and start executing their charters. This is where the testers help the developers with the heuristics; developers inherently aren't very good at thinking of alternate paths, so the testers lead a lot of that. Recording within the session can be sticky notes, or sometimes we put it on Confluence; we let each team decide how to record, and we don't care which.

Once the two hours of exploring and recording against multiple charters are done, all 10 teams come back together at a shared scrum-of-scrums board, all the POs are there, and the findings are socialized. By that time the BAs and testers have already distilled them: which are true defects, which are enhancements, which are questions for the PO. Often there's a lot of overlap across teams, and you don't want to bring all of that to the board, so that distillation happens with some leadership and shepherding by the BAs and testers. Then we summarize and report; it's almost like a stand-up with all the POs at the board. That's what the earlier picture showed: the charters for the different sessions and their findings. It could be screen captures, it could be questions, whatever it is; it gets discussed and prioritized accordingly. Not everything found is an issue: some findings are ideas for improvement, some aren't bugs at all, just things working as intended. All of those conversations happen there.

[Audience question about what happens to blatant issues.] If I'm understanding your question: when something found is clearly a defect, what's the next step? Perhaps it's worth automating. Absolutely, we do that all the time, the "test for bug" practice, which is probably something we all do: when something critical enough is found, before fixing the problem, you capture it as an automated test. But not every finding is going to be a bug, that's my point. Good point there: test for bug is a good way to bolster your regression suite. Simple enough; again, nothing earth-shatteringly complex here.
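Here's a sketch of that "test for bug" step. The function and the defect are invented for illustration; the pattern is simply to pin the finding down as a failing automated test first, then fix it, and the test stays in the regression suite.

```python
# A sketch of the "test for bug" step: pin down a defect found in an ET
# session as an automated test before fixing it, so it joins the
# regression suite. The function and defect are invented for illustration.
import pytest

def format_receipt_number(raw: str) -> str:
    """Toy stand-in for the code under test; the eventual fix goes here."""
    return raw.strip().upper()

def test_receipt_number_rejects_embedded_spaces():
    # Finding from an ET session: "ABC 123" was silently accepted.
    # This test fails (red) until the defect is fixed, and afterwards it
    # guards against the bug ever coming back.
    with pytest.raises(ValueError):
        format_receipt_number("ABC 123")
```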
The benefits we've gleaned: it unleashes creativity and makes the testers feel part and parcel of the team, instead of sitting at the tail end being dumped stories on. It helped a lot with that. And it provides quick feedback to the developers, because they're participating too. For us, those have been the big benefits: the rapid feedback, the engagement of the developers, and the testers feeling more included.

[Audience question about what happens before the session.] Basically two things, and each team does it differently. Because it's on a cadence, every sprint, almost like another ceremony, the team already knows they have to create the charters. And once the charters exist, say I'm doing performance testing with certain data set requirements, the team reaches out to the ops people to make sure that environment is in place. But that's it: just the charter and the environment are prepared up front; the rest happens dynamically in the session.

The challenges: your mileage will vary, because it's a highly cognitive process. It depends on the skill of your testers, on how interesting the variations they create are. And there's the fact that sometimes you can't repeat things, to your earlier point: you found something, you stopped, and you couldn't recreate the problem. These are challenges we continue to face because of the lighter emphasis on documentation.

In summary, that's really all I wanted to share about how we engage the entire team in this. I like this quote from James Bach; he's one of the three thought leaders in this space, along with Elisabeth Hendrickson and Cem Kaner, and if you look up their material on exploratory testing you'll find lots of information. He compares it to fishing: like a fisherman, you're always studying the water, seeing where the next tasty meal is, and casting a net. That's what you're really doing with exploratory testing.

What we found is that it definitely helped us surface risk sooner. The fact that it's highly adaptive, not prescriptive, was a benefit. And it's not as though we tossed away all of our manual scripted effort or our automation; all of that stays in place, and we've been able to make this fit within our agile cadences. As a next step: I mentioned the independent testing teams. They're still very traditional, with lots of test cases written from requirements, and the problem in an agile context is that the requirements are fluid, so they struggle to produce all those test cases up front. We're trying to have them run on a one-week cadence where they essentially follow the same approach. That's our current experiment: one-week sessions for the independent, traditional testers.

So that's it. Any questions or thoughts? [Audience question about where this fits with continuous deployment.] You're asking about continuous delivery, right? That's a great question.
I mean, Google and Netflix and Flickr, the folks doing 20 deployments a day, are they necessarily doing this? Perhaps not, because everything there is fully automated. So I don't have a ready answer as to where it fits. But we are moving toward continuous delivery, and we want to get to weekly pushes in the federal space, which is a big deal. What we do is have our ET sessions lag a little behind the releases; we have to, so essentially we make do. I'll definitely come back to your question; maybe we can have a conversation right after. I'll be here. All right, thank you so much for your participation. Hopefully it was useful.