Hello everybody, my name is Dave Larrabee. This talk is "Testing Strategy: New Models, Better Outcomes." The requisite intro slide: twenty years deep, multi-platform guy. These days I'm doing a lot of Agile and technical coaching in DevOps and in large enterprises. A lot of what you're going to see here comes from my early frustration in trying to communicate testing strategies: putting together a lucid survey of the testing models and presenting that to larger enterprises.

So let's start with a definition, because words mean things. Model, noun: a system or thing used as an example to follow or imitate. This is kind of a duplex talk. In the first part we'll look at some testing models; some of them you may know, some might be fresh to you. These models either represent inventory models, physical or logical models of how you should structure your test suite, or they're more mental models, attitudes toward testing. This is not a complete list; it simply supports the end of the talk, where we put the models together. You will find other models, and hopefully we'll have some Q&A time, so think about the models you're aware of and maybe we can trade.

The first model I like to call debugger-driven development, DDD. Obviously an overloaded term, but debugger-driven development is typified by the so-called cowboy-coding paradigm. It's an over-reliance on debuggers, where you're not really doing any testing at all: the absence of testing as a kind of model. Hopefully this isn't your situation. Of course it's possible to over-test, but this is wild under-testing. We're relying on tools to figure out why something happened after it happens; we don't have a lot of intent.

This next bit is a throw-in, but I used to work for VersionOne; I was an internal product development coach for them for quite a few years. What you're seeing here is VersionOne's bug list. Anyone ever use VersionOne as a planning tool? If you've used it for a while, you know it gets better every year, but when I started, back in 2008, you'd see defects clustered around front-end stuff, because users tend to be reporting those defects, and this is typical of debugger-driven development. You have "JS", JavaScript always a problem; "Action", which is an internal concept; things like "dialog" and "lightbox": these were the main pain points. How I created this: I simply exported all the text fields, plopped them into Wordle, and produced a word cloud. And that's one of the fancier techniques available for figuring out where your pain is, where you're having real regressions, when debugger-driven development is your testing strategy.
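A minimal sketch of that word-cloud trick, assuming a CSV export of defect records with a free-text "title" column (the file name and column name are hypothetical). It uses the third-party `wordcloud` and `pandas` packages as a stand-in for the now-defunct Wordle site:

```python
# pip install wordcloud pandas
import pandas as pd
from wordcloud import WordCloud

defects = pd.read_csv("defect_export.csv")         # exported bug list
text = " ".join(defects["title"].dropna())         # flatten all text fields
cloud = WordCloud(width=800, height=400).generate(text)
cloud.to_file("defect_pain_points.png")            # big words = big pain points
```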
So let's review. By way of introduction: I'm using this "me gusta" rage face, roughly Spanish for "I like it," for the things I like about a model, or the things other people you encounter may like. Remember, I'm a coach, so I often have to bring a tribe together and norm on what the testing strategy is. Things I (or others) like about debugger-driven development: I love my debugger and static analysis tools. They're amazing, they're powerful, especially when you're learning an API, or you're not quite sure what you're doing, or you're on a new platform, or you've switched to Java from .NET or whatever. It can be handy to inspect your program, to understand what's going on.

Especially if you're early in your career, you may just not know any different. When I started, programming was programming in C, all alone in a dark room with a bunch of Jolt or Mountain Dew, real pasty, like a little creature in a cave. I didn't have any access to community; I didn't know any better. I was a religious-studies-and-psychology major figuring this out on my own. Testing? What's testing? That seemed dumb, a waste of time. So I just cruised through.

Then there's "the maintenance team will handle it," which is going away a little bit. Especially here in India I'm sure most of you have felt that, because off-shoring really started by shipping out testing departments. But the Fortune 500 companies I see now that have a good story with collaborators in India have autonomous teams owning their own services, owning their own product. So this attitude is a little old-school, but you may still run into it.

Then this guy: this is anxiety, as in "I don't know, it rubs me the wrong way, I don't feel right about it," either for me or for someone else. And you're going to see FML; that's internet slang, I'm sure you can figure out what it means: "something my life, this is getting complicated." So: "this debugger stuff was fine while I was learning Node.js with WebStorm, learning the APIs to make this little microservice, but now that I've got Node on lock, I've got no tests." That's a true story from a couple of years ago. Sometimes I'm better at giving advice than taking it, so I just need to keep giving it and hope it sticks. The app was supposed to be small, a one-off, a little tiny ETL, and it turned into a big mission-critical application. That's a common way this model bends or breaks.

And in terms of cowboy coding, it's just not very sustainable. The Marlboro Man is an advertising icon from the United States, always pictured with a cigarette; several of the actors who played him died of smoking-related diseases. It's not a sustainable thing.

Okay, so: the testing pyramid. This is the blue-chip model for pretty much every Agile testing group, or anyone doing some kind of testing. This one speaks in terms of inventory. Forget about the eye for a second. Also, a side note: this deck is a reference fest; there are a lot of references in it, all linked to the original papers. Credit where credit is due; I won't always call out the names, but if you look there, that's who's innovating on this. The talk is more of a curation than a creation.

Up at the top we have automated GUI tests, or end-to-end tests, where you have a lot of dependencies and you're flexing the whole system; you may even have an environment set up to support that testing.
So you have a lot of things that can go wrong, and if something fails you really have to start an investigation to understand why. You want just a few of these; the rule of thumb is about 10%. Or the cool kids from the Ruby, Python, and Node communities will tell you 1% or none; unless you're a Ruby on Rails DHH fan, in which case they'll tell you no unit tests and all automated end-to-end tests. So it's confusing.

In the middle you start to end up with things like API tests: cut off the UI and see whether the behavior of the API works, through something like SoapUI. Automated integration tests, which test one little aspect of an integration: does my integration with, say, a Google API actually work? (We've all had problems with that on the hotel internet.) Maybe testing that a query written in some object language gets shipped off over an ORM and returns the right results from the database. And then component tests: testing one large, chunky black box in your system, maybe the domain model, something like that.

But down at the base of the pyramid, really giving the pyramid shape its strength (well, it's a triangle, not a pyramid, but whatever), are the unit tests: a strong developer testing practice where developers ship tests with their code as part of their definition of done.

So this is great. But there are all kinds of pyramids. This one, for example, is David Bowie's diet in the 1970s, when he was a vegetarian: milk, red peppers, drugs. Pyramids have some weaknesses, but let's go back to what's good first. I like that it's about inventory and taxonomy. It starts to give us a vocabulary for the kinds of tests and who's doing them, a basic taxonomy of testing, and it gives us a notion of inventory, which is going to cost money: end-to-end tests are costly and brittle, and sometimes impossible. At Netflix I was talking to the chaos engineering group, and they don't do any end-to-end tests; they can't, it's a distributed system, it's super complicated. So they rely more on component tests, plus chaos engineering, where they just shut down a whole data center.

It also explains past mistakes. I've definitely done the inverted triangle. When I first discovered testing, when I first discovered Cucumber, I thought: yeah, this will solve all my problems, I'll just do a bunch of end-to-end tests. And you end up with that top-heavy inverted pyramid, which we'll talk about in a second. Isolation yields more pointed feedback: when one of these fast, isolated unit tests goes red, you know exactly what went wrong, whereas an end-to-end failure is more of an investigation. And I like the idea that developers test too, that testing is a skill and a whole-team thing; à la extreme programming, it's a very valuable way to get a team to a high level of quality.

Now, things that throw me off a little. What is that 10% of, exactly? Is it a straight count of tests? Is it runtime? There's no inherent guidance in the model for that.
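One way to make the question concrete: measure both. Here's a hedged sketch of a pytest `conftest.py` that tallies a suite by count and by runtime per layer, assuming tests are tagged with hypothetical `@pytest.mark.unit` / `.integration` / `.e2e` markers (the marker names are my assumption, not part of the pyramid model):

```python
# conftest.py -- report the actual shape of your pyramid, by count and runtime
from collections import defaultdict

counts = defaultdict(int)
durations = defaultdict(float)

def pytest_runtest_logreport(report):
    if report.when == "call":
        # markers show up in report.keywords; fall back to "untagged"
        layer = next((m for m in ("unit", "integration", "e2e")
                      if m in report.keywords), "untagged")
        counts[layer] += 1
        durations[layer] += report.duration

def pytest_sessionfinish(session, exitstatus):
    total = sum(counts.values()) or 1
    for layer, n in counts.items():
        print(f"{layer}: {n} tests ({100 * n / total:.0f}% by count), "
              f"{durations[layer]:.1f}s runtime")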
And how big a pyramid are we talking about? We could have a Great Pyramid of Giza, a giant frigging pyramid with an awesome, perfect ratio of tests, covering a giant monolithic code base that's still terribly hard to understand and dig into. So the pyramid flexes a bit in this day and age of microservices and the hexagonal architectures we're pursuing now.

Also: "I'm a test automation person; what does this mean for me?" This is more of a coaching note. If you come into a testing group with all these models and a prescriptive "you should be doing it this way" approach, you might be challenging people's personal identity and sense of their work. You may even be challenging how they're incentivized, compensated, or measured in their organization; you may be challenging their family's financial well-being. So this might not be the best model to lead with; it might be something to introduce after establishing a little more rapport. I like to think about how these models will be received by a group, and how that will affect my ability to influence or collaborate with that group. And "where do all these unit tests come from?": a developer may have the same reaction. "Oh gee, I haven't been testing like that; we're a long way off from that."

The corollary model is an anti-pattern: the testing cupcake anti-pattern. You will recognize Hello Kitty. We'll start with its ancestor, which is actually the ice cream cone anti-pattern. I have no idea why you'd name anti-patterns after such delicious treats, but I didn't name them, so it is what it is. At the top you have a bunch of manual tests; that's "ship it to the testing department, QA will fix it," and they ship a bunch of bugs back. And down at the bottom you have the inverted pyramid: a few unit tests, because maybe a couple of developers went to a software craftsmanship conference and are really stoked and writing some tests, but it's certainly not a cultural thing, not normal behavior for the whole team.

The cupcake comes in when we introduce Conway's law into the testing ice cream cone. Conway's law, paraphrasing, is that the communication patterns of an organization will be reflected in the software systems it produces; the notion that culture is influential on product. It's a law. In this case what you have is a very hand-off-driven culture: maybe a big time-zone difference, or maybe just a very siloed organization, where you have the VP of quality and the VP of engineering and they protect their boundaries, their fiefdoms. Developers throw their code over the wall, maybe two weeks, a sprint or iteration, ahead. The automated testers writing Selenium scripts are two weeks, a couple of iterations, behind, waiting for that code to firm up so they can write their automation. And then they signal the manual testers: time to get into HP Test Center, pull out all those scripts, and start going through them line by line by line. You know, I think Dilbert's funny because it's true.
And it creates bad blood. You get apathy, or straight-up antagonism, between these departments. You don't have a lot of synergy (I can't believe I just said synergy); you don't have a lot of cooperation. So again, Conway's law: when you have a very hierarchical organization, you end up with a test inventory that looks a little bit like this.

In review: the testing cupcake anti-pattern. Me gusta: it explains some of the angst I feel. When I encountered these sweet-treat anti-patterns, they explained prior years of my life; they set things clear for me a little. Your organizational culture has a huge influence on your testing strategy, and if you don't have an explicit testing strategy, your culture will decide what strategy you end up with. I like to show this alongside the testing pyramid model, because it's nice to always show the model and the anti-model together. It gives people a little more meat than some consultant dude coming in saying "thou shalt do it this way." Look, here's a spectrum: where are we on the spectrum?

Things that bug me about it: "this describes us." Changing this threatens my role. This is an impactful thing to show, and if "this describes us" and you've couched it as a negative judgment on how people are doing their work, you can change the dynamic of the engagement and put people on edge. "We have a major investment in Test Center": that's an actual quote. I was working with one of the biggest retailers in the world, talking about their testing strategy, and I said: let's explore an idea. What if HP Test Center wasn't around? How might we do some exploratory testing? And they said no, we've paid too much for HP Test Center. That's an interesting reaction; it's the sunk cost fallacy. "Why do we keep ignoring these 500 Cucumber test failures?"
That's a direct quote from yours truly. This is the notion that when you have this cupcake and some fundamental design change happens at a base layer, a schema change, a change in the data layer or the domain model or some core service, that change can ripple up and affect a bunch of end-to-end tests. If your test inventory is heavily weighted toward end-to-end tests, you're going to run into what Gary Bernhardt calls the binary test suite anti-pattern, where all of a sudden half your tests are failing and you've got multiple investigations to open just to do root-cause analysis. And what typically happens, if that automated build is in a pipeline and you're running those Cucumber tests for confidence prior to a release, trying to get to continuous delivery or continuous deployment; well, I'll tell you what I did: I took it out of the pipeline. Our team decided to take it out of the pipeline, and then it became some pair of developers' job to fix those tests. Again, sunk cost, rather than just jettisoning them or really digging deep to understand them. At VersionOne we eventually got to the point where we said: look, we need to pull back on how many Cucumber tests we're writing; we're not going to automate every single scenario. But it was a painful lesson.

Agile testing quadrants. Now we're starting to get into real agile testing, whatever that is. In the agile testing quadrants, on the y-axis we have business-facing tests versus technology-facing tests; on the x-axis we have tests that support the team versus tests that critique the product. And in the corners we have some guidance about tooling. Over here is the base of your pyramid: unit tests and component tests, fully automated, fast, isolated, mostly off-the-shelf; you shouldn't have to build a lot, you should be able to source it from the open-source community or your vendor of choice. Up here we have more functional tests; we're getting toward the tip of the pyramid, with example-driven story tests: maybe Selenium, Cucumber, that BDD kind of stuff. Up here we have manual tests, where we're doing exploratory testing, maybe even test scripting if there's a known path we want to cut through, and usability testing. You can tell this is old because it says "alpha/beta"; that's A/B testing, which is kind of the edge case in testing, but if you have enough traffic, sure. And down in quadrant four we have testing for architectural qualities, and whether we're achieving those qualities: performance, availability, security (pen testing would fit in there). These tests generally involve very expensive or high-maintenance rigging and tools.
Yeah, these are things you want to source or buy; you don't necessarily want to build them. If we compare this to our handy pyramid with its little taxonomy: roughly along here we have the business-facing tests, and down here the technology-facing tests.

One critique of this comes from Gojko Adzic, a good guy to follow if you're into testing. He kind of recast the quadrants and said: really, over here what you're doing is checking for expected outputs, rather than "supporting the team," and over here what you're doing is analyzing undefined, unknown, and unexpected phenomena, rather than "critiquing the product." He's playing with the notion that these axes, ten years later, may have changed a little. So over here you get things like canary deployments, where you deploy to a small group and see how that goes; that's a new kind of test.

I think the quadrants are nice. The main innovation is that tests have an audience: they're written for someone, and someone cares about the outcome of a test. I think that's a lasting addition to our discipline. Also: maybe we shouldn't invent everything and automate everything ourselves. How many of you have written a Selenium testing framework? Everywhere you go, someone's got their own testing framework; is it slightly better than the next person's? And manual testing isn't dead, so full automation, the extreme end of automation, is just one option. It's certainly possible to over-automate, to over-invest in automation.

Things that bug me a little about this model, or that you might run into issues with: I don't quite get the "support the team" versus "critique the product" divide. That's Gojko's point, that maybe it's a little legacy, and it's really about analysis and review versus straight-up correctness and checking outputs. I think the business/technology divide is limiting in some applications too. Especially if you're doing domain-driven design or some of the stream-based architectures, it's good to marginalize "technology" as much as possible and fuse the business with whatever technical thing you're doing, so you see the business terms modeled in the code. And how do these tests tell us whether we're building something people will want or use? That quadrant is very limited. The model is very much about building the thing right, not about building the right thing, so it isn't using some of the testing innovations that have come along since, like lean startup, pretotyping, idea testing, business testing.

Okay, I've got to talk about test-first and test-driven development. I initially pitched a workshop on test-driven development, and Naresh said no one will come; people don't pay for test-driven development. So I said I also do continuous deployment, and he said people will come, people are interested in that.
So this is my sneaky way of making you listen to a little bit of TDD stuff. Red, green, and, intentionally left off, refactor, because a lot of people forget to refactor, or don't refactor; it's one of the things that falls by the wayside.

There are two main schools of test-driven development, and I think this is an interesting divide. The Detroit school is state-based verification. You know this from extreme programming, Kent Beck, the Chrysler C3 project: the red-green-refactor loop. Write a failing test, get that test passing, then have the discipline to apply refactoring, which is actually changing the structure of the code without changing its behavior (a lot easier with statically typed languages like Java). It's black-box: you're driving an implementation from a public API, and if you're not doing the refactoring, it's very easy to end up with a series of small balls of mud. So: heavy emphasis on refactoring; refactoring is very disciplined and very important. If you were at Doc Norton's talk: if you're skipping it, you're producing technical debt, not producing thrust. I'll have to check with him, but that was my takeaway. Martin Fowler calls this the classicist style of testing.

London school TDD came a few years later: using mocks and stubs to drive out your design. This is much more of a design tool; it's all about designing object interactions, and it's kind of the pinnacle of OO design in my mind. The main book here is Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce. Here you're using tests as a design tool, with heavy use of mock objects. A mock object is like a stub with an assert: if a certain call you set up doesn't happen, the test fails. Your tests tend to look a lot like your implementation code, and if you're not doing this test-first, mock objects make zero sense: why would you write your code and then go write a test that restates the code you just wrote? It's redundant. So tests here are really an industrial byproduct of design; they're scaffolding, the bamboo around the building, the apparatus that goes away once the building's complete. Martin Fowler, somewhat derisively, calls this the mockist style of testing.

I'll give you a little example; I have to read this because it's a script and it's hard to remember. Say we're driving out a controller using mocks, London school. We may drive out the interface of an adapter. We start with the controller and write our tests there; red means that's the center of activity: writing tests, writing code, refactoring perhaps. We go through that red-green loop, and we discover some kind of adapter. Maybe our controller needs to talk to a logging service and log that something happened, or a security service, some cross-cutting concern. So we drive out an interface that we own, we mock that interface, and we set up an expectation on it. Then we keep going, and maybe we discover an application service.
So here, with London school TDD, what we're doing is discovering the interfaces that are going to collaborate with our controller. This tends to work really well for the highly unstable parts of your system: objects that have lots and lots of dependencies, like controllers.

At some point you're going to hit a business object or entity. In a financial domain, to use the classic bad example of the industry, you'll have an Account: probably a stateful object with state plus behavior. And there what you're doing is Detroit school: when you drive into it, you're writing black-box, state-based tests. So you're switching between the two. At some point you drive further in; if this is greenfield, you start test-driving the application service, and later you come back and hook those adapters up to the actual implementations, or they exist already. There tends to be a lag there, and that's where you have the black swan moment, the aha moment of "oh my god, I just shipped a story and there are 150 unit tests attached to it," whereas normally I'd ship the story and then maybe write the tests in some later remediation or fix-it sprint down the line. That's what tends to get people hooked on this approach; but you have to run through it over one whole story, and the learning curve is pretty intense. Maybe that's why, when managers hear TDD, they say: no, maybe just better don't.

So what do I like about it? Clearly a lot. Test-first, and to a lesser extent testing closely, in proximity, ensures unit tests get written and not forgotten or put off. That doesn't mean your unit tests are any good, but at least they're getting written, and then we can start to talk about quality. You design your code by treating everything as an API; your tests are simply the first client. Through testing you're flexing the quality of your code and asking: is this expressive? Is this something I'd understand six months down the line when I have to come back to it? Am I pinned to this for the rest of my life? And it scales with level of detail: we've talked about TDD at the code-artifact level, very technical design, but the same red-green-refactor approach can be applied to business ideas; lean UX, lean startup, the lean family of processes employ test-driven thinking at a higher level.

It's a nice tool, but you know what they say about silver bullets. Right next door, I think, Naresh is presenting on why he dropped TDD, and I'm conscious of that; I wish we had a little door between the rooms so he could come tell us why. I've tracked that way myself: it's a tool I use now, but I'm also using things like collection-pipeline programming, functional programming, and straight-up diagramming. My basic redneck UML still works pretty well to get a group going.
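To make the two schools concrete, here's a hedged sketch using the talk's own examples: a stateful Account for Detroit-style state verification, and a controller with a logging adapter for London-style interaction design. All class and method names are hypothetical illustrations, not code from the talk:

```python
from unittest.mock import Mock

# --- Detroit school: black-box, state-based verification ----------------
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

def test_deposit_increases_balance():
    account = Account(balance=100)
    account.deposit(50)
    assert account.balance == 150      # verify state through the public API

# --- London school: mock an interface we own to drive out the design ----
class OrderController:
    def __init__(self, audit_log):
        self.audit_log = audit_log     # adapter interface discovered test-first

    def place(self, order_id):
        self.audit_log.record(f"order {order_id} placed")

def test_placing_an_order_is_audited():
    audit_log = Mock()                 # stand-in for the not-yet-written adapter
    OrderController(audit_log).place(42)
    audit_log.record.assert_called_once_with("order 42 placed")
```

Note how the London test mirrors the implementation line for line, which is exactly the "scaffolding" quality described above: valuable while driving the design out, cheap to delete afterward.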
Now, things that throw me back a little on this, or that you might run into problems with. Discipline is hard; I forget to refactor sometimes. The notion that conscious discipline over time becomes unconscious habit is true, but this particular practice will take you some time, so you need the venue to do it. You need to wake up early and do code katas if you're not allowed to do it at work, or you need to ask for forgiveness rather than permission, speaking to Doc's point. You need to find a way to do it, and it's going to take some time to make it stick. And that's one person; if you're on a team, even a two-pizza team of eight or nine people, it might take an outside influence (I do this, by the way). So: rugged, steep learning curve. Mock objects are weird. London school tests seem redundant, like you're just writing the implementation again in the test, if you're not doing test-first. They also don't do much for you in terms of proof, verification, or confidence; like I said, they're scaffolding. You can delete them, and it's hard to overcome that attachment and sunk cost fallacy sometimes. And practicing both Detroit and London, state-based versus interaction-based verification, blurs the line of what a "unit" is. That's just the notion that not every class or every function gets its own test suite; you can have graphs of functions covered by one test suite. A little technical.

So, on to this one. Pit bulls: I don't know if you have them out here, but in America, where I'm from, Atlanta, Georgia, the rappers all have pit bulls, because it's the tough thing to have. If you're tough and hardcore, you've got to have a pit bull. That's what this image means. TATFT will become clear in a minute. Let me ask you a question: when do you test? "When the code's ready," I hear all the time. Okay, well.

This is a mental model for testing. There used to be a whendoitest.com; I reckon whoever had it registered had to go get a job somewhere or something, because they took it down. This is the notion of a really intense approach to testing: you just throw yourself in. You found this happening in the JavaScript and Ruby communities, where it was TATFT, "test all the time," and they followed the classic hype cycle: over-testing, testing everything, rather than a gradual, intentional adoption of the practice. They just wrote eleven billion tests. This is where you'd see people testing straight getters in Java, or accessors in .NET, or testing the behavior of collections, things where you took the dependency precisely because you assume someone got it right. You're testing things you maybe shouldn't test; you're over-testing. Then you get some kind of shift in your system, a bunch of tests break, and you land in this really sad disillusionment phase, versus a pragmatic, gradual approach to testing.
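Hedged examples of the low-value tests that hype phase produces (the names are illustrative): both of these pass, both are nearly worthless, and both will break on harmless refactorings.

```python
class User:
    def __init__(self, name):
        self.name = name

def test_user_name_getter():
    # Tests a plain attribute: no behavior, no logic, no risk being covered.
    assert User("Ada").name == "Ada"

def test_list_append():
    # Tests the standard library, not our code; we took the dependency
    # because we assume someone already got it right.
    items = []
    items.append(1)
    assert items == [1]
```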
So what could you possibly like about this? The whole-team testing mindset, and "test infected," the classic paper. It's a great thing for a team to adopt testing as an activity, versus tester being a role or the responsibility of one individual; a tester could be the leader of that activity. I like this quote from Aldous Huxley: the secret of genius is to carry the spirit of the child into old age, which means never losing your enthusiasm. Just because I've had some lessons learned the hard way doesn't mean I'm going to rain on the enthusiasm parade. Enthusiasm is a great thing, something we should cherish and nurture in teams. So if someone wants to test, let's get in there and pair with them; that's a better way.

What bugs me a little is the execution bias it creates, a team fixation. What other things could we do to improve our product and code quality? Maybe hardcore developer unit testing isn't the only answer; maybe there are other testing approaches you could adopt. I like this corollary quote: enthusiasm just creates bubbles; it doesn't keep them from popping. Enthusiasm is great, but it needs to be paired with a little prudence.

Okay, so I think code coverage counts as a testing model. It's certainly interesting, and people have different relationships to the metric. If you went to Doc's talk: I agree with pretty much everything he said about code coverage. Where I have a real problem is when it's used as a KPI for quality. That does not make sense to me; at best it's a spurious correlation. Just because there's a high correlation between per-capita cheese consumption in the US and the number of people who die by getting tangled in their bedsheets doesn't mean there's causation. High coverage doesn't necessarily lead to high quality.

Now, that said: I was pairing at a .NET digital studio, doing the ten-pin bowling kata, test-driving with very evolutionary, emergent design, very Detroit school. And we hit a point where the design just became unsustainable: very cyclomatically complex, a number of ifs and counters tracking which frame you're on. It's a simple game, but it's a good kata because it gets to a level of complexity where test-driving really works. What we found is that we had to pivot and switch our implementation. We decided to introduce the notion of a Frame, and the Frame would listen to the Game; basically the observer pattern. So we tore down tests and started over; not a refactoring, a mini-rewrite. And code coverage was really handy in this case. You can't really see it here, but there's this abstraction called RollingMachine that tracks the rolls, and there's a function on Frame called IsSpare that's partially covered, 86%. So code coverage is tactically very useful when you do a mini-rewrite like that, to say: I need to go retrofit tests. Code coverage is also very useful when you're introducing a testing practice, because it's tempting to slide back to code-first instead of test-first; coverage can help keep you on the rails and say: oh, I could have done this part test-first.
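To make that "partially covered" signal concrete, here's a hedged reconstruction of the kata situation; the function body is my assumption, not the studio's actual code. With branch coverage on, a suite that only ever exercises full two-roll frames never hits the short-frame guard, and the function shows up partially covered:

```python
# bowling.py -- hypothetical reconstruction of the partially-covered IsSpare
def is_spare(rolls):
    if len(rolls) < 2:
        return False   # branch never taken by a suite that only tests full frames
    return rolls[0] != 10 and rolls[0] + rolls[1] == 10
```

Running something like `coverage run --branch -m pytest` followed by `coverage report -m` flags the missed branch, which is exactly the tactical "go retrofit a test here" prompt described above.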
What I don't like about it is the various divides it creates. I don't like it as a yoke metric; I don't like it as a KPI; I don't like it as something that's used to, you know... I can't think of a quantitative metric that gives you code quality at a systemic level. You're in a qualitative domain: quality is qualitative. Your code quality is going to be reflected in your release outcomes, and, if you're a leader, in the conversations you're having with your technical leads and senior technical staff. It's more of a kind of echo metric. That's not very satisfying, but it is what it is.

So, one other model to cover before we go into a synthesis of models: exploratory testing. Who's doing any kind of exploratory testing? A few, okay. Who's working on teams doing highly scripted testing, like in HP Test Center? No? Okay. So what's the alternative? Exploratory testing is simultaneous learning, test design, and test execution; James Bach is the inventor here.

Then the FedEx delivery-service tour: James Whittaker built on this notion of exploratory testing with a tour metaphor, where you take a different tour each time. The FedEx tour (or DHL tour, or whatever) is taking a package and tracking it all the way from origination to delivery, where the package is data: a customer enters an order, and you track it through your system. You start with that metaphor, and from there you go do a little exploration to see how that scenario plays out. You're starting from a very high-level theme rather than a script, and taking a more improvisational approach through the system. That's really the essence of exploratory testing, contrasted with scripted testing, which is prescriptive: follow this exact cut through the system.

The FedEx tour is one example. The other example we'll treat here (hug ops! though we're not going there for a free hug) is the back-alley tour. This is the notion of coming at your system with maybe not the best of intentions, like pen testing: you're trying to break it. What can I do? Can I SQL-inject? Can I hack on this? The James Whittaker book is really good; it's a good reference book for tours, let's put it that way, and it gives you a number of tours you can do. There's a sales tour, a money tour, like trying to demo your software to someone who might buy it. These become the basis for more free-form explorations of your software.

What I like about exploratory testing: people support a world they help to create. It's very easy to bring someone into the mix, to conscript testers from your pool of engineers or designers and give them a basic vector into your system. It's an all-hands-on-deck approach, and a sound alternative to maintaining an expensive database of manual test scripts, which often go stale. That's inventory; lean tells us to reduce inventory, because there's carrying cost in inventory. Things that might bug you, or the people around you: "developers develop, testers test," that kind of ownership divide. And "what about my test scripts?"
Again, those scripts may be something you're incented on, or part of your professional identity. And as a manager of testers: how do I know we haven't regressed? There can be a belief that rigorous test scripts correlate to high quality, and that a more loosey-goosey, hippy-dippy approach won't yield the same quality. So it can be a tough sell.

Okay. In the remaining time I'd like to pivot to strategy, and show you an example of a strategy that combines these models. This is where I'll often begin when educating or introducing a team. This strategy is built for a typical web application: nothing too fancy, not a distributed system, not a microservices architecture, just your run-of-the-mill web app. Strategy: a high-level plan to achieve one or more goals under conditions of uncertainty. And I like to call this approach to strategy gestalt-driven. A gestalt is simply an organized whole that is perceived as more than the sum of its parts. The concept comes from psychology and art; it's the notion that with a few pieces you can produce a larger perception. In software testing I apply it as: how can we get the maximum confidence for the minimum effort? How do we leverage a few models wisely to get confidence in a tighter feedback loop?

You should already recognize some models in here: there's a bit of quadrant, there's obviously a pyramid, there's an eyeball. Let's take it step by step. At the top, rather than talking about end-to-end tests or API tests, we talk about safety. These tests give us confidence the software is working; they help us find regressions before our customers do, and so prevent embarrassment, or maybe loss of income. SAFETY is an acronym. Smoke tests: is it working? Is it shipped? Is it in the data center? Is it serving? Is it available? Automated acceptance tests: maybe in the course of delivery you find some core acceptance tests you want to automate. Feature tests: testing large, important features, which gets at the concept of harvest and yield; in a large system, are there features that are more important to keep available than others? And exploratory testing, with the notion that exploratory testing is how tests get born: it's the central nursery for tests. We find a cool or interesting cut through the system that we want to automate; our exploratory testing informs our automation effort.

Then: curation over collection. Curation is what a curator in a museum does: they pick from the whole collection and put on a show, choosing objects or works of art that go together and tell some longer-form story. Collection is just "let me collect every piece that's important." Curation is about asking: what are the tests that really promote our safety at this time? It's an active process. It's not about writing and automating and just accumulating tests; it's about curating the tests that give us a sense of confidence, written and understood by the team.
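A minimal sketch of the "S" in that safety suite: is it in the data center, is it serving, is it available? The base URL and the health endpoint here are hypothetical stand-ins for whatever your deployment exposes; it assumes the third-party `requests` package.

```python
# pip install requests
import requests

BASE_URL = "https://example.com"       # hypothetical deployed application

def test_app_is_up_and_serving():
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200

def test_home_page_renders():
    response = requests.get(BASE_URL, timeout=5)
    assert "<title>" in response.text   # crude, but catches "nothing served at all"
```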
"Understood by the team" specifically means that a test failure at this level should be meaningful to the whole team; these are kind of business-facing, in quadrant terms. Then the time box. Part of this, an implicit test, is that all of these tests run in the amount of time given. That will vary from system to system, but let's say you pick half an hour for a small application. These tests need to run in half an hour, and if it goes to 31 minutes, you fail. Maybe that's a step in your CD pipeline; maybe it's something that triggers a rollback, depending on how sophisticated you are. But there's a time box. Why would there be a time box? Because curation over collection: the time box forces you to curate, forces you to make choices about what goes into your safety suite. And I'll have links, so if you didn't get a snap of this, you can download it.

At the bottom, at the base, where the pyramid had developer tests or unit tests, I like to divide in two, roughly along Detroit- and London-school lines. On one side you have specs, examples, developer tests. Developers are writing these; millisecond feedback, fast, high-value, pointed, isolated; they fail for one and only one reason. State-based verification, black-box, refactoring-dependent: Detroit school, old school. These test small graphs of objects, some independent objects; we're not so worried about unit-equals-class, we're saying a unit might be an aggregation, a composition, a small network of objects.

On the other side are design and discovery tests: also unit tests, also fast feedback, with heavy use of test doubles such as mocks and stubs to design object interactions. You're doing this kind of micro code-generation, generating the code as you go; it may not exist yet, you type it as you go, you have a lot of red, and tools help you generate it. These tests help us discover boundaries and layers, help us say: there's something wanting to get out here. By using our tests as the first client, we can feel out the white box. The implementation of the test often looks a lot like the implementation of the code; that's okay. London school, the new old school. Deletion is okay: it's scaffolding, you tear it down when it's done, or you just leave it there, fine, but you don't get hung up when these tests fail. It's very easy to make an interface change, a contract change, and have a bunch of these tests fail, and it's hard, initially, to let those tests go and just delete them, because you think: they're tests, of course I want to keep my tests, look how many thousands of tests I have.
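Before we get to the "I": here's a hedged sketch of the time box above as an explicit pipeline step. The `tests/safety` path and the 30-minute budget are assumptions; the point is that blowing the budget is itself a failure, which is what forces curation.

```python
# run_safety_suite.py -- fail the build if the safety suite exceeds its budget
import sys
import time
import pytest

BUDGET_SECONDS = 30 * 60               # the agreed time box (an assumption here)

start = time.monotonic()
exit_code = pytest.main(["tests/safety"])
elapsed = time.monotonic() - start

if elapsed > BUDGET_SECONDS:
    print(f"Safety suite took {elapsed:.0f}s against a {BUDGET_SECONDS}s budget: "
          "time to curate, not accumulate.")
    sys.exit(1)                        # over budget is a failure in its own right
sys.exit(exit_code)
```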
Lastly, the "I." Here we have product-level testing. This is exploratory-driven, whole-team testing; hopefully we have a testing culture, and exploratory work prompts automation. And it's mindful of its own size: the notion is that the governance of this testing strategy treats the size of your test suite as a leading indicator that you might have an opportunity to introduce a partition, to introduce a boundary, to componentize your software. So the "I" is also about deciding how the strategy evolves along with the software you're producing.

Maybe you adopt passionate users into the testing approach. Maybe you adopt session testing: you get all hands on deck before a larger release and do an exploratory session-testing tour before you ship, with the people who can actually fix the software. Maybe you adopt the testing shard, or the canary release. This is similar to how Facebook, at least, used to treat Australia: they'd release all their new code to Australia, and if they heard complaints from Australia they'd roll back. Apparently they didn't care too much about Australia's business. I worked with a client that had a storage service; they had pivoted to a new, more vertical storage model, and the photographers from their previous focus were put on a shard, and they'd release the new software to that shard first, because they didn't care about those users leaving. So that's the negative version, but you could also have a testing shard of users who opt in, like Chrome Canary, where I actually want the new features. It's a positive/negative thing; it doesn't always have to be sneaky, evil Silicon Valley stuff. And contract testing: we don't really have time to get into this, but it's the notion that we might start testing our third-party dependencies. If you come to my workshop on Sunday, we'll deal a little bit with contract testing.

And voilà: we have a number of models combined into a testing strategy. This is a generic version, something I bring to conferences, but the idea is that we pull in a number of approaches to formulate a strategy that's suitable in our context.

We've talked a lot about "testing." I'd submit to you that what we're really doing is this; this is classic XP (I changed "listening" to "observing"), and there's a gestalt of confidence among these core activities. For example, if we're doing London-school test-driven development, how much testing are we actually doing? We call it testing, but really we're going through this design loop.

At this point I think I'm out of time. Am I? Okay, so this will be a rhetorical question; you can reach me in the hallway, and I'd love to hear about your testing models. Which models do you see in your strategy? Thank you very much. I appreciate it.