I just realized I didn't pull out my notes, which is important since there aren't any slides. So we're going to talk about bugs, and the most important thing here is where we start the conversation. So what's the definition of a bug? Because I'm talking here about bug zero. I'm going to make the claim that, much like that whole inbox-zero movement, where e-mail just becomes the thing you don't worry about because you don't have to deal with it, we're going to do the same sort of thing with bugs, only even further, and we'll tell some stories about teams out there who just don't ever have bugs, worry about bugs, think about bugs. It's just not a thing, and we'll talk about how they got there. So the first thing we have to do if we're going to get there is get that simple definition. What's a bug, and more importantly, what's zero? Because I find that while there's a lot of disagreement about the word bug, there's even more disagreement about the definition of the word zero. So let's start with the easy one, bug. What's a bug? A feature not guarded by tests. If a user loves it and it's all working, is it a bug? Something that doesn't work. Okay. Why is that a bug? Okay. So it got in the way of a user somewhat. Is that a bug if it has not yet gotten to a user? Is it a bug if a QA person sees it? Okay. Is it a bug if it has not yet gotten to a QA person? Okay. Is it a bug if it has not yet gotten to source control? Yes. So there's some time at which a bug becomes a bug. Before then, it's just, I'm in the middle of this, please piss off. Right? Yeah. So there's some time when it becomes a bug. Okay. And that one we said was a bug because it got in the way of a user getting something done. What about a bug that doesn't get in the way of a user? Are there other bugs that we would consider bugs, but they don't actually get in the way of a user doing what they want to do? Such as? I saw a nod. Any other ideas? 
So what about if I'm running an ad campaign or something, and I change the look of a button, and I get half as many clicks on it as I did before? Is that a bug? I just cut my revenue in half. What was that? Yeah. So it depends on whether it's in the requirements. That's an interesting answer, because that seems like blame shifting. Like, it's a bug if I screwed it up, but not if you screwed it up and didn't put it in the requirements. I mean, did it change the amount of revenue the company made, whether it was in the requirements or not? And I don't mean to pick on you in particular. I mean to pick on that answer in particular, because it's a common one. So the definition of bug that I use, and I'm always looking for a broader definition, so if you can broaden this definition, great, tell me how you would broaden it and we'll use that one instead. The definition that I use is: anything that would frustrate, confuse, or annoy a human, and is potentially visible to a human other than the person who is currently actively writing it. So basically, once it hits source control, it could be viewed by some other human, that human potentially being me at my next commit. So if I do something and I think I had it working a particular way, and in the next commit I go, wait, why is it doing that? That's a bug. I just got confused about why it was doing something. And I'm totally welcome to broaden that in any way that I possibly can, but my belief is anything that would get in the way of a user doing their job would frustrate them. Anything that's going to drop the company's revenue by half, there is at least one human who will be frustrated by that, perhaps disappointed. So I'm using that definition of the word bug. So given that definition of the word bug, and I'm talking about teams that have no bugs, what's the definition of the word zero? One minus one? So if you've got an additive ring, no. Well, let's just start by sampling. 
So on your team, the one that you're on, oh okay, we don't want to talk about your team. So on a team of someone who works at another company that you know very, very well, that's totally not yours. For that team, I want you to give me a couple, three numbers. About how many bugs does that team have in their bug database right now? Sum total of anything that's not closed-done. So whether they're won't-fixes or open or I-haven't-gotten-around-to-it-yet or not-even-triaged or whatever, the sum total of that. Total number in your bug database. And then tell me the number of new bugs that you write per, let's say, per month as a team. If you don't know the number that you write, because it is much larger than the number you can detect, then say: don't know, but the number of additional ones we detect each month is such-and-such. Because a lot of teams are in that particular circumstance, where we write a lot, and the number that we find is limited by how good our detection is and how many hours we spend on QA, right? That's normal, right? But I just want to get an idea of teams in the room. Where are you in terms of number of bugs in the bug database and number of new ones created per month? Or teams that you know? Anyone willing to volunteer some numbers? Yeah. 250 bugs in the database and about 50 a month, okay? Other numbers? About one new one a month, three in the database, okay? Other numbers? Anyone have more in their database than 250? All right, we got an honest person in the room. Yeah, anyone generate more per month than QA can find? Oh, we got a lot of honest people in the room on that one. Yeah, okay. All right, so interestingly, when I ask this question, I find teams roughly categorize into three camps, with an asterisk. So the first camp is: how many bugs do you have in the database? No idea, lots. 
Because there's the number that are active in the database, but then there's all those additional ones that were triaged or won't-fixed in previous releases and we didn't roll forward, and I don't really know how many because we haven't even looked at all those in 15 years, right? So there's that camp, and you ask them how many they generate per month, and they're the ones that say more than QA finds. How many total bugs in the product? More than QA has found, don't know, okay? So that's the first camp. The next camp, I find, is a set of people who are able to give a pretty concrete answer like the one that you gave. We got 250 in the database, plus or minus eight, you know, whatever, and we generate about 50 per month. And those numbers vary a little, but they're often in about that range. We got a couple hundred open, maybe as low as 50, 75 open, and never more than about three, four hundred open, and we generate per month on the order of, it's a lot like one bug per developer per day-ish. And that varies, you know, some go down from that, but you're still in that range, right? And then the third camp are people like you. Number of bugs in the database is below five, often zero; the number of new ones is once a month, twice a month as a team, okay? And in that camp, they often answer the question differently. Number of bugs in the bug database? Zero. Always zero, well, modulo a sampling error. And in order to ask the question of how many bugs are in the database for that camp, often you have to ask: during how many hours in a month is there a bug in the database? One, two, four, you know, those sorts of things. And then the asterisk: there's a bunch of teams in that category, and then there's teams like Hunter Technologies. Ask them the same sort of numbers, and their definition of zero was, well, you know, there were a bunch, and then we changed the way that we worked. 
And we started doing mobbing and we started doing some of the other practices. And the first week after that, we had a bug. Then 18 months later there was one. Then we root-caused that one so we didn't have a recurrence. We had a situation about a year and a half after that that could have been a bug, but we'd fixed our practices, so we noticed it and it didn't actually happen. But about two years after that there was one. Those are the definitions that I consider zero. Now, does Hunter have zero bugs? No, over six years they had three. Does your team have zero bugs? No, every month you've got one. But from a planning perspective, that team can pretty much treat it as zero. How do you plan and account for it? What is your triage process if you've got three bugs in the database and you create one per month? Yeah, the triage process is: fix it. Same with your management and oversight process. So when I talk about zero bugs, those are the groups that we're talking about, and we're gonna talk about how people got there. So, Hunter Technologies, which I alluded to, will probably come up a few times; they're a common example for this because they went, in relatively short order, over the course of about a month, from typical defect rates to just not. And that was when they introduced mobbing, and in particular, they didn't just mob with programmers, they brought in other people. So they were doing factory floor automation. I mentioned that defect that, oh, we would have had that one but we didn't. Well, they were doing factory floor automation, and they had a bunch of tools and things that they were developing for it. And they produced something like 50 products out of the one codebase, control systems for different parts of the plant. And the second bug that they'd had, the one that was like a year and a half in, was of the form: the software does the right thing, it's just not actually usable in the physical plant because of actual user concerns. 
So once they root-caused that, they brought in an assembly line worker as part of the mob. They had one of the customers always on the team, and he'd rotate through and do all the roles that everyone did; they'd have a person in for a couple-week rotation and then get in some new blood. And sure enough, a couple of years later, they had one where they're designing this thing, another sensor gauge display: it shows when something's in a control band and when it's going out of control, so that people can do something about it. So they've worked it out and they know what they're doing and they're trying to figure out where to put it. It's for this one particular section of the system, so they put it on the screen for displaying that section, and they're going on with life. And then the guy pauses: I've got a problem, guys. We're putting this on the screen for the thing that it's related to. Like, yeah? That won't work. Well, that screen is for programming the machine, and it's over on this side of the machine, and the reason I need to look at this control band thing is because I need to stand there holding a lever, and if it goes out of control, I need to throw this switch instantly and bring things to a grinding halt before the machine blows up or whatever it's going to do. And the problem is that that screen and this lever are nine feet apart, and my arms aren't long enough. So I need you to put it on this other screen, on a different machine, where it makes no damn sense. Yeah, that's true, but that's where the lever is. So that was a bug. Absolutely it was a bug. It would have made the system not work, and it would have frustrated the heck out of some user, but they didn't have it, because they had that person on their team. And those are the sorts of things that we need to do if we want to eliminate bugs. 
So in Mary's talk this morning, she talked about how in the old world we built systems to be defect-free, and in the new world, scaling out, we're building systems to be fault tolerant. Okay, yeah. So when we say fault tolerant, what does that mean for these kinds of bugs? They're introduced by humans, developers, whatever else. How many of those do we have in a system that we're building to be fault tolerant? In the old world, we would try to build defect-free and make sure we didn't have those, and that was a big, important thing, right? So what about when we're building fault tolerant? Thoughts? I heard two things at once. What was that? Fix bugs faster. So the focus is on fixing bugs faster. Okay, and what were you thinking? Yeah, if you're doing a fault tolerant system, how many bugs, and what are the bugs like there? Okay, so high bug count? No. Yeah, so this is a key mistake that a lot of people make. They think that in the old system we were trying to build defect-free, and in the new system we're trying to build something that will fuck up a lot but fix it quickly. No, that's sloppy, that's undisciplined, that's not scale-out. That's just sloppy, right? So there's defect-free versus fault tolerant versus unintentionally screwing up all the time, right? We don't want to be unintentionally screwing up all the time. A fault tolerant system has just as much discipline. If any defect is possibly detectable, the whole point is to detect the defect early and recover from it early. So what's the earliest point we could possibly detect a defect? Is it live in production, a quarter second after it happens? Or is it during development, a quarter second after we created it? Or maybe a quarter second before we're gonna create it, when the pair partner goes, uh, that thing you're gonna do, I do not think it does what you think it does. Yeah, so when we're building fault tolerant systems, that is not an excuse to be sloppy. 
In fact, if we're gonna build a fault tolerant system, the only way it can really be fault tolerant is that when it goes into a state of fault, which happens rarely, and is hard to test, and is usually not so well thought through, it needs to work exactly as the developer thought. If it's going to recover when resources are constrained and the behavior of the world is not as we expect, and still survive, you cannot have any sloppiness in that system. So when we're talking about fault tolerant scale-out systems, there's actually a higher bar for none of these sloppiness bugs. Another thing: I said a bug happens as soon as it would potentially be visible to a user, or to a human, okay? So what does that mean in terms of testing? Is testing a component of getting to zero bugs? What role does testing play in getting towards zero bugs? Yeah, testing surfaces bugs that are already there. Therefore, under my definition, it can't get me to zero, right? Because I've already got a bug; I just didn't know it yet. So testing has nothing to do with zero bugs, except that it's the outer control loop that tells you whether your actual process for getting to zero bugs is working or not. So we're gonna get to zero bugs, and we need to have a mechanism that gets to zero-defect development before tests are ever invoked. So I guarantee TDD is not part of the answer. I'm a big fan of TDD. It isn't part of the answer, okay? So there's a really interesting question here of, well, what is part of the answer, okay? If we're going to prevent bugs, and that's the only way we can get to zero, we have to prevent ourselves screwing up, right? And a bug is an encoded developer mistake, right? So if we're going to prevent them, we have to know what the causes of bugs are, right? So how could we detect the causes of bugs? There are really two kinds of answers for how we could figure out causes of bugs that we can then go fix, prevent, whatever. 
Okay, so a particular technique: pairing and tripling and the like can help us detect and notice mistakes, yeah? So that's an example. Another example: there are a couple of common practices built into extreme programming that are related to being able to source categories of defects. But there's basically two places that you can look for what the causes of bugs are: in your team, or in the industry in general, right? So in your team, you've got a bunch of practices, retrospectives and pairing being two of the most significant ones, right? For looking at: why do we screw up? What is it for this team, right? And then for the industry in general, you can look at: what are the things that screw up all the teams out there? And how could we learn before we even make the mistake the first time, right? Both of those are useful, okay? So Nancy Van Schooenderwoert has a really great example of depending only on looking at their own team and how they screwed up. And this was a paper that she published. Embedded Agile Project by the Numbers with Newbies, something like that. Google will find it for you. In about 2006, 2007, something like that. And she had been working on this project. It was a control system for a combine, I think it was, a combined hardware-software project. And she'd been arguing at her company for several years that we need to do this Agile thing. And the powers that be had gotten annoyed with her, so they said, fine, all right, we'll give you a project that you can do Agile. But silently, we're gonna make sure it fails, right? So you can go ahead and do that. All right, so you're gonna do this combine, and you've got limited scope, limited budget, you have to make it work. And here's your team. So you're building this combined hardware-software thing. 
You've got a web developer, a DBA, a person straight out of school, you, who has done some embedded control work before, but you're supposed to be the manager, so you shouldn't code. And one person who has used C before. How many defects do you think this team had? Over the course of, I think it was about three years of active development and three years post-delivery that they examined, something like that. Two or three years after delivery; I've forgotten the exact numbers. How many defects do you think were found? So, guesses, ballpark. 15? You only say that because I picked this as an example to demonstrate goodness, yeah. How many do you think a typical team in that circumstance would generate? 7,000? Yeah, something like that. This team, I've forgotten the exact number, it was like 53, 54, something like that. It was 47? Thank you, 47. Yeah, and they know exactly how many it was, and if you go look at that paper, they've got root cause information for every single one. And this is a team where, when you go look at some of those, one of the early bugs, I've forgotten which one, the story that Nancy tells is: we're a month or two into development, and a couple of the developers find this cool, interesting new language feature, global variables. They've never heard of these things before, because they haven't used languages that have them. But they've been having this problem that they're passing information around from function call to function call in their program, and the argument lists are getting long, and it's annoying, and there's information that's used over here and they have to pass it all the way over there, and wow, global variables will totally solve this problem. So they start just jamming all the information into a bunch of globals that anyone can access whenever they want to, and building out this system. And what's Nancy's response to this? 
She's a manager, she's built a control system before, she knows what's going on. She has heard of global variables, while her team hasn't, all right. What do you think she does? What would you do in a similar situation? Tell them, holy crap, stop. Yeah, yeah. So what would happen if you told them to stop? They won't understand why, they'll resist. They'll use them sometimes but not others, they'll, yeah. So what Nancy did was she said, cool. That's about it. They did their thing, and from that moment on, she paid very, very, very close attention to every commit, everything that anyone was doing, and the first time that she was able to detect, ah, they just introduced a bug, she then figured out what the test case was and surfaced the bug, all right. And poof, it shows up, and everyone goes, oh crap, we had a bug. And she had built in from the get-go: these things are optional, we don't need them, right? Let's not have them. So: oh crap, we had one, all right. She'd built the practice around root cause: anytime there's a bug, all work stops. We root cause the bug, we find out the total expanse of our exposure to this, we're going to fix this symptom and any other symptoms, because the bug is not the symptom, the symptom just surfaces it, and we're gonna find what pattern led to this error, and we're gonna prevent the whole thing, right. And they do that, and they go: global variables, this shared state, we can't do good reasoning over who's modifying it when and how, and crap, this language feature is dangerous. And the team says, yeah, wow. Is there any time that it can be less dangerous? And they think about it, and they're like, no, not really, right. We just need to not use this feature. She goes, yeah, seems like a good answer, right. That team never made another error related to global variables again, right. 
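The hazard that team root-caused can be sketched in a few lines of C. This is my illustration, not their code; all the names here are invented:

```c
#include <assert.h>

/* Global-variable style: any function anywhere may change feed_rate,
 * so no call site can be reasoned about locally. A write in one
 * corner of the program silently changes behavior in another. */
static int feed_rate = 0;

void calibrate(void)              { feed_rate = 10; }
int  step_size_from_global(void)  { return feed_rate * 2; }

/* Explicit-state style: the dependency is visible in the signature,
 * and the result depends only on the arguments passed in. */
int step_size(int rate)           { return rate * 2; }
```

The explicit version is exactly the "long argument lists" the team was trying to escape, but it's the form where who-modifies-what-when can still be reasoned about.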
So Nancy, in her role as management, was focusing on making sure the team learned from every mistake they made and had the opportunity to make the mistakes in a safe way. And they were continually refining and inspecting and detecting, and when they made a type of mistake, they never made it again, and that's why they had 47 total bugs. We can do even better than that. I mean, first of all, we should do at least that good. So we should all go home and just stop writing bugs. Yeah. But we should start learning from stories like Nancy's and like Hunter Technologies', right. And look at, when we have mistakes, how can we surface them quickly, learn from them, prevent them entirely, and go solve their root causes. The way that we can do better than that, I have a slide, don't worry, it's blank. The way we can do better is we can learn not just from ourselves, but from industry, from what the common causes of failures are. So, simple chart we're gonna fill out. We're gonna look for the reasons that bugs happen. So I mentioned earlier, a bug is a mistake by a developer. Developers create hand-crafted, artisanal, carefully selected errors, right. That's what these bugs are. Every bug is a one-off, magnificent creation. Now, do we honestly think that developers want to create the bugs? No, not at all, right. So what is causing them to? So, what are examples of some of the things that cause developers to make mistakes? Not understanding the problem. What do you mean by limited scope? Ah, limited understanding of what they mean. Okay, I'm gonna call that the same as not understanding the problem. It's another aspect of the same thing, yep. I heard complexity and rushing. So complexity, what do you mean in terms of complexity? Because that has a number of different meanings. Okay, yeah, complex interactions, where I don't know what all is affected by what all, yep. Distraction, okay. And someone else said business. Ah, okay, other wrong assumptions, okay. 
Thank you, thank you keyboard. So by the way, these are ordered. The top line is the cause of most bugs. The second is the second most significant, and the third is, well, the third of the top three. And these account for, basically, most bugs. So they're all significant. But it is pretty common that all of the examples people think of point to item number three. People have trouble seeing the most common and the second most common causes of bugs. So there was an interesting paper that I read. It was published in like the early 80s or something. Someone was comparing different programming languages and trying to identify: do we produce fewer bugs when we write in C than when we write in Ada or assembly? And why? And as a side effect of that, they looked at what the common bugs are. So, the number one most common cause of defects, up until the mid-2000s, when it changed because there was a change in development tooling, not language, but the tooling that was commonly used by people. It's still fairly common today, but there are tools that you can use that absolutely prevent it. What do you think is the number one cause of bugs, the thing in code that sets people up for error? What was that? Spelling. Actually, spelling, and poor names in general, does appear; it's a little further down on the list, but yeah, especially misleading names. When you call something login and it doesn't actually do login, it does something more interesting than that. So, number one cause. Null pointer? Uninitialized data? That's actually further down. Compiler behavior change. Compiler behavior change? That doesn't even make the list. Number one, nobody ever guesses this: white space. Yep. If you have inconsistent indentation in a file, that will cause more developer errors than anything else. 
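Here's a tiny C sketch of the kind of trap that means (my invented example, not from the paper): the indentation tells the human one story, while the compiler reads another.

```c
#include <assert.h>

/* The second += is indented as if it were guarded by the if, but C
 * attaches only the first statement to the condition. The human
 * reads one program; the compiler runs a different one. */
int add_bonus_if_positive(int x) {
    int total = 0;
    if (x > 0)
        total += x;
        total += 1;   /* looks guarded; actually always executes */
    return total;
}
```

So add_bonus_if_positive(-5) returns 1, not the 0 a reader of the indentation would predict. An auto-formatting, language-aware editor would re-indent that second line flush left and the lie would be instantly visible.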
The best way to screw up a program is to screw with the characters that are not even interpreted by the language. Because the instant you do that, any human reading the code and any compiler reading the code get very different results. And from then on, anything the human tries to do will be screwed up. So the tooling innovation that changed that, that made it significantly less common, is the IDE that can auto-format. Because suddenly your white space isn't screwed up. If, by the way, you are still programming in a text editor, and your development environment does not have semantic awareness of your language, only syntactic awareness, then you are choosing, actively choosing, to write bugs. Your choice. Yeah. Now, a number of the other common ones, in that first category, here are a couple of the other items up there. So: white space; names, misleading or poor names. What is another one? Long methods. Really, really, really long methods, like 26,000 lines long. Yeah. Complex interactions, untestable code. Presence or absence of tests doesn't seem to actually change the defect rate much, but if you take code that was written in a TDD fashion and then remove all the tests and then modify it for a while, it still has fewer future defects. It doesn't matter that it was tested; it matters that there is something different about the shape of code that results from it having been testable at some point. So, anyone wanna guess what this first category is? Human error style? I actually heard the answer here somewhere. Readability, yeah. It's the ability to look at a piece of code and tell, just locally, this code in front of me, what the hell does it do? A lot of code, you look at it, you see even these eight lines, and you ask yourself, what the hell does this do? That's a bug. It's a bug waiting to happen. In fact, by my definition, it's already a bug: it confused a human. 
But if you can't tell what a piece of code does even locally, there is no way you can safely modify it. So what's this next category? Architecture, design flaws, yeah. And it's actually a particular kind of design flaw. Mocks are the big clue here, actually. So, side effects, yes, side effects are an example. Coupling, interactions, yes. These are context dependence of any sort. Context-neutral code is very easy to reason about. Whether it's because it's pure and side-effect free, very easy to reason about, or it's tell-don't-ask and it has no returns and only sends messages, very easy to reason about, right? Asynchronous and interacts just via message passing, easy to reason about. A co-routine that executes in parallel with a number of other things and does non-local returns back and forth between them? Not so easy to reason about, right? Even if I can read it locally, I can't reason about the damn thing, right? It uses threads, you know, just freely, and mutexes occasionally to block. Not so easy to reason about, right? Yeah. So anything which is context dependent, even if I can read it locally, there are gonna be other aspects, you know, what happens when it accesses that other object, and that can screw me up, okay? And the third one, what's this? So the first two were something about the code, and they really are. The things that most commonly screw up developers and cause them to make mistakes are encoded in the code; it's actually your code that is biting you. Which is nice, because as a developer, the easiest thing for you to fix is your code, right? So the two most common causes of bugs are totally within the developer's control, and they can just fix that by refactoring and other things that I'm not talking about at this conference, but we can talk about later. This third one, are those code problems? Miscommunication, that's half of them, yep. Anyone? Experience, yeah. And there's an even broader category than that. 
Human, they're all human factors, yeah. And a little bit narrower than that. So some of these, you know, business, wrong assumptions, not understanding the problem. Well, I guess all of these are miscommunication sorts of things, but what about: the user doesn't know what he wants, but he sure knows it when he sees it? If the user can't tell you at the beginning what they want, but then they use it and they get really frustrated, is that a bug? Yeah, absolutely it's a bug. So what was the cause of that one? Yeah, well, it can be not understanding the user, but it's actually a complete lack of information. Like, nobody in the world knows the answer. Sometimes, you know, we're innovating; sometimes there's learning to be done. And if no one has done that learning yet, you have to do the learning somehow. If you do that learning by releasing your product live and building it out in the product, then it's pretty expensive to learn. If you do that learning by doing a design sprint and figuring out what the right answer is, then it's a lot cheaper to learn and you don't have to write the bug. So, each of these has ways to detect and ways to mitigate or fix. So how do you detect a readability problem? Yeah, yes. And this, by the way, is one of the ones that everyone forgets, because developers read code that's ugly and they go, well, my job's to understand this, good thing I'm a smart person. No, you just got a signal that you're most likely to make a bug. Your answer should be: holy crap. Okay, well, I've got some work to do so that I can get this into a form where I can read it without being a smart person. So how do I do that work? I heard it mumbled. Refactor, yep. And in particular, what you see here is a lot of the core six refactorings and naming as a process. I'm giving those names because you can Google them. I've got some blog entries and the like about that. But all of that is fundamentally about creating local understandability of the code. 
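As a sketch of what that kind of refactoring buys you, here is the same C function before and after extract-and-rename. This is my invented example, not one from the talk:

```c
#include <assert.h>

/* Before: locally unreadable. What the hell does this do? */
int f(const int *a, int n) {
    int t = 0;
    for (int i = 0; i < n; i++)
        if (a[i] > 0)
            t += a[i];
    return t;
}

/* After extract-and-rename: every name states intent, so the code
 * can be read locally without needing to be a smart person. */
static int is_credit(int amount) { return amount > 0; }

int sum_of_credits(const int *amounts, int count) {
    int sum = 0;
    for (int i = 0; i < count; i++)
        if (is_credit(amounts[i]))
            sum += amounts[i];
    return sum;
}
```

Behavior is identical; only the local understandability changed, which is the whole point.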
And it's a very small set of techniques that you have to know, and you do them a lot, several hundred times a day. Okay, how do you detect complex interactions and context dependence? Use it, and in particular, in what way? Yeah, so if I want to see where the code is context dependent, then I should try to use it in two different contexts. So one is integrated into the live system; what's the other one? In a test. And in particular, for it to be a different context, it has to be a test where it's not connected to anything. Because I've seen what it looks like when it's connected to everything, so: not connected to anything. Great, what is the way that I take code and connect it to nothing? That's exactly the wrong answer. Yes, it is the answer everyone gives. Mocks are not for that purpose. No, when used that way, what a mock is allowing you to do is mask exactly this problem. It is telling you that you have code which cannot be verified outside of its context. And so we're going to make a fake version of its context, which we promise we will always and forever make match the real context, except when we forget, and then we'll just have some false negatives in our testing, but that's okay, right? I don't like mocks. What mocks are telling me, when I have a test and I need to use mocks in order to verify this in some different context, is that my design has a context neutrality violation. It has a flaw. It has some really heinous dependency, like an object that I'm depending on an interface for, and it's got like a couple of methods on it, some of which have both arguments and return values. Huge, heinous dependencies like that, which we should not have, right? And when you get a lot of sensitivity to this, it really starts changing your design. So this one is tests. Yeah, if I can't test it, or I can't test it without the use of mocks, I have a problem. And the fix here is refactoring again. 
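A minimal C sketch of that point, in the spirit of the control-band gauge from earlier (invented names, not real hardware APIs): the first version can only be verified against the real sensor or a mock of it; the second takes its context as plain values, so the same code runs unchanged when connected to nothing.

```c
#include <assert.h>

/* Context-dependent: verifiable only against the real sensor, or a
 * mock register that we promise will forever match the real one.
 *
 *   int out_of_band(void) { return read_sensor(SENSOR_ADDR) > LIMIT; }
 */

/* Context-neutral: the environment arrives as arguments, so the same
 * code is exercised identically in production and in a test that is
 * connected to nothing. No mock needed. */
typedef struct { int low; int high; } band_t;

int out_of_band(band_t band, int reading) {
    return reading < band.low || reading > band.high;
}
```

The caller that actually reads the sensor becomes a thin shell around this testable core; that's the shape the refactoring drives you toward.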
And some people that I worked with when I was at Microsoft have since gone and done a whole lot of stuff on dependency elimination. There are some good blog entries; Brian Giesler has one where he talks about how, with the dependency elimination principle, SOLID design just becomes SO. Because everything is single responsibility, and that's fairly straightforward, and all the other stuff is there to work around the fact that you're using a whole lot of inheritance, which you just don't need anymore once you eliminate dependencies. So the rest of SOLID vanishes.

And the third thing: how do you detect miscommunications or learning to be done? How do you detect them? What was that? Retrospective and discuss. Yeah. Honestly, in this category, you're gonna make each mistake once. I don't know a way that you can detect these in advance, because they often fall in the camp of you don't know what you don't know. Yep, you're gonna make those mistakes. So the retrospective is your chance, as soon as you find out, oh, I didn't know something I could have or should have known, to do something about that. And how do you fix this? Who says refactoring is just for code, right? You make small local changes to your human systems, changes which are verifiable, which are rollbackable, which have all the properties of refactoring.

So if you do these couple of things and you really look for those sorts of bugs, then go ahead and do the root cause analysis, and you, too, can get to a very low defect rate. If you aren't already there: typically teams do the XP practices, just apply them blindly, and get to numbers similar to the ones you're getting, right? Maybe not all the way there, but most of the way there. And then if you do these things, you can drive it down to Hunter Technologies sorts of numbers. So I encourage you to join the camp of Bug Zero. We believe that defects are optional. We'd like them to stop being a thing in our industry.
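A speculative illustration of that claim (invented names, not taken from the talk or from the blog entry it mentions): the interface hierarchy that the O/L/I/D principles exist to manage can collapse into plain single-responsibility functions once the dependency is just a value:

```python
# Hypothetical sketch of "SOLID becomes single-responsibility" once
# dependencies are eliminated.

from abc import ABC, abstractmethod

class DiscountPolicy(ABC):
    # Before: an interface plus subclasses, requiring open/closed,
    # Liskov, interface-segregation, and dependency-inversion
    # discipline to evolve safely.
    @abstractmethod
    def apply(self, total):
        ...

class TenPercentOff(DiscountPolicy):
    def apply(self, total):
        return total * 0.9

# After: the policy is just a function value. Each function has a
# single responsibility, and there is no hierarchy left to govern.
def ten_percent_off(total):
    return total * 0.9

def checkout(total, discount=ten_percent_off):
    return discount(total)
```

Nothing here inherits from anything, so the inheritance-workaround principles have nothing left to do.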
We should talk about bugs in software as often as we talk about bugs in surgical procedures: as things that are simply not a good thing. Or as often as we talk about fires that burn down entire cities, right? We could prevent all of those things. And if you're interested, please reach out to me. There's a Slack group that we occasionally chat on. And you can take this approach back to your team and fix your own bugs before you have them. Thank you.

So I woefully ran over my end time, but there's a... We can take some questions. Anybody have any questions?

I think in all these categories and examples, there's a whole category missing, which is omission errors: you just haven't done it yet. Is there such a category?

So, yeah, errors of omission. There are two possibilities, I believe, in something that I haven't done. Either there's something that I haven't done and nobody really expected it to be done, in which case no one's frustrated or disappointed. Or there are things that haven't been done and someone, for some reason, had an expectation that they would be. Those I would consider errors, consider bugs, and I see them as miscommunications, because if I am implementing half a feature and I know I'm implementing half a feature, I should clearly communicate the half of the feature that's there and the half that's not there yet, right? And then people will go, oh, it doesn't do the thing for me yet. You say, yeah, we're gonna get to that. Actually, it's not next, but soon; it's one of our interesting items. Okay, right, people are okay with that. If, however, I mislead somebody, then they get partway through and they say, but it doesn't do..., right, and now we have a bug.

Yeah, but now we are doing this continuous delivery thing, and we are always at some point of our development. So does that mean that this kind of omission error is actually acceptable?

No, I need to always be honest, right?
The system does exactly what I say it does, all the time. And so if I have implemented half of a feature and it is useful to users, great, then I do want to roll that out and include it in my deployment. And I also need a way to communicate to those users that this does this part of the thing and doesn't do other things, right? If I mislead users and they go, oh, finally, I've got sorting, and they expect that they can sort everything in the entire system and define all the sorts however they want, and whatever else, then they're gonna be very disappointed when all I've done is alphabetical sorting of a couple of columns, right? If, however, I say, we are starting to roll out sorting capabilities because that seemed useful, we're doing alphabetical sorting in these areas because those are the most important ones to people, how does this work for you, and so on, then people go, okay, cool, you're headed in the right direction, please give me more.

Do we have any other questions? If not, can we thank Arlo for his very insightful talk? Thank you.