Still not... there we go, I can hear it. All right, are you having a good day so far? Doesn't sound like it. Okay, just so-so. All right, so welcome. I don't think I'm closing — I think there's just keynotes after me, which is a happy, happy day.

So, I'm Corey Haines. Naresh asked me to come talk, and I thought it might be a good time to do a little reflection on some of my experiences over the last 10 years of practicing XP. So my title is pretty much just "some stuff I've learned over the last 10 years."

A little bit about me: I'm a developer at the core. I'm not a coach; I don't travel around helping teams improve their process. When I work with a company, it's really with the development team, working on the core practices: TDD, continuous integration, pairing, things like that. I also have a cat, Zach. She's wonderful, and I thought that might be important.

A little about my background, so this isn't just some guy standing up here rambling about 10 years of doing TDD and XP. It can be tempting, when you hear people up here, to think, "Oh, the things they're saying are the right things; those are definitely what we must be doing." But the way speakers gained the knowledge they're sharing is through experience, and I've been practicing for about 10 years.

I was introduced to Agile, and XP in particular, in 2004. Oh, wow, there's a big screen here. I was at a big, huge national company in the United States that decided it wanted to do an Agile experiment. I don't know if you've been at a company that decides to do that: they have these big, heavyweight processes, they bring in a couple of consulting companies, and they're going to go Agile — they're rolling out Agile. They brought in Object Mentor, which is run by Bob Martin, and they also brought in ThoughtWorks, one for each team. I was lucky enough to join the Object Mentor-guided team. Not lucky that I missed the ThoughtWorks team — they were good too — but I was lucky to get on a team at all. It was pure happenstance: I didn't have anything to do, and I thought basically everyone had forgotten about me, so I kind of wandered the halls like a ghost. Then I ended up on the Object Mentor team, focusing on automation tools for the testing, the build processes, things like that.

When we finished our project after about three or four months, it was a huge success at the company. We had a record — I always have a hard time saying this — a record low number of bugs, not record high. We hit our schedule. And of course, whenever you have a small team that's working very well together, you break them apart and make an Agile consulting team, whose goal is to run throughout the company, embed one person from that team on each of the other teams, and try to roll out Agile enterprise-wide. If you've ever seen this happen, it tends not to work, because you had a team that was very focused and worked well together, where everyone had their specialty. I was really big on automated testing; I hadn't done a lot of focused thinking about things like project management and planning. I was just writing code around the automation tools.
So when I heard about this and they asked us all to join that team, I joined the automation testing team instead, because they were actually moving forward and they needed help.

I was lucky enough to attend the last XP Universe conference, in Calgary — was it 2004? It was the last one, the year the XP Universe conference merged into the Agile conference, which we later found out was really the Scrum conference. And for about 10 years, I slowly watched the technical practices be abandoned by the conference — for valid reasons; I have no bad feelings about it. But I spent those 10 years really focusing on practicing: reading, learning, trying things out, evolving my process. Rather than a strict start-here-finish-here idea, I tried this thing and tried that, along with the XP principle of: try something, see if it works; if it doesn't, fix it; try something else; make a small change to it.

So this talk is really a set of small pieces of thought around different parts of the XP practices — different things I've looked at, different things I've learned. A few caveats first. I'm on the scaling Agile track, but I don't actually scale Agile. I always work with small teams — six people, ten people, something like that. I'm not sure whether scaling is the right thing to do; I wouldn't know, because I don't tend to do it. So this is really stuff I've learned from experience over the last 10 years, working with small teams.

A couple more bewares. Beware the post-Agile thing, before I get into anything else. I've always wondered about the need to move past Agile, especially when very few people are actually doing anything remotely like it. Not to say it's bad, but what I find — and I've fallen into this trap myself — is that you start working, and after a while you get pretty good at it. You start to be effective, you start bringing practices into a team, they start being productive, and you start thinking, hey, this doesn't look like the stuff I started with. When I first learned about XP, when I got my immersion and started working, I thought: okay, I'm going to do this all the time, and this all the time. It was very by-the-book. But over the years, you mix with it a little as you get more experienced and better at it.

So when you're listening to the stuff I'm saying — or to pretty much anybody else — remember that I'm just some guy. I'm just some guy standing up here who's been doing this for a little while and sharing some experiences. The experiences I've had and the results I've gotten are not necessarily going to be effective if you take them straight off my plate and move them onto your team. We'll talk a little later about the idea of inspect and adapt, but it's important to remember that a lot of my stuff comes from really thinking about it and changing it.

So now that we've got the bad stuff out of the way and I've warned you not to actually listen to anything I say: there are a couple of things people don't often think about when they talk about XP. They think about the practices — and I'm going to talk about the practices — but what's really the core of XP?
There are a couple of ideas I come back to when I think about doing Agile, or about software development in general, and this is one of them: if it's hard, do it more often. If it hurts, do it more often. Chances are it hurts because you're not doing it at the right time. The whole "extreme" part of XP is cranking everything up to 11: if something is good, try doing it more often — try doing it all the time. It doesn't always work, but it's definitely worth a try. And while you're doing it, periodically stop, look at what's working and what's not, and adapt. If you take these together, things tend to work well; over time, you end up building a process that's really well suited to your team.

There's a quote on extremeprogramming.org: "Once achieved, productive teamwork will continue even as rules are changed to fit your company's specific needs." That really comes down to the core principle: start doing it, and over time you might arrive at something that doesn't look exactly like the extreme programming books, but that works very well for your team.

I have a very small team right now, and if you looked at us, you probably wouldn't say we were doing XP. There are certain practices we're doing, but some things we don't really need for our situation. For example, we don't have a continuous integration server, because most of the development is done on my machine. There are benefits we could get from a CI server even so, but there are other things that will give us more value. We don't have a lot of time, so we slowly roll things out as we find the pain we need to solve.

The other part of XP that sadly very few people seem to mention is the XP values. I found that when I really started focusing on them and working in that framework, they didn't just affect the way I developed code — they started to leak out into the way I interacted with people, into the way I viewed the world around me.

The first one is communication. That's really the core of everything: talk, talk, talk, talk, talk. If you look at the whole-team idea, it's really about getting people in a room together so they can talk to each other, so you don't have these things sitting in between communication.

Simplicity: always strive for the simplest solution to your problem. We always think about these Agile tools — there are a lot of vendors selling them, saying you can finally be Agile because you have their tool. But if you've got a very small, co-located team, there's one tool I think outshines every one of them: a big wall and post-it notes. I remember working on a small project — just me and my girlfriend, building something. At her company she had always used a tool called Basecamp for planning, and she wanted to set it up. I had never been able to wrap my head around Basecamp. I'd seen people use it, and it looked like it had some cool stuff in it, but I'm a simple man.
So when she would show it to me, I could never figure out how to use it, so it wasn't very good for our project planning. It turned out — oddly and wonderfully — that we have a 12-foot whiteboard framed in our living room, and when you have a couple of developers together, that comes in very handy. So I said, why don't we just write on the whiteboard every morning what we're going to work on; the stuff we get done, we'll cross off, and the next morning we'll reprioritize. Every day we took pictures of it and asked: what are the things we have to get done before we release, and what are the things we want to get done today? In the end it worked perfectly. We actually looked back at the pictures, and a lot of the things in that "must get done before we release" column never actually got done — which I'm sure a lot of you have found too.

The next value is feedback, and it's really about feedback cycles: shrinking the time between doing something and figuring out whether it was the right thing to do. Preferably you shrink it to the point where it inverts, and you're figuring out whether it's the right thing to do before you do it.

Then there's respect. This is really important on your team. It's not just saying nice things to people, or not saying bad things; it's respecting people's time, respecting the work they do, the code they write, the planning they do, and their ideas.

And the last one is courage. (It took courage to say respect was the last one when really there's one more.) It's the courage to stand up and be honest. It's the courage to stand up when you know you can't get something done, instead of just nodding at the manager who's telling you, "You have to get this done. Can't you just try?" It's having the courage to stand up and say: I am trying — asking me to "just try" implies I'm not already doing my best every day. (I won't share that story; it's not so great.) It's standing up for the team, standing up for the project, standing up for the stuff you do.

So now that we're done with the introduction, there are three core areas I want to go over, with some ideas and a couple of stories for each. Technical practices, which are near and dear to my heart. Planning practices, also near and dear to my heart — they come in very handy when you have a small team and you have to be the one managing it. And last, some team practices I've picked up over the years.

So let's get started. The two technical practices I want to talk about are testing and evolutionary, iterative design.

Testing. One of the most important things I've learned is that you need to keep it integrated with the team. You very much need testing going on as you are developing. I'm not talking about just getting rid of a separate QA department that throws things over the wall, but about testing really being part of your everyday work, both at the developer-testing level and at the acceptance-testing level — keeping it close, setting up those feedback cycles. I'll come back a lot to feedback cycles, because if you write a line of code, it's good to know very rapidly whether it was the right line of code to write; and if you finish a feature, it's very important to know whether it was the right feature. Testing is essential to keeping the whole-team mentality going.
It's easy on a team to let the testers sit off in a corner, going through their test plans, working through the requirements, building automation tests and throwing them over the wall to the developers. You've got to watch out and keep testing part of that whole-team mentality. And you can't let it be a bottleneck. Too often I've seen teams where testing ends up being not just the last thing that gets done, but the thing that drags on. And if it's a bottleneck, people won't do it.

For example, on that first XP project I mentioned, we had an automation testing team at Progressive — the company I was at, which I'm not supposed to say, I think. We were rolling out a new technology, a new platform, but the features had to stay the same. The automation team was working very hard to get their scripts running on our new platform, and they kept coming close, kept almost having it — and eventually we were so far ahead of them that their tests wouldn't have done us much good anyway. We got to the point where we were almost done, and they still weren't ready. So over the course of the project we had built our own automation tool, one that our local testers were able to use, and we ended up bypassing the whole automation team, because the structure of the organization made them a bottleneck to us. It turned out well for the project but not so great for the teams, because we had this wall between us and a bit of friction: they would say, "We're almost done with the automation tests," and we'd say, "Well, we already have our own; we're probably not going to use yours." That comes back to holding testing as close as you can: have those people be part of your team.

I don't know if you've seen this — the testing pyramid. It's really about having the right amount of tests at each level, and keeping everybody involved in writing them at all levels. It's easy to break the pyramid up, with developers working on unit tests and another team working on integration tests, but everybody needs to be involved in all of it: knowing what to do, when to do it, and how much of it to do.

One thing about the different styles of tests: because they take time and effort to write, it's tempting to start small and scale up — to say, we're just going to write the tests that are good for our team. But I'd highly recommend the opposite: start big and scale down. Write too many tests. I tend to tell people to write more tests than they think they need, because they probably need more than they think. You probably won't need as many as you write, but you'll need more than you think. I've found this with all of the practices: always start with more than you need, and then, as you learn, scale down. You'll find out that certain tests aren't as valuable as others.

Which raises the question: when do you need to change these tests? The answer I always give for when to change any part of your process is: when you have pain. Do stuff until it's painful. Write tests until you're blue in the face, write tests until your fingers bleed — well, don't do that. But you get the meaning.
Some of the kinds of pain we hit:

Fragility. A common complaint about a good, solid, high-coverage test suite is that it can be fragile: you make one change in your code and bunches of tests break, and you have to go fix them all. This is a good indication that maybe you're not testing in the best way. Look at it — not necessarily to scale back the number of tests, but to really look at what your coverage looks like, how you're covering your code, and what kinds of tests you're using to cover it.

Speed. Speed often causes bottlenecks. I tend to focus on the day-to-day writing of code and tests, and the platform I tend to use is notorious for slow unit tests. People often bring that up and say, "The test suites are slow, so we're not going to run them, and we're not going to write as many." But it's really about figuring out where that threshold is. Can you split your test suite into a slow suite and a fast suite? Is your design the problem? A separate automation team not being as fast as you need them to be is the same kind of pain. With all of these, you want to actually feel the pain before you start making assumptions about your tests' effectiveness. (There's a small sketch of the slow/fast split below.)

Maintenance costs. At the large company, for the application we were working on, we ended up with about 2,500 test runs in our automation suite, and maintaining them was very difficult. It was one application, but it ran across — well — 48 different states, so a small change in the application meant making changes in all of them. We looked at that pain and adapted: we figured out how to do mass updates of the tests, things like that.
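Here's a minimal sketch of what that slow/fast split can look like, assuming pytest and its marker feature; the function and the tests are invented for illustration, not from any project in this talk:

```python
# A minimal sketch of splitting a suite with pytest markers.
# Everything here is hypothetical, invented for illustration.
import time

import pytest


def total_price(items):
    """The code under test: pure logic, cheap to exercise."""
    return sum(item["price"] * item["qty"] for item in items)


def test_total_price():
    # Fast: no I/O, runs in microseconds, safe to run on every save.
    assert total_price([{"price": 5, "qty": 2}]) == 10


@pytest.mark.slow
def test_checkout_end_to_end():
    # Stand-in for a test that talks to a database, browser, or network.
    time.sleep(2)
    assert total_price([{"price": 5, "qty": 2}]) == 10
```

With the `slow` marker registered under `markers` in `pytest.ini`, the tight inner loop runs `pytest -m "not slow"` all day long, and the full suite — slow half included — runs before a push or on a build machine.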
Another kind of change is the role of the tester on a team. When you have a really good automation suite, you tend to find out a really nice thing: manual regression testing tends to go away — potentially all of it, and if not all, a tremendous proportion. So the role of your testers starts to change. You don't want to kick them off the team — they're still very important — because they can focus on facilitating correctness: making sure the product is correct, the tests you write are correct, your understanding is correct, rather than sitting there following scripts. I like to think of a really good tester as becoming a quality facilitator. We have this term "quality assurance," which, depending on who you talk to, draws different levels of anger; I prefer "quality facilitator." Caveat: that's not an actual term, just something I use when I think about the role. It's about facilitating correctness. You've got to involve them very, very heavily at the definition phase, when you're figuring out what needs to get done and what the business needs. They're there, and they build an intimate knowledge of the capabilities you're looking for.

Because they've been working so closely from that initial stage, they can write effective automated tests — they have such an intimate knowledge of what the application is supposed to do. They're often the people I go to when I have questions about the code or the application; if I need to know how something's supposed to work, I'll often ask the tester, because they're there at the beginning and they're there at the end. And at the end, their focus really becomes exploratory testing: going through and seeing whether there are things you missed. All the known stuff is covered by the automated tests, so they don't have to sit and follow scripts anymore, clicking through your application.

I highly recommend this book [on the slide]. If you haven't read it, you really should — it's fantastic. It brought together a lot of the things I had seen around testing as I got more involved and started working more closely with our testing team.

The second technical practice I wanted to talk about is evolutionary design. Since we give up that initial heavyweight phase of drawing boxes and huge walls of UML designs, you design your application as it moves along, and the two tools you primarily use for this are TDD — the red-green-refactor cycle — and the four rules of simple design. I'll talk about those in a bit.

TDD is very simple to understand. Some people say it's like chess: the rules are simple, but it takes a lifetime to really grasp it. I like to say that when I was first introduced to it, it took me about six months until I was really comfortable and understood TDD. Then a year later, I finally understood TDD — really got it, with all the subtleties behind it. Then another year went by and I finally understood it. And now, 10 years in — just last year — I learned a tremendous amount about TDD; I spent some time talking to people and got a bunch more insight. So I can now say it takes 10 years before you completely understand TDD. If I come back next year, I might increment that number.

One of the things you get more understanding of, one of the very subtle things, is the relationship between testability and design. Michael Feathers gave a fantastic talk called The Deep Synergy Between Testability and Good Design; it's well worth watching. He takes things we generally consider code smells and shows how those smells are dealt with — circumvented — by making your code testable, and how making your code testable relates back to the smells. What is that synergy, that heavy link between them? As you refactor and change your design, you want to react to difficulty in testing. That, I think, is the core of the TDD cycle: knowing that when something is difficult to test, you need to change your design.
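As a minimal sketch of one turn of that red-green-refactor cycle — the example and every name in it are hypothetical, not from the talk:

```python
# One hypothetical turn of red-green-refactor. Run with pytest.

# RED: write the test first. It fails, at first simply because
# normalize_account_number doesn't exist yet.
def test_strips_leading_zeros():
    assert normalize_account_number("000123") == "123"


def test_all_zeros_becomes_a_single_zero():
    assert normalize_account_number("0000") == "0"


# GREEN: write the simplest code that makes the tests pass.
# REFACTOR: with the bar green, rename and reshape freely --
# the tests hold the behavior still while the design moves.
def normalize_account_number(raw: str) -> str:
    stripped = raw.lstrip("0")
    return stripped or "0"  # the second test forced this edge case
```

The point isn't the function; it's that the second test is what forced the `or "0"` branch into existence, and a test that was hard to write would have been the signal to change the design instead.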
Some of the difficulties in testing: too much setup — complex DB setup, or test-double craziness if you use a lot of test doubles, which I do. Non-deterministic test failures — I have one in my code base right now that I'm trying to figure out. Dependencies on external components — things that are external to the part you're actually testing. When you have too much setup, a lot of the time it's because you're missing an abstraction: you have too many dependencies in there, and you may need to take some of them and move them behind a facade. (There's a small before-and-after sketch of that below.)

The other thing to think about is the four rules of simple design. They were codified in the late '90s by Kent Beck, and I've come to think of them as the core piece of all design — all the principles we think about, the SOLID principles and the other ones around design; over the years, I've come to think these four are really the key to all of them. The first: the tests pass. The second: it expresses all of the ideas — the shorthand is good naming. Then: no duplication. And: small. I'm going to be talking about these tomorrow — I have a talk tomorrow that's going to have a lot of code in it and a lot less of me just talking.
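Here's that before-and-after sketch of moving dependencies behind a facade, with every name invented for illustration:

```python
# Hypothetical sketch: "too much setup" usually means a missing abstraction.

# BEFORE: every test of Checkout has to build and wire three collaborators
# just to check one behavior -- the painful setup is the design talking:
#
#     gateway  = PaymentGateway(api_key="...")
#     ledger   = Ledger(db_connection)
#     notifier = EmailNotifier(smtp_host="...")
#     checkout = Checkout(gateway, ledger, notifier)


# AFTER: the three collaborators hide behind one facade, so production code
# and tests alike deal with a single, easily faked dependency.
class PaymentsFacade:
    def __init__(self, gateway, ledger, notifier):
        self._gateway, self._ledger, self._notifier = gateway, ledger, notifier

    def charge(self, order):
        receipt = self._gateway.charge(order.total)
        self._ledger.record(receipt)
        self._notifier.send_receipt(order.customer, receipt)
        return receipt


class Checkout:
    def __init__(self, payments):
        self._payments = payments  # one seam instead of three

    def complete(self, order):
        return self._payments.charge(order)


# In a test, the whole setup collapses to a one-method fake:
class FakePayments:
    def charge(self, order):
        return "receipt-1"


def test_checkout_returns_the_receipt():
    order = type("Order", (), {"total": 10, "customer": "a@example.com"})()
    assert Checkout(FakePayments()).complete(order) == "receipt-1"
```

The design win and the testing win are the same move — the Feathers synergy in miniature.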
So, we're a little ways through. Thanks for sticking around. I've got more to talk about, but here's a reward: a picture of my cat.

Okay — planning practices. There are four parts I'm going to try to get through: iterations, stories, estimation, and prioritization.

Iterations are the heartbeat of the software development cycle. When you talk to people doing Kanban, a lot of the time they'll talk about continuous flow, where you're just chugging cards and features through, and it's easy to lose that heartbeat. I'm not saying Kanban tells you to lose it, but it's easy to lose that rhythm of getting together periodically, reflecting on your accomplishments, having discussions about upcoming work, and just pausing to take a breath. Developing products takes a toll on everyone, across all the roles, and it's important to pause and breathe. I like to split planning from iterations: let iterations happen on a frequent but regular basis, while planning happens on its own schedule, or continuously. Splitting the two apart keeps that regular heartbeat going while planning is handled on a different cadence — it has a different effect, and there's a different reason to do it. And when you're doing iterations, if you cut your stories right and keep that regular heartbeat, even large tasks — tasks that might take a week — can be interrupted by one-to-two-day tasks without losing a lot of context.

Stories. There's basically one thing I say about stories: cut, cut, cut. The stories you write are too big, so you should always cut them, and when you've cut them down, cut them again. And rather than focusing on the story itself, look at the business value. Don't look at the implementation of the story; look at how much it's going to contribute to the bottom line of the business. Technical stories, too, should always focus on value to the business.

At my current company, all of our logs were scattered across a few different machines, and whenever we had a problem, we had to go to each machine and dig through the logs to figure out where the problem was. We decided we wanted to consolidate the logs into one of those services you can send all your logs to, which puts them in one place where you can search. The lead of the team at the time said, "We're going to do this," and I said, well, it's going to take probably about a day — you need to prioritize this with the business. So we called it a technical story, but we did a little due diligence to figure out how much time we were spending going from server to server to server, and we put that into terms the business understood. We were able to say: this is actually going to provide value to the business, because it will let us spend less time spelunking through logs and more time developing stories. It got prioritized, we did it, and it was great.

We often think of stories as large features. I think this comes a lot from our previous experience with use cases, with all their edge cases and different paths through the system. I like to look at stories instead as value delivery. Don't look at what all the different paths are; look at the value the story is going to provide to the business, and then link each capability — each task in your story — to the value it delivers. You have an overarching value, and each individual piece is linked to a smaller value. This allows you to cut your cards based on value: you can cut them down to smaller but still essential value. So if you have a story, you can say: if I cut it here, leaving a few of its capabilities out, I can provide 60% of the value. Now you have basically two stories, and when I talk about cutting stories down, that's what I like to do: focus on where we can get 50, 60, 70% of the value, and ask whether the remaining part can be reprioritized and done later. If you can deliver the high-priority 60%, the remaining 40% goes back into prioritization.

That leads directly to estimation. Once you have your stories split like this, how do you actually estimate your cards? I always come back to the question: why are you estimating? A lot of the time it's because people want to know when something's going to be done — when is this whole backlog going to be finished, when is our project going to be finished? But if you think about it, a lot of projects are never really finished. You're working on it, more features come in, more requests come in, it gets rolled out. It's often arbitrary when a project is "finished": phase one is done, phase two is done, or you go with betas or releases. But we'd like to do incremental delivery — it's better. It's better to slowly start feeding out features, and at some point you can just say: we're done; we've provided enough value to the business. So I find that "done" for a project is less interesting, less important. Frequent delivery allows us to de-emphasize it.
Instead, if we focus on the value these tasks and capabilities provide to the business, we can cut on that, deliver pieces, roll them out, get feedback on them — and we can stop estimating. I'm not saying everybody should, or that it works in every case, but at my current company we don't estimate at all. We only look at business value. We cut our cards down to the point where we can just say: let's prioritize these in the right order; you're going to get it when you get it. And we cut them down so we're not working on month-long things — we're working on cards that take a couple of days. We roll each one out and get it into production as soon as it's done.

What ends up happening is that you start replacing estimation: you replace the planning-and-estimation meetings with pure prioritization meetings. You split your stories so you don't have to care how long they take, and you focus entirely on delivering them in the right order, according to their value to the business. Because if the business can begin using 50% of your application — the most valuable 50% — you can deliver the rest later.

This brings us back to some of the XP values: really looking at the stuff you're doing and being honest about the value. You have to be honest about the value. Somebody can't have a pet project; they can't slide a feature they really want into a story. It takes courage to postpone the rest of the value, to say this part isn't necessary yet. And as value is delivered, your regular prioritization just adjusts. When you keep your iterations small, you can reprioritize on the fly without a really detrimental effect on the team.

A lot of the time, the iteration cycle used to feel to me like a guard against the development team being pulled in a lot of directions — people coming in and telling the developers, "No, now this is important; now this is important." When I first started, the iteration cycle felt like a bubble, a way of guarding ourselves against the chaos of always having a new priority. But if you've really cut down to the value and prioritized on value, any time anybody comes to you, you can ask them: is this more valuable to the business than the stuff I'm currently working on? That keeps you from being pulled in a lot of directions. And if you've cut your cards into very small pieces, it's okay when something new comes in — you can tell them, "I'll get to it tomorrow," because you're allowed to reprioritize as often as you need to. You end up finding the balance: adjust regularly, reprioritize as frequently as you need, and you start reacting to the business's needs. As the business context changes and as you learn more, you're able to react much more effectively. That's the benefit of really focusing on value-based story cards. (Here's a little sketch of what that can look like.)
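As a toy sketch of "cut on value, order by value, skip the estimates" — the stories and the value numbers here are all made up for illustration:

```python
# A toy sketch of prioritizing purely on business value, with no estimates.
from dataclasses import dataclass


@dataclass
class Story:
    name: str
    business_value: int  # whatever unit the business agrees on


backlog = [
    Story("export report as CSV", 30),
    Story("one-click reorder", 80),
    Story("saved search filters", 55),
]

# Cutting a story: split it where most of the value lives, and let the
# lower-value remainder compete with everything else for priority.
backlog.remove(Story("one-click reorder", 80))
backlog += [
    Story("one-click reorder (happy path)", 60),
    Story("one-click reorder (edge cases)", 20),
]

# "Planning" is now just ordering; the team pulls from the top.
for story in sorted(backlog, key=lambda s: s.business_value, reverse=True):
    print(story.business_value, story.name)
```

There's no duration field anywhere — the only conversation left is whether something is more valuable than what's currently at the top.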
Okay, brief pause — another picture of my cat. Okay, let's get through the rest. There are two parts of team building I'd like to give some ideas on: pairing and retrospectives. I love to pair — that's me and Brian Marick. I mean, I really love pairing. And here's me and a guy named Aaron Patterson having a good time pairing.

But if you're going to roll pairing out to your team, you really have to understand: what are the goals of pairing, and what are the values you want to get out of it? There are two primary ones I think of, and all the other benefits fall underneath them. One is mentoring. If you have junior developers, pairing them either with other junior developers or with senior developers gets you a lot of learning. But it slows you down: a senior developer pairing with a junior developer is going to have periods of teaching, where they stop and really talk things through. And there are times when you really need to ship it — you just have to get it out there. That's the time for two senior people to pair together. And if you don't have two senior people available, you might not pair right then, because somebody's got to buckle down and just get it done, and you can't afford to lose the time to the teaching aspect. You want to make sure you do that when you have the time. So you need to know when you're pairing, and why.

Retrospectives, I think, are important, and it's important to treat your process the same as your product capabilities. One of my favorite games is the four-quadrants game, where everybody gets up and talks about things you should stop doing, things you should start doing, things you should keep doing, and then kudos — thanks to other team members. Kudos is really nice: it's a way of congratulating each other, a way for your team to reflect and say, "Yay, we did a great job; this person did something wonderful for me, thank you." But the other three quadrants are really about changing your process, and it's important to treat your process the same as your business capabilities. It's valuable to experiment. Just as you might run tests and experiments on your product, you want to do that with your process as well. But you need to make hypotheses about the process and then create tests. It's not enough to just say, "I'm going to do X." You need to be able to say, "I'm running a test about X," and then allow enough time for the results to surface. If you're trying something whose effect you think will take three months to show up, you don't want to stop a month in because you haven't seen the effect yet. So it's important, as with any hypothesis, to understand the timeframe and the results you're looking for. Every change does have an effect — it might be negative, it might be positive — but if you don't know what you're looking for, it's harder to tell whether it happened. Is there something you can measure because of the change? And — this is the important thing — how long? If you're making a change to your process, set a time when you're going to discuss the results: when will it be done, when will you evaluate it, and how? And the whole team should understand WHY. I put it in big letters because I think that's really important: you have to have everybody on board. If you make the hypothesis explicit — "we're going to do X because we think it will improve our process in this way" — it's much easier to get everybody on board. It's not just some random thing that the person running the team says you're going to try. (There's a tiny sketch of writing such an experiment down below.)
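A small sketch of what writing a process experiment down like that might look like — the field names and the example are invented, not from the talk:

```python
# A sketch of recording a process change as a testable hypothesis.
from dataclasses import dataclass
from datetime import date


@dataclass
class ProcessExperiment:
    change: str      # what we're going to do differently
    hypothesis: str  # the effect we expect, and why
    measure: str     # something observable the change should move
    review_on: date  # when the whole team discusses the results


experiment = ProcessExperiment(
    change="Pair on all production code",
    hypothesis="Post-release defects will drop as knowledge spreads",
    measure="Defects found after release, per iteration",
    review_on=date(2014, 6, 1),  # far enough out for the effect to surface
)
```

The value is less in the code than in the discipline: change, expected effect, measurement, and review date, all agreed on before the experiment starts.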
So in the end, it's this: all of the things I've talked about, all of the lessons I've learned, come down to communication and transparency. Everything about your process is about communication. Prioritization is about transparency, about communication, about letting the business know what can and can't be done. It all comes down to those things. You need transparency because the business needs that information: the people coming up with the ideas, making the decisions about what you're going to work on, need to know what can be done. And it's not enough to just come back and say, "Yes, I can do that," or, "No, I can't." Provide them up-to-the-minute access to your status. Do they know what it takes to build features? Do they have an idea of the effort required? And just as important: do they know how much your team is capable of delivering?

I have a very small team right now, and I'm upfront — very frequently — with the business units and the people we work with on prioritization: we don't get very much done. We're very small; we don't have a huge throughput. That allows them to adjust their priorities, and it makes it more reasonable for me to say: can we cut this story down? We can get you 40% of the value in a short amount of time, because we don't have a lot of capacity. That's a transparency thing, a communication thing.

More transparency makes it easier to set goals. When I was first starting off with Agile, I got the impression — and a lot of feedback from people I talked to — that Agile was really about getting stuff done faster. But what I've found is that it's not really about speed; it's about knowing what can and will be done — and, more importantly, actually accomplishing it. When you know all of that, you can get it done.

All right — just as Naresh comes in and plays me off the stage. Thank you very much. Have a good night.