is the difference between what's called an empirical and a defined process. Scrum acknowledges that software is an empirical process, and because of that, there are certain characteristics that you want in an empirical process. The first is transparency. You want the ability to actually inspect the process, to look at it, and understand the state of what's going on. That's transparency. Second, you want to engage in frequent inspection of the process and the state of the process so you know where it's at. And based on that, you want to adapt. One way to think about this is you want to have well-designed experiments. You want to frequently check those experiments, and then you want to run new experiments based on the results. That's one way of thinking of an empirical process. Next, we talked about Scrum a little bit. We talked about the roles, and there are some important roles in Scrum: first the product owner, then the Scrum master, the product team, and the stakeholders. The product owner is the person who maintains the vision of the product. Where is it going? They're responsible for deciding what will be done. The Scrum master is there to facilitate the process, to help the team move forward effectively, and to clear any roadblocks in their way, or what are called impediments in Scrum-ese. Next is the product team. These are the people who decide how they're going to do what the product owner wants. So a very clear distinction. The product owner says: this is what I want, this is what we need to meet our customers' needs. The product team says: OK, this is how we're going to tackle it, this is how we're going to make it, and this is how much work we're going to do in any quantum of time, OK? Next, there are some artifacts. The most important artifact is a definition of done. We'll talk about that a little bit tonight. It ties into user stories; it ties into acceptance criteria.
Next is a product backlog, which we'll actually get into more next week. And a sprint backlog. But the final artifact you deliver is customer value. If you're not delivering customer value, it's a total waste. Anything that doesn't lead to customer value, directly or indirectly, is waste. So that's our real quick recap of Scrum. There are a couple of other things we're going to hit real quick, and then I'll take some questions. So this is the whole empirical process: inspect, adapt, inspect, adapt. That's the cycle you're constantly going through. You're inspecting, and you're adapting based on that. What does this imply about the process? Well, first off, you've got to be able to inspect it, which goes back to transparency, which I already talked about. But the next is that you can actually do something with the findings from the inspection process. Can you actually change your behavior? And that's not just "can you" as in, is it possible? But are you willing as an organization to change? A lot of organizations fail at that piece. They're unwilling to change. They're unwilling to face the bad news that they're seeing. So you constantly need to be inspecting. And I think this is where you have to come with unflinching honesty and look and say: this is what's good, this is what's bad, this is what's going on. And then you need to adapt. This is an ongoing process, and it expresses itself throughout the whole process framework that Scrum gives you. At the heart of Scrum is the idea of the sprint. And the sprint cycle really is: sprint planning, the sprint itself, the sprint review, and the retrospective.
So what you have is a cycle in which you plan out a sprint, execute the sprint, review the results of that, demonstrate that you've delivered it, conduct a retrospective, and then go on to plan another sprint. This is the ongoing process that you're going through, and this is what is at the core of Scrum. So what is a sprint? A sprint is a time-boxed period of development that delivers a potentially shippable product. Time-boxed: it's a defined amount of time, and every sprint is the same length of time. When the clock is up, it's pencils down, like an exam, if you remember that from school. Potentially shippable product: whatever comes out the other side of a sprint, you should be able to ship. It doesn't mean you choose to ship it, but it means that it could be shipped. Everybody clear on that distinction? So for marketing reasons, or because the support organization isn't ready, or for any number of other reasons, you may not choose to ship it, but you should be able to ship it, okay? If you come out the end of a sprint and it's "oh my God, we couldn't ship that," then something is wrong, okay? If you're going, "well no, it's not good enough," or "we're missing features," or "we've regressed," any of those things, then that is not good. That's not what you want. It's potentially shippable. Now, whether you ship or not, that's a business decision. Be very clear that you separate engineering decisions from business decisions. Whether you ship it or not, that is a business decision. Whether you call this version 4.0 or not, that is a business decision. Whether it is ready to go, and whether you've built it, that is an engineering decision. Does that make sense? So really quick, any questions on this recap? I know it was quick, but I wanted to make sure we're all on the same page. So, sprint constraints.
So there are a few constraints that you apply to any sprint that are very critical for it to be successful. The first is: no changes are made which endanger your sprint goal, okay? This means that when your CEO shows up from having seen customers over the last week and says, "oh, we've got to do this now," it's like, no, we already had a goal. We already have what we're working towards. We're going to finish that. And that's where you need to have a strong Scrum master, someone who's willing to say, no, we're not going to do that. We're not going to take that. Now, there is a whole process for abnormally terminating a sprint, for being able to move forward on that. That's all possible. But no changes are made which endanger the sprint goal. Now remember what we said: potentially shippable product? So one of the things that's important there is that you're not dropping your quality goals, that you're not falling below the targets that you have. A lot of times people are like, "oh, let's just make it a little less good and we can get it out." You don't want to be doing that. That is not a dial you want to be playing with. But here's the thing: this constraint protects the sprint from feature creep. At the same time, part of the process, it's an empirical process, is learning and applying those learnings: inspection, adaptation. As you do the work of the sprint, you will discover things, and you need to work with the product owner to figure out the implications of that. So if you just sit there and say, "sorry, we're not changing our plan," well, that's self-defeating. What you need to say is: we will change the plan as long as we don't endanger the goal. If what we're doing in changing this makes things better, of course we're gonna do it, as long as it doesn't endanger the goal. Does that make sense? Any questions at this point? Sprint planning. So sprint planning is to define that initial scope, to get that first cut at the initial scope.
That's what sprint planning is all about, and that's the key. At the heart of sprint planning is: what will we do? How will we do it? And then there's a third one that's unstated, which is: how do we know we did it? What will we do? How will we do it? How will we know that we did it, that we reached it? That's what sprint planning is all about. We'll go into this in much more detail, because there's a lot more to this, but that will be next week. So your sprint plan is gonna be made up of stories and tasks. Stories are a description of customer value, of end-user value. It's something an end user wants to do. Tasks are things you have to do, okay? And we'll talk about how you get a good story and how you build it in just a second. So, going on to stories. A story has a very simple structure. There are three clauses. The first is "as a type of user"; that's the very first clause. Then, "I wanna do something." And then, "so that some value is created." As a type of user, I wanna do something, so that some value is created. That is the heart of what you're doing as a software engineer, as a product team: you're figuring out how to create value for someone. Let's walk through a concrete example. As a shopper, so I think we're dealing with some kind of e-commerce site, right? Lots of e-commerce sites out there. I wanna search products. So that's what I wanna do. And what value does that create? So that I can find what I want, okay? That's a very simple user story. Now, this is a very, very high-level user story. When you start off with this, this is the kind of story that then needs to be, as we like to say, broken down, okay? Because this is big. This is really big, and there's a lot here. It may seem like a really simple thing until you start thinking about it. Okay, so what can I search on? Can I search on name?
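To make the three-clause structure concrete, here's a minimal sketch in Python. The `UserStory` class and the shopper example are purely illustrative, not part of any Scrum tooling:

```python
from dataclasses import dataclass


@dataclass
class UserStory:
    """The three clauses of a user story, kept as structured data."""
    as_a: str     # the type of user
    i_want: str   # the action the user wants to perform
    so_that: str  # the customer value created

    def __str__(self) -> str:
        return f"As a {self.as_a}, I want to {self.i_want} so that {self.so_that}."


# The high-level shopper story from the example above.
story = UserStory(
    as_a="shopper",
    i_want="search products",
    so_that="I can find what I want",
)
print(story)
```

Capturing the clauses separately makes it harder to write a story that quietly drops its "so that" clause, which is the customer value.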
Can I search on brand? Can I search on stuff that's in the description? Can I search on a price range? And when I find things, how do they display to me? How are they sorted? There's a whole bunch of things. So you're gonna wanna break a story down. So let's now, as a group, talk about how we break this story down, okay? I want someone to give me a variant of this story that breaks it down from a big story into a more bite-sized chunk. Any ideas? Because you're gonna have to do this with a product owner. You're gonna have to engage with them. Is this first clause potentially gonna change? Probably not; that's our anchor, right? We're talking about shoppers right now. So how about the action you wanna do? How might that change? Search by category. Search by, great, excellent. Search by category. Others? Brand. Category, brand, others. Yes, colors might be, you might be nesting these things, yes. Correct. There's a whole bunch of things. Each one of those becomes a story. You break it down. Because here's the thing: if you break it down, then you could potentially say, okay, these are ones we can do that are easy. But colors, hold it, do we have the metadata for color? Hmm, I don't know. That might be something we wanna do in the future, but I don't think we're well set up to do that, okay? But, oh, category, brand, yeah, we could do that. Or to search within a brand, or to search within a category. I wanna search within a particular category for products that have this name, okay? Or within this brand, or across a set of brands. These are all things. So you could write these stories, and you know, you all could do this. You could actually go home tonight and write a basic story like this, very high level, and then take some time to break it down, okay? Think about it: what would be a refinement of this? How would I take this further? How would I develop this further?
But remember, this final clause is the most important. This is the customer value. If you don't have a handle on that, you don't have a handle on whether you finished it or not. And don't just say, "as a shopper, I wanna search products." Why do you wanna search products? What are you going to do? So, in fact, this might even be a bad final clause: "so I can find what I want." Why do you wanna find it? So I can buy it, okay? So get as concrete as you can as you do this, okay? Does this make sense? This is an art, and this is a point of tension when you're working with a product owner, because you're gonna be pushing them to get very crisp about the customer value. Very crisp. Why? Because that's gonna lay the stage for the next piece, okay? And the next piece is: how do you know what you've got? Now, when we talk about sprint planning, we're gonna talk about how you break it down further, how you figure out how you're gonna make it happen, and estimation; those are all coming. But right now you wanna get good stories, and you wanna get the next piece, which is moving towards our definition of done. So, when we talk about the product being done: we need to be code complete, well, we need stories so we can build code. We need to be test complete, so we need to have tests. And then we need to know that we've actually met what we set out to do, that there are some objective criteria that say we're there; that's the acceptance criteria, which is then tied to the approval by your product owner, okay? So this is a kind of template definition of done. Every team is gonna have their own definition of done. There's gonna be some customization of it, but these are probably the things you do need. You need to be code complete. If it's not code complete, you're not there. If it's not test complete, if the tests aren't green, you're not there.
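One way to picture that template definition of done is as a checklist where every item must hold before a story counts as done. A hypothetical sketch in Python; the item names are just the building blocks mentioned above, and every team would customize its own list:

```python
# A hypothetical definition-of-done checklist; each team builds its own.
definition_of_done = {
    "code_complete": True,             # code written (teams may add: static analysis, review)
    "test_complete": True,             # tests written and green
    "acceptance_criteria_met": True,   # objective criteria satisfied
    "product_owner_approved": True,    # PO sign-off against those criteria
}


def story_is_done(checklist: dict) -> bool:
    """A story is done only when every item on the checklist holds."""
    return all(checklist.values())


# All items hold: done. Flip any single item to False and it is not done.
print(story_is_done(definition_of_done))
print(story_is_done({**definition_of_done, "test_complete": False}))
```

The point of the `all()` is that "done" is conjunctive: one red item, such as failing tests, means the story is not done, no matter how the others look.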
Now, you might take that and develop it further. Acceptance criteria met: you need to know objectively what was supposed to be there to get there. And then the product owner needs to say, "yeah, you've met my acceptance criteria." You just don't want that to be an arbitrary "oh, I'm looking at it." No, the criteria have gotta be met. Code complete, you could expand that. Code complete could be that the code has been written, it has been run through static analysis and passed, and it has been code reviewed. That could be your definition of code complete, okay? So you could build on it, you could grow it. Test complete: hey, we have unit tests with 80% coverage. We have functional tests with 80% coverage. You could build this out; you could say, oh, we have user-level tests with 80% code coverage. You could say we have load tests that meet certain criteria. So these all can be developed out, but they start from these basic building blocks. Acceptance criteria is what we're gonna go into next, because this is key: if you don't have an objective set of criteria, you have no way to build these tests, okay? And you have no way to know if you're there, okay? If you don't know where you're going, you have no way to know when you've arrived. And yes, the journey is the reward, but you do wanna know that you've gotten there. Okay, acceptance criteria. I'm using a very structured way of talking about acceptance criteria here. In fact, there's a language that you can use to express it. It's called Gherkin. It's tied to something called Cucumber, which is tied to behavior-driven development. I actually highly recommend you look at it and learn about it, but the same model exists whether you do this in, quote, "a programming language" or you do it by hand, okay? The first is you wanna talk about the feature. You want to break these down by features, and each feature should have a set of acceptance criteria.
For each feature, there are scenarios, okay? And then there are conditions, the givens, and potentially additional conditions. And then there's a "when," which is when some action occurs, and then some result, okay? So let's walk through one of these and develop it. In fact, we're gonna start with the story we did before; you all remember that story. So: Feature: product search. Scenario: Meiling searches for products sorted by price. Given Meiling has entered a partial or full name, and she has selected sort by price. When matching products are displayed, then the products are sorted by price. Kind of captures it, right? But there's ambiguity here. There are things that could be further refined. Perhaps this should not be "entered a partial or full name"; you should have one scenario for a partial name and another one for a full name. Two different scenarios, two different givens, okay? So let's look at how we might evolve this and develop it. Here's an improvement: Meiling searches for products sorted by price ascending. See, it's ascending, not descending. Given Meiling has entered a partial or full name, and she has selected sort by price, and she has selected sort ascending. When matching products are displayed, then the products are sorted from low to high price. That is actually a pretty good acceptance criterion. But I bet we could take this same story and come up with dozens of acceptance criteria. Because there's one of these where the scenario is sort by price descending. Price ascending, descending. And if you go back to the story, which is about "I search for things," you could talk about the various different types of searches she could do. There would be name search. So this could be: Feature: product search by name. Product search by brand.
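That refined scenario maps almost line-for-line onto an executable check. Here's a hedged sketch in plain Python; the catalog and the `search_products` and `sorted_by_price_ascending` helpers are invented for illustration (a real team might instead wire the Gherkin text to step definitions with Cucumber or a similar tool):

```python
def search_products(catalog, name_query):
    """Return products whose name contains the query, case-insensitively."""
    q = name_query.lower()
    return [p for p in catalog if q in p["name"].lower()]


def sorted_by_price_ascending(products):
    """Sort matched products from low to high price."""
    return sorted(products, key=lambda p: p["price"])


# Given: Meiling has entered a partial name and selected sort by price ascending
catalog = [
    {"name": "Red Shirt", "price": 25.0},
    {"name": "Blue Shirt", "price": 15.0},
    {"name": "Hat", "price": 10.0},
]

# When: matching products are displayed
results = sorted_by_price_ascending(search_products(catalog, "shirt"))

# Then: the products are sorted from low to high price
assert [p["price"] for p in results] == [15.0, 25.0]
```

Notice how the Given/When/Then comments carry the scenario straight into the test body; the objective criterion is the final assertion.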
Product search by category. These are all further refinements of it. So right now, I want you to mentally step back. For this feature, create two acceptance criteria in your mind. Think about them, and then somebody be brave enough to share them. You can talk with people, it's okay. This is not a closed-book test. Okay, is someone brave? Or am I gonna have to just call on someone? Anybody brave? I'm gonna have to call on someone. How about you? You seem to be very much into it. Do you have an acceptance criterion that you came up with? Okay. Product search by full name. So, scenario, this is good, okay. Meiling searches for a product by full name. Given she has entered a full name. Then: no matching result. Excellent, very good. Let's give her a clap. Very good. And in fact, you hit on something I didn't talk about at all, which is the importance of negative test cases. Negative acceptance criteria. What happens when it doesn't work? What do we do? Just as important. Very good, thank you very much. Anybody else? If you worked for me, you might be a victim here in a minute. Somebody else has gotta have one. Let's start from the feature; what's the feature? Product search. Product search, but any variation on there, or just product search? Product search. Okay, scenario? A scenario: she enters a partial name. Given she has entered a partial name. Assuming she hasn't hit enter or search yet. And she hasn't yet hit search. Yes. Then maybe suggested products are displayed. Okay, so what you're basically trying to get to is a partial match. Which I would say is probably not product search but suggestion. Search suggestion. Product search suggestion. So: Meiling is searching for products by name, and she has entered a partial name. She's searching by name, which may be full or partial.
Given she has entered a partial name and has yet to hit enter, while waiting for her to hit enter, display a list of matches. Yeah, good, good. So again, you've got to be thinking about this. What you're going to find is that for each one of these stories, you're going to break that story down, and for each of those stories, you may have dozens of these acceptance criteria. Sometimes this process of writing acceptance criteria is called specification by example, or requirements by example, because really what these are are examples. And think about the conversation you're having. Okay, so now we do this. What happens? Let's unfold that. Let's walk through that and understand what happens. Let's create an example that helps us understand what you're really getting at. Because your product owner, they've got this picture in their mind. There's this little movie playing in their mind. And guess what? You don't get to watch the movie. You can't see it. It'd be much easier if you could, but you can't. So what you have to do is get them to describe the movie. Which means you've got to go and say, okay, so what happened then? What happened then? And then what? It's kind of like listening to a really engaging story where you're hanging on every word. And then what happened? Oh, okay, fill it out for me. When the person opened the door, what was behind the door? It's that unpacking that you've got to do, and that is how you get good stuff. So what's gonna happen, and we'll talk about this next week when we get into the sprint planning process, is they're gonna have to define some stories. And these are gonna be your product backlog. And you're gonna be getting ready for a sprint, and you're gonna pick one of those stories, and wow, this is really big. In fact, it may be too big even for a sprint.
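The negative scenario from the class exercise, searching by a full name that matches nothing, can be sketched the same way. Again, the catalog and the `search_products` helper are hypothetical, invented just to show the shape of a negative acceptance criterion:

```python
def search_products(catalog, name_query):
    """Return products whose name contains the query, case-insensitively."""
    q = name_query.lower()
    return [p for p in catalog if q in p["name"].lower()]


catalog = [
    {"name": "Red Shirt", "price": 25.0},
    {"name": "Hat", "price": 10.0},
]

# Scenario: Meiling searches for a product by full name and nothing matches
# Given: she has entered a full name with no corresponding product
# When: the search runs
results = search_products(catalog, "Green Sofa")

# Then: no products are returned (and the UI should say so explicitly,
# rather than showing an empty page)
assert results == []
```

Negative criteria like this one pin down what the system must do when things don't work, which is just as important as the happy path.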
You may end up with two sprints' worth of work out of that story once you start drilling down on it. But you need to do that drill-down. You need to have that conversation, break it up into bite-sized chunks. So stories, when you think about them: any story or task that you tackle should be two to three days in length. Because otherwise you have no way to measure whether you're making progress or not. You know, if something is gonna take five to six days, and you're in a two-week sprint, you really don't know whether you've hit it or not until you're out of the sprint. Now, for the people that are engaged in a story, there's the elapsed time, and then there are the people that are working on the tasks. Because you may swarm on a story; in fact, I highly recommend you swarm on stories. When you do that, you're going to have maybe three or four people working on it in parallel, breaking it up into bite-sized chunks that they're gonna tackle, and then you deliver that. And really, the complexity and the difficulty will vary based on how many people need to engage on it, not just how long it's gonna take, okay? So that's part of the thing. So you're gonna take that and break it down. And then for each of those, you're gonna go through this exercise of building the acceptance criteria. Even if you're not using Cucumber and such, which is a great tool to use, I recommend you use this Gherkin structure. In fact, run it through the Gherkin parser to make sure that it passes, that it's legal. Because this structure reduces ambiguity, okay? Really force yourself to use this even if you're not going through the process of translating these into automated tests. Go through the process of using this to get a handle on what you're gonna do, okay? Now here's the thing: who's responsible for writing these? The team? It's a conversation.
It's a conversation. You want probably three people involved in this conversation at any one time, okay? The product owner, definitely, gotta be there. They're the one who's got the movie in their brain, okay? Then there are the people who are gonna think about how to implement it. And then there's someone who has to be thinking about all the different cases you can have. Sometimes you call them a test engineer; sometimes they're just a software engineer on the team. But somebody needs to be there who's thinking, "oh yeah, but what about this? What about this?" You wanna find the person who's kind of like a five-year-old asking "why? why? why?", except here it becomes "what about? what about? what about?", someone who drives that and pushes that. But it is a team responsibility. So, last week when we did this, one of the pieces of feedback we got was that one of the best parts was the Q&A time. I've tried to create polls in here for questions and answers, but I found that people just don't seem to like to ask those in the middle of the talks here in Singapore. So what we're gonna do now is just go to Q&A. And this is really a chance for you to bring up scenarios like: okay, we tried this but it failed; why did it? Let's try to figure out why it failed. Or how would you apply it here, or things like that. So, we now go to questions. Go ahead. So the question is: when breaking down a story, is how far you break it down determined by whether you can do the work in two to three days? Two to three days. And that's the biggest part here. That is really, for me, how you go about breaking it down: can you do it in two to three days? Just keep iterating until you have a high level of confidence that it's two to three days. Now, there are times when you don't know; there are just so many unknowns. And that's where you use a tool called a spike.
And a spike is a special kind of story where the customer value you're producing is knowledge: the knowledge of which direction to go. Okay, it's like, gosh, we really don't know how to implement this. There's a lot of uncertainty. Then just own that uncertainty, and time-box an investigation. We're going to spend two to three days investigating it. We're going to do this and this, and we're going to evaluate it. And the outcome, the customer value produced, is a decision: we're going to use Cassandra to store this data. Because right now we don't know whether we should be using Cassandra or we should be using Basho's product as our key-value store, okay? So we're going to do an investigation, and this is what the investigation is going to look like. This is the data we're going to collect, and then we'll make a decision based on it. And oh, by the way, we have a set of acceptance criteria that tells us how we make the decision: the time must be within this bound, the result must be within this range. You have a bunch of acceptance criteria, and you can even use this format for them. And you can even have one at the end that says: all things being equal, we're going to go with Cassandra, okay? That's the final acceptance criterion. If all other things are equal, we go with Cassandra, just because we like it better. But again, it makes things very clear, very objective. Other questions? So this goes back to this question here, which is how you should be breaking these down. And you should be breaking these down into two-to-three-day chunks. Now, I know, I've been there: there will be times where you thought it was two to three days and it turned out to be more. At that point, break it down more. When you have that data, just own it. Just say, okay, we didn't nail this. We didn't know this. We need to break this down further. Let's do that, okay? And then deal with that fact.
But yeah, if a story is big, it's all right to deliver a feature or an epic across multiple sprints. Just make sure that what comes out of each sprint is potentially shippable, okay? And that you're delivering some customer value. So guess what? If we've gone with the very first story I did, which is: as a shopper, I wanna search for products so that I can find the things I wanna buy. That's a huge story. As we unpacked it, we found that it could cover how you sort the results and display them. We dealt with negative cases. We dealt with names, categories, brands, other metadata that you'd be searching on. We dealt with suggestions. All of that can fall under that story. So break it down. Break it down into more precise pieces, more clearly understood pieces. The thing is, when we're building software, we're going from a point of no understanding of what the person wants to, hopefully, full understanding of what the person wants, as captured in the software we built, okay? So the question is: how do we get there? You know, when I started out in this business, we had a term, I don't know if they still teach it in computer science: step-wise refinement. So what you do is you start with something, and you break it up, and you break it into more pieces. And this was the basis, at that time, for writing functions or procedures, okay? You wanna do something, you break it down, and you break it down, until you can get each piece into a single function or procedure. I still kinda use that approach, whether I'm working in Ruby or Python. That's kinda how my Ruby methods are built: I break it down until I can basically say something like "do foo," and I have a method to do that, you know?
And I keep breaking down until I get to that point. Same thing here: do X, and make sure you have it broken down into those steps. Other questions? So the question is: what should the ideal length of a sprint be? How long should it last? So I think when you add up all the overhead for a sprint, you wanna be below 5%. So if you're doing stand-ups, those 15 minutes a day, you've got that. Then you need to look at the reviews and retrospectives. For retrospectives, I'm really hard-assed about time-boxing; for sprint reviews, I'm really, really hard-assed about time-boxing. For sprint planning, you're gonna adjust that number, but I think you do wanna do time-boxing. I think the other thing, which we didn't talk about, which we'll talk about next week, is something that's not an official part of Scrum but what some people call storytime, which is an ingredient to maintaining a good, healthy backlog. The thing is, you don't want all these stories broken down up front; you don't wanna have six months of actionable stories sitting in your pipeline, in your backlog. Because the reality is, you're gonna throw 90% of them away. So you wanna go from a state of kind of fuzzy definition, big chunks, to, by the time you intake them and jump into a sprint, they're more broken down. So you also have, not just sprint planning, but storytime as a kind of ongoing process, where the product owner says, hey, I'm thinking about stuff along this line, here are some of the stories I'm thinking about. And you can start asking questions at that point, and they go, oh, I hadn't thought about that; let me go talk to some customers. Maybe I need to do some user research. Maybe I just need to think about that. Maybe I need to grab the product designer and play with some ideas.
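The step-wise refinement idea from a moment ago, where the top-level method reads like the plan and each step becomes its own small method, might look like this in Python. The `checkout` example is invented purely for illustration:

```python
# Step-wise refinement: the top-level function reads like the plan,
# and each step is broken down into its own small function.
def checkout(cart):
    total = compute_total(cart)
    return build_receipt(cart, total)


def compute_total(cart):
    """Sum price times quantity over every item in the cart."""
    return sum(item["price"] * item["qty"] for item in cart)


def build_receipt(cart, total):
    """Render one line per cart item plus a total line."""
    lines = [f'{item["qty"]} x {item["name"]}' for item in cart]
    lines.append(f"TOTAL: {total:.2f}")
    return "\n".join(lines)


cart = [{"name": "Shirt", "price": 15.0, "qty": 2}]
print(checkout(cart))
```

Each refinement step bottoms out in a function small enough to read at a glance, which is the same move as breaking a big story into two-to-three-day pieces.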
You know, it's better that those questions be raised earlier rather than later. And so that means you can't just throw it all into the sprint planning time. So the first question is about how you dedicate time in the sprint for bug fixes. So, tasks: I treat bug fixes as tasks, and you put them into the backlog and you manage them there; that is the ideal, okay? I think that how much of a sprint you spend on them is really a factor of the customer value that's being delivered. If your product owner is like, "oh my God, these are really horrible, we have to deal with them," maybe you have a sprint where all you do is fix bugs. Because that's the customer value you're gonna deliver: a more solid, more robust, more reliable product. I think that's the product owner's call. What you'll tend to find on a kind of mature product is that it's about 20 to 25%. But that's just a rule of thumb; it depends on your code base. Now, one of the things I'll say is: bugs that are tied to the stories you're implementing, fixing those is part of implementing the story, okay? You just write the bugs up and you put them in the sprint. But again, it goes back to acceptance criteria: are you meeting the acceptance criteria or not? If you're gonna do something like a refactor, that's a task, but what is it enabling? I mean, there are some of us who look at something and we just have to get our hands in and refactor it; our skin's crawling. But we have to ask ourselves: are we really delivering customer value with that? Because I'm one of the people whose skin ends up crawling sometimes. And I have to pull myself in and say: no. Yes, it will make me feel better, my skin will crawl less, but really we're not delivering any customer value, okay? That's the reality.
So we have to ask ourselves that question. And that's where the PO comes in: the PO decides what gets done; the team decides how it gets done, how long it takes, and how much they're going to commit to, okay? So there's a tension there, and that's a good tension, created by separating those things. Now, if you have a PO who is saying things like, well, in four weeks you're going to ship this, and they're setting expectations without having talked to you, and all you have is a very high-level story, you need to very gently push back and say, I love the fact that you aspire for us to deliver that feature in that time, but I don't think we can commit to it, okay? There will always be a tension. Your product owners, your product managers, are going to push you. They want more, they want it sooner. They're seeing a path forward, that vision they're tracking to; they have that movie in their mind that is so clear to them. You need that, but you need to say, hey, let's figure out how we're going to make that movie happen, okay? I remember reading an interview with a famous movie director, I can't remember who, who said that every day before he came on set, he would visualize every scene they were going to film that day. But then it sometimes took 20 takes to make a scene reality. For you, your PO or product manager is going to have all these scenes in their mind, and it may take you 20 takes to get them right, you know? And imagine that the script wasn't even written; you're having to write the script at the same time. It's not like you have a finished script, okay? So don't get me wrong: this tension is good. It's a powerful tension. What you ideally want in product and engineering are two people who are very passionate about what they do, who can disagree with each other, maybe even raise their voices in the conversation.
But at the end, they come out and they say: this is what we're doing. That's what you want. So it's okay that they argue. It's okay that voices get raised. It's okay that maybe even tables get pounded on. What matters is that at the end there's a commitment to move forward. Amazon talks about "disagree and commit"; that's the model they describe, and that's what we're talking about here. Second question? So the question is: what happens when you need a separate design sprint? First off, thinking about what this is going to look like, what the interaction is going to be, is part of what should be coming into the grooming process, because it goes back to acceptance criteria. But you also may say, hey, what we're producing is something that can be tested; the result is something that's testable. Again, acceptance criteria. Go back to acceptance criteria: is this something that can be used to validate this user experience? Okay? And again, understanding the customer is customer value if it lets you make good decisions going forward. Okay? Cool. Others? Next question? Yes? So the question is: when does performance become part of the acceptance criteria? Say it's product search, and it takes five minutes to get results back. Well, make that part of your criteria: results displayed within 500 milliseconds. It can be one of your criteria. You should talk about it; you should say what is acceptable. You know, is this a potentially shippable product? If it takes five minutes, is that usable? No. Okay, it's not. I mean, let's just be honest.
We know Jakob Nielsen's research, Donald Norman's research, on how quickly people bounce away. It's very clear: you've got no more than two seconds, and for something like this, it's measured in hundreds or even tens of milliseconds. How long does it take to press a key and get that feedback? You've got to be able to press the key and see the feedback. The question is: is it critical to user acceptance? Now, if you're saying, oh, we're going to do a proof of concept of recommendations, then that's a different thing. But be clear about what the criteria are for what you're trying to do, okay? On performance: you don't want to prematurely optimize, but to deliver something and say that it's good, you've got to measure the performance. And again, it's about what's acceptable, not what's best, okay? What's acceptable versus what's best. We may say that the ideal for something like this would be 50 milliseconds, which may be a stretch, but in that case we say, okay, 250 is acceptable, okay? If you don't know what the number is, you're never going to reach it. If you haven't talked about it, there will be disagreement: you'll get to the end, and your PO is going to say, this is unacceptable, and you're going to say, well, why didn't you tell us? Okay? Flip it around: at the very beginning of sprint planning, if they set 50 milliseconds as the criterion, and you know enough about the architecture to know you cannot deliver that right now, should you accept that story? No. We should say: we can't do that. I wish we could, but we can't, and this is why. And then figure out what you'd need to do to make it happen. I mean, these are not easy conversations to have. These are hard and painful conversations. We all want to deliver good stuff. We all want people to say, yes, you've done it.
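One way to make a performance criterion like "results within 250 milliseconds" concrete is to encode it as an automated check. A sketch, where `search` is a stand-in for the real product-search call, the catalog is fabricated test data, and the 250 ms budget is whatever number the team actually agreed to in planning:

```python
import time

# Sketch: a performance acceptance criterion expressed as a measurable check.
# `search` and CATALOG are placeholders; a real system would hit an index or
# a service, and the latency budget comes from the team's agreed criterion.

CATALOG = [f"product-{i}" for i in range(100_000)]

def search(query):
    # Placeholder implementation: naive substring scan over the catalog.
    return [p for p in CATALOG if query in p]

def meets_latency_budget(query, budget_ms=250):
    start = time.perf_counter()
    results = search(query)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return elapsed_ms <= budget_ms, elapsed_ms, len(results)

ok, elapsed_ms, n = meets_latency_budget("product-42")
print(f"{n} results in {elapsed_ms:.1f} ms -> {'PASS' if ok else 'FAIL'}")
```

The point isn't the stub itself; it's that once the number is written down, "is it acceptable?" stops being a matter of opinion at sprint review.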
So when you're telling me I can't do it, it's not pretty. But it's how you have healthy product discussions. Next? Yes? So the question is around integration tests versus acceptance tests: they have different processes, but technically they're very similar, so can one be substituted for the other? These are names, these are labels we put on them. Just make sure you as a team agree on what each one means. When you say unit test, everybody had better know what that is. When you say functional test, everybody on the team should know what that means. When someone says user-level test, or acceptance test, or integration or system test, just make sure you know what you're talking about and that you're in agreement. You can call it star-spangled nonsense for all I care; pick a term for yourselves, just make sure everybody knows what it means. Okay? This is an area where there's a lot of different terminology, and people are not really good at defining it; it's an "I know it when I see it" kind of thing. But at the same time, it's important as a team, going back to the acceptance criteria, that you know you're talking about the same thing. If you say integration test and what you really mean is a functional test, and I say acceptance test and what I really mean is a user-level test, and we never have the discussion to work out what we both mean, then you're going to deliver a set of integration tests and I'm going to go, that's not what I wanted. Going back, that's definition of done: making sure you're speaking the same language. Others?
So the question is: should you manage your story breakdown based on your resources? Yeah. I mean, here's the reality: if you only have three people on the team, you should be planning for three people. I don't know about you, but I don't get resource fairies that come and drop resources on my teams in the middle of sprints. And you should take into account things like: is someone going to be on an extended leave? I can't tell you how many teams I've seen go through sprint planning and everything, and then the day after the sprint starts: oh, but by the way, John and Jeannie are on leave for two weeks. It's like, okay. Very helpful, you know. And take into account what the demands on people are, who's going to be there; it's the team that's going to tackle this work. And don't expect the person who joins that week to contribute fully to the sprint. That's another thing I've seen: teams go, oh yeah, so-and-so is joining on Monday, they'll be able to handle these three stories. Well, first off, you should not say they'll be able to handle those stories, because they didn't have any say in that conversation. Okay? Just like you wouldn't want the PO to say, well, you guys can handle nine stories, can't you? That's not the PO's place. The person needs to be part of the conversation. So scale things to your resources, and don't expect magic resources to appear. Now, there's a cartoon series in the U.S. that was very popular, very nerdy, called The Far Side. I don't know if people have ever seen it. Look it up, it's kind of fun.
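The availability points above, extended leave and new joiners who can't yet contribute at full speed, can be folded into a quick capacity calculation. A sketch with made-up names and numbers; the ramp-down factor for a new joiner is an assumption, not a figure from the talk:

```python
# Rough sprint-capacity sketch: count the people you actually have, subtract
# known leave, and discount a new joiner's first sprint. All names and
# numbers are illustrative.

def sprint_capacity(people, sprint_days=10):
    """people: list of (name, leave_days, ramp_factor) tuples.
    ramp_factor discounts availability (1.0 = fully ramped)."""
    return sum((sprint_days - leave) * ramp
               for _name, leave, ramp in people)

team = [
    ("Ana",      0, 1.0),  # fully available
    ("John",    10, 1.0),  # on leave the whole sprint
    ("Jeannie",  5, 1.0),  # on leave one week
    ("NewHire",  0, 0.4),  # joined Monday; don't expect full output
]
print(f"Plan for ~{sprint_capacity(team):.0f} person-days, not {len(team) * 10}")
```

Planning against the smaller number is the whole point: no resource fairy makes up the difference mid-sprint.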
But there's one Far Side cartoon in particular. A lot of them were about science, different things in science, and in this one there's a person working at the board, laying out these equations, and in the middle of them it says "then a miracle occurs," with the rest of the equation below it. I sometimes think that in software development we're like that: we've got this clearly defined, we've got that clearly defined, we just need a miracle here to fill it in for us. And we keep expecting a miracle to happen, and they keep not happening, and we keep getting hurt. So let's just quit planning on miracles. There is no resource fairy, there is no testing fairy, there is no QA fairy; let's just quit pretending that there are. Okay, other questions? Well, we promised these would be an hour, and we're at an hour and four minutes, so I'm going to call it for tonight unless there are any other questions. I look forward to seeing you back here next week. Next week we'll be talking about the whole sprint planning process: estimation, sprint planning, all that kind of stuff. And then in week four we'll wrap it all together. So thank you very much, good evening. Oh, by the way, we are hiring here at Carousel. If you'd be interested in being part of our software engineering team, feel free to give us your name, and if you know anybody who would be, point them at us. We actually have a referral bonus for people outside of Carousel who refer people to us. Self-referrals do not get the referral bonus, so if you're going to do that, get a friend to refer you and then collect the money from them.