Good evening everyone, and welcome to Beyond Estimates: Forecasting with Little's Law. We've got Todd Little here to explain it to us, so thanks a lot for joining us today, Todd. All right, over to you.

So welcome to Beyond Estimates: Forecasting with Little's Law. I'm Todd Little, chairman of Kanban University, and I'm going to lead you through a particular perspective on estimation and how we can apply things like Little's Law.

So what is Little's Law? It's not my law. I have lots of laws, but this isn't one of them. It comes from John D. C. Little, who is best known for this relation. Little worked in queuing theory, looking at how long things take as they move through queues, and he came up with this observation: the average number of customers in the system over some interval is equal to their average arrival rate multiplied by their average time in the system. So the average queue length L equals the average arrival rate times the average wait time. A fairly straightforward statement, and it has proven quite valuable for understanding the relationship between these three variables.

What we find frequently in the delivery of knowledge work is that this gets reformulated as: the average delivery rate equals the average work in progress divided by the average lead time. If you think about it, it's basically a rate equation. Going back to high school algebra, the slope of the line, which is the throughput (the average arrival or departure rate), equals the rise (the work in progress, or average number of customers in the system) over the run (the time in system, or average lead time).

Now, there are a number of assumptions built into Little's Law to make it work. One is that the average arrival rate equals the average departure rate, which is to say that whatever comes into the system actually goes out of the system, and the pace at which work enters is fairly consistent with the pace at which it leaves. So we have steady-state flow; that is a very important base assumption. We assume that tasks entering the system will eventually exit the system, otherwise the arrival and departure rates would not match. We are not expecting large variances in work in progress between the beginning and the end of the time period examined; the average age of the work in progress should stay about the same, neither increasing nor decreasing. And of course you need to use consistent units throughout to measure cycle time, WIP, and throughput. Some fairly core assumptions.

But if we stand by those core assumptions, it turns out to be quite a useful tool. For one thing, it tells us what happens if we reduce work in progress. Suppose we reduce our WIP: for the same throughput, we would expect a reduction in lead time. This is a fairly important concept in the Kanban world, and it is why we limit work in progress: if we can limit WIP, we get things through the system faster, because they don't sit in the system, and the throughput stays the same.
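As a minimal sketch of that relationship, with purely illustrative numbers (none of these figures come from the talk), the queuing form and the flow form are just rearrangements of one another:

```python
# Little's Law in its queuing form and its flow form (illustrative numbers only).

arrival_rate = 5.0       # items arriving per week (steady state assumed)
time_in_system = 3.0     # average weeks an item spends in the system

# Queuing form: L = lambda * W
avg_items_in_system = arrival_rate * time_in_system     # 15 items of WIP

# Flow form: delivery rate = WIP / lead time
avg_wip = avg_items_in_system                           # 15 items
avg_lead_time = time_in_system                          # 3 weeks
delivery_rate = avg_wip / avg_lead_time                 # 5 items per week

# Halving WIP at the same throughput halves the lead time.
new_lead_time = (avg_wip / 2) / delivery_rate           # 1.5 weeks
print(delivery_rate, new_lead_time)
```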
Now, that's not the whole story. Most of the time, what we find in overloaded situations is that there is so much multitasking and other things going on that when we limit WIP, throughput actually increases. That's not specifically part of Little's Law; it's just something we have found in the Kanban world, and it's another reason why limiting work in progress is good: it reduces multitasking and other issues. But the core relationships of the law are quite useful on their own.

The other thing we look at, in the Kanban world in particular but also elsewhere, is the cumulative flow diagram. The nice thing about a cumulative flow diagram, since it tracks arrival rates and departure rates over time, is that we can essentially read Little's Law straight off it: the average lead time is the horizontal distance between the arrival line and the delivery line, the average work in progress is the vertical distance between the two, and the average delivery rate, or average throughput, is the slope of the line. Despite all the assumptions and restrictions I mentioned earlier, this formula has proven quite useful for understanding how flow systems behave as you change WIP. And what we have found is that if we have this steady state, we can actually use the average delivery rate as a means of predicting and forecasting how our deliveries will continue.
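A small sketch of how those three quantities can be read off cumulative arrival and departure counts, which are the two curves of a cumulative flow diagram; the weekly counts below are made up purely for illustration:

```python
# Reading Little's Law off cumulative flow data (made-up weekly counts).
cumulative_arrivals   = [4, 9, 14, 20, 25, 30]   # items that have entered, week by week
cumulative_departures = [0, 3,  8, 13, 19, 24]   # items delivered, week by week

weeks = len(cumulative_arrivals)

# Average WIP: the vertical gap between the two curves.
avg_wip = sum(a - d for a, d in zip(cumulative_arrivals, cumulative_departures)) / weeks

# Average throughput: the slope of the departure curve.
avg_throughput = cumulative_departures[-1] / weeks       # items per week

# Average lead time: the horizontal gap, recovered via Little's Law.
avg_lead_time = avg_wip / avg_throughput                 # weeks

print(round(avg_wip, 1), avg_throughput, round(avg_lead_time, 1))
```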
So we look at that and we say, that's interesting, but where are the estimates? And what you'll see is that there generally aren't any. Little's Law does not require sizing or estimates in order to be useful. We don't usually have estimates here; we're basically assuming that, under the assumptions of Little's Law, we can track and forecast from the work itself.

Does it really work, though? There is an exercise I normally run live that I won't do with you today, but I'll show you what I frequently do. When I'm looking at estimates, I bring around a jar of jelly beans and have the group estimate how many are in it. I've run this exercise enough times that I know what the results will be, so I'll just share those later.

We are rather obsessed with estimation, and one of the problems is that we think we're much better estimators than we actually are. The jelly bean jar is an example I use to demonstrate that, and I'll come back to it. We're so obsessed with estimation that we consider a major part of our job to be either creating estimates or delivering against someone else's estimates. That is what has been ingrained in the way we work, particularly in the software world: we need estimates, so give us estimates. And the challenge is that it's difficult to get a man to understand something when his salary depends on not understanding it. When we're in a business where we think we have to work against estimates, it becomes very emotional. So let's look at what the case actually is: how good are we, why are we estimating, what are we using those estimates for, and which of them make sense and which maybe don't.

Basically it comes down to decisions. What sort of decisions are we making? Early on, at the very beginning, we're making a decision to sanction the project or not. We're looking at it from a gross perspective: what's the cost, what's the value, do we have the right investment to make this work? So macro-level estimation is happening there, and it's a decision point: do we do it or not. Then, as we steer towards the release, we're asking: are we on target to meet whatever commitments we may have made, and what might we be able to do about it? And then, in the case of Scrum for example, we're trying to determine in a tactical sense whether we can manage our iterations: right now we're making our iteration commitment against our capacity. These are questions frequently asked in Scrum; in the Kanban world you may or may not be making these types of considerations.

What is it that we estimate? For the release we're looking at duration, effort, and cost. For stories, more in the Scrum world, we're looking at story points and t-shirt sizes, and those who estimate tasks tend to look at hours. And what are the challenges we typically face in estimation? I'm going to skip this one, because we've had some technical difficulties getting the sound to come across.

So the point of the talk today is to ask: can we use Little's Law for forecasting? Can we essentially stop estimating story points, since we don't require them in the Little's Law formulation, and instead simply count the number of completed stories per time period and forecast using a burn-up or burn-down based on throughput? In other words, use Little's Law. How would that work?

So we collected and analyzed some real data, something I did with a partner, Chris Verhoef, a few years back. We looked at data collected by Vasco Duarte: 55 projects from nine companies, and the data is available at the link. And just so you have the definitions as I go through this: when I talk about velocity, I mean story points delivered in an iteration, or the average points delivered over all iterations; when I talk about throughput, I mean the number of stories delivered over a period of time, or the average number of stories delivered over all time. So the question is: if we just use throughput and Little's Law to forecast completion, is it any more or less accurate than using velocity and story points?

First, let's go back to the basic burn-up. What does a basic burn-up chart look like? In this chart I've normalized everything to the range zero to one, so we have percentage of time on the bottom and percentage done on the vertical axis. We start work on the system and start burning up the amount of stories delivered. Our average velocity can then be calculated from what we've done so far over the time so far, and we can extrapolate it out. Based on this extrapolation we can see when we think we're going to be done. Had this been perfect data, the extrapolation would run straight to the (1, 1) coordinate, but it isn't perfect, so we're off a little bit. That's fine; a little bit off is expected. This is what we see from a basic burn-up.
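The same extrapolation works whether you burn up story points or simply count completed stories; here is a sketch of the counting (throughput) version, with hypothetical counts and backlog size:

```python
# Throughput-based burn-up forecast (hypothetical numbers).
completed_per_iteration = [6, 4, 7, 5, 6]    # stories finished in each iteration so far
total_stories = 80                           # current scope

done_so_far = sum(completed_per_iteration)                    # 28 stories
avg_throughput = done_so_far / len(completed_per_iteration)   # 5.6 stories per iteration

remaining = total_stories - done_so_far
iterations_remaining = remaining / avg_throughput             # about 9.3 more iterations
print(avg_throughput, round(iterations_remaining, 1))
```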
Now, if we plot several of these for story points, here are some samples from the data set, the first three projects in the collection. This is the typical thing we see for a story-point burn-up. How does it look if we use throughput instead? It turns out we get something very similar. There's not a lot of difference here; to the untrained eye, or even the trained eye, it looks pretty much the same. So initially it looks like, hey, this may work: maybe we can use throughput and get the same kind of result. But we want to look deeper than that, with a bit more statistical analysis of how things actually behave.

One additional item before we do: we noticed in this data set that it was fairly common for there to be a sort of hardening phase, roughly 2 to 12 percent of the schedule at the end, for about 50 percent of the projects. So these teams weren't yet at the point where they had completely eliminated the need for some sort of end-game phase. Just an interesting tidbit that came out of the data.

Now a little bit of statistics; I'll try not to go too deep, just the very basics. The reality is that when we're looking at how long things will take, anything is a probability distribution. A fairly typical curve is shown here just for reference: it shows the probability of outcomes over time, and what we typically find with complex work is that the distribution tends to be skewed out to the right.

You'll see some numbers like p10 and p90. What do they mean? P10 and p90 are terms that are not used so much within our industry but are common in others; I come from oil and gas, where they're used all the time, and they're also used for wealth distribution. We're looking at the ratio between the 90th percentile and the 10th percentile: p10 and p90 mean the 10th percentile and the 90th percentile, or a 90 percent probability and a 10 percent probability. If we look at a collection of data here, we've got ten data points, and we sort them. The p90 is not the extreme and the p10 is not the extreme, but one value in from each extreme. So here we have a 12 and a 2, and the ratio of the p90 to the p10 is 12 over 2, or 6. Pretty straightforward.

The reason we do this is that we don't really want the extremes: we want something that tells us the band of the data without being driven by outliers. In the wealth-distribution world, we'd be looking at people like lawyers and doctors relative to people on minimum wage, but not at the extremes, which would be beggars on one end or the filthy rich on the other. We're looking at the high and the low, but not the extremes, and that gives a good indication of the span of the data. In this case the span is 6, ranging from two to twelve.
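To mirror that ten-point example, here is a short sketch using the same convention as the talk, taking the value one in from each extreme of the sorted sample as p10 and p90 (a library call such as numpy.percentile would interpolate slightly differently); the raw numbers are invented:

```python
# p10, p90 and their ratio for a ten-point sample, using the convention of
# taking the value one in from each extreme (invented numbers).
data = [5, 12, 2, 7, 15, 4, 8, 1, 10, 6]

sample = sorted(data)     # [1, 2, 4, 5, 6, 7, 8, 10, 12, 15]
p10 = sample[1]           # one up from the minimum   -> 2
p90 = sample[-2]          # one down from the maximum -> 12
ratio = p90 / p10         # 6.0, the span of the data without the extremes
print(p10, p90, ratio)
```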
We can apply the same idea to what we're predicting. Given where we are, we can do some forecasting and ask: what's our 10 percent confidence and what's our 90 percent confidence? In this case, perhaps we have six months as our best case, our 10th percentile, meaning a 10 percent probability of being done in six months, and a 90 percent probability of being done in twelve months or less. That gives a p90 to p10 ratio of two. Okay, let's move on.

I told you I would come back to the jelly beans. What do I see when I actually run this exercise of estimating the number of jelly beans in the jar? Fairly consistently I get a p10 of about 60 and a p90 of about 360, which gives a p90 to p10 ratio of about six, and it's fairly consistently in the range of four to six. Why do I do this? Because most people expect the estimates to sit in a much narrower band than that, and the exercise exposes the fact that our estimation capability, even for something as simple as jelly beans, actually has quite a wide range. And the fact is that estimating jelly beans ought to be a whole lot easier, because we can see them, than estimating how long knowledge work will take. Knowledge work is far more complex, with far more dimensionality, than a simple three-dimensional visual. So we're not so good at estimating; but maybe, if we use the data, we can forecast better.

Here's what we did. We looked at the data across the 45 projects and asked how accurate the forecasts were: what sort of ranges did we have between the p90 and the p10? To re-emphasize what we're doing before we get to the results: we took data at various points in time. For example, at the 30 percent completion point we might have something like a p90 of 1.5 and a p10 of 0.6 for total duration, normalized. If you feel more comfortable with real units, multiply by ten: 15 months or 6 months of total time. But what we really care about is how much is remaining, so we subtract off the three months already elapsed and end up with 1.2 over 0.3, a p90 to p10 of 4 for the remaining time. Since we're looking at a forecast, we're asking how long it will take from this point forward, and that's what we used in our reporting.

When we looked at the data, we discovered that velocity and throughput provide comparable accuracy in forecasting release dates. The p90 to p10 ranges for velocity and throughput were almost identical, right around four. Sometimes throughput was a little better, sometimes velocity was a little better; it was statistically inconclusive whether one was better than the other. You're basically getting the same information from a throughput forecast as from a velocity forecast. That was an interesting insight from the data.

Then we asked another question: do we get better at estimating the further we go into a project? Does our velocity become more stable over time? And in fact, no: velocity predictability does not get better over time.
We aren't becoming better estimators, and we aren't getting better predictability of velocity, so a lot of the claims that velocity will stabilize over time simply don't show up in the data we had. In fact, to a certain extent it gets worse: our predictability goes down. You can see that the solid red line is going up, which means the range of our forecasts is getting wider, meaning our predictability is lower.

Some other conclusions from the data. First, and I think this is pretty important: story point estimation didn't provide any improvement in predictive power over just counting stories. And velocity and throughput both showed high variance, and that variance does not decrease over time; it actually increased. Which means that the things we frequently count on are hopeful assumptions that just turn out not to be the case. So the reality, given what we've seen in the data, is that throughput with Little's Law is every bit as good as going through the whole task of estimating the stories, which adds no additional value. There was also the hardening phase I mentioned, which is something we can take into account as well.

And while throughput is as good a predictor as velocity, neither is a fantastic predictor. We're not going to magically get answers that are accurate. What we can do is understand what the range is and manage that range accordingly. Our p90 to p10 ratio, if we use the forecasting approach based on throughput, is about 3.5. Roughly speaking, if our forecast is six months remaining, then the p10 to p90 band runs from about 3.2 months to about 11.2 months. You have to tune this for your own situation, but it gives the indication: this is about as good as we're going to get in terms of being able to predict, and going through a detailed estimation activity doesn't improve it, based on the data.
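One way to turn a central remaining-time forecast and a p90/p10 ratio into such a band is to assume the spread is roughly symmetric around the median on a log scale; that assumption is mine, not stated in the talk, but it reproduces the 3.2-to-11.2-month example:

```python
import math

# Converting a central forecast plus a p90/p10 ratio into a band, assuming the
# band sits a factor of sqrt(ratio) on either side of the median (my assumption).
median_remaining = 6.0    # months, the throughput-based central forecast
p90_over_p10 = 3.5        # spread typical of these forecasts, per the data

half_spread = math.sqrt(p90_over_p10)
p10 = median_remaining / half_spread    # ~3.2 months (optimistic end)
p90 = median_remaining * half_spread    # ~11.2 months (conservative end)
print(round(p10, 1), round(p90, 1))
```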
So that's what we found in the data. The next question was: is there a way to model this from a simulation perspective? Using a simulation, can we first match the data, and then, once we've matched it, explore the range of conditions under which it might actually be worthwhile to estimate?

We start with how our stories come in: a probability distribution for story size. But remember, story points are not size; story points are estimated size, and estimates are not accurate. So we have to multiply that by a distribution of estimation accuracy. The combination of these two distributions can then be fed into a Monte Carlo engine, which gives us a distribution of projections coming out, and we look at the difference between what velocity would tell us and what throughput would tell us. What we find is that we can build a simulation that produces results quite similar to the measured data. We can tune it by feeding the real data back in and running it through the simulation, and the simulation models quite accurately what the original data showed. So we have decent confidence that we can match the existing data.
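Here is a toy version of that kind of Monte Carlo setup: draw a true size and an estimation error for each story from assumed lognormal distributions, then compare a velocity (points-based) forecast with a throughput (count-based) forecast over many trials. The distributions and parameters are my own assumptions for illustration, not the ones used in the study.

```python
import random

def run_trial(n_stories=60, done_fraction=0.4):
    # True effort per story and the team's estimate of it (both assumed lognormal).
    true_effort = [random.lognormvariate(0, 0.6) for _ in range(n_stories)]
    est_points  = [e * random.lognormvariate(0, 0.5) for e in true_effort]

    k = int(n_stories * done_fraction)          # stories completed so far
    elapsed = sum(true_effort[:k])              # time spent so far
    actual_remaining = sum(true_effort[k:])     # what the rest will really take

    # Velocity forecast: remaining points at the points-per-time rate so far.
    velocity = sum(est_points[:k]) / elapsed
    velocity_forecast = sum(est_points[k:]) / velocity

    # Throughput forecast: remaining count at the stories-per-time rate so far.
    throughput = k / elapsed
    throughput_forecast = (n_stories - k) / throughput

    return velocity_forecast / actual_remaining, throughput_forecast / actual_remaining

def p90_over_p10(values):
    values = sorted(values)
    return values[int(0.9 * len(values))] / values[int(0.1 * len(values))]

trials = [run_trial() for _ in range(5000)]
print("velocity   p90/p10:", round(p90_over_p10([v for v, _ in trials]), 2))
print("throughput p90/p10:", round(p90_over_p10([t for _, t in trials]), 2))
```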
Let's now expand on that. What do we see if we make the distributions wider, with higher p90 to p10 ratios, or narrower, with much higher accuracy or more similarly sized stories? What difference does that make?

Looking first at the observed data: throughput and velocity give nearly identical results. In the actual data, throughput was slightly better than velocity, but statistically I would say they're roughly the same; there's nothing to indicate that one was better. Although if you read the raw numbers literally, you could conclude that throughput is slightly better than velocity, in other words that our estimates actually made things worse. In the simulations we again found they're about the same; if anything, velocity was a tiny bit better in the simulations, but not by much, and not enough to feel it was worth the effort put into it.

So what if we have a very large distribution in story size? What's the impact of that? Well, if there is a very large spread in story size, then the simulations did show that velocity is better. So there is a point at which, if you have a huge difference in story size, estimating helps. If you have what's probably considered a normal, more regular distribution in story size, where the stories are roughly comparable or range in size from, say, a one to a ten, it's not going to help much; throughput will be just as good.

What if we're really good at estimation? What if our estimation range is very tight? It turns out that this helps both throughput and velocity. It makes the overall prediction better, which is great, but it's not the panacea. And I think it's probably not achievable anyway. I've looked at a lot of estimation data, and the reality is that despite all the effort we've put into getting better at a priori estimation, estimating before delivery, the p90 to p10 range is consistently upwards of four, about the same as our jelly bean exercise. At that level it's not helping us; it's not adding information or value to our decision-making process. Utilizing throughput, and forecasting based on throughput, is at least as good and probably more valuable, because it forces us to live by the data rather than putting us into a mode where we're working from wishes and hopes.

We also took a quick look at whether it made a difference whether the estimation used Fibonacci numbers or buckets of powers of two, and it turns out that didn't really impact anything. So if you find that you still want to be doing estimates, and you want to use planning poker with the Fibonacci sequence or power-of-two buckets, that's fine; none of it hurts you, and none of it helps you.

So what does it tell us? The story I like is estimating mixed nuts. I don't really care about the difference between peanuts, cashews, and Brazil nuts; that doesn't matter. What I do care about is whether I've got any coconuts. The big things are the things that will impact us. But if I've got a reasonably regular distribution, my Little's Law forecasting is going to be perfectly acceptable, as long as the other characteristics are there. And it turns out that most of the assumptions of Little's Law are exactly the same assumptions you'd need for velocity to work. The only real difference is that with velocity we think we add information by estimating story points, and what we find in general is that that's probably not the case. The other thing the simulation tells us is that if I do have coconuts, I should ask whether I can split them into smaller nuts. Usually we can: in knowledge work, and particularly in software development, it's rare that we can only work on a coconut as a whole. We can usually split it down, and that splitting is a good idea.

What else does it tell us about estimation? For decisions to steer towards the release, velocity and throughput are equally good, and equally bad, predictors. But they're better than nothing, so they are something that can help us. Making use of velocity, understanding how the scope line and the velocity line intersect, is perfectly valid; you can look at throughput the same way. They're both useful tools, but they do have their limitations, so you want to understand what those limitations are.

What does it tell us about decisions within iterations? My view is that, given that story points don't help us much, I don't think task estimation is adding any value either, because it's solving the wrong problem. It's not solving the high-level problem, which is: what business decisions are you actually making?
Part of this is that in a Kanban world we don't have iterations, and we're not really trying to meet what might be considered an artificial deadline, so I think the value of task estimation is quite low. Basically, once we've decided we're going to do a story, we're going to do it; why bother estimating once you've already made that decision?

What does it tell us about macro estimation decisions at project sanction? Some level of macro estimation of cost and benefit is likely necessary for your business decisions. You're making a decision up front with essentially no data to start from, so you'll probably need some level of macro estimate. But it should be just that, a macro estimate. You shouldn't be spending more time estimating the cost than you spend estimating the value. And if the decision is "we're going to do this regardless", you shouldn't be bothering with estimates at all.

How does this look next to other research? Just quickly: I wrote an article in IEEE Software based on data from my company, Landmark Graphics, gathered back around 2000 and published in 2006, looking particularly at estimation accuracy and how it related to the cone of uncertainty. At a high level, looking at actual versus initial estimate, my data was scattered, and when I compared it with data Tom DeMarco had published it looked very similar. So it appears we've got some trends that are fairly common across the industry. The p90 to p10 ratio of this data was about four to one, the same kind of range we see all the time. The data from Steve McConnell is similar: not only do we have high scatter, we are also quite optimistic, with very little, almost nothing, below the line. Almost all the data points took longer to complete than the estimate.

So if we really look at how good we are at estimating, this is roughly the state of the industry for software development: a 10 to 20 percent chance of delivering on time, and if we wanted a 90 percent chance of delivering on time we'd need to allow about four times the estimate. That's where we are. What Little's Law gives us is something in this range, or a bit better, but what it's really doing is taking advantage of our real data. It's also de-biasing us: with estimation we tend to have an optimism bias, and Little's Law takes that optimism bias away. It relies on the data to tell us how things are going, rather than on our optimistic projection.
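A hedged sketch of what de-biasing with data might look like in practice: instead of trusting the raw estimate, scale it by percentiles of historical actual-to-estimate ratios. The historical ratios below are invented for illustration.

```python
# De-biasing an estimate with historical actual/estimate ratios (invented data).
historical_ratios = [0.9, 1.1, 1.3, 1.4, 1.6, 1.8, 2.2, 2.7, 3.3, 4.1]

def percentile(values, p):
    values = sorted(values)
    return values[min(int(p * len(values)), len(values) - 1)]

raw_estimate_months = 6.0
p50_ratio = percentile(historical_ratios, 0.5)   # 1.8
p90_ratio = percentile(historical_ratios, 0.9)   # 4.1

print("50% confidence:", round(raw_estimate_months * p50_ratio, 1), "months")
print("90% confidence:", round(raw_estimate_months * p90_ratio, 1), "months")
```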
There's also a great study that Magne Jørgensen did in 2013, and I love this one because it's the first time I've seen a situation where they not only had estimates made but actually had the work done by multiple vendors: 16 initial estimates were made, and six of the bidders actually did the software delivery as well. So we have data both on the estimates and on the deliveries. It turns out the range between the highest and the lowest estimate was about eight to one. The actual overrun range went from slightly better than estimated, there was one case where the work took less than the estimate, which is quite rare, up to the high end, and that range between the low and the high was about four to one. An interesting tidbit is that the lowest bidder actually came in under their bid. And the actual performance range: the worst case took 18 times the effort of the best. These are the types of realities we face in our work in software and in knowledge work, and they're why we can take advantage of something like Little's Law.

So can Little's Law really work in practice? Here's an example from a company that went down this route. They adopted this approach because they had been struggling and struggling with estimates; it was taking a great deal of time and they didn't feel they were making any progress. So instead they said: let's not worry about estimates, let's just get things started, and once we start working, let's see what the trends show. What they saw was a situation like this, the blue line and the orange line: their scope was growing as fast as they were delivering, and in fact it was getting worse. You can make a very clear assessment here and say: I've got a problem; if this continues, we're never going to get done. So what did they do? They looked at what would happen if they improved some things, and the best they could hope for got them up to this extended green line; still not enough to make it work. So they needed to make some decisions. They said: okay, we're going to squeeze down hard on the scope, and we're going to add some people to the project, and that's going to be the means by which we bring this together. And eventually that's what happened. They made decisions, and that's what we're really trying to use estimates for: what actions do we need to take? Little's Law provided that kind of capability quite well in this situation, helping them see where things were, what the decisions were, and how to steer appropriately.
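A small sketch of the kind of check that team was effectively making: fit the recent trend of both the scope line and the delivery line and ask whether, and when, they intersect. The weekly counts are made up.

```python
# Will the delivery line ever catch the scope line? (made-up weekly counts)
scope     = [100, 104, 109, 115, 120, 126]    # total committed stories, week by week
delivered = [ 10,  14,  17,  21,  24,  28]    # cumulative stories delivered

weeks_observed = len(scope) - 1
scope_rate    = (scope[-1] - scope[0]) / weeks_observed          # ~5.2 new stories/week
delivery_rate = (delivered[-1] - delivered[0]) / weeks_observed  # 3.6 stories/week

gap = scope[-1] - delivered[-1]
if delivery_rate <= scope_rate:
    print("At current trends the gap of", gap, "stories never closes.")
else:
    print("Forecast completion in about",
          round(gap / (delivery_rate - scope_rate), 1), "weeks.")
```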
So here is my contact information; I'm happy to have you reach out to discuss any of this, or anything else, at any point. And I think we've got just a few minutes left for some Q&A.

Right, so the first question is: what is the best way to balance the rate of delivery to demand?

Okay, a couple of things. Kanban is my answer to this; that's what we do in Kanban. We try to make sure we've got consistency of flow; it's a central part of what we do. We try to balance by limiting WIP, in order to make sure that what we are working on is actually going to make it through the system. We might have things backed up before a commitment point; in Kanban we have what we call a commitment point, and when we commit to something, that's the point at which we have agreement that yes, this is a piece of work we agree we want to deliver and agree we will deliver. So we keep things limited in the pipeline, and limiting things in the pipeline gives us the means to get predictability. It's all about maintaining flow, consistency of flow. If we can get to a system with consistent flow, we can have that balance, and there are a lot of tools in Kanban to help with that: ways we can shape the demand coming in, or make adjustments to the system to increase our delivery rate. But yes, getting it in balance is key, and that's what gives us increased predictability.

Okay, that's great. This one's interesting, from Sri Devi: what estimation model would you suggest for the macro or initial estimation we do when the Scrum team is not there yet?

Yes, so that's really a decision at the business level, and it's a high-level perspective. The key thing about that initial assessment is we have to realize we're going to be way off on both our value estimate and our sizing estimate. Ultimately what we're trying to do is make a go/no-go decision. The problem is that frequently, once that early estimate is made, it gets cast in stone, and that becomes problematic. So my answer is that it doesn't matter too much which approach is used. I don't think a detailed analysis, a work breakdown structure or anything like that, is justified, because there's no similar activity done on the value side; there's no point in making the estimation on the cost side any higher fidelity than what's done on the value side. And we know how to do the value side: it's often a gut feel, and that's okay. Companies need to learn how to do that. I don't think we have a magic bullet, and there is no magic bullet. We've been trying to get better at estimation for 50 years, and fundamentally things have not changed drastically in that time; the range of estimation error is pretty much the same, which tells me we're well into diminishing returns in trying to get hugely better at it. What we have gotten better at is building more predictable systems, and that's the kind of thing agility helps us with; certainly what Kanban tries to drive is increased predictability by utilizing data. So at that early level it's gut feel to a large extent. The real point is: don't use an approach that pretends an accuracy we know we don't have, don't get things locked in, and don't make or take commitments about time at that level without realizing the consequences. What you are making is a decision.
You're deciding: yes, we're going to move forward with it. An approach I've also used effectively is to commit in smaller chunks: make the decision to go forward, but do so with a first-phase approach. Commit to smaller chunks and then look at incremental rolling funding. These are all things that can be done at that level, but detailed estimates will just make us think we know more than we do.

Great, Todd. Can we squeeze in another one? We're almost out of time. Yeah, absolutely. Okay. So this one is from Tala, and they're asking: how does the effectiveness of Little's Law forecasting change when the number of teams is really large? A large number of teams means a large backlog, so how does that work?

Yeah, so what you're looking at there is aggregation. The thing you have to look at when you have multiple teams involved, and it's one of the assumptions of Little's Law you do have to watch carefully, is that they all have to be behaving in a similar fashion. You have to be careful about taking Little's Law behavior from one team and applying it to another; each team almost has to be dealt with independently. Now, if those teams are similar enough, and you have data demonstrating that they behave similarly, then you can aggregate them accordingly. But you might have a situation where one team is working on different types of items than another, and those items are effectively not consistent units. This is a case where story size might matter, because if one team's items are consistently large and another's are consistently small, you won't really be able to compare them, and Little's Law would say you can't aggregate them. So you can do it, but you have to figure out how to pull them together. I tend to start by looking at each individual team and how it's progressing. You might be able to aggregate, but if you do, make sure you're aware of the additional caveats before doing it. And then what you look at is just like in this slide: how is my committed backlog growing compared with my delivery rate, am I on target to deliver, and what decisions or actions may I need to take in order to get the results I want?

Thanks a lot, Todd, for this session. Thank you very much. Thank you.