Welcome to today's session on Predictably Agile by Karl Scotland. Karl, so happy you could join us here today. A little bit about Karl: he heads the TEKsystems Global Services Agile Transformation Services practice in EMEA, and he is passionate about helping businesses become learning organizations. He was awarded the Brickell Key Community Contribution Award at the 2013 Lean Kanban North America conference, is a founding member of the Lean Systems Society, and is a pioneer of using Kanban systems and strategy deployment for product development. So Karl, without further ado, over to you.

Okay, thanks very much, Isha. Let me share my screen. Welcome everybody, and thanks for joining this session. It's great for me to be back at Agile India again, even if I'm not there in person; I've not been able to travel. I was just looking back, and I think 2013 was the last time I was at the Agile India conference, back when we were able to travel internationally, which at nine years seems a long time ago. Hopefully one day we'll be able to travel again and I'll be able to come back and be there in person. As I said, I'm currently working for TEKsystems Global Services. We are, as the name says, a global organization, and we work with organizations all over the world. I myself am based in the UK, down in Brighton on the south coast, but the great thing about working for somewhere like TEKsystems is that I do get to work with people all over the world. In fact, early this year I finished an engagement where we had a great team of Agile coaches based in India, and I really enjoyed that.

So we're going to talk about Predictably Agile today. To kick off and get going, let's do a Mentimeter. I'm going to switch my share over to Mentimeter, and you can either scan the QR code or go to menti.com and use the code 34112028. I'll hold this slide for a minute to allow people to get into Menti, and I'll give you the question now so you can start thinking about it: when I talk about Predictably Agile, what kind of predictable? What does predictable mean to you when you think of the phrase Predictably Agile? I'm going to flip over now, so you should be able to enter some text: a few words, a really short phrase. What comes to mind when you hear the phrase Predictably Agile? What kind of predictably? Hopefully we'll start seeing some answers coming in soon. Great, thank you. Meeting goals consistently; I like the word consistent there. Assuming; yeah, interesting. Vision is clear. Predictable deliverables, yes. Predicting impediments as well as the goals for the future, great. Delivering goals on time. Forecasting; yes, interesting, and we might come back to that word a little later. No spillovers, meaning delivering everything you said you would do and nothing coming in late. Quality, yeah. Great, thanks. Keep putting those in. I'm going to switch back to the slides now, but I'm always interested to find out what other people think. I think when we talk about predictability, it's important first to understand what we mean by predictability and the different ways of thinking about it. So this is the way I think about it, and I went and pulled out a definition from the Merriam-Webster dictionary.
And I think what's interesting here is the two definitions it gives. In the first one, the bit that jumps out at me is "capable of being predicted: able to be known, seen or declared in advance". I find that when a lot of organizations say they want to be more predictable, what they want is to be able to declare in advance what they're going to get and when they're going to get it. That, to me, is the most common interpretation of predictability: the idea of forecasting. Can we forecast, and use the forecast to declare in advance? Somebody said no spillover, so we declare in advance what we're going to get and nothing spills over beyond that. But the other definition is the one I find more interesting: "behaving in a way that is expected". The things I'm going to talk about, when I talk about Predictably Agile, are more about predictability in terms of the system, where the system is our development system, or the organization as a system. Does our system behave in a way that is expected, such that we can have an understanding of what we're going to get out of it? So for me, it's more useful to think of predictability in Agile as using Agile techniques to help our organization behave in a way that is expected, and less useful to think of Agile as helping us declare in advance what we're going to get. That second framing starts taking us back towards waterfall, because we start trying to figure things out in advance, doing lots of analysis and lots of design in order to do it. Agile shifts us, I think, towards definition two here and away from definition one.

But why be interested in predictability in the first place? The reason I started digging into this and exploring it is that when I start working with organizations, they generally want to go through an Agile transformation because they want more business success. They're failing to deliver, or they're delivering poor quality; the things you see on this slide are all elements they want to improve in order to deliver business success. So we can talk about these six, and for five of them I've always had a good answer. If I go into an organization and talk about these business impacts, the impacts we want to have in helping the business be more successful, then for something like responsiveness, which I define as "can work be delivered quickly?", we can look at lead time. Lead time is a nice, easy measure of responsiveness. Sustainability: can work be delivered over the long term? I can measure things like employee engagement, the sort of HR measures we get from employee surveys, as a measure of sustainability, or maybe look at some technical measures of the quality of the code base: is the code base sustainable, do we have a good architecture and a good design? Value: we can get into what the business is doing. How do we define value for the business? Hopefully the business has some way of thinking about value, whether it's number of sales, number of customers, or some kind of market segmentation, and we can usually figure out how to put a number on it. Quality: we can measure escaped defects, or customer support calls, or something like that.
Productivity: how much work do we deliver, and can we deliver work in quantity? That's typically throughput: the number of features we're delivering, the number of stories, the number of releases we're making, or how often we're able to make releases. And then I get to predictability, and I've always struggled with how we actually measure that. How do you put a number on whether you're able to deliver work consistently and reliably? It's interesting that somebody used the word consistently in the Mentimeter poll, because that captures what it is: can we deliver consistently and reliably? How do you measure that? I never had a good answer. So the things I'm going to talk about are the results of my explorations and experiments in that area, to give you some ideas about how you might think about and measure predictability. A lot of this is hypothesis. Some of these ideas I have tried out and used, and I'll talk a little about the results. Some are still absolutely early days; I want to dig into them a little more and try to validate my hypotheses. My hope is that what you take away from this is some ideas you can go off and try out for yourself: take my hypotheses, test them as your own, and maybe even let me know what you learn and discover. The more people playing around with this and trying out different ideas, the more data, feedback and results we'll get, and then we'll see: am I onto something here? Does this make sense? Or have I gone down a rabbit hole and should I give up?

So let's talk about some ideas. Before we get into what I think are good ways to measure predictability, I want to talk about one primary way of doing it: the say-do metric. This is the measure I see most commonly when I talk about predictability, and I don't love it, to be honest, so I thought I should at least explain why. There are two main types I see. One is the idea of velocity predictability: do teams deliver the number of stories, or the number of story points, they said they would in a sprint? You could argue that's a measure of predictability: can they predict what they can deliver in a sprint? It's useful to a degree, but I don't think it's really predictability. Or if it is, then going back to the two dictionary definitions, what you're measuring is whether a team can declare in advance what they're going to do within a two-week period. Now, a lot of teams can't; a lot of teams plan way too much into a sprint. So I'm not saying this has no value at all, but I think it's of very limited value. You're helping a team with their planning, but I don't think it gives you any long-term predictability; any predictability is over that two-week window, so it's very short term. The main thing I find is that it's not really actionable, and it can lead to reactive behavior. The screenshot here is from Microsoft Azure DevOps; I took it from their help page as an interesting example. What I do like about it is that it's working on a count of work items rather than a velocity, so at least we're not measuring predictability in terms of a made-up number.
But you can see, if we take this sprint here: we planned 85, we completed slightly more, a few finished late. Great. So what have we done the next sprint? We've said, well, maybe we can deliver even more. So we've planned even more, but now we've delivered a little less. So the next sprint we plan a little less and deliver less again; then we plan the same and deliver less. There's always going to be some variation in there, and I think with this you can end up overreacting to what you've done in the past. You're potentially tinkering and tampering with the system, rather than thinking about what the business needs long term, and what long-term predictability looks like. So it's useful in some situations, but to me it's not a great measure of predictability. The other one that's common at the moment comes from the Scaled Agile Framework, SAFe: what it calls program predictability. For those of you not familiar with SAFe, I'll describe how it works. At the start of a PI, a program increment, which is typically a number of sprints, around two or three months' worth, you say what your objectives are for that PI. The plus point, and the thing I like about it, is that it's objective-based rather than just "did we deliver stories?", so it's getting more towards the value we're delivering. You put a number on that value, but it's a fairly arbitrary scale of one to ten, so it's a bit of a made-up number for value. Then, at the end of the PI, you assess what you actually delivered in terms of each objective. We can see this top objective here: we said it had a business value of seven, and we think we actually delivered seven, whereas for the fourth one down, we thought it had a business value of ten and we think we only delivered a five. You can then calculate a percentage, what percentage of the planned objectives' value you actually delivered, and chart it over the PIs. The idea is that you're probably not going to be delivering 100%, but you should be delivering about 80%. You can track it by individual team and then look at the overall picture. Again, it's made-up numbers, which I don't like, and I think all you're really measuring is how well you plan what you're going to do within the PI. For a lot of organizations there's value in that, because they clearly have no idea how much to plan, and they over-commit and over-plan. But I'm not entirely sure it really gives the business any long-term predictability. It may be a step towards it, so I wouldn't say don't do this, it's bad. But to me, it's another example of say-do, not really what I think of as predictability. And I quite like this cartoon as a way of summing it up: quite often we're just making up numbers to try and prove a point. They're not telling us anything meaningful, and they're not telling us anything actionable. It's just: hey, it looks like we did what we said we were going to do, using this set of numbers that we made up. I'm not a fan, as you've probably picked up by now.
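To pin down the arithmetic of the SAFe measure just described, here is a minimal sketch in Python. The objective values are made up, echoing the 7/7 and 10/5 examples above, and the "about 80%" target is SAFe's rule of thumb rather than anything this code verifies.

```python
def program_predictability(objectives):
    """Percent of planned business value actually delivered in a PI."""
    planned = sum(p for p, _ in objectives)
    actual = sum(a for _, a in objectives)
    return 100 * actual / planned

# (planned, actual) business value per PI objective -- made-up numbers,
# including the talk's 7/7 and 10/5 examples
objectives = [(7, 7), (8, 6), (5, 5), (10, 5)]
print(f"{program_predictability(objectives):.0f}%")  # -> 77%
```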
Okay, so if I'm not a fan of that, what else can we use? I'm going to introduce the idea of lead time variation, because this is where I started and I think there's some value in it. So I'll talk about lead time variation, and how we might use lead time, and the variation in lead time, to get a better feel for how predictable we are. This starts pulling in some ideas from Deming. By measuring lead time and the variation of lead time, we're looking at the process as a whole rather than just our ability to plan. So this quote here: "a process that is not in statistical control has no definable capability". What's the capability of the system? If we can say that the system is in control, then we can say the performance of the system is predictable. This moves us towards probabilistic thinking and probabilistic forecasting, and away from deterministic planning. Those say-do measures, to me, are deterministic: what did we plan, and did we do it? Versus: what's the probability of us delivering a set of things by a certain date, and what are the risks around that? It's much more systems-thinking based. So here's my hypothesis. This is a lead time, or cycle time, scatterplot; I personally use those terms fairly interchangeably. On the horizontal axis we have time, and you can see dates along there. On the vertical axis we have the cycle time, or lead time: the number of days it takes to complete something. Each dot represents a piece of work. Where the dot sits horizontally tells us the date that piece of work was completed, when it was moved to done; how high up the dot sits tells us how long it took to complete, the elapsed time from starting the piece of work to finishing it. By looking at a time range and all the pieces of work over that range, we see the behavior of the system. I've split the data set into two halves here. On the left-hand side you can see there's wider variability: when I talk about lead time variation, I mean a wider variation of lead times, or cycle times, for the completed work. On the right-hand side you can see it's much narrower; there are fewer items with a long cycle time. So this was my hypothesis: that the system on the left is less predictable than the system on the right, because the right has less variation. That's the basic hypothesis: the less variation you have within your system, using lead time to measure it, the more predictable you are. Now, actually, Deming would say that a system either is predictable or it's not. He's not around to tell us anymore, but that's my understanding of his work, and from chatting with people like Dan Vacanti, who knows a lot more about this than I do, that's his view as well: the idea of becoming "more predictable" doesn't make sense in the way Deming describes it. I struggled with this, because who am I to argue with Deming? He was clearly much cleverer and brighter and knew much more about this than me. But I had this hunch, and the conclusion I came to is that while both of these halves actually are equally predictable, what we mean by "less predictable" here is that the predictability on the left is less useful, and the predictability on the right is more useful.
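As a quick aside on mechanics: here is a minimal sketch of how each dot on a chart like this is derived, assuming hypothetical work items that record a started and a finished date. Real tools pull these timestamps from your board's history.

```python
from datetime import date

# Hypothetical work items with started/finished dates
items = [
    {"id": "A", "started": date(2022, 3, 1),  "finished": date(2022, 3, 4)},
    {"id": "B", "started": date(2022, 3, 2),  "finished": date(2022, 3, 18)},
    {"id": "C", "started": date(2022, 3, 10), "finished": date(2022, 3, 12)},
]

for item in items:
    # x-axis: the date the item was moved to done
    # y-axis: elapsed days from starting the work to finishing it
    cycle_time = (item["finished"] - item["started"]).days
    print(item["id"], item["finished"].isoformat(), cycle_time)
```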
When we talk about helping organizations be more predictable, what we're really saying is that we want the data to be more useful in helping the organization make predictions and do its planning. So let me dig into that a little: what do I mean by more useful and less useful? The other thing we can do with this cycle time chart is look at percentiles. This screenshot is from a tool called ActionableAgile, and I'm using a sample data set from their demo version; it's a really nice tool for visualizing this data and pulling this information out. It shows us that 85% of the work, that is 85% of the dots, sits below this line, which means there's an 85% chance of work being completed in 16 days or less. And that's based on having a stable system: in the past, this data set has delivered 85% of work in 16 days or less; therefore, assuming the system behavior stays the same and the mix of work stays the same, there will be an 85% chance in the future. The other thing we can take from this is that there's a 20% chance of work being completed in two days or less: a fifth of the work gets completed in two days or less. So we've got that 85% chance and the 20% chance. The 85th percentile is a fairly standard number to pick; I picked 20% just because it was the easiest with this data. What I'm really trying to get at is that there's now a gap between the 20% and the 85%, which is 65%: there's a 65% chance of work being completed in between two and 16 days. So when the business asks when the work is going to be done, we can say there's an 85% chance it gets done in 16 days, but you know what, we might deliver it in two. To me, yes, that's predictable, but is it useful? That 16 days could just as easily be 160 days, or 16 weeks; it could be a really, really long time. If the business wants to plan, knowing that you'll probably do something within 16 days is great, but what happens if it turns up in two days? You've wasted those other 14 days, haven't you? So to me that's not so useful: if that window is a much bigger gap, yes, it's predictable, but it's not so useful, whereas if we reduce that gap, maybe it's more useful. So what I did was take the data set, halve the lead time, the cycle time, of every single work item, and plug it back in. Now we can see there's an 85% chance of work being completed in eight days or less, and a 20% chance of work being completed in one day or less, which means that 65% window is now just seven days wide. Equally predictable, but more useful, because now there's a narrower window in which the work might be completed, and therefore the business can plan around it much better. So that was my hypothesis: narrowing this variation gives us a system whose predictability is more useful. Can we measure that? Obviously, the simple way is just to note that 16 days came down to eight days. But I thought there had to be a cleverer, more statistical way of doing it, so I tried a couple of approaches.
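Before we get to those, here is a minimal sketch of reading that percentile window from raw data, using Python's standard library and a made-up list of cycle times; ActionableAgile does the equivalent on your real board data.

```python
import statistics

# Hypothetical cycle times (days) for completed items
cycle_times = [2, 1, 5, 9, 16, 3, 12, 2, 7, 14, 4, 10, 6, 2, 15]

# quantiles(n=100) returns the 1st..99th percentile cut points
cuts = statistics.quantiles(cycle_times, n=100)
p20, p85 = cuts[19], cuts[84]

print(f"85% of items finished in {p85:.1f} days or less")
print(f"20% finished in {p20:.1f} days or less")
print(f"~65% chance a new item lands between {p20:.1f} and {p85:.1f} days")
```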
One is the notion of cycle time inequality. I got this from the idea of income inequality, where they compare the P90 and the P10, the 90th and the 10th percentiles of income: the income below which 90% of the population falls versus the income below which 10% falls. If that gap is narrower, there is more income equality. We could do the same for cycle time: just divide the two. In my data set one, my P85 was 16 and my P20 was two, so I can say my cycle time inequality is eight. I calculated the same for data set two, and this surprised me, though it shouldn't have in hindsight: my P85 is eight, my P20 is one, and my cycle time inequality is exactly the same. (In hindsight the reason is simple: it's a ratio, so halving every cycle time scales the numerator and the denominator equally, and the factor cancels out.) So this was a big moment: my hypothesis had failed. Cycle time inequality is maybe not a good measure. I still think the variability is something interesting, but cycle time inequality is not a good way of measuring it. Hypothesis two, then: look at the coefficient of variation, the ratio of the standard deviation to the mean. Don't worry too much about the detailed maths here; I probably should have warned people at the start of the talk that I'd get into a bit of maths and statistics. I'm not a mathematician myself, so I'm trying to keep this as simple as possible, but we're looking at a ratio here. You can go online and figure out standard deviations and means; trust me on these numbers, I have double-checked them. In data set one, the standard deviation is 7.33 and the mean is 9.37, so the cycle time coefficient of variation is 0.78. If we do the same for data set two, we end up with a cycle time coefficient of variation of 0.76. So it's basically the same; I think the small difference is just down to how some values got rounded when I halved the cycle times. But again, it turns out the cycle time coefficient of variation didn't work either. So now I'm a bit stuck, because I still have this hunch; I'm still clinging on to this hypothesis, but I've not found a good way of measuring it. That led me to two questions. One: why have those numbers turned out the same, even though my hunch still feels correct? And what I came to is that what I really want to do is reduce the items that have a long cycle time, without necessarily reducing the items that already have a short cycle time. So what can we do to tell us when things are taking a long time, so we can worry about those things rather than the things that are already flowing through the system quickly? That brought me to the idea of leading indicators. With leading and lagging indicators, the lag measure is the trailing measure: our ultimate goal. The lag measure might be something around actual predictability; it might be measuring that variability. The lead measures are the ones that actually drive the lag measure: things we can measure that we think will predict whether we achieve the goal, and that we can influence. These definitions come from the book The 4 Disciplines of Execution, which I really like. It takes the idea that the lag measure is the overall goal, but you can't always influence it directly; the lead measures are things you can influence and that you think are worth working on. So what would the lead measures be if we wanted to measure and improve predictability?
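Before moving on, here is a quick demonstration of why both candidate measures had to come out unchanged: each one is a ratio, so multiplying every cycle time by the same factor (halving, in my experiment) scales top and bottom equally and cancels out. Made-up data; any positive scaling factor gives the same result.

```python
import statistics

def percentile(data, pct):
    # cut point below which `pct` percent of the data falls
    return statistics.quantiles(data, n=100)[pct - 1]

def cycle_time_inequality(data):
    return percentile(data, 85) / percentile(data, 20)

def coefficient_of_variation(data):
    return statistics.pstdev(data) / statistics.mean(data)

original = [2, 3, 4, 6, 8, 10, 12, 14, 16, 20]
halved = [x / 2 for x in original]

print(cycle_time_inequality(original), cycle_time_inequality(halved))      # identical
print(coefficient_of_variation(original), coefficient_of_variation(halved))  # identical
```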
So let's go back to our cycle time chart. As I said, what we want to do is make these items, the ones with a long cycle time, shorter; we're not necessarily worried about making the short ones any shorter, because we're quite happy to get something in two days. The focus is on understanding the things that are taking a long time and working on those; if we can do that, it brings down the variability, and does it in a way that's more meaningful. That means we need to pay attention to aging WIP, work in progress. This is another chart from ActionableAgile, and I'll step through it to explain how it works; it's using the same data set. When something's done, we already know that 85% of work is done within 16 days. What we can do is take that and step back through the process, and work out that 85% of the work is in testing within 14 days, 85% of the work gets to dev done within 13 days, 85% gets to dev active within 10 days, and 85% gets to analysis done within five days. So we can see not just how long work takes to get to the end of the system, but how long it takes to go through the various stages in our workflow; and this is, you know, an example workflow, the specific stages don't really matter. We can then look at the actual work in progress and compare it to the historical aging of work: how has work aged historically, and where is the current work? Here we've got three items; the little three on this dot means that single dot represents three pieces of work. These items already have a more than 85% chance of taking longer than 16 days, because they're already slower than most of the work that has typically flowed through the system. This gives us an early indication: if we want to start bringing down our lead times, these three are already going to have a long lead time, so we should pay attention to them. Looking at aging WIP, measuring the age of WIP, and working to reduce it gives us a possible leading measure towards predictability. We can look at things like, in another screenshot from ActionableAgile, what's the WIP age today? This compares average WIP age to last week and last month; I'd love ActionableAgile to give us a better chart to visualize this. We can see that last week our average WIP age was lower, and before that it was lower still, so with this data set our system is becoming less predictable, not more. On WIP age, I was chatting with Dan Vacanti, and his suggestion was: what would it look like if you measured total WIP age? Take all the dots, all of this work in process, aggregate the ages together, and use that as a single metric. So total WIP age, tracked over time, might be another way of looking at it.
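Here is a minimal sketch of aging WIP as a leading indicator, with hypothetical in-progress items and the 16-day P85 from the sample data set; items already older than the historical P85 get flagged, and the total is the single aggregate Dan Vacanti suggested tracking over time.

```python
from datetime import date

P85_DAYS = 16  # historical 85th percentile from the cycle time scatterplot
today = date(2022, 4, 1)

# Hypothetical items currently in progress
in_progress = [
    {"id": "X", "started": date(2022, 3, 10)},
    {"id": "Y", "started": date(2022, 3, 29)},
    {"id": "Z", "started": date(2022, 3, 5)},
]

total_wip_age = 0
for item in in_progress:
    age = (today - item["started"]).days
    total_wip_age += age
    flag = "  <-- already past the 85th percentile" if age > P85_DAYS else ""
    print(f'{item["id"]}: {age} days in progress{flag}')

# one aggregate number to watch week over week
print(f"total WIP age: {total_wip_age} days")
```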
The other way of looking at it, then, is blockers: what are the things that cause a piece of work to age, to stay in progress for a long time? Typically, it's because things get blocked. So the notion here is: if we start tracking blockers, managing them, and doing better analysis on them, then we can start reducing them, and therefore reducing the age of work in progress. Work flows through the system more quickly, and the system becomes more predictable; or rather, the predictability of the system becomes more useful. So here you've got the number of blockers, colour-coded by how long those pieces of work have been blocked. You can see there are about 25 pieces of work that have been blocked for more than 30 days. We want that to come down, and we can see that here, trending over time: down here we've now got fewer than 10 pieces of work that have been blocked for over 30 days, even as the total number of pieces of work with blockers goes up. That gives us an indication that we're managing our blockers. And then blocker flow: this is the net flow of whether we're creating more blockers than we're resolving. It's simply the number of blockers raised minus the number of blockers resolved. Ideally you want to get to a point where this hovers around zero; if you've got a lot of blockers, you probably want to be resolving more than you raise, but once you get into a steady state, blockers should be coming and going fairly smoothly. So measuring blockers is another way of reducing those long-cycle-time pieces of work in progress, so that we can narrow the variation of our lead time, our cycle time; that makes the predictability of our system more useful, and we can start doing really good forecasting with it.
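And here is a minimal sketch of blocker net flow, with hypothetical weekly counts: raised minus resolved per week, accumulated into an open-blocker count that, in a healthy steady state, should hover around a stable level.

```python
# Hypothetical weekly counts of blockers raised and resolved
weeks = [
    {"week": "W1", "raised": 8, "resolved": 3},
    {"week": "W2", "raised": 6, "resolved": 7},
    {"week": "W3", "raised": 4, "resolved": 6},
    {"week": "W4", "raised": 5, "resolved": 5},
]

open_blockers = 20  # assumed starting backlog of open blockers
for w in weeks:
    net = w["raised"] - w["resolved"]  # net flow: ideally hovers around zero
    open_blockers += net
    print(f'{w["week"]}: net flow {net:+d}, open blockers now {open_blockers}')
```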
This, then, is the hypothesis I'm currently working with and testing. I believe that measuring blockers will result in less aged work in progress, which will result in fewer long-lead-time work items, which will make the system more consistent (by consistent I mean the variation is narrower), and that will make the predictability of the system more useful. We'll have confidence in that hypothesis when we start seeing stakeholders place more trust in the plans and forecasts. Going back to our definitions, it's not that they know exactly what they're going to get, not that we can declare in advance, but that we have more confidence in the plans and forecasts. We can trust them more, because we have a better understanding of the behaviour of the system and therefore more useful data with which to make those plans and forecasts.

So that's where I am; I've done a fairly quick run-through, and I'm interested in people's reactions. So I'm going to go back to another Mentimeter poll. If you go back into the Mentimeter, same one, the code's at the top; if you closed the window, go back to 34112028. How do you feel about the ideas I've just talked about? For some people, the idea of predictability makes them nervous. So I'm curious: do people think it's a terrible idea and we shouldn't worry about predictability? Are people curious about it, curious to learn more? Or do people really want to go out and try this? It looked like we had about nine people respond last time, so I'll give people a bit of time to submit their answers and then show the results.

Karl, speaking of curiosity, there are two questions from people; when do you want to take those?

Let me just finish this poll, and then I think I'm done, so we should have five or six minutes to answer some questions at the end. Let me do this. Oh, great: nobody thinks it's a terrible idea, which is always reassuring. Most people are curious to learn more; ask some questions now, at the end, or come to the hangout I'll be going to straight after this. And I'd love it if people tried out some of these ideas, tested the hypothesis, and shared what we learn. So I'm going to go back to the slides quickly. There's something I'm always a bit nervous about with this, and it's why the idea of predictability worries people: they worry that we're trying to be too precise and not allowing for deviation. I want to emphasize that that's not what I'm talking about. When I talk about predictability, it's not the idea of being able to declare in advance because there's no deviation, no variety, no variation; in reality we do have all of that, and that's why I shift towards the more behaviour-based definition of predictability. So, a couple of quotes I find interesting and amusing. One is the Frank Zappa quote; I can't see the full quote because the Zoom toolbar is in the middle, but: without deviation from the norm, progress is not possible. And then the one that always comes around every Christmas: deviation from the norm will be punished unless it is exploitable. So, Rudolph the Red-Nosed Reindeer: a deviation from the norm, but actually a lot of value in there, so it was exploited. We need deviation, because without it we don't make progress, and that's where a lot of the value is. But we can still be predictable if we understand that deviation; we just manage it and make sure it doesn't get out of control. Great, so thank you. As Isha said, if there are some questions, let's take them. I'll share the slides later; I think there's a way of sharing them within the conference system, and I'll try to figure that out. I'll probably put them on Twitter as well. So let's stop there and take some questions.

The first one is from Joel Rosario. He asks: I love the second definition of predictability; however, how does this relate to story points or t-shirt sizing, in which historical measurements of the team's execution are used to predict what will fit in the next
sprint? Yeah, so hopefully I answered that when I was talking about the say-do measures, but yes, that's what I think about using story points; my preference is always to just do story counting, as I'm not a big fan of estimation. And Joel has just put a comment that we've answered it, so to summarise: using story points gives you an idea of how good you are at planning in the short term, not necessarily anything useful for long-term predictability.

The second question is from Gayatri Ishwaran. She asks: can you help us with the tool used to produce the reports shown in the slides?

Yeah, let me type the answer into the Zoom Q&A. It's a tool called ActionableAgile; if you google for ActionableAgile, that's what I was showing. I think I put a link to it in the slides as well; let me just double-check whether I did. Yes, it's actionableagile.com, and the other thing I can do is put this in the chat: there's a URL, which is on one of the slides, to a blog post. It will take you to the ActionableAgile website, so you can find out more about the tool, and they have a blog that talks a little bit about predictability as well; it's also where I got the idea for some of the screenshots. So hopefully that answers the question. On the tool itself, there are a number of variations, but one of the things it does nicely, and I mentioned a client we were working with recently, with some coaches in India: they have a Jira plugin. So if you're using Jira you can get the plugin. It's not free, but those reports then get generated directly from your Jira configuration, which is pretty powerful.

Okay, are there any other questions in the chat? None so far; if anyone has any, please put them in the chat. I had one, Karl, on predictability and the thing about blockers: I was wondering, is it something about trying to predict which of the work items might run into blockers?

Not necessarily, but it's one of the things you can sometimes address in terms of a definition of ready. Teams often start a piece of work knowing that it's going to get blocked. So it's not about predicting that it will get blocked; sometimes teams know they have a dependency, and that dependency is not resolved, but they start the work anyway. So there's the idea of not starting a piece of work unless you have confidence that you can finish it without it being blocked. Now, that's not a guarantee, and sometimes work gets blocked for reasons that are out of our control. So there's the anticipation of blockers and not starting work; but the way I was describing blockers stands as well, as one way of reducing the number of work items getting blocked, because if you start a piece of work knowing it's going to get blocked, you're asking for it to have a long lead time.

Okay, thanks everyone. Karl, thanks so much for this session; new insights, and I'm sure we all got a lot out of joining the session.