All right. Welcome, friends. Can everybody hear me? All right. So I hope you're having a great DrupalCon last day. Hope you had a good lunch. I skipped mine; your food makes me lazy. So yeah, I'm sure a few of you would also be feeling a little lazy. So let's do a little warm-up exercise. A show of hands might work here. All right. Let's try to understand, do a little bit of profiling, see who we are; it'll give me, and you, an idea of who is present in the room. So show of hands, please, for all those who are coders, developers, programmers, engineers. All right. And what about managers? All right. That's awesome. More managers than engineers. Excellent. Always the case. And what about Agile? All of those who have been doing Agile, or know well about Agile? Okay. A good few. And how many of you know or have heard about the hashtag #NoEstimates? Ah, okay. Cool. How many know me? Okay. Great. Thanks. So for all the others, my name is Piyush Poddar. I come all the way from India. I'm working with Axelerant as director of professional services. In my past, I've been everything related to a typical project SDLC, both development and sales: developer, architect, manager, business and account exec, account manager, and so on, since 1997. So it's almost 19 years now. And I've been associated with Drupal for the last eight years. And I live in India in a beautiful city called Jaipur, a city of forts and palaces, forts like you've never seen anywhere. So let's jump straight into estimations. We'll start with estimations, then we'll see what challenges you all are facing, and then we'll see what No Estimates is all about. What's the philosophy? How can we use it? Then we'll see some use cases, and a few of the people behind No Estimates who advocated these approaches.
It's an interesting thing, actually. It started with a Twitter hashtag-based tweet by someone a few years ago, which led to some conversations, which led to more research and interest: people posting blog posts, videos, lots of presentations at conferences, and now a lot of companies, organizations, and individuals are using this to ease some of the pains associated with estimations. We'll see how as we progress. So before we go there, let's understand what estimates are, what estimations are. Why don't you tell me: what do you think estimations are? Why do you do them? How do you use estimations? Anyone; you can just sit there and speak loudly. Clients need them. Excellent. I don't need a show of hands to see how many would agree with that; I believe the majority of us would. Any other answer? All right, to plan the resources and project planning. Okay. Anyone else? Please? Sorry? Clients think they need them. Yeah. Okay. Which again, right? It's the same thing. Yes. All right. So stakeholders and clients, I believe they are the same clan. Someone else needs them, and that's why we are asked to do them, right? I think those are very common use cases which the majority of us agree with, or have seen. So I did a quick Wikipedia search, a Google search, and this is what I came across: estimations are a rough calculation of the value, number, quantity, or extent of something; a judgment of the worth or character of someone or something. Nowhere did I find any mention of the word "facts". So we could safely conclude that estimations are guesses, not facts.
Otherwise they would have been called "factimates", perhaps, not estimates, which they are not. Now, what is a good estimation? Well, that varies from individual to individual, from company to company. But a good few years back, a few people defined it, and this is fairly relevant as well: by definition, a good estimate is within 25% of the actual result, 75% of the time. Which means if you've done an estimation of a project at 400 days, and you are able to deliver it within maybe 300 to 500 days, you're good enough. If out of four such instances you are able to deliver three, good enough, not bad. Let us look at some industry statistics. I'm sure you have heard about the CHAOS Report. The CHAOS Report is an industry report shared by the Standish Group. They do it every year, taking data from around 50,000-plus projects of various sizes, from minor enhancements to mammoth enterprise applications and implementations. I believe they have been doing this for at least a good 15 years now, which gives us a good benchmark and baseline for how the industry is performing in terms of software and IT project success. And these are some of the numbers. We can focus on 2015. I don't think you'll be able to see the green ones; that's "successful". The yellow column is "challenged", and the third is "failures". As you can see, 19% have been absolute failures, meaning projects that were canceled prior to completion, or delivered and never used. 52% have been challenged, meaning they have been delivered, but with fewer features and functions than specified, and with both time and cost overruns. And close to 29% have been successful, which means we are looking at only about one third of projects succeeding.
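The "within 25% of the actual result, 75% of the time" definition can be turned into a quick check. Here is a minimal Python sketch; the four projects and their day counts are invented for illustration, not taken from the CHAOS data:

```python
def is_good_track_record(estimates, actuals, tolerance=0.25, hit_rate=0.75):
    """True if at least hit_rate of estimates landed within +/- tolerance."""
    hits = sum(
        1
        for est, act in zip(estimates, actuals)
        # e.g. a 400-day estimate counts as a hit if the actual is 300-500 days
        if abs(act - est) <= tolerance * est
    )
    return hits / len(estimates) >= hit_rate

# Four hypothetical projects: estimated vs. actual days.
estimated = [400, 100, 200, 50]
actual = [450, 150, 210, 55]

print(is_good_track_record(estimated, actual))  # 3 of 4 within 25% -> True
```

By this yardstick the example team passes: project two blew its estimate by 50%, but three out of four landed inside the band.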
This is industry data. Furthermore, the unsuccessful projects we see here took 222% longer than the planned completion time, cost 189% more than was budgeted, and delivered only 61% of the features and functions specified. So that's not even two thirds of what was expected based on all the pre-planning. Another data point, from Gartner. I've been attending DrupalCons and hearing a lot about Gartner data, so I thought maybe I'll take my chance as well. This is slightly old data, from 2012. It says small to mid and large-size project failure rates have been between 20% and 28%; again, close to one third. McKinsey: average cost overruns of 66%, average time overruns in the region of 44%, and an average benefits shortfall, meaning they have not been able to deliver the benefits expected, close to 17%. So the question comes: are estimates reliable? This is an interesting graph adapted from Steve McConnell's book, Software Estimation: Demystifying the Black Art. I'm sure a lot of you would have read it, or at least heard about it. This shows a plot of project estimates versus actuals in days, the track record for one organization. I'll have to step down because I can't see it from here. The x-axis is estimated days to completion and the y-axis is actual days to completion. By the way, this diagonal line is perfect accuracy. Anything falling below it is a highly profitable project; anything falling above it is not so. You see this number here? It seems this was estimated at around 12 days and took around 225 days. That's interesting. There's another one here: close to 100 days were expected, 150 delivered, not so bad. Look at this one: 200 days estimated, 250 actual, not so bad.
But overall, if you look at these stars, these projects, these have been the two thirds of failed projects, I believe, that we've been talking about in those data points. This chart has been popularly used in lots of presentations on project management and software failures, examining what to do about that. Okay, I'll try to pronounce this: Hofstadter's law. "It always takes longer than you expect, even when you take into account Hofstadter's law." Douglas Hofstadter observed, while working on a project whose name I don't remember, but it was about computers playing against chess grandmasters, that no matter how much work went into developing computer programs to play chess against grandmasters, the winning program always seemed to be 10 years away. Another one, very relevant to the software industry, something we come across every day without realizing it: Parkinson's law. "Work expands so as to fill the time available for its completion." There's a guy called Dhaval Panchal; I stole his diagram here, which was very interesting. It depicts "what you don't know you don't know", and it relates to how we estimate and where we actually stumble. The small yellow circle you see is the things we know that we know. The larger yellow circle is the things that we know that we don't know. Then there are things that we don't know that we don't know. I did a Google search on this and came across a famous saying by Donald Rumsfeld. I had a video I wanted to play, but it spoke about terrorist activities and Iraq and all that, so I avoided it. Sorry. He said: there are known knowns; these are things we know that we know. There are known unknowns; that is to say, there are things that we know we don't know. But there are also unknown unknowns; these are things we don't know we don't know.
He was talking about the issue around weapons of mass destruction and the Iraq war, somewhere in 2002, a while back. The reason I'm showing this chart here is to draw your attention to the big red circle, which often gets neglected when we try to do estimations. We often pad and add buffers to our estimates for the things we know we don't know, but what about all those things which we just don't know at all? They may happen, they may not happen, and if they happen, they'll upset the whole planning process. Trying to explain this in a slightly more technical way: there are two things which basically define the cost of a feature, essential complication and accidental complication. This was explained by J.B. Rainsberger; there's a nice video on Vimeo that you can go and watch, and I can share the links later. Essential complications are the apparent complications: things we know are there with the project, the complexity, the technical challenges, et cetera, that we are aware of; how hard a problem is on its own. But then there are also accidental complications. These are complications that creep in because we suck at our jobs, because of inefficient organization structures or the blockers those structures create, because of how we code, and so many other things which are not really related to the problem, the requirement, the project itself. The cost of a feature is calculated from both of these parameters. While doing an estimation, you do relative estimation: based on your past experience, if something was X, you imagine that with a similar amount of accidental and essential complication, this would be 2X or 5X. That never happens in real life. When was the last time the cost of accidental complication was nearly zero, or an exact multiple of the essential complication, the X there? The cost of a feature or functionality is driven by both of these.
Oftentimes the cost of accidental complication goes way beyond the essential complication component. That's where relative, analogy-based estimations fail and lose accuracy; later, we realize that we've overshot by unnecessary amounts. Woody Zuill is a thinker and agile advocate, one of the people who started talking about this hashtag, #NoEstimates. I'll quickly take you through a brief profile of some of these thinkers later. But his thoughts about estimations are these. He believes estimations are useless because they are guess-based: what we think we know about the unknown. They are wasteful, being work that's not going to be used either way. They are harmful, because they lead to decisions based on incorrect data, which leads to consequences later. And they are easily gamed. I'm sure everybody would have done that, or at least come across such instances. I did it as well, a good few years ago. It reminds me of an instance when I was a manager: I would often ask my team to do estimates, and they'd give me some data. I would have a separate estimation sheet with an extra 20% padded on, which these guys didn't know about. But subsequently, they started guessing. They figured out that something extra was being padded onto these projects, and in their later numbers, they automatically started padding as well. I still kept padding my 20%, right? And we were still looking at failures. You can easily game things. In planning poker as well, you might have seen this: breaking the user stories into a larger number of pieces and guessing the numbers incorrectly. One person says five, another says three. Whoever is more powerful is able to convince the other guy, of course with some data and reasoning. Somehow you game those numbers. Estimations are easily gamed. They are dysfunctional; this whole presentation is about that.
Deceptive; that's a point I've added myself, because they give you a wrong impression of what you're going to do. I won't go through each of these. This is just a table to show some of the estimation practices and frameworks that have been developed and have evolved over the years. The question is: are we trying to get better at estimations, or what? The thing is, we need to question our status quo. We need to start doing something less, not keep iterating over the same thing if it's not giving value and not becoming more accurate; or stop doing it altogether and find ways to facilitate that process. Let's talk about No Estimates. I believe I have fairly built up the background now. No Estimates is based on a couple of principles. The first and most important is to embrace agile principles. Second is to focus on value. It's all about value, and that's what the customer, the client, wants in the end. Deliver small slices of working software; it's very particular about how and why you should slice, and what advantage you get from that. Deliver early and frequently; some of the advocates even say deliver daily, and only have stories as small as what you can deliver tomorrow. But we'll see that. Customer collaboration: you need to have your customer collaborate with you on these things. No Estimates is not about never doing estimations, by the way. It's about doing the minimum amount of estimation that is required, and then looking carefully at ways to reduce that need even more. I did mention Woody Zuill. There's another gentleman called Vasco Duarte who was also one of the early agile advocates of No Estimates. He has an approach that I personally like, and his approach is what I'm going to share with you. He talks about focusing on throughput rather than story points. This is a chart that was shared by Cory Foy on Twitter.
This is based on a team's data from a few sprints on a particular project. The numbers along the bottom are the story point estimates, and the numbers on the left, the y-axis, are cycle times in days. For those who don't know, cycle time is the time spent working on an issue right from start to finish. As you can see, some of these are long lines and some are short lines, per story point estimate. This is not exactly the Fibonacci series, I believe; one or two numbers are different, but it's close. The height of each line is the time it actually took to complete a story of that particular size, and you can see how variable they are, how different they are. The chart is not very clear, but there are small green triangles right at the bottom of most of these lines. Those triangles are the average cycle times for user stories of those particular sizes. The point is this: the differences between individual cycle times are huge, but the averages are not so different. It's better to base your predictions and forecasting on something which is not so variable, rather than on something which is. We'll see how shortly. You can do No Estimates in a couple of steps. The first step is that you absolutely have to have a stable team, because what we're doing is using data from the past few sprints, actually two or three sprints, and we need to have reached a stable team, a stable system state, because then the data you have will allow you to make some forecasts. The way you do it is you take the size of these stories; I'll actually take you through that in the next slide. Here, when I mention a stable system, what's important is to identify how you would tell that a system is stable; a system is not stable when the velocity of a team or a project falls outside the limits three times in a row.
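The triangles-on-the-chart idea, average cycle time per story size, can be sketched in a few lines of Python. The completed-story data below is invented for illustration, not taken from the chart in the talk:

```python
from collections import defaultdict
from statistics import mean

# (story_points, cycle_time_in_days) for completed stories; made-up data.
completed = [
    (1, 2), (1, 5), (1, 1),
    (3, 4), (3, 9), (3, 3),
    (5, 6), (5, 14), (5, 5),
]

# Group cycle times by story-point size.
cycle_times = defaultdict(list)
for points, days in completed:
    cycle_times[points].append(days)

# Individual cycle times vary wildly; the averages vary far less.
for points in sorted(cycle_times):
    times = cycle_times[points]
    print(f"{points}-point stories: {times} days, average {mean(times):.1f}")
```

Run over a real issue tracker export, the per-size averages are the green triangles: much flatter than the raw lines, which is why they make a better forecasting basis.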
I'll show you a chart where you have an average size of each story, and then a high interval and a low interval. As long as all of your stories are being completed within these two intervals, you're fine; if you have three or more consecutive stories outside these limits, your team is not stable, your system is not stable. You plot those numbers on a chart based on data from the past two or three sprints, which lets you see whether you have a stable system. If you have, then you've got the right data, and based on it you can do some forecasting, plan things, and go forward. The chart will also show that if there are more than five points going in the same direction, meaning a lot of acceleration is happening in terms of throughput, you have not reached a stable state. A stable system and team is very important here. The second step is to select the most important piece of work that you need to work on. You have to focus on value. This is where a lot of projects and deliveries fail, because at the end of the day we realize that, by the time we delivered something, 80% or 40% of it was not really useful; what was useful was something different. For this, you have to involve the product owner and the scrum team, because the scrum team and the product owner, the client stakeholders, will together assess whether feature X is more valuable than feature Y. This is very important; it should not just be driven by the client stakeholder. Value can be non-monetary; in fact, it has to be. If you are aware of the user story format, "as a user, I want to do this, so that XYZ", the "so that" part is the value I'm talking about here. Once you've done these things, then you need to slice your work into smaller pieces.
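The two stability rules just described (no three consecutive points outside the interval, and no long run of points moving in one direction) could be sketched as below. This is an assumption-laden sketch: the function name, the thresholds, the interval, and the sample velocities are all illustrative, not prescribed by the talk:

```python
def is_stable(values, low, high, max_outside_run=3, max_trend_len=5):
    """Rough stable-system check over per-sprint (or per-story) numbers."""
    # Rule 1: no `max_outside_run` consecutive points outside [low, high].
    outside = 0
    for v in values:
        outside = outside + 1 if (v < low or v > high) else 0
        if outside >= max_outside_run:
            return False

    # Rule 2: no `max_trend_len` consecutive points moving in one direction.
    run, direction = 1, 0
    for prev, cur in zip(values, values[1:]):
        step = (cur > prev) - (cur < prev)  # +1 up, -1 down, 0 flat
        if step != 0 and step == direction:
            run += 1
        else:
            run, direction = (2 if step != 0 else 1), step
        if run >= max_trend_len:
            return False
    return True

print(is_stable([5, 8, 2, 6, 5, 7], low=2, high=9))  # within limits -> True
print(is_stable([5, 1, 1, 1, 6], low=2, high=9))     # 3 points outside -> False
print(is_stable([1, 2, 3, 4, 5], low=0, high=10))    # 5-point upward run -> False
```

Until `is_stable` returns True on your recent sprint data, any forecast built on that data is on shaky ground.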
Let's continue. We need to slice features into user stories. Vasco suggests that the larger piece of work is a feature; you break it down into user stories, and then you break those user stories further down into what you can deliver. You can break them into multiple user stories and generate options, and options allow you to prioritize and take decisions. A larger user story offers fewer options, and thereby less flexibility. Then there's the INVEST principle, which I believe agile developers and managers are all aware of; this is INVEST with a slight twist. It's about identifying when a user story is ready to be worked on. It has to be Independent: vertically sliced instead of horizontally. When I say vertically sliced instead of horizontally, I mean do not slice the user stories along architectural and system layers, saying one story is the database abstraction, another is maybe the API, a third something else. Rather, slice the user stories into vertical slices, so that individual stories are independent, contain all the components required, and can also be dropped if they turn out to have less value. Second, they should be Negotiable: they should capture the essence of the value, not the implementation or the technical details. Third, they should be Valuable, the most important thing: they should generate value. Fourth is Essential: the agile original says Estimable here, but the No Estimates advocates say it should be Essential instead. Do not work on anything that is not essential at all. Fifth is Small: small enough and understood, meaning you have understood what is required, what is to be done. And they should be Testable; the acceptance criteria is what comes in here. You should do the slicing along with the whole team, the full team. And there shouldn't be any huge stories.
Typically, anything greater than half a sprint is a very large story. Those are not what you want to work on; you need to slice them further. These are just some valuable rules of thumb; you can revise and change them to whatever really works for you. Typically, you can have six to 12 user stories delivered in, say, a two-week period, if you follow two-week sprints for your development. Half a man-day to one man-day per user story is good enough sizing (sizing, not an estimate). What's important to ensure is that the statistical distribution of these user stories, the large and the small ones together, is spread across the entire project. It should not be that you are working on all the large user stories now and the smaller ones later; that would not allow you to do the forecasting that is required in case you need to answer questions for stakeholders and do some planning around that. Each of these user stories is actually going to be working software, so they should be testable, running, tested stories. Meaning, once you deliver these stories, you deploy them on a production, or production-like, server, so that your user or your customer can use them. That's when you have delivered value, and that story is complete. Then, once you've sliced and arrived at the right sizing of the user stories, develop each piece of work, deliver it in a production-ready environment, and iterate and refactor: another one, another one, another one. Now, while doing this, you have to ensure that you do active scope management. How do we do that? You do hard limiting of the duration of certain parts of the project: time boxing. You must ensure that, for example, features are capped at around one month, and user stories at something much smaller. Because you won't be doing any estimations here, you won't be doing any planning based on estimations.
That's why you need to ensure that the sizes of those pieces of work are more or less similar. They may vary, but not very largely. Set aside low-value user stories. What happens is, once you break down a user story, you find you broke one into five; two are valuable, three are not. Those three don't make the cut, and they get regrouped into a different user story or go back to the backlog. The product backlog will keep growing, but the beauty of this is that you are embracing change. Why I say so is that the next user story that you're going to deliver the day after tomorrow could be the one that, in waterfall or other project methodologies, you might call scope creep. So there is no scope creep, really; we've converted scope creep into the embracing-change principle here. Keep prioritizing the backlog regularly to evolve toward a more accurate prediction, because this is only required if, in your project's case, you need to do predictions or forecasting for stakeholders or for planning purposes: resource planning, timeline planning, anything else. This entire approach helps keep the system of development stable, and you actually have to keep it stable; thus you will stay closer to the forecasted cost and timeline targets. While you're doing this, some companies and advocates suggest doing active scope management via user story mapping as well. This was popularized by Jeff Patton. It replaces the flat user story backlog, "a bag of context-free mulch", with a visual representation of the product backlog. I will not go in depth into this; you can read a lot about the approach on the internet. Go to Jeff Patton's website if you want.
But this will allow you to look at your product backlog visually and, while slicing, push things into the current release plan or the next release plan, or push them further down and say, okay, this is not required; stuff like that. It gives you a visual aspect. It fosters collaboration and builds shared understanding, because oftentimes written requirements may not give you the bigger picture. It helps you identify gaps in the backlog, see interdependencies, and helps in release planning activities. If release planning is not required in your project, the release aspect may be ignored, but I would still recommend using story maps; they're a very good way of managing and prioritizing a product backlog. Then comes forecasting. A lot of questions have to be answered, like: when can the client or the customer expect feature X? When can they expect something else? How many resources, how much investment, do we need for a typical project? That's where forecasting comes in. Instead of making assumptions, we are focusing on forecasting here. Forecasting is based on empirical data from your past experience: calculating or predicting future events, usually as a result of analysis of available pertinent data. Forecasting uses data, while estimation does not; estimation is purely a guess, or a "guesstimate", as we call it. So the questions that should be asked and answered here are: given the rate of progress so far and the amount of work still left, when will the project end? When will phase A or phase B end? Given the rate of progress, how much work can be finished by date X, or milestone Y, or by Christmas? This is a chart that you can map from your user story counts, and it explains the whole idea of how predictions and forecasting work, and how the user story averaging works. The red line up at the top is the target.
Along the bottom, you have the sprints, and on the left-hand side, on the y-axis, is the number of user stories you have completed. Once you are through three or four sprints, and if your numbers are within the interval, the high interval and the low interval (you'll have to define those intervals yourself, looking at the data), you can work with averages. For example, in sprint one you delivered five user stories; in sprint two, eight user stories; in sprint three, maybe, say, two user stories. So you're looking at 15 user stories, and the average becomes five. Now, that two is a little tricky here; maybe instead of two it's a somewhat larger number, so that the story counts (story counts, I mean, not story points) are closer to each other. Then you can take an average. The red line you see at the bottom is the average line, and you just need to ensure that all the sprints land either above or below it, but within the interval. The moment they go beyond those intervals, something is wrong: you're either not slicing your user stories properly, or the system is not stable. This will allow you to keep within those boundaries. Thus, based on this throughput, you can forecast. If you're looking at an average of five user stories per sprint here, and the project requires around, say, 500 stories, you can say: okay, it should roughly take around 100 such sprints. It could be 150; it could be 130; maybe it could be 80. But it definitely would not be 200, and it definitely won't be 25 sprints. And based on those sprint numbers, you can do some budget planning and give some sort of idea of when X number of features, or the entire project, can be expected to complete.
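The arithmetic in that example (sprints of 5, 8, and 2 stories averaging 5 per sprint, against a 500-story backlog) can be written down as a tiny sketch. The function name and numbers are illustrative:

```python
from statistics import mean

def forecast_sprints(stories_per_sprint, remaining_stories):
    """Project the remaining sprints from average throughput (story counts)."""
    throughput = mean(stories_per_sprint)  # average stories completed per sprint
    return throughput, remaining_stories / throughput

throughput, sprints = forecast_sprints([5, 8, 2], remaining_stories=500)
print(f"average throughput: {throughput:.1f} stories/sprint")
print(f"forecast: roughly {sprints:.0f} sprints for 500 stories")  # roughly 100
```

The point of the sketch is what it does not contain: no story points, no per-task estimates, just a count of finished stories per sprint, which is exactly the throughput measure Vasco Duarte advocates.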
And then as you go, you keep coming back to this chart on a sprint-by-sprint basis and keep tweaking it, and you'll see whether you have a stable system. At the end of the day, you'll be able to achieve what you're looking for. A three-to-five-iteration average is suggested here for throughput, because it's sufficient to predict future running, tested stories. And if you remember the system stability rule that I mentioned a few slides back, this is where you see it in play: velocity outside the limits three times in a row. The blue and the green lines are the intervals, and velocity should stay within them; if it goes out, it's not a stable system. Likewise, if there are four points in a row moving in the same direction, the system is not stable, and it is not good enough at this point in time to make intelligent forecasts from your data. Another way of providing forecasting data to your client, because you are focused on smaller user stories and the current sprint, is to create a window saying: these features, F1, F2, F3, can be delivered in, say, the current sprint; the remaining Y number of features can be delivered over a longer period, say one year; and these remaining ones seem unlikely to be delivered within this project. This gives your client, your management, some actionable data, and based on it they can make decisions, because, remember, they have not asked for or received any estimations as such; they still need some values to plan with. This is called a rolling wave forecast. It's low-overhead reporting; it can be updated and tweaked weekly; it's purely based on throughput measurement; and it provides actionable information. Some common ways people are using this: No Estimates plus Kanban plus agile contracting with capped T&M and incremental delivery, plus some high-level planning.
The idea is not to spend too much time planning, and not to spend too much time estimating. Estimating a backlog is waste. If estimations and predictions are absolutely required, then use impact mapping. And now I would like to invite one of my friends up here. I happened to share a seat right next to him on a flight and learned that they're actually using all of this process and delivering some really great, and complicated, projects. So I thought, instead of me talking about projects delivered by my organization, why not have someone else validate this by saying, yes, it's working out for them. Before I hand over the mic, I know we have, let me estimate, around 18 minutes. One of the approaches they may talk about is behavior-driven development, which really works out for them. So Rob, would you like to come up, please, and talk about some of the projects? Thanks. Do you need the slides or something? Hi. Can everybody hear me okay? Yeah. So I'm Rob Knight. I'm the CTO at Fluxus. We're a consultancy based in the UK and in Asia. We've actually been using a No-Estimates-based system for the last two years, and I'd echo a lot of the sentiments and the data that Piyush has put forward in his slides; this is definitely the experience that we've had as well. One of the questions that people often ask us when we talk about moving away from estimation is: how do we know how long things are going to take? But if you actually think about that from the point of view of the person who's leading a project, who's funding a project, that's not normally the most meaningful way of seeing things. People are thinking: we've got a certain amount of money to spend, and we want to see certain benefits for spending that money. So people are thinking about return on investment rather than purely about how long something is going to take.
And what we find with our approach is that by time-boxing and by thinking in terms of value, you start to drive the question towards: okay, we're going to spend this amount of money; what limits does that place on the level of complexity we're able to absorb in our project? If we've got two days to do something, we really can't have a super complicated set of features in that time. We need to break that down into a series of features that can be delivered within those constraints. And that actually drives a much more value-driven approach. People can think: for this amount of money, do I want to spend it to get this benefit? It's much easier for clients to deal with, because it's thinking in the language of the business and in terms of the business, rather than six-month or twelve-month roadmaps of software delivery. So what we've actually done is put together a whole process based around this approach, which we're now sharing with other companies, with other Drupal agencies and people outside of the Drupal space. And we've done this in the name of focus, because we believe that's really the thing that's most important in project delivery: to actually focus on the outcomes you want, and not get lost in the weeds of the details, or commit to some kind of long-term plan and then lose sight of your original objective. The way we do this is we begin by establishing what the goal for the project is, using a process called impact mapping. We then work out from that who's going to be affected by the project and who needs to contribute towards achieving that goal. Obviously, in a Drupal context, content editors are going to be a big part of a lot of CMS-driven projects.
Obviously customers and end users, which might be existing customers, new customers, or different segments; there might be a legal team, an operations team, a marketing team: all of these different stakeholders are involved. We think about what impacts we want to have on those people, and only once we have that do we start to think in terms of what features we can deliver. That focus is then really driven by achieving an impact, not just by ticking off items on a delivery plan or a roadmap. We then take that through a lean UX process, so we try to avoid creating a large inventory of designs up front, because, in much the same way as estimates can be wasteful, doing designs up front can be wasteful too. And then we also go into BDD, behavior-driven development, thinking about: what does this thing actually need to do? What are the interactions people need to have that they couldn't have before? How does this create value through the behavior of the system? And what we find is that this works really, really well with Drupal. Drupal solves so many basic problems for you out of the box. You don't need to think about how you're going to do access control, user management, or basic content creation. So you can think very much in terms of business value; you're not putting a lot of technical tasks into your backlog, you're putting business value directly into the backlog, and then you're delivering that in very small increments, really like two or three days at a time. In terms of the actual success we've had with this, we've had some very successful engagements where people have said to us that they'd never seen an approach like this used before.
They're used to a traditional waterfall process, or to a very rigid Scrum process which had become a little bit unmoored from some of the original values of the agile manifesto. And I would say at this point we've got no regrets from moving away from estimates. I can't really think what I'd ever want an estimate for again, because, as has been pointed out, a lot of estimates are just wrong. They're just plainly ludicrous, and I think people start to lose confidence in a process when that kind of thing becomes apparent. So I know I've only got a very short space of time to validate some of these things, so if anybody wants to talk to me or any of us about how we've implemented No Estimates, I'd be more than happy to talk to anybody afterwards and answer any questions. For now, I'll hand you back to Piyush. Thanks, Rob. No principles, practices, or philosophies are really valuable without validation. I'll take you through two or three more slides just to touch upon what this movement was and is about, and what's happening in the hashtag No Estimates world. So basically, #NoEstimates is a hashtag for the topic of exploring alternatives to estimations for making decisions in software development. The hashtag has been fairly active since 2011; you can actually go to Twitter and check it out. There has been lots of discussion both for and against it, and unless you have voices against something, it's not really a healthy debate, I believe. This has led to various blog articles, research papers, interviews, podcasts, and the presentations at conferences that I mentioned at the start. And the learnings have inspired lots of teams, like what Rob mentioned: they've stopped doing estimations, or do minimal estimations, and made life and business happier.
These are some of the advocates and thinkers who have been involved in this hashtag and these conversations: Woody Zuill, Vasco Duarte, Neil Killick, Chris Chapman, Henri Karhatsu, Dhaval Panchal. I have borrowed some of their ideas and repurposed some of their thoughts and slides in my presentation as well, which you saw. Ron Jeffries, Steve Fenton, and many more. Each of them has a slightly different version of how No Estimates should be practiced. In the interest of time, and because I believe there are a few questions that need to be answered, I'll skip through this, but you can go online, check out my slides later, and read more about their approaches. Vasco Duarte has also written a book called NoEstimates. You can go online and buy it. It's a very interesting book, telling the story of a project manager called Carmen: how she faces these challenges, is introduced to No Estimates, takes a fairly large, complicated project that was guaranteed to fail, and converts it into a successful one. It's a very interesting read. And in the end, before closing, a quote by Woody Zuill: No Estimates is merely a call to refocus on the agile manifesto. We are all aware of what the agile manifesto is. And the takeaways from this session: don't stop doing what you're already doing, please. I'm not saying stop doing your estimations. You don't want to create challenges with your boss and conflicts in your workplace. Please continue doing that. But start exploring No Estimates in your own way. Try taking the data from your last sprints, retrofitting it, and see if you are able to trust those throughputs or story counts more than story points. See if you can time travel: had you used this approach instead of estimation, would you have actually arrived at a better conclusion without estimations? That is what will lead you to move from estimations to No Estimates.
Run small experiments, analyze, measure. That's what I did as well. Try to get better at creating simple and unambiguous slices of functionality. Measure your throughput. Compare your story count data with your story point data. Ask yourself: can you make better decisions with this? If yes, then that's the way to go. Discover for yourself whether a No Estimates approach is right for you. There are multiple approaches out there. The one I took you through was suggested by Vasco Duarte. Woody says: do not do any predictions at all. Just focus on value; deliver the right thing right now, and then the next one, and the next one. That's it. He's written loads of stuff about that, and he's talked about one or two large project case studies as well. And now, questions. Rob, you're welcome to answer the questions if they are for you. Yes, please. There's a mic here, if you could come to that. Yeah, I think customer education is very important here. This is a new thing; the customer needs to be educated. They need to be convinced that, at the end of the day, they are paying you to generate value. So let's actually start from the reverse and work only with value. Rob, what challenges have you faced in trying to educate your clients, and in starting to say no to clients if they don't believe that value is what is required? Value is what you want to commit to, not a number that can be made up, or a number that is just there because a client wants it, a project manager wants it, or because that's how we have always done it. Yeah, so I think what we found was that that initial conversation with the client often isn't super detailed. It isn't at the level of individual acceptance criteria or requirements for a story. So it's okay to talk big picture: we want to be in roughly this place in about three months.
That's the kind of conversation you can have. What we found was that when we're actually working on delivery, because we're working in an agile way and not creating a massive inventory of requirements up front, we can use that time budget to drive the conversation when we're actually firming up the requirements. So we can ask: how much complexity, how much risk are we willing to take at the point when we're defining the acceptance criteria? As long as the executives or the sponsors, whoever is funding the project, are satisfied that they're going to get the value they want after a certain time period, the actual conversations with product owners and people like that can be driven much more in alignment with: this is your budget, this is how long we've got, and these are our value priorities. So we haven't found that to be a particular problem. We've done that with both new and existing clients. I think it's easier when there is trust. There can be suspicion sometimes; people want to know that they're getting value for money, that you are genuinely giving everything you possibly can. But we haven't found that to be particularly in question, really. Yeah. We've found this to be significantly easier with existing clients. The trust has been established, right? So whatever you suggest, they would be fine at least trying it out, as opposed to a new client who is yet to sign on the dotted line. But the fun is in convincing new clients about this approach. That's when you actually get good at it. That's a sales guy talking, I believe. Any more questions, gentlemen? Yes. So how do you get there? Well, just stop doing estimations.
I mean, no more planning poker, right? Once you have a stable system, and once you have started building these average story count throughputs on projects, you don't need to do those estimations. You might say that a backlog has some large stories and some smaller stories; how can you really just count them and say that we'll deliver them in X number of days? Well, that's the beauty of averaging: over a larger project, over longer timelines, you can actually deliver those numbers of stories, because, remember, you're also slicing them as you go forward. So that's one of the ways: once you have this process implemented, you do not need to do any more estimations. That's the whole idea. It will be hard to convince people initially, right? That's what I also went through, but this has huge value, I can tell you. All of those estimation processes are like a waste. Yes, focus on the story counts, ensure that the sizes of those stories are appropriate, and keep delivering value on a regular basis. Here's an example from one of the projects we're delivering, where we didn't follow this approach. It was a 12-month project, and we're actually releasing it at the end of this month. After the first six months, the client comes back and says: hey, we need to cut down the monthly expenses because the market has changed; our website traffic has gone down by 50%; this particular industry is not using this product the way it used to. So what about those 80% of features that we've already delivered in the last six months? It's a waste. Had this been the approach from the get-go, we would have only delivered those 20, 30, 40% of user stories or features that actually mattered.
With time, you realize that this was really not important, right? So I hope I answered your question. Yeah, thanks. Sure. So, are there special considerations when you're preparing a proposal in a kind of bidding situation? This might not work for RFPs in certain cases. Yeah, I believe so. For RFPs and those cases where a fixed estimate is required, I would say go ahead, do those estimations, get the project, then throw away the estimates and work with this approach going forward, because you've got the project and the estimations are done. Now deliver value. Yep. Have you been in a situation where, as you said before, you've tried to convince a new client, so you're in a competitive bidding situation, and actually addressed this model in your proposal? Oh, yeah. I mean, you have to express this in multiple ways. So in your case, have you done that? Yeah, yeah. So, echoing the earlier question, I think that initial conversation depends partly on who you're talking to, but if you're talking to someone who is sufficiently senior, who is involved in commissioning the work, what they care about is whether you are going to deliver on their objectives, not necessarily whether you're going to deliver according to what a project manager might think, somebody who has a particular approach they might favor. If that approach is not going to deliver value as well as ours, then our approach is going to win. And that's what we sell on, really: this works for our clients, people are happy with the results, and it's the best way of getting your objectives realized. Okay. Thank you, gentlemen. Thank you. Thank you, Rob, for helping.