Thank you for coming. It's the first session, well, not right after the party, but still the first session, so I definitely appreciate it. What can you expect from today's talk? It will be about adapting existing agile methods for environments with big projects and biggish teams. In the first part of the talk I will try to get us on the same page, then I will be concrete about a proposal for how to do this adaptation, then there will be a demo, and then some questions and answers; we'll see what the time allows. First, a disclaimer: agile is a term that covers a lot of ground. One subdivision is that agile is about doing the right thing, delivering value for customers; another aspect is being efficient, working well, not necessarily fast, but at a sustainable pace, predictably, and so on. In this talk, when I say agile, I mean the second aspect. I won't talk about the first one at all, not because it's not important, but because that's not what this talk is about, so just to be clear. So, I am a Red Hat developer, and also the team lead of the security compliance team. When I joined the company about six years ago, the Scrum framework was introduced to us, and we were one of the first teams to adopt it. What we found was that it's not so easy, not a smooth ride. We introduced it and then noticed that we still had issues, mainly with predictability. Because we are an open-minded team, we looked at what we could change to improve the situation. I think we made our agile practitioners both happy and desperate, because we were experimenting and have tried various things, and the evolution still continues. We simply found out that the problem is a difficult one, so even a good approach is not good enough; we need a great approach. We'll see.
However, the main pain points that we identified are, first, the need to have a groomed backlog, which is a very important concept in agile as far as I understand, being a software engineer rather than a person with an agile education. And second, as I'll get to later, we are a rather big team, so a lot of things are in progress during execution, and it's a mess to some degree: it's very difficult to tell whether things are going well or not. So why is this so difficult? Very briefly: you might not be developing a web app from scratch for a well-defined customer; you might be developing a small feature on top of some big product. Because the product doesn't have one customer with whom we can communicate and negotiate changes, but we need to keep everybody more or less happy, a lot of the work turns out to be integration rather than the deliverable itself: putting the feature in, integrating it with the rest. Only the small piece at the top is presentable, but the work behind it is really huge, so things are slow. Also, the team can be big. I don't think big teams are necessarily faster, but they can definitely do more things at once, and they are to some degree more stable: if conditions change, a big team can really adapt, take on more responsibilities, or even exploit opportunities, though probably not in a particularly fast way. One common suggestion is: if you have a big team, split it into small teams, which work much better. But maybe you can't do that, because the tasks that come in keep changing; they are not stable, not always the same. If you subdivide, you will find out that you need to rethink the subdivision, and so on. So getting rid of big teams in favor of small ones is not always possible.
So we are in this big environment, which means things are slow: to accomplish anything interesting, the iteration can't be two weeks tops; really, more time is needed. As a result, during execution a lot of things are in progress at the same time, especially if the iteration is longer, and what suffers is predictability. As an engineer, I have definitely come across the attitude of "screw predictability": what counts is the work being accomplished, so give engineers the right tools, the trust, the resources, and they will do the thing in the fastest possible way, and screw predictability. Why is this not a good idea? Some people know, but some might doubt it, so here is an example. Imagine you are a student preparing for an exam, and you know the questions come from three boxes: the green box with a 50% probability that your question is drawn from it, the red box with 40%, and the blue box with 10%. Each box is a topic. The question for the student is: in which order should you study? It looks like you should start with the green box, because it is the most likely. But the right answer is that you can't really know the right order unless you know how difficult it is to master the questions in each box. If you know the cost, then you can make a good decision. In this particular example, where red and blue are much cheaper to master than green, it is clear that you should start with red, continue with blue, and only then do green, because red and blue together also cover 50% and their cost is much lower. Where am I aiming with this? Predictability basically means that you know the costs, and you need to know the costs in order to prioritize. So whenever we prioritize, we are in need of predictability. If we have a big team, big projects, and issues with predictability, we simply can't ignore it; we need to address it.
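The student example can be made concrete with a small sketch. The per-box study costs below are hypothetical numbers invented for illustration; the point is only that ordering by probability per unit of cost, not by probability alone, gives the right study plan.

```python
# Illustrative numbers: probability that the exam question comes from each
# box, and a made-up cost (days of study) to master that box's topic.
boxes = {
    "green": {"probability": 0.50, "cost": 10},
    "red":   {"probability": 0.40, "cost": 2},
    "blue":  {"probability": 0.10, "cost": 1},
}

# Prioritize by expected payoff per unit of cost: probability / cost.
order = sorted(
    boxes,
    key=lambda b: boxes[b]["probability"] / boxes[b]["cost"],
    reverse=True,
)
print(order)  # red and blue come before green despite green's higher probability
```

With these costs, red (0.20 per day) and blue (0.10 per day) both beat green (0.05 per day), which matches the talk's conclusion: without knowing the costs, no prioritization is possible.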
If you think about the previous example, it turns out that we estimate all the time. We estimate when we go shopping, when we travel, when we do pet projects; in software we estimate all the time. So why are the groomed backlog and the estimations so painful? Here is another example, and we see situations like this in software development quite often: how much effort will it take to remove the rock, when we see only the top and not the bottom? The gentlemen in the picture might have different ideas, and whoever has played Scrum poker has definitely experienced that. Has anybody here played Scrum poker? Could you raise your hands? About half of the audience; I would have expected more. The other half will experience it eventually, if they are engineers at least. What happens is that one person says, "I think the difficulty of this task is five of something," another person says the difficulty is 15, and then they discuss and find out that there is nothing one person knows that the other doesn't. Everybody knows everything, but still one thinks five and the other thinks 15. What to do about that? It is a very difficult situation, because it's about the unknown, not the known. And it definitely makes people uncomfortable, because the number will be settled at 10, it will make it into the tracker, it will be visible to managers, and people are not so happy about that. So in order to know how to help this situation, we need to look at what an estimation actually is. If somebody says, "I think the rock can be removed in two weeks," what happens in your brain when you hear that? It means they probably won't be able to do it within the first week, because it's very difficult to do things fast; it's easy to do them slowly, but doing miracles is very difficult. However, it is somewhat possible that they will finish earlier, between the first and second week.
It is, however, most likely that they will finish slightly later, between the second and third week. And when the third week has passed, you start thinking it might never be done. It happens, in software and anywhere else. So what is an estimation? It is actually a probability distribution. And true to the saying that everything has already been invented, here it is: this concept was introduced in the late 1950s, and the idea that an estimation is a probability distribution is called PERT. It is taught, I think, as part of project management, and it introduces the concept of three-point estimation. Instead of estimating a task with one number, you estimate it with three, which sounds quite scary, because if one number is a problem, three numbers might be a much bigger problem, right? But maybe that's not the case. The guidance is that you come up with an optimistic estimate, which should be so optimistic that you think you could do even better with only one or two percent probability. So it's not a completely rosy dream, just optimistic. The most likely estimate is what you would normally target. And finally there is the pessimistic estimate. The units can be story points or weeks; it doesn't really matter. And again, the pessimistic estimate is not a promise that the task won't take longer or more effort than that; there is supposed to be a couple percent probability that even the pessimistic estimate will be exceeded. Whether it's one or two percent probably doesn't really matter. One more thing before I get to what this approach solves: usually we are required to put a single number into some tracker, Jira or whatever.
And the good news is that, like any probability distribution, this three-point (PERT, or beta, whatever we like to call it) distribution can be represented by its mean, and the mean value is not the same as the most likely value. The mean lies somewhere between the most likely value and the long tail. In this case, the pessimistic estimate has the long tail, so the mean is pulled toward the pessimistic side. But if the task were optimistically inclined, for example if there might be no bugs to fix after we run the tests, we could say: we think it might be five points, but maybe there will be no work to do at all. In those situations, the expected value would be skewed toward the optimistic end. What problem does this solve for us, from another point of view? There is a distinction that we don't have in Czech: estimations, or quantities in general, can be accurate and they can be precise, and Czech doesn't discriminate between the two. When we put single numbers on estimations, we are infinitely precise: a number is simply a point. But as we all know, we won't be accurate, so we are in an unpleasant situation. However, if we substitute the point estimate with this interval estimation, we probably improve the accuracy slightly by using the expected value, and we lower the precision, but the precision becomes more proportional to the actual accuracy. So we actually have better odds of hitting the target with lower precision. Interesting, if you think about it, but undeniably true. This is a very interesting theoretical concept. Has anybody in this room ever used it, apart from my team? Nobody, nobody, okay.
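The standard PERT formulas make this concrete. This is a minimal sketch of the textbook approximation, not code from the tool shown later: the mean of the approximating beta distribution is (O + 4M + P) / 6 and the standard deviation is (P − O) / 6.

```python
# Classic PERT approximation for a three-point estimate:
# o = optimistic, m = most likely, p = pessimistic.
def pert_mean(o, m, p):
    """Expected value: pulled from the mode toward the longer tail."""
    return (o + 4 * m + p) / 6

def pert_stdev(o, m, p):
    """Spread of the estimate; a wider interval means lower precision."""
    return (p - o) / 6

# A task with a long pessimistic tail: the mode is 5, the mean is higher.
print(pert_mean(4, 5, 8))   # ~5.33, the single number you'd put in a tracker
print(pert_stdev(4, 5, 8))  # ~0.67
```

Note how the mean (about 5.33) sits between the most likely value (5) and the pessimistic tail (8), which is exactly the skew described above; an optimistically inclined task would skew the other way.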
So my team and I have been giving it a shot for a couple of months, and there is one piece of excellent news: we have concluded that the process of estimating with three-point estimations, instead of traditional single numbers, is not worse. It is not three times more painful to produce those numbers. The process is not more painful, and maybe it is even more comfortable; we are not completely sure about that yet, but we will continue our investigation. At the same time, estimating is a difficult problem, so even a better solution doesn't completely solve everything. We found that you need some practice and some guidance to estimate with three-point estimations, and it is also difficult to explain what the numbers mean, so there are misunderstandings. Then the second pain point was the execution, which is busy, with plenty of things going on. Sometimes things are in progress for a long time, which is usually a problem, but if you have a lot of things in the sprint, you don't notice. On the other hand, some things are not in progress, and maybe that's okay, maybe it's not, maybe they should be, and so on. When you have a long iteration, you can't use the traditional approach where after two weeks you do a retrospective and say what went wrong. If the iteration is very long, you need to act while it is in progress, as soon as a problem is obvious enough. So it's necessary to have some tooling to introspect what is going on, and we will see that in the demo. I put a screenshot of Jira here because it is the only software we use that has a field to input estimations; I'm not really sure how far Jira can be configured.
However, I think that for big teams, which have more challenges with estimating, having this small field tucked next to other stuff, basically invisible when you are not looking at the issue, doesn't make estimating easier. For some reason, when I realized that, I thought it might be a great idea to develop software that would address this issue: it would provide an interface to the data, which might live in Jira or wherever. GitHub doesn't even have a field for estimations; maybe it has something, I'm not sure. The software would allow team members to estimate, to see the execution of their own long sprint, and maybe even do things like delivery forecasts. So the project exists. I wouldn't advertise it as something that works in general; it's a demo. If you are a Red Hatter who uses Jira, it might be a good idea to reach out if you like it. Otherwise I would suggest waiting, or maybe using the Gitter channel that the project has. Anyway, I hope the demo will work. What will we try to model? A sprint with two epics, let's say. One is an upstream release, which includes fixing bugs, running tests, and writing a blog post. The other is a downstream release, which again includes running tests and then making the downstream package out of the upstream source code. So, a five-minute delay, but I guess we will manage. Let me see. What you see right now is the interface of the program. It allows us to estimate, to track the execution, and also to simulate that the execution is in progress. We go to the planning interface, and there we can see that somebody (it was me) input those two epics. This is not software that can be seriously offered yet.
It's more something that works. So we have this "fix blockers" issue, and somebody already put down that its cost is five. Out of this one-point estimate, we can make a three-point estimate. Fixing blockers is usually pretty difficult, so we say the optimistic estimate is four and the pessimistic estimate is eight. I click save, it stores the estimate, and we get this plot. I will do the same for the "release upstream" issue; that will probably be easy, so I will say the pessimistic estimate is three and otherwise keep it the same. Writing the blog post is three points, but sometimes people are very nitpicky, so we say optimistic three, most likely four, and pessimistic, I would say, eight. And let's estimate the remaining two issues, running downstream tests and fixing issues found by the downstream tests. I think there might be nothing to fix, everything will work, so the optimistic estimate is two, and the pessimistic estimate can be, say, five, because there might be problems with infrastructure. And so on. Now, if we take a look, the individual estimates can be statistically correctly added together, and we see that the cost, the aggregate estimate of the entire sprint, is somewhere between roughly 15 and 22 story points. All right, any questions so far? Yes: the question was what the points mean. They are story points, which are supposed to be proportional to the objective complexity of the task. One could estimate in person-weeks instead, that's not a problem, but estimating in points is an alternative. Fernando? Yes. The question was whether we have a story-point definition. We don't at the moment; we currently use a conversion ratio to the units we had been using earlier.
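Adding the estimates "statistically correctly" means that for independent tasks the means add up and the variances add up (so standard deviations add in quadrature). The sketch below uses illustrative triples that loosely follow the demo's issues, not the tool's real data or its actual aggregation code.

```python
import math

def pert_mean(o, m, p):
    return (o + 4 * m + p) / 6

def pert_var(o, m, p):
    return ((p - o) / 6) ** 2

# Hypothetical (optimistic, most likely, pessimistic) triples in story
# points, roughly modeled on the demo's five issues.
issues = [(4, 5, 8), (2, 2, 3), (3, 4, 8), (2, 3, 5), (2, 3, 5)]

total_mean = sum(pert_mean(*i) for i in issues)
# Variances add only under an independence assumption between tasks.
total_sd = math.sqrt(sum(pert_var(*i) for i in issues))

# A rough ~95% interval for the whole sprint: mean +/- 2 standard deviations.
print(f"{total_mean - 2 * total_sd:.1f} .. {total_mean + 2 * total_sd:.1f}")
```

With these numbers the sprint total comes out around 18 points with an interval in the mid-teens to low twenties, which is in the same ballpark as the "roughly 15 to 22" aggregate shown in the demo.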
The plan for the next planning session, not the current one, is to come up with a definition table and so on. The next question was about the epic-level summaries: whether they reflect just the expected value, the traditional way, or the whole distribution. What you see reflects everything, really all three values; it's a statistical sum of random variables. Ten minutes left, so I will proceed with the demo of the execution. In a separate tab of the same web app, I can say that the team delivers a certain number of story points, so there is some progress. I choose that the team works on fixing blockers and delivers two story points. I click next, hopefully it won't crash, and refresh the web app. This is the not-burned-down chart, not a burn-down chart, because a burn-down chart doesn't distinguish between states. It basically says that nothing has been delivered yet, which is correct. I continue: the team delivers 1.5 story points, and the chart shows that something is in progress but still not done. Continue with another 1.5 story points, and we have our first task complete. What we can also see in this not-burned-down chart is that the planned burn-down is not linear. Why is that? Because the epics have deadlines: we don't expect to work on one thing at a certain time, and conversely we do expect to work on something else at another time. To make it clear, the individual epic burn-downs look like this: we expect the upstream release to finish early and the downstream release to start a little later. All right, so one task has been finished, and once a task is finished, it already makes sense to talk about the velocity of the sprint, or in the sprint.
Because one task has been accomplished, we can see that the team, during the time the task was in progress, had a velocity equal to the cost of the task divided by how long it took. That is the measured velocity at that time. I continue with the upstream release: 1.5 story points, one story point, and the upstream release is done. Again we see something happening in this kind of burn-down, and we also see a velocity increment around the upstream release. Because we now have some data about the velocity and some data about the size of the remaining tasks, we can estimate when everything will be completed. Right now, if the team continues like this, it looks like they will be done somewhere after the second week, and the 95% confidence bound is before the sprint is supposed to end. This vertical line marks the current day of the simulation. I will just click in some numbers here to leave time for questions; I will put two everywhere, hop, hop, hop, and the team has delivered everything. They did it fast, because I put in twos, and there were three weeks for a total cost of about 18 story points, so that was very comfortable for the team. But this chart gives you information. Imagine that instead of two epics there are ten: the number of tasks in the sprint, which might be not even a month but three months long, can be quite big. This chart gives you an overview of whether things are in progress and getting done or not. The velocity plot, of course, looks like this when the team works on only one task at a time; not a very pretty plot, so to say. And technically it's possible to compute the estimated completion from the data. All right, you have seen something, so I guess that's pretty good. Better to give you a couple of minutes for questions, if you have some.
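The velocity idea above can be sketched in two lines of arithmetic. This is a deliberately naive projection (remaining work divided by measured pace) with hypothetical numbers; the tool's actual completion estimate is statistical, with a 95% confidence bound, which this sketch does not attempt to reproduce.

```python
# Measured velocity: cost of finished work divided by elapsed time.
finished_points = 5.0   # hypothetical cost of the completed task(s)
elapsed_days = 4.0      # hypothetical time the work was in progress
velocity = finished_points / elapsed_days   # story points per day

# Naive completion projection: remaining cost at the current pace.
remaining_points = 13.0
days_to_finish = remaining_points / velocity
print(days_to_finish)   # more days needed if the team continues like this
```

The real chart refines this by combining the velocity data with the three-point size estimates of the remaining tasks, which is what produces an interval rather than a single projected date.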
The question was: we have been using this for a couple of months, which is roughly correct, and have we been able to validate that the estimations match reality? Since we use story points, the only thing that really has to match is the proportions: you measure capacity from previous executions, so you only need to be sure that you estimate consistently, that a task which is two times more difficult gets roughly two times more story points, and so on. In our team's concrete case there is definitely room to improve: one quarter is a little better and the next a little worse, which is usually a symptom that things are not clear. We have also made other mistakes; for example, the composition of the epics didn't really match the execution or make collaboration easy. So we are not even in a position to conduct a validation and hope it would give a positive result; at the moment we are mainly removing the big problems, if that roughly answers the question. Yes, it's a prototype. What are the outcomes of using the tool, to summarize? We definitely found that it has value during our secondary iterations. Our primary iterations are long, but every two weeks we reevaluate; there are no deliverables then, so it's mainly an internal event. In those secondary iterations, this tool undeniably provides a good overview of what is going on and whether there are problems that should be addressed. That is what really works. It's not so much about the three-point estimations; it's more about these charts, taking a look and saying, "This doesn't look so good." Mainly decision support.
So the outcome is that it provides decision support for those interested in completing the work. And also, as I mentioned at the beginning, there is probably a little less friction during the estimation process, and we hope we will be able to use the completion projection to see whether we are actually slipping or not. We are out of time, technically. I'll be around; the slides will be online, and the tool is open source and can connect to Jira and so on, so you will know what to follow. Thank you for your attention, and have a great rest of the conference.