I'm an agile coach with Rally. Two or three times a year all the coaches get together, where we share our learning and our knowledge, brainstorm ideas and share problems. Last summer at one of these gatherings, one of the coaches was talking about the fact that she was going to have to deliver an introduction to Waterfall. She wasn't quite sure how she was going to teach this as an agile coach. It was lunch time, we weren't taking it that seriously, and I came up with this great presentation: I suggested all she has to do is stand up in front of the audience and say, "no, no". And then we put together a little slide deck which had the word "no" in lots of different colours and fonts. And then one of my colleagues found this video and said, just play them the video.

That's relevant to this talk because actually, the way I think about Kanban, you don't have to say no. What I'm going to talk about this afternoon is some ways to think about your process so that the choices you make are just that: choices. They're not right choices and they're not wrong choices. There's no best practice, and no dial that you turn up to 11 because that's clearly the right thing to do. They're levers. This is where the idea and the title of the talk, leverage points, comes from. Sometimes you might want to pull a lever back a bit, sometimes you want to push a lever forward a bit, and the combination of levers is what makes the whole thing work.

I'm Karl Scotland. As I said, I'm a coach with Rally. This was me a long time ago, in a former life, when I was at the BBC. It's a reminder of when I started getting to grips with agile. It was a good playground for me: I got to experiment with different ways of doing things, with XP and Scrum and DSDM. And one of the projects, well, it wasn't really a project, it was a product and an ongoing stream of work.
Before I knew that that was a good way of doing things, we won a BAFTA, so I'm very proud of that. That's me on that side, just in case you couldn't tell. That was my project manager on this side.

Kanban. Who knows what kanban means? Signboard. Literally, correct. Kenji's not in the room, is he? No. This was actually written on the back of Kenji Hiranabe's business card. He gave it to me back in 2007 and I've kept it to this day, partly because it reminds me how to write it in Japanese. So that's the literal translation from the Japanese: signboard.

When we're talking about Kanban in software, what do we mean? Anyone want to have a guess? So: we are using visual techniques. We are trying to create a pull system. There was another answer. Is it just in time? Just in time, yes. That's the element of a pull system. I think of it as a way of designing a system. Kanban was kind of the name that Taiichi Ohno gave to the mechanism at the heart of the Toyota Production System, the thing that brought it all together, the central pillar. In software we're not trying to copy that. Or at least, I'm not trying to copy that with Kanban in software. We're not trying to take this thing from a car manufacturing process and replicate it. The way I think about it is that we're taking the thinking and the mindset that led Taiichi Ohno and Toyota to come up with their implementation of a kanban system, and using that thinking to come up with our own implementations of kanban systems for software. We're not copying the implementation; we're copying the thinking behind it. So for me it's very much kanban thinking. This is what I call the Kanban Thinking model. We're going to focus in this talk on this side, but I'll just give you the overview for some context. It's a systems thinking approach. That's what Ohno was doing, taking inspiration from people like Deming: look at the whole system, optimise the whole.
That means understanding what we want the system to do, understanding what the boundaries of the system are, and thinking about the whole system rather than sub-optimising the little bits of it. So that's the overall approach, and it also means recognising what type of system we have, which is where some of the differences between manufacturing and software come in. Manufacturing is generally a repeatable process; it's a complicated process. In software, in most cases, we're dealing with more complex problems, so we need to take different approaches. It means there's probably no right approach. Certainly no best practice; probably some good practice; but there's a whole lot of emerging practices out there that we need to figure out for ourselves. And if we're going to figure out these practices for ourselves, because some of the good practices might not be appropriate, we need to understand what makes a practice good and what doesn't. That's the stuff on the other side, the far side from me: the impact. When we're making changes to this system, we need to know whether those changes are having a positive or a negative impact.

I think about three types of impact. The first is flow. That's the process-related stuff. How well is the work moving through the system? How quickly are we developing stuff? Are there any delays? If we have good flow, it probably means we have a good process. Equally, just having a good process isn't going to be enough, because we actually want to be developing good, valuable work. At the end of the day, whatever process we have, whatever kanban system we design, it needs to be one which delivers value. If it's just an optimised process that delivers crap, that's not really going to help much. So we need to have a good process, we need to be delivering good product, and then the thing in the middle is what I call potential.
We need to be developing and continually evolving the system to meet its potential, because we're never going to have the perfect system. There won't be a perfect system, and even if we do get to one, things will change and it will be imperfect again. It's not just about solving a problem, it's about creating problem solvers. So I think of potential as very much looking at the people. You could talk about flow being about the process, value being about the product, and potential being about the people. I'll come back to how we can know whether we're having that impact.

So we want to design this system, we want to improve our system, we want our system to have an impact, and we have some idea about what impact we want to have. But I've just said there's no best practice, no single thing that you just do. This is where the idea of heuristics comes in. A heuristic is an experimental approach to solving a problem. It's used in science when you don't know what the answer is: you apply heuristics to figure out what the answer is. We don't know what the answer is for process. We can't just take a process out of a book and make it work, but I think there are some heuristics we can apply to guide us in figuring out the process. What we're going to do is go through these heuristics: studying the system, sharing our understanding of the system, putting limits and policies in place, getting a sense of the current capability of the system, and starting to explore and evolve the system.

We'll start with study. I think of studying as being like wallpaper archaeology. Does that mean anything to you? Do you have wallpaper? This is maybe a Western thing, or might even be a British thing, I don't know. You build your house, you put up a wall, and then you put paper on the wall to make it look nice, a pretty pattern on the wall.
What tends to happen is you get bored of that paper, you want to put some different paper up, but you can't be bothered peeling the last paper off, so you just put the new paper over the top of that one, and then the next new paper on top of that one. At some point you decide you've just got too many layers of paper, or you move into a new house and say, we're going to start from scratch, and you start peeling the wallpaper back, and you have no idea what's underneath. Typically you end up with something like this, where you've peeled back this layer and you find another layer underneath it, and then you peel that layer back and find another layer underneath that, and then, certainly in the houses I seem to have ended up buying, you get down to the actual wall and it's crumbly plaster, and it's a right mess, and you have to rebuild the wall before you put up new wallpaper.

That's the analogy I use for what happens when I start studying software processes. You go in thinking you know what you're going to find, you start peeling back the layers, you find other stuff, and then you get down to the bottom and it's a mess, and you have to rebuild from there, getting a new process and a better way of working on top of that. Craig mentioned this in the keynote this morning. I can never pronounce the Japanese terms, so I'll try to avoid using them; I think of it as going to the scene of the crime. In the detective shows you have on TV, the detective doesn't just sit in his office and say, send me the crime report, I'll flick through it and tell you who committed the crime. They have to go to the scene of the crime, look at the evidence, and figure out what actually happened from there. So we need to actually go to the scene of the crime: both where the software is being developed and where the software is being used.
There are two scenes of the crime with software. If we just go to where the software is being developed, that doesn't help us understand where that software is valued. To figure out where the software is valuable, we actually need to go and see our customers, see the end users, figure out how they're using it, maybe how they're using the current software. What problem are we trying to solve?

The first thing we want to study is empathy. This is an idea I really picked up from design thinking. This is the design thinking process, and it says you start with empathy. Start with having empathy for your user, for your customer, to understand what they're going through, and then you're more likely to be able to solve their problems. At the d.school, where this came from, there are two guys, twin brothers, George Kembel and, I can never remember the other one's name, the Kembel brothers. They came and did a talk at Rally last year and told this great story that stuck with me. They went to work with the people who develop MRI scanners, those big things where you go into the tube and have to be really still. It's really claustrophobic, and they take a kind of brain scan or body scan. At the hospital they visited, small children were having to have MRI scans, and they watched the petrified fear these children went through as they were taken into this big sterile room with a huge metal monstrosity with a small hole, got shoved into the hole, and then everybody left the room, because it was so dangerous to stay in the room, leaving the kids there on their own. The kids hated it. So they asked, how can we have more empathy for the children so that this isn't such a terrifying experience for them?
And what they did was redesign it so that the room was made out like a jungle and the MRI scanner was like a tent, and they told a whole story around it: you're going to go into the jungle, you're going to be really brave and spend a night on your own in this tent, and now we're going to have to leave the room because there are wild animals out there, but we'll be watching you in case any wild animals come. They made an adventure out of it. They did this, they went back, and the kids loved it. Suddenly this terrifying experience was something that was actually quite fun and more enjoyable. The product itself, the MRI scanner, was functionally the same, but having empathy changed some little design things. So starting with empathy, studying empathy, means that the thing we're building is at least more likely to be valuable.

That brings us on to studying demand. Our customers come to us with requests; they place demands on us as development teams or development organisations. Quite often we just take that demand and respond to it, without the right way of questioning or understanding it. This is a nice way of looking at demand: it's a profile of customer demand. I picked it up from a guy called Stephen Parry, who has written about it in a book called Sense and Respond, which I highly recommend. It's very much lean thinking, looking at designing processes from the customer's point of view, understanding customer purpose. It says there are four types of demand. There's value creation, where we're actually creating new value now for the customer, and those are the sorts of things we want to optimise: we know what value we're creating and we can optimise the process. Then there are things where there is the opportunity to create value in the future, and this is more an innovation space. So we need to treat creation value and opportunity value slightly differently.
And then there's remedial value: value that has been removed and that we're having to put back in. Defects. Customer support demand. The thing that doesn't work anymore. We want to try and remove this demand. This is why we want to focus on quality, because if we focus on quality we'll have less remedial demand. Optimising for remedial demand isn't the right thing to do; you want to remove it. And then there's external demand: demand coming in from outside that's not really to do with your customer. You're not really innovating, it's not to do with defects, it's more about dealing with partners. Quite often we need to rethink that relationship, often moving from third parties to true partnership, where you're working better together.

Understanding and profiling the demand: the top two are generally called value demand, the things that are valuable. The bottom two are sometimes called failure demand, and I'm with Stephen in that he doesn't like the term failure demand, because it feels a bit judgmental, as if you failed. Quite often nobody has failed; there's just an opportunity for improvement somewhere. Understanding demand is about designing your kanban system, designing your process, to better meet that demand, or to actually try and get rid of some of it.

Then we're on to studying the flow of the work. Each type of demand will have a natural workflow. This is where we get into waterfall versus agile: waterfall is very phase-based and agile is not. Should we be doing this sort of value stream mapping? I think it's useful to do. I drew this diagram because, for me, it shows that while we have these dominant stages in the process, they're not handovers. We're not talking about phase-gate flow here. We're just saying that at the start, something is more of an idea. That's what I call incubation: we're incubating an idea.
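Going back to the demand profile for a moment: one lightweight way to study demand is simply to tag each incoming request with one of the four types and watch the mix over time. The sketch below is purely illustrative; the type names follow Parry's model, but the `Request` shape and sample data are invented.

```python
from collections import Counter
from dataclasses import dataclass

# The four demand types from Stephen Parry's profile (illustrative tags).
VALUE_DEMAND = {"creation", "opportunity"}   # demand we want to optimise
NON_VALUE_DEMAND = {"remedial", "external"}  # demand we want to reduce or rethink

@dataclass
class Request:
    title: str
    demand_type: str  # "creation" | "opportunity" | "remedial" | "external"

def demand_profile(requests):
    """Tally requests by demand type and report the value-demand ratio."""
    counts = Counter(r.demand_type for r in requests)
    total = sum(counts.values())
    value = sum(counts[t] for t in VALUE_DEMAND)
    return counts, value / total if total else 0.0

requests = [
    Request("New checkout flow", "creation"),
    Request("Explore voice UI", "opportunity"),
    Request("Fix login crash", "remedial"),
    Request("Partner API change", "external"),
    Request("Fix broken report", "remedial"),
]

counts, ratio = demand_profile(requests)
print(counts)                        # Counter({'remedial': 2, 'creation': 1, ...})
print(f"{ratio:.0%} value demand")   # 40% value demand
```

A rising remedial count is the signal to invest in quality rather than to optimise the remedial workflow.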
Then we take that idea and start fleshing it out a bit. We're not building it yet; we're just talking a little bit more about what we're doing. Then we start instantiating it: that's when we're actually building something. Then we want to demonstrate it at the end: here's something we built, let's demonstrate it. Have we built the right thing? Have we built the thing right? And then we liquidate it, which is when we hopefully get our return on investment. I came up with incubate, illustrate, instantiate, demonstrate, liquidate to try and move away a little bit from the typical requirements, analysis, build, test labels, because that's essentially what those things are. We're just not saying we're going to do all the analysis, then all the building, then all the testing. We're doing it all at the same time, but at the beginning we're doing a little bit more of the analysis stuff, then it's predominantly coding, then predominantly testing, then predominantly releasing. Understanding where along this line you are is useful, and it's going to help us understand how the work is flowing through the system.

That leads us to things like value stream mapping, where we have our demand and these predominant phases. Sometimes I like to think about this as knowledge acquisition: what's the dominant form of knowledge acquisition? That's really what we're doing when we're doing analysis: trying to get knowledge about what to build. When we're actually building it, we're trying to get knowledge about how to build it. When we're testing, we're trying to get knowledge about whether we built the right thing. What you're trying to do is understand this knowledge transformation. Are there any other coordination points along this flow? What are the feedback loops?
Where do we take a big piece of work and expand it into lots of smaller pieces of work, and then take those smaller pieces and collapse them back into the bigger piece? That's usually analysis and then integration. Where do we have delays in the system? Where does the work queue up? Where do we batch things together because we think it's more optimal to process them together? Where do we have hand-offs? Where do we have key decision points? These are all the sorts of things you might look for. You end up with something like this. It's blurry just because it's from a workshop I did, where people just brainstormed activities: what things do you do when you're working? Then we joined up the inputs and the outputs. I don't think I used colour on this one, but sometimes I'll use colour for the different groups, so you can see where the different groups work together and where they work separately. This is more of what's called a sense-making approach to value streams, I think. We're not just creating categorising boxes; we're saying, let's put everything up there and then make sense of it. Once we've made sense of it, we can start figuring out how we want to share it, how we want to use it, and what structure to create around it.

So you've studied your context. Do you understand your customer? Do you understand the demand that customer is placing on you? How does that work flow? Unless we understand the current system, unless we understand where we are now, we can't really know the best way to improve it. So you really do need to study where you are now. That's why the first heuristic is studying the system.

Once we've got that understanding, we want to share it. This is where visualisation and visible cards come into the picture. This is a picture from the Battle of Britain War Rooms in London. When Britain was fighting the Battle of Britain, they had this big room.
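Those questions about delays and queues become answerable once you record when work enters and leaves each state. One simple way to express the answer is flow efficiency: the fraction of elapsed time a piece of work was actually being worked on rather than waiting. This is just a sketch; the state names and dates are invented.

```python
from datetime import datetime

# States where the work is waiting rather than being worked on (illustrative).
WAIT_STATES = {"backlog", "waiting-for-signoff", "ready-to-test"}

def flow_efficiency(history, finished_at):
    """Fraction of elapsed time the item was actively worked on.

    history: list of (state, entered_at) pairs in chronological order.
    """
    touch = wait = 0.0
    transitions = history[1:] + [("done", finished_at)]
    for (state, entered), (_, left) in zip(history, transitions):
        seconds = (left - entered).total_seconds()
        if state in WAIT_STATES:
            wait += seconds
        else:
            touch += seconds
    return touch / (touch + wait)

history = [
    ("backlog",             datetime(2013, 6, 1)),   # 4 days waiting
    ("analysis",            datetime(2013, 6, 5)),   # 2 days touch
    ("waiting-for-signoff", datetime(2013, 6, 7)),   # 3 days waiting
    ("build",               datetime(2013, 6, 10)),  # 2 days touch
]
print(f"{flow_efficiency(history, datetime(2013, 6, 12)):.0%}")  # 36%
```

A low number like this is typical; most of a work item's life is usually spent in queues, which is exactly what value stream mapping tries to expose.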
Probably the size of this room, maybe a little bit longer. Tables all along the edges with telephones, and a big table in the middle with all these markers on it, and little rakes so they could move them. As the calls came in about what was happening in the actual battle, people around the table would move these markers around. They were using visual management to manage the battle. And we won, fortunately for me. I like to hypothesise about what would have happened if the Battle of Britain had been fought with a Microsoft Word document on a SharePoint site with change control. Maybe not so successful. Working for Rally, obviously I'm not knocking tools completely; but those tools need to bring the information out rather than hide it away.

The reason visual management is so powerful is actually neurological; it's the way our brains work. Our brain spends 50% of its time doing visual processing. There's a great experiment around wine tasters in France. Any expert wine tasters here? No? Then you'll just have to trust me; I'm no expert. Apparently, when wine tasters describe red wine they use different language, different vocabulary, than they use for white wine. So they got all these wine tasters and gave them some wine, and into some of the white wine they put an odourless, flavourless red dye. It still smelled like white wine and tasted like white wine, but it looked like red wine. Then they said, describe these wines. In the vast majority of cases, these white wines that had been made to look like red wines got described as red wines, using red wine vocabulary, just because we assume that what we see is true. It looks like a red wine, it must be a red wine, therefore I'll use red wine vocabulary. I've heard of another version of the experiment, which I've not had a chance to research, where they did the same thing with non-expert wine tasters.
They weren't fooled so much. With pattern recognition and training, you know what you're doing, so you stick with it. The other interesting thing around visualisation is the fact that we are all constantly hallucinating. A little experiment: put a finger on your nose, close your left eye, then open it and close your right eye instead, and keep alternating, blinking between the two. What you should see is your finger appearing to jump from side to side. See that? Our brain is taking in two completely different images from our two eyes and creating a mash-up of them, a single image: a hallucination of what it thinks we should be seeing based on those two images. That's why it's doing so much processing. If we're hallucinating, we at least want to make sure we're all having the same hallucination; that's the way I like to think about it. So let's create a nice single visualisation that we can all share. We're trying to share understanding, and we're trying to make sure we can all see the same thing.

There's also some science showing that we get more meaning from interaction, so we want a kanban board to be interactive. Even as somebody who works for a tool vendor, I still like physical cards and being able to move them around. I also want to make sure the board is readable. One of the anti-patterns I heard of was a guy who created a kanban board, put it in his office, and you had to make an appointment with his secretary to go and look at the board and change it. Not very visible, not very interactive, not readable, not sharing the understanding. We're trying to get a common mental model, so we all understand the same thing that's going on, so that we can understand how the work is flowing.
The typical pattern is that you have columns for the dominant phases in your workflow, and you want the work to be flowing across. If you can see the work and see how it's moving, you get a feel for how the work is flowing, because really our goal is to get the work across the board. So we're sharing what work we're doing and how well it's flowing, and then you can start populating the board with the work. In the same way as when I'm doing value stream mapping, sometimes I'll create boards where I don't draw the columns at all. Just put your work up, everything that's in progress, anything you're working on, put it on the board, and let's start doing some affinity mapping, grouping things together: these are all in this sort of state, these are all waiting for Joe to fix things, these ones are waiting for sign-off. Suddenly the right things to visualise emerge, rather than us trying to predict them.

Here are some little visualisation tips. I think of boards as being about the tokens that are on the board, the inscriptions that are on the tokens, and the placement of the tokens. The tokens can be any sort of material, any size, any colour, any shape, and the material, size, colour and shape can all mean different things. Similarly, we can write annotations on them, use graphics or avatars, have some kind of linkage into a document or into your electronic system. Use formatting so that when you look at the board, it's readable. And then there's where on the board the cards are placed. Typically, if you've got columns, that's alignment: you're saying that if these cards are aligned in this column, there's some meaning behind that, and if the cards are aligned in a swim lane, there's meaning behind that. Rotation: one of my favourite little tricks for knowing that a card has moved is to rotate it from landscape to portrait.
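If you wanted to model a board electronically, the token, inscription and placement split maps quite naturally onto a small data structure. This is purely my own sketch, not any tool's schema; every field name here is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Card:
    # Token: the physical properties of the card itself.
    colour: str = "yellow"      # e.g. one colour per demand type
    shape: str = "rectangle"
    # Inscription: what's written or drawn on the token.
    title: str = ""
    avatar: str = ""            # who's working on it
    annotations: list = field(default_factory=list)  # e.g. one dot per day in column
    # Placement: where the token sits on the board.
    column: str = "incubate"
    swimlane: str = "default"
    rotated: bool = False       # portrait = "has moved since the last stand-up"

card = Card(title="Search results page", colour="pink", avatar="KS",
            column="instantiate")
card.rotated = True            # flag it as moved, ready for the stand-up conversation
card.annotations.append("day-1")
print(card.column, card.rotated)  # instantiate True
```

The point of the split is that each dimension can carry independent meaning, so you can layer information onto a board without cluttering it.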
So when you have your stand-up meeting, you can say, let's talk about all the work that's moved, and we can identify the work that's moved because it's been rotated, and then we rotate it back. Or: is there anything that hasn't moved that should have moved? That's maybe a more interesting conversation. Is there anything that hasn't moved for a week? That's the work that's not flowing. There are other ways of identifying that: you could put a dot on a card for every day it sits in a column, and that's using a form of annotation. So there are some little patterns there that you can use to help you create visualisations. There's no one right way; this is almost a sub-heuristic within sharing. Be creative, that's what I say to teams. Be creative, have some fun with your board designs. I've seen one board design that was a target, and instead of cards moving left to right, they moved from the outside in. I've seen one where they created a nice little wavy road and the cards went down the road. The more fun you have, the more likely you are to use it.

This is another example. You can't really see it that well, but you've got your swim lanes, you've got your columns, and they're using colour to indicate things. You'll notice here that they created this line and then realised they didn't need it. So don't just design a board and stick to it: your board will always evolve. Typically, when I go and work with an organisation and I've finished the first kanban system design and come up with a board, I'll say, I'm going to be coming back in a few weeks and I expect your board to have changed. That first time, you just need to try things out and experiment. And this was the next thing they were playing around with: given what we've learnt from this board design, how might we do it differently, better? Start sketching things out. So we've studied our system, we've started sharing our understanding, so hopefully you're all
getting a common mental model. Now we're on to limiting work in process, and this is the bit that really makes it a kanban system, because as we limit work in process, hopefully we're going to improve the flow of the work. Again, there's neuroscience behind why we want to limit work in process. As human beings, we can't multitask when it comes to paying attention. We just can't. We think we can, but we can't. I want to run a little experiment to try and prove that, so can I have a volunteer on this side of the room and a volunteer on that side of the room? First person to stand up wins. Okay, great. Someone down this side, who wants to volunteer on this side? Excellent. Just work your way out to the end. I want you to walk up and down the side of the room, between the speaker and the back of the room, at a reasonable pace, without wearing yourself out. Okay, keep going. I'm going to give you some multiplication sums, and I want you to do them, shouting out the answer while you're walking. Keep walking. Two times three? Six. Four times seven? Keep walking. Twelve times six? 72. Thirteen times eleven? Keep walking. Eighteen times fourteen? Keep walking. 18 times 14 is 252, I'll give you that one. 23 times 31? 713. 57 times 81? Do any of these in your head? So, you've not stopped walking. Quite often when I do this, people are so busy concentrating on doing the sums that they stop walking to calculate, or they go through this thing where they stop walking, then give up and just carry on walking because they're not even trying anymore. I'm not going to get into that one. All right, thank you, a round of applause for volunteering.

The other scientific research on this is about people talking on their mobile phones while driving. The research shows that the accident rate for people talking on their mobile phones is higher than the accident rate for drunk drivers. I think extremely drunk drivers are higher, but if you're just
taking average drunk driving, you're more likely to have an accident talking on your phone while driving, because you're trying to multitask, and you can't pay attention to both things at the same time. I try not to give advice, but at this point: if you're being expected to multitask at work, you might as well be drunk at work. Now, I'm not saying go and start drinking at work.

The kanban system, and putting the limits in place, creates a pull system. What I mean by a pull system is that you start work when you're ready to start it, not when somebody else wants you to start it. There's very much a signalling of: I'm ready for it, I have the capacity to take on the work. The way work-in-process limits do this is: we've got limits of 3, 2, 2 and 4 on these columns here, and this guy can't push this card over here. He can't say, can you just work on this, please, and push it, because that would break the limit of 2. He has to move it back and wait for this guy to finish that, or for this guy to pull that, which frees up some capacity in this column, and then we can move that across. So the information is actually flowing this way, while the cards move that way. And if you look at kanban in manufacturing, the kanban tokens actually flow backwards: they're the information saying, I need some new work. So even though in software the cards flow this way, and in manufacturing the tokens flow that way, it's just because we're using the cards slightly differently.

Question: you have a work limit at a particular workstation, so if it's more than one, doesn't that mean you're allowing multitasking? Well, it depends on the size of the team, and this doesn't show how many people are able to do that work. The key point worth noting is that these limits aren't necessarily representing people; they're representing the flow of the work. The whole team could still work across this. This is a bit of a contrived example to show you, but if
you've got work limits that high, it probably means you've got a reasonably large team able to do that work. As for figuring out what those work limits should be, there's no magic formula, but there are some heuristics you can use. If you're doing pair programming, your limit might be half the team size, or half the number of people who can do that work. If you're not pair programming, it might be the same number, or you might make it slightly higher, because we know work gets blocked. Really, you're just going to pick something. Some people start high, start with what you have now, and bring it down. Some people like to start with the extreme case, start really low, and bring it up. When we get on to sensing and exploring, what you really want to do is start being able to measure and understand: if you can measure the flow, you can start adjusting your work-in-process limits and figure out whether the flow is improving or degrading with whatever limits you have.

Comment from the audience: this is a simplistic example; if you break the work down, you'd see how many people are working on each item, perhaps a fraction like 0.5, 0.2 or 0.3 against each task, so a limit of two or three at a high level could mean a few people. Yes, this is a simplistic example; in practice you'd probably do something different. You're breaking features down into user stories, and stories down into tasks, so these limits might be at the task level of a feature. In this example I'm not saying what's on the cards. What I could do is have a swim lane per feature, say we're only going to work on two features at a time, and limit the tasks within those features. [Unintelligible.]
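The pull mechanics described a moment ago, where a card can only move into a column that has spare capacity, can be sketched as a toy model. None of this is any tool's API; the column names and the 3, 2, 2, 4 limits just echo the example from the talk.

```python
class Board:
    def __init__(self, limits):
        # limits: mapping of column name -> WIP limit
        self.limits = limits
        self.columns = {name: [] for name in limits}

    def pull(self, card, into):
        """Pull a card into a column; refuse if that would break the WIP limit."""
        if len(self.columns[into]) >= self.limits[into]:
            return False  # no capacity: the signal is "don't start new work here"
        # Remove the card from wherever it currently sits, then place it.
        for cards in self.columns.values():
            if card in cards:
                cards.remove(card)
        self.columns[into].append(card)
        return True

board = Board({"illustrate": 3, "instantiate": 2, "demonstrate": 2, "liquidate": 4})
assert board.pull("A", "instantiate")
assert board.pull("B", "instantiate")
assert not board.pull("C", "instantiate")  # limit of 2 reached: C must wait
assert board.pull("A", "demonstrate")      # A moves on, freeing capacity...
assert board.pull("C", "instantiate")      # ...and only now can C be pulled
```

The refusal in `pull` is the backwards-flowing information: downstream finishing work is what signals upstream that it may start something new.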
[unintelligible] A question that often gets put to me is whether every stage should have the same WIP limit. Not necessarily, because this stage might take a lot longer than that stage, so it might make sense to have more stuff in it; that stage, because things are going through fairly quickly, you'll have less stuff in it. You do want to have smooth flow, and this is where you're going to find out that you don't have smooth flow. This is how you find out where you have a constraint or a bottleneck in your system, because you get something like this: I've got two in here, and this work is blocked here; I've got nothing downstream here, and everything else is backed up behind it. So what we want the team to do is swarm on the problem. Instead of starting more work up here to keep busy, we want them to be figuring out how they can help to move this work. [unintelligible]
[unintelligible] One thing you notice when you start tracking how long work takes: you'd have thought that big pieces of work take longer and smaller pieces of work take less time, but what actually seems to happen is that the big pieces of work get done quite quickly. My hypothesis is that we see a big piece of work, think, that's big, we should focus on that, and it gets done quite quickly. [unintelligible] It creates some natural slack in the system, instead of having to schedule some slack, and that's potentially a good thing: you don't get burnt out, or you can work at the same speed and help out. There's a people element here, of course; we can assume people are going to do the wrong thing and use that slack to slack off. [unintelligible]
[unintelligible] I'd probably put a limit on that, because if we start getting lots of work blocked, that needs to be a signal that we need to figure something out. So by setting the work limits, we're trying to focus on finishing work. Stop starting, start finishing.
Because we have to finish work in order for that signal to come back up and tell us that we now have capacity to start more work. Setting the work limits helps us to get the work flowing and helps us to focus our attention on a small amount of work, so we're not context switching all the time. So we can start creating a better implementation, a better process design. And then we put policies in place. Work-in-process limits are actually just a special form of policy. By policy I mean: this is how we treat these situations, this is how we treat work. So you get things like a definition of done per column: absolute clarity about what it means for work to move from here to here. And then, as you're saying, if we find a problem down here, and that's happening a few times, we can revisit our policies. Is there anything we need to change about the way we work, so that we're not getting these unnecessary feedback cycles that we could have identified earlier and solved? That gives us a way of improving our process. And you might have policies per swim lane. So this is policies by workflow; this is policies by demand type. Different demands might have different processes. For some types of work you might say it's okay to skip QA, or it's okay for us to do a deployment ourselves; this type of work is high risk, so we need to do more testing on it. Again, we're not treating all our work as equal. By understanding our flow, from studying the work, we can now build processes and set policies per type of work.

Can you give me an example of a demand policy? Of demand? So, very small changes to an existing website might require less testing, because you're just making small changes. Or it may be that for small changes you're okay to allow the development team to deploy themselves, whereas a big new feature, a big new initiative, you might say needs more testing.
And that might need to be deployed by a separate production team at first, to manage the new risk. And then once we're happy that the risk has gone away, it moves from being a new initiative to being a smaller, routine piece of work. So you're effectively managing risk. Okay, I'm going to skip over this. The penny game is a really good exercise for getting a feel for setting work-in-process limits, reducing the batch size, and getting a feel for flow and the benefit you get from that.

I'll move into sensing. We're trying to get a feel for what the current capability is. If we want to improve it, we need to know how good we are now. So it's like the car dashboard. When you're driving, you've got all these information radiators: your mileage, how much petrol you've got left. You don't fill your car with petrol and go, well, I know a tank full of petrol gives me 300 miles, I'll just drive 300 miles and fill up again; you're not going to keep track of that. You have your dashboard in there, and you're getting these real-time metrics. I want to get that same sense of really understanding how well things are working, so that if unexpected things happen, if suddenly our fuel consumption goes up, we can respond and be ready for it.

The way I want to do this is through cadence. The way I think about the time box and the iteration in something like Scrum is that it's a really simple cadence. It's a metronomic cadence where everything just happens on the same beat. And that's the beauty of it: it's really simple to implement. Every two weeks you do your planning, or you do your review, your retrospective, your planning, all at the same time. Two weeks later, you do it again. Once you start understanding that that's a cadence, and you start decoupling those cadences, you maybe get a bit more flexibility.
So you might say that we're going to reprioritise, to figure out what we should be doing next, more frequently than we actually review what we've done. We'll retrospect on another cadence. We'll release on a longer cadence. We still get that sense of rhythm; that's what the time box gives us. It gives us a sense of rhythm, so we know what we can do every two weeks, and we can start learning from that. If we're prioritising every week, then we still get a good feeling for how much to prioritise every week. If we're reviewing every two weeks, or maybe every three weeks, we know we're going to get together, it's in the calendar, and the teams know they're going to have to demo stuff every three weeks. It doesn't have to be the same time period. And then we can start decoupling the size of the work items. It might be okay to work on a longer work item that spans iterations, knowing that we're still going to review it: we'll miss the first review, but we'll get it into the second review. A cadence still gives us that sense of rhythm, that sense of natural progress. It's a little bit more complicated to set up, maybe. But sometimes doing everything on one metronomic cadence is just too constrained, and that flexibility can be useful.

That also then allows us to start generating metrics, and it's the metrics that let us know how good we are. Coming back to Stephen Parry's Sense and Respond, this is another idea from him: he recommends measuring things that are related to customer purpose. He has this nice little grid, and I've started doing this. Ask yourself, ask your customers, ask your business: what do you measure? And then you put it on this matrix. Is it an end-to-end measure? Does it measure something from the customer request through to the customer getting that value?
Or are we just measuring functional things, functional silos, organisational separation? Does the measure actually matter to the customer: does the customer care, yes or no? We want to be getting the best metrics up in this quadrant, where they're end-to-end and they really matter to the customer; they relate to the customer's purpose, which we figure out when we do the studying. If we're down here, we're just measuring functional stuff that doesn't matter to the customer, and that's not going to help us. So the first thing, when you're doing the sensing and getting a feel for how good you are, is to make sure you're measuring stuff that's related to customer purpose. That's ultimately what we want to do: help your customer with their purpose.

That brings us back to impact. I talked about flow, value and potential. These feed into what we're developing, what we're calling the performance index framework: a nice, simple, standard way of measuring organisations, to help us figure out whether they're doing the right thing. We talk about predictability: how predictable are we? The more predictable we are, hopefully the happier our customers are going to be. How productive are we? The more productive we are, the more stuff we're doing for our customers. How responsive are we? When a customer asks for something, how quickly can we give them something of value and solve their problem? How satisfied are our customers? What's the level of quality? How satisfied are our employees? We could create a great system that delivers lots of value where all we're doing is burning our employees out, and that's not going to be good for us. We reckon that these things are not exclusive; there's a relationship between them, because if we don't get quality right, our customer satisfaction is probably going to go down. Equally, we can become more responsive at the cost of our predictability; if we want to be more predictable, we might need to be less responsive.
There's a simple mapping of those things to flow, value and potential. These things here are what we call outcomes. We have what we call ODIM. It's ODIM because we start with outcomes: we say that better measurements, the right measurements, will give us better insights into what we're doing right and wrong, which means we can make better decisions about what we need to change or do more of, which will lead to better outcomes. These six things are the outcomes. I sometimes add impact at the bottom: we want better outcomes because we want to have better, more positive impacts. We don't want to start with a measurement, though. We need to start with understanding the impact and the outcomes, and then work backwards.

Just some examples. Responsiveness is typically cycle time; cycle time and lead time tend to get used interchangeably. This is the example of measuring small, medium, large and defect separately. We've got the sample size: we've got 95 items of small. The average is about eight, with a deviation. We're saying that on average, if we think something is small, it's going to take about eight days. That's our average responsiveness for the things we say are small, and similarly for medium, large and defect. By measuring those things, as we make changes to the system, we can start looking for trends: are we becoming more responsive or less responsive? Similarly, how predictable are we? Predictability comes from the range: the narrower the range of cycle time, the more predictable we are. We have a histogram here. Each dot is something that's been through the system on a particular date; you can see when it got done. Shorter cycle time down here, longer cycle time up here. We can see that the vast majority of them are down here, and then it tails off, and quite often what we see, though it doesn't show in this example, is a fat tail at the end.
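The size-class arithmetic just described can be sketched like this; the cycle-time samples below are invented for illustration (the team in the talk had 95 small items, which aren't shown):

```python
# Sketch of per-size responsiveness: average cycle time, its spread, and a
# percentile for quoting delivery times. Sample data is made up.
from statistics import mean, pstdev

small_cycle_times = [3, 5, 6, 7, 8, 8, 9, 10, 12, 12]   # days per "small" item

avg = mean(small_cycle_times)     # average responsiveness for "small" work
dev = pstdev(small_cycle_times)   # spread around that average

def percentile(samples, pct):
    """Nearest-rank percentile: pct% of past items finished within this time."""
    ordered = sorted(samples)
    k = max(0, round(pct / 100 * len(ordered)) - 1)
    return ordered[k]

print(avg)                                # 8  -> "a small takes ~8 days on average"
print(percentile(small_cycle_times, 80))  # 10 -> 80% of smalls were done within 10 days
```

The average is the talk's "50% accuracy" quote; adding the deviation on top (8 plus the spread) gives the more conservative number you might commit to.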
That helps us understand the percentiles: we can say that with roundabout 80% certainty, stuff will get done between 8 and 10 days. That's what that's telling us. By looking at the range of the data, we can start saying with what percentage predictability how long we think it's going to take to get work done. In that example, teams were just doing a quick guess, is this small, medium or large, up front, and then, after they'd measured the cycle time, they took all the smalls as a subset, all the mediums, and did the average and deviation on each.

So the size was the perceived small, medium or large? And then the cycle time? Yes, this was the perceived, what they thought was small, rather than in hindsight what turned out to be small. So they were using that information: as something new comes in, you can say, we think this is small; the customer says, how long is it going to take you? You can say, well, 8 days with 50% accuracy; that's the average. That's a simplistic way of doing it. And there's a standard deviation here, so if that's 8 plus 6, 14, there's probably a good chance we're going to get it done in 14 days. So we might be confident in committing, although we're not supposed to use that word, to 15 days, based on historical data. So if you're concerned about varying sizes of items: try it, but measure it, and see whether you actually do get significant differences. A lot of teams have done that and discovered that all the sizes look the same.

The other thing that some teams find useful is cumulative flow diagrams. This is a kind of exaggerated scenario just to show it, but these are our states in the workflow, and every day we're looking at the number of items in each state. If you get a bottleneck, what you're going to get is something like this, where you get a bulge. What you're looking for is for these bands to be nice and smooth. If you start getting bumps, you can ask: what happened there, where we got this bump?
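How a cumulative flow diagram is derived can be sketched like this (my own illustration; the states and daily snapshots are invented): each day, for each state, count the items that have reached that state or gone beyond it, and stack the counts as bands.

```python
# Sketch of building a cumulative flow diagram from daily board snapshots.
states = ["todo", "dev", "test", "done"]

snapshots = [                                   # state of each item, per day
    {"A": "todo", "B": "todo", "C": "todo"},
    {"A": "dev",  "B": "todo", "C": "todo"},
    {"A": "test", "B": "dev",  "C": "todo"},
    {"A": "done", "B": "test", "C": "dev"},
]

def cfd(snapshots, states):
    """One row per day; each entry is the cumulative count at that state."""
    rows = []
    for day in snapshots:
        rows.append([
            sum(states.index(v) >= states.index(s) for v in day.values())
            for s in states
        ])
    return rows

cfd(snapshots, states)
# the gap between two adjacent bands is the WIP in that state;
# a widening gap is the bulge that reveals a bottleneck
```

Plotting each column of those rows as a stacked band gives exactly the smooth-or-bulging picture described above.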
You can see here that there was this big period when nothing got moved or done; then you had a big bump and it levelled off. You can often see these steps when there's a consistent release cadence. It just gives you some more information about the process, about the trends in the way the work is flowing. Some teams find that once you get good work limits, and people sticking to them, that forces the bands to be narrow and you get less value from this; they're perhaps more useful for newer teams.

There's another game that I run called the Ball Flow Game, which generates some of these metrics; I've got some screenshots here. Is anyone familiar with the Ball Point Game that gets used in Scrum? Instead of getting as many balls as possible past the line in two minutes, I take 20 balls and say, let's measure how long it's going to take to do 20 balls. On a spreadsheet, I actually measure the cycle time of each ball. This is each ball from 1 to 20. The first ball goes through in 8 seconds, then we can see that each subsequent ball takes longer and longer, then it gets quicker and drops down at the end. You can see that showing up as this band getting wider, which is why this line is going up. You get the team standing around, and you have to pass the balls through every member of the team. You can't pass to the person next to you; it has to be in the air. You're self-organising a process to process all these balls. That's a very quick, simplistic explanation of it. What I do is gather metrics about how well the team is working, using cycle time and the kind of charts I'm showing. The aim is to work as a team, processing as many balls, as much work, as possible. Balls have to go between people in the air; you can't pass the ball to your direct neighbour. A ball counts as complete when it comes back to the start. One person plays the customer role: they feed a ball in, it gets processed, they get it back again. We're doing 20 balls.
I get teams to repeat this a number of times, looking at the metrics and using them to help understand what's working and what's not working. What this shows the team is how the process behaves as the game goes on. This is the balls that are not started, balls that are in the system, balls that are done. This band gets slightly wider, which correlates to the lead time going up. You get this plateau here, until suddenly everything declogs itself, which correlates to this drop in lead time here. This is how many balls get finished every 10 seconds; this spike here correlates to this drop here. We can look at these metrics, and start talking about what happened, why things happened, and why you might want to do something different.

Start to finish? It's from the point at which the customer says, can you start processing. It's not from the very start of the game; it's from the time the request is given to start working on this ball. It starts when it goes from red into green, and it ends when it goes from green into blue, if I've got my colours right. What normally happens, particularly the first time, is that the person feeding the balls in is pushing them into the system, the system gets clogged up, and people start dropping balls. For the first ball there's nothing in the system, so they can just concentrate on that one ball; the more balls are in there, the more confusion it causes, because they're crossing over, because you can't pass to your direct neighbour. When I get into exploring, we get into exploring changes to the system: the different experiments they ran, how the metrics changed, and how that helped them understand how to get to the right process. So we're into exploring. We're now into...
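The spreadsheet metrics from the game can be sketched like this (the timestamps are invented for illustration, not the talk's data): cycle time per ball, and throughput per 10-second window.

```python
# Sketch of the Ball Flow Game spreadsheet: per-ball cycle time, plus
# throughput bucketed into 10-second windows.
start_s  = [0, 1, 2, 3, 4, 5]        # second each ball was fed in
finish_s = [8, 12, 15, 19, 22, 24]   # second each ball came back to the start

# cycle time per ball: later balls take longer as the system clogs up
cycle = [f - s for s, f in zip(start_s, finish_s)]   # [8, 11, 13, 16, 18, 19]

def throughput(finish_times, bucket=10):
    """Number of balls finished in each `bucket`-second window."""
    counts = [0] * (max(finish_times) // bucket + 1)
    for f in finish_times:
        counts[f // bucket] += 1
    return counts

throughput(finish_s)   # [1, 3, 2] -> a slow first window, then it picks up
```

The rising `cycle` list is the widening band on the chart; the bucketed counts are the spikes and drops in the throughput plot.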
Any other metrics to be captured, apart from lead time and work-in-progress? The only other one I'm capturing here is throughput: the rate at which they're finishing balls. One I don't capture here, but which becomes obvious, is quality. In software that could be the number of defects raised in production, it could be the number of support requests coming in, or you could do it subjectively and just ask the team; there are a number of ways you can figure that one out. I was chatting with somebody, Henry, last night; he's going through the same thing, where he has this gut feel that quality can be improved, and he's playing around with how to measure it, sometimes just asking the team. Involve the team in all of this; it's not about somebody external coming in and trying to do it. And then customer satisfaction. This is a fairly complex spreadsheet, and there's only a certain number of fingers I have to try and capture this stuff while they're playing the game.

So, exploring. I like the metaphor of skiing and the snowplough. When you learn to ski, you learn the snowplough as a way of controlling your skiing; at that point you're exploring just how to stay on your feet. Then, as you get better, you don't need the snowplough anymore, because you find better, more efficient, better-looking ways of stopping, and you're no longer exploring how to stay on your feet but how to go over different terrain. As you explore and as you get better, the techniques that you use change. That's partly why no process is going to be static: you're constantly going to be changing the process as you learn, as you get better. And we need to explore intentionally. We want to do intentional experiments, not randomly try things and see what works. There's this information theory chart, and I picked this up from Don Reinertsen, which says that the amount of information you get is related to the
probability of failure. If you have a zero probability of failure, which means you're always right, you're not learning anything, because you're always right. Equally, if you have a 100% chance of failure, you're not learning anything, because you're just doing it wrong all the time. There's this sweet spot in the middle, when you're failing 50% of the time, because you're learning from the failure: you're getting it right, trying something different, failing. So we need to know what information we're trying to get, and be ready to fail sometimes. Now, you can argue about whether failure is the right word; call it the probability of not getting the results you thought you'd get. But you need to know what those results are, in order to know whether you've got them or not. So you have to be doing very intentional experiments, and that's why you need the metrics, to be able to sense how well you're doing: you're making changes because you think this is going to improve our throughput, this is going to reduce our lead time, this is going to improve our customer satisfaction. And you're doing this continuously.

So it's like a distribution: 50% of the time you fail, and then you learn, and the more information you have, the higher the probability of success, so you get it right the next time? But we're always doing things differently. You're saying we should become more successful over time? I can bear that argument; it's somewhere in the middle, where you are sometimes successful. And I think you're saying that at some point you've learned enough that you don't need to experiment anymore. But I think we're constantly learning; I don't think we ever know enough. We should be constantly
striving to improve. Plausible in practice? I'm not so sure, and there's a whole other talk in there, actually, around what you're learning: you're exploring in a complex environment, exploiting that, and moving it into a complicated environment, and when you do that, at some point you want to start innovating again and move back to complex. So there are some slightly more subtle transitions and journeys around there. I guess the key point is: be prepared to fail. The idea of 50% failure is scary for some people; it sounds like a lot, and it is a lot.

Okay, and we're doing this continuously. We want to be going through the PDCA cycle; I prefer PDSA, which I think was the original Shewhart cycle, Plan, Do, Study, Act, because I like the emphasis on study rather than just checking. This is the cycle: find an experiment, do it, check or study the results, act on it, which means doing something with that learning. And then you go round again. If you get to the point where you think you've learnt everything, you stop doing this, so the continuous point is saying, yes, we do need to be always continuing.

For doing an intentional experiment, this is a variation on the A3 format, again from Toyota: a simple experiment on a page. The slides are available, so you can see the guidance on it. Essentially this is writing out the PDSA. Where are you now, what's your current situation? Where do you want to be, what's your hypothesis for the future? What's your plan to get there? How are you going to check, how are you going to measure it? What are your risks? What are you going to do if you succeed? Having it written down on just an A3 piece of paper, in pencil, is a nice simple way of doing it. It gives you a nice framework for making sure those experiments are really intentional. And we do this together, as well.
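The information-versus-failure-probability curve mentioned a moment ago can be modelled as binary entropy; that framing is mine (the talk only describes the curve's shape), but it gives exactly the behaviour described: certain outcomes teach nothing, and the yield peaks at 50%.

```python
# Binary entropy as a model of how much an experiment can teach you:
# an outcome you can predict with certainty carries no information, and
# the information peaks when success and failure are equally likely.
from math import log2

def information(p_fail):
    """Information (bits) gained from a trial that fails with probability p_fail."""
    if p_fail in (0.0, 1.0):
        return 0.0               # always right, or always wrong: nothing learned
    return -(p_fail * log2(p_fail) + (1.0 - p_fail) * log2(1.0 - p_fail))

print(information(0.0))   # 0.0 -> zero chance of failure, no learning
print(information(1.0))   # 0.0 -> guaranteed failure, no learning either
print(information(0.5))   # 1.0 -> the 50% sweet spot: one full bit per trial
```

The curve rises symmetrically from both ends towards the middle, which is why intentionally risking failure about half the time maximises what each experiment tells you.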
So this is the people aspect of it. When we're studying, creating this common understanding and sharing it, we're going to find that people have different views of the data. People might be surprised by the data, and that's going to cause tension; it often causes conflict. You're shining a light on this thing and suddenly people don't like what they see. This comes from Chris Argyris: the ladder of inference. Benjamin Mitchell has written a good blog on this. Essentially it says that, as human beings, we typically just climb straight to the top of the ladder: we make assumptions and do what we think is right. And everybody's got different assumptions, everybody does different things, and that's how we get conflicts. The idea is to bring people back down the ladder. We take actions based on our beliefs, so let's understand our beliefs. What conclusions led us to those beliefs? What assumptions led us to those conclusions? What are the meanings that led us to those assumptions? What data did we take that meaning from? And what is the data we're actually observing? So instead of jumping to conclusions, we bring it all the way back down to looking at the data. And then we can talk about it: are we looking at the same data? Are we taking the same meaning from the same data? Are we coming to the same assumptions from the same data? At any of those points we can differ, but this gives us a way of talking about it. So for me, exploring is not just about running experiments; it's exploring the interrelationships between human beings, how we understand this, how we work together and how we deal with conflict. How we have crucial conversations; Crucial Conversations is a good book to read on this. Because this is going to generate conflict: just having the data there and having it visible doesn't mean you solve all the problems.
So here are some more rounds of the ballpoint game. In round one they were just pushing stuff in: the guy feeding the balls in was just going, here's another one, here's another one, here's another one. They looked at their data and went, hmm, let's just do six at a time. What they actually did was: the guy put six in, waited for all six to come out, then put the next six in and waited for all six to come out. So you see this: one, two, three, four, five. Actually, the sixth one never came out, which is a bit odd, and which is why you get this peak here. Again, we've got data, so we can talk about that: how can we stop that happening? So that was one, two, three, four, five, six. It's interesting that they go down like that: the balls go in, the system frees itself up again, and the last ones speed up. So suddenly we've got better variability, because, apart from that one, we've got a nice narrow band there, and we can look at the average. The average there, this dotted line, was 24. So if we get asked, how long is it going to take, I'll say, well, we've got a 50% chance of it taking 24 seconds. With this new way of working, we can say the average is 13 seconds. Excellent, we've just halved our average cycle time. That's good, but the throughput's gone a bit crazy, because we were having nothing come out, and then all six of them come out, and then seven, because one got stuck in the system. So five, seven: the throughput's gone a bit crazy, and you can see these lumps of about six and six. So we started seeing some of the behaviour. So we said, okay, let's go to the extreme: let's do one at a time. The average is now down to five and a bit, just under six seconds. Really consistent, apart from the odd one or two. So we've got really good predictability, really good cycle time, but the throughput's dropped. So now we've got low throughput.
But look at that cumulative flow, isn't that beautiful? Okay, maybe a batch size of one is too low; it's gone too far to the extreme. So let's do batches of three: put three in, and when they come out, put the next three in. Now we've got a cycle time of around about the same, six seconds, not much difference. The variability is a little bit more variable, but nothing to be worried about, and the throughput's gone up. Great, we're getting somewhere now. Nice flow; I'm not quite sure why it suddenly went narrow there. And then for the last one they went, okay, three seems to be the right number: let's feed three in, and then every time one comes out, we'll feed the next one in. So now we're really creating a pull system. Interestingly, the lead time's gone up a little bit, and it's not quite so predictable; the throughput's gone up again, and we've got a really smooth flow. So there are trade-offs here, and we can decide. What this isn't showing is quality; I have a feeling quality was much higher, though there must be a dropped ball in there. And I think employee engagement was much better this way; people were much happier. So looking at some of these metrics, and some other metrics, quality, employee engagement, customer satisfaction, we can run some additional experiments. The things that we're doing here were very intentional experiments; they were very deliberate. We will make these changes, because we think this will happen; let's see what happens. That's really what we're doing with a Kanban system. The limits start giving us better predictability, because we start getting some control over the system by putting some boundaries in place. Start measuring, getting a sense of how well you're working, and then start exploring, doing some intentional experiments to try and improve the system. Because we're trying to create evolutionary potential.
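The batch-size trade-off in these rounds can be sketched with a toy discrete-tick simulation (my own illustration, not the talk's data; the feeding policies mirror the "six and wait" and one-piece rounds described above):

```python
# Toy simulation of a serial pipeline where each station passes on at most
# one ball per tick. Different feeding policies change lead time and makespan.
from collections import deque
from statistics import mean

def simulate(feeder, n_balls=12, n_stations=3):
    """Returns (average lead time, total elapsed time) for a feeding policy."""
    queues = [deque() for _ in range(n_stations)]
    start, done = {}, {}
    injected, t = 0, 0
    while len(done) < n_balls:
        # downstream stations move first, so a ball advances one station per tick
        for i in range(n_stations - 1, -1, -1):
            if queues[i]:
                ball = queues[i].popleft()
                if i == n_stations - 1:
                    done[ball] = t
                else:
                    queues[i + 1].append(ball)
        for _ in range(feeder(injected, len(done), n_balls)):
            start[injected] = t
            queues[0].append(injected)
            injected += 1
        t += 1
    return mean(done[b] - start[b] for b in done), max(done.values())

# "round two" style: release six, wait until all six are out, release six more
batch6 = lambda fed, finished, n: 6 if fed < n and finished == fed else 0
# one-piece flow: feed one ball every tick
onepiece = lambda fed, finished, n: 1 if fed < n else 0

print(simulate(batch6))    # (5.5, 16): batching inflates and spreads the lead time
print(simulate(onepiece))  # (3, 14): smaller batches give shorter, steadier leads
```

Even in this crude model you can see the same trade-offs the game exposes: the big batch makes individual balls wait in the queue (longer, more variable lead times), while one-piece flow keeps the lead time flat at the pipeline's minimum.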
We're trying to evolve, and we make this distinction between evolution and revolution. I'm not saying revolution is a bad thing; sometimes revolution is needed. And evolution doesn't have to be in really tiny steps; sometimes it can happen in bigger steps. That's part of figuring out the right rate to evolve at. And that's really it. We're taking a systemic approach: we look at the whole system, at the nature of the system, and we want to improve the system in order to have an impact on flow, value and potential. And we do that through these heuristics. Studying, and I'll just finish on this slide and then come back to the question: studying the context, studying the demand, making sure we study the current system as a whole. Sharing that: sharing our understanding, making sure everybody has the same common understanding. Starting to put some limits in place, so we have some boundaries and can start bringing the system under some sort of control. Sensing the capability, and then exploring. Can I ask a question before you move on to the last slide? I have a question regarding cadence. Yeah, go ahead. The way we see Kanban, it's more about iteration-less delivery, continuous delivery, where work goes into production as it's ready. How does setting a cadence work when the pieces of work coming in are of different sizes? How exactly would cadence be done? So cadence gives us a mechanism to understand how those big pieces of work are progressing, because we have a regular review point. Let's go back to that slide. This is a big feature, something the customer really cares about. We can break it down into smaller things to help us understand and evolve and learn about how to build it, but really the customer wants that feature, however minimal we can make it. That crosses cadences.
But we can use those cadences to understand how we're progressing. So instead of saying everything has to fit into two weeks, work can flow across the two weeks, but we use that two-week cadence to check in. And hopefully we want to have something done every two weeks. Do we still want to break things down into small pieces, all the good things around breaking work down into user stories; is that still valid? Yes, but I'm OK with work flowing across the boundary. I'm not saying everything we said we were going to do at the beginning of the two weeks has to be done. At the start of the two weeks we go, OK, given how much work we have in process, how much work we have completed, and what our throughput is, how much should we select next? What should we pull into the ready queue? At the end of the two weeks: what did we get done? Did we get done what we thought we were going to? We still get some feedback in a demo. It gives you a basis for decomposing work and it gives you feedback, because that's a big feature and we want feedback before we get to the end of it. So you still want early feedback, but you're also using those cadences to measure your cycle time, measure your throughput, or generate your cumulative flow diagram. In a service context, one small enhancement could be one day of work and another could be four days. When we're using a Kanban approach, I may be working on one day's work today and tomorrow I'll get four days of work. Your prioritisation cadence is typically one day: every day you're going back and prioritising. But you don't want to be doing a review every day, or a retrospective every day. You probably want to be doing a release every day; you might not be able to do a release every day. That's the idea of decoupling the cadences based on your context. I have a tactical question.
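The start-of-cadence question, "given our work in process and our throughput, how much should we select next?", can be sketched as a small helper. The function name, the averaging policy, and the numbers are illustrative assumptions, not a method prescribed in the talk.

```python
def items_to_select(throughput_history, wip, ready_queue_size):
    """How many items to pull into the ready queue at the cadence check-in:
    the average completed per cadence, minus what is already in flight."""
    avg_throughput = sum(throughput_history) / len(throughput_history)
    shortfall = avg_throughput - (wip + ready_queue_size)
    return max(0, round(shortfall))

# Items completed in each of the last four two-week cadences:
history = [7, 5, 6, 6]
print(items_to_select(history, wip=2, ready_queue_size=1))  # → 3
```

If the system is already holding as much as it typically finishes in a cadence, the helper returns zero, which is the pull-system answer: don't select more, finish what's in flight.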
When you play the game, how do you make sure the ball gets handled by each member of the team? I keep a beady eye on them, but mostly I trust them. I go into it as a training exercise and I say to myself, if you're going to cheat, you're not going to learn. I do keep an eye on them a little bit. It's not so much whether it goes to everybody; normally people cheat by not throwing the ball and passing it to each other instead. So I quite often build a little narrative around the game. I create a scenario. I sometimes do it with balls, I sometimes do it with sharpie pens. If I do it with balls, I'll say we're creating magic balls for kids: the magic gets added to the ball when it crosses the force field, but if it touches two different people at the same time, the magic shorts out. We'll have a bit of fun with it. But generally people are just happy to go along, and at the same time I am keeping an eye on them. All right, we're done. I have one slide left. We started with some music, so I figured we should finish with some music as well. You know Gangnam Style? OK, well, it's not Gangnam Style, it's Kanban Style, if I can get the video going. If you've got further questions, just come down and grab me and ask me; I'll hang around. Is the music up for the video? I find it really captures Kanban with fun, and does it with a dance. I'm done. Thank you.