My name is Julia Wester, and I'm an improvement coach at LeanKit, and what I want to talk to you about today is kicking chaos-driven delivery, or CDD, to the curb by looking at the things around us and seeing what we can learn. So when I think about teams, especially ops teams, I think about a lot of people struggling really hard to manage the varied demands on their time, and without a defined approach to doing that, we can often fall prey to the highest-paid person's opinion or the loudest voice in the room.

So I had an idea: ops teams deal with systems all the time that have these same challenges, but those systems handle them effectively, so what can we learn? How can we look at them and find something to take back to our context? I started with the operating system, because it gets varied types of requests at unpredictable intervals, and that's just what we're trying to solve. Yet it manages to do it quite effectively, so I wanted to look at how it does that and see whether there's anything we can take and apply to ourselves.

But first, I want to talk about the fallacy of multitasking. From what I understand, an operating system can only process one job per core at a time, just like we can only have one thought at a time, but the OS does want you, as a user, to think it's doing all the things. We fool ourselves about that a lot too. In reality, what we're doing is rapidly task switching: taking chunks of one thing and interspersing it with others, and incurring a lot of transaction costs. That might not be so great.
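The transaction cost of rapid task switching can be made concrete with a toy simulation (mine, not from the talk): the total work is identical either way, but every switch adds a fixed overhead, so the interleaved approach always takes longer. The `SWITCH_COST` and slice sizes here are arbitrary illustrative numbers.

```python
SWITCH_COST = 1  # hypothetical cost, in time units, of each context switch

def one_at_a_time(tasks):
    """Finish each task before starting the next: no switching overhead."""
    return sum(tasks)

def interleaved(tasks, slice_size):
    """Work on each task in small slices, paying SWITCH_COST per switch."""
    remaining = list(tasks)
    elapsed = 0
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r > 0:
                work = min(slice_size, r)
                remaining[i] -= work
                elapsed += work + SWITCH_COST
    return elapsed

tasks = [10, 10, 10]
print(one_at_a_time(tasks))   # 30 time units of pure work
print(interleaved(tasks, 2))  # 30 units of work + 15 switches = 45
```

The smaller the slices, the more switches you pay for, which is exactly the trade-off the talk is pointing at.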
So how does an operating system choose how it's going to schedule the work and when it's going to switch contexts? Its creators knew they would have to put in some explicit policies to let it handle that chaos and manage it effectively.

The first is first-come, first-served. It's simple, low overhead, and fair, but cycle times are long, and it doesn't account for priority or size, so it's really best for homogeneous work, and that's not us, usually.

The second is shortest job first: putting short jobs in front of long jobs so they don't get stuck behind these monoliths. The problem is that we're really bad at estimating most of the time, and often we don't have a lot of information about duration, so we're just shooting in the dark. That's really not feasible for us.

The third is more familiar and comforting: scheduling by priority. That means giving everything a priority and working from the highest priority down. The problem is that if you have a lot of high-priority work all the time, some lower-priority but more important work may not get done.

So the fourth is the round-robin concept, and that's saying: I'm going to work a certain time chunk on every job in a sequence and then do the cycle again, and hopefully we'll finish some stuff. Here, though, nothing really gets ignored.

That brings up the concept of job starvation, and every method except round robin carries that risk: there's a high probability that some type of work, or priority of work, may never get started. That's not great. I don't think our customers would like that; they don't want to experience death by waiting. One way an operating system might handle that is to take chunks of dedicated resource and apply them to different priorities or work classes. That's an idea we can learn from. So if you can't tell, there's no perfect method for anything, much less scheduling.
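The round-robin idea, the only one of the four that avoids starvation on its own, can be sketched in a few lines (a toy model, not from the talk; job names and the quantum are made up). Every job gets a fixed time quantum in turn, so short work finishes early without anything being ignored:

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: dict of name -> remaining work. Returns completion order."""
    queue = deque(jobs.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))  # back of the line for another turn
        else:
            finished.append(name)
    return finished

# Short jobs finish early even though a long job arrived first,
# and the long job still makes steady progress every cycle.
print(round_robin({"monolith": 9, "quick-fix": 2, "ticket": 4}, quantum=2))
# → ['quick-fix', 'ticket', 'monolith']
```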
That doesn't mean, though, that we can't learn from what's there. I think the best way we can handle it is to blend a few methods together: look at what's out there, take a couple, weigh the pros and cons, realize that every con is a trade-off just like in architecture, and put together a combination with a balanced set of trade-offs that fits your context.

One common blend you might see out there is priority plus round robin. That's working from our familiar highest priority down, but then doing each one in chunks so that everything's getting some love. There's a key to doing that right, and it's breaking things down into small but valuable pieces. At LeanKit, we have a concept of Fizzgood. If we were going to do a round robin, we would try to deliver something small but valuable in each cycle. That's really important to doing round robin properly. Then we might go a step farther, like the operating system might, and say: hey, we're going to take some dedicated people and assign them to certain types of work or priorities of work. But I'm not going to forget that I might want some generalists or flex staff to help smooth out the rough edges or the extra demand.

So the takeaways: first and foremost, look at your environment and see what's out there that you can pull from and apply to your own context, because we're not solving these problems for the first time. Second, consider the implications of any solution and make sure they're okay for your context. And finally, make your work Fizzgood and don't let anything starve. Thanks so much for listening.