So how do you guys feel? Tired? I mean, it's not late yet. Okay, well, I'm gonna pull from your energy, because at home right now I think it's like three o'clock in the morning. So when they put me on the schedule for this time, I got thinking to myself, oh man, that's gonna be like presenting in the middle of the night. Will I be able to do that or not? My name's Ray Arrell. Has anyone ever been to any one of my talks before? In the past? Great, so I can make up a lot of stuff as I go through here today. There are two parts of me that are important to understand, to know a little bit more about Ray and my background. I am an Agile coach. I've been working primarily as a coach of coaches, meaning if you look at organizations in the large enterprise size, I've been coaching in companies that have been up in the several hundred thousands of people in size. The last company I was with was a small company called Intel Corporation. Has anyone heard of it? It's a tiny company. I know they'll go big one day. But during that time we had about 2,700 scrum teams. So, big scale. That's my coaching bag: as I go off and do coaching, those are my beliefs, how I took the Agile Manifesto and internalized it for myself, and they dictated everything about how I actually went and did my coaching role. That's the problem-solving technique that I use. I'm an engineer. I started off as an electrical engineer on something called the 286 microprocessor, which was a long time ago. But for me, everything is sort of an engineering problem. My engineering brain and my electrical engineering degree cut in quite a bit. You add a problem, you add a little bit of coffee, and a solution eventually comes out the other side. The artifact of that is sarcasm. And I apologize if there's some sarcasm as I go through today. It's just a part of my personality, and I want people to be aware of it. 
If you wanna read me on Twitter, that Elmo Ray is my Twitter handle, and if you need to know, Elmo is actually my real first name. Ray is my middle name, so it's a part of that. With Intel, my background of course was in very large and complex systems. That was anything from the projects that we worked on — if you look at that, that's a project diagram of all of the teams that things had to cross through in order to create a PC system that you use today. You're not aware of it, but Intel actually did most of the software that's on all of the PCs that are out there today. We totally rewrote Apple's OS X when it was ported over to Intel processors. So I've been dealing with a lot of social complexity, strategic complexity, and a lot of engineering complexity throughout my career. This is our org chart at Intel, if you guys didn't see it before. It was also a very soft and nice environment to work in, most of the time. The last personal note that I'll give you, because you've seen me over here in the Agile Alliance booth: I spend half of my time donating my help to the Agile Alliance. And I do that because I believe in the mission of the Alliance, and if you wanna learn more about it, please stop by the booth and have a conversation about what we're doing there. And especially if you haven't signed up for your one-year free membership, please do that. I'm not sure if you've heard a stutter in my voice. Has anyone heard a stutter at all? Not yet. Well, I had a stroke three years ago. I had a clot go to my brain, and I could not move the entire right side of my body, and I could not speak. This is my last public service announcement to you. I say it because my voice is different. You guys don't know that, because you've probably never heard my voice before. But to me it's different, and sometimes it catches me off guard. But I wanna bring to your attention the signs of a stroke. And the reason for that is that you see that shot going into my arm — and that's my arm right there. 
That's what actually restored my ability to walk and talk. And the doctor said that that particular shot was only effective 4% of the time. And I asked him, well, that's a really low number. Is it really 4%? And he says, well, the drug is actually 95% effective if it's given within the first four hours of the stroke. So you guys are all engineers and you all work really hard. If you have these signs — I believe, is it 112 for emergency here? Is that the emergency number? Yes. If you have any of those signs, please call 112. And I know with all of the traffic that's in town, you've just gotta get yourself to the doctor, because it could be the worst thing if you don't get that shot in time. So that's my public service announcement. So, unknowns. They're all around us, right? This happened to me: I was in Poland and I was trying to catch a flight last week in order to be able to eventually get here. And that's my Munich flight right there, being canceled. We get these all the time in our engineering work, right? Do these make you nervous? Do you sit there nervous, thinking, am I going to get home, or is this project going to be delivered on time, or about other uncertainty that you have in the work environment? Other people deal with uncertainty in different ways. They create different processes. I love this process that an engineer came up with, because everyone was always asking him, where is Ed? And so he basically put up a flowchart that says, well, if Ed isn't over there, then I don't know where Ed is. And granted, it still builds a lot of uncertainty into that space, but what we do in engineering is we always find a way and a set of methods to deal with uncertainty as it comes up. And it happens at the worst possible time. I saw this at work, especially because with a lot of agile development, a lot of people say, well, I'm adopting agile because I want more certainty. 
And the reality is, agile doesn't change the uncertainty within your organizations. It only amplifies it and tells you that it's there. And I was wondering why our human brains think in this way and why we're so frightened of uncertainty. And it actually comes down to this picture that's up here. This happens to be where we used to live a very, very, very long time ago. You used to live in that tree line, which was the boundary between where you were safe and this, which is where you worked. This is where you found food way back when. For hundreds of generations, we wired ourselves this way, because we had other competitors for the food. And so we learned to do a risk analysis on every step that we go off and take. When we went off and stepped out of the trees, we wanted to know whether or not there was something else that was gonna come after us. And like it or not, that risk over time, since we had to live in this space for such a long period of time, created this environment of FUD. Who here knows what FUD is? What is FUD, sir? Fear, uncertainty, and doubt. And do we like fear, uncertainty, and doubt in our lives? No. And so we do everything that we can to reduce it within our environment. Any project managers in the room? Yes, you spend a lot of time reducing uncertainty, correct? Is that most of what you're trying to do? Risk management, reducing the risk that's in the system, trying to bring it under control? I actually have a different perspective on it. I think that where there are high degrees of uncertainty, there's the potential for innovation to take place. And I'm not gonna dig deep into innovation in this talk. I'm gonna talk about some of the methods that we used in order to feel comfortable with uncertainty in our environment. But for me, I knew that we were being more innovative the more uncertainty we had in the process. 
And as you know, on the far left-hand side of our life cycles, when we're in this early phase of project development, we have a hard time writing the requirement. We have a hard time knowing what the acceptance criteria are. We have a hard time knowing those key attributes where the early expectation is that we should know these answers. And the reality is, in certain cases, we don't know the answers, but there are ways that we can find these things out. Now, all uncertainty is not the same. True fear, we should really watch out for that. It was funny, when I was looking for pictures of what would be the most uncertain thing that I could find, I found another one of these that actually had a really dark underpass, and under the underpass it was all spooky looking. It had a Starbucks label on the outside. Same thing — if it's not a mugging, it's the Starbucks that might get me. But situational awareness within this space becomes, I think, our best friend. And with a lack of situational awareness within our environment, we end up making the wrong choices within this whole system. Now, during some of the innovation work at Intel — and this is when we were more advanced in our agile adoption — we started to think about not just agile from the beginning of a sprint to the end of the sprint, where we had high degrees of certainty about the requirement we were working on. We started to need to reach out and understand where there were other systems that we could start to pull from, in order to deal with what we were thinking at the time: how could we become comfortable in the space of early innovation work and early agile work, before we actually would load into a scrum. 
We were dealing with a situation where the work environment had changed on us as well. From a situational perspective, as you can see, we went from where we were back in the early or emergent web space to where we are today with most user interfaces. The situation we were dealing with in engineering teams had suddenly flipped. If you look back at the PC era, when we were dealing with PC-based applications, the complexity of the system was being pushed to our users. We were expecting them to know all of the quirks of our system and to work around them. How many here remember taking a class to learn, say, Microsoft Excel or Microsoft Word? Would you ever take one of those classes to learn any of the newer web products that are out there today? And that's because of the simplicity. If you look at design now, which has become important to the things that we create, we are reducing the amount of complexity that we give our users, but equally, we're creating another ambiguity spot. And that ambiguity and that uncertainty is: will our customers like the three buttons that we presented to them, versus giving them all of the options laid out, where they had more control? That brings in uncertainty, as we do design work, about whether or not they will like what we produce. And from our customers' perspective, I have to be honest that they don't really care what we do, right? It's a black box to them. They push a button and they expect that button to have some sort of reaction and do some sort of work. They don't expect, in their own right, to be handed the uncertainty associated with the things that they're using. Unexplained phenomena and software are two things that we don't wanna have linked together. And so I'll give you this user interface. This was my thermostat in Poland. This is the thermostat. Can anyone tell me what the center dot means? Does that mean normal room temperature for a human? 
Does that mean off? If I set the dial all the way up to the very top, does that mean it's going to be as hot as a volcano in my room? Or if I put it all the way down to the base, is it going to be arctic freeze? And I don't know what the heck it is with the human picture there with the force field around them. I wouldn't push that button, because I thought I was gonna do somebody harm by deactivating their force field or something. But isn't that just a little too much uncertainty for the end user to understand what this thing does? So dealing with uncertainties ourselves doesn't mean that we then suddenly take our uncertainties and throw them off onto our customers. That's the worst thing that I think we can possibly do at this time. Vague requirements mean we have to do more exploration work. Don't let stuff like this get released. I called down eventually, believe it or not, because my room was too hot. And even though I set it to the lowest, my room was still hot. I ended up opening up the window — outside it was snowing, and inside it was hot — and I had to sleep close to a zone that was in between where the arctic breeze was coming in and the heat was coming from the other side. I had to find that perfect zone. Oops, wrong button. User interface. So the complexity reverses onto us. As anyone who's working with modern multi-tier systems knows, the level of complexity now gets pushed onto us, and properly explaining what a function does becomes part of our job. So when we were going through our agile adoption and dealing with uncertainty, we started to ask the question: how can we measure it? How can we know when we're in uncertain territory, when the uncertainty is high? How many recognize this? What is this? Planning poker. In planning poker, what does a zero mean? What's that? No effort, it's done. Yay! I don't know anyone in a planning poker game who ever went, zero, I'm out. What about a 100? 
What's that? Yeah, it's too big. But what are the two measurements that I'm using in my brain as I'm picking the higher Fibonacci number? Right — it's complexity and unknowns. Those two things combine together and will dictate and tell me whether or not I've got something that's incredibly large, with a lot of unknowns and uncertainty associated with it. In agile today, how do we deal with the 100s? How do you split it? Split it in different ways? Did someone say spike? Slicing, or spikes as well, is another method in that space. But if you look at that complexity area that's sitting up there, when we get above 40 points, that's a different set of methods than I think most of us have ever explored, going in and saying, what do I need to function in that space? Most of the time we try to stick it through scrum, right? We'll go and write a tracer-bullet story, or we'll write some other accommodation, so that two weeks later we can learn more about whatever it is that's too complex. And in my view, when we're dealing with this high level of complexity — and I put one more dimension in this — it comes down to what the team knows: if the technology that we're working on is far from certainty, if the requirements that we're dealing with are far from agreement, and, the final dimension, whether the team has domain knowledge from having worked on said system before. Those dimensions, to me, actually spell out when we go from a simple system, to a complicated system, to a complex system, and then eventually to a chaotic system. And for each of these, from a framework perspective, in my own project management career — we knew how to handle simple things, we knew how to handle complicated things, but for complex and chaotic, we had very few tools in that space. So we started a search for methods that could work effectively here. And we came up with a number of different methods, some of which I'm not going to talk about today. 
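Those three dimensions — technology certainty, requirements agreement, and team domain knowledge — can be sketched as a simple scoring function. This is a hypothetical illustration only; the 1-5 scale and the thresholds are invented, not the actual method used at Intel:

```python
# Hypothetical sketch: scoring a work item against the three dimensions above
# (technology certainty, requirements agreement, team domain knowledge) to
# pick a system type. The 1-5 scale and thresholds are invented for
# illustration only.

def classify(tech_certainty: int, req_agreement: int, domain_knowledge: int) -> str:
    """Each input is a 1-5 self-assessment; 5 means high certainty/agreement/knowledge."""
    score = tech_certainty + req_agreement + domain_knowledge  # ranges 3..15
    if score >= 13:
        return "simple"       # cause and effect is clear; just do it
    if score >= 9:
        return "complicated"  # the right experts can analyze their way through
    if score >= 5:
        return "complex"      # emergent; probe before committing to a plan
    return "chaotic"          # act first, just to generate any signal

print(classify(5, 5, 4))  # -> simple
print(classify(2, 2, 1))  # -> complex
print(classify(1, 1, 1))  # -> chaotic
```

The point of a sketch like this isn't the numbers; it's that the routing decision — which set of methods to reach for — comes from all three dimensions together, not from size alone.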
The ones that I am gonna talk about are this thing called Cynefin and this thing called safe-to-fail probes. These two things became a cornerstone of how we dealt with things that were really fuzzy, where we didn't exactly know what the customer expectation was or how we were going to deal with it. If you guys wanna have a discussion on three-circle and UX proof points and all of those, I'm in the design session later in the week and I'll be talking a little bit deeper on those, but for this uncertainty conversation, these were the two tools that worked for us. How many people have heard of Cynefin before? Dave Snowden gets around. I'm surprised at every place that he ends up being. For the people who don't know what Cynefin is: Cynefin is actually a Welsh word, and it means a habitat, a place of multiple belongings. And Cynefin is a decision framework. It gives us a toolkit for how we can approach problems and problem statements that we're working on — sort of a mental roadmap of how we deal in certain spaces. And this is a picture of Cynefin. For people who haven't had exposure to it, this is not meant to portray a standard two-by-two matrix. These domains — you can actually have a problem that exists in all four domains at once, potentially. You could also be in the fifth domain in the center, which is called disorder. In disorder, I don't know what domain I'm in. I have no clue. But let's work around this, starting at the earliest one here, which is the obvious domain, where the cause and effect relationship is self-evident. Meaning, if I go over there and push on that door, like she's trying to do to escape, it's an obvious type of problem. It's either a push or a pull. She applied a sense-categorize-respond heuristic as she was walking out the door. What that means is that she judged by the other doors she's opened before. 
I look at my category list, and I understand that there are only really two conditions for a door: it's either a push or a pull — maybe a twist, maybe a turn, maybe a slide. But it's all categorized in our brain. Cause and effect is pretty simple. The space over here, complicated, which is the other ordered system space — this is the land of engineering. What this means is that the cause and effect relationship is such that when I start the chain of cause, by the time I get to the end, I actually have a repeatable system. An example: the build systems that you have for whatever software you're building. Or, in the case of Intel, when we do a build of a new piece of silicon, something called tape-out. No one engineer that you meet actually has a full understanding of how the process works from point A to point B. But we can get the right people in the room in order to understand it, and it's a repeatable thing. We can actually use this sense-analyze-respond pattern, where we look at the data, we do an analysis of it, and then, based on that analysis, we're able to proceed forward. The other two are in the complex space, the non-ordered space. The complex system space — this is where we have emergent practices. And in emergence, this means that the cause and effect relationship changes every time we get closer to the problem. It ends up being that cause and effect are intertwined, and for problems like that, we typically use a probe-sense-respond pattern. This is how we deal with humans all the time. When you're interacting with humans, we typically will probe something; then, based upon what we sense, we figure out our response to that human. It's also how we deal with open-ended questions or uncertainty. The last one down here is chaotic systems, and the thought model for that is an act-sense-respond pattern. 
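The four response patterns just walked through can be captured in a small lookup — purely a mnemonic sketch of what the talk describes, not anything official from Cynefin's authors:

```python
# Mnemonic sketch of the four Cynefin response patterns described above.
RESPONSE_PATTERNS = {
    "obvious":     ("sense", "categorize", "respond"),  # push or pull the door
    "complicated": ("sense", "analyze", "respond"),     # the land of engineering
    "complex":     ("probe", "sense", "respond"),       # emergent practice
    "chaotic":     ("act", "sense", "respond"),         # any act generates signal
}

def first_move(domain: str) -> str:
    """The first thing to do when you believe you're in a given domain."""
    return RESPONSE_PATTERNS[domain][0]

print(first_move("complex"))  # -> probe
print(first_move("chaotic"))  # -> act
```

Notice that only the ordered domains start with "sense": in the non-ordered domains you have to probe or act first, because there is nothing stable to sense yet.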
And the reason it's this way is because there is no right solution, but the simple act of doing something actually pops you up into the other state. For example, if you're working on something and you have the worst problem in the world — I always say, make a stupid decision in chaos. Make the stupidest decision ever, because when you do it, what'll happen is you'll have somebody say, hey, that's a stupid decision, right? Why don't we do this instead? More data comes out based upon the acting. This is also the land of doctors. A doctor does not typically go up to somebody who's bleeding and work in this sense-analyze-respond pattern. Doctors don't do that. They don't suddenly walk up to a bleeding patient and say, well, let's analyze multiple ways that someone can bleed. Let's go have some meetings. Let's go do some research at a university. No, they're actually over here, working either in an act-sense-respond pattern or in that probe-sense-respond pattern. Which brings me to an Apple story. A long time ago, Steve Jobs, with the 1.0 version of this device, cared a lot about how it fit in the hand of the user. It was one of his design criteria. He also cared about whether or not the thumb could reach across the screen. That was another design criterion. His engineers came in and gave him a presentation on one of the earlier prototypes, which was fairly thick, and the presentation had hundreds of pages of analysis showing that there was no more room in the device. There was no more way that we can make this any thinner, Steve. And so they're going slide by slide, showing X-rays and other things about how there's no more room in the device. And Steve's just sitting there holding it, and then he blurts out and says, it's too big. And his engineers said, Steve, are you not listening to us? We're giving you an analysis that there's no more room in this device. 
So Steve walks up to an aquarium that he has in his office, and he takes this one-of-a-kind prototype and drops it into the fish tank, and everyone was shocked. And Steve bends over, and he's looking in the aquarium, and then he sees bubbles starting to stream from the device. Blub, blub, blub. He goes, there's more room in there. Get back to work. So the reason why I bring it up is to illustrate what happened in Cynefin terms. In Cynefin, the engineers were way up here, in this corner called the tyranny of the expert. That is, they had proven to themselves beyond a shadow of a doubt that there was no more room. And excuse my pun, but Steve plunged them into chaos. And one more pun: their argument couldn't hold water anymore. Sorry, I had to throw that one in. So he throws them into this chaotic state, and they need to work their way out of it. He threw them into the land of innovation. They needed to get creative and understand: what were they missing? And they worked themselves around, first doing an act-sense-respond, then going up to a probe-sense-respond, and eventually working their way back over that crossover point itself. He sent them on one of these — I didn't know there was a word for it, but I guess there is: a coddiwomple. Anyone ever gone on a coddiwomple before? I'm not even sure I'm pronouncing this word correctly, but he sent them on a journey with a fuzzy destination, and hopefully they got to at least enjoy the trees along the way. So in any of the life cycle work that we're doing today, we go, at the very beginning of our life cycle, through what is known as framing an opportunity. And opportunities themselves come down to concepts, to candidates, and eventually to obsolescence. We know a few key heuristics in this space. Number one, at the very beginning, we know the very least. So when we know the very least, we know that we need to learn fast. 
We need to have faster cycles that we can do. And one thing that we discovered in this whole thing is that iterations typically follow this sense-analyze-respond pattern, and that sprints were too long. They were too long for us to be able to accomplish what we wanted to do. So we had to start thinking about these things called probes — safe-to-fail probes. The other thing — just to talk about scrum for a second — scrum is a heavy process. In order for scrum to work, we need a valid requirement. And if we don't have a valid requirement and a lot of certainty, the planning meeting goes to lots of high Fibonacci numbers, and it doesn't feel like we're actually making any forward progress. So for early exploration work, I don't use this framework at all. I want something that has a lighter touch. I want something that's not a heavy process. This was a comment from David Hussman, who recently passed away — a loss for our community. He actually created an org, a dot-org called non-bond, and he was basically saying that the process is basically just the bread in the meal. Getting to the solution is what we really need to be focused on. So safe-to-fail experiments became key. And safe-to-fail means that, A, we want to minimize harm; we want to maximize our learning; and whatever we come up with is going to be emergent. And no, the people that are in this pool are not having a safe-to-fail experience at all. No way. And this is what one of these probes looks like. And if you look at the cycle of it, it's pretty much what we do in all agile practices today. It is the Deming cycle. It is do and monitor: we look at quantitative and qualitative data coming from our experiments, and then we learn and refine. If you notice, there's more than one experiment being executed per problem statement. We have multiple ones. We actually will set up a coherent experiment, or an oblique experiment, or a naive experiment. 
And I'll show you in a second why that's important. Safe-to-fail probes, by definition, need to have criteria. And the criteria involved are: we need to have a way of knowing that it succeeded; we need to know that it failed; we need to know, if it failed, how we dampen it; and if it worked, how we amplify it into the rest of our work, now that we have new knowledge. And then we need to be able to harvest that knowledge, and we need a deadline. Safe-to-fail probes are typically one day's worth of work. They're not multiple days' worth of work. They're not a science project that's going to go on for months. We had an issue with a product at Intel. And I can't give you the name of it, because — are there any Intel lawyers in here? Okay, whew. Anyway, it was a medical device product, and on the medical device that we were doing, we hit a snag. A very big snag. High degrees of uncertainty and chaos came into our system. And what had taken place was that the engineering team came back and said, it's going to take us six months and several million dollars, and we're going to miss the market window because of these changes that needed to be made. So we established a naive experiment. And the naive experiment was: we told all of our engineers to go to a local shopping mall, find senior citizens — because this was a device that was going to be used by older people — tell them what the problem is, and ask them how they would fix it. And they did that for the entire day. They came back the very next day and they said, we can fix this in two weeks. It's not going to cost us that much. And our customers came back with a whole bunch of cool new ways of making this product better. And some of the features that we had, we didn't know they were exciting, and the customers thought they were very exciting, so we could probably charge more for this product because of these things. 
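The probe criteria listed above — a success signal, a failure signal, a dampening plan, an amplification plan, and a tight deadline — lend themselves to a checklist. A hypothetical sketch, with invented field names, using the mall experiment as the filled-in example:

```python
from dataclasses import dataclass

@dataclass
class SafeToFailProbe:
    """Hypothetical checklist for a safe-to-fail probe, following the talk's criteria."""
    hypothesis: str          # what we are trying to learn
    success_signal: str      # how we will know it worked
    failure_signal: str      # how we will know it failed
    amplify_plan: str        # how we fold a success into the rest of the work
    dampen_plan: str         # how we shut it down and contain a failure
    deadline_days: int = 1   # about a day's worth of work, not a science project

    def is_well_formed(self) -> bool:
        # A probe only counts as safe-to-fail when every criterion is
        # filled in up front and the time box is tight.
        criteria = [self.success_signal, self.failure_signal,
                    self.amplify_plan, self.dampen_plan]
        return all(criteria) and self.deadline_days <= 1

probe = SafeToFailProbe(
    hypothesis="Seniors at the mall can suggest a cheaper fix for the device",
    success_signal="Concrete fix ideas we had not considered",
    failure_signal="No usable suggestions by end of day",
    amplify_plan="Feed the ideas into the next planning huddle",
    dampen_plan="Fall back to the six-month engineering plan",
)
print(probe.is_well_formed())  # -> True
```

Forcing the dampen and amplify plans to exist before the probe runs is what separates a safe-to-fail probe from just "trying something."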
It was as simple as taking people out of their analysis — the tyranny of the expert — and sending them to a new venue in order to learn new things. So, from a summary perspective: you're always going to have things that you know and things that are unknown within your environment. As you build the toolkit in your organizations — for the known knowns, you already have toolkits. For the known unknowns, we've been doing that in project management for years; it's called planning. One of the things, as a senior director and as a project manager and everything else, is that we would do planning, because it would identify the things that we don't know. But for these unknown unknowns — some people say intuition, and I'm iffy on intuition; all I know is my intuition sometimes works and sometimes it doesn't. That bottom line of exploration, and having methods that allow us to explore, like the safe-to-fail methods that I'm explaining here, helps us to identify where we're at. Now, you might not be comfortable with where some of the responses come back. They're gonna tell you that you're probably about right here — Heisenberg's uncertainty principle is gonna kind of come into play here. But you have to be comfortable with that. At least the data that you're now starting to mine from the system is giving you a much better understanding of where you're at. If you guys wanna join me for conversations, I do a monthly podcast and live event called the Agile Coaching Network. We have 400 of your closest friends join in a virtual park bench, and we ask questions and talk through a whole bunch of different things. You're welcome to reach me there. You can find it on agilealliance.org or agilecoachingnetwork.org; they'll take you to the same place. Lastly, if you wanna contact me, you can contact me at two locations: Ray Arrell at New Agility, or rayarrell at agilealliance.org. And did I make it? Eight minutes left, which means we have time for questions. 
You have a question? Slide 26? That one. I love that one. One more? The Heisenberg one? Is that an escape route? Yeah, that's in the — yes, it's supposed to be the escape route, and you're approximately here. Yes. You cracked me up; it's been nice. Thank you for that. No problem. Other questions? Yes. Wait for the microphone. These guys will mute. Okay. As a Six Sigma expert who's been trying to move into the agile domain for some time now: one of the key tools used in industry, especially in the engineering industry, is the FMEA tool, the failure mode and effects analysis tool. Yeah. Would that be useful in an agile environment, or do you think it is going to be overkill, if you are trying to plan for failures as part of your product? If the process is too heavy — if it takes you more time to execute the process than it does to get to the experiment — then I would be concerned. So as long as the time it takes to get to the experiment is small, then it's fine. But the concern we had, like with Scrum, to kind of reiterate, is: well, we had to go through a planning process that took some time. Then we had to wait two weeks for a response to come back. And that's too long a period of time to get learning back. And so we went to more of a dynamic life cycle. Some things went through our Scrum process. Other things went to safe-to-fail experiments. And believe it or not, if we knew what line of code to change, we'd just go change the line of code. Don't have a planning meeting. Don't go through all the rituals. Just go change the code. It made more sense to us. Thank you. Other questions? This is something that I hear quite often, that planning is overkill. And you are saying, go change the code. Where is the line? Because I might think that this is the right place to make the change, and it might break something else which I wouldn't have thought of. So where is the line? When should I go ahead and make the change? 
And when should I make it through a planning session? I think it comes down to this: if you do a small planning poker session and everyone's coming up with ones and zeros — where the zero says it's already done — but they're coming up with ones, that might be one of your signposts that everyone knows the module. They know where it's gonna be. They understand how it needs to be tested. Do we need a full-blown scrum planning meeting to do that? That could be done in a hallway, in a structured process, but it doesn't have to take place the way we would normally do a full-blown scrum. For us, we actually had a separate Kanban board, and that Kanban board handled these more known items, and it worked through a different priority schema. But one of the things that was a part of that — the condition to cross into doing — was at least a huddle, to have an understanding of: are we sure? So there is still some light process that's interwoven there. Theoretically, is that huddle a planning session? Yes. Is it as formal as trying to break down 70 other requirements or 20 other user stories along with it? No. We figured it was best that if we looked at that one story and did a brief huddle and a plan with the right people, we could have a higher throughput for that thing to get out. And in certain cases, these things were defects — high-priority, severity-zero things that were coming in. And we didn't wanna weigh them down with all the rest of the project weight. Does that make sense? Yeah, so it's a Scrumban kind of approach you are taking. It's a Scrumban kind of approach, but I never want to do the thing — and I hear agile teams do this to me all the time — where agile people who don't really read the manifesto say, well, the manifesto gave me permission to not do documentation or planning, because it said this over that. And the reality is, they never read that bottom sentence, the one that says they're both important. 
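That dual-track flow — small, well-understood items skipping full sprint planning after a quick huddle, highly uncertain items going to probes, everything else to the sprint backlog — can be sketched as a routing rule. Purely illustrative; the names are invented, and the thresholds echo the 1-point and 40-point marks mentioned earlier in the talk:

```python
# Hypothetical triage sketch for the dual-track flow described above.
# Thresholds echo the talk's 1-point and 40-point marks; names are invented.

def route(estimates: list[int], is_severity_zero: bool = False) -> str:
    if is_severity_zero:
        return "kanban-huddle"       # urgent defect: brief huddle, then just fix it
    if max(estimates) >= 40:
        return "safe-to-fail-probe"  # too much unknown even for a sprint story
    if max(estimates) <= 1:
        return "kanban-huddle"       # everyone knows the module; light process
    return "scrum-backlog"           # normal sprint planning

print(route([1, 1, 1]))      # -> kanban-huddle
print(route([5, 8, 13]))     # -> scrum-backlog
print(route([40, 100]))      # -> safe-to-fail-probe
```

The design point is that the planning-poker spread itself is the signpost: unanimous low estimates are evidence that a full planning ceremony adds no information.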
It's just that if we're not doing the stuff that's on the right, then we're kind of messed up as a team. So. How do you prioritize the exploration parts? So for example, the example you gave, right? Right. With the engineers who went to the mall. I'm sure you could have had multiple naive options; rather than sending them to the mall, you could send them to the hospitals where they can see how these patients are being treated. Right. How do you, most of the time you have a short time and you need to find out with gut feeling which one, do you have any framework which you have used in a project? Well, in this particular case that I shared with you, I already knew what one extreme was. I knew it was going to be a project that was dead, or if we slipped it, it was going to be six months later than everyone expected. So in that mindset at the time, spending one day to do something naive felt like the right move. But it was, in fact, as all naive experiments are, trying to find the best place where you can get the most "why are you doing that?" or "have you thought of this?" So in this particular model, because it was targeted towards that audience, it seemed like the best case. But every naive experiment is engineered differently, and the one thing that we try to do is bring people who are not part of the team into that discussion, because they give us a secondary, diverse view; because we're so close to the problem space, we sometimes have a hard time getting out of the analysis issues. So could the safe-to-fail experiment have failed? It could have, but I only spent one day's worth of the team, and granted, they all went and ate yummy cinnamon buns and orange drinks and I had an expense report to fill out. It actually, I think, also helped the team to be a little bit happier, because they got themselves away from kind of devastating news, right?
It was devastating to hear that the product might slip that long or we might just need to cancel it. So it had two purposes. That second purpose, I didn't know at the time that it was going to occur, but when they came back all happy and elated that they figured it out, one person was coding it in the car as they were coming back. They had part of it done, which was amazing. One more over here. Ray. Yes. A question: you talked about uncertainties mainly around engineering, right? All the examples you gave were like that. Right. But when you run a project, there are uncertainties around people, my senior techie may leave, or some regulation-related uncertainties. Right. Do some of these concepts apply there? Have you seen them? Yes. I mean, where this started to push in, this was our challenge with our agile adoption. Our engineering teams got really good at executing in an agile manner. But our labs teams and our sales and marketing, and further up, more towards the opportunity framing, we were burning work faster than they could come up with a new backlog for us. And we needed to actually start moving our agile adoption further left in our life cycle. And what we discovered was that it took doing safe-to-fail experimentation and redefining our life cycle, because that life cycle that I showed you, most life cycles say explore, plan, develop, deploy. Even in agile, they still have that overarching activity-based life cycle. By changing it to that opportunity, concept, candidate solution model, we opened people's minds to ask, how do we do safe-to-fail experiments when framing an opportunity? Or how do we do safe-to-fail experimentation when framing different prototypes and different potential ways we can solve those problems?
And then, following it through the life cycle as we got further in, the sprints ran more efficiently, because when the concepts came into the engineering teams, they were taking them to high-level, production-worthy code, stuff that survives for the long haul depending on what type of product you're doing. Our products had to last for seven years. So there's a lot of rigor that goes into the back end of the process. We found that those requirements were better when they arrived at the development team, and instead of the discussion we typically have with the PO, trying to fish out what they thought the customer wanted and them saying, "I don't know, I'll get back to you," we found that our POs were like, "well, what we found was this, this and this." They had answers for our engineering teams, and it started to level out our value stream through our organization. All right, thank you. No problem, are we done? Okay, thank you guys. This was great.
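[Editor's note] The "everyone estimates ones and zeros" signpost from the Q&A, used to decide between a hallway huddle and full sprint planning, could be sketched as a simple routing rule. This is a minimal illustration, not anything shown in the talk; the function name and thresholds are hypothetical:

```python
def route_work_item(estimates):
    """Suggest a path for one work item based on planning-poker estimates.

    Per the heuristic described in the talk: 0 means "already done" and
    1 means trivial and well understood. If every estimate is 0 or 1,
    the team already knows the module and how to test it, so a quick
    hallway huddle ("are we sure?") is enough; otherwise the item goes
    through a full sprint planning session.
    """
    if all(e <= 1 for e in estimates):
        return "huddle"           # lightweight check with the right people
    return "sprint-planning"      # full breakdown with the whole team

print(route_work_item([1, 1, 0, 1]))  # everyone agrees it's trivial -> huddle
print(route_work_item([1, 3, 8, 2]))  # disagreement, unknowns -> sprint-planning
```

In practice the talk pairs this with a separate Kanban board and its own priority schema for the "known" items, so the routing decision feeds a different workflow rather than replacing planning altogether.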