giving both of these talks independently is that they reinforce each other. So what I'm going to talk about is how we enable the practice of continuous delivery by using principles of evolutionary architecture.

I want to start with a goal, and the goal that we have is business agility. We all know that IT departments and the business don't always have the greatest of relationships, and part of the reason is that the business doesn't feel it can get its features fast enough. As an aside, that's mostly because for many years the business has been driving IT departments to be as cost effective as possible, not realizing that agility and responsiveness do not necessarily come at the cheapest cost. So that's one of the problems.

When we talk about business agility, there's a virtuous cycle we want to establish. The first step is a testable hypothesis. We want to be able to go out amongst our business folks and find the person with a great idea, but instead of having them say, "I've got this wonderful idea," we want them to pose it as a hypothesis: a testable hypothesis, so that we can build a feature, build in whatever it takes to test whether that hypothesis is true, deploy it, see if it worked, and go from there.

Once we have the testable hypothesis, we go through our software development cycle, including continuous design techniques and thinking about experimental design. How are we going to design the experiment? Do we need a control group to understand what the results of this test actually are? Then we want to deliver it quickly and, more importantly, release it quickly, and there's a whole lot of agile software development that comes into that particular line item. And, very importantly, we want to measure the outcomes.
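The experimental-design idea above, a control group versus a treatment group, with measured outcomes, can be sketched in a few lines. This is a hypothetical illustration, not anything from the talk: the names (`assign_group`, `run_experiment`) and the simulated 2% conversion lift are assumptions made up for the example.

```python
import random
from statistics import mean

def assign_group(user_id: int) -> str:
    """Deterministically bucket a user so repeat visits see the same variant."""
    return "treatment" if user_id % 2 == 0 else "control"

def run_experiment(user_ids, outcome_fn):
    """Collect an outcome per user, grouped, and report the observed rates."""
    results = {"control": [], "treatment": []}
    for uid in user_ids:
        group = assign_group(uid)
        results[group].append(outcome_fn(uid, group))
    return {g: mean(vals) for g, vals in results.items() if vals}

# Simulated outcomes: assume the new feature converts slightly more often.
random.seed(42)
def simulated_outcome(uid, group):
    base_rate = 0.10 + (0.02 if group == "treatment" else 0.0)
    return 1 if random.random() < base_rate else 0

rates = run_experiment(range(10_000), simulated_outcome)
print(rates)
```

The point of the sketch is the shape of the cycle, not the statistics: every user lands in exactly one group, the outcome is measured rather than assumed, and the comparison against the control group is what tells you whether the hypothesis held.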
Think of the number of software development projects we've done where there's this wonderful business case that very clearly lays out how we're going to increase sales and decrease costs, and nobody ever goes back and measures it. In part, that's because the business case has these grandiose figures, and probably overlapping projects that could each, quote-unquote, take credit for the same benefit. But we need to measure the outcome.

Then we want to repeat the cycle. The whole purpose is to allow us to try things quickly, decide whether they work, get them out safely as well as quickly, and then, if they don't work, pull them back out again; if they do work, build on them. We want all of this based on evidence.

So that's a wonderful hypothesis. Wouldn't it be glorious if that's what we could do? What does it really take? Several different factors enter into actually making this cycle work. You have to be prepared to change quickly yet safely. And it's not just you.
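The ability to "pull them back out again" without a redeploy is commonly implemented with feature toggles. A minimal sketch, assuming an in-process flag registry; the class and flag names (`FeatureToggles`, `recommendations_v2`) are illustrative, not from the talk:

```python
class FeatureToggles:
    """Holds on/off flags that can be flipped at runtime, e.g. from config."""

    def __init__(self, flags=None):
        self._flags = dict(flags or {})

    def is_enabled(self, name: str) -> bool:
        return self._flags.get(name, False)

    def set(self, name: str, enabled: bool) -> None:
        self._flags[name] = enabled

toggles = FeatureToggles({"recommendations_v2": True})

def homepage(user: str) -> str:
    # The new code path ships behind a flag; if the measured outcomes are
    # bad, you flip the flag off instead of rolling back a release.
    if toggles.is_enabled("recommendations_v2"):
        return f"home for {user} with v2 recommendations"
    return f"home for {user} with classic recommendations"

print(homepage("alice"))                     # new path while the experiment runs
toggles.set("recommendations_v2", False)
print(homepage("alice"))                     # instantly back to the old behavior
```

The design choice here is that deployment and release are decoupled: the code is deployed, but whether it's released to users is a runtime decision that can be reversed in seconds, which is what makes the try-measure-repeat cycle safe.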