Hello! Welcome to, I think, the last-but-one talk of the conference. Have you had a good conference? Yeah. Thank you to the organizers, it's been brilliant. So thank you for coming to this one, for choosing my talk.

I'm going to be talking today about evolutionary architecture: about how I think, these days, we can take a different approach to designing and building software systems, actually quite complicated software systems as well, based on uncovering the architecture as we go.

How many in the room were at Gregor's keynote yesterday? Some, excellent. In that keynote Gregor mentioned something called real options: option theory about decisions, about why keeping our options open might have value, intrinsic value, and how in an increasingly volatile world, where we require change to happen more regularly, having options and keeping options open actually has more value. That's kind of what this talk is about. That's the betting part of this talk: betting on evolutionary architecture. How can we place small bets, keeping our options open, in order to drive the design of our system?

The subtitle, "a note on software architecture as code", is about this: Martin Fowler always described software architecture as the stuff that's hard to change about a piece of software, about an application or an estate of applications. And I think there are a number of techniques and practices that we've developed over the last, probably, three years that enable us to make changes to those things that were formerly hard to change, in very simple ways.

So this talk is about keeping our options open, placing small bets, and how we can make small changes based essentially on the scientific method. Can we run experiments on our code bases to help us make better decisions? Oh, it's back. Yeah, back in the room. I should probably explain who I am first.
So this talk is only tangentially about microservices. I wrote this thing with Martin, and I work for a company called ThoughtWorks. Can I have a hand up from those of you in the room who know ThoughtWorks? Sorry, I probably don't need to explain to most people, but we're a bespoke software development company. We started about 25 years ago; we're about 5,000 or so people now globally. We've got a very big centre in Bangalore, although annoyingly it's my first time in Bangalore; I've been to our Chennai offices before, but I think there are about 800 people or so in Bangalore now, if not more. So that's what we do: we write software for people, we solve hard problems, and we advise our clients on the best approaches to take advantage of the market volatility that we see. I wrote this thing; it's quite interesting. You can blame Fred for the name and blame me for writing it all down.

So I'd like to start with a question. A terrible question, I guess: how do we usually make decisions about architecture, design, performance, some of the critical characteristics of our systems? I would argue that in general we make them in a couple of different ways.

We make them based on experience. This is good: as you get more experience, take on more experienced roles, and have more responsibility for the design of software, hopefully the experience you've had over time building software helps you make better decisions. This is good, actually. There's a thing called the Dreyfus model of skill acquisition: as you do a thing more regularly, you develop that skill until you become an expert in it. You may have read the Pragmatic Programmers' book on this; I think it's called Refactor Your Wetware, something like that.
That's the subtitle. They talk about the 10,000 hours you need to become an expert. If you've got 10,000 hours in software architecture, in building software, then making decisions based on experience is good. But it also comes with a downside, because there is something called survivorship bias. This is one of a number of cognitive biases that we can't help but fall prey to. Survivorship bias says that we're more likely to make a decision based on our previous successful experiences. So even though experience is good, it can also steer the way our decisions go in the future. We had a successful project, so I'll do the next one exactly the same way. That kind of thing.

We also decide based on gut feel: "I think this feels like the right solution." Especially as we reach the expert level of the Dreyfus model, we've got this insight that this feels like the right sort of thing, and the Dreyfus model suggests that as you become an expert in something, it's actually a good thing to trust your instincts more. But see point one: survivorship bias. It can lie to you. Your gut feel can lie.

And oftentimes we decide things up front. This is often a legacy of the scar tissue we've built up as an industry that makes us think changing things later is hard. And actually, changing things later has been hard, especially for decisions about things like software architecture, the stuff that's fundamentally hard to change. But hopefully I'll go through an example of why I think that's different now.

So this is the question I'm hoping to answer over the course of this talk: can we make design decisions more systematically, and if we could, what would that look like? And the way I'm going to explore this question is by reading a book. What a strange idea. So, who in the room has come across the type of book called "choose your own adventure"? Okay, not so many people.
So this was something that was very popular when I was growing up, in the late 70s and early 80s, in the UK and the US. This type of book is interesting, because they tend to be set in fantasy worlds populated by dragons, wizards, mystical beings, and the aim of the book generally is to set yourself up as a hero and to take decisions, a bit like role-playing Dungeons & Dragons or something like that, to help out a village, or to smite a mighty dragon, or to find a hoard of gold. And when I was trying to work out how best to talk about evolutionary architecture and the sort of decisions we make, I was searching online, and I found one of these books about evolutionary architecture. What a strange, strange thing. It's called The Endlessly Bifurcating Trousers of Reality.

Now, I think in our industry we don't tell stories often enough. I actually think stories are a really powerful way of communicating the decisions that we've made and the experiences that we've had. In fact, a former colleague, who I think is now at Facebook, Google before that, and ThoughtWorks before that, talks about the idea of having a team shaman, who can gather the team round and tell you the story of how things on your project got that way. I like that idea, and this is a book about a particular project. It tells the story of that project and of some decisions that we made. So let's see what happens when we dive in.

"The lawful good product owners of a publishing house had long lived in awe and fear of their publishing systems. In awe, for they had made a tremendous amount of gold, and in fear of the time taken to change them, of their slowness and their fragility. A messenger was sent to fetch help from a distant land of mighty wizards. You have taken up this challenge."

Maybe I should give you some context about what this story is about. A couple of years ago now, I was involved in rebuilding, replatforming, rearchitecting a very successful 15-year-old publishing
system for a scientific publishing house. Very successful: it was making upwards of, depending on the year, one and a half to two billion dollars a year. So, you know, pretty good. But it was a 15-year-old C++ service-oriented architecture that was becoming increasingly difficult to change. And so we were asked to come in and see if we could help out by rebuilding this system. That's what the story is about.

"You must save the product owners by rebuilding their website. You start off the project, and in the course of discussions you discover that your goals are threefold: you need to improve availability, improve performance, and reduce the cost of delay. An enterprise architect approaches and addresses you. You may summon a walking skeleton. Or you may cast analysis paralysis. Or, if you have none of these things, you'll have to draw your sword and fight."

I should unpack this. Publishing is kind of interesting, especially scientific publishing. Availability is directly tied to how much your company is worth. That's because you have to prove, essentially, how many people have clicked on a link and downloaded a scientific paper, and then you attribute that click to an organization that's purchased access to your site, and you charge them. And actually, the CEOs of these companies report to the stock markets, basically, how many people have clicked on links. It's kind of interesting.
So any time your site is down, globally, someone isn't going to be able to click on a link, and that directly leads to a loss of revenue, and to your share price not increasing at the right rate.

We needed to improve performance. It turns out China is a thing, India is a thing, Australia is a thing, and this particular website was hosted out of a data centre in the west of the United States. So performance in places like China, India and Australia, especially Australia, where they still have to bring the internet in on ships with USB sticks, it's true, I don't know if you knew that, was a pain: up to 15 seconds for a page load.

And they wanted to reduce the cost of delay. They wanted to be able to make changes to their system more rapidly, to be able to deliver more value to their customers more quickly.

So what should we do? We could summon a walking skeleton. We could cast analysis paralysis. A walking skeleton, if you haven't come across this before, is the thinnest slice through your system that you could make, to prove out your path to production and to prove out the first user story or requirement that you're implementing. What should we do? Walking skeleton, hands up? Analysis paralysis? Really don't care? Yeah, that's most of you. Okay, excellent.

Well, in this case, what we did was cast analysis paralysis. "You cast analysis paralysis at the enterprise architect. 'Foolish young adventurer,' says the architect, 'we follow the evolutionary school of architecture, and we shall have none of the lawful evil ways of waterfall.' The last thing you see before everything goes dark is the architect chanting in a strange voice. You have died. Turn to page one."

Oh dear. So perhaps we'll summon a walking skeleton instead. The walking skeleton is a fairly well understood practice.
I can't remember the exact derivation, but I know that Nat Pryce and Steve Freeman talk about it in their seminal book on test-driven design, Growing Object-Oriented Software, Guided by Tests. So we'll summon a walking skeleton and see what happens.

"Your walking skeleton coalesces in a cloud of noxious gases and solidifies as a Java Dropwizard application. You reach into your backpack and deploy a content store. Your walking skeleton reaches out its skeletal arms and grabs armfuls of raw XML. Would you like to transform the XML inside the skeleton, or use a magic box, another microservice?"

So, generally in publishing what happens is you have a publishing pipeline, where you have a set of documents submitted by researchers. There are editors, lots of reviews go on, and then it's turned into a big pile of XML and pushed through to some store, where it needs to be transformed and displayed to your users. In our case all this content was pushed into S3. We had an enormous number of XML documents sitting in S3, and we had to transform these somehow into HTML so we could show our users. And we could take different decisions at this point. Either, inside the single application that we've already created, we could write some functions, maybe create a module that would do the transformation and display the HTML to our user, or we could separate that concern into a separate service.
We push that into another microservice. And that's interesting, because this is the first point at which we've got options about where to go. This is the first point at which we can start to think about the bets we can place, the different decisions we can make.

This is a divergent evolutionary graph, showing how species diverge over time. And actually this is kind of an imaginary look at the future states of this system, based on the decisions we can make. At one point we might decide to transform the content within a module in the walking skeleton; on the other hand, we might say we'll create this extra microservice. And at this point the architecture we're building, the design of our system, is diverging, and, reductio ad absurdum, if you go down one path you end up with a distributed system composed of a number of microservices, but if you go down the other path you end up with that sort of monolithic MVC-type app, where everything is contained within that single application. And this is what I mean about options, about placing small bets: how do you decide which path to follow?

So I call this betting on evolutionary architecture. What's the definition of a bet? "The act of gambling money on the outcome of a race, game, or other unpredictable event." There's a good deal of betting on races going on; I guess that's quite apt, given that I think we're at number one Race Course Road at the moment. It's the act of placing a bet on some outcome that we don't know about. An unpredictable event. And a lot of people think that building software is predictable. Certainly most of the project managers I've ever worked with think that building software is a predictable thing. But actually, what we're talking about when we're building software is a complex adaptive system. By definition, it's unpredictable.
What's going to happen in a complex adaptive system? We've got a number of people involved in building this; we've got a number of decisions that we can make. A complex adaptive system is unpredictable by its nature, and it's becoming more so as business requirements change at an ever-increasing rate.

Betting on evolutionary architecture. I would argue that the idea of an evolutionary approach to architecture allows us to place small bets and then reevaluate our decisions based on the outcomes of those bets. These are the options Gregor was talking about. We can create options for what we do in the future. We can have a number of different options, and each of these is going to have an associated cost and value. We have to spend some money to do this thing, but if the bet is a successful bet and we've correctly predicted the outcome, then we're going to get some value back.

In comparison, the idea of the up-front design phase that we do on a lot of projects is sort of the equivalent of betting the house, right? We're going to guess, at the start of a project, exactly what the finished end state should be; we're going to place all our money on a single bet. Bet the house. I don't like that. I quite like my house. Maybe I'm just not a betting man.

So, back to the story. What we decided to do was create another microservice. That was the option that we purchased. "You throw the magic box in between the walking skeleton and the content store. A villager approaches and exclaims: 'This beautiful content I see before me takes an awful long time to get here. You must somehow make the content arrive faster. If you have an HTTP cache in your inventory, you may use it now.'"

And we've again got a number of options, because it's a distributed system.
We've created options, actually, by creating a distributed system. We can cache in between S3, where our XML is, and the content microservice, or we can cache between the proxy fronting everything and the content microservice that's doing the transformation.

This is roughly what the architecture looked like. We had a templating application at the top, basically edge-side-includes templating, a little microservice that was just returning HTML to our users. Then we had this computationally expensive service, which was using one of the greatest functional languages of all time, XSLT, and that was transforming the XML into HTML. Then we had the XML in S3, and a bunch of other services.

And when I talked about availability, performance and so on, these were the performance requirements we had for our users globally: 0.8 seconds time to first byte, one and a half seconds page load, no matter where you were in the world. So pretty stringent constraints. And when we first started testing whether we were meeting these cross-functional requirements, we got this sort of answer out: our page load time was about 35 seconds. Now, I'm a physicist, not a mathematician, but I'm pretty sure that 35 is greater than one and a half. We can all agree that that is a thing. Which is bad, right? Hence us thinking about caching, thinking about where we are going to put this cache. Are we going to put it in front of the computationally expensive service or behind it? Let's put some caching in.

So again, we've reached one of these decision points where we've got a number of options. We've detected performance problems. Let's just add a cache. Now, the thing about caching. Who's implemented a cache, in the room? Right, okay, a few people. Of those people who've implemented a cache, how many of you think it's an easy thing to do?
Lots of noes, and not a single hand going up from those people at the front. That's because caching is hard, and caching is hard for very good reasons. We can either put this cache in front, or we can put it behind.

Now, the thing is, with something like scientific publishing, it's not like news publishing with a newspaper. With news, the sort of stuff you need to cache is the stuff that's accessed all the time: the stuff that's just been published, maybe over the last day. But with scientific papers, you might be accessing something that's a hundred years old. It might be something that was published in a journal 200 years ago that has been OCRed into XML and put in your content store. So, essentially, rather than having a nice set of cacheable documents where cache hits predominate, with scientific publishing you've got the opposite: cache misses are always going to predominate, unless you pre-populate the cache. This is actually a tricky set of options to evaluate.

Every talk with a cache in it has to have this quote from Phil Karlton: "There are only two hard things in computer science: cache invalidation and naming things." I've got a bonus joke for you, if anyone's done any messaging: there are only two hard things in messaging: exactly-once delivery, ordered delivery, exactly-once delivery.

So, this idea of "let's just add a cache". No one ever just adds a cache. This is a decision that we don't take lightly when we're building software. So how do we decide what to do? How do we decide which option to purchase, which bet to place? I called the subtitle "a note on software architecture as code". How do we decide which bets to place? Does anyone recognize this guy?
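To make the hit-rate point concrete, here's a minimal sketch (illustrative only, not the project's code; the names and the `fetch` stand-in are mine) of a read-through LRU cache with hit/miss accounting. Under a news-like workload a few hot documents dominate and hits predominate; under an archive-like long tail every request is a different document and misses dominate, unless you pre-populate.

```python
from collections import OrderedDict

class ReadThroughCache:
    """A tiny LRU read-through cache with hit/miss accounting."""

    def __init__(self, fetch, capacity):
        self.fetch = fetch            # fallback loader, e.g. S3 fetch + XSLT transform
        self.capacity = capacity
        self.entries = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.entries:
            self.hits += 1
            self.entries.move_to_end(key)       # refresh LRU position
            return self.entries[key]
        self.misses += 1
        value = self.fetch(key)                 # the expensive path
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)    # evict least recently used
        return value

# A news-like workload (hot head) vs an archive-like workload (long tail):
cache = ReadThroughCache(fetch=lambda doc_id: f"<html>{doc_id}</html>", capacity=100)
for i in range(10_000):
    cache.get(f"todays-paper-{i % 10}")     # 10 hot documents: hits predominate

archive = ReadThroughCache(fetch=lambda doc_id: f"<html>{doc_id}</html>", capacity=100)
for i in range(10_000):
    archive.get(f"journal-1823-vol-{i}")    # every request a new document: all misses
```

With the hot-head workload the hit rate is over 99%; with the long-tail workload it is zero, which is exactly why "let's just add a cache" isn't a trivial decision here.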
Anyone? Anyone? Bueller? It's Richard Feynman, absolutely: Nobel Prize-winning physicist, bongo player, safe cracker. Richard Feynman has what I think is my favorite definition of the scientific method. Bear with me; we'll just take a look at his definition.

"In general, we look for a new law by the following process: first, we guess it." When he does this, and you can actually watch the lecture online, it's from The Character of Physical Law, I think, when he says "we guess it" the whole audience collapses in laughter. This is the great Nobel Prize-winning physicist Richard Feynman saying the scientific process starts with a guess. But he goes on to say: "Then we compute the consequences of the guess, to see what would be implied if this law that we guessed is right. Then we compare the result of the computation to nature, with experiment; compare it directly with observation, to see if it works." And this is the crucial bit: "If it disagrees with experiment, it is wrong. In that simple statement is the key to science."

He also goes on to say, and I think this is lovely: "It does not make any difference how beautiful your guess is. It does not make any difference how smart you are, who made the guess, or what his name is. If it disagrees with experiment, it's wrong." And that goes for what I'm saying up here as well. Don't just believe what I'm saying because I'm standing on stage. If what I say disagrees with experiment, it's wrong.

So that's interesting. Let's unpack it. What he says is: first of all, we observe nature. Then we make a guess. We compute the implications of our guess, we compare the results of those implications with nature, and then we draw our conclusion. So what would that look like if we tried to use this approach these days, with the practices that I'll describe momentarily, to make decisions about which bets to place in our software architectures? Well, we need to observe some metrics.
We need to make our guess. So: observe the current state of our universe; make a guess; make a small change; run the experiment; measure the results; compare them with what we predicted; and then ask ourselves: was I right? Did we place our bet correctly?

So what do we need to do this? Observable systems. We need a brain; most of us have got one, I hope. The ability to deploy small changes quickly. And we need some form of lightweight probe, so we can run small experiments. So the next section is on these practices. There are more as well.

The first is good monitoring: observable systems. Observability has become a thing recently, which is good, and I think it's become a thing because of the prevalence these days of people building distributed systems. But we talk about monitoring. What does monitoring actually mean? I want to credit Dan North with this definition of monitoring. Dan talks about monitoring being composed of five elements. The first is instrumentation: our code and hardware describe what they're doing. We have the ability, inside our code, for it to describe its execution and the amount of time it's taking to do things. Then we need telemetry: we have to have a way of getting that data from our system to somewhere else, a way of gathering that data. Then we need visualization: we need to understand what's happening in our systems by meaningfully visualizing that data. And then alerting: we need to take some action based on what we find. And there's a bonus one, which is predictive alerting, but that's outside the scope of this talk. So we need to monitor. What do I mean by all this stuff?
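The first two of those elements, instrumentation and telemetry, fit in a few lines. This is a sketch under my own assumptions, not the project's code: `instrumented` records how long each call takes, and `flush_telemetry` stands in for shipping the samples to a collector such as Graphite rather than printing them.

```python
import time
from collections import defaultdict

# In-process store of latency samples; a real system would ship these
# over the wire (telemetry) to a collector such as Graphite or Prometheus.
SAMPLES = defaultdict(list)

def instrumented(name):
    """Decorator: record how long each call takes (instrumentation)."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                SAMPLES[name].append(time.perf_counter() - start)
        return inner
    return wrap

def flush_telemetry(send):
    """Telemetry: push the gathered samples out via send(name, values)."""
    for name, values in SAMPLES.items():
        send(name, values)
    SAMPLES.clear()

@instrumented("transform.article")
def transform(xml):
    return f"<html>{xml}</html>"   # stand-in for the expensive XSLT transform

transform("<article/>")
transform("<article/>")
flush_telemetry(lambda name, values: print(name, len(values)))
```

Visualization and alerting then sit on top of whatever the collector stores; the point is that the timing data exists at all, because a later experiment is only as good as the baseline you measured before it.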
Well, I've said for a long time that if you've got a distributed system, the minimum you should be looking for in terms of observability should include end-to-end request latency, system health, service health, monitoring of downstream dependencies of those services, OS metrics, and things like request tracing. That should be the minimum set of metrics we build into systems to make them observable, so that we can make decisions about stuff.

In our case, in the scientific publishing world, and this was a couple of years ago now, we had a whole set of tools we were using to turn our system into an observable system. We were using Hystrix for circuit breakers; we were using Coda Hale's Metrics library, because we were using Dropwizard; and a bunch of other things. We had a whole lot of stuff that we visualized: this is our Graphite visualization of request latency, and this is a dashboard that we put together, using the Hystrix dashboard, of all the circuit breakers in the system, so we could see exactly what was happening when services were making calls to external services.

Then we need continuous delivery, which I'm going to define, for the purposes of this talk, as the ability to safely and sustainably reduce lead time to value. These are the three monkeys of continuous delivery: Dan North, Sam Newman and Jez Humble, and, because he's not in the photo, we've got Dave Farley popping up in the corner. These are some of the original authors of Continuous Delivery, the book, though only Jez and Dave saw it through the umpteen years it took to write. So, the four monkeys of continuous delivery, with a mention for Chris Read, who is not there. What do we mean by continuous delivery?
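Service health plus downstream-dependency monitoring can be as simple as a health report that runs one check per dependency, loosely in the spirit of Dropwizard-style health checks. The checks and names below are hypothetical stand-ins of my own.

```python
# Hypothetical downstream checks: in practice these would be a HEAD request
# against the content store and a GET /healthcheck on the transform service.
def check_content_store():
    return True

def check_transform_service():
    return True

CHECKS = {
    "content-store": check_content_store,
    "transform-service": check_transform_service,
}

def health_report():
    """Run every check; the service is healthy only if all dependencies are."""
    results = {}
    for name, check in CHECKS.items():
        try:
            results[name] = bool(check())
        except Exception:
            results[name] = False   # a failing or crashing check means unhealthy
    healthy = all(results.values())
    results["healthy"] = healthy
    return results
```

A load balancer or a dashboard polling this endpoint then gives you both service health and a first cut of downstream-dependency monitoring for free.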
Okay, we're on the continuous delivery and DevOps track, so I'm not going to harp on about what continuous delivery looks like, but very simply: we need some way to safely and automatically push code from a developer's "hey, it works on my machine" into some repository somewhere, create an artifact, and then progress that artifact through a build pipeline, applying progressively more tests to it, before finally we deploy into production. And as we go one way we get more production-like; the other way, clearly, we get faster feedback.

We actually called this out on the ThoughtWorks Technology Radar a couple of years ago, or maybe it was this last year: we had pipelines for infrastructure as code in Trial. And that's key to the technique of software architecture as code, I think: having the ability to push not just changes to the functionality, the features in our software, through one of these pipelines to production, but, with a special mention to Kief Morris in the room, the ability to push changes to our infrastructure code through pipelines where tests are applied automatically, through to production. Because, of course, these days if you're deploying software without using something like Ansible, without using something like Terraform, well, you may as well go home, right? Okay, so not everyone is cloud native yet.
But it's the way we're going. This idea of pipelines automatically testing our infrastructure code as we push it through to production is gaining traction. And what that allows us to do is go through this cycle in a really short period of time. Previously, in order to make a change to our deployment topology, we might have had to wait months for servers to be provisioned, months for firewalls to be reconfigured; the lead time for the things that were hard to change was long. Whereas these days, when we use techniques like continuous delivery of infrastructure code, we can make sweeping changes to our infrastructure in a really short period of time, safely and sustainably. The lead time for doing that has massively decreased.

The other thing we need is some form of probe. We can observe our system; we know what the world looks like; now we need to be able to run some form of testing to understand what the results of our experiments, the changes that we're making, are. And when I talk about performance tests, I'm talking specifically about lightweight performance testing. I'm not talking about the sort of thing you do right at the end of a project, where you get out Gatling, or Locust, or LoadRunner, or insert your very heavy tool here. I'm talking about really simple, lightweight tests: stuff you can put into your build pipelines that runs for a minute or so and just gives you some form of baseline, some idea of how your system is performing. And I think when you combine that with the ability to make rapid changes to your infrastructure code as well as your feature code, your functional code, and the ability to observe our systems...
You get some interesting things. What I'm not talking about, in terms of performance testing: I did some research on Wikipedia, and this is the sort of word cloud for performance testing that comes up. There are tons of types of performance test. I'm talking about very, very simple ones. And on the Technology Radar we had this idea as well a couple of years ago: simple performance testing.

So what sort of thing do I mean? There are tools out there that we can just stick into our build pipelines with very little effort. Apache Bench: that's literally how you run Apache Bench; it's not hard. We can use Siege; Siege is another nice tool for doing this sort of stuff. And there's another nice tool, written in Go, called Vegeta. I actually quite like Vegeta because of the ability it gives you: the others tend to just fire off a ton of requests at an endpoint and then tell you, at the end of it, the latency and the metrics for the results of the test. With Vegeta, you can specify a rate at which requests should be issued, so you can say "I want to run for a minute at 10 requests per second". And that means you can use those results as a tracer bullet, if you like, to understand the performance characteristics of your system.

And then I think we need some form of cloud. Well, this is certainly enabled by cloud-native infrastructure as code. Shout-out to the guy sitting in the second row: Kief Morris, who just last year, I believe, published this book, Infrastructure as Code. This is the idea that we can describe our infrastructure, in the way that we would describe features in our system, through code, and automatically progress changes to it through to production. Tools in this space: things like the AWS APIs, and small things like Boto. And this is actually the tool chain we used to create this image of a blue-green deploy in our system: just interrogating the AWS APIs using Boto. Simple stuff.

So, back to our scientific method. We've got the ability to observe nature.
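To pin down what a lightweight probe looks like, here's a sketch of the Vegeta-style idea, not Vegeta itself but an illustration of the same technique under my own names: issue requests at a fixed rate for a fixed duration and report latency percentiles as a baseline.

```python
import time

def fixed_rate_probe(request, rate_per_sec, duration_sec,
                     clock=time.perf_counter, sleep=time.sleep):
    """Call request() at a steady rate for duration_sec seconds and
    return the observed latencies (in seconds), Vegeta-style."""
    interval = 1.0 / rate_per_sec
    latencies = []
    start = clock()
    n = 0
    while True:
        next_shot = start + n * interval     # when request n is due
        pause = next_shot - clock()
        if pause > 0:
            sleep(pause)
        if clock() - start >= duration_sec:
            break
        t0 = clock()
        request()                            # e.g. an HTTP GET against the endpoint
        latencies.append(clock() - t0)
        n += 1
    return latencies

def percentile(values, p):
    """Simple index-based percentile, good enough for a baseline report."""
    ordered = sorted(values)
    index = min(len(ordered) - 1, int(p / 100.0 * len(ordered)))
    return ordered[index]

# Example: probe a stand-in "endpoint" at 50 requests/second for 1 second.
lat = fixed_rate_probe(lambda: None, rate_per_sec=50, duration_sec=1)
print(f"requests={len(lat)} "
      f"p50={percentile(lat, 50):.6f}s p95={percentile(lat, 95):.6f}s")
```

In a pipeline, the `request` callable would hit the deployed service, and the job would fail if p95 drifted past the cross-functional requirement, which is exactly the tracer-bullet use described above.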
We've got good monitoring, observable systems, a brain, the ability to quickly deploy small changes, small changes to things that used to be hard to change, that used to have long lead times, and lightweight probes. So what happened in this story I was telling, when we put all this together, which we did?

You may remember I said this was our requirement: 0.8 seconds time to first byte and one and a half seconds page load, and we started off at 35 seconds. At this point we've got a number of options. These are the small bets; these are the options we can purchase. We've got one option, which is to add CPU; I'll come to that in a second. We've got another option, which is to cache content in one of two places: a cache in front of S3, or a cache in front of the content service.

So, in order to work out which option to purchase, we ran some experiments. We observed nature: we understood what was going on in our system. That's the 35 seconds. Our hypothesis was that XSLT can actually be quite expensive; perhaps we were CPU-bound. After computing some implications, maybe increasing the number of CPUs would help, we ran some performance tests. Then we compared the results with nature, and after adding more CPUs this is what we got: we were down to six seconds. And I want to say: adding more CPUs is possible now by changing one line of code in a playbook. So we've got to six seconds, which is better.

Our second guess: maybe we should move compute to the data. We were running our infrastructure in the Amazon region in Dublin, eu-west-1, but our data, the XML, was sitting in an S3 region in the US. So we posited that going across the Atlantic every single time we wanted to render some data was maybe not helping us.
So perhaps we could move compute to the data. And we did that — again, it's a few more lines of code to change, but we brought down our production infrastructure in Europe and brought it up again in the US. After we did that we were down to about four seconds; we saved two seconds on the hops across the Atlantic.

A third guess we made was that the transformations were slow and maybe we could optimize them. The nice thing here is that because we'd instrumented all the transformation code, we knew exactly how long these transformations took, so this was a pretty good guess. Once we'd optimized the transformations we were down to about three and a half seconds. Pretty good.

So after we'd bought all those options, run all those experiments, implemented all those changes in our code, moved our production infrastructure from Europe to the US, and increased the number of CPUs — increased the instance sizes of the boxes doing the transformations — we were down to about three and a half seconds. And the final option we elected to take: initially, we put our cache in between the templating and the transformation service, basically. Once we did that, we got down to about 0.2 seconds.

If you're on that project, that's a total result: 0.2 seconds per page. And some of these things are big documents, right? We're talking 50, 100, sometimes 200 megabytes — obviously we were paging some stuff — but still, this is pretty good.

So going back to our graph of these options, these bets we were placing: what did we do? We chose to create another service to transform content — that's one bet. Then, after purchasing some other options — increasing the number of CPUs, moving compute to the data, tuning the transform service, et cetera — we got to this other point. Do we add a cache? We have further options. We could put a cache in front of S3.
That's one bet. We could put a cache in front of the content service — that's another bet. And in both of these cases we've got different trade-offs to make. On the one hand, we're going to save just the fetch from S3, which is a couple of hundred milliseconds — it actually varies, between about 150 milliseconds in our case and about 500; there's actually a secondary industry now in monitoring S3 latency. Or we could cache in front of the content service, and that's going to save that fetch, but it's also going to save our XSLT transform time. That's what gave us the massive boost down to 0.2 seconds: we didn't just save the fetch, we also saved the milliseconds involved in running the transformation over sometimes very large documents.

So, the result of this. Content trickles into the store; you keep up by listening for the new content and casting wget on the cache to keep it refreshed. New types of content appear — content the villagers have never seen before, content the walking skeleton is unable to combat. Every time the structure of the content changes, the cache must be refreshed. The cache grows and grows until it is no longer possible to refresh it; latency increases. You have died.

So what happened? Why did we hit this problem?
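The trade-off between those two bets comes down to simple expected-latency arithmetic. A back-of-envelope sketch in Python — the function and its hit-rate handling are mine, and the millisecond figures in the test below are merely illustrative of the ranges mentioned in the story:

```python
def expected_page_ms(fetch_ms, transform_ms, hit_rate, cache_placement):
    """Expected per-page latency for the two cache placements.

    A hit on a cache in front of S3 skips only the fetch; a hit on a
    cache in front of the content service skips the fetch *and* the
    XSLT transform.
    """
    miss_ms = fetch_ms + transform_ms
    if cache_placement == "in_front_of_s3":
        hit_ms = transform_ms            # still pay for the transform
    elif cache_placement == "in_front_of_content_service":
        hit_ms = 0.0                     # serve the rendered result directly
    else:
        raise ValueError(cache_placement)
    return hit_rate * hit_ms + (1 - hit_rate) * miss_ms
```

With, say, a 300 ms fetch and a transform that costs seconds, even a perfectly warm cache in front of S3 still pays the transform on every page, while the same cache in front of the content service pays nothing on a hit — which is both why the latter produced the 0.2-second result and why it had to absorb every change to the content's structure.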
Let me unpack that. Basically, what happened was this: we made the guess that we wanted to save the transform time as well. The problem, of course, is that while we're actively developing this we're making changes to the content API, and every time we make a change to the content API we need to invalidate the whole cache — tens of thousands of objects — and rebuild it. Eventually it got to the point where that just wasn't doable.

So we could cache either in front of the content service or in front of S3, and one of these was actually an evolutionary dead end in our case. But remember what I'm saying: the changes we're making to our infrastructure are incredibly small, and it's all version-controlled. Standing up Varnish is essentially applying a set of templates to your infrastructure. So what did we do? We rolled back. Then we made a different set of small changes and applied those to put the cache in front of S3. So whilst we didn't save the content transformation time, we did save the time on the fetch: the cache caused the content load to drop from 300 milliseconds to 150 milliseconds. Everything was brilliant. It wasn't quite as good as we originally thought, but it was still well within the limits of our requirements. The villagers are happy; I think it's time to close this book and finish the story.

So I guess I'm going to sum up by talking about this idea of real options and small bets — what I mean when I say we can now apply different approaches to building software, to designing our software systems. Making small changes and pushing them through to production is no longer limited to our functional code, the code that delivers features to our users. We can now apply the same techniques to things that used to be really hard to change. If we have a set of practices in place, we can place much smaller bets; we can create more options more cheaply; we no longer have to bet the house. So in summary, to apply this technique:
We need some way of observing nature — which we do: good monitoring, observable systems. We need some way of running experiments — and performance testing, I'm saying, doesn't have to be heavyweight; we can use simple tools to gain deep insight into our systems. And we now have the ability to rapidly change all the things, not just some of the things: cloud-native infrastructure, continuous delivery for infrastructure.

In summary, evolutionary architecture — this sort of approach, buying options and placing smaller bets — keeps our options open longer. These techniques — continuous delivery, lean product engineering, infrastructure as code — reduce the amount of money we need to place on a bet. We don't have to bet the house anymore. We can make small changes which cost much less, and using these sorts of lightweight probes we can make guesses and test our hypotheses much more simply.

I think we've got a few minutes for questions. That's going to be me. Thank you very much.