It's a great pleasure to welcome Dave Farley to this closing keynote for the light version. Unfortunately, he couldn't make it to the main event, but he was gracious enough to accept our invite to do the talk here. Dave has spoken before at Agile India. He was here in person and he did a workshop as well for us, which was fantastic. So always a pleasure, Dave, to have you back. We kind of overlapped in our time at ThoughtWorks. I never had the opportunity to work with you directly, but I heard so much about you back then. And of course, I wanted to mention the book you and Jez wrote, which started the whole movement around continuous delivery, and again, I want to thank you for doing that. It's been such an instrumental thing. I think it's influenced a lot of thinking in DevOps and, in general, in the better-software-engineering community. And we briefly touched on this before: I remember meeting you in Australia at the YOW conference and we talked about software engineering, and I think this talk revisits that a little bit. And I'm curious to hear what's evolved since then. But yeah, I also want to thank you for the Continuous Delivery YouTube channel that you run; there are some fantastic videos over there. And I think you have like 130k subscribers now watching that, which is fantastic. I think it's been less than a year. It's a bit over a year, but yeah, we've got 130,000 subscribers and we're growing; it's now around about 1,000 subscribers a week. So it's still on a good uptrend. And I think it's very important, because some of the ideas that you bring and some of the folks you've been inviting onto your channel are, again, the people that we all admire and appreciate. So it's a great way to get access to all that wisdom. So I just wanted to thank you also for doing that. So without too much of a delay, I want to quickly hand it over to you, Dave. Curious to hear about engineering for software.

Great, and thank you for the lovely introduction and thank you for the thanks. It's my pleasure. I will just share my screen and then we can start. So yeah, I wanted to talk about engineering for software. And as Naresh said, we met a few years ago in Australia and I remember him arguing against my ideas for engineering, and he was probably right at the time. But I've been thinking about this for a little while; that was probably close to the start of me thinking about some of this. And one of my things is that, thinking in terms of continuous delivery, I think that we are starting to talk about what engineering really means for software. So that's really what I'd like to talk about a little bit today. I think a good starting point is to think about what software would be like if software projects really worked properly. I think that we could expect as many software projects succeeding as failing. We could imagine as many projects being under budget, early and delighting their users as over budget, behind schedule and annoying their users. But I don't think that's what we think of as normal. I think that mostly what we think of is something like this. And I think that's saying something fairly profound about the way in which we think about and practise software development. I like this little gif animation of the orbits of the planets in the solar system, based on two different models of the solar system. It's really a way of visually representing the concept of a paradigm shift. And this was a paradigm shift.
So for tens of thousands of years, humanity held the view that the earth was still and everything in the sky just orbited around the earth, or went around the earth in some manner. And that all made sense: if you looked at the sun and the moon and the stars, they all proceeded around the earth very nicely. And then there were these pesky things called planets. The word planet in ancient Greek means wanderer, and that's because these things didn't seem to obey any sensible rules. They just seemed to wander about in the sky. So people tried to predict where the planets would be, and they came up with models like the one that you see on the screen here, which is complicated. You've got these funny curly bits in the red orbits of the planets. That model is complicated and nasty to work with. What happened in this example is that sometime later we had people like Copernicus and Galileo and Kepler come along and challenge that view and take a completely different perspective. They changed the perspective so that instead of seeing the earth at the centre of the universe, they saw the sun at the centre of the universe. That's still wrong, but it's a much, much better approximation when you start talking about orbits in solar systems. So what's going on is now much easier to model, much easier to understand, much easier to predict. But one of the implications of this is that the rules for the new paradigm don't fit the old paradigm and vice versa. I think this is a very good analogy for what I'm talking about when we start thinking and talking about software development. I think that we've got the model wrong for software development and we need a paradigm shift. And really, that's what I want to talk to you a little bit about today. So what does software engineering really mean? Naresh and I were talking just before I started here about some of this, and I think to some degree the term software engineering has become devalued in our profession. We think about it as either meaning something nasty and bureaucratic and heavyweight, which doesn't allow us to do a very good job, or as just meaning coding. And I think neither of those things is true. I think those are poor descriptions either way. And I think one of the reasons for, certainly, the first of those interpretations is that we've got the model quite significantly wrong in terms of what it is that we're talking about. I think this is a fairly common view of what software development, software architecture and software design are for: it's about trying to fix the things that we can't afford to get wrong. What's engineering for? It's that. And I think that's partially true for physical devices like aeroplanes or cars or even donuts, but it's not really true even then. And it's certainly, profoundly, not our problem. This is not anything to do with the problem of software development, for one really important reason. I think what that idea is based on is really the idea of production lines and the ability to automate and regularise repetitive processes. That's not our gig. That's not our problem. One of the things that is really unique about software, and these days all digital assets, but profoundly true for software, is that we come up with a design for some code, and that code then generates a sequence of bytes that represents the systems that we're building. At that point, once we have that sequence of bytes, we can clone that sequence of bytes essentially for free.
That's a freedom that wasn't possible, that didn't exist, before the concept of digital assets came along, really. So what does production mean for our software? For a physical device, the production of many physical devices is a complex thing. For us, it's essentially free: we press the button and we can clone the bytes that represent our system for free. That's a very different thing. It means that the act of software creation is very, very significantly different from other forms of creation. It means that our problem is almost entirely an exercise in design and not an exercise in production. The other thing to think about when we start thinking in terms of engineering is that all engineering is not the same. Engineering, if you're building bridges, is different than if you're building spaceships or electrical devices or chemical plants or whatever else. Each of these things has its own uniqueness. So one of the other things that we can be fairly certain of is that if we come up with a discipline for software engineering, it will be ours. It will be unique to software and focused on the important parts of software. But even when we take that into account, I think that we often have a fairly naive view of engineering anyway. One of the examples that I think is fantastic, that we've got access to at the moment, is the SpaceX effort to build spaceships that go to Mars. If you're at all interested in this kind of thing, you can watch world-class, cutting-edge engineering, developing things that have never been created before, live on YouTube. And that's a really interesting experience, because it's not what we think it is. It's not some bureaucratic approach of just assembling all of these things. Every single one of these things, every change to one of these things, is an experiment. It starts with some kind of design, and you're going to build some kind of craft like this. But to do that, you're going to try stuff out. You're going to evaluate different parts of the system. This is a very early prototype of the Raptor engine that drives this spacecraft. And you're going to get stuff wrong. Things are going to blow up and stuff's going to go wrong, but you're going to learn from that. That's one of the properties of engineering. One of the hallmarks of engineering disciplines in other fields is that we try stuff out. We break things, we learn from the breakages, and we find out how not to break things in the same way again in future. And then we'll try small prototype pieces so that we can try out ideas and experiment. And stuff will still go wrong. Stuff will still blow up occasionally. And then sometimes we start to get some successes: we can actually start, in this case, to fly some of these devices. And we end up with functioning spacecraft. That's how real engineering works. That's how design engineering works, even when it's something as big and complicated as a space rocket. So I would argue that engineering, the kind of engineering that's closer to what we need to do, the kind of engineering that is interesting from our point of view, is always about exploration and discovery. Engineering is about exploration and discovery. Software development is about exploration and discovery too. One of the things that matters, if we are in that kind of realm, and I think profoundly we are, is the implication of that: if we are always, to some degree, in the realm of exploration and discovery, then we need to adopt the tools that make us experts at learning.
The first of those is iteration. We want to be able to make changes in many small steps so that we have more opportunities to observe the progress that we make in each one of those steps. We want to be able to gather feedback. We want to collect information from our changes, our experiments and our evaluations and understand what's going on. We want to make progress incrementally. We want to change our software development approach from trying to mimic some kind of illusory production line to being a more evolutionary approach. We want to incrementally grow the systems that we work on. I would argue that this is a profound aspect of human creativity, not just of software development. If you look at the development of any complex system, it's a process of incremental, evolutionary progress. Think for a minute about the first iPhone. The first iPhone was a remarkably crude device compared to a modern iPhone. I'm certain that when they built the first version, they had ideas about changes that they would make in the next version and so on, but they had no idea where it would end up 10 or 15 years later. They didn't have that kind of picture. So the iPhone has been incrementally evolved and developed over time. You could say the same thing about cars or aeroplanes. The first cars were incredibly crude and dangerous devices, but over time they've come to the point where we're on the verge of having fully automated, self-driving electric vehicles that are remarkably safe in comparison to those early days. That's what real engineering looks like. We learn the lessons of the past, sometimes nasty lessons, sometimes lessons that involve hurting people. But we learn those lessons, we refine our engineering approaches to try and eliminate the failures of the past, and we just continue to grow incrementally. In software, we can make that incremental, evolutionary approach to design much faster, much more incremental, with much smaller steps. And we should. That gives us a better chance of gathering feedback and carrying out our experiments efficiently, which brings us on to the next of these challenges if we want to become experts at learning. We need to start working in more experimental ways. If we think of every change to our system as a form of experiment, that positions us well to take advantage of the learning that we're carrying out. One of the fascinating consequences of this combination of ideas, working iteratively, using feedback, making incremental progress and working experimentally, is that it gives us a way of hitting the target, whatever the target is and even when it moves. Think for a moment of having some kind of goal in mind, and we're going to start experimenting randomly. We don't care what the experiment is; we're going to try something and figure out whether it moves us closer to or further from our destination. So as well as the ability to make a change in small steps, we've got some kind of measurement that determines whether we're getting closer to our target or not. If we work this way in small steps, we carry out the experiment, and then we measure to see whether it got us closer to our destination, whatever our destination was, or whether it took us further away. We eliminate the steps that take us further away, we keep the steps that get us closer, and over time we will incrementally, iteratively get closer and closer and closer to the target. And that's true even if the target moves.
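To make that loop a little more concrete, here's a minimal toy sketch, not something from the talk, of the iterate, measure, keep-or-discard process being described. The target value, the random step size and the distance measure are all hypothetical stand-ins; the point is only the shape of the loop: try a small change, measure whether it got you closer, keep it if it did, discard it if it didn't, and the same loop keeps working even if the target moves.

```java
import java.util.Random;

// Toy illustration of iterate / measure / keep-or-discard.
// The "target" and the distance measure are hypothetical stand-ins for whatever
// feedback you can actually gather (a test result, a production metric, etc.).
public class IterativeSearch {
    public static void main(String[] args) {
        Random random = new Random();
        double target = 42.0;   // the goal, which could move between iterations
        double current = 0.0;   // where we are now

        for (int step = 0; step < 1000; step++) {
            // A small experimental change; we don't care what it is in advance.
            double candidate = current + (random.nextDouble() - 0.5);

            // Keep the step only if the feedback says it moved us closer to the target.
            if (Math.abs(target - candidate) < Math.abs(target - current)) {
                current = candidate;
            }
            // If the target moved here, the same loop would converge on the new target.
        }
        System.out.printf("Finished near the target: %.3f%n", current);
    }
}
```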
So this is a profoundly powerful way of hitting any target that we choose to set. This is how all real learning takes place. This is how science works. This is how engineering works in other disciplines. And this is how engineering ought to work in our discipline too. And the last of my five things for optimising for learning, the thing that I think distinguishes knowledge creation in engineering from knowledge creation in science, is the idea of being empirical. We are profoundly practical as engineers; it's really about what actually works. So how we determine those targets, how we decide what success means for our experiments and our incremental discovery, is important, and monitoring what really happens in production is a really valuable trait. Software development is also about managing complexity. We build systems that exceed our capacity to hold all of the detail in our heads, and so we need to work in ways that allow us to cope with that complexity that we can't otherwise deal with. In this category there are another five things that can help us to do that. We want to work in smaller pieces. We want to divide the problem up into small pieces so that we can do work in one place without that work impacting on another. So we need to take the modularity of the systems that we create seriously. And to do that, we need to draw lines of abstraction in our design, so that one module can interact with another module without knowing the detail of how that other module works. This is deeply important, and certainly there are some people, and I wouldn't argue with them very strongly, who would define what it is that we do for a living in developing software as being nearly all about abstraction. We need to separate the concerns. We need to divide up the problems, the systems that we create, into pieces that are focused on parts of the problem. Ideally, I would argue that we should be able to separate the concerns completely, so that each part of the system is focused on only one part of the problem. That makes for software that's easier to deal with; the complexity is managed. We're going to manage the coupling of these components carefully as well, with a general preference towards looser coupling rather than tighter coupling. But actually, it's more about appropriate coupling at the right levels as we're working through these things. And finally, we need to think about cohesion as well, making sure that the parts of the system that are closely related are close together, so that we can deal with those in one place, and the parts that are unrelated are separate. That's the other end of the spectrum from the modularity part, so that we can change those parts independently of one another. All of these things are necessary for us to manage the complexity. I would argue that this gives us a practical definition of what makes code good, a practical set of tools with which we can evaluate the quality of our code and the systems that we build. Think for a moment of two versions of a system. It doesn't matter what the technology is. It doesn't matter what problem it's solving. But if you had two versions of the same system, and one of them was modular, loosely coupled, with good cohesion, good separation of concerns and good lines of abstraction, and the other wasn't, which one would you prefer to work on? I think fairly obviously it's the first one.
The first one would be easier to change, easier to test, easier to reason about, easier to modify, easier to understand where something went wrong and to fix it. All of those things. The second one is, almost by definition, a big ball of mud that is going to be terrifying to work on. I would argue that a good practical definition of good code is that it exhibits these properties. Once we've decided that the code works, I would say that the rest of these things are really what define the quality of that code. Yes, it needs to be fast sometimes. Yes, it needs to be secure sometimes. But those are kind of second-order effects, and we can add those later if we've got these properties in our code. So, some principles for applying this kind of engineering thinking. We want to optimise for learning, so that we can explore and grow our understanding of the systems that we create and of our solutions to the problems that we're aiming to solve with them. We want to optimise to manage, or limit, the complexity of the systems that we build, so that we can work on them sustainably, continuously, on an ongoing basis, and be able to change them and morph them and shape them to meet the needs as the needs change and as our understanding deepens. Some of the tools that help us to achieve these things: start thinking about controlling the variables, to make sure that when we're looking at our pieces in isolation they genuinely are in isolation, so that we can understand the impact of ideas and so on. We want to be able to make decisions based more on evidence rather than just guesswork, and so we want to run experiments to try out our ideas and to evaluate them. One of the other lessons that we can certainly learn from science is to always start with the assumption that we've not got the correct answer yet. That sounds like a kind of weird thing to say in many ways, but it's profoundly the better place to start. If we start out assuming that our ideas are perfect and we just push on ahead, and they're not perfect, that's going to come as a surprise, and we probably haven't worked in a way that allows us to take a step back and correct them. If we start out assuming that our answers, or design choices, or understanding are incorrect, then we're going to be looking for ways in which they're incorrect, and we're going to take a slightly more defensive approach to design and implementation, so that when we find out how they're incorrect, we can fix that easily. This is deeply part of the philosophy of science these days. Scientists these days nearly always talk about ideas of falsifiability, so that's another idea that we can bring to software. Think for a moment of something like automated testing. What does it mean if all of your automated tests pass? It might mean that your software is doing okay. It might mean that all your tests are rubbish. Or it might mean that your tests are really good but you missed the crucial one that shows that something's wrong. What does it mean if one test fails? It means your software is not good enough. So we can never, ever prove that our software is good, but we can disprove it. We can prove that it's not good enough when we have a single failing test. Falsifying things is the stronger statement about the reality of our systems, and we can use that kind of thinking too. All of these ideas, I think, are closer to engineering thinking than we would normally get if we were thinking in terms of software as a craft rather than an engineering discipline.
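As a small illustration of that asymmetry, here's a hedged JUnit sketch, my own example rather than one from the talk, with a hypothetical discount rule. A passing test only tells you the code survived that one check; a single failing test is enough to show the code is not good enough.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical example: a pricing rule we believe we have implemented correctly.
public class DiscountTest {

    int priceAfterDiscount(int price) {
        return price - (price / 10); // intended behaviour: 10% off
    }

    @Test
    void tenPercentDiscountOnRoundNumbers() {
        // If this passes, it only tells us the code survived this one check.
        assertEquals(90, priceAfterDiscount(100));
    }

    @Test
    void discountNeverProducesNegativePrices() {
        // If this ever fails, that single failure falsifies the claim that the code
        // is good enough; no number of passing tests can prove the opposite.
        assertEquals(0, priceAfterDiscount(0));
    }
}
```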
One of the tools that I think profoundly amplifies our chances of doing these things successfully is to optimise for good testability. We're going to write code that's easily testable, or prefer code that's easily testable, and I'd like to explore that in a bit more detail to see what that means. So let's think for a moment about what it is. I've already stated my assumption that quality is based on modularity, cohesion, separation of concerns and all of those good things. So what drives our ability to put that kind of quality into code in the absence of test-driven development? Well, it's kind of down to the skill, experience and integrity of a programmer. There's nothing else that forces us to do a good job of this. Test-driven development is something unusual. It's best described by red, green, refactor. What we're going to do in test-driven development is that we're always going to write a test first, and we're going to run the test and see it fail. Then, once we've got a failing test, and it's failing in the right way, we're going to write some code to make the test pass and see it pass. And then we're going to refactor the code and the test to make them clean, elegant, more general, whatever we want of them, while we're in the stable state of having a passing test. That's all great stuff, really good, but think for a moment about what that means in terms of our approach to design. The first thing that we're going to do is write a test before we've got any code that makes that test pass. What that means is that it puts us in the position of a consumer of our own code, so we're going to design our code from the outside in rather than the inside out. Our job is to try and write this test to evaluate the behaviour of this piece of code. Now, we'd have to be a strange kind of crazy person to want to make our own lives more difficult. So what we're going to do at this point is try to do this in a way that makes our own life easier. We're going to prefer code that's easy to test, so test-driven development automatically applies a pressure on us to produce more testable code. So what is it that makes code more testable? Well, obviously it has to work: once we run the test it should pass, and that tells us that it works. But it also needs to be more modular; we want to be able to test our code in pieces and evaluate those pieces of code in isolation from other pieces. We'd like them to be loosely coupled, so it's easy for us to tear out these pieces of code and we're not going to have a horribly difficult time getting them into the state that we want them to be in to run our tests. We'd like them to be highly cohesive, so that when we've tested them we know that that behaviour in the system actually works and that there's not some other sneaky part of the system somewhere else that's doing something similar and fooling us. And we want a good separation of concerns, so that we can focus on just one part of the problem, focus our testing on evaluating just that, and not bring in a bunch of other problems like persistence or UI considerations or whatever else it might be. And lastly, we want to have lines of abstraction, we want to have information hiding in the design of our system, so that we can deal with one part of the system in isolation from other parts, and maybe fake those other parts or whatever else it is that we want to do.
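As a tiny worked example of that red, green, refactor rhythm, here's a hedged sketch, again my own rather than code from the talk, test-driving a trivial Calculator with JUnit:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

public class CalculatorTest {

    // RED: this test is written first, before Calculator.add exists,
    // so it fails (it doesn't even compile until we create the class).
    @Test
    void addsTwoNumbers() {
        assertEquals(5, new Calculator().add(2, 3));
    }
}

// GREEN: the simplest code that makes the test pass.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

// REFACTOR: with the test passing, we are free to clean up names and structure
// in both the test and the code while staying in a known-good state.
```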
So all of these properties that I argued earlier make a good definition of what we'd count as high quality in code are amplified by our ability to work through test-driven development. I said earlier that I thought test-driven development was unusual. What this means is that, in addition to the skill, experience and integrity of an individual programmer, we use the practice of test-driven development to amplify those things that we value, and we end up with a sum that is greater than the parts. We end up amplifying our ability to produce these nice, high-quality, modular, loosely coupled, highly cohesive systems with good separation of concerns and good abstraction. This is a kind of talent amplifier. It's not going to make a bad programmer great, but it will make a bad programmer better, and it will make a great programmer greater. And I think that's very unusual. I can't think of anything else that does this in quite the same way in software development. So I think this is a remarkably powerful tool to amplify our talent, whatever that might be, as software developers. I want to give you a slightly more concrete example to show what I'm talking about. This is a very simple example in code. We've got here a car, and this car has got a petrol engine, and as part of this code we've got a function called start, and that puts the gears into neutral, applies the brakes and then starts the engine of the car. Okay, so let's write a test for this piece of code. Here's my test, 'should start car engine': we're going to create a new car, we're going to say start the car, and now there's nothing to assert. I can't actually do anything here. I could choose to break encapsulation and dig into the workings of this class, but that's a terrible idea; I'm just going to end up with my test overly coupled to the solution. So we're in a bit of a mess there. What can we do instead? Let's start looking at it from the other way around. Let's do the test-driven development thing: 'should start better car engine'.
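Reconstructing that first, hard-to-test version as a sketch (the class and method names here are my guesses at what's on the slide, not a transcript of it), the problem looks something like this:

```java
// A car that creates its own engine internally, which makes it hard to test.
class PetrolEngine {
    private boolean running;
    void start() { running = true; }
}

class Car {
    // The car knows how to construct its own engine.
    private final PetrolEngine engine = new PetrolEngine();

    void start() {
        putIntoNeutral();
        applyBrakes();
        engine.start();
    }

    private void putIntoNeutral() { /* ... */ }
    private void applyBrakes()   { /* ... */ }
}

class CarTest {
    // "should start car engine": but there is nothing we can sensibly assert
    // without breaking encapsulation and coupling the test to the car's internals.
    void shouldStartCarEngine() {
        Car car = new Car();
        car.start();
        // assert... what, exactly?
    }
}
```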
So we're going to start off with what we really want to achieve: we'd like to be able to see that the engine started successfully. So why don't we write some code that says that? Let's just assert that the engine started successfully, and that's going to happen when we start the car, as before. What that means is that we need to pass the engine to the better car, so that we can hold on to a copy of the engine, and we're going to create a fake engine that allows us to do this. In reality we might use a mocking library or something like that for this, but here's my implementation of a fake engine, just so you can see how that works. All this does is record the fact that start was called, and then allow us to query whether start was called. So here's my better car now, and that's my implementation. It's very similar to the previous one, but it's different in this one key aspect: we're now passing in the engine. We're doing dependency injection in order to make our code more testable. But let's just think about the implications of that. The first thing is that when we start thinking about this in production, we can still create a car with a petrol engine, which behaves in precisely the same way as the car did originally. It's going to do all the same things, except now it's testable. But the implication of that is that we've improved the modularity of the code. We've now teased apart the better car and the engine, in this case the petrol engine, and so each of these bits of code is a bit more focused on what it is that it's doing, and more isolated than before. In one very practical sense, the better car doesn't know how to create an engine, whereas previously the car had to know how to create an engine, so that's reduced the coupling between those two, and it's improved the modularity. It's also improved the cohesion: now all the stuff that's about petrol-engine-ness is inside the petrol engine, and all the stuff that's about car-ness is in the better car, and they are separate. The car depends less on engines than it used to; in fact, it only deals with an abstraction of an engine, the engine interface in this example. And lastly, that line of abstraction is clearly defined; we now know what it is. So by making this trivially simple change to make my car more testable, it is more modular, it's more cohesive, it's got better separation of concerns, it's got looser coupling, because the car no longer knows how to create an engine, and better lines of abstraction, because those abstractions are more clearly defined in the interactions between the car and the engine. These are good things. The result of that is that we can create a car in production with the petrol engine, we can create a car with an electric engine, so the code's more flexible, we can use it in many more circumstances, and if we're some kind of crazy person, we can even create a car with a jet engine. This is just better code, and I don't think anybody would argue with that. And I didn't do anything else. I didn't think hard about the design. All I did was work to make it a bit more testable, and as a side effect of just that, we've improved the quality of this piece of code. As I say, I think that's fairly remarkable, and I don't think that happens very often in software development. I talked earlier about the need to work experimentally, and that's deeply important. We've already talked about some of the things that add up to allowing us to do that, but we need to be able to take pieces of our system and experiment on those individually.
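And here, as a self-contained sketch with guessed names, is a hedged reconstruction of that testable version: the car takes its engine through an Engine abstraction, so the test can substitute a fake and assert that start was actually called, while production code still passes in a real PetrolEngine (or an ElectricEngine, or even a JetEngine).

```java
// The abstraction the car depends on.
interface Engine {
    void start();
}

class PetrolEngine implements Engine {
    @Override public void start() { /* fire up the petrol engine */ }
}

// A hand-rolled fake; in practice a mocking library could play this role.
class FakeEngine implements Engine {
    private boolean started;
    @Override public void start() { started = true; }  // record that start was called
    boolean wasStarted() { return started; }            // let the test query it
}

class BetterCar {
    private final Engine engine;

    // Dependency injection: the car no longer knows how to create an engine.
    BetterCar(Engine engine) {
        this.engine = engine;
    }

    void start() {
        putIntoNeutral();
        applyBrakes();
        engine.start();
    }

    private void putIntoNeutral() { /* ... */ }
    private void applyBrakes()   { /* ... */ }
}

class BetterCarTest {
    // "should start better car engine": now there is something meaningful to assert.
    void shouldStartBetterCarEngine() {
        FakeEngine engine = new FakeEngine();
        new BetterCar(engine).start();
        assert engine.wasStarted();
        // In production: new BetterCar(new PetrolEngine()), or any other Engine implementation.
    }
}
```

The only design move here is constructor injection through the Engine interface, which is exactly the change the talk credits with the improved modularity, cohesion, coupling and abstraction.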
Some of the experiments will certainly go wrong. And sometimes we'll need to take smaller pieces and evaluate those as just experiments that we're going to throw away and not use in real production, and again some of those things will go wrong. But that's the nature of experiments: if your experiments don't sometimes go wrong, you're not really experimenting. One of the superpowers of science and engineering is that in reality we learn more when our experiments go wrong, and so at some level we want our experiments to go wrong. Some of the explosions that I showed in the pictures from SpaceX were actually test pieces that they meant to blow up on purpose, to see what would happen. We want to make progress in small steps, so we want to optimise to allow ourselves to work iteratively, in a very fine-grained way, so that we can capture more feedback. We can also use these small steps to do the next important thing, which is to control the variables: gather feedback and control the variables. If we're working in small steps, predicting the results and controlling the variables, what this means is that we can really understand what's going on. The small steps limit the scope of our changes. If you make a change to your production system and you find that it increases user sign-up by 30 percent, is that a success or a failure? The real answer is that you don't know. If you released your change alongside 500 other changes, some of which also might have improved sign-up, you have no idea. If you focus down and you release only that change, and your sign-up went up by 30 percent, then maybe you've got a result, maybe not; maybe the variance is different. So you need to look at this more scientifically and think about what the variables are, think about what the impact on your decision-making is. But all of this allows us to focus in and have a better chance at solving the problems before us. I have been using this feedback picture to represent the ideas of continuous delivery for a very long time now. At the outside is the core idea of continuous delivery: we want to be able to have an idea, get that idea into the hands of users, and collect information to discover whether it's a good idea or a bad idea. At the inside we've got the very fast cycle that I've just been describing, of test-driven development. And in between we've got the use of executable specifications, acceptance testing, to evaluate whether our system is deployable, configured correctly, does the things that users want, is releasable and so on. This is deeply built into the practice of continuous delivery, and this idea gives us an ideal platform for carrying out certain classes of experiment. This is one of my favourite quotes from most people's favourite physicist, if you're nerd enough to have a favourite physicist: it doesn't matter how intelligent you are, if you guess and that guess can't be backed by experimental evidence, then it's still just a guess. So if we're looking to become more like engineers, we need to start moving outside the realm of guesswork and into the realm of experimentation: gathering feedback, managing the variables and so on. Another of these tools that's remarkably useful and powerful is the idea of speed. If we optimise to shorten the cycle of our feedback and improve the quality of our feedback, what that does over time is start to drive out the costs that are inherent in our development approach.
That then allows us to start fixing some of the problems that are highlighted by trying to increase the speed of feedback and iteration. It drives the problems out into the open and makes them clearer, and that gets us focusing on making the way in which we're working, the engineering processes that support us, more repeatable, more reliable, more focused, higher performance and more efficient. Over time that gets us into all sorts of interesting territory: again, controlling the variables through version control, automating nearly everything and so on. This is a picture of a deployment pipeline that I use to describe the mechanism that's at the heart of the ideas of continuous delivery. The idea of a deployment pipeline is that we build, essentially, a machine that goes from commit to a releasable outcome. We automate all of the production and validation steps on our route to production, and this becomes our only route to production. So if we commit a change here, we're going to get fast feedback on whether our change is releasable. When I advise my clients, I usually tell them to look at this first part, which is largely focused on the design part of the challenge, the development-team-facing part of the challenge, and to get feedback from here in under five minutes. The second part of the pipeline is all focused on the releasability of our changes: does it do what our users want, is it configured correctly, is it secure enough, is it fast enough, is it regulatory compliant, and so on. We automate all of that too and get an answer back; I'd usually shoot for under an hour. And that leaves us with just being able to push the change out into production. I was at a conference not very long ago that Alan Kay was also attending, and he described engineering as design, simulate and build, and this model that I've just presented absolutely fits that picture. If we start off with the deployment pipeline, we feed our changes in here: this first part of the pipeline is all about the design problem, it's about giving fast, clear feedback on the nature of our changes. We get fast feedback on the quality of our designs through test-driven development, as I've already described. Then the second part of the pipeline is about simulating the behaviour of our system in production-like environments, in all sorts of ways that are interesting enough to us to allow us to determine whether that change is releasable. And that leaves us with what Kay calls build, but which we just call pushing it out into production. That may still be a complicated part of the problem, but from our point of view, in terms of continuous delivery, it's always simpler, because we're just pushing something out that we know to be good, because we've already simulated it in production-like environments and designed it well using these mechanisms. Whatever it is that we choose to do in terms of trying to establish an engineering discipline, if it doesn't allow us to build better software faster, it doesn't count as engineering. And that's really my primary argument: engineering is the stuff that works, and I think that's a reasonable statement to be able to focus on. If you'll forgive me doing a tiny bit of advertising, Naresh at the beginning mentioned my YouTube channel. There's a link here, and if you're interested in the kinds of stuff I've talked about here today, there's lots of other content on there covering all the different aspects of things that I've spoken about, so please do take a look.
And as a special offer, again I'm doing a bit of self-promotion, I've got a bunch of nice training courses; some of them are free and some of them are paid for. I'm very conscious that around the world salaries are at different levels, so periodically we do pay-parity offers. We can't offer those all of the time, but I thought it would be nice, as a special thank you to you for listening to me rant on about software engineering: there's a 50% offer. If you go to my courses site, courses.cd.training, and put in this code, for the next two days you'll get half price, 50% off any of the courses on my site. Thank you very much for listening. I'm happy to take some questions.

Excellent, thanks Dave. Super fast, you went through so much, you covered so much ground, it's amazing. Thank you again for touching on all the different things, and of course Richard Feynman's quote there was very apt, appreciate that. Cool, I know Dave has a hard stop, so if you folks have questions, please put them in the Q&A section and I'll read them out. So we do have the first question here that I'll jump straight to: have you got any thoughts on how to apply these engineering principles to architecting big systems?

Yes. So my background is, on the whole, in building big systems of one form or another, both as a hands-on practitioner in the past and as a consultant these days, so these ideas are very deeply applicable to those kinds of systems. The problem of building big systems is nearly all about coupling, both at the technical level and at the organisational level, and so the real answer is that the way you build big systems is to break them down into a lot of small systems. That's the overly simplistic answer, and you manage the coupling in the way that we discussed. There are videos on my YouTube channel that talk about different aspects of this part of the problem: how you break out platform teams, how you architect microservices to make sure that they are discrete from one another and can be independently deployable, those kinds of techniques. The cultural part of that is to establish this idea of a culture of learning; you want to be able to foster learning in the organisation. One of the most liberating approaches to learning in a big team, from my point of view, is pair programming: you get people working together closely and moving around the organisation, and then we get to learn from one another. I did some work a few years ago in India with Siemens Healthcare on a big system, several hundred people building software for devices in hospitals and pulling the data out onto the cloud, and one of the most liberating changes, in my view, was that we encouraged people to start pair programming. Before that, they were working in sort of individual silos. Getting them pair programming kind of liberated the teams, you got the teams to learn from one another, and everything started accelerating from that time forwards. There's an awful lot to building big systems, and big systems are much more complicated, but just as a kind of name-drop, what I've just described is fairly deeply how Tesla, SpaceX, Amazon, Google, Netflix, those sorts of places work. So absolutely this works for big systems, and it works better than any other way for big systems, but it is a big change, and it's a difficult change to adopt precisely because it is such a big change.
Cool, thanks Dave. We'll quickly jump to the next question. The next question is from Tom Gilb. He's asking: if you need to engineer security, would an engineer qualify the security level needed and hand over to a designer to design it?

I would try and avoid that. It's a difficult question, and it's a good question from Tom, but let me just generalise it, because I think it's an example of a broader picture: if you've got the small autonomous teams which the data says are the most productive way of building software, how do you bring expertise that the teams don't have into those teams? There are some nice ways of doing that described in the Team Topologies book; they talk about something called enabling teams. So you have stream-aligned teams, the teams that build the stuff that customers and users want and so on, and enabling teams and platform teams support the stream-aligned teams and allow them to move forward faster and with a lower cognitive load. An enabling team might be the security experts that Tom's asking about. The ideal is that you try to grow the situation so that, for 80 percent of the normal run-of-the-mill cases, the stream-aligned team can make progress with no help, but they know enough so that when they hit something that's outside their range of experience or expertise, they're able to call for help from somebody else. That's when you bring an expert in from one of these enabling teams, a security expert or whoever else. The expert's job is then twofold: first, to help them, hands on, deliver the more complex issue in terms of security or whatever, in the context of the changes that the stream-aligned team are working on; and secondarily, to coach the team so that next time they know better, they've just expanded their knowledge a little bit about security or whatever else it is. That scales really well. The way that ends up in reality is that you're putting in policies, or maybe platform support and so on, to remove the difficult bits of security from the stream-aligned teams, and then you have this ability to loan expertise into the teams to allow them to make progress.

All right, cool, thanks Dave. Just quickly jumping on. Oh, sorry, I think Tom is just clarifying that he meant quantify, not qualify; I think I mispronounced it. So in his question he had said: if you need to engineer security, would an engineer quantify the security level needed and hand over to a designer to design it?

I wouldn't say necessarily quantify, but certainly they'd have a role in overseeing that the policies were correct, met the needs of the organisation and were being followed in some way. But you want to do that in a way where they're not acting as a gatekeeper. You want to organise things so it's not their job to stop the flow of changes. All of the data says that the safest way to make progress is to make progress fast and in small increments, and to get fast, regular feedback, and so we want to optimise to allow for that. So anything that slows us down is a risk: it starts to increase the risk of releasing changes, and so reduces the security, for example.
As a simple example, one of the facets of continuous delivery is that we can make almost any change really quickly, because we're getting really fast feedback. We could swap out the version of the operating system, the version of the relational database, whatever, really quickly, because we've got great tests; we can run those tests and evaluate things really quickly. There's been an initiative called the Rugged Manifesto, which is about trying to improve the security of software systems. I was at a presentation at a conference a while ago, and one of the things they said stuck with me: if you keep your infrastructure up to date, so that there's nothing in your infrastructure that's older than eight months old, you eliminate more than 95 percent of the attack surface area that hackers exploit. So if you want to build more secure systems, you need to at least keep your infrastructure no older than eight months old. That's quite hard to do if you're not working in these fast-feedback ways, highly tested, highly evaluated and so on. So you want to do these sorts of things, you want to be able to add these behaviours, and you want to be very cautious of things that slow you down. More inspection doesn't end up with higher quality.

All right, I hope that helps answer your question, Tom. Now quickly moving on, there are a lot of questions, so just trying to cover some ground. The next question is from Gerald. He's asking: how can you apply engineering thinking to estimation, and how can you give the customer an answer about whether you can deliver in X months?

The real answer is you can't. That's the real answer to any question about estimation, really. Estimation is kind of a guess. Software is a complicated thing, and it's a bit like asking, I don't know, a band: how long is it going to take you to write a hit song? Nobody really knows. You can guess, you can set deadlines, you can work to a deadline, you can do something in that time. The kind of approach that I'm talking about, continuous delivery specifically, optimises so that we work in a way where the software is always in a releasable state. So I can guarantee you that we will have something releasable, but I can't tell you how much stuff we'll have at given points in time. I could go slower and promise you less stuff. I saw a quote on my social media this week from somebody who said he was in a team, and his bosses had come and said, we need an estimate that's 100 percent guaranteed, and if you don't make the estimate you're going to be fired, but they couldn't really tell him what the problem was; they just wanted the estimate. So he said, okay, it's two years. And they said, what? Well, if you're going to fire me over it and you want a hundred percent guarantee, it's going to take two years. If you want me to give you a best guess, which it won't be, but where I won't be fired, it's probably two weeks, I don't know. Estimation is like that. It's this very, very difficult problem, and there is no perfect answer to it, because if we are trying to fix the scope of a problem up front, we're trying to hit a single target, and that's very, very difficult and very unlikely. If we work the way that I've described earlier, where we're navigating and we're adjusting things a little bit as we go, then we can certainly hit the target.
But we do cheat in doing that by being imprecise. This is one of those things where it comes back to the paradigm shift that I'm talking about. In the old world, we're talking about trying to accurately predict exactly what we will deliver and when. I think that's an irrational thing; I think that's a non-solvable problem, it doesn't make any sense, and so from my paradigm that's the wrong question. The right question, from my paradigm, is: how can I work as quickly and as efficiently as possible, so that whatever it is that you asked me for, you're going to get it at the earliest? And we optimise for that. Cool. And then, you know, sometimes we have to do an estimate, so you cross your fingers and you make up a big number.

I think, just if I can chip in there, Jeff Patton, who's again a common friend, had a very interesting take on this. He said that traditionally what we've been doing is trying to fix the scope and estimate the time and the cost: how long it's going to take and how many people or how many resources you need to do it. And he says there's a paradigm shift in fixing the time and the cost instead: you say we'll deliver every two weeks, or every X amount of time, and this is the size of the team, these are the resources we will use in terms of infrastructure and so on, and we will estimate the scope, which means we keep working at it and figure out what we could actually deliver within that. So I thought that was an interesting way to twist the scenario around and say we'll estimate the scope, not the time.

Well, one of the ideas I think is useful, one of the advantages of the way of working that I'm describing, is that we start work sooner, before we know the answers to all the questions. We don't have to understand even where the destination is. We can start working on things that might not be a good idea; you might want to think about what your destination is first, but we can start work, which means it's more open-ended. So at the beginning we don't know what complete means; we discover completeness as we explore the problem in more detail, and that often means that we get to completeness sooner, whatever that means, because we're working more efficiently.

I think we've run out of time, so thanks again, Dave, for joining us today. We greatly appreciate you coming in.