I want to talk about scaling up, and in particular how engineering comes into play with that kind of idea, because I think it does. A good place to start is: what is it that makes scaling difficult? There are a variety of things, I think. But often, one of the responses to needing to scale up is to come up with ideas like this: production-line-style thinking. And there's a fundamental difficulty here, because this is not our problem. This is never, ever our problem. It's not even close to our problem with software. And so any idea about scaling up like that is wrong. Our problem is very different. We develop software. We crystallise ideas into a form that can be executed by a computer, and the result of that is a stream of bytes. One of the things that is unique about a stream of bytes is that we can reproduce it at essentially zero cost, as many times as we like. So we never have a production problem in the sense of physical things. Our problem is something else. When we're talking about scaling up, we're not talking about scaling up in the same way as anything else; we're talking about scaling up something very different indeed, and that ought to change the way that we think about it. This is not a production scaling problem. That means production-line-style thinking, which to some degree we're all acclimatised to because we grew up in industrialised societies, is the wrong way of thinking about this. Waterfall, for example, is a software production line. Waterfall people even sometimes talk about it that way, which means it's not suitable for software. It doesn't work. It has never worked. The only time I've ever seen it work is when people cheated the process. So that's not the right answer.
Nor is any reproducible, standardised, cookie-cutter approach to developing software. That's not what software is, and it's not how it works. Software development, I would argue, is very strongly an exercise in learning, because in whatever context we find ourselves, we are always creating something for the first time; otherwise we'd create it for free from what we had before, because production is free for us. So we're always doing something new. It might not be new in the world, it might only be new in our context, in our team, but it's always new. And that means it's always about learning and discovery. There's another aspect of software development, certainly an increasing part of our daily lives, which is that we build systems that are much more complicated than any human being can hold in their entirety in their heads and understand every ramification of. So in order to deal with that, we've also got to worry about managing the complexity of the systems that we build, and there are mental tools that we need to apply to be able to do that. So how do we scale those kinds of activities? How do we scale learning, and how do we scale managing complexity? It's not an easy question to answer. I've got ten things, five in each category, that will help us to do these things. And certainly for people here at an agile conference, you're going to recognise many of these, particularly on the learning side, because agile is fundamentally about learning: iteration, feedback, incremental development, evolving our systems step by step, working experimentally, inspecting and adapting, and being empirical, learning from production and changing our behaviour from there. Those are all at the heart of what real agility means.
You can do all of these things without big ceremonies or anything like that. If you're doing these things, you're definitely being agile; if you're doing the ceremonies without them, you're not. These are at the core of what it takes to really learn and be good at it. And then we need to optimise for managing complexity. We need ideas like modularity, cohesion, separation of concerns, reducing coupling and abstraction, so that we can make changes in one part of a system without worrying about what's going on in another part; that allows us to move more quickly. Now, the problem when we're talking about scaling is that information is a slippery idea. There's information in our organisation and in the way that our teams interact with one another, and the problems are the same. The problems I've just described in terms of managing complexity are fundamental to information; they're not really about any particular technology, or even software in general. They're true about us as well. A team is an information system, and the ways that teams interact with one another have the same kinds of problems of concurrency and coupling and so on. We need to deal with those if we want to do things at scale. One of the difficulties, and this becomes an ever-bigger difficulty as we scale software beyond a certain size, is that dependencies really don't scale very well at all. As our systems get bigger and bigger, the smallest change can have a disastrous effect. So in order to scale software development, we need to find ways of limiting the impact of these kinds of changes. So, what must we do? We need to recognise that software is a creative discipline, a discipline founded on exploration and learning, and we need to optimise for that. Dependencies, in code and in organisations, are caustic, but they are also unavoidable.
If we need two parts of a system, whether it's a human system or a software system, to interact in some way, there's going to be coupling between them to some degree, and we've got to figure out how we're going to manage that. So in order to grow our ability to create software, we need to apply creative, engineering-style thinking, I would argue, to managing those sorts of problems. Here are a few principles that seem to me to be relevant. I've already mentioned a couple: we need to optimise for learning and for managing complexity. And as part of both of those things, it's absolutely essential that we control the variables, that we're able to try things out and control the variables sufficiently that we know what the results of our experiments are. If I make a change to my software in the hope that it's going to win more sign-ups from users, and you make a change to the software which accidentally wins more sign-ups from users, how do I know the impact of my change? How do I control the variables so that I can see the impact of my change and not yours? One way I could do that is to work in smaller steps and release more frequently; I'm then more likely to be releasing my change separately from yours, and I'll be able to see its impact. But there are many ways in which controlling the variables matters. If we're writing a test, how do we know that the results of the test are repeatable and reliable? By controlling the variables: putting the system into the state that we want it to be in, and so on. We want to get to the point where we can make more evidence-based decisions, and that means we need to start thinking and working more experimentally. The previous speaker was talking about trying ideas, learning from those ideas and being willing to be wrong. That's fundamental to learning, and it must be part of what we do.
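In code, "controlling the variables" often means injecting every source of nondeterminism, such as time, randomness or external state, so that a test is repeatable. A minimal sketch (the `SignupCounter` class and its injected clock are hypothetical, just for illustration):

```python
class SignupCounter:
    """Hypothetical component: counts sign-ups over time.

    The clock is injected so that a test can control it, rather than
    depending on wall-clock time, which would make results unrepeatable.
    """
    def __init__(self, clock):
        self.clock = clock
        self.signups = []

    def record_signup(self):
        self.signups.append(self.clock())

    def count_since(self, t):
        return sum(1 for ts in self.signups if ts >= t)

# In a test we inject a fake, fully controlled clock.
fake_time = [100]
counter = SignupCounter(clock=lambda: fake_time[0])
counter.record_signup()
fake_time[0] = 200
counter.record_signup()

# The result is repeatable because the variable (here, time) is controlled.
assert counter.count_since(150) == 1
```

The same idea applies to random seeds, environment variables and test data: every experiment starts from a known state, so a change in outcome can be attributed to the change under test.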
I phrase it slightly differently, but the idea is the same: I suggest that the granddaddy of learning is science, so we can learn lessons from science and apply them. That's why I call this engineering; it's a practical form of scientific-style reasoning. One of the deep philosophical ideas of science is that we start out assuming that our ideas are wrong, not that they're right, and then we try to figure out where they're wrong, how they're wrong and how we can improve them. That's a much better way of learning than assuming that we're right and trying to prove that we're right, it turns out. So never assuming that we're starting off with the right answer is probably a good starting place. Then we find ways to falsify our ideas. How would we show that we were wrong? That's a stronger form of proof than trying to prove that we're right. I can assert that all swans are white and I can never prove it, but as soon as I see one black swan I know that all swans aren't white. That's falsification at work; it's a stronger statement, and it's one of the lessons we learn from science. The other aspect of scaling, and of taking this more engineering approach, is to check where we are all the time: essentially real-time monitoring of our progress, of the safety of our changes, and so on. I've been leading up to asking this question: what do you think of when you hear the phrase continuous delivery? I suppose I'm one of the people responsible for helping to popularise this as a term and as an idea, although I'm not the originator of the term, and neither is Jez. Continuous delivery appears in the first principle of the agile manifesto. Optimising for continuous delivery is really what Jez and I were talking about when we wrote about it in our book of the same name.
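That falsification mindset translates directly into how we can test: rather than checking a few inputs where we expect to be right, we actively search for an input that proves us wrong. A toy sketch (the deliberately flawed sort and the conjecture are invented for illustration):

```python
import random

def buggy_absolute_sort(xs):
    """A deliberately flawed routine: sorts by absolute value, which we
    *conjecture* always yields a fully sorted list. (It doesn't.)"""
    return sorted(xs, key=abs)

def conjecture(xs):
    out = buggy_absolute_sort(xs)
    return out == sorted(out)

def find_counterexample(prop, trials=1000):
    """Try to falsify the conjecture by random search.
    Returns a failing input (a 'black swan') or None."""
    rng = random.Random(0)  # seeded: the experiment is repeatable
    for _ in range(trials):
        xs = [rng.randint(-10, 10) for _ in range(rng.randint(0, 5))]
        if not prop(xs):
            return xs
    return None

counterexample = find_counterexample(conjecture)
# A single counterexample, such as a list mixing negatives and positives,
# is enough to show the conjecture is false; no amount of passing
# examples could have shown it true.
assert counterexample is not None
```

This is the idea behind property-based testing tools: one found counterexample is decisive, where a thousand confirming runs are not.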
You might be thinking of things like build automation, and you'd be right, that's part of it. Deployment automation, DevOps, deployment pipelines, always deploying to production maybe, automated testing for sure. But none of those things are definitional; they're all activities that we might undertake to achieve continuous delivery. I'd use this as the definition: working so that our software is always in a releasable state, and trying to keep it in a releasable state for as much of the time as we possibly can. That's a big, challenging idea, and it goes to the heart of where scaling is difficult. I'll try to explain what I mean by that. Let's start with the word always: always in a releasable state. Always sounds quite difficult. So how do we do that kind of thing? Well, we optimise for fast, frequent feedback on the releasability of our systems. We want to try to get our system into a releasable state after each small change, and evaluate that. That's at the heart of continuous delivery thinking, and we want to do it all of the time, as frequently as we can possibly achieve. Releasability determines the boundaries of scalability. If we work so that the system we're working on, whatever it is, is releasable by definition, then we're in a good state. It's tested as thoroughly as we're going to test it before we release it; now that it's releasable, it would be silly to test it more after that. It's ready to go at that point. So this starts to put a boundary on how far we can scale. One of the ways in which we can see the impact of that practically is through building deployment pipelines. A deployment pipeline is an automated mechanism to assert and define the releasability of our system.
We're going to build an evaluation of our system that is as automated as possible, so that we can make a change, get feedback on it, and understand where it stands in terms of its releasability. I can say this with more assurance than I can say most things, because I'm the person that invented the term deployment pipeline. A deployment pipeline goes from commit to releasable outcome. If we have a pipeline that goes to something else, it's not a deployment pipeline. It's about getting to releasability; it's about being definitive for releasability. That's at the core of the idea. With this feedback, we can be confident to make progress. We can gain confidence by testing our system very thoroughly within the bounds of the deployment pipeline, and testing thoroughly in this context means we can test it many more times than any army of people could. I genuinely think this is real software engineering. This is part of an engineering discipline, and I find it hard to imagine how you can count something as engineering if you're not doing this level of testing. This gives us a huge step forward in confidence in our systems and in our ability to release them safely. So what does it take to be confident? Well, here's my deployment pipeline again. First we want to make sure that our tests pass locally; that's the fastest feedback we can get as part of the development process. Then we want to verify that the code does what the developers think it's doing. Research says that this style of testing alone catches about 58% of production failures. Then we want to verify that the code does what the users want it to do, evaluating it in life-like scenarios, in production-like test environments, to get feedback on that.
Then we may want to check that it's nice to use, with human beings: not regression testing but exploratory testing, to learn what the system is like to use as it's being developed, not afterwards. Then, maybe: is it fast enough? Is it resilient enough? Is it scalable enough? Is it secure? Is it regulatory-compliant, perhaps? Whatever it is that determines releasability. I've built pipelines with all of these checks in at some point. If all of those things have been determined by the pipeline, we can be confident in releasing into production; it's a releasable change. In terms of what I mean by fast feedback, which I mentioned earlier: for the first, development-focused part, I recommend that you aim for a target of feedback in under 5 minutes, and for the whole thing, to determine releasability, I suggest you aim for a target of under an hour. That may sound extravagant, but people have done this with highly complicated systems, highly complex systems, and very, very large systems too. One example is a company that I worked at, where we built a financial exchange. Just to put into context how much testing that involved: we'd be running hundreds of tests before a commit; about 35,000 commit-stage tests in under 5 minutes; about 20,000 acceptance tests, user scenarios, as well as exploratory testing; then hundreds of performance tests, where we recorded the worst hour of our system in production and replayed it at 5 times load, and things like that; and then thousands of other kinds of tests. So there's a lot of testing going on as part of this effort, but this is what it takes to get to releasable, and to a releasable state in under an hour, ideally. If this were real engineering, though, it would improve the efficiency and the quality of our work, and the data says it does both.
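The stages and feedback targets just described can be sketched as data plus a gate. The stage names follow the talk; the per-stage minute budgets, apart from the 5-minute commit stage, are illustrative assumptions chosen so the whole pipeline fits the suggested one-hour target:

```python
# Each stage gates the next; a change is releasable only when all pass.
# Budgets (minutes) are assumptions, except the under-5-minute commit
# stage; the sum must respect the under-an-hour whole-pipeline target.
PIPELINE = [
    ("commit: fast technical tests", 5),
    ("acceptance: life-like user scenarios", 30),
    ("exploratory and usability checks", 10),
    ("non-functional: performance, resilience, security, compliance", 15),
]

def releasable(results):
    """A change is releasable only if every stage passed."""
    return all(passed for _stage, passed in results)

total_budget = sum(minutes for _stage, minutes in PIPELINE)
assert total_budget <= 60  # whole-pipeline feedback target: under an hour

results = [(stage, True) for stage, _minutes in PIPELINE]
assert releasable(results)
assert not releasable(results + [("extra sign-off", False)])
```

The point of the gate function is the definition itself: there is no further question to ask after the final stage; passing the pipeline *is* releasability.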
The data says that if you can get this kind of picture, at the kinds of speeds I'm talking about, the people working on it will have more fun, there'll be less burnout, they'll enjoy their jobs more, and they'll be more committed to the organisations they work for. The development approach will be more scalable, for reasons we'll talk about in just a second. But also, the businesses we're working in will make more money, users will be happier, and the software will be more stable. I use a tagline for my business, Better Software Faster, and that's not an idle promise; it's what the data says if you read the State of DevOps reports, the writing from DORA and Google, and the Accelerate book. That backs this up with evidence that this is a better way of working. When we're talking about releasability, strictly, if I'm being very pedantic, deployability is really what I mean, because what I want to be able to do is make a small drip of changes that may not yet add up to a whole feature, but that I'm going to end up releasing into production. And this is a tool we can use to define the scope of evaluation, because I'm going to evaluate my change to the point of its releasability. That means I must be evaluating a releasable unit of software. If at the conclusion of my deployment pipeline I say, oh, I've got to go and test that with these other services over here before I'm happy to release it, then it's not a deployment pipeline. What this means is that if we're going from commit to releasable outcome in this way, then that's the scope of version control, that's the scope of the deployment pipeline, and that's the scope of release. So we've got a concrete reason for how to apply that scope, and what that scope means, in the context of continuous delivery.
It means there's no more testing to be done at the end of the pipeline's transit: no sign-offs, no integration tests with other components of the system. There are two ways in which we can scale this. We can either grow the pipeline, investing in the engineering necessary to get answers quickly enough from it, or we can decompose our system into a collection of independently deployable pieces, each of which we're comfortable to deploy without testing alongside the other pieces. That's the real definition of microservices. It's not what most people that practise microservices think microservices are, very often, but that's what is in the definition. Microservices is an organisational scalability play. It's the most scalable way of building software, but it comes at a cost. This is a more difficult way of doing things, because now we've got to design pieces of a system that talk to each other but that we can release wholly independently of one another, in order to gain the benefits of microservices and the scalability. In software, though, scaling is more complicated than it seems, and one of the ways in which it's more complicated is this: we can scale the size and complexity of the systems that we create, and most people think that means we can also scale the size of our organisation to create things more quickly. We've known for a very long time that that's a really, really slippery slope. As we've just seen with microservices, it's possible, but it comes at a cost in terms of complexity: operational complexity, development complexity, architectural complexity. To borrow a phrase from the recently deceased Fred Brooks, a genius of our discipline: you can't make a baby in a month with nine women. There are some kinds of problems that take a fixed period of time, a fixed amount of effort.
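One common way to make the "independently deployable" criterion from a moment ago concrete is contract testing: each service verifies, in isolation, that it honours an agreed interface, so it never needs integration testing alongside its collaborators before release. This is a sketch only; the `OrderService` and its contract are invented for illustration, and real contract testing usually uses a dedicated tool such as Pact:

```python
# A shared contract: field names and types that both teams have agreed.
ORDER_CONTRACT = {"order_id": str, "total_pence": int}

class OrderService:
    """Hypothetical service under test, exercised with no other
    services running at all."""
    def get_order(self, order_id):
        return {"order_id": order_id, "total_pence": 1250}

def satisfies(contract, payload):
    """Check that the payload structurally matches the contract."""
    return (set(payload) == set(contract)
            and all(isinstance(payload[k], t) for k, t in contract.items()))

# If this passes, the service is deployable without being tested
# against its consumers; they verify the same contract on their side.
assert satisfies(ORDER_CONTRACT, OrderService().get_order("o-1"))
```

The design choice is the point: the contract, not an integration environment, is the shared artefact, which is what lets each team's pipeline remain definitive for its own releasability.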
You can't scale up just by throwing people at a problem, and in software this is a very, very fragile thing. I like this piece of meta-research. It's not deeply academic, but they analysed the results of 4,000 different software projects of all different kinds and sizes. They divided the projects up into teams of 5 people or fewer and teams of 20 people or more, and then they measured two things. First, they measured how long it took each team to get to 100,000 lines of code. As you might guess, on average the teams of 20 beat the teams of 5; but on average it took all of the teams about nine months to get to 100,000 lines of code, and the teams of 20 beat the teams of 5 by one week in nine months. So, on average, people working in a team of 5 are nearly, but not quite, four times as productive as people working in a team of 20. Small teams matter. Small teams are deeply important if we want to do good work in software, and that's about learning and about managing complexity and those sorts of things. The other measure, incidentally, was the number of defects created: the teams of 20 people and more created five times as many defects as the teams of 5. This gets us into ideas of organisational scaling, and an idea that I learned from my friend James Lewis of ThoughtWorks, who got really interested in a field called non-linear dynamics and complexity theory; this comes from there. The traditional way of scaling an organisation is something like this: you build a hierarchy, and you have bosses, and teams of other bosses, and hierarchies of bosses, until you get down to the bottom, to the people that do the work. Now, the interesting thing is that there's some maths around this.
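Before going on, the per-person arithmetic behind that small-teams comparison can be checked directly, using the figures quoted above (about nine months to 100,000 lines for both sizes, with the larger teams ahead by roughly a week). This is a sketch of the calculation, not the study's own method:

```python
LOC = 100_000
WEEKS_SMALL = 9 * 4.33           # ~9 months for the teams of 5
WEEKS_LARGE = WEEKS_SMALL - 1    # teams of 20 finished ~1 week sooner

per_person_small = LOC / (5 * WEEKS_SMALL)    # lines per person-week
per_person_large = LOC / (20 * WEEKS_LARGE)

ratio = per_person_small / per_person_large
# Nearly, but not quite, four times the per-person output.
assert 3.8 < ratio < 4.0
```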
These kinds of structures appear all over the place, and the maths about the scalability of these kinds of informational structures is interesting. If you look at these sorts of directed graphs of information, people, resources, whatever, there's a number that keeps cropping up: 0.85. If you have a team, an organisation, a company that's structured like this, and it doubles in size, its productivity, its capacity, its profitability will go up by 85%. It won't double. There's a loss each time it scales. The flip side is that if you've got a team like this, then when you double in size you only have to pay 85% more in infrastructure cost to keep it going. This crops up all of the time: double the head count, and revenue increases by 85%. There's another way of organising things, though, and that's this kind of loosely coupled, social-network, linked-graph approach. This is the microservices team approach: dividing the problem up into a series of small, independent teams who make progress independently of one another. These teams are more autonomous. The number here is 1.15. This is what Amazon does, and the scary thing is that when Amazon doubles in head count, they more than double in profitability. That explains their explosive growth, and it was a conscious choice. Interestingly, Jeff Bezos is one of the funders of the Santa Fe Institute, where this non-linear dynamics research is going on. So he knew about this when he came up with his famous idea of two-pizza teams, and how that scales better. Now, there's a cost to this. There's a problem, because what you do is trade off consistency and co-ordination for scalability. You want these teams to be more independent. You want them to be in control of their own choices, their own destiny, in order for this to work.
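The compounding effect of those two regimes can be sketched by treating the numbers the way the talk does: each doubling of head count multiplies output by 1.85 in the hierarchical model (an 85% gain) and by 2.30 in the network-of-autonomous-teams model (more than doubling). The three-doublings horizon below is an arbitrary illustration:

```python
def output_after_doublings(doublings, per_doubling_multiplier):
    """Relative output after repeatedly doubling head count."""
    return per_doubling_multiplier ** doublings

hierarchy = output_after_doublings(3, 1.85)   # sub-linear regime
networked = output_after_doublings(3, 2.30)   # super-linear regime
head_count = 2 ** 3                           # 8x the original people

# The hierarchy gets less than 8x the output from 8x the people;
# the network of autonomous teams gets more, and the gap compounds.
assert hierarchy < head_count < networked
```

After three doublings the hierarchical organisation has roughly 6.3 times its original output from 8 times the people, while the networked one has roughly 12 times; each further doubling widens the gap.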
If you want to be able to tell them what to do and apply more constraints to them, it breaks this model; you end up back in the hierarchical model, and your scalability factor changes. It's a choice. You can do either one, but there are more limits on the one and some advantages, and fewer limits on the other and some disadvantages. The picture on the right is the Amazon model. To scale up, problem solving needs to be distributed; that's how we get to scale. What we need, then, are strong, motivated, autonomous teams. We need teams to take responsibility. This is borne out in the research done by the State of DevOps people. Their data measures stability, the quality of the work we produce, and throughput, the rate at which we can produce work of that quality; and one of the strongest predictors of high scores is the team's ability to make its own decisions, so that it can make progress without co-ordinating its work or asking permission from anybody outside the team. Those are very different kinds of teams from what we often see in organisations. What does this look like in an organisation? How do we think about managing such teams? This is taken from somebody else's work too. This is from Gregor Hohpe, and it's his advice on how to think about leading autonomous teams. The autonomous team is responsible for nearly all of their work; they're responsible for making progress independently of other teams. First, we've got to find a way of carving up the problem so that we can give them that autonomy, and then we've got to leave them to it. We've got to let them do the work and experience the problem; they're in the work, so they're best placed to see any problems with that way of working and to improve things. But to do that, we've got to give them some direction. So leadership in this context is not about telling people what to do; rather, it's about defining the strategic fence.
It's about defining the boundaries within which teams' autonomy can exist, and within which they can make progress. But we also probably need to support those teams. I think one of the most important books of the last few years is the Team Topologies book. In Team Topologies, they talk about using team structure as a tool to better organise software development, and they describe four different types of teams. The core is what are called stream-aligned teams: the teams that are building the software that people want, solving the problems that the users care about. Then you have platform teams, enabling teams and complicated-subsystem teams, who support those, and the idea of these other forms of teams is to reduce the cognitive load within the stream-aligned teams, so that the stream-aligned teams can stay focused on the stuff that really matters: the outcomes that we're shooting for. So the idea is that we define tactics, common tools, common patterns, those sorts of things, to support the work of the stream-aligned teams and allow them to make progress independently of one another. The first part is really about setting the goal. Leadership is about planting a flag on a hill and saying, wouldn't it be great if we could get over there? Imagine how wonderful it would be if we could do these things. That's it. It's not about how they get there. The job of the team is to figure out how to get there. So the job of the team is to do the work, all of the work, to make something that is genuinely releasable, and the job of everybody else is to support their ability to do that work: to maximise their ability to move quickly and deliver changes, outcomes, that users really want. There are some techniques that we can use to try to breed this sort of autonomy in teams.
I would say that this autonomy, whether you're in the distributed-graph kind of organisation or the hierarchical model, is the best model for software development: it optimises our ability to solve hard problems within teams, without relying on co-ordination. Going back to what I said earlier, this is a way of reducing the coupling between teams: aiming for autonomous teams and allowing them more independence, so they can move at their own pace rather than being held back by other groups of people. So, some of the techniques for breeding this kind of autonomy. Goal- or mission-based planning: what I've just said, really. Wouldn't it be great if we could do this? Fundamentally, focusing on the outcomes rather than on the techniques we use to get there. It's called mission-based planning because it's a lesson the military learned a long time ago. Senior officers don't say, go down this road, turn left through that gate, go up that road and then shoot those people. They say, take that hill. Sorry about the warlike reference, but mission-based planning is a way of maximising the effectiveness of people. You want people to understand the problem, and then figure out their own solutions to it. The other part of this is to define simple, well-understood constraints. We'd like people on these teams to understand what the goals are, understand what the guidelines are for the organisation, understand where they can compromise and where they can't, and understand the parameters of their autonomy.
You can't just go to a bunch of people who are used to working in a more traditional setting and say, you're all autonomous now, because they'll sit there and go: yeah, really? I know you don't really mean that. If I'm autonomous, I'm going to increase my salary to 3 million pounds. It doesn't work like that, so you've got to say what the boundaries are. You've got to help people understand, and allow them to be genuinely autonomous, but within some constraints; setting some guidelines is a good idea. The other key part here is what I mentioned earlier: manage the cognitive load of the team. Allow them to make progress on the things that matter to their users, and take away other work. Support them with technology, practices, processes, whatever it is that helps that team make progress, but don't get in their way. The job of a platform is not to come up with some beautiful ivory-tower solution that nobody understands; it's to serve those teams. It's a service function. Teams need to understand the problem that they're working on. I work as a consultant, usually in larger organisations, and the anti-pattern that I see all of the time is something I call programming by remote control: using the requirements process to tell people what to do. The commonest form of this is having a UI team that defines exactly what the UI looks like, which is then handed to the development team as a requirement. Defining the UI is part of the development. Somebody in the team might come up with a better idea; somebody might find out that the idea doesn't work very well and needs to change. Putting that responsibility into the autonomous team is part of this, and to do that the teams need to understand the goals, deeply understand the problem that they're working on, and own the problem that they're working on. Again, the previous speaker was talking about teams that were interested in going out and
discovering the details of the problem, talking to real users and so on. This is something we've tended to lose sight of a little, even within agile teams, but understanding the problem, and ownership of the problem, is at the core of being able to do a good job, it seems to me. I just want to give you a simple example, in terms of architecture, because I'm a technical nerd and that's where I tend to think about things. One of the ways you can constrain things is with architecture, and one of the ways I think of architecture is not as a prescriptive thing that tells you exactly what your answers must be, but more like a tourist map. You want an architecture that gives you a rough guideline of where stuff goes, and the context of where the thing you're working on sits. I use things that I call whiteboard models. They're called whiteboard models because they're a consensus view of the architecture for a team, and at least any senior member of the team should be able to draw one from memory on a whiteboard. That puts a limit on how complicated and how detailed it can be, because you should be able to do this even for a big system, but it's enough. The reason I call it a tourist map is that it's like going to the zoo: you want to know where the gorillas are and follow the path, even though the map is not geographically accurate. These kinds of maps allow us to have conversations about how software works together without worrying too much about detail. They are useful tools that allow us to move forwards, growing the responsibility and ownership in the team. The best tool for this is ensemble programming: mob programming or pair programming. I'm an old-school extreme programmer and a believer in pair programming; mob programming is fantastic too, and I have had an opportunity to try it. The idea is that we get to learn from one another. It's the best tool to help us to level up the
culture in a team and the ownership and the sense of responsibility for the things that we're working on and the understanding and actually pair nearly everything not just pair programming they're most creative and to do that people they're most creative when they're working together on things talking about things learning from one another all of the time so we don't need to formalize that kind of training other than trying to encourage people to work together I would argue that the primary role of leadership is to teach not to command the goal is to help people do the best that they can if they have skills there understanding their talents maybe plant the flag that I was talking about before but it's not to say you must write this code like this this is your software this is the solution that you must abide by we scale decision making by making it pervasive distributed and continuous if you've got to go and ask permission from someone three layers up in the management hierarchy in your organization to do something that doesn't scale very well that person is a bottleneck your feedback loop is too long and we scale implementation by optimizing for fast feedback on releaseability if we can do both of these things that gives us an opportunity and if you look at the organizations that we think of as being good at this kind of thing able to operate at big scale they do both of these things thank you very much that's the end of my talk I hope I've got a few minutes for questions anybody no there's somebody at the back so it was amazing presentation thank you for this so my question is related to the latest trend what we are currently looking at let's say US market or European market or let's say world market we used to think from a technology point of view everybody's dream used to be MANG M-W-A-N-G companies but I believe if you really see today's trend from past one or two months I believe these five are the main companies which are basically coming down on their kind 
of like budgets, the number of people employed. Do you really think the model you just presented scales? With respect to, say, Amazon, with the steps they are trying to take back, something that has backfired on them, why are they going for something like a 20,000 or 30,000 person cut? Is it because of overestimation, or overconfidence in their own decision making?

If you're asking whether this approach really scales, I think this distributed approach is the only way that really scales. The trade-off, as I said, is that you lose consistency to some degree, but it's the fastest way of scaling. Amazon, for example, have tens of thousands of developers, and they are releasing change all of the time. It's been a few years since I looked, but the last time I looked, the number of new services and changes introduced on AWS in one year was about 2,000, and there were overlaps, there was redundancy in those sorts of things, because the teams were working independently of one another. If you work to try to keep things together, to develop in concert, and rely on evaluating everything together, that is possible; ultimately it's less scalable, and you've got to do the kind of engineering that I was talking about to get the feedback. Tesla is quite a good example of that. They've actually got relatively few people working in software development at Tesla, but they work very quickly. They're a continuous delivery company; they get very fast feedback on the system as a whole, and they can make a change to a car that results in the production line churning out the new design of the car in under three hours. So this is the kind of thing that we're talking about. Does this scale? Yes. I didn't get all of your question, I'm sorry, but if you're talking about distributed teams, then more autonomy is better. If people are trying to do command and control across time zones, that's probably the worst of all worlds in terms of organisational structure, because then the people that are making the changes are going to see problems, and they're not going to be able to solve those problems without going back to people that are somewhere else and don't understand the detail of the problem. You need the people in the work making some of these decisions, and I think that's a deeply agile way of thinking, personally. Now, I'm not entirely sure whether I've answered your question. Definitely, yes, thank you.

This idea of having the problem distributed is interesting, but I would like to know more about how to make it possible, because usually the organisational structure is more of a top-down approach, where we get a larger problem and define some strategy to solve it, and that's the way it flows, top to bottom. So how do you go about it?

It depends, to some degree, on which route you take. The first thing I would say is, if you're looking at continuous delivery as part of the solution, and I think it is, then the simplest way is that you build everything together, evaluate everything together and release everything together. That doesn't mean your system can't be nicely modular, nicely service-oriented, all of those things, but if that's your scope of evaluation, that's the simplest way of managing the dependencies between things and reducing the coupling. That's a world of continuous integration, shared code ownership, very fast feedback, and lots of engineering to get that fast feedback. That's one way of doing things. At the technical level, with the microservices approach, the things to be doing are thinking about contract-based testing, ports and adapters, architectural patterns of that kind. But for both of these routes, the technical things aren't really the stumbling blocks. The one big system is remarkably more
scalable than you think. It can work for literally tens of thousands of developers in a single repository if you want it to, but you have to invest in the engineering. The real hard part is getting people to start thinking differently. I think that what we're talking about here, modern software engineering techniques, the modern leading edge of agile, is a paradigm shift in thinking for software development. The trouble with a paradigm shift is that it may simplify things in the new paradigm, and the picture may be better in the new paradigm, but it means that some of the questions from the old paradigm don't make sense any more and aren't answerable. So trying to convince people, maybe non-technical people, in your organisation to adopt these changes is the really hard part. The technicalities of it, I think, are reasonably well understood. That knowledge isn't spread all around the world, but I can point to people that know how to answer those problems. How you translate this into your organisation is a much more difficult, cultural thing. As a result, one of the downsides of continuous delivery is that it's very hard to adopt. It's better than anything else that we know how to do. I don't think that's hyperbole on my part, I don't think that's hubris; I think that's backed up by the evidence. But it is very difficult to adopt, and that's a shame, because it works better. So if you're in a start-up, start with continuous delivery, because that's the easiest time to start. But if you're trying to retrofit this to an organisation, you're going to have a long journey. I usually say to my clients, if you're in a normal starting place, forgive me, if you're in the realm of a flaccid scrum, not really very good agile, if you're in that kind of place, it's probably going to take you two or three years before you're good at continuous delivery. But the good news is that it's only going to be a month or two before you're at least as good as you were before, and from then on you'll be getting better and better and better. So it's a difficult change to make, always, and there's an awful lot to it; it covers a lot of territory. Thank you.

Hi. On the LMAX slide you showed, the cycle of tests: I have a curious question about the local and dev environments. You said that's going to be hundreds of tests, and in UAT it's going to be 20,000-plus tests, so those are presumably automated tests?

Yes, virtually all of the tests with numbers attached were automated tests.

So what kinds of tools were used to make it work?

Mostly it was JUnit, and then stuff that we wrote. The technique for the acceptance tests is based around building domain-specific languages to evaluate the releaseable unit of software for our system, and those sorts of things. There's a structured way of building those kinds of tests which drives the development process and helps development teams to understand the outcome that they're striving to achieve as part of the requirements process. We build these things called executable specifications that define the target outcome. There are some technicalities to it, but the technology itself doesn't really matter; you can do it with almost anything.

So they were mostly microservices-related tests, and not based on the UI, right?
No, they were UI-based tests, but we hid that: the test cases themselves didn't know anything about the UI. That's a testing technique; if you want to learn more, check out my YouTube channel, there are some examples on there. In the plumbing of our testing at LMAX we were using Selenium to actually talk to the web-based interfaces, for example, and we used various technologies here and there, but it doesn't really matter which ones you pick.

OK, and in this case, how long did it take to make this process an end-to-end cycle?

It depends what you mean. We built the first version of the pipeline in the first iteration of the project, so it took two weeks to have the first working version of the pipeline. It wasn't as sophisticated as the picture that I showed you; that evolved over time, as need demanded. The easiest time to start is right at the start: with the first features that you write, you start writing some tests, you build the deployment pipeline to execute those tests, and you deploy the results of your work, even though that doesn't make any sense for users yet. If you build that, it's then easy to add to in future. If you wait until you're nearly finished and then build it, it's hard.

You also showed a slide on distributed pipelines and scaling pipelines. In a distributed pipeline, doesn't the cost of merging and integration testing grow? Is that going to be the difficulty?

We didn't do integration testing as part of our strategy; we did it occasionally. But I think you need to stop there, because I'm going to have to charge a consultancy rate soon. There's a lot more on this on my YouTube channel, so do check it out; there's a playlist about acceptance testing.

Thank you. Pleasure. There's somebody over here.

Hi Dave. I'm a believer in, and promoter of, continuous integration and continuous delivery and deployment, so I relate to what you say very easily, and thanks for all the work that you're doing in this space.
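To make the earlier answer about DSL-based acceptance tests more concrete, here is a minimal sketch of an executable specification. It is illustrative only, not LMAX's actual framework: every name in it (`TradingDsl`, `FakeUiDriver`, `place_order`) is invented, and an in-memory fake stands in for a Selenium-backed driver, to show how the test case itself stays ignorant of the UI.

```python
# A hypothetical executable specification: the test case speaks the language
# of the trading domain and knows nothing about the UI it ultimately drives.

class TradingDsl:
    """Domain-specific test language; delegates to a pluggable driver."""
    def __init__(self, driver):
        self.driver = driver

    def place_order(self, instrument, quantity, price):
        self.driver.submit_order(instrument, quantity, price)

    def assert_order_confirmed(self, instrument):
        assert self.driver.last_confirmation() == instrument, (
            f"expected a confirmation for {instrument}")

class FakeUiDriver:
    """Stand-in for a Selenium-backed driver; records what the UI would do."""
    def __init__(self):
        self._confirmations = []

    def submit_order(self, instrument, quantity, price):
        # A real driver would click through web pages here.
        self._confirmations.append(instrument)

    def last_confirmation(self):
        return self._confirmations[-1]

# The specification itself reads as a statement about the domain outcome:
dsl = TradingDsl(FakeUiDriver())
dsl.place_order("EUR/USD", quantity=100, price=1.07)
dsl.assert_order_confirmed("EUR/USD")
print("specification passed")
```

Because the specification only talks to the driver's interface, the same test case can be pointed at a web UI, an API, or a fake without changing the test itself, which is part of what keeps a suite of 20,000-plus acceptance tests maintainable.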
Thank you. There's one point. What I have seen is that whenever we talk about continuous delivery, it's primarily around engineering, the tools, the automation. It starts when the code gets committed, and it ends when we deploy it into an environment. But in this entire process, for my product to truly get to production on a regular basis, there also has to be a mindset of identifying change at the right granularity, the right sizing. If that is not done, if the impact of a change on my software in production is not identified in the right way, then this is not possible. If we continue to create monster epics that run on for three or four sprints, we won't be able to release. That part is neither the pipeline nor the engineering aspect; there are things like feature-mapping sessions and discussions that happen, but that part doesn't often get covered when people speak about continuous delivery. What do you think?

I do agree, and I also think that continuous delivery is wider than just the deployment pipeline. The trouble is that the deployment pipeline is the easy part to talk about. I have a training course, for example, that does exactly what you're talking about: it goes from stories to executable specifications. How do you identify the ideas, how do you analyse them, how do you get them to the point of being executable specifications? I think that's part of the engineering discipline as well, but it doesn't often get talked about. It's most often discussed in the context of domain-driven design and behaviour-driven development, and those are fantastic tools; I think of them as tools that are part of the continuous delivery process, for the reasons that you just described. But you're right, they tend to get talked about separately. The deployment pipeline isn't all there is; there's more to it than that. I think the best way to think about continuous delivery is working so that our software is always in a releaseable state, but that's about more than just the pipeline. It's about thinking about what's coming next, about how the things that are out in the wild determine what's going to come next, and so on. The whole thing is a feedback loop that we need to optimise. I do talk about that in other contexts, so once again, forgive me for advertising, but my YouTube channel has some stuff on those things too. Thank you.

One last question. Dave, you said we need to think of software as a creative discipline, whereas I've heard terms like the factory model and the industrialisation of software. These seem to be at odds with each other.

I think they are at odds. I think that calling for a factory model, or the industrialisation of software, can only come from people that don't know anything about software, because that's not how it works.

Thank you so much, everyone. Thanks, Dave. Thank you very much.
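On the contract-based testing that came up for the microservices route, the core idea can be sketched in a few lines. This is a hypothetical, minimal illustration (real consumer-driven contract frameworks do much more); `CONTRACT`, `provider_handle` and `verify_contract` are invented names.

```python
# A minimal sketch of consumer-driven contract testing: the consumer records
# its expectations of a provider as data, and the provider verifies that it
# can satisfy them, with no joint deployment or end-to-end integration test.

CONTRACT = {
    "request": {"path": "/orders/42", "method": "GET"},
    "response": {"status": 200, "required_fields": ["id", "state"]},
}

def provider_handle(path, method):
    """Toy provider implementation standing in for a real service."""
    if method == "GET" and path.startswith("/orders/"):
        return {"status": 200, "body": {"id": 42, "state": "FILLED"}}
    return {"status": 404, "body": {}}

def verify_contract(contract, handler):
    """Replay the consumer's recorded expectation against the provider."""
    req, expected = contract["request"], contract["response"]
    resp = handler(req["path"], req["method"])
    assert resp["status"] == expected["status"]
    assert all(f in resp["body"] for f in expected["required_fields"])
    return True

print(verify_contract(CONTRACT, provider_handle))  # prints True
```

The point is that the consumer's expectations travel as data, so the provider team can verify compatibility in its own pipeline; that is what lets autonomous teams release independently without a shared integration environment.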