Welcome all to this session by Matt Turner on Cloud Native Progressive Delivery. Matt is an engineer and he'd like to share his tech talk with everyone. So once again, welcome Matt. Over to you.

Thank you, thanks a lot. Welcome everyone. I can't see you, but I hope you can see me. Good morning, evening, or afternoon, wherever you are. I'm in Alabama in America at the moment, so the sun's just starting to come up. I hope you're enjoying Agile India; there are a bunch of great talks out there, so thank you very much to everyone who's come along to this one. I'm going to talk about Cloud Native Progressive Delivery. This is maybe going to be a little more technical than some of the talks at an Agile conference, but it won't require too much technical understanding; you don't need to be an engineer. I'm just going to focus on technical stuff rather than people stuff. Progressive delivery is a way to more easily, more safely, and ultimately more quickly deliver value to customers. I think it plays into what we're looking for with Agile and Lean: the ability to run experiments quickly, to not hold inventory, all of that kind of stuff.
I think a lot of people get the people side right, the software engineering, the team structures, but then they get held up by the practical details of getting releases out, of getting new features into the hands of users safely. Maybe they'll run some fast sprints, do some real innovation, and then release the software, and there's a bug and they have to roll everything back. Or vice versa: a team will deliver very quickly, very incrementally, but then things don't actually reach users for several months, because they're held up getting deployed to the final production environment, because people are worried about those changes. So I'll take you through that. The "cloud native" part refers to the fact that I'm going to be looking at these new cloud native technologies: if you're running your software in a cloud, if you're using Kubernetes and some of the other technology I'll show you, that's a real enabler for what we're trying to do.

I'm a software engineer; I work at Tetrate, and I just need to mention that quickly. We make software that helps you with your journey from on-premise into the cloud, or helps you run a hybrid on-premise and cloud environment, and helps with app modernization. If you've got containers but are struggling to run them, if you're going from VMs to containers, if you've got a hybrid cloud and on-premise environment, we make a bunch of software that can help with that, and it's based on the same cloud native technology that I'm going to be talking about today.
Because this is online I can't see you, so I can't start by asking for a show of hands: whose software release process looks like this? A nice, simple, fun, enjoyable walk in a straight line. I'm going to guess it's not too many people's. I guess not many people now are printing software onto a disk, putting it in a box, and shipping it; most of us are delivering an app or a website or an API or something like that. But I'm guessing that for not many of us, the path from feature inception to having users actually use our software and give us feedback looks much like that first picture. Maybe it looks like this; I googled for this, I'll be honest, but it's quite similar to release processes I've had to work with before. Or maybe it looks like this; some people can maybe tell me the name for this particular approach, but it looks very complicated to me. Or maybe it looks like this; I think we'd need a consultant to help us understand this one, because the diagram doesn't seem to have a lot of meaning. Either way, it's likely to be quite complicated, quite involved, with a lot of steps. More realistically, maybe we have something that looks like this: SAFe, the Scaled Agile Framework, which I'm sure everybody here is very familiar with. We have a release train, and we have all these departments with testing and quality and operations, and everything takes a lot of paperwork and a lot of time. So how can we do something better? How can we get closer to that first picture? I'm going to tell you that cloud native technology can help us with this.

For those who aren't familiar, I'm going to give an overview of some of the tech I'm talking about. This will be quite a whistle-stop tour; if you haven't heard of any of this stuff before, you might just want to take some notes and go and google it afterwards.
Like I said, there won't be much code, but there are a few different things I do need to introduce. We have a piece of software; if you're making a mobile app or whatever, all of this still applies, but I'm going to be talking about a more modern sort of application: a business running a SaaS product. So I'm going to assume we're writing code that runs on the server side and gives the user a website, an API, a web app, some kind of SaaS offering. Our code has to run somewhere; maybe it runs on a server or something at the moment. There's an idea in tech called the twelve-factor app: a set of guidelines that describe a good way to write modern applications. I'm going to assume that we're writing microservices, or that we're on a journey there, and the twelve-factor app tells us a lot about how to write a good microservice, so I'll come to a few of its recommendations. The first thing we should do is use Docker: run our code in a container. If you're not familiar with that, it's definitely worth looking up; it's a nice way to isolate your running code so that we can safely make changes to it. One of the things the twelve-factor app says (it's called twelve-factor because it's a list of twelve recommendations, and this one is number five, as it happens) is that the building of our code and the running of our code should be kept separate. What I'm showing here on the right is our code built, packaged up, and ready to run: we're building container images. If you've ever worked with Java and built a JAR or a WAR, or packaged something for pip in Python, or built a VM machine image, containers have the same kind of idea. We do a build of our software, and we don't build it on the server we run it on; that's the point. We package it up into a container image, and that image sits in a registry, ready to run.
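As a concrete sketch of that build step, here's what a multi-stage Dockerfile might look like, with the build stage and the run stage kept explicitly separate per that twelve-factor recommendation. The Go toolchain, service name, and paths here are all illustrative assumptions, not anything from the talk:

```dockerfile
# Build stage: compile the code, with the full toolchain available.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /orders ./cmd/orders

# Run stage: a minimal image containing only the built artifact.
FROM gcr.io/distroless/static
COPY --from=build /orders /orders
ENTRYPOINT ["/orders"]
```

`docker build -t registry.example.com/orders:v2 .` followed by `docker push` is what puts the image into that registry, ready to run.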
When the code actually runs, it runs in Docker, with that isolation I was talking about. So we've separated the building and the running. As I say, I'm going to assume we're doing microservices. There have been books written on this (the stuff Sam Newman and Martin Fowler write is all very good), but the idea is that our software isn't going to be one monolithic blob; it's going to be several different pieces of software, several different services, working together. When you understand microservices you start to see patterns, and there might be different types of service doing different things, but importantly we've got different pieces of software, each running in containers to isolate them, and importantly they can all be deployed separately. This was the original point of microservices: I take the code base and split it up, which means that if I add a feature in one place, I can deploy just that part of the code without having to wait for an agile release train or whatever the mechanism is, without having to deploy everything at once, because that's a much bigger job, it's riskier, and it's the kind of thing that introduces delays. The other thing about microservices, if you've not seen this pattern before, is that they can all scale separately. Maybe one part of the code has to do a lot of work and another does less: one deals with account creation, which doesn't happen very often, and one deals with orders, which happens all the time. We can add more replicas (it's just like adding more servers) of one without the other, so we scale out just part of our code, which is very efficient: it saves us running multiple copies of all of the code when we don't need them.
As I said, this is going to be a whistle-stop tour through the tech; I can't see anybody nodding, so I don't know how familiar people are with this stuff. Once we have our software in containers, in these Docker images, we then need to actually run it on many servers. Even if we don't have very many users, we're going to want to run three copies of each of these services just for redundancy, in case one of those servers crashes. And when we've got lots of users, with the scaling of the services that I showed, we're going to need a lot of server capacity to run all those replicas of our software. I'm sure you've all heard of Kubernetes, even if you're not too familiar with it. Kubernetes is a platform, an orchestrator: a piece of software that will organise running our containers for us. I install it on a load of servers, give it a load of these container images, and say "hey, run 10 copies of that and 100 copies of that", and it'll just go and do it. It'll spread them out across the different servers, it'll restart them if they crash; it's got all kinds of features that you can look up. Taking the step of writing microservices and building them into Docker containers is great for devs, but if you're still manually using SSH or RDP or whatever to connect to a server, and that's how you run a new service and how you upgrade a service, then we're not going to get the agility and the speed that we need, because that's still a whole load of manual work. We'd have the first step but not all of the steps. So Kubernetes, which is kind of like an automated sysadmin, is going to be a very important part of this.
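That "hey, run three copies of that" instruction is, concretely, a small Kubernetes manifest. A minimal sketch, with the service name and registry being illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3              # three copies, spread across servers, for redundancy
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:v2   # the image built earlier
```

`kubectl apply -f` this, and Kubernetes keeps three replicas running, restarting them if they crash.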
When we're running Kubernetes, we can add an add-on called Istio, which is what's called a service mesh. What that does is make the network much more clever, much more sophisticated, so that when each of these services talks to another (when our orders software talks to our users software, or to our basket software) all of that traffic goes through the Istio network. These purple boxes here show a network proxy that Istio injects into all of our containers, so when one service talks to another, those requests go through the purple boxes. The little sailboat up at the top is a piece of software that's controlling and configuring all of those proxies, again so that we don't have to. What this means is that we get a whole bunch of insight into the network, into what's going on. For example, we can run some kind of metrics system and get graphs of how much CPU our software is using, or how much memory, or, importantly, graphs of how many errors. When we're talking about being agile, about delivering incrementally and progressively and as fast as we can, if I deploy a new version of one of these services I need to know what its error rate is. It's great if it adds a new feature, but if it makes ten times as many errors as the previous version, I need to see that on a graph and I need to be able to roll it back. By using Kubernetes we get statistics about whether the software crashes in its containers (the containers isolate it so it doesn't crash all the other software, but we can know whether each part crashes). And by using Istio, the service mesh, because all the requests go through those purple boxes, they can observe what's going on, and if one piece of software returns an error to another, we can see the response error rate on a graph like this.

The normal way we look at these metrics (we'll talk about this a little later) is what's called RED metrics. That stands for Rate, Errors, and Duration. Basically, for every service, for every part of my software, I want to know: how many requests is it getting? That's the rate: are 10 users using it, or a thousand? Errors: okay, so there are a thousand users using it; what percentage of them get an error? If a hundred get an error, that's a 10% error rate, and my software should probably be rolled back; that's not going to give a good experience to a lot of users. If it's one error in a thousand, maybe that's okay, because that's a 0.1% error rate. You can decide in your business what kind of experience you want to give your users, how you balance users potentially seeing an error against getting features out there quickly. If you're an early-stage startup, or this is a beta feature, you might accept some errors; it might be okay for some users to have a bad experience if you're getting features out there and testing hypotheses really quickly. But you need to be able to see it: you need to know what the error rate is so you can make an informed decision about whether 1% or 5% is okay or not. The last one is D for duration, which is how long a response takes. Say I'm on your store trying to order something you sell: even if there are no errors, even if the browse page and the basket page and the orders page all load okay, if each one takes ten seconds then I'm probably going to go to a different store; I'm probably going to get bored and move away.
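To make the RED arithmetic concrete, here's a toy Python sketch computing rate, error percentage, and median duration from a window of request records. The record format is invented for illustration; in practice the mesh proxies export these numbers as metrics for you:

```python
def red_metrics(requests, window_seconds=60):
    """Compute RED metrics from (status_code, duration_ms) request records."""
    total = len(requests)
    errors = sum(1 for status, _ in requests if status >= 500)
    durations = sorted(d for _, d in requests)
    return {
        "rate_rps": total / window_seconds,                       # R: requests per second
        "error_pct": 100.0 * errors / total if total else 0.0,    # E: % of requests that failed
        "p50_ms": durations[len(durations) // 2] if durations else None,  # D: median latency
    }

# 1000 requests in the window, 10 of them 5xx errors: a 1% error rate.
reqs = [(200, 30)] * 990 + [(500, 30)] * 10
print(red_metrics(reqs)["error_pct"])  # 1.0
```

In a real system these numbers come out of Prometheus or a similar metrics store; the point is just that rate, errors, and duration are simple aggregates over per-request records.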
So even if a response is technically okay, I need to know how long it takes, because responses need to be really quick to keep customer engagement. And when we have those statistics, we can start writing contracts; contracts in the technical sense now, not a big legal deal. I can start writing something like a service level agreement, which says: okay, I make the orders part of this software, and because all the other services have to interact with orders (the basket has to interact with it, the shipping component has to interact with it), my agreement with everybody is: I will write a service that lets you place new orders, list all the orders, and cancel orders; I'll be the order service that validates those and stores them in the database; and this is the API to interact with it. We can then talk about a service level objective, which says: right, I'm going to offer you that orders API, and 99% of queries to it are going to succeed. There may be 1% errors; we're moving quickly, we're adding features quickly, there are going to be some errors, and we don't have time to test everything, but my objective is that it's only 1%, and my objective is that every API call gets a response within one second. That's the kind of thing we have in a service level objective, and I think this is really useful when you start doing microservices and you have multiple teams working separately on different services that have to interact with each other: being explicit about what you're going to get and what it's going to look like from the other services, and writing that down, is very useful. A service level indicator is basically just saying: how do we measure that? For a 99% success rate it's quite obvious (we just look at the responses and see whether each is a 200 or a 500), but we can write contracts about other aspects of what these services do, and some of those can be a little more difficult to measure, so sometimes you have to write down how you'll measure it too.

This is what those purple boxes do; this is what the service mesh technology, within Kubernetes or within ECS or wherever, does for you: it gathers the metrics for those service level objectives. It's no good having an SLO that says 99% of my responses will be good if I can't measure them. Because every request between the services goes through the purple boxes, the proxies get to inspect that traffic and produce those metrics. That's why we're using this technology: it enables business-level design-by-contract like this.

There are another couple of things I'll mention very quickly; I don't want to spend forever on the tech, and I don't know whether people are super familiar with this or whether it's completely new. One is building these platforms with what's called infrastructure-as-code tooling. We don't set our servers or our Kubernetes software up by clicking through an options dialogue; those are manual processes that we do once, and then if we need to rebuild the system we have to do them all again, and we've forgotten the settings we chose last time. Instead we write everything down as configuration, as code, and we store it in git. Then we can rebuild the environment, or make a change to it, really quickly, by changing that code in git and rerunning it, and it goes and configures our production environment for us.
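For a flavour of infrastructure as code, here's a tiny Terraform sketch (Terraform and the AWS provider are my assumption here, and every name is illustrative). The point is that a DNS record lives as code in git, not as a memory of console clicks:

```hcl
# A DNS record described declaratively; rerunning `terraform apply`
# reproduces exactly this state in AWS every time.
resource "aws_route53_record" "shop" {
  zone_id = var.zone_id                  # hosted zone, supplied as a variable
  name    = "shop.example.com"
  type    = "A"
  ttl     = 300
  records = [aws_eip.ingress.public_ip]  # the cluster's ingress IP, defined elsewhere
}
```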
Very quickly, I'll also mention GitOps. GitOps is the idea that you take all of that code describing what your production environment should be like, keep it in a git repository, and have an automated piece of software, an agent, that watches what's in that repository and goes and builds it in the cloud. So if I'm on the platform team and I want to change how this production environment functions, I don't go into the AWS console and click something; I go to the code in the git repository and make a change there, and as soon as I commit it, no human has to do anything: a piece of software notices the commit, runs that code, and AWS ends up looking the way I wanted. With all of these pieces we've automated the whole path. If I want to change something in AWS (add a domain name, whatever), I don't log in and click; I write one of these configuration files, commit it to the git repo, and everything happens automatically; a few seconds later AWS is the way I want it. This gives us speed and reproducibility, and this is what lets us be agile: it's what lets us respond to users' requirements very quickly, and it's what lets us do the lean manufacturing thing, because as soon as I've had the idea, got PR approval, and got that change merged to git, the stock isn't held back; the hypothesis is out there being tested, because it's automatically deployed all the way into AWS.
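At its heart the GitOps agent is a reconcile loop: compare the desired state committed to git with what's actually running, and act on the difference. Here's a toy Python sketch of that idea; real agents such as Argo CD or Flux do this against a live cluster, and the data shapes here are invented for illustration:

```python
def reconcile(desired, actual):
    """Return the actions needed to make `actual` match `desired`.

    Both arguments are dicts mapping service name -> version,
    standing in for the git repo contents and the live cluster.
    """
    actions = []
    for svc, version in desired.items():
        if actual.get(svc) != version:
            actions.append(("deploy", svc, version))   # missing or wrong version
    for svc in actual:
        if svc not in desired:
            actions.append(("delete", svc))            # running but no longer in git
    return actions

desired = {"orders": "v3", "basket": "v1"}                  # what git says
actual = {"orders": "v2", "basket": "v1", "legacy": "v9"}   # what's running
print(reconcile(desired, actual))
# [('deploy', 'orders', 'v3'), ('delete', 'legacy')]
```

A real agent runs this comparison continuously, so the cluster converges on whatever git says within seconds of a commit.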
So let's actually look at the deployment of our own software. Hopefully that gave you an overview of the kind of technology we can use to enable this; but how are we actually going to use it? Just a couple more terms first. People say "CI/CD" a lot; I'm sure you've heard it. The CI in CI/CD stands for continuous integration. This used to have a different meaning, but now it basically just means continuous build; it should really be "CB/CD", I guess. It's the same kind of deal: if I'm working on my piece of software and we're using git, then as soon as I make a commit, an agent (Jenkins or CircleCI or whatever) will go off and build my software, tell me whether the build succeeded or failed, make the package (the pip package, the JAR, or in our case the container image), and put it somewhere, ready to be deployed. So when people say the CI in CI/CD, what they really mean is continuously building every commit of the software. That's fine; I just wanted to clarify it. Okay, if we're defining terms, if we're saying what words mean: what is a deployment? We talk about CD, and I've said a lot about deploying software so far, but I haven't actually explained what I mean. Is deployment taking one of those software packages that we built, the JAR or the container image, and running it? Well, I think it is. So continuous deployment, the CD part of CI/CD, means continuously doing that deployment: every time there's a new build, every time a new JAR or a new container image has been made, we deploy it; we take that software and we run it.
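A sketch of that continuous-build half, written as a GitHub Actions workflow since that's a common choice (the registry name is illustrative, and the push step assumes registry credentials are already configured). Every commit to main becomes a tagged container image in the registry:

```yaml
name: ci
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build an image tagged with the commit SHA, then push it to the
      # registry, ready to be deployed (but not yet deployed or released).
      - run: docker build -t registry.example.com/orders:${{ github.sha }} .
      - run: docker push registry.example.com/orders:${{ github.sha }}
```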
Okay, great: so what is progressive delivery then? Well, that depends on what we mean by "release". When I said deployment, I said: take a piece of software and run it. And we might assume that if we're running that piece of software in production, in AWS, in the Kubernetes cluster, users will start interacting with it: I take the built software, I run it in production, and any user who comes to my site is going to see that software. Well, I think not necessarily, because deployment, strictly defined, means taking the code and running it in production. My definition of release is exposing that software to users: having users actually use it. So what would continuous release be? For every new deployment, for every new running piece of software, automatically exposing users to it, automatically putting it on the main path that users take. So if we do continuous integration (continuous building), and we do continuous deployment, and we do continuous release, that means every commit makes a container image, every container image starts running, and every running container image is exposed to the user. As soon as someone makes a commit, users start using it, and if that commit is bad, users have a bad time. I think CI/CD is good, and I think we can keep doing CI/CD, but when most people say CI/CD they include this continuous release step: they assume that deployment means releasing, that running the software means it's the software the users interact with. But what if we say that deployment and release are not the same thing, that I can deploy software without releasing it to users? I'm going to try to persuade you in the next twenty minutes that that's a good idea, and show you that we have the technology to do it, using those cloud native technologies I just explained. One last thing the twelve-factor guide says, in point ten, is that development and production environments should be the same: keep development, staging, pre-production, production, whatever you call them, as similar as possible. Actually, I've been assuming so far that we don't even have a staging environment, that we only have one, because when I said CI/CD I said you deploy to production. And I think you should do that; I think you should always deploy straight to production.
But if deployment isn't equal to release, then that's not dangerous, because users don't start using the software we've deployed anyway. Let me show a diagram; maybe that makes it easier to explain. Using all of this cloud native technology, what do I think the perfect build, deployment, and release system looks like? First, the build stage. I write some code (the little green scroll here is the best "code" image I could find) and I commit it to git. Some agent (the gears represent a piece of software) comes along, builds what's in git, and puts an artifact in a store (the database symbol). It's an abstract diagram, but if you think about how your software process probably works, this is probably what you have at the moment: you write some code in Java, commit it to git, and Jenkins builds a JAR and puts it in Artifactory, say. Or you write some code in Python, commit it to git, and CircleCI packages it and puts it in PyPI. This applies in any language, any ecosystem. So the contract from this stage is that any new commit to main, any merged PR, produces (in our case, because we're using cloud native technologies) a container image, and pushes it into that image registry. So what is deploy, then? I've drawn this line here because build and deploy should always be very separate. I think a lot of people are halfway down this path; maybe they haven't thought about it too clearly, but a lot of people have ended up building systems where the two are quite separate anyway. The build we just talked about, on the left, happens completely independently of everything else. What I then have is this other git repo here, a different git repo, with files in it that say what version of my software should be deployed.
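The files in that deploy repo can be tiny. A sketch of what one might contain; the format here is illustrative, and in practice it's often a Kustomize overlay or a Helm values file rather than a bespoke schema:

```yaml
# deploy/orders.yaml -- the source of truth for what should be running.
service: orders
image: registry.example.com/orders:v2   # bump this tag to deploy a new version
replicas: 2
```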
Because for every commit to the software repo on the left, every version in git, we build it and make a JAR or a pip package or a container image, and each one has a version on it, we end up with a big registry: version one of the software, version two, version three, all there; we could take any of them and run them. So I have a git repo with a file that says "version two should be running", and then there's a piece of software here, this cycle symbol, that's constantly watching. It says: okay, the file in git says we should be running version two, so I'm going to go and run two containers of version two (you see these are all the green version). A bit of an abstract diagram, but this is using the technology we talked about: the software runs in Docker containers, and it's Kubernetes that goes and starts them. I haven't drawn any servers, any VMs, any VMware, anything like that, because when we use all of this automation it kind of isn't relevant; we just rely on Kubernetes to make it happen. So what happens when I set this system up? It's a continuous deployment system at the end of the day, just a little different from the Jenkins or the Spinnaker or whatever you might have used, because it's continuously deploying the software: every time I make a commit to the git repo on the right saying "I want to run version three", a piece of software notices straight away and comes and runs version three in production. So we are continuously deploying. Now, what happens by default? I've got a user down here trying to access my app, and they send a request.
Their request hits this grey box, the load balancer, the cloud edge, whatever you want to call it: the network ingress point. From there, all the requests go to the version we're running. At the moment we're in a happy state: the green version, version two, is what's described in the git repository, it's what's running, and every user request goes to version two. What happens if I make another commit in git, of new code, of my new feature? I'm super excited about this feature, but it's brand new and it hasn't had a lot of testing. I've done all the testing I can at the bottom of the agile testing pyramid, the unit testing, but we haven't gone all the way up yet; we haven't been able to test it in the real world. Well, because everything's automated, because we're being really agile and really fast, my new commit gets built, resulting in version three, a red image in the container image registry, and then the continuous deployment system kicks in, and suddenly we've got version three running in the cluster as well. What happens by default is that of all the user requests coming in, half go to version two and half to version three, because this is a dumb load balancer: it balances load. It sees that there are now four containers; it doesn't really understand that they're different versions, so it just spreads the traffic between them. So now, instantly, because we've got continuous integration and continuous deployment, this brand new code that might not be very good is exposed to half of the users. Half of the users are going to see that new feature, which is great, we start testing our hypothesis; but half of the users are also going to hit code that maybe crashes quite a lot, or maybe gives them bad results. So what can we do about it? This, I think, is the crux of progressive delivery.
We add another agent, another piece of software, but that's all right: this is all automation. This piece of software controls that load balancer for us. It's going to be an ELB or an ALB if you're on Amazon, something like that, or it's going to be part of that service mesh; I won't go into the technical details, but there's another piece of software here that can control the load balancer for you. Because if continuous deployment deploys our new version and we see that half of the traffic is going to it, we might want to change the load balancer's settings really quickly. As the person who owns this app, I'm thinking: version three is maybe not ready yet, so let me just go and change the ELB setting to send all of the traffic to version two, just for a while, just to be careful. We can have a software agent that comes and does that for us, which is safer and quicker and easier. I'll move on a little because I realise I'm getting towards the end of the time. So that little agent is forcing all of the users to go to the green version, version two. But we've done continuous deployment, and I think continuous deployment is a good thing, so this new, untested version is running, and it's running in prod, but it's not being released: it's not exposed to users. The users are not seeing it, and we achieve that by controlling this load balancer. Otherwise, if we wanted to test this new piece of software, we'd have to deploy it into dev or staging or whatever you want to call it, a completely separate environment. But it's expensive to run and maintain that environment, and it's never going to look exactly like prod; they just never do, as hard as we try, so that testing is never going to be as good as it could be. So I think we should continuously deploy, and deploy into prod.
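With Istio, the agent controls that smart load balancer by writing routing configuration. Here's a sketch of the kind of rule it might program; the host and subset names are illustrative, and the subsets would be defined in a companion DestinationRule. Ordinary traffic is pinned 100% to v2, while a request carrying an opt-in header is routed to v3:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders
  http:
    - match:                      # testers opt in with a header...
        - headers:
            x-test-version:
              exact: v3
      route:
        - destination:
            host: orders
            subset: v3            # ...and reach the deployed-but-unreleased v3
    - route:                      # everyone else: all traffic to the safe version
        - destination:
            host: orders
            subset: v2
          weight: 100
        - destination:
            host: orders
            subset: v3
          weight: 0
```

Shifting the weights from 100/0 towards 0/100 is what a progressive rollout looks like in practice: the new version takes a growing slice of real traffic while the agent watches its error rate.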
deploy into prod, but with this new cloud native tech, with things like this agent and this smart load balancer coming from the service mesh, we can control what the users see. When the users send their requests they get version two, but version three is there, running. So what can we do? Well, we can start testing it: our tester can access exactly the same thing. The tester is now not using a staging environment or something; the tester is using the production environment, maybe accessing exactly the same DNS name that the users do, but they set an HTTP header to say "hey, I want to opt in to the testing version." They can send their Postman request, or whatever they use for testing, to exactly the same ingress point of exactly the same production environment and get a very representative test. They just set this one header, and the load balancer is smart enough to send their request to version three, while the users are still getting the safe version two. We can also have other kinds of testing hang off this agent that's controlling the load balancer for us. It sees version two is deployed, it sees version three is deployed, and it goes "ah yes, version three is still under testing, so I'm going to send all the user traffic to version two, but I'm going to spin up an automated testing system, an automated test script, and tell it where version three is." So beyond a manual tester, we can go even further and have the agent that's managing all these deployments start up automated testing scripts, even load testing, which means sending a lot of traffic in. I'll skip this because I'm running a little short on time. Basically the point is that this is fiddly, there are a lot of moving parts here, this is
hard to do, we need a lot of automation, we need a lot of technology, but when we do it, I think we're in a really good place. So just quickly, what else can this agent do? Well, it can let version three sit there for a while with no traffic except our testers, who access it by sending a header, but it can test this stuff in multiple ways. The other thing it can do is take all these user requests that come in and mirror them to version three. You see here, version two gets the user requests, and the replies that version two gives are what goes back to the user. We send duplicates of the same requests to version three, but you see the arrow doesn't come back: this is not the response the user sees. Version three is getting these requests and giving responses, and because we're using this service mesh, all of these things have a little proxy alongside them, so we can see the responses come out of version three, and then they just get thrown away; they don't get returned to the user. But before we throw them away, we can see whether they're good or bad, whether they were an error or not, and how long they took. So we can get the metrics for version three, compare them to version two, and say "right, it's got this new feature in, but it's got one percent more errors and it takes 50 milliseconds longer," and we can then ask the product owner, the business: is this acceptable? How keen are you to get version three out there? This is saying that we can inspect that service; we can see if it just crashes. Maybe it gets the same requests as v2, v2 handles them okay, and v3 crashes: very useful to know that. And because we're testing this in production, we're going to get a much better idea, because in staging the only requests the software gets are something we've come up with, some requests that we
have designed ourselves; nothing is ever going to be as crazy as a user request. So by letting it get a copy of all the user requests, all the crazy input that users send the app, we can see whether it crashes, we can see whether the responses are any good, and we can even compare the two. If we have an API, if we're a SaaS solution, we can send all the user requests to version two and version three, they both send their responses, and we can compare and say "oh, version three is returning different responses; version three should be faster, that was the plan, but it's actually returning different responses, is that okay?" So what is released? This is deployment, this is continuous deployment: we've got this software running as soon as it's been committed, it's sitting there, and we can go and test it. We can do manual testing of it, we can do automated testing of it, we can mirror user traffic to it, but the user doesn't get a response from it. Then, when all of that has told us that the software is good, or we think it's good, we can actually release it; remember that releasing is exposing the software to the users. So again this agent can take some more control of the load balancer and say "right, we think version three is good, we've done as much testing as we can, we've given it the mirrored requests and everything, but we still want to be a little careful: most of the traffic is still going to version two, but let's send one percent to version three and let those users actually have the response." Then we see if the users complain, see if our metrics say that users are dropping out of the funnel or whatever; we can watch the statistics for these services all the way through to see how they're getting on. We can then send a little bit more to version three, and if it
still seems good, the metrics are still good, the users are still happy, we can send even more, and eventually it gets all of the traffic and is now our running version. This is nothing we couldn't have done before, but it was very, very difficult: if you've got Windows servers and JARs and you're doing this manually, going and reconfiguring an on-premise load balancer, this is very hard. What I'm saying in this talk is that cloud native technology, the cloud providers, Kubernetes, Istio, all of this kind of tech, makes this easy and automates it. Everything we've seen here has been completely automated; no user had to do it. We've gone through all these stages, all these different ways of testing, all these stages of being careful, of gathering data and sort of scientifically moving on with our software rollout, but no humans had to put the effort in. This can happen overnight; it can happen very quickly if we tell the agent to move from one stage to the next quickly, and it can happen very slowly if we tell the agent to be very careful. And then just to round off, if you actually want to go and build this, there's various technology that fits in all of these boxes. In this case I've used a tool called Flux from Weaveworks for that continuous deployment piece, a tool called Flagger, also from Weaveworks, for that automated rollout agent, and then, as I say, everything is running in Kubernetes with Istio. My company, Tetrate, makes a platform that helps you manage that Istio, because that's a very big, important part: it's those proxies that give you all of these insights into your platform and give you the control over this traffic, so running that is important and quite difficult, and we make a platform that can help you with it. There are other solutions out there: we
could swap Flux for Argo CD, we could swap Flagger for Argo Rollouts, we could swap Istio for Linkerd, so I just wanted to say that I am agnostic on this, and there are other ways of doing it. I think that's just about on time, so I will end there. I guess the host is going to come back in and say what's next; I don't have time for any questions now, but I think there's going to be about 20 minutes where I'm going to be hanging out in a breakout room. So thank you very much.
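The header-based tester routing and the percentage canary split described in the talk can both be expressed in a single Istio VirtualService. This is a minimal sketch, not taken from the talk: the service name `myapp`, the subsets `v2`/`v3` (which would be defined in a matching DestinationRule), and the header name `x-canary` are all assumptions for illustration.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp            # same DNS name that both users and testers hit
  http:
  # Testers opt in to the new version by setting a header
  - match:
    - headers:
        x-canary:
          exact: "true"
    route:
    - destination:
        host: myapp
        subset: v3
  # Everyone else: 99% to the safe version, 1% canary
  - route:
    - destination:
        host: myapp
        subset: v2
      weight: 99
    - destination:
        host: myapp
        subset: v3
      weight: 1
```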
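Traffic mirroring, where version three receives a copy of every user request but its responses are discarded, is also just a routing setting in Istio. Again a hypothetical sketch using the same assumed names:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp-mirror
spec:
  hosts:
  - myapp
  http:
  - route:
    - destination:
        host: myapp
        subset: v2       # users only ever see v2's responses
    mirror:
      host: myapp
      subset: v3         # v3 gets a copy of each request; its responses are thrown away
    mirrorPercentage:
      value: 100.0       # mirror all user traffic
```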
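The automated rollout agent the talk describes (Flagger, in this setup) is driven by a Canary resource that tells it how fast to shift traffic and which metrics gate each step. A sketch under the same assumed names; the step sizes and thresholds here are illustrative, not recommendations:

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: myapp
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp          # the Deployment Flagger watches for new versions
  service:
    port: 80
  analysis:
    interval: 1m         # how often to evaluate the canary
    threshold: 5         # failed checks before rolling back
    stepWeight: 10       # shift traffic 10% at a time
    maxWeight: 50        # promote fully once 50% looks healthy
    metrics:
    - name: request-success-rate
      thresholdRange:
        min: 99          # roll back if success rate drops below 99%
      interval: 1m
    - name: request-duration
      thresholdRange:
        max: 500         # roll back if latency exceeds 500ms
      interval: 1m
```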