I hope everyone can hear me all the way to the back. Do I need this? Can you guys hear me without the mic, on the last bench? Yeah? I'm not going to use the mic because it saves me a lot of energy. Shall we begin? Okay, so my name is Larsen Kishir. I work at Kayako. I'll tell you a little bit about what Kayako is so that you understand the problem space, and we'll be talking about immutable infrastructure — what it is and what good it does. So let's begin. A couple of things I'll cover in this talk, just to set context. Fortunately, this is not a hypothetical talk. It's not "oh, this is a good idea, you should do it, I haven't done it." We've done this, it runs in production, so it's a real-life case study. Can you guys read the text? It's too light. Is it any better now? Can you guys come forward? I have a lot of seats here. If you can't read it and you want to read it, come forward. Can we switch off this light here? Cool. Okay, so I'm not going to read these points out. This is a real-life case study, stuff that we've already done — it's not hypothetical, it's not just gyan. We'll talk about architecture, and I'll do it without a diagram, because I want to talk at a conceptual level; you should be able to understand this just from the discussion. We'll talk about what exactly immutable infrastructure is, which is the core essence of this talk. We'll talk about infrastructure as code, and who and what that enables. We'll talk about these key problems — infrastructure, secret management, service discovery, container management and scheduling, blue-green deployments — and we'll also talk about how we've used AWS to help enable all of this and make it easy for us. Any better?
So does everybody get what this talk is about? Okay. A couple of questions you can ask — you can ask questions at any time, though the format is generally questions towards the end of the talk. And a couple of questions you shouldn't ask me. You shouldn't ask, "what is the code I need to write to do this?" That's very tactical; the talk is not for that. I'm happy to answer those questions afterwards, and so is my team — the ops team that's actually done this work. They're right here, so they'll be happy to talk to you afterwards if you need very specific answers, at the code level or at a very tactical level. This talk is to give you an overall idea. After this talk you should be able to walk away and say, "I see what we can do with our own processes, with our own tech, and this is how we can evolve our DevOps further." That's what you should walk away with. So quickly, what is Kayako? You need to know that so that you understand what we do, because this is a real-life case study, and the case study here is the company itself. Kayako is helpdesk software, very much like Zendesk or Freshdesk. If you've heard of any of those, it's a similar product. It helps companies run their support teams. I'll give you an example of that quickly. This is one customer of ours called Sindori. This is their website — they build Mac apps. And they have a support link here, right? You see a support link on all websites. That link leads into some piece of software, which is either Kayako or Zendesk or Freshdesk or one of the other products. So when you click on that support link, you land up here. This is Kayako. This is the customer-facing view of Kayako. There's also an agent-facing view: people send in support questions — they call, they send messages to the support team — and the agents answer those. So there's an agent side as well.
I'm not showing you that, but that's essentially what Kayako does. This is a snapshot of one of our help centers. This is where people come to ask questions, search, and essentially see if there's an article that can answer their question. If not, they can engage with the messenger, or drop an email to customer support, or call, or do whatever else. So that's what Kayako does. We'll start by talking about what we did before all this and what problems we had — what was our life before immutable infrastructure? I think a lot of us will be able to relate to that, and to how sucky it was. So what would happen is, when somebody came to Kayako, they would click on this — this is a screenshot of the website — they would click on this link called "Start your free trial". You'd click on this link, and consequently you'd get a trial that you could use yourself: an installed version of Kayako on a subdomain. So yourcompany.kayako.com is what you'd get, and you could go ahead and play around and understand the product, and if you liked it, you could change the billing to paid and keep using Kayako. Now, for you to understand how our processes were affected, I'm going to pick one thing we used to do, which is provisioning a trial. When you click on this button, you fill out a form, and on the basis of that form we provision a trial for you. So if your company is called Investorpied, you fill out that form, you tell us your company name, and at the end, when you click submit, you get investorpied.kayako.com, and it becomes a help center like this. This is Sindori's — basically sindori.kayako.com is where they support sindori.org.
So what would happen when you clicked on a trial? How would a trial get rendered — what were the processes that went into getting a trial made for the client? We get a POST request from a form. That would come in and get submitted. Then we had this legacy app which we called Platform. A prophetic name, but that's what it was called. Platform would receive this form submission. It carried a list of all the servers that were out there, all the servers currently in use — and we were running on SoftLayer, not on AWS at this point. SoftLayer, as some of you might know, is a traditional data center: it's not a cloud data center, you get physical machines, they go into racks. So Platform was the app that managed all the infrastructure. It knew which servers it could work with — it carried a list of all the servers. And then it would make a copy of the latest code for this customer — for Investorpied, say, the customer trying us out and getting a free trial made. It would make a copy of that code in a separate directory, and then set up an nginx rule to send traffic to it. So it was not really multi-tenanted: every tenant had their own directory, which is not ideal, but that's the way it was. Then it would get the configuration for that tenant — this is a free plan, these are the database details, and so on and so forth, whatever configuration was required. And then, on each of these servers — imagine 20 servers, so this has to happen on all 20 — a new directory has been made, new code has been put there, an nginx rule has been set to point at that code, and this is all PHP. And then all of these servers would send an ack back.
We'd use a message queue to send that ack back. It would go back to Platform saying, I'm done, I'm done, I'm done. When all 20 are done — when everybody is done — then we can go to the next step, which is essentially asking one of these 20 servers to execute a product setup. A product setup would create database tables, put in some sample data, and things like that. So you can imagine what would happen. 20 servers are involved, under all kinds of load. You send a request to 20 servers, only 18 acks come back, so the trial is stuck. And the guy who's waiting — who's thinking, should I pay money for this? — he's now thinking, I should not pay money for this, because it's not going well; he's stuck, waiting. Then, when you eventually receive all 20 acks, the process completes and he gets his trial. And the last couple of steps: we needed to make DNS entries. We had to say investorpied.kayako.com must resolve to our load balancers, so traffic can be directed onto nginx, from which it goes into the directory that we talked about. Everybody with me so far? This process is important to get, because then you'll see how this changed — what we do now and how it's different. And remember, we were on SoftLayer; keep that in the back of your mind. So if we had to provision hardware, how would we do it at that time? We would raise a ticket — which is an email somebody would send to SoftLayer. They'd say, okay, we will add a new server. They'd go check the rack — somebody would physically go and check the rack; they have some registers they check — and say, yes, this rack has space and there's a server available. Good, it's your lucky day: 24 hours to get a server. Unlucky day, they say your rack is full and we need to put it on a different rack.
And if they don't have a spare rack, they place an order for a new rack, or they pick one up from the godown or whatever. So it takes not one day but two or three days for a server to come up. That was life before. Once we got this hardware, we would go ahead and provision it by hand, essentially, or we'd use things like Chef. So you'd get your software installed and make sure that everything was properly set up — and some of that would happen by hand. And then we had various kinds of servers: load balancers, application servers, worker servers, and so on and so forth. We had seven or eight types of servers that one needed to set up. So you'd have a recipe — usually in your head, or, if you were lucky, a Chef recipe — and you'd use that to set up these servers. So if I set one up, and somebody else set one up, they might come out differently, because it depends on how much you remember and on whether your Chef recipe covers it. That would happen all the time. And then after you'd done that, you had to test it, because everybody was under-confident that the server had been provisioned correctly. After you did the task, you felt so unsure about having completed it correctly that you would test it. So people would manually test this — check 8 or 10 things, see if everything is working properly, if I send a packet through the queue does it come back — all that kind of stuff. That would all be manually done. And after all of this was done, we'd have to tell our legacy Platform app: we have a new server. It kept a record, so that when you install a new server it gets used, and when you remove one, you have to make sure you remove the record too. So there were all kinds of failure points. Some of it was automated, but very, very little.
Just to give you a flavor of the stuff we were using back then: PHP, as you know, is not concurrent. You can't do two tasks at one time; you can only do one thing. So what the PHP ecosystem does is put a job on a job queue to do anything concurrently, and then somebody picks it up from the job queue and executes it. So we'd use that for all the local jobs on a box — essentially any concurrent task, any long-running task, would be deferred. And we also used the queue if we needed to communicate between boxes, to send messages between boxes. We used Elasticsearch, we used Redis, we used MySQL. And for DNS, we used Dyn, because Dyn has an API for us to do all of this easily. Now, this is a busy slide. I'm not going to read it out, but — what's wrong with this world? I think it should be fairly apparent what is wrong with it, but let me take you through some of it. Stuff is not consistent. If I provisioned an app server today and then I provisioned another app server, say, seven days from now or two months from now, it's possible that the stable PHP version has changed, that the apt repository I was using has updated it, and that nobody has taken the trouble to pin that in Chef. So even though I'm running everything through Chef, it's actually not consistent, because I'm running different minor versions of PHP or of some other software. There's nothing exactly reproducible; there are a lot of ways to be inconsistent. So: stuff being inconsistent. What's wrong with this world? I just want to make a point here: Chef is not the problem. The tool is not the problem — the problem is the environment: the culture, the setup, the processes that we have in place.
One of the challenges with things like Chef is that it allows you not to use it — and so everybody does that. They say, oh, for automation I'll install nginx through Chef, because I can find a simple copy-paste recipe, life is simple, I can do that. But if you want to do something custom, then I'll bypass it. And nobody can prevent you from taking that path. That's one of the problems these tools have. We eventually went to a model where that was not possible, and I'll share that in a bit. But the problem is not with the tool, really; the tool, if you use it well, will work. The problem was with the setup that we had. Another problem was that we needed a copy of code for every tenant. If you made a new client, you needed to provision a fresh copy of code. The problem with that was that it was possible for a DevOps guy to go in and change that code for a particular customer — and now that customer is getting a different version of the product from what everybody else is getting. And you'd be surprised: this is actually a common case. What happens is, this customer's got a lot of load, a lot of problems — can we relax this limit for them? And that limit is not configurable; it's just a default limit built into the product. So somebody would go in and change it directly. And now you've got drift. This kind of stuff would happen over time. Updating system software was extremely difficult, so no one really updated it. If you had to update now, you had to take that server down for some time, go in, update it manually, or whatever else. It was a painful process; it was not truly automated or covered. Secrets were kept in a Chef data bag.
So if I had a token, say for SendGrid or some other third-party provider we were using, that token would be put in a Chef data bag, and then Chef would ship it to those servers, and it would be lying on disk on those servers. The moment a server got compromised, all of those tokens were compromised. That's not good secret management. People had SSH access — they needed SSH access because they wanted to go in and modify stuff, to provision stuff manually. If Chef failed, they had to figure out why Chef failed, all of that stuff. It was not fully automated, as I've already covered, simply because Chef — or Ansible, or any of these tools — won't mandate that it has to be fully automated; there's always a workaround to do it manually. It would take hours to deploy. If, for example, something went down and you had to reprovision it, it would take many hours before the server came back up. If a whole region went down and you needed a new region — a new setup — it would take maybe days, like three to seven days, for us to provision that. If a service went down — say you have multiple services running and one of them goes down — you would have to go back into the app, ship a new version or change the config to say, don't use that server, because that service is no longer alive. And for that duration, which could be 15 minutes, could be an hour, you would see downtime on that service. Finally, the state that was actually running had never really been tested. Because what it was, was some original image that you built, and then N Chef runs over it. So the end state — nobody really knows what it is. It's an implied kind of state; it's not very clear what it is. It never really gets tested; it just goes into production, and over time you discover what problems it has.
And it was very difficult to revert to the last known good state, because you don't know what the last known good state is. It's not explicit, because I could have made a manual change, a Chef run could have made a change, and so on and so forth. So it's very, very difficult. That is what is wrong with this world. It was very painful — lots of very long nights, just struggling and trying to figure out how to keep the software up and running. A lot of you have probably seen this — this is how it felt. Very popular GIF. This was our life: keeping production up and running. The train is production. They are very creative, right? You have to do this kind of stuff. And every DevOps guy gets that moment where he says, I know how to do this — you two, hold the doors! — and it will start working. So that was our life. So the question is: how do we now build a bullet train? We have this broken, legacy system which doesn't really work. Life is risky; everybody's putting their lives on the line. How do we build a bullet train? So now let's look at the new world — you now understand and appreciate the problems of the old one. Let me start by telling you what immutable infrastructure is. Once you understand that, you'll have a framework in your head for where to fit all the information that I'm going to present. So here are some elements of a definition of immutable infrastructure. One: you build a server only once. It never mutates; it never changes. If you build it once and it has some flaw in it, you provision a fresh server — you can't modify the old one. Everything is disposable. Because you can't modify a server, there's no way of reusing it, so you have to dispose of it. And there is no SSH access at all.
You can't get into that server, because you're not allowed to modify it — so what would you do with SSH? You want to read logs? You can ship the logs out somewhere. So immutable infrastructure is a world where servers are built once and you don't touch them again. You stop thinking of your servers as pets. You don't give them nice names like Tony, because you don't want a relationship with that server anymore — the moment you want to change it, you're going to go ahead and kill it. So you think of them as cattle. It's a very famous analogy. The way you handle cattle is: whenever some animal in the herd falls sick, you cull that animal; you don't try to reuse anything that comes out of it, and you don't worry about it. You don't have a relationship with it. With pets, you name your servers something fuzzy or nice, like Naruto. You know Naruto is down; you go and fix Naruto. In this world, you have cattle; you don't have pets. Okay, now let's look at what happened to our trial flow — how it was done earlier versus how it's done now, once we'd implemented immutable infrastructure. I'll talk about the implementation later. Earlier we got a request from a form. We still get that request, but now it goes to a more modern piece of code, not to Platform. There's no getting a list of all servers anymore. There's no more copying code from one directory to another. The configuration is not stored on each of the 20 servers; it's stored in one key-value store, which I'll introduce later — it's called Consul. There's no need to send an ack, because none of that per-server work is done. And there's no need to execute a product setup, because the way we do it now, we have pre-prepared instances that we keep ready for a trial well before you've even requested one.
So we just use a pre-prepared instance, and there's no fresh setup needed at request time — that's a minor point. So you see a lot of steps have been eliminated, and the process reduces down to this. With three steps, there's less likelihood of failure and problems than with eight or ten. The three steps: get a request from the form to make a trial, update the configuration in a key-value store, and use a pre-prepared instance. But how do we achieve this? Imagine you have the old setup and you want the new one — what's the first thing you need? Every time you make a server, you have to put some software on it. So in the immutable infrastructure world, you need to make an image that you will install on that server. You can do this in various ways. We have chosen to use HashiCorp tooling — HashiCorp is a lovely software company providing a bunch of solutions for DevOps. They have four or five tools, we've used almost all of them, and I'll share that with you. They have a tool called Packer — HashiCorp Packer. Packer helps you automate the image-creation process. How do you create an image? You can use something like Chef, Ansible, shell scripts, whatever you want, and you feed that into Packer, essentially. Packer will generate an image for you, and it will also push it to AWS so that it's available as an AMI. So one of the changes we made from the old infrastructure to this new one: we started using AWS — we're no longer using SoftLayer. We make an image just like we would with Chef or Ansible or whatever else, it becomes an AMI, and that AMI gets pushed to AWS. We also made another change: we said all our applications will be dockerized.
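As a sketch of the Packer step described above — every value here is a placeholder, not Kayako's real configuration — a Packer template pairs a builder (where and how to build the AMI) with provisioners (how to install software onto it):

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-00000000",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "base-image-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "script": "provision.sh"
  }]
}
```

Running `packer build template.json` then bakes the image and registers it with AWS as an AMI, which is the "push to AWS" step the talk describes.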
And that same Docker environment is the environment the developer uses in development. We said: whatever development you're doing, you're going to do it in a Docker environment, and we'll use the same environment, without any changes, and ship it all the way. So the development experience is exactly like production — there is no difference. You will not discover bugs because your development environment was different from where the code runs. So we use Docker for that. Now, Docker also needs its images. What you do is commit your code to a GitHub repository, say; GitHub has a post-commit hook that hits the CI tool that we use, which is called Drone — it's really good for the Docker ecosystem and you can try it out at drone.io. What Drone does is take the Dockerfile, essentially make a fresh Docker image with that latest commit, and then push it. We use ECR, the registry that AWS provides, for keeping all our images — you can also use Docker Hub. So we push the Docker image we've just built there. So the first thing you need to get to this new world — if you go and apply this back at your workplaces — is to get your images right. You use something like Packer to generate a machine image and put it on AWS, and you use something to generate fresh Docker images whenever code is updated. You need these two components. The next thing you need is hardware to put that software on. You've generated these images, but you need hardware for them. So we use another lovely tool from the same guys: HashiCorp Terraform. It's like CloudFormation, but with none of the pain. If anybody has used CloudFormation — you know people used to say that XML sucks and JSON is awesome.
And then when you use CloudFormation, you say XML sucks and JSON sucks, because you write so much JSON with CloudFormation — it's like, oh god. So what HashiCorp did is factor the pain out: they've written a more concise key-value kind of syntax called HCL to describe all this. Here's an example. In this simple screenshot, we are saying we need an ELB; we want to call it "frontend", and it's going to load-balance for us. An ELB on AWS takes the following settings: what port it's listening on, what protocol it's running, so on and so forth. So you give it those settings. You also say how many instances you want behind it, what AMI you want to use, and what size of instance you want. You write this in code, and you commit it to your Terraform repository — it's a Git repository, you commit everything. Now whenever you want a fresh ELB, you run terraform apply, and it will go and check your current state and see if you already have the 5 ELBs you need. If you do, it won't do anything. But say you already have 2 ELBs: it will see the difference between 2 and 5, and add 3 more ELBs. That is infrastructure as code. As opposed to going to the AWS console, saying "add a new ELB", and then manually configuring it, you do it like this. Now suppose we need to scale. How long does it take to add 15 more ELBs? Not a lot of time. We just change that count to 15, press enter, run terraform apply, and the new ELBs are up and running. And this is not restricted to ELBs — almost all of our infrastructure can be configured like this. You can configure security groups, NACLs, VPCs, everything. And it's all in code. So if a new guy joins our company and we don't have much documentation, all he has to do is read this to understand exactly how everything is connected.
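The slide itself isn't reproduced here, but a Terraform configuration of the kind described might look roughly like this — the AMI ID, names, and counts are invented for illustration:

```hcl
resource "aws_elb" "frontend" {
  name               = "frontend"
  availability_zones = ["us-east-1a"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}

resource "aws_instance" "web" {
  count         = 5             # scale by editing this number and re-running apply
  ami           = "ami-00000000"
  instance_type = "t2.micro"
}
```

`terraform plan` previews the diff against the recorded state; `terraform apply` then makes only the changes needed to reach the declared counts — which is exactly the "2 ELBs become 5" behavior described above.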
Now if two people in the company have to bring up an ELB, it won't come out any different. There is no risk of it being different; there is no chance of it being different. You get exactly the same thing — it's standardized now. Terraform also allows you to review your changes. When you run terraform plan, it tells you what will change and asks whether you want to go ahead with it. You can see, okay, this is what it's going to do, and then you can approve it. So it's very safe. You can also collaborate: you commit your Terraform code into a repository, and then a bunch of people can work together to get infrastructure up and running. That wasn't possible earlier, when it was you and the guy sitting next to you saying, let's do it like this, or no, I know we should do it like this, and so on. Now if I want to make an ELB, I'll write the ELB config and send it to Buffan, and say: Buffan, this is how I think the ELB should be. Instead of me missing something, he can say, you want to add this configuration as well, you want to add these other things. All of that collaboration happens before I commit and run it. It's also modular: you can write a configuration for a web server which runs nginx and PHP, and then anybody can reuse it — it's not even limited to your company. So you have modules. It also gives you environment parity. You can now stand up a staging environment with the same config — smaller instances, fewer instances, just change the size or the number, the cost aspect. Everything else is the same environment: the same SGs, the same NACLs, the same VPC — all of those rules will be exactly the same. So that's Terraform. We have very little time, but I can keep going. So now you've got the image that you want to install on the hardware, and you know how to get your hardware in a reliable fashion.
Next: because we dockerized — and even if you didn't, even for normal servers — you need something that's going to manage all your containers. What happens when you're running multiple applications is that you might have servers with unutilized capacity, and you just pay for it without ever using it. So what Nomad does — Nomad is the next one in the HashiCorp suite — is essentially schedule your workloads and manage them. It decides when and where to run a certain service. If it sees that a certain host has spare capacity and could take another instance of a service, it will run it there. You give it a fleet of hardware, and it manages all the Docker deployments, and even non-Docker deployments. It has three kinds of tasks, or jobs, that it runs — it calls them service, system, and batch jobs. You can tell Nomad to run a service job: that means Nomad decides where your service runs. Say you want to run a certain application; you configure it as a service job, and Nomad decides where in the cluster it's going to run. You give it some constraints — I need so many instances, or I need so much CPU, and so on and so forth — and then it chooses which hardware to run it on. So you no longer have to think about which piece of hardware to run things on. System jobs are very similar to service jobs, but they run on every node in the chosen set — exactly one instance per node is guaranteed. With a service job, it's possible that no instance is running on a particular box at all and they're all on other ones; with a system job, it runs on each box. We use system jobs for things like that. And then there are batch jobs. Batch jobs are for one-off work: you write up a batch job, give it to Nomad, and Nomad executes it for you on whichever server it sees fit.
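As an illustrative sketch of the job types just described — the image name and resource figures are invented, not Kayako's actual values — a Nomad service job might be declared like this:

```hcl
job "web" {
  datacenters = ["dc1"]
  type        = "service"   # the other types are "system" and "batch"

  group "app" {
    count = 5               # Nomad decides which hosts run these 5 instances

    task "web" {
      driver = "docker"
      config {
        image = "registry.example.com/web:latest"
      }
      resources {
        cpu    = 500        # MHz
        memory = 256        # MB
      }
    }
  }
}
```

Changing `type` to `"system"` would run one instance on every node, and `"batch"` would run it to completion once, wherever it fits.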
You describe all of these jobs using a declarative, JSON-like format (HCL): you say what command you want it to run, what resources it should use, so on and so forth, and Nomad does the rest. You specify the constraints; Nomad works out how to satisfy them efficiently. In our case, it also handles our blue-green deployments. What Nomad does for us is this: a hook hits Nomad when a new deployment happens, and Nomad says, I need to deploy, and you told me to use a blue-green strategy. So it brings up 5 new servers while the old 5 servers are still receiving traffic, and waits for the new ones to go green — they have health checks, and they go green. Once they go green, it switches the traffic from the old 5 servers to the new 5, then takes the old 5 down. So blue-green deployment gives you immutable deployments without any downtime. Once you've got all of this set up, you also need to worry a little bit about configuration and secrets. Configuration is easy — you can store it in a key-value store, and I'll come to that later. But secrets you store in a tool offered by HashiCorp called Vault. What Vault does, essentially, is secrets management: it allows you to safely and securely keep your secrets and access them programmatically. You put all your tokens and everything you need into Vault. You don't put them on every server, you don't put them in a Chef data bag — so the surface area for threats is drastically reduced.
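The blue-green switch described a moment ago can be sketched abstractly in Python. All the names here are invented for illustration — this is not Nomad's API, just the shape of the flow Nomad drives:

```python
# Sketch of a blue-green deploy: bring up a "green" fleet next to the live
# "blue" fleet, wait for health checks, switch traffic, discard the old fleet.

class Instance:
    """A stand-in for an immutable server running one version of the app."""
    def __init__(self, version):
        self.version = version

    def healthy(self):
        # Stand-in for a real health check (e.g. an HTTP ping).
        return True

def blue_green_deploy(blue, new_version, count):
    """Return the fleet that should receive traffic after the deploy."""
    green = [Instance(new_version) for _ in range(count)]
    if all(i.healthy() for i in green):
        # Health checks passed: traffic switches, the blue fleet is destroyed.
        return green
    # Health checks failed: keep serving from blue, throw the green fleet away.
    return blue

fleet = [Instance("v1") for _ in range(5)]
fleet = blue_green_deploy(fleet, "v2", 5)
print([i.version for i in fleet])  # every live instance now runs v2
```

Note that the old servers are never modified in place — they keep serving until the cut-over and are then disposed of, which is the immutable-infrastructure property the talk keeps returning to.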
Obviously all the HashiCorp tools work with each other, so Vault integrates naturally with Nomad. What happens is that first we have Consul — which does service discovery, that's on the next slide — and then, when we bring up a new region, Vault comes up and Nomad comes up in parallel and waits for Vault to become ready. When Vault is ready, Nomad looks for a role and policy defined for it, so it gets that from Vault — it gets all the credentials it needs — and then it creates custom roles and policies for each application. So if you have a service which only needs access to, say, one third-party service and needs your token for it, Vault will give it that and only that. It does all of that. In our case, for example, it also creates IAM users — if a service needs an IAM user, it's automatically created for you. Vault also does encryption as a service. We don't use that, but the feature is available. Encryption as a service is the general idea that once Vault is up and running, you give it any data, it keeps all the keys on its side, and it gives you back ciphertext that you can ship anywhere. We don't use that, but it's there. I'm running out of time, so you guys can ask me questions later.

Then there's Consul. This is the fifth and last component that we used. One of the problems was that if a service went down, we wouldn't know the service had gone down. So what Consul does for us is service discovery. You may have heard of ZooKeeper — ZooKeeper does something similar. With Consul, service discovery is built in: if you have a service running on some DNS endpoint like service.cap.com, you query that — make a DNS query — and you get back all the IPs on which that service is running, and you can connect to any of them. And if a service goes down, Consul's health checks catch it, and it will automatically be removed from the DNS. So now when your application queries again, it gets that only these four IPs are available and one is not
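As a sketch of what that discovery query gives you: besides DNS, Consul exposes the same information over its HTTP health endpoint (`/v1/health/service/<name>?passing=true`), which returns one entry per healthy instance; your application just extracts host:port pairs and connects to any of them. The payload below is a trimmed, hypothetical example of that response shape — the addresses are made up.

```python
# Turn a Consul /v1/health/service/<name>?passing=true response into
# connectable "host:port" strings. The response shape follows Consul's
# HTTP API; the IPs below are invented for illustration.

def healthy_endpoints(payload):
    out = []
    for entry in payload:
        svc = entry["Service"]
        # Service.Address may be empty; Consul then means "use the node address"
        host = svc.get("Address") or entry["Node"]["Address"]
        out.append("%s:%d" % (host, svc["Port"]))
    return out

sample = [
    {"Node": {"Address": "10.0.0.1"}, "Service": {"Address": "", "Port": 8080}},
    {"Node": {"Address": "10.0.0.2"}, "Service": {"Address": "10.0.0.2", "Port": 8080}},
]
```

Because the endpoint only returns passing instances, a dead server simply disappears from this list on the next query — which is exactly the automatic-removal behavior described above.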
available, so it won't be used. So service discovery becomes completely automated and can be used in your application.

Consul also provides a key-value store. Earlier we used to have a configuration for each tenant, for everybody. So when a customer started a trial with us, we gave them a custom configuration saying they're on a free plan, and when they decided to pay, we kind of upgraded that configuration. All of that is now stored in Consul — it offers a very simple key-value store that you can access as you need, and we store that configuration there. You don't have to go to each server; every server has a Consul agent running on it, which helps you query and get all the configuration details you need.

You can also do runtime configuration. For example, every time you add a load balancer — let's say you want to add nginx servers — when a load balancer comes up, it automatically registers with Consul as a load balancer. Then there is a tool called consul-template that is listening to Consul and says: a new load balancer has come up, I need to make all the upstream entries on it. So it will go and modify the nginx configuration file on all the existing servers, and that happens automatically. That was a bit too mutable for us, so we moved on from this and are now completely DNS-lookup based, but I won't cover that right now. So that is why we use Consul. That's a screenshot showing you, essentially, services which are green and so on, and it tells you what's on the network, what the health of a service is, and so on and so forth.

Okay, so quickly, to wrap this up: that was all of HashiCorp that we used. So now what you've got is this: you've got images made, you know how to get hardware up, you can put the images on that hardware, you can manage your secrets, and you can do live discovery of services. You can do all of this — but all
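For that per-tenant configuration, the Consul KV HTTP API (`/v1/kv/<key>`) returns stored values base64-encoded, so reading a tenant's plan looks roughly like this. The key layout and tenant name are our illustration, not Kayako's actual schema.

```python
import base64
import json

def read_kv_value(response):
    """Decode the Value field of a Consul /v1/kv/<key> response.
    Consul base64-encodes stored values in its JSON responses."""
    raw = base64.b64decode(response[0]["Value"])
    return json.loads(raw)

# Hypothetical response for GET /v1/kv/tenants/acme/config
resp = [{
    "Key": "tenants/acme/config",
    "Value": base64.b64encode(json.dumps({"plan": "free"}).encode()).decode(),
}]
config = read_kv_value(resp)  # e.g. {"plan": "free"}
```

Upgrading the tenant is then just a PUT to the same key — no server-by-server edits, because every box's local Consul agent reads the same store.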
of this wouldn't really be possible, or practical, to do without having a cloud provider — and so we use AWS. If you remember the components that we used earlier, all of them were replaced by AWS components which are highly available and completely managed, so we don't have to worry about them. If you notice, for queues we're now just using SQS. Elasticsearch is no longer provisioned or run by us — there is an Elasticsearch Service that AWS provides, so we just Terraform an Elasticsearch Service cluster with three servers and it's done; that's the total amount of effort we have to make. The same goes for Redis: the ElastiCache service. For MySQL we use Aurora, which is a managed variant of MySQL that AWS provides — it's highly available and gives us a lot of peace of mind. All of this wouldn't have been possible otherwise. You can't do this on your own hardware; you have to be on a cloud provider, and it can be any cloud provider. That's the set of components that we use, and all of these are not managed by us, they are managed by AWS — so all of that overhead is gone forever and you don't worry about it. That's how you build bulletproof infrastructure — or that's what we thought, anyway.

Questions? Sorry that I had to rush through this — we have a few minutes for questions.

Yes — yes. So there's Terraform, for example: you can write a script to run on any provider — AWS, Google, and so on and so forth. It doesn't matter; you just change the provider in the script and it needs no other changes. So yes. Any other questions? Yes — what is it supposed to be? AWS. AWS. We will...
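As an illustration of how little there is to specify for a managed service: creating a three-node Elasticsearch Service domain comes down to a handful of fields. We drove it with Terraform; the sketch below just builds the equivalent API parameters as a plain dict — the domain name and instance type are made up, and in real code you'd hand this to the AWS SDK or express it in Terraform instead.

```python
# Roughly the parameters needed to ask AWS for a managed Elasticsearch
# domain. Domain name and instance type below are hypothetical; the point
# is that the whole "cluster" is a few fields, and AWS runs it from then on.

def es_domain_request(name, instance_type="m4.large.elasticsearch", count=3):
    """Build the request payload for a managed Elasticsearch Service domain."""
    return {
        "DomainName": name,
        "ElasticsearchClusterConfig": {
            "InstanceType": instance_type,
            "InstanceCount": count,   # "three servers, done"
        },
    }

req = es_domain_request("search-prod")
```

Compare that with provisioning, patching, and monitoring three Elasticsearch boxes yourself — that's the overhead the talk says is "gone forever".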
Yes — yes, you can put anything in the script; you can run a post-provision script. Okay, I have this little URL here — you can go and hit that. It's an example of how somebody could build an app on Kayako. Right now it's a 5-minute video; in reality it takes 10 to 12 minutes, but in those 10 to 12 minutes you can write an app on Kayako and get it up and running on the cloud in no time. So that's the illustration of how all of this comes together to help development. Cool, thank you so much. Thanks a lot, that was super informative.