I'm really excited to be here, and thank you to everyone who's attending live, and thank you if you're catching the recording afterwards; it really means a lot to us. So, a little bit about who we are today. I'm Ravi Lachhman, the lead evangelist at Harness; I quote unquote marry people and technology. And Samarth, maybe a little bit about yourself before we get started? Absolutely. I'm one of the product managers here at Harness, and I'm responsible for the continuous delivery side of what we do, so I'm looking forward to talking to all of you. Thanks so much. So what are we going to be talking about today? If the title didn't give it away, we're talking a little bit about GitOps, or really a lot about GitOps. We'll start by talking about the rise of infrastructure as code, or IaC, the precursor to GitOps: how did we get to infrastructure as code, and what were some of its challenges? Then, hello GitOps for your application infrastructure, and then the pillars of GitOps. There's a really good piece by Weaveworks, one of the godfathers of GitOps; they define four pillars as a fairly stringent definition of GitOps. Your mileage may vary depending on whether you agree with them, but when you're looking to implement GitOps, it's usually a good idea to follow those pillars. And then supercharging your GitOps journey: how do you get started, what does the tooling landscape look like, and again, what are some of the pitfalls to watch out for? The rise of infrastructure as code. So we're going to do a little bit of role play here today; my buddy Samarth will have several jobs today.
But trust me, Samarth is the same person over and over. So let's go back a decade. My background is actually in application development; I've been writing Java for over a decade and have seen the evolution occur. If you go back about ten years, everything we needed was statically provisioned. My dev environment might be my local machine, and any shared piece of dev infrastructure would be a VM: I need a build server, maybe Jenkins, on a VM; I need an application server as a separate VM; I might have an artifact repository and, guess what, that's a separate VM too. And as we traverse the journey to production, those other environments had to be pre-provisioned. There might have been some automation at the VM level, but the environments themselves were pre-provisioned, and Samarth here is the quintessential systems engineer. Hey Samarth, can you get me another VM? Hey Samarth, I need another node of a load balancer, or a web server; he's like, okay, here's some httpd. And even pretending the load balancer person is not Samarth: oh, I need a load balancing rule change? Well, I have to go talk to the networking engineer for that. Everything was statically provisioned. So how long did this take? These wait times are wait times I experienced at more than one employer. If I needed a new virtual machine, it could be a couple of weeks. I don't know why the networking team had such a long lead time; they were the guardians of our corporate network, I suppose. Making a new app server node was a little bit quicker; the middleware engineers were a little more snappy about getting me an additional JBoss node. But this was typically the lead time it would take.
And who would do this? Potentially three separate people. Here I am as a developer or development manager having to orchestrate multiple tickets across multiple systems: hey Samarth, give me a VM; can we get you to make a load balancing change; and then middleware engineer Samarth, can you please give me another JBoss node? Ironically, here it's only Samarth, but it typically took three separate people. And with these long lead times, it costs money, right? Why are you sitting around waiting for a month? Why can't you get your idea into production? This is an actual script that I wrote; I did change the name of the script, but when I used to work for an investment bank, I had several VMs out there running different nodes of WebSphere Application Server. The infrastructure team would say that if a VM wasn't being used, they wanted to turn it off, but turning it back on was really hard. So I would touch a file on the hour, every hour, on the different nodes to get around that. Hey, necessity leads to innovation, and this was what I had to do to keep my VMs on. Now fast forward a couple of years after that, and there's the rise of infrastructure as code. Instead of having things statically or manually provisioned, tools like Puppet, Chef, and Ansible appeared; pick your tooling of choice, there are lots of IaC tools and platforms out there now. They gave us the ability to do things much more quickly and repeatably. So what are the benefits? This isn't an IaC talk, but what are the benefits of having infrastructure as code? These are the three really big ones. First, it was repeatable; we strive for things being repeatable as engineers.
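The keep-alive trick described above can be sketched in a few lines. This is a reconstruction of the idea only, not the original script; the file paths and function name are invented for illustration.

```python
import os

def keep_alive(heartbeat_paths):
    """Update the mtime of a heartbeat file on each node so the
    infrastructure team's idle-detection never flags the VM as unused.
    Opening in append mode creates the file if it does not exist yet."""
    for path in heartbeat_paths:
        with open(path, "a"):
            os.utime(path, None)  # same effect as the `touch` command
```

In the real setup, something like this would run from cron once an hour against every node, which is exactly the kind of workaround that static provisioning forced on developers.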
If something is repeatable, it's also consistent. I can codify, say, new VM provisioning or new EC2 instance provisioning as code and call it over and over again, which gives me a great deal of confidence in my infrastructure. But the big thing here is that it's also callable: you can call infrastructure as code from other parts of the process. For example, if you have a continuous integration or continuous delivery pipeline, a big change over the last couple of years has been those pipelines provisioning infrastructure themselves; that's a little off topic right now, but it's a really big deal that IaC is able to integrate with other processes. If infrastructure is available, those lead times really came down: you can get a brand new VM in ten minutes, you can make a load balancer change in minutes, you can get a new application server wired into the JBoss domain controller in about five minutes. If infrastructure is available, you get it fairly quickly, and that's the Nirvana state. But there are certain things that IaC lacks, or that are extremely challenging to orchestrate with infrastructure as code, so let's spend a minute going through those. What's missing in infrastructure as code? Going back to our personification, I'm the application developer, and again, Samarth is the systems engineer. As a developer, I have a reasonable expectation that if a node goes down, it should come back. I don't care what system it's on; if something goes down, it should come back.
And also, locked in my head, I have the order of operations. I know that our distributed application has, say, a Redis in-memory cache, so clearly the cache needs to be there, and lukewarm, before the application joins it. I have all this knowledge, but how do I translate it over to the systems engineer or the cloud infrastructure engineer? It's actually fairly complicated; specifying the order of operations straddles the border between application infrastructure and infrastructure operations. And Samarth now has a lot more tools he has to use. Not only is our infrastructure-as-code stack, say, Puppet, but you also need to monitor for nodes going down and create replacement infrastructure using tools like Satellite and CloudForms. The harder part is making it application-aware: if the application is having bad performance, we should scale, and those scaling events require yet another tool; if we're sticking to the Red Hat stack here, it would be something like JBoss Operations Network, which gives you very granular metrics and very granular operational tasks based on those metrics. And this is where it gets complicated: who has the knowledge to do all of this, and who doesn't?
And this is where GitOps actually comes in. GitOps is a way of bridging the gap between those two worlds, and providing it all as code. So let's get into a little bit of the history of GitOps and some of the benefits. The mighty Kubernetes. A precursor to a lot of the GitOps mantra is just having a declarative, desired-state system, and Kubernetes is the most prevalent one out there that folks know. Going back to that argument: if a node dies, or a particular part of the application service dies, it should come back up. That's exactly what the Kubernetes container orchestrator does. I can declare minimums; in this example it's watered down to a single replica, but if I declared two, the scheduler inside the orchestrator would take care of that. A lot of the knowledge that used to be locked in people's heads is now actually visible in YAML. You can do all sorts of things in Kubernetes around order of operations; you can run an operator or have a custom controller across resources. A lot of that application infrastructure knowledge can now be authored by the developer: this is what my expectation is, and here's a handle on it. With Kubernetes, there's code where there has not been code before. For example, let's say our load balancer was F5-based; now we're not necessarily locked into a hardware vendor, because for those particular routing changes we might be using a service mesh like Istio, and again we can codify those changes into a Kubernetes manifest, a.k.a. YAML. And if you take a look at what's actually going on today on the modern platforms, the amount of time it takes to deploy something in Kubernetes is super quick, given that your cluster has capacity.
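To make the "declared minimums in YAML" point concrete, here is a minimal Deployment manifest of the kind being described. The names and image are hypothetical; the important part is that the replica count is declared state that the scheduler continuously enforces, rather than an imperative instruction.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical application name
spec:
  replicas: 1             # declared desired state; if the pod dies, it comes back
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0   # hypothetical image tag
```

Bumping `replicas` to 2 in this file, rather than on a live server, is exactly the shift from static provisioning to declared state that GitOps builds on.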
You can make a route change in seconds; you can spin up a volume in seconds. This really shows that there's no more lead time, which leads us to: does Samarth still have a job here at our company? Which leads us to the changing of the roles. This is kind of the good and bad about GitOps: it shifts a lot of expertise towards the development team, and now there's this modern notion of what I like to call a full lifecycle developer. If you write the service, you run it; you run what you write, or you operate what you build. So as a software engineer, not only did I write the feature, I need to be able to give a very viable opinion on how it's supposed to run. Which leads to the changing role of the systems engineer. We are extremely invested here at Harness in Kubernetes, and so we have a platform engineering team; they're the purveyors of our Kubernetes platforms and the public cloud offerings of those platforms. Basically, the job goes from a systems engineer writing bespoke automation to platform engineers making sure there's capacity, so that when an internal customer like myself needs some sort of service deployed, they have a place to onboard and offboard it. Which leads us to the first part of GitOps for the software engineer: the first part of GitOps is actually Git. For those who don't know what Git is, Git was invented by Linus Torvalds himself, Mr. Linux; it's a source code management system. As a software engineer, I'm used to writing code, I'm used to committing code to source code management, and it supports iteration: I'm used to versioning things, used to saying, this is a particular state of my application that I need.
And what bringing source code management into the infrastructure side gives you is a couple of concepts that SCM has. The first is versioning. Depending on where you are, if you're already versioning your scripts or some of your provisioning code, you might say, oh, we've been doing this for a while; but it might be a new concept to you. Instead of naming something provisioner_final_final_final.sh, and I used to do that, you have the ability to mark a point in time of what the infrastructure should be. Also very important about using an SCM is diff management, or differences management. More than one person is going to be working on a piece of code; as a software engineer, that's a given. As an infrastructure engineer, you might be the sole author of the infrastructure's code, but as that knowledge shifts left, well, it's one of the fallacies of distributed systems: "there is only one administrator." No, there are multiple admins for things, so an SCM helps you be more collaborative. And my big push is that software is iterative. As a software engineer, I'm used to not getting something right the first time; I've never gotten something right the first time. That can be a little bit different if you're an infrastructure engineer: minus having a sandbox, you're always working in production, so you always have to get it right. Coming from my side of the equation, I've never gotten anything right the first time, and so software is iteration; software is trial and error.
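Diff management is easy to demonstrate. The sketch below, with made-up manifest snippets, shows the kind of unified diff an SCM produces when two versions of an infrastructure file differ; this reviewability is what makes infrastructure changes collaborative.

```python
import difflib

# Two versions of a (hypothetical) manifest, as they might exist in Git.
old = ["replicas: 1", "image: my-app:1.0"]
new = ["replicas: 3", "image: my-app:1.1"]

# A unified diff makes every infrastructure change reviewable,
# exactly like a code review on an application pull request.
diff = list(difflib.unified_diff(old, new, fromfile="v1", tofile="v2", lineterm=""))
print("\n".join(diff))
```

Every changed line shows up prefixed with `-` (old state) or `+` (new state), so a reviewer sees precisely what the infrastructure change will be before it is applied.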
So as a software engineer, and this gets into what GitOps is in a second, if I make a commit, that means I'm ready for the world to see something. It might not be correct yet, but a lot of what agile preaches is very iterative design: you're incrementally building stuff, so expecting one big-bang, perfect change is a little more difficult. So, let's go to the formal definition of GitOps. GitOps is a term that came out in 2017. And the sole goal of GitOps is that it should be as easy as an engineer committing code, which we do all the time, telling the world our changes are ready, to enact all of the infrastructure and application changes that need to occur. This might be executing tests, this might be building, this might be making sure some sort of rolling deployment is going on, this might even be provisioning more infrastructure; and if there's any sort of variance, how do you rectify that? This gets into the four pillars of GitOps, and I'll go through what these four pillars are in some detail. The first one is that the entire system should be declarative. What does declarative mean? It's like me ordering a pizza from Samarth: I would like a pepperoni pizza, and Samarth will deliver me a pepperoni pizza; he's also a pizza delivery man in this example. I didn't have to say, Samarth, I need cheese that is no more than 2% fat, I need pepperoni that is honestly and craftily sourced from fair trade sources. I'm describing what I want, a pepperoni pizza, and I'm getting a pepperoni pizza, for that two-pizza team.
Now, the rationale behind why something has to be declarative flows into the second pillar: you're declaring a state. I need three replicas of my application, I need the source code exposed here, here, and here, I need metrics to be spit out here, here, and here, and at any given time, this is what I need for version one of my application. That recipe, the source of truth, should be in Git, or at least in source control, and if the version of truth is in source code, more people can see it; a lot of people can have access to what that recipe is. Which boils into points three and four, which are a little more prescriptive: what if there's any variance? What a declarative state system is really good at is immediately fulfilling your request: hey, I need three replicas, so it will drop the hammer and try to fulfill that request as quickly as possible. So say I went into Kubernetes and made a direct change, a kubectl apply, or a kubectl scale to increase the replicas, outside of the version control system. A full GitOps model will say, oh no no no, there's a variance in the state, because whatever is in version control is the absolute truth. This is the concept of auto-healing, or reconciliation: everything in Git is the truth. And also, when leveraging GitOps, you're codifying all the steps, going back to: I need the Redis node to come up, or I need the in-memory node to come up before the application node, I need the tests to execute with this amount of code or test coverage. All of these steps are codified at this point, so there's no loss in translation between the author and the runner of the ops.
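The reconciliation idea can be sketched as a toy loop. This is not any real operator's code; it just models the rule that Git wins: desired state comes from version control, live state comes from the cluster, and any variance produces a correction.

```python
def reconcile(desired, live):
    """Compare the state declared in Git (desired) with what is actually
    running (live) and return the corrections a GitOps operator would apply.
    Toy model: both states are flat dicts of field -> value."""
    corrections = {}
    for field, value in desired.items():
        if live.get(field) != value:
            corrections[field] = value  # drift detected: Git wins
    return corrections

# Someone ran a manual scale on the cluster; Git still says 3 replicas.
desired = {"replicas": 3, "image": "my-app:1.0"}
live = {"replicas": 5, "image": "my-app:1.0"}
print(reconcile(desired, live))  # -> {'replicas': 3}
```

A real operator runs this comparison continuously, so the manual change is reverted within one reconciliation cycle rather than lingering as drift.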
One of the other pillars that I brought up was enforcement, or auto-healing. This is actually fairly big: the quote unquote drift detection. As an infrastructure engineer, what you strive for is configuration control. We don't want random people going in and adding additional environment variables, or adding things to .bashrc; the list of what you want to control in the environment can go on. If you're using a GitOps operator, and I will pass it to Samarth in a little bit to explain what those moving pieces are, you're able to reconcile very quickly: the system will detect that there was a change from the declared state and enforce it back. So with that, I will hand it over to my buddy, and we can talk about some of the tools; if you were curious about how to enable this modernization, we'll give you a little bit of a prescription here. Thank you, Ravi. So, speaking of the right tools for the job, there are countless tools that are going to help you adopt the whole GitOps approach Ravi was talking about; you can add these different tools as a part of your workflow. The common GitOps tools that we are seeing are Argo CD, Flux, and Jenkins X. With all these different tools, when you get started, you have this whole concept of Git being your source of truth: you can have different YAML files that you change, and that propagates those changes across different environments. We have two deployment models: the push-based and the pull-based deployment models, and I do want to spend some time on push versus pull. On the left side, the push-based: what happens is that your image registry becomes the source of truth. This is more along the lines of the DevOps flow that we see.
So what we are saying is that whenever a new image comes in, your workflow pushes that particular image out as a part of the deployment, and a workflow gets triggered. On the other hand, with pull-based deployments, this is exactly where the GitOps model comes into play. You have an operator that sits between your image registry and your environment repository. It's watching for the newest set of Git commits, and as soon as the state in Git changes, it pulls the newly configured Git change, and that makes Git the source of truth. So what you're basically seeing here is that your image registry and your environment repository will always remain in sync. Say Ravi decides to go and make a change on the cluster tomorrow morning: I will be able to see that, because Git will remain the source of truth. Same thing if I go and revert a particular change: Ravi will see the same change. Irrespective of who makes the change, Git will always remain the source of truth. Just like everything else, GitOps is good at some things and these tools are bad at some things. GitOps tools are great at, number one, monitoring for changes, or drift. As I mentioned, if you wanted to go ahead and deploy a security fix, and you want to make sure it's being applied to the correct environment, that's where drift detection comes into play. We can see exactly what the change is: the commit ID, the person who made the commit, the hash associated with it. That tells us: this is exactly what is getting changed in this environment, which is being deployed to this particular cluster. The other thing is reconciliation. As Ravi touched on before, GitOps really helps with the whole reconciliation part.
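The pull-based model described above can be sketched as a small loop. This is a toy model, not Argo CD or Flux internals; the commit IDs and manifests are invented. The point is that nothing in CI pushes to the cluster: the operator pulls from the environment repo and applies only new commits.

```python
def pull_loop(commit_stream, apply):
    """Sketch of the pull model: the operator watches the environment repo
    and applies each new commit's manifests, so Git stays the source of truth.
    commit_stream yields (commit_id, manifests); apply stands in for the
    call that pushes the desired state into the cluster."""
    last_applied = None
    for commit_id, manifests in commit_stream:
        if commit_id != last_applied:
            apply(manifests)          # CI pushes nothing; the operator pulls
            last_applied = commit_id

applied = []
commits = [("a1f", {"replicas": 1}),   # initial state
           ("a1f", {"replicas": 1}),   # same commit seen again: skipped
           ("b2c", {"replicas": 3})]   # new commit: applied
pull_loop(iter(commits), applied.append)
print(applied)  # -> [{'replicas': 1}, {'replicas': 3}]
```

Because the operator only acts on commits, reverting a Git commit automatically rolls the cluster back, which is why both speakers keep saying Git remains the source of truth.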
It makes sure that your cluster and your environment repository always remain in sync. Next is source code management and Git integration. This helps keep role-based access control in check; it makes sure that the right people have the right permissions. And because Git is going to remain your source of truth, we know which set of deployments are happening, and in which particular environment; that's always going to be there. On the other hand, GitOps tools are bad at some things too. First, non-declarative infrastructure: GitOps does not really do a good job there, because GitOps is mostly designed around the declarative side. And the reason I hesitate to say this is because once the non-declarative part comes into play, are you even doing GitOps? I think that's the bigger question, because you start disqualifying Git at that point. The way I see it, from a non-Kubernetes deployment standpoint, when you're doing an EC2-based deployment, it does not really reconcile when something breaks on the VM. So when it comes to that, GitOps is not really good at handling those situations. Next is failures, audit trails, and RBAC. The important thing is that GitOps does not really give you the entire audit trail. If you have an approval gate in the middle, or role-based access control in the middle, it can't truly give you the entire flow from the start to the end of a particular workflow or pipeline: at which stage, who has done what, when, and where. That is something GitOps is not going to be helpful with. It will give you the hash, it will give you the commit ID, but it does not give you the end-to-end flow. Last but not least, confidence-building steps.
Now think about an enterprise-grade customer, or a financial institution, who has all these different steps in between: load tests, performance tests, other forms of tests that they run as they promote an artifact or a YAML file from one environment to another. When GitOps comes into the picture with confidence-building steps, it does not really do a good job. The reason is that GitOps is all about being instant; a performance test or a load test is going to take some time. Those tests make sure that the API is returning a particular value, that we get a particular message, that a payload is returning something; when it comes to that, GitOps is not the best way to go. All right, supercharging GitOps. One of the big questions that we usually hear is: is GitOps only for Kubernetes? And the answer is that GitOps is not only about Kubernetes. If any system can be handled and managed declaratively, and it has convergence at some point, then GitOps is good for it. Take your ECS containers: we can absolutely do GitOps for ECS as well. It just depends on how your containers have been designed; if they can converge at some point and be declarative, then absolutely, GitOps is the way to go there. Next is GitOps for immediacy and failure situations. When it comes to handling failure situations, GitOps does a pretty good job, but it goes back to the point where you have to basically tell it which particular version of your YAML files needs to be reapplied. It has to be instantaneous; if you have multiple flows that happen as a part of your workflow, then GitOps is not the way to go. Centralized strategy.
When it comes to a centralized strategy, Git will remain the source of truth, which basically means that any changes that happen will be registered in Git. That makes it really convenient, because anybody who's monitoring, from an infrastructure perspective or from an engineering management perspective, knows exactly what is being deployed and where it is being deployed. And last but not least, DevOps versus GitOps. I do want to spend a minute or two just walking through the differences between DevOps and GitOps, because I feel like this is a place where it really gets confusing. DevOps is nothing but the breaking of the silo between development and operations, and DevOps is a push-based model, as we discussed a few slides ago. It does follow a declarative approach, but the issue with DevOps is that you need to monitor each and every layer and each and every step that you take as a part of your deployment pipeline. For the automation of dev and the operations that happen, you have to bind everything together; you can't really have a single step doing all of it. When we switch gears to GitOps, it is more intelligent because it automates operations. Whenever there is a drift that we detect, it automatically goes ahead and pulls that particular configuration; you have an operator that's pulling the latest set of changes and then applying those changes to the different clusters. So GitOps basically powers the continuous delivery cycle, introducing the pull-based model. GitOps also means having declarative infrastructure as code, as Ravi mentioned, so what you could possibly do is have your entire application get deployed, but also have the infrastructure that is provisioned before that step come in through GitOps. And that also helps keep costs in check.
Last but not least, it monitors the desired and current state of the cluster. What's really important is that when you're deploying something, you need to make sure that your cluster remains in sync with the images that are being deployed; that way you can get the latest and greatest into the hands of your end users as soon as it is developed. GitOps really helps to monitor that state, and it also helps to keep those different YAMLs in check across different environments. It binds the whole monitoring and deployment story together, because GitOps is all about making sure that we can deploy as soon as possible, but also monitor as we deploy, and should something break, it provides the flexibility to roll back as soon as possible, just like Harness does. Well, I think that was very, very insightful, Samarth, and with this portion we're through the prepared part of the conversation, so we'd love to answer questions; I see there are a couple of questions in there. If you want to learn more, we have a whole bunch of GitOps material; give the QR code a quick scan, or head to that bit.ly URL, and you'll definitely learn more. So let's go through some of these particular questions, and keep them coming. One of the questions was: do you think GitOps is good for small companies and projects? You can take it or I can take it. I'll take it. So, GitOps is good for small companies for sure, the reason being that smaller companies deploy features and tests so often.
I'm sure that you are deploying often, but you want some form of control as you deploy: you can run some tests across it, and if this is a project you want to test out, maybe do a canary deployment and test it with a smaller set of people; then GitOps is the way to go. Yeah, absolutely, I would heartily agree with that. Especially if you're starting a new project, when you're picking out the infrastructure, you might be looking at a modern piece of infrastructure like a container orchestrator. It just enforces good habits, so I don't think there are any anti-patterns there. GitOps is a paradigm, and it is a little tooling-heavy, based on where it came from and the opinions of the several projects that kicked it off, but hey, I feel that it's good. I want to take the next question; I have a very strong opinion on this one. The next question was: is GitOps only applicable with Git, or can other version control systems be used too? This gets into, like anything in technology, how stringent a definition you want to keep. For example, microservices is a pattern, per se, but if you take a look at the quintessential prescription, it's like: oh, you have to have different copies of the data, it needs to be message-based, it usually has to have guaranteed communication between the services; and how many of us are doing all of that? Fifteen years ago, on one of my first projects out of university...
we were using ClearCase as our source control system, and every time someone on the team would commit, we would kick off a build in BuildForge. So things can make sense beyond Git: my favorite part of GitOps is the SCM event, and if Git publishes events, well, Subversion publishes events too. You can take the pillars up to a point. The main thing is, whether you're using Perforce or Subversion or CVS, all those systems publish events, so you might be able to automate the build, or automate a piece of it. And then, if your infrastructure is not declarative, you can still take the pillars of it, minus the enforcement; that part might be a little bit difficult. Let's see here, thanks for that question. The next question we'll give to Samarth: for GitOps, should you have all the Kubernetes manifests in a single repo, or can you have them in multiple repos? So, we've seen different organizations handle this differently. The whole concept of a monorepo is something we've seen as common across the board, but I believe personally, based on the different articles I've read and the research we've done, it is important to have this set up in multiple repos. The reason being that you might want some role-based access control around it: you want to make sure that only the right people have access to the right manifests, and that when they're applying those manifests, they're only being applied to certain environment types. So it's okay to keep them in multiple repos, keeping in mind that we want some role-based access control around them.
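The point above, that any SCM which publishes events can drive GitOps-style automation, can be sketched as a small event handler. This is a hedged illustration: `run_build` and the event dict shape are hypothetical stand-ins, not the webhook payload of any particular SCM.

```python
# Illustrative sketch: reacting to SCM events regardless of the system.
# Subversion, Perforce, and Git all publish commit/push notifications;
# the handler below dispatches on a simplified, made-up event shape.

def run_build(repo: str, revision: str) -> str:
    """Hypothetical build trigger; a real one would call your CI system."""
    return f"build started for {repo}@{revision}"

def handle_scm_event(event: dict) -> str:
    # SVN-style "commit" and Git-style "push" are treated the same here.
    if event.get("type") in ("push", "commit"):
        return run_build(event["repo"], event["revision"])
    return "ignored"

print(handle_scm_event({"type": "push", "repo": "app", "revision": "abc123"}))
```

The shape is the same whether the event source is a Git host webhook or a ClearCase/BuildForge-era trigger: an event arrives, and automation fires.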
The only time I would say you should keep the Kubernetes manifests in a single repository is when you know they're going to be applied commonly across all services. If you have certain manifests that are commonly applied, maybe a security manifest that you want to apply across all services, maybe logging, then absolutely, that could be the single repo for everything. But I believe it's best to have it across multiple repos. It gets into the religious debate of monorepo versus multi-repo. It also gets into system design; I see both sides of it. I favor the multiple-repo side because, when you're dealing with Kubernetes specifically, what is a Kubernetes manifest really? It's your actions: you're telling Kubernetes to change, you're enacting some sort of change in the system. And that change is declarative, right: you're giving a desired state, but usually that desired state, when you apply another manifest, is different than the previous state. So it all comes down to, as cliche as it sounds, are you making one big systemic change, or are you making very small changes? It's similar to having a 15,000-line shell script: it doesn't make sense. More compartmentalized Kubernetes manifests make more sense, in my opinion, but to each their own. Good question. So, next question: are there any organization or team structures that work better with GitOps? I always like to hand it to Samarth first and then come in with my opinion. Okay. With regards to this, what I believe works best, when it comes to smaller team structures for sure, is that GitOps is a good play, 100%.
But then you have this whole disparity between who's handling the provisioning of the infrastructure and who's handling the actual application deployment. Here it would be what we would coin the SRE team, and you would say, hey, the SRE team is responsible for doing this. But to be honest with you, the team structure that really works best is the one from the slide that Ravi pulled up: the platform engineer. They're responsible for handling your application code, but they also know what infrastructure needs to be provisioned as part of this entire setup. So I think that's where this whole thing comes into the picture. Yeah, absolutely. I hate to say it, but using GitOps is actually kind of complicated, because it requires a lot of expertise of the author. I give this whole spiel, not in this presentation, about how engineering burden, or developer burden, is just increasing all the time. In the same vein: I'm not an expert in networking. Actually, my biggest outage was a VPC misconfiguration, and any networking engineer would have caught it; I had to fly out and apologize to our client after I blocked half the internet with a CIDR rule.
Another story for another time, but it's really about instilling some of that expertise in the developer, with Samarth as the reviewer, right, so maybe if he looks it over he'd say, "we see the solution is a little bit off." It's really about pushing expertise around, and it requires a semi-sophisticated team, or platform in general, to run it. But do try it; don't let that be a hurdle. It's a little bit of self-service to the extreme: every time I commit into our SCM solution, I expect a change to be rolled out. And just to add to what Ravi is saying: the bigger thing with GitOps is that you have the whole capability of doing drift detection, which is one of its biggest advantages. You know exactly what has changed. So say Ravi made a change on the VPC side and something did not work: we know exactly what that change looks like, so GitOps helps with that. When you start going in that direction, you see that Git becomes your source of truth for what exactly has changed. Thanks for that question. Let's see, I think this one is more of a statement. There are two questions. One is "which GitOps tool is best to use"; I will save that for a little bit later. And then there's a question similar to the multi versus mono repo one: should multiple repos be owned by the ops team, or by the devs who own the app repos? So I guess the question is who is the custodian, or owner, of the repositories. It can be either, really; it usually goes back to who runs your SCM. Is it a dev tooling team, is it an engineering efficiency team, is it a central tools team, or is it yourself with your own Git host?
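The drift detection Samarth describes, knowing exactly what changed between the manifest in Git and the live object, can be sketched as a field-level diff. This is a hedged illustration: the field names (`replicas`, `cidr`) are made up for the example, not a real Kubernetes schema.

```python
# Illustrative drift report: compare the manifest in Git with the live
# object and list every field that differs, in either direction.

def drift_report(git_manifest: dict, live_object: dict) -> list:
    changes = []
    for key in sorted(set(git_manifest) | set(live_object)):
        want, have = git_manifest.get(key), live_object.get(key)
        if want != have:
            changes.append(f"{key}: cluster={have!r} git={want!r}")
    return changes

git_manifest = {"image": "app:v2", "replicas": 3, "cidr": "10.0.0.0/24"}
live_object = {"image": "app:v2", "replicas": 5, "cidr": "0.0.0.0/0"}
for line in drift_report(git_manifest, live_object):
    print(line)
```

A report like this is what makes Git the source of truth in practice: when a VPC rule or replica count drifts, the diff points straight at the offending field.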
Like any piece of developer tooling, there are operational aspects to it, there are quote-unquote DevOps aspects to it: how do you increase the efficiency of the pipeline, how do you secure it. So I would say either can own it; it's not a requirement of GitOps. Let's see here. Another question: how do GitOps tools handle securing infrastructure which is deployed as part of GitOps? I know you have a juicy answer for this one. So, you have different ways you could handle this. Basically, you could have a specific pre-deployment step where you provision your infrastructure. You could use tools like Terraform, or tools like CloudFormation; they are responsible for getting the infrastructure provisioned before your actual deployment happens. And obviously there's Harness to help tie all of these together. So what really happens in this case is you have your pre-deployment step, which is nothing but you making a change in the Terraform source repository. There's a TF file, a Terraform file, which Harness pulls in, and it provisions that infrastructure for you. What happens after that is that the GitOps tool you're using does the deployment on top of this. So when it comes to provisioning your infrastructure before the deployment: you know what infrastructure needs to be provisioned, you know the artifact that you're deploying, you know how many EC2 instances are required, how many VMs are required, and you provision just that much. Then you deploy your artifact using your GitOps tool. And right after that, you can decide to either keep your infrastructure around for some time to make sure everything looks good, or you can tear it down.
So the way I see it personally: you have your Terraform setup, a TF state file which handles the provisioning of the whole thing. You do a terraform apply, then you have your kubectl apply, and then you basically have a terraform destroy. That would be the entire order with GitOps. Yeah, and it also goes back to expertise dissemination. I was thinking of this for the last question too: in my outage, the fix might have been security hardening, which is even more realistic. GitOps by itself will not enable security; it's like any other tool. You're not automatically PCI compliant just by using GitOps, and that's the slightly overstated version of the answer. It's about how you disseminate the information: providing an archetype or a template for Ravi, who doesn't secure his API and isn't sanitizing his inputs per our AppSec team. You're trying to disseminate that information across the pipeline, and I might understand it a little bit better because it's codified. Instead of, say, WAF rules, or NIST rules on a Linux machine with passwords disabled, where I don't really know what those mean, if I can see it codified for me in a Kubernetes manifest, I know: this is what they need. So it's a tool; by itself, it's just a question of how you disseminate that information across the SDLC. I think two questions kind of go together: one question was "which GitOps tool is best to use," and the other was "what are the differences between Argo, Flux, and Jenkins X?" Samarth, do you want to take a stab at the core differences between those tools? I can take a stab at it, and maybe you can add more.
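The provision-deploy-teardown order described above can be written out explicitly. This is a hedged sketch: the commands, flags, and `manifests/` path are illustrative of the pattern, not a prescription, and the code only prints the steps rather than executing anything.

```python
# Sketch of the pipeline order: Terraform provisions, kubectl deploys,
# Terraform optionally tears down. Flags and paths are illustrative.

PIPELINE = [
    ["terraform", "apply", "-auto-approve"],    # pre-deployment provisioning
    ["kubectl", "apply", "-f", "manifests/"],   # the GitOps-driven deployment
    ["terraform", "destroy", "-auto-approve"],  # optional teardown afterwards
]

def render_pipeline(steps) -> list:
    """Return each step as a shell-style string, in execution order."""
    return [" ".join(step) for step in steps]

for cmd in render_pipeline(PIPELINE):
    print(cmd)
```

In a real setup the teardown step is often conditional: you might keep the infrastructure around for smoke tests or a soak period before destroying it, exactly as Samarth describes.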
So from the standpoint of Argo, Flux, and Jenkins X: Argo and Flux are pretty much operators. The way I see it, Argo has an operator for doing GitOps, Flux is an operator for doing GitOps, and with Jenkins X it's more about how they tie all of these things together. Argo is pretty much only doing Kubernetes, same with Flux; Jenkins X ties in the operator but comes in a little earlier in the cycle, so it's more like CI and CD, whereas Argo and Flux are more toward the CD side. They don't really want to worry about how your build process happens or what happens as part of your artifact development. All they care about is: once the artifact's ready, how do I do the apply for it? That's where the operators come into the picture, that's where the drift detection comes into the picture, and how it converges with your deployment pipeline. That's what Argo and Flux do. Jenkins X, on the other side, is responsible for doing CI and CD, but it's also responsible for shifting left, because they want the whole GitOps model added right at the beginning of the process, on the CI side. Shifting left to reduce failure: that's how I would see it. Yeah, that's a better answer than I would have given. Argo is more of a tool, and Flux is more of an engine. The two projects actually had a lot of connective tissue and then they parted ways; I'm not sure whether they're still going to be sharing some connective bits with each other. There's a lot of tooling that has spun up around this, because as Kubernetes becomes more and more popular,
this operator-based paradigm is becoming more popular too, and so there are several GitOps tools out there now; the most prevalent is Argo. I thought Argo was that tea they sold at Starbucks, then I was told it was a GitOps engine. Oh, the raspberry tea? No, joking. I see a lot of Argo out there, so, hey, beauty is in the eye of the beholder; no one tool will solve a hundred percent of the problems. We clearly work at Harness, and we might say Harness for that, but definitely look into it if you haven't before. There are lots of great tools and material out there to get started, and I'm really excited about GitOps. Hopefully that answered your question. Let's see: how is Argo or Flux better than baking CD into pipelines, assuming proper Kubernetes permissions through RBAC? Not sure what that question means. Maybe I can help, Ravi; I can take this one. So, I do want to call out that Argo and Flux are responsible specifically for taking your YAML file and deploying it. It's basically your kubectl apply: taking the manifest, rendering the manifest, and applying it on the cluster. That's how Argo and Flux come into the picture. When it comes to assuming proper Kubernetes permissions through RBAC, you're pretty much talking about cluster-level permissions. But what if, say, Ravi and I happen to work in the same department, and you want to make sure that Ravi does not have access to the production deployments but I do, or Ravi is not supposed to be approving tickets but I am? In that case you have the approval steps that come in the middle, the different sets of approvals that you want to do. Maybe you want a Jira integration, maybe you want a ServiceNow integration if that's your tool of choice.
You want to make sure that all of this is recorded as different things happen. And also: if your cluster goes down, so does your pipeline. That's an important thing to consider. In the case of Harness, when the cluster goes down, we tell you we do not have connectivity, we retry the cluster, and once the cluster is back up, we make sure to pick it back up from the same point. But in the case of Argo, because Argo or Flux is installed on your Kubernetes cluster, should the cluster go down, everything goes down with it. Yeah, awesome explanation, and for the asker, if you're still listening after that answer: one observation I've had about the tooling, as a long-time abuser of systems, is that GitOps systems are typically very optimistic. What that means is they assume changes will pass: the deployment is going to be promoted, the change should go out, and it should go out flawlessly. That does allow for quicker iteration, and if something didn't happen, you can kind of reapply the last known state, but they are optimistic in their path. To take two seconds to talk about us: we take a little bit of a different path. Samarth might shake his head at me if I say this, but we take a more pessimistic model. We're building in failure from the get-go, assuming this particular deploy might not work, and so we provide very easy ways to back it out. But not to push Harness too much here; this is more for education. It looks like there are no more questions. Hey, if you have any last-minute questions, get them in. If not, thank you everybody so much for attending the webinar, or for seeing us after the fact; it really means a lot to Samarth and me. Great.
Thank you so much to Ravi and Samarth for their time today, and thank you to all the participants who joined us. As a reminder, this recording will be on the YouTube page later today. We hope you're able to join us for future webinars. Have a wonderful day. Thank you.