Welcome everyone to the talk "Testing Service Mesh Configs and Kubernetes Manifests" by Srinivasan Sekar and Ashiv Thorath. Without further delay, over to you Srinivasan and Ashiv. Thank you Gundan, and welcome all. Good morning or good afternoon, depending on where you're joining from. We are going to talk about testing Kubernetes and service mesh configurations. We'll start with introductions. I'm Ashiv Thorath. I work with ThoughtWorks as a lead consultant. I've been with ThoughtWorks for five years, but I've been developing software for more than 10 years now. Java development is my core skill, but I've been doing a lot of other things as well: front-end development, testing, DevOps tooling and DevOps consulting. That's about me, and you can see my Twitter and GitHub handles on the slide. Over to you, Srinivasan. Thanks Ashiv, and thanks everyone for joining us. I'm Srinivasan Sekar, an open source enthusiast and a contributor to various open source repositories including Selenium and WebdriverIO. I'm a conference speaker, and I also work as a lead consultant at ThoughtWorks. Today we're going to talk about testing service mesh configurations and Kubernetes manifests. The agenda for today: we'll talk about what Kubernetes is all about, what Kubernetes manifests look like, some of the failure stories that have happened in the Kubernetes world, how you can make sure you don't fall into those same traps, and a short demo as well. Then: what a service mesh is all about, the different capabilities you get from a service mesh, what service mesh configurations look like, and a short demo on service mesh configuration testing. Before we get into what Kubernetes is all about, I'd like to start with an analogy. Let's say I own a house with 10 rooms, and I would like to rent out three of them.
Ideally, I have two options. One, I take care of everything myself: advertising, finding tenants through an online accommodation service, and all of the other responsibilities. Or, two, I can hire an agent to do that for me. The agent takes care of everything I would otherwise have to handle: identifying the three rooms available for guests, handing over the keys, making sure the rooms are clean. And how does the agency do it? They have their own employees. One employee takes care of cleaning, another takes care of preparing food, another keeps a log of who came in, who went out, at what time, and what food they ordered, and so on. So how does this relate to Kubernetes? The agent who takes care of all those things is what Kubernetes does for you: it is a container orchestrator. When you want to deploy a service, you tell Kubernetes how many pods you want to run and what resources you want to allocate to them. That's what Kubernetes is all about. You give Kubernetes a containerized application and you just say: I want to run five replicas, each with two CPUs and three gigs of RAM, and it takes care of orchestrating everything.
There are a lot of container orchestrators similar to Kubernetes, but we're going to focus only on Kubernetes today. It helps us orchestrate containerized applications by spinning up the number of pods you ask for and allocating the resources you ask it to. How do you instruct Kubernetes to do that? You have something called a manifest file. Kubernetes workloads are described in these YAML manifests, and there are many kinds of manifest files. We have two different manifests here: one is a deployment manifest and the other is a service manifest. In the deployment manifest you say: this is the image, this is where it resides in the container registry, and this is the port I want to assign to the container. In the service manifest you give the name of the service, assign some labels to it, and specify the port it has to bind to, the target port, and the protocol for communication. On a high level, that's what Kubernetes manifests look like. Now, when an infrastructure consultant or a developer or a QA defines these manifests, there's a good chance of making mistakes. To give you an example, let's say we haven't provided the correct target port for the service: the service simply won't be available for us to use.
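The slides themselves aren't reproduced in this transcript, so here is a rough sketch of the two manifests being described. The service name, registry path, and port numbers are made up for illustration, not taken from the talk:

```yaml
# Illustrative deployment manifest: image, registry location, container port.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payment-service
  template:
    metadata:
      labels:
        app: payment-service
    spec:
      containers:
      - name: payment-service
        image: registry.example.com/payment-service:1.4.2
        ports:
        - containerPort: 8080
---
# Illustrative service manifest: name, label selector, port-to-targetPort
# binding, and protocol.
apiVersion: v1
kind: Service
metadata:
  name: payment-service
spec:
  selector:
    app: payment-service
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
```

The `targetPort` here is exactly the field the speakers warn about: if it doesn't match the container's port, the service exists but never reaches the pods.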
You might also end up creating security loopholes. Let's say you deployed a Kubernetes manifest in your development environment with privileged access enabled, and then pushed the same configuration to production, still enabling privileged access for the container. That introduces a security loophole. So a lot of things can go wrong with Kubernetes manifests, and there are tons of failure stories; there is even a website that lists the failure stories different organizations have gone through. If you never study those failures, you'll never know what success could look like or how to avoid them. To give you another example, quite recently Facebook went down for more than six hours, and a configuration change took them there. They would have tested everything else, but effectively they ended up testing that configuration in production. A single configuration change can have a huge business impact; it can take off your brand value in a few seconds. It's not just your application code that lives in production; it goes along with its own configuration. So just as we give importance to testing the application code and the business logic in it, we have to give equal importance to testing the configuration that sits alongside the application code. Only if you make sure those two coexist and work well together can you ensure things go well in production. There are a lot of failure stories, which I'll defer for now, and a lot of wonderful stories as well about how Kubernetes solved problems for different organizations. Ashiv, you've been working on Kubernetes for quite some time.
So what are some of the failure stories you've encountered, or some of the best practices you follow to make sure you don't fall into them? Can you list a few? Thank you, Srini. I think the failures are pretty common; you mentioned some of them, like configuration errors. Most developers using Kubernetes have been in that position where you make a mistake and only realize it when you deploy, which is quite late feedback. But because so many people have run into these problems, a lot of smart people have come up with best practices. There's a list of practices you need to follow when working with Kubernetes, and if you're a developer who has access to Kubernetes manifests, these are practices you must know and try to follow. To list a few, the first one I'd like to talk about is health checks. What is a health check? It tells whether your service is healthy, whether it is able to serve any more requests or not. In the Kubernetes world there are two probes, the liveness probe and the readiness probe. You can use those to decide whether your service can take more requests, and Kubernetes can orchestrate based on that status, so you should have them configured in your service when you deploy. Another one is graceful shutdown. We use containers in Kubernetes, and they are throwaway, disposable things, so it is pretty common that you discard them at will: you deploy a new version and throw away the old one, you change some configuration, throw away the old version and redeploy.
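The two probes just mentioned are plain container-spec fields. As a minimal sketch (the endpoint paths and timings here are made-up examples, not from the talk):

```yaml
# Illustrative liveness and readiness probes on a container.
# liveness: restart the container if it is stuck;
# readiness: stop routing traffic until it can serve requests.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```

A failing readiness probe only removes the pod from service endpoints; a failing liveness probe actually restarts the container, which is why the two are kept separate.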
So when you throw a container away, you need to ensure it has enough time to finish what that particular instance is working on. Imagine it has taken an HTTP request and is processing it; it's not good practice to kill it abruptly. Give it a grace period, maybe 30 seconds, and you can tune that. Kubernetes allows this out of the box, and you should try to use it. Another one is fault tolerance. When you're using microservices you have a lot of moving pieces, and imagine one piece going kaput and taking down your whole system. It's not a good experience; it may cost you money, it may cost your organization its reputation. So it's important to build for fault tolerance, and the easiest example is replicas. Kubernetes allows you to specify replicas in the manifest: you say I need to run three replicas, so that even if one of them goes down, the other two are there to back it up. The next one, and I think the most important of all, is resource utilization. Every service you run is a container, and containers need processor and memory. If you don't put any limits on them, one bad container can try to grab all the resources, and then the other services on that machine will starve. It can bring down your node and lead to outages, so it is very important that you put a limit on what a particular service can consume. Kubernetes allows you to do that, and specifying it is one of the good practices. This is a long list; Srini, maybe you'd like to talk about a few more. Sure. There are a lot of practices you can follow in Kubernetes, as Ashiv has listed. Let's talk about resource tagging next.
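The three practices just discussed (graceful shutdown, replicas, and resource limits) all live in the same deployment spec. A minimal sketch, with illustrative names and values:

```yaml
spec:
  replicas: 3                          # fault tolerance: survive one pod going down
  template:
    spec:
      terminationGracePeriodSeconds: 30  # graceful shutdown window before SIGKILL
      containers:
      - name: web
        image: registry.example.com/web:1.0.0
        resources:
          requests:                    # what the scheduler reserves for this pod
            cpu: 250m
            memory: 256Mi
          limits:                      # hard cap so one bad container can't starve the node
            cpu: 500m
            memory: 512Mi
```

`250m` means 0.25 of a CPU core and `256Mi` is mebibytes; requests affect scheduling while limits are enforced at runtime.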
With resource tagging, you tag your pods with labels: technical labels, security labels, or business labels. They help you identify, just by going over the manifest, what it is actually about. For example, you can tag the name of the application; on the business side you can have labels identifying who owns the application, who is responsible for it, and which project or business unit it belongs to. Labels help in the security context too: you can tag resources with a confidentiality level, or with the specific compliance requirements they have to adhere to. Next, configurations and secrets. It is best practice to segregate configuration from the application itself and keep it separate, so the application can focus on business logic and the configuration can be externalized. The main benefit is that you don't have to recompile your code when you want to change configuration, which means that even when your application is up and running in production, you can change the configuration dynamically. Another advantage is that you don't have to rebuild: the same code can be used in multiple environments, with only the configuration changing between them.
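As a sketch of both ideas, here is an illustrative set of labels plus configuration externalized into a ConfigMap. All the label keys and values are made up as examples:

```yaml
# Ownership, business, and security labels on a workload.
metadata:
  labels:
    app: payment-service
    team: payments                  # who is responsible
    business-unit: commerce         # which unit it belongs to
    data-classification: confidential
---
# Configuration kept outside the application image, so it can be
# changed per environment without rebuilding the code.
apiVersion: v1
kind: ConfigMap
metadata:
  name: payment-config
data:
  RETRY_LIMIT: "3"
  FEATURE_FLAG_NEW_CHECKOUT: "false"
```

The same image can then be deployed to dev, staging, and production with a different ConfigMap in each environment.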
As for secrets, it is a best practice to mount them as volumes, not as environment variables. The contents of secret resources should be mounted into containers as volumes rather than passed in environment variables and exposed. This prevents secret values from appearing in the command used to start the container, and so on. Then there's the pod security policy, which helps you restrict whether a container can run with privileged access. It is best practice not to run your container as the root user: give it only the appropriate access, avoid giving the container privileged access, and limit its capabilities to prevent privilege escalation. Those are pod-level security policies; at the namespace level, you can also define limits per namespace. These are some of the best practices we've captured here, and there are a lot more. But the real question is: how do we make sure these best practices are actually followed over a period of time? Let's say I'm playing around with my manifest file and accidentally push it to production without validating it. That could wipe out real business value, even though the application works fine locally end to end every time; a simple configuration change can mess up everything. So you need appropriate checks around these best practices, which means appropriate tests around your Kubernetes manifests. One place to start is ensuring the YAML structure itself is proper, that the YAML is well formed and valid.
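The secrets-as-volumes and non-root advice can be sketched in one pod spec. The secret name and mount path here are illustrative:

```yaml
# Secret mounted as a read-only volume (not as env vars),
# plus a restrictive security context for the container.
spec:
  containers:
  - name: payment-service
    image: registry.example.com/payment-service:1.4.2
    volumeMounts:
    - name: db-credentials
      mountPath: /etc/secrets
      readOnly: true
    securityContext:
      runAsNonRoot: true
      privileged: false
      allowPrivilegeEscalation: false
  volumes:
  - name: db-credentials
    secret:
      secretName: db-credentials
```

With this shape the secret never appears in `kubectl describe` environment output or in the container's start command; the application reads it from files under `/etc/secrets`.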
Another way is to capture certain policies. To give an example, one policy could be: I don't want my container to run with privileged access. Another: I don't want to keep secrets in environment variables; I want to mount them all as volumes. Likewise, you can define your own policies and make sure they are tested even before you go to production, because discovering problems after you reach production has its own cost, and the feedback cycle is huge. There is a lot you can verify before you deploy a configuration. Introducing static checks against the deployment manifests you've created, backed by policies, will help you uncover a lot of errors early in the development cycle. There are several categories of tools for this: API validators, built-in checkers, and custom validators. Ashiv will take us through how API validators differ from built-in checkers and how you can write policies around your configurations. Thank you, Srini. Before we move on to the validators, let's look at this deployment manifest. Srini showed us one before; this one is for another application, an NGINX frontend. It runs on port 80 and serves static content. If you look at this YAML file, there is one error in it. It looks correct structurally, but look at the replicas: we need a number here, not the number's name. We put the wrong thing there on purpose, the word instead of the digit, just to show you some validation. Now, if I have this deployment manifest, how can I verify it with API validators? Maybe we can move on to the next slide. There are a couple of tools, the first being kubeval.
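Since the slide isn't shown in the transcript, the flawed NGINX manifest being described would look roughly like this (reconstructed; names and versions are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-frontend
spec:
  replicas: "three"        # BUG: the schema requires an integer such as 3
  selector:
    matchLabels:
      app: nginx-frontend
  template:
    metadata:
      labels:
        app: nginx-frontend
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
```

The file is syntactically valid YAML, which is exactly the point: only schema-aware validation catches the string-where-integer-expected mistake.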
Before that, let me introduce how it works. In Kubernetes, you have your kubectl command, which is a client, and then there's the Kubernetes API server. How do they communicate? Using a REST API. And since it's a REST API, there is an OpenAPI specification with schemas. kubeval bundles the schemas for recent Kubernetes versions, starting from 1.10 up to the latest. When you want to verify a YAML file, instead of validating against one fixed version, you can choose which version you want to validate against. As you can see, for the previous file it shows the error we expected: it wanted spec.replicas to have an integer value, but it got a string. So it is able to communicate the problem. That's one way of verifying against the schema. The other way is built into kubectl itself. If you've used Kubernetes, you would know the CLI has an apply command, which lets you deploy your workload. It also has a dry-run flag. What I'm saying there is: try to process this YAML, but only as a dry run, don't really deploy it. In that process, it can tell us the problems in this particular manifest. As you can see highlighted, we get the error we expected: it expected an integer but got a string. So how are they different? kubeval does not need a cluster: you just need the tool, you don't have to connect to a cluster, and you can test against different versions. The kubectl approach needs connectivity to a cluster, and whatever resources you want to verify are verified against the one fixed version running on that server.
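The two invocations would look roughly like this. Note this is a sketch: flag spellings have shifted between tool releases, so check your installed versions:

```shell
# kubeval: validates against bundled OpenAPI schemas, no cluster needed;
# the version to validate against is selectable.
kubeval --kubernetes-version 1.18.0 deployment.yaml

# kubectl: validate via the apply machinery without deploying.
# Newer kubectl splits dry-run into client-side and server-side modes;
# the server-side form needs connectivity to a running cluster.
kubectl apply --dry-run=client -f deployment.yaml
kubectl apply --dry-run=server -f deployment.yaml
```

Both should reject the `replicas: "three"` manifest with an "expected integer, got string" style error.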
These are just two of the many API validators available; if you want to explore, you can obviously find more. Let's talk about the built-in checkers next. There are other tools that come with built-in opinionated checks. When I say opinionated checks, I mean there are policies or best practices that the particular tool believes in. One such tool is kube-score. You run it as `kube-score score` plus the YAML file you want a score for. If you look at the sample output, it points out four critical things we should take care of: one is about the network policy, some are about the security context, and the last one is about the image pull policy. In the interest of time, let's focus on the last one. It is saying that imagePullPolicy should be set to Always, and if you're not setting it, it gives an error. Why do they say that? Imagine you have a latest tag and you keep pushing to the same tag. When imagePullPolicy is Always, even if the latest tag is already present on your machine, Kubernetes will always re-pull the image, which ensures you always get the newest one. That sounds good. But that's exactly the case where Datadog got burned. Datadog, a leader in observability, have publicly described an issue where they used imagePullPolicy Always, and when their cluster went down, it tried to pull so many images from the registry that the registry thought it was a DDoS attack and blocked them. So one size will not fit all. If a tool has one opinion and your team has a different opinion, how do you solve that? That's where the custom validators come into the picture. Let's look at the screenshot. The language here is Rego, and the tool we're talking about is conftest, which can run policy checks.
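Staying with the image pull policy trade-off for a moment, the field in question is a one-liner on the container spec. This sketch shows the opposite opinion to kube-score's default (illustrative names):

```yaml
# imagePullPolicy trade-off:
#   Always      - guarantees fresh pulls of a moving tag like :latest,
#                 but can hammer the registry during mass restarts
#                 (the Datadog story above).
#   IfNotPresent - safe when tags are pinned and immutable.
containers:
- name: web
  image: registry.example.com/web:1.0.0   # pinned, immutable tag
  imagePullPolicy: IfNotPresent
```

Which value is "correct" depends entirely on the team's tagging discipline, which is why a fixed opinion baked into a tool can't fit everyone.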
What you see in the screenshot is: if the input is of kind Deployment, I want to get all the containers, and for each container I check whether resource limits are properly defined. If they are not defined, I deny; I deny because of this policy. That's how this particular check runs. conftest as a tool doesn't only work on Kubernetes. It can work on any structured data: your Terraform files, your Dockerfile, even a response from an API or a JSON file. It's a general-purpose structural checking tool. Maybe let's try to understand more about conftest. Quick time check: I think we're at almost 20 minutes. So let's talk about how conftest works behind the scenes. It is built on top of something called Open Policy Agent. Open Policy Agent, as you can see in the diagram, is a policy decision engine. It allows you to decouple all your policy-related logic into this engine; you can delegate all policy enforcement to it. How is it a general-purpose, unified tool? Look at the diagram. At the bottom there's a policy file, and then there's JSON data. When I have this data and I have this policy, OPA can check for us whether the data conforms to the policy that's defined; if not, it tells us in the output. At the same time, at the top, there's a service. It's pretty common to have authorization in microservices, for example role-based access control. You can define those kinds of rules in OPA too, and then your service can ask OPA: given these conditions, should I go ahead or not, by policy? Your service is no longer worrying about the policy; OPA is making all the decisions. You can upload and manage all the policy-related logic separately.
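The limits check described from the screenshot would look roughly like this in Rego. This is a sketch in the classic (pre-OPA-1.0) rule syntax that conftest has historically used; newer OPA releases prefer the `deny contains msg if` form:

```rego
package main

# Deny any Deployment container that does not define resource limits.
deny[msg] {
  input.kind == "Deployment"
  container := input.spec.template.spec.containers[_]
  not container.resources.limits
  msg := sprintf("container %s has no resource limits", [container.name])
}
```

The `containers[_]` iteration visits every container in the pod template, so a single rule covers multi-container pods.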
That's how it is a unified tool: it works across different tools across the cloud stack. That's about Open Policy Agent. Maybe we can go on to the next one. Thanks, Ashiv, for walking us through that. Let's do a quick demo of how these policies are defined and how we can ensure they are actually adhered to in a deployment. I have a deployment here, quite simple. We define a container and the image I need to pull, the container port is assigned, and we also have a security context to ensure we are not running as the root user: I'm setting run-as-non-root to true here, and I don't provide any privileged access by default. Now, if you had to define policies around this deployment, you could say: always run as a non-root user; don't run your container as root. Don't give containers privileged access while they're running. Looking at the image, you could also mandate the container registry it has to be pulled from; you might have your own registry and want images pulled only from there. You could say: I don't want to pull images with the tag latest; I want pinned versions instead. So you can define policies like these and make sure they are tested against the deployment, so that rather than deploying and then discovering problems in the environment, you catch them up front and ensure you adhere to the policies. As I said, some of these policies are written in Rego. You can see it here: for an input of kind Deployment, I'm going to get all the container images.
The rule traverses through this path, iterates over the containers array, and gets all the image names out of it. Then it checks whether the image has the tag latest. If it finds that tag, it denies; it stops such a deployment from going forward. Images should ideally not be tagged latest, and if we define that as a policy, we should check it up front. Another policy checks whether the kind is Deployment, as we've seen previously: there are different kinds of manifests, one of kind Deployment and another of kind Service, and there are other kinds as well. We want to apply these policies only if the manifest is of kind Deployment, so we traverse to that path and say: if the previous checks are true, deny such deployments. And this isn't really about Kubernetes, or conftest, or Rego specifically; you could define this with any tool. The point is to have your own opinionated checks, make them public within the team, and ensure everyone follows the standard. This helps you define your standards as policies and ensure they are adhered to, because there's always a chance someone changes a tag to latest and tries to deploy it; you might end up running the latest image and, moving that to production, introduce a security loophole again. Likewise, you could have policies around volumes, which volumes to mount and how, or a minimum amount of resources to reserve for a container; those could also go in as policies. Now that we have some policies, let's see how we run them with conftest.
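The latest-tag check just described might be written like this. Again a sketch in the classic Rego rule syntax, with the missing-tag case handled as well since an untagged image defaults to latest:

```rego
package main

# Deny Deployment images explicitly tagged :latest.
deny[msg] {
  input.kind == "Deployment"
  image := input.spec.template.spec.containers[_].image
  endswith(image, ":latest")
  msg := sprintf("image %s uses the latest tag", [image])
}

# Deny Deployment images with no tag at all (which implies latest).
deny[msg] {
  input.kind == "Deployment"
  image := input.spec.template.spec.containers[_].image
  not contains(image, ":")
  msg := sprintf("image %s has no tag (defaults to latest)", [image])
}
```

With conftest installed, a run along the lines of `conftest test --policy ./policy deployment.yaml` (adding `--output json` for JSON output) would evaluate a manifest against these rules.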
We've seen conftest print its output in JSON format; here is where my policies live, and this is the manifest file I want to run against them. If you look at this, it says one check was successful and another policy failed because the image is tagged latest. So I go back and find that, yes, it is tagged latest; I fix it and make the policies pass. So: define your own policies, your own opinionated checks, and make sure those policies are enforced. Even a minor mistake in a resource unit, a capital G versus a lowercase g, a simple case convention, can turn into a huge disaster when it comes to resource allocation. Now that we've covered everything on the Kubernetes side, let's talk about what a service mesh is all about. A service mesh is a low-latency infrastructure layer that enables service-to-service communication. Going back to our analogy: I had three rooms I wanted to rent, and I took the option of having an agency manage all of my requirements, so rather than me orchestrating everything, someone else orchestrates. Now imagine there are multiple agencies, each taking care of an individual service: one agency takes care of cleaning, another takes care of preparing food. They don't know each other, and there are employees within each agency. The agencies have to talk to each other to provide proper service to the tenants, so they need to know each other's identity to enable communication; without that identity, it is difficult for them to communicate. And there may be employees going in and out around the clock.
We also need a gateway: when we assign certain responsibilities to certain agencies, we have to have a gateway and ensure security protocols are managed and security is built in. How do we ensure all that? That's where the service mesh comes into the picture: it externalizes all the communication between services. Take an e-commerce example: when you make a payment, the payment service has to talk to an availability service to ensure the product is still available during payment. So there is always communication happening between the microservices deployed in Kubernetes, and each service would otherwise have to understand how that communication happens with services running in other pods. That's where the service mesh helps: it externalizes those communication configurations from the service, so the service can still focus on its business logic. Say I tried to call another service but it failed due to latency or some other issue, and I have to retry. Rather than building the retry logic inside the service, how about someone else takes care of it? The service focuses on business logic, and those externalized configurations can be injected at runtime against every service's pods. Now the service knows how to talk to other services, and since that's taken care of automatically, I can focus on the business logic. The service mesh as a whole takes care of this: it ensures fast, reliable, secure communication between microservices, and it provides a lot of other features as well.
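As one concrete shape this externalized retry configuration can take, here is a sketch using an Istio VirtualService (the mesh implementation the talk turns to next); the service name and retry values are illustrative:

```yaml
# Retries moved out of application code and into mesh configuration:
# the sidecar proxy retries failed calls to the availability service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: availability
spec:
  hosts:
  - availability
  http:
  - route:
    - destination:
        host: availability
    retries:
      attempts: 3             # retry up to 3 times
      perTryTimeout: 2s       # each attempt gets 2 seconds
      retryOn: 5xx,connect-failure
```

The payment service's code contains no retry loop at all; changing the retry budget is a configuration change, not a redeploy of the application.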
A service mesh is a paradigm, and there are multiple implementations of it. The implementation we've picked and are going to talk about today is Istio, so let's look at its architecture. Thank you. Maybe I can walk us through the architecture. As Srini said, it's a paradigm, and the architecture of most service meshes is broadly similar: you have a control plane and a data plane, and the control plane controls the data plane. As a developer or a DevOps engineer, you send your instructions to istiod, which you can see at the bottom of the architecture diagram: you send them to the control plane, and the control plane ensures that all the configurations and instructions you've provided are translated into a format the other components understand and propagated across the cluster. That is the responsibility of the control plane. The core question is how the data plane manages all the things we were talking about. It's pretty simple. If you look at the upper part of the diagram, there is a service, and every service has a proxy. This is the Envoy proxy (the logo you see is Envoy's), and these Envoy proxies are the only entry and exit points, ingress and egress, for a given service. Your service doesn't really care how it talks to the external world or how the external world talks to it; it just cares about the business logic, and the Envoy proxies carry all the responsibility for communication, enforcement, and metrics. These cross-cutting concerns can be managed by the proxy. How? Going back to the same analogy Srini introduced: imagine that for each room, you have a person sitting at the door.
Each time someone goes in, that person controls who goes in; when someone wants to go out, that person controls who goes out. And when you're able to track that, you are also able to count how many people went in and how many came out: you've got metrics. When someone is coming in, you can ask them where they came from; when someone is going out, you can ask them where they are going: that way you've got tracing, which is another cross-cutting concern. At the same time, the person can maintain a register of all the people coming in and going out: you've got logging. So this way, all these cross-cutting concerns, along with network traffic and routing, can be managed here. Also imagine someone in the room wants to visit another place and asks the doorkeeper, "How do I reach service B?", and the person says, "Give any message to me and I'll pass it on." That's how the proxy offloads the routing part as well. So that's the architecture at a high level. Moving to the next topic: there are multiple things you can achieve, and one pretty common use case you can achieve with a service mesh with really little effort is canary deployment. But let's try to understand what canary deployments are first. Suppose you have a website and it's pretty famous; everyone likes it, but there are some features you want to add, so you work on them. Now you have more features that you feel would add a lot of value to your service or your website. But if you just say, "I'll bring down version one and deploy version two," and imagine there's a bug, there is some problem: your website will go down. That's a risk. To mitigate that risk, you can do canary deployments. What that means is you slowly route the traffic: when you introduce a new version, you only send some percentage of the traffic to it.
Get the feedback, look at the logs; if everything looks okay, start increasing the percentage, and eventually V2 will serve all the traffic and you will shut down V1. That's how canary deployment works. Let's look at a sample configuration of how you do it in Istio. If you look at this YAML configuration, by the way, it looks almost similar to a Kubernetes manifest; Istio resources do follow a similar structure. Here the kind is VirtualService, and what I'm saying is: in this virtual service, if you look at the destination, I want to route traffic to hello-world, and if it is subset v1, send some traffic there; if it is v2, send some traffic there. By the way, do you see an error here? If you look at the weights, that is the percentage: the two weights here add up to 130% of traffic in total, which is wrong; you can only have 100%. So there is something wrong here, and this is one example of what can go wrong in your configuration. You would realize it is wrong when you deploy, but is there a way to get early feedback? Yes, the same tool set can be used to get early feedback. Maybe, Srini, you can walk us through. So if you pick the same service splitter, you can see how we could define policies around it. One policy could be that the host name has to be the same for both versions; that's quite simple. Another is that the weightage has to add up to a hundred and shouldn't be above it, and also that the versions have to be different: it doesn't make sense to give the same version twice and then route 10% and 90% to it. These are some of the policies you might want to test. You may have tested how your application works, version one and version two, in a different environment post-deployment, but there are some things you could still take care of even before you deploy.
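The mis-weighted virtual service described above can be sketched as plain data and checked before any `kubectl apply`. This is a minimal illustration: the dict mirrors the shape of an Istio VirtualService (host names and weight values are made up for the example), and the check is the early feedback the speakers are describing.

```python
# A sketch of the mis-weighted VirtualService from the slide, held as plain data.
# Field names mirror Istio's VirtualService schema; the values are illustrative.
virtual_service = {
    "kind": "VirtualService",
    "spec": {
        "hosts": ["helloworld"],
        "http": [{
            "route": [
                {"destination": {"host": "helloworld", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "helloworld", "subset": "v2"}, "weight": 40},
            ]
        }]
    },
}

def weights_sum_to_100(vs):
    """Early feedback: every HTTP route's weights must total exactly 100."""
    for http_route in vs["spec"]["http"]:
        total = sum(r.get("weight", 0) for r in http_route["route"])
        if total != 100:
            return False
    return True

print(weights_sum_to_100(virtual_service))  # False: 90 + 40 is 130% of traffic
```

Catching this statically turns a runtime surprise (Istio rejecting or mis-routing the split) into a failed check in the pipeline.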
So these are some policies that we have defined; let's quickly go through them. One is for virtual services: in the first policy, I check that the kind is of type VirtualService, then I get the current host and canary host and assert that they are the same; if they are not the same, I deny that deployment. The next policy again checks whether it's of kind VirtualService, extracts the weights from the destination routes, assigns them to current and canary, and asserts that their sum is not greater than a hundred; if it is greater, I also deny the deployment. So this is how we can make sure these policies are actually adhered to in the service splitters. If you look at this, one check is successful because the host names are the same, and the failure here is that the service splitter weights add up to more than 100, which is not possible. In this way you can define your own customized policy checks. You can define policies even for service discovery: how a service gets discovered, for example service A says it has to talk in this way to service B, how service B is identified, and how they need to talk to each other; all of this can be defined as policies, and you make sure those policies are actually tested against the service splitters or the service discovery configuration. This way, even before you deploy something to production, you can verify most of these small things upfront rather than deploying and then waiting for the service to start receiving requests. And this is something you tend to change often, right? You play around with the configurations in the service splitter: one time you might go with 50/50, another time with 10/90, and so on.
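The three spoken policies (same host for current and canary, weights not exceeding 100, distinct versions) were shown in the talk as OPA-style deny rules; as a hedged sketch, the same checks can be written in plain Python against the manifest data. The structure and names again mirror the VirtualService shape assumed above.

```python
def policy_violations(vs):
    """Return deny messages for the three spoken policies; empty list == pass."""
    violations = []
    if vs.get("kind") != "VirtualService":
        return violations  # these policies only apply to virtual services
    for http_route in vs["spec"]["http"]:
        routes = http_route["route"]
        hosts = {r["destination"]["host"] for r in routes}
        subsets = [r["destination"]["subset"] for r in routes]
        total = sum(r.get("weight", 0) for r in routes)
        if len(hosts) > 1:
            violations.append("current and canary must share one host")
        if total > 100:
            violations.append("weights must not exceed 100")
        if len(subsets) != len(set(subsets)):
            violations.append("current and canary versions must differ")
    return violations

# The mis-weighted canary from the slide: same host, distinct subsets, 130% total.
canary_vs = {
    "kind": "VirtualService",
    "spec": {"http": [{"route": [
        {"destination": {"host": "helloworld", "subset": "v1"}, "weight": 90},
        {"destination": {"host": "helloworld", "subset": "v2"}, "weight": 40},
    ]}]},
}

print(policy_violations(canary_vs))  # ['weights must not exceed 100']
```

In a real pipeline these rules would live in Rego and run via a tool such as conftest against the rendered manifests; the Python version just makes the deny logic explicit.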
So these kinds of policies help us make sure about the deployment YAML, the service splitter YAML, or any templatized configuration in your application. It's not that you send only application code to production; you send it along with its own configuration. For example, in terms of the service splitter, we might also have to ensure that the logic is actually fine across the individual versions; you would do that post-deployment, or in another phase of the lifecycle. Here we are just ensuring that certain policies, or opinionated checks, that we have defined are adhered to. So is that enough? Is that enough for us to say that the configurations will go to production and work fine? Testing: we will never do enough of it; there could still be bugs. What we wanted to do is reduce the impact: we wanted to test for it upfront and get early feedback by testing the templatized configurations in your application. Even if you take the same service splitter example, making sure a 90/10 split really is 90% to V1 and 10% to V2, is that enough? Maybe not. You might also have to ensure that the database both versions use is compatible enough for serving the needs of V2; you would probably do that in a different phase of the cycle. But you might also ensure that both these versions point to the same host; it doesn't make sense to test V1 from host one and V2 from host two. So how could we ensure we don't mess up those kinds of things? Those are the ways you could do templatized configuration testing: building static checks, opinionated checks, defining policies around them, and making sure the policies you have defined are themselves well-formed and tested against your application.
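Since the talk stresses *templatized* configurations, here is one possible sketch of what testing a template looks like: render the template with concrete values, then run the same static checks before anything is deployed. The template shape, field names, and `render_and_check` helper are all hypothetical, using only the Python standard library.

```python
import json
from string import Template

# A hypothetical templatized split: weights are filled in per environment.
split_template = Template(
    '{"host": "$host", "routes": ['
    '{"subset": "v1", "weight": $current_weight}, '
    '{"subset": "v2", "weight": $canary_weight}]}'
)

def render_and_check(host, current_weight, canary_weight):
    """Render the template, then apply the static checks before any deploy."""
    config = json.loads(split_template.substitute(
        host=host, current_weight=current_weight, canary_weight=canary_weight))
    weights_ok = sum(r["weight"] for r in config["routes"]) == 100
    versions_ok = config["routes"][0]["subset"] != config["routes"][1]["subset"]
    return weights_ok and versions_ok

print(render_and_check("helloworld", 90, 10))  # True: a valid 90/10 split
print(render_and_check("helloworld", 90, 20))  # False: 110% in total
```

The same idea applies whether the templates are Helm charts, Kustomize overlays, or raw YAML: test the rendered output, not just the template source.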
Yeah, that's the end, and that's us. Thank you. Well, thank you; there's one question, maybe we can try to answer it. Sure, maybe I'll take a minute. The first one was about what the other options are, like whether Istio is pretty much the only one. What else can you use? HashiCorp Consul is something we have tried ourselves; Srini and I deployed it on virtual machines and wanted to connect the service mesh even to services running on virtual machines, and we were able to do that, so Consul supports it. And I think the follow-up question is whether, from a future perspective, Consul would have all the service mesh features that Istio does. It's a paradigm, right? I think most of them have a similar set of features, but the way they offer it, the way you can configure it, look at it, and change it, that differs; that's the differentiating factor. They will also have their own unique features: for example, Istio is Kubernetes-only, but HashiCorp Consul can run across different platforms, so that is one additional benefit of Consul. There are differences, but on the whole I think they offer the same set of features; if you can achieve canary deployments with Istio, you can very well do it with Consul or Linkerd also. That was an insightful talk. Thank you all for joining us and listening to us, and for giving us the opportunity to speak.