Welcome everyone to the session "Automating the Known Unknown" by Sagar and Joel. We are glad they could join us today, and without any further delay, over to you guys. Thanks, Nidhi, for letting us in. So hi everyone, I'm Sagar. I'm currently working as a senior manager, SDET, at Jio, and I have over a decade of experience in the testing and test automation space. I would also love to have my colleague Joel introduce himself. Joel, can you please introduce yourself? Hi everyone, welcome to the session. I'm Joel D'Azario. I'm working with Xnsio as a consultant for Jio, and I have around 18 years of experience in the tech space. Okay, thanks Joel for the short introduction, and welcome aboard everyone for the case study we are going to present: automating the known unknown. Joel and I will drive you through this entire session, or case study if you like. Before we get into any more details, I want to give you a 30,000-feet view of the application or system we are testing, so you can appreciate the complexity involved in testing such a big component. We have a whole set of client applications; we have an edge layer; we have a number of microservice components such as consumers, producers, and Kafka; and a number of data stores: databases, Elasticsearch, caches. And the most problematic area, which we will talk about most in this session, is the external systems on the extreme right-hand side of the diagram. They represent all the external systems or external entities the application under test interacts with. Now, I'm quite sure you also test such complex systems, so can you type into the chat window what different types of known unknowns you can think of, or what you understand by "known unknowns"? Please write your thoughts in the chat. Currently I'm not seeing any chat coming in. Guys, please keep it interactive. I'll give you a hint. Okay: third-party dependencies. I can see Gaurav typing. Thanks, Gaurav, for that input. Negative scenarios from third parties. Yes, exactly; that is what we will be talking about, and Joel will be the point person explaining how we have done these things. Okay, so I think we got "flakiness of tests". I don't think flakiness of tests is itself a known unknown; flakiness is something test design needs to take care of, but these known unknowns do make tests flaky. Testing 500 responses from third parties. Race conditions. Quite a lot of good inputs coming in; thanks for typing these down. At a broader level, if I try to bucketize the inputs so far: first, we have to test the application when an external system goes down, which is typically the 500 scenario some of you were talking about. Second, testing the application when there is a timeout from an external system; these are mostly the third-party dependency cases, where the third party times out or doesn't respond in time. Third, an interesting one: sometimes our application services are ready but the external systems are not yet ready, so we cannot test such scenarios at all. And this last point is my favourite, because there are scenarios where you have to test with only a few of the APIs failing, not all the APIs of a service.
These are very difficult scenarios to automate in integration and other shared environments, and they require a lot of collaboration effort. And lastly, validating timeout and other negative scenarios on the UI, which again depends on the external systems we interact with. Throughout this case study we will talk about these unknowns and how we tackle them at Jio. Now, when I bring up these points, that these are tests we need to automate, that these are the known unknowns, everybody talks about mocking: okay, we can use a fake server, and it will solve our problem. And most of the world uses fakes in a manner where they have API tests for each of the different services; the tests communicate independently with a mock server, take a response, and check the behaviour of each service. The same happens for the front end. Now, why have I kept back end and front end as separate entities? Because, as shown in the diagram, we have an edge layer which deals with the internal services, and the internal services deal with the external services in the ecosystem. So the mocks and the interfaces for front end and back end may differ a bit. That's why there are separate API tests with an API mock server, and front-end tests with a front-end mock server. Quickly, can people go back to their chat windows and type what challenges they think this approach might bring, or whether they have faced such challenges in their own experience? Sorry for making you type, but please help me out with your thoughts on what you think is a challenge if you test this way. Okay: change of API contract as part of a new feature. Good. Test data. Good call-out; test data is a really painful activity with this kind of mock. "Not all data sets can be tested." I slightly disagree with this point, because mocks give you exactly the flexibility to test all different types of data; but if needed, we can have a detailed discussion in the Hangout that Nidhi will be introducing after this session. "How do you test your mock?" This is an interesting question; please be patient on that one, because how we test the mock itself is coming up. In a while I will hand over to Joel, who is something of an expert on service virtualization and mocking; he is an active contributor to the Specmatic tool itself, which will also come up shortly. So, we have quite a few good inputs. As per our experience, the very first drawback of this approach is that we create separate tests for front end and back end, and that brings in duplicated maintenance and related overhead. Secondly, the integration of front end and back end only gets tested in a higher environment, which is not what we want at Jio under an agile methodology: we want to shift left and have this interaction tested as early as possible. And, as someone mentioned, static mocks and test data preparation are maintenance-heavy activities that need a lot of planning and strategizing; as the application scales, maintenance of the mocks adds further complexity. The most interesting point we want to highlight is that static mocks are not intelligent enough to validate the incoming request, because mocks are mostly configured simply to take a request and provide a response; they don't actually validate the schema or the genuineness of an incoming request. A small illustration of what that lets through follows.
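To make that point concrete before the story Sagar tells next, here is a minimal sketch of a naive static stub definition; the endpoint and field names are illustrative, not the real Jio payloads. The stub matches on the presence of a field rather than on its type, so a consumer bug that turns an integer into a float sails straight through:

    # hypothetical static stub definition (names illustrative)
    request:
      method: POST
      path: /merchant/details
      body:
        pincode: "*"          # naive stubs typically mean "match any value here"
    response:
      status: 200
      body:
        success: true
    # both of the following consumer payloads would be served the 200 above,
    # even though the second would break the real backend:
    #   { "pincode": 400709 }
    #   { "pincode": 400709.00 }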
Let me take a moment to tell you about one scenario where we were following a similar mocking approach, and the front-end application somehow converted an integer value to a floating-point value: it added a .00 to the input. The mock tests all passed, because the mock server was only expecting a field called pin code, and it got that field, so the mock worked. We went to a higher environment, and there the backend services failed, because they were expecting a pin code like 400709 but were getting 400709.00. That's where an intelligent mock, one that can validate the incoming request, becomes really important, and we will talk about that as well during this talk. The overhead of stubs for internal services is similar: static mocks are maintenance-heavy, and since we have a lot of interaction between services, and between clients and services, we incur the overhead of internal service mocking as well. Now, I won't dwell on this slide, because I will come back to it when I continue the session; but if you compare this diagram with what we showed earlier, you can see that everything that was on the right-hand side is now stubbed using a tool called Specmatic, which supports intelligent, dynamic runtime mocking that the test itself can control. You can see a dotted line that goes to the Specmatic server, which sets dynamic stubs, dynamic responses, and the end-to-end ecosystem works. When we get to the demo and the next part, I will take you through this in more detail, but for now I will hand over to Joel, so he can take you on a journey through service virtualization. Joel, can you please take over? Yeah, thanks. Sagar, you'll need to stop sharing. Yes. Thanks. Taking over from where Sagar left off on this slide, just to reiterate briefly: at Jio, we've created a special environment in which we deploy the key services in the centre. The services to the right, a number of external systems, everything in the peach boxes, are basically not in that environment, but have been stubbed out using a tool called Specmatic; mocked, if you like. Just to simplify the diagram and reduce it to its smallest, simplest possible version, this is what we typically do. A service virtualization tool is a way by which you can simulate all those APIs instead of actually having them in your environment, and your isolated test environment is then meant purely for testing the clients, the consumers of those APIs. Your tests set up state on the simulations, they take care of that, and then they run. There are several benefits this confers. You can simulate any number of dependencies without expensive test setup; if you're going to run an isolated environment, you can't realistically have all the APIs you use set up locally, so the only practical way is to simulate them. It's really fast, it has no downtime, and you have complete control of state setup. But, as some people have already pointed out in the chat, while these simulations are really fast and light and easy to set up, they can go out of sync with the actual API.
That is, the simulation can go out of sync with the API being simulated, and then we end up getting a bunch of integration errors when we put the consumer and the API into an integrated environment for the first time. That's not good, because the higher the environment, the longer it takes to resolve these kinds of errors. We really want to shift left; we want to catch these errors as early as possible, ideally in the development environment on the laptop, before anything even gets deployed. And how do we do that? This is a problem in which you must somehow nail down the interaction between the simulated API and the simulation itself. This is what we are calling contract-driven development. Contract-driven development means that before you even start building the consumer, and before you even start building the API, you start with the contract. I'll give you a small example of a contract. A contract basically contains all the details about the structure of your API. OpenAPI is the industry standard for describing HTTP APIs, and there's a lot of rich detail an OpenAPI specification can contain. I've shown you on the screen a small sample file containing an OpenAPI specification for a sample application we'll come back to later: an order API. You can see the products: there's a path for /products/{id}, which means that /products/10, for example, might give you the details of product 10. What does that look like? The JSON request might have certain keys; the response might have certain keys; you expect certain headers to be there; which method will you use: there's a GET, there's a POST, there's a DELETE. There is so much information; this is complex stuff, and you can put all of it in the contract. And this is not a document; it is an executable specification. I say that because it is important to execute these. It is always easy to read a document and make a mistake; it is important to be able to execute them, because that is how you get feedback. The contract then validates the consumer's expectations of how the API will behave, and it also validates that the API is actually honouring those expectations and not going to break any consumers. So typically your architect, or perhaps a developer, whoever it is, starts off by writing a contract, something like what I showed you above. Once that is done, the consumer developer starts building the component and uses the contract to simulate the upstream API. And by using the contract, you can't just simulate anything: if the API is expecting to receive a JSON object but I try to send XML, the contract will break it; if the API is expecting a string ID but I try to send an integer, the contract will catch that. I'll show you some examples of this later. In the same way, the API developer uses contract tests driven from the same contract. If you think about it, the contract contains all the structural information: what the request payload should contain, what the headers should be, what the paths should be, and so on. All we've got to do is add actual values, and now you have an HTTP request; a trimmed sketch of such a contract follows.
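For readers who have not seen one, this is roughly what such a contract looks like. It is an illustrative sketch only; the paths, fields, and enum values are modelled on the demo's description, not copied from the actual file:

    # order_api_v1.yaml -- illustrative OpenAPI 3 contract (not the real file)
    openapi: 3.0.0
    info:
      title: Order API
      version: "1"
    paths:
      /products/{id}:
        get:
          parameters:
            - name: id
              in: path
              required: true
              schema:
                type: integer          # "/products/abc" would violate the contract
          responses:
            "200":
              description: Product details
              content:
                application/json:
                  schema:
                    type: object
                    required: [name, type]
                    properties:
                      name:
                        type: string
                      type:
                        type: string
                        enum: [gadget, book]   # an enum, as mentioned in the demo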
You can take that HTTP request, hit the API, and get a response back. The contract also tells you what the response should look like, so you can validate that too, using the contract. The contract thus becomes a single guardrail: you lay down that guardrail first, and then you start development, and now you can start development independently. The consumer can start anytime; the provider, the API, can start anytime. It doesn't matter, because the consumer can simulate the API accurately whenever it wishes, and the API can effectively simulate the consumer using contract tests. There's a whole class of errors that gets eliminated just by doing this, so that when you deploy to the integrated environment, you don't hit these problems. At Jio, we are using an open-source tool that Sagar referred to earlier, called Specmatic. It's a project on GitHub; it's open source, and we invite you to take a look and contribute if possible. It handles both the simulations, that is, your service virtualization, and running contract tests; you just feed it a contract. I will show you a demo of how this works later. Taking a quick deep dive into service virtualization: in your test environment, the consumer is wired up to the simulated API. Remember that the consumer has no idea this is a simulated API; as far as the consumer is concerned, it's just an endpoint. We, the people running the tests, know we are simulating the API, because we have an isolated environment. In the tests, what happens is that the tests set up expectations with the simulation, because the simulated API does not inherently know what to do. So you tell Specmatic: when you get this request, return this response; when you get that request, return that response. Specmatic will honour your request, but before doing that, it first validates the expectation against the contracts you gave it at startup. So if your test says "on GET /products/10, return this response", Specmatic checks: is there such an API? Is that /10 a number, as the OpenAPI spec says? And the response you're telling me to return: does it meet the structure specified for that API in the OpenAPI contract? Okay, great. Then the test moves ahead and actually hits the consumer, and it may pass or fail depending on how the consumer behaves. But if the simulation is wrong, if the expectation is wrong, Specmatic immediately gives that feedback to the test, right there. I'll show you a demo of this very shortly. I have a small sample application: a small HTML UI for a little e-commerce application, hitting a BFF, a backend for that UI, which in turn hits an order API. We're not really focusing on the BFF internals today; at this point I'm going to show you how we stub out the order API using a contract fed to Specmatic, and we will see some tests running against the BFF that exercise the contract. Let me now switch to my code. You can see here the spec I was showing you on my screen before.
You can see all this interesting detail. For those who have not seen OpenAPI, it's a really good standard for describing HTTP APIs. Look at the product details, for example: I've specified a name of string type, and the type field itself, if I remember correctly, is an enum that takes various values, and so on. All of this information is here. Okay; yes, thanks, Sagar. This is your specification. It contains all the details; it is essentially your contract, and it is the starting point for your development: you don't start developing either the API or the consumer until this specification is ready. Once it is ready, let me show you the tests. We are using Karate for these tests, and we've written a small Karate helper. The helper executes the tests in this file, which I will show you in a moment, and before executing them it starts up Specmatic itself. Specmatic here has been configured with the contracts it is going to stub out; this is the order API v1 file I showed you a moment ago, and we are going to see how that stubbing happens. I'm going to run these tests now, and while they are running, I will show you what the API tests actually look like. This is the next test. Here we are telling Karate: this is the URL we are going to use; we are going to make a request to this URL. And this is the Specmatic expectation: we are basically telling Specmatic, expect /products, the query parameter will be "type" and the value will be "gadget", and this is my response. And Specmatic has validated this response and checked that it matches the contract. So, for example, what happens if this is wrong? "Sorry to interrupt you; could you zoom in a bit?" Let me zoom in again; I hope this is clear. "This is clear, thank you." So, essentially, we'll see what happens if this is wrong later; for now, we have told Specmatic to expect this request and return this response, and Specmatic has accepted it. And now we hit our API: this is the component we are actually testing, the system under test. We are getting the product type from the data-driven tests, which is a feature of Karate; the product type here is "gadget", and there are a bunch of other values we use for validation. We get a response. Internally, what happened is that our system under test hit the Specmatic virtual service and got a response back; that response was the one we set, so it was able to do its processing and return some value. The same goes for the other tests. So there is a two-step process: first you set up the expectation on Specmatic, and then you make an API call to your system under test, which in turn calls its downstream, which happens to be Specmatic. And this is beautiful, because suddenly this test is completely under control; the downstream is completely under control; nothing can go wrong outside the test, because Specmatic is running locally. I'm not affected by network availability; I can literally do this entire piece of coding completely offline, and write all my tests completely offline. I don't have to worry about the fact that my staging environment is unavailable.
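As a sketch of what that two-step dance looks like on the wire: Specmatic's stub server accepts runtime expectations over HTTP; per the Specmatic documentation this is a POST to the /_specmatic/expectations endpoint, though you should verify the exact URL for your version. The test simply sends a request/response pair before hitting the system under test. The payload below is shown as YAML for readability (the actual POST body is JSON), and the product values are illustrative:

    # POST http://localhost:9000/_specmatic/expectations
    # (endpoint as per the Specmatic docs; verify against your version)
    http-request:
      method: GET
      path: /products
      query:
        type: gadget
    http-response:
      status: 200
      body:
        - id: 10
          name: XYZ Phone
          type: gadget
    # Specmatic first checks this pair against the OpenAPI contract it was
    # started with; an expectation that violates the contract is rejected
    # immediately, before the system under test is ever exercised.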
What, then, prevents the API from drifting away from this contract? Because all of this is of no use if I adhere to the contract perfectly on the consumer side while the API drifts away from it; then we are back to square one. I will show you how this works. This is the API project. The API project essentially runs contract tests, so it's symmetric: on the consumer side you don't have the API, so you simulate the API; on the API side you don't have the consumer, so you simulate the consumer, using contract tests. Where do the tests come from? Let me run them. There is a very similar file here, which basically contains this one line. It's the same spec file, the same YAML file. This is a Specmatic artifact that imports the YAML file; I will show you how in a moment, but it is really the same YAML file we are using, and these contract tests have been generated from it. All of these requests and responses essentially come from that YAML file. The way it works is as I was saying: the YAML file is imported in this spec-file wrapper, and the wrapper takes all that wonderful structure, the requests and responses, takes the request structure for each API call, combines it with some values so you get an actual request, makes the call against your API, gets the response back, and validates that it matches the specified response. And by running contract tests against your API, thus simulating the consumer, and by running contract stubs, that is, service virtualization, on the consumer side, thus simulating the API, you make sure of two things: that the consumer's expectations of how the API works are in sync with the contract, and that the API is not going to break those expectations. That is how you make sure the two will not break when they are actually put together in an integration environment. Now, a quick demo of what happens when you try to set an expectation the wrong way. We know, for example, that type was supposed to be "gadget"; there is a certain set of expected values, and "furniture" is not one of them; you're not supposed to be able to set type to furniture. If I run the API test now, you should see some breakage, and the breakage will come from Specmatic: Specmatic highlights that this test has failed, itself returning a 400 here, basically saying there is a problem. I won't go into the details, but Specmatic has broken the test; the system under test itself never got a chance to run. So that's a quick summary of how Specmatic works, and how we use contract-driven development: you test your consumer in isolation, so you don't have the problems of a staging environment; you test your API in isolation as well, which also avoids the problems of a staging environment; and then, when you actually put them into a staging environment, you can be sure that they will integrate. The wrapper file I mentioned is sketched below.
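For reference, the wrapper file Joel mentions is a small Gherkin-style artifact. The sketch below follows the pattern in the Specmatic documentation, with an illustrative file name:

    # order_api_v1.spec -- Specmatic wrapper importing the OpenAPI file
    Feature: Order API contract
      Background:
        Given openapi ./order_api_v1.yaml

Running Specmatic in test mode against this file (java -jar specmatic.jar test, pointed at a locally running API; check the exact flags for your Specmatic version) generates one contract test per operation in the imported YAML, which is how the consumer is simulated on the provider side.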
One quick note here: we have to share the same contract across the board; that is the whole point, that you're using the same contract everywhere. There is no point in having separate copies of the contracts, because those copies will drift apart from each other, and then you've lost the plot again. So, to make sure we have the same copy of the contract wherever we use it, what we have done at Jio is introduce a central contract Git repository. Contracts are nothing but text files, so you can simply put them in a Git repository, and the same repository is leveraged by the consumer teams as well as the API teams, so both stay in sync. There is a lot more here which I don't have time to cover right now. For example, what happens when you change the contract? The changed contract affects both the consumer and the API, which raises backward compatibility; there are ways to handle that using Specmatic. Perhaps you can catch up with us in the Hangout after the talk, or visit the Specmatic website at specmatic.in; I don't have a link here, but a quick search on GitHub for the tool will find it. Then there is API versioning: when you want to introduce a backward-incompatible change, you have to handle it with API versioning, which is another topic by itself. So, that's essentially it. To bring the story back and tie it to this diagram: what we have essentially done is use Specmatic, the big orange bar on the right-hand side, to simulate all of those external systems, and the set of services within our team, whenever they need to talk to these external systems, talk to the Specmatic stub instead. The external systems' teams also run these contract tests, and this ensures things stay in sync when we actually put these systems into a common environment. Back to you, Sagar. Thanks, Joel; let me share my screen. So, Joel has talked at length about service virtualization. Now, what are we doing differently, disruptively even, at Jio? First, we are extending contract-driven development not only to the back end but to the front-end application as well. Secondly, we have created an entire isolated environment for the application under test. I'm not talking about a single service under test; it is the entire application ecosystem under test. A change at any layer, be it a database change, a change on the consumer or processor side, or any microservice, can have its impact on the client application tested in isolation, and the power comes from intelligent stubbing provided by Specmatic. Again, I will reiterate: this dotted line from your test to Specmatic is the key that drives the entire test automation ecosystem. You can set dynamic expectations, and at the same time have validation done on the client side, whatever it is you want to check. Joel has already talked about the central contract repository, but I will take a minute more to explain how, as a pipeline process, we keep our contracts in sync. Every service or UI repository has a reference file that lists the contracts used, that is, the external-system API calls made by that service or application. A sketch of such a reference file follows.
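In the shape of Specmatic's own configuration, such a reference file looks roughly like the sketch below. The repository URL and contract paths are illustrative, and the key names follow the Specmatic documentation's specmatic.json format, so verify them for your version. Contracts the service stubs out are listed under "stub"; contracts the service itself must honour are listed under "test":

    {
      "sources": [
        {
          "provider": "git",
          "repository": "https://git.example.com/central-contracts.git",
          "stub": ["external/order_api_v1.yaml"],
          "test": ["our-service/service_api_v1.yaml"]
        }
      ]
    }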
Whenever a build happens, we create two artifacts: one is the latest copy of the contracts from the central repository, and the other is the build artifact itself. Both go through the static code analysis checks. Then, as part of the "deploy contract bundle" stage, the latest copies of the contracts are deployed onto the Specmatic server, so that when my automated tests run, they always run against the most recent contracts. If any external team makes a breaking change, my automation suite will not quietly run against the old contract and pass; it will fail, giving me a heads-up right in the lower environment, not in the integration environment. That's a small glimpse of the pipeline. Now, for the demo, I'm going to replicate a scenario I mentioned while talking about the drawbacks. There is an external service, and I want one of its APIs to return a failure. As an end user, I am trying to place an order, but the external system is not working; so I will not see the credit option, meaning I cannot place an order on credit, but I should still see the other payment options, so that my tier-one flow doesn't break. This is a very important piece of testing, because in production, if that API goes down, your ordering flow should not be impacted; it is a tier-one flow. So here is the flow. The Appium test sets an expectation on the Specmatic server, the big arrow, saying: if you get a request for this particular merchant, reply back with status code 500. Then it triggers the UI flow on the client APK. The client APK calls an internal service, internal to our application, and the internal service makes a call to the external service, which in this case is the Specmatic server. And since we have already told the Specmatic server, at runtime, that a request with this request body, for this particular phone number, should get a 500 response code, the failure is dynamic in nature: the tests running in parallel will not all get 500s; only this particular request will get the 500 response code. The back end processes the failure and sends it to the client application, and the client application shows the desired behaviour, which is validated by our Appium test. Let me quickly move to the demo piece. Similar to what Joel showed you, we have a JSON blob which says what method will come to you, basically a GET request; the path, which is blank here because we set it at runtime, since it depends on the merchant phone number; and the response and status code to return. This is a template we have already created for our test case. And this is the test case itself, in a typical BDD-driven framework: I log in; I search for a product (let me zoom in so it's visible to everyone); I add that product to the cart; and then I verify that the credit option is not available when the external API (not the whole system, just that API) is down, while at the same time verifying that the other payment options are displayed. This is a test case that is very difficult to replicate in an integration environment, and at the same time it is a perfect candidate for the known-unknown category: even if you do test such cases, you would be testing them at the very last leg of your testing cycle.
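Putting the pieces of that demo together, the filled-in expectation the Appium test sends at runtime would look roughly like this; it is shown as YAML for readability, with an illustrative path and phone number rather than the real Jio payload:

    # expectation set by the Appium test just before driving the UI
    http-request:
      method: GET
      path: /merchants/9876543210    # filled in at runtime with the merchant
                                     # phone number used by this test
    http-response:
      status: 500
      body:
        success: false
        message: Internal Server Error
        result: null
    # because the expectation is keyed to this one phone number, tests running
    # in parallel against the same stub are unaffected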
Now, as you can see here, we set an expectation for the request: the test reads that template and sets up the failure response. This piece of code does the magic for us: we say the path will be this, the path to which the internal system will send the merchant mobile number, and what I expect back: success should be false, the message should be "internal server error", the status code 500, and the result should be null; you should not get any result. I will run this test case for you, though in view of time I'm not sure we'll see it through; it runs on an emulator, and meanwhile the test is compiling. What is going to happen is this: my application will post a request to get the merchant details, and it will get the failure response, "failed to fetch merchant details", with status 500. That message comes from an internal service, which in turn got it from the external service, that is, the mock server. So if there is any problem either with the application forming the request or with the back end processing the response, both are caught in this flow, and you can test both the front end and the back end in isolation, without any dependency on an external system. To make this clearer, since we are short on time, I will go directly to the report. We log all the reports on ReportPortal, so when this test executes you get a report where you can actually see what is happening: the expectations being set (these are the expectations for login; this is the expectation for the external system when we search for a merchant), and then we trigger the login flow. "Sagar, just a heads-up: five more minutes for the session." Sure, thank you. Yes; so we validate that we have finally logged in, and before we open for Q&A, just to show you: this is where we set the expectation that we want a 500 response code from the external service, and we validate the behaviour on the UI: we can see the cash-on-delivery option, but we cannot see the credit option. This is essentially what we have done so far: we shifted the entire UI testing to the left. Without further delay, I want to open the stage for Q&A; I think there are already some questions in the Q&A panel. There are questions on Specmatic. Joel, can you take those? Yeah, I'll just read them out; there are around three questions. "Is Specmatic open source?" Yes, Specmatic is fully open source. Next: "Is there an automated way to generate this contract?" At this point in time we don't have an automated way to generate OpenAPI contracts, but I think we're looking at doing something along these lines at some point in the future. And one more: "Can you have dynamic templates in Specmatic, maybe with some logic? For example, if a response needs to include something from the request, like some IDs, or to calculate something from the request, for example, add two integers and put the sum in the response?" I'd like to add one more thing on the previous question first; I will come to this one. Although we don't auto-generate contracts, there is a very nice tool called Stoplight Studio; you can find it with a quick search, and it is really powerful.
It has a nice UI and makes it very easy to write OpenAPI contracts. To answer the second question, about dynamic behaviour: Specmatic currently includes certain matchers; if, for example, you want to match any number, you can do that. Usually the expectation is that you know what your application is about to ask for, so you know what the request will look like and, from that, what response is expected. It would be good to work through an example of this, so maybe catch up with us later in the Hangout and we can look at it in a little more depth. "How does Specmatic handle API versioning?" In this context, Specmatic basically treats APIs as files. We have a static check for backward compatibility; it's pretty simple to use. There are certain rules of thumb: if, for example, you add a compulsory parameter to your request that was never expected before, you know for a fact it is immediately going to break all your downstream consumers. This can be checked statically, and Specmatic has a simple backward-compatibility check to get this feedback. And the rule of thumb is: if you are introducing a backward-incompatible change, bump up the API version. Specmatic handles versioning through the file name: you'll have V1 in the current file name, the new file will have V2, and the respective consumers and providers are expected to stay on track with those respective contract files. "How are you mapping the mocked API to the mobile app?" I may not be able to answer this one fully, but briefly: there is a descriptor file called specmatic.json that declares all the downstreams your application wants to stub out. The mobile application's side will have to declare its downstreams in that file, and similarly the actual API declares the contracts it adheres to. Essentially, that's how the mapping works, though I'm not sure I've fully answered the question. Let me also add something there. At least in the Jio ecosystem, the application we test is an interfacing application between the external world and the client application. Whatever request originates from the mobile app first comes to our internal services, which all actually run in our isolated environment, and only the external systems those microservices depend on are stubbed using Specmatic. So all the mocking is taken care of behind the backend services; the app itself doesn't do any mocking. It calls the same actual internal-service APIs it would call in a higher environment. That is what we are doing differently from most of the industry in terms of mocking; if you need more details, we can talk in the Hangout session. Right, and a final question, which looks like the last one, if we have time: "You are stubbing external services, but say I want to stub an API service, treat my consumer UI as the system under test, and get responses from the stub?" Absolutely, you can do that.
You can stub out any service; the fact that it was an external service here is just a consequence of the approach we are taking, but you can pretty much stub out any API. In fact, that is exactly one of the things we are doing as well: we are writing tests for one of our mobile apps that stub out its downstream APIs using Specmatic, and those tests can run from the desktop itself; in fact, that was also Sagar's demo. I think that might answer your question. Thank you, Sagar; thank you, Joel, for sharing your experience with us today, and thanks everyone for joining. Thank you for being a patient audience. Thank you.