Hi, welcome to this demo of contract-driven development, where I'm going to use Specmatic, an open source tool, to turn your API specification into an executable contract. We have an app which sends a request to a BFF, a backend for frontend, which in turn sends a request to a domain service. Note that you could have one or many domain services. Once the domain service gets back with a response, we want the BFF to log the message onto a Kafka topic, which allows our analytics server to pick it up and do its thing. The BFF then gets back to the application with its response. In order to allow the app and the BFF to be developed and deployed independently, we would like to capture the contract between them in an OpenAPI specification. This captures things like which URLs are exposed by the BFF, the request parameters, which parameters are mandatory or optional, and also what response the BFF gives to each request, meaning which HTTP statuses it can return and the schema of the response. Similarly, between the BFF and the domain service, we would have another OpenAPI specification. For Kafka, we capture the specification in something called AsyncAPI, where you can describe which topics are available and the message formats on those topics. Now, let's jump into the demo. All right, to start, let's first bring up our domain service, which is the order API. It's a Spring Boot app, so I'm just going to get it started. And there we have the domain service. Next we need to start Kafka. So let me go here quickly, and you will see that I'm using Specmatic to stub out Kafka; we'll get into the details a little later. Let's get this kicked off. You'll see that Specmatic is starting up: it figured out where it is and which ports are available, and as you can see, it is now listening for messages on this topic, product queries.
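As a rough illustration of the AsyncAPI side, a minimal document for the Kafka leg of this setup might look like the sketch below. The topic name matches the narration, but the payload fields are assumptions for illustration only, not the demo's actual file:

```yaml
# Hypothetical AsyncAPI 2.x sketch for the Kafka topic the BFF publishes to.
# Payload fields are illustrative assumptions.
asyncapi: "2.6.0"
info:
  title: Product Analytics Messages
  version: "1.0.0"
channels:
  product-queries:          # the topic the analytics server consumes from
    publish:
      message:
        payload:
          type: object
          properties:
            id:
              type: integer
            name:
              type: string
```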
Let's also start our BFF layer, again a Spring Boot application, which we kick off with a Gradle command. And there we have that Spring Boot application started as well. Let's make sure all of these pieces are wired up and working correctly. I'm going to make a curl request, and I expect one message to come back when I make this request. Sure enough, yes, we got one message. So this means all our services are now wired up correctly. With that, we are ready to get started. I have this OpenAPI specification which describes my BFF layer. It has a bunch of paths. There is /products, to which I can make a POST request to create a new product; it can respond with a 201, a 400, or a 503. I also have a find-available-products path, which takes a query parameter plus a header parameter called pageSize to get me back a list of products. I can also create orders, and so forth. I'm going to use the Specmatic plugin, which is built into VS Code, to run the contract tests. And there we go. You'll notice that I'm pointing it at the BFF API specification we just looked at a minute ago, and also at where my application is running, which is port 8080. With that, let's run these tests. Notice that I have not written a single line of code at this point. When I run this, it goes ahead and generates and executes seven contract tests for me. Where did it find these contract tests? It derived them from the OpenAPI specification. Let's zoom in. As you can see, it made a request to /products, and it figured out that it can send 177 for inventory because the type is integer. There is type gadget, and for name it has again generated a random value. The server responded with a 201 and gave back an ID of 4, so this is reported as a successful test. Similarly, it made another request.
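For orientation, a minimal OpenAPI fragment for the /products path described above might look like the following. The field names and enum values are reconstructed from the narration, not copied from the demo's spec:

```yaml
# Reconstructed sketch of the BFF's /products path (illustrative only).
paths:
  /products:
    post:
      summary: Create a new product
      requestBody:
        content:
          application/json:
            schema:
              type: object
              required: [name, type, inventory]
              properties:
                name:
                  type: string
                type:
                  type: string
                  enum: [gadget, book, food, other]
                inventory:
                  type: integer
      responses:
        "201":
          description: Product created
        "400":
          description: Bad request
        "503":
          description: Service unavailable
```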
And this time you will notice that instead of type gadget, it has used type book. So how is Specmatic figuring out what it needs to send? Let's quickly go to the OpenAPI specification and look at this section. You will see that type is defined as an enum: gadget, book, food, and other. Specmatic takes that and iterates through it, and you will see that it has made a request for each one of these types. And of course, it has also made requests to find available products, got a list of products back, validated it against the response in the specification, and said, yes, this makes sense, this is all matching, hence this test has succeeded. It has also tried to request find available products with type string, and got back a 400 error; we'll get to why this happened in a minute. And finally, it tried to make a request to /orders with a certain product ID and got back a 404, meaning that product does not exist. One other cool feature of Specmatic is that it also shows you API coverage. Very quickly you can see which paths exist, both in your API specification and in your application, and it reports whether it was able to cover them or not. In this case, it found the find-available-products path, which has only a GET on it, with 200, 400, and 503 responses. It was able to make two GET requests, and those two were covered, but it was not able to exercise the 400 or 503 responses. It also found a /health endpoint which is missing in the specification, meaning it found it in the application but not in the specification. Wait a second, how is Specmatic figuring out that /health exists in the application but not in the specification? Here we use Actuator, which comes built into Spring Boot; using Actuator, you can discover which endpoints are available.
This can be very handy if you want to do any observability. Specmatic leverages the same thing and figures out: I found a /health endpoint on the application, but I do not see it in the specification. Similarly, it also found /orders in the application and flagged it as missing there. However, you can notice that there is what was supposed to be orders, with what looks like a typo, which is there in the specification but not in the application, and hence it is flagged as not implemented. Similarly for /products. This is cool because very quickly you get an overview of what is in your specification versus what is in your application, and with Specmatic's plugin we were able to figure that out and even execute some tests. Now let's clean this up and try to get better coverage, all right. The first thing I want to do is fix this typo in the specification so that we can make it work. So let's go right here; we see that there is a typo, and I'm going to fix it. With that, let's run the contract tests again, right here. Specmatic runs these contract tests again, and notice this time it says /orders is available in both places and it did in fact cover it. This health endpoint is interesting: I actually don't want it in my specification, since it's purely for monitoring and observability. So what I'm going to do is use one of the features we have here where we can exclude health endpoints or other such endpoints. With that, let's run this again. Specmatic goes ahead and executes all of these tests, and this time you will notice that /health is no longer being reported, and we have now achieved 33% coverage across all of our paths. 33% is a start: we have the positive 200 cases covered, but none of the 400 or 503 cases are covered at this point.
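The endpoint exclusion mentioned above is typically done in Specmatic's configuration file. A sketch of what that might look like is below; the exact keys have changed across Specmatic versions, so treat this as an assumption to verify against the documentation for your version:

```json
{
  "report": {
    "types": {
      "APICoverage": {
        "OpenAPI": {
          "excludedEndpoints": ["/health"]
        }
      }
    }
  }
}
```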
We also have two failing tests, as you can see. So out of the total seven tests that it generated, five are successful and two are failing. Why are two failing? Because Specmatic has tried to guess certain data and generate it, but that data does not exist. For example, here we tried to create an order with product ID 674, but 674 is not actually a valid ID in our database; there isn't a product with ID 674. So at this point, what we need to do is provide examples in our specification so that Specmatic can guide its test generation. We're going to use the plugin to generate examples. I'm going to go ahead and kick that off, and you will see that we are leveraging GPT-4 to generate these examples. All right, there we go. As you can see, Specmatic has been able to leverage GPT-4 to generate examples relevant to our context. Here we can see the difference, before and after. For products, it has generated an example of a successful request it can make, with iPhone, gadget, and 100 as the inventory, which makes sense. Similarly, it says the response should have ID 1, which is a valid ID in our case. There are several different examples. You can also notice that it has generated another example for the GET, where we are saying that I should get back a product with name iPhone, ID 1, and type gadget, and even a little description saying "latest iPhone model". Using GPT allows us to generate genuinely relevant examples. I could have written all of this manually, but since you can generate a lot of these examples by leveraging GPT, why would you want to do it by hand? All right, with that, I think we can close this comparison window, go back, and run our contract tests. Actually, I'm going to just reuse this window; let's clear this and run it. Here we go again.
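OpenAPI supports named examples that can be paired up across the request and response; a sketch of what the generated examples might look like is below, with values reconstructed from the narration rather than taken from the demo's actual file:

```yaml
# Illustrative sketch: a named SUCCESS example on the request body,
# paired with a SUCCESS example on the 201 response.
paths:
  /products:
    post:
      requestBody:
        content:
          application/json:
            examples:
              SUCCESS:
                value:
                  name: iPhone
                  type: gadget
                  inventory: 100
      responses:
        "201":
          content:
            application/json:
              examples:
                SUCCESS:
                  value:
                    id: 1
```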
And you will notice that this time Specmatic has generated only three tests; it used to generate seven tests, but now it's down to three. The reason is that Specmatic now uses only the examples you have provided to generate the tests. In this case, you can see that all our tests are passing; we don't have any more failures. Of course, we have one of each of these cases covered, so we're still maintaining the 33% coverage. The question is, can we do better? Yes, in fact, let's go ahead. What I'm going to do now is use a feature called generative tests, all right? So what are generative tests? Let me quickly run this, and as the tests run I'll explain what generative tests do. I'm going to clear this out so that you can see what's going to happen. Let's go ahead and run this. And wow, you can see that Specmatic is now generating 41 tests. As things scroll by, you can see there are some positive and some negative scenarios it generates; it runs a whole bunch of different tests. And of the 41 tests generated, only six succeeded and 35 failed. All right, so how did Specmatic generate 41 tests? We took inspiration from two things here: property-based testing and mutation testing. Let me explain each of these. What is property-based testing? In our case, we can look at the OpenAPI specification, and if a certain field or parameter is marked as mandatory, then we know that's a property of this API: for this particular request, this particular field or parameter is mandatory, and we have to send it. And so if you don't send it, you would expect a 400 bad request to come back, right?
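The idea of deriving negative tests from schema properties can be sketched in a few lines of Python. This is a toy illustration of the concept, not Specmatic's actual implementation: given a valid payload and an OpenAPI-style schema, drop each mandatory field and mutate field types, with the expectation that each variant should provoke a 400 from a well-behaved server.

```python
# Toy sketch of property-based negative test generation (NOT Specmatic's code):
# derive bad payloads from a schema by dropping required fields and
# swapping in values of the wrong type.

def negative_payloads(payload, schema):
    """Return (description, mutated_payload) pairs for negative tests."""
    cases = []
    for field, spec in schema["properties"].items():
        if field in schema.get("required", []):
            # A missing mandatory field should provoke a 400 bad request.
            cases.append((f"missing mandatory '{field}'",
                          {k: v for k, v in payload.items() if k != field}))
        if spec["type"] == "integer":
            # A string where an integer is expected should also be rejected.
            cases.append((f"string in integer field '{field}'",
                          dict(payload, **{field: "not-a-number"})))
        elif spec["type"] == "string":
            cases.append((f"boolean in string field '{field}'",
                          dict(payload, **{field: True})))
    return cases

schema = {
    "required": ["name", "type"],
    "properties": {
        "name": {"type": "string"},
        "type": {"type": "string"},
        "inventory": {"type": "integer"},
    },
}
valid = {"name": "iPhone", "type": "gadget", "inventory": 100}

for description, body in negative_payloads(valid, schema):
    print(description)
```

Each generated payload would then be sent to the API with the expectation of a 400 response, which is exactly the shape of the negative scenarios shown in the demo.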
So we can think about these properties of the OpenAPI or AsyncAPI specification and leverage them to construct a set of tests for us. Then, to build on that idea, we can look at mutation testing, where essentially you mutate the code, send requests to it, and see if tests that were passing earlier start to fail. Instead of doing that exact same thing, we took inspiration from it and changed the idea a little: instead of mutating the code, we mutate the request. So for example, if something is mandatory and we get a 201 back when we send it, then when we don't send it, we expect a 400 to come back, which is the kind of example you will see generated here. That combination of property-based testing and mutation testing is what we call generative tests, and that is what allowed us to generate these 41 tests. However, 35 of these tests are failing; let's understand why. They're saying a key named message is in the response but not in the specification. Okay, response body message; why is that happening? Let's look at one of the requests that was sent. It sent this request to orders with some value; count should have been a number, but in this case we have mutated the value and sent a string instead of a number, just to make sure your code can handle this and does not end up in an exception. And what we see is that this negative scenario has failed because the key named message, which is this one here, is not there in the specification. So what is in the specification? Let's go to the specification; this is the bad request response, okay? As you can see, we have timestamp, yeah, sure enough. We have status, okay, sure. We have error, cool. But here, in the specification, we have path, whereas the actual response has message. Yeah, that makes sense.
This looks like, again, a mistake in the specification; it should have been message. So let's update the specification accordingly, clear this out, and run the contract tests again to see what happens this time, right? It's generating the 41 tests again, cool. And as you can see now, we have finished; all the tests were generated successfully and all of them are passing. Wow, this is pretty cool. Let me look at the API coverage, and you can see that we now have 67% on all three paths, and you'll also notice that some of the 400 cases are being covered, which is pretty cool. So we are now covering both 200 and 400. Let's look at a few of these just to understand what it has done. In the beginning there are a whole bunch of positive scenarios, marked as positive, and these are the standard ones we've seen before. Let's scroll down to a negative scenario. Here we have a negative scenario where for name, which is a mandatory and non-nullable field, Specmatic has sent a null, and it has of course got a 400 bad request, which is expected in this case; hence we say this negative scenario has succeeded. Our application knows how to handle this correctly and give back a 400 response. And of course, it iterates through all the different enum types and has a test for each one of those. We'll also see a few other interesting examples, such as name being sent as 470: name is a string, but we're sending a number to see what happens, and sure enough the application throws a 400 back, so that test also succeeds. Like this, Specmatic figures out and sends different combinations. You can see here that name is also sent as a boolean value, and similarly you'll see somewhere that we play around with inventory and check whether that is handled.
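The corrected bad-request schema would then look something like the sketch below, reconstructed from the fields mentioned in the narration rather than copied from the demo's file:

```yaml
# Reconstructed sketch of the 400 response schema after the fix.
components:
  schemas:
    BadRequest:
      type: object
      properties:
        timestamp:
          type: string
        status:
          type: integer
        error:
          type: string
        message:        # previously (incorrectly) declared as "path"
          type: string
```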
Here you can see type was sent as null, and so forth. These are all valid examples of negative tests that make sure the application can handle all of that. That's how we arrived at this combination of 41 tests, validating both positive and negative scenarios. Let me quickly recap. We started with an OpenAPI specification, got the application running, and then used the Specmatic plugin to generate tests for us. We did not write a single line of code; just from the specification, it was able to generate seven tests. Five tests were passing and two were failing because examples were missing. Along the way we also got API coverage and discovered some mismatches between the specification and the application. We were able to fix those, and we were also able to exclude the /health endpoint, which we didn't want to cover in the specification. We then had tests working; however, the examples were missing. So again we used Specmatic, leveraging GPT-4, to generate examples for us, and with that we brought the test count down to the three specific examples we had given, all of which were passing. Then we turned on generative tests, which generated 41 tests; initially quite a few of those failed because, again, there was a mismatch in the specification. But once we fixed the specification, all 41 tests passed, and now we have pretty good coverage at 67%. However, I'm still worried about that 503; we do not seem to cover it. For that, let's look at another interesting aspect of Specmatic. If we go back to our slide over here, you will see that we have a domain service running, catering to the requests that the BFF sends. At the moment, we have a real domain service running, and the BFF is connecting to it.
Now I want to simulate a case where, for those 41 tests that I have, the domain service responds with valid responses like it's doing now, but I also want to add one new scenario in which the domain service does not respond back in time. Let's assume my BFF has a three-second timeout for receiving a response, but when it contacts the domain service, the domain service takes more than three seconds, say five seconds, to respond. In that case, I would expect my BFF to give me back a 503: the service is unavailable and I really can't do anything. So I want to test this scenario. How do you think we can do this, given that I want the 41 tests I already have to respond within the three-second timeout, but for only the 42nd scenario I want the domain service to not respond in time? Well, you could sit there watching these tests run and, when the last scenario is about to run, shut down the domain service to make sure it times out. But how would you do this in your CI pipelines? It's simply not practical. For that, we do have a feature in Specmatic that lets us simulate these conditions. But first, I want to stop relying on an actual domain service. Instead, I want to stub out the domain service and then do all kinds of fault injection and different scenarios, with full control over them. So let's see how we can replace the actual running domain service with a Specmatic stub. The good news is that if you already have an OpenAPI specification for it, you can leverage that. But it may also happen that you don't yet have an OpenAPI specification for the domain service.
Don't worry: Specmatic has a feature called proxy, through which we can record the interactions between the BFF and the domain service and generate an OpenAPI specification along with the request-response pairs, the stub data as we call it, so that you can replay all the requests exactly the same way. This is what we call service virtualization. So let's look at how that can be done. Let me quickly jump here and clear this out. I'm going to start a proxy server. What I'm saying is: hey Specmatic, this is my target, localhost:8090, which is where the domain service is running, and record all of the interactions into a recordings folder for me. I kick that off, and Specmatic says: okay, I now have a proxy server running on port 9000, channeling all requests to 8090. Perfect. Also, here in the application properties, you will notice that the order API, which is our domain service, is configured at 8090. We now want to say: not 8090, go to 9000, where our proxy server is running, so that it can channel all the requests. I make that change and quickly restart my BFF layer so it will pick up the change, and yep, there we go, it has started. Now let's go back to our contract tests and rerun them. I'm just going to run the contract tests again, and you'll see it goes ahead and reruns those 41 different scenarios. If I go to the proxy, you can see it recording all the traffic flowing through: the requests going out and the responses coming back. Looks like it has finished, which means our 41 tests have run and all of them are successful. Notice that the application is none the wiser; it behaves exactly as before, except that we have routed all the traffic through this proxy server.
So let me shut down this proxy server. When I do, you will notice that it has generated the OpenAPI specification for the domain service, and it has also generated 13 stubs. You'll notice that we had 41 tests running but only 13 stubs have been generated. This is why we call it intelligent service virtualization: it is not just a dumb recording of every request. It actually looks at the requests and says, yep, these two requests are similar, I can generalize them, and it distills everything down to these 13 unique requests. Let's go to the recordings folder and look at this generated OpenAPI specification. It has generated an OpenAPI specification for /products, and it says: okay, there is a GET on this which takes a query parameter called type; it has a bunch of other parameters that it expects; and then it sends back a response, which it has also nicely reused by capturing it over here. So you can jump over here and see that id, inventory, name, and type are what the response comes back with. Like this, Specmatic has now recorded several different API endpoints for the domain service and also generated the stub data, so we can open any one of the stub files and see what's in it. It says: HTTP request, POST to /products, with this in the header and this in the body, and then the response came back with status 200 and an ID of 37. So that is just a simple request-response pair captured as a stub file, and each of them captures some unique flavor of request and response. Perfect; with that, I now have the OpenAPI specification for the domain service, and I also have some stub data for it. With that, I should be able to run Specmatic in stub mode and point it at the generated OpenAPI specification, saying: use this generated specification to stand up a stub.
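A recorded Specmatic stub file is a plain JSON request-response pair. The one narrated above would look roughly like the sketch below; the header is omitted and the body values are placeholders reconstructed from the narration:

```json
{
  "http-request": {
    "method": "POST",
    "path": "/products",
    "body": {
      "name": "iPhone",
      "type": "gadget",
      "inventory": 100
    }
  },
  "http-response": {
    "status": 200,
    "body": {
      "id": 37
    }
  }
}
```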
Again, you're not writing a single line of code here to create the stub; you're referring to an OpenAPI specification to generate it. This is a big deal, because most often people have to write a lot of code to create these stubs, or use tools to generate them, and those can very quickly drift away and go out of sync. In this case, because we will be referring to the same OpenAPI specification that the provider is also using to generate contract tests, they don't drift apart; both point to the same single source of truth, which ideally lives in a central Git repo. So anyway, with that, let me quickly run this. You will notice that it loads all the stubs and says: okay, I now have a stub server running for you at port 9000; you can go ahead and use it. Just to be sure that we are not fooling ourselves, I'm going to go to the domain service, the order API, and kill it. So the real service is no longer running, and we only have the Specmatic stub running at this stage. What do you reckon will happen when I run these tests? Well, I expect everything to work as before, with no surprises. Okay, let's run the tests and see what happens. I expect that the application will be none the wiser; it will still go ahead and run those 41 tests, and you will also be able to see requests coming in to the stub and the stub responding to all of them. And there we go: all 41 tests have succeeded. We don't have the downstream domain service running; we are able to work entirely off a stub that Specmatic generated purely from the OpenAPI specification, again without writing a single line of code. This is why we say this is a no-code solution.
Now, of course, we still have not done anything to cover the 503 case, because so far all we have done is stub out the downstream service so that we have much better control and can simulate different conditions. With that, let me jump in and show you how we can generate a 503 response in this case. We want to go to the generated spec, but before that, let me add another example here which basically says: any time I make a GET request with, say, the type other, I want it to time out, and that should result in a 503. Let me find the relevant section. Find products, okay. We have an example over here which is a success example. I'm going to add another example named timeout; that's just a name I'm giving it, and the value, say 100, really does not matter. Similarly, for the query parameter, I'm going to add another example with the value other. So any time I send other as the query parameter for available products, I expect, let's go to the response here real quick, this is my 503 response. I'm going to go here and give it an example. And you can see GitHub Copilot has already guessed the response you would want: "timeout", a 503 service unavailable because of a timeout. With that example in, I have now added another example to my OpenAPI specification which essentially expects that any time I send the query parameter other, I get a 503. To make this actually happen, we have to go to the generated stub data. Let's look at one of the files, like this one here: we have /products, a GET, responding back with some valid response. I'm just going to go ahead and duplicate this, make a copy, rename it to something like stub-timeout, and in this case, instead of gadget, I'll put other, all right.
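Putting this together with the delay property the demo adds next, the duplicated stub might look roughly like the sketch below. Note this is an assumption-heavy sketch: the body values are placeholders, and the delay key (shown here as delay-in-seconds) should be checked against the documentation for your Specmatic version:

```json
{
  "http-request": {
    "method": "GET",
    "path": "/products",
    "query": {
      "type": "other"
    }
  },
  "http-response": {
    "status": 200,
    "body": [
      { "id": 100, "name": "placeholder", "type": "other", "inventory": 1 }
    ],
    "delay-in-seconds": 5
  }
}
```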
And here, I'm going to go ahead and add a new property which essentially says: delay the response by five seconds. So whenever you get a GET request to /products for type other, delay the response by five seconds. You will notice that as soon as we update the stub, the Specmatic stub automatically reloads it; so we have this loaded now. With that, let me quickly go back to our contract tests. Now let's run the contract tests again and see what happens. This time I would expect 42 tests to run, including the one new scenario we've added for the timeout. With that, it goes ahead and runs all the tests, and sure enough, as you can see, 42 tests ran and all 42 succeeded. Let's look at this: you can see here that we've got 100% API coverage for find available products, and you'll also notice that the 503 case is now covered. How did this happen? It happened because we were able to simulate a timeout, and that resulted in a service unavailable response. Let's quickly look at where we generated the 503. You will notice here that I made a request to find available products with type other, and whenever we send type other, we expect the downstream service call to time out, which the BFF layer then propagates as a 503 service unavailable. We're also validating that the 503 response matches the schema we have defined for 503. So with that, we have been able to use Specmatic to generate the contract tests. We were also able to stub out downstream services and do fault injection and other kinds of negative scenarios, because we have control over the downstream service through a stub that we generated. And now we can have a fairly high degree of confidence that our specification is in fact in line with the actual implementation. That, in a short demo, is the power of Specmatic. Earlier I also said I would show you how we were stubbing out Kafka.
So here, if I go to this, you will notice that as the requests come in, the messages are being posted onto the Kafka topic, and this is essentially a Specmatic stub that is running, because we don't really want a real instance of a Kafka broker running on this laptop. While you can certainly do that, there would be inherent latencies and other kinds of issues to deal with, and you would lose the kind of determinism you want for your contract tests. So at the end of this demo, what we've been able to achieve is stubbing out both of the dependencies the BFF layer has: we stubbed out the domain service, and we also stubbed out the Kafka dependency. Now we have full control over our BFF layer, and we can contract test it and make sure it is in line with the specification. I've been showing you these contract tests running from the plugin; however, I just want to make sure you also understand that you can run all of these tests from code. All the changes we've made can be checked in and run by other developers on their local machines as well as by your CI pipeline. To do that, you write this one-time contract test code with Specmatic, where you essentially just configure where your application is running, where your stub server is running, and where the Kafka mock is running; you can then specify whether you want generative tests on or off, specify the location of the stub files, and start the application. Let me quickly run this test now for you. As you can see, it kicks off and runs these tests, and there we have the 42 tests, the same tests we saw running earlier, now running from within my IDE. The same thing can run from the CI pipeline as well, and this way you can ensure these tests are continuously run by other developers and also by your CI pipeline.
Cool, that was a quick live demo. Let's just quickly recap. We have the BFF here, which is the system under test. We used Specmatic contract tests to generate the tests from the OpenAPI specification. We stubbed out the domain service dependency with a Specmatic HTTP stub, based, of course, on the OpenAPI specification. We also stubbed out the entire Kafka piece with a Kafka mock generated from the AsyncAPI: what that essentially did was create an in-memory broker for us and create the topic described in the AsyncAPI specification. We were also able to do schema validation of whether the messages posted on the topic were actually schema-valid as per the AsyncAPI. So: initially we set expectations through those JSON stub files you saw for the HTTP stub, and we were also able to set expectations on the Kafka topics. Then Specmatic generated requests to the BFF layer, which went through the stub; the stub responded; the BFF layer then put the message on the Kafka topic and sent its response back. Whenever the response came, Specmatic was able to validate that it was in line with the response schema and data types specified in the OpenAPI specification. It was also able to verify that the number of messages posted on the topic, and the schema of those messages, were in fact as per the AsyncAPI specification. That, in a nutshell, is how we are able to contract test the BFF layer and make sure it is in line with its specification and interacts with its downstream dependencies as expected, as specified in their respective specifications. If you were to do this without Specmatic, typically the consumer would do continuous integration with some kind of stub that they had hand-created on their own.
And things would all look good in their local environment and their continuous integration environment. Similarly, the provider would do the API testing locally and on CI. However, when they came to the integration environment, they would realize that maybe there are some disconnects, and that would cause problems in integration, and the entire environment can become unstable. This also blocks your path to production, and the later you find these issues, the more expensive they get. So the whole idea with contract-driven development is to shift this left and give that feedback as early as possible, ideally on the developer's laptop. We do this through Specmatic, where we take an OpenAPI or AsyncAPI specification and generate a stub for you, which is service virtualization. The consumer can now work with the stub as if it were talking to the real thing. And we take the very same specification, because it is the single source of truth, and generate tests off it to make sure that the provider is in fact adhering to the specification. This is what ensures that the two sides don't drift apart and can independently develop their pieces while staying completely in sync. Having a single source of truth is extremely important because even though many teams agree on an OpenAPI specification, they may miss updating it, or they may not refer to the current version, and you may still end up implementing things the wrong way and have an integration issue at a later point in time. So what we do is put all of this in a central Git repo. We take the OpenAPI specification, we create a central Git repo, and we go through a pull request process where we do linting to make sure that the OpenAPI or AsyncAPI specifications are as per the standards we have agreed. We also do a compatibility test to make sure that when you're making any changes to these specifications, you're not accidentally breaking backward compatibility.
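The "one spec, two uses" idea can be sketched in a few lines. This is a conceptual Python sketch, not how Specmatic works internally; the spec shape, operation key, and provider function are all illustrative:

```python
# One specification, used two ways: the consumer runs it as a stub,
# the provider runs it as a set of contract tests.
spec = {
    "GET /products": {
        "status": 200,
        "response_schema": {"required": ["id", "name"]},
        "example": {"id": 1, "name": "phone"},
    }
}


def stub(spec, operation):
    """Consumer side (service virtualization): answer with the spec's example."""
    entry = spec[operation]
    return entry["status"], entry["example"]


def contract_test(spec, operation, provider):
    """Provider side: call the real implementation and check it against the spec."""
    entry = spec[operation]
    status, body = provider(operation)
    assert status == entry["status"], f"unexpected status {status}"
    for field in entry["response_schema"]["required"]:
        assert field in body, f"missing field {field}"


# A provider that honours the spec passes the generated test.
contract_test(spec, "GET /products", lambda op: (200, {"id": 7, "name": "laptop"}))

# Meanwhile the consumer develops against the stub built from the same spec.
assert stub(spec, "GET /products") == (200, {"id": 1, "name": "phone"})
```

Because both sides are driven off the same `spec` object, a change on either side surfaces as a failing assertion on a laptop, not as a surprise in the integration environment.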
So how does this backward compatibility check work? What Specmatic does is take the new version of the specification and pick the old version of the specification from the Git repo. Notice earlier I explained that Specmatic can take the very same specification and run it as a stub in service virtualization mode, and also run it as tests in contract testing mode. So what we do, and this was almost an accidental discovery I would say, is take the new specification and run it as a stub, and take the old specification and run it as tests. The old specification will make API requests to the new specification that's running as a stub. As long as all the old tests pass, you know that your new version of the API specification is backward compatible. These are real tests that get executed; it's not a simple text comparison. And once the tests pass, someone reviews and merges the change. This ensures that your single source of truth, the central contract, always stays up to date. To summarize: Specmatic takes the OpenAPI specification; the consumers can run their tests locally by using the contract as a stub, that is, service virtualization; and the providers can use Specmatic to generate contract tests to validate whether their implementation is in sync with the specification. The same thing can be leveraged in CI, where both sides refer to the single source of truth, the OpenAPI or AsyncAPI specification. And when they come to an integration environment, you do not expect to see any surprises, and you can get to production as quickly as possible. That, in a nutshell, is what we call contract-driven development.
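The old-spec-as-tests versus new-spec-as-stub trick can be sketched at a toy level. Again, this is illustrative Python, not Specmatic's real compatibility engine, and the spec shapes are made up; the point is only that the old consumers' expectations are exercised against what the new version would serve:

```python
def backward_compatible(old_spec, new_spec):
    """Run the old spec's expectations against the new spec acting as a stub.
    If every response the new version would serve still satisfies the old
    consumers' expectations, the change is backward compatible."""
    for operation, old in old_spec.items():
        new = new_spec.get(operation)
        if new is None:
            return False, f"{operation} was removed"
        # The new spec, running as a stub, would serve its example response.
        response = new["example"]
        for field in old["response_schema"]["required"]:
            if field not in response:
                return False, f"{operation}: field '{field}' gone from response"
    return True, "ok"


old = {"GET /products": {"response_schema": {"required": ["id", "name"]},
                         "example": {"id": 1, "name": "phone"}}}

# Adding a field is fine: old tests still see everything they relied on.
new_ok = {"GET /products": {"response_schema": {"required": ["id", "name", "sku"]},
                            "example": {"id": 1, "name": "phone", "sku": "P-1"}}}
assert backward_compatible(old, new_ok) == (True, "ok")

# Dropping a field the old consumers depend on is flagged as a breaking change.
new_bad = {"GET /products": {"response_schema": {"required": ["id"]},
                             "example": {"id": 1}}}
assert backward_compatible(old, new_bad)[0] is False
```

The real check fires actual HTTP requests generated from the old spec at a stub generated from the new one, so it also catches changed types, statuses, and removed operations, but the asymmetry is the same: old tests, new stub.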
We have recently launched Specmatic Insights, which allows teams to visualize their service dependencies, where you can take all the data generated by running these contract tests in your pipeline, have this visualization built out of real data, and then see which service is dependent on which other service, what endpoints it depends on, and whether you have a single point of failure or a choke point in your architecture. You'd also be able to drill down into a specific API and look at what its consumers are, what its dependencies are, and what type of dependencies they are: is it an HTTP dependency, is it a Kafka dependency? You'd also be able to monitor the overall coverage of how things are improving in terms of your CDD adoption: how many endpoints do you have in the central repo, how many of them are being consumed by both the provider and the consumer, and what is the overall API coverage? Is it trending up or trending down? These insights can help you improve your CDD adoption in your organization. And just to recap, we support AsyncAPI, so if you have JMS, you can mock that out. You can also use it for stubbing out databases with the JDBC stub, for stubbing out Redis, and many more such capabilities exist. So do check us out on specmatic.in. Thank you.