We will get started a little early. This is honestly a 90-minute session that I am somehow trying to cram into 30 minutes, so we will see. Wish me luck. Quick show of hands: in your company, are you doing microservices, micro front-end architecture? Not yet? Okay, cool. Of late there's been a lot of stuff about the evils of microservices and why that was a terrible idea, etc. My talk is specifically about the challenges people face when trying to deploy things in a microservices architecture. But it's not specific to microservices; the problem just gets amplified there, and I'm sure you'll be able to relate to most of it anyway. I want to take a specific example and talk in that context, so everyone can relate to it rather than discussing these concepts very abstractly. This is a typical e-commerce application architecture; I'm sure everyone can relate to it. You have a bunch of external dependencies like payment gateways, inventory and warehouse management systems, and authentication systems. And then you have a set of microservices here — you could also consider that as just one box if you're not doing microservices. You have a product catalog service which provides the list of products that are available, an order management service, and a payment service. And each of them has a respective micro front end, which you could think of as the set of pages in your application that caters to that journey. Now, if you're in this architecture, another common thing is that you would have journey teams, each of which owns a particular journey in this process, and so you may slice your teams in this manner. I'm hoping this all sounds familiar. Now, when you're set up like this, what are some of the common challenges you've faced? Anyone? Cross-team changes, right?
So let's take an example of a cross-team change. Let's say I want to introduce a new category of products, and for that category I do not want to offer cash on delivery — I don't want to offer a certain payment option for that particular category. So what changes? Your product catalog and the product listing will change. There's also an inventory management external dependency that will change. And for that particular product, you need to make sure that when your payment gateway returns the payment options, it does not return the cash on delivery option. So now you have a change that cuts across two journey teams and one external dependency — very common, right? And what's the challenge with this? Tracking? That's much later. This is a more fundamental challenge: how do you get this feature out into production? Because this team is busy working on their stuff, and that team is busy working on theirs. You now need to coordinate and orchestrate all of these so the feature is ready and you can actually ship it. If the payment team makes its changes and is ready, can it just go ahead and deploy? Some form of cadence has to be there; you need some kind of release train, with someone orchestrating that train and making sure all these pieces come together before going out. Now, if that already seems like a challenge in such a simple application, imagine most large enterprises today — at least the ones I work in — where about 100 independent systems interact and eventually have to go out together. How do you orchestrate all of this? It becomes a massive nightmare for the people trying to manage it.
And generally, when you measure CLT — change lead time — for these components, it will not be in hours, not in days, not in weeks; it will be in months. Which is a huge problem, because people want to bring it down, and we've heard many talks where people describe getting their CLT down to a couple of days. In reality it's usually much larger. To put this into perspective: this is typically what I've seen in a lot of companies, even ones that have implemented a microservices and micro front-end architecture. The first component is ready, and they deploy it to some kind of integration testing environment. Can you go ahead? No — you have to wait for all the other pieces to show up, one by one, and only then can you actually do your integration testing. Once you've done the integration testing, the entire batch either moves forward or it does not — it's all or nothing. So the entire batch moves forward, you hope everything is fine in the user acceptance test, and then finally it goes to production. And that journey, from the first service being ready to it going into production — if I measured the CLT on that, it would be at least a few months in many large, complex organizations. That's the problem we want to solve, and contract-driven development is a specific approach to addressing it. But before we get to contract-driven development, I want to reiterate what we mean by independent deployment. What does it mean? This service is ready; you should be able to test it with the rest of the system, push it forward, and go all the way to production.
Even when none of the other pieces are ready, I should be able to independently take any of these components — services or front ends — all the way to production without waiting for the others. Then whoever is ready keeps going, and eventually everything goes out and the feature becomes available. Whatever is ready keeps going; it does not wait. That's the main idea, and it's what most companies desire. I mean, Dave would say, of course, that's 20-year-old shit. So, putting the definition here: the ability for each microservice or micro front end — or, if you're doing SOA, each of your services, whatever architecture style you use — to be independently developed, first of all, because you don't want to wait for the back end to be ready before you can start the front end; you want them developed in parallel. And then deployed all the way to production using automated CI/CD pipelines, without waiting for the other impacted components of a given feature to be available. And that's just one feature — most organizations are doing multiple features at a time, which makes the problem more complicated. That's the problem we want to solve, not by adding process, not by adding people, but by putting tech in place to make it possible. Dave talked about how, when you put tech in place, you can actually scale — and we've done this across several thousand developers in an organization. So, quickly: suppose I have some kind of local unit testing, and then CI for each of my components. Generally, if you have a service, you have a pipeline for it, and you do some kind of continuous integration.
Now, in this continuous integration, what kinds of tests do you typically run? Unit tests, for sure. Can you run any other kinds of tests? Automated UI tests — but how will you run those? This pipeline has deployed only its own changes. What about the rest of the pieces it depends on? Where are they? They're not there. So you won't be able to run them unless you have some kind of shared environment sitting somewhere where all those pieces have already, somehow magically, been deployed and maintained. You could say: okay, I've got this new change, I can deploy this piece into that environment too and run the automated tests there. That seems like a cute little idea, but in reality I've not seen it work; it's just too problematic. Stubs have their own set of challenges, but we'll talk about how to address those. Generally what happens, in my experience — and you can correct me if I'm wrong — is you have two options. Either you have some shared environment where you deploy and run your CI to validate, which is mostly too hard because of test data management, environment management, and so on; or — what most people do — you simply run unit tests in CI and call it done. Is that a fair statement? Yeah. Similarly, this other warehouse application would do the same thing. And then each of them would independently — or rather, in coordination — deploy to some kind of common system integration testing environment, where all these pieces, along with their external dependencies, are integrated. Now you can do some form of end-to-end tests, system tests, API workflow tests, and so on. If everything goes well, you deploy to a pre-prod or staging environment and finally take it all the way to production. Does this sound familiar? Have you seen this?
Now what's the problem with this? Imagine there's a problem between the integration of two of these things — maybe a contract mismatch, maybe a logic mismatch. What ends up happening is that the environment gets compromised, and your path to production, your path forward, gets blocked. Until that red line is fixed, this release train is stuck. It's not going anywhere, unless people do fancy Git ninja stuff — cherry-picking and whatnot — to somehow make it work and move it forward. This is what I call, my friends, integration hell. What I want to talk about today is how to completely avoid this kind of dependency to start with. One idea everyone talks about is shift left. The shift left here is: between my CI and my SIT, I've introduced something called EAT — an Environment for Application Testing — where only a given application is deployed, and all of its dependencies have been stubbed out: its dependency on the warehouse is stubbed out, its dependencies on external services are stubbed out. So now you're able to shift certain tests from the SIT environment one level earlier. And those stubs are all driven off contracts — someone mentioned that earlier — so I'm going to explain the tech behind how you can drive all of this through contracts. When we stub these dependencies, we're not handwriting any of that stuff; we generate all of it from an OpenAPI specification or one of the other specification formats. So it's zero-code, and you always stay in sync. One of the big challenges with stubs is that your stub drifts out of sync with reality, and you only figure that out in the SIT, when you go for the common integration.
Now, if that particular integration we saw earlier is compromised, only this particular environment gets stuck, and only that path to production is blocked — the rest of the pieces can still move forward. I'm explaining step by step how we arrive at a fully independent deployment approach, and this is one step of shift left. Does this make sense? Yeah? Now notice what we've done in our CI: we've taken that same stubbing approach one level further left. Here I'm able to test each of these components — your micro front ends or microservices — in isolation, which means I can shift left further still and test each service fully. If it's a front end, I can bring up the app, stub out the back end and any other dependencies it has, and test the entire thing in isolation. That gives me the confidence to take it to the next level, and then all the way to SIT and onward. All of these are powered by the same kind of tech. Another way to look at the same thing is what we call the test pyramid, so let me flip this for a minute and look at it differently. You have your unit tests, where you're testing each of your individual classes or files in isolation — that all happens locally. Then in your CI you have those blue arrows, which are basically connecting those pieces. Then in your application testing environment, you've connected the application but still stubbed out the external dependencies. And finally, in your SIT environment, you've integrated everything. So you're going one step at a time, integrating the pieces: at every stage, you're connecting one more set of pieces.
So here it's within a particular class; this is within a service or a front end; then between applications; and then it goes all the way into the other dependent applications. And of course you can put a bunch of these application pyramids together, with pre-prod and prod on top, and create your full product pyramid. That's the eventual goal, where you'd be able to do the full end-to-end deployment. There are some important missing pieces I want to talk about, but before that, a quick commercial break. My name is Naresh. I live in Mumbai; I don't act in Bollywood yet. I'm the founder of a consulting company called Accensio. I started my career building neural networks for the Indian Space Research Organisation. I was part of ThoughtWorks in the early days. I was part of another very interesting company — on my first day I felt like it was going to crash and burn, but of course I was absolutely wrong. I was part of Hike Messenger, at one point the fastest unicorn. I was a partner at Industrial Logic, where we built e-learning for teaching Googlers and others some of the technical practices. Teaching programmers was very hard, so I decided to focus on kids — teaching kids is much easier — and built a company called Adventure Labs to help kids learn mental mathematics. I've been associated with a whole bunch of conferences and started a lot of them; some of you already know Agile India, started back in 2005, among others. My favorite, of course, is the Functional Programming Conference — that's a lot of fun. I also wrote the first few lines of Specmatic's code and continue to write code on it, so if you find bugs, it's all because of me. These days — the last three and a half years — I've been helping this company called Jio, if you've heard of them.
I've been a kind of advisor and hands-on implementer in various critical products. So, back to integration hell — we want to avoid it. I'm going to introduce five key practices that I believe are extremely important, and we'll deep dive into one of them. For completeness, here are all five practices I believe are critical for IDAD — Independent Development and Deployment. The first is automated testing. That's pretty straightforward: everyone understands that if I want to deploy very frequently, I need automated tests, and I've already spent some time on the different types and levels of tests that kick in at different points. Automated CI/CD pipelines — again, no problem there; that's well understood. Feature toggles is an idea that's not as well understood, but most people know what it means and why it's important: if you want to deploy things independently, you need the ability — whether as a release toggle or some other form of toggle — to manage what is visible and what is not. The ideal thing, of course, is to design everything to be backward compatible so you never need toggles, but that's not always the case, and when it's not possible, you look for a toggling solution. These days there are fantastic platforms available for this. Last year I gave a talk on this topic and on how we manage all our toggles through fully automated CI/CD pipelines. Trunk-based development and feature toggles generally go hand in hand. The idea is that instead of using something like Git flow, you essentially work off one branch — all developers work off one branch.
Some organizations like the pull-request process and work through PRs, but you bring changes back into main, and that's the branch you're always taking forward. So trunk-based development becomes very important. Do I need to explain why it matters for independent deployments, or is that well understood? Yes? Cool. And the fifth practice — the one I believe is the most important but the most neglected — is contract-driven development. That's what I want to deep dive into today and spend the rest of the time on: why contract-driven development is important and what it even means, because I don't think there's enough said about this topic. Of course, you could tell me these five are not sufficient, that there are more practices that matter — and I'd agree that more practices can help: story slicing, BDD, TDD, config as code, infra as code. Those are absolutely helpful, good to have, ideal to have even, but they're not the bare minimum. For independent deployment, these five are absolutely necessary — focus on these five. With that, I'm going to spend the rest of the time — which is not a lot — on contract-driven development and jump straight in. Within contract-driven development there are five further practices. Remember, it's five, five, five, everything I go into — that's the theme for today. So, five key practices for contract-driven development. First: API design first. A lot of people have talked about this, so I won't spend too much time on it.
But the key idea is that you collaboratively design the API and document it using one of the standard specification formats, so that it's available and can be referenced by everybody. So: collaborate to write the API spec first. Once you have that, treat it like code — don't treat it like a document you pass around over email; put it in a central Git repo. Once it's in a central Git repo, you can do several interesting things with it, like compatibility testing and quality checks, and I'll deep dive into some of these. Then, as the provider of a service, you can start treating that specification — the OpenAPI contract, WSDL, AsyncAPI, whatever you have — as an executable specification. You can actually turn it into a contract test without writing a single line of code, and that's what I'm going to show you today. And as the consumer of a service, I can take the same specification and use it as a stub to stand in for the other pieces. The big idea is that you're all working off the same central repo, which means you don't end up disconnected — where the provider has moved forward with some other implementation and you're left behind. That's generally the challenge, and it's why you need the central repo. The provider runs this as a contract test in its pipeline: any time you build, you first make sure you're still in line with the contract you agreed to. All of this is automated; there's no manual step. And finally, as a consumer, I work off the same contract as a stub, which I'll talk about a little later. So those are the five practices of contract-driven development.
So, quickly: what is a contract? There's a lot of confusion around this. Is everyone clear? Can I skip this section and move ahead? No? Okay, let's take a simple example to explain the difference between a contract and something else. Imagine I want to evaluate that expression. I'd first evaluate these two things and get a response back. Then I'd evaluate the 22 by 7 and get a response back. I'd take the results of those two, send them to the server, and get a result back. So there are three calls that I'm making. Let's convert this into something that looks a bit more like an API. Is this clear? Now, what kinds of API tests can you imagine writing against this? You might want to test some of these things individually: when I POST to /calculator with this input, I want this result back — I want to test that particular operation in isolation; I don't care about the full flow. I might try negative values and make sure those work. I might send invalid values and make sure those are handled. I might send a junk operation and make sure I get a proper 400 response back, and so forth. Now, this is a mix of what I'd call API tests and contract tests. Which of these do you believe are contract tests, and which are actual API tests? These two at the bottom are playing around with the data types — the signature of your API — and verifying what happens if I send something that doesn't match the signature. And generally, these are captured in something like an OpenAPI specification. So that's the calculator's OpenAPI specification.
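To make this concrete, here is a minimal sketch of what such a calculator spec could look like in OpenAPI. The path, field names, and enum values here are assumptions for illustration, not the exact spec shown in the talk:

```yaml
openapi: 3.0.3
info:
  title: Calculator API        # illustrative spec, not the one from the demo
  version: "1.0"
paths:
  /calculator:
    post:
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [op, left, right]
              properties:
                op:
                  type: string
                  enum: [add, subtract, multiply, divide]  # only these are valid
                left:
                  type: number
                right:
                  type: number
      responses:
        "200":
          description: Result of the operation
          content:
            application/json:
              schema:
                type: object
                properties:
                  result:
                    type: number
        "400":
          description: Invalid operation or operands
```

Everything a contract test needs — the data types, the enum of valid operations, the 400 for junk input — is already captured in a spec of this shape.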
Now, some of you might be using Postman or similar tools; you can actually take a Postman collection and generate an OpenAPI specification from it — that's readily available. So, assuming you have an OpenAPI specification, or can generate one, I can take that specification and generate all of these tests for free, without writing a single line of code. You'll notice that, compared to the earlier tests, I'm no longer sending actual values; the tests now represent the data types, because I don't really care about the values. I'm not asserting a specific value; I'm asserting whether the signature is being met. The value only matters insofar as the data type matches, the schema matches, the protocol matches — whatever I'm supposed to expect. And I can have things like an enum which says only these operations are valid and the rest are not, and generate tests from that too. So this is an example of contract tests. The contract is the signature of your API: the protocol it uses, the data types, the schema, and so on. Think of it as a method signature for an API: I want to validate that the API adheres to its signature. A lot of the tests people write as API tests can now simply be generated from a specification without writing any code. The only other thing I want to explain is what we call the API workflow test, which is a series of interactions where, at the end, I assert whether I got a certain result.
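The contract-test-versus-API-test distinction can be shown in a few lines of code. This is purely an illustration of the idea, not Specmatic's implementation; the method names are made up:

```java
import java.util.Map;

// Illustration: an API test asserts a specific value; a contract test only
// asserts that the response matches the agreed signature — right field,
// right data type — regardless of the value.
class ContractVsApiTest {

    // Contract check: does the response carry a numeric "result" field?
    // The actual number is irrelevant.
    static boolean matchesContract(Map<String, Object> response) {
        return response.containsKey("result")
                && response.get("result") instanceof Number;
    }

    // API check: asserts an exact value, e.g. that 2 + 3 returned 5.
    static boolean matchesExpectedValue(Map<String, Object> response, double expected) {
        return matchesContract(response)
                && ((Number) response.get("result")).doubleValue() == expected;
    }

    public static void main(String[] args) {
        Map<String, Object> ok = Map.of("result", 5);
        Map<String, Object> drifted = Map.of("result", "5"); // type drifted to string

        System.out.println(matchesContract(ok));           // true: schema matches
        System.out.println(matchesContract(drifted));      // false: contract broken
        System.out.println(matchesExpectedValue(ok, 5.0)); // true: value matches too
    }
}
```

The `drifted` case is exactly the kind of silent schema drift a hand-written mock would never catch, but a spec-generated contract test does.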
So if I'm doing a sequence of calls — a workflow — I want to verify at the end of it whether I got the right thing back. Those are the three different types of tests I described earlier, now with an example: API tests, contract tests, and workflow tests. Now, why would you care about contract-driven development? Simplifying again: I have a front end which requests product details from a back-end service, which returns a product details response. The front end is the consumer and the service is the provider — just putting some terminology here. As a consumer, the provider is a dependency for me, and generally I don't want to deal with the dependency directly, so I introduce a mock provider — a fake provider; proxy would arguably be the right term, but most people call it a mock, some call it a stub — and I interact with that instead. This is all great, but the problem is that the mock is generally not representative of the actual thing, because, like I explained earlier, these things can drift apart when you're writing the mock yourself. Makes sense so far? So everything works fine locally, but then there's a mismatch, and that's where you see something break. That's the kind of problem we want to catch as early as possible — we want to shift left and catch it ideally on a developer's laptop, before you even hit CI. If you don't do contract-driven development, typically the consumer and the provider each do their own continuous integration, and it's only when they come together in an integration environment that you figure out something's broken, and that path is blocked for you.
And that's obviously going to make your end users unhappy, and it's also very expensive — the later you find this, the more expensive it gets. So, coming back, let me deep dive into each of these practices. API design first: as I said, you agree on the contract mutually between provider and consumer and capture it in some form — WSDL, OpenAPI, AsyncAPI; there's a whole bunch of specification languages out there. That then becomes the binding factor between the provider and the consumer, and it's rightly called a contract, because that's exactly what it is. The whole point is doing this collaboratively. Once you have this contract, what do you do with it? That's where the next practice kicks in: you put it in the central contract repo so that everyone is on the same page, referring to the same thing. If you don't have a single source of truth — and in a lot of cases people write these contracts but float them around on email or some shared file system — they can go out of sync through human error or whatever else, and you still end up with contract issues. So it's very important to have a single source of truth in Git, so that both the provider and the consumer refer to the same thing. I'll later show a demo: we have Specmatic as a product, and in it you'd have a specmatic.json which refers to these contracts in the central repo and fetches the right version at runtime. Once the contracts are in the central repo, what can you do with them? On a pull request, for example, you can do style checks with an API linter.
There are wonderful tools available that do linting and ensure consistency in how you write your specifications. You can also run both forward and backward compatibility tests on them, fully automated, without writing a single line of code. What that does is take the new specification you're trying to push and the existing one from the Git repo, and run what we call contract-to-contract compatibility tests — again, without you writing any code. That's the magic we've built. When there are issues, you see errors in your pipeline like this: here's a linter error saying this particular check failed, and it can stop you from merging the changes in. You can also catch backward compatibility breaks — say you've introduced a mandatory field that will break the contract for the consumer — and get that feedback right there. If you're interested, we can deep dive more into how these backward compatibility checks work, but what I want to get into is executable contracts. I've run out of time, but we do have a 30-minute coffee break, so if it's okay, I'll take another 10 minutes and show you a demo. Is that okay? Cool. Executable specifications means turning your API specifications into executable contracts. The idea is that you take your OpenAPI specification — or WSDL or whatever — and, through Specmatic, generate stubs for the consumer and contract tests for the provider.
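As an illustration of the kind of change a backward compatibility check catches (the schema name and fields below are made up for this example): adding a new mandatory request field is backward-incompatible, because existing consumers don't send it yet.

```yaml
# Existing contract in the central repo required only these fields:
#   required: [name, type]
# Proposed change adds a new mandatory field:
ProductRequest:
  type: object
  required: [name, type, inventoryCount]   # inventoryCount is newly required
  properties:
    name:           { type: string }
    type:           { type: string }
    inventoryCount: { type: integer }      # existing consumers never send this
```

A contract-to-contract comparison of the old and new specs flags this before merge: every request that was valid yesterday would start failing validation today. Making the new field optional (or defaulted) keeps the change backward compatible.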
Now, on the provider side, you have the API specification you agreed to, which you've put into the central repo. Specmatic takes that specification, generates a whole bunch of tests from it, and hits your system under test. You can do this locally, and also in your CI, to ensure that your service is adhering to the contract it agreed to. Let me show you how it actually accomplishes that — let me jump straight into the demo without taking too much time. I'm just going to mirror my display. Perfect. I've taken an example of an order API, along the lines of what we were talking about, so you can look up products and so on. This is the OpenAPI specification for it. Alongside that specification, you'll notice there's a contract test that I've written. This is one-time code that you write for any service. I'm showing you a Java example; there are examples in different languages — this open source tool, Specmatic, is language-agnostic; it doesn't care about the language. So there's a JUnit test: you extend the Specmatic JUnit support and define certain properties that say where your actual service will be running, and then it hits the service. There's also an in-memory database here that you might want to reset between tests, and things like that. Now, if I run this particular test — let's do that, give it a second — it builds and starts the application. It's a simple Spring Boot application.
Most of you must be familiar with this, so it's going to start the application, and remember, this is the only code that you have to write, and it's one-time code. There, it started the tests. This is what's actually bringing up the application right here, and it started running the tests for you. You'll see both positive and negative cases, because we generate both positive and negative scenarios from the contract to make sure that, A, you're adhering to the contract, and B, if I give you invalid input, you also know how to deal with it, right? You're not crashing. For example, a lot of times what we find is that, as per the specification, you're expecting an integer, but if I send you, say, a string, the server just crashes. It's not handling that, and in reality that may well happen, because some faulty client might actually send you something else. So we generate those cases too. Here, what you just saw is 44 tests which were generated without you having to write a single line of code, except that one-time thing, and they were all generated off this contract, this OpenAPI specification. Of course, there is one other little thing, which is the specmatic.json file. Let me quickly show you that. This is the only other piece that you need to add, which is how Specmatic knows where to find the contracts in the central repo, right? This one refers to a GitHub repo, and Specmatic will pull the contracts from there and run them. So: you write this specmatic.json file, you write this contract test, some 30-odd lines, and you have your OpenAPI specification. Everything else, those 44 tests you saw, was all just generated off that, and it's going to give you feedback, right? Now this is running locally, as you saw, on my machine. It also runs in CI.
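For context, a specmatic.json of the kind being shown typically just points at the central contracts repo and lists which specifications to run as tests and which to stub out. The repository URL and file names below are placeholders, not the ones from the demo:

```json
{
  "sources": [
    {
      "provider": "git",
      "repository": "https://github.com/your-org/central-contracts.git",
      "test": ["order_api.yaml"],
      "stub": ["inventory_api.yaml"]
    }
  ]
}
```

The `test` list drives provider-side contract tests; the `stub` list tells the tool which dependency contracts to serve as stubs on the consumer side.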
I showed you screenshots earlier where it's actually running these things in CI. So this gives you the ability to test each of your components individually, by isolating all of its dependencies and making sure that this particular component is adhering to the contract, right? Now, most often this particular service is itself dependent on other services, or someone else is dependent on this service. So that's the next piece I quickly want to show you, which is running Specmatic as a stub. We just saw the provider side in action. On the consumer side, you would write component tests where the consumer front end is the actual client, and you have an OpenAPI specification of your dependency. In the arrange phase of the test, you set some expectations. You set a dynamic expectation saying, hey, when I request this particular thing, respond back with that thing, right? I might want to simulate an empty state, I might want to simulate an error state; I might want to simulate different things and make sure that my UI handles all of those scenarios as I expect. One of the classic problems that I, at least, have run into many times is that when you set these expectations, you could set an invalid expectation, and nobody actually verifies whether the expectations you're setting are in line with the contract, right? So one very important thing that we've implemented is that whenever you set an expectation, Specmatic validates whether that expectation is in line with the contract, and it stops you from setting invalid expectations. Otherwise I could set a wrong data structure, for example. If I'm using something like Mockito, it couldn't care less. It'll just say, yeah, you set this expectation, here's the response back, right?
But this is the problem: I could happily set a wrong expectation, everything works fine on my side, but when I actually integrate, that's when I realize something's off, right? The way to avoid that is to validate the expectation at the moment you set it. Specmatic then saves those expectations. We also have different kinds of expectations: you can set a transient expectation, or you can set a persistent expectation. Especially because these get used in parallel, you might want to set transient expectations more often, right? Then you trigger the application. It talks to Specmatic as if it's talking to the real service, and this is wire-compatible, because we operate at the protocol level, so the application is completely unaware that it's talking to Specmatic instead of the original service. The only change you need to make is in the config: tell it where to find the dependency, which is generally externalized anyway, because in different environments your dependency will be in different places. Specmatic responds, and then you do your assertions as normal, right? So this now gives you the ability to work off the same specification that's in the central repo. It asserts that the expectations are set correctly, and your front end, or whatever your consumer is, is completely decoupled from the provider. Which means if I did this, I could independently take this all the way to production. I don't need to wait for the provider to be available; at least contractually, I don't need to wait for it. And of course, I can still test the logic. Yes, question. So when you push a change to the contract, that's why we do this compatibility test, right?
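As an illustration of the arrange step being described, an expectation sent to the Specmatic stub is just a request/response pair expressed in terms of the contract. The endpoint path and the body below are a sketch based on Specmatic's documented stub expectation format; the product data is invented:

```json
{
  "http-request": {
    "method": "GET",
    "path": "/products"
  },
  "http-response": {
    "status": 200,
    "body": [
      { "id": 1, "name": "Gadget" }
    ]
  }
}
```

In a component test you would POST this to the stub's expectations endpoint (`/_specmatic/expectations` in the versions I've seen) before triggering the application. If the body doesn't match the dependency's contract, for example a string where the spec says integer, the stub rejects the expectation instead of silently serving it.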
So if you're making a change to the contract, when you push it to the central repo, if you're not breaking compatibility, you don't need to bump the version, right? It's completely compatible, so you keep pushing new revisions, no problem. But let's say you make a backward-breaking change, which means ideally you'd want to bump up the version number, right? The old consumers will no longer be able to work with it, but new consumers can directly use your version two. So whenever you break compatibility, Specmatic will let you know: hey, I cannot allow you to push this as the same contract. We use semantic versioning, so you change the version number and then push the contract. And then, on the client side, you're able to say which version you're dependent on, okay? No, you don't need to bring anything down. It's all dynamic: you give it a new contract, it just picks it up and starts responding to it. We don't recommend continuously running Specmatic as a non-stop service; you bring it up as part of the CI run, let it do whatever it needs to do, and bring it down, okay? So, just wrapping up quickly: if I have some kind of specification, then locally the developers on both the provider and consumer sides can run the contract as a stub and the contract as tests. Then they get into continuous integration and run the same things there. As a consumer, you can simulate the provider through Specmatic in your CI. As a provider, you can run contract tests against your service and make sure it's adhering to the contract. And then, when you deploy into an integration environment, you don't expect any surprises, and you have a path to production. Any incompatibility, ideally, you'd have discovered way early.
You've shifted all of that to the left, so you can figure it out locally, on your own machines, right? Does that make sense? And of course, showing the pyramid again: most people end up with an inverted pyramid like this, and what you're looking for is a pyramid of that nature. I explained this earlier in terms of unit tests, component tests, and system tests. We didn't really talk about acceptance tests, but if you're doing BDD or acceptance-test-driven development, those are the acceptance tests. This also indicates what kinds of tests you can run at which level, right? For example, you could do visual tests of a component very early, using tools like Applitools, where you can actually see that a visual regression has been introduced. You could do that locally and in your CI, at an individual component level. So I just want to quickly thank Hari, who's there, and Joel. Both of them are core contributors to Specmatic, which is an open source project that we've put out; anybody is free to use it. We now have, I think, eight or nine contributors on the project, and we're building from there. You can hit specmatic.in; all the tooling that I showed is available there, and there are examples, so you can go play around with it and see this in action. And if you're interested, or if you have any questions, reach out to Hari, myself, or Joel; we'd be happy to help you. All right, any questions? Sorry, that was a bit of a bullet train; I had to cover quite a bit of content, but hopefully it made sense. Yeah, so right now we support HTTP. We support async, like Kafka and things like that. We've also recently stubbed out JDBC, so you can stub out databases and things like that. And the way we've architected Specmatic, each protocol is essentially a plug-and-play kind of thing.
So we support SOAP, we support REST, we support Kafka, we support JDBC, and we're trying to expand to other protocols. I know there are a lot of requests from people for other non-HTTP protocols as well. We don't have all of them built out, but it's architected in a way that each of these is pluggable. That also allows companies with a proprietary protocol to simply plug that in. Cool, there's a question there. Hello. Yeah, so my question was: from the OpenAPI spec it generates the test cases, the valid test cases, right? So I think it could also help with business logic testing, if I'm not wrong. Once the developer has written the code, can they run these tests again to check the business logic, or is this just to verify the contract at the data structure layer, the API specification layer? Or will it go further and do functional testing also? As of today, Specmatic does not assert on the actual value that comes back. Say I sent 2 plus 3 and expected 5 back; Specmatic does not verify that I got 5 back. Specmatic verifies that I expected a number back in this particular format: did I get a number back or not? It's easy for us to extend it and put in that one check, it's not a hard thing to do, but we also don't want to confuse people about TDD. There is a specific reason for separating out contracts: you want to get feedback very early, and ideally, with unit tests and API tests, you should actually test the logic. So separation of concerns, and also single responsibility, is something we stick to. But we keep getting a lot of requests saying, you know, just extend this one little thing and also assert on that particular value, then I don't need to write any tests at all, because everything can be generated for us. So, in OpenAPI you can actually give examples.
So you can give request examples in the specification, and you can also give response examples. So it's very easy for us to take those actual values and start comparing them, and that could literally act as functional tests. But that would lead to confusion about what is a contract test and what is this other thing, and there's already a lot of confusion. So right now we're sticking to this; maybe we'll think about extending it for other kinds of things in the future. It's easy to do, but we've not done it. Yeah, I think this is a wonderful tool. So many times, what I struggle with is the backend engineers: when we shift them to an API-design-first approach, they design the API, but then there is no testing. The OpenAPI specs are given to the front-end team and they start integrating. I relate to that problem. We'll definitely look at using this tool. Thank you. All right, anybody else? All right, I think we can take a break. There are 13 minutes for you to grab a coffee.