Welcome to the session on contract driven development. We're lucky to have Naresh himself with us today. And just to add a little bit, if you know a little bit about this topic, you're all in for a real treat. So without further ado, over to you, Naresh. Right, thanks, Jaydeep. This is a very interesting topic for me personally and I hope that you will like this topic as well. So let me quickly put this in slideshow mode. The topic that I have today is contract driven development. And the tall claim that I want to make is basically this: this is the end of integration hell, and the end of integration tests, if you will. So I'm going to talk about how we can completely eradicate integration tests as we go along. I want to start with a simple logical application architecture of an e-commerce app. You have a set of micro front ends, if you will: a product listing front end, a cart front end, a payments front end. And they talk to a set of microservices, which could be your product catalog services, your order management services, payment services, and so forth. And they have certain external dependencies, which is basically authentication and payment gateways. In fact, you have multiple payment gateways, and then you have an inventory management system and a whole ton of other dependencies. And for most of us who work in cross-functional teams, we create these kinds of squads or scrum teams which cut across the application, your e-commerce store application, and another application, which is the warehouse management system application. Each of them has a set of services within it. As you can see here, let me bring this laser pointer here. Yeah, so you can see these warehouse related services and then you have these e-commerce related services. We do API testing of these in our CI environments or other kinds of environments.
And then when it's time to integrate, we bring them all into a common system integration testing environment, throw everything together along with some external services, and then we start testing these things. And we hope everything here is gonna be seamless. We'll do end to end functional tests, we'll do some API workflow tests, and then everything looks good. Then we will push it to pre-production, where some business acceptance testing will happen and everything will be smooth. No problem ever happens, and then we go straight to production. How many of you are familiar with this approach of testing? Give me a thumbs up. Awesome, love that. Cool, so this is familiar and I'm not talking anything alien to you guys. Perfect. Now what happens? Of course, we expect everything to work smoothly. Rarely does it; again, not in your organization, but in your competitors' organization, right? What happens is one of those dependencies that you expected to work fine suddenly shows up with a problem: there may be a schema mismatch, there may be an additional field required, or there may be some API that's deprecated, whatnot. And this does not work as you expect. This basically makes your entire SIT environment unstable, which means this cannot move forward. And of course, the OMS developer would say, hey, just give me half an hour, it should be a quick fix. And two weeks later, the quick fix is still not done. Sounds familiar? All right, perfect. So this, unfortunately, is the current state of testing in a lot of organizations, and this is something I have personally lived through myself in various different organizations. So the question is, what can we do about this? In a nutshell, if you experience something like this, this is what we call integration hell. There are many different types of integration hell; I just demonstrated one type.
And you must be pulling your hair out, trying to deal with this issue. Good news is, there is a light at the end of the tunnel. But what is most important as part of this session is to speak about myself, right? Because that's why you're here. You're here to admire me and all the great stuff that I've done. So I'm gonna spend the next 30 minutes or so talking about myself, and then we'll spend the last two minutes talking about how to deal with integration hell. All right. So my name is Naresh Jain. I used to be an adventure sports freak. As you can see from my current shape, I'm completely out of shape and cannot really do much adventure sports now. I live in Mumbai, don't act in Bollywood yet. Someday when they need a big fat guy, I probably will be there. I run a consulting company called Xnsio, which basically helps organizations across a whole gamut of different skills. The way we talk about it is: we help re-engineer the business DNA to transform for a digital future. I happened to start my career in a strange place called ISRO, where I was building neural networks for classifying remote sensing satellite images. Of course, back in those days, neural networks were shit and we couldn't do anything much. But that helped me land a job in a bank, to test the neural network models that they were building for equity research. And that's really what got me into testing to start with. And I was fascinated. I mean, this whole field of testing fascinated me. But what I realized is just how bad the state of testing in general was. And it was not so much to do with the state of testing; it was much more to do with the state of development. So I looked around and I found this company ThoughtWorks. Back then, I think I was some 30th or 40th employee in the company. And I joined the company because I saw, oh, Martin Fowler's company, right? In fact, back in those days, I couldn't even pronounce his name.
We used to say Martin Flower. It's Martin Flower's company, right? So we should go join this company and learn how development can be done and how extreme programming can be done. And I actually learned a great deal at ThoughtWorks in terms of how extreme programming practices can be applied, how you can do test-driven development, clean code practices, refactoring, and all the amazing stuff. Until I ended up in this company called Directi, where I literally got a culture shock, because everything that I had believed were sound engineering practices that one must follow, these guys violated every single one of them. And they were still extremely successful. For example, they recently sold one of their businesses for $900 million. It's a pretty amazing success story. But what I found is that you cannot be dogmatic about these things. You have to be very pragmatic. I saw the same thing happen again at Hike, which was one of the unicorns in India. A lot of things that, again, you believe are sound engineering practices and must-haves were not there. So what's that pragmatic balance that one needs to have to build world-class products? That's been my quest for many years. I was also a partner at a company called Industrial Logic, where we were building e-learning, training people in some of these skills. I got bored of that. I started a company called Edventure Labs, building games for kids to learn mental mathematics. Of course, the startup did not go as we expected; a lot of people say we were too early, ahead of our time. I also happened to start the agile movement here in India, and I have been running a whole bunch of different conferences. As you can see, I'm quite passionate about building communities and sharing and learning from other people.
To scratch my own personal itch, I built a platform called ConfEngine. Ironically, the idea with ConfEngine was that I wanted to do a pet project, kind of an experiment, and I decided not to write any tests. So even till now, ConfEngine has very, very few automated tests. It's pretty much test-less, if you will; like serverless, it's test-less. And these days, for the last year and a half, I've been helping Jio transform into a digital native company. It's been a fascinating journey; again, I'm just amazed at what these guys have been able to do. So anyway, enough about me. I was just joking that I'm gonna spend the next 30 minutes on that; by now a few people would have dropped off. But anyway, we should talk about integration hell. That's why we are here, and that's what we need to do. So coming back to this diagram that we had earlier about how we do testing. One of the things we found is, if we can shift some of the testing out of the SIT environment earlier on, maybe we have a chance to catch some of these integration issues much earlier. So the idea was basically: hey, can we create a more controlled environment? What I mean by controlled is, you only deploy into this environment the pieces that you are working on, that your team is working on. Everything else, including the dependencies on these other projects and the external dependencies, is all stubbed out. And then you do what used to be the workflow tests that were run in an SIT environment. If you can move those to this environment, then, what we found interesting is, a lot of things could be caught here earlier on. So this is your basic shift-left kind of an idea. And these external dependencies are actually stubbed out based on something called an executable contract. And that's what I will focus the majority of this talk on.
But take my word for now that there are contracts that are executable, and you can use those to stub out these external dependencies. That is what allows you to do service virtualization in an intelligent manner and then do a whole lot of controlled environment testing, so that you catch quite a lot of integration issues much earlier than they would otherwise surface. Now, what might happen is that one of those dependencies in that particular environment becomes unstable; that piece will not move forward to SIT, but the rest of it can. And that will not make your SIT unstable. So at least those pieces of functionality can move forward. Of course you need other techniques like feature toggles and so forth, which we'll talk about later. But you are no longer on the critical path of moving releases forward if you just start shifting left, right? That's the key idea: shift left, have a controlled environment in which you can stub out external dependencies using an intelligent service virtualization piece, and a lot of integration problems can go away. It won't become zero, but a lot of them can be caught much earlier on and addressed. So I wanna quickly jump here and talk about an application test pyramid. I think James Grenning in his keynote also talked about the test pyramid. I have been playing around with the test pyramid now for about 12, 13 years, and I've tweaked it a little bit. So I'll talk about that in a minute. I mean, this is no surprise, everyone understands that at the base of this pyramid you're basically looking at unit tests, which test each of the components of the services in isolation. Each of these is tested in isolation. You can use frameworks like Mockito, Moq, Jest, any of them, to stub out other classes, other files, other functions. And yeah, unit testing is, again, no big deal.
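To make that unit-isolation idea concrete, here is a minimal sketch in Python, with `unittest.mock` standing in for Mockito/Moq/Jest. The `OrderService` and `OrderRepository` names are hypothetical illustrations, not code from the demo:

```python
from unittest import mock

class OrderRepository:
    """Collaborator that would normally hit the database."""
    def find(self, order_id):
        raise NotImplementedError("real implementation talks to the DB")

class OrderService:
    """Unit under test: computes an order total via its repository."""
    def __init__(self, repository):
        self.repository = repository

    def total(self, order_id):
        order = self.repository.find(order_id)
        return sum(item["price"] * item["qty"] for item in order["items"])

# Unit-isolation test: the repository is stubbed, so no database is involved.
repo = mock.Mock(spec=OrderRepository)
repo.find.return_value = {"items": [{"price": 100, "qty": 2},
                                    {"price": 50, "qty": 1}]}
service = OrderService(repo)

assert service.total(42) == 250          # logic verified against the stub
repo.find.assert_called_once_with(42)    # interaction verified on the mock
```

The catch, as the talk goes on to show, is that `return_value` encodes my assumption about the collaborator's behavior; nothing here checks it against the real thing.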
Everything is tested in isolation. So sometimes people prefer calling these unit isolation tests, which means there is no interaction between any of the components. On top of this, we have something called the component test or the service test. If you're thinking of a backend service, then an API test is a service test. If you're thinking of front end components, then each of your UI components should be treated as a component and tested, again, in isolation. What you will notice here is, a new set of connections has opened up. You see these green arrows? These were all in isolation earlier, and now they start talking to each other. So the order controller is now talking to the order repository, which is talking to the database. And for the dependency on this other service, which is your own service but is still a different service or microservice, you use a contract and you stub that out, and then this piece only operates within itself. So this microservice is completely self-contained, does not interact with any of the external things; wherever it needs something external, it just talks to a stub. And the same thing with each of these. So those are your component tests or service tests. There are connections between internal units and modules within a particular service, and any external services or external dependencies that you have are stubbed out using these contracts. Are you with me so far? Just a quick thumbs up. Awesome, awesome, okay, great. So everyone's with me, nothing serious here in the chat. Okay, that's good. Let me go back here and start the slideshow. We talked about unit tests, we talked about component tests. You'll also notice, on the left, we are talking about which environment you're running in.
So this is your local environment, and this is your dev environment, because different developers might be working on these different components and you wanna integrate as early as possible and test them as quickly as possible. On top of that is your environment for application testing, which is where you run application tests. And what are application tests? If I take the e-commerce example, you would notice all these green arrows; now they are all talking to each other. Earlier they were not talking to each other, now each of these guys is talking to each other. So this application test cuts across your entire application: front end, back end, et cetera, et cetera. And it still does not talk to external services. You will notice there is no green arrow to any of these external services. This is what we call application tests, and they test your entire application in isolation from any external dependencies. This is that new environment I was talking about, in which we'll do this kind of test, and you will run against these executable contracts. And finally, you would have your system tests, which is where you would go into an integrated environment and run these system tests. This is where you will notice the green arrows going off to these external dependencies, and this is where you are completely testing it as a system, as a holistic system, not just as an application. That's your application test pyramid. And then if we move forward, you will see that you can now stack up a bunch of these applications together, and on top of that you can put another pyramid, which is basically your user acceptance and performance tests, and then shadow mode tests, which is basically your testing-in-production kind of a thing.
So that is what I call the complete product test pyramid, which has N number of application test pyramids below, and then you stack them up together to build your product test pyramid on top of that. And you would also notice the various different environments in which we can run these tests: local, dev, AIT, SIT, pre-prod and production. Is it making sense so far? Great. Thank you. Let me go back. So just wanna make sure everyone's with me so far, right? Now I wanna ask the million dollar question. If you did all of this, are you sure that you will solve integration hell? If you can please type it out in the chat, that would be helpful. If you think this will solve integration hell, that's great. If you think it will not, why don't you type out why you think it won't? Perfect. So Mayuresh says that it won't, because what if the contracts change? Priyanka is saying not sure; there are always environments which blast everything at the end, okay? Okay, so Mayuresh is saying this would work if the contracts don't change. All right, that's my curious little daughter trying to figure out what the hell I'm doing. Say hi to everyone, Rudy. All right, environmental factors. Yeah, there's a lot of these factors that kick in. Communication between the teams and external agencies has to be rock solid; that's a brilliant point again. Yes, those are all very important and valid points. Fantastic. I think I have an awesome crowd here. You guys are already ahead of me in some sense. So yes, I mean, this alone will not really solve the problem. Let's tackle that one by one, right? I'm guessing everyone's familiar with these typical mocking frameworks: Mockito, Mockingbird, Moq. There are a whole bunch of these frameworks out there. Can anyone tell me what is the main problem you face with these mocking frameworks? Type it out in the chat, please.
What are the problems you have faced when you have written tests using one of these mocking frameworks? If you know the answer, just type it out in the chat, please. You mock them with desired behavior; they are again bound to the contract, which is the core of the problem we are talking about. If the contract changes, the mock has to be modified. Perfect. Thanks, Venkat, for that. Right now the mocks we have developed have gone obsolete in our perf environment. Perfect. We are pulling our hair out. Absolutely. All right, great. So let's talk about the problems with these mocking frameworks. I'm sure you would have seen this picture somewhere: there's some misunderstanding on your side, you did not put the right assertions in your mocking framework, and all your tests were working fine, but the moment you tried to integrate, because you had set the wrong expectations to start with, you had this mismatch. Does that sound familiar? A lot of confusion, a lot of finger pointing, people pointing to documents that they had exchanged over email or in some wiki, saying, look, this is what we said, this is what we meant, this is what happened, and there is confusion. So if we go back to this test pyramid, you will notice that the key over here is these contracts. Everything is dependent on these contracts in some sense. For you to really be sure, when you're testing this and saying everything is working fine, these contracts have to be in sync with what the reality is. If they go out of sync, then you might be happily building stuff in your bubble, expecting everything to be working fine, and the moment you bring it to SIT, or even for that matter to your application integration level, you will notice that there are a lot of surprises.
So that's the main problem that we found with these contracts: these contracts, first of all, have to be executable. They cannot be Word documents or Excel sheets or emails or wiki documents; they have to be executable specifications. They have to be version controlled like any other artifact, and they have to be part of CI for you to get that feedback as early as possible. So that's basically the introduction of contract-driven development, where we are saying, hey, first of all, we need to collaboratively design our APIs, our integrations, right? We have to collaboratively design that, and we need a tool where we can collaborate to create that. And once we have done that, we treat this as contract-as-code, if you will, and then use that as a way to make sure that we all stick to that contract. And that now allows us to independently deploy our microservices, our micro front ends, our micro-whatever we have. The key thing here is: if you can turn those contracts, those executable specifications, into tests, then that goes a long way, right? So that, in a nutshell, is what I'm calling contract-driven development, where different providers and consumers come together, they take an API-first kind of approach where they collaboratively design what the integration should look like, document that in an executable specification, and then use that specification to independently deploy things. And they never have to come back and integrate things; as long as things are working with the contract, they should be able to just independently deploy. So let's look at how that works. Step one is this collaborative contract authoring. Now, does this syntax look familiar to you? Let me switch over here real quick. Does that look familiar to you? Double thumbs up for a collaboration tool, perfect. Yes, Gautam, you're absolutely right. It's the good old Gherkin syntax, right?
So essentially we saw what Aslak, who is a good friend of mine, did with Cucumber and Gherkin, and that's kind of become a de facto standard now for a lot of people writing acceptance tests or doing behavior-driven development. And we said, hey, couldn't we just take that and, with a little bit of a tweak, use that? So what you can see here in this example is we are saying, hey, I wanna make a POST request to this particular OTP URL. And when I do this with a request body — what is this request body over here? This request body is defined right over here. These are my types, the different types that I have. So this request body contains an error info, a request ID, and a mobile OTP. The mobile OTP happens to be a string, the request ID happens to be a number, and the error info itself is another type, which has a question mark, which means it's optional. The value of this could be empty, or it could have a value. When it has a value, this is the structure and this is the data type of that value. And similarly — sorry, I was looking at the response body, my bad. This is the request body right here on the top, and this is the response body. So basically we are saying that when we make a POST request to this API with the request body that is right here on the top, with these three pieces of information with these data types, then we should get an HTTP status 200 back and we should get a response body which matches this particular structure. And here are a few examples that I can feed in, and I wanna see that this contract is working as I expect. So we can get together and author this contract. So far with me on the authoring? Perfect. Very few thumbs up. All right, great. So that's step one, right? Collaboratively authoring these contracts. So this is contract first development, or contract driven development, right? We author these contracts first.
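To give a feel for what's on the slide, a contract in this Gherkin-flavored style might look roughly like the sketch below. This is illustrative only: the endpoint path, field names, and the exact type-pattern keywords are my assumptions, not the actual file from the demo.

```gherkin
Feature: OTP API contract

  Scenario: Generate an OTP for a mobile number
    Given type ErrorInfo
      | code    | (number) |
      | message | (string) |
    When POST /otp/generate
    And request-body {"mobile_number": "(string)"}
    Then status 200
    And response-body {"error_info": "(ErrorInfo?)", "request_id": "(number)", "mobile_otp": "(string)"}
```

The `(number)` and `(string)` markers declare data types rather than literal values, and the `?` in `(ErrorInfo?)` marks the field as optional, which is exactly the structure being walked through above.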
We version control these contracts just like any other executable specification. The only difference is you put them in a central repo where all contracts live. So all your contracts go into a central repo across different teams, and this is where everything gets versioned and tagged. So the second step is: once you've authored the contract, you push it into a central repo where all contracts live, as a version controlled artifact. Step number three would be: as a service provider, as someone who's building this API, I would use these executable specifications as contract tests. I can literally go with what you're seeing here on the screen, and I'll show this live demo in a minute, but you will literally see here, in these 15 lines of code, that I can run the executable specification that I just wrote, that contract in Gherkin language, as tests. You see here at the bottom, these are actually running as tests. So nothing is required on the developer side; you get these contract tests for free. They run against your API — you obviously have to provide where your API is running — and you should expect to see all these contract tests pass. So that's step number three. Step number one is collaboratively write the contract, step number two is check it into a central contract repository. Step number three — and this is three A, I would say, because now you have a fork in the road — is that the provider starts using these contracts as tests. And on the consumer side, they can take the same exact contract, and with one little command, which is basically qontract stub, they can run it in intelligent service virtualization mode, which means you can turn the spec into a stub which is wire compatible with your actual server. If you had your actual server, this would be identical to that.
And it basically gives you, without writing a single line of code, without writing any mocking logic per se on your side, a wire compatible stub — wire compatible service virtualization achieved through this contract. I'll show the demo; this is not all hand wavy stuff, this is real stuff that is being used at a very large organization. So that's step three B, if you will: running the very same contract that we wrote in intelligent service virtualization mode. So before I wrap up, I have 10 minutes to go, so I'm gonna show you a real demo, right? It's not all smoke and mirrors, it's real stuff. All right, how many people have I lost so far? Oh, okay, good. Looks like most folks are still here. Cool. Some love, folks: we need to hit the 5,000 likes mark on this. All right, just kidding. Let's quickly jump into some code and do a live demo over here. I'm using a pet store example, something that most people can easily relate to. It's an e-commerce application for pets. And what I have here is a contract that you're looking at. Is this visible enough? I can try and make this a little bigger. So what I have is a set of contracts that I have captured in this contract file. It's called api_1.qontract. And this has a list of contracts, different scenarios basically, defined for a given endpoint. So I have different scenarios: here on the top I've defined some generic data types, and then here I have some different APIs. So if I do a GET on /pet/{id}, then I should be able to fetch the pet, and it should have this pet body over here. I can do the same thing with trying to update the pets and providing some data. I should be able to do a bunch of different things. So this describes all my APIs. And I should be able to now just go to that little thing that I was showing you earlier. I can just take this and I can click this.
There's no other code that I've written; there's just this and the API contract that I have. And just with that, I should be able to now run this as tests. What you're seeing is, of course, this bringing up the application right here, because this is a Spring Boot application. So that brings up the application, and it starts running my contract as tests against this application. And you'll see here in a minute that it starts executing the specifications that we talked about, to make sure that your APIs are working as you expected. There we go. So this is basically running it. It says, hey, when I make this GET request to /pet/10, I expect — and I got a 200 OK, I got this kind of data back, and that actually matched what I wanted. So that's running contract-as-tests against your API. Now, what happens here is interesting. Someone mentioned this earlier: this contract was agreed, it's version controlled, and the consumer has it too. So let's go to the consumer side real quick here and let me just show you. I've written some tests on the consumer side, which basically poke the consumer, and then internally the consumer goes and calls this backend. So this is my front end, this is my website stuff where I'm testing some functionality. I'm trying to test if my logic works correctly in the front end, and it happens to have a dependency on this backend, where I'm basically saying, hey, if I hit this URL on the UI, then I should expect certain things to work as I expect. So if I run this, essentially what you will see here is it trying to stub out the contract. So let me quickly run this; you will see it in a minute. So yeah, everything is fine. Now you will notice, right about now, it'll look at the contract and it'll say, okay, you want to use this contract.
So I'm going to bring up this contract in stub mode and run it, so that you can now make calls to it as if the actual server was running, and it will give you the responses: if you set some expectations, then it'll return those expectations, and if you don't set anything, it'll just randomly generate values which match the specified data types. So let's quickly go here. The first thing it does when you start running the test is load the config from this qontract.json file, which is where we define which contracts we depend on. So let's quickly look at the qontract.json for a minute, just so you understand what's going on over here. That's my qontract.json, which basically says: here's the list of contracts that I depend on. The provider is Git. I can have other kinds of providers, like file system and whatnot, but we encourage people to use Git. You provide the link to the central repo that I talked about, and then essentially say, hey, I'm interested in stubbing this specific contract. So this is all you need to do. And in the API test that I was showing here, what we are doing is basically saying, right here — let me maximize that — in a setup step that you'll see run before everything else, create the stub. That, by convention, looks at: do you have a qontract.json file? If you do, it pulls the contract from that repository in real time, gets the contract down, and brings it up as a stub. So there's just this one little line you need to write, and you need to put in this qontract.json file, which tells the framework where to find contracts. You could have multiple providers, multiple sources of your contracts, because you may have different sources; that's fine. We do encourage just having one central repo, but sometimes people have different repos and so forth. So that's okay, the framework allows for that.
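As a rough illustration of what such a config file might contain, here is a sketch. The exact key names are approximate and the repo URL is a placeholder — check the tool's documentation for the real schema:

```json
{
  "sources": [
    {
      "provider": "git",
      "repository": "https://github.com/your-org/central-contracts-repo.git",
      "stub": ["petstore/api_1.qontract"]
    }
  ]
}
```

The idea is simply: a list of sources (Git being the encouraged provider), the central repo each one points at, and which contracts from it this consumer wants brought up in stub mode.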
So you just write that one little line, and the rest of it is just your normal create-pet or search-for-available-dogs kind of test that you're writing. So far with me, everyone? Okay, awesome. Now the last thing we need to look at is — so I've shown you how this Intelligent Service Virtualization works. I can set certain expectations on this; in this case I have not set any expectations on the stub, so it's just gonna give me some default values when I run the test, and it all works fine, as I just showed you earlier. So right here, from the json file, it checked whether I have these contracts locally in my environment; if not, it will pull them from the repo. And you will notice here that it says, yeah, okay, using local contracts, because nothing has changed since the last time you ran this. And then here it says loading expectations from this particular data folder: if it found any data inside that folder, then it'll use that data; if it didn't, then it'll just generate the data. So in this case, there were two data files, for create-pet and available-dogs, that feed the expectations that I had for my test, because I want a certain kind of response back. Now, when I set this data in this json file, like I was saying earlier, this is like setting expectations on a mock; but if I set wrong expectations, then the tool will let me know that, hey, you're setting wrong expectations and that is not allowed, okay. I know I'm running out of time, so I'm gonna quickly show one last demo, one small piece, which is on the provider side. One big challenge that typically happens is: how do I know that I'm not breaking backward compatibility? I have the contract, I have added a new piece to the contract, and I'm just evolving my stuff as we go along. When I do that, I feel the change I'm making is backward compatible, but how do I actually verify that I'm not accidentally breaking backward compatibility of my API? So let's take an example here.
For some reason I think this should not be a number; it should be a string. So I change the data type. Now, do you think this is a backward-compatible change or a backward-incompatible change? OK, let's test it out. I'm going to run the contract push command, which will try to push this contract to the central repo. I made my change, I want to push it in, I run the push command, and here's what it does: it takes the new contract I have and runs it in stub mode, the service-virtualization mode, then runs the older version of the contract in test mode against it and checks whether the older version still works as expected. You can add new stuff, that's OK, as long as you haven't broken anything old; that's what this check verifies. And you'll notice here it says "expected a number, actual was a string", and that the new version of the pet store contract is not backward compatible. So it will not allow you to check this in. There's also a command for just doing a backward-compatibility check, which can run on your CI. So even if someone bypasses this and checks the contract in directly, you'd have a check on the PR, a pull request, sorry, that rejects it, saying you cannot push this version of the contract. If you really want to make a breaking change, bump up the version number and check it in as a new version of your contract. OK. So I believe I've now shown, on both the provider side and the consumer side, how you can make sure that once you've written a contract, both parties are running against it. This is all automated, it runs as part of your CI, and you never again have to worry about these integration problems that typically show up pretty late in the cycle.
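This is not the tool's implementation, but the essence of the backward-compatibility check can be sketched in a few lines of Python: treat each contract as a map of field names to types, and flag any existing field whose type changed or that was removed, while allowing new fields. The field names and types below are invented for the example:

```python
def is_backward_compatible(old_schema: dict, new_schema: dict):
    """An existing field may not change type or disappear; added fields are fine."""
    problems = []
    for field, old_type in old_schema.items():
        if field not in new_schema:
            problems.append(f"{field}: removed")
        elif new_schema[field] != old_type:
            problems.append(f"{field}: expected {old_type}, actual was {new_schema[field]}")
    return (not problems, problems)

# The demo's change: petid was a number, the new contract makes it a string.
old = {"petid": "number", "name": "string"}
new = {"petid": "string", "name": "string", "breed": "string"}  # added field is OK

ok, issues = is_backward_compatible(old, new)
print(ok)      # False: the type change breaks old consumers
print(issues)
```

The real tool does something far richer (it exercises the old contract as tests against the new contract running as a stub), but the verdict it reaches is the same: type changes and removals break old consumers, additions do not.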
By the way, this is not just for APIs; the tool also supports Kafka and other kinds of protocols, and we're growing the list of protocols we support. Right now we support REST with JSON, we support SOAP with XML, we support Kafka, and we're building support for JMS; we keep expanding that scope pretty quickly. So, to wrap up, and I know I'm two minutes late, thanks for bearing with me: step one, the takeaway is to hyper-collaborate, collaboratively design your APIs over a contract, use a common language to communicate, and make that contract an executable contract. Then shift left: test against the contract as early as possible so you can avoid last-minute surprises; this gives you the ability to do that in a very meaningful manner. And the last step: once you've done this, you can start independently deploying your code, and say bye-bye to integration hell. So that's pretty much it. If you were interested in this demo, the tool I was using is called Qontract, that's "contract" with a Q, and you can go to the website, qontract.run. It's an open-source product that we built and put out there, and we believe many more people can contribute to this open-source project and take it forward. With that, thank you very much for listening. And Jaydeep, do we have a couple of minutes for questions? Are there any questions? So, there's one interesting question about languages. The question is: does this work with languages other than Java or Kotlin? And maybe say a little about the way of consuming it. Yeah, that's a great question. The point is, we did not want to make this language-dependent, so everything you've seen today is language-agnostic: you're writing the contract in Gherkin, which is language-independent, and you're just using the tool's commands to run it; there's a "qontract stub" command to run it in stub mode.
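To give a flavor of what such an executable, Gherkin-based contract looks like, here is a hedged sketch of a pet-store contract covering the two operations used in the demo. The exact keywords the tool accepts may differ; the paths, fields, and type placeholders here are my illustration:

```gherkin
Feature: Pet Store API

  Scenario: Create pet
    When POST /pets
    And request-body {"name": "(string)", "type": "(string)"}
    Then status 201

  Scenario: Search for available dogs
    When GET /pets?status=available&type=dog
    Then status 200
    And response-body [{"id": "(number)", "name": "(string)"}]
```

Because the body fields declare types rather than values, the same file can drive the stub (generate values matching the types), the consumer's tests, and the provider's backward-compatibility check.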
So all these commands are language-independent, and that was a conscious design choice: we didn't want to get caught up in language-specific bindings; we wanted to keep it language-agnostic, language-independent. So yes, you can use it, and right now we're using this with Java, with Python, with PHP, with JavaScript, and with Go. Those are the five languages we're already using this with at a pretty large scale. Can we squeeze in one more? Yeah, absolutely. OK, so there's one more: Shridae is asking, aren't these integration test plans part of the agile release train? It seems like another way to track the dependencies. So, I'm not sure I fully understand the question, but to start with, none of these are integration test plans; I wouldn't call them that. These are just contracts we write when I'm integrating with someone else. One of the advantages is that you can do both consumer-driven contracts and provider-driven contracts. Whoever gets the go-ahead can collaborate, write the contract together first, and then start building independently; and as and when someone finishes their work, they can deploy their piece. This whole notion of a release train, which I believe comes from SAFe, is just a horrible idea in my opinion. You want to move away from this release-train analogy; it's a very nice way to put waterfall back into your organization. What you want is for each team to independently keep deploying their stuff as and when they are ready,
not wait for these release trains to arrive and create big batches where you put everything together and then wait. It smells of waterfall thinking all throughout, right? You have to plan the release trains, you have to know the estimates, you have to do all of this stuff, and then you have to get everybody onto that release train; and if someone cannot get onto the release train, then what do you do? It's just waterfall, right? Instead, what we're encouraging is: agree on the contract, each of you build independently against it, and then whoever is ready keeps deploying to production. Don't wait for the release train; just keep shipping as and when you're ready. OK, great. The last one is not a question, more of a statement, from Mayoresh, who says: "My developers would have cried had they attended this session." And I'm hoping he means tears of joy. So, OK. And yes, we do hope you'll have tears of joy as people start using and adopting this framework. So maybe that's a good note to wrap up on, Naresh. Absolutely, yeah. So hit up the website, qontract.run. It's open source, and there's a quick five-minute guide that can get you up and running; we've tried to document all of this so it's easy for people to get started in five minutes. And it's open source, so we're looking for contributors; this is my plug inviting contributors. So if you're interested in this, start using it and start contributing to it, and we would love to see you.