Good afternoon, everyone. My name is Payal, and thank you so much for joining in. Today we have with us Joel and Hari, and they will both be sharing their insights on a very interesting topic, which is contract-driven development — and they'll also be letting us know how we can deploy our microservices independently without doing any integration testing. Quite an interesting topic. So without wasting any time, over to you, Joel. Thank you. Hello everyone, welcome to the talk. Thanks to Payal for the great introduction. This is about contract-driven development. Before we jump into the topic, a little bit about ourselves. My name is Hari Krishnan. I'm a consultant and a coach. I help both unicorn startups and large enterprises with their cloud transformation and extreme programming. My interests include distributed system architecture and high-performance applications. I'm a regular at most of these conferences and I love contributing to the community — these are some of the conferences I've spoken at. So that's about myself; over to you, Joel. I'm Joel Rosario. I'm a consultant and coach as well. I have about 19 years of experience under my belt. I've worked in development, I've worked in testing, and I sometimes find myself with a foot in both camps, which I think is a great place to be. These days I help tech teams improve their quality and engineering capabilities across the board. To start this session off, I'm going to take you to a small demo. Let's dive right in and see a contract of an application. This is a sample application here: an API for e-commerce — orders, products, and so on. I won't go into this too much. A quick look at the contract: there's a products API with an ID, and so on and so forth. I'm sure most of you have seen an OpenAPI specification before. I'm just going to dive in and run some tests.
Let's see where this gets us. There we go — we have some contract tests running. We are using a tool called Specmatic to run these contract tests, and we have 12 of them. Let's take a quick look at the first one. The test sends GET /products/10 to the application and gets a response back. This response is supposed to match the contract. Maybe we can take a quick look at the contract now and see what we have. You have GET /products/{id} over here, and you have the response that's been defined — this is what I showed you at the start, this is the structure. The contract test validates that this response matches the contract, and that's how we know the application is in sync with the contract. Here's another test in a similar fashion: we POST to /products/10. The test sends this payload, gets a response back, and checks that the 204 No Content response — with actually no content — is as per the contract. There are 12 such tests. Where's the code? Let's take a quick look at the code for this. And that's it — look, no code. This is just a starter sort of helper class. What essentially happens is we pull in Specmatic — this is a Specmatic class that comes into the picture. Specmatic loads the contract, reads each and every operation in it, turns each one into a contract test, and executes those contract tests. There are 12 of them that come out of this, and you get all of this for free — no code to be written except for these few lines. You get 12 contract tests pretty much for free. This is good as it stands: the contract tests generated from the contract pass, and that means the application is in line with the contract. But what if you send requests which do not match the contract — does the application behave gracefully?
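For readers who want to picture that "starter helper class", here is a rough sketch based on Specmatic's documented JUnit support. Treat the package name, the SpecmaticJUnitSupport base class, and the host/port system properties as assumptions — they have varied across Specmatic versions, and the class needs the Specmatic dependency and a Specmatic config file pointing at the contract, so it is not runnable standalone.

```java
// Sketch only: assumes the Specmatic dependency is on the classpath and a
// Specmatic config file points at the OpenAPI contract. Package names have
// changed across releases (in.specmatic.test vs io.specmatic.test).
import io.specmatic.test.SpecmaticJUnitSupport;
import org.junit.jupiter.api.BeforeAll;

public class ContractTests extends SpecmaticJUnitSupport {
    @BeforeAll
    public static void setUp() {
        // Tell Specmatic where the running application under test lives.
        System.setProperty("host", "localhost");
        System.setProperty("port", "8080");
    }
    // No test methods: Specmatic reads each operation in the contract and
    // generates one contract test per operation at runtime.
}
```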
Well, let's try that. We just turn negative testing on — set it to true — and I'm going to run these tests as well. Give it a second; Specmatic's running again, the Spring application is starting. Here we go. Suddenly we have a lot of failing tests: 26 failed, 12 passed. What's happening here is that the original happy-path tests I showed you before are passing, but now we have a whole bunch of negative tests. Let's take a quick look at the first one. There's some no-element exception, and so on and so forth — let's see what's actually happening under the hood. Here we see the first example of a negative test: we are passing the ID as null. But according to the OpenAPI specification, this is not supposed to be null — it's supposed to be a number. Specmatic was able to take the specification, know that the field is not nullable, pass a null in its place, and see how the application responds. The application here has returned a 500. It's not supposed to, because this was an invalid request — we should have seen a 400 or something like that. All 26 of these tests, which you get essentially for free, Specmatic just generates for you out of the contract. It's the same code I showed you before; I just added a flag. With this flag on, we get 26 more tests — a total of 38 — which gives you a solid idea of how well your application adheres to the specification it's supposed to implement. So with that, I'd like to hand over to Hari, who will help us understand how we got to this point from scratch — how we are able to so magically generate tests from an OpenAPI specification. Over to Hari. Thanks, Joel. That was an awesome demo, wasn't it? That was just the teaser, actually, of what is to come. And I guess you're ready for the main show.
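To make the negative-testing idea concrete, this is roughly the shape of one generated check — an illustration, not actual Specmatic output, and the payload fields are placeholders:

```
One generated negative test (all values illustrative):
  Request : POST /products/10  with "id": null in the payload
            (the spec declares the field as a non-nullable number)
  Expected: a 4xx response -- invalid input should be rejected
  Actual  : 500 Internal Server Error -- no null check, so the test fails
```

Each negative test mutates one valid request in one schema-violating way and asserts that the application rejects it gracefully instead of blowing up.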
Before we get into the deep dive of what we achieved, just to set the context right and continue from where Joel left off: Joel did something awesome. He took an OpenAPI specification and, with practically zero code, ran it as an executable contract test against an application. He also got it to show live where the application's weakness is — if you send a null, the application did not have a null check. Now let's look at it in the frame of reference of how this all fits into killing integration tests. To start with, we have a mobile application. Let's assume it's an e-commerce app with a view-product screen for showing the product. It requests the details from the backend, and the backend provides the details for the requested product. Sounds fairly simple, right? What could possibly go wrong with this application? Let's take a deeper look: how would we write the component test for the mobile application's view-product screen? Just so we're clear on the vocabulary: because the application is requesting the data, we're going to call it the consumer, and since the backend is providing the data, we're going to call it the provider. So we have the terminology in order. How does consumer component testing look? Any test has three parts to it: there's the test, there's the system under test, and then there's a dependency. A good component test ideally isolates the dependency. In this case, the dependency is the real backend application, and as most of you would realize, we don't want to talk to the real application in a staging or production environment — it's messy to go across the network. Getting the provider running on a local machine may or may not be possible, depending on how complex that stack is to bring up locally.
So the tried and tested approach is to have a hand-rolled mock, or a record-and-replay tool, simulate the provider for us — stub it out so that we don't need the real provider and we can isolate the consumer and start writing code for it. While this looks quite all right — this is all familiar; we do this with WireMock or other record-and-replay tools — there's a fundamental problem here. The mock is usually not in line with the provider. If you're using a hand-rolled mock, or if you did a record-and-replay, it's very likely that the provider made a code change or an API change and your mock has not kept up. How often can we keep re-recording and replaying, or keep maintaining this stuff? It's quite expensive, and this fundamental issue can lead to huge problems. For example, I, as the developer of the consumer application, may assume that I can send the product ID as a string, and I set up my mock like that; however, the actual backend might be expecting the ID as an integer. Likewise, the provider might be returning the name and SKU of the product, while I have wrongly assumed, when setting up my mock, that it's going to return name and price. What does this lead to? Broken integration. Is this the worst of it? Nope. Where do you find such issues? Not locally, as you just saw — if you're isolating with hand-rolled mocks, it's not possible. And if you take the same setup into your continuous integration environment, the problem repeats. For the provider, the handicap is that there is no true, representative emulation of the consumer, so it too depends on a common deployment location. The first time you see such issues is when you actually deploy to an integration-testing environment, and boom — you have a bug that says these two things are not compatible.
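The drift described above can be pictured as two response shapes that have quietly diverged (values illustrative):

```
Hand-rolled mock (consumer's assumption):  { "id": "10", "name": "Batteries", "price": 49.99 }
Real provider (actual behaviour):          { "id": 10,   "name": "Batteries", "sku": "ABC123" }
```

Both sides pass their own tests against their own understanding; the mismatch only surfaces when the two are actually integrated.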
Now there's a double whammy of an issue. This compromises your integration-testing environment. Most often you don't have just two apps in integration testing — you have a deployment of many microservices, and even if two or three have compatibility issues, the entire environment may get compromised. That may block your path to production, and that means unhappy users, which we don't want. There's another problem with regard to cost, effort and time. The heat map you see at the bottom represents the reality: if you found an issue on your local machine, it's cheapest and easiest to fix, and the cycle time is really small. If you found it in CI, or later still, it's harder to fix and the resolution time is much higher. That's also not desirable. What we really want is to shift left — find these issues on the left-hand side and avoid integration testing. However, we do still want to find compatibility issues. So that's the problem statement: can we identify compatibility issues without integration testing? With that, I'll hand it over to Joel again to see if he can solve this problem. Over to you, Joel. Taking off from where Hari left off: we are trying to shift left, to identify compatibility issues without integration testing. These compatibility issues often come up because of some misunderstanding between the consumer and the provider. The reason, usually, is that they've come to an agreement about how the API should behave and what the request and response should look like — but this agreement might be over email. We've seen teams collaborate over email a lot on this topic. There might be multiple emails, multiple Word documents, multiple spreadsheets.
And so there is a common but, in some sense, fragmented understanding scattered across different inboxes and desktops. So what if, as a first step towards fixing this problem, we could have a rock-solid, industry-standard specification that actually contains all the details? For REST, this is OpenAPI. It contains everything from what the headers should look like, to what the JSON payload should look like, right down to whether a key is mandatory or not, and a lot of rich detail — too much to go into right now. Once you have a specification with that level of detail and clarity, it becomes much easier for consumers and providers to adhere to it. What's even more interesting is that the specification is machine-readable. And because it's machine-readable, you suddenly get superpowers, because now you can feed it to tools. When you feed the specification to tools, you start getting early feedback — on your laptop, in your development environment — rather than deploying into integrated environments and getting errors there. The tools can locally help keep the consumer and provider accountable to the contract, and the interaction between consumer and provider then becomes governed by the specification. I'm going to get a little deeper into that. Let's start with a quick exercise. I'd like you to download this contract, download Specmatic, and just get this thing running. I'm going to open my chat window briefly — just tell me whether you've been able to get the contract downloaded and running if you're following along. I'm going to be doing this myself as well. In the interests of time, I'm going to carry on. We have a small contract here.
In case you don't know OpenAPI too well, I'll quickly run you through it. This is the GET /products/{id} API. The ID is parameterized; it's a number, which essentially means this path matches something like /products/10 — it's got to be a number, and it's compulsory: required is true. When the application receives this request, it's supposed to send this response back. The response has got to be a JSON object with a name and a SKU, both of them strings. Pretty simple. We are now going to start it up as a stub. You saw the command on screen, and I'm going to do the same thing. There you go — the stub has loaded. If you're following along and you've gotten this far, you should be seeing something like "Stub server is running on port 9000". That means your stub is running. Now we're going to take this to Postman. I was saying that since you have a machine-parsable specification, you get superpowers. One of them is that tools like Postman understand it: you can just drag and drop it onto the import button and Postman opens it. I'm going to do that now so you can follow along. You just need to have Postman open; at the top, where my mouse is moving, there's an import button. Click that and you get this blank area. Open Explorer — or Finder if you're on a Mac — and just drag and drop the contract there. Postman sucks it up in a jiffy, and here you go: get products. I'm going to click on this and hit send. Specmatic has returned something, and the response looks a little random — we see a name that's some random string. Firstly, this in itself is useful, because it will help you get started. Secondly, Specmatic did this with absolutely no further code involved from you. Thirdly, given that this is random, much of the time you actually do want to tell Specmatic what to return.
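In OpenAPI terms, the contract being walked through looks roughly like this — a minimal sketch, not the exact file from the demo:

```yaml
openapi: 3.0.3
info:
  title: Products API
  version: "1.0"
paths:
  /products/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true        # compulsory
          schema:
            type: number        # must be a number, e.g. /products/10
      responses:
        "200":
          description: Product details
          content:
            application/json:
              schema:
                type: object
                properties:
                  name:
                    type: string
                  sku:
                    type: string
```

Running it as a stub is then a one-liner of the form `java -jar specmatic.jar stub products-api.yaml --port 9000` — the exact flag names are an assumption, so check the Specmatic CLI help for your version.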
We never told Specmatic what to return for /products/1. So Specmatic just took the request back to the contract and checked it — it's valid. It has no canned answer for it, so it looks at the response schema in the contract and generates a response. I have done something a little different on my laptop: if I pass 5, Specmatic actually returns something specific, because I've told it what to return. So this is a quick demo of how we can get Specmatic to do OpenAPI-specification-based service virtualization. I just wanted to check if everyone's following along with what Joel just showed. Could you type in "yes" if you're following along? If you're having a hard time, if you have difficulties, you could put that in the chat and we'll try our best to help you out. Sorry for the interruption, Joel. Go ahead. Please, please. Yeah. Great — I'm glad to see that folks are able to follow this. Pretty cool. So I've shown you an example of simple service virtualization, and the simplicity of doing it without any code. I'll show you how to specify something explicit to Specmatic further down the line. And Specmatic validates this — I'll show you how Specmatic's validations work as well. Specmatic would never have returned something that doesn't match the OpenAPI specification. We do have to take this to the other side as well. What typically happens with a test, if I can go over that quickly, is you have the OpenAPI specification and a system under test: Specmatic reads the contract, turns it into contract tests, fires those requests against the provider, gets a response, and checks whether the response is valid. Let's see how this works with a real test. We are taking this over now to the provider side. I'd like you to download one more file.
This is the provider sample — download it and get it running with java -jar, as before. I'm going to give you a minute to do this. Moving ahead: we shall try to run this as a test as well. Let me start the application. The application has kicked off; I'm going to open a new tab, and we have a contract test running. This is similar to what you would have seen at the start. We've just got one operation in this contract, and Specmatic has generated the contract test from it. We have a request, GET /products/579, and a response returned from the application that is as per the contract — the test has passed because Specmatic sees that the response matches the contract, which is great. What would happen if the response does not match the contract? I'm hoping folks have been able to follow along — can I have a quick check? All right, thank you. So essentially, let's make a change to the contract. We are now trying to figure out what Specmatic would say if the contract is out of sync with the application. To do that, we'll change the contract — because we don't have the application source, which is normally where the error would actually be. So we change the SKU to a number; it was supposed to be a string, and we know the application returns a string. We run the same command again, and now Specmatic tells us: the contract expected a number, but the response contained "book SKU 4", which is a string. What this means is pinpointed feedback — response.body.sku: Specmatic was expecting a number, the response contained a string. You get that feedback right here, and this can be integrated with CI and other things; we'll talk about that later. The main thing to take away is that both in the previous demo and in this one, there was no code I had to write to get this test running.
There was no code I had to write to faithfully represent either side. This is symmetrical: as a consumer, you need to faithfully represent your provider, and you use the contract for that. As a provider, you need to faithfully represent your consumer, and so you generate contract tests out of the contract. This is how it would typically work. The consumer and the provider have only the contract when they start off. The consumer starts on the local development environment — your laptop. The consumer doesn't have the provider running locally, obviously, but we have the contract, using which we can faithfully simulate the provider: we pass it to Specmatic, and Specmatic gives us contract-as-stub, which lets the consumer run a faithful, high-fidelity service virtualization. Specmatic also gives us contract-as-test, which allows the provider, without having the consumer, to run a high-fidelity simulation of the consumer. This means one does not have to wait for the other: the consumer can start development without the provider being available, and the provider can start development without the consumer being available. The two of them can then deploy with confidence into an integrated environment, because they have both ensured that they stay faithful to the contract. That was a very brief demo of how things work. As a next step, I want to talk about smart service virtualization — I'd like to do a deep dive. I showed you a little bit about how service virtualization could work; I'm going to take it further now, so let me switch over, and we're actually going to see service virtualization in action. I don't have anything for you to try here; this one is going to be a demo. Let's take a look at this contract briefly. I've shown you some of this before, so I think you'll be familiar with this portion of the contract.
There's a new piece to this: a small additional API for creating products. Essentially, you have POST /products; the request body accepts a JSON object with two fields, a name and a SKU, and in the response to the API the provider is expected to return an ID — that's the new piece. Now we're going to start the stub again. This is a vanilla stub. I'm going to import this into Postman as well — there we go, Postman has imported both operations. We've done this before; we've seen this random response. We've also done this before — passing 5 — but this time we get a random response because I haven't told Specmatic what to return. This time I'm going to set that up in front of you so you can see how it works. I'm going to create a folder. The contract is called products-api, so the folder is called products-api_data, and I create a data file in there, batteries.json, because it's going to return some information about batteries. Let's take this small snippet and make some changes. This is exactly what we saw at the start: with the call to /products/5, we get this data back. Let me start this off now. There you go — you have batteries back, and this is how you tell Specmatic what to return when you fire /products/5. The very first question we'll want to ask after this — the whole point of using this — is what happens when you try to stub something out that does not match the specification. Let's say instead of the SKU "ABC123", we put a number. We know this does not match the contract. Specmatic takes a moment and says: contract expected string, but stub contained 123, which is a number. That is pinpointed feedback right there — Specmatic is telling you the contract expects a string, but you gave it a number. You can't do this.
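The batteries.json file follows Specmatic's expectation format — a JSON document pairing an http-request with an http-response. Roughly (the values here are illustrative):

```json
{
  "http-request": {
    "method": "GET",
    "path": "/products/5"
  },
  "http-response": {
    "status": 200,
    "body": {
      "name": "Batteries",
      "sku": "ABC123"
    }
  }
}
```

Dropped into the products-api_data folder, this tells the stub exactly what to return for GET /products/5; anything not pinned down this way keeps getting contract-valid random values.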
And if you go back to Postman and fire this request, you get the random response back — Specmatic has basically discarded that stub. It will never load a stub that does not match the contract. I'm going to switch this back to "ABC123". Let's add a new one: say we don't want just batteries, we want soap as well. It changes here, and it changes here; save that, and Specmatic follows along — Specmatic just watches the file system and reloads whenever the files change. Go back to Postman: 5 still works, 10 works. I realize I have not changed the SKU — the SKU for soap and batteries is the same, and honestly, I don't care for the purposes of this demo. So let's just tell Specmatic to generate it for me; I really don't want to bother changing it, I just want it to be different every time. Give Specmatic a chance to reload, and when we fire this again, the SKU is now randomly generated — we really didn't care about it. Now let's say there's something more: towels. This changes to 15, towels; again we don't care about the SKU, so we just leave that alone. Life is easy now. Essentially, you don't have to care about the things you don't care about — leave those to Specmatic and specify only the values you really want to see; Specmatic will do the rest, and it will not load a specific stub unless it matches the contract. Now let me try something a little different. We've seen how to put payloads in the response; let me take that to the other side and put a payload in the request. Notebook, for example — this time we're going to create a product. I showed you there was another API — let me just quickly check the chat; good, that's what folks are doing. So there was another API for creating products — that was a POST. Let me change this around: now there's a body section here, the name is notebook, the SKU is something like this, and the return value is an ID.
Now that we've created it, we need to return ID 10. Save that, and it's loaded — notebook is here, so we know this matches the specification. Great. Let's double-check that this works the way we intended. I'm going to the add-product request, which got imported into Postman a moment back: notebook and the SKU, send that, and we get a 200. That's Specmatic matching something in the request. Let's try it the other way — we should get the same kind of feedback. If the SKU in the stub is a number, we're immediately told: contract expected string, but stub contained 10, which is a number. That's wrong, so let me revert that — though I did want to keep this case for a moment; Specmatic should just reload it in a second. What if, for example, we pass a number here in the request itself? Specmatic has loaded the stub, but we are passing the wrong value in the request. What would happen? You've seen this message before: for the SKU, expected string, but the request contained 10. So Specmatic holds you accountable both when you're setting the expectation up and when the application makes a request to the service virtualization — the virtualized products API. Specmatic holds the application accountable there as well. Interestingly — sorry, I forgot to mention one thing — this also comes in handy: the error response came back with a 400 Bad Request, and the fact that it did is useful because you can check the status and immediately know that something is amiss. We'll see how this comes in handy later, when I show you how to use this in an actual test. So this is good; we've gotten pretty far. The next thing to look at is that all of the stubs we've created here are statically created — static meaning they're files on the file system.
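The POST expectation uses the same expectation format, this time matching on the request body as well (the SKU value and ID are illustrative):

```json
{
  "http-request": {
    "method": "POST",
    "path": "/products",
    "body": {
      "name": "Notebook",
      "sku": "NB-001"
    }
  },
  "http-response": {
    "status": 200,
    "body": {
      "id": 10
    }
  }
}
```

If either half of the pair violates the contract — a numeric SKU in the stub, or a numeric SKU in the incoming request — Specmatic rejects it with the pinpointed feedback shown in the demo.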
Once they're created, they can't be changed at runtime, and they're all created before Specmatic loads — if you create a new one as a file, Specmatic detects it and reloads. This is already very useful, but sometimes it might not be enough. Sometimes you might actually want to create a resource on the fly, and then simulate the fact that its ID will be recognized. For example, what if I need to simulate an ID I can't know ahead of time? If I have a GET API for products, and the ID is something I can't control, then /products/{id} is something I cannot set up in advance — I really need to create the resource in the test, on the fly, pick that ID up, and send it to Specmatic. At that point Specmatic has already started and the tests are running, and I didn't know what the ID was going to be. I need to be able to specify the ID — in fact, this entire content — to Specmatic dynamically. Let me show you how that works. I'm going to start this back up once again and take this to a Postman request. We POST here to http://localhost:9000/_specmatic/expectations — this is Specmatic's expectations endpoint. Let me get into the body, set it to raw, and here I'm going to tell Specmatic — let me do it with one of the others — that for /products/30 it should return something new: whatever it is, some random SKU. And I fire this.
This was not done with files. Specmatic returns a 200, and the 200 just means Specmatic has validated this against the contract. Validating it essentially means: Specmatic knows the contract contains an API matching /products/30 — it finds the operation where ID is a parameter, knows it's a GET, and checks that the operation has a response shaped like this. So it's all validated, it's accepted, and Specmatic returns 200. When we now GET /products/30, we get tablets back. Let's quickly see what happens if we try to set up something that does not conform to the contract. Suppose, for example, we set the SKU to a number, 200 — will Specmatic allow this? It will not: we get "no match was found", an error from the contract in scenario get products, and so on and so forth. We've seen this error message before; Specmatic is holding you accountable, and this time it's happening in the test. So when the test attempts to set this expectation up with Specmatic, Specmatic returns a 400 Bad Request, and your test can die right there. I'll show you more in depth how this works. Just to add to what Joel mentioned: this is super powerful, because if you have a sequence of tests — you're making API calls as part of a test, and the result of the first API call feeds into the request for the second — you obviously cannot know ahead of time what to send. In those cases, dynamic expectations are really, really powerful. That was a pretty awesome demo from Joel. Go ahead, Joel. Okay. Essentially, this is the anatomy of a test, and I'm going to take it further and show you how this works with an actual piece of code. You have a test, you have a system under test, and you have a dependency. This is typically how it works on a component developer's laptop.
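A dynamic expectation is just the same request/response pair, POSTed to the running stub instead of saved as a file. A sketch of the exchange — the /_specmatic/expectations path matches recent Specmatic documentation, but treat it (and the values) as assumptions, since older releases used a different prefix:

```
POST http://localhost:9000/_specmatic/expectations
Content-Type: application/json

{
  "http-request": { "method": "GET", "path": "/products/30" },
  "http-response": { "status": 200, "body": { "name": "Tablets", "sku": "TB-123" } }
}

-- 200: the expectation matches the contract and is loaded
-- 400: the expectation violates the contract; the test can fail fast right here
```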
There's a test which invokes a system under test, which hits a dependency. The test would be some component test using a testing framework — say Selenium, say Appium — so you could be hitting a mobile app or a product screen, which in turn hits a dependency. In the component test here, the dependency is Specmatic, which is started up and fed the contracts, as I've shown you before. Now, there are three parts to a component test, and the first is what we call arrange. Arrange is responsible for telling Specmatic what to do: this is setting expectations, where we set Specmatic up so that the system under test can talk to it. Specmatic validates the expectation, and if the expectation adheres to the specification, great — it accepts it. The test will then act by hitting the product screen; the product screen hits Specmatic, Specmatic returns a response, the product screen returns a response, and that return value is then asserted by the test. The test passes, or fails, as the case may be. I'm going to show you how this works. A quick note: anything that has dependencies is a consumer. The most visible consumer we have these days is a mobile phone, so we've been showing mobile phones, but it could be a website, or a microservice — microservices may have to make API calls too. And the test framework could be anything: it could be Karate, it could be Selenium, whatever it is. The main thing I want to emphasize here is the arrange, act, and assert parts of a test: in arrange, you set up the service virtualization so that the test can run; in act, you actually invoke the application.
And then you assert, which basically means you check the application's response. So let's quickly see how that works. This is a sample Karate test. As I was saying, microservices can be consumers too: this is an API test, but we are testing a microservice that itself has dependencies. So the very first step is to set up Specmatic so that the dependency behaves as expected. You've seen this before; this is the expectation setup, and you've seen the shape of this request at least: when you get a GET to /products with query parameter type=gadget, return this response. Then comes the point in the test where we send the request out, and this statement is an assertion. We say "Then status 200", which means Specmatic would have returned a response, and if the response was 200, Specmatic accepted the request. If the response was not 200, which means it's a 400, Specmatic has not accepted it: for some reason this did not match the contract, and the test dies right there. We don't move ahead to the act section. This itself gives you early feedback that the way you say you want Specmatic to behave is not how the actual API is going to behave. We tell you right then and there; the test is done, and the act and assert don't even run. But assuming Specmatic accepted this, we come to the act section, where we make the call to the microservice; the microservice internally calls its dependency, which is Specmatic; and once that's done, we assert the microservice's response. And this, essentially, is how a typical arrange-act-assert test looks. It helps the consumer stay in line with the provider.
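The arrange-act-assert flow just described can be sketched in Karate's own DSL. This is an illustrative shape, not the exact test from the demo; the stub URL variables and the expectations endpoint path are assumptions:

```gherkin
Feature: Product search via a Specmatic-stubbed dependency

  Scenario: Search for gadgets
    # Arrange: set the expectation on the Specmatic stub.
    # If this expectation violates the contract, Specmatic returns 400
    # and the test dies here, before act and assert even run.
    Given url specmaticStubUrl + '/_specmatic/expectations'
    And request { "http-request": { "method": "GET", "path": "/products", "query": { "type": "gadget" } }, "http-response": { "status": 200, "body": [ { "name": "Tablet", "sku": "TAB-1001" } ] } }
    When method post
    Then status 200

    # Act: call the microservice under test, which internally calls the stub.
    Given url microserviceUrl + '/search'
    And param type = 'gadget'
    When method get
    # Assert: check the microservice's own response.
    Then status 200
```

The key point is the first `Then status 200`: it is the contract check on the expectation itself, giving the consumer early feedback before the real act step runs.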
Somewhere along the line, I'm not sure if you noticed, I started using the word contract instead of OpenAPI specification. The reason is that a specification is great: it contains a lot more detail than a Word document, so it's already a step up. But it doesn't actually hold either side accountable to staying in sync with it. Once you start executing the specification as service virtualization on the consumer side, and as contract tests on the provider side, you are using it to hold both sides accountable, and it starts functioning as a contract. That's why we think of these as executable contracts. With that, I'm going to hand the session back over to Hari. I've shown you a good deep dive of how service virtualization holds the consumer accountable to the contract; the next step is how we hold the provider accountable. Over to you, Hari. Thanks, Joel. That was a super nice deep dive into how service virtualization works. Let me just share my screen. Alrighty. The important part to understand here is that it's not just about stubbing, like what Joel showed earlier. There are two sides to the coin: you take the same specification and make it the contract for stubbing out the provider on the consumer side, and on the provider side you run the specification as a contract test. That's how we keep the two sides balanced, right? So I'm going to do a deep dive of the contract-as-test approach now. Let's look at what we're trying to achieve. This slide is already familiar to you from earlier when Joel showed it: essentially, Specmatic is able to pull an OpenAPI specification, convert it into a bunch of contract tests, and based on that, generate HTTP requests and verify the responses.
And that was done with an existing provider application, the jar file you saw. Now, what if the application does not exist in the first place, and we have to build it from scratch? How about we attempt some live coding and let the OpenAPI specification, via Specmatic, guide the development of the provider itself? Let's try it. Okay, by now you must be all too familiar with the specification. It's got just one endpoint, /products/{id}, with a GET operation on it that's supposed to give you the product back. I'll just repeat the command which Joel already ran earlier, so we're all on the same page. And: connection refused, obviously, because the application does not exist. We haven't built it. So let's start building the app. I'm going to use Spring Boot for this today; however, Specmatic itself is language and technology stack agnostic. It works on top of HTTP, in this case, so it really does not care in which stack you build your provider application; it can still test it and give you feedback. As a good first step, I want to show you that this is a blank Spring Boot application I just generated from start.spring.io. It's got nothing here, just one empty controller. I'm going to boot it up, and once the app is running, I'll go back and repeat the command: specmatic test. This time, what do you anticipate? There's no connection refused; however, it says "expected 200 but got 404 Not Found". And that's understandable, because we don't have any handler for this URL; we haven't defined this GET operation at all. So let's go ahead and define it. I'm going to paste in some code I have as a snippet, and, like any good developer would, I'm going to start with Hello World. This is a perfectly valid endpoint.
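The pasted-in snippet would look roughly like this (an illustrative sketch that assumes Spring Boot and is not runnable on its own; the class and method names are made up):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ProductController {

    // Start with Hello World: a perfectly valid endpoint, even though it
    // doesn't yet satisfy the contract's response schema for Product.
    @GetMapping("/products/{id}")
    public String getProduct(@PathVariable("id") int id) {
        return "Hello World";
    }
}
```

Returning a bare string here is what triggers the next contract failure in the demo: the status is right, but the body doesn't match the Product schema.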
I could reboot the app and see what happens. However, this is becoming a bit of a hassle: restarting the application, running it from the command line again and again. Is there a more integrated approach? Yes, there is. Specmatic provides JUnit 5 support, and note that this is a testImplementation dependency only in Gradle, which means it does not ship as part of your production code; it exists only in the context of your test code. So I've added this dependency and set up a contract test, which you're also familiar with from earlier when Joel showed it. I've just extended Specmatic's JUnit support and done some simple plumbing: start the app and stop the app, in setup and teardown respectively. Apart from that, whatever was on the command line, the coordinates of where the application is running and the location of the OpenAPI file itself, I'm providing to Specmatic through system properties. That's pretty much all. Now all I need to do is run this test. Let's try it out. I'll switch to running the contract test and run it. As Naresh said earlier, this is money for nothing and tests for free; and who wouldn't want free tests? This time around, we don't see "expected 200 but got 404"; it's a different issue. It says: I got a 200, but I was expecting a JSON object and you gave me Hello World. And that makes sense, because our API specification says we're looking for a schema that looks like this Product, which has a name and an SKU, and all I've returned is a dumb Hello World. That's not helpful. So let's actually comply with the specification now. I'm going to drop in the Product data object and try returning something that complies with the specification.
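As an aside, the test-scoped dependency mentioned a moment ago would look something like this in Gradle (the artifact coordinates and version are illustrative; check Specmatic's documentation for the current group, artifact, and version):

```groovy
dependencies {
    // Test-only: Specmatic's JUnit 5 support never ships in production code.
    testImplementation 'in.specmatic:specmatic-junit5-support:0.x.x'
}
```

Because it is `testImplementation` rather than `implementation`, Gradle keeps it off the runtime classpath of the shipped application.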
So I'll return this product: a book with this SKU. Looks about right. Everyone following along? Okay, let's see. When in doubt, run the test, right? Always. It passed. Hooray, first green for the day. Which means you've gone from zero code to some code, and this whole thing was guided by the OpenAPI specification. There's an important point I want to call out at this moment: look how Specmatic really does not care whether you actually wired it up to a database, pulled the information out, and returned it. Specmatic is only worried about your API signature. Which is why, when we talk about contract tests, you have to understand that contract tests verify your API signature, and therefore they are not a replacement for component or API tests, which are actually about logic. This is an important differentiation to understand: contract tests separate out the concern of verifying the signature and give you early feedback. They supplement your other tests and increase your ability to find issues early on without writing much code. If your signature itself is not in line, why bother writing the rest of the logic? I just wanted to call that out quickly; now let's get back to our development. Now, this is not that interesting yet. It's a very simple app, and the fact that we got to this point guided by the specification is pretty good, but I want to make this app a little more like real life. In real life, you'd have a service call to fetch the data from a database and then return the product. So what if your seed data, your test data in the database, only has product ID 2 and nothing else? Specmatic, however, is just auto-generating the product ID. It's just a number.
And as Joel already showed, it could be any number that is sent, and you cannot have all those random numbers sitting in a database, right? So let's simulate that: if the product ID is not found in my database, and I'm simulating that with "if it's not equal to 2", assuming I have only a product with ID 2 in my DB, then I'm going to throw a runtime exception. The service is doing the right thing: "I don't know what you're talking about; for every random ID you give me, I cannot return a product, so I'm going to throw this error." So what do I do now? Let's run the test and see what happens. It failed. What do you guess the error will be? 400? No. It's a 500. Oh my. You're not supposed to return 500s; it's not a good place to be. It was expecting a 200 but got a 500, obviously because of the unhandled exception: the web framework, Spring, caught that error and converted it into a 500 and returned it to us. So now the issue is that Specmatic does not know it needs to send 2, because that's the only row available in the DB. We need to give Specmatic a clue: hey, don't send random numbers; send a number that's in my DB. How do I do that? I can give Specmatic that clue through OpenAPI examples. So what I can do here is say: for a 200 OK response, I want the value to be 2. And there's one more thing I'll do: for this particular request, where I'm sending the product ID as 2, I want a corresponding response example as well. And it's this data; remember, this is the exact same data you saw in the controller: book, one-two-three one-two-three one-two-three. That's what I'm setting up here. And notice how on line 24 I've named the example 200_OK, and on line 39, 200_OK as well. The examples on the request side and the response side are named the same.
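The named examples Hari is describing look roughly like this in the spec (paraphrased from the demo; the example name `200_OK` is the naming convention being shown):

```yaml
paths:
  /products/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
          examples:
            200_OK:
              value: 2          # the one row seeded in the DB
      responses:
        "200":
          description: Product details
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Product"
              examples:
                200_OK:          # same name as the request-side example
                  value:
                    name: book
                    sku: "123123123"
```

The shared name `200_OK` on both sides is what lets the request example be paired with its expected response.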
OpenAPI as such does not correlate examples, but Specmatic is able to correlate them and give you that convenience: for this request, I want this response, and I'm going to verify it. So will this pass? Let's try it out. All right. We're going from red to green to red to green; it's a good rhythm to be in. However, I'm not happy with what happened earlier: we saw an ugly 500, and that's not the right place to be. Whenever there is a not-found error, what is the right error to return? You should be giving back a 404 status code, correct? So let's make that change. Where should I make it? Should I start writing the code right here? That doesn't make sense; we're driving the implementation through the specification, so first I need to make the API design change. I'm going to add the 404 response here and say: apart from the 200, I also have another response, for when no product is found with that particular ID, and I give back a 404. And this is my error response: it gives me the status, the error type, and the path the issue happened on. There's a problem now, though. Just as I gave an example for 200_OK and said "send ID 2", for the 404 Specmatic does not know what to do. You know the drill: it's just a clue I need to give. So I give it a meaningful name and set up an example: to generate a 404 response, send the ID as 0. It's not likely that 0 is going to be in my DB, so I'm going with that. And you guessed it: I need a corresponding example on the response side too, so we know what to map it to. This time around, I put the example there and name it 404. Earlier, if you noticed, I put in the exact details of the book and the SKU; this time I don't care about exact values, because it's an error.
But I definitely do care that it's in line with the actual schema, so I'm setting up the same data types here. I've made this specification change, which means my OpenAPI operation now looks like this: I have /products, I have GET, and I have two responses, 200 and 404, with examples set up for both. All right, let's run the test now. What do you expect will happen? Red again. Okay, and this time it's an interesting error: expected 404, got 500. Why does this happen? Specmatic rightly sent 0 for the second example and expected a 404, but we got a 500. Why? Quite obvious: we haven't written the code to handle the not-found case. How do we do that in Spring Boot? Fairly straightforward: I just need to create an exception class and map it. I'll do that straight away. I put in ProductNotFoundException, and when the product is not found, instead of throwing a RuntimeException, I throw a ProductNotFoundException, which is indeed the right thing to do. Thereby I give Spring a clue: when this is thrown, handle it and return Not Found, which is the 404 HTTP status code. What do you think will happen now? Green! Excellent. So we went from zero code to actually building out the basic application, realizing we had a 500, and then designing the API in the process: we went ahead and added the 404 response and only wrote the code after we wrote the specification. Isn't this very similar to test-driven development, and specifically the practice called the tracer bullet approach, where you use the test to flesh out your system? It's a very powerful way of thinking about it. And just as test-driven development is not about testing but about designing your code, contract-driven development here is not about contract testing your API.
It's about designing your API as well, and designing your architecture itself. That said, I'm not claiming this is the only way to write code with contract-driven development; I'm just particularly fond of writing tests before code, and I like the tracer bullet approach because it gives my application design longevity. Specmatic as a tool is flexible enough to fit any setup; for example, as Joel showed earlier, it works with an existing application too. But as an approach, I would personally recommend you try contract-driven development this way. It's quite interesting to learn from, to see what sort of mistakes we make and how it can guide us towards designing better APIs and eventually better applications. So that was tracer bullet development with OpenAPI specifications. Let me do a quick recap. We started with the contract as a test. We showed how you can use Specmatic's JUnit support to get really quick iterations. Then we showed how you can use OpenAPI examples to give Specmatic clues, so that it sends the right kind of data to the system and the system behaves well according to its test data. And a very important point: contract tests are not a replacement for your component tests. They are still super important, because even with API tests and component tests today, you still have integration issues that are identified only in the integration environment, much later. Contract tests find them much earlier in the cycle, so keep both; they are not replacements for each other. All right. With that, I'll hand it over to Joel for another interesting topic. Over to you, Joel. Thanks, Hari. We've just seen Hari show us, in a very interesting demo, how to keep the provider in line with the contract. That means we have seen two pieces of this so far.
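For the record, the provider-side logic Hari ended up with can be condensed into a plain-Java sketch (Spring wiring and annotations omitted; the seed data and names follow the demo, but the class shapes are this sketch's own):

```java
import java.util.Map;

// Thrown when a product id is not in the "database".
// In the demo, Spring's exception handling maps this to an HTTP 404
// instead of letting it surface as an unhandled 500.
class ProductNotFoundException extends RuntimeException {
    ProductNotFoundException(int id) {
        super("No product found with id " + id);
    }
}

// The Product data object returned by the endpoint.
record Product(String name, String sku) {}

class ProductService {
    // Simulated seed data: only product id 2 exists, as in the demo.
    private static final Map<Integer, Product> DB =
            Map.of(2, new Product("book", "123123123"));

    Product find(int id) {
        Product product = DB.get(id);
        if (product == null) {
            throw new ProductNotFoundException(id);
        }
        return product;
    }
}
```

This is why the `200_OK` example must send id 2 and the `404` example can send id 0: only the seeded row yields a product, and everything else raises the exception that maps to 404.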
How do you keep the provider in line? How do you keep the consumer in line? Given a contract, we know the two of them can stay in lockstep. What we now need to look at is what happens when you have to change the contract. The contract can change for a variety of reasons: business exigencies change, new pieces have to be developed, the contract may have to accept new APIs or new inputs, and so on. And what is the number one fear when you change the contract? You've got everything working today: you've tested, your consumers and providers are integrated, everything's working fine. We change the contract. Is that going to break consumers? That is a well-founded fear. The consumer and provider will each stay in sync with the contract, and that's great. But suppose you only want to make this change to the provider, and you want to make sure consumers don't break as a result. How do you make that change? We don't want to touch the consumers in that environment at all. This is called backward compatibility. Backward compatibility means I'm changing the contract in a way that will not break consumers: I don't have to touch my consumers at all, and my provider changes to match the contract. So the first step in that process is to make sure the contract change itself is backward compatible. Quickly revisiting what this looks like without the contract: the consumer sends some request to the updated provider, the provider sends back a response the consumer does not understand, and the consumer breaks. You discover this in an integrated environment.
What we have shown you is that you can discover this feedback while the developer is building the application. But that still doesn't fully serve the purpose, because in fact we don't want anything to break at all. Ideally, the provider should be able to change, to gain new functionality, without consumers breaking; that just works for everyone. And this means we've got to figure this out even before handing the contract off to a developer. We have an interesting way to do this, which I'm not going to demo right now. It turns out you can run contract tests out of the existing contract; we've seen how that works. You can run service virtualization out of a contract; we've seen that too. But what if you run the two of them against each other? You take the existing contract and run contract tests from it against the changed contract. The reason this is interesting: contract tests simulate the consumer. So contract tests running from today's contract behave the way today's consumers do. Service virtualization simulates a provider, so service virtualization running from the changed contract is how the provider would look with the updates. And when the contract tests from the existing contract pass against a stub of the new contract, it means existing consumers will still understand the provider even after the contract change, which automatically means the change is backward compatible. Essentially, we are running contract versus contract. It's a pretty interesting approach; we haven't seen anyone do this before.
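On the command line, this contract-versus-contract check can be run directly against two specification files. The subcommand name has varied across Specmatic versions, so treat this invocation as illustrative and check the current CLI documentation:

```bash
# Run the old contract's tests against a stub of the new contract.
# The exit status reports whether the change is backward compatible,
# so this drops straight into a CI gate on pull requests.
specmatic compare products_v1.yaml products_v1_changed.yaml
```

No application code runs here at all; both sides of the comparison are generated from the specifications themselves.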
And by doing this, it is possible to identify backward compatibility problems even before the contract changes reach a developer. We can therefore make sure that only backward compatible changes reach the developer, and consumers don't break when those changes are made. Of course, there may be reasons to break backward compatibility, but then that is an explicit choice, and we'll talk later about how to handle it in a seamless way. Just to add to what Joel was mentioning on that slide: contract versus contract is really powerful because you're verifying backward compatibility with zero code. How much does it cost, really? Neither your provider nor your consumer has to change a single line of code; all you're doing is verifying the contract change itself. You can experiment with it: what if I change this, what's going to happen, am I going to break backward compatibility? You can ask such questions of Specmatic, and Specmatic will answer them. That's what makes it interesting. Yes, I just wanted to add that point to what Joel was sharing. Thanks, Hari. And with that, I'll hand it back over to Hari for the next section, which is contract as code. Over to you, Hari. Thanks, Joel. All right, very quickly: we're well into this workshop now, and you're in deep. We have done three major things: service virtualization, which is contract as stub; contract as test; and contract versus contract, all in the interest of making sure we do not break compatibility between consumers and providers. That's the goal, and that's the problem statement we started with. However, there is a fundamental practice, a fundamental aspect, we still need to consider. If we don't, it could break everything and bring us back to square one.
How's that? Let me go over it. Right now the OpenAPI specification is practically acting as the glue between the consumer and the provider while they develop in isolation. And there's a potential failure mode: say I'm the provider engineer, I make a change to the provider application, but for whatever reason I forget to upload the latest version of the OpenAPI file to the shared location. Or I'm the consumer engineer, referring to a stale version of the OpenAPI spec; maybe the provider engineer emailed it to me, I didn't notice the latest version, and I'm still on the old version of the truth. So what happens? Doesn't this look very familiar, like the initial slide we shared? We're back to square one; we're on different pages. So how do we get beyond this? All the fancy contract testing and contract stubbing, and still it falls apart. The only solution is to start treating OpenAPI, or any specification for that matter, as the single source of truth, and to store it accordingly. The reason I'm calling this out specifically is that with teams I've worked with, I've seen scenarios where OpenAPI files are shared over email or sit in some shared folder, with not much rigor around them. What we've found in our experience is that the best place to store OpenAPI specifications is a version control system; in our case, Git. And what better place? OpenAPI is code: it's machine parsable, and it rightly belongs in version control. And if you're choosing to keep your specifications in a central location in a version control system, across teams and across the organization, as a single source of truth, you also want some process around getting them there.
So the process we've been following on some teams is to first run a style check, a lint check, on the specification itself: number one, is it adhering to industry standards; number two, if you have specific standards within your organization, is it in line with those? Basically, rather than doing all of this through manual review, we try to codify the review as much as possible into a lint or style check. We use Spectral as the tool for this, but it's not the only tool we'd recommend; any tool that can do a style check is a good idea. Once you pass that basic gate, then comes the backward compatibility check, contract versus contract, which Joel just spoke about. What Specmatic needs here is just the two files: this is the old file with my API, and this is the new file with a minor change in it. It compares them and says whether the change is backward compatible or not: yes or no, a binary answer. In this setup it's not literally two arbitrary files: because it's a pull request or merge request, Specmatic can take the modified version of the file from the branch and the corresponding specification file from the central repository, and run a zero-code comparison. You don't have to write any code; Specmatic runs the comparison for you and tells you whether it's backward compatible. If, and only if, it is compatible do you move on to the next stage in the process, which could be a manual review if needed; then you merge the pull request, and the change flows into the central repo. Now you must be asking: what happens if the specifications are not backward compatible? Sometimes we do need to make a change to our API that breaks backward compatibility, because we want to evolve the features, right?
So that's when we bump the version to communicate it. The way we version, we've been leveraging the semantic versioning practice, and we use the major version in the file name itself: say the products spec goes from 2.0.0 to 3.0.0 because it's a backward-incompatible change, and that's communicated very clearly to the consumer through the version. A minor version upgrade is a compatible change, but one we still want to communicate, so I could go from 2.0.0 to 2.1.0. And a patch version mostly indicates there's a change, but not in behavior: a structural change. For example, if you have a large OpenAPI file, it's common practice to extract shared data structures into reusable components in the components/schemas section. If you're making only that kind of structural change, with no practical change to endpoints or their behavior, that can be a patch version upgrade. This is just a recommendation; it's what we've been following with some success, but it's up to individual teams and organizations how to manage it. With that, I want to show you what a contract repository itself looks like. We have three applications here; this is a sample project on our GitHub, and we'll share links later. There's the order contracts repo, which is the central repo; the API, which is the provider; and the UI, which is the consumer. Those are the three participants in this setup. The order contracts repo itself is fairly straightforward. We organize the files like Java or C# package names, just like we manage code, because instead of having all the OpenAPI specifications sitting flat in the parent directory, some sort of package naming makes them easy to locate. That's what we do here.
And if you look at the files themselves, this is all very familiar: this is the OpenAPI file, and then you have the corresponding stub files, which Joel demonstrated earlier. So all of that sits here; that's your central contract repo, as an example. I just wanted to show you that. Now back to our deck. So you have your specs in the central repo, and you have the consumer team and the provider team sitting and developing on their local machines. How do we reference these files? Does everyone have to git clone every time? Is that going to be scalable? Possibly, but for convenience, and to make sure there's consistency and correctness in how this works, Specmatic has a config file called specmatic.json. All it does is hold a list: you put in the OpenAPI files your application needs, because it's a central repo for the organization, and there could be hundreds, if not thousands, of specification files sitting there. You don't want everything on your machine; you just want what concerns your microservice or application. You say: I want this, this, and this, and Specmatic fetches them for you and makes them available locally. And what does specmatic.json look like? Let's take a quick look. I'll show you the syntax that really matters in the context of this workshop. First come the coordinates of the repository itself: here we're saying the repository is a Git repo, and here's the location; it's an Azure Git repo. And then we're saying this contract is to be used as a test, and this one as a stub. Why is that important? Again, calling your attention to this: we had these three projects, right?
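Before getting into those three projects, a minimal specmatic.json along the lines Hari is describing looks roughly like this (the repository URL and file paths are placeholders for the sample project's layout):

```json
{
  "sources": [
    {
      "provider": "git",
      "repository": "https://dev.azure.com/your-org/your-project/_git/order-contracts",
      "test": ["in/example/store/order_api.yaml"],
      "stub": ["in/example/store/order_api.yaml"]
    }
  ]
}
```

The same specification file appears under `test` in the provider's config and under `stub` in the consumer's config; which list it sits in is what tells Specmatic how to use it.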
The API being the provider and the UI being the consumer. And the order YAML is used differently by each of them. The UI obviously wants to stub out the provider, so it lists the order API under the stub category: it says, I want to stub the order API, and that's why it's listed under stub. And the provider, which is the API, wants to run the order specification as a test, so it's listed under the test section. That's a quick look at how deeply Specmatic integrates into your setup: you don't have to point to files manually; Specmatic does all of that for you and gives you the convenience. All right. With that knocked out, let's get into what it takes to embrace contract-driven development beyond running all of this on our local machines: how does it affect your CI pipeline itself? So the facts so far: the OpenAPI specification, or any other specification for that matter, sits in a central repository, so it's the single source of truth, and Specmatic reads from it. At the top you have the consumer team, which owns the product screen, and at the bottom the product API, which is the provider team. As you've already seen, in the local environment Specmatic can stub out the provider for the consumer and run the contract as a test for the provider. So what remains is what happens in CI. For the consumer, you run unit tests as usual; nothing changes there. But in the component test stage, where the dependency comes into the picture, in order to stub out the dependency all you need is the same Specmatic setup, contract as stub, that you've been using locally. It's pretty much the same: Specmatic is just a jar file, right?
So it runs pretty much anywhere; that's the only change on the consumer side. On the provider side, again you run the unit tests, but then you run the contract tests before you run your component tests: obviously, you want to verify the signature before you verify the logic. And with that, you can deploy with confidence to the integration testing environment, knowing for sure the two sides are going to work well together. That means you have an unblocked path to production and a happy user. And from the heat-map point of view, you're identifying bugs much earlier, in the green part of the heat map: you're not finding the issue in the integration environment, you're finding it on your local machine or, worst case, in CI. So that's pretty much how you embrace contract-driven development. With that, we'll open it up for Q&A. Thank you all for being a very patient audience. Thank you so much, Hari and Joel, for such an insightful session; it was really very informative. Thank you for sharing your insights with us today. And thank you, audience, for your patience; it was a long session, but I think Hari and Joel kept us engaged throughout, and we were all able to grasp these concepts very well.