I'm going to talk about microservices, but first a short introduction about myself. I'm a senior SDET at Freshdesk, into building and designing test frameworks and also CI/CD pipelines. Currently I'm working on dockerizing our microservices. So the agenda for today: I'm going to talk about something called CDC, which is consumer driven contracts, about its implementation details, how it works, and how it helps solve the problem that I'm going to state.

So before we start, can anyone guess what this means, or has anyone seen this earlier? Any guesses? Okay, this is called the Death Star. It's actually a diagram of the internal microservice architecture at Amazon. In such a complex system, one problem arises: each team owns their own microservice and keeps making changes, and there could be other services which consume that service. In such a system, how do I ensure that any change I make to my system does not affect the consumers? This is the problem we are trying to solve. I should be able to confidently deploy my changes without breaking any other services. This is where CDC comes to the rescue.

So what is a consumer driven contract? It is a pattern described on Martin Fowler's site. Let's take the example of two services, a provider and a consumer. The consumer defines a contract saying: hey, this is the kind of request that I will send you, and this is the response that I expect from you. This contract defined by the consumer establishes a pact. And every time the provider makes any change to his system, he needs to adhere to it: this is the contract I have with my consumers, and I need to make sure my change does not break it.
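To make that concrete: a contract is just data describing expected request/response pairs. This is a simplified conceptual example, not Pact's exact file format; the service names are illustrative.

```json
{
  "consumer": "consumer-service",
  "provider": "provider-service",
  "interactions": [
    {
      "description": "a request to record an activity",
      "request": { "method": "POST", "path": "/activities" },
      "response": { "status": 200 }
    }
  ]
}
```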
So there are various implementations of this concept, meaning various tools which implement consumer driven contracts, and one such tool is Pact. Pact is nothing but an open source library, and it's available in various languages. There are other tools called Pacto, Janus and Spring Cloud Contract as well; Spring Cloud Contract is a Java implementation of the idea. As I said, Pact is an open source library available in various languages, and these are the two flavours that we use, the Ruby gem and the npm module. It is used for asynchronous integration tests; I'll shortly get to why it is called asynchronous. Let's first see how it works. What I would say is: it is as simple as unit tests, but as powerful as end-to-end or integration tests. Let's see how it validates that statement.

For this talk, I'm going to take two services, the dev portal and the activities service. I work with a team called the marketplace team, which is in the business of building and providing SDKs using which apps can be written; these apps run on top of Freshdesk. So there's a dev portal where developers can submit apps, and the activities service just records all the activities that happen in the dev portal. In this example, activities is the provider and the dev portal is the consumer.

So let me take you through what happens on the consumer side and what happens on the provider side, and how these tests are actually implemented. On the consumer side, we have a bunch of tests; these are the Pact tests. What do they do? They start a mock Pact provider. In our case the provider is the activities service, right? So it starts a mock provider for activities, and you have the code.
So the first step is to set expectations on the mock provider. What do I mean by setting expectations? Before the tests start, the consumer tells the mock provider: hey, this is the kind of request that I will give you, and if I give such a request, you are expected to return such a response. Then the tests run: requests are sent to the mock provider, the expected responses are received, and they are asserted on the consumer side. If you take a closer look at it, this is not much different from the usual unit tests that we write, correct? Again, we are mocking the provider and running the tests. So how is it different from the usual unit tests? What happens is that all the interactions exercised on the consumer side get recorded in the background. All this is done by the Pact tool, and it gets written into a file called the pact file. The pact file is a collection of all the requests and responses between the two services, and it's nothing but a JSON file. It contains details like the endpoints, the query params that are needed, the headers, and the response object that is expected.

Now that we have the pact file, let's see how it is used on the provider side. On the provider side, I have the pact file and I have my real service. Once I start the tests on the provider side, Pact replays every HTTP request present in the pact file, and the provider responds with real responses. Each response is then validated against the response already recorded in the pact file. This is how we ensure that the contract between the provider and the consumer is not broken. If everything goes well, it returns a success; otherwise it fails.
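To make the provider-side step concrete, here is a minimal self-contained sketch in plain Ruby. This is not the Pact gem itself, just an illustration of the mechanics it automates: walk the interactions in a pact, call the real provider with each recorded request, and compare status, headers and body. The pact content and the hard-coded provider stand-in are illustrative.

```ruby
require "json"

# A minimal pact: one recorded interaction (structure simplified
# from the real pact file format).
pact = {
  "consumer" => "dev-portal",
  "provider" => "activities",
  "interactions" => [
    {
      "description" => "a request to list activities",
      "request"  => { "method" => "get", "path" => "/activities" },
      "response" => { "status"  => 200,
                      "headers" => { "Content-Type" => "application/json" },
                      "body"    => { "activities" => [] } }
    }
  ]
}

# Stand-in for the real provider. In real Pact verification the
# recorded HTTP request is replayed against the running service;
# here we just return a canned response for the sketch.
def call_provider(request)
  { "status"  => 200,
    "headers" => { "Content-Type" => "application/json" },
    "body"    => { "activities" => [] } }
end

# Replay every interaction and compare the real response against
# the recorded one: the three things checked are status, headers
# and body.
def verify(pact)
  pact["interactions"].map do |interaction|
    actual   = call_provider(interaction["request"])
    expected = interaction["response"]
    ok = %w[status headers body].all? { |k| expected[k] == actual[k] }
    [interaction["description"], ok]
  end
end

verify(pact).each { |desc, ok| puts "#{ok ? 'PASS' : 'FAIL'}: #{desc}" }
```

If the provider's real response had drifted from the recorded one, the comparison would fail and the contract break would be caught before deployment.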
So unless and until you actually verify the pact on the provider side, just generating the pact alone does not help us in any way; only this verification ensures the integration. And if you notice, in this way of testing we are not bringing up both services at the same time. Both are independent, and they need not be up and running for these tests to happen. That's where Pact tests differ from the traditional ways of integration testing.

The pact file can be shared in various ways, and these are a few of them. One is the file system: if both services live on the same file system, you can just share the file path. Or, if you have a CI pipeline defined for each of your microservices, you can have the pact generation and pact verification steps as part of your build pipeline and publish the pact file as an artifact. Or you can store the pact file in the cloud, maybe S3. Or there's something called the Pact Broker, which is a separate module provided by the Pact project itself. It is essentially a repository for collecting all your pact files, with other features added on top: webhooks, which can trigger the dependent services' builds every time a contract changes, and tagging, so you can say this version of the pact belongs to prod and this version belongs to staging, and so on.

As you can see, there are various advantages that come with this. It eliminates wrong assumptions between teams, because all the specifications are present in the pact file itself and the agreement is clearly defined. It enables communication: every time there is a test failure, it means the pact is not being kept, and the teams can talk about it. And the setup time is actually very low.
What I mean by this is: you might have an initial learning curve, but you will not be spending time setting up servers and so on. If you consider the traditional way of doing it, you would check into, say, a staging branch and set up the complete staging stack; all of that can be avoided. No extra infrastructure. And this is one of the main advantages: it's kind of a self-help tool for the dependent services. You need not start the dependent service to test the integration. It executes very fast, as fast as unit tests, and it fails fast. This is again a very important point, because even before you check in your code, you can be very sure that you're not breaking any integrations between your services. And no flakiness: in the traditional ways of testing you might have flaky tests due to environmental issues, and all of that can be avoided. It's easy to debug as well.

So I'm just going to go over a short demo. Not exactly a demo; I'm just going to show a few code snippets from our Ruby implementation. This is what happens on the consumer side: we are setting up the mock server, and here I'm using the Pact Ruby gem. This is the service consumer, the dev portal, and it has a pact with activities, which is its provider. A mock service for activities is started on port 3005. And this is where I set my expectations: this is the request that I'm going to send to my mock provider, with all the information available, like the path, the query params, the headers, et cetera. This is what we set in the mock provider.
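The real slides use the Pact Ruby gem's DSL for this (Pact.service_consumer / has_pact_with / mock_service for setup, and upon_receiving / with / will_respond_with for expectations; see pact.io for the actual API). Since I can't reproduce the slides verbatim, here is a self-contained toy mock provider that shows the same mechanics: register an expectation, serve the canned response when a matching request arrives, and record the interactions so they can be written out as a pact file. All names here are illustrative, not the gem's API.

```ruby
require "json"

# Toy stand-in for Pact's mock provider (illustrative only).
class ToyMockProvider
  attr_reader :interactions

  def initialize
    @interactions = []
  end

  # Register an expectation: for this request, return this response.
  def expect(description, request:, response:)
    @interactions << { "description" => description,
                       "request"     => request,
                       "response"    => response }
  end

  # The consumer code under test "calls" the mock; a matching
  # expectation yields its canned response.
  def call(method, path)
    match = @interactions.find do |i|
      i["request"]["method"] == method && i["request"]["path"] == path
    end
    match ? match["response"] : { "status" => 500 }
  end

  # Everything registered is what ends up in the pact file.
  def to_pact(consumer, provider)
    JSON.pretty_generate("consumer"     => consumer,
                         "provider"     => provider,
                         "interactions" => @interactions)
  end
end

activities = ToyMockProvider.new
activities.expect("an invalid app submission",
                  request:  { "method" => "post", "path" => "/activities" },
                  response: { "status" => 422,
                              "body"   => { "error" => "invalid payload" } })

response = activities.call("post", "/activities")
response["status"]   # => 422, as in the talk's example
pact_json = activities.to_pact("dev-portal", "activities")
# In the real gem, this JSON is written out as the pact file.
```

The real gem additionally verifies that every registered expectation was actually received during the test, and fails the consumer test otherwise.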
So if it gets such a request, then it has to respond with the response that we see here: a status 422, with the header and with a body containing the specific error message. This is the snippet that actually makes the call; when this call is made, the request is triggered to the mock provider, and that's when it gets recorded into the pact file.

This is a sample pact file, so you can see how a pact file looks. You can see the consumer and the provider, and the interactions, which are nothing but the set of all the HTTP requests and responses. For this example, I've just taken one such request.

And this is what is used on the provider side. On the provider side, the provider is activities, and it has to honour the pact with the dev portal. All we need to specify is the path where the pact file can be found; the replaying of the HTTP requests is all taken care of by the Pact tool itself. Once you run a pact verify, it will either pass or fail based on the results. If there is a failure, it will specify where it failed and why; this is how a sample failure looks. There are three things it checks: the status code of the response it receives, whether the body matches, and the headers. If everything goes well, it succeeds, and this is how that looks.

These are a few references you can look at to learn about consumer driven contracts, and to know more about the Pact tool as such, you can look into pact.io; they have very good documentation available. So that's pretty much it. Questions?

Hello. Can we mock intermittent failures using this?

Intermittent failures? Like what?
Say I want to test how my service handles scenarios where the provider is down, or a retry kind of scenario. Can I do that with this?

Only interactions between services that are actual HTTP requests and responses can be tested. So the kind of failure you can test is the dependent service responding with an error message, something like that, but not a service being down on its own. It's meant to test the interaction between services.

Right, but I would like to test both the cases where the provider service responds correctly, and the cases where the provider service may not respond the way I'm expecting. Let's say a network outage or something like that.

Does it return a response object, an error response at least?

May or may not.

If it returns one, that can be tested. If it doesn't, then no; it only deals with the request/response scenario.

Another question. As you mentioned how the tool works, the real responses are recorded and then the tests compare against that response. Sometimes responses have fields like timestamps which vary all the time, right? So does the tool allow you to ignore certain fields?

Yes. There are ways to specify a regex for certain fields like the one you said, or for time and date or IDs. All of that is available as part of the tool's features.

And how about when you need one response, and based on that you make the next request?

Like a chained request? Okay, in that case you'll have to write different sets of contracts, contract tests for different pairs of services. Say you have services A, B and C: A talks to B, B talks to C, C sends a response back to B, and then B responds to A. In that scenario, you write one set of tests between A and B and another set between B and C. That's all.
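On the variable-fields question above: the Pact gem does provide matchers for this, notably Pact.term (a regex plus an example value) and Pact.like (match by type). The underlying idea can be sketched in plain Ruby: comparison treats a regex as "matches the pattern" rather than "equals the literal". This is a simplified illustration, not the gem's implementation.

```ruby
# Compare an actual response body against an expected one, where
# expected values may be Regexps (match by pattern, like Pact.term)
# instead of literals (match by equality).
def matches?(expected, actual)
  case expected
  when Regexp
    expected.match?(actual.to_s)
  when Hash
    expected.all? { |k, v| actual.is_a?(Hash) && matches?(v, actual[k]) }
  else
    expected == actual
  end
end

expected = {
  "id"         => /\A\d+\z/,               # any numeric id is fine
  "created_at" => /\A\d{4}-\d{2}-\d{2}T/,  # any ISO-8601-ish timestamp
  "status"     => "recorded"               # must be exactly this value
}

actual = { "id" => "42",
           "created_at" => "2018-06-01T10:00:00Z",
           "status" => "recorded" }

matches?(expected, actual)   # => true
```

With literal equality this comparison would fail on every run, because the timestamp and id change each time; the pattern-based check pins down only the parts of the response the consumer actually relies on.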
Okay. Thank you.