You can find me online at marcusccm on pretty much anything. I work for a cool company called ThoughtWorks, and they are gently sponsoring my trip here. I'm coming all the way from Scotland, although I'm from Brazil. And this is a disclaimer I have to make: I actually like microservices. Throughout this talk it's going to seem like I don't, but I do. So let's start with the mythical monolith. It's a billion-line code base that does all the things your organization needs to do, right? I think this is becoming a common thing in Ruby; we are beyond the being-cool phase now, we're in the getting-shit-done phase. So you start as a small startup, you start building your services, and those services start to grow. Then you add more functionality, and the code base grows a little more, and then you pivot, and then you add more stuff, and so on, and you get to a point where you have this really big code base that does all the things. And we can probably all agree that that's not a good idea, right? Everybody here probably has horror stories about dealing with those code bases. It's one of those things where you touch one line and you break stuff ten classes away, and you have no idea what you're doing. The name monolith becomes a really good description, because it's one of those things you find in the middle of the woods and wonder: who the hell built this thing? And why? So, probably not a good idea. And we have many ways of trying to break our monoliths apart, and one of the currently popular ways is called microservices. So what's a microservice? It's a small code base with a single, well-defined functionality. The idea is to apply the UNIX philosophy of having small little things and then combining those things to reach our goals. The term is relatively recent; it started around 2011.
It started out as more of a Java thing, but I've started to see people speaking about it at Ruby conferences. Those services usually communicate through a REST-ish interface, which, sadly, in our industry means they use HTTP; it doesn't mean much else. And the whole thing is about applying all the good stuff, like the single responsibility principle, to services and architecture. So microservices are not exactly new, right? It's just trying to do services in a good way. So we move from this monorail, monolith, whatever, to many microservices. And those things have a heap of benefits. They're easily replaceable: they're small, so if you don't like one service, you can throw it away and write a new one, and you don't have to worry about all the other services. They're technology-decoupled, so you can have a Ruby service talking to a Java service talking to a Go service, talking to whatever heap of languages you like to use. They're easy to understand, because they're small, so you can fit the whole code base in your head. And they also provide a natural work-stream separation: as your company grows and you start dividing people into teams, you can assign services to each of those teams, and people don't step on each other's toes that much, because they communicate through a more well-defined interface. And you could say that with this little bundle of joy you've removed all complexity from your life: once you start using microservices, you're done. Right? Well, of course not. We have a very hype-driven culture; people come on stage and throw out an idea, and you just jump at it because it sounds so cool. But everything is made of trade-offs. And I like to think that ideas that actually reduce complexity are pretty rare.
What we usually do is move complexity around. So where you had a simple service, you end up with a complex ecosystem. Those services are simple, but your whole system isn't, and you have to start worrying about loads of stuff. Deployment: if you had a hard time deploying a single app, imagine deploying 30 or 50. Performance: HTTP is not the fastest thing out there, so when you have all those services calling all the other services, because they're so micro that they don't actually do anything by themselves, it gets slow. Security: you cannot just shove data in a session anymore; you have to really think about how you authenticate things. Monitoring: all sorts of stuff becomes really complicated when you enter this microservices realm. And different teams working on different tech stacks sounds amazing, right? Yeah, but it's kind of a complicated situation, because you have to know how to use all those different technologies, how to deploy those things, how to monitor those things. You have to understand a lot of ecosystems to operate all those different services. And of course, a thing that gets really hard is acceptance tests, or integration tests, or customer tests, or coffee-break tests, whatever you like to call them. Those are basically the tests that answer this question: how do you make sure a service works well with other services? I have an opinion that tests are only as good as their failure messages. And when you start testing microservices, especially if you go through the UI, you get stuff like this: "expected to find element 'item', couldn't" — which doesn't help. You run your tests in your CI pipeline, and they don't take minutes, they take a lot longer. And at the end, you get a red message: couldn't find element. And, I don't know, I get really frustrated. I start screaming: why? Because it doesn't tell you anything. There can be 500 reasons why you couldn't find that element.
It's probably not a CSS bug. And when we talk about services, there's also another question: how do you make sure you're not breaking someone else's day? We're not talking about well-defined APIs here; those services are evolving really quickly. And usually, since the customer of those services is your own company, we're not as professional with those APIs as we would be if they were public. So they change all the time, and you have to be sure you're not breaking other people's stuff. Because usually you want to change your API, you change it, a test breaks, you fix your test. That doesn't mean all the other code that relies on that API is working, right? Your stuff is working, but what about other people's? And it's not like you're Twitter or Facebook; you cannot just go out there and say, this is my new API, deal with it. You have to worry about the other departments in your organization, the other teams. So over time, working with microservices, I faced this problem a lot, and I had several ideas on how to handle those tests. First idea: run all the services in your ecosystem on your dev box. Before you run your tests, you basically download everything there is and run your tests against it. This can work. The problem is, with all those different kinds of technologies, you have to know how to get each of those things to run. It might be simple and well structured, or it might not. And as you move through a CI pipeline, it gets complicated to have everything running on your local machine with your tests hitting all those local services. So that didn't work very well for me. Another idea: run your tests against a shared environment. So you have a dev environment or a QA environment, whatever.
You keep the services alive in those environments, and you run your tests against them. This can also work, but most companies have a hard time maintaining a sane production environment, and when you start talking about dev environments, they're usually a mess. Running your tests against those environments adds a lot of noise to your failure messages, because you start to worry about network failures, network slowness, and all sorts of unpredictable stuff, and your tests start to go flaky. Nothing is worse than a flaky test. You run your tests: oh, it's red. You run again: oh, now it's green. You run again: oh, it's red. Damn it. Flaky tests kill all the confidence you have in your suite. So running tests against a shared environment also didn't work for me. Third idea: stub all the things. You build your own stubs for each service, maybe using some automated tool, maybe just some little Ruby code, and you run your tests against those stubs. It can work. The problem is that this can lead you to live in a fantasy stub land where everything just works. It's really hard to keep the stubs in sync with whatever is actually happening: someone changed this, so now I have to go and change my stub as well. So running everything against stubs also doesn't work. Fourth idea: VCR all the things. VCR is a very popular gem, and it's really cool. It basically records all the HTTP interactions your service is doing and allows you to replay those interactions in your tests. So it's kind of like stubs, but the thing with VCR is that you can delete the cassettes, as they call the recordings, and record them again. So you have the speed of stubs, but you can also make sure your stubs are actually real, because you can re-record them.
So VCR is very cool when you're talking to GitHub or Facebook, but when you start to add 30 different services, those cassettes get very noisy. They're not human-readable at all. And if you've worked on a big project with VCR, every commit has a bunch of changes in the cassettes, because some services return dates and things like that, which always change. So it also adds a lot of noise. VCR for microservices: I don't think it works either. So let me tell you a sad story. It's sad story time. I was working on one of those microservices projects, and I wanted to change something; I wanted to change the API. All those APIs were evolving very quickly, so it's not like we had version numbers; version numbers require you to have some sort of stability. And basically, every time I had to change something, I had to go around and ask people: hey, are you using that? No? Hey, are you using that function? No? Are you using that? No? Okay, good, now I can change stuff. Which, to be fair, makes you feel very stupid. So I researched a lot on how we could fix this, and this is my final idea: you can run your integration tests in isolation. What the hell? That makes no sense. Bear with me: isolate your tests with executable contracts. Contracts? So what's a contract, in the testing sense? It's the subset of functionality a service needs from another service. So it's not the same as the service's API; it's whatever you need from that service. A contract between two services can be a single field in a single HTTP call, because that's all you need. So that's a contract: you make a request, like /users with a name, and you get back a name and an age in a JSON body.
That server can have many other endpoints, and it can return much more data, but you don't care. You only care about those two fields, because that's what your app needs. So that's your contract. And you actually care about the structure, not the values, right? You care that the name is a string and the age is an integer. If name suddenly became an array of names, it might break your stuff. So that's an example of a contract. And it goes like this: you have a service A that needs some functionality from a service B. Let's call A the consumer and B the producer. You have this interaction going on between them. So what do you do? You write this contract down: you write a specification for how that dependency works, what the consumer needs from the producer. And once you have these contracts, you somehow make your tests run against stubs that were generated from the contract. So the consumer talks only to those contracts. But you also have the ability to take the same contracts and validate them against the provider. And that's the important bit here: those contracts work both ways. You have the consumer validating against the contract, and the contract validating against the provider, so you don't end up in fantasy stub land. It also makes you put more thought into it; it's not like VCR, where it's just one big recording. It's a more crafted thing. You might be thinking: that's just a stub, right? You just told me you don't like stubs. No: it's a declarative, self-validating stub. And we like to call them contracts because it's shorter. Using this, you have isolated and integrated tests at the same time. And there is a gem for that. On this project, we wrote a gem called Pacto, which basically means "pact" in Portuguese, to do those contract tests in Ruby.
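To make the "works both ways" idea concrete, here is a tiny sketch in plain Ruby. This is not Pacto's implementation, just the shape of the idea under my own made-up names: one declarative contract, used by the consumer to generate a stub and by the provider to check that a real response still honours the structure.

```ruby
# Hypothetical sketch: one contract, used in both directions.
# The contract declares structure (types), not values.
USER_CONTRACT = {
  request:  { method: "GET", path: "/users/42" },
  response: { "name" => String, "age" => Integer }
}

# Consumer side: derive a canned stub response from the contract,
# so consumer tests never have to hit the real service.
def stub_response(contract)
  contract[:response].transform_values do |type|
    type == String ? "stubbed" : 0
  end
end

# Provider side: check that a real response still matches the structure
# the consumer depends on.
def honours_contract?(contract, real_response)
  contract[:response].all? { |field, type| real_response[field].is_a?(type) }
end

stub_response(USER_CONTRACT)
# => {"name"=>"stubbed", "age"=>0}

honours_contract?(USER_CONTRACT, { "name" => "Ada", "age" => 36 })
# => true
```

Because both sides read the same declaration, the stub can never drift away from what the provider is actually checked against — that's what keeps you out of fantasy stub land.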
And funny enough, when we released this, another ThoughtWorks team, in Australia, released a library called Pact that does the same thing. So now there's both Pacto and Pact. For Pacto, we chose to define the contracts as JSON Schema files. JSON Schema is just like XML Schema, but for JSON, so it's shorter. This is an example of a contract in Pacto land. You have a name, because the idea is that every interaction between services should be something very well defined, so you give it a name. You have a request and a response. The request can contain the required headers, a method (because we need to generate stubs from it), and a path. The response contains the status and some properties, and the properties have types. So this thing describes an HTTP request and a JSON response. From this contract we can generate both the stubs and the validations. This is how you use Pacto for the stubs: you ask Pacto to load contracts from a folder — the idea is that you could have one folder per service or something, and in that folder you have many of these JSON Schema files. You load them all and you say: hey, stub the providers. When you do that, we use WebMock, the same thing VCR uses beneath the covers, to hook into the Ruby HTTP libraries and basically stub the services. So whenever you hit the endpoints described in your contracts, if you pass the specified headers and everything, you get back the response. Good: you can run your tests against the stubs generated from those contracts. And the cool thing is the validation side of it. You can use the same contracts, the same folder, and instead of asking Pacto to stub the providers, you ask it to simulate the consumers. So at a different point in your pipeline, or at a different point in time, you go: hmm, are my contracts still valid, or do I have a problem? And you check.
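The slide with the contract isn't in the transcript, so as an illustration only, a contract along the lines described above might look roughly like this. The exact field names and layout are assumptions based on the talk, not the precise Pacto schema:

```json
{
  "name": "Get a user",
  "request": {
    "method": "GET",
    "path": "/users/42",
    "headers": { "Accept": "application/json" }
  },
  "response": {
    "status": 200,
    "body": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "age":  { "type": "integer" }
      }
    }
  }
}
```

Note that the response side describes types, not values, which is exactly the "structure, not values" point from earlier: any provider response with a string name and an integer age satisfies this contract.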
And when they're not valid, you get a very specific message, because we know the types and the structure of the response you need. You get something like: hey, I was expecting name to be a string, and now name is an array of names; your contract is broken. That's a specific message. If you see this, you instantly know what's going on, which is quite different from what you usually get when an integration test fails and you think: hmm, I'll have to try to run this locally, because I have no idea. This is a very precise thing. Once you start using those contracts, you can move to the next level of contract-ness, which is consumer-driven contracts. Consumer-driven contracts are a similar idea, but with a twist. Say service A needs some functionality. Service A goes and writes a contract. You write your contract before the functionality is implemented — kind of like TDD, a contract-first approach, if you like. So you have this draft of a functionality you need that doesn't exist yet. And then you go to the producer team, which might even be your own team, and say: hey, we need this thing, can you provide it? And then you might have a negotiation on top of this contract: I can provide this field, but I cannot provide that one. Oh no, this is really good, we can do it. And from the point where both teams agree on the contract, you can have the consumer team implementing whatever they're planning to do with that functionality while the provider team is implementing the functionality itself, and they can use the contract to keep things in sync. So it allows teams to evolve in parallel while keeping the communication going through the contract. And the good thing is, once you have those contracts in place, from the provider's perspective you know exactly how each service is using your functionality.
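The precise failure messages described above can be sketched in a few lines of plain Ruby. Again, this is a hypothetical toy, not Pacto's code; it just shows how a declarative type description yields an error that names the exact field and the exact mismatch:

```ruby
# A hypothetical contract for the expected response structure: field => type.
USERS_RESPONSE = {
  "name" => String,
  "age"  => Integer
}

# Compare a real response body against the contract, collecting precise,
# human-readable failure messages instead of a vague "couldn't find element".
def validate(contract, body)
  contract.each_with_object([]) do |(field, type), errors|
    value = body[field]
    unless value.is_a?(type)
      errors << "expected #{field} to be a #{type}, got #{value.class}"
    end
  end
end

# A provider response that honours the contract passes cleanly...
validate(USERS_RESPONSE, { "name" => "Ada", "age" => 36 })
# => []

# ...while a breaking change is pinpointed immediately.
validate(USERS_RESPONSE, { "name" => ["Ada", "Lovelace"], "age" => 36 })
# => ["expected name to be a String, got Array"]
```

Compare that second message with "expected to find element, couldn't" from a UI-level test: one tells you which field in which service broke, the other leaves you with 500 possible causes.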
So if you want to change some stuff, you know who you are affecting. You don't have to do what I did and go around asking. You can say: Service X needs this, and Service X is handled by Team B, so you go to Team B and have a conversation. Or you look and say: hey, no one actually uses this thing, I can delete it. That's awesome. And from the consumer's point of view, you know exactly how you communicate with each of your dependencies, and if one of those dependencies breaks, you get a well-defined message, so you know what's going on. The idea of this consumer-driven thing is to put the focus back on the consumers. The only reason a microservice has an API is because someone needs it. So instead of thinking about what cool things this service could provide, you start thinking about what people need from this service. It flips the relationship. And you get good communication, too: if you work in a larger organization, you have this precise artifact that people can run code against and make assertions about, to base discussions upon. It's not like you're just sending emails with JSON pasted into them; you have something concrete to base your discussion on. So that's what I have. Quick recap: microservices are amazing, they're really cool, but they bring a lot of challenges, and one of those challenges is testing. You can test in isolation with this idea of executable contracts, and you can use consumer-driven contracts to evolve your services in interesting ways. If you want to know more, I'll send out these links with the presentation later. And that's all I have. Thank you.