Hello, everyone. My name is Valentina Rodriguez and I'm the Principal Technical Marketing Manager here at Red Hat. I would like to welcome you to our last session of the day, how to avoid common pitfalls with microservices testing. It's my pleasure to introduce our speaker, Eric DeAndrea, a developer advocate here at Red Hat and an open source contributor to Quarkus and, formerly, Spring. Before we start, a few logistics. If you have any questions during the session, please submit them in the chat window; we will try to cover them after the session, and we'll have around five minutes for that. A recording of all of these sessions will be shared with you in a couple of weeks through YouTube, so it will be available to you on the channel. We also encourage you to join us in the live chat. Now, Eric, I will hand it over to you.

Thanks, Valentina. Good to be here. So as Valentina mentioned, my name is Eric DeAndrea. I'm a developer advocate here at Red Hat, and I'm here to talk to you today about common pitfalls, and hopefully ways to solve them, when you're testing your microservices. The QR code there on the slide deck is a link to where you can go and download the slides. I'm going to post the link in the chat right now, so if you want to follow along, feel free. So let's get started. Hopefully I'll have enough time to finish this talk and get through all the demos.

Before I joined Red Hat, I worked mostly in financial services and insurance, and back in 2015-ish or so we started down this microservices road and quickly found out that any time we touched one service, many, or dare I say all, of the other ones broke. And reflecting on that, I'm pretty sure when I read the microservices manual, that wasn't what was in there. It was not supposed to be that they all break every time you make a change. The microservices manual says that you get deployment velocity and the ability to keep things independent by going to microservices, right? They're supposed to fix all your problems, not cause you all the problems.

So one of the ways to fix that was to do more end-to-end testing, and that's what we did. But we found out that end-to-end testing was super expensive and time consuming, because all the teams participating in whatever the release was had to align around a single time frame. And for us, it was typically like the sixth Tuesday of every month when the moon was full, or there was a blue moon; you know where I'm going with that joke. It was really hard to get all the teams to align to deploy everything into an environment so we could actually do end-to-end testing. And a lot of times the dates when things had to happen were assigned by individuals who, A, were not technical, and B, had no knowledge of the projects or what all the different applications and services did. All they knew was, as a business person, "I've got this application that the users use," but they didn't understand that there are probably 20 or 30 different things that need to be deployed to make that happen.

And as you can tell from some of these experiences, I'm not really a tester. I certainly don't consider myself a test expert, although I do write lots of tests. But as a developer, which is what my background is, that's okay, because testing is something we should all be doing anyway. We all wear multiple hats all the time.
And so everybody needs to be focusing on testing. But it's kind of unfortunate that it's our job, because it's really hard. We'd much rather be writing code and shipping code, because code that you write that never makes it to production has zero value.

If you really start to think about it, people might say that testing microservices is easy, because by definition each service is really small and easy to test in isolation, right? But what we fail to see is that, early on, it's not just whether the individual services work. Well, we do care whether individual services work, but what's more important, and what the business really cares about, is that the system as a whole works. And a lot of times the system kind of looks like this. This is the Netflix architecture diagram; Twitter has something similar. There are hundreds or thousands of services, most of them connected to other things, and it quickly turns into something that can be pretty unpleasant.

And so a lot of people often ask me: hey, did they get microservices wrong? Are they doing microservices right, or were people just over-optimistic? Did they go in with the wrong expectations? Because, like I said, the microservices manual says that microservices are supposed to fix all your problems, kind of like the cloud, right? But just because you're doing microservices doesn't mean all your problems go away; there are definitely lots of other problems that can happen. And so a lot of times your microservices end up looking like this. For those who don't know what this is, I'm a Star Wars fan; this is a rathtar from The Force Awakens. People expect that microservices, like the cloud, are going to solve all their problems, but a bit like these rathtars, they can actually cause lots of unexpected problems. They have lots of tentacles that can touch lots of other things, plus they eat other things too. So because of that, testing services independently becomes kind of challenging, kind of like the dynamics of a system as a whole in general. And in this particular case, when Han Solo got these rathtars because they would solve all his financial problems, he discovered that there are always trade-offs. In his case the trade-offs helped him, because they ate his enemies, right?

So, back to microservices: people think that testing with microservices is easy because everything's really small and decoupled. But what do you think happens with all that complexity in between the services? Do you think that makes the testing easier? If we look at a system that just has a couple of services, and we remove the lines and imagine that everything's running in isolation, and you're only testing those things but not the lines, which is where all the complexity is, does the thing actually work? So if you're really investing in microservices and you want to be successful with them, what's in those lines actually becomes really important. And I think people sometimes get confused because microservices are distributed by the nature of being microservices, and the word distributed starts with the letter D, and so they assume distributed means decoupled and treat them as synonyms. But they're really not the same. Distributed and decoupled are not synonyms. So you're always going to have coupling.
And you may not understand what that coupling is, and that's part of the problem. You could have the most distributed system in the world and still have the most entangled domain model and be really atrociously coupled. And so the challenge starts to be understanding what that coupling is. We often talk about being decoupled, but if there's really no interaction at all, it's not really a system. Kent Beck talks about how to manage the coupling rather than trying to decouple, and to manage it, you really first have to understand it. Because coupling itself isn't a bad thing, and it's usually required in most cases; a system without any coupling, what does that system actually do?

If you think about the podracer from Star Wars, there are two different kinds of coupling happening there: the cable between the racer and the pods, and the electrical link between the pods. Without either of those, the thing isn't going to work. It'll either not go anywhere, or you'll have no control over it; it'll fly wildly around and be uncontrollable. But a lot of times the problem is you don't know where that coupling is, and not knowing where that coupling is, is what makes things fail. And just as a side note on this particular slide: I was trying to look for quotes that related to what I was going to talk about, and I came across this one, but I have to admit I have not seen that movie, nor had I ever heard of it. So I think that movie kind of got what I'm saying as well.

And so, to understand coupling, people often think that they have to write it down. Writing it down is totally not the right answer. I have plenty of stories about this: when you start to build something, especially with microservices, you're going to talk to another thing, so you go read the docs. And the first thing that happens after you read the docs is you go to try to implement it and it doesn't work, or it was wrong, or they didn't understand it, or I didn't understand it, or there was confusion. Writing more documentation is certainly not the answer to the coupling problem, because in most cases design docs are lies, right? Even if somebody tells you what a protocol looks like, they may not be right. Maybe they didn't understand it, or maybe it's out of date. In the technology space, anytime you put pen to paper and publish something, it's pretty much out of date already. Or maybe they thought they understood it when they wrote it, but actually didn't. I remember when I was preparing some of the demos that I'm going to show you: I took something that somebody else had already written, and it didn't work. I downloaded it, went to run it, followed the instructions, and it didn't work, because the person who had built it, or built part of it, had baked in some assumptions about how their environment was set up, and I was coming at it with a fresh environment. It worked on her machine, but it didn't work on mine. So where I'm going with that is, even if there's a universal version of what the word "behavior" means, what I portray to be correct, you might portray to be broken.
And so, like I said, that "works on my machine" problem is all about assumptions: the author of the documentation has knowledge, or domain knowledge, about what they're writing about, versus somebody who's coming into it fresh. And so the point is you kind of have to try these things out to figure them out. You can't just rely on the docs or the theory behind it, but trying things out can also be really expensive, especially when you start to deal with microservices. So I kind of have a paradox here: I said we have to try these things, but trying these things is hard. So how do we actually do that?

The first thing with microservices is you need to get really good at testing. If you're not doing automated testing, writing unit tests and whatnot, writing good tests and executing them... having tests that you never execute is pointless. And, kind of more importantly, it's also a process problem: once you find defects, actually having time to fix them instead of just pushing them off to another sprint. I've been in plenty of places where we find bugs and we just put them in the next sprint, and then it goes to the next one, and it never actually gets fixed. The other thing is you need to get really good at automation. Like I said, you've got this good test suite; it needs to be executed. You need to run it. You need to be able to do automated deployments, rollbacks, CI/CD, code quality, and all that other good and fun stuff that came with the whole DevOps revolution of the last 10 years or so.

And so, for a visual, I like to talk about the test pyramid. The test pyramid is a nice model that lets us visualize the different kinds of tests and the value that they provide. The things at the top of the pyramid are the end-to-end tests. You would love to have lots of those, but you probably don't, because they're really expensive and hard to run. Especially if you have hundreds of microservices, it's going to get really complicated and expensive to set up, but they're really valuable because they give you very high confidence in the system as a whole. At the bottom, you've got your unit tests, which are really easy to write and cheap to run, and you want a lot of them. But again, they're only testing the dots and not the lines, and the biggest problem is they don't actually give you a whole lot of confidence in the system as a whole. Remember that first slide: every time we touch our one service and put it into production, everything else breaks. Well, all my tests were green, and all the other systems' tests were green, so why did they break? And then in the middle, you have integration tests: a little more than your unit tests, but less than end-to-end. They're a little harder to write than unit tests, but easier than end-to-end tests. And honestly, there used to be a pretty clear distinction between unit and integration tests, but today, with technologies like Testcontainers and Quarkus and Spring and different frameworks, the line between what's a unit test and what's an integration test starts to blur. So it gets a little blurry. But what happens at the unit test layer?
As a developer, when we start building stuff, we tend to make things that pretend to be something else and fit in amongst the real things. Those we typically call mocks or stubs, and you kind of need to have them, because when you're running your unit tests, you're not making live calls to other things. But a lot of times we bake what we think we know and understand about that other system into the mock, making it look like what we think that service looks like. And the tests are green. But did you ever stop to think that maybe you're not the best person to know or understand what that other code looks like or how it behaves? Or, going back to "documentation always lies," is the documentation you read even correct? So what typically ends up happening is: our tests are green, the other system's tests are green, we put it into production, and the assumptions we baked into our mock aren't actually what the other thing does or how it behaves. And we end up broken in reality. Hopefully, with good automation and some end-to-end testing, you discover this before it goes to production, but a lot of times we don't find it until production, and then hopefully you've got good automation in place to roll back. (There's a rough sketch of what one of these mocks looks like just after the demo overview below.)

So where I'm going with this is: mocks that you write are never really perfect, or they weren't built right, or the documentation that you read to build your mock was wrong, or we, well, I, misunderstood or misinterpreted the documentation. You remember the old telephone game, where you say something to somebody and it goes around the room, or we used to play it on the bus, and by the time it gets back to you it's something completely different from the statement you made at the beginning. So the mock gets you some of the way, but not all the way. At some point, somebody realizes it was a fake. In this case, Luke is kind of short to be a stormtrooper, so Princess Leia figured out, hey, something's not right here, you're not a real stormtrooper. And when that happens, you end up in the garbage.

So, if you think that can never happen, I'm going to prove it to you with a quick demo. We're going to stay with our Star Wars theme. There's a link up at the top there and a QR code to the repo on GitHub, which apparently, from what I've been seeing, is actually down at this point. Luckily, I have everything local; I don't need to talk to GitHub for anything during the demos. Staying with our theme, we're going to talk about carpets, and wookies kind of look like carpets, so we've got a wookie carpet system here. Well, we're not going to build it, but we're going to show how things can go awry. So I've got my application here, just to show you what it looks like; the application itself isn't anything complicated. Let me make this a little bit bigger, it goes a little off screen. We have a couple of different microservices: I order a brown carpet, we place the order, and there's a shopper service which manages the cart. Then, when you need a new carpet, you talk to the weaver service to say "weave me a carpet." The weaver obviously needs wookie fur to weave a carpet, so there's a wookie tamer service out there whose job is to go get me a wookie and shave the wookie.
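As referenced above, here is a minimal sketch of the kind of in-memory mock being described: our assumptions about the provider are baked directly into the test, and nothing ever checks them against the real service. The names (WookieFurClient, CarpetWeaver, and so on) are hypothetical and not the actual demo code.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

// Hypothetical names -- just the shape of an in-memory mock with our
// assumptions about the provider baked into it.
class CarpetWeaverTest {

    // What we *think* the wookie-tamer service returns
    record WookieFur(String colour, long orderNumber) {}

    // REST client to the tamer service (replaced by a mock in this test)
    interface WookieFurClient {
        WookieFur requestFur(String colour);
    }

    // The code under test
    static class CarpetWeaver {
        private final WookieFurClient furClient;
        CarpetWeaver(WookieFurClient furClient) { this.furClient = furClient; }
        String weave(String colour) {
            return furClient.requestFur(colour).colour() + " carpet";
        }
    }

    @Test
    void weavesABrownCarpet() {
        WookieFurClient furClient = mock(WookieFurClient.class);
        // Our assumption about the provider's behavior lives right here,
        // and nothing ever checks it against the real service.
        when(furClient.requestFur("brown")).thenReturn(new WookieFur("brown", 1L));

        assertEquals("brown carpet", new CarpetWeaver(furClient).weave("brown"));
    }
}
```

Note that the JSON wire format never appears anywhere in this test, which is exactly why a change to it can't make a test like this fail.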
And in some cases the tamers don't actually survive, because, I don't know if you know this, but wookies don't really like to be shaved. They tend to frown upon that, so sometimes the tamer doesn't survive the shaving. And then the fur comes back to the shopper, and I get a nice brown carpet. I can order white carpets too; the different wookie colors occur naturally in nature.

So what does my system actually look like? I've got a couple of different Quarkus applications here. Part of doing this demo live, especially when it's one person, is that I've got two different projects to work in, two different services: the weaver service is the consumer, and the tamer service is the provider. People sometimes get confused, especially because it's just me talking and I've got two different projects in IntelliJ here, so to make it obvious which role I'm playing, I've got some hats; we all wear different hats. You can see this one means I'm the consumer. And in the consumer service, you see I've got a Quarkus application; you can see some of the logs here as I was playing with the system. All my tests passed, because Quarkus continuous testing is already up and running. And then I've got my tamer service. As I go over to my provider, I can switch hats, so now you know which role I'm playing. Same thing: you can see it fulfilled some of the orders, and all my tests are green as well.

So here's what I'm going to do, and I actually have to go back and switch my role here back to the consumer in a moment. When I was coming up with this, I thought it was kind of a trivial example, but it turns out this has actually happened to me in real life. So what I'm going to do is... actually, I am in the provider, so I'm going to switch my hat again. So I'm in the provider; my hat just fell off my desk. I'm going to go in, and you can see the color is spelled funny. My colleague who originally started down this path with this application is from the UK, so this is how you would spell "colour" in the UK, but I don't really like it, I'm not from the UK, so I'm just going to change the spelling here. I'm going to rename "colour" to "color". As soon as I save it, Quarkus picks that up and all my unit tests are green. My application is good, because if you look at the tests, when you write your unit tests you're just using Java code, right? So when I did the refactoring, it just happily renamed things, and Jackson happily did the marshaling for me, because it's a unit test, and everything still works. (A rough sketch of what that rename looks like in code is below.)

Now I'm going to go back to my application and order another brown carpet. You can see I actually got a totally indescribable carpet and a sad wookie, because when I asked for the color brown and I get the fur back, you notice my change here: the payload has now changed, and so the communication, the lines between the services, is broken. And now my system as a whole is broken. So you might think that's kind of a contrived or trivial example, like I said, but it has actually happened to me in production.
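To make that concrete, here is a hypothetical sketch of the provider-side type in the tamer service (not the exact demo code). The only change is a Java field rename, but because Jackson derives the JSON key from the field name, the wire format changes too, and only the consumer notices:

```java
// Hypothetical sketch of the response type in the tamer (provider) service.
public class WookieFur {

    // Before the refactor the field was "colour", so the service returned:
    //   {"colour": "brown", "orderNumber": 1}
    // After an IDE-wide rename of colour -> color it returns:
    //   {"color": "brown", "orderNumber": 1}
    public String color;      // was: public String colour;
    public long orderNumber;

    public WookieFur() { }

    public WookieFur(String color, long orderNumber) {
        this.color = color;
        this.orderNumber = orderNumber;
    }
}
```

The provider's own unit tests only ever see the Java field, so the rename is invisible to them; the consumer, still expecting "colour" in the payload, quietly gets nothing back for that key.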
And not just once, but many times in the past, so it is kind of a real-life example. So now the question is: how do we fix it, and how do we stop that from happening? Because hopefully we would have found that in our staging environment, not after we pushed things to production. So let's think about how we might solve that using an example that everyone's probably familiar with: how do you test a fire alarm, right? Burning your Death Star down, and as the fires are smoldering and the X-wings are flying away and high-fiving each other, if you listen closely you hear the beep, beep, beep of the fire alarms going off. And that's not a good way to test your fire alarm, because it's really expensive, right? Yes, the fire alarm works, but you have no Death Star anymore.

In reality, you hope that the fire alarm manufacturers test it like this at least once, or every so often, but it's really expensive and very hard to reproduce, so you probably aren't going to do it very often. Aha, but the fire alarm manufacturers already anticipated this need. So this fire alarm, which is not a Death Star, although it does look kind of similar, is instrumented to allow for unit testing. When you push the button, it makes a noise, and that's a unit test. But how much confidence does that really give you that, given a fire, it's actually going to go off? Because the last time I checked, when a fire starts it's not going to take its little fingers and say, "Hey, I see this fire alarm here, maybe I should push the button to warn the people who are sleeping upstairs that I'm here." Probably not; that's not the interaction we expect in real life. So in real life, somebody had better be doing more testing than that.

I don't know if people have seen this before, but particularly in businesses or institutions or universities, what they'll do is go around with this little thing, it's like a cup on a stick: they extend the stick, it goes up to the fire alarm, the cup goes around the alarm, and then they push a button, a little puff of smoke or some heat goes up, and hopefully the fire alarm goes off. And hopefully they told everybody in the building before they did this, otherwise they're going to cause mass pandemonium. But that's kind of what contract testing a fire alarm looks like: you're actually providing the interaction that you would expect to happen and asserting that you get the result you would expect back, rather than just pushing the button and hearing the beep. Because all the button tests is that the audio thing works, which is good, right? If you push the button and it doesn't beep, then there's some kind of malfunction with the speaker, and if you can catch that before you actually install it, by all means; that's what the unit test is for, finding broken parts that don't work. But it's not going to help you with the system as a whole, or give you confidence that, given the interactions you expect in the real world, it's actually going to work. So, back to the testing pyramid: we can try to see where these contract tests fit in.
And they sit somewhere in between the end-to-end tests and the integration tests, but in all honesty they somewhat break the model a little bit, because they give you really high value and high realism that the system as a whole is okay, more so than the unit tests, but for the effort of a unit or an integration test. So they kind of span the whole model.

And so if we come back to the mock example: what the contract test does is it sits in between our code, which I'm calling "my code" on the left here, and the code that we wrote the mock for before, where we were wrong in our assumptions. Our code tests against the contract, which produces a mock for us, and so does their code. So both sides of the system are testing against the same contract, or the same mock, which is an output or a byproduct of the contract itself. And when our tests pass and their tests pass, we have pretty good confidence that reality is okay. We don't even really have to think about what the other code looks like. So if they do something funky, like in this case where I fixed the typo, their tests are going to break. Our tests are going to pass, but we have pretty good confidence that reality is broken, so we can resist the urge to actually deploy something into an environment, because the contract tests have told us that reality is broken. And if we didn't understand the documentation right, or didn't understand the contract, and what we did doesn't match it, our tests are going to break and their tests will be fine, and again we'll have pretty good confidence that reality is messed up.

Excuse me. So let's look at Pact as a contract testing framework, since that's the subject of today's talk, and it's honestly probably the most popular contract testing framework out there. Let's revisit that earlier situation where we made the change, all the services' tests passed, but our system as a whole was broken, and, one slide too far, we can see how using something like Pact and contract testing can help us without having to stand up the entire system and perform an end-to-end test.

Let me put my hat back on, so now I'm the consumer again. Our reality is still broken, right? If I order another brown carpet, we're still broken here. So what I'm going to do is go into my carpet resource test, and you can see I've mocked the wookie service for getting the fur. I'm actually going to remove that mock, and then I'm going to add some annotations that come with the Pact framework. One interesting thing about the Pact framework is that it's not specific to Java: it has lots of different integrations with other languages like JavaScript and Python, and I think Rust or Ruby or something. You can go read the documentation, but it's not just a Java-based framework. And you can see my tests are already trying to rerun and failing, because I haven't finished doing what I'm going to do. So what I can do now, because I still need a mock to test against, is... after I resolve some imports here... you'd think with AI today, and given that I've done this talk so many times, IntelliJ could figure out what imports I want, but unfortunately that's not the case. (A rough sketch of what this consumer-side test ends up looking like is below.)
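For anyone following along, the consumer-side test being built here looks roughly like the following with the Pact JVM JUnit 5 support. This is a minimal sketch, not the exact demo code: the provider/consumer names, the /fur path, and the field names are taken from the talk but may differ in the real repo, and in the demo the Quarkus application under test makes the HTTP call rather than the test itself.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.core.model.RequestResponsePact;
import au.com.dius.pact.core.model.annotations.Pact;

import static org.junit.jupiter.api.Assertions.assertEquals;

// Consumer-side sketch: the weaver declares what it expects from the wookie tamer.
// Running this writes a pact JSON file that can later be published to a broker.
@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "wookie-tamer", port = "8096")
class CarpetResourceContractTest {

    @Pact(provider = "wookie-tamer", consumer = "weaver")
    public RequestResponsePact shaveAWookie(PactDslWithProvider builder) {
        return builder
            .uponReceiving("a request to shave a wookie for brown fur")
                .path("/fur")
                .method("POST")
            .willRespondWith()
                .status(200)
                .body(new PactDslJsonBody()
                    .stringType("colour", "brown")      // any string is acceptable; "brown" is only an example
                    .integerType("orderNumber", 1L))    // any number is acceptable
            .toPact();
    }

    @Test
    @PactTestFor(pactMethod = "shaveAWookie")
    void weaverCanOrderFur() throws Exception {
        // In the demo the Quarkus app under test is configured to call port 8096;
        // here we hit the Pact mock server directly to keep the sketch small.
        HttpResponse<String> response = HttpClient.newHttpClient().send(
            HttpRequest.newBuilder(URI.create("http://localhost:8096/fur"))
                       .POST(HttpRequest.BodyPublishers.noBody())
                       .build(),
            HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
    }
}
```

Running a test like this is what produces the pact JSON file and the mock server behavior described next.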
Give me a second and then we'll finish it up. So now I'm going to refactor my test a little bit. Now I have this test that says it's actually a test for the contract up above, which I just created. So now I have a contract: I've defined a contract between the provider, which is the wookie tamer, and the consumer, being the weaver. And you can see Quarkus has just picked it up, and now all my tests are green. What this actually does is stand up a mock server running on port 8096, which is what my application is configured for behind the scenes. So rather than just using a Java in-memory mock, my application, when it runs the test, is actually making an HTTP call to port 8096, and the Pact mock server is returning a mock response for me. And what that mock says is that upon receiving a request for this wookie fur, given this HTTP path, a POST with some headers, it's going to respond with an HTTP OK, with some headers and a body. The body has a string field named "colour". Now, what's in that string doesn't really matter, because we haven't specified any rules; we've only provided an example of what a value might look like, and we've left it pretty loose: pretty much any string. And then we have an order number, which can be any number.

If you look at what Quarkus did when it ran the tests, I just have to reload it from disk here, it now has this JSON file, which is the actual contract between the consumer, being the weaver service, and the provider, being the wookie tamer. And it has the interactions, which are basically a JSON representation of what my Java code said. What's even more interesting is that now I can publish this contract. When you have shared resources... okay, it's great that I have this contract, but how do I share it with other teams or with other applications? I can actually publish this contract to a broker; the Pact framework comes with the concept of a broker. So if I come to my broker, they actually have a free SaaS-based broker, which is what I'm using here, I now have, let me make it a little bit bigger, an unverified contract, since I've only published it from the consumer. It has the same interaction that I've described here, but it has yet to be verified.

So now, if I switch hats, hopefully my hat doesn't fall on the floor, and I go over to the provider, I can create a new test. If I go in here and say new Java class, I'll call this the fur resource contract verification test, and rather than watch me type... you can see it's actually quite boilerplate. If you go to the Pact website and look at their documentation, at what a provider test looks like and how to write a contract verification test, it's pretty boilerplate. The only extra things we've added are specific to Quarkus, like the port that the application is running on in the test, but everything else comes right out of the box. And you can see, I actually started it without the right credentials, let me start that again... when I started the service, it's running the tests. You can see I've added this annotation to tell it where the broker is, and so when it runs the test, it's actually going to go and fetch the contract from the broker. (A sketch of what this provider verification test looks like is below.)
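Again, as a hedged sketch rather than the demo's exact code, a provider verification test with the Pact JVM JUnit 5 support generally looks something like this; the broker URL and the local port are placeholders, and credentials would normally come from environment variables or configuration rather than being hard-coded:

```java
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.TestTemplate;
import org.junit.jupiter.api.extension.ExtendWith;

import au.com.dius.pact.provider.junit5.HttpTestTarget;
import au.com.dius.pact.provider.junit5.PactVerificationContext;
import au.com.dius.pact.provider.junit5.PactVerificationInvocationContextProvider;
import au.com.dius.pact.provider.junitsupport.Provider;
import au.com.dius.pact.provider.junitsupport.loader.PactBroker;

// Provider-side sketch: fetch every contract for "wookie-tamer" from the broker
// and replay each interaction against the locally running service.
@Provider("wookie-tamer")
@PactBroker(url = "https://example.pactflow.io")   // placeholder broker URL; credentials come from env/config
class FurResourceContractVerificationTest {

    @BeforeEach
    void setTarget(PactVerificationContext context) {
        // Point the verifier at the service under test (the port the test app runs on)
        context.setTarget(new HttpTestTarget("localhost", 8081));
    }

    @TestTemplate
    @ExtendWith(PactVerificationInvocationContextProvider.class)
    void verifyInteractions(PactVerificationContext context) {
        // One invocation per interaction found in the fetched contracts;
        // results can be reported back to the broker (pact.verifier.publishResults=true).
        context.verifyInteraction();
    }
}
```

Each interaction fetched from the broker becomes one test invocation, which is how the "missing keys" failure shows up in the next step.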
And you can see it's actually failing now, because it says it's missing the following keys, and it's telling me about the difference. And if I go back to the broker now and refresh it, it's actually recording the failure on the broker, so I can come in here and see that there's an actual verification failure. I could even take it a step further: there's this "can I deploy" feature, which you could use in the UI or script as part of your CI/CD. You can run a check that says, hey, I've got this version of this consumer and this version of this provider: are they compatible with each other, do the contracts verify? And the broker would tell me whether they do or not.

And so now I know I'm broken. So if I come back to my provider and I change this back, right, and I save, it's immediately going to go back to the contract and rerun, and now all my tests are green. And if I come back over to the broker and refresh, my interaction is green as well. So now I know I have a contract: it's been created and published by the consumer, and it's been verified by the provider as well. So now my reality is actually good: I can deploy these things, and I have pretty good confidence that the interactions between them are good. Cool.

So let's move on. I think I've got about 10 minutes or just under 10 minutes left, so I may end up having to skip the next demo, but we'll see how much we can squeeze in. One thing that you should understand here is that I started in the consumer, right, which is kind of backwards from what most people think. I created the contract with the consumer, the consumer published the contract, and then the provider's tests failed when it went to verify. So that means that consumers can break providers' tests. That's a true statement, and sometimes people have a hard time understanding it. But that's what we call consumer-driven contract testing, which is different from provider-driven contract testing, which I'll talk about in a second.

In consumer-driven contract testing, everything starts with the consumer. The consumer's tests create, or declare, the contract, and the contract provides a mock, as you saw, for the consumer to use in its tests. Then the provider can verify that it actually provides the capabilities described in the contract. This is kind of cool because it's now like test-driven development, but between teams, because if the consumer is driving the contracts, the provider doesn't need to build things that consumers aren't asking for, right? So, some pros of this approach: you can do much deeper and richer semantic testing, and, like I said, you can develop to the contract instead of developing speculatively. But some cons: Pact has its own language and specification, so if you're doing OpenAPI, for example, you can't repurpose the OpenAPI documents and reuse them. And a big one is that the provider needs to know who its consumers are. In some instances this is probably okay, especially if you're developing a system within an organization. But if you're developing, say, the weather service, you don't know who your consumers are, and you can't have your consumers give you their contracts. I as a consumer couldn't say to the weather service, hey, I want it to be sunny on Thursday; it just doesn't work like that. And, like I've shown, the consumers can break the provider's CI process.
So it's kind of tempting not to want to invest in that, especially if you're already doing OpenAPI and have specs that consumers use. You can still do that, but it takes you more into a provider-driven contract testing approach rather than consumer-driven. And like I said, it's not always the most appropriate for your particular use case. If you're a provider, eventually you have somebody using your service, and so you need some kind of connection between the two, which is the contract; a contract without any kind of programmatic connection is kind of useless. Specifically, OpenAPI was meant to be human-readable, not necessarily machine-readable. So you need to make sure that the provider matches the contract and there's no drift between the contract and the tests that verify it. With Spring Cloud Contract, for example, you create the contract with a Groovy DSL or YAML or something and then you write tests against it, so there's the potential for drift: if you update the provider's code but don't update the contract, you can start to get drift within the same application. So you don't want the contract just to be on paper; you want it to be generated and consumed more dynamically somehow, with things like Prism and Schemathesis, or using Microcks to store these contracts and share them with everybody in the system. One thing that approach gives you is a familiar format, because you're probably already familiar with OpenAPI, and you don't need to know who your consumers are. But you don't really get any insight into how your API is being used, unless you've built that into your system somehow. And it's hard to do deep semantic testing: if you think about OpenAPI, it's really mostly about type safety, describing what a structure is, but it can't say, "hey, if a field in my payload has this particular value, then that actually has meaning, and the provider or the consumer should behave differently when that happens."

So, unfortunately, I don't think I'm going to be able to get to my next demo. Like I said, OpenAPI is not really enough because it can't give you those semantic differences; kind of like something that looks like a stormtrooper but behaves like Luke, because it is Luke, OpenAPI would really only tell you that it's a stormtrooper, not that its behavior is different. So, unfortunately, I'm not going to be able to get to that next demo that shows the deep semantic testing, because we're out of time, and usually this talk is a little bit longer.

So, just to sum up here: we all kind of understand that microservices testing is hard, that end-to-end tests are valuable but probably limited because of the time they take, that mocking is definitely not going to be enough, and that hopefully contract testing can help. And so with that, thank you for having me here. Hopefully you learned something, and I'm happy to take any questions.
Like I said, the source code for the demos is there. There is a third demo where I do some deep semantic testing and we talk about pink wookies, because pink wookies don't exist naturally in nature, so somebody needs to dye the wookie fur, and it's not going to be the wookie tamer, because they're barely escaping the shaving with their lives. So thank you so much.

This was great. Thank you so much for the session, it was a fantastic presentation; I really enjoyed the demos. Thank you so much, everyone, for joining us today. This is our last session of the event, so stay tuned for the recordings, which will be on YouTube, and have a fantastic rest of your day. Thank you. Thank you.