Good afternoon. A quick reminder for you all to be back in this room at 4:30 this afternoon for the grand prize giveaway. There's some really cool stuff there: Raspberry Pis, YubiKeys, and I believe there's a rumor that there might even be a couple of DevConf hoodies in the prize pool, which seem to be the most popular item. I would now like to move on to the next talk and introduce Ralph Bean once again today.

Thank you, thank you. So I think this talk is going to be more of an odd duck than I'd originally expected when I submitted it to the conference, in that it's about microservices, and there have been other microservices talks at the conference, but the majority of them have focused on the infrastructure that goes into supporting that mode of development. This talk is going to be about the actual process and application development of building microservices, some of the problems that come with that, and why you might want to do it. Some of this has also been covered in the JBoss track, but this is not related to JBoss; it's based on the experience of the Fedora infrastructure team, and we build the majority of our services in Python. So there are some similarities there, but also some differences.

So let's talk about technical debt first. What is technical debt? Is it this new financial instrument where you take all kinds of different technical problems and recombine them into something like a mortgage-backed derivative? No, that's not technical debt. What technical debt really comes down to is bad decisions: bad decisions that end up causing you to pay more down the line than you would have paid up front, the same way that financial debt works, where you pay more over time than you originally got out of it.

What that looks like in practice, when you're writing code, usually comes out in terms of code smells, if people are familiar with the term. They're not bugs, not things that are actually broken about the software; they're things about the code that, as you're writing it or reading it and working on it, just smell bad.

Duplicated code in particular is something we see a lot in our services. We have the Fedora Account System, and we have all these services across our infrastructure, and all of them need to cache that account data. But just because we've been lazy along the way, we rewrote the mechanism to cache FAS data in every one of those places. So now, whenever that mechanism changes, we have to go and fix it in all the places, and we do more work than we saved in the beginning when we were hastily copying code around.
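To make that concrete, here is a minimal sketch of the kind of shared caching helper that would remove that duplication. The names (get_fas_user, query_account_system) and the simple in-memory store are invented for illustration; the point is only that every service would import one module like this instead of carrying its own copy of the caching logic.

```python
# A minimal sketch of a shared caching helper; names and the in-memory
# store are hypothetical.  Every service imports this one module instead
# of re-implementing its own cache for account data.
import functools
import time


def cached(ttl=300):
    """Cache a function's results in memory for `ttl` seconds."""
    def decorator(func):
        store = {}

        @functools.wraps(func)
        def wrapper(*args):
            value, stamp = store.get(args, (None, 0))
            if time.time() - stamp < ttl:
                return value
            value = func(*args)
            store[args] = (value, time.time())
            return value
        return wrapper
    return decorator


def query_account_system(username):
    """Stand-in for the real lookup against the account system."""
    return {"username": username, "groups": []}


@cached(ttl=600)
def get_fas_user(username):
    # Every service calls this one helper; if the caching strategy changes,
    # it changes in exactly one place.
    return query_account_system(username)
```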
Long methods and large classes are the things where you're reading code and thinking, oh, this is concise and readable, and then you get to that one function that goes on for 27 pages, and you think: what is this? What does this even do?

Contrived complexity: oftentimes you'll have someone who thinks they're a very, very intelligent programmer, and they want to demonstrate that and immortalize it in their code. So they take what would otherwise be a simple problem and invent a really convoluted way of solving it, to really maximize their use of functional programming or something like that. That can be a problem.

Then there are places with half-implemented features, where you got somewhere but didn't finish. Those cause so many problems later on, because a new reader of the code encounters them and thinks: what does this even do? Am I breaking this? Well, it didn't work when they got there, but they don't know that yet, because it's only half implemented. So it causes lots of confusion and delay.

Poor or missing documentation. Commented-out code. Having a project that doesn't have any automated test suite, or, sometimes even worse, having a project that does have tests but they're constantly in a state of being broken, which creates a situation of distrust where developers don't actually use the tests to improve their workflow. And the final code smell is kind of a social one: if you look out at your colleagues and all of your projects, and there's a project that no one on your team ever wants to touch, but there's no good way to explain why, that avoidance is itself a code smell, even though nobody is specific about it. People just avoid it.

The beginning of technical debt, in a lot of ways, is not knowing how to say no to features. When you receive requests for hundreds and hundreds of features, and you want to get them implemented as quickly as possible, you take shortcuts to make that happen. Sometimes the features don't even make sense as part of that project to begin with, but you want to make your customers, your fellow hackers, happy, and so you do it. You think: I'll take the shortcut and clean up the mess tomorrow. The priority becomes pushing code over having good design and reusability, which is totally understandable, because we all want to be seen as good, productive hackers. We want to push a lot of code. But we can get in our own way with that.

So that's the beginning of technical debt. The end, like I was saying, is that at the end of the dev cycle we have one more fragile thing in production. Even worse, future dev cycles and deployments take longer and involve more fear and uncertainty about what we're going to do. We find ourselves hesitating to deploy what could be a simple new feature because it has all this other cruft around it. And the effect of this on the morale of the dev team is what you really want to avoid, because we want happy, healthy dev teams that can actually function and do more work faster. If you accumulate all this debt over time, you can end up with a culture of despair, a culture of cynicism on your team, and those are things that ultimately undermine your entire project, your entire effort.

So, some numbers on technical debt in our project, and really just one number: lead time, which in the DevOps world is typically measured as the time between
when a commit that fixes a bug or introduces a new feature lands and when it is deployed in production. For our services we can't actually measure that, because we don't have an automated deployment system, so there's no way to map those two events together. But we can get close to it by measuring the time in our git commit history between when a feature is committed and when the release that includes it is cut.

Ideally you want that time to be as small as possible, because as soon as you make a change, you want it in production so you can ensure it works the way you think it does. If a tremendous amount of time goes by between those two events, you wind up in a situation where, when you go to deploy, the release contains changes that haven't really been tested in the real world and that were written so long ago that you don't remember anything about them or any caveats that might be there. And then you break production and things catch on fire.

So here are just some repositories that were on my machine, calculating this modified lead time for us. You see projects at the top that are very popular and actively worked on, things like Pagure and mote, with a two-day lead time on average. That means if a feature is committed, two days later it's in production, on average, and quite often it's actually faster than that: things are fixed and in production the same day. You see a variety of other projects that are in the good camp on this.

Then you get into the bad ones. They're not the ugly ones, just the bad ones in the middle, and there you see some of our projects that are larger in scope, have more complexity to them, and have fewer developers focused directly on them.

And then the ugly: take a look at the bottom. Does anyone know python-kitchen? One person. Maybe nobody knows any of these services, so I should explain more. Kitchen is not actually a service; it's just a Python library that provides a whole variety of utilities for general-purpose programming in Python. It's everything but the kitchen sink, a kind of catch-all bucket. The result is that it's this grab bag where you don't even know what's in it, and if you change something in it, you have no idea what that's going to break, because it's a totally unstructured mess of things. On average it takes 181 days for a commit to make it out in a release of python-kitchen, which is a really long time.

I've highlighted a few things in bold up there, like Nuancier. That is our web application for collecting votes from the community on supplemental wallpapers for Fedora, and its mean lead time of 52 days actually makes a lot of sense, because we only hold those elections every so often, on intervals. Typically we'll have an election, find some bugs, fix them, but then forget to deploy the fixes until the next election comes along. So it has a pretty regular, even lead time.
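As a rough illustration of the kind of script being described here (not the actual one behind these numbers), something like the following can approximate lead time from a git checkout, by comparing each commit's date against the date of the release tag that first contains it:

```python
#!/usr/bin/env python
# Rough sketch of measuring "lead time" as the gap between when a commit
# lands and when the first release tag containing it is cut.
# Run from inside a git checkout; the averaging here is deliberately naive.
import subprocess


def git(*args):
    return subprocess.check_output(("git",) + args).decode("utf-8").strip()


def commit_time(ref):
    """Unix timestamp of a commit or tag."""
    return int(git("log", "-1", "--format=%ct", ref))


def lead_times():
    tags = git("tag", "--sort=creatordate").splitlines()
    gaps = []
    for prev, tag in zip(tags, tags[1:]):
        released = commit_time(tag)
        for sha in git("rev-list", "%s..%s" % (prev, tag)).splitlines():
            gaps.append(released - commit_time(sha))
    return gaps


if __name__ == "__main__":
    gaps = lead_times()
    if gaps:
        days = sum(gaps) / float(len(gaps)) / 86400
        print("average lead time: %.1f days over %d commits" % (days, len(gaps)))
```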
So what do we do about this? There are cultural practices for dealing with technical debt, and we all kind of know them; they're just good dev practices. Don't let these things happen: just be a good coder all the time. There's the Boy Scout rule: the Boy Scouts of America have this rule that if you go to a campsite, you need to leave it cleaner than it was when you got there. Do the same thing with code: when you check in a patch, you should also fix any cruft lying around at the same time. But this only goes so far, because we're under a lot of stress and a lot of pressure to produce code faster.

You can have institutions like code review; everybody should be doing code review these days. No features without a test; that's kind of a given, it's elementary in software development these days. And we've recently started experimenting on my team with instituting week-long or multi-day periods where we all take a break from working on other aspects of our projects and instead focus directly on dealing with technical debt. We'll see how that plays out over time as we do it again and again.

So those are cultural practices, and this gets to the core of what I want to talk about: architectural practices. How can we design our systems, in a bigger way, to reduce the tendency to introduce technical debt into our projects in the first place?

Microservices is a very, very popular talk topic these days. I don't have a graph of it, but the Google search results for the term just begin to go up really fast in 2014, and in 2015 it's through the roof; it's just astronomical. It is sometimes up for debate, or confusing, what the difference is between service-oriented architecture, which is an old term, over a decade old now, and microservices. It's not clear that there really is any fundamental difference between the two; it's really just a restatement and refocusing of the service-oriented architecture literature into this microservices thing. And it sounds good, right? Micro.

So, some characteristics of microservices. What it boils down to is componentization of your software into services. Componentization is the name of the game in software generally: when you write your first program in the sixth grade, it's this really long script that just goes through and does one thing after another, and eventually it gets so long that you realize you need to organize it into functions. Then, as you write more complex software, you have to organize those functions into classes, and those into modules, and so on. This is just yet another degree or layer of componentization, but typically along the line of a network service or a network boundary.

The literature talks about organizing microservices specifically around business capabilities; that's how you try to figure out what should be a microservice and what shouldn't. An enterprise example would be payroll: payroll serves a specific business capability in your organization, so building a service to represent payroll would be one way to do that.

Another aspect is smart endpoints and dumb pipes, which is something I hadn't really encountered until I started doing research for this talk. The idea is that you have really, really intelligent services and clients, but the mesh that connects them, whether that's HTTP and REST or a message bus or something like that, should be really dumb. The counterexample to this is the ESB, the enterprise service bus, or the "egregious spaghetti box",
where you have all these services connected to one another, but none of the services are in control of who they can talk to or what kind of data they consume. Instead there's this box in the middle, the ESB, which has to be administered and which defines all the routing and mapping and the queues between all these different components. That tends to be a single point of failure technically, in that if it goes down everything else is down, but it's also a single point of failure socially, because in order to get any sort of change you have to go to committee and get people to agree to add certain rules to your ESB box to do what it needs to do. In contrast, think of the web, where you have smart HTTP servers and smart browsers, but the connection between them is a really dumb thing; you can go and request any website that you want. It's absurd to think of doing it any other way.

There are other ideas, like decentralized governance, which has this political twist but is really about how you organize your projects, and decentralized data management. In that case Amazon had this famous rule that no two services should share access to the same database directly; every database needs a service in front of it, with an HTTP interface that you then interact through. If you've ever worked on a project, or a couple of projects, where things shared read and write access to the same database, you know it turns into a horrible mess down the road, where again you change one thing and you have no idea if it's going to break the other, et cetera.

A requirement, if you begin getting into microservices, is infrastructure automation. This just means that if you're going to be deploying more services, you need to have some way to automate the deployment of them; otherwise you burn way more time doing that.

And then designing for failure: if all of your services are now distributed, in that they rely on a network boundary to communicate with one another, that boundary can fail, and it will fail. Your services can no longer expect that the function they used to call will return with no problem; instead they have to expect that it may not return at all, or it may time out, or it may fail in ways that weren't expected. So graceful degradation is the name of the game there; there's a small sketch of what that can look like in code below.

And evolutionary design: designing an entire suite of microservices up front is one thing, but if you have them already split apart in the first place, then you can add them incrementally and change your entire architecture that way.
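To make the designing-for-failure point a bit more concrete, here is a minimal sketch of graceful degradation around a remote call. It assumes the `requests` library; the URL and the fallback cache are invented for illustration and are not one of our real endpoints:

```python
# A minimal sketch of graceful degradation around a remote call.
# The URL is hypothetical; `requests` is assumed to be available.
import requests

_last_known = {}  # last good answer per package, so we can fall back to it


def get_package_owner(package):
    url = "https://pkgs.example.org/api/package/%s" % package  # hypothetical
    try:
        response = requests.get(url, timeout=3)   # never wait forever
        response.raise_for_status()
        owner = response.json()["owner"]
        _last_known[package] = owner
        return owner
    except (requests.RequestException, KeyError, ValueError):
        # The dependency is down, slow, or returned something unexpected:
        # degrade to stale data instead of taking this service down too.
        return _last_known.get(package, "unknown")
```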
So how big is a microservice? It's not clear; there's a lot of disagreement about it. One definition, and this is the one I like the most, is that if you can describe the service with one responsibility, if it's responsible for one data asset or one type of data asset, that's a good limitation to put on it. Other people argue that it's primarily about developer cognitive resources: if a developer can't rewrite it from scratch in two weeks, it's too big, or if a developer can't hold it in their head all at one time while working on it, it's too big. But those are maybe flaky or fluffy definitions.

In contrast to monolithic architectures, where you have a handful of very large apps that do a lot of things themselves, when you scale out to having lots of microservices you put a new kind of load on your operations team: the amount of deployment work they have to do. In order to deal with that, and to make sure you don't have that pendulum swinging back and forth between dev and ops, with each one of them annoying the other on a month-to-month basis, there are different patterns and suggestions for how to handle it within your organization.

One is the "you built it, you run it" rule: when you write a new service and deploy it, you need to be responsible for the operations of that service, at least for a certain amount of time. Google does that, for instance: the dev team has to operate the service for the first couple of months of its existence, and then it goes over to an operations team that works on it from there. But if the Nagios alerts go past a certain predefined threshold, operations will hand the service back to dev, who then have to answer its Nagios calls in the middle of the night again. So it ensures a certain degree of accountability.

Another aspect is that once you have all this variety of services, how do you make sure they're all actually working? So application telemetry, monitoring of the actual features in the application, is really important as well. Etsy does this: they have instrumentation on every single feature in every single app, not just at the infrastructure level but in the application itself. Every time someone clicks on the login page on Etsy, they have a graph in collectd for views of the login page and a separate graph for actual logins, so you can take the difference and see how many people see the login page, get confused, and walk away. LinkedIn takes this to the max in that they actually log all of your keystrokes and mouse movements, and how much time you spend on different sections of the page. Yeah, right? But that gives them crucial feedback that they can feed back into the development process.
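Here is a hedged sketch of what that kind of per-feature instrumentation can look like in code. It assumes the Python `statsd` package and a local statsd daemon; the metric names are made up, and this illustrates the idea rather than what Etsy or we actually run:

```python
# Illustrative per-feature telemetry using the `statsd` package (assumed
# installed, with a statsd daemon listening locally).  Metric names invented.
import statsd

metrics = statsd.StatsClient("localhost", 8125)


def record_login_page_view():
    # Bumped every time someone lands on the login form.
    metrics.incr("accounts.login.viewed")


def record_login_attempt(succeeded):
    # Bumped once per submitted form.  The gap between "viewed" and
    # "succeeded" is the graph that shows people getting confused and
    # walking away.
    if succeeded:
        metrics.incr("accounts.login.succeeded")
    else:
        metrics.incr("accounts.login.failed")
```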
In terms of our own applications in Fedora infrastructure, if people remember the old PackageDB, it was a classic example of a monolith for us that we then split into a variety of services. "Package" is a really loose concept, so it had everything to do with all packages. Well, that's too big. What we've narrowed it down to now is that PackageDB should be responsible only for ACLs: who has access to packages and who does not. It used to have application data in there, which has since been split out into things like the AppStream data, and it used to have a mechanism to vote on packages and rate them and tag them, which we split out into another service called Fedora Tagger. As a result, we're able to iterate on PackageDB, which is a really critical system, much more quickly than we could otherwise.

FAS, similarly, has a whole bunch of responsibilities: granting Koji certs, managing people's individual accounts. It also used to be the login mechanism for all of our other web applications, so we split that out onto Ipsilon and are using OpenID now, which decoupled those two.

And Bodhi. How many people in the room know Bodhi, or have used Bodhi? Cool, great. Bodhi is another really important piece of our infrastructure, but think about the things it's responsible for. It's the place packagers go to submit a new update. It's the place QA people come to give feedback on updates. It's the place you go to manage buildroot overrides, which it's not clear really has anything to do with updates at all. And then, especially in the old Bodhi 1, baked into the same application is the release engineering process that actually produces the repos. There's so much going on there that iterating on Bodhi, especially Bodhi 1, was really difficult. Bodhi 2 is a lot more separated out, and easier to hack on now.

Koschei, the new PackageDB arrangement, and FMN: all three of them, I think, are really good examples of peeling off the different responsibilities of a monolith into different services, and we're really happy with the way those are working. Koschei, for instance, is composed of four different services that each have different things they're responsible for in the process of doing that continuous integration.

So we're already headed down this path of doing a microservices split in Fedora infrastructure, and we're well on our way through it. But if we want to take it further, I think there are some prerequisites we would need to meet. We're well along on some of them and not very far on others.

The first is automated tests. We have them; almost all of our applications have them, though some of the older ones don't. But there are limits to what we're doing now, and we can maybe think outside the box and go to some bigger-scale testing ideas for our apps. Without automated tests, you have no idea whether you're going to break production when you push these things as fast as you want to be pushing them.

Rapid provisioning of new hosts for new services is something we actually don't have at all. It's really difficult right now to provision a new service in Fedora infrastructure, and it's probably our biggest pain point. I mentioned in my talk earlier today that a side effect of this is that it encourages our developers to bolt functionality onto existing apps instead of saying, oh, this clearly needs to be its own thing, because once they write the app there's a week to two-week wait period while we set up hosts and basic infrastructure for it.
So that's something we can get a lot better at, I think. As against provisioning, rapid application deployment is something we're actually pretty good at right now. We have fully automated playbooks for maybe half of our applications, where with one run of a playbook it does everything that needs to happen to upgrade a service from an old version to a new one: taking it out of the proxies, taking it out of Nagios, updating the front end and the back end, shutting them both down, doing a database upgrade, and starting everything back up in the opposite order. That is now a flick of the wrist, it's really nice to have, and it has facilitated more development.

Monitoring: again, we have it, but it's not automatic yet. We have to remember to add things to Nagios when we deploy new services, and we've forgotten to do that more than once. That's one of the things that just needs to be automatic so you can move without fear.

And a DevOps culture is something I think our team has had from the beginning, in that we have a really organic relationship between our dev team and our ops team. We hang out in the same places on IRC, we're basically the same people, and we hand information back and forth. We also have an organic relationship with our customer: our customer is the Fedora community, and we are packagers ourselves, so we really get to feel the pain points of our own apps and know which bugs we need to fix to be able to move forward. Without that, you can get into a lot of trouble and mess with miscommunication.

Of all those things we could talk more about and dive deeper into, I want to focus just on testing for the rest of the talk. There are different strategies for testing; here are four, and there are more. We do really well at component testing, the first, and end-to-end testing, the last.

For end-to-end testing we have a tool called rub, which is really just a thin wrapper over Selenium that steps through our entire staging infrastructure, logs into services, tries to post a Bodhi comment, and does all kinds of things. But it takes a really long time to run, just because it takes a long time to step through all of that. It is cool in that we're sure that all of staging is working, but it's a huge limitation because, as you're making changes, you can't just run rub and know right away what's wrong and fix it; you're waiting half an hour for the thing to complete.

Component testing is what people might call unit testing. We actually don't do any unit testing of specific functions, but we do test our whole components, what you might call functional tests: that an HTTP call will return this kind of data, and things like that.
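To show what that component-level, functional style of test looks like, here is a minimal sketch using a tiny Flask app as a stand-in for one of our services; the endpoint and data are invented. The test exercises the whole component over HTTP and asserts on the returned data, without mocking individual functions:

```python
# A minimal sketch of a component-level functional test.  The app is a
# stand-in, not one of our real services; Flask is assumed to be installed.
import unittest

import flask

app = flask.Flask(__name__)


@app.route("/api/packages/<name>")
def package_info(name):
    return flask.jsonify(name=name, owner="somebody")


class PackageApiTests(unittest.TestCase):
    def test_package_endpoint_returns_owner(self):
        client = app.test_client()
        response = client.get("/api/packages/nethack")
        self.assertEqual(response.status_code, 200)
        data = flask.json.loads(response.get_data(as_text=True))
        self.assertEqual(data["owner"], "somebody")


if __name__ == "__main__":
    unittest.main()
```

Nothing internal is stubbed out, so if the route, the serialization, or the data shape changes, a test like this notices.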
In the middle, integration testing and contract testing, we actually don't do any of that right now, but I'll talk about some possibilities for it.

So, contract-based testing. Has anybody heard of contract-based testing? One hand; we talked about it earlier, great. So some people know. This just blew my mind; I think it's the coolest idea.

The idea is this. We have services: service A, under the hood, talks to service B when a request comes into it, and returns data that then goes back to the client. When we're running a test on our system, we don't want to actually have to make a call out to service B, either because we're in a test environment and it doesn't exist, or because even if we could stand it up, that interaction would take a long time and we want our tests to run fast. So the standard practice is to mock that request, which involves building a kind of fake code object that returns fake data. Then you can test the path through your own component, your own service, and move along.

But it generates new problems, because over time service B's own API changes. It's being actively developed, and if its API changes, then that thing you mocked out, where you said it was going to return this data, no longer matches what the service really returns. Your tests are still passing, even though when you push to production you're going to break, and you don't know it. This is called fragile mocks, or brittle mocks, and it is totally our problem; we have it in every one of our applications. It's not that the tests are worthless, but we have to be hyper-aware of which mocks we need to change when we change another service, and that introduces a lot of lag and pain for us.

The gist of contract-based testing is to take a step back and realize that service B, the thing you're depending on, also has its own test suite, and a lot of work has gone into that test suite to make sure it tests all the interfaces and does what you think it should do. Those tests are written in some code and can't automatically be pulled out as-is, but they boil down to service B saying: when I am called this way, I'm going to return this kind of data, and when I'm called that way, I'm going to return that kind of data; assert that to make sure it's true. If you can take the data in those tests and extract it out, you can do some really useful things with it.

So that's the idea with contract-based testing: you take what was the test of service B and ship it as a library, which is then used as a mock by service A and everything else that depends on service B. When you change service B, you presumably change its tests to make sure the new functionality works how you want; you then distribute that library and notice that all your tests failed in service A, and you can fix that before it gets to any further environment. And this extends beyond just two services in isolation: for a whole chain of things, you can figure out all sorts of problems with this strategy.
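Here is a small, hedged sketch of that producer-driven idea in Python; every name in it is invented for illustration. Service B publishes the same request and response pairs that its own test suite asserts, and service A builds its mock from that shared contract instead of hand-writing one that can silently go stale:

```python
# Illustrative producer-driven contract testing.  All names are invented.
import unittest

# --- what service B would ship (e.g. in a tiny "contract" package) ---------
USER_CONTRACT = {
    "ralph": {"username": "ralph", "groups": ["packager", "infra"]},
    "ghost": None,  # B promises "no such user" looks like this
}


# --- service A's code under test (a stand-in for the real thing) -----------
def describe_user(username, fetch_user):
    user = fetch_user(username)
    if user is None:
        return "%s is not in the account system" % username
    return "%s is in groups: %s" % (username, ", ".join(user["groups"]))


# --- service A's tests, mocking B with B's own published contract ----------
class DescribeUserTests(unittest.TestCase):
    def test_known_user(self):
        text = describe_user("ralph", USER_CONTRACT.get)
        self.assertIn("packager", text)

    def test_unknown_user(self):
        text = describe_user("ghost", USER_CONTRACT.get)
        self.assertIn("not in the account system", text)


if __name__ == "__main__":
    unittest.main()
```

If service B later changes the shape of its responses, it updates the published contract along with its own tests, and service A's tests fail immediately instead of passing against a stale, hand-written mock.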
For the other testing strategy, integration testing, I saw a talk the other day by the JBoss developers about Arquillian Cube. Anybody know Arquillian? One hand. It was really cool. It's in the Java world, and we typically don't trade technologies back and forth unless we go researching for them, but I was kind of blown away.

It's a platform and a framework to use in your tests that starts up the services under test, and their dependencies, inside containers, which is cool on its own. So you can actually do a full integration test on your own box of service A and service B. You have a kind of YAML definition that describes the service relationships, so it starts up those other containers and you can test them more or less end to end on your own box, relatively quickly. It's slower than contract-based testing or component testing, because you actually have to make the call to the database and get data back, but it's faster than end-to-end testing for sure.

The really cool thing they did is Arquillian Cube Q, the letter Q. What it does is introspect all of your containers to find all of the ports they've exposed to one another, and it inserts TCP proxies between all of them. Then, in your test, you can say, with a context manager or decorator: in this test, do the interaction you did before, but make the TCP connection close prematurely, or slow it down to the rate of a dial-up modem, or make it do a slow close, things like that. So you can simulate all kinds of network failures and test things you could never test before. We can't even test that in our staging infrastructure. It's a whole new ball game.

From testing, the last thing I'll touch on is: for Fedora, what would a continuous deployment toolchain look like for us? This is just a proposal; it's not something we'll necessarily actually do, but it's cobbling together pieces that we do have, that are already relatively mature, and with a little bit of glue we can get very close.

We have git repos, both on GitHub and on Pagure. We have a Jenkins instance, but we would have to really beef it up, I think, to do this, because we would need to make sure all those tests run in a trusted environment and pass before we move forward. We would need, in this instance at least, to think about using Copr as the place where we actually build the RPMs to deploy to our systems, which is what we would do, but we would need a policy change: currently everything we deploy has to be in Fedora or EPEL, and Copr is not in that subset, so we would need to revise our rules to do that. We would then need some sort of system that just maps our git repositories to the playbooks that do the updates, which would be a very, very small piece of Python code to write, I think. And then we would just have to make sure that, for all those upgrade playbooks I was talking about, we actually have a full suite of them, because right now only half of our services are automatically upgradable. And then, very last, we would have an arithmetic amount of work to get all of our services ready to do this in git and in Jenkins and in Ansible. That is not an exponential amount of work; it's accomplishable, something we could do in a year. So we could do that.
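As a hedged sketch of that "very small piece of Python", something like the following could glue "tests passed for repository X" to "run upgrade playbook Y". The mapping and the playbook paths are invented for illustration; the real list would live alongside our Ansible repository:

```python
#!/usr/bin/env python
# Illustrative glue between git repositories and upgrade playbooks.
# The mapping and paths are invented; ansible-playbook does the real work.
import subprocess
import sys

PLAYBOOKS = {
    "bodhi": "playbooks/manual/upgrade/bodhi.yml",
    "fedora-tagger": "playbooks/manual/upgrade/tagger.yml",
    "nuancier": "playbooks/manual/upgrade/nuancier.yml",
}


def deploy(repo):
    playbook = PLAYBOOKS.get(repo)
    if playbook is None:
        raise KeyError("no upgrade playbook known for %r" % repo)
    # This script only maps "repo passed its tests" to "run this playbook";
    # the playbook itself does the proxy, Nagios, and database dance.
    subprocess.check_call(["ansible-playbook", playbook])


if __name__ == "__main__":
    deploy(sys.argv[1])
```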
So the question is maybe: why bother, right? There's all this hype about microservices; we have this huge, astronomical graph of searches on Google. But we have to do a lot of stuff to get there. We have to have new test frameworks, and this contract-based thing, and the container thing, and we have to have new playbooks, and we've got to set up Jenkins. It's not something you just do on a whim. So what problems do microservices actually solve for us?

One of the big answers is that they are web scale. But that's not necessarily a problem for us; we don't have the kind of traffic coming at us that one of the major infrastructure sites on the web does. If you have a monolith and you need to add more capacity, you have to duplicate the whole monolith across extra app nodes, but if you have microservices, you can scale just the ones you know are under heavy load. Again, though, that's not really our problem.

A problem it may help solve for us, if we carry it out to its full conclusion, is scaling the ability of our developers to reason about the code they're changing, and to move faster in that direction. That's the thing I think we really want to focus on, with Bodhi being the big counterexample: it took so long to get to the version 2 rewrite because it involves so many things to think about.

A last word, a last slide, on complexity. By doing microservices, we're going to get stronger boundaries between our systems and less bleeding of functionality and implementation between them. We get simpler subsystems that are independent of one another, so we can change them and take them down without that much coordination or worry. And we have an opportunity here to get into some technological diversity that we've never been able to have before. We have a rule of Python-only in our infrastructure, not because we're Python chauvinists, but just because we only have a few developers, and if we start introducing Erlang and Scala and Go and all these other things, not all of us are necessarily proficient in those languages and able to fix any problem that comes up. With microservices, we have an opportunity to do that where we didn't before, when things were more tightly coupled.

And lastly, there's this initial investment in code complexity.
When you're writing a monolith, from the get-go you can write the whole thing and put it together, but with microservices you now have to start with these test frameworks that mock out other subsystems that maybe don't even exist yet, and that's a cost you have to pay. The idea, though, is that over time the amount of maintenance work you have to put into these systems, well, it doesn't necessarily go down, but it doesn't escalate the way it does with giant monolithic systems.

So yeah, that is all I have. If people have questions, I'd love to answer them. I'll start with you.

[Audience question.] You mean, like, desktop? Yes, because we have a small number of people, but that's the risk that you take. We have maybe four full-time developers and three full-time sysadmins, and their responsibility is spread across maybe forty or more services. It's at the point for us where it is definitely risky to proceed much further down this road, and it's not clear yet whether it'll be a good win or not. To repeat the question: the question was whether this architecture is suitable for small teams.

Great. Other questions? Dennis.

[Audience question.] Right, right. So in that situation we just don't allow changing of things that you might refer to as a primary key; we're just not allowed to rename things in PackageDB. Yeah, and it's tough to know what you'll need. At that point we definitely didn't think it through that explicitly; I think there was just an assumption that the package name is the package name, and if you were to rename it, you would instead be deleting one and creating a new one, and if you really cared about the data, you would have to manually script a migration in the dependent services.

Yes, Mike. [Audience question.] That is also the pain point we're hitting right now, where the number of interactions, while not exponential, is very large. We haven't embarked down that road yet to figure out whether it will work in practice, but I think contract-based testing is really going to be the savior there. If I can jump back to the slide... where did it go? There it is. There are two models for how to carry this out: consumer-driven contract tests and producer-driven contract tests. What I described is a producer-driven contract test, where service B is the thing providing the service at the end of the day, and we use its tests as the mock and push things socially in that direction. But the one that actually looks to be most prominent in the literature is consumer-driven contract testing, where you make service A write its expectations in some format and then submit those to service B, and service B is expected to include all of its many consumers' contract expectations in its own test suite. Then you know, in service B, that if you ever make a change, you know which service it's going to break, because you're running their tests; you're running yourself against their expectations. Does that make sense?
Yeah, I agree, and that's why I'm leaning towards the producer-driven one, where we take our most-depended-on service and begin working on it, and then we can later enable its dependent services to pull in its contract.

[Audience question.] It could, yes; you would have to write the framework that performs this in some sort of language-agnostic way. For instance, I know of three implementations that do this: there's one in Java, and I think there are two different ones in Ruby. I haven't found a Python one yet. But all of them share the capability of working by integrating directly with the process being tested: if it's in Ruby, it can attach to the Ruby test suite and just do it right there, but for everything else it has a standalone server mode, where you stand it up as a separate process on the box and then make actual HTTP requests to it, and all it does is read a YAML file and return the expected response, or a 404. I used the word "library", but yeah, it could be in some independent format, and probably should be. For us we could get away with doing it as a library simply because we're already mono-language in Python, but that is perhaps an unwise place to start.

Cool, no problem. Yes? [Audience question.] Yeah, I'd be curious to look at it and analyze it, but we don't have any Java or Maven or Gradle anywhere in our pipeline right now, so it would be a big hurdle for us to adopt it. But I'd love to look at it. Yeah, thank you.

Yes? [Audience question.] Yes, we take that on a case-by-case basis; we don't have a general solution. Bodhi is a good example: how do you refer to an update? You can refer to it as the comma-delimited set of all the builds that are in it, but that can change. Then there's the Fedora update ID, this big, kind of cryptic string, but it used to be that it was not set initially at update creation, only later, once the update made it into a push. So there was this period of time where there was no key there. We resolved that by assigning that alias, the cryptic string, at the very beginning, at creation, and ensuring that it never changes. But that's a case-by-case basis, I think. Sena?

[Audience question about learning from existing implementations.] We either implement those kinds of ideas ourselves or just learn from what they've been able to do, taking their ideas. Yeah. I'm not sure there was a question there. Have we thought of reaching out, maybe to the JBoss folks, and getting their ideas? Yeah, I tried this weekend. So yes, that's good; we should do more of that, and there's not enough of that, for sure.

Other questions? In the back. Oh, I'm sorry, we're out of time. If you want to ask a question, we can talk up front. So thank you very much.

Those who asked questions, please go and grab your scarves; you deserved it. Oh, yes, right, I didn't do my job; I said I would do it. You're gonna have to... oh.

[Audience question about whether services are designed as microservices from the start.] Yeah, again, it's a case-by-case basis. Koschei was developed with all four services from the outset; they thought it through first and just designed it that way, and that is rare. Typically you start with things as one service. The literature recommends that you build a monolith first, so you understand what your problems will be, and after that is when you start splitting it up. But you try to go into it with the expectation of keeping things clean. Certainly in Fedora we didn't have that in the beginning.
We had a lot of monoliths staggering around.

[Audience question.] Yeah, they're all in the same git repo, so they all have the same README, but they're all kind of part of the same thing. And they work with Koji as if Koji were just yet another microservice, and Koji, too, is composed of kojid, the Koji hub, kojira, and everything else in it.

[Audience question: are monoliths on the wrong track, or has the work just shifted, for example to a staging environment where in the end you still log on and do the end-to-end test?] Yeah, we had done that for a long time, and we're reaching the limits of it now. Oh, we have to move out? Yes, of course. Yeah, it obviously depends on the complexity.