Okay, let me just have a look at my notes. By the way, when I say SOA, I'm not talking about SOA in the Java sense; think of it more as service-oriented design, which Brian will talk a little bit about. Okay, so today I want to tell you a story. It's a story you probably already know, but I'm hoping that today I can give you a few of the plot twists that make it interesting. This is a story about an online marketing company that did lead generation for the education industry, and a single Ruby on Rails app whose origins were humble, but which over time gradually grew into a behemoth of hacked-together functionality, a monolith of duct-taped business logic. Now, this application was crucial to the business, but at the same time it became a real liability. How did this happen, and what did a group of ragtag developers do about it? Well, first let me start with the application itself. So imagine you work the night shift at a 7-Eleven, and you dream about a brighter professional future for yourself. You Google around and end up at one of our landing pages, where you fill out a survey, and we match you up with a school or schools based on your background and your interests. Then we take your contact data and deliver it to the school of your choice, where you spend four years and reemerge newly pedigreed, with a brighter professional future ahead of you. Pretty simple application, huh? And so it was, until business started coming around and saying, you know what, we need to remarket to all of the emails we've collected. So we quickly hacked together some functionality and pushed it out to production. Then business came back to us and said, we need more complex surveys, and lots of them. So we hacked together some functionality and pushed it out to production.
Then business came back to us and said, you know, we love what you're doing, we love the app, but we also need an interface for all of this functionality so we can manage it — and by the way, where are the reports? So we hacked together something and pushed it out to production. Then business came back to us and said, you know what, we need SOAP deliveries. So we hacked that in and pushed it out to production. And so on and so forth for about four years, until we had a real mess on our hands. We had an application that was hard to test, difficult to maintain, full of dependencies, impossible to extend. And then business came back to us and said, hey guys, what's wrong with the site? It's slow, it's buggy, you can't deploy it without something breaking, and it takes forever for you to implement any new features. Could you fix it now, please? To which we said: it's not that simple, see, because we have a big ball of mud on our hands. All right, so after much discussion and weighing of options with the business, we decided we couldn't continue down the same road. We had to kill the beast, in other words, and replace it with something better. Thankfully, we worked at a company with business people smart enough to realize where this was leading, and they gave a bunch of really excellent developers the latitude to do something about it. So in short, we wanted to go from this to this: from a monolithic Rails app to a distributed set of services that communicate asynchronously with each other through a messaging broker like RabbitMQ, or synchronously through RESTful APIs — a Sinatra app, for example. We also wanted to separate reporting data from our operational data store, although in this talk I won't be focusing on how we transformed the data, just the code. Finally, we wanted that Rails app to end up where it began: as a simple survey engine. So how could we effect this transformation?
Well, as we saw it, there were three approaches. The big rewrite, which never seems to work; refactoring the monolith in place, which didn't seem like it was going to work in our situation; or we could strangle the beast. What do I mean by this? The strangler application — the strangler approach — is something Martin Fowler wrote about, I think over five years ago. Think of it this way: picture a vine like the strangler fig growing up around a tree, gradually killing it, until eventually the only thing left alive is the vine, roughly in the shape of the original tree. With the strangler approach, you don't really fix up the existing code. Instead, when you need something new or changed, you build fresh greenfield code. The legacy code can communicate with the greenfield code, but access in the other direction is minimized or negated altogether. In order to strangle our legacy app, there were three main drivers to our approach: separate responsibilities, do things asynchronously — that's the RabbitMQ piece — and make incremental changes. All right, so let me start with separating responsibilities. In order to separate the tangled layers of responsibility in our monolithic application and parcel them out as services, we needed to identify the company's core business needs. This, we found, was an effective way to identify and isolate the services we wanted to build at the code level. Those responsibilities were: qualifying the leads generated by our surveys, delivering those leads to clients, determining how well those leads convert once a client receives them, email remarketing, and budgeting. For this presentation, I'm going to focus on two specific responsibilities that can serve as a microcosm of the whole: lead qualification and lead delivery.
Now the problem, however, was that the business logic for any one of these responsibilities was all over the map. Some of it was tested, some of it was not. How do we identify and consolidate it in order to repackage it? One simple approach was to pair with a legacy domain expert whenever one of us worked on a new service, so that we had somebody there who actually understood the legacy business rules we needed to implement. But more importantly, I think, we spent a lot of time reading and refactoring the existing tests in our legacy code to pin the behavior down. We didn't have comprehensive test coverage, so where we could, we wrote new tests in the legacy code to understand and verify the functionality of the crucial pieces of business logic. In this way, we could be confident that we understood the behavior we needed to emulate in the new system, so that we'd actually be implementing the real functionality. So let me get more specific. Tests document an application's behavior. If you're lucky enough to have tests in your legacy system — and as I said earlier, ours weren't really comprehensive — you can use them as a kind of scaffolding for the behavior of the new system. This specific test file describes some lead qualification behavior for phone, zip code, and email, which we wanted to carry over to the new system. But we didn't want to simply copy and paste the ugly code into our new system. Rather, the tests gave us hooks into important business logic, whose implementation we would then refactor in the new system. Notice that the logic for validating phone, zip, and email all resided in one class, the LeadContact class. In the new service, however, we wanted to apply the single responsibility principle and isolate those behaviors into distinct specification classes, instead of lumping all of that logic into one big class.
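The slides with the actual classes aren't reproduced here, so as a rough illustration only: breaking one big LeadContact class into per-field specification classes might look like the sketch below. The class names and the specific validation rules (ten digits, five-digit zip) are assumptions, not the real business rules.

```ruby
# Hypothetical sketch: each qualification rule lives in its own
# specification class instead of one big LeadContact class.
class PhoneSpecification
  def satisfied_by?(lead)
    # Assumed rule for illustration: ten digits after stripping punctuation.
    lead[:phone].to_s.gsub(/\D/, "").length == 10
  end
end

class ZipCodeSpecification
  def satisfied_by?(lead)
    # Assumed rule for illustration: US five-digit zip, optional +4.
    lead[:zip].to_s.match?(/\A\d{5}(-\d{4})?\z/)
  end
end

lead = { phone: "(555) 123-4567", zip: "90210" }
PhoneSpecification.new.satisfied_by?(lead)   # => true
ZipCodeSpecification.new.satisfied_by?(lead) # => true
```

Each class now has exactly one reason to change, which is the point being made about the single responsibility principle.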
So we had two distinct tests, one for phone and one for zip, in this case. Furthermore, we noticed some duplicated behavior in the legacy tests. This gave us an opportunity to abstract that functionality out into a DSL for qualifying a lead. Now, in the lead industry, qualifying a lead is often referred to as scrubbing it, and you'll see that terminology reflected here. Validating that the email and phone are not blank is no longer hard-coded, nor do we need to hard-code the checks for bad words. But we still allow for customized qualification strategies that can plug in to the larger framework. All in all, this DSL expressed our subdomain clearly and allowed us to easily maintain and extend the service. The scrubber objects, furthermore, could be controlled by a factory class that could build out various instances of scrubbers with different behavior for different operational channels. We repeated this process until all the functionality relevant to qualifying a lead in the legacy app had been recreated in a new service, and then we moved on to the next one. In the end, we had single-responsibility apps, which we viewed as a system-level extension of the object-oriented principle: just as the single responsibility principle dictates that a class should do one thing and do it well, we wanted these services to have a single purpose at which they could excel. In addition to single-responsibility apps, we focused on the quality of the code within them. The last thing we wanted to do was repeat the coding mistakes we'd made earlier. One primary gauge we used to guard against a reincarnation of that code was cohesiveness — of the code and of the applications. A few months ago, Glenn Vanderburg had a really nice blog post in which he discussed cohesive code versus adhesive code.
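The actual DSL isn't shown in this transcript, so here is a minimal sketch of what such a scrubbing DSL could look like. Everything here — the `Scrubber` class, the `require_present` and `reject_words` words, the bad-word list — is invented for illustration, not the real API.

```ruby
# Minimal sketch of a lead-scrubbing DSL (all names hypothetical).
class Scrubber
  def initialize(&block)
    @checks = []
    instance_eval(&block) # evaluate the DSL block against this instance
  end

  # DSL word: the named fields must be non-blank.
  def require_present(*fields)
    fields.each { |f| @checks << ->(lead) { !lead[f].to_s.strip.empty? } }
  end

  # DSL word: the field must not contain any of the given bad words.
  def reject_words(field, words)
    @checks << ->(lead) { words.none? { |w| lead[field].to_s.downcase.include?(w) } }
  end

  def qualified?(lead)
    @checks.all? { |check| check.call(lead) }
  end
end

scrubber = Scrubber.new do
  require_present :email, :phone
  reject_words :email, %w[test fake]
end

scrubber.qualified?(email: "jane@example.com", phone: "5551234567") # => true
scrubber.qualified?(email: "fake@example.com", phone: "")           # => false
```

A factory, as described, could then build different `Scrubber` instances with different blocks for different operational channels.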
He takes a kind of etymological approach: the root of the term cohesion carries the concept of sticking. Things that are cohesive naturally stick to each other because they are alike, of the same kind, because they naturally fit well together — like the pieces that fit together well for lead qualification and for delivery. Cohesion stands in contrast to adhesion. When something adheres to something else, it's adhesive: a one-sided, external kind of sticking, like duct tape holding one thing to another. Clearly, the legacy pieces did not fit well together. They were stuck together — adhesive, not cohesive. Okay, so second: asynchronicity. We also wanted to offload as much functionality as possible to messaging and consumers: long-running Ruby applications that listen to queues for messages. Those messages would represent events that could trigger, for example, the delivery of a lead. The legacy app originally delivered leads to clients synchronously. That synchronous delivery created a terrible user experience, because it was dependent on the response time of the client. When the user selected more than one school, he or she had to wait for the application to cycle through each delivery. Processing deliveries asynchronously, however, completely alleviated that problem. When the user submitted their data, the legacy app could simply fire messages and forget. Now, for example, we could load-balance for high-volume traffic by running multiple instances of the service, all of them competing for messages off of our delivery queue. Varying client response times mattered less, because the deliveries were being processed in parallel, and all of this happened behind the scenes — the wait, for the user, was basically gone. The most obvious impact was the increased speed and responsiveness of the big Rails app. Other effects, however, were less obvious.
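As a sketch of the fire-and-forget idea: on survey submission, the app could build one message per selected school and hand each to the broker without waiting for any client response. The field names, the `lead.captured` event name, and the routing key below are assumptions for illustration; the commented Bunny call shows roughly how such a message would reach a RabbitMQ queue.

```ruby
require "json"

# Build one JSON delivery message per selected school (fire-and-forget:
# the web request returns immediately; consumers do the slow work later).
def delivery_messages(lead, school_ids)
  school_ids.map do |school_id|
    JSON.generate(
      "event"     => "lead.captured", # hypothetical event name
      "school_id" => school_id,
      "lead"      => lead
    )
  end
end

msgs = delivery_messages({ "email" => "jane@example.com" }, [101, 102])

# With RabbitMQ via the Bunny gem, each message would then be published
# to a queue that multiple competing consumer processes drain in parallel:
#
#   msgs.each do |msg|
#     channel.default_exchange.publish(msg, routing_key: "lead.delivery")
#   end
```

Because consumers compete for messages off the same queue, adding capacity is just a matter of starting more consumer processes.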
Now, I want to stress messaging in particular at this point, because messaging facilitates the creation of highly decoupled applications and services. It allows independent applications to communicate asynchronously, in a location-agnostic way. We formatted our messages as JSON, and we decided on a simple and straightforward vocabulary — a contract that the applications on both ends would adhere to. Finally, we took baby steps. We wanted to iterate all of these changes in place. Let's take a look at delivery again. In order to offload delivery from the legacy system, we first wanted to define a boundary between the old and new systems. On one side of the boundary we had the new delivery service, a self-contained Ruby consumer listening for messages on a delivery queue. And on the other side was a router, wedged in at the legacy point of delivery, in our model within the Rails app. Then we added a new field to the schools table so that the router could determine whether or not a school was activated for the new delivery service. We could easily activate schools for the new service, adding them one by one at first and then later in batches, as we grew more confident in the delivery service. And it was simple to roll anyone back to legacy delivery if there were problems — as I said, we were just flipping switches via those indicators. Now, this boundary was permeable for the legacy code: it could publish out to the new service, but the new service could not reach back into it. The router had to adapt, or map, the data for the new service — a simple delivery mapping — and then it published that data on. In this way, we ran old and new in parallel for weeks, even months, before finally letting the new service completely strangle legacy delivery. At that point, we could delete the old code, and we continued to strangle the legacy code over and over again, one service at a time.
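The router-plus-flag mechanism described above can be sketched as follows. The `use_new_delivery` column name, the class names, and the mapped fields are all assumptions; the point is the shape: a per-school switch deciding between the old in-process path and publishing to the new service.

```ruby
# Sketch of the throwaway "router" wedged in at the legacy delivery point.
class DeliveryRouter
  def initialize(publisher:, legacy:)
    @publisher = publisher # publishes to the new delivery queue
    @legacy    = legacy    # the old in-process delivery code
  end

  def deliver(lead, school)
    if school[:use_new_delivery]            # hypothetical schools-table flag
      @publisher.call(map_for_new_service(lead, school)) # new path
    else
      @legacy.call(lead, school)            # old path: rollback is trivial
    end
  end

  private

  # The "mapper": adapt legacy model data to the new service's contract.
  def map_for_new_service(lead, school)
    { school_id: school[:id], email: lead[:email], phone: lead[:phone] }
  end
end
```

Flipping one school over, or rolling it back, is then just a matter of toggling the flag — no deploy required — which is what made running old and new in parallel cheap.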
The router and the mapper were throwaway code: temporary insertions in the legacy system to facilitate communication with the shiny new system. The column we added to the schools table was intended from the beginning to be deprecated as well. They all served a temporary purpose. Okay, so what did we gain from this? Speed, for one. Your system is going to be faster when you run a bunch of your business logic in the background on distributed servers. It's also easier to isolate and work on performance bottlenecks, because problems are framed through a more fine-grained lens: "the lead qualification service is slow," as opposed to "the monolithic Rails app is slow." We also gained maintainability. Small, well-factored applications with a single overarching responsibility are easier to maintain than one monolithic app. Uptime can also be easier to keep up, because you have a more fine-grained sense of where problems are occurring. Testability: testing is easier when your apps are small and have focused responsibilities. And maybe — more than maybe — most importantly, most interestingly, you gain composability. Independent, well-defined services are like Lego blocks, and for us that made it easy to support multiple operational channels. That greater flexibility makes it easier to adapt to changes in business models, which is extremely important for small and medium-sized businesses, because they tend to live in a state of constant change. If your system looks like a chain of nodes, it's simple to add functionality: you just plug a node into the chain. And the existing pieces can be recomposed to form additional operational channels as business needs change. In our case, lead qualification was an entry point to our system, and it could also potentially serve as a router for more than one operational channel.
Furthermore, once we decoupled lead qualification from the monolithic Rails app, the app essentially became just another client of a lead-gen subdomain. And with that, the business could realistically start thinking about expanding into different verticals — employment, or loan financing — or even building out mobile versions of the survey. One of the architectural goals we have is to move towards an event-driven system, where each service publishes an event stream of its activities, and other services may subscribe to that stream, filter it, and act on it. So, for example, when the delivery service delivers a lead, it would publish an event — a message describing that event — which any interested party could subscribe to and do what it wanted with. By and large, you want to minimize the dependencies between services as much as possible, and this is where an event-driven approach really excels. All right, so what did we risk? We thought there was a lot to gain from this approach, but what did we risk? Maintainability, for one — it's the flip side of the coin, because although single-responsibility services and apps are easy to maintain in isolation, we now had coordinated collections of services to be responsible for, and that can cause maintenance headaches. In particular, coordinated deployments are very labor-intensive. And furthermore — this is one of the more important points of the talk — in moving from a monolithic Rails app to a service-oriented architecture, we did not really remove complexity; we simply redistributed it. We moved it from a single application into the relationships between a collection of applications. This was a calculated trade-off, and we thought it was worth it, but your team might decide differently. You have to weigh the pros and cons on a case-by-case basis, I think.
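The event-driven goal mentioned a moment ago — each service publishing an event stream that interested parties subscribe to and filter — can be illustrated with a toy in-memory bus. In production that role would be played by a broker (e.g. RabbitMQ topic exchanges); the prefix-matching subscription and the event names here are simplifications invented for illustration.

```ruby
# Toy in-memory illustration of the event-driven goal: services publish
# events; other services subscribe with a filter and act on matches.
class EventBus
  def initialize
    @subscribers = []
  end

  # Subscribe with a name prefix such as "delivery." (a stand-in for
  # broker-side routing patterns like "delivery.*").
  def subscribe(prefix, &handler)
    @subscribers << [prefix, handler]
  end

  def publish(event_name, payload)
    @subscribers.each do |prefix, handler|
      handler.call(event_name, payload) if event_name.start_with?(prefix)
    end
  end
end

bus = EventBus.new
# A hypothetical remarketing service only cares about delivery events.
bus.subscribe("delivery.") { |name, lead| puts "remarketing saw #{name}" }
bus.publish("delivery.completed", email: "jane@example.com")
```

The publisher never knows who is listening, which is exactly the dependency-minimizing property being described.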
So, ask yourself about the makeup and skill set of your team. Do you have an admin — that's a must for anybody who's working with SOA. Does your team do good integration testing? Does your team test at all? What is the inherent complexity of your domain? If it's not that complex, maybe you just want to refactor the legacy app in place. Then testability — once again, the flip side of the same coin. I want to make some general comments here about testing and leave the more detailed treatment to the talks following mine. In many ways, testing is easier when your apps are small and have focused responsibilities. But testing the behavior of all of your services together, or even a subset of them, can be challenging. So what do you do? Well, you pick your battles carefully, I think. Here I want to shine a spotlight on messaging-based services in particular — for example, the lead delivery service. Basically, we got away with just writing integration tests for individual services, and not trying to write really comprehensive end-to-end integration tests. So far, that's been more or less sufficient, probably because messaging consumers tend to be — or should be — highly decoupled applications. If a consumer does make a synchronous call to another service — say it makes a request to a budgeting service — then we include that: we have the budgeting service fire up and make that part of the service's integration test. But creating integration tests that run the gamut from surveying a visitor to qualifying the lead, delivering the lead, remarketing the email, and checking the conversion seemed like overkill.
Because if your applications are well-defined and low on dependencies, integration tests can be isolated to those apps or services and their immediate dependencies. Okay? One of the big gotchas in moving towards a more service-oriented approach is the overly eager creation of new services. The more services you have, the more complicated your architecture becomes. So guard against the proliferation of services: keep them focused on your core business needs. Here I want to bring in an example from a different company I'm currently working with, which is also in the lead-gen industry. We have some applications that need to share logic for formatting fields on a lead, and we also needed a rules engine that several apps could utilize. It was tempting to simply set up a couple of Sinatra apps with RESTful APIs that anybody could hit. But we're playing with a different approach, one that does not involve the addition of two more services. In short, we put the field-formatting library in a separate gem that any Ruby app can include, and we run local instances of CouchDB on the servers that host those apps. CouchDB acts as a cache for any configuration data the gem may need, and we're considering doing the same with our rules engine. Those local CouchDB instances simply replicate from a master instance that an admin application writes to. In this way, we're able to leverage the simple and powerful replication that CouchDB provides without adding more services, and to keep our services more aligned with core business functionality. So finally, one last risk: the paradigm shift.
When you move from a monolithic Rails app to a more service-oriented architecture, especially when you start using messaging, you have to deal with a paradigm shift, because you no longer have a call stack. The call stack brings with it a whole set of assumptions: with synchronous calls, the caller knows what happens next. In this scenario, those assumptions are gone. If you have an application that fires messages and forgets, something else is assumed to be doing the processing further down the chain. Moreover, you need to start thinking about recovery strategies for when services go down, or when exceptions thrown in a service prevent the processing of a request or a message. And furthermore, conceptualizing and programming towards an architecture where behavior is distributed and can run in parallel, as opposed to an architecture where all behavior exists in one place, requires a real shift in the way you think, and all of that can be difficult. Along those lines, you really want to map the architecture. Because the system can evolve so easily, you need to create a way to visualize, and even manage, the configuration of your system. It could be as simple as generating a map of the domain or subdomains, but basically, you should be treating the composition of your system as an additional layer of the overall architecture. All right, in summary: if you find a similar story playing out for you at work, consider the options we talked about. Gradually replace your monolithic Rails app by strangling it, one service at a time.
Make sure the services are closely tied to core business needs. Process as much business logic asynchronously as you can. Take baby steps, iterating changes in place, and don't be afraid to run the old and the new in parallel for a while. But be smart about it, because service orientation and messaging are not without their perils: you have the maintainability and testability trade-offs, the over-proliferation of services, and the paradigm shift to worry about. These should all be measured against the rewards before you close that legacy chapter and decide to turn over a new page. Here are some helpful links and some helpful books. Thank you. Any questions?

[Audience question, partly inaudible, about separating responsibilities into services versus objects.]

Sure — let me repeat the question. The question is: it seems clear that we should be separating responsibilities, but do we need to go so far as separating functionality out into different services, as opposed to separating it out into different objects within the same application? I think that's a judgment you need to make based on the complexity of the domain, on a case-by-case basis. For us, qualifying a lead and delivering a lead were reasonably complex endeavors, and so we made the choice to separate those out into separate services. The other option was to keep everything as part of the one monolithic Rails codebase and just run it on different servers. But in that case, Rails becomes a hammer, and not everything is a nail.
It was just much simpler to write a small Ruby consumer — a small library — that could handle delivery. It could grow, expand, and stand on its own, because it had one focused responsibility. We didn't have to worry about it dragging the rest of the app along with it. All right, thank you.

[Audience question: you said you had some tests that ran the services against each other?]

Yeah. Let me give you an example from delivery. In the case of delivery, the endpoint of that service was to actually deliver the lead to the client, and we certainly didn't want that to be part of the test. So we started out using FakeWeb — and I would actually now advocate using something else, a relatively new gem that actually spawns a server. I think Luke Redpath wrote it. It'll spin up a little Sinatra server for you that you can configure to respond to different URLs, so you actually have an application out there receiving requests and giving responses, instead of using FakeWeb, which basically monkey-patches Net::HTTP. So that's one case. But as far as, say, delivery communicating with a budgeting service: to spin up the budgeting service, we use something like Tim Harper's excellent — is it background job? [Audience: background process.] Background process, yeah — for running processes in the background during a test run.
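The idea of spinning up a real local server for tests, instead of monkey-patching Net::HTTP the way FakeWeb does, can be sketched with nothing but the Ruby standard library. This is a deliberately bare-bones stand-in for what the gem mentioned above provides; the helper name and the canned response are invented for illustration.

```ruby
require "socket"
require "net/http"

# Spin up a throwaway HTTP server on a random local port, run the block
# against it, and return the block's result. It answers exactly one
# request with the given body, standing in for the client's endpoint.
def with_fake_endpoint(response_body)
  server = TCPServer.new("127.0.0.1", 0)
  port   = server.addr[1]
  thread = Thread.new do
    socket = server.accept
    # Consume the request line and headers up to the blank line.
    while (line = socket.gets) && line.strip != ""
    end
    socket.write("HTTP/1.1 200 OK\r\n" \
                 "Content-Length: #{response_body.bytesize}\r\n" \
                 "Connection: close\r\n\r\n" + response_body)
    socket.close
  end
  result = yield(port)
  thread.join
  server.close
  result
end

# The delivery code under test then talks real HTTP to a real socket:
body = with_fake_endpoint("DELIVERED") do |port|
  Net::HTTP.get(URI("http://127.0.0.1:#{port}/leads"))
end
```

Because the test exercises genuine HTTP rather than a patched client, it also catches mistakes in how requests are actually built and sent.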
Yeah — that's how we handle messaging consumers, using that kind of process manager to spin them up in the background from their init scripts, instead of actually trying to thread them or something.

[Audience question: did you ever run into problems with batching — where instead of dropping a lead from node to node, you had ten things happen in one node that needed to be carried to the next node and treated together, like the N+1 problem in SQL? Or did you just stay away from that by design?]

Yeah, we stayed away from that by design, in general.

All right, we're actually out of time, so let's give Chris a round of applause.