Today we're going to talk about microservices, but before that, something about me. I'm a consultant from ThoughtWorks, Singapore, a full-stack engineer, and I also co-founded IdeaBoards, which is a retrospective tool. These are my Twitter and GitHub handles. Enough about me; what's in it for you? We're going to talk about what microservices are, why microservices, how microservices, and when to use them. How many of you have heard about microservices? How many of you have actually used microservices, or written code for them?

So let's define microservices. There's no formal definition per se, but before jumping into that, let's talk about what a service is. It's just an implementation of a contract: you have some contract, and the service implements that contract and tells you, "if you call me with these parameters, I'll do something for you." What happens if we attach "micro" and make it a microservice? Micro, as the name suggests, means it should be small; we'll talk about what "small" means. It should be independent and self-contained; it should work on its own. It should be composable: each service should work together with all the other microservices. And it does one thing, and it does that one thing well. That's the key.

But what is the right size? When we talk about microservices, there's a lot of conversation around what the size of a service should be. Some people measure size in lines of code: if it's beyond 200 lines of code, it's not a microservice. Some people think about it in terms of the team: if one person could develop it over a period of time, it's a microservice; or if a group of people could work independently on a service, it's a microservice. For us, working with a client, what really worked was defining those services in terms of the domain. Each service does just one thing, and it does that one thing right.
That one thing could take 100 lines of code, or it could take 1,000 lines of code. It ties back to the Unix philosophy: you write small programs, like sed and cat, and then use pipes to connect them and make them work together. In the case of microservices, HTTP is the new pipe: microservices are easily exposed over HTTP, and each service does its own job and passes the result on to the next service. It also goes back to object-oriented principles; there was a talk earlier on the SOLID principles, so take single responsibility: the service does just one thing. It has low coupling; it's not chatty with too many other services. At the same time, it's cohesive, it's small, and it does that one thing well. And as the saying goes, a monolithic application of 100K lines of code is nothing but a hundred 1K-line applications waiting to happen. That uses 1K lines of code as the measure of a microservice, but you get the idea: rather than having one big monolithic application, break it down into smaller pieces so it's much more manageable.

So why do we use microservices? What's the key objective? Rather than answering that question directly, I'd like to share my story, my journey of using microservices in one of our client projects. This happened a couple of years back, when we started an engagement with a client. I'll give you a little history on the client. It's a 90-year-old business, a social gaming company, and it was quite funny to hear on the first day that their customers were literally dying. Not dying out, but literally dying, because of their age. So, yeah. Over those 90 years they accumulated a lot of legacy code and acquired a few companies.
The tech stack was a mix of VBScript, VB6 forms, Oracle, and all kinds of crazy stuff. Because of that, it wasn't flexible; the cost of introducing any change was n-fold. Even adding a simple feature around a customer meant changing three or four different systems and getting that through, so it was really, really painful. All these apps were fully functional in their own silos, because of the various acquisitions and mergers, but each app had concentrated complexity: a lump of code sitting there that nobody knows how it works. When we started talking more about it, that's when we came up with the idea that we needed to build something small, independent, and composable that does one thing, so that if four different applications want to take payments, there's just one service doing payments. If they want to store customer information, there's just one service holding that customer information. That's where our journey of building microservices started.

What we've achieved so far: 10 microservices doing a bunch of things, 25 VMs in production, 60-plus VMs across other environments like QA, test, and performance, and one-click deployments across all those environments. And you can guess: if it's one-click deployment, who does the deployment? The product owner, the QAs, anybody. There's a funny story that somebody on the client side wanted a dog to do the deployment, just press the button. For us, each of these 10 microservices was self-contained: each had its own DB and its own contracts, and they ran in their own processes, talking to each other over HTTP. But how did we start? We didn't start with 10 microservices on day one. Try to solve small and valuable problems first.
We started with a small piece of functionality, like the customer database, and tried to migrate that first. And we started with plain old services. We didn't start on day one saying, let's go with one resource per service, or one responsibility per service. We started with plain old services and then realized some things could be moved out. When a service started doing too much over time, we refactored services the way you refactor objects: if a service's responsibilities had grown, we extracted them into smaller services. As I was telling you, the domain is social gaming, and on the web it typically has three or four components, so that's where we started with the plain old services: the catalog (the game catalog), customers, orders, and payments. We slowly realized the customer service was talking to the legacy systems too much, so we extracted all of that into a separate microservice that talks to the legacy database, with the idea of eventually throwing it away. Orders grew too much, so we split orders in two: the main order service is now responsible for just taking orders, and there's a separate service for processing them. And there's a separate service for results, because it's a gaming company: after you play anything, you get results at the end, so there's a results service.

From a high level you might still think, well, that just sounds like a plain old service. I've had a lot of debates about why this is a microservice and not just a well-designed service; people kept asking what's "micro" about these services. For us it was mainly the single-responsibility part: the customer service does everything related to customer data, its boundaries are restricted to just the customer, and it never oversteps those boundaries.
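To make that order split concrete, here's a minimal plain-Ruby sketch. The class names and the `/order_processing` path are hypothetical, and the HTTP client is stubbed so the example stays self-contained: the point is that the order service only takes orders and hands processing to the separate service.

```ruby
require "json"

# Order service after the split: it only takes orders, and hands processing
# off to the separate order-processing service through an injected client.
class OrderService
  def initialize(processing_client)
    @processing_client = processing_client
    @orders = {}
  end

  # Taking the order is this service's single responsibility.
  def place_order(id, items)
    order = { id: id, items: items, status: "accepted" }
    @orders[id] = order
    # Delegate processing instead of doing it in-process.
    @processing_client.post("/order_processing", order.to_json)
    order
  end
end

# Stand-in for an HTTP client to the order-processing service; a real one
# would wrap Net::HTTP. This one just records what was sent.
class FakeProcessingClient
  attr_reader :requests

  def initialize
    @requests = []
  end

  def post(path, body)
    @requests << [path, JSON.parse(body)]
  end
end
```

Injecting the client also keeps the service testable in isolation, which matters once each service runs in its own process.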
We also said each service has one resource, so the customer service has just one resource. Well, sometimes it had two. Especially with payments: when we started dealing with payments, it ended up with two or three resources, because we had payment method, credit card, and direct debit, which are three resources in one service. But pulling them out into their own microservices would have broken composition: the payment service, credit card service, and direct debit service would become too chatty, and that would defeat the purpose of having a service. The services communicate over a RESTful contract, HTTP and JSON.

Over the course of that journey we developed some rules of thumb for keeping these services micro and keeping complexity to a minimum. One is one top-level resource per service, as we just talked about, though in some cases two. Another is to focus on contracts. It's very important that each service's contract is driven mainly by the domain, not by the client. If a consumer of the service needs certain additional information, summary information, those kinds of things, what I've seen in many places is that we end up creating those client-specific endpoints, and they become unmanageable. So we focused a lot on contracts. Each service had its own context and was not allowed to access data beyond that context. Since each service has its own database, that wasn't really possible anyway, but we made it a point that even if some data is accessible and it's beyond your domain, you call the owning service rather than using the data source directly.
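As a rough illustration of a domain-driven contract, here's a sketch in plain Ruby. The field list and helper are hypothetical; the point is that the wire representation is defined by the domain, not by whatever extra fields a particular consumer asks for.

```ruby
require "json"

# The customer contract exposes only domain fields. If a consumer wants
# presentation-specific extras, the contract does not grow to accommodate it.
CUSTOMER_CONTRACT_FIELDS = %w[id name email].freeze

# Serialize a customer for the wire, keeping only what the domain defines.
def customer_representation(customer)
  customer.select { |field, _| CUSTOMER_CONTRACT_FIELDS.include?(field) }.to_json
end
```

A consumer that needs summary or display data composes it on its own side, instead of the service growing per-client endpoints.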
Avoid too much coupling between the services: since they're independent, we tried to avoid coupling between them. And since we now had 10 different applications, each logging on its own, each running in its own process, it was really important to have sophisticated logging and monitoring, so that if anything breaks, we know immediately; if any request fails, we can immediately catch the errors or exceptions in the logs.

Having said that, following those rules of thumb, we ended up creating a few cross-cutting services needed by all the other microservices. Going back to the services we had: we ended up with a communication service, because there was a lot of communication going out to customers, like "your payment is due next month", a welcome message, or a results message saying "you won this particular game". Each of these services had its own communication code, so we abstracted that cross-cutting concern out of all the services and created a communication service. All a service needs to do is ping the communication service and say, "send a communication that the customer has won", and it's the communication service's responsibility to figure out whether to send an email, an SMS, or whatever. There were also a lot of scheduled jobs across these services, like weekly emails to customers, payment emails, and so on. That scheduling was part of all those services, so we abstracted it out into a separate service, and any service can just ping it to schedule anything. And we talked about error reporting already: that was also abstracted out into a separate service, so every service logs requests to one common place, and you get search and those kinds of things on top of the error-reporting tool. With this farm of services, what you usually hear about is service explosion.
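A minimal sketch of that cross-cutting idea, with hypothetical names, and injected senders standing in for real email/SMS integrations: callers only say "notify this customer"; picking the channel is the communication service's job.

```ruby
# Cross-cutting communication service: callers only say "notify this
# customer"; choosing the channel is this service's responsibility.
class CommunicationService
  def initialize(email_sender:, sms_sender:)
    @email_sender = email_sender
    @sms_sender = sms_sender
  end

  # Returns the channel actually used, so callers can log it if they care.
  def notify(customer, message)
    if customer[:preferred_channel] == :sms && customer[:phone]
      @sms_sender.call(customer[:phone], message)
      :sms
    else
      @email_sender.call(customer[:email], message)
      :email
    end
  end
end
```

The result is that channel logic lives in exactly one place; the payment or results service never knows whether the customer got an email or an SMS.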
With all these services, it's difficult for a developer to check them out, deploy them, and so on. So how do you stay productive in spite of having this many services? We used Ruby and Rails::API to build the service endpoints. A lot of our focus was on DevOps, making things as simple as possible in terms of deployment and setting up a dev box. We used feature toggles instead of feature branches, because doing feature branches with a service-oriented architecture is really, really difficult, and even keeping a CI and CD pipeline going becomes difficult. We also created client gems for these microservices. A client gem makes it easy to talk to the service: it feels like you're calling a service in memory, because it gives you a nice object; you call the service through that object and you get an object back. And our mantra was automate, automate: whatever the repetitive process is, automate everything.

So how do I make a small change and still stay sane? If we make a small change, how do we make sure everything still works? The answer is simple: test it. And if you're thinking of something like this, then this is the answer. It's funny only when it's a joke: if you're building enterprise software, or any software, it's important to have tests. We started off with unit tests in each of the services, checking whether the objects within the service do the right thing. But that's obvious; all of us write unit tests, we love RSpec. Then there are contract tests: is my service doing what it should? These are basically out-of-container tests; we ping an endpoint on an in-memory service and check whether we get the right response. They're black-box tests that test the contract: send something, get a response back, and don't worry about the implementation.
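The black-box idea can be sketched without any test framework. The talk used RSpec; this hypothetical helper just checks a raw response body against the shape the contract promises, ignoring how the service produced it.

```ruby
require "json"

# Black-box contract check: given a raw response body, assert only on the
# shape the contract promises, never on how the service produced it.
def customer_contract_violations(response_body)
  parsed = JSON.parse(response_body)
  violations = []
  violations << "missing id" unless parsed.key?("id")
  violations << "id must be an integer" unless parsed["id"].is_a?(Integer)
  violations << "missing name" unless parsed.key?("name")
  violations
end
```

In a real pipeline the same check runs against the in-memory service's endpoint, and a non-empty violation list blocks the package from being promoted.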
Next are integration tests. The acceptance tests are for boundaries within the service itself; in integration tests, we test whether a service behaves nicely with other services: this service calling some other service, or a user flow. Then we write end-to-end tests. They test the distributed effect: if anything fails, is the error actually getting reported to the error-reporting service? If a payment is deducted, does the customer get a communication or not? And test async actions: a lot of actions were async, like sending a communication. We were using Resque for that, and there's a nice Resque plugin that lets you test async actions.

So you build these microservices; how do you actually ship them? As James Lewis says, we're essentially moving the complexity of building the software into the infrastructure. Instead of having one application to deploy, we now have to deploy something like a hundred applications of 1K lines each. The code becomes simple and easy to understand, but the infrastructure is slightly more complicated. We used Puppet for provisioning; at some point we'd like to use Docker as well. And provisioning begins at home: even the dev boxes are provisioned, so if any software version changes, the same scripts are used across all the environments. The scripts go through CI like application code; we'll see the Puppet scripts in the CI pipeline slide. And immutable servers, as Brian was talking about this morning: it doesn't make sense for a server to be mutable if you're using provisioning scripts. This is how our continuous integration pipeline looked: each of the boxes at the top is unit tests, so there's UI, there's service, there's the Puppet code, and they flow through integration, UAT, performance, and eventually to production.
All of this is one-click deployments across environments. With every check-in, we run the unit tests, the integration tests, and the acceptance tests, and build a package. This is really important, because that same version of the package is deployed across all the other environments. And most importantly, we shipped often, weekly or even more frequently: just ship whatever you have. We shipped it like FedEx. Following CI and CD on the project gave us single-click deployments. We cut server deployments down so that each change takes about three minutes to deploy to production. We had a farm of 25 servers, and the deployments just work like a charm. So easy that our product owner does them. We also made a point that, since we were adding a lot of microservices and refactoring toward microservices, the cost of adding a new service should be as low as possible. We managed to cut that down to less than a day: from creating a project to taking that empty project to production took less than a day.

So, having talked about microservices, when do we use them? It's not a silver bullet; it comes with a cost. These are some of the trade-offs, the benefits and the costs associated with them. The benefit is small, reusable, maintainable code that's throwaway: you can just rewrite a service. But at the same time you get complex infrastructure, because you need to deploy those independent services individually. Each service can grow independently: you can divide the teams based on services, and each service keeps growing on its own with its team.
But on the other side, the learning curve is quite steep with microservices, because now you have to deal with multiple applications, and developers find it hard to know what's happening in the other services. Services scale independently, since they run in their own processes and have their own databases: you can arrange deployments so that a lightly loaded service gets fewer resources, making rational use of the infrastructure, rather than running a monolithic app on fat servers. At the same time, there's network overhead from going over the wire through HTTP to call those services. They have independent DBs, so if there's high load on a database, it can be scaled independently; but you end up with fragmented data, and reporting becomes slightly difficult. Well, that's all I had. Questions?

Q: Your last comment is the perfect segue into my question, which was: this seems like it would make reporting a fucking nightmare. So is it slightly difficult, or is it a fucking nightmare?

A: Well, some of the complex reports do become a nightmare, as you said. In some cases, what we tried was dumping the data into a warehouse and building reports from that. There are also some plugins for PostgreSQL that let you connect to multiple databases and run SQL queries across them; that worked out in some cases. In other cases, if the data was small enough, we just fetched it and did it in memory. So it depends on the usage: if the data is really huge, the first approach, dumping it into a warehouse, works fine.

Q: How do you deal with versioning the services? The services you're talking to presumably have to plan for changes and work together with you.

A: As you saw in the deployment pipeline, to start with we said, okay, let's not do versioning at all in the services. Let's deploy everything or nothing.
Rather than picking out what needs to be deployed, we made our deployment script intelligent enough to figure out whether there's a change: if there is, it deploys; otherwise it doesn't. And since the CI pipeline has already tested that this version of a service works nicely with those other versions of the services, we deploy the whole set of services together, and roll back all of them together if needed. This is just to simplify things, so we don't end up with too many versions and unmaintainable code.

Q: I have a couple of questions, if you don't mind. On your client library gems: there are a few JSON schema standards that make it easier for client libraries to discover the layout of an API. Did you use anything like that?

A: We used the Hashie gem, if you've heard of it. That worked out really well: it creates the objects and models really nicely using a DSL.

Q: And have you found a need for, or built, any tools for tracing transactions that go through multiple services? If an issue comes up, especially an urgent one, you mentioned a lot of logging and monitoring; does that have the specific goal of being able to follow a particular transaction through multiple services?

A: We passed a unique identifier in from the main caller. The UI layer that talks to the services passes in a common header with a unique identifier, and if a service calls another service, it passes on the same identifier. The logging makes sure that identifier is always included.
So when you're searching in Splunk or any other log aggregator, all you need to do is search on that identifier to trace where the request went. Thank you.

Q: Hey, I was wondering how you went about testing the contract of a service. Did you find any tool helpful?

A: Actually, RSpec has a way to test contracts without spinning up the server. It's like a controller test: you hit an endpoint and see if it returns the proper response. If you turn on render_views on top of that, it actually renders the view, renders the JSON, and gives you back the response. So we just used plain RSpec.

Q: Does that actually help you test that if the server changes, it breaks the client? Or does that need some manual intervention?

A: The contract tests were about testing a contract in isolation: you just test that service's contract, and if there's any change, any breakage, that package isn't allowed to be promoted any further. There was also an integration test pipeline, which tests whether a service is able to work together with the other services. For that, again, we used RSpec to test the contracts and their distributed effect across multiple services.

Okay, so you're asking whether we calculated CPU utilization and memory overhead, right, Ram? As I said, since we could break the domain down into multiple apps, it was really interesting to spin up, say, 10 instances of the customer service, because that one is heavily loaded, it handles login, and we don't want customers to lose out on that, while for the communication service, which is async, we spin up just one instance. We played around with a lot of those combinations to optimize infrastructure usage. Does that answer your question?
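The tracing approach from the Q&A above, a caller-minted identifier passed along in a common header and stamped on every log line, could be sketched like this. The class name and header name are hypothetical, and the transport is stubbed; a real client would wrap Net::HTTP.

```ruby
require "securerandom"

# The edge caller mints a request id once; every downstream call and every
# log line carries the same id, so a log search on that id reconstructs the
# whole path of a request across services.
class TracedClient
  HEADER = "X-Request-Id"

  def initialize(transport, logger)
    @transport = transport
    @logger = logger
  end

  def get(path, headers = {})
    headers[HEADER] ||= SecureRandom.uuid # mint only if we're the first caller
    @logger.call("request_id=#{headers[HEADER]} GET #{path}")
    @transport.get(path, headers) # the same id travels to the next service
  end
end
```

Because the id is only minted when absent, intermediate services reuse whatever the UI layer passed in, which is exactly what makes the Splunk search by identifier work.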
Well, thanks, thanks Anand. Thank you.