Hello, everyone. My name is Daniel Keeney. I'm a software engineer at Redmart. Today we're going to talk to you about an internal tool that we've developed at Redmart that we refer to as the microservice template. It's a tool that helps us build a microservice in one step, so we can create microservices faster, and it also abstracts away a lot of the configuration and server setup that needs to happen for every new service we create.

Today's presentation is split into four pieces. First I'm going to give you a little bit of history about Redmart and why this became necessary, and about the microservice template itself and how it's evolved. Then we're going to talk about the microservice template and two critical pieces it relies on: one is something we call the microservice library, and the other is the microservice test library, which helps with testing.

First, to establish some context, let's talk about what microservices are. You can see what they are not on the left side. That's how most services start out: you just have one giant server that handles all your traffic and all your logic. But as your business grows, you accumulate more and more business logic inside this one server. You can deploy more and more of them and put them behind a load balancer to handle traffic, but eventually it becomes really tedious and cumbersome to deploy them all. And if one small piece of your logic is taking up a lot of memory or a lot of CPU, then deploying a giant server just to handle one small piece of business functionality doesn't make a lot of sense. So splitting up into microservices helps scale things. Microservices are small, independently deployed services that are built around independent pieces of business functionality. The UI just calls a single service; it doesn't really know which. Each service does its own small chunk of work, and if it needs to call other services, it can do that. They end up forming their own ecosystem, similar to what Surya was just presenting.

So how did we get here? Well, it's a tale as old as time. Developer meets API. Developer falls in love with API. API is unable to scale to meet the love of the developer. And so the developer decides to do something about it. When we first looked at our monolithic API implementation, we had several distinct pieces of business functionality. They were already pretty well segregated into packages and whatnot, but they had a lot of common code that was being reused, a lot of helper libraries and utility functions, and there was a lot of configuration and server setup that went into the initial application. So we did our best to split a couple of pieces of the business functionality into their own services. When we did this, we noticed that we were able to isolate the business functionality pretty well, but there was a little bit of duplicated common code, and of course the database configuration and the server setup were duplicated as well.

So the way we created these first microservices was a little tedious and a little error prone. We had to manually create a new Play project, then find a working service and copy its continuous integration setup as well as the database configuration and other server setup files. This was problematic because there could be older versions of frameworks in use.
And then the configuration values would be in the wrong format or use the wrong naming scheme, and some of them just wouldn't work. In addition to the configuration files, we also had to copy some actual Java files, and again, this was problematic if we were using older versions of frameworks between different services. Sometimes different versions of Play wouldn't even call the same classes on startup, so you could copy your configuration over and it just wouldn't take effect, and it was really hard to debug why. There wasn't a single source of truth where you could find the latest versions of the frameworks we were using, configured correctly.

So we looked at the microservices we had created and combined them into the first version of the microservice template. Instead of actual business functionality, this had some sample functionality to get you started. It had references to the common code and a lot of the third-party libraries we were using, and it had the latest stable versions of our continuous integration and server setup scripts. But this still had a problem: we noticed that when we made changes to the microservice template, it was really difficult to make sure those changes propagated to services we'd already created from the template. And if anyone had a really cool idea that they implemented in their own service, it was just as difficult to propagate that to other services. So the second version of the microservice template has only sample functionality, and it includes the microservice library. Inside the microservice library, we abstracted away all the common duplicated code as well as all the configuration setup. Now keeping things in sync is as easy as bumping your dependency version.

So let's talk about the template. What is it? It's a fully functional sample service. In the screenshot on the right side, you can see the folder structure. There's an app folder that has all the actual production code. There's a test folder with integration and unit tests set up in their own folders. There's a build.sbt file that Play's build tool, sbt, uses to build the project. And two lines below the highlighted one, you can see new_project.sh, which is what the template uses to duplicate itself and create a new service for you. The sample code also includes some dependency injection setup, so it's a lot easier to get started coding the actual logic you want in your server.
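To give a rough idea of that dependency injection setup, here is a minimal sketch, assuming Play's standard Guice integration. The service interface and implementation names are hypothetical placeholders, not the template's actual sample code:

```java
import com.google.inject.AbstractModule;
import javax.inject.Singleton;

// Hypothetical sample interface and implementation, standing in for the
// template's actual sample functionality.
interface GreetingService {
    String greet(String name);
}

@Singleton
class DefaultGreetingService implements GreetingService {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// A Guice module wiring the sample interface to its implementation, so
// controllers can simply @Inject GreetingService instead of constructing it.
public class SampleModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(GreetingService.class).to(DefaultGreetingService.class);
    }
}
```

In Play, a module like this is registered through play.modules.enabled in application.conf, which is why the sample endpoints can inject their collaborators right away.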
We've mentioned the continuous integration setup a couple of times, so I want to talk about some of the tools we use during continuous integration. The first one is a plugin called JaCoCo. It's a Java test coverage tool: it listens while you run your tests and then gives you a report on how effectively your tests cover the different branches and lines in your code. And the cool thing is that it'll fail your build if there aren't enough tests to meet the coverage metrics you want. Next, we use a plugin called FindBugs, which does just what the name says: it finds bugs in your code. It does some static analysis on your code and will fail the build if it finds any glaring bugs. Checkstyle makes sure that all of our code follows the same readability standards. If you use tabs instead of spaces, or four spaces instead of two, it'll fail your build. Sorry if you feel otherwise. SonarQube is a plugin that we use to comment on our GitHub pull requests, and that, again, finds any obvious flaws that are missed by the other three. And then the last piece: Surya mentioned that we have our own Nexus repository, but that's hidden behind a firewall. So for the continuous integration server to actually run our builds, it needs to SSH through the firewall to be able to access things, and there's some scripting set up to allow that to happen.

Inside the microservice template, we also have a standardized logging configuration that's used across all services. We make sure that we're always logging the timestamp in a certain format, the name of the service generating the log entry, the currently executing thread ID, and the request ID. The request ID is a unique identifier that we generate for incoming requests from the front end. We're going to touch on that a couple more times throughout, and then we'll show you a cool graph of how it all plays together in the end. And we log all of this in pretty colors: you can see in the screenshot at the bottom that different pieces have different colors that draw your eye to them.

The code that was abstracted away into the microservice library is mainly focused on setting up the external resources we use. The vast majority of our services use Mongo as their database, a good majority of our services use Redis as a key-value store, and a good number also use RabbitMQ for messaging functionality. So we have a lot of setup abstracted away into the microservice library, and we also have some dependency injection modules that make it very easy to quickly inject any clients you need to interact with these resources. For RabbitMQ, we have a standardized publisher and a standardized consumer, and we also make sure that when the server exits, all your connections are closed gracefully. For Redis, we have connection pooling set up, we automatically monitor several metrics and statistics using StatsD, and depending on the style of programming you're using, we have synchronous, asynchronous, and reactive clients you can use. We also make sure that content is serialized in a standardized way. For Mongo, we have a really cool helper annotation that will automatically increment certain fields whenever you save an object, which is useful for things like automatically generating IDs. And we have Morphia integration set up by default; Morphia is just an ORM between Java and Mongo.

We also have a web service client that we use to make external or internal network calls. This was originally built as a wrapper around Play's static web service client, but over time it grew a life of its own and we added more functionality to it. In the diagram here, you can see that whenever it makes a request, the first thing it does is log the request URL and any relevant header information. It also adds some custom headers to every request. And once the response comes back from the magical cloud, it logs the HTTP status and how long we were waiting for the response to come back. Some of the headers that we forward with every request are Redmart-specific. The X-Request-ID is the request ID that we saw in the logging slide, and forwarding it is necessary so that the logging is uniform across all of our services.
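As a rough sketch of the idea behind that wrapper, here is what it could look like, assuming a Play 2.6-style Java WS API and SLF4J logging. The class name and the exact log messages are illustrative, not the library's actual internals:

```java
import java.util.concurrent.CompletionStage;
import javax.inject.Inject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import play.libs.ws.WSClient;
import play.libs.ws.WSRequest;
import play.libs.ws.WSResponse;

// Illustrative wrapper around Play's WS client: it logs the outgoing request,
// forwards the request ID header, then logs the status and elapsed time.
public class LoggingWsClient {
    private static final Logger log = LoggerFactory.getLogger(LoggingWsClient.class);
    private final WSClient ws;

    @Inject
    public LoggingWsClient(WSClient ws) {
        this.ws = ws;
    }

    public CompletionStage<WSResponse> get(String url, String requestId) {
        log.info("Outgoing GET {} (X-Request-ID: {})", url, requestId);
        long start = System.nanoTime();
        WSRequest request = ws.url(url)
                // Forward the request ID so logs correlate across services.
                .addHeader("X-Request-ID", requestId);
        return request.get().whenComplete((response, error) -> {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            if (error != null) {
                log.error("GET {} failed after {} ms", url, elapsedMs, error);
            } else {
                log.info("GET {} -> {} in {} ms", url, response.getStatus(), elapsedMs);
            }
        });
    }
}
```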
Play's controllers are set up on an action and filter composition model, so it's really easy to insert your own filters into that design. We created a lot of filters that we insert into all of our controllers by default. These do a lot of the same logging as the web service client: we log the request URL and headers, the HTTP status, and how long the request took. We also make sure that any required incoming headers that are missing get filled in. So just as the web service client always forwards the X-Request-ID, the incoming request filter makes sure that an X-Request-ID is generated if it's not there. Then there are the response header modifications: there are a couple of headers that are used by some of our internal services and some of our front-end clients, and this makes sure those headers are always present on every response we send out. And finally, we make sure that there's automatic metrics logging for the response status and the timing. But there are also a lot of other cool metrics that we track, and Chitra will tell you more. Sorry, I gave it all away. Thanks, I know.

So I'm going to be talking about some of the other interesting stuff in the microservice library, as well as in the microservice test library. First up is metrics. Metrics are a great way to figure out what's happening in your system, to identify any performance bottlenecks, or to get alerted in case something goes wrong. The microservice library automatically collects some of the most commonly used metrics across our services. For any request that comes into the service, it keeps track of the response status as well as how long the request took. For any request your service makes to a different service, it does the same. It also keeps track of JVM metrics such as CPU and memory, and other connection-related metrics. The library also makes it pretty simple for a service owner to customize the metric name just by using annotations, and if they want to collect their own custom metrics, they can just use an injected StatsD client and get started.

This is a very simple dashboard for one of our services on Grafana. As Surya mentioned, we basically use StatsD, Graphite, and Grafana. There are already a couple of interesting things here. For one, you can see that we're logging the response times, and it's pretty obvious that there's a big spike there. That's an ideal candidate for getting alerted that maybe something is actually up with your service. The other interesting thing is that if you look at the memory metrics, you can see these drops. Those correspond to the times when there was a garbage collection on the service.
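To make the custom-metrics side concrete, here is a minimal sketch. The StatsDClient interface below is a hypothetical stand-in for whichever StatsD client the library actually injects, and the metric names are made up; the point is just the shape of the usage:

```java
import javax.inject.Inject;

// Hypothetical StatsD client interface; the real injected client exposes the
// usual StatsD operations (counters, timers, gauges) in some equivalent form.
interface StatsDClient {
    void increment(String metric);
    void recordTiming(String metric, long millis);
}

// Illustrative service-level usage: count an event and time an operation, on
// top of the request and JVM metrics the library already collects for free.
public class CheckoutMetrics {
    private final StatsDClient statsd;

    @Inject
    public CheckoutMetrics(StatsDClient statsd) {
        this.statsd = statsd;
    }

    public void recordOrderPlaced(long processingMillis) {
        statsd.increment("checkout.orders.placed");
        statsd.recordTiming("checkout.orders.processing_time", processingMillis);
    }
}
```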
So metrics are great for getting an overall idea of the system and detecting any anomalies. But in certain cases, we really need to look at specific requests. As an example, consider a case where a customer is trying to place an order on the Redmart site, but something goes wrong and the order is not created as expected. At that point, we really need to trace what exactly happened within that request. But since we have a microservice architecture, there are a ton of services involved whenever we create an order, and without any way to track what's happening and correlate across services, it can get really hard. That's where request tracking comes in. The first time someone places a request into our system, the microservice library generates a unique request ID. This is sent as the X-Request-ID header that Daniel mentioned before. Subsequently, when service one is calling service two, the web service client in the library makes sure it passes along this request ID too. And it's not just for calls across services: when service one is publishing a message to a queue that's consumed by a different service, the RabbitMQ publisher within the library makes sure the request ID is part of the message, and the consumer knows how to get the request ID and use it. So now that all the services have the request ID, when we look at ELK and want to diagnose what actually happened, we just need to figure out what the request ID is, and then we can get all the logs we need across services.

With the microservice library, we have also changed the way we do error handling. By default, the Play framework returns HTML errors, but those are not really useful for us, since most of our clients do their own UI rendering. So the microservice library replaces all the HTML errors with JSON errors, and on an alpha or staging environment, it also includes details about the actual exception as well as the stack trace. Those can be really useful when you want to debug an issue. Here's an example of how it would have looked before: basically, you have no idea what's happening. But with the JSON format, you get to see what the request was, what the headers were, and what the actual stack trace of the exception was. One other way the microservice library alerts you if anything goes wrong is through a Slack appender. With this, you can configure your service to send you a Slack notification on a particular channel whenever there's a log message that crosses a particular threshold. So, for example, if there's ever an error logged in your service, you can get a Slack notification for it.

Next, we're going to talk about the microservice test library. Most of the functionality in the test library is focused on making it easier to write integration tests for our services. The Play framework does have an integration test class which starts up a server for you, and you can call endpoints through it. However, we found it quite clunky to use. So we've written a wrapper around Play's integration test, and this sets up all the application-level configuration that's needed for the test. We also have helper methods which help you call endpoints on the running service. In a lot of cases, we need some kind of test data setup, so the test library has a helper which will load test data from JSON into your running databases. The other interesting thing is a test watcher which adds markers into your log output whenever a test starts, passes, or fails. This makes it easy, if a particular test is failing, to focus on just the logs for that test.

The other thing which Daniel mentioned before is that we use a ton of external resources, and when we're writing a test, we want to be able to test against those too. But in a lot of scenarios, we don't want to actually connect to the staging environment. So what we do instead is use embedded resources. There are a number of third-party embedded implementations available for the ones we use: for RabbitMQ, for Redis, and for MongoDB. But they're pretty cumbersome to set up, because you need setup and teardown code in each class where you want to use those resources. With the microservice test library, we've abstracted all of that away, and you can basically just use JUnit rules to set up the embedded resources you want. We've also made sure each test gets unique data stores, and we automatically do the cleanup after each test.
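The pattern behind those rules looks roughly like this. This is a minimal sketch built on JUnit 4's ExternalResource; EmbeddedMongoServer is a hypothetical stand-in for whatever embedded Mongo implementation sits underneath, not the test library's real API:

```java
import org.junit.rules.ExternalResource;

// Sketch of the pattern: wrap an embedded store in a JUnit rule so setup and
// teardown stop being per-class boilerplate. EmbeddedMongoServer is a
// hypothetical stand-in for the actual embedded Mongo implementation.
public class EmbeddedMongoRule extends ExternalResource {
    private EmbeddedMongoServer server;

    @Override
    protected void before() {
        // Start the embedded store on a free port with a unique database
        // name, so parallel tests never share state.
        server = EmbeddedMongoServer.startOnFreePort();
    }

    @Override
    protected void after() {
        // Drop the test data and shut the store down after each test.
        server.stop();
    }

    public String connectionString() {
        return server.connectionString();
    }
}
```

A test class then just declares the rule, for example `@Rule public final EmbeddedMongoRule mongo = new EmbeddedMongoRule();`, and gets a fresh, isolated store for each test without any setup and teardown boilerplate of its own.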
So we've seen the template, we've seen the library, we've seen the test library. But the presentation was titled Creating a Microservice in One Step, so how do we actually do that? It's pretty simple: we just run one shell script, and it does everything for us. The shell script gets the latest version of the microservice template from GitHub. It renames all the placeholders based on the service name you want. It builds the project and runs all the tests to make sure everything is still working fine. And it sets up a new Git repository, and if you want, it can even push the changes for you. At the end, you have a fully functional service with all of the external resources and dependencies set up, with logging set up the right way so you can visualize it on Kibana, and with proper metrics collection so you can see all of it on Grafana and get alerted as well. And that's it. Thank you.