Okay everyone, so the next speaker is Wei Li, and the topic is microservices lessons learned. Thank you.

Good morning everyone, welcome to my talk, and I hope we've all enjoyed the sessions so far. My name is Wei Li, I'm a senior software engineer and team lead at Red Hat Mobile. Today I'm here to talk about microservices. Over the past five years we have been trying to build microservices, and we have learned a few lessons along the way. So I'm here to share those lessons with you, and I hope they will be helpful if you're trying to build microservices yourselves.

Just some background information: we were previously a startup company called FeedHenry, based in Waterford, Ireland. Red Hat acquired us in 2014 and renamed our product to the Red Hat Mobile Application Platform. We're building a mobile platform that allows our enterprise customers to manage, build, and distribute mobile applications for their organization, which is a core part of many companies' IT strategy today.

The product started as a very simple monolithic web application doing very few things. But over the years, we started building other features in. For example, you can easily integrate with your legacy enterprise backend systems using Node.js, we have a very sophisticated team-based access control system, you can do drag-and-drop, codeless application development on our platform, and we have reporting and analytics built in. And in the last year, we built our product on top of OpenShift, so customers can deploy the product on any infrastructure they would like to use.

As those features were built, our monolith started to grow and grow and grow, and it hit a point where there was just so much inside the application that it became harder and harder to add new features. A lot of the time, when we added new features, existing features broke, and the build process took a very long time to run. So we were trying to figure out a way to solve those problems. And at the time, microservices had become a buzzword. We were all excited about it, and we thought it was a perfect fit for our product.

So this is basically what we wanted to do: just move our monolith into a new microservice architecture. It doesn't look like there are a lot of changes, so we thought, okay, this should be easy, let's try that. And that's the first lesson we learned: microservices are not easy. They are very complicated.

Here are some problems that we didn't think about at the beginning. The first is that our process at the time didn't scale. We had a process for CI/CD and a process for service deployments, and they worked fine for our monolithic application, but when you try to scale them to tens or even hundreds of microservices, the process breaks down. We had to improve the process. We also underestimated the complexity of the system. We were literally moving from a single monolithic application to a distributed system, and although we knew it would not be easy, it was still harder than we expected. And the team wasn't really ready. At the time, we were good at building a monolithic application, but there is a real difference between building a monolith and building microservices. A lot of team members didn't realize that, carried on building microservices the same way they built the monolith, and we just ended up with more monoliths.
We also didn't understand the impact on the organization. We didn't change the structure of our teams to make communication easier, and that caused problems as well. And although microservices offer a lot of benefits and you gain a lot in the long run, it is still an engineering goal. From the product side of the house, they don't really care what architecture you're running; all they care about is whether you can deliver features. So there is always a battle between how fast you can deliver features and how you can achieve the engineering goals.

So what's the advice? The first thing is: think again about adopting microservices. Does your application really need them? What is your problem at the moment, and what is your end goal? In a lot of cases microservices are not the answer, so think again and you may find other solutions. The next thing is, if you are going to go ahead with microservices, don't just jump in the water straight away. Think about what your challenges are, think about your current process, think about what you need to do, and then prepare for those challenges and get ready for them. And the last thing is that it will take time, so you have to be patient. You can't get to microservices over a weekend, or even in a month. In our experience, we have been doing microservices for the last four or five years and we still can't say we're there yet. It definitely takes time, so be patient.

The next thing is that when we started developing our microservices, we quickly realized we were lacking standards. That's not a big problem in a monolith, because there is a single code base and you just follow the existing standards. But microservices are a bit different, because one of the benefits they offer is that they allow you to use the best tool for the job. And we all know that as engineers, as developers, we love trying new technologies and new stacks for our services. If you combine the two, you end up with many services written in different languages, running on different technology stacks, with inconsistent code styles across all those services. You also see the same code, the same functionality, repeated in every service over and over again. And if you have a relatively small team, you'll have a problem with one engineer working on multiple services, because that means they have to learn all of those technologies and languages.

So the advice is: try to limit your language choices. In most cases, two to three languages are enough; Netflix, for example, is famous for its dedication to the Java language. Within those languages, try to apply the same code style and standards, and build some shared libraries for the common tasks. And even when building similar services in different languages, for example REST services, try to apply the same architecture, such as the onion architecture. If you do all of this, you'll probably end up with your own microservice toolkit, or you can just use existing ones from the market; there are many open source projects for this these days.
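To make the shared-architecture idea concrete, here is a minimal TypeScript sketch of an onion-style service skeleton, the kind of shape a shared toolkit could standardize across services. Every name in it (User, UserRepository, RegisterUser, InMemoryUserRepository) is hypothetical and just for illustration; the point is the direction of the dependencies: a pure domain in the middle, use cases around it, and swappable infrastructure at the outer edge.

```typescript
// Domain layer: pure business objects, no framework imports.
interface User {
  id: string;
  email: string;
}

// Port: the application core defines the interface it needs from storage.
interface UserRepository {
  findByEmail(email: string): Promise<User | undefined>;
  save(user: User): Promise<void>;
}

// Application layer: use cases depend only on domain types and ports.
class RegisterUser {
  constructor(private readonly repo: UserRepository) {}

  async execute(email: string): Promise<User> {
    const existing = await this.repo.findByEmail(email);
    if (existing) {
      throw new Error(`user already registered: ${email}`);
    }
    const user: User = { id: Math.random().toString(36).slice(2), email };
    await this.repo.save(user);
    return user;
  }
}

// Infrastructure layer: concrete adapters live at the outer edge and can
// differ per service (MongoDB in one, Postgres in another) without the
// use case changing.
class InMemoryUserRepository implements UserRepository {
  private users = new Map<string, User>();
  async findByEmail(email: string) { return this.users.get(email); }
  async save(user: User) { this.users.set(user.email, user); }
}

// Wiring happens at the outermost layer.
async function main() {
  const registerUser = new RegisterUser(new InMemoryUserRepository());
  console.log(await registerUser.execute("dev@example.com"));
}

main().catch(console.error);
```

Because the use case only depends on the UserRepository interface, each service can pick its own storage technology while every service in the fleet keeps the same recognizable shape.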
So once you have finished your development work, you need to do testing, and it took us a while to realize that we were not testing at the right level. This is the ideal testing pyramid: you have lots of unit tests, a relatively small number of integration tests, and a very small number of end-to-end tests. But we ended up with the opposite: a large number of unit tests, a large number of end-to-end tests, and only a very small number of integration tests.

So what's the problem? When you're building microservices, one service normally talks to many other services, and when a developer tries to build an integration test, they have to somehow figure out how to communicate with all of those dependency services. Either it takes them too long to get those services up and running, or there are just other problems. The other issue was our team structure. We had QE as a separate team, and we were basically following a waterfall workflow: while we were doing development work, the QE people had to work on some testing, so they just ended up building more and more end-to-end tests. And that made our problem worse.

So what can you do about it? The first thing is to invest more in integration and component-level testing. If you have some core services that are dependencies of many other services, try to build mocks for those services. Ideally, the mocks should be auto-generated, if you are doing documentation-driven development.
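As an illustration of such a mock, here is a minimal sketch using only Node's built-in http module. The /users/:id route and its canned payload are invented for this example; in practice you would want the routes and responses generated from the core service's API documentation rather than written by hand.

```typescript
// Hand-rolled mock of a hypothetical core "user service" dependency.
import http from "http";

// Canned responses the integration tests can rely on.
const cannedUsers: Record<string, unknown> = {
  "42": { id: "42", name: "Test User", roles: ["admin"] },
};

const server = http.createServer((req, res) => {
  const match = req.url?.match(/^\/users\/(\w+)$/);
  if (req.method === "GET" && match) {
    const user = cannedUsers[match[1]];
    res.writeHead(user ? 200 : 404, { "Content-Type": "application/json" });
    res.end(JSON.stringify(user ?? { error: "not found" }));
    return;
  }
  res.writeHead(404).end();
});

// The service under test is pointed at http://localhost:3001 (via an
// environment variable or config) instead of the real user service.
server.listen(3001, () => console.log("mock user service on :3001"));
```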
Also, you can try contract-driven testing; it's very popular these days. You also need to limit the number of end-to-end tests. Ideally, keep just the very important happy-path tests covering the most important workflows of your product. And the last thing, which a lot of people don't realize: you probably want an integrated, cross-functional team. Form a small team with developers, QE, and ops people together, so that when they work on a feature, they can do the testing at the team level. This avoids the QE and testing people building more tests at the end of your workflow.

So now we have the testing, and the next thing is to deploy our services onto our infrastructure, and we quickly realized that automation is the key to the success of microservices. For example, here are some of the challenges we had. Setting up the development environment is not a big job if you only have one monolithic application, but it becomes a big problem if you have ten microservices; you don't want a developer spending two or three weeks, or even months, just setting up a development machine. Your CI/CD works fine for one monolithic application, but if you try to scale it to many microservices, you'll probably need to add a lot of automation. The same goes for service deployment: our operations team had a lot of scripts for deploying the monolithic application, but there were still some manual steps involved, and again, if you scale that to many microservices, you need to automate all of those steps as well. A bit unique to us is that because we also do on-premise releases, we have to somehow package different versions of all our microservices into one archive for release to our customers, and automating that process has its own challenges too.

So we wanted to automate the workflow for setting up the development environment. What we have been using is Vagrant and Chef Solo to provision our development environments, and that works pretty well for us. And for CI/CD, you probably want to think about something like Jenkins Pipeline, because Jenkins Pipeline is a lot more flexible: it allows you to define repeatable, reusable stages for your applications, which means you can compose those stages into different workflows for your microservices.

You also want to build a DevOps culture within your team. What we have been doing is letting our engineers use our Chef scripts to provision their own testing infrastructure and create their own development infrastructure. We encourage our engineers to try those steps and, if something goes wrong, to figure out the problem, and we encourage collaboration between the development team and the operations team, so that we have a DevOps culture within the team. The other thing is that you have to encourage automation. That means when you plan some work, you need to think about how to automate it, and you need to add that time into your plan. If you don't give engineers enough time to do the automation, they will never be able to do it. So the automation mindset has to come from the top down.

So now, with the help of automation, we can deploy our services into production. But then something goes wrong, customers are shouting at you to fix the problem, and you want to figure out where the problem is. A lot of the time, it feels like playing Where's Wally, right? You know he's in there, but you just can't find him. And it's not a good game to play when you have customers shouting at you.

For example, you have a lot of microservices talking to each other, and service B2 will probably log an error in its log files. But when you're trying to figure out where the problem really is, you have to start from service B2 and look at service A4, and if there's nothing there, you then need to look at service A2 or even service D2. So you have to go through the whole service dependency tree to figure out which service is actually causing the problem, and that normally takes time. And when we were doing that, we had two problems with the log files. With so many different services, different teams had different ways of doing logging, and we ended up with inconsistent log styles. The other problem was that a lot of the log messages didn't have enough context: all you see is a message saying something went wrong, but nothing else. And also, across this kind of web of services, if you have a performance problem, it is very hard to find where the bottleneck is.

So what did we do? For logging, we ended up just using a JSON log format, because it is easy to parse for a lot of log processing services. We also use a per-request logger, which provides two benefits. When you create the per-request logger, you can inject the user information and the request ID at creation time, and that means that information is included in every log message automatically. And there is another small technique we use: with the per-request logger, you can keep the trace-level logs in memory, and then when there is an error, write that whole log history to the log file. It's extremely useful when you have a problem that you can only reproduce in the production environment.
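Here is a minimal sketch of that per-request logger idea in TypeScript. The field names and levels are illustrative rather than our actual library; the two things to notice are the context injected once in the constructor, and the trace buffer that is only flushed when an error occurs.

```typescript
interface LogEntry {
  level: "trace" | "info" | "error";
  msg: string;
  requestId: string;
  userId?: string;
  time: string;
}

class RequestLogger {
  private traceBuffer: LogEntry[] = [];

  // Request ID and user info are injected once, at creation time,
  // so every entry carries that context automatically.
  constructor(private requestId: string, private userId?: string) {}

  private entry(level: LogEntry["level"], msg: string): LogEntry {
    return {
      level,
      msg,
      requestId: this.requestId,
      userId: this.userId,
      time: new Date().toISOString(),
    };
  }

  // Trace logs are cheap: buffered in memory, not written anywhere yet.
  trace(msg: string): void {
    this.traceBuffer.push(this.entry("trace", msg));
  }

  info(msg: string): void {
    console.log(JSON.stringify(this.entry("info", msg)));
  }

  // On error, flush the buffered trace history too, so the log file shows
  // everything this one request did before it failed.
  error(msg: string): void {
    for (const buffered of this.traceBuffer) {
      console.error(JSON.stringify(buffered));
    }
    console.error(JSON.stringify(this.entry("error", msg)));
    this.traceBuffer = [];
  }
}

// Usage: create one logger per incoming request.
const log = new RequestLogger("req-123", "user-42");
log.trace("loaded user profile");
log.trace("calling billing service");
log.error("billing service returned 500"); // emits the two traces as well
```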
The next thing is distributed tracing. The simple thing you can do is make sure a request ID is injected at the edge of your API calls, and make sure that request ID is passed along your whole call stack. Then, if you have a central place to process the logs, it is relatively easy to figure out which services a request went through in your application. Beyond that, you can try something more advanced like Twitter's Zipkin or Red Hat's Hawkular.
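As a sketch of that request ID propagation, here is a hypothetical Express-based Node service (assuming Node 18+ for the built-in fetch); the x-request-id header name is a common convention, not something specific to our platform, and the inventory URL is made up for the example.

```typescript
import express from "express";
import { randomUUID } from "crypto";

const app = express();

// At the edge: accept an incoming request ID or mint a new one.
app.use((req, res, next) => {
  const requestId = (req.headers["x-request-id"] as string) ?? randomUUID();
  res.locals.requestId = requestId;
  res.setHeader("x-request-id", requestId); // echo it back to the caller
  next();
});

app.get("/orders/:id", async (req, res) => {
  // Pass the same ID along on every downstream call, so a central log
  // processor can stitch the hops of one request back together.
  const downstream = await fetch(
    `http://inventory:8080/stock/${req.params.id}`,
    { headers: { "x-request-id": res.locals.requestId } }
  );
  res.json({ requestId: res.locals.requestId, stock: await downstream.json() });
});

app.listen(3000);
```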
Also, you need to make sure you are measuring everything and collecting enough metrics. Again, you have to make sure you have the infrastructure in place, and the libraries in place, to encourage developers to collect metrics for their services.

The next thing we realized is that documentation is the map for developers in microservices, and it is the foundation for team collaboration. But what normally happens is that, as developers, we don't like writing documentation; I'm the same myself. It's very hard to get it done. And the other problem is that when you make a change to your code, you also forget to update the documentation, and no matter how many times we're told, we still don't do it, because we're developers.

So what can you do? Try documentation-driven development; tools like Swagger do that very well. Make sure documentation generation is part of your build process, and make sure documentation is part of your code review process, because a lot of the time developers read the code but don't read the documentation. And try to put the documents in a centralized place, to make sure it is easy for a developer to find them, to use them, and to learn from them. Make sure it covers as many areas as possible: your high-level architecture, the APIs of your services, your operational manuals, et cetera. Make sure they are all in the same place.

The last thing we learned from doing this is to treat microservices like cattle. There is this phrase, "cattle, not pets". Has anyone heard of it? Cool, okay. It's basically used by ops people to describe VMs. They think VMs should be like cattle: if something goes wrong with a VM, you kill it and spin up a new one. You don't maintain it, you don't care about it. I think the same analogy can be used to compare microservices and monolithic applications. If you have only a few monoliths, you'll probably have a different deployment method for each one, you'll somehow maintain them by hand, and it requires a bit of work to scale them up and down, right? But for microservices, you shouldn't care. All you really want is to get them deployed; you don't want to maintain them, and they should be easy to scale up and down. To do that, I think you need to use something like a container platform, such as OpenShift. That's one reason we moved our on-premise product onto OpenShift. We haven't done that for our SaaS product yet, so we can compare the two, and we clearly see the advantages that using a container platform brings. So I encourage you, if you're doing microservices, to try something like that. It doesn't have to be OpenShift; there's Kubernetes, there's Docker Swarm, but OpenShift is definitely an amazing product, right?

So, to summarize: microservices have a bright future, but it will not be an easy journey. You have to understand the cost, review your processes and methodologies, identify the gaps, and be prepared and plan ahead. It will take time, but you will get there.

So, thank you very much. And it happens to be Chinese New Year today, so happy Chinese New Year, everybody, and I hope you all have a great and wonderful new year. Any questions?

Can you please wait a little bit? Right, okay. So the question is about contract-driven testing, in a bit more detail. There are frameworks that act like a middleman for your tests. For example, if you need to test a service A which depends on a service B, this testing framework acts as the middleman. While service A is running its integration tests, it sends the response it expects from service B to the testing framework. Then, when service B runs its integration tests, it sends its actual response to the same testing framework. The framework can then compare the expected response and the actual response and see if they match. So you basically have only one testing framework doing the checking, and all the other services don't need to know about each other; they only need to know about the middleman. That's basically how we do contract-based testing. Right, thank you very much.
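To make the middleman idea from that answer concrete, here is a very small TypeScript sketch with entirely made-up names. Real contract-testing frameworks such as Pact do this in a much more complete way, recording the consumer's expectations and replaying them against the provider, but the comparison at the core is the same.

```typescript
type Contract = { consumer: string; provider: string; endpoint: string };

// The "middleman": holds the consumer's expectations and checks the
// provider's actual responses against them.
class ContractBroker {
  private expected = new Map<string, unknown>();

  private key(c: Contract): string {
    return `${c.provider}:${c.endpoint}`;
  }

  // Called from the consumer's (service A's) integration test.
  registerExpectation(c: Contract, response: unknown): void {
    this.expected.set(this.key(c), response);
  }

  // Called from the provider's (service B's) integration test.
  verify(c: Contract, actualResponse: unknown): boolean {
    const want = this.expected.get(this.key(c));
    return JSON.stringify(want) === JSON.stringify(actualResponse);
  }
}

// Usage sketch: the two services never talk to each other directly.
const broker = new ContractBroker();
const contract: Contract = {
  consumer: "service-a",
  provider: "service-b",
  endpoint: "GET /users/42",
};
broker.registerExpectation(contract, { id: "42", name: "Test User" });
console.log(broker.verify(contract, { id: "42", name: "Test User" })); // true
```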