My name is Demin and today I'm going to talk about high-performance microservices with PHP. A little bit about myself: I'm a staff software engineer at Glu Mobile, I have been playing with PHP for many years, since the year 2000, and I have worked for different companies in China and the States. You can find me on Twitter or GitHub. Here's what I'm going to talk about today. First, I will give a brief introduction to Design Home, the mobile game we have been working on for the last three years. Then I will discuss the different improvements, the different efforts we have made to make our APIs, our microservices, extremely fast in production. After that, I'm going to discuss what else we could do to make our microservices even faster in production, to push the limit of PHP. First, an overview of Design Home and some of the backend microservices behind it. Design Home is a mobile game that we have been developing since about three years ago, since January 2016. It has reached top positions both in the US App Store for iPhone and in the US Google Play Store. We have about a million daily active users, over 40 million votes every single day, and a constantly growing number of user designs to date. Our main API service serves over 100,000 API calls every single minute, with an average response time of about 56 milliseconds as of now. Last month it was still about 60 milliseconds, and I will discuss how we got from 60 milliseconds to 56 milliseconds later on. We are very proud of what we have achieved, and we're glad to have this chance to share our experience of building high-performance microservices. For Design Home, we use different microservices on the backend, as you can see on the right side. For this talk, I'm going to use the inbox microservice as the example for discussion. This inbox service provides REST APIs with different CRUD endpoints to manage inbox messages, and we use different tools for that.
We use PHP 7, we use Nginx, and we use Composer to manage third-party libraries, including libraries developed by ourselves. We deploy our production instances on Amazon ECS, and we also use different tools for logging, testing, debugging, and security checks. We use Redis and Couchbase to store the messages on the backend. This inbox microservice is one of the busiest microservices we have built. On average, we see about 14,000 requests per minute, and during peak hours about 34,000 requests per minute, with an average response time of less than 9 milliseconds. Here is a list of PHP tools and extensions that we use in most of our PHP microservices. We use PHP 7 and some of its cool features, like type declarations. We use Composer. We use OPcache. We do data caching with APCu, distributed caching with Redis or something else, and, if needed, chained caching as well. We write different kinds of tests, unit tests and black-box tests, to test our API endpoints. Some teams may create feature tests or functional tests, but I feel those are similar to each other. Also, we try to do PHP the right way, using best practices from the community to make our microservices reliable and fast. I feel these are the things to consider first when building PHP applications, and in this talk I'm not going to discuss any of them in detail. Instead, I'm going to discuss something extra on top of this list. But before that discussion, I have two more notes. First, most of our APIs are built with PHP-FPM and Nginx, so when I talk about PHP here, most of the time I'm talking about PHP-FPM. Secondly, when I talk about tools or extensions used in our microservices, it may not be just about PHP itself; it could be tools or software that work with PHP or relate to PHP. So now let's discuss the different improvements we have made to make our APIs very fast.
As I said, we started development about three years ago, and in these three years we have made tens or even hundreds of improvements to speed up our microservices. I won't be able to discuss all of them in this talk, so I'm going to cover some key improvements, grouped into four categories: web server, HTTP processing, data storage, and hardware and network. In each category, I will discuss three improvements we made. First: using Dockerized PHP 7 containers as the web server. We used to have a single API service that did most of the jobs in the background. Now, for Design Home, we have started splitting those tasks, those components, into different microservices, as you can see in the right column here. So what are the benefits? Well, consider a legacy system with hundreds of thousands or even millions of lines of code, written by different developers at different levels. You can imagine how hard it could be to write test code for it, how difficult it could be to upgrade your PHP to a new major version, how dangerous it could be to refactor this legacy system, and how painful it could be to debug issues in it. Also, how long it could take for new developers to get familiar with your system. I'm not sure if you have experienced this, but I have experienced all of these bad things. Now, think about using different microservices to do different jobs. Each microservice is small, simple, and easy to maintain. It's very easy to write test code for each of them. It's easy to upgrade each of them to a new version of PHP separately. It's easier to debug issues in them, because they are small and simple. And new developers can get familiar with one of the microservices first to warm up.
Besides decoupling our microservices, we use Docker to deploy our images to Amazon ECS. There are lots of benefits to using Docker: it helps to improve and simplify our development environment, it's easy to upgrade your web server and your PHP environment to new versions, and it's easy to debug issues in them. I would say Docker is a must-have tool for PHP development today. The thing is, we have so many different microservices, and they are maintained by different developers. So when creating Docker images, some of them might use Ubuntu as the base image, some might use Debian, and some might use something else. We would end up with so many different images that it's hard to maintain and manage them. The other thing is that, assuming you use Ubuntu or Debian to build your image and you use APT to install PHP and other software, then when you rebuild your image, you could accidentally bump your PHP to a new version. Even if you didn't change anything in your system or your code, your PHP gets upgraded automatically in the stock images when they're rebuilt. That's kind of dangerous, and we don't want that to happen. So what we do is use a set of base images created by ourselves, which work for the different PHP microservices we build. We have three main base images. The php-fpm one is used to create the different REST PHP microservices. We use the php-cli base image when we need to create job worker instances. We also have another base image, php-swoole, for when we want to do something asynchronously. When we create those base images, we follow certain rules. We don't use the tag latest anywhere; we don't think it's reliable. And our tags match PHP versions: if we see a tag like 7.3.1, it means that base image uses PHP 7.3.1.
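The talk doesn't show the actual Dockerfiles, but a base image following the rules above (pinned exact version, PHP built from source rather than installed via APT) might be sketched like this; the package list and configure flags are assumptions, and the real images install more extensions and tooling:

```dockerfile
# Hypothetical php-fpm base image, tagged manually as 7.3.1
# and never rebuilt once it is used in production.
FROM debian:stretch-slim

# Pin the exact version so a rebuild can never silently upgrade PHP.
ENV PHP_VERSION=7.3.1

# Build PHP from source instead of installing it via APT,
# so the resulting image always contains exactly this version.
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential curl ca-certificates libxml2-dev \
    && curl -fsSL "https://www.php.net/distributions/php-${PHP_VERSION}.tar.gz" -o php.tar.gz \
    && tar -xzf php.tar.gz \
    && cd "php-${PHP_VERSION}" \
    && ./configure --enable-fpm \
    && make -j"$(nproc)" \
    && make install
```

A security fix would then be published as a new, slightly different tag (7.3.1-1, 7.3.1-2) rather than by rebuilding the frozen 7.3.1 image.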
And if an image has been tagged and is being used in production, we are not going to change it anymore, we are not going to rebuild it anymore. It's frozen. It's never going to be changed again, and that's very important. You're probably wondering: what if we find a security issue in an image we are using in production? Well, in that case we build a new image, but tag it slightly differently, like 7.3.1-1 or 7.3.1-2. Also, all these images are built, tagged, and deployed manually. We don't have any continuous integration jobs doing that automatically; it's dangerous. By doing all these things, it now takes less effort to build those microservices and those images, it's easy and safe to upgrade PHP to new versions across different microservices, and it's easy to fix security issues and make improvements across different microservices. We also use different tools, PHP tools and others, during development and deployment, some of which are integrated into our base images already. For error handling and error reporting, we mainly use New Relic and Bugsnag; I feel these two are amazing, and we like them a lot. For security checks, we use SonarQube and the SensioLabs Security Checker. These two tools work on different parts of your PHP source code: SonarQube checks for possible coding issues in the code written by yourself in your project, while the SensioLabs Security Checker checks for security vulnerabilities in the third-party libraries used in your project through Composer. We use both of them everywhere. For debugging and profiling, we mainly use Blackfire and Xdebug in our images. Now, the next category: HTTP processing, starting with background processing. Well, here's the thing.
If we need to send out some emails in a PHP request, or send out a push notification, these things don't have to be done before the actual response is sent back to the client. They can all be done in the background. In PHP, we have different ways to do background processing. We could use an external program to do it in the background silently. We could register a PHP shutdown function to do it, just like Bugsnag does. We could use a separate job queue server, in case the task is heavier. And in PHP-FPM, we could consider using a function called fastcgi_finish_request() to do it. There are some other options too, but they're probably not that reliable, so I'm not going to list them here. In our PHP microservices, most of the time we choose the last option, the function fastcgi_finish_request(). What it does is flush all your response data to the client first. Let's see how it works. We have two pieces of code here. They're pretty much identical, except that the right-side one calls fastcgi_finish_request(). So what's the difference between the two when they're executed in PHP-FPM? Let's take a look at the left-side one. It prints out number one first, then registers a PHP shutdown function to print out number three, then creates an object of an anonymous class whose destructor prints out number four. Then it prints out number two and calls exit to terminate execution, so number five won't be printed at all. Now we've reached the end of the PHP script, but that doesn't mean nothing happens after that. First, the registered shutdown functions are executed in order, so number three is printed next, after number two. After that, the destructors of the not-yet-destructed objects are executed, in no guaranteed order, so number four is printed last.
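The slides with the two code pieces aren't reproduced in this transcript, but based on the description above, the left-side script would look roughly like this sketch:

```php
<?php
// Prints "1" immediately.
echo 1;

// Shutdown functions run after the script exits, in registration order.
register_shutdown_function(function () {
    echo 3;
});

// Destructors of remaining objects run during shutdown,
// after the registered shutdown functions.
$obj = new class {
    public function __destruct()
    {
        echo 4;
    }
};

echo 2;

// exit() terminates execution here, so "5" is never printed...
exit;

// ...but shutdown functions and destructors still run afterwards,
// giving the full output: 1234
echo 5;
```

The right-side variant is the same script with a `fastcgi_finish_request();` call inserted right after `echo 1;`, which is what changes what the client sees, as described next.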
So for the left side, the output is one, two, three, four. Now let's look at the right side. Everything is still executed in exactly the same order, and it takes about the same amount of time. But once fastcgi_finish_request() is called, it flushes the response to the client first, and at that point the response is just the number one. So one is sent back to the HTTP client, and then the script continues to execute whatever comes after: two, three, four. Those are printed inside your PHP-FPM process, but they won't be sent back to the client. The output you see from the HTTP client is just one. So here's the tricky part: by using fastcgi_finish_request(), we don't save any time executing the whole PHP process, but we can send the HTTP response back much, much earlier, before the whole PHP process finishes. And here's how we made our APIs faster by doing background processing like that. We refactored some of our microservices to perform certain operations after the HTTP response has been sent back to the client. For deleting messages, what we do is some basic data validation first, then we send a successful response back to the client without any database interactions, and then we delete the message from the database in the background. By doing that, as you can see here, we decreased the response time of the inbox microservice from about 13 milliseconds to 9 milliseconds. Now, some people might wonder: you're doing things in the background, so you don't have much visibility into those tasks. What if something goes wrong in the background? Well, there is more to do than just putting your tasks in the background. You need to do some basic data validation first, making sure your input data won't cause issues in background processing. And we use exponential backoff to make sure those tasks get performed properly in the background.
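As a hedged sketch of the delete-message flow described above (the helper deleteMessageFromDatabase() is made up for illustration; the real service's validation and retry policy aren't shown in the talk):

```php
<?php
// Validate input up front, so bad data can't fail later in the background.
$messageId = $_GET['message_id'] ?? '';
if (!ctype_digit($messageId)) {
    http_response_code(400);
    echo json_encode(['error' => 'invalid message_id']);
    exit;
}

// Send the successful response immediately, with no database work yet.
header('Content-Type: application/json');
echo json_encode(['status' => 'ok']);
fastcgi_finish_request();

// From here on, the client has already received its response.
// Retry the delete with exponential backoff; log the final failure
// so it can be reported and addressed.
$delayMs = 100;
for ($attempt = 1; $attempt <= 3; $attempt++) {
    if (deleteMessageFromDatabase((int) $messageId)) { // hypothetical helper
        return;
    }
    usleep($delayMs * 1000);
    $delayMs *= 2;
}
error_log("failed to delete message {$messageId} after 3 attempts");
```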
And in case anything does go wrong, we log it and report it properly, so we can get it addressed promptly. This background processing implementation has some limitations. If you have something heavier, you probably shouldn't do it like this; you should think about using a separate job queue instead. Also, if you have locks in use, you need to make sure you release them properly before doing the background processing; otherwise, other requests could be blocked because of it. We have some more detailed discussions and implementations of this background processing approach, and you can check the links here. Next, HTTP compression. Lots of sites support HTTP compression, and many popular CMSes have it implemented and enabled by default, like Drupal, WordPress, and Joomla. As PHP developers, when we create our own microservices, we probably want to do the same thing and have HTTP compression enabled. But does it really work in our microservices? Does it work as it should? Does it always work? Probably not. I'm going to show how we improved HTTP compression in our microservices. When enabling HTTP compression in Nginx, the first thing you probably do is add the directive gzip on, to turn HTTP compression on. But that's not enough. You need another one, gzip_types, to specify which types of responses should be compressed. If you look at Nginx configurations online, you can probably always see these two, but they are not enough, especially in our case. We use Amazon CloudFront as our CDN service, and here's the thing: if your Nginx is running behind a CDN or some proxy server, Nginx may not know that the CDN or proxy server supports HTTP compression. In that case, your HTTP responses won't be compressed by Nginx at all when it's running behind the CDN or proxy server.
To make sure HTTP compression always works, even behind a CDN or proxy server, we need one more directive, gzip_proxied, with proper values. Now it seems we're in very good shape, but it's still not enough. There are two more things to think about: one is gzip_comp_level, the other is gzip_min_length. gzip_comp_level sets the compression level of a response; the value is between 1 and 9, where 1 is the least compressed, and the default is 1. gzip_min_length sets the minimum length of a response for it to be compressed; the default value is 20. Neither default is optimized at all. Say you set the minimum length to 1, one byte only: when a one-byte response is compressed, the output will be far more than one byte, so it makes no sense to compress a response with just one byte in it. So how do you configure these two properly? For us, we chose compression level 5, something in the middle where the compression ratio is still good. For the minimum length, we used to set it to 120 bytes, but that's not an optimized value. Chris Holland suggested that it's better to set it to the smallest typical network packet size, like the 1,500-byte MTU, because if the response is less than about 1,500 bytes, it's going to fit in one packet whether you compress it or not. We totally agree with that, so now we set it to 1,280 bytes. Thanks, Chris, for that. Now it seems we have everything set on the Nginx side, but it's still not enough; it doesn't mean HTTP compression will work as it should in our PHP applications. There's one more thing to check on the PHP-FPM side: if the HTTP header Content-Length is not set properly in the PHP response, Nginx will always compress the response, even when its length is less than the minimum length. So here's the thing.
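Putting the Nginx directives discussed above together, a sketch of the relevant configuration might look like this (the exact gzip_types list is an assumption; the talk doesn't show the real config):

```nginx
# Enable compression and say which response types it applies to.
gzip on;
gzip_types application/json text/plain;

# Compress even when requests arrive through a CDN or proxy,
# which may hide the client's Accept-Encoding from Nginx.
gzip_proxied any;

# Middle-of-the-road compression level: good ratio, modest CPU cost.
gzip_comp_level 5;

# Don't bother compressing responses that fit in one packet anyway.
gzip_min_length 1280;
```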
If we don't have that particular header set properly in our PHP-FPM process and sent back to Nginx, then Nginx is always going to compress the response, even if the response is just one byte long. Different frameworks set that header differently. Slim 3 always has the Content-Length header set properly by default. But Lumen and Laravel don't set that header at all. So if you happen to use Laravel or Lumen to build your microservices, the problem is that your HTTP responses will always be compressed. Always. And that's not what we want. To prevent that from happening, we created a middleware for Laravel and Lumen that injects the header into the HTTP response. That's how we got the HTTP compression part working properly in our microservices. Next, client caching. There's a very common feature request, which is to feed new data to the client side. If we had server push available, like in HTTP/2, it would be fairly easy, but almost all our PHP microservices still use HTTP/1.1. In that case, the client has to hit the server again and again to fetch the data. And that's terrible, because every single time it hits the server, we make database queries to get the data, which is expensive; and when many different users do the same thing at the same time, fetching data from the server, it gets even worse. So here's how we addressed the issue. First, we do data caching on the server side where possible. And we use the Last-Modified and If-Modified-Since headers and return HTTP 304 responses back to the client when there is no new data to feed to the client side. There are two benefits here. First, we reduce network I/O, because an HTTP 304 response has no body content at all; the response is very small, tiny.
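A minimal sketch of the conditional-request handling described above; the getLastModifiedTime() helper is hypothetical (in practice the timestamp could come from a server-side cache such as Redis):

```php
<?php
// Hypothetical helper: when did this user's inbox last change?
function getLastModifiedTime(int $userId): int
{
    return 1546300800; // placeholder timestamp for this sketch
}

$userId = 42;
$lastModified = getLastModifiedTime($userId);

// If the client's cached copy is still fresh, answer 304 with no body:
// no heavy message queries, and almost no bytes on the wire.
$ifModifiedSince = $_SERVER['HTTP_IF_MODIFIED_SINCE'] ?? '';
if ($ifModifiedSince !== '' && strtotime($ifModifiedSince) >= $lastModified) {
    http_response_code(304);
    exit;
}

// Otherwise return the data along with a Last-Modified header,
// so the client can make a conditional request next time.
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $lastModified) . ' GMT');
header('Content-Type: application/json');
echo json_encode(['messages' => ['...']]);
```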
And on the server side, when we return HTTP 304 responses, we don't need to make all those database queries, those heavy queries to fetch user data; so it reduces the database operations as well. There are some other HTTP headers you may want to consider for this kind of implementation. Now, data storage. We use NoSQL solutions to make our microservices fast, and there are many different options to choose from, like Redis, Couchbase, Aerospike, and MongoDB; we chose Redis and Couchbase for our inbox microservice. Redis is very handy, especially for storing simple data, and it has some very interesting data structures, like the set and the sorted set. Couchbase is just like Memcached, but with persistent data storage and some more features. Both of them are extremely fast, and they can also expire data from your system automatically. But there are some limitations when you use these NoSQL solutions. It's not like MySQL, not like an RDBMS: there are no fancy queries available, there's nothing like an index there, and you need to make sure to compress and serialize your data properly to save disk space. In our case, we have thousands of messages being created, stored, and sent out every single minute, and our database keeps growing all the time, so we have to fight against network I/O and fight to save disk space. That's one of the biggest challenges for us, and we have tried many different ways to tackle it. For larger messages, we compress them first before saving them to Couchbase. We also use some features in Couchbase, like the Couchbase encoder settings, to serialize the data better when saving it. And certain messages, like announcements, are pretty similar from one user to another; in that case, we don't have to save the whole message for each user.
We can just create a message template, then store the message template ID along with some per-user variables in Couchbase. Also, for certain fields, say a field called expiration in our JSON messages, we don't have to use expiration as the field name; we can just use e, to save space. And certain messages don't have to stay in the system forever; in that case, we use TTL fields to get those messages expired automatically from our system, from Couchbase and from Redis. So that's our effort against network I/O and database disk space. Besides all these efforts, we also want to make sure our database operations are fast. For Redis operations, there are two main drivers: one is phpredis, a C extension; the other is Predis, a library written in PHP. We used the second one, Predis, because three years ago phpredis was not yet ready for PHP 7, so we had to stay with Predis. The problem with Predis is that it's very nice, but it's a little bit slow and it costs more memory. So later on, we refactored to use phpredis instead. Let's see the difference between the two in terms of performance. Before, with Predis, the response time was about 16 milliseconds for the inbox microservice. After the refactor, once we started using phpredis, the response time dropped from about 16 milliseconds to 11 milliseconds. And there are more ways to make Redis operations even faster: you can use pipelining to speed up your Redis queries, and Redis also offers persistent connections if you want to use them. Next, hardware and network. How did we do code deployment in production before? Well, a few years ago, what we did was pack our new PHP source code first, then sync that PHP source code to the different production servers.
We then unpacked the PHP code on the servers, replaced the PHP code on those production servers, and flushed the caches on each of them. That's how we did production deployment before. There are issues with this approach. First, during the deployment, some API requests may break, especially while we're switching the PHP code over. Second, if we introduce any bugs in the deployment, it takes time to find the issue, and we probably have to make another commit, a patch, to fix it and make another deployment. All of that takes time, and during that time, your microservices are down. Now, for Design Home, we have started using Amazon ECS to host our microservices. When we do a code deployment, we create Docker images first; each image has PHP-FPM and Nginx built in as the web server, along with our latest PHP source code. We then launch a batch of new instances in Amazon ECS, and once those new instances with the new code are stable enough, we start draining connections from the old instances. Throughout this process, we always have a certain number of production instances running to serve traffic, so there's no downtime. And if a bug was introduced in the new image and we need to roll back, it's fairly easy, because we still have the old images with the old source code. What we do is bring up a batch of instances with the old images and take down the new containers with the new images. So rolling back is pretty easy as well. And say one of the production instances becomes unhealthy, probably running out of memory or something; in that case, Amazon ECS will notice, because each instance has a health check endpoint. When a particular instance is unhealthy, Amazon will start a new container to replace it.
By doing that, we always have almost the same number of production instances serving traffic. Another very nice feature is auto scaling. Today, for example (usually on Thursdays we have a big sale in our game), we have more traffic coming in, and in that case we probably need more servers to serve it. Amazon ECS will notice the change, see that more traffic is coming, and launch a batch more instances to handle the extra traffic. And when we don't have that much traffic, it will reduce the number of production instances automatically. We also have different development environments in Amazon ECS, like test, RC, dev, staging, and production, and all of these help a lot for development purposes. Especially the staging environment, which is very helpful to us: when we need to deploy some big, major changes to production, we can always test them in staging first and make sure everything is okay; it's pretty safe to do that there. And if we have some production issues, some bugs in production, and we want to investigate them, we don't have to do it in production; we can do it in our staging environment, which behaves pretty much like production. Next, hardware upgrades; these helped a lot to make our APIs much faster. In AWS, we used to use C4 instances, and last year we upgraded our instances: double the CPU, double the memory, and also a feature called enhanced networking enabled on our EC2 instances, which is amazing. With this hardware upgrade, we reduced our API response times from about 80 milliseconds to 50 milliseconds for our main service. Now, you're probably wondering: if I do a hardware upgrade, I'm probably going to spend more money on it. Probably, but not necessarily, because when you upgrade your hardware, each of your production instances can serve more traffic.
Last month, we did another hardware upgrade, doubling our CPU and memory resources again, but at the same time we reduced the total number of production instances by half, cut them down to half. So, as you can see, a hardware upgrade may not always cost as much money as you'd think. Next, moving everything into a VPC. We started using Amazon Virtual Private Cloud about a year and a half ago. It's not just to make our microservices safe; it helps a lot to speed them up. What we have done is move our instances closer to each other, into the same locations, and for communication between the microservices and the databases, we use the internal network instead of the internet. By doing that, our API communication is much, much faster than before, including the database operations. Let's look at another example. Last month, we did another upgrade: not only did we upgrade the hardware, we also relocated our microservices and the database nodes. By doing that, the Couchbase operation time dropped from about 10 milliseconds to 5 milliseconds. So, having said all of this, migrating to AWS helped us a lot to make our APIs faster, but it did take us a long time and many different efforts to get things done the right way in AWS. When we started working on the microservices for Design Home three years ago, we were a very, very small team. We didn't have much development resource, so we had to rely on best practices from the industry and the community, like HTTP/1.1, like REST APIs, like PHP-FPM. But now we have more developers and more resources, and we can look into more things to make our APIs even better, even faster. So here's what we want to look into to make our microservices even better.
Beyond PHP-FPM, there are different great frameworks, like Phalcon. We also want to look into the routing part, because I feel that for most PHP frameworks the routing seems a little bit slow, but we don't have a solution for that yet. For asynchronous operations, we are looking into solutions like ReactPHP or Amp, but we'd rather bet on Swoole, especially Swoole 4, which works very nicely for building microservices on the backend side. For network protocols, we still use HTTP/1.1, which is not that great for backend microservices, so we want to look into HTTP/2, and also into some other protocols, like the TARS protocol. It's a binary protocol, different from JSON: a regular JSON message of, say, about 54 bytes converts into a message of only about 16 bytes in that protocol, far smaller. These are things we want to explore later on to make our PHP microservices even better, even faster. That's my talk today. Thanks. Hello. You mentioned New Relic before and I have a question: have you tried any open source alternatives to New Relic, like Zabbix or something like that? New Relic, right? Have you tried Zabbix or any open source alternative to New Relic? Well, we have been using New Relic for years, more than five, and it's a commercial product, but there's also a free version. We feel it's very helpful; it helps a lot for tracking performance, especially between deployments: for each deployment, we can see whether our APIs are getting slower or better. There could be many other options as well, but for now we're staying with New Relic. I would say another good feature of New Relic is that it's not just about overall performance; you can measure the performance of different parts of your APIs, like how much time is spent on the Redis part, how much time is spent on the Couchbase part, and how much time is spent making external HTTP calls.
And New Relic can sort all of that out for you; I feel it's very handy. Yes, I don't think we've had the chance to explore all of those alternatives. Hi there, thank you. This is a question about your containers: do you combine PHP-FPM and Nginx into a single container, and if so, how do you manage the processes when you bring the container up? Okay. As I said before, managing PHP images is a painful thing, right? What we do right now is not use APT to install PHP or any other PHP-related software. We build them from source code, so that we know that when we install PHP into a particular base image, it's always exactly that version, not some patch or minor release of the same thing. We build from source code. My question was more about whether you combine the Nginx and PHP-FPM processes in the same container, and how you manage both of those processes when you start the container. Sorry. It's better to put both of them into the same image. If you put them into separate images, it doesn't help much: you would have to use TCP connections for the communication between PHP-FPM and Nginx, whereas if you use a socket file for the communication between the two, it's going to be more efficient. Do you have anything that starts both processes up, like a start-up script or a process manager? You mean, to manage starting Nginx and PHP-FPM with a single command, how do we bring both processes up? We use Supervisord. Awesome, thank you. It's very nice software; we use it everywhere. We tried some other solutions, some other base images, but now we rely more on Supervisord. That's great, thank you very much. Hi. My question is specific to ECS: it has two launch types, and I'm assuming you're using the EC2 launch type. Have you played with the Fargate launch type, and how does it behave in terms of performance, if you have explored it?
There are different strategies for launching tasks on EC2 instances during deployment, and we mainly use two: the replica strategy and the daemon strategy. They work differently, but you have to dig into those details, because, as I said, it cost us a lot of time playing with different configurations to make them work well. The deployment example I mentioned here is about the replica strategy, but we don't actually use it for now; on our main service we use the other one, the daemon strategy, which is more reliable for production deployment. The one you mentioned, we probably use in some microservices, but I don't have much impression of it.

Hi. Knowing what you know now, where would you advise a team to start optimizing?

Sorry, I didn't get you.

Knowing what you know now, where would you advise a team to start optimizing, if they were to start optimizing?

I didn't get it, sorry for that.

I'll try once again. You've mentioned four different categories of optimizations, right? Where would a team start doing optimizations if they are starting now?

I still have a hard time catching that.

Which areas should a team start working on first?

Right. Probably we can talk about it afterwards, but I still have a hard time with that one.

Morning. I'm just interested in your app's performance in the real world. Even though you've achieved brilliant reductions in response times internally, how are you seeing performance for people on mobile devices and the networks they use? Are these optimizations helping a lot, or are you still finding that response times for actual end users are fairly sluggish, and what have you done about that?

That's another big challenge for us: supporting slow mobile devices and slow networks. We're still working on it, and it's not just about making changes on our backend microservices side; we also need to make improvements on the mobile side, on the client side. I can't go into a long discussion here.
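For reference, the two scheduling strategies contrasted above are what ECS calls REPLICA and DAEMON. A rough AWS CLI sketch of the choice, with the cluster, service, and task-definition names made up for the example:

```shell
# REPLICA: ECS maintains a desired number of tasks, placed across instances.
aws ecs create-service --cluster prod --service-name inbox \
    --task-definition inbox:1 --desired-count 4 \
    --scheduling-strategy REPLICA

# DAEMON: ECS runs exactly one task on each container instance in the cluster
# (no --desired-count; the cluster size determines the task count).
aws ecs create-service --cluster prod --service-name inbox-daemon \
    --task-definition inbox:1 \
    --scheduling-strategy DAEMON
```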
I don't have a short answer for it, but from the backend perspective, if I wanted to address the issue, I probably wouldn't do it with PHP-FPM. I would do it with some protocol other than HTTP/1.1, and that's why I mentioned the Swoole extension there: it's a very handy PHP extension for addressing the issue of slow networks. Yeah. All right.

Do you use services on AWS like API Gateway?

Excuse me?

Do you use API Gateway?

API Gateway... I'm not quite sure I understand correctly, but you mean, when we set up our production instances, whether the traffic is processed through some gateway? There are two things. First, our containers, our instances, are running inside Amazon ECS, and we use Amazon CloudFront as a CDN. And for development environments we use a proxy server to route the traffic internally. Is that what you're asking?

Yeah, it's just more about whether you would use API Gateway in front, then go to CloudFront, then to ECS, so you can define your resources.

That's more of an ops question; I don't have much information about that. Yeah.

At the very beginning of your presentation, on the second or third slide, you mentioned that the main service talks to the database and the rest of the services sit behind the main service. So do none of the services have access to the database? Do they just go to the main service again and again for data?

No, they don't talk to the main microservice's database. For example, the inbox microservice has its own databases, the Redis database and the Couchbase database; the services don't share the same database. As another example, our moderation service uses MySQL, and that MySQL isn't used anywhere else, especially not by the main service. So MySQL is used only by that one backend microservice.

Okay, but sorry: for persistence between the services, how does your inbox service know which messages belong to which users? What is the mechanism for evaluating users' authorization? How do you do that?
I mean, is it JWT, or some internal local tokens, or whatever? Some kind of identifiers there? What is it?

Here's the thing: only the main microservice is publicly available to the client side, to the mobile game. All the other microservices are in a VPC, and we have firewalls making sure they are not accessible from outside the VPC, from anywhere. Only the main API service can communicate with all those backend microservices. That's for security reasons.

But does that mean that when you make REST API calls from the main service to those microservices, you don't have any validation?

No, we still have validation, but it's something lightweight; it's actually basic authentication. We're not yet sure whether there's a better solution for security purposes, but for now that works well. Yeah.

Thank you.

One last question, on the same topic: how does the main service communicate with all the underlying APIs?

Well, we still use HTTP/1.1, REST API calls. I wish we could improve on it, but for now all those communications are based on HTTP/1.1 REST APIs. I'm not going to say it's the best solution, but that's what we're staying with for now.
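To close with a sketch of the internal-call setup described above: HTTP basic authentication between the main service and a backend microservice might look roughly like this in PHP with cURL. The hostname, path, and credentials are all hypothetical.

```php
<?php
// Hypothetical credentials for the internal call; in practice these would
// come from configuration or a secrets store, not literals in the code.
$user = 'main-service';
$pass = 'secret-token';

// HTTP basic auth is just a base64-encoded "user:pass" pair in a header.
$header = 'Authorization: Basic ' . base64_encode("$user:$pass");

$ch = curl_init('http://inbox.internal/v1/messages?userId=12345'); // hypothetical VPC-only URL
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => [$header],
    CURLOPT_TIMEOUT        => 2,   // fail fast: both services sit in the same VPC
]);
// $response = curl_exec($ch);    // not executed here: the endpoint is made up
curl_close($ch);
```

The receiving microservice only has to decode the header and compare credentials, which is why this stays cheap enough for calls measured in single-digit milliseconds.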