Good afternoon everyone. My name is Seky Vazquez and I'm presenting this session, Drupal Extreme Scaling. For the next hour I will be talking about a project that I had the pleasure to work on for about one year, and I'm going to focus on the infrastructure of that project. The project was successful; I will give you the details, and I will focus on its scaling features. When I start to talk about the project itself, you will see why the session is titled Drupal Extreme Scaling.

First things first: who is this guy? Well, as I said, my name is Seky Vazquez. I work mainly as a freelance DevOps, and I also do backend Drupal development. I have focused on Drupal for the last three and a half years. I am also a PhD student at the University of Seville; my PhD thesis is about cloud computing, to be precise: I am trying to develop a formal method to optimize cloud deployment options. I'm very involved with hacking and IT security; I really like it. But not everything in life is code, there are other things, so I always like to mention some of my hobbies. For example, I like rock and roll: I play the electric guitar and I have my own rock band. I like video games, and I love to read books, especially adventure books.

After this introduction, let me review the main sections of the session. First I will give you a brief introduction on why scaling, why extreme scaling. After that I will describe the project: the requirements, the technologies and all that stuff; that is the main part. Then I will focus on some problems we found while developing the project and how we solved them. After that I will show you a little demo; it's recorded, no more live demos. Finally I will share some conclusions, very fast.

Let's start with the introduction. When we think about Drupal scaling, scalability, performance and availability, we all have some modules, techniques and tools in mind, for example the ones we have here. Drupal has a problem with cache: by default it uses the database as the cache system, so we can use Memcache. Drupal is built on PHP, so we use APC or a similar opcode cache. We use Varnish to improve the speed of our application for anonymous users. We use redundancy to make sure we are confident about availability. You can choose whatever technique, tool or module; there are a lot of modules to improve this kind of stuff, which are non-functional requirements.

But we have to consider the new technologies that have arrived over the last few years. They represent not the future, but the present. In this introduction I want to focus on two of them, the two main technologies that we used on this project: cloud computing and containers. The main advantage of cloud computing is elastic computing. What does this mean? Very simple: our resources are not fixed. I mean, we don't have, for example, four or six servers and then that same amount of resources all the time. With cloud computing we have the ability to have an elastic amount of resources. For example, when our application has a peak in the workload, those resources are able to scale up.
I mean, we will have more resources — more RAM, more CPU, et cetera — to be able to manage that peak in the workload. And when we no longer need those extra resources, they just scale down and disappear, so we don't spend too much money on this stuff.

In theory, all the cloud providers sell this as "we give you almost 100% scalability and availability", and this is not true. Why? Well, it would not be the first time that one of the data centers of, for example, Amazon Web Services goes down and one availability zone just disappears for a couple of hours. So you have to be very careful when designing how your infrastructure will work on top of a cloud service. And there is something the cloud providers do not talk about: the budget. You can easily deploy more and more machines and more and more resources, but at the end of the month the bill will come and you will just go crazy when you see the number. So you have to be very careful with the budget, and you need to design properly so you can avoid excessive costs.

About Docker containers: well, today Docker is mature enough to be used in production environments. I have seen a lot of people who use Docker containers just to have a "virtual machine", or for local development, but we used it as the base technology of the project and it works very well, as we will see in a moment.

Okay, having done this brief introduction, I will focus on describing the project itself. First, some basic requirements. At the moment we started to build the project, we had to use Drupal 7 with a multi-site install. We are talking about a platform in which each user gets a Drupal site of his or her own. At first, when we started development, we were talking about 30,000 users. That means 30,000 sites, and the business wanted to be able to scale up to 100,000 sites over the next two years — a lot of sites. They had availability problems with the previous infrastructure, the previous version of this project: sometimes the service became unstable, and since cloud providers promise almost 100 percent, they wanted 100 percent availability. Of course, high performance as always, the lowest possible cost as always, and, due to the need to manage so many sites, we needed a tool to control all of them, so we had to build an external application. We also wanted automated and non-disruptive deployments — zero downtime when possible, which relates to that availability figure. And we also needed to do a migration from the previous system.

One of the problems was that our team was made up of only three devs. We had a front-end guy, a back-end guy, and me as the DevOps. We also had some non-technical people, but still: three people, one year, a lot of sites to support, and all these requirements. So our faces were like, what? It was a bit crazy, you know, three people and this monster project. It seems scary, but don't panic. We talked about the project and the requirements, and we agreed that we would need a bunch of technologies that could support us in our mission of supporting that number of sites. It looked like a very challenging task, but, hey, we are computer scientists, aren't we?
So we started to review the possibilities, and we focused on open source. We definitely decided to use, on one hand, cloud computing, and, on the other hand, containers, as I described before. We found Apache Mesos as one of the tools — I will describe it in a moment — and we would use Docker and all this stuff. We would use Amazon Web Services as the cloud provider, our low-level infrastructure. We would use Apache Mesos, which is software that gives us an abstraction layer over the resources available on a group of machines: for example, if we had four servers with four CPUs each, Apache Mesos lets us look at those four servers as one server with 16 CPUs. We would use Marathon, which is an application on top of Apache Mesos that lets us run Docker containers on our cluster. Then there is Docker, and for the CMS, of course, Drupal. We would use Nginx rather than Apache, Node.js for the external application that controls and manages the multi-site, and MongoDB for the field storage, so all the field data and field revision tables would be out of MySQL. And to automate everything we would use Ansible, because Ansible works very well with the Amazon Web Services API.

So, let's describe it. If this were a pyramid, I would go from top to bottom: I will start by describing the Drupal installation, and then we will go down. As I said, we have Drupal over Nginx and PHP-FPM. Nginx is very flexible when configuring it — that is a double-edged sword — and it gives us a very flexible configuration. PHP-FPM increases performance significantly compared to mod_php on Apache, and we used mod_security to improve the security of the system and block some basic attacks. We also developed some custom roles for Drupal, but I need to work on them a bit before I can release them. So, like in this picture, we have a Drupal Docker container.

As I said before, we used Node.js to develop an application to manage the sites of the multi-site. Why? Because Node.js is asynchronous, as you know. Basically, it means you can start some execution and then forget about it: you tell the application, "when you finish executing this, just execute this callback, this function", and you forget about how it works. We managed this with the Node.js app, which connected to DynamoDB, a NoSQL database on Amazon Web Services. There is a JavaScript SDK, so we can interact between the Node.js application and the DynamoDB table to manage the state of the different sites. We also developed, in the same app, an API so we can do batch processing. For example: "Hey, there are 2,000 new users that want their sites created by Monday." Okay, no problem — we just send a JSON to our API and the sites get created; we use queues for that. You can see a small sketch of that kind of call below.
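Just to make that idea concrete, here is a minimal sketch of what such a batch call could look like from PHP. The host name, the endpoint path and the payload fields are invented for illustration; the real API was internal to the project and protected, as I mention later, with HTTP authentication.

```php
<?php
// Hypothetical batch request to the Node.js management API:
// "create these sites for me and work through them via the queue".
$payload = json_encode([
  'sites' => [
    ['subdomain' => 'alice', 'user' => 'alice@example.com'],
    ['subdomain' => 'bob',   'user' => 'bob@example.com'],
  ],
]);

$ch = curl_init('https://manager.example.com/api/sites/batch'); // invented endpoint
curl_setopt_array($ch, [
  CURLOPT_POST           => true,
  CURLOPT_POSTFIELDS     => $payload,
  CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
  CURLOPT_USERPWD        => 'api_user:api_password', // basic HTTP auth in front of the API
  CURLOPT_RETURNTRANSFER => true,
]);
$response = curl_exec($ch);
curl_close($ch);
```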
Now that we have the two main, top-level applications, we will go one step down, and I will describe the container itself. Remember, we have Drupal running on top of Nginx and PHP-FPM, and we want those tools inside a Docker container. So we need a stateless Drupal container: we don't want to have any mutable data inside the container. We want to be able to kill the container and raise a new container without losing any data. What that means in practice is that Memcache, MySQL and MongoDB must be external services, and the directory with the user-uploaded files has to be out of the container too, so we used S3 as the storage backend. For that purpose, you can configure the Memcache, MySQL and MongoDB endpoints in settings.php, and to use S3 we used the S3FS module — I will show a small settings.php sketch in a moment. That module was a great advantage, and later I will talk about a pitfall we found and how this module helped us get past it. The emails are sent with an external service, which is Postmark, and we used New Relic for monitoring on each container. You can see a small diagram here; ElastiCache is blue instead of green because it's not persistent.

Having this in mind — remember, we now have the container — we have to go another step down, and I will talk about the cluster. As I mentioned before, we used an Apache Mesos cluster with two masters and a bunch of workers. On the workers we launch the Docker containers using Marathon, and all the clustering is orchestrated by ZooKeeper. We also had HAProxy at the bottom, and we used another tool from the Apache Mesos ecosystem, Chronos, to let us execute cron jobs. We had some problems running cron inside the Docker containers themselves, so we used Chronos, and it worked very well. You can see the diagram here: we have ZooKeeper orchestrating the two Mesos masters, and we have one main worker and then a group of other workers — I will explain that in a moment. Very important: Marathon and Chronos both expose a REST API, so to launch a new Docker container, a new app, or to set up a new cron job, you only have to send an HTTP request to that API, and your stuff is done. We will see an example later.

So, we put Drupal on top of Nginx and PHP-FPM, in a Docker container, on a Mesos cluster. A Mesos cluster is not elastic by itself, but by using Amazon Web Services, a cloud provider, underneath it, we get an elastic Mesos cluster, which is a very, very good idea. So we used an auto-scaling group of EC2 instances, and I will explain how it worked. This is the diagram we have seen before, and take a look at this: we have an auto-scaling group with the Mesos workers — not the main worker. Route 53, which is the DNS, points to this auto-scaling group. We have all the external services, and for MongoDB we used MongoDB MMS, which is the managed service from the company that created MongoDB.

And why is this? We had three different kinds of containers: first Varnish, second the Node.js application, and third the Drupal containers with Nginx and all the stuff. One of the problems I will describe later is that when an auto-scaling group scales up, it adds a new EC2 instance, and more containers get deployed. But when the auto-scaling group scales down, if you don't separate things — we had only one Varnish container and one Node.js application container, and a lot of Nginx/Drupal containers — and the instance with the Varnish and Node.js containers gets killed, then we lose uptime; our application just won't work. So, we decided to separate those two groups.
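Going back to the stateless container for a second, this is roughly the shape of the settings.php I was describing: every stateful service points outside the container. Host names and database names are illustrative, and the exact variable names should be checked against each module's documentation for your version; this is a sketch, not the project's real configuration.

```php
<?php
// MySQL lives on RDS, outside the container.
$databases['default']['default'] = [
  'driver'   => 'mysql',
  'host'     => 'drupal.xxxxxx.rds.amazonaws.com', // illustrative host
  'database' => 'drupal_group_001',
  'username' => 'drupal',
  'password' => 'secret',
];

// Memcache as the cache backend, except cache_form, which must stay persistent.
$conf['cache_backends'][]       = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_default_class']    = 'MemCacheDrupal';
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
$conf['memcache_servers']       = ['memcache.internal:11211' => 'default'];

// Field storage in MongoDB (mongodb + mongodb_field_storage modules).
$conf['mongodb_connections']['default'] = [
  'host' => 'mongodb://mongo-1.internal,mongo-2.internal',
  'db'   => 'fields_group_001',
];
$conf['field_storage_default'] = 'mongodb_field_storage';

// User-uploaded files on S3 through the S3FS module: one shared bucket,
// with each site getting its own folder inside it (more on that later).
$conf['s3fs_bucket'] = 'platform-files';
```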
And one more thing about the auto-scaling group: when it scales up, there is no default way to deploy the new containers. So we had to create a script and execute it from /etc/rc.local to get those new containers deployed once the new instance is up. Okay, let's continue.

Of course, we now have an idea of how the infrastructure looks, but it's a very large infrastructure to be managed by hand — and remember, a three-person team. I'm a bit lazy with my work; I don't want to repeat work. So, let's automate: lazy DevOps is best DevOps. I really like the way Homer thinks. We used Ansible because it's a very lightweight tool: it uses SSH connections to manage all the hosts, and it integrates very well with Amazon Web Services. A simple playbook is enough to execute one command and have a full environment deployed. We used a private Docker Hub account to store the Docker images. When we made a new deployment, we just created the Docker image, uploaded it to Docker Hub, and then we only needed to kill the containers and deploy the new ones. Not in that order, but it worked. We used a rudimentary way to configure the containers themselves, which is this: a Makefile and a Dockerfile. We used the Dockerfile to configure the static content of the container — for example, the immutable configuration files and the installed packages — and the Makefile helped us deploy the mutable content and settings: the settings.php file, some virtual host configuration, and so on. The best point of using Ansible together with this combination of Makefile and Dockerfile is that we have a very easy way to create and destroy environments on demand.

But that's not all. The devil is in the details, and we had to pay attention to a lot of details, because this infrastructure is very complex. For example, backups: Amazon lets you do automatic backups on RDS, but we created a custom recovery plan so we didn't depend on the Amazon default stuff. We have a firewall — we just used it. HTTP authentication for all these APIs: it looks basic, but not everyone does it always; you would be surprised. And, of course, log centralization. Later, when the project had been running for a couple of months and working well, we started using a Mesos feature for this: Mesos has, inside each worker — each EC2 instance in this case — a directory which can be shared over the network with a little configuration, and we used it to store the logs of the site-creation process. That was good, but we ended up using Elasticsearch with Logstash and Kibana. Still, Mesos is very powerful on that point.

So I will focus now on some problems that we found, and their solutions. First, databases. I want you to think about the problem of having a lot of databases on a multi-site install. Remember, a multi-site install will have one database per site, and if we're talking about more than 30,000 sites at first, scaling up to 100,000 sites, that's a lot of databases. So yes, the cloud scales, but that's crazy. Remember that MySQL creates one folder per database, and inside that folder it has, depending on the configuration, one file per table. So with 100,000 folders... okay, we supposed that Amazon would manage the low-level stuff.
But, well, we tested it and it was not working properly, so we decided to make some changes. Another pitfall was MongoDB. When you create a new MongoDB database and you add one collection and one item to that collection, it pre-allocates about 600 megabytes — 655 megabytes, to be precise. That space is not used at that moment, but it's pre-allocated. So if you start creating a lot of databases and collections and adding data, that keeps growing very quickly. So we had to split things up, because neither MySQL nor MongoDB would scale up so easily; it was basically unmanageable.

We used a classic strategy: divide and conquer. What we did was identify each site using a unique hash, and that hash became the prefix for its tables on both MySQL and MongoDB. The MongoDB field storage module did not have a prefix feature, but we modified it — I still have to contribute that change back. Now we can mix the tables of many sites in the same database, on both MongoDB and MySQL, but we don't want to have one database with, virtually, millions of tables either. So we grouped both the MySQL databases and the MongoDB databases into groups of 500 sites. That way we get a maximum size of about 4 gigabytes for the MongoDB databases, and each MySQL database has about 62,000 tables, which is more manageable. It gets more ordered.

Then we had to perform a couple of operations to determine which database belongs to which site. So, what happens when the web server receives an HTTP request? We used the settings.php file, which is executed on every HTTP request, to perform this logic. When the request arrives, we detect the subdomain — yes, we used subdomains to separate each site — so we identify the subdomain name of the site which received the HTTP request. From that subdomain we obtain the unique hash, and we use that hash as the table prefix on both MySQL and MongoDB. We ask DynamoDB — remember, we used DynamoDB to know which sites are deployed and which just don't exist — and in DynamoDB we have a column which identifies the database the site belongs to. So at that point we connect to the right database, and Drupal can access its tables without seeing the other sites' tables. And to avoid multiple DynamoDB calls, we stored the database reference in Memcache under a specific key, so we don't have to call DynamoDB again and again and again. You can see a rough sketch of that settings.php logic below.
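Here is a minimal sketch of that per-request routing, with a lot of assumptions: the hash derivation, the Memcache key, the lookup_group_in_dynamodb() helper and the MongoDB prefix variable (which comes from our own patch to the module) are illustrative names, not the real implementation.

```php
<?php
// 1. The subdomain identifies the site: user123.example.com -> user123.
list($subdomain) = explode('.', $_SERVER['HTTP_HOST']);

// 2. Each site has a unique hash, which doubles as its table prefix.
$site_hash = substr(hash('sha256', $subdomain), 0, 12);

// 3. Find the database group the site lives in: Memcache first, DynamoDB
//    only on a cache miss, so DynamoDB is not hit on every request.
$cache = new Memcached();
$cache->addServer('memcache.internal', 11211);
$group = $cache->get('site_group:' . $site_hash);
if ($group === false) {
  $group = lookup_group_in_dynamodb($site_hash); // thin wrapper around the AWS SDK
  $cache->set('site_group:' . $site_hash, $group);
}

// 4. Point Drupal at the right database and prefix both storages.
$databases['default']['default']['database']  = 'drupal_group_' . $group;
$databases['default']['default']['prefix']    = $site_hash . '_';
$conf['mongodb_connections']['default']['db'] = 'fields_group_' . $group;
$conf['mongodb_table_prefix'] = $site_hash . '_'; // added by our patch, not in the contrib module
```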
We had another problem, a very strange one, which happened when we started testing. When we created a new Drupal site, we had a timeout that we could not pin down during the first days: the Node.js application launched the site creation and the connection just got cut. We were under very heavy pressure, so we developed a quick patch, which basically takes advantage of Node.js being asynchronous: a fire-and-forget method. We launch the site creation and tell a container: "hey, create a new site with this subdomain, this user and this data, and when the site gets created, notify me at this URL." Then we forget about that site-creation process, and when we get notified, we continue with the processing. Later, when the pressure went down, we could investigate, and the timeout was produced by HAProxy. Well, those kinds of things happen.

Other problems we found: instability in the Auto Scaling Group. This is related to what I talked about before — the containers for Varnish and the Node.js app being destroyed when the Auto Scaling Group scaled down. That happened randomly, so we decided to split things and have one fixed Mesos worker for those containers, and have the Auto Scaling Group with only the Nginx/Drupal containers. It was crazy: we were in the middle of the migration, with just three days left to finish it, we were at about 20%, I think, and this happened.

MongoDB instances running out of space: be careful if you use the MongoDB MMS system, because it creates EC2 instances — a replica set and a shard of MongoDB instances on EC2 — but the default volume size is 50 gigabytes. We only realized it when it was too late, and we lost about three hours. In the middle of a migration, that's hard to fix. We also found some instability in PHP-FPM on some requests; just remember to disable the slow log when you are using Nginx and PHP-FPM.

On the first approach, we created one single bucket for each site. 30,000 sites means 30,000 buckets. What's the problem? Amazon only lets you create 100 buckets. That's a big problem. As I said, the S3FS module saved us, because it lets you configure a single bucket for all the sites, but each site has a directory inside that bucket, and the directories just don't see each other. It was a good solution.

When you are applying an update — for example, there is a new release of the Drupal code — you have to deploy: the Docker container gets the new code, we create the image, upload it to Docker Hub, and then we just deploy on the Apache Mesos cluster. But what's the problem? We have to apply the database updates to a lot of sites. To do it, we created a custom script that lets us call drush on each of the sites, using the list of all those sites that we have in DynamoDB. It took about 24 hours — a full day — to call drush on all the sites when there were 30,000 of them.

The migration, I have spoken about it. We used the Migrate module; it was a Drupal-to-Drupal migration, so no problem with that. One thing to mention here is that the previous version of this project was a single-site Drupal. You can imagine how big it was: 30,000 users, each with a lot of nodes, the Subdomain module... it was crazy. But it got migrated and it worked well. One comment to add: when you are using RDS, the Relational Database Service on Amazon, if you allocate more storage for your instances, the instances get more IO speed. We realized that and we used it. We had some problems with database speed; at first we tried to keep the RDS instances as small as possible, but we realized the problem was not storage. We preferred to allocate more storage so it can scale as new sites are deployed, and at the same time we took advantage of this feature.

Now I will do a small demo. I have prepared a video, and I will show just Apache Mesos; if you don't know it, you will see. This is the Apache Mesos HTTP console. I have a cluster on my local machine with two workers. You can see the two frameworks, which are Chronos and Marathon.
These are the tools on top of the cluster which let us run cron jobs and run Docker containers. At this moment they are not being used. We can see that we have two workers, two machines with one CPU each and 244 MB of RAM; here are the disk and the IP address. It's very complete. Looking at the Mesos main screen, at this moment we have no tasks deployed. This is a summary of the number of CPUs we have: remember, two Mesos workers, so we see two CPUs. The memory also gets summed up, and they are idle. There are no active tasks at this moment because we haven't deployed anything yet.

This is the Marathon HTTP screen. We have no applications running, no containers running. This is Marathon — this is just the user interface, but remember that we have a REST API. How does the REST API work? We send a request to a specific URL. This file, for example, is a JSON file where we describe what kind of container we want. In this case it's a plain Nginx container. This is the configuration for the network: we have the container port, which is 80, the default HTTP port. I will stop the video for a second. If we receive a request on the cluster port, 1088, we redirect it to the Docker container's port 80. We have one Docker container in this case, one Nginx container. It will use only half a CPU, and 64 megabytes of the available RAM. There are a lot of options, a lot of configuration, where you can fix the host, the worker where the containers will be deployed, and so on. Here we have the POST request to the URL: we send JSON, including the contents of the JSON file itself, to port 8080, to the apps URL. We receive a very interesting response; we can even use it to manage the app. And then you can see we have one Docker container using the resources that we asked for. We also have access to the JSON definition we used: the open port, the number of containers, the CPU, all the data.

And now another interesting thing. You can see here that you can access the container on a fixed port — in this case, let's see... okay, here, port 31484 — and we have access to the Nginx. This is direct access to the Docker container. But we do not want to access each container on a different port every time we receive a request; we want to access through the 1088 port. So we will scale up to two containers — see how easy it is: just pressing Scale and setting it to two, and we have another Docker container deployed on top of Marathon. We have one of them on the .10 IP and another one on the .11. We can access them on their direct ports, but if we change to the 1088 port and refresh, we now have access to Nginx through it. As a best practice, we close the ports in that range — it's a configurable range — and only allow access through 1088, so we can manage it in a proper way.

So this is Marathon. You can see the resources being used in the Mesos cluster summary, and now you can also see the tasks, which are the Docker containers. You can also access a sandbox, so you can take a look at standard output and standard error — it's very easy to manage the containers. You can see here the access log for Nginx, so it's very easy to centralize the logs there. That sandbox is based on the shared directory that I mentioned before. Below you can see a rough sketch of what that kind of API call looks like from code.
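This is a minimal sketch, not the exact JSON from the demo: it posts an app definition like the one in the video to Marathon's /v2/apps endpoint, here from PHP instead of curl. The host name is invented; the 1088 service port matches what is shown in the demo.

```php
<?php
// App definition: one Nginx Docker container, half a CPU, 64 MB of RAM.
$app = [
  'id'        => '/nginx-demo',
  'instances' => 1,
  'cpus'      => 0.5,
  'mem'       => 64,
  'container' => [
    'type'   => 'DOCKER',
    'docker' => [
      'image'        => 'nginx',
      'network'      => 'BRIDGE',
      'portMappings' => [
        // Requests to the service port (1088 in the demo) reach port 80 inside the
        // container; Mesos assigns the host port from its own range (31000-32000).
        ['containerPort' => 80, 'hostPort' => 0, 'servicePort' => 1088, 'protocol' => 'tcp'],
      ],
    ],
  ],
];

$ch = curl_init('http://marathon.internal:8080/v2/apps'); // Marathon REST API
curl_setopt_array($ch, [
  CURLOPT_POST           => true,
  CURLOPT_POSTFIELDS     => json_encode($app),
  CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
  CURLOPT_RETURNTRANSFER => true,
]);
echo curl_exec($ch);
curl_close($ch);

// Scaling up later, like pressing "Scale" in the UI, is just a PUT to the same app:
// PUT /v2/apps/nginx-demo  with body  {"instances": 2}
```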
And this is Chronos. This is an example: we can launch cron jobs. In this case it's just a very simple job, and it will be executed on schedule; you can see the next scheduled execution and the time it takes. It's very interesting. And that's it — that's the demo. I just wanted to show you the most interesting part of the technology stack that we used on the project. Just to mention it: the project right now has, if I remember correctly, about 70 CPUs — 70 cores — and about 120 or maybe more gigabytes of memory available, and almost all of those resources are being used. Those are great numbers.

Well, to finish, some conclusions. This project went live last April. It was a success, yes, but I slept too few hours the week we went live — you know how this works. I learned a lot about these new technologies, and that is something I value very highly. The combination of containers and cloud providers is an absolute success too. Just after we launched, Amazon released a new service that lets you run Docker containers on top of AWS itself — no need for Apache Mesos. Well, it was a bit too late for us to change at that moment. Another thing I learned — well, I already knew it — is that Drupal is very flexible. When someone says, "hey, I will use another CMS or another tool", well, this project has demonstrated, at least to me, that Drupal is flexible enough to be used on basically any project. It's very flexible. And about these tools, I recommend that you learn them, but not the quick way — I mean, don't just try them for a couple of hours. Read the docs — they are very good, for both Marathon and Mesos — and keep practicing with them for some time. And when you finally find your way to the volcano and throw in the ring, it will be more useful and more satisfying for you. So please, practice with them. And that's all. Thank you. Thanks. And if you have any questions?

Yes, please. Sorry? I can't understand. Can you show the site? No, I can't. You ask if I can show the site; I cannot, because I don't have permission to do so. That's also why I have not mentioned the company, if you noticed. I don't have permission, so I can't. Sorry. Yes?

Yes, so my question is: everybody's always very positive. If you create a new social media site, you usually never think of the use case where people leave. So, first of all, obviously, great work on scaling at really amazing levels. But there will probably be complexities when you remove users: you had 500 sites per database, and then you need to somehow scale that down. How have you thought about handling that?

So, your question is: when a user removes a site, what happens? Or is it impossible to remove a site? Yes, you are asking how we manage those sites. As you remember, we built a Node.js application, which is a custom application, we used the Amazon API and we connected it to that application. We designed it to be able to locate a site from the subdomain or from the user identifier, the unique ID, calculate which database and which tables belong to it, and then remove them, or move them to another database, and perform whatever action we needed. Maybe there were better approaches, but we used this one because a custom Node.js app gives us a lot of flexibility to do these tasks. Yes, so thank you. Thank you. Any more questions?

I have two questions. First, a simple one.
Do you have any problems with the S3 file system? A problem with...? S3FS. S3FS. Well, at the beginning, yes, because we were using the first version of the module, 1.x, et cetera, and then we had the problem of requiring an S3 bucket for each site. When we realized that that wouldn't be possible, we switched to the dev version of the module and it worked like a charm. We had no more problems with the module in this case. We tried some options, but that worked best.

The other question, or idea, is: did you think of using this kind of scaling for one site only, using some load balancer or something? Could that be applicable? I think I didn't understand. You have a multi-site installation. Yes. Did you think about using it to scale only one domain, one site, with the same technology? Well, I did not take the decisions on the technologies; I just implemented everything, and the architect was the one who decided. But the previous version of the project was a single-site installation, and, maybe due to the design decisions taken at the moment or whatever, it did not work fine. It was just slow — we are talking about five seconds per request, for non-cached requests. The architect took the decision to use this kind of monster infrastructure, and it worked. It was a good experience. Maybe we should have considered that option, a single site, but at that moment he decided to use the auto-scaling this way. Thank you. No problem.

Hi. I have two questions too. Yes. You talked about the main worker and the other workers — is there any difference between the two? Yes. To avoid the problem of an EC2 instance getting killed and taking the Varnish and Node.js containers down with it, we separated them: we have one worker with fixed containers, one for Varnish and one for the Node.js app. So in the auto-scaling group instances we have only the Docker containers with Drupal itself. That way the auto-scaling group can scale up and down — it can add instances and kill instances without problems, because the load balancing happens inside Mesos itself. And by splitting the two kinds of workers, we can be confident that the Varnish and Node.js containers will not get killed.

Okay, and the other question is simple: do you run drush with Ansible or not? Are you asking about running tests? Drush, drush. Sorry. Ah, drush. Yes. No. To run drush on all the sites, we built a custom script — in fact, it's a plain PHP script — and we used it over the network. That was a decision taken at the last minute, basically, but it worked. We just prepared a PHP launcher and another PHP file which was listening on the container side, and it executed the instruction, the command, that we ordered from the other endpoint. It's not the best solution; I would have used another kind of solution, for example raising a new Docker container specifically to run drush there. But we had no time and we were under very heavy pressure at that moment, so we took that approach. It was fixed later to use a single Docker container to run drush, but at that moment we used that solution. Thanks. Thank you.
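(For reference, a minimal sketch of what that kind of launcher could look like. The site-list file, the docroot path and the domain are invented here; the real script read the list from DynamoDB and talked to a small PHP listener inside the containers rather than shelling out locally.)

```php
<?php
// Hypothetical "run drush updb on every site of the multi-site" launcher.
$sites = json_decode(file_get_contents('sites-from-dynamodb.json'), true);

foreach ($sites as $site) {
  $uri = escapeshellarg('https://' . $site['subdomain'] . '.example.com');
  // drush updb applies pending database updates for one site of the multi-site.
  passthru("drush --root=/var/www/html --uri={$uri} updb -y", $exit_code);
  if ($exit_code !== 0) {
    error_log("updb failed for {$site['subdomain']}");
  }
}
```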
Thank you for your speech; it's amazing work. But I have a question running in my head — I don't know if it's a stupid question or not. I would like to know how you... Try to speak with a higher volume, please.

I would like to know how you made... because you said you made something like 3,000 databases, and I would like to know how you built those databases. Did you put all the stuff inside each database, or did you separate some tables, or put all the tables together — I mean users, cache, everything all together, or...? Yes, you're asking about how we manage the tables in the databases. No, you needed to split them into different... How did you separate the tables? I mean, for example, when you make a multi-site database, you can say in settings.php, "the users table is going to be in that database and the fields table is going to be in another database." So what did you do? How did you lay out the databases?

I think I get what you mean. What we did was two main steps. We just created the new sites in the database as normal, on MySQL, on the database that corresponds to them. But two kinds of tables are not in that database. For example, the cache tables were not in the MySQL database: we used Memcache, so we just removed them, except cache_form, which has to be persistent. And on the other hand, the field data and field revision tables were in MongoDB, so we just don't have those tables either. That way we have about 100 tables per site in MySQL. And once we did this, we split all the databases. Imagine we have 30,000 sites — that's a lot of tables — but we mixed the tables of each 500 sites: for each bunch of 500 sites we create a new database and we keep using it until we reach 500 sites in there. And within a database, we can be sure the sites don't conflict, because each table has a prefix, which is the hash from the subdomain.

What exactly did you do with the users database? For example, if I'm on one site and I go to another site, another subdomain, will I stay logged in, or do I have to log in again? No, no — you are logged in. It's a multi-site Drupal install, so it just behaves like that. So you manage it with the module? Yes, that's it. It's plain Drupal. Thank you.

Just one more question; after that I will be outside, because we need to leave the room for the next speaker. Please go to the microphone. My question is: in this setup that you had, with the sites in a multi-site approach, when you log in to one, does it log you in to every single site, or are you using a separate set of users for each one of the sites? Yes, the login is separate. I mean, a user logged in on one site will be an anonymous user on another, different site. We just had some mechanisms so the admins could interact with the sites, but that's apart; that's not related.

Okay, thank you all for coming. Thanks. Thank you. If you have more questions, or you want to discuss something or whatever, I will be outside, or maybe downstairs with a coffee, so feel free to come by and ask. Thank you.