So, this talk is about turning Drupal into an automated deployment machine. It's going to be somewhat high level, because we want to talk about the concepts, why we used Drupal, and the way we ended up using it. There will be time for questions at the end, so if you have deeper-dive questions or you want to know something specific, we're certainly happy to answer those. But it felt like going down to the command-line level of detail wasn't going to serve the broad audience here at DrupalCon.

A little bit about us. I'm Peter Wolanin. I've worked at Acquia for six years or so now, and I've done a lot of Drupal development as well. I'm up there in the ranks for Drupal 6, 7, and 8 contributions, and I did a lot of work on the Drupal integration with Apache Solr, which is one of the things I'm known for, and that Nick has become known for more recently, too.

My name is Nick Veenhof. I'm based in Ghent, and I started three years ago at Acquia as an intern, with Peter as my mentor. He was my guide and teacher in everything Solr, but also in the server infrastructure at Acquia. It proved a really interesting challenge for me, because I wasn't very skilled in infrastructure; my background was more software development. You can find more by me on Twitter at @NickVH, or just Google my name, I guess.

And Nick is also famous because he appeared in the DrupalCon Goes to Paris video. If anyone has watched that video series, look for Nick.

OK, so, enough introduction. I want to tell you a little bit about where we started. As Nick said, he came in as an intern, and he has basically taken this service we built, which does a lot of infrastructure deployment, and moved it to the next level. But I want to tell you where we started, to motivate the talk: the point where we reached a crisis and knew we had to rebuild essentially from scratch.

We had a lot of servers and a lot of search indexes. We were managing thousands of search indexes. Acquia had grown, and we were giving one search index to every customer that signed up for a subscription, including free subscriptions. To deploy all of these, we obviously needed hundreds of different servers running, with load balancers in front of them. But we were also running into a problem: as our customer base expanded, we had people that wanted to deploy their Drupal sites in other regions of the world. People wanted to deploy their Drupal sites in Asia, in Australia, even in Europe. And our service wasn't really able to serve them, because we had built it without that in mind.

In particular, I had originally built this service together with Jacob Singh. If anyone knows Jacob, he's now actually in India, building up our presence there. He and I worked together for about half a year through mid-2009 and rolled out the service, and for both of us, in a sense, it was our first time doing something like that. So this is a bit of a list of the mistakes we made, in hindsight. We didn't really know how it was going to scale; we didn't really have that in mind. We just wanted to ship it. You've all been there, right? But by the time Nick arrived, that was leading to a lot of operational stress and a lot of development friction. We couldn't deploy new features for customers, and we couldn't go to new regions, because we were hampered by the legacy infrastructure we'd built.
So this is kind of how I picture Nick: like that guy, trying to fix our service. He has to stick his hand down into that really dirty machinery, and he's risking getting his fingers cut off every time he has to fix something. And I didn't study mechanics, so that was a bit of an issue. I had no clue. Right. So in order to preserve Nick's fingers, so we didn't have to get so dirty, and so we could make people happy, we needed to fix things. Let me tell you why the machinery was broken.

One reason it was obvious from the start that the machinery was broken was that we used a polling model to update and deploy. If you've ever built a service, this is kind of the dumbest thing that you always start out doing, and we never moved beyond it. Every five minutes, we asked our data source: tell me about all the things that are supposed to be deployed on this server. We would check if there was something new, and if there was, we would deploy it.

That was annoying for a lot of reasons. The biggest one was that customers weren't happy, because they would ask to connect to one of the search indexes, and we would want to deploy it for them, but it might not be there for five or ten minutes. In the meantime, Drupal was giving them error messages saying that the search index was not available. So that was a bad onboarding experience: customers were unhappy trying to get started, and often they'd give up before the five minutes were even up. And with this mechanism, where the server just pulls in the data and doesn't have any other interaction, it was hard to see when an error occurred. We didn't have a good way to see that whatever data we'd sent over wasn't correctly formatted, or that the server itself had hit some error. So we needed a lot of manual auditing, checking, and constant tweaking to make sure the system was working.

In addition to that deployment problem, the polling model led to a scaling problem. As I said, we were getting to hundreds of different servers, and each of those hundreds of servers was calling our data store every five minutes and downloading a lot of data. This data store was a different Drupal site. So you can imagine: basically an un-cached Drupal bootstrap, load lots of data, pull it over. And as you scale to more and more servers, you're loading down your other production infrastructure in order to, usually, provide no updates. We were sending over data that hadn't changed, but we had no way to avoid doing this expensive call every five minutes.

So again, if you're building a service from scratch, don't get stuck doing this polling model; in the long run it will definitely bite you as you try to scale. Our hosting platform had actually started the same way, and we had to move away from it for exactly the same reason. As you load down your data store with each server trying to update, it simply falls over, and you can't throw enough hardware at it just to serve these useless requests.

Another problem with that old infrastructure was that, in order to create a new server, someone had to go in and manually configure it. In this case we were actually using a service that stored the data, but you still had to go into a form and edit the values and say: this is the name that the server is going to have, this is where it's going to live.
There was also a sort of identifier that would collect a group of indexes, and someone had to set all of that by hand, and they had to do it right every single time, or things broke when the server launched. If they did it wrong, or if you edited it by hand later, you might not be able to launch the same server again if it went down.

Scaling and balancing search indexes also took a long time. I don't know if you're going to talk about this, Nick. I can talk about it. So, balancing, in the sense here that we would have servers that would get overloaded. We would provision basically all new customers onto a single server, let's say, for our free sites. And if we weren't paying attention (and we didn't have a good system to pay attention), we'd get too many, and the server would start to be overloaded. Then someone would have to go and manually copy those indexes to another server by hand. That did not put us on the good list of our operations team, shall we say, or we had to do it ourselves.

And as I said, relaunching didn't necessarily work smoothly. As Nick says, you might just lose a server on Amazon. Even then we were deployed on AWS; we've since redeployed into a different configuration on AWS. But AWS tells you things like: we're going to reboot this server soon. Or: we lost your server. Or they won't even tell you; the server just goes away. It's a virtual machine, you didn't need it, right? If you run as many servers as we do, you occasionally just see them disappear. They also advise you, and this is really important to think about if you're using Amazon, that they have multiple data centers (availability zones) within each region, and your application should be prepared for total failure of one of those data centers at any time. If you want to build a reliable application on Amazon, your application has to be prepared to suffer that loss and keep going; otherwise, you didn't follow their best practice.

Having these sorts of problems, where we couldn't readily relaunch and couldn't necessarily fix things that were broken, made it hard to provide customers good uptime. And some of them liked our service a lot. They were using it as an important feature of their Drupal sites, driving the search for their e-commerce or travel site or whatever; the front page of their site was basically driven by search results. So if our service went down for them, they were very upset. But we didn't really have the systems in place to keep them up consistently.

Finally, as I said, we had a monitoring and distribution problem. We had basically one data provider. We had search servers deployed in the US, and we were experimenting, not totally successfully, with deploying some in Europe, while trying to figure out how to monitor them all at the same time, without really having a comprehensive system for doing this. In particular, the big problem was that we didn't have a single record describing what should be happening for a given search index: what server should it be deployed on, who owns it, what configuration does it have. That information was split between our data store on the one hand, which knew about the customers, and the servers themselves, which knew which things they were supposed to fetch and deploy.
With this split, some of the configuration on the server and some of it in the data store, we didn't have any one place where we could capture all of this and use it to monitor the service, because we didn't have that one piece of information telling us that this index should be on this server running this configuration. So yes, monitoring was hard. And as I said, the instances were confined to one region, which meant customers elsewhere had high latency when they used the service.

And then finally, we had real friction from this problem: we had basically hard-coded the system to say that each customer gets one index. That was great when we started; it worked fine. But as we got more and more customers, and more customers with big sites, they sometimes needed two search indexes for one site, or they needed a development search index as well as a production one. We didn't have a flexible way to give that to them, so we had to do more and more painful workarounds to provide things that seemed like they should be easy for us to do.

So it didn't look good. But Nick came in. I panicked. Yes, there was a bit of panic, perhaps, and operations was a bit panicked too, but we came up with a plan. We reimagined how we were going to deploy these search indexes and rebuilt the service essentially from scratch. And at the end (I don't think Nick's going to really talk about the migration) we migrated all the customer data, all the search indexes, from our old infrastructure to the new one with relatively minimal downtime. And now it's running happily.

A big piece of context for us in thinking about how to solve this problem, for me especially, was Drupal Gardens. I spent a lot of time on the Drupal Gardens project, and Drupal Gardens, if you don't know it, uses a Drupal site as a control server for deploying other Drupal sites. We had successfully scaled this up to about 30,000 deployed Drupal sites; I think that's about what the number is now. So it was clear to us that we could scale to the tens of thousands that we needed, in terms of deploying something using Drupal as the control layer. If you want to see more about that, I have a couple of links here (we'll post the slides) about what that service does and how it works.

The other piece of context, which probably a lot of you know, is Aegir. Aegir is a set of Drupal modules and scripts that deploy Drupal sites. Again, we have this model of Drupal being the machine that deploys other Drupal sites. And so we thought: why don't we take the advantages of that sort of system, apply them to something totally different, and turn it into a machine for deploying Apache Solr search indexes?

So we took a step back and thought about what the ideal characteristics of such a system would be. I think this was very important, because remember, we had those pain points where we had friction in development and operations, and we wanted to avoid running into those again as we scaled out. In particular, we wanted to make this system independent of anything else at Acquia. Through discussions we concluded that it should be decoupled from our hosting platform and decoupled from our customer-facing portal.
It should really be a standalone thing: we're going to interact with it through an API, and not couple it directly to the data or to any customer-facing service. The advantage is that the system also becomes testable. We can spin up this Drupal site and run tests just against the API. We can even spin it up and run system tests where it actually deploys search indexes, and we don't have to deploy the entire rest of the Acquia infrastructure, all the Acquia sites, to do this. It's a standalone thing.

So it's really a black box from the standpoint of other teams at Acquia. They can basically make an API request that says: please give me a new search index for this customer, with this name, with these credentials. And it just works. They don't have to think about how or why it works under the hood. (I'll show a small illustrative sketch of such a call in a minute.) And if in the future we had to swap this out and rewrite the service in Java, as long as the API stayed the same, the rest of Acquia wouldn't even have to be aware we made that change. Not that I'm planning to, but hypothetically.

We also had a goal of reusing our own hosting platform. I've spent time on the hosting team as well as on Drupal Gardens, and Drupal Gardens (really now called Acquia Cloud Site Factory) is essentially just a wrapper on top of our hosting platform. So we wanted, in a sense, to be a customer of our own internal hosting platform, and not have to reinvent all that infrastructure. One of the caveats about using it, of course, is that it's designed to host Drupal, which gets us back to why we might use Drupal as the machine.

Finally, both Nick and I have contributed a lot over the years, and we had in mind, not as an engineering goal but as a values goal, to reuse and contribute back the code involved in this project, so that we wouldn't be isolated from the community. A lot of what Nick's going to talk about is how we used different Drupal modules to actually carry out the tasks needed to do this. Well, not only Drupal modules, but also packages and libraries that you can use outside of Drupal.

So, to summarize why we picked Drupal as the control machine for this project. We're very familiar with it, and probably a lot of you are familiar with Drupal. It gives us a UI: if you just started writing a bunch of scripts, you wouldn't have a UI where you can go in and inspect the data, debug it, tweak it. And Drupal is really good at treating content as data. Drupal has a data-driven model of behavior, and that means you can use the data for anything, including having that data represent the things you're deploying. We'll talk more about that, but if you haven't thought about that concept, it's one of Drupal's main advantages over a lot of its competitors.

We also had experience using things like the Services module to synchronize this data, or content, between Drupal sites. So we knew we could take the configuration for a search index, for example, and synchronize it back to the customer-facing portal if we needed to, so customers could see the up-to-date status of their search index. Again, without coupling the two systems, we could synchronize back and have an up-to-date report.
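To make that black-box API idea a little more concrete, here is a minimal sketch of what a consumer-side call from another Drupal 7 site could look like. To be clear, this is illustrative only: the endpoint URL, path, and payload fields are all invented for this example, not our actual API.

```php
<?php
// Hypothetical consumer-side call to the governor's REST API.
// The endpoint, subscription ID, and field names are invented.
$payload = array(
  'subscription' => 'ABC-1234',
  'index_name'   => 'ABC-1234.prod.default',
  'region'       => 'eu-west-1',
);
$response = drupal_http_request('https://governor.example.com/api/v1/index', array(
  'method'  => 'POST',
  'headers' => array('Content-Type' => 'application/json'),
  'data'    => drupal_json_encode($payload),
));
if ($response->code == 200) {
  $index = drupal_json_decode($response->data);
  // The caller only sees the result. How the index actually gets
  // deployed behind this call is the governor's problem.
}
```

The point is that the consumer never touches servers or configuration directly; everything goes through that one API surface.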
Of course, because it's Drupal, there are lots of modules we could integrate into a stock Drupal 7 site and customize very quickly. Things like Views make it very easy to build reports and look up customer statistics, so the whole debugging and development process became a lot faster using Drupal as the centerpiece, rather than a bunch of shell scripts, which is what we had before. So, to summarize? Yes, to summarize: Nick and I came up with this plan, and then, to a large extent, Nick went off, attacked the problem, and implemented the plan we'd come up with.

So let me... I tried 42, but that didn't work. All right, does everyone know which movie or which book this quote is from? Exactly. So that was the trivia; I don't know if there's a Drupal trivia night here this week. Yeah? Okay, cool.

So now you know all these problems, and you're probably curious: okay, how did they actually solve this, why Drupal, and what does the whole architecture look like? We'll mainly go into the Drupal side, the machine side, of this problem. Two years ago we already gave a talk about how we built the infrastructure using Puppet, and all the Solr replication stuff, so that's not what we're going to cover here. If you want to see how that works, there's a presentation from two years ago.

So, we talked about that data provider. That's where all our customer data comes from, and that customer data knows how many search indexes we have for each customer, which region each one lives in, that kind of stuff. It sends an API call to our governor; we call it a governor because it kind of governs the whole search infrastructure. Once the request is in there, the governor adds it to a queue. And the queue is interesting, because it's a finite state machine with multiple states. When you add an index, you go through a couple of states: first you add it, then you check it, then you make sure the config is right. If you do all of that in one task and something fails, it's really hard to find out what failed. So I'd highly recommend you make small tasks. (There's a small illustrative sketch of this idea in a moment.)

Then, as you can see, this little governor of ours does a couple of things. First, it sends out SSH commands to the appropriate servers for that index. And the appropriate servers are determined automatically by the deployment machine: it tries to find out, where do I have space, where should this go. It tries to be smart; it's not AI, but it tries to be smart. It does that for the Solr master and the Solr slave, and it also configures the whole extractor. All of that logic is driven by SSH commands.

To make sure we can balance across the right machines, we needed NGINX. And NGINX is a little tricky to configure, because it's all plain config files. We came up with a way to deploy those config files to a whole set of balancers at the same time using a Git deployment mechanism. We commit to one repo within our Drupal deployment machine, and it automatically deploys to the set of balancers. That way we also know the file is always the same everywhere, and if for some reason we need to spin up a new balancer, we spin it up, the Git clone happens, bam, we have a new balancer. So that's the whole scheme of what we're going to talk about here.
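As a rough illustration of that keep-tasks-small, finite-state idea, here's a hedged Drupal 7 sketch built on the core queue API. The queue name, state names, and functions are all invented for this example, and our real implementation sits on top of the Queue Runner module rather than plain core queues.

```php
<?php
// Hypothetical sketch: split "deploy an index" into small,
// single-purpose state transitions. Wiring the worker to a queue
// processor (cron or Queue Runner) is omitted for brevity.

function example_enqueue_index($index_nid) {
  DrupalQueue::get('search_index_tasks')
    ->createItem(array('nid' => $index_nid, 'state' => 'validate_config'));
}

// Worker: each call performs exactly one small step, then queues the
// next state, so a failure points at one specific step.
function example_run_task(array $item) {
  $next = NULL;
  switch ($item['state']) {
    case 'validate_config':
      // Check the stored configuration before touching any server.
      watchdog('example', 'Validating config for index @nid', array('@nid' => $item['nid']));
      $next = 'deploy_master';
      break;

    case 'deploy_master':
      // One SSH trigger to the Solr master, nothing else.
      watchdog('example', 'Deploying master for index @nid', array('@nid' => $item['nid']));
      $next = 'deploy_slave';
      break;

    case 'deploy_slave':
      watchdog('example', 'Deploying slave for index @nid', array('@nid' => $item['nid']));
      break;
  }
  if ($next) {
    DrupalQueue::get('search_index_tasks')
      ->createItem(array('nid' => $item['nid'], 'state' => $next));
  }
}
```

The design point is that each item performs exactly one transition, so when something fails, the failing state tells you precisely which step broke.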
If we dive a little deeper into Drupal: we defined a couple of content types. One is a search index; then there's a server, a cluster, and a search config set. All of these are linked to each other using Entity Reference, so you can really get a whole view of what's connected to what. Are there any questions about the definitions of these words? Okay.

So why do we use content as our driver? In our case, it's all about content: it's all about how many indexes there are and what defines each index. It's all textual information, and as Peter explained, Drupal is really good at that. So we created a bunch of API calls that accept this content, or certain configuration files: okay, make me a new index. That was quite a lot of work to do in Drupal, to build these API pieces, what I think is now commonly referred to as headless Drupal. But this is still Drupal 7, so that wasn't really there yet. And yeah, this basically explains what I mentioned in the diagram. I'll come back to the diagram later if there are any questions.

Let me dive into the modules we actually created or used to make these SSH calls, these Git deployments, et cetera. There's Queue Runner, SSH Helper, Git Wrapper, Services (probably one of the few in this list known by a lot of people), Composer Manager, and EFQ. I'll explain all of them.

During this whole process, the whole company was kind of watching us, and they saw us developing this Queue Runner thing. That's also why this picture of Peter and me is here: Andrew Yang, a colleague of ours, sent it to everyone, saying, hey, look, these guys doing Queue Runner. Queue Runner is a module that runs as a fake daemon; it's not a real daemon, it's still a PHP process. It runs for 55 seconds in the background and tries to run as many tasks as possible. Remember I said: keep your tasks really small. That really helps to execute as many tasks as possible. Then it waits five seconds, and it starts another 55-second batch, and it does this over and over until there are no more tasks. The cool thing about Queue Runner is that you can use the finite state machine in there that was invented by the Gardens team, so we tried to cooperate a bit. I'm not going to go deeper into that, but if someone's interested, I can show you how it works.

Then we needed a way to actually execute SSH calls from within Drupal. I don't know if anyone has ever tried that? No one? Oh, one person. How did it go? It was okay? Interesting. What about error handling and that kind of stuff? Did you use, for example, SSH keys? Right, perfect. So that's what this module tries to help you with. It allows you to give it a certificate, you can enter a passphrase, and it uses the phpseclib library, available on GitHub, to make the connection. So you don't need any weird bash script to get your SSH key into memory, which is what we used before. And it proved very stable. I'd recommend the phpseclib library if you want to do something like this.
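To give a feel for what that looks like, here's a minimal sketch of an SSH trigger using phpseclib's 1.x-style classes, assuming the library is on the autoloader (for example via Composer Manager). The host, user, key path, and remote script name are all made up for illustration.

```php
<?php
// Hypothetical SSH "trigger" using phpseclib. Host, user, key path,
// and the remote script name are illustrative only.
$key = new Crypt_RSA();
$key->setPassword('my-passphrase');  // Passphrase-protected private key.
$key->loadKey(file_get_contents('/etc/deploy/id_rsa'));

$ssh = new Net_SSH2('solr-master-42.example.com');
if (!$ssh->login('deploy', $key)) {
  throw new Exception('SSH authentication failed.');
}

// We only send a trigger. The deploy scripts, Solr WAR file, and
// configuration already live on the server itself.
$output = $ssh->exec('/usr/local/bin/update-index.sh ABC-1234');
```

Because the command is just a trigger, the payload over SSH stays tiny, which matters once you multiply it by hundreds of servers.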
Then there's the whole Git piece. So we created Queue Runner, we created SSH Helper, and now we also created Git Wrapper. Git Wrapper is a module based on a library created by Chris Pliakas, maybe known to some of you as the author of Facet API. What Git Wrapper does is give you an object-oriented way of handling Git. You can say: okay, create me a new repo, deploy me a new repo, maybe clone this here, add this there, that kind of stuff. It becomes really easy to handle Git calls programmatically. So we have this in our governor, in this deployment machine, as I explained.

This is an abstract of what our nginx config file looks like. It's an include, of course, because otherwise it wouldn't work. This is what the governor actually spits out and writes to disk, then commits to that Git repository, and then it gets deployed automatically through the Git deploy mechanism across all our balancers. (There's a rough sketch of that commit step a bit later.) That was a good way to scale across multiple machines.

Then we have Composer Manager. I'm sure it's also being used by many of you who are site developers, unless my assumption is wrong. No? Composer? I would assume so, right? If not, Drupal 8 will kind of force you. Composer Manager was really useful for us because we had all these different libraries that we had to include and wanted to keep up to date. We also used it to pull internal Acquia packages into our repository. It was also created by Chris Pliakas, and it solves the problem you have with multiple composer.json files. Sometimes modules come with a composer.json, which lets them pull in libraries from GitHub. But what if two modules need the same library? Then it gets added to your autoloader twice, and that comes with some complex problems. Composer Manager merges all these composer.json files, gives you one place where all your libraries live, and handles the autoloading.

Then, second to last, there's Services and Services Client. This is what we actually use to send Drupal nodes back and forth between different Drupal sites. So we have our client site that sends over these indexes, but we can also send just regular data.

This last one is an interesting one, which earned me a lot of pushback from someone called chx; I don't know if he's in the room. This module allows you, using EntityFieldQuery (which is normally only supposed to give you back entity IDs), to do some hackery and join in the field tables, so you get the rest of your entity information and your field data back. It's an easy way to write data queries in Drupal without writing the MySQL yourself, and it makes just one MySQL query to get your data. Pay attention: this is not how EntityFieldQuery was intended to be used. It's a hack. It won't work on MongoDB, so I don't recommend it. But it's really cool.

I think you should probably talk a little bit more about Services, because I mentioned the synchronizing of content. We've moved to a model of synchronizing a lot of this content among multiple Drupal sites. We're synchronizing it primarily because we use it as data, but it's very much the same system that people use for content staging and content synchronization. And again, it actually is just content, right? Drupal content is data, and data is content. So if you've set up a system where you stage or synchronize content between sites, it's exactly the same setup to keep your deployment data in sync. One site might have basically read-only access, or it might be read-write, so that, in our case, the support team can go in and tweak a setting for a search index. They tweak it in the UI, and it's automatically synchronized, using the Services module, back to the governor. The governor then sees the new data, makes an SSH call to the server, and the new configuration gets deployed.
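Going back to the Git deployment for a moment, here's a hedged sketch of that write-commit-push step, using the cpliakas/git-wrapper library's API as we understand it. The repository path, file name, branch, and upstream block are invented for illustration.

```php
<?php
use GitWrapper\GitWrapper;

// Hypothetical sketch: regenerate an nginx include file inside the
// config repo and push it, so every balancer picks it up on its next
// pull. All paths, names, and the upstream block are illustrative.
$repo_path = '/var/governor/nginx-config';
$nginx_config = "upstream index_abc_1234 {\n  server 10.0.0.42:8983;\n}\n";

file_put_contents($repo_path . '/balancers.conf', $nginx_config);

$wrapper = new GitWrapper();
$git = $wrapper->workingCopy($repo_path);
$git->add('balancers.conf');
$git->commit('Rebalance: move index ABC-1234 to solr-master-42');
$git->push('origin', 'master');
```

Because every balancer clones the same repo, a freshly spun-up balancer gets an identical config for free.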
Services Client is built on top of Services, and it allows you to select a couple of fields that get mapped onto other entity types and other fields. So you can take a subset of your customer data and send it somewhere else, but still keep the link: once that entity is updated on the other side, the change actually flows back into the original. And you can specify in the permissions whether it goes only one way or both ways. For our search application, we only wanted one way, because we wanted to be able to spin up indexes that are not attached to customer data. So make sure you get that replication set up in the right order.

As I mentioned, we also have a bunch of server infrastructure that you may be curious about, but I'd recommend the other presentation for that, or ask in the Q&A. Basically, we have automated the process of spinning up servers using Puppet. By setting some flags, we can say: okay, this is Tomcat 6 or Tomcat 7, Java 6 or 7, Solr 3 or 4. Operations is always busy, but now they just copy a set of commands from a kind of generator, and the server spins up and auto-registers itself, using Services, with our governor. The governor then sends an SSH command to that server: okay, hi, I know who you are; you're here now. So operations no longer configures anything on the server itself. They never have to go in, and they actually should not go in, because if that server goes away, we need to be able to spin it up again really quickly.

This is the session I referenced. It talks about the whole replication setup, plus a bit more about the APIs and how to build them. There's a lot of confusion about how to really build an API in Drupal. You could use the Services module, but that's quite complex; maybe you want to build it yourself and really fine-tune that one API on your Drupal site. You all know hook_menu(), right? I hope so. Yeah, okay. So there's a small trick, maybe not very well known, called the delivery callback, which you can set to drupal_json_output. If you then make a page callback that returns an array, it automatically gets converted to JSON. So you have a menu path that spits out JSON, and that's already a good start if you want to make an API. (I'll show a tiny sketch of this in a moment.) Go ahead. If you look at Drupal core, you'll see that drupal_json_output is used directly by core's Taxonomy module. But by making it the delivery callback, you skip a bunch of other code that Drupal would otherwise run every time you hit that path. So it's basically a way to speed these things up, because in this case we know there's no user interacting with Drupal and no session we have to track. We just want to spit out the output to a correctly authenticated consumer.

This API is also used by customers, still for a very small subset of things, to add, for example, stop words or protected words, that kind of configuration change, using regular JSON that we can validate; allowing customers to send in real files that get directly deployed would be a security issue. We're still testing that feature internally. All of our customer support already has access to it, and they just add these files and they get automatically deployed. And here you can see how the concept of a black box becomes really useful.
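Concretely, that hook_menu() trick looks something like this in a Drupal 7 module. The path and function names are invented for illustration; this isn't our actual endpoint, and a real API would of course authenticate the consumer.

```php
<?php
/**
 * Implements hook_menu().
 */
function example_api_menu() {
  $items['api/v1/index/%node/ping'] = array(
    'page callback' => 'example_api_ping',
    'page arguments' => array(3),
    // Illustration only; a real endpoint must check authentication.
    'access callback' => TRUE,
    // The trick: core serializes the page callback's return value as
    // JSON and skips the normal themed page delivery.
    'delivery callback' => 'drupal_json_output',
    'type' => MENU_CALLBACK,
  );
  return $items;
}

/**
 * Page callback: return a plain array; the delivery callback emits JSON.
 */
function example_api_ping($node) {
  return array(
    'index'  => $node->title,
    'status' => 'ok',
    'time'   => REQUEST_TIME,
  );
}
```

Hitting api/v1/index/123/ping would then return a small JSON document instead of a rendered page.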
There's only a really small part that your support engineers need to know in order to deploy files across the whole infrastructure. They never have to log into the servers. They don't need to know how it works, but they do need to trust it. So the robustness of the system is your responsibility, basically. We used Swagger, for example, to describe the APIs. These are all APIs built in Drupal itself, but as Peter said, if we switched this out to Java, it wouldn't matter. No one would know; no one would care. Through these APIs we can get statistics, pings, logs, that kind of stuff.

A couple more points for you. A couple more points? Okay. Well, I kind of mentioned them already. For example, operations has to monitor each index, and if you have 4,000 indexes, it's kind of tricky to monitor whether all of them are actually online on your master and your slave and actually work. So, also using this API, they can get all the details for these indexes and call the ping command, and as soon as one goes down, their Nagios starts alerting: hi, this one is down. And if they want to analyze it, they also use the API to get the details and the schema: okay, where does this one live, how do I get in, what's the username? So they get exactly the piece of information they need to resolve the problem. Really tailored to their needs: tailored to support's needs, tailored to engineering's needs.

I think that's it. We're very happy to answer questions about details that might not be clear to you, and I hope you got something from this talk. If you have questions, maybe come to the front; I don't know if there's a microphone, so we'll repeat the question if needed.

So why did you guys insist on using SSH for deployment for the Solr service? Why didn't you, for example, since you already use Puppet, deploy your own daemon that responds to API calls to do the configuration on the Solr system itself, so you didn't have to use SSH as this trigger mechanism?

That's a good question. It's actually because we are deployed on Acquia Cloud, and Acquia Cloud is a separate project that has to accommodate a lot of other projects. So we don't have ultimate control over the Puppet configuration. What it does is deploy Tomcat and deploy Java, but what you do with that Tomcat, that's your problem. It's also a way to avoid needing a massive team, because it's really hard to scale if the platform has to accommodate each and every product. And in this case (Nick didn't really go into the details) we're actually just using SSH as a trigger. The configuration scripts are there locally on the server, as is everything the deploy needs, including the WAR file for Solr. So we basically just go over and say: there's an update, get the data and do the update. It's not that we're sending a lot of commands over SSH; it's really just a triggering mechanism. Great, thank you. The application on the server itself that handles the whole command side is actually Ruby. And, as a dirty secret, that's the part we didn't really update much from before. It worked.

Are there any other questions? No? All right. Oh, there's one. Okay, yeah. Okay, thank you. Also, the obligatory slide: we are hiring. We do fun stuff at Acquia.
I still need some colleagues in the Ghent office, and we need a bunch more in Burlington, near Boston. So please come talk to anyone with a blue shirt if you want a challenging job. Okay. Sure.

Right, so the question is: can't we just check on cron? At the beginning (I don't know if you saw the first slides) that's where we started the system. The problem is, if you just run on cron every one minute or every five minutes, you have two problems. One is that the speed with which you deploy is a lot slower than you actually want it to be: the lag between a customer asking for something and getting it, if it's five minutes, is often too long. The other problem is load from every server. As we're scaling now, I don't know how many hundreds we have running. Indexes? No, servers. There are around three to four hundred servers. So, four hundred servers: if every one of them makes an API call every minute to fetch data, the load creates operational problems. It's a reasonable way to start building, but you don't want to continue that way in the long term as you scale. And you only have 60 intervals per hour, because cron doesn't do half-minutes, as far as I remember.

So please also give us some feedback if you want, here. And there are stickers for DrupalCamp Ghent, which is two and a half hours away from here. Attendance is free, and I'm one of the organizers, so I'd love for you all to come. Thank you very much.