Let's get started, guys. So welcome to the Cloud Native panel. Cloud Native means a lot more than just spinning up an EC2 instance in AWS and deploying your Drupal. Cloud Native is really a holistic approach to building and running applications in a way that takes full advantage of the cloud computing model: everything from design, implementation, deployment, and operations. So today we're going to be discussing technologies like containers, and cloud providers like Amazon, Google Cloud, DigitalOcean, those types of things. And we're going to get stuck into some Kubernetes as well. So we have three very talented panelists today. Starting from the left, we have Mike Richardson from Ironstar, Nick Schuch from Skpr, and Scott Leggett from amazee.io. Could you guys just introduce yourselves, give like a 30-second intro on who you are and what you're doing with containers, the cloud, Kubernetes? Sure. Thanks, Nick. So yeah, I'm Scott. I'm fairly new to the Drupal world. I've been at amazee.io for just on three months now, but I've been doing containers, fiddling with Docker and stuff, for maybe three or four years, and Kubernetes for the last couple of years of that. So yeah, I've worked on a few different cloud platforms. Yeah, that's about me. Sweet. My name's Nick Schuch. You introduced me anyway. But yeah, I work on the Skpr platform, a command line-based hosting platform built on top of Kubernetes. I've been on a container and Kubernetes journey for the past couple of years, since I think Kubernetes 1.1 and onwards. Yeah, it's definitely been a journey, and I'm looking forward to answering the questions. Cool. Thank you. So I'm Mike Richardson. I'm the managing director and co-founder of Ironstar, yet another Drupal hosting provider.
We run production Kubernetes environments using our Tokaido platform. If you haven't checked out the Tokaido local development environment, I highly recommend it; I plug it at every opportunity I have. As I said, we run Kubernetes in production, and I think, like everybody here, we've gone through that process of very early adoption up to more and more automation and more and more scalability, for us on AWS and Google Cloud. Awesome. All right, so just so that we can get a feeling for the room, I want to gauge what level of familiarity you guys have. Could I get a show of hands if you're getting started in this space? Like you haven't really had a chance to play with containers, cloud, Kubernetes? Yeah, a couple of hands, cool. Who here is using containers for like a local dev stack or maybe CI, things like that? Awesome. Is anyone running Kubernetes? Running Kubernetes in production? Cool. And what about anyone constrained to on-premise hosting? A couple of sheepish hands up the back. I feel sorry for you. I'm sorry. Cool. All right, I'm just going to be keeping an eye on the hashtag, so if you've got questions, feel free to tweet at that. I will also go around at the end with a mic, so hopefully we get time to do that. OK, so let's get stuck into the questions. So for the panelists: for people still running legacy-style hosting on virtual machines or bare metal, can you explain some of the reasons why they should be looking into containerizing their Drupal apps? Who's going to go first? So for us, I think the biggest advantage that we've seen is repeatability and consistency. When you move from a VM-based architecture to a container-based architecture, it's very easy to look at a container definition and say, right, this is what my container does. And every time you run that container, you're going to get a very consistent result.
And it is a lot easier now, I think, to create a container-based infrastructure than it is to create a VM-based infrastructure, with all of the really good examples that are out there about how to run Drupal specifically in Docker on the different Kubernetes and cloud providers. So for us, it was a few things, but the main one that I always like to land on is PHP upgrades. Sounds a bit boring, but you get this ability to migrate a single app from PHP 5.6 to 7, whereas on a traditional VM stack, you're stuck with upgrading the host, which then probably upgrades 10 or 15 other sites if you're running a shared instance. And then it also comes down to deployment: ops and devs have to go, well, OK, you're going to update PHP, and then I'll run the deploy. It's this really hard lockstep kind of situation to do those upgrades. So for me, that was a massive, massive win: projects could declare when they wanted to upgrade their PHP versions, and it was seamless, and we didn't have to be super involved in that process. And just as easy to downgrade as well. And just as easy to downgrade, yeah. Yeah, I think that was going to be my point as well, the flip side of the upgrading: something goes wrong. You've got these images, which are sort of immutable build artifacts that make up your application. You've rolled out the new ones in production, something's gone wrong. You just want to roll back and try and fix whatever's broken. It's easy, because you've still got the old artifacts there. They can just be picked up and put back into production, and life goes on. You're not trying to mutate state, I guess. Yeah, awesome. Thanks, guys. What are some of the biggest challenges that you've faced when running Drupal in containers? File systems. Yeah, stateful file systems. Yeah, cool. We'll get stuck into that later on.
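To make the per-project PHP upgrade point concrete, here's a minimal sketch in Kubernetes terms: each site ships its own image, so the runtime version lives in the image tag rather than on a shared host. The registry name and tags here are hypothetical.

```yaml
# Sketch: one site declares its own PHP runtime via its image tag.
# Upgrading from PHP 5.6 to 7.3 is a one-line change for this site
# only -- no shared host upgrade, no lockstep with other projects.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-site
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-site
  template:
    metadata:
      labels:
        app: my-site
    spec:
      containers:
        - name: php-fpm
          image: registry.example.com/my-site:php7.3  # was :php5.6
```

Rolling back to the previous image tag is the same one-line change in reverse, which is the "immutable build artifact" point made above.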
What advice do you have for security patching and general maintenance of base images and your app images? I'll go first. Yeah, I think the general advice would be: try not to do too much wild customization. Try and stick as close as possible to your upstream, just because it means there's less stuff for you to maintain. Yeah, I've been thinking about it quite a bit lately. A lot of people talk about having really small images so that you have less attack surface, but I think that's looking at it the wrong way. It's not about having really small images. It's like you said: it's having less stuff thrown in there. It's really understanding what you're deploying, and also sticking to a schedule or a routine to make sure your images are up to date. Yeah, I think that last bit is really important. To make sure that our containers don't go stale by only updating them every three months or every six months, we put all of our containers through a build system where every month they get automatically rebuilt with the latest versions of anything we haven't specifically pinned, things like PHP and nginx and so on. And that goes into a non-production pipeline for a month before it goes into a production pipeline. So we've constantly got those images being refreshed. That of course means you're going to get patched for any security things that come out, but you will also be going headfirst into any bugs that get introduced. So you've got to manage that trade-off. The other thing, if you were in Toby Bellwood's talk earlier: Quay.io and a few other hosted image providers have some turnkey solutions that will scan your Docker images for any known published vulnerabilities and give you instructions on how you can go about mitigating those. Yeah, that's a great point.
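A scheduled rebuild pipeline like the one described could look something like this (GitHub Actions syntax shown as one possibility; the registry name, image name, and timing are all assumptions). The key flags are `--pull`, which refreshes the base image, and `--no-cache`, which forces unpinned packages to be reinstalled at their current versions.

```yaml
# Hypothetical monthly image refresh job (GitHub Actions syntax).
# Runs at 02:00 UTC on the 1st of each month and rebuilds the app
# image from scratch so base-image and package patches are picked up.
on:
  schedule:
    - cron: "0 2 1 * *"
jobs:
  rebuild:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # --pull fetches the newest upstream base; --no-cache rebuilds
      # every layer so unpinned dependencies get their latest versions.
      - run: docker build --pull --no-cache -t registry.example.com/drupal-app:monthly .
      - run: docker push registry.example.com/drupal-app:monthly
```

As the panel notes, you'd point this at a non-production pipeline first, so a bad upstream update is caught before it reaches production.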
So I guess there's an issue there where, if you're not deploying regularly, then even if your base images are receiving security updates, your application is still potentially vulnerable, so building that automation is important. Cool. Is achieving local dev/prod parity a realistic goal with containers? Absolutely. If you've got containers, you can have the same container running anywhere. Especially if you're on something like Kubernetes, where it's very easy to run a multi-tier application or multiple layers of redundancy without necessarily having a duplication of cost, it's very easy to say, I want staging to be exactly like prod. And you're going to have, as I mentioned before, that consistency: this is what staging looks like, and if I deploy this container to prod, I can have a high degree of confidence that prod is going to behave exactly the same way. Yeah. I think we sometimes overdo it trying to keep the parity between our production hosting and our local development. Just by running the same containers, you're mostly there. You're very close, and I think from there you can make trade-offs on both sides. There's this whole idea of the inner loop and the outer loop: the inner loop being your local development and CI, which is very specific to different teams and how they operate, whereas in production you have this outer loop and the workflow going from dev to staging to production. So I guess my advice is not to be too prescriptive, and to think about the best workflow for you guys. I don't have much more to add. Yeah, cool. That's good. All right. So the most common path to the cloud for people getting into it is basically just to spin up an instance, deploy a LAMP stack, run your Drupal code, and Bob's your uncle. So is this still a viable approach? If not, what are the limitations that you're going to run into?
What are the challenges that are going to push you towards a solution like Kubernetes? Well, I think the short answer is yes, it is still a viable approach. When you're deciding which architecture you're going to run, you have to decide what is appropriate for your business needs, for your use case. Going out and deploying Kubernetes and containers and moving into that more modern style of Drupal hosting is certainly very cool and extremely powerful if your use case demands it. However, there are still many, many cases, I think, where a VM, even a single VM running on a cloud provider or in a data center, is still viable if your use case doesn't really need all of the bells and whistles that you might get with Kubernetes. There is no need to over-engineer the solution by necessarily having the latest and newest shiny thing. Yeah, I think that's... sorry. I think that's right. Definitely, there's a lot of stuff that Kubernetes gets you that, in the end, especially at scale, means less maintenance work for the hosting platform. I guess the flip side of that is that as a developer, you do need to understand a little bit about how stuff is working under the hood, because it is a bit different to the way traditional LAMP stacks are set up. So yeah, you really do have to have a little bit of understanding, and there's a bit of a steep learning curve for sure. There's a lot to learn about. But it's shiny. No, like you said, it just comes down to use case. It really does. If you don't have that strong need, or if you can see in your head, I'll provision this host, I'll provision this LAMP stack, I'm off, then that is still the way to go, for sure. For us, our use case changed: we were in that position, and then we realized we were managing way too many of those, and we could combine them all into this single bucket of compute and make them all HA.
That is when it turns from getting stuff done to saying, OK, well, we've got a strong use case to save a bit of money and make everything HA. And yeah, use case. I just also wanted to add, I think there's a lot to be said, not about technology, but about your mental health and your developer health. When you are constantly feeling this pressure, you go to a panel and you hear about people using Kubernetes and containers and all this new shiny stuff, and it's very easy to be on the other end of that and go, well, I'm not doing that stuff. Am I not keeping pace with my industry? And I don't think any of us would say you should be using the newest and greatest thing even if you don't need it, or that you should be working weekends to make sure you can deliver it. It really is a case of, you've got to have the business requirement for it, but you've also got to make sure that your team can manage it and that it's not going to put too much stress on everybody. Yeah, awesome. Okay, so what are your opinions on using managed services for things like databases, file systems, et cetera, as opposed to rolling it yourself? Hey, that totally riffs off exactly what you just said. So for us, managed services are key, because we're a small team and it's really hard to debug HA MySQL. And I don't want to spend my time doing that. I want to spend my time adding value to and improving our teams and the tools that they use and the workflows that they have. So for us, yeah, managed services are awesome. We go whole hog with managed services pretty much all the way up and down the stack, and keep a pretty small team for it. But yeah, it's exactly what you said: knowing what you can maintain and knowing what you want to maintain. Yeah, there's a lot of work that you can offload onto someone else, and that lets you do the interesting, value-add stuff.
The only caveat, I guess, would be to make sure you understand exactly what the trade-offs are. It is easy to rely on a lot of services and then find you're locked into one particular cloud vendor, or to miss the basics, like knowing which availability zone your managed database or your managed cache is running in. It's easy stuff, but you do have to understand it. I think as hosting providers, we are for the first time ever in our history in a position where we can be hosting providers that don't run servers. We can go out and we can get managed RDS, we can get managed databases, we can get managed Elasticsearch, we can get managed file systems, we can get managed VMs, where we're paying someone who is better, more capable, has a larger team, has global operations, to do that stuff for us. And it gives us the ability to focus on things like automating Drupal deployments and automating Drupal upgrades and all of the stuff that customers want from us. I never have a customer come up and say, can you please give me a server, or can I please have more RAM, with obvious exceptions. That's not what the customer wants to buy. The customer wants to buy a reliable Drupal website, or a secure, available website, or whatever the case may be. So I think for us, being able to offload a lot of that responsibility is really, really powerful, and like everybody else, I'm very pro managed services.
However, it's something that we often struggle with, in that if we're paying someone else to provide their service on our behalf, we've got to make sure that they're going to deliver it against our expectations, which in most cases they do. And we need to make sure that when there's a fault, and our customer comes to us and says, my Elasticsearch cluster isn't working, or my database isn't performing properly, the SLA that we've given our customer is backed by the SLA that we've got with our provider. That's something else we've got to watch out for. But with that awareness, most of the time a managed service is going to save you a lot of effort that you can reinvest in something much more valuable to you and to your team. A good segue would be into managed services for Kubernetes. So your EKS, GKE, the DigitalOcean solution, what are your thoughts there? Everything I just said applies, but we don't use a managed service for Kubernetes. We roll our own, and we've gone through a cycle where, when we first started, I think the same as the Skpr team at PreviousNext, you're provisioning your own VMs and putting Kubernetes on top of that. We started with that, then we moved into using kops, which is sort of a turnkey cluster management solution for Kubernetes, principally on AWS. Recently we've started looking at Google Cloud's GKE platform as a fully managed service, and there are good things and bad things about every approach. For us, it's easier and it suits us better to manage our own clusters using kops than it does to use something like GKE, but it's horses for courses. Yeah. So we started off by rolling our own Kubernetes, doing Kubernetes the hard way and really understanding what it took to run Kubernetes, and then it was a great day when we could offload that to EKS, but we still understood what was going on under the hood as well, and how it runs and operates.
I think there's something to be said there. But I guess the other thing about all these managed Kubernetes services is that we've crossed a point where it's amazing that all these cloud providers have this standard API, essentially: between EKS, GKE, and AKS, they all have Kubernetes and Docker registries, and they're all very standardized. But at the same time, we're getting to this point where all these Kubernetes services are now going to start adding their own value on top. You can see that right now, with EKS very aggressively starting to add new features and improve. So there's a risk of vendor lock-in, is that what you mean? Yeah, a little bit of sneaky convenience lock-in. Yeah, so historically, amazee.io has run their own Kubernetes clusters, but we're shortly going to be looking at using managed Kubernetes. And yeah, like you guys have already said, it's all about trade-offs. You get a little bit more control when you're looking after your own stuff; the trade-off is you have to do the maintenance on it. Yep, sorry to keep doing this. One other thing that just occurred to me, even if you aren't using the managed services themselves. I'll give you an example. We're on AWS, and up until very recently, if you were provisioning Kubernetes on AWS, you had a relatively inefficient network topology, where the IP address for a container running in Kubernetes could be routed to any one of the nodes running Kubernetes. So if you had a load balancer at the front end of your cluster, that load balancer would receive the request and send it to a node on the node's IP address, and that node might have the container that you want, or it might not, and then the nodes are going to route amongst themselves. So you might have this packet come into your load balancer and go all over the place before it ends up at your application.
As your cluster scales and gets larger, that's obviously very, very inefficient. Amazon now have a network topology that you can deploy where every container, or every pod, running in Kubernetes gets a routable IP address that the load balancer or whatever external service can reach directly, and that means it's a lot more efficient. That was built for the managed EKS product on Amazon. However, you don't have to have the managed EKS product: a lot of the benefits that might attract you to that managed service can be brought into your self-managed cluster in other ways if you want. So everything's generally getting better, even if you're not using the managed services. All right, let's get into the meat. So can you guys talk about the ways you've been deploying Drupal under Kubernetes, and the challenges of each approach that you've tried? Yeah, so for us, deploying Drupal under Kubernetes has definitely been a journey. The first thing we really did was consolidate and build a CLI and API on top of Kubernetes. I think Kubernetes is a platform for platforms, and the interfaces and APIs that you work with, the YAML files, are kind of the machine code of Kubernetes. So we built a workflow on top, and then for two, three years, we honed it into Skpr. So that's our journey there with that workflow. In terms of interesting parts, it's definitely been a journey keeping up, especially having adopted Kubernetes at 1.1. We kind of went for a ride with new APIs and new features coming out just in time, where we're like, I think we want that, and then it came out. But also bugs. So it's that early adopter thing where you just have to ride the wave and work through it. Yeah, so amazee.io's platform is obviously Lagoon. That's a platform built on top of Kubernetes.
And basically, it just helps you when you spin up a Drupal site; it provides a bunch of the auxiliary services that aren't necessarily going to be there by default: your Solr, your database, those kinds of things. Yeah. I think, as Nick was mentioning, Kubernetes has matured a lot, especially over the last two or three years. If you aren't terribly familiar with Kubernetes, it can appear to be just a way to orchestrate containers. But especially now, it is incredibly easy and powerful in terms of how much you can program Kubernetes. So you don't just say to Kubernetes, I want you to run my application. You tell it how you want it to deploy your application. You tell it, I want to wait for this service to be ready before that service starts. When we started deploying Drupal on Kubernetes, we did it in a very imperative way. We had our API; a customer would send an instruction to our API saying, deploy this version on this environment. And we would send a message to an Amazon SQS queue, which would trigger a Lambda function that would inject into a Kubernetes cluster the instruction to deploy that environment. But it wouldn't have any awareness of what that environment's current state was. It was very much set and forget. If it worked, it worked. If it didn't, it didn't, and we might not necessarily know. Over time, as Kubernetes has evolved, we've managed to evolve into programming against the Kubernetes API directly, using what's called a custom resource definition, or what's called an operator, where we have a container running our program inside the cluster. You guys are doing the same thing. It watches for a configuration change and then performs the steps that we tell it to, like we would tell a human operator to follow a playbook or a manual. This Go operator will run along and say, right, well, I've got a new environment. I need to configure SSL keys for that environment. So I'll do that first.
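The operator approach described above boils down to defining a custom resource that an in-cluster controller watches and reconciles. A hypothetical sketch of what such a resource might look like (the API group, kind, and field names are invented for illustration; the real definitions would be platform-specific):

```yaml
# Sketch: a custom resource describing a desired Drupal environment.
# A controller (operator) running in the cluster watches objects of
# this kind and reconciles reality to match -- provisioning SSL keys,
# then nginx, then the app, in the order the playbook requires.
apiVersion: hosting.example.com/v1
kind: DrupalEnvironment
metadata:
  name: my-site-production
spec:
  gitRef: release-1.4.2
  phpVersion: "7.3"
  replicas: 2
  ssl: true
```

The contrast with the imperative SQS/Lambda approach is that this object declares the desired end state, and the operator continuously compares it against the cluster's actual state, so failures are noticed and retried rather than fired and forgotten.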
And then once I know that my SSL keys are ready, then I'll fire up nginx. And it'll do all of that sort of stuff. So it's incredibly powerful what a machine can now do in terms of managing a complex deployment pipeline that a human would previously have done. Yeah, right. And for those mere mortals among us, for solutions like Helm, Rancher, OpenShift, what are your thoughts on where people should get started? Yeah, it's pretty tricky, because there are a lot of tools out there. There are lots of options. In a lot of demos that I do, I tend to lean towards Helm, just because there's a big community behind it, there's a lot of documentation, it's a very mature product, and it's an easy way to get started. But I also don't think it's always the end goal, because it's more of a package manager, kind of akin to apt and yum, which is OK. But we have different workflows for deploying Drupal. Updating a database, for example: that's where some of these tools don't really work very well. They can update the application itself, but then you want that extra workflow on top, and that's when you kind of deviate. But yeah, if you're getting started, I'd definitely say go look at Helm. Yeah, I'd second that. As an example, if you need to deploy a database and you don't really want to have to worry about how you deploy a database inside your cluster, you can go and get a Helm chart for deploying MySQL that's really configurable, and away you go. And you can focus on something that's going to add more value to what you're doing. Yeah, the nice thing with Helm as well is that what it basically does is just template the objects that you're adding into Kubernetes. So it's a really good way to deploy something like a database, and then have a look at how it's working, how things are plugged together.
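Since Helm is essentially a templating layer over Kubernetes objects, a chart template is just ordinary Kubernetes YAML with values interpolated. A minimal, illustrative fragment (names and values are made up for the sketch):

```yaml
# Sketch of a Helm chart template: plain Kubernetes YAML where
# {{ ... }} placeholders are filled in from the release name and
# the chart's values file at install time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}-mysql
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-mysql
    spec:
      containers:
        - name: mysql
          image: "mysql:{{ .Values.mysqlVersion }}"
```

Reading rendered output from a chart like this (for example via `helm template`) is a good way to see exactly which objects land in your cluster, which is the "have a look at how things are plugged together" point above.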
It's a good way to get your feet wet. Or come and use one of our tools. But I think, yeah, the question is really around learning those APIs and what the next step is. Yeah, cool. All right, so let's get into some details on key components of the stack specifically for Drupal. Can you tell me what you guys are doing for your shared file systems? So for us, we're basically using EFS from Amazon. Yeah, that's what we're doing for platform users. Oh, have we had a fun time with file systems. It is 100% the topic in the Drupal Slack. There are a lot of people talking about S3 as a backing file system. Oh, have I got a story then. But yeah, looking at the Slack, there are a lot of people that have maybe one site, and they'll go to S3 and do the first-class integration between Drupal and S3. But in terms of mounted file systems, for a very long time there wasn't a great managed solution. You either had to roll your own HA mounted storage, so Gluster or Ceph or these tools that are very, very demanding to run. For us, we did go down the FUSE-mounted S3 route, and we evaluated, there were like three projects at the time. We picked one; it got us by; it was definitely very tricky. And I joke about this, but I remember exactly where I was on the day I got the notification that EFS had launched in Australia. I think it was about a week later that we had migrated everything across, because it was the single point of pain. So I might just explain, if anyone's not familiar with EFS and with the problem: EFS is a managed network file system from AWS, and it solves the problem of having a Drupal environment running with at least two web servers that might be in different availability zones.
And you want to make sure that both web servers see the same public and private files systems. And you want to make sure that if somebody uploads something, say a PDF into the content system, it's going to be available on the other web server right away. From a pure computing and infrastructure perspective, that is a really hard problem to solve. There are tools out there that you can deploy, like Gluster and Ceph, and we've certainly experimented with those, in order to provide a file system that multiple systems can write to at once without breaking because of consistency issues or writes to the same thing at the same time. EFS is the best solution we've got. I personally think it still lacks quite a bit, and I'm hopeful for the AWS conference next week that they're going to announce some managed clustered file system that's going to solve all our problems, and we won't have to do it ourselves. But that's the problem that EFS solves. And if you're running Drupal in Kubernetes with multiple web servers, then EFS is your best solution at the moment. If you're on Azure or Google Cloud, they have similar products that aren't quite as mature yet. For example, the Google Cloud product called Filestore is a managed network file system, but it is zonal, not regional. So you will have multiple availability zones with your cloud provider on Google Cloud, but that volume is only in one zone, which generally means it's only in one data center. If that data center goes offline, nothing else can bring that volume online, whereas something like EFS runs across up to three data centers. So if one data center goes offline, you're probably not even going to notice. All right, excellent. Can you talk about your approach to aggregating Drupal's logs and how you expose that to developers and your ops team? That's a really tricky one.
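In Kubernetes terms, the shared-files problem EFS solves shows up as a `ReadWriteMany` volume that every web pod mounts. A sketch, assuming the AWS EFS CSI driver is installed with its conventional `efs-sc` storage class (both the class name and sizes here are assumptions):

```yaml
# Sketch: a PersistentVolumeClaim for Drupal's shared files directory.
# ReadWriteMany is the key property -- every web pod, in any
# availability zone, mounts the same NFS-backed (EFS) volume, so an
# upload on one web server is immediately visible on the others.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drupal-files
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 20Gi   # EFS grows elastically; the API still requires a value
```

Each web Deployment would then mount this claim at Drupal's public/private files paths; a single-zone solution (like a standard block volume, or a zonal Filestore instance as described above) can't be shared this way across zones.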
What we would love to be able to do is have a fast, easy-to-use web UI that customers can go to and parse all their logs, something like a managed Elasticsearch or something to that effect. And for some customers we do do that, but Elasticsearch is a very heavy and very expensive application to run, very hard to scale and maintain. Good as a managed service. What we do at the moment is we use a utility called Fluent Bit, which collects logs from all of our different web servers and database servers and worker nodes and everything else. It aggregates them into a central repository and writes them to a persistent disk that customers can then use to analyze and view those logs, and to download them if they want to. And that's been a really, really useful process for us. Yeah. So we use AWS CloudWatch Logs. We did go through running the ELK stack, the Elasticsearch and Kibana kind of stack, and ended up on CloudWatch Logs just because it's easier to have AWS manage it. There's always regulation around keeping logs for seven years in government, so it's very easy just to offload that to AWS and have them kept for seven years. In terms of the integration, we ended up writing our own log pusher, just because we wanted to get the log groups and streams just right, so it was a bit easier to use. Some of these tools cover a lot of cases: something like Fluent Bit will grab all the logs, and you can configure where you want them to go, and it's very generic, whereas we had a very specific format and structure that we wanted for our logs. But the other side of that is, if you've used CloudWatch Logs before, it was very, very tricky to use up until recently. They've shipped a whole query UI that's super, super powerful. You can actually write some queries that amaze me. Yeah.
So if you have looked at CloudWatch Logs in the past and found it a bit basic, I'd definitely come back and have a look. I really recommend it. Yeah. So Lagoon also uses Elasticsearch as the log storage, and I guess it's currently kind of the best solution. Everything Mike said, though, is true: it is extremely heavy and resource-intensive. There are a few other things out there that we're having a look at, and we're doing some experimentation, because logging is a little bit of a pain point, just being able to store those logs. And yeah, if you can get a managed service and offload all that pain, I think it's definitely a good idea. Sorry, can I just ask a probing question to get into the guts of it? How are you actually outputting Drupal's application logs? Because out of the box, Drupal puts them into the watchdog table, and if you turn on the syslog module, they go out to syslog. Yeah, yeah, good question. And the other caveat there is that if you're using syslog in a container, in order to do that effectively, you have to run FPM and syslog in the same container, which means you need something like supervisord, which is an anti-pattern for building containers: a container should only ever run a single service. The way that we work around that is that we get our customers to deploy Monolog, which is really, really effective, and it can write Drupal logs to the file system. Then we read those logs with Fluent Bit and relay them off to the central store. It's an imperfect solution, though. Yeah, we're in a very similar position: Monolog, standard out. We are looking at running a syslog sidecar container and configuring Monolog to push to that, to separate the PHP logs from the application ones.
Also because, like I mentioned, with the keep-your-logs-for-seven-years requirement there are different tiers, especially with government. Your application logs are very, very important, whereas for your web logs, your actual web requests, I think it's three months, so a lot shorter. So it just allows you to segregate things a bit. Also, Drush likes to take advantage of standard out as well, so you end up clashing with a couple of other things at the same time.

Yeah, very, very similar. Basically just logging out to a standard log file, and then Fluentd out to Elasticsearch.

Yeah, cool. So we're coming close to time, so we'll just go on to some audience questions now. So Psy Hobbs tweeted: with regard to vendor lock-in with managed services, for example RDS, is that just a reality? Is that what we have to accept?

I think it all depends on how much pain you want. If you are able to use managed services and you're happy to stick with one vendor, then you will have a pretty pain-free life. To give you an example, we've not gone too far down the path towards using RDS. We have our own database engine that can deploy replicated or standalone database environments that run on the cluster. That was very, very difficult for us to do; it took a lot of engineering to make it work successfully. I think we've ended up with a really good product, and what it means for us is that we can run the same database stack on AWS or Google Cloud. We want to be as vendor-agnostic as possible. Inside Kubernetes, most of what you deploy on one Kubernetes cluster is going to work the same on another Kubernetes cluster with another vendor. It's only when you integrate with something like volumes or load balancers or external database services or other managed services that you might have some pain. But it really depends on what your appetite is.
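One small trick for softening the external-service pain Mike describes is a Kubernetes ExternalName Service: the application only ever talks to a stable in-cluster DNS name, and the Service aliases it to whichever managed endpoint you're currently on. A sketch under assumed names — the hostname below is made up:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: ExternalName
  # Point this at an RDS endpoint, a Cloud SQL endpoint, or drop the
  # ExternalName entirely and back "mysql" with an in-cluster database.
  # Application config keeps connecting to "mysql" either way.
  externalName: mydb.abc123.ap-southeast-2.rds.amazonaws.com
```

It doesn't remove the lock-in, but it narrows the surface area to one manifest instead of every application's settings.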
Lean into the lock-in. I think there's something to be said for the difference between something like a DynamoDB and an RDS. RDS has that consistent interface: MySQL. There's definitely benefits, so we use Aurora Serverless. You could say there's definitely lock-in there, because you look at it and go, well, it's MySQL, and it auto-scales for me, and it's amazing, why would I go elsewhere? But at the same time, you still have portability, because you still have that MySQL interface and you still have that mounted file system. You still have those pieces. But yeah, there are a lot of conveniences that can keep you in place. At the same time, they also make your life easier. So there's a trade-off there, but I think we can all agree that making our lives a little bit easier is a good thing too.

Yeah, I mean, we also use RDS quite heavily. And I guess as far as lock-in goes, having MySQL as the interface makes it a lot easier to be a bit agnostic about your database provider. There are other cloud services that are much easier to get locked into. So it's just about going in with your eyes open when you decide to really integrate closely with one particular provider.

All right, so this is my moment. I'm going to come down to the sidelines and get some audience questions. Does anyone remember Dipper? He used to go down the sidelines at the footy. All right, here we go. What do you guys reckon about using containerized databases? Is it OK in production? Is it OK anywhere? What do you think?

I'm not a big fan of them in production, because they're extremely stateful. It's like running a MySQL database on a cluster: if you split that out, then you have so many more advantages. A database doesn't auto-scale the same way as an application. It's not as elastic.
So if you think of your cluster as this very elastic piece of the puzzle, and then separate your databases out onto separate machines or managed services, then I think things get a lot easier from a management standpoint and from an availability standpoint. But having said that, we do still use database containers for local development. We sanitize and snapshot our dev, staging, and production databases, scrubbing things like user accounts and stripping out the cache tables. That's all stored in a registry, and developers can pull the images down and use them as part of their local stacks. That is where we use those, and I feel very strongly that they work really, really well for local development.

I mentioned before that we run our own containerized database platform, and we do use it in production. That may be the contentious thing, because every time I tell an infrastructure engineer that we're using containerized databases in production, they're sort of like, oh geez, you're brave. I think specifically for Drupal, if you're not using things like dblog, and you've got good caching for your application and you've got a CDN, your database server requirements are going to be relatively small. For most of the sites that we host, the database is less than five gig and the traffic's relatively reasonable. There are some sites with databases of dozens of gigs, very transactional, very heavy. But even those sites we run in containers in production quite successfully. I did mention, though, that we had to do a tremendous amount of engineering to get a replicated database platform that would run in multiple zones with automated failover and good security and everything else. It was a huge effort. And if our use case was just a little bit different, I think we would just be using RDS, because you pay a little bit more per month, but you save dozens of hours per month.

OK, I think we're right about on time.
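The sanitize-and-snapshot flow for local development could be approximated by baking a cleaned dump into a database image, which the official MariaDB entrypoint loads automatically on first start. A hedged sketch only: the image tag, database name, and table names are illustrative, not Skpr's actual build.

```dockerfile
# Assumes dump.sql was produced beforehand by something like:
#   drush sql:sanitize -y
#   mysqldump --no-data drupal cache cache_render > dump.sql
#   mysqldump drupal --ignore-table=drupal.cache \
#       --ignore-table=drupal.cache_render >> dump.sql
# i.e. user data sanitized, cache tables schema-only.
FROM mariadb:10.6
# Anything in this directory is executed on the container's first start
COPY dump.sql /docker-entrypoint-initdb.d/
```

Pushed to a registry, an image like this gives every developer the same seeded database with a plain `docker run`, which is the repeatability argument from earlier applied to data as well as code.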
So I'm sure that these guys would love to continue the conversation in the hallway track during the break. If you can all please give them a round of applause.