Is this thing on? Yeah, I just turned it on. Sweet. Oh, folks coming in. Hello, hello. Keep on coming. Cool, well, maybe I should start talking.

Hey everybody, how's it going? My name's Nick Schuch and I'm here to talk to you about deploying Drupal on Kubernetes. Yep, come on in.

I'm an Operations Lead at PreviousNext. I started as a developer and then slowly transitioned over time; it's been a five-year journey into operations, doing consulting and hosting, and to me, coming from the development world is my edge. As part of that we've built a lot of tooling, and we built Skpr. This isn't really a plug; I mention it because it's why I feel I can talk about Kubernetes. Skpr is a CLI tool that runs on top of Kubernetes, and it's the tool our developers touch every day and use to deploy and get things done.

This is kind of a standard slide for a lot of Kubernetes talks, where people say Kubernetes is a tool built by Google that went through this big change and this big growth. There are 101 talks that go through all of that. The way I want to summarize it: it's a way to take multiple computers and make them look like one, with a set of APIs.

But when it comes down to it, what does Kubernetes mean to me? To me, Kubernetes is all about the APIs. I just mentioned APIs, but it truly is all about the APIs, so let me elaborate.

Back in the dark ages, before containers, we had virtual machines. We had these clouds with all these inconsistent APIs for provisioning compute: your AWS API for EC2 was very different to your Azure or your GCP API for spinning up an instance. Not only that, it was operations that handled those APIs. They would use Terraform, or CloudFormation, or whatever it was; it was the ops area to govern these virtual machines being provisioned, run, and auto-scaled. So there's a silo off to one side.

Then you had the application end, which was a mix of ops and devs coming in and using Puppet, Chef, scripts, whatever it took to put that layer on top of the virtual machines and wire things together. What that meant between these two sets of APIs was an overlap of responsibility. As a developer you may want to know more about how the thing auto-scales; you want to be helpful and jump in and go "I think we need more compute, or less", and dive into the deeper internals. But that was in the ops realm, governed by all that tooling, and it was a very high ledge to climb. From the other side you had developers doing Puppet and Chef, and ops were also involved, so there was a lot of overlap. To me, with Kubernetes, it's: why can't we have both?
With Kubernetes, to me, there are APIs for routing, deployment, auto-scaling. It takes all the concepts that were best from the compute side and the application side and gives you all those APIs in one spot, in relation to the application. (I had to finish that slide; I had to snapshot a YouTube video for it.) Ultimately, what it means is clear boundaries. You've got operations under the hood provisioning and running a Kubernetes cluster, but they're just providing a service to developers and to teams: providing tools, working on tools and workflows together, using all these APIs and this consistent naming and consistent language for how you deploy an application.

But there are a lot of APIs in Kubernetes. "Routing" and all of that before were very high-level words just to group them together. So I'm going to show you around some of the APIs, but first I want to talk about why this is so powerful and why these APIs mean a lot to me. It breaks down into three concepts. Sorry if you're already used to Kubernetes, but we'll get through this. I honestly think the API structure in Kubernetes is amazing. It's the reason Kubernetes has won in this space, and it's all about how these APIs are structured.

They're all consistent. They have metadata at the top, which is a way to describe your objects: a name, and a namespace so you can separate it from the others, so dev in one namespace, staging in another, and separate teams and permissions that way. Metadata is all about naming. Spec is about saying "this is what I want the system to do". Then it goes through a bit of code called a controller, which loops and loops and loops and then updates the status: "this is what I've done". So if you can summarize Kubernetes in one way, it's kind of like a database with a whole bunch of these control loops over the top. It's an API with a database that stores these objects, plus a whole bunch of looping code constantly checking those API endpoints and making it happen.

And this is what it looks like: it's all YAML. Lots and lots and lots of YAML. On top of that there's also a strong versioning construct with API versioning. This is demonstrated very well in the Kubernetes project itself, where APIs go from v1alpha1 to alpha2 to alpha3, then to beta and then to stable (Deployments did exactly this), and they're also grouped, like apps, networking, and storage. That is incredibly powerful, and it enables you to have contracts with the people using your APIs.

So, we came here to talk about Drupal. We're going to go through all of this, but this is the high-level kickoff: the anatomy of a Drupal application deployed with Kubernetes APIs, with the hot path in red. There are quite a few, so buckle up.

The first one is Ingress. The Ingress API in Kubernetes is akin to vhosts in Apache or nginx: it's the way of saying "my application responds to example.com" and then routing it to the right application.
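As a rough sketch, and this also shows the metadata and spec shape I just described, a minimal Ingress might look something like this. The host, names, and TLS secret are illustrative, and it assumes an ingress controller is already running in the cluster; the status is filled in by the controller rather than written by you.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example                 # metadata: how the object is named
  namespace: dev                # namespaces separate dev, staging, teams
spec:                           # spec: "this is what I want the system to do"
  tls:
    - hosts:
        - example.com
      secretName: example-tls   # certificates can be managed at this layer
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app       # hand the traffic to a Service (next API)
                port:
                  number: 80
```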
But it goes a little further than that. It also gives you the ability to do sub-routing, so you can take specific paths and route them off to a different application. You could have a static site on the base of the domain and then /api going off to another app. This is also where things like certificates are governed; you can do certificate management at this layer. It's really powerful, but it's also one of those APIs that is kind of the lowest common denominator for the industry. They took all the load balancers and the ways people were doing ingress in the community and boiled it down to this API, which is great, but it's only going to get better in v2 and beyond, so there's still going to be quite a bit of change here.

The next one is Service; we're just slowly working our way down the hot path. If the first one was your traffic coming in and saying "example.com, route it through", then this is where things get routed to. Think of a Service like an internal load balancer. It was one of the first APIs in Kubernetes to get added, because microservices were super hot when Kubernetes came out. It's a construct that gives you a static IP on the Kubernetes cluster, so that service A can talk to service B and the load balancer will route traffic round-robin or sticky around the cluster. For us, since a lot of our traffic is external coming in, the Service is that little discovery piece in the middle that takes the traffic and routes it through to the right copy of our application. Other common cases: if you deploy Solr onto your cluster, or you have a few backend services, you can use a Service to get a static IP for those as well.

Now that we've talked about Service, which routes to our application, there's the Pod API, which is essentially a set of containers, a collection of containers. It's kind of like your docker-compose stack locally, when you run docker-compose up: just a small subset of those containers. An example: if you have an nginx and FPM stack, a very common approach is to put nginx and FPM in that pod. Also maybe some helpers, some sidecars: you can deploy some software that goes "hey, PHP-FPM, are you healthy? how is that process going?". Maybe your app needs another little daemon on the side to sync some content; you can do that too. You deploy that as an atomic unit of your application, and then the system basically cookie-cutters it out: it takes that pod and stamps out as many copies of your application as you want.
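As a sketch of those two pieces together, here is a Pod with nginx and PHP-FPM side by side and a Service in front of it. The image tags and names are illustrative, and in practice you would wrap the pod template in a Deployment so the system can cookie-cutter out as many copies as you want.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
  labels:
    app: drupal
spec:
  containers:
    - name: nginx
      image: nginx:1.25          # illustrative tag
      ports:
        - containerPort: 80
    - name: php-fpm
      image: php:8.2-fpm         # illustrative tag
      ports:
        - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: drupal                  # route to any pod carrying this label
  ports:
    - port: 80
      targetPort: 80             # the internal "load balancer" in front of the copies
```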
Now, the next two: I meant to combine these, I didn't do my homework there. ConfigMaps and Secrets. The next thing in our hot path: we've got our Ingress, which routes to our Service, to our app, and then we've got these two APIs that help us connect to our database. We need some mechanism to connect to a database, to have some credentials, right? And you don't want to commit those to code; that's very bad practice. So what you do is add them to these ConfigMaps and Secrets and mount them into your application. I'll go into this in a bit more detail soon, but this is the mechanism that allows you, per environment, to provide API keys and things like that, and to toggle features on and off.

The difference between the two APIs: a ConfigMap is a very simple key-value store, with the ability to also add whole arbitrary files. So you could (I don't recommend this) add your whole settings.php in there, mount it, and load it, but most commonly it's used as key-values. Secrets are also key-value, but they're governed a little differently. It's almost like when you go to CircleCI or any tool and enter a username and a password: you can mark the password as secret, hide it away from UI interaction, and they can be encrypted. So when I think of these two, the ConfigMap is your identifier, like your username, and the Secret is your password.

So we've connected to a database, and we probably need to write some files; we need some file storage. Kubernetes has a persistent volume claim system, and the whole logic around it is deeply rooted in enterprise. That's the best way to put it. Let's say I'm a developer in an enterprise and I go "I need some storage", and I ask the storage team "can I have some storage?", and they go "how much?", and it's "oh, I don't know, maybe 20 gigs", and they go "I'll do it in two weeks", and then go off, grab a hard drive, slam it into a computer, and go "there's your storage". This API is the embodiment of that. A developer uses a PersistentVolumeClaim to say "I want 20 gigs of storage, please", and can add a few constraints around it: maybe I've got multiple web heads that need to write to this thing at the same time, or maybe I've just got a single piece of block storage I want to mount in. The difference would be that your multiple web heads need read-write-many, but if you've got Solr, a single pod, you could use block storage, which is more performant. Honestly, this API and the PersistentVolume API under the hood are actually used in enterprise for that manual use case I just described; that's a little bit real-world too. There are operators behind the scenes provisioning storage and handing it back, but the developer never knows; they just say "I want storage". Another thing worth mentioning is that it's a way for organizations to check spend: how much they're using, how much they've asked for.

Cool. Cron. So we've gone through the whole hot path. You could say we've deployed the application and we're done, but then you realize maybe you've got scheduled content that's not going out, or things aren't being indexed into Solr. That's where cron comes into it.
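Before getting into cron, here is a rough sketch of those configuration and storage pieces. All names and values are illustrative; the ConfigMap and Secret would be mounted into the pod as environment variables or files.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: db.example.internal   # the "identifier" side: hosts, names, toggles
  DB_NAME: drupal
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  DB_PASS: change-me             # the "password" side; governed and hidden separately
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: public-files
spec:
  accessModes:
    - ReadWriteMany              # many web heads writing at once; ReadWriteOnce suits single-pod block storage
  resources:
    requests:
      storage: 20Gi              # "I want 20 gigs of storage, please"
```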
The best way to think about it is that it's a way to put in a crontab string and say "run my container every five minutes, every ten minutes". But it goes a lot deeper than that. This system isn't just for us running batch tasks in the background, where we can run mail queues, index Solr, or do weird, wacky, crazy things in the background so our applications don't have to take the hit. This API is also used for big data. There are entire Kubernetes clusters dedicated to the idea of having this kind of CronJob, and under the hood a Job API, where you can schedule some work that's probably going to go off and run for a day, across the cluster, in parallel. And with that come things like deadlines, and "if the thing failed, do we run it again?", and "if we have ten copies of the app running and it fails three times, what do we do?". It's a very complex API from that standpoint, but I've honestly found it truly rewarding, especially in the context of Drupal, because it really makes you start to think about that drush cron run, and how we used to just put it in the crontab on a Linux host and say "run every two minutes". But maybe that task runs longer than two minutes, and cron doesn't respect that, so now you've got this stampede of background tasks. It really forces you to think about your background tasks, setting limits, and how that all works. So cron's pretty fun.

Cool, so we've covered a lot here: Ingress for accepting specific traffic, Service for load-balancing it around internally, Pod for serving your application (yeah, we've even got one of these), ConfigMap and Secret for connecting to a database, a backend storage, S3, or some other service, PersistentVolumeClaims for storage, and cron for those pesky background tasks.

So, you're done. You now know all the APIs, and you're like, "Nick, that was very, very high level." From here it's really: how do you get started? What's the best way to get started? And everybody says YAML. "Here's all the YAML." In the early days the promise definitely was "a couple of YAML files and your app's deployed, it's simple". But honestly, it's not. YAML is really being seen more as the machine code, and people are building tools on top. (There's a whole Twitter account of Kubernetes YAML memes, I think it's Memenetes; it's pretty awesome.)

What this means is there's a bunch of higher-level tools that govern this workflow and provisioning. The first one I have to mention is Kustomize, because it's actually baked into the Kubernetes command-line client. It's a bit of a deep dive, but the whole idea is layering: you have some base YAML files and then just layer things on top, like changing versions or updating storage. To me it's still a bit of a rudimentary building block to build on top of, but it's definitely worth mentioning, since it's now integrated with the tooling as a workflow option. The other one is Helm, and it has a big, vibrant ecosystem; Helm is kind of like a package manager for your cluster.
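To make the Kustomize layering idea concrete, a minimal sketch might look like this. The file layout, names, and image tag are all illustrative; the point is that an overlay only states what changes on top of the base.

```yaml
# base/kustomization.yaml: the shared manifests
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/staging/kustomization.yaml: layer environment changes over the base
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: example/app
    newTag: "2.0.1"              # e.g. bump the image version for this environment only
# applied with: kubectl apply -k overlays/staging
```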
They market it as apt or yum for your cluster, and I think that's a really good way of looking at it, because it'll get you started really quickly. Helm 3 is out now, which solved a lot of the shortcomings, so it's a very simple tool to use, and there are Drupal charts available to get you started.

But I alluded to this before: Kubernetes is a platform for platforms. We've got all this YAML, and it's kind of the machine's version of how to run things, and we need some higher-level abstractions over the top, because ultimately we all have opinions. We all have opinions about how we deploy our applications, manage them, and run them. Which draws the conclusion that Kubernetes is the platform for opinions; that's the way I like to think about it. But we still have those common APIs under the hood that we can talk about. That's the difference here: we can go off and write custom workflows, but since we have this higher-level base, we can actually talk about what those differences are instead of diving into two sets of code and two teams. It's incredibly powerful.

So I want to talk about a couple of our opinions. I want to make changes to that chart I had before and talk about things we've gone through. The first one is decoupling nginx and FPM. For us, we take that pod from before and split it out: we take nginx out, we take FPM out, we run them separately. The reasoning is that we want them to scale independently; that's key. We don't want nginx forcing us to have more and more FPM instances, and we don't want FPM forcing more and more nginx instances. In the real world it's really tough to scale that out when you have two moving targets, because you've got this one instance and you have to decide what to scale by. If you're not just scaling by CPU and memory, you have to slice it up: nginx has its own CPU and memory and FPM has its own CPU and memory in that pod, and if one starts to spike, it'll keep going. So we split them, and then we put tools around that to govern it. For nginx we still scale by CPU and memory, but the key for us, which I'll cover later, is that we've got some tooling around FPM so we can scale on processes as well: CPU, memory, and the number of processes that are running.

The other one was around the ConfigMap and Secret side. We do use both of those APIs, but we take it a step further; we double it again, so we actually have four of these API objects. The idea is that we reserve two for the system, that's automation, and the other two are for users. So you have this safe target for automation to go through and say "here's my database credentials, here's my Solr credentials, here's some S3", and we can safely put all of that in the one ConfigMap and Secret without worrying about rolling over the top of a user-provided secret. That's what the other two are for: developers can add their own API keys, configs, or toggles.
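As an aside, here is a rough sketch of what the nginx and PHP-FPM split could look like as two separate Deployments, each with its own resources and replica count. The tags, requests, and counts are illustrative, and the anti-affinity stanza reflects the later point about spreading copies across nodes for availability; it is shown on one Deployment but would apply to both.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels: { app: drupal, tier: nginx }
  template:
    metadata:
      labels: { app: drupal, tier: nginx }
    spec:
      affinity:
        podAntiAffinity:                       # prefer spreading copies across nodes
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels: { app: drupal, tier: nginx }
      containers:
        - name: nginx
          image: nginx:1.25                    # illustrative
          resources:
            requests: { cpu: 100m, memory: 128Mi }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-fpm
spec:
  replicas: 2
  selector:
    matchLabels: { app: drupal, tier: fpm }
  template:
    metadata:
      labels: { app: drupal, tier: fpm }
    spec:
      containers:
        - name: php-fpm
          image: php:8.2-fpm                   # illustrative
          resources:
            requests: { cpu: 250m, memory: 256Mi }
```

Each Deployment can then be scaled on its own signals, which is the whole point of the split.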
Splitting that up has worked out really well for us.

And, like I mentioned before, we autoscale by process. Kubernetes has this autoscaling framework (it's almost a framework now; that's the best way to put it), and we expose the FPM metrics to the system and then scale by that. I mentioned sidecars, adding little daemons to your pod as it deploys out, and we have a little one that sits there and goes "hey, FPM, how many processes have you got?", and that's it; it just exposes that to the system. That allows us, especially with nginx and FPM, to cover the other side of things: if anybody's looked at their New Relic graphs, when you get a spike you also see that green spike in there, which is queued tasks, when FPM is queuing up requests. So we cover all the cases: CPU, memory, and processes.

So this is what ours ends up as. I'm not saying you should go out and do this; we all have opinions about how we want to run our applications. But this is what we landed on as the best system for running Drupal. The Ingress is still up here, and the hot path is still largely the same, except for these three pieces, where we've now come through and gone: okay, this is just nginx, then we go to a Service which routes to our FPM processes, and then down to the mounted ConfigMaps, Secrets, and storage. The other thing I really want to point out, from a security standpoint, is that this pod and this pod are all read-only, with file systems mounted in for public, private, and temporary files. So we've got a very strong security posture. On top of that, because they're split out, you can see that the nginx pod loads up the public file system but never touches temporary or private, because nginx doesn't need to know about your private or temporary files; those file systems always go through a PHP process, to check permissions or to generate something. So you can totally eliminate them from the picture, which I think is super cool. And of course your FPM gets everything, and you've still got cron in the same spot.

Now, for us, that's a lot of APIs and a lot to manage, and we have a lot of websites; it would be really hard to manage and maintain all that YAML. In Kubernetes there's this whole idea of a custom resource definition, and what that does is give us the power to create our own APIs in the system. All the APIs I've covered are core APIs, but now we can also build our own, set up our own workflows and contracts, and use it as a building block to build our own systems. A lot of our building blocks, in fact the one that runs Drupal, are available in the Skpr operator project. It looks a lot like this: you can see we've reserved apps.skpr.io in Kubernetes, and then we've got kind: Drupal. The project also contains operators for managing auxiliary services, things like CloudFront and Certificate Manager, because we use a lot of AWS, and so we use the system to automate that as well.
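A hypothetical instance of that kind of custom resource is sketched below. The group apps.skpr.io and the kind Drupal come from the talk; the API version and the fields are purely illustrative, not the real Skpr schema.

```yaml
apiVersion: apps.skpr.io/v1beta1   # group and kind per the talk; version assumed
kind: Drupal
metadata:
  name: example
spec:
  nginx:
    replicas: 2                    # illustrative: the operator expands this into the core objects
  fpm:
    replicas: 2
  cron:
    - name: drush-cron
      schedule: "*/5 * * * *"
```

The point is that a small, opinionated object like this gets expanded by the operator's control loops into all the Ingress, Service, Deployment, ConfigMap, Secret, PVC, and CronJob objects covered earlier.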
So it's a lot more dynamic.

But I think there could be a lot of people here who saw all of this and went, "well, that was a wild ride, there was a lot going on there". Look, the way I see it, and the way I frame it when I talk to people, is that it's a steep learning curve but it's absolutely worth it, because knowing these APIs, understanding these APIs, gives you a common language to talk about how things are deployed and how things are run. It's all about that collaboration, that talking.

If you want to come and have a chat and learn a bit more about this, there will be some demos and things like that at our BoF. I should have put it up; it's on the board, at two o'clock, after the keynote. So the keynote slot, then a break, and then we'll be in the BoF room. We want to talk about Skpr, but we also want to talk about Kubernetes and bring people in to talk about Kubernetes.

Cool, any questions? Oh, Mike.

You said you transitioned from development to DevOps. What helped you transition and helped you get started?

That's interesting. A lot of my journey started with CI; it really did start with CI. As a developer, six or seven years ago, we went "we want to run coding standards over our code base", and then it just went from there, to the next thing, to the next thing. So I guess the best way to put it is that I found a use case, which was "I want to improve our tests", and I kind of owned CI, and that led to working on how we could improve our local dev stacks. Around that time I was a little bit lucky, because Kubernetes and Docker kind of happened and there was this new wave. It can be overwhelming, but I think there's a lot more resources and a lot more best practice out there now. So I think it's all about finding those use cases, because diving in without one is really hard to navigate. We also have the Kubernetes and DevOps channels in the Drupal Slack; we have a community going there, and we're happy to point you to the right places, because it can be super overwhelming, especially with Kubernetes.

Thanks. I'm curious whether you're using Prometheus or OpenTracing or Jaeger or those kinds of distributed tracing tools to get insight into where requests are ending up and how they're being handled.

We are slowly moving into more of that. Our monitoring stack operates at a couple of levels. We don't use Prometheus, because there's a lot of cognitive load in running it; it can be pretty overwhelming, there are a lot of metrics involved, which is fine. But we still record the basics around CPU and memory along that path. We have another project for the actual cluster itself, because those APIs don't just work for provisioning.
They also have status conditions, so we've got an HTTP endpoint for all our clusters, kind of like a healthz for your entire cluster, that goes in and checks things like "is Kubernetes reporting that it can't autoscale right now?", because that's a big problem if you get hit by traffic. And then we also rely on tools like New Relic. In terms of tracing, we like to use a lot of AWS managed services to take that load off, especially given we're running EKS, so we're starting to look at things like AWS App Mesh. Yeah, too many meshes.

I have one, if I can. You showed us that you have a pod for nginx and a pod for PHP-FPM, and you're not mounting the code in; I assume that means the code is baked inside?

Yeah, it's basically the same code baked into nginx and the same code baked into FPM. As part of our build process we actually build three containers, or four, and we push three. The idea is that we compile the code into one image, everything from Composer to building your style guide, everything it takes to prep the code, and then it gets copied into three images. A little bit in the weeds, but the first one is nginx with the code in there, then FPM with your code available, and then we have a CLI container which has a whole bunch of extra tools and runs batch and cron and things like that. The idea there is to separate the responsibilities a bit and have less code running in each thing: nginx just does nginx, FPM is the same, and then in your CLI you can add extra scripts and extra tools that can run in the background. I'm happy to demo that in the BoF as well.

Do you have any thoughts on using something like OpenShift on top of Kubernetes, or is that something you've steered away from?

It's something we've steered away from. It's almost like another system on top, which adds a lot of cognitive load, so that's definitely one reason we've steered away from it. And it's not just OpenShift; it's things like Rancher too. These are fine, totally fine; they all have their own ways of provisioning and running Kubernetes. But there's a lot of community that's matured around provisioning and running Kubernetes itself, and I think there's real power in just having that bare Kubernetes layer and then picking whatever tools you want on top, staying somewhat separated from the stack.

Thank you very much for the presentation, it's super interesting. I was wondering, do you have any data on how much time the system takes to autoscale?

How long it takes to autoscale. I'm just trying to think; we might have some data around that, I'd have to go looking. But to speak to autoscaling: there is definitely an art to it.
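As an aside, the cadence being discussed here is tunable on the autoscaler side. A hedged sketch of a HorizontalPodAutoscaler with an explicit scale-down window is below; the target, thresholds, and windows are illustrative, the metrics-collection interval is a controller setting rather than part of this object, and scaling on FPM process counts would additionally need a custom-metrics adapter.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-fpm
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-fpm                    # scale the FPM deployment independently of nginx
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70     # the "watermark": over it, keep pumping up
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # slow the scale-down, in the spirit of "every five minutes"
```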
There's absolutely an art to it, because it's a balance. You can provision lots and lots of little containers, which makes it easier to scale out, but maybe that doesn't fit the nature of your application, because maybe you have some heavier processes running in there, so you want a bit more breathing room to be able to take in all those requests and then keep scaling out. So I don't have data on how quickly things autoscale, but I can definitely point out that it's a lot harder than just saying "we autoscale". There's a lot of art that goes into right-sizing your applications and making sure you can take that hit while the autoscaler kicks in. That's also tunable in the autoscaler itself, including how often it scrapes: I think it scrapes every 15 seconds and then ramps up, and scaling down is more like every five minutes. It's got a watermark, and if you go over it, it'll keep pumping it up; if you don't, it'll slowly work it back down.

Is there any difference in having your nginx and FPM containers separated rather than as sidecars? If you had them in one pod as a sidecar versus two separate pods, is there any advantage or disadvantage either way?

I think there's a simplicity in keeping them together, that's for sure. Separating them allows you to autoscale them independently, and that's a big win for us, but it does mean you have to account for nginx knowing where to route and things like that. I'm definitely not ruling out nginx and FPM in the same pod, especially for deploying an application and getting it out and running; I think that's totally fine. I thought I had another point, but no, I think that's it.

Just on that, I'm curious: do you use pod affinity to make sure they're running on the same node, or do you spread them across?

Both sides, nginx and FPM: we set pod anti-affinity to spread them out, because our primary goal is high availability. We spread them out as much as we can.

And the network latency between nginx and FPM, has that been okay?

Yeah, it hasn't been a problem to this point. And honestly, when we've gone through things and thought "this is a bit slow", FPM has this log that gets written out that says the request took x seconds, and that is a godsend, because it lets you track it down; that's where tracing comes into it too, of course. But in a very simple way, we've gone and tailed our FPM logs and said "well, that process actually takes five or ten seconds, we need to look into that", and that's always been where the problem is.

What about Redis? Do you use Redis?

Redis. We used to use Memcached and we've now transitioned over to Redis. I'll blame one of our devs for that one; he's the expert.

Using ElastiCache?

Yes, we're using ElastiCache for that, to run and manage and maintain it.

What's the difference between Redis and Memcached?
Do you use them together, or is it better to just use one?

Oh, I won't pick sides. They're both good services, and they both serve the same kind of need in some way. For us, especially through the Drupal lens, it's all about taking those very expensive transactions, taking the result, storing it, and serving it back, and putting that into Memcached or Redis could go either way. We picked Redis because that was where our operational experience sat in terms of managing it, and from there I think it just comes down to feature set: Redis has some things around queues and first-in-first-out baked in, which is really nice.

Sorry, is it possible to use them together, or would there be no benefit?

I think you'd get your maximum benefit from picking one, for sure, because they're both going to be that in-memory key-value store that serves things back. So just keeping with one is fine.

Cool.