I'm Dan Duvall. We both work on the Release Engineering team, so thanks for coming to this talk. It piggybacks nicely onto the Kubernetes talk, because it has to do not just with the platform on which we eventually run our production stuff, but also with the way developers get their code changes to the production cluster. We'll take questions at the end. We've been working on this for a little over a year, and it'll be good to present it and get feedback, to see what you all think. Right now the naming is a little confusing, because we keep referencing it as different things in different places. In annual planning I believe it's still called Streamline Service Delivery; we casually call it the Continuous Delivery Pipeline, or the Release Pipeline. These are all pretty much synonymous. The reason it mentions services is that services are our initial use case: we thought they represented a unique opportunity and an easy first use case to put through such a pipeline. Kubernetes assumes, as Alex said in his talk, that your application conforms to certain parameters in order to run in such an environment. So, some of the goals of this pipeline. We had problems, as Alex mentioned: as a developer, what do you have to do to get your code running in production? The answer to that is "it's complicated," and I don't think the answer should be "it's complicated." So the pipeline is primarily about developer empowerment. Also, we have a lot of different clusters currently. There's your laptop, which is one environment. Then there's the CI cluster, which is a completely different environment. Then the beta cluster, which is yet another completely different environment. Then there's production.
And if anybody wants to stand up their own environment to test things, it may be completely different from all of those. So environment parity is a big problem, as it is everywhere, and that's another reason for the pipeline: we wanted to create a solution that empowers developers and provides environment parity. Also, the feedback cycles for developers are pretty long, especially for things that shouldn't take so long. For instance, unit tests should be fast, and you should get those results back quickly. If your unit tests pass, then you should be able to run your integration tests and get those results back. And then you should be able to exercise other services — or exercise services from MediaWiki — and get those results back maybe a little later, but still within a reasonable amount of time. The longer those feedback cycles are, the less developers engage with the code they're writing. If a developer writes a piece of code and only finds out a month later that it doesn't integrate well with some other extension or service, that's no longer useful information. The final thing is reproducibility and deployment confidence. If we have problems in production, there may be an environment-parity reason for that; if we have problems in CI, it may be a CI environment problem. I'm sure everyone's aware of that. So reproducing test results, and reproducing errors that come up in production, is another big reason for the pipeline. Do you want to talk about this? Yeah. So this image was made years ago. It essentially outlines your general strategy when testing software, which is to work from the inside out.
You test your functions atomically — what the outputs should be given certain inputs — then you test those functions together, then you test those components together, then maybe the full stack of the application, and then maybe your application together with parts of the system in which it runs. The diagram also illustrates the computational costs and feedback delays associated with each type of testing: the more of the stack you're exercising in a test, the more computation is going on, and the longer the delay between when you initiate the test and when you see the result. In the pipeline, we tried to keep this basic strategy and model in mind: we start testing software in the most core ways at the very beginning, and as the code gets promoted through the pipeline, we test it in increasingly broad modes using different strategies. So, what is the pipeline? The pipeline is how you promote application images — packages of your application — through an immutable progression, and it uses the same testing strategy we just outlined: the stages with greater computational complexity are pushed off until later, so we can get feedback to developers faster. This is what it looks like when you dump all of those thoughts into a diagram. This diagram has gotten a lot of mileage; we built it at All Hands, I believe two years ago. We're going to go over the stages of the pipeline. It's complex, but this is the basic progression an application takes to get to production. Developers don't have to understand it in a huge amount of detail, but this is the progression things go through on their way to deployment.
You can also see represented visually how the feedback loops in this model correlate loosely with the overall testing strategy and the feedback delays. This brings us to one particular tool that we implemented for the deployment pipeline, called Blubber. It's named that because it wraps Docker in a nice, happy insulating layer. It was necessary for a few different reasons that we identified when we looked at Dockerfiles. We'll get into the details in the next slides, but we saw opportunities for more efficiency in the way that Docker likes to cache things, which we felt is kind of opaque to the user of Docker, and we wanted to make that more deterministic in the pipeline when testing application changes. Security: people tend to think, okay, once I'm in a container, I can do whatever I want and it's totally fine. We don't feel that way. There are certain security models in Linux, and certain security models for deploying web applications, that should still generally be followed even in a container. And then empowerment: we wanted developers to be able to provide a configuration for exactly the type of environment they want their software to run in, but we saw giving them full access to Dockerfiles not only as an additional learning curve — because you have to understand all the idiosyncrasies — but also not really the most desirable approach when it comes to running things in production. Again, we want to make sure that the answer to "how do I deploy an application into production?" isn't "go talk to all these teams"; it's "follow these guidelines, write this configuration, and it will be deployed to production." Dockerfiles are full of unnecessary complexity to that end, so we wrote a wrapper around them, as Dan was explaining. So this is the first step in the pipeline — zoom in a little bit.
This is what a normal developer does when working on their code. They iterate on a development branch locally, and push to Jenkins when they're happy with it. Alex's talk touched a little on how you do this locally — hopefully with Minikube and Helm — so it mirrors this whole progression as we go. Once a change enters the Jenkins pipeline, Blubber runs the build process and eventually publishes an image to a registry, but we'll go into more detail in the next slides. So, what Blubber does for efficiency: well, it tries to be a lot simpler. Dockerfiles are idiomatic, to say the least — or esoteric would be a better word, possibly. It's really hard to know what Docker is going to do with a given Dockerfile, what the result is going to be. A couple of examples: the COPY instruction always copies files as root, which seems totally backwards from how you'd want to write it; and Docker assumes that the hashed text of a command is a good identifier for its deterministic result, which is a little weird. So we wanted to wrap all that up. Blubber knows about these things, and it gives the user an interface for declaring a simple YAML configuration, then does all the necessary things under the hood. It also supports multi-stage builds. If any of you don't know what that is, it's a strategy that grew organically in the container community: the idea that you build your application in one container, then take just the artifacts from that container into a new, clean production container, so you don't have to ship all the development dependencies and all the other cruft that comes along with your application while you're testing. Blubber supports that too.
And for security — I talked a little earlier about how, once you're in a container, the tendency is just to go crazy and do whatever you want. Blubber tries to enforce a simple security model without too much input from the user. The only thing the user — the developer, in this case — can influence that happens as root, or writes any files as root, is system package installation. In the Blubber config, you specify the system packages you want; other than that, you can't specify arbitrary commands that run as root and result in root-owned files that could execute as root when the container is running. Then it drops privileges to a user called "somebody." The reason it's called somebody is that "nobody" usually doesn't have a shell, and that's the one difference: somebody is nobody with a shell. Somebody ends up owning the application files and does the dependency installation — node modules, pip packages, Ruby gems, that kind of thing — so it owns those files too. Then Blubber drops privilege one more time before specifying the entry point, so that the application runs as a different user than the owner of the application files. As far as empowerment goes, as we said earlier, you provide the configuration, so you have control over what happens in that environment. Developers specify system dependencies, entry points, and package-management settings; you define your test entry point, and hopefully your deploy command as well. So that's developer empowerment. And naturally, we had to include this GIF. We were really just trying to get this GIF into the presentation. And we did it. Success. So thanks, you can all go now. Do you want to talk about this? Sure. This is what a Blubber config file looks like. I don't know if any of you are familiar with what Dockerfiles look like, but...
They're like shell scripts written by someone who really loves BASIC or something. You can do whatever you want and rearrange things however you want, and things that look good are usually wrong — if it looks nice in a Dockerfile, you're usually doing it wrong. So we wanted something that looks nice and is right, and we think this looks right. We'll talk about all the individual pieces. This is pared down a little, but it's pretty much what Mathoid is using, and Mathoid is currently running in production with it. You can see we let them define app packages; they install node dependencies from their requirements files; and they define the entry point for their tests. That's what's running through the pipeline currently for Mathoid. Now, you might have seen these variants in the previous slide. What the heck are those? Well, we identified, of course, that environment parity is super important: you want the environments to be identical as much as possible. But some things are always different. In development, of course, you're not going to be running as many processes, maybe you won't be doing the same kind of logging, and maybe there are slight differences in your dependencies. So we wanted to provide a way to specify those differences without compromising the overall goal of parity. That's what variants are for. What do they look like? You saw them in the last slide, and here's a pared-down version. The names don't totally matter — they're pretty flexible. You can define a variant, give it a name, and use it as an include later. But we do have a couple of conventions that we use in the service pipeline script: we expect a test variant and we expect a production variant.
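To make the shape of this concrete, here is a minimal sketch of what a Blubber config along the lines of the one described for Mathoid might look like. This is illustrative only: field names follow the general shape of Blubber's YAML as described in the talk, but the exact schema has changed across Blubber versions, and the package names and entry points here are stand-ins, not Mathoid's real values.

```yaml
# Hypothetical Blubber config sketch for a Node.js service (not Mathoid's
# actual file; the schema has evolved, so treat field names as approximate).
version: v3
base: docker-registry.wikimedia.org/nodejs-slim
apt:
  packages: [librsvg2-2]        # system (Debian) packages, installed as root
node:
  requirements: [package.json]  # npm dependencies, installed unprivileged
entrypoint: [node, server.js]   # how the service starts

variants:
  test:
    entrypoint: [npm, test]     # the test entry point the pipeline runs
```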
Those are the two variants that pretty much have to be there for it to work through the service pipeline, but you can define others for whatever your needs are. In this case, you can see there's configuration at the top level, and then the variant specifies additional configuration items, some of which overwrite the base and some of which are merged with it. We're going to fix that weird discrepancy and make it so that anything specified in a variant completely overwrites the base, because that makes a lot more sense, I think. But it makes for a much smaller configuration, and I think an easier one. So, how variants work. In this instance we have three variants. First, the build variant, which you saw in the last slide. It's worth noting that it includes things like build-essential. Why would you want build-essential in your production container? You probably don't, but we use it for npm install, for instance: npm sometimes has to compile binaries when you run npm install, so we need build-essential, and it needs to be consistent across containers. That way, we can inherit all of it into the test variant. We do that with includes — includes is a list, so you can include multiple variants, and that can get complicated. In this case, the only thing we're overriding after including everything from the build variant is the entry point. For test, what we run in the pipeline is basically Blubber: we point it at the config and say "test," and that gives us everything we need to build the test variant of the image. At the end of the pipeline, we just run that test variant. For prep, you can see it also includes build, but this is the prep for the production build: instead of running npm install with all of the dev dependencies, we run it with the environment set to production.
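The variant inheritance just described might be sketched like this. Again, this is an illustrative reconstruction, not a copy of the real Mathoid config; variant names follow the conventions from the talk, and the exact YAML keys are approximate since Blubber's schema has changed over time.

```yaml
# Illustrative variant layout showing includes-based inheritance.
variants:
  build:
    apt:
      packages: [build-essential]  # so npm can compile native modules
    node:
      requirements: [package.json]
  test:
    includes: [build]              # inherit everything from build...
    entrypoint: [npm, test]        # ...and override only the entry point
  prep:
    includes: [build]
    node:
      env: production              # npm install without dev dependencies
```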
So prep is pared down, but it still includes build-essential and everything else. We'll talk about how we get rid of build-essential at the end. Here you can see another level of this. It's a pretty common pattern we're seeing with Blubber files for services: a build variant; a test variant; a prep variant, which is build but with production settings; and then you can see this copies instruction. What is that? That's the multi-stage support we talked about before. You're not actually including the prep variant, you're just copying artifacts from it into your production image. And that production image is probably based on a much slimmer base image; it's not going to include all of the development dependencies, build-essential, and all that stuff. This gives pretty substantial savings. For example, the test variant of the Mathoid image is, I think, 800 megs, and the production one is 300. That's a pretty big difference when you're shipping big blobs all over the place to Kubernetes, so hopefully that will streamline things from that end. So, your image is built by Blubber — now what happens in the pipeline? This is what the pipeline looks like currently for Mathoid. It's a screenshot of our Jenkins install — we have the Blue Ocean skin on it, so it looks fancy, like an actual pipeline. We'll run through what it's doing, but you can watch each of these steps as a change moves through the pipeline. The breakdown of what it's actually doing: we still have Zuul, so you push up a patch set, and Zuul triggers the entry point to the pipeline and passes in everything needed to get your patch from the Zuul merger and so on. So basically, we check out the code.
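A production variant using that copies mechanism might look roughly like this. This is a hedged sketch: older Blubber versions took a single variant name for copies while later ones take a list, so the exact form here is an assumption, as are the base image and entry point.

```yaml
# Illustrative production variant using multi-stage "copies"
# (exact field shape is approximate; Blubber's schema has evolved).
variants:
  production:
    base: docker-registry.wikimedia.org/nodejs-slim  # slim base, no build tools
    copies: [prep]   # copy app files and node_modules out of the prep image,
                     # rather than inheriting prep's build-essential etc.
    entrypoint: [node, server.js]
```

The payoff is exactly the size difference mentioned above: the build tooling lives only in the prep image, and the production image gets just the compiled artifacts.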
We check out the code and build the test variant, as we discussed earlier. We run the entry point of the test variant, which, if you remember from the previous slide, is npm test. If that passes, we build the production variant. From there we run a verify step, which deploys to a local Kubernetes — right now that's Minikube, which is why that packaging was weird in the last presentation, if you were in it. Basically, we deploy to a mini Kubernetes and run helm test against it, which currently runs service-checker. It exercises endpoints from another pod, to make sure that not only do the unit tests pass, but once we deploy this image of your application to a production-like environment, is it going to respond? Is it just going to fall over? Is there some way to exercise it as a user would? That's the Helm test. If that passes, we push the production image to the production registry, from whence it can be deployed. This part might be a little mysterious — hopefully not so much if you went to the previous talk — but whenever I mention Helm to someone, they ask: wait, what exactly does it do? How does it interact with things? So we wanted to highlight this portion and give you the zoomed-in diagram of what it's actually doing. It takes the image that was built using Blubber and docker build, and it takes a chart. If you don't know much about charts, you can find someone and ask them about that. But essentially, a chart is the glue: it ties the images to the Kubernetes resources responsible for running them, and sets up network ingress and all the other things an application ecosystem needs to run. So, once the image is successfully built and the test entry point is executed...
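The pipeline stages just described could be sketched as a sequence of shell commands. This is not the actual Jenkins job — it's an illustrative dry run, and the image name, chart path, and release name are all hypothetical stand-ins. It also assumes Blubber's documented mode of emitting a Dockerfile on stdout, which docker build can read via `-f -`.

```shell
# Hypothetical sketch of the pipeline stages; names below are stand-ins.
CONFIG=.pipeline/blubber.yaml
IMAGE=docker-registry.wikimedia.org/wikimedia/mathoid

blubber "$CONFIG" test | docker build -t mathoid-test -f - .   # 1. build test variant
docker run --rm mathoid-test                                   # 2. run test entrypoint (npm test)
blubber "$CONFIG" production | docker build -t "$IMAGE" -f - . # 3. build production variant
helm install --name mathoid ./chart \
  --set main_app.image="$IMAGE"                                # 4. deploy to Minikube via chart
helm test mathoid                                              # 5. exercise endpoints (service-checker)
docker push "$IMAGE"                                           # 6. publish to production registry
```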
...then it takes that chart and the new image ID, and it deploys that image using the chart to Minikube. Then — yeah, Tyler already explained all this. Sorry, stepped on your toes. No, no, that's cool. After this stage, we don't quite know; we're in uncharted territory there, because that's the next part of what we're working on with SRE and Services: what does the rest of this pipeline look like? That's mostly it for our talk, because the pipeline isn't done, but we are working really hard on it. At the end of this quarter, what are we hoping to do? Annual planning. Yeah, we're hoping to do more annual planning. So, this is the current image that's running Mathoid in production. You can download it right now on your laptop — the tag is subject to change, but if you download this, it's exactly what's running. And if you go to the next slide — there are two more — and combine that with the chart that you'll find at releases.wikimedia.org/charts, therein is the Mathoid chart. You can deploy it to your local Kubernetes. Currently, I think these charts are tuned for development on a local stack, so the values aren't what's running in production, but you can deploy it to Minikube locally — that's how CI does it. And then the next step is deployment, which I don't know exactly how it will work yet, but it's something like that; we're aiming for a single command. That's pretty much it. We'll open it up to questions, comments, concerns from anybody. [Audience question about the registry.] There's one small thing: if you visit the registry, the main page will 404. There's no browser client for it. I filed a Phabricator ticket about it. I can answer that: don't expect it to stop returning that 404 any time soon. It's the reference registry implementation from Docker.
And it's meant to do that. It has an API, and if you actually use it with the docker command, it responds correctly; it's just not meant for human consumption via a browser. Is there any place for human consumption? The closest is curl: you curl docker-registry.wikimedia.org/v2/_catalog, pipe that to jq, and then grep. So yeah, it's perfectly usable. If you have Docker on your laptop or in a VM, you can do docker pull against docker-registry.wikimedia.org. Go back to the URL slide — if you did docker pull on that, you'd get Mathoid. And there's no line break in it; it should all be on one line, but slides are only so wide, sorry. [Audience question:] Did you have to change Mathoid itself to get it ready for Kubernetes, or was it all external? Marco, did we have to? We had to add a file — the file that the pipeline consumes, which is the Blubber file. That was it; I think that's the only pull request I made for this project. Oh, and a helm.yaml that has a single value saying "use this chart," to tie the chart together with the repository, since the charts are distributed on releases.wikimedia.org. [Audience question:] So if you have one service completely done, putting the other services through should be fairly quick? Yeah — Alexandros has done Graphoid in the two days we've been here, though that also unearthed a different set of changes we'd like to make along the way. So I imagine it will be slow-ish going initially, and probably faster as we gain more experience migrating services. I think the biggest next challenge is that Mathoid and Graphoid are both single-service things; they don't really depend on other services all that much.
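The registry interaction described above amounts to a couple of commands. These hit live network services and need Docker installed, so this is a usage sketch rather than something self-contained; the catalog path is the standard Docker Registry v2 API endpoint mentioned in the talk.

```shell
# The web root of the registry intentionally 404s; talk to the API instead.
curl -s https://docker-registry.wikimedia.org/v2/_catalog | jq .  # list repositories

# Pulling an image works as with any registry (image path is illustrative).
docker pull docker-registry.wikimedia.org/wikimedia/mathoid
```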
So once we have something that depends on RESTBase, or on stateful services in the Kubernetes cluster, I think that's going to be an additional level of challenge — but nothing Helm shouldn't be able to address. We wanted to start with the simplest use case just to prove the overall process of the pipeline. Mobile apps, by the way, is not a bad candidate. All right. How much time do we have? A lot of time — a couple of hours, though. I could show the grotesque internals of Blubber — the stuff you're not supposed to see. Well, maybe a demo; you could pull it up. [Audience question:] How many years are we from MediaWiki being packaged and deployed this way, or from thinking about MediaWiki being packaged? So, by the end of next fiscal year we're trying to get all the services migrated, and then we're going to spend the following fiscal year working on the MediaWiki side, so I'd be in a better position to answer that question then. But roughly: over the next year we'll get all the misc services pulled into this model, and then figure out what the next step will be. So, I'm starting Minikube right now. It takes longer than I think it should, but I'm impatient. Once it's started, I can show you the output. I'll show you the Blubber file in its entirety, then the Dockerfile output of that, and then what piping that to Docker looks like. Let me make sure I'm actually — oh, I'm still on your MiFi. Is that all right? So, working from the phone in my pocket. Can I pull Docker images now? Yeah, sure, do it. Cool, great — I'll send you the bill for this talk. Oh, and you're roaming, by the way. So, this is the Blubber file — this is the one in its entirety. We took a couple of things out for the slides just to fit it. What's that? Can you make it bigger? Oh, sure.
Is that all right? I guess because of the color combination it's kind of low contrast, even though it's the same font size as before. Is that better? Cool. So, we have a version field, which is there so that if we change anything about this configuration format, you get a validation error really quickly telling you it's the wrong format. We have a fair amount of validation in Blubber, which is nice — it gives you happy error messages, hopefully, if you have bad values in here. Oh, some things about Blubber: it's written in Go. There are reasons. We were thinking that this tool would probably need to be distributed to everyone — and we did distribute it. If you go to — where is that slide? The very last one, maybe slide 25. Blubber is available at this URL. We cross-compiled it. One of the reasons we decided to use Go is that developers are going to have to run Blubber on their laptops, and with Go, you can do that without installing a whole tool chain. Right now the binaries are just distributed there; we need to find a better place for them — we only put them up last night at the Hackathon. There are checksums and binaries in that directory. There's even a Plan 9 build, for people who run Plan 9. The other reason is that all the tooling in this ecosystem is written in Go: Docker is written in Go, Kubernetes is written in Go, Helm is written in Go. There are advantages to that. For instance, some of Blubber's validation actually checks your base image URL against the same exact patterns that Docker validates those URLs against — minor things like that.
If we ever want Blubber to actually make calls to the Docker daemon, we can just import the Docker packages into the project instead of shelling out and delegating responsibility for version compatibility to the user, things like that. And hopefully, if this tool proves generally viable for others, that same developer community can embrace the project — there's probably more likelihood of that if it's written in the same language as the rest of the tooling. There it is — okay, Minikube is up. Let me just quickly make sure I have the newest version of Blubber; we did change something. Okay. So you just run Blubber against — I'll show you the usage statement. Hopefully that's big enough; that's a little small, let me make it bigger. You give it the configuration file and then a variant name, and that's essentially it. You'll see this policy option here: we developed support for policy files, so that anyone, such as SRE, can ensure that you're not doing really crazy stuff in there. We mostly ensure you can't do absolutely crazy stuff anyway, but we do have one flag called runs-insecurely. So don't do that? Yeah — that's really the secret to the security: you don't specify that flag. It's mostly meant for test suites that assume they have a lot more write access to the project files and directories, so we can enforce it on just the test variants. Tests do seem to sprinkle files around your project. By the way, you might actually want to use the version of Blubber that's packaged and distributed: we updated the Mathoid Blubber file and did a lot of hacking on Blubber at the Hackathon, so installing the latest version from the source repository is probably not a good idea right now. That's no fun. Okay, that one's fine.
If you want to ensure the demo goes forward, we can do this. Cool. All right, so there it is, and we'll run it against .pipeline/blubber.yaml, which is where we currently expect the Blubber configuration to be. That can change — we've already had somebody come to us and ask, why are you doing it this way? It was just a convention we came up with with Services; we can either make the pipeline script allow different locations, or agree on a different convention. Now I'll show you what the output looks like. Which looks beautiful — what a beauty. You can see I'm generating the test variant. I'll show you the test variant right here: it includes build, so it uses the Node.js development base image, it includes some app packages that are necessary for dev dependencies, and it tells Node to install those. (This configuration is slightly different in the current version of Blubber; we've changed it.) And what that looks like from the Blubber file is this: it uses the base image, then applies the privilege-dropping model we talked about. As root, it installs all the app packages — Debian packages — and creates a couple of users: the somebody user and the run user. It drops privileges to somebody, sets the working directory to what's specified in the file, and sets some environment variables that are necessary for building or running the application. It takes care of those esoteric Docker things, such as the fact that COPY usually results in root-owned files unless you specify otherwise — it makes sure the current run-level user is applied there. And it installs node modules separately from copying over the application files. That's an optimization, essentially, to ensure that changing some random application file does not cause an npm install when the image is rebuilt; it does that in two stages.
So as you're working locally, if you change a file in your project that isn't package.json, it's not going to reinstall the npm dependencies, and it's not going to install the apt dependencies; it's just going to copy over your files, which takes a second. Then it copies over the application files, drops to the run user again, specifies more environment variables, and adds a couple of nice labels so that we can track images and know how they were generated. (Normally the Blubber version label isn't just "+"; it's supposed to be fixed to the latest master, I swear — it depends on how you compile it.) So this generates a Dockerfile, which we then just pipe to docker build, and docker build handles creating the image that we then run for the entry point. It's all cached on my machine, but I can show you: if you were to change the README, it wouldn't do everything over again — it wouldn't do the npm install all over again and all that. So, want to run the test entry point? Should I just run that container? Sure — this is effectively what the pipeline does for tests. Is it going to blow up? Probably. There you go. And I can do the same thing with the production variant. You can see these two COPY commands that just popped up — those are actually doing the multi-stage part: they copy the application files and the installed node modules over from the prep image. Now it's completed, so we can run it. Yep. And another attempt to use curl: I'll ask Minikube for the service list — show me where my stuff is running — and that's running on Minikube, and I'll query it. What should it be? SVG? We'll do that one. You don't want PNG output on the console? No. There you go. So, what we've done now is build the test variant and the production variant and query the running service — all using the same configuration, all from blubber.yaml.
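The generated Dockerfile that the demo walks through follows roughly this shape. This is an illustrative reconstruction of what was described — the privilege drops, the dependency-before-source COPY ordering, and the --chown handling — not Blubber's literal output; the base image, user names, and paths are stand-ins.

```dockerfile
# Illustrative sketch of Blubber's generated output, not the literal thing.
FROM docker-registry.wikimedia.org/nodejs-devel

# As root: system packages and the two unprivileged users.
RUN apt-get update && apt-get install -y build-essential librsvg2-2
RUN useradd -s /bin/bash somebody && useradd -s /bin/false runuser

# Drop to "somebody", which will own the application files.
USER somebody
WORKDIR /srv/service

# Dependencies first, so editing app files doesn't invalidate the npm layer.
COPY --chown=somebody package.json ./
RUN npm install

# Then the application files, owned by the same unprivileged user.
COPY --chown=somebody . .

# Drop privileges again, so the process doesn't own the files it serves.
USER runuser
ENTRYPOINT ["npm", "test"]
```

The two-stage COPY is what makes local iteration fast: only the final COPY layer is rebuilt when source files change.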
I think that's probably all we have, but if there are questions about the demo, shoot. Cool, well, thanks so much. [Audience question:] By the way, the only target is production — nothing on Labs? You mean, in order to deploy to Labs? No, we don't. This is something we should probably talk about more internally: how to maybe unify some of our tooling with Toolforge — sorry, is that the right name? That's right. Oh yeah, Brian — we can definitely have more conversations about that: what might be useful for them, and what they have that's useful for us. [Audience question:] So, as a service developer, I use this — I create a Blubber file — and I get a fast, reproducible test environment that's also used in CI. And if I'm a lowly MediaWiki extension developer, I have to wait two years until everything is ready, right? Yeah. I know how hard it is. It's just a much bigger problem, and in fact, inside that problem we're finding intermediate problems that also need addressing. But we're working on it. Ideally, the tooling being developed will become more comprehensive, more able to address a lot more needs, as we present things like this and as we move into working fully on the MediaWiki packaging problem. [Audience follow-up:] As an extension developer, you know very well that an extension can rely on another extension. Well, we have to handle that manually right now, and it's a bit off. The MediaWiki tooling will have to change to support that — that's part of why it's going to take two years. The way Blubber is able to delegate out to dependency managers, we need something equivalent to delegate to for MediaWiki, essentially, to accomplish the same thing. There are some proposals flying around about using Composer to do extension management. Yeah, we were in the RFC discussion at the end of the day yesterday.
We're trying to be active in that discussion so we can make sure the final solution there works with the final solution here. Well, thanks, everybody. Thank you.