Everybody's coming in here upset because they didn't win the Steam Deck. Okay, I'll try to make you a bit happier by talking about local development environments. We'll see how we go.

So, I'm here to talk about patterns for local development environment nirvana. For those who don't know me, my name's Nick Schuch. I'm the operations lead at PreviousNext. I've been at PreviousNext for over a decade and have sort of reinvented my role over that time, going from a developer role all the way into ops: everything from local development environments, to CI, to our hosting product. This talk is all about that local development journey, one facet of that journey, and the patterns learned along the way. Don't worry, there's quite a bit to cover, but I'll recap it all at the end, and there's also a blog post in draft.

Just a really quick one: at PreviousNext we get to work with some amazing clients. We celebrated our fifteenth birthday party last night. Who went to the party last night? And you still made it today? Nice. We have, as you know, an amazing community, and I get to work with some great people and amazing clients. And because my talk title had "nirvana" in it, as soon as I posted it I knew the theme, so: theme change in three, two, one.

So, who is this talk for? I want to reiterate that this talk is all about the patterns for the environments. There was a chat in Slack the other day around core development and local development environments, and that chat went from "this is the best tool for core development" to "this is the best one for contrib", and it really got me thinking: local development covers a lot, right?
It covers a lot of workflows and a lot of different use cases: anything from core development, to contrib development, to project work on one very steady project, or being an agency with more than one project. Those are very different use cases. So here's the caveat for the entire talk: the best tool is the one that fits your workflow. That's the key piece to take away from all of this, because if I come up here and start spouting "this is the best tool, this is the best tool", then I just start to sound like this guy. And if you do that in Slack, well, then you probably sound a little bit like that guy too, right? The best tool is the one that works for you. That doesn't mean we can't have these conversations, though, and it's a pretty diverse cast of local development environments that fit all these different folks: everything from DDEV all the way down to even Acquia Dev Desktop, which has its place as well.

So, is this talk for developers? Maybe. Is it for operations and infrastructure folks? Maybe. It's really for both. The idea is that if you're a developer, maybe you'll take a few things home and gain a deeper understanding of local development environments. If you're from the ops side of things, maybe you'll empathize with the workflows and spot areas of improvement to help out the development team.

So, right from the start: why do we need a local development environment?
And the key is consistency. When we started our journey (we'll get into that soon), the idea of these disparate setups, you're using this tool, you're using that tool, just doesn't really work in modern development. We're getting new versions of PHP all the time, our stacks are getting more expansive, bringing in Node and other services, and we really have to rein that in and have consistent environments across the board. Or at least accept that if you're not using consistent tooling, that's something you have to be very, very cognizant about.

I remember bugs very early on around file system issues. We had our CI pipeline, which was Linux, but local development teams were running things straight off OS X. Linux is case sensitive, but OS X isn't, so their tests were passing locally, but as soon as they committed and pushed, the tests started failing, and it took a really long time to debug. With a consistent environment, a lot of that would have been squashed immediately.

The other piece is collaboration.
The local development tool shouldn't just be your local piece of the puzzle with hosting as the other. It should be a place where you can collaborate on changes and prototype changes, even things like redirects, nginx or Apache config, or PHP config. It should be a place where you can quickly prototype and go, "I think this is a change worthy of going through the pipeline and out the other side."

So this brings us to our local development journey, and it all started with handcrafted environments. In this case I'm talking about those OS X environments: PHP manually installed through things like Homebrew, very Apache mod_php and MySQL centric. Nobody really ran the same versions of everything. You could argue that if everybody was on the same OS X version you'd be reasonably close, but even then configurations differ: you've got a different Apache config, a different PHP configuration. They were very much artisanally handcrafted local environments, which led to a lot of discrepancies between what everybody was running, and they took a lot of effort to get up and running.

And that is where we started our first little piece, which was Boxen. Does anybody know what Boxen is? Oh yeah, you would know what Boxen is. What's Boxen?
It was Puppet, right. So yeah, some of my slides will have a little Master of Puppets reference or something in them. Infrastructure as code was taking off at that stage, and there were projects geared towards local development, configuring your OS X environment in exactly the same way. We tried to go down that path, but while those environments may have been consistent by the end, they were very, very difficult to maintain and understand. We were trying to solve this infrastructure problem with Puppet, which was really new and still maturing, and doing the exact same thing with our local environments. It became a lot.

Around that stage Vagrant came on board, which meant we could have a bit of a standard operating environment, so to speak: a virtual machine, a pretty decent base that we could all build upon. And we actually applied our production manifests to those local development environments. So somebody would run `vagrant up`, pull those down, go off and have a coffee or two while it provisioned everything, and get up and running. But that took a lot of time, like I alluded to.

That's where things like Docker came on board. Docker exploded; we all know Docker exploded. The idea was that instead of one big VM, we had these smaller, packaged Linux containers. With Vagrant we had ended up packaging full-blown VM images that were a gig and a half or something like that; now we started shoehorning them into smaller, lightweight Docker images that developers could pull down and run. But that was just a very small piece of the puzzle.

Does anybody know what Fig is? It's a slight deep cut. Oh yeah, there we go. It was brought out as "fast, isolated development environments for Docker", hence Fig. Orchard was the company, they got acquired by Docker, and Fig became Docker Compose. This was very early on, and the Docker Compose configuration and CLI interface haven't changed very much since. It was an innovation very early on that has had a massive impact on what we do now.

But with all this, we were a few years in by this stage, and there was a lot of complexity involved in what we were doing, so we had to keep it simple. As part of this process we said, okay, we're going to have Docker images, but we really have to take a step back and not just apply the same wheel to them. We have to really lean into what Docker is and what Docker is trying to achieve, and try to solve these issues of environments that were expensive to spin up, very complex, and hard to configure.

The first thing we did was start to split up, or consider splitting up, all our components. From the beginning, a Docker Compose stack comprised these components: your web server, your database server, and then mail, caching and search.
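As a rough illustration of a stack split along those lines (a sketch only: service names, images and versions here are placeholders, not PreviousNext's real images):

```yaml
# Illustrative docker-compose.yml: one container per service.
services:
  nginx:
    image: nginx:1.25
    ports:
      - "8080:8080"
  php-fpm:
    image: php:8.2-fpm
  php-cli:
    image: php:8.2-cli
    command: ["sleep", "infinity"]   # long-running, so developers can exec in
  node:
    image: node:20
    command: ["sleep", "infinity"]
  mysql:
    image: mariadb:10.11
  mail:
    image: mailhog/mailhog            # any local mail catcher works here
```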
Those back-end services could be any combination, and we went through a few over time: Memcached to Redis, Solr to OpenSearch. The stack developed over time as well, to the point where this is what we run to this day. We've moved to an nginx/FPM stack, with nginx very much at the top, and we even took out our CLI: when you run PHP inside an image, that's a completely separate CLI image. The same goes for Node. If we're compiling our front-end components, that's all done through the Node CLI container. And then we still have our database and services in the back as well.

One mission around "keep it simple" was to really lean into having a very clean Docker Compose file as much as possible, because there will always be some scope creep, or edge cases that need to come in per project. So having a simple base was very much the goal. And simple doesn't mean easy, either. It's very easy to look at this and go, "oh cool, that's the nginx, that's the FPM, that's the MySQL", but getting to this point was not easy and took quite a long time, with both the Docker ecosystem maturing and our methodologies maturing over time.

So that was the web stack. We then have these infinitely running CLI containers, which developers can `docker exec` into and run their commands, and then finally the back-end services as well. I'll come back and talk to a few of these configurations as we go through.

I also want to point out that it wasn't just environments: we also implemented a Makefile approach. I advocated for Phing internally at PreviousNext for a long time, but it became very cumbersome in the way that you declare everything with XML and write everything in PHP, when it all essentially boils down to a set of bash commands. If you could just show that set of bash commands, tied to a target or an execution type, it would be much simpler for a developer to come onto a project and understand the commands that bootstrap it. That's why we moved to Make instead, to complement the local development environment itself. I wanted to point this out because this is really us trying to keep it simple in what we're doing, and here's an example of what that looks like; it's very much cut down from what we have.

And now, the images. Breaking down those Docker Compose components, each image was a separate repository.
These are all on Skpr now; they were under PreviousNext and have slowly moved across. We have an nginx image, a PHP image, a MySQL image, and all of those support the multiple versions that we need. One thing that became very apparent was that we still required local development configuration in certain cases, because the images that we run on production can run locally, but developers require tools such as Xdebug, XHProf, and different PHP configurations. So for PHP we started with a base image, then used Docker image tags to build the FPM and CLI images, and then we provided a build on top of that which applied the local dev configuration. That gave us a very clear declaration of what the production images are, and what we've applied on top of them for the local development environment, because up to that point it was very difficult to understand what we were doing locally versus what was actually being shipped to production. And there are very few things that actually live in those local dev images. For example, our nginx dev image expands the header size that gets passed out, so we can see all the cache tags when debugging is turned on; things like that. The entire flow is a `docker compose up`, exec into the PHP or Node service, and run your commands.

Now, with the stage set, we can move on to the learnings, and the biggest one, upon reflection on where we ended up, is really trying to understand your "time to 200" metric. Time to 200 being: I've cloned the repository, I've brought up the environment, I've run those extra steps, and now my environment is returning a 200 and I can get going. In Vagrant land that could be a lot of time, a one-or-two-coffee wait potentially, and if it breaks you've got to start again; at the other end it's potentially seconds or minutes. And when you multiply that by the number of times you need to bring the environment down, bring it up, and rebuild it, it really adds up. Like I said, Vagrant for us was minutes, potentially hours depending on the client; database size is another factor I'll touch on. Compose really did boil it down to seconds, especially if things were cached.

So through this journey, the lesson was: package as much as possible. Instead of having a lot of automation trigger when you bring up an environment, the more you can front-load into a pipeline and come out with an artifact or a package at the end, the less development teams have to pay for it. You can do it once in CI, and then developers locally don't have to pay for it three or four times over.

It was also about eliminating manual steps. I alluded to databases, but it could be anything from installing dependencies to big, lengthy rituals: now you've got to install this package manually, you've got to add this API key. I don't know what your ritual might be when you `docker compose up`, but if it's a manual step, maybe you can automate it, because it really eats into your time to 200 before you can get to work on the site.

Also: trim excess data. Excess data means more time to pull that data down or work with it, and can even result in a poorly performing environment. Trimming that data really slims down the environment and pays dividends when it comes to spinning one up.

Another one to think about is image composition. One thing you might have noticed about that stack was that it was a container for each service, essentially one per language: nginx, PHP-FPM, PHP CLI. It took a while to get there. We had these rather big, monolithic Docker images which had everything in there; they literally had Apache, PHP and, dare I say it, MySQL. What that meant was, when it's time to go to the next version of PHP, you've got to adopt all the other changes in that image. And sometimes that's not desirable, especially when you're contrasting your Node version and your PHP version. We were in a position where, if you bumped your version of PHP in that monolithic image, you were also bringing in the latest version of Node, and then a different team of local developers would go, "wait, wait, we can't do Node 16, or 18, or 20; we haven't scoped that out yet." But then the other team wants to bump PHP.
So it becomes dependency hell, all because it's all packaged in the one image. We went with one image per language, though really it boils down to one image per service, and that's where the PHP-FPM and PHP CLI images came from.

Next: localhost. This is something I've advocated internally in a big way for quite a while, and also externally; this post was 2017. It's the idea of everything being on localhost, 127.0.0.1. We had development teams where, if they were running commands externally (from outside the environment coming in), the environment might be exposed on localhost port 8080, but internally everything went by a different name, so it was very hard to wire services together. So we ended up declaring everything under one network stack: whether you were connecting to the database externally or from inside the environment, you could connect via 127.0.0.1, which led to a very consistent, easy-to-understand approach with way fewer edge cases in what we were doing.

How we achieved that one network stack was the `network_mode` flag in Docker Compose. We nominated a service, in this case nginx (we're potentially moving to a dedicated network image that does very little and is just responsible for that, but we nominated nginx as our network image, essentially), and everything joined that container's network. When you do this, it's exactly the same as installing nginx, PHP and MySQL on your local laptop: everything is talking on the one network stack. By default, if you spin up Docker Compose, everything gets its own IP address and its own internal name, and that's very hard to grok.
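In docker-compose terms, that pattern is roughly this (a sketch, assuming nginx is the nominated service; images are placeholders):

```yaml
# Sketch: every other service joins nginx's network namespace,
# so everything is reachable on 127.0.0.1, inside and out.
services:
  nginx:
    image: nginx:1.25
    ports:
      - "8080:8080"
  php-fpm:
    image: php:8.2-fpm
    network_mode: service:nginx   # share nginx's network stack
  mysql:
    image: mariadb:10.11
    network_mode: service:nginx
```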
Standardizing on localhost simplifies all of that, and it also meant that when it was time to move into the CI pipeline, localhost could be used there as well. That smoothed out a lot of edge cases where we had things like a ci.settings.php file or a local settings.php file because everything communicated a little differently. By standardizing on localhost, we removed a lot of complexity from our stack.

That's not to say there wasn't a trade-off, and the trade-off really is that it's one environment at a time. If you spin up one stack, one project, you can't spin up another. For the most part it has paid off really well. It's the default; developers can pivot a little bit, expose their environment on a different port, and have multiple environments. And `docker compose up` and `down` are very quick, so it's quite nice to just up, down, and context-switch around.

The next one is settings.php for your local development environments. How many people have a settings.php file per environment? Do you like it? Do you run into issues? Dev, staging, production, local, CI, with a switch statement: this one's for that environment, this one's for that one. We did that for a very long time; that was the standard for quite a while. If you went to our hosting provider back in the day, the first instruction was "include this settings.php file", and if you wanted per-environment configuration, here was an example switch statement and how to manipulate it. But that led to a lot of complexity in our settings.php files. For a long time there was also a settings.local.php that wasn't checked in, and a developer might go, "that's not happening for me on my local, but it is for somebody else", and you'd go through everything and realize there's this settings file that doesn't show up in Git because it's git-ignored, but it's got all this extra configuration that overrides other things. Then they delete it and everybody's consistent again.

So we moved to a system where we removed that and went with "default with fallback". The idea is to have a default configuration (this is the Skpr version, but it could quite easily be anything else) plus an optional per-environment override. For the dev environment, the platform or hosting provider can say, "yep, these are the configurations", and if that override doesn't exist, you fall back to the standard default, which is the local configuration. That simplified things immensely, because it still allows per-environment configuration, and it has been enabled more as hosting providers have matured over time; they're now configurable. It really simplified our settings.php, and there are still projects I'll go through and trim down, and folks go,
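A minimal sketch of that default-with-fallback idea (file names and the environment variable are illustrative, not the exact Skpr mechanism):

```php
<?php
// Illustrative default-with-fallback include logic for settings.php.
// If the hosting environment provides an override file, use it;
// otherwise fall back to the default (local) configuration.
$environment = getenv('ENVIRONMENT');
$override = __DIR__ . '/settings.' . $environment . '.php';
if ($environment && file_exists($override)) {
  include $override;
}
else {
  include __DIR__ . '/settings.default.php';
}
```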
"Oh, wow." I think that's a pretty great takeaway. And if you want a really deep dive, shameless plug: I wrote a blog post, and it took a really long time, because I wrote about our config system under the hood and compared it to other hosting providers. It's a pretty deep dive into the mechanisms we implemented on our hosting to make that happen, but the concept stands on its own.

Cool, the next one is database images. Going back to before: we were spending minutes, maybe hours, syncing down databases, which meant developers going off and grabbing coffees and waiting, when we really needed seconds. How we got there was through containerized databases; this blog post was 2018. We went down this route saying, hey, we're running everything in containers, we're running our database server in containers, we're packaging everything, so why don't we apply that same packaging principle to our databases? At first it felt like, is that the right fit? But the more we sat on it, the more we realized that import times really do chew up that time to 200. And not only that: it was very hard to control the distribution of databases across development teams, because even to this day a lot of the standard is, "here's an SSH connection to your dev, staging or production environment, now go connect to that database and pull it down." Whereas we wanted to put that in a pipeline: pull the data down once, package it, sanitize it. Container images have RBAC around them, which means we have a really nice mechanism for distribution as well. So not only are they smaller, they're more secure and controlled, and quicker. Over the years we've slowly built out the MTK project, the MySQL toolkit, which takes you from a dump, to a sanitized dump, to a database image.
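Independent of MTK's specifics, the core packaging step can be sketched as a Dockerfile that bakes a sanitized dump into a stock database image (the official MySQL/MariaDB images import any SQL file in `/docker-entrypoint-initdb.d` on first start; the file name here is illustrative):

```dockerfile
# Sketch: package a sanitized database dump as a pullable image.
FROM mariadb:10.11
# Imported automatically the first time the container starts.
COPY sanitized-dump.sql /docker-entrypoint-initdb.d/
```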
You could use this in your pipeline; we leverage it on our hosting platform, but there's absolutely no reason you can't implement it in a CI pipeline yourself. And yeah, I did a talk about it.

The next one, which is very recent for us, is performance testing for your local development environments. How many people do performance testing on their local environments? Nice. This is a tricky one, because at this stage there are a few routes you could go. Something like Blackfire has documentation and takes a while to get up and running, but it's a great tool. Or you could ship something like Xdebug or XHProf, an open source tool, and configure it, but then it lacks the UI that even a paid product like Blackfire provides. What we've started using is a tool called SPX for our local development environments. It's pre-packaged and has a web UI. While it sits a little below Blackfire in usability (Blackfire is a product), it allowed us to say, "here's a pre-packaged performance testing tool that you can use right now, get running with, and start to identify bottlenecks in your application."

And that's the biggest takeaway from this: have these kinds of tools available as defaults, meaning that when you're under fire, production is spiking and performance is going haywire, the last thing you want to do is go, "okay, now we'll set up Blackfire, we'll get it going locally, and we'll try to debug this." You really want to go, "okay, cool, we've got this tool."
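As an aside on SPX: enabling it in a dev image is only a few php.ini lines. These keys follow the php-spx project's documentation, but treat this as a sketch and check against your own build:

```ini
; Sketch: enable the SPX profiler and its web UI for local dev only.
extension=spx.so
spx.http_enabled=1
spx.http_key="dev"
spx.http_ip_whitelist="127.0.0.1"
```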
"We'll start here." That gets us 80 or 90 percent of the way there; now we can get an early understanding of bottlenecks or performance issues, and take it from there. These are the things that really enable development teams, and those are the moments that count in a big way. There's another blog post that links off to all of this; it's all pre-packaged in our images, so everybody can go and check out how we're configuring and using SPX. But the takeaway I really want to instill is: think about these kinds of scenarios, and how these types of tools can be pre-packaged, preloaded and ready to go for your development team.

The next one that we're starting to focus on is CI/CD integration. Going back to those monolithic Docker images: CI/CD platforms such as CircleCI, Travis and even GitHub Actions really do promote the golden image in a big way. We use CircleCI a lot, and they promote the idea of "this is your application image and it's got everything in there". That goes back to them being a VM-based company: spin up this machine, execute a bunch of tasks, and exit. And then that was applied to Docker. Where we're heading now is the idea of: why can't we just spin up CI and run the exact same commands that any developer would execute? That's something to consider, because it's very self-documenting. If you can look at your CI file and see that it checked out the code, ran `docker compose up`, executed three or four commands to get prepped, and then started running the tests, a developer could quickly pick up the project, run the exact same thing, and get going.

So, all in all, to recap. I want to throw in a slightly spicy one that might have been an undertone of this talk: consider a tool such as Docker Compose in your flow. Again, the best tool is the one that you're using and that facilitates what you're doing, but still consider the layers under the hood, and that might even bubble up to what you're using right now. It definitely helped us.

Review your time to 200. Maybe not right now, while I'm talking, but after this you could go check out your code, follow the steps to get to a 200, stop the clock, and go, "okay, that took this long, and how many people do we have on the team?" That becomes a pretty quick and easy business case: maybe we need to invest some time, or maybe it's the other way, maybe we have a great system and we're in a good spot.

Consider splitting out monolithic images, and even monolithic components, where you might have that dependency hell, because it'll keep coming up time and time again; that's a very nice win you could implement. Consider minimizing complexity in environments, and lean on localhost where you can. Consider auditing your settings.php files and how they're used, and how you could potentially move across. I'm happy to talk to anybody about their settings.php files; I somehow get a kick out of helping clean all that stuff up.
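Tying those recap points together: a CI job that mirrors the local workflow can be almost a transcript of what a developer types. A GitHub Actions sketch (job names and make targets are illustrative):

```yaml
# Sketch: CI runs the same commands a developer would run locally.
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker compose up -d
      - run: make build   # e.g. composer install, asset compilation
      - run: make test
```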
It's pretty fun. But also consider integrating database images. I'm here for the rest of the day and I'd love to talk to anybody about database images. We've been doing it for quite a while now, and I'm really stoked to talk about DB images, because they have been such a massive improvement to our environments. In fact, Carl implemented DB images on our platform through DDEV, so it's doable, it's doable for all.

Cool, and that's my talk. Also, people have started talking about their local development tools and environments, so we're getting a BoF going; I'm just putting that out there. And it's Lee's birthday; he's in the back corner right there. I told him that was a slide. If we have time, we'll sing happy birthday.

Audience: Great talk. What is the average sort of time to 200 that you're getting at the moment, across your projects?

Nick: That's a good question. Honestly, I haven't particularly measured it, because as soon as we went from hours to minutes, and then got the DB images, especially once we had the pre-packaged images and things were cached, the environments were coming up literally in seconds. As in: up, and then the next step was `composer install`, compile your app, and run the deployment steps. By the time we got there, we stopped thinking about how long it takes to spin up an environment.

Audience: Okay, I might rephrase the question: after how long would you consider, "okay, this has taken too long"?

Nick: If you have to go off, make a coffee and come back. Honestly, minutes, in some ways, if I have to be frank. We were very much in a position where it was a half-hour exercise, or even an hour, and that was very much due to the time a `vagrant up` and a Puppet provision took to execute, and then the time the database took to import, and if the database import failed you started again, so it was very apparent. So for us: if you're spending more than, you know, five or ten minutes getting an environment up, and you're not on the happy path all the way through, then it's absolutely worth investing time.

Audience: Thank you so much, that was amazing. Everything you said, we've actually arrived at the same conclusions separately, so everything you said validates and works really well, including database images and all that. And thank you for MTK, that's good stuff as well. I have a question on Makefiles. Make is essentially for compiling things, and it has an internal mechanism to know what has been compiled and what hasn't, for C and C++ and all that. When teams are given Makefiles created by someone else and they want to extend them, they find the syntax a little bit challenging, because it's not actual bash, it's not shell. Have you considered switching to something like Taskfile or Ahoy or any other wrappers?
just to wrap those long commands?

We haven't considered shifting. There have definitely been a few times where you get stung by the fact that it's not exactly bash, which means there's special syntax around exec-ing out to get a variable and use it elsewhere. And there are definitely times where, like you said, it takes the compiled artifact into consideration, so we have to mark everything. There's a little one-liner, .PHONY with a wildcard, to say everything should just run. Otherwise you're in a position where your commands will just skip over the top, because make will say "hey, it's already compiled", and no it hasn't, we're trying to run this every time. But once we did that, we were in a great spot.

Moving to make was a big task, and then it opened up, in a big way, the ability to adapt: we started with a very standard configuration, and that's morphed over time into, for certain projects, very bespoke configurations for what they're doing. So far there hasn't been a big enough pain point to pivot, but I definitely recommend that other people weigh up make or similar tools that execute bash, and see what benefits them.

What were those tools again, just so people know? Ahoy, and Taskfile. Okay, cool. Yeah, nice.

Ahoy is on GovCMS, just so everybody knows. Yeah, that's where I've seen it too.

Yeah, let's have a BoF. Do you want to have a BoF? Let's have a BoF, I'll go write one down for this afternoon. Cool, awesome.

Sorry, just a quick one from here in the corner. You mentioned local development tools. Over at the Sydney Opera House we've been using Lando really heavily and have really been enjoying it,
but that's a very high-level tool, and you're talking about really going under the hood. Is there any advantage for a team that's really enjoyed the high-level abstract stuff in going that next level deeper and creating custom images and all that jazz?

How long are you taking to spin up your environment? It's very quick, about five to ten. Perfect, there you go, job done. Honestly, that's it, right? That's the assessment. And if you don't have much in the way of pain points, then perfect. That's exactly why I put that slide there: that's working for you, fast spin-ups, small to no pain points, perfect, you know what I mean? But if somebody's inclined to invest time into that, I wouldn't say it's a waste of time; that's the caveat I'd add. Understanding these kinds of tools and technologies has definitely had a bit of a hidden benefit in what we're doing, especially when things do break. So yeah, thanks.

It's a heckler! Yeah, I was just going to add, with things like Makefiles, I personally don't think we're super attached to it, but if you look at most projects' Makefiles, they're not super complicated. All they're really doing is calling one command, and they might be abstracting out a bunch of parameters or options and things like that. So it depends on what you're doing in your Lando or your Ahoy; if it's calling lots and lots of things, then yeah, it might make sense to wrap it. But I'm not opposed to using package.json for running scripts for front-end dev tools, or even putting scripts in composer.json, just to abstract some of that stuff out. It's about trying to make those extra arguments and options the same for everybody and simplifying things.

Great talk. Sorry, what was the question? Make? Yeah, make is included by default
with Linux machines. So it's a nice quick little prototyping tool. If you've got an init.sh in your project and you're about to create the next shell script, then maybe a tool like make is worth looking into, or maybe Taskfile or Ahoy, or, oh yeah, package.json.

I think we're done. Thanks, everybody!
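The `.PHONY` gotcha mentioned earlier can be sketched in a few lines. This is a minimal, hypothetical example (the target names are made up): because make was designed to track compiled artifacts, a file or directory whose name matches a target makes `make` report it as "up to date" and silently skip the recipe, unless the target is declared phony.

```shell
# Write a tiny Makefile; recipe lines need a leading tab, hence printf '\t'.
printf '.PHONY: build test\n\nbuild:\n\t@echo building\n\ntest:\n\t@echo testing\n' > Makefile

# A directory named "test" collides with the "test" target...
mkdir -p test

# ...but the recipe still runs, because "test" is declared .PHONY.
make test
```

Wrappers like Ahoy or Taskfile sidestep this by default because they simply run the named command every time rather than checking artifacts on disk.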