So, welcome to the session "From Dev to Prod with GitLab CI". I know this is the post-lunch session, which means that some of you might want to fall asleep. I'm totally fine with that; if you do so, try to avoid snoring so that the attendees next to you are not affected.

A few words about me. Hi, my name is Stefan. When I'm not speaking at conferences, you'll find me at a company called bitExpert. We are a technology company doing custom applications of all sorts for our clients. My current role at bitExpert is called Head of Technology. That means I'm responsible for getting technology knowledge into the company, as well as organizing the educational programs that we provide for our employees, developers as well as project managers. And since I've still got some spare time left, and I'm not exactly sure how that happens, I'm organizing not one but two PHP user groups. One is the PHP User Group Frankfurt, and the other one is called PHP User Group Metropolregion Rhein-Neckar. That's a very German name, but yeah. Basically, what I'm saying is: if you're close to Frankfurt or close to Mannheim, feel free to swing by. I'm always looking for speakers, and attendees as well.

But I guess that's not what you're here for; you're here for the GitLab stuff. Who of you has heard of GitLab before? Great. Who of you is using GitLab? Okay. Who of you likes GitLab? Okay, same amount of hands, that's good. Quick check that everything's up to date. Great. But the question then is: why are you here, if all of you are already using GitLab? Still, I'm sure I have a few gems with me that you will find interesting.

So, for those of you who don't know what GitLab is, this is what Wikipedia says: GitLab is a web-based Git repository manager with wiki and issue tracking features. And to be fair:
This is still true today, but GitLab is way more than that these days. Roughly two years ago, I think, the GitLab folks announced their beyond-CI/CD strategy, meaning that they do not only want to cover the dev part, and not only cover the ops part, but make sure that everything works seamlessly out of the box. Meaning: you as a developer can plan new features by creating milestones and issues. You can create them by, obviously, committing code and pushing it to GitLab. You can verify them by, hopefully, writing tests; that's what hopefully all of you do. Then packaging stuff, releasing it, configuring build pipelines, monitoring your applications on staging and production, and then again transitioning to the planning phase, improving existing features or planning new ones. And, as I said, making sure that everything works seamlessly together, out of the box.

But let's get started. First things first: how do you install GitLab? Well, you have a couple of options. You could use the GitLab Omnibus installer, which is like the de facto standard; there are Debian packages, there are RPM packages. Or, as all the cool kids do these days, run it in Docker. There's a Docker image on Docker Hub called gitlab/gitlab-ce (CE stands for Community Edition) that you can use, and this is more or less what we are doing. We are not using the default image, we built our own, because... reasons. Just kidding. But for the sake of this talk, let's just use the default image, publish a couple of ports, port 80 and port 22, and that's basically it. You wait like two or three minutes, GitLab is spun up, the database is up and running, it installs itself and runs the database migrations and stuff. Cool.
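The Docker-based install described here boils down to a single docker run call. A minimal sketch, assuming the default gitlab/gitlab-ce image; the volume paths are illustrative, not from the talk:

```shell
# Run GitLab CE, publishing HTTP (80) and SSH (22) as mentioned in the talk.
# The volume mounts persist config, logs and data across container restarts.
docker run --detach \
  --name gitlab \
  --publish 80:80 \
  --publish 22:22 \
  --restart always \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest
```

After two or three minutes the instance answers on port 80 and has run its database migrations on first boot.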
So that's one piece of the puzzle. Since during this talk we are building a Docker image and we deploy Docker images, we do need a place to store them: a Docker registry. These days, GitLab comes with a Docker registry built in. Back in the days when we started this whole journey, this wasn't the case, so we had to look for an alternative, which luckily we already had in place, and this alternative is called Sonatype Nexus. It originates from the Java world and was used for hosting Maven packages, but these days it hosts a lot more: npm, Bower, Docker, and whatnot. So this is basically what we are using. Similar to GitLab, we run this in a Docker container, which is quite fun: running a Docker registry in a Docker container that hosts Docker images. But that's a different story. Again, these days there exists an official Sonatype Nexus image; back in the days when we started, this wasn't the case, so we built our own. And if you don't know how that looks, this is basically the nice, fancy UI it comes with. "Perfect for a Java application" is probably the best I can say about it.

What's really important in the whole process is that GitLab needs to authenticate against Nexus. This is also the case if you use the registry that GitLab provides: at some point you need to log in to this registry, provide the username and password, and then you're able to pull and push those images. Depending on how you set things up, you could do this once on the host, or during each and every build that you're running. So this goes either one way or the other, depending on your situation.

And the third piece of the puzzle that we need is an HTTP reverse proxy. As we do want to run multiple Docker instances on one host, or multiple applications on one host, we need to make sure that we have something in front that does the routing and stuff. Again, there are hundreds of solutions.
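The registry login mentioned a moment ago might look like this; the registry host and the variable names are placeholders, not from the talk:

```shell
# Authenticate the Docker daemon against the private registry (Nexus or the
# GitLab built-in one). Reading the password from stdin keeps it out of `ps`.
echo "$REGISTRY_PASSWORD" | docker login nexus.example.com \
  --username "$REGISTRY_USER" --password-stdin
```

You can run this once on the host, or as a step in every build, as described above.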
You could use nginx or HAProxy, or... we use Traefik. Who of you has heard of Traefik before? Okay, who likes Traefik? Okay, a few hands, okay, no worries. I like it because it's super simple to configure: it's like 15 lines of configuration and you've got the whole thing up and running. What you need to do first is instruct Traefik to expose HTTP and HTTPS traffic, and then you configure those entry points, saying HTTP traffic should be served on port 80. We also do an HTTPS upgrade, so any HTTP connection gets instantly redirected to HTTPS, so that our applications themselves don't need to care about it. And for HTTPS connections we say they should be served on port 443, and this is the certificate that we are using for it. Now, since we use myapp.loc as a custom domain, we can't use a service like Let's Encrypt to generate those certificates. But this is something that Traefik can do automatically: if you use a publicly available domain and Traefik is publicly accessible, you could easily say, okay, just do the magic with Let's Encrypt, generate those certificates on the fly, and everything is good.

Now, where does Traefik get its configuration from? And this is the pretty cool part: there are multiple configuration backends supported. The easiest one is probably saying: okay, Traefik, here's the Docker socket, just listen to it, watch the events that happen there, and then magically generate your configuration as you see fit. You could also hook it up to Rancher, that's another solution; you could use etcd and all sorts of key-value stores. So you're pretty open in picking your configuration endpoint. As with the other stuff, we are running it as a Docker container.
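The Traefik setup just described (two entry points, an HTTP-to-HTTPS redirect, a fixed certificate, and the Docker backend) might look roughly like this in Traefik 1.x TOML; the domain and the file paths are placeholders:

```toml
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    # upgrade every plain HTTP connection to HTTPS
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [[entryPoints.https.tls.certificates]]
    certFile = "/etc/traefik/myapp.loc.crt"
    keyFile  = "/etc/traefik/myapp.loc.key"

# Docker backend: watch the socket and build routes from container labels
[docker]
endpoint = "unix:///var/run/docker.sock"
watch = true
```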
We're publishing three ports. Port 8080, that's the management console; you should probably not publish that publicly, make sure that only you have access to it. And then the published ports 80 and 443. And we mount the Docker socket into the container, as well as the configuration file that we've just created, and that's basically it.

Now let's get started. For those of you who don't know GitLab, this is basically how it looks when you create a new project. All you need to do is provide a project name; we just ignore the description for now, and the visibility level. That's basically it: click the "Create project" button and the Git repository is created. As an alternative, you could also use the "create from template" functionality, which would then also generate a default file structure within your newly created Git repository. But unfortunately, the GitLab people just don't like us PHP people, and they just have Ruby on Rails, Spring and the like as the current offering. So that's probably not the way you want to go; we need to build the application and the structure ourselves.

Now, when I arrived at this point of the presentation, I was wondering what kind of application I could use to show you how awesome this GitLab stuff is. I could have built one myself, but that would carry the risk that I build it in a way that runs perfectly with this whole GitLab infrastructure. That's a trap I really wanted to avoid. So I went back and forth, looking for an application that was not so easy to install and comes with a bit of complexity, and I ended up using Magento. I thought that's probably a good way of doing it. So we've got a fan over here, right? Kind of? Not you, Derek. No? Yeah, definitely not. Luckily, Magento can be installed using Composer these days. So I just say composer create-project and then point it to repo.magento.com.
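That create-project call might look like this; the project directory name is a placeholder, and a version constraint can be appended if needed:

```shell
# Pull the Magento CE skeleton plus all dependencies from repo.magento.com.
# Requires repo.magento.com credentials to be configured for Composer.
composer create-project --repository-url=https://repo.magento.com/ \
  magento/project-community-edition myapp
```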
That's the package repository of Magento, and we're using the magento/project-community-edition installer. After a couple of minutes, after Composer has downloaded half of the internet, we've got all the dependencies, and we can add, commit and push those into our GitLab instance. And this is basically how it looks: we've got the first commit over here and we're good to go. Up to this point, nothing happens regarding builds.

All right, so the code has landed in GitLab, but nothing happens. To be able to build that code, or to do anything with it, we need to install a fourth piece of the puzzle. That's the last one, I promise, and that puzzle piece is called a GitLab Runner. It's also from the GitLab folks. The idea is this: we have GitLab and the GitLab CI module, which acts as a kind of master instance that coordinates which runners are available and which jobs can be run where, and the code that you want to execute is executed in the context of such a runner instance. Now, the cool thing about this is that you could, for example, use the GitLab SaaS offering but host those runners on your own, either locally on your own machines, locally in your data center, or locally in the cloud. Well, not really locally in the cloud, but you get the point. Then you just connect that to the GitLab SaaS offering, and all the code is built on your machines, so to speak. Now, that's pretty cool; if you want to get started, this is probably a good way of doing that. You can install such a runner in multiple ways. Again, there are Debian packages, there are RPM
packages, there is a Docker container; just pick what makes the most sense for you. We just need to mount the configuration file and the Docker socket, and you're good to go.

Now, the situation is this: we've got two Docker containers. Well, four actually, but that doesn't matter. One is the GitLab instance and one is the runner. They run side by side, but up to now they don't know each other, so we need to connect them. To do that, we first have to open the admin section of GitLab and access the runner overview page. Over here, there are two important pieces of information. One is the URL to GitLab; technically you would know that, otherwise you couldn't see this page. And the other one is called the registration token. The idea behind this is that you don't want any third-party runner to connect to your GitLab instance, because otherwise your code could be checked out and built anywhere in the world, by anyone, right? So there's this registration token; only if you know it can you set up a runner, or configure a runner to talk to your instance.

Having these two pieces of information, we can create a runner instance. This is done by calling gitlab-runner register. The first thing it asks you is: what's the URL of your GitLab instance? We know that. The second thing it asks: give me the registration token. We know that; we enter it. Cool. You can pass a description for this runner, so if you have multiple ones, you know what this runner is for. You can provide tags for this runner, and I'll show you throughout the talk why this is important and what you can do with it.
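For reference, the interactive dialog can also be driven non-interactively; a sketch with placeholder URL and token, using flags from the gitlab-runner CLI:

```shell
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --description "php-docker-runner" \
  --tag-list "docker" \
  --executor "docker" \
  --docker-image "php:7.0.1-cli"
```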
I'll just leave it as is for now. You can tell the runner to only run builds that are tagged with a specific tag, or you can instruct the runner to build potentially anything. You can lock the runner to just one GitLab project, which may be interesting if you have a runner in the data center of a client and you only want to run certain jobs on that instance. But that really depends on your setup.

And now the most important question: what is the executor? All the commands that you will define later on in the configuration file, how do they get executed? You could use shell, which basically means all the commands you execute are run by a gitlab-runner user within the container we just created. Now, that's maybe okay for a few commands, but if you want to install third-party packages and stuff, you can't do that, because you're not root. So that's potentially a problem. Or you could say: hey, spin up a new Docker container and run all that stuff within that container, which probably is a bit better. And there are a dozen more options, like docker-ssh, docker+machine, docker-ssh+machine, whatever, so make sure you pick the best executor for your environment. And last but not least, you can define a default Docker image. If you don't define an image in your project or in your job, this is the one that would be taken instead. So make a sensible choice; just don't use the Ruby one if you build a PHP project, that probably doesn't make sense, right?

Now, what you can do is run multiple of these instances for one runner daemon, and they can be configured completely differently.
It doesn't really matter, but that's important to know. The outcome of all of this is a configuration file that looks like this. You can also edit it by hand and then just restart the runner, and everything will be fine. For example, changing the number of concurrent builds that should be run, globally or locally for each of those runners. Or, as we do: we mount a volume containing all the Composer files from the host into the container, so that we don't need to re-download all the packages over and over again, just to save a bit of time. This is an idea that we were already using back in the days of Vagrant, when all the devs were running that stuff locally, just to avoid these massive downloads of packages over and over again. So this can speed up the builds a bit. Great.

And then, after a minute or so, this new runner appears on the runner overview page over here, and we can use it. So that's pretty simple. To be fair, when I showed you how to use the composer install command, I left out one important piece of information: you need to be authenticated against this Magento registry.
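The runner configuration file mentioned above might look roughly like this; the concurrency value and the Composer cache volume path are illustrative:

```toml
concurrent = 4   # number of builds the daemon may run in parallel

[[runners]]
  name = "php-docker-runner"
  url = "https://gitlab.example.com/"
  token = "RUNNER_TOKEN"
  executor = "docker"
  [runners.docker]
    image = "php:7.0.1-cli"
    # share the host's Composer cache so packages aren't re-downloaded
    volumes = ["/srv/composer-cache:/root/.composer:rw"]
```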
Otherwise, you can't pull those files. So you need a way of somehow managing secrets. Obviously, we could put them in the GitLab configuration file that's checked in to the repository, but that feels like... yeah, that's not really cool. We are looking for something else, and luckily GitLab has got us covered. There is a functionality called secret variables, where you can define secrets either on a project level or on a group level. That means I could create a group within GitLab, saying: okay, these are all my Magento projects. I create my Magento projects beneath that group and then expose those two variables, the username and the password, on the group level, and they get inherited by the different projects. Again, that may or may not make sense in your situation. And this is basically how it looks: you open the settings page, and there you've got a section called "Secret variables". Just give it a name, give it a value, and off you go. For the sake of security, you can say that those variables should only be available for protected branches. That means that only a certain set of people with a certain role within GitLab are able to push to those branches, and you avoid the risk of, yeah, someone echoing username and password and seeing the result. So again, depending on your setup, this may or may not make sense.

But the question is still: how do we set everything up and how do we get things up and running? Basically, it's super simple. All you need to do is add a file called .gitlab-ci.yml in the root of your repository, and that's basically it. If you are familiar with Travis, this looks kind of similar, but it's different in detail. I guess if you know the Traefik... not the Traefik, the Travis stuff, you can follow along easily. So, on the very first line:
We define the default image that we are using: php:7.0.1-cli. I know this will hurt some of you. Unfortunately, Magento isn't able to run on PHP 7.3 right now, and back when I compiled the slides last year, this was the latest version that Magento could support.

Then we define a test job. This test job runs in the test stage (we'll see later what that means), it is tagged with the tag docker (we'll see later what that means too), and these are the scripts that should be executed, the different steps that are run when this job executes. The first one simply installs a couple of Debian packages that we need to be able to compile the PHP extensions that Magento needs to run. I think these two lines took me almost half a Saturday to figure out, constantly redeploying and testing; that was the most annoying part of this presentation. Anyway. We then also need Composer, because we want to install the Composer dependencies, and these are the four lines that you would usually copy, without looking, out of the Composer manual; afterwards, Composer is installed and usable. We then need to instruct Composer to use the Magento username and password environment variables (the ones we defined before) whenever repo.magento.com is accessed via HTTP basic authentication. Kind of a side note, because this is important: if you like this feature, this is my contribution to Composer. Yeah, awesome, I know. Jordi completely rewrote my PR and improved it like ten times, but I can still claim it was my idea.

Right, so we've got the configuration, and then we run the composer install, and hopefully all the packages get installed. So we add, commit and push the file, and it ends up in GitLab. You can see there is a blue icon on the very left of the Git commit chart, and this basically means we have a build job up and running. Cool.
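Put together, the test job described here might look roughly like this in .gitlab-ci.yml; the package list, extension names and variable names are illustrative, not the exact slide content:

```yaml
image: php:7.0.1-cli

test:
  stage: test
  tags:
    - docker
  script:
    # Debian packages needed to compile the PHP extensions Magento requires
    - apt-get update && apt-get install -y libxml2-dev libxslt1-dev libicu-dev libmcrypt-dev zlib1g-dev
    - docker-php-ext-install intl xsl soap mcrypt zip pdo_mysql
    # the usual lines from the Composer manual
    - php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
    - php composer-setup.php --install-dir=/usr/local/bin --filename=composer
    - php -r "unlink('composer-setup.php');"
    # feed the secret variables to Composer for repo.magento.com
    - composer config --global http-basic.repo.magento.com "$MAGENTO_USER" "$MAGENTO_PASSWORD"
    - composer install --no-interaction
```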
So we can click on that icon and see a list of the pipelines that are configured. Again, the blue icon indicates that there is a job running. You can click on that, and we get an overview of the pipeline. A pretty simple pipeline, I have to say: we've got a Test stage (this is Test with an uppercase T) running a job called test (with a lowercase t). Now again, if you're familiar with Travis and similar solutions: just click on the test job and it will give you the output of what's happening during the build. And if you wait a couple of minutes, I swear to God, in the end it will say "Job succeeded". Awesome.

Now, to be fair, I named this job test, and I'm really lucky that Sebastian Bergmann is not in the audience, because he would probably kick my butt for naming this thing a test. It doesn't really test stuff, right? We're just installing Composer dependencies. So we need something more to be able to run tests of any sort: we need a database, so that we can install Magento and make sure that everything is working. Now, I could spin up the Docker instances that are needed for this, like a MySQL instance, in my build script, but then I'd have to wait until the containers are up and running before continuing to do stuff. So that also feels like, yeah, not really cool. Luckily, again, the GitLab folks have us covered, with a feature that they call GitLab services. The idea is that you can define multiple images that should be spun up before the job that you've defined actually starts. Then you can be relatively sure that the services are up and running and you can interact with them, all with just a couple of lines of configuration. And the pretty cool part is this:
GitLab has a bunch of logic implemented to check which ports are exposed by the containers that you want to run, the default ones exposed by the image. What GitLab will do is spin up a second container that just waits until the services respond on all those ports. Once that happens, that container is shut down, and the container running your job will run.

Sorry, I'm losing my mic, that seems to be my problem today. Can you hear me? Okay, cool. Sorry.

Right, so it starts your container, and you can be, like, 99.9 percent sure that everything is working out of the box. Even though the GitLab folks say you can't really rely on everything working, we pretend that we are safe. And this is how you do things. Again, we first define our image. We can define a couple of variables; these are available to our own jobs as well as to the services that we include. MYSQL_ROOT_PASSWORD, for example, is passed to our jobs as well as to the MySQL container that we will spin up. We define our test stage, this is what we've seen before, and we simply add a section called services that lists all the services that we need. That's all you need to do. You can define services locally within a job, or you can define them globally for all of your jobs. We do it locally, because on staging and production we have already spun up MySQL and some other services, and we don't want to add another MySQL container just to be there; that doesn't make any sense.

Now, if we run the job, GitLab will say: hey, I'm starting the mysql:5.6 service, I'm pulling the Docker image mysql:5.6, I'm waiting for the services to be up and running, and then I'm pulling the php:7.0.1-cli image and executing all the logic. So I can see what's happening, and in the end, hopefully, everything is green. Isn't that amazing? Yeah? Kind of? Okay, I see, tough audience.
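The services setup just described might look like this; the password value is a placeholder:

```yaml
variables:
  MYSQL_ROOT_PASSWORD: secret   # visible to the job and the service container

test:
  stage: test
  tags:
    - docker
  services:
    - mysql:5.6                 # spun up before the job, reachable as host "mysql"
  script:
    - composer install --no-interaction
    # the Magento installer could now point at host "mysql" with the
    # root password from the variable above
```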
I can cope with that. But still, that's not really something you could consider a build pipeline, right? We need multiple jobs, they should interact with each other and do awesome stuff, and this is exactly what we're doing right now.

So again, this is the shared configuration: we have our image defined, we have the variables defined, and next up we define the different stages. We call them test, build and deploy; that's probably the simplest setup these days. You can define a section called before_script, and those commands would then be executed before each and every job that you have defined. Now, technically, this example is bullshit, because we don't want to install all the apt packages for staging and production and so on. But in this section you could do something like the docker login command, right? Then you don't have to retype it every time, and you make sure it's in sync in all places. So that's what you can do there.

Then we have our test stage, we've seen that before. We do have our build stage; I'll cover what it does in a second. And we have our deploy_staging job, which, and this is slightly different from the other jobs, has a tag called docker-stage. The idea is this: we've got multiple runners running in our data center.
We've got the ones tagged with docker running over here; they are responsible for building and testing stuff. We've got the ones with the tag docker-stage running over there, and all the staging domains point to that part of the data center. If I deployed a staging container over here, the domains couldn't reach it, right, because they are pointing over there. So we can use these tags to steer where stuff gets deployed in our infrastructure.

For staging deployments, we say we do not want any Git checkout, because we already have the container built; there is no need for checking out the whole Git repository. We define an environment that we call staging and say: okay, this is the URL to our staging environment. That's also pretty cool, because you don't need a list in the wiki or anywhere else saying: these are the URLs for staging, these are the URLs for production, make sure you use the right credentials and so on. GitLab will render a couple of buttons in its GUI; you just click on them and it brings you automatically to this URL. So you've got it versioned and stored in one central location. And then we say: run those builds only for changes that occur on the master branch.

Again, we commit the file, the job is running, and the pipeline is looking a bit more complex now. We've got the test, build and deploy stages, and the test, build and deploy_staging jobs. Test is running, all green. Build is running, and this is what we are doing: imagine, up to this point, we have installed all the dependencies that are needed. We tar that stuff, we move that tarred file back into the project directory, and then we use the feature called Docker build arguments to pass this tarred file to the docker build command. This helps you a bit in not needing to maintain multiple Dockerfiles, like one for production and one for development, where one adds those files and one does not.
So I played around with that a bit, and I don't really like this kind of approach, but it's the best thing I could come up with right now without having to maintain multiple files. Great. So the image is built, we tag it, and then we push this image to our registry. That's all we need. And this is basically how it looks: it just appears as a master version within our registry. So build is also green, awesome. We can start deploying this on staging, and this is what happens.

First, we pull the latest version of this image. You can see there is a variable used, CI_COMMIT_REF_SLUG, and I also used it back in the build stage. Throughout this whole build pipeline, this variable holds the exact same value, so I'm always referring to the exact same image, which is important if you want to deploy a certain version on your machines. So we pull the image first, we stop the existing MySQL and application containers, we remove those containers, then we start MySQL again, we start the application and link it to the MySQL one, and pass a couple of Traefik labels, so that Traefik later knows that stage.myapp.loc points to this exact Docker container. Then we wait a minute or so, open the URL, and finally we've got it deployed. Awesome. Okay, tough audience.
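The staging deploy steps described here can be sketched as a few Docker commands; the registry host, container names and the Traefik 1.x label syntax are assumptions on my part:

```shell
IMAGE="nexus.example.com/myapp:$CI_COMMIT_REF_SLUG"

docker pull "$IMAGE"
# stop and remove the previous containers (ignore errors on first deploy)
docker stop myapp-mysql myapp-stage || true
docker rm   myapp-mysql myapp-stage || true
docker run -d --name myapp-mysql -e MYSQL_ROOT_PASSWORD=secret mysql:5.6
docker run -d --name myapp-stage \
  --link myapp-mysql:mysql \
  --label "traefik.frontend.rule=Host:stage.myapp.loc" \
  --label "traefik.port=80" \
  "$IMAGE"
```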
No worries. So this is staging. For production, it looks pretty similar. We have a job deploy_prod, again using a different tag, docker-prod, so we know where the stuff gets deployed; you know the story. We don't need the Git checkout, because we have already built the Docker container. We name the environment production, because that makes sense, point it to the right domain, and say: hey, this should only run for commits on master.

Now, if we used this configuration as is, what would happen? Whenever we make a change on the master branch, the test job would run, the build job would run, and then it would run these commands on staging and on production at the same time. This is not what we want, right? We want to test on staging first and then roll it out to production later. And that's what we do with the when: manual trigger, which basically means that this job is triggered manually through the GitLab UI.

This is how it looks. Test is running, all green. Build is running, all green. deploy_staging is running, all green, and then the build pipeline stops. Those of you in the first row can see there is a play button next to the deploy_prod job. Well, if it looks like a play button, it pretty much is a play button, which means you can press it, and this is exactly what you need to do. Someone in your company needs to press that button, and this will then kick off the deployment to production.
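The two deploy jobs might look roughly like this; deploy.sh is a hypothetical wrapper around the pull/stop/run Docker commands, not from the slides:

```yaml
deploy_staging:
  stage: deploy
  tags:
    - docker-stage              # runs only on the staging hosts
  variables:
    GIT_STRATEGY: none          # image already built, no checkout needed
  environment:
    name: staging
    url: https://stage.myapp.loc
  only:
    - master
  script:
    - ./deploy.sh staging

deploy_prod:
  stage: deploy
  tags:
    - docker-prod
  variables:
    GIT_STRATEGY: none
  environment:
    name: production
    url: https://www.myapp.loc
  when: manual                  # someone has to press the play button
  only:
    - master
  script:
    - ./deploy.sh production
```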
So Your project manager or your customer can kind of prove the stages Changes on staging and then someone just just presses that button and then kicks off that build that will run on production And the pretty cool thing within GitLab is is this so you see if you check the logs You can see all the output and GitLab also warns you hey watch out This is the job created a deployment to production So this is like the right moment where you can press cancel if you figure out you press like the wrong button Which rarely happens I know So this is yeah gives you just another warning sign Right, and then we just wait again like a minute or two. Hopefully and this thing is up and running in production And that is basically it how awesome is that? Thank you Thank you finally finally But I see we got some time left So I got some special for you Because we have a problem with this setup Let's take the staging build for example If you have one developer You probably know which state staging is in if you've got multiple developers that keep committing and pushing code It's pretty hard to say or to tell like which version is actually running on staging right? 
So your project manager probably goes crazy figuring out: changes work, changes don't work, changes work, changes don't work, as the containers are spun up over and over again. So I guess it would be really cool to have such an environment per developer, right? That would be cool. Yeah, that would be cool. But why not go a step further and say we want such an environment per feature branch? Mind blown. And this is what GitLab calls a review app.

Now, to be fair, the documentation of the review apps is a bit weird, and I had to read it a couple of times to really figure out what it is and how it's done. So I'll guide you through the process, and you'll figure it out way quicker than I did. The idea behind such a review app is basically that those environments that we just created statically, naming them staging and production, can be created dynamically, with a dynamic name and a dynamic URL, and this is what we take advantage of. So again, we create a deploy_review job. Again, it has a custom tag called docker-review, so that we know where the service gets deployed; you know the story. We don't do a Git checkout, similar to staging and production. And then, in the environment section, we say: hey, name this environment review/$CI_COMMIT_REF_NAME, and that's kind of like the branch name you're working on. And this is the URL for this environment:
It's called $CI_ENVIRONMENT_SLUG.stage.myapp.loc. Now, CI_ENVIRONMENT_SLUG is also based on the environment name, but encoded in a URL-friendly fashion and a bit shortened. And since I have a custom domain for this, all your sysadmin needs to do is expose all the subdomains of stage.myapp.loc and point them to the network where all your review apps are running.

Then we also say: whenever this environment should be stopped, call the stop_review job. If you don't do that, all the Docker instances will pile up on this server, and at some point it will crash, or your Amazon bill will get really expensive. So we don't want that. And then we say: run this job only if a merge request is open. Now, this is a relatively new feature. Back in the days, you would need to say: run this for all the branches except master, which isn't quite the same, it's a bit different. But since, I don't know, two or three releases back, they added this feature. So this will only run when you open a merge request, which you should do anyway.

Then we define our stop_review job, which is called when the environment is stopped, and it would then just kill the containers, as we don't need them anymore. Again, the Git strategy is set to none, because we don't need a checkout, and we add a when: manual trigger, which is kind of weird, because you never execute this manually; it's triggered by a different job. But you need to set this. And then we use the same environment name, so that GitLab can figure out which environment to stop, and set the action to stop, so GitLab also kills this environment in its own database.

So let's test it. Let's assume the customer wants to build a feature to, yeah, protect the website, so that no one would ever see the pricing and the catalog and stuff without being logged in. Luckily, we've built such a Magento module ourselves.
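The review-app pair of jobs described above might look roughly like this; the helper scripts are hypothetical stand-ins for the Docker start and stop commands:

```yaml
deploy_review:
  stage: deploy
  tags:
    - docker-review
  variables:
    GIT_STRATEGY: none
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://$CI_ENVIRONMENT_SLUG.stage.myapp.loc
    on_stop: stop_review        # which job cleans this environment up
  only:
    - merge_requests
  script:
    - ./deploy-review.sh

stop_review:
  stage: deploy
  tags:
    - docker-review
  variables:
    GIT_STRATEGY: none
  when: manual                  # triggered via on_stop, not by hand
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop                # tells GitLab to mark the environment stopped
  only:
    - merge_requests
  script:
    - ./stop-review.sh
```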
So we just create a feature branch, add the module that we need, commit it, push it, and when you now open GitLab, GitLab will scream at you: merge request! merge request! — showing it all over the place. So you just need to press that button, and this will lead to a view that looks like this. You need to define a good title, and you need to define a good description. Like — none? No, just don't do that. Just provide a proper description when you open a merge request.

And now what you see is that it automatically started a new build pipeline for this commit, showing you the three dots for the three stages, going through each, and hopefully finally deploying it on staging. Now the question is: where is this stuff deployed, right? I don't know. Well, GitLab will tell you. It says: hey, I deployed this branch on this URL. So all I need to do is click this link, it will open a new browser page, and I can test everything in isolation — and this is basically how it looks. Awesome. So it's completely separated from all the other instances, completely custom, I can test it completely on its own, and you're good to go.

Now, what do we do with that? Of course, somehow we need to put it on staging and then put it on production. How do we do that? Does any of you have an idea? Well, it's super easy: just press the merge button. That creates a merge commit in master, which will then kick off the build pipeline again.
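That master pipeline ends in the deploy jobs. For reference, the "deploy to production" button that appears in the pipeline corresponds to a job marked when: manual; a hedged sketch, with the script name and URL invented for illustration:

```yaml
deploy_production:
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - ./deploy-production.sh   # hypothetical deploy script
  environment:
    name: production
    url: https://www.myapp.log # assumed domain
  when: manual                 # rendered as a play button in the pipeline view
  only:
    - master                   # production deploys only from master
```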
So we do that. The tests run, the build runs, the deploy to staging runs, you manually click the deploy to production button, and boom — we're live. "That is pretty cool." Yeah, exactly. Thank you, I was waiting for this reaction for like the last 50 minutes, but I've got one more. I'm like the Steve Jobs of GitLab — but I'm not affiliated in any way.

The pretty cool thing about GitLab is that it keeps track of all the environments that are currently running, giving you an overview like this, saying: okay, we've got the production environment and the staging environment up and running, there's deployment three and two, the job, the last commit, and when the last update happened. So you get a really good overview of what happened when. You can also dive into such an environment and see which commits made it to that specific environment, seeing exactly what was going on. And I could also just do a rollback. So if I figure out, hey, there's something wrong, I just click the rollback button and the old state is deployed again. Now, to be fair: your application does need to support rollbacks, right? Especially regarding the database and stuff. You need to handle that on your own; this is not something that GitLab can do for you. You need to do that yourselves when the container is launched. For some applications like Magento, it's not that easy, but yeah, you then need to figure out ways of doing that.

And that's actually it — we are finally done with this presentation. As you've heard throughout the day, and probably yesterday as well, please do rate this talk. I've done it a couple of times, but I want to improve it. If you're interested in new features that might be missing, just let me know. And that's basically it for me. Thank you very much. Oh, there are questions.

Hi. In that first demo you had a container, and then you went on to run the Composer commands for setting up. Wouldn't it be easier to just have that in the Docker image?
Sorry — Composer, in the Docker image.

You mean not running Composer in the container, but earlier, when you build the image? Yeah, that might be easier, yes. This was just for demonstration purposes.

All right, I was wondering about best practice, actually. Would you recommend having it in the image, or running it as part of the build?

I would rather run it as part of the build. Also, when I showed in the beginning how the container is built: we installed these libraries and compiled the PHP extensions over and over again. I wouldn't do that either. This is just for you to take something and play around with it. I would rather create a default build image for the application and always use that.

Derick, your question. Yes — on your first slide you showed creating a project. There was a third tab, which is import from an existing project. Does that mean that it would re-import the whole git repository into GitLab, or would you be able to run it on something that you already have locally in a git repository?

You can run it against a custom git repository that is hosted anywhere. You could import it from GitLab, you could import it from GitHub, maybe also Bitbucket, and in those cases I think it would also import open issues and all that stuff. So it really grabs the whole thing and gets it in there.

Hi. Regarding the credentials — you store them in plain text, right?

Sorry, I just couldn't follow you.

Regarding the credentials you store in plain text: is there anywhere we can encrypt and decrypt things, so that we can store them in git or anywhere?

I don't think so. At least I'm not aware of it. I think you need to provide them as-is, in plain text.

Oh. Sorry. Open a feature request.

Hi. God, my voice is deeper than I thought. No problem.
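Coming back to the Composer question earlier in this Q&A: one way to "run it as part of the build", as recommended above, is a build-stage job that installs dependencies once and passes them to later stages as an artifact. A sketch under stated assumptions — the official composer image and a standard Composer project layout; this is not the speaker's actual pipeline:

```yaml
build:
  stage: build
  image: composer:2                    # official Composer Docker image
  script:
    - composer install --no-dev --no-interaction --prefer-dist
  artifacts:
    paths:
      - vendor/                        # later stages reuse this instead of reinstalling
```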
Have you got any thoughts on — so, what you've been doing is setting up these custom pipelines, but one of the things that I'm finding quite confusing in GitLab cloud at the moment is that they've just started building their automated CI build pipelines, which are turned on by default now when you create a new project in the cloud.

You mean this Auto DevOps? Yeah. I haven't tried it yet; I don't know how good or how bad it is. Since we run all that stuff locally, and out of the box using our own .gitlab-ci.yml files this just works for us, I don't know how good or bad this Auto DevOps thing works.

Hello, yeah. The script that you actually deploy with is in the .gitlab-ci.yml in the repo. Is there any way of locking that down, so that only your ops people can change the way the thing gets deployed?

I don't think so. This is really one of the downsides of GitLab: their role-based authorization mechanism is really bad. They've got like four or five roles, and that's it. All the cool stuff can only be done by the Master role — or I think it's the Maintainer role these days. So most of your team needs to have the Maintainer role, and that just makes things even worse. But that's a different story. No, sorry.