Hello and welcome everyone. Thanks for joining us today. We're really excited to have you here with us. In this webcast, we're going to be covering the topic "GitLab for Complex CI/CD: Robust, Visible Pipelines." So we're really going to try to dig deeper into that area of the product and help you understand what features we have available for you in Premium to help out with those complex CI/CD tasks you might need to tackle. My name is Darwin and I'm a senior solutions architect here on the team at GitLab. I'm joining you today from Pennsylvania, near Philadelphia. We'd like to hear where you're tuning in from, so please use the chat function to let us know where you're joining from. Before we get started, though, I'd just like to cover a few housekeeping items. First of all, feel free to ask questions during the session using the Q&A function at the bottom of your screen. We encourage you to ask them as you go along so you don't forget them, but we'll actually be covering them at the end once the content is completed. If you are experiencing any technical difficulties, please use that chat function to get in touch with me, the moderator, and I will give you whatever assistance I can. Our presenter today is Joel Kruzwick, a solutions architect manager from Chicago. We're going to launch a couple of polls throughout the webcast so we can learn more about you, and that way Joel can also tailor what he has to say to some of your needs and requirements. And so with that, I'm going to launch our first poll. On your screen, you should be seeing a poll that asks what GitLab package you're currently on with your organization. We also include an option there to let us know if you're not currently a GitLab user. Great. So as you're filling that in, I'm going to let Joel get started and launch into today's content. Take it away, Joel. All right, thank you, Darwin, and thanks everybody for joining us. So again, my name is Joel Kruzwick. I'm coming to you today from the cold and snowy Midwest in the United States. And I wanted to jump into some content here that we may not usually talk a lot about. So if you're a user of GitLab today, you know that this is a product that goes deep and wide. There's a whole lot of functionality that can be part of your day-to-day work with GitLab. However, finding and understanding how to use all of that isn't necessarily always the easiest thing. So what I wanted to do today is dive into a number of different components as they relate to CI, okay? If you've been around GitLab for a while, you've probably seen content like you see on the screen — we have this marketing style to our diagrams, right? You've probably seen it on our website. Now, on the left there is more the source code management side of things: coding, committing. These are the things we talked about in our last webinar in this series, so if you missed that, go back and take a look — it's out on YouTube. It's about rich change controls for workflows you can trust. It's excellent content about gating, about merging, about approvals, about auditing — about making sure the right things are done at the right times for code to enter the pipeline. Now, the reason I showed this particular picture here today is because this is the core usage of GitLab, right?
The largest number of people who come to us come for source code management, CI, and increasingly CD. And so I wanted to focus here today. As we get into this, I wanna tackle a few questions — some of the most common questions that we hear every day in our customer-facing roles. First and foremost is that CI side: how do we make this thing faster? How do I improve my performance? So we're gonna talk about some tips and tricks that everyone can use regardless of which tier you might be using with GitLab, enterprise or core. That's the first part of what we're gonna talk about today. What registries should we use? So if you've used GitLab for a while, you're aware that there's a Docker registry component. However, we've also added other components and capabilities that I wanna get into today, as well as why they're important. What about microservices? While we're not gonna go in-depth on microservices specifically today, we are gonna talk a little bit about various strategies, why certain approaches are taken, and what we do with the pipelines in order to better accommodate microservices-based code. Some other questions: what deployed where? How do I know what's in my target environments? And as part of that, if I'm using Kubernetes, how do I know that that deployment is healthy, or what's going on out there? It can be quite an enigma when you just stare into the empty Kubernetes space unless you've got some way to visualize what some of that data looks like. Who approved that deployment? How did it get there? We've got that traceability component — I mentioned that it was part of the last webcast that we did. However, there's also that direct tie-in: can I quickly and easily see what was the last thing to go into production and who got it there? And then last but not least — and this is one we'll only really touch on here — will this work if I have another source code manager in place? So I'm using maybe Bitbucket or GitHub today; do I have to jump right into GitLab to leverage GitLab CI? And the answer is, with GitLab Premium, no, you can continue to use that today until you're ready to make the move over. So the things I'm about to show you will in fact also work with your current SCM if you're not using GitLab. I wanted to call that out as part of the introduction here. All right, ready? Let's jump in. So first and foremost, let's talk about package management. If you have been using GitLab, you know about our Docker container registry that's built into the product. You'll see I've got the S and the B there — that's like the Starter and Bronze versions of the enterprise offering. I'm only referring to the enterprise tiers right now, but those of you using a core or free version today know as well that it's there. From the perspective of usability, it's as simple as the admin turning it on. So if you're newer to GitLab and don't see it today, perhaps your admin doesn't have it enabled. First and foremost. But when it comes to usability, there's also another gate: there's no way for you to leverage that registry unless you are using personal access tokens or CI tokens. What I wanna call out here that's really nice is that CI token component. For each pipeline that's running, when it's leveraging your CI token, it's pushing and pulling from that registry seamlessly. Okay, it's something that just makes it an effortless approach. But that's just for Docker container registry storage locally. Why is it important?
Well, all of these things are important. All these registries are important. When they're localized, obviously we're reducing the network variance, right? We're not trying to store this in the cloud or somewhere else where we could have a problem with our network speeds or any number of influences that can slow down the pull time. And I'll actually give you an example later on today about where this plagued me for quite some time. But it's not just a container registry that we have in place. We have also added an artifact repository. The idea that I can store Maven, Conan, NPM, and very shortly NuGet artifacts within GitLab has become a very, very common topic for us, and we're talking about that quite a bit. Again, if you're storing close to the code, you are reducing any variances that could affect your performance. So back to the whole performance idea, we're trying to keep our pipelines moving as fast as possible. Now there's one other thing that we've added that you may have missed, and that is a dependency proxy. This too, like the NPM registry, is only available with GitLab Premium today. And currently it's just for public groups — admittedly that's something of a limitation. But the idea is that it provides upstream dependency artifacts for usage within your pipelines, okay? There's a lot of upgrades on the roadmap for that dependency proxy, so keep an eye on that one. The idea here being, though, when these come together, you've actually got something pretty powerful that can help you expedite your pipeline, okay? So from the perspective of pulling that upstream dependency into play, pulling from the container registry: if you have something locally stored to start your pipeline with, you can process your pipelines and then upload the artifact or the final container back into those registries and repositories before you actually do the deployment and the distribution of those. Now, obviously once it's in place, you've got a way to quickly distribute whatever artifact you need to one or multiple locations, or of course, if it's containers, to pop those into Kubernetes, okay? So this is the whole package management thing. And I wanted to just touch on it because it's something that's often overlooked. A lot of people come to me and say, hey, I really just wanna use the registry that we're currently using, or I'm just gonna pull from Docker Hub online. And that's perfectly acceptable — you can set things up that way — but performance-wise we do see a hit sometimes, to the tune of several minutes. So I like to call this one out specifically. And just as a quick example on the NPM side, the idea here is that all packages have to be scoped and versioned. And then once they're uploaded, they will include both the scope and the version, okay? That is how it works today; those are requirements for usage of the registry.
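For reference, a minimal sketch of publishing a scoped, versioned package to a project-level GitLab NPM registry from a pipeline might look like the following. This isn't from the slides: the domain, project ID, and scope are placeholders, and it assumes package.json already carries a scoped name such as @my-group/my-lib plus a version.

```yaml
publish-npm:
  stage: deploy
  image: node:12
  script:
    # Point the @my-group scope at this project's NPM registry and
    # authenticate with the CI job token every pipeline gets automatically.
    - echo "@my-group:registry=https://gitlab.example.com/api/v4/projects/1234/packages/npm/" > .npmrc
    - echo "//gitlab.example.com/api/v4/projects/1234/packages/npm/:_authToken=${CI_JOB_TOKEN}" >> .npmrc
    # package.json must use the scoped name (e.g. "@my-group/my-lib") and a version.
    - npm publish
```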
All right, let's talk about some tips and tricks for speeding up complex pipelines. I'm actually gonna show you a little bit of GitLab YAML. Now, admittedly, it's basic, but there are some keywords that I wanna show you as we go through some of these tips. The first one is kind of a 101 moment. It's kind of a duh kind of moment, but if you've never used GitLab for pipeline management before, this could be new to you. The model that GitLab uses is stages and jobs. Stages are the things that run sequentially — think a series of stages that run in a row, each one starting after the one before it completes, okay? Parallel jobs are the things that you can put inside each stage to run together at the same time. So when you build a pipeline like that, what you're gonna see is something like this, where I've got two stages, two jobs in the first one, five in the second. The idea being we can of course expedite things as much as possible by parallelizing our jobs, okay? That speeds things up dramatically in most cases. And what you're seeing here actually is a GitLab pipeline. This is one of our internal ones, so this is an actual thing that we're doing today. However, depending on how complex your pipeline is, there are variations on running these parallel jobs. And what that could look like is this. Now, this is the idea of leveraging the directed acyclic graph, or the DAG for short, okay? It uses that needs keyword that you see in blue there. And the idea here is there's three different stages — build, test, and deploy — and each one caters to a different environment: one is catering to Linux, one is catering to the Mac world. Now as part of that, when you use the needs keyword, the limitation we have on the left goes away. The limitation on the left is that while I'm running the prepare stage, and crop pictures and enforce relative links are running, those both have to be complete before the next stage kicks off. In this case — and I apologize, that picture is a little small here today — that bottom set of jobs actually ran to completion while the first set of jobs was still in its first stage. The idea here being if you have some longer jobs, some systemic jobs, some longer-term test jobs, things that are part of your deploy process that don't affect all your environments, for instance, you have the ability to use this functionality to speed up your pipelines and speed up your deployments for other components within the same pipeline. This of course being most useful in deploy pipelines and in monorepo-type approaches, potentially, where you've got a lot of different things that are gonna be running all at once. This can help you get certain parts of your pipeline done faster so you can analyze those items first. Okay, so that's something I wanted to call out. And again, these are part of just regular old GitLab, no Premium requirement. Same thing with everything that I'm about to show you on this page. So one of the other things we can do to speed up our pipelines is to set rules for when jobs run. Now, we're actually coming out with a new rules keyword and you'll be able to do a lot more with this. Today, some examples of that are the only and except that you see here. When should these jobs run? Every job shouldn't have to run every time. Maybe you only wanna run that job when there's a code change to certain files. Maybe it's only on certain branches. Maybe it's only when it merges to master. Those kinds of controls can save you an enormous amount of time when your pipeline runs. So these are some of the controls that are most popular today when it comes to controlling what runs on what criteria within GitLab pipelines.
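As a rough sketch of those three ideas — sequential stages, the needs keyword for DAG-style ordering, and only/except run conditions — the YAML can look like this. Job names and scripts are placeholders, not the jobs from the slide.

```yaml
stages:
  - build
  - test
  - deploy

build-linux:
  stage: build
  script: ./build.sh linux

build-mac:
  stage: build
  script: ./build.sh mac

# With needs, this job starts as soon as build-linux finishes instead of
# waiting for the entire build stage to complete (the DAG behavior).
test-linux:
  stage: test
  needs: ["build-linux"]
  script: ./test.sh linux

test-mac:
  stage: test
  needs: ["build-mac"]
  script: ./test.sh mac

# only/except decide whether a job runs at all: this one runs only on
# master and only when files under docs/ change.
deploy-docs:
  stage: deploy
  script: ./deploy-docs.sh
  only:
    refs:
      - master
    changes:
      - docs/**/*
```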
Another big one is runner caching, and I've had a lot of questions on this lately. If you've used GitLab for pipelines, you know a runner is an executor for our jobs — maybe you haven't set one up yet, or maybe you're using gitlab.com, which has shared runners ready for you to use, all auto-scaled and ready to go. The idea here is that there's caching built into those runners. So say you're leveraging a bunch of runners for your jobs, but you know that certain jobs are gonna run in a sequential fashion and leverage the output of the job before them — think: I built an artifact or a container and I just don't want to rebuild it in the next job, and I don't want it to go all the way back to my registry and then get pulled back out again. This cache component stores the artifact locally on the runner. The idea being that a job runs, the artifact is stored right there on the runner, the runner picks up the next job, pulls that artifact back into action, and now we have no network latency — no time allocated to actually pulling that image back into play. So caching is a really important thing. You'll notice here too that there's a pathing component to that: the cache has a local path that it's going to use to find that artifact the next time you run the job. Another recommendation here is to consider tags as part of this, just to simplify which runners are going to actually leverage the artifacts. So if you're self-hosting GitLab, I recommend you look into tags as part of the caching setup for the highest efficiency. Another one that's not real well known is for testing: the parallel keyword. For those of you running a lot of tests at once, I'm sure your ears perked up. This is the easiest way that I know to quickly break down and run a whole lot of tests in parallel, and that number of course is variable. The idea here being we can simplify testing as much as possible and expedite it by parallelizing the jobs within it. And then last but not least, I wanted to cover before_script. We have a before_script and an after_script component, and sometimes I see this used really well and sometimes I see it misused, which is why I wanted to call it out. The idea here being, if I need to establish variables, if I need to establish a core set of things that we're going to leverage — think: I need to create tokens for the interactions I'm about to have with GCP — those kinds of things might fit really well in a before_script. However, there's a limit to that as well. The alternative would be to have your first job within the pipeline, after the before_script, actually do the creation of an image that you plan to reuse. That's one of the more common use cases I see where people try to tuck it in, and unfortunately before_script doesn't necessarily fit that mold as well as creating a first job, creating an image, then pulling that out and leveraging it with caching — and there's a whole bunch of variations here. So my encouragement to you is: if you have one thing you want to do at the front of your pipeline before the rest of the pipeline runs, if you've got this one setup component, test it — try it with before_script, try it as part of your first job, and see which one gives you a higher performance level, because there is some variation there depending on what the script is and what its output is.
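Here's a rough, combined sketch of those last ideas — a cache with a path, runner tags, the parallel keyword, and a before_script. The tag name, paths, and scripts are placeholders to adapt against your own runners.

```yaml
# Runs ahead of every job's own script: a good place for small setup
# like exporting a token, less so for heavy image-building work.
before_script:
  - export DEPLOY_TOKEN=$(./get-token.sh)

# Anything under these paths is kept on the runner between jobs that share
# the same cache key, so it doesn't have to be re-downloaded each time.
cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - node_modules/

build:
  stage: build
  tags:
    - docker-cache      # pin related jobs to runners that share this cache
  script:
    - npm ci

test:
  stage: test
  tags:
    - docker-cache
  # Splits this one definition into four jobs that run at the same time;
  # each copy can read CI_NODE_INDEX and CI_NODE_TOTAL to pick its share
  # of the test suite.
  parallel: 4
  script:
    - ./run-tests.sh "$CI_NODE_INDEX" "$CI_NODE_TOTAL"
```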
Okay, so now I want to get a little more complex. Everything I've shown you so far has pretty much been something you could do with a single project. Well, as we get into microservices, as we get into cross-project dependencies, how do we manage that? There's a couple of ways, and this is where we're starting to get into some Premium functionality. When you're looking at downstream jobs — think: I've got one project that's running a pipeline, and now I'm about to run another project that's dependent on that first project — there's a couple of ways to do it. The first way is just an API-based trigger. You see I have a curl request there; I'm actually using the CI job token and calling out to the API. This looks a little bit cumbersome, and to a certain extent it is. However, if you're a heavy API user, you'll notice right away that this is pretty simple from the perspective of simply calling the next pipeline as an API trigger. There's another way to do this, though, and that's with the trigger keyword and a project path. This is the one I recommend for everybody starting out. If you need to trigger a downstream job, you'll see that I have a couple of extra things thrown into this one as well. So not only is the project path there under the trigger, but I'm also gonna pass it some environment variables. And in this particular case, I'm gonna point to a branch in that downstream project that I'm going to leverage. You also see the strategy: depend there, which ties this trigger job's status to the downstream pipeline — the upstream pipeline only succeeds once that downstream pipeline passes. So some of these components are really important for things like specificity in which downstream pipelines I'm going to run. And these can get really complex.
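A minimal sketch of both downstream options — the curl/API trigger and the trigger keyword — might look like the following. The domain, project ID, project path, branch, and variable are placeholders, not the exact values from the slide.

```yaml
# Option 1: trigger the downstream pipeline over the API from a script,
# authenticating with the CI job token.
trigger-downstream-api:
  stage: deploy
  script:
    - curl --request POST --form "token=$CI_JOB_TOKEN" --form "ref=master" "https://gitlab.example.com/api/v4/projects/1234/trigger/pipeline"

# Option 2: the trigger keyword, which also draws the downstream pipeline
# in the pipeline graph.
trigger-downstream:
  stage: deploy
  variables:
    ENVIRONMENT: staging             # passed through to the downstream pipeline
  trigger:
    project: my-group/deploy-project # downstream project, referenced by path
    branch: stable                   # which branch of that project to run
    strategy: depend                 # this job's status mirrors the downstream pipeline
```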
So what I'm gonna show you in just a moment is the visualization of some of the stuff we've been talking about, because being able to see this makes all those pipeline management components a whole lot easier. This here is an example of our Auto DevOps. If you've ever turned on Auto DevOps in GitLab, you may have seen something like this. This is kind of our core expectation for a viable pipeline: an auto build, auto test, code quality via Code Climate, and a number of security scans put in there. Now, those security scans are only part of GitLab Ultimate, so I wanna point that out — if you haven't seen those, that might be why. And then the review app. So in this case, I have my Kubernetes cluster enabled, I spun up a review app of my web app, I ran dynamic security scans against it and a sitespeed.io browser performance test against my site. This is all part of GitLab Auto DevOps — I didn't build this — and this is a single project. This is one project. It runs to completion, and then off the screen here you see that line continuing. That's where it's gonna deploy to some sort of internal environment once I'm ready for that. What happens when we get more complex? What if we take this pipeline and we say, well, I'm gonna build, test, and deploy — I have a deploy job where I'm creating an artifact to go out the door — however, there are four different environments that I'm interested in having that deploy go to. What's gonna happen here is that deploy job's gonna finish and it's gonna trigger all the cross-project pipelines. So when we go downstream of that, you can see those cross-project pipelines at the far left side of the screen; it's basically cut off. When you click into one of those, you'll get something like this, where you see the downstream pipeline and all its components — that yellow arrow showing me the next one I'd click on, the website. And I would get a different rendering of the build, test, and deploy that I see on the screen now, because it would be building, testing, and deploying to a different environment. So this visualization component is part of GitLab Premium, and it's one of the more valuable things that I have leveraged within GitLab, just because it can be really difficult to visualize all this stuff even when you're only dealing with a couple of downstream pipelines. The more complex microservices architectures you get into, the more helpful this becomes for visualizing all the different downstream components you may have and all the dependency management you may be trying to do. One more thing I wanna highlight here, and that's our operations dashboard. When it comes to seeing across all the different projects that you might be interested in — passed, running, failed, passed with warnings — you see all the different pipeline statuses as part of our dashboard. This too is part of GitLab Premium. It's that insight you get by taking a quick look, and again, you can click into each of these and see the actual pipelines and find out what's going on. This is your at-a-glance pipeline health, or your at-a-glance pipeline activity monitor. So it's another good thing to know about. All right, now you'll notice our slides change. That's because I borrowed from Darwin, who presented our last webinar in this series. This is his deck, but I felt like there's a lot of application here. So again, if you have seen the last one, you know when we looked at the left side of the screen, we were talking about the controls — who has the authority to merge, who has the authority to approve the code, right? How do we get to the point where we've created that puzzle piece in the middle, which is our artifact? So we talked mostly about the left side of the screen. However, once we have that artifact in place, now we've got these different deployment options, right? Where again, back to the idea of having multiple projects, multiple repositories in place — and remember, one project in GitLab is a single repository, so whichever term you see, it means the same thing. The idea here is: I take that artifact that we created up front and we're now deploying it with various configurations applied. Think of this as kind of like a SaaS model of some kind, right? Where you're deploying into different customer environments with different configurations on them. And there may be other controls that are part of this, but each of these is like a different CD pipeline that comes after you've created your original artifact. So this is one model — notice the reuse of that artifact repeatedly to deploy to different customer stacks. Another example of this is multiple deployments via a push. This is just a little bit different model. Same kind of idea, but here you can see that we're reusing the artifact with essentially the same configuration — there's only a couple of different configurations — and then we're reusing that artifact in a whole bunch of different environments. These could be different customer tenants, these could be different clouds. There's a lot of different components that we can configure for here, but I can do multiple pushes into multiple environments with a single pipeline configuration. And that's the difference between this and the previous screen, where each one of those arrows has its own pipeline configuration. In this one, you can see we're doing those five deployments at the bottom off of a single pipeline. So, you know, there's a lot of different configuration options available. It really is meant to mirror your configuration today — what are you doing with your customers? — and model that in the CI specifically.
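As a hedged sketch of that "one pipeline, many pushes" model: one build job produces the artifact, and several deploy jobs reuse it for different environments via a shared hidden job and YAML anchor. Names, scripts, and environments here are placeholders, not the actual customer stacks from the diagram.

```yaml
stages:
  - build
  - deploy

build-artifact:
  stage: build
  script: ./package.sh
  artifacts:
    paths:
      - dist/            # the one artifact every deploy job below reuses

# Hidden job used as a template; each deploy merges it in with a YAML anchor.
.deploy-template: &deploy
  stage: deploy
  script: ./deploy.sh dist/ "$TARGET"

deploy-customer-a:
  <<: *deploy
  variables:
    TARGET: customer-a
  environment:
    name: customer-a

deploy-customer-b:
  <<: *deploy
  variables:
    TARGET: customer-b
  environment:
    name: customer-b
```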
Now, what happens when we get into multi-service monorepos? Well, this is where we start to get into dependency management. This is where things can get really interesting. And just as a data point, I went back and looked, when it comes to microservices and multi-services, at what it is we're doing out there. What are customers doing with GitLab? How many of them are using a large monorepo? How many of them are doing a lot of microservices repositories? And I'm afraid I just don't have a real clear answer for you, because it came out 50-50, almost exactly. People are doing what's best for them or what their application supports today. So there's no clear right or wrong answer on whether you should go with a large monorepo or break down into all microservices. At the end of the day, though, both of them involve pulling from another repository in order to support dependencies, or in order to support a core set of jobs that has to run every time. So here you see I've highlighted in blue that word include. This is the idea that there is a GitLab CI file in here, but we're going to include jobs from other pipelines, from other projects. Now, those other projects where those CI files are defined may be locked down, they may be regulated. If you're in a regulated environment, those jobs are going to be specifically controlled in a separate project. So this is one way that you can help control the core components that need to run in every pipeline. Notice here, though, that this file is the, I guess, master or primary pipeline, and it's going to pull in those other pipelines as part of it. The alternative is that there's a template of some kind that all jobs should run, and this particular YAML file will extend that existing template. So the difference here is that the one on the left says, I want this to be the primary way of driving my pipeline from within the repository, while the one on the right is more of a: look, everything's driven externally, I just want to make sure that we also account for a few other things in this pipeline by extending it. So those are a couple of things I wanted to point out that are useful for this. Another common question we get is, well, how do we make sure that this stuff isn't being changed all the time? How do I make sure that this is locked down in a regulated environment? And the idea there is that you can use the code owners file and file locking today. Coming soon, there is actually a specific way to lock down the pipelines from everybody except admin and maintainer roles, just to make sure that those pipeline jobs aren't tampered with in any way and we can be compliant with audits. So there's a couple of different things there that you can do — one method today and another method coming soon.
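Roughly, the two patterns on those slides look like this: include pulls jobs in from a centrally controlled project or a built-in template, and extends lets a local job inherit from a shared definition and only override the deltas. The project path, file name, and job names here are placeholders.

```yaml
# Left-hand pattern: this file is the primary pipeline, pulling in jobs
# that are defined (and possibly locked down) in another project.
include:
  - project: my-group/compliance-pipelines
    ref: master
    file: /templates/required-jobs.yml
  - template: Code-Quality.gitlab-ci.yml   # a template that ships with GitLab

# Right-hand pattern: inherit a common definition and override only what differs.
.deploy-base:
  stage: deploy
  script: ./deploy.sh "$ENVIRONMENT"

deploy-staging:
  extends: .deploy-base
  variables:
    ENVIRONMENT: staging
```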
All right, let's switch gears a minute. For those of you using Kubernetes today — or not using Kubernetes today — the idea here is that GitLab is trying to make Kubernetes use a bit more effortless. So the idea of having clusters that you spin up and can quickly add to GitLab is what you see in front of you. This is my project, actually one of the things that I'm working on. It's kind of funny — if you look closely you can see that I've actually destroyed a number of Kubernetes clusters in my testing. I have an overloaded cluster, I have a dead cluster, I've got an offline cluster at the moment. So I've kind of made a mess of things, but all that to show you that no matter what you do, it's worth some testing and having some fun with here. So I actually have two environments that are active in here. One shows the AWS Summit demo; it says that's for the dev environment, so I'm running that deployment in AWS, in that cluster. And then the top cluster is actually my GKE cluster. So I am multi-cloud currently in my deployment model. Now, the reason I showed this screen specifically is that you see on the right there are project clusters. These can be added specifically to a given project, or to a given group and utilized in all the projects under that group. But if you wanna add more than one cluster, you're gonna want GitLab Premium — that's an important thing to note. Once that's in, what do you get with it? Well, this is what you get: the ability to see into the clusters once you've made deployments into those environments. In this case, I have a production environment running on GKE. You can see that I've got about 20 to 25 pods that we're deploying into, five of which actually have a canary function with them. And if you look on the right side of this bottom screenshot, you see a little box with an arrow pointing out of it. That is something I can use to access those environments and see, in real time, what the site running inside that environment looks like. And if I reload that over and over, well, about one out of every four times I'll get the canary environment and I'll be able to see what that looks like. The other advantage of this screen is I can now see which pods are rolled out. Green is good, right? Green is healthy. We can see that those pods have in fact deployed, and we can click into each of those boxes, be taken to the terminal inside the Kubernetes cluster, and actually take a look at what's going on inside. One thing missing from this screenshot — and I didn't realize it until it was a little too late — is the monitoring side of things. We also have Prometheus bundled with GitLab. So when you are deploying into Kubernetes environments, if you just one-click install Prometheus into your Kubernetes cluster, we will monitor these pods for you and give you an idea of where the RAM and CPU utilization is across each of the clusters and pods. And so you get insights into the performance and the health of Kubernetes. Now, this example looks pretty basic, right? This is a single project, it's not a big deal. Things get more complex once you start really rolling with this. So this is another example — just look at the volume of pods we're deploying into production and into staging. I chose this screenshot for a couple of reasons. By the way, it seems a little blurry — I didn't realize you could actually blur a screenshot if you capture it wrong; I didn't think that was a thing. The idea here, though, is that it shows you those green boxes on the production side along with boxes that are not filled in green yet. Those are pods that are spinning up and deploying. So I'm actually watching a staged rollout occur on that top one. It's a really interesting thing: you can watch these kinds of things happen and not only see the health of things, but actually watch that deployment occur. The other thing I wanted to point out here is the other items below that. The reason we have what we call those deploy boards with all the pods on them is that that's Kubernetes, right? We're giving you insights into your Kubernetes environment. Everything below that is virtual machines, bare metal servers, or other deployments that simply aren't done to Kubernetes. And so as such, we can monitor the health of those environments, but we're not necessarily going to show you something like the deploy boards, because it's a singular deployment, right? We don't need to give you more information about how that's broken down.
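For context, what ties a job to those environment pages and deploy boards is the environment keyword on the deploy job; with a connected cluster (and the labels that something like Auto DevOps applies to the Kubernetes deployment for you), the pod view described above shows up for that environment. A minimal, hypothetical example:

```yaml
deploy-production:
  stage: deploy
  script: ./deploy-to-k8s.sh        # placeholder for your actual rollout command
  environment:
    name: production                # the environment the deploy board tracks
    url: https://example.com        # the "open in browser" box on that page
  only:
    - master
```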
So just a quick wrap-up here on the things we've looked at. We talked about the operations dashboard; there's also an environments dashboard that takes the information from the last screen and shows it in an operations-dashboard kind of way. We talked about the deploy boards just now, and CI/CD for external repos. All the things that you see here are things that we discussed today. We also talked about visibility, though, and some of the things that we didn't talk about that are part of GitLab Premium are visibility features like auditability, scoped labels, analytics around issues, and multiple issue boards. So there are lots of other components that come even before you get into SCM management. Merge trains, productivity analytics, feature flags — there's a whole lot of functionality we haven't gotten into that might be worth exploring, again with GitLab Premium. And with that, I'd like to stop at this point and take a look at the Q&A, to see what questions you might have as it relates to the topics at hand. Okay, Joel, one of the first questions we have here is: how do you ensure all of my pipelines include a core set of jobs? Yeah, so the core set of jobs thing, that's a really common question we get. There's a common concern that there's gonna be some bypassing of a core job someplace — somebody's gonna skip a security job or skip something critical just to meet a deadline. I really do choose to believe the best in people and that we're not gonna have that problem very often, but in regulated environments, that statement doesn't hold. So there's a couple of ways to do it. One is that the GitLab CI file is locked and only available to certain people listed in the code owners file. The code owners file is a file that lives in the repository and lists out the specific people who are responsible for a given file — and of course that file too would be locked. So the YAML file would be locked down, and it would not be something that could be altered by anyone who doesn't have the sign-off authority to actually do so. Another way is, again, looking forward: we actually have specific functionality coming out around this that's going to provide a required template, essentially, that can't be altered. It's gonna force certain jobs and functions to run as part of that pipeline every time. So that is a common thing that we get asked about, and again, we saw various ways to do this today. It could be trigger-based, it could be include-based, it could be extends-based, or one of the new definitions coming soon.
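To illustrate the CODEOWNERS half of that answer, a tiny, hypothetical entry like the one below makes a specific group (the group name here is made up) responsible for the pipeline definition and for the CODEOWNERS file itself; combined with file locking and protected branches, changes to those files then require that group's sign-off.

```
# CODEOWNERS (kept in the repository root, docs/, or .gitlab/)
.gitlab-ci.yml   @release-engineering
CODEOWNERS       @release-engineering
```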
Great, we have another question. I'm gonna rephrase it a bit, and if whoever submitted it does not agree with my rephrasing, please retype it. Is Nexus repository supported for artifacts? Oh — explicit Nexus support is not listed, so we don't have explicit support there. Is it possible to pull from Nexus? Absolutely. So I think it's a matter of how you're leveraging it today, but we don't have an integration or something off the shelf with that, if that's the question. Yeah, and just to add to that, Nexus has actually published a blog — which I'm gonna add to the question here — that tells how to integrate, but basically you're just using CI commands to configure the proper credential files locally on the runner and then push and pull from the repository. And actually, any repository would work pretty much that way: whatever you'd normally do on a workstation to configure it and interact with it, you can do that on a runner and use it during your CI process. Yeah, Darwin, that's a good point. So that's one thing we didn't talk about today: everything that you saw here had a script keyword in it. There's some script that's going to run, and a lot of that is bash-scripted stuff. Anything that you can get to with a bash script, you can integrate, right? So I think that's a really important thing that I did not bring up today. It gives us a lot more power and a lot more ability to leverage other tools within our existing pipelines as part of what we're doing every day. Yeah, and also, a lot of times people think in terms of plugins — other CI infrastructures will have plugins — and within GitLab, what you would do instead is potentially create a build-specific container. So you'd have a container that has some additional utilities pre-installed, maybe some of your own custom code. And then everyone in your organization can easily pull that container, their source code and artifacts get replicated to it, and then they can run whatever utility you have. This keeps plugins much more isolated than Jenkins or other options, so you don't have as much plugin contention as well. So building your own containers for your build stages is also another powerful way to extend. One final comment on that: we talked a little bit about what to do with those containers and where to store them, right? About the whole idea of the GitLab registry and its usage. There is an exception to that at times. If all of your work is being done in, say, AWS today — if you're leveraging AWS, you've got your runner deployed there, and you're deploying your work there — you definitely want to look into caching your artifacts on the runner, but there are some cases where it can actually be a time savings to say, I'm going to store some things in ECR. I'm going to tuck things up into the registry there, locally, because the pull time will be shorter. So back to that performance topic we talked about early on: there are some cases where that makes sense, if GitLab is not also hosted in AWS, right? Where it might be good to keep those containers close to the code, close to the runner, and close to the deploy environment. It's just something to keep in mind: where is the code, where is the runner? How do we keep them close together to minimize performance impact?
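To sketch those two ideas together — a build-specific container instead of plugins, and plain script commands to talk to an external repository like Nexus — a job might look like this. The image name, Nexus URL, artifact coordinates, and credential variables are all placeholders you would define yourself (the credentials ideally as masked CI/CD variables).

```yaml
build-and-publish:
  stage: build
  # A custom image with your build tools and in-house utilities pre-installed,
  # pulled from the GitLab container registry (or ECR, if the runner lives in AWS).
  image: registry.example.com/my-group/build-tools:latest
  script:
    - mvn -B package
    # Anything you could run from a workstation shell works here too, e.g.
    # uploading the artifact to an external repository with curl.
    - curl -u "$NEXUS_USER:$NEXUS_PASS" --upload-file target/app.jar "https://nexus.example.com/repository/releases/com/example/app/1.0.0/app-1.0.0.jar"
```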
Great, well, it looks like we've got a wrap on our questions. Thanks a lot, Joel, for a very informative session. We'd also like to let you know that for the previous session we mentioned earlier, I'll drop a link into chat right now. And for this session, you will receive an email at the same address you registered with that will give you access to the recording, in case you want to share it with colleagues or review some of the content. So that's all for today. Thank you so much for joining us. Thank you to Joel, and we'll be sending over that recording email pretty soon. Thanks again, and we should also mention that we're going to be having a third one of these sessions coming up. So you'll want to review the registration page you used to get onto this session and see if you'd like to attend our third session in this series, coming up very soon. Thanks everyone.