Welcome, everybody, to the, I believe, first, well, definitely first in this session, talk in the automation, testing and analysis track, called "Advanced GitLab CI/CD for Fun and Profit". The presenters are Iñaki Malerba and Michael Hofmann, and I'm going to let them introduce themselves.

Thanks, Mike. So, yeah, we are going to try to explain a little bit how we do what we like to think of as advanced GitLab CI/CD without dying in the process, and trying to have a little bit of fun.

First of all, we are members of the CKI team. CKI stands for Continuous Kernel Integration. We basically run CI as a service for the Linux kernel. We try to prevent bugs from being merged into the kernel trees, but we also manage the CI infrastructure for Red Hat kernel developers now that we are moving to a GitLab workflow. Basically, what we do is spawn GitLab pipelines for each kernel revision, and we test them in Beaker, which is a hardware provisioner. We also have a lot of deployments on different platforms, such as OpenShift, OpenStack, Beaker, and AWS (EC2, Lambda), a lot of places, and all our services depend on RabbitMQ as a messaging fabric for getting data across all the applications. You can find more information about us on cki-project.org, our webpage. We have all our documentation public in there, but we also have all our code publicly hosted on gitlab.com. We have around 70 microservices and cron jobs, and an average of 20 changes merged and automatically deployed per day, and we are going to try to explain a little bit how we stay sane after all that.

In the same line as the previous talk: try to keep the abstract not too complex, do not promise too much stuff, otherwise you won't have enough time. But we are going to try to explain how we deliver features as fast as possible, using a state-of-the-art continuous integration and continuous delivery and deployment setup that is based on GitLab. For this talk, we prepared examples. We are going to show some snippets, but you will be able to find all of them at the link shown there, basically in our GitLab namespace. You should be able to find not only the snippets that we show in the talk, but also real projects working end to end, from the applications to the deployments, so it's a really nice thing to follow. We're going to divide the presentation into parts: first an explanation, and, if we have enough time, a small live demo of how a small piece of code can go from a merge request to a deployment in a few minutes.

First, why do we say "profit"? The idea is to take advantage of what we do: we have a common CI pipeline that we use to build, test, and deploy all our projects the same way, and a common continuous deployment pipeline that we use on a single infrastructure repository, based on GitOps, where all the deployments get triggered and the applications deploy to production, staging, or different environments depending on the needs. But we try to keep this fun. What "fun" means to us is, as I said, trying not to die in the process of keeping deployments consistent across different projects. If you maintain several projects, keeping up all these requirements for testing and code quality is not simple if you don't have a unified process. So adding a new project under all these rules should be pretty straightforward, without any trouble; you shouldn't be reading a lot of documentation. The code should be self-documenting, and you should be able to start just from reading examples.
We also like to have the pipeline as the source of truth for the changes, so we make all pipelines fail if some condition is not met. That keeps everyone honest about the rules, and if the rules are the same on all the projects, it's way simpler.

But how do we do this? Basically, we do everything on GitLab. They say they are "the DevOps platform": a platform that merges a Git forge, a CI/CD engine, a container registry, and a lot of other things like issue management, project management, and many more features. It's a relatively new project, but they've been adding features way faster than the competitors, so they have grown super fast. Their development is completely in the open: you can see all their day-to-day work in public, but also all the rules they use to develop. It's a really nice thing to see. The only thing is that they are open core. That means that the core of the product is free: you can use it, you can host it, but there are some features that are under a subscription model. The good thing is that they provide an open source program, which gives all the top-tier features for free to open source projects, but without the support.

As I said, we provide example projects, so you can follow along with the talk. All the projects are hosted in our namespace on GitLab, in a subgroup for this conference. There you will find a few projects: a common project, where we put all the common code, libraries, and scripts; app1, a small sample application that uses all the things that we are going to explain; and an infrastructure project that contains the GitOps infrastructure repository.

What do we mean when we talk about CI? Just an overview: we create pipelines and we run them. These pipelines build and tag the container images that we use as a delivery platform; we deliver all our applications as container images. Those pipelines also run unit tests. These tests run in tox. For those that are not Python developers, tox is basically a helper that manages virtual environments, but also knows how to run tests and linters. So we run our tests in there, but we also run them in the container images and in the projects that depend on them. We also have a way to visualize and enforce code coverage. GitLab knows how to parse coverage results, so they are displayed really nicely on the diffs in the merge requests. But also, as I said, we like to fail pipelines when something is not right, so we have the option to fail the pipeline if the code coverage decreases with a merge request. We also have an approval rule, which is a way to enforce that the code owners approve the changes before they get merged: when you change a file, the owner of that file needs to review your merge request and approve it. And we enable some security checks, such as secret detection and dependency analysis, on GitLab CI.

What does it look like? A GitLab pipeline is basically a YAML file hosted in the repository. This means that the file is Git-tracked: you can see the history of how the file changed, you can have different files on different branches, and you can manage changes to the CI configuration as you would any other change to the code. You can open a merge request, and the merge request will run with the new pipeline configuration you are trying to create. In our example, we are going to use a slightly more complex pipeline. For those that are not used to GitLab CI: basically, the bubbles are jobs.
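For readers following along at home, here is a minimal sketch of what such a file might look like. The job names, images, and script lines are invented for illustration, not taken from the example projects:

```yaml
# .gitlab-ci.yml - a minimal, hypothetical pipeline definition
workflow:
  rules:
    # run pipelines for merge requests and for the default branch
    # (more on these workflow rules below)
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

stages:          # stages define the order in which jobs run
  - build
  - test

build-image:
  stage: build
  image: quay.io/buildah/stable        # assumed builder image
  script:
    - buildah bud -t "$CI_REGISTRY_IMAGE:$CI_PIPELINE_ID" .

unit-tests:
  stage: test
  image: python:3.9
  script:
    - pip install tox
    - tox
```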
The jobs are the things that run the building and the testing, the things that you want to run. Then there are what we call stages, which define the order in which the jobs are executed. Of course, the green mark means that a job was successful, and the gray mark means that a job is waiting for user input. And these are what GitLab calls multi-project or parent-child pipelines: pipelines triggered by this one, but in another project.

Now, getting into GitLab configuration, something very important to configure is when to run the pipeline. GitLab provides a feature called workflow rules that lets you define when you want these pipelines to be spawned. In this case, we use predefined GitLab variables, the ones that start with `CI_`. This configuration gives you a pipeline for a merge request: when you open a merge request and push to it, you get a pipeline. But also for the default branch: when the reference is master, main, or whatever your default branch is, you get a pipeline. Also when it's triggered as a dependent pipeline, as I said, with multi-project pipelines, and the user has the option to trigger a new pipeline by pressing a button in the GitLab web UI.

Something important to enable is merged results pipelines, to get the pipeline in the merge request, but also merge trains. That's a nice feature GitLab has: if you happen to merge more than one merge request in a row, the following merge requests get tested on top of the previous ones. That prevents you from getting a red pipeline on the default branch because two changes were merged at the same time that each worked alone, but not together. This way you can chain merge requests, and they only get merged if CI is green with all of them together.

As I said, we deliver and deploy applications as container images, so for us it's really important to build these containers reproducibly. We do this in CI. But the first time, you need an image for the image building itself. This chicken-and-egg problem is something we had, and we solved it by using the upstream buildah image. We use buildah for building the images, for logging in to the registry, and for tagging and pushing the images. We use the GitLab registry: each project comes with its own registry, so we use that for publishing our images. And we tag all the images with different tags. For example, each pipeline has its own tag for the images, and so does each commit and each merge request; if it succeeds, it gets its own tag as well. The default branch head gets a latest tag, and we also have other tags that we use for deployments, such as production and staging.

We wanted to make a point in here: one of the most important things from this talk is how we share code across all our projects. We try to stay consistent between projects, as I said, so we try to share all the code and not repeat ourselves. For this, we use several things: GitLab features such as includes, reused fragments of container files across all the applications, shared images for the dependencies, and common Python libraries. We're going to try to explain a little bit what that means. For example, GitLab provides two very important keywords: include and extends.
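As a rough sketch of how these two keywords fit together (the project path, file name, and template contents here are hypothetical, not the real example projects):

```yaml
include:
  # pull shared job templates from another project on the same GitLab instance
  - project: devconf-example/common     # hypothetical shared project
    file: ci-templates.yml

# ci-templates.yml could define a hidden job (note the leading dot):
# .python-tests:
#   image: python:3.9
#   script:
#     - pip install tox
#     - tox

# the application then only customizes what differs
unit-tests:
  extends: .python-tests
  variables:
    TOXENV: py39        # hypothetical override
```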
Include lets you import external files into your YAML, basically merging files into the current project's YAML. This lets you both create job templates to reuse code, and use pieces provided by other parties such as GitLab itself. You can include files locally from the project, but you can also include them from another project hosted on the same GitLab instance. You can also use generic URLs that can be fetched through HTTPS, and, as I said, GitLab provides a certain set of templates, mainly for security checks and other things that are common between projects, so you can reuse the code they provide by using includes. Then there are some special jobs whose names start with a dot in the YAML. Those are what they call hidden jobs, and they are not processed by GitLab, so you can use them, plus the extends keyword, to create very simple templates and very simple CI configurations in your applications. The extends keyword lets you take any job and change some of its parameters to customize it for your application's use case.

Another thing that we share is container file fragments. Most of our applications are Python, so there is a lot they have in common, such as setting the base image, installing certificates, or installing the application itself. So we split out all these pieces, and we use the C preprocessor to merge them back into a single container file. These fragments encapsulate common things like cleanup, and let you get the resulting container file from something like this. This is the C preprocessor include syntax, so you can get the fragments included into your container file, and just those five lines are enough to get a working application out of that container file.

What we also share is the CI container image. We have a generic image that contains all the dependencies necessary for running all the CI/CD jobs, but also for development tasks. For example, we use this image for building and tagging the containers, so it contains helper scripts, and it contains the container tools like buildah and skopeo, but it also contains these reusable fragments I mentioned, already built into the image, so you can use them both in CI and in local development the same way. It also has tools like Git and Python and some other deployment tools, like the Kubernetes command-line clients and JSON/YAML parsers.

And last, there is this shared Python library that I mentioned. It's the way we found to consolidate code in a single place, to define a single way to do things, such as detecting whether an application is running in a production or staging environment, or, for example, logging configuration for Python. Configuring the Python logger is not really straightforward, it takes some time, so having a single place to put all these configurations, and being able to use them in all the applications the same way, is a really big plus. Also, you can configure the log levels in a single way, so debugging turns out to be way easier. Other examples, which are not included in this sample code, are Prometheus metrics and message queue handlers. You can find them in our production project, cki-project/cki-lib. There are a lot of other things we unified in cki-lib, and it's a really nice example for all of this.
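Going back to the container file fragments for a moment: the five-line file mentioned above is essentially a handful of lines like `#include "fragments/base-image"` and `#include "fragments/install-app"` (fragment names invented here). A sketch of a CI job that merges the fragments explicitly and builds the result follows; note that buildah can also run the preprocessor itself for files with an `.in` extension, as comes up in the Q&A later:

```yaml
build-image:
  stage: build
  image: quay.io/buildah/stable   # assumed image with both cpp and buildah available
  script:
    # -P suppresses linemarkers so the output is a plain Containerfile
    - cpp -P -I fragments Containerfile.in -o Containerfile
    - buildah bud -t "$CI_REGISTRY_IMAGE:$CI_PIPELINE_ID" -f Containerfile .
    - buildah login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - buildah push "$CI_REGISTRY_IMAGE:$CI_PIPELINE_ID"
```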
On the left half of the slide, you can see an almost working application. The only thing missing is the callback definition, and you can see that each of the methods has a comment on top. For example, the Prometheus init gets the environment variables from the host; same with Sentry, same with the message queue. In this case, the message queue helper contains a lot of metrics that are really useful for the users: for example, how many messages it consumed, how long it took to process them, and the load of the consumer. And you don't have to do anything extra on the application side except initialize the Prometheus server. Everything else comes built in, so any change we make there, any metric we add, gets automatically applied to all our applications instantly; we only need to rebuild the images and redeploy them. In this shared library, we also have some scripts: the ones we use for linting, the ones we use for testing, and also a common entry point for all our images. So all of them define how applications run in the same way; we use the same entry point everywhere. It's in charge of starting all the applications, but also following them through their lifecycle and, if one of them dies, killing the whole container. It also handles logging and a few other things. And now, Michael.

Okay, I'll continue with the second half of the presentation. I'm going to start with the testing aspect. We said that we wanted to test in three ways. The first one is to test with tox. This is really easy to use locally, but it doesn't really correspond to your production environment: with tox, you might use different Python versions, you might use different package versions. But it allows you to easily hook in all kinds of linters, and that's what's done in the job definition on the left side. You can just pip install tox and run it; it will look up what it's supposed to do in the setup configuration file. In our case, it calls a shell script coming out of the common library, which runs all kinds of testers and linters. Whatever you have, you can just plug it in there. The important part is that you enable the GitLab setting that pipelines must succeed before merge, so that these green pipelines are actually required for merging your new code.

We also test in the production container images: not the production environment, but the production images. There, we don't care too much about the linting, but we want to make sure that running the unit tests in those images still succeeds. So we take the images that we produce in the first step, install the development dependencies on top, and then kick off the unit tests. For example, on the left, that would mean we also need to install the responses pip package on top of everything that is needed for running the deployed code.

Another aspect we want to test is whether we break any dependent projects. We have a common library; if we change one of its interfaces, it might break any consumers of those interfaces, and we want to prevent this. GitLab provides parent-child pipelines that can actually cross projects. This is seen on the left at the top: we can put in a trigger keyword where we specify which project should be triggered. In this case, we want to trigger the app project and run a pipeline there.
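A sketch of what such a trigger job might look like in the common library's pipeline; the project path and variable name are hypothetical:

```yaml
test-dependent-app:
  stage: test
  trigger:
    project: devconf-example/app1   # hypothetical downstream project
    strategy: depend                # this job fails if the downstream pipeline fails
  variables:
    # tell the downstream pipeline which revision of the library to test against
    COMMON_LIBRARY_REF: $CI_COMMIT_SHA
```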
We can also specify variables that get injected into those triggered pipelines. In this case, we specify the source code revision that should be used for this common library instead of the code in the default branch. Then, in the dependent pipeline, we run a shell script to overwrite the default branch version of those packages with our unmerged code. We do this both for the container images, so that we get custom container images that contain the new code, and also in the tox environment. We encapsulate this in a helper script because it's slightly more magic than what's on this slide. If we run those pipelines, we then see the unit test job (this was the first one, which runs in tox), the test job that runs in the container image that we're going to deploy, and this last dependent-pipeline job that triggers the downstream pipeline in the application project. And only if this downstream pipeline passes will your pipeline be green. So this also enforces the contracts that we have for these libraries: they shouldn't break any consumers.

We also want to visualize code coverage. This is important for developers, but it's even more important for reviewers, so that they can see which lines of the code are covered by the unit tests. GitLab has nice support for visualizing coverage in the diff: you see these green bars for the lines that are covered by the unit tests, and there's one line here where we are missing coverage. If you use something like coverage, the Python tool that can output XML files with the coverage information, you can hook it into GitLab with this artifacts keyword, and then GitLab will automatically show you, in the merge request, the lines that are covered by your testing. We also give it a regular expression so that it can parse the job logs and determine the percentage of lines covered. This becomes important if you want to enforce code coverage, or at least enforce that code coverage doesn't drop: basically, forcing developers to write new unit tests for the new code that they add to a project. There's something inside GitLab projects called approval rules, and we can add an approval rule for coverage. That basically means that as long as coverage increases or stays the same, nothing needs to be approved. In the case that coverage drops, a certain team member or members, for example your team lead, is required to chime in, approve your code change, and acknowledge the fact that you're submitting something that will drop code coverage.

Speaking about approvals, we are using CODEOWNERS files, which are a standardized way of specifying who the people responsible for the code in a certain project are. GitLab has this concept of protected branches, where you can basically say: if you want to merge to this branch, or push to these branches, then you need to get approval from certain people. It takes this file from the branch, and, again, if this is configured, you can see these boxes in your merge request showing that an approval is required. People will get an email, and then they can press a magic button on the merge request to approve after review, before things can get merged.

The third thing that can be hooked into approval rules is security scanning. We enable three example scanners that are available in GitLab. One is a container image scanner, which basically scans the packages in the image.
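In GitLab CI terms, enabling the three scanners discussed here is mostly a matter of including the GitLab-provided templates. A minimal sketch, using the template names as they existed around the time of the talk:

```yaml
include:
  - template: Security/Container-Scanning.gitlab-ci.yml   # scans packages in the built image
  - template: Security/Dependency-Scanning.gitlab-ci.yml  # scans e.g. Python dependencies
  - template: Security/Secret-Detection.gitlab-ci.yml     # looks for committed credentials
```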
The container scanning needs a supported base image. In our case, it's UBI, so that's supported; but for Fedora, last time I checked, it's not supported. We also want to scan our Python dependencies, for example against supply-chain attacks; then we would at least know after the fact. And we also want to check whether we accidentally leak secrets, for example because we committed a local configuration file into our merge request. Again, there's an approval rule that you can enable where somebody needs to acknowledge that your merge request introduces certain potential vulnerabilities into the code.

So that was it for continuous integration. But now that we have all those images built, we also want to deliver and deploy them. As said, we have one common GitOps infrastructure repository where we can deploy everything into production. But we also want to easily deploy individual projects from the merge requests themselves. We want to deploy into different environments, production, staging, or dynamic environments, to check out the changes and look at, for example, the web interface of an application. And if something fails, we want to get back to the last working version as fast as possible.

Setting this up is easier than it sounds. First, we deploy into a Kubernetes cluster. In the example, we just use Kubernetes YAMLs that are slightly preprocessed with envsubst from the gettext package, which basically replaces environment variable references in text files. So the first job is the lint job, which runs before anything else is done and just validates that those YAMLs are actually correct after they are processed, and that they would most likely be accepted by the server. Then, in the second step, we just deploy everything, for each pipeline on the default branch. That keeps everybody honest: nobody considers changing anything on the production environment directly to be a suitable way of persisting a change, because it will not be persisted. The next time anybody works on the infrastructure repository, everything will get redeployed. This works very nicely with Kubernetes, or with Ansible, where things will not change if they don't need to.

Now, the number of these jobs grows quite quickly; at CKI we deploy, I think, 150 jobs each time somebody changes something on the infrastructure repository. There's a feature called matrix jobs in GitLab which allows you to specify jobs that are very similar. In this case, they only differ in the project name variable. So, for our deployment jobs, we log into the cluster and then deploy the YAMLs depending on the project name variable. For example, here we deploy the application for our example, but we also deploy two different resource files that are related to the deployment itself.

Speaking about deployment: what are we actually going to use for running our workloads? We have four namespaces. One is for putting anything that's related to deployment, and then we have three namespaces corresponding to the production, staging, and testing environments. To make this happen from GitLab CI pipelines, we have a service account. This is the one at the top, which gets deployed into the deployment namespace. And then, for each of the other namespaces, for production, staging, and testing, we have a role binding of the service account to the admin role.
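In Kubernetes terms, that setup might look roughly like this; the namespace and account names are invented for illustration:

```yaml
# Service account used by the GitLab CI jobs, living in the deployment namespace
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-deployer
  namespace: deployment
---
# One of these per target namespace (production, staging, testing)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-deployer-admin
  namespace: production
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin             # built-in admin role, scoped to this namespace
subjects:
  - kind: ServiceAccount
    name: gitlab-deployer
    namespace: deployment
```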
So basically, our GitLab CI jobs in the infrastructure repository get admin access to those namespaces and can modify resources as needed during deployment. The problem now is that the cluster we use for deployment is not reachable from the public internet, so it can't be reached from the shared GitLab runners. The solution is to deploy our own GitLab runner into this Kubernetes/OpenShift cluster, which is, again, quite a bit easier than it sounds. We want GitLab Runner to spawn pods for those CI jobs in the pipeline. It needs a service account, the service account needs added rights so that it's able to create pods, and then you deploy the config and the deployment for the GitLab runner. And that's mostly all that's needed; there's not much more to it. If you want to see the details, you can take a look at the example repositories. The only thing that's not shown is the configuration file. And if you register that runner with the infrastructure repository, you get this nice little green dot next to the registered runner, saying that it's online and can actually run deployment jobs on the cluster that's also going to run those container images.

Now, we have this infrastructure repository, and we can deploy the world by triggering pipelines there. But this is not what we normally want to do while working on those applications. What we want is to have a merge request open and deploy that code into a dynamic environment; but maybe we also want to deploy it into production already, to make sure that the code that's going to hit the default branch actually works. For that, we again trigger dependent pipelines, this time in the infrastructure repository. In the source repository of our application, in this example the Flask app, we tag the container image that we want to deploy as production, and then we trigger a child pipeline with curl. In our infrastructure repository, we then select the correct jobs: we don't want to trigger all jobs every time we are working on one of our services, only, very selectively, the ones that matter.

As said, we might want to deploy into different environments, so there needs to be some way of first telling it where to deploy, and then also getting Kubernetes to actually roll out new images. For rolling out a new image, we need to convert the production tag that we put on those images to a digest, so that when we apply the manifests to the cluster, the cluster knows something changed about this image and rolls out a new version of our deployments. And the second thing is that we need something generic in place that we can tell: okay, please deploy this into production, and please deploy this into staging. The way we do this is by putting a deployment tag variable on those jobs, and some shell magic in the background takes care of picking the right image and handing it through to the infrastructure jobs, so they know where this new container image needs to be deployed. Depending on how we configure those jobs, these deployments can happen automatically. For example, for merge requests, we want to automatically deploy into a dynamic environment that also gets stopped after the merge request is closed.
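In GitLab CI, dynamic environments like that are expressed with the environment keyword. A minimal sketch, with invented job names and hypothetical deploy helper scripts:

```yaml
deploy-review:
  stage: deploy
  script:
    - ./deploy.sh "review-$CI_MERGE_REQUEST_IID"    # hypothetical helper script
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    on_stop: stop-review        # job that tears the environment down again
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

stop-review:
  stage: deploy
  script:
    - ./teardown.sh "review-$CI_MERGE_REQUEST_IID"  # hypothetical helper script
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: manual
      allow_failure: true
```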
That way, the reviewer can take a look at whether the application actually delivers on all its promises. We might also want to deploy into production right out of the merge request, manually in this case. And when you merge to the default branch, so the merge request is closed, then, depending on the policy of the project, we might, for example, automatically deploy into a staging environment and then have somebody press the button to deploy the changes into production after checking that everything still works.

GitLab provides history for those environments. As GitLab has native support for environments, it records, for each environment, all the deployment jobs that ran for it. So rolling back is actually quite easy, because we keep all those old container images around. They're all tagged by pipeline ID, all those old jobs still exist, and you can rerun those jobs. Those old jobs will tag those old images, again, for example, with production, and then just rerun the deployment job in the infrastructure repository. This takes about one or two minutes for stuff to roll back to an old container version, which is good enough. If your cluster is fast at spinning up jobs, it might take just 30 seconds.

To summarize: the stuff we built here is about 700 lines of bash and YAML, plus a bit of container file fragments. So this is actually not a lot. The most important part is that it brings the cost of bringing up a new project down to, let's say, 100 lines on average, which is far less. If we agree that this functionality is something that we would want to have in each project, it basically cuts out 600 lines that we would otherwise copy around. And, most importantly, we have a central place where we can change how stuff works and extend features, and these features become available to all the individual projects. At CKI that's, I don't know, maybe 30 or 40 microservices, so each of them inherits new features automatically. And yeah, this is actually running in production at CKI. It's a bit more complex there, because we tend to add features to things.

As a last slide, I want to mention that this is obviously not the only way to do these things, and also not the only way to do them in GitLab. If you hate bash, you might want to consider different implementations, or you might want to add features that are provided by other solutions. Running pipelines is obviously not only possible in GitLab: you can do it in GitHub, you can use Tekton or OpenShift Pipelines. We ship container images, but you might actually build your artifacts differently; you might want to ship AMIs for EC2 machines, and depending on that, you might want to build images differently. We have a shared script for linting and testing, but pre-commit is very nice for hooking all kinds of linters into your pipelines. You can spin up integration tests in GitLab CI, with Docker Compose for example, and make sure that your container images actually spin up correctly; we didn't do this in this example. If you have a lot of GitLab repositories, you might want to automate the setup and all of those configuration settings; GitLab has an API that lets you do this. We only showed very few security scanners, but there are far more: there's static and dynamic application security testing, you can do fuzzing, all of those things are available. And if you only have to do this work once, you may be more likely to actually do it, and it will apply to all the projects that you have.
For GitOps, we just used very simple templating; at CKI, we use Jinja templates, but you can also do templating with Ansible, or use Helm charts. There are different ways of templating your applications and, again, reducing duplication to a minimum. For secrets, we use GitLab CI variables, but that doesn't scale. You can use Ansible Vault, or HashiCorp Vault; especially if you think about secret rotation, other tools might help you there and be a more scalable solution to this problem of managing secrets. And then, for the deployments themselves, we did this with `oc apply`, but you can use the GitLab Kubernetes agent, Argo CD, or Flux. There are different ways of actually putting stuff into production that also take the state of your production environment into account, and that might be at least more featureful than what we've presented here.

So much for the presentation. We could do a demo, but I'm not sure whether we're going to do a demo. I think we have a few minutes. So I can try to do a demo. You know, it's live, it might fail, that's how things work. Yeah, we'll see how the cluster goes.

So, let's try this. This is the application repository. We have an app that basically shows a simple web page, and it's configured to be green. Let's use this color. So we're going to create a new change in there. Yes. Going to push it to GitLab. (You will have to explain your shortcuts in GitLab.) Yeah, let's try to do this. And that's in there.

Okay, so what did we do? We created a merge request, changing the color of the website from green, which is the default, to magenta. So this is the merge request; this is in the application repository. And we see that GitLab is trying to run a pipeline on here. That's why I wanted to do it now, because that might actually take a while: it's going to build our container image and try to run all the testing.

(Sorry for jumping in. Michael, your audio is a bit choppy, because you are using Firefox. That's not an issue; you have to speak slowly, then it kind of works. Okay.)

So GitLab spun up this pipeline, and we'll wait for it and see whether we can deploy this into our cluster. For reference, this is the production environment and what the application shows in there, which is green text. And then we have a staging environment, also green text. And then I prepared two different versions of it: one is the blue version and one is the red version. Those environments are dynamic. And now maybe we get the chance to deploy this new merge request even into production, but this depends on those testing jobs running. On the right side, we see our cluster: this is the deployment namespace, the production namespace, the staging namespace, and then those two testing versions that I prepared already, with the two different colors.

So yeah, we can do something with the prepared ones. Let's say we have the red version, which is deployed into a dynamic environment already, and because we really want to have production red, we can press this button. Try. And now we see what's going to happen, because that's kind of the thing. Normally, we would expect that the pipeline will appear. So it started to kick off a production job, and that should actually spawn a job on our cluster, so that it can deploy this version into production. I'm not counting, but maybe 30 seconds.
This is actually going to deploy. Oh, so we see that now GitLab Runner is going to run the deployment job on our cluster. And there goes our production deployment; it's rolling out a new revision of the deployment config. You can check: it was green, and now it's red. So, okay, our DevOps presentation worked. I think I will stop here, because otherwise stuff will maybe break or something. And let me stop this screen sharing.

Okay, so much for the demo. I will just stop there, because I'm a bit scared to do more. So yeah, if there are any questions, I think we still have a couple of minutes.

Yeah, I just wanted to point out that all the code that we've shown, and the demo that Michael showed, are publicly available at the links we had. I'm going to try to post them in the chat as well. You should be able to see everything in action and basically copy-paste chunks of the code, and that should just work.

Regarding the questions: Iñaki was actually quite busy answering the questions, but that would not be on the recording. So I believe it might be good if I read the questions and you reply on the record, so to say. So let's start with the first one: what kind of permissions are necessary to use include with CI files from other projects on the same GitLab instance?

I'm going to go with that one; let's take them one by one. This one is easy. As I mentioned, there are different kinds of includes. The one we use is file, the one you mentioned, which can fetch a file from a different project on the same GitLab instance. For that one, GitLab uses the permissions of the user that triggered the pipeline. There are others, like including a plain URL, that can't handle any authentication at all, so the URL has to be publicly available. But for including files between projects, the project can be private, as long as the user triggering the pipeline has enough permissions to access it.

Okay, thank you. Another one: how is the final container file built with the C preprocessor includes? Michael, do you want to go with that one?

Yeah, so buildah basically has this little-known feature of running stuff through cpp in case your file has the .in extension. It will transparently try to do this if you have cpp installed. And it's basically doing what you would expect: it looks up the referenced files in the include path. So if you have a file sitting in your include path during your buildah run, it will include it there. There are some interesting interactions if you define stuff with #define and then try to reference those things. But it works very well as a very simple way of customizing those container files. You could do something similar with Jinja, I suppose. So I think the most important part is to do this to reduce the duplication; it doesn't really matter much how you do it.

All right, thank you. Next one: do you have all variables defined in the main project and use them in dependent projects? I had a small chat with Krzysztof, I hope that's correctly pronounced. We try not to use project variables in GitLab CI, mostly because they are not really obvious to the user; it's not simple to find them. They also have different hierarchies: you can have project variables, group variables, and subgroup variables, so it's not super handy. And also, they are not version-tracked, so we would have to develop something to keep control of what's in those variables and what's not. So we try not to use them.
And when it's necessary, it's usually for some kind of token or something that is needed across more than one project. So we use runner variables for that, which we do have under version control. You can do the same: just as you would put a variable on the project, you can put it on the runner, and when the job runs there, it's already populated, so you just assume it's in there. I think that was the point of the question, if I understood correctly. Yeah, I believe Krzysztof also confirmed.

And the last question, if I'm not mistaken: do you use the dependency proxy? Will it work with Quay.io? We don't use it; we mirror container images ourselves. The main reason is that the dependency proxy doesn't work well with forks. If somebody runs pipelines from forks, for whatever reason, there are some interesting interactions that basically make the dependency proxy not available to them, but you need to specify the proxy in your CI files. So this kind of makes it really hard for other people to contribute, because those pipelines break for people that run them from forks. So yeah, we never figured it out. We had most of our problems with Docker Hub pull limits, so we just mirror those images into the GitLab container registry, and that works very well for us.

Okay, another question popped up: do you have classes of jobs where the GitLab pipelines are not in the repo to be tested, but managed centrally somewhere else? Yes. All the CI we run for the kernel testing behaves that way. Basically, the merge request directly triggers a child pipeline, or a multi-project pipeline, in a different project; that's the one that fetches the code and runs the tests. I mean, it's not what GitLab CI was created for, so it needs a lot of hacking around. But yeah, we do it somewhere.

So that's all the questions in the Q&A. Are you folks going to move to the Work Adventure, or... All right, I guess that's the wrap-up of the session. Thank you very much for it; it was very interesting. Yeah, thanks, everyone. Goodbye, enjoy the conference. Thank you. See you. Bye-bye.