So, thank you, and a warm welcome from me. I'm Nico Meissenstahl; I think we already got a nice introduction. This session is mainly about how you can build your GitLab CI/CD pipelines and streamline and enhance them with Kubernetes and open source tools. We have five topics. Each gets a small introduction, and then we head into demos, and hopefully all demos will work. The first one is how to move your pipeline workload into your Kubernetes cluster, so how you can containerize your whole pipelines and why you should do it. Then, how you can build your container images within your cluster; then something about securing your applications while they're running; then how you can enhance your application deployments onto Kubernetes. And the last topic will be GitLab Serverless, talking about ways you can focus only on your code and not think about all the deployment stuff and running it in production. So let's start with how to containerize your pipeline workload and bring it to a Kubernetes cluster. Just a show of hands: who is using Kubernetes already? Does anyone run their pipelines on a Kubernetes cluster? Oh, cool, some of you. There's something cool from GitLab called the GitLab Runner Kubernetes executor. This is basically a runner as we know it, which runs our workload, builds our stuff, and runs our pipelines, but this one itself runs in Kubernetes and triggers your jobs in Kubernetes. That basically means a pipeline job, a task, compiling something, building something, deploying something, is not running on some Linux or Windows machine; it's running inside a container, or better, inside a pod within your Kubernetes cluster. Which is pretty nice: because it's containerized, you can put in all the dependencies, it's completely isolated, and you don't have the issue that the build runs fine on runner one.
On the second runner, there are sometimes issues because of dependencies and so on. Here, it's all based on a common container image, everything is the same, and you can also scale your pipelines really well: if you scale your Kubernetes cluster, you scale all your pipelines with it. To do so, you first need to install the Kubernetes executor. One option is to just click in the GitLab UI and deploy it from there, but you can also automate it and install it with Helm. Then you have the runner inside your Kubernetes cluster and can schedule all your work there. There are some smaller things that are a bit different when you build inside containers. You need to think about how to cache things, because every time you start up a new instance that is completely fresh, so you should think about caching, and also about how to download and upload your artifacts before and after the build. And this is completely handled by the Kubernetes executor. It's not only starting the container image you'd like to use for your build; it also starts some, let's call them service containers, which prepare the environment: fetching your code, providing you with your cache, and after your build or deployment has finished, writing back the cache and uploading the artifacts to a container registry or somewhere else. This is all handled by the Kubernetes executor, so you don't need to build caching on your own, which is pretty nice. So, first demo, let's see how this works in action. First of all, just the UI, and how you can integrate your Kubernetes cluster. You can just go into a project, on the Kubernetes page. In this case it's a group, but it's nearly the same in a project. And here you can add a cluster or create a new one.
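The Helm install mentioned above can be sketched roughly like this, using the official gitlab-runner chart; the URL, token, and image here are placeholders for your own instance, not the values from the demo:

```yaml
# Hypothetical minimal values.yaml for the gitlab-runner Helm chart
gitlabUrl: https://gitlab.example.com/
runnerRegistrationToken: "<your-registration-token>"
runners:
  # jobs tagged "kubernetes" get scheduled onto this runner
  tags: kubernetes
  # default image for jobs that don't specify one (placeholder)
  image: alpine:3.18
```

Installed with something like `helm repo add gitlab https://charts.gitlab.io` followed by `helm install gitlab-runner -f values.yaml gitlab/gitlab-runner`.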
So you can create a Google Kubernetes Engine cluster or an Amazon cluster, or integrate any other cluster: on-premise, Azure, it doesn't matter. I already connected my GitLab group, and all the projects in the group, with my cluster. Down here it basically shows some Kubernetes settings, like my API endpoint, my certificates, and so on. And then I have the option to install some applications. In this case I installed Helm, to be able to deploy with Helm; the GitLab-managed Ingress, which allows me to expose my applications; cert-manager, to manage my certificates; and so on. This is the basic stuff you need to do once, before the integration is set up and you can work with all the tools I will show you now. First of all, I have a small project. It basically contains a small sample web application and a small CI/CD pipeline. Let's have a look. This stuff is really basic; it's just to give you some examples. It's basically one stage, which only does a deployment, and we have some variables which we use later. Mainly we have a Helm image: basically a reference to a Docker image in a registry, which we use to run our job in. We have the ingress host, where we would like to expose the application, and we can define a message, which I will show you later. Then one really important thing is the tag. You can define a tag, which tells GitLab where to schedule your pipeline jobs. In this case, my runner on Kubernetes has the tag "kubernetes", so I'm just providing that tag and telling GitLab: please run this on my containerized pipelines. Then, because it runs in a container, I need to provide the image, basically via the Helm image variable. In this case I'm overwriting the entrypoint of the Docker image, just because I would like to add more information. And then here I'm basically just running a helm upgrade and providing some more information.
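Pieced together, the deploy job described here looks roughly like this; the image name, ingress host, and chart path are illustrative placeholders, not the exact values from the demo:

```yaml
stages:
  - deploy

variables:
  # placeholder image containing helm and kubectl
  HELM_IMAGE: registry.gitlab.com/example/helm-kubectl:latest
  INGRESS_HOST: app.example.com
  MESSAGE: "Hello GitLab Commit"

deploy:
  stage: deploy
  tags:
    - kubernetes        # schedule this job on the Kubernetes executor
  image:
    name: $HELM_IMAGE
    entrypoint: [""]    # override the image's entrypoint so script lines run as-is
  script:
    - helm upgrade --install my-app ./chart
        --set ingress.host=$INGRESS_HOST
        --set message="$MESSAGE"
```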
So, pretty straightforward. Let's run a pipeline. We will use the message variable. As we saw, it's just a basic web application with our company logo, and in this application I can override the first part of the page with whatever I provide in the message variable. So let's do a "Hello GitLab Commit". Before we run the pipeline, let's switch over to our Kubernetes cluster. Here we see one important pod, the runner-gitlab-runner, which is basically our Kubernetes executor. I'm starting the pipeline. The Kubernetes executor talks to my GitLab instance and asks: hey, do I have a new job or a new pipeline that I need to run? The GitLab runner then triggers the pipeline run, which basically means it starts a pod, which is used to run the job I defined. Now that the pipeline is running, we see that we get a new pod. This pod basically consists of two containers, which you see here. The first one is the helper container, uploading the cache, providing my source code, and so on. The second one is my Helm image, which I use to deploy. It's already running. And here we see that 25 seconds after it started, it gets terminated again, which means the job is already done. So it took us 25 seconds to deploy the application. It's already green, so we can just check the logs; they look the same as from any other runner and give me all the information. And our whole pipeline was running inside our Kubernetes cluster. And now we're getting the new deployment, and it says "Hello GitLab Commit". So it's a simple job which runs completely in our Kubernetes cluster, inside our container image, which is pretty nice. Next step: now we know how to run our pipelines in our Kubernetes cluster.
We might also want to build container images, because now that we have everything in containers, it would be good to build our application into containers as well. And here we run into some kind of an issue, because to build containers inside containers, we basically need some tooling to make it work. I'm not sure if you know Docker-in-Docker, which is one possibility. There are different options, but it basically means running a docker build inside a Docker container, which has some disadvantages. You either need to mount the Docker socket from the host into your container, which is a security issue; or you can mount the whole Docker directory from the host into your container, which is also not a really good idea; or the last option would be to run a privileged container, which has privileged rights on your system, and really run a Docker daemon inside your container. So it somehow works, but it's not really nice, and in a bigger, managed Kubernetes cluster, you maybe are not able to run privileged containers or mount volumes from the host machine. So we need a better solution, and one is kaniko, an open source tool introduced by Google. With kaniko you have the option to build container images within containers, without any of those dependencies and privileges. And this is just perfect for us to build our containers. Here on the right side, it's pretty small, is just an example of how to use kaniko. In this case it's a pod definition: basically a container called kaniko, based on the kaniko image, in this case the latest one. Then we just provide the path to a Dockerfile, the path to our context root, and a destination. Destination means the registry we would like to upload the image to after we've built it. And of course, kaniko also allows you to cache your Docker layers to speed up your pipeline. So how does that look in your pipeline?
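A pod definition along the lines of the slide might look like this; the registry path is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - "--dockerfile=Dockerfile"        # path to the Dockerfile
        - "--context=dir:///workspace"     # build context root
        - "--destination=registry.example.com/demo/app:latest"  # push target
  restartPolicy: Never
```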
So I have another demo, which basically consists of a Dockerfile. In this case it's the Dockerfile for the image we just used: an Alpine-based image with kubectl and the Helm CLI installed. So it's exactly the Docker image from the previous demo. Once again, it's pretty basic. What we're doing now is we have a new CI/CD pipeline, which once again is triggered on our Kubernetes cluster. Once again we of course need an image name, and once again we're overwriting the entrypoint. We need to, because in this case, after building the image, I would like to upload it to the Docker registry that is part of my GitLab project. So I need to provide kaniko with authentication details, to log in to our registry and upload it. In this case it's just a simple echo, providing the CI job token, which is a one-time token only valid in this pipeline run, and saving it to the Docker config.json, so that kaniko is able to push the Docker image after it was created. And then we once again call the kaniko executor, providing the context root, the path to our Dockerfile, and the destination, which, as I already told you, is the registry of my project, plus a name and a tag. So, once again, we still have the watch running. We just run the pipeline, and we get a new pod, which is used to run our containers. They are already running, so let's switch to the output. Yeah, it's checking out the code, creating the config file, and then starting to build our image. In this case I did not enable caching, so it will take some seconds to finish. It's taking a snapshot of the file system, and now it should be ready in just a second. Job is done. Back in kubectl, the pod is terminated because our job is done, and we now should have a new image in our container registry. And there it is, 27 seconds old.
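The build job described here can be sketched like this; it follows GitLab's documented kaniko pattern, though the image tag is one assumption to call out (the debug tag ships a shell so the script section can run):

```yaml
build:
  stage: build
  tags:
    - kubernetes
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # write registry credentials so kaniko can push;
    # CI_JOB_TOKEN is only valid during this pipeline run
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"gitlab-ci-token\",\"password\":\"$CI_JOB_TOKEN\"}}}" > /kaniko/.docker/config.json
    # build from the repo's Dockerfile and push to the project registry
    - /kaniko/executor
        --context $CI_PROJECT_DIR
        --dockerfile $CI_PROJECT_DIR/Dockerfile
        --destination $CI_REGISTRY_IMAGE:latest
```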
This is how you can build a container image inside the Kubernetes cluster with kaniko. The next one is a nice solution for securing your applications running in Kubernetes: the GitLab Web Application Firewall. The GitLab Web Application Firewall is integrated into the Kubernetes NGINX Ingress, which is deployed when you install the Ingress from the GitLab UI I showed you in the first demo. You install Helm, you click that you would like to have the Ingress, and you get an Ingress which also includes the Web Application Firewall. So if you already did that, you may have already been using the feature without knowing it. What can you do with the GitLab Web Application Firewall? Basically two things: it can find and track SQL injections, as well as cross-site scripting, which is pretty nice. So you can secure your application and get insights into whether you had cross-site scripting attacks or SQL injections. The whole thing is based on the Kubernetes NGINX Ingress, as I already mentioned, but with the ModSecurity module enabled. The ModSecurity module has some rulesets enabled which allow it to find those injections, in this case based on the default rules of the Open Web Application Security Project, OWASP, which is an open source project. They're slightly customized by GitLab, but also fully managed, so basically you don't need to care about them if you stick to the defaults. Default means the GitLab Web Application Firewall only detects SQL injections or cross-site scripting attacks; you get some output in a log file and can then act on it. But of course you can also customize those rules and define a blocking mode, which basically means the request gets blocked and you get a 403 or something. So how does this work? I prepared a small web application with a small input field, and if I type in my name correctly, I get a greeting: welcome, Nico.
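A side note on the blocking mode just mentioned: with the ingress-nginx controller, ModSecurity is typically controlled per Ingress via annotations roughly like the following; treat this as a sketch rather than the exact GitLab-managed defaults, and the host and service names are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app
  annotations:
    nginx.ingress.kubernetes.io/enable-modsecurity: "true"
    nginx.ingress.kubernetes.io/enable-owasp-core-rules: "true"
    # switch from detection-only to blocking mode
    nginx.ingress.kubernetes.io/modsecurity-snippet: |
      SecRuleEngine On
spec:
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app
                port:
                  number: 80
```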
And if we now go here and put in some JavaScript, basically a small alert containing "this is a cross-site scripting attack", and put it into the greeting, we are able to inject some cross-site scripting. Just to show you, we can open up the log. In this case it's once again only the default mode, which is detection only, so when I push the button, we get a log entry and can act on it. It's a pretty long command, but basically we exec into a container and open /var/log/modsec/audit.log, which we can use to get the insights. That took some seconds. So if I now inject the cross-site scripting attack, I first of all get my alert box in my browser: "this is a cross-site scripting attack". And on the other hand, I'm getting a log entry which tells me that we have a cross-site scripting injection, and I can use this message to act on it and get insights. As I said, this is the default mode. I could also enable blocking mode, which would mean I wouldn't have gotten the pop-up; I would just have gotten an unauthorized response, and the request would have been blocked. So, a pretty nice feature, and completely integrated: if you use the Ingress from GitLab, you already have this feature and just need to check and monitor your logs. Good. So, two more things. One is a tool called Kustomize. Does anyone of you know Kustomize? OK, some of you. It's basically a tool which helps you deploy your applications. Now you might think: hey, why do I need Kustomize? I can use Helm, which is pretty common. But for me, and it's only my opinion, I use Helm if I would like to package a bigger, complex application and share it with somebody, or deploy it on 20 different environments. There, Helm is pretty good: I have templating and so on, it's pretty nice.
But if I just would like to deploy my one or two microservices, completely integrated into my CI/CD pipeline and my environment, I don't need templating. I don't need rollbacks, at least not provided by Helm; if I want to do a rollback, I do it with my CI/CD pipeline, not with Helm. So Kustomize is a pretty nice, small tool which can help with that. Why? It has no templating overhead. That basically means I have my deployment and service definition files, my plain manifest YAML files, and I just add one more file where I define my customizations. So it's less complex. I don't need any extra CLI; it's completely integrated with kubectl, which reduces my complexity, and for me it's much easier. You can use it, as I said, with kubectl apply with the -k option. Alternatively, you can also install the Kustomize CLI, but feature-wise they're not really different, so if you have kubectl installed, you can use it and everything is fine. So what can you do with Kustomize? This is just a small screenshot from the documentation. You can do things like: I would like to add annotations to all of my deployments, or add common labels. Think about deploying your first stage to development: you can add the label "development" to all of your manifest resources, and then on the second stage, deploying to production, you can add the label "production", without templating or otherwise modifying your YAML files. You can override images and namespaces, you can add prefixes and suffixes to resource names, and there are options to generate config maps, generate secrets, and so on. Pretty nice stuff. So how does it work? Let me go to my next project. Here I have a small deployment folder consisting of two folders, a base folder and an overlays folder. In the base folder I have my normal files: a deployment.yaml, which is basically a Deployment definition.
Then I have an ingress.yaml, which is basically an Ingress definition, and I have a service definition. Just basic files, nothing special. But I also have a kustomization.yaml, and in this one I can tell Kustomize what it should do. In this case it just says: hey, I have these three resources and nothing else. So no customization, just the information: please use the three files in my folder. But I also have the second folder, the overlays folder, with two subfolders, dev and production. If I go into the dev one, I have another kustomization.yaml, and this one says: hey, please use my base, which is basically my deployment, ingress, and service; add a common label environment: development to all resources; add a prefix dev- to all of my resources; and patch my resources with these two YAML files. If you check those YAML files, one is basically an environment variable, once again the title of my web app, which I would like to change, and the other one is just my replica count. For dev I only want to run one replica; for production I would like to have three replicas, three pods running. This is how I can customize my deployment files: my resources stay as they are, and I just add a kustomization.yaml with the customizations I would like to apply based on my stage. And if you have a look at the CI/CD pipeline, it's pretty straightforward. It's a one-liner, just kubectl apply -k, which basically means: please use Kustomize, with the folder I would like to use. In this case I defined the folder with an environment variable, which is basically development, and I can override it in a second stage and put in production to deploy my production environment. Pretty straightforward. So, last but not least, and I need to be a little bit faster: GitLab Serverless.
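The layout described here can be sketched as follows; file, label, and prefix names are illustrative, not the exact ones from the demo:

```yaml
# base/kustomization.yaml — no customization, just the resource list
resources:
  - deployment.yaml
  - ingress.yaml
  - service.yaml
---
# overlays/dev/kustomization.yaml — reuse the base and customize it for dev
resources:
  - ../../base
commonLabels:
  environment: development
namePrefix: dev-
patchesStrategicMerge:
  - title-env.yaml   # overrides the web app title via an env var
  - replicas.yaml    # sets replicas: 1 for dev
```

Applied with `kubectl apply -k overlays/dev`, or `overlays/production` for the production stage.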
GitLab Serverless is what's also called function as a service, which basically means you only need to care about your code. You write your code, your business logic, and don't care about how to containerize it, how to build it, and how to deploy it; that's completely done in the backend. As I mentioned, it's function as a service, and it's based on some open source tools: Knative, which is a serverless stack on top of Kubernetes; kaniko, which we already learned about; and Istio, a service mesh used for routing your requests to the different versions and so on. GitLab Serverless supports Go, Node.js, and Ruby, and with the OpenFaaS integration you can also use C#, Python, and PHP. But besides that, you can basically use any language: you just need to provide a Dockerfile, basically the information for how to build the application from your code, and then you can use any other language you like. Because it's completely open source and running on Kubernetes, you get multi-cloud support, you can deploy it to any kind of cloud, and you have auto-scaling, including scaling to zero. That basically means if you have a function, say rendering some pictures, the whole application scales down to zero if it's not used, and if it gets a request, it scales up as much as it needs to. So let me show you this. In this case it's a Node.js app that basically does nothing except give me a "hello from GitLab Serverless". And we once again have a pipeline, which uses a template provided by GitLab. It first builds the function, which is basically a Docker build based on kaniko, and then pushes the function image into the registry of our project. Then I need a second definition file, a serverless.yml, where I write that I would like to have a function. I need to define a provider, which is TriggerMesh in the backend.
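From this description, the serverless.yml looks roughly like the sketch below; the service, function, and runtime names are illustrative guesses rather than the exact demo values:

```yaml
service: demo-functions
description: "GitLab Serverless demo"
provider:
  name: triggermesh   # backend provider; with the OpenFaaS integration it would be openfaas
functions:
  echo:
    handler: echo.js  # the function's source file
    source: ./echo    # directory containing the function code
    runtime: nodejs   # runtime the function is built with
```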
If you use the OpenFaaS integration, the provider will be OpenFaaS instead. And then you define your function: in this case it's a JS file, you point to your source and the runtime you would like to use, in this case Node.js. And that's it. With that, you get your function in the Operations tab, under Serverless. As you see, the function is deployed, but we do not have any pod running, because it's scaled down to zero. So let's do a quick curl with a POST against our function, in this case 100 times. The first one will take a bit longer, because no pod is started yet. So it's now starting the pod, and now the requests are running. If we go back to the UI, it might take some seconds until we see that the pod is up and running. Let's check it on the console, in the serverless namespace. Here we go, here's our pod running. So we have one instance at the moment. And now we can run a tool which generates a bit of traffic; let me split the terminal and copy it over. We should now see that it's scaling up. It should scale up. It's not scaling up. Perfect. Okay, so now we see one pod running; normally it should scale up within some seconds. But yeah, normally it should scale up. Okay, so we're at the end of our time already. Just one final slide: my slides are already online, and the demos are all in my GitHub repo, which is open, along with some related blog posts about this stuff. We have no time for questions, but just see me outside if you have any. Thanks.