Yeah, my name is Roger Meyer. I'm working at Siemens, living in Switzerland, and my primary role right now is managing code.siemens.com, our internal coding platform. In the past I did several other things, like computer systems engineering. Yeah, I'm Max, I'm also part of code.siemens.com, I'm a DevOps engineer, and we work together to make this a great platform. Definitely.

So what we're going to show today: we have a very simple Angular app, as you can see on the left-hand side, just a simple todo list, with a Node.js backend, and we will go through the different steps like testing, building, container scanning, and deployment. It's available on GitLab, so if you'd like to clone it: gitlab.com/siemens/todo. So let's switch to GitLab.

We're going to start with the first stage, the linting stage, which should be the first stage for any project because it ensures quality even before the commits are on master. We check things like commit messages, markdown, and license changes to reduce manual work, so that maintainers don't have to check this manually; the CI pipeline can just check these automated things. We can detect style issues this way, and we simplify the maintainers' life while still improving quality at the same time. So it's really the best stage you can have in a project; you can really ensure the quality. The bots or CI should complain. Yeah, that's true. And the maintainers should only care about the content, not about styling or indentation or things like this. There should just be one standard for the project, and it should be checked. If it fails, the pipeline will be red, and the maintainers don't even have to look at the merge request.

So maybe we can show that quickly. Oh yeah, sure. Can you see my screen? It's not directly mirrored. What's the issue? The PowerPoint was still in presentation mode. Okay. The text is kind of small now, right? Didn't I change the text size before? I'm sorry.
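A minimal sketch of such a lint stage in `.gitlab-ci.yml`; the job names, image tags, and tool invocations here are illustrative, not necessarily the exact ones from the project:

```yaml
stages:
  - lint

markdown-lint:
  stage: lint
  image: node:10-alpine            # any Node image works
  script:
    # check every markdown file against the project's style rules
    - npx markdownlint-cli '**/*.md'

commit-lint:
  stage: lint
  image: node:10-alpine
  script:
    # verify the commit messages on this branch follow the conventional format
    - npx commitlint --from "$CI_MERGE_REQUEST_DIFF_BASE_SHA" --to "$CI_COMMIT_SHA"
```

If either job fails, the merge request pipeline is red and the maintainer doesn't have to review the change yet.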
I did change the font size, right? So, what we've added, for example: we're using Husky. That's a nice local pre-commit hook tool; you can register several commands with it. In this case we're hooking in conventional-changelog, because we really like to have conventions on the commit messages, so that later on we can generate our changelogs automatically in a simple way. As you can see here, it's just a little commit hook, so already on the developer's desk we see those issues, and you don't have to argue about commit message format and stuff like this. And as you can see, it doesn't pass. What I wanted to do originally, which I didn't do, was add a message here like "gitlab commit". And now, if I formulate a correct commit message...

Sorry, the question was about Husky. Husky is an open source project where you can define certain rules. You don't need to write git hooks yourself; you can just use Husky and put certain rules in your package.json file, and then it's pretty easy to set up.

I'm just going to do a valid commit. As you can see, I can commit, and then I can push to some branch. Let's switch to some branch before I push. Of course, you could also run the whole test suite before committing with the commit hook locally, but we usually only do the commit message linting in that stage and then let GitLab CI do the rest later on.

Oh, the demo effect. It's running. You will see that the pipeline fails now, because the markdown isn't correct. That's just the point, I guess. Even basic things like markdown linting are quite important, especially if you're then using a page generator like MkDocs or Hugo, so you get proper rendering later on if you have proper markdown from the beginning. So we're taking care of all those things and enforce them at the early stages. It's still fetching packages, so let's just continue; it will fail. Actually, I can show the result from before.
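A sketch of what that Husky setup can look like in package.json, using the hook syntax from Husky v3 and commitlint as the checker (the exact versions and config are assumptions, not taken from the project):

```json
{
  "husky": {
    "hooks": {
      "commit-msg": "commitlint -E HUSKY_GIT_PARAMS"
    }
  },
  "devDependencies": {
    "husky": "^3.0.0",
    "@commitlint/cli": "^8.0.0",
    "@commitlint/config-conventional": "^8.0.0"
  }
}
```

With this in place, an invalid message like "gitlab commit" is rejected on the developer's machine before it ever reaches CI, while a conventional message like "feat: add task filter" passes.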
So it failed because the markdown is not correct. That's the point. People can look at the CI log, fix it, amend the commit, and push again.

Next step: build and test the application. Don't rely on people having tested all the different configurations; build every configuration, and, what we also do, run the unit and end-to-end tests within CI. I don't know if you're aware of the services feature; that's quite nice. You can launch a Chrome browser here running Selenium, and then we have a backend service, MongoDB in this case. We build all the different configurations and run the end-to-end test suite within GitLab CI as well. Of course, besides being able to run everything locally, you should be able to do the same within GitLab CI. Right. So in GitLab CI we launch a headless Chromium browser and basically do the same as we do here: add tasks and check that the tasks are created successfully. As you can see, it passes.

One topic to mention here: test coverage. Within the settings you can configure a regular expression to extract the coverage, and this can be done within GitLab CI as well. Pretty nice. Then you see it in the UI.

Next up, we have container builds. Kubernetes is getting more and more important these days, and that's why we also want to build containers in our CI jobs. But your runner might be on Kubernetes itself, so how are you going to build containers in containers without giving the users privileges, which we don't want? You can use Kaniko, for example, which is an open source project. You just define your Dockerfile, and it builds and pushes directly to the GitLab container registry. That way the builds are done in user space, and you don't need to give 30,000 people, or however many people you have on your instance, privileged access to your runners. That way we can enable container builds for everybody on the platform. Next step: container security.
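The services feature and the Kaniko build can be sketched like this in `.gitlab-ci.yml`; the service images, the e2e npm script, and the coverage regex are assumptions for illustration:

```yaml
test-e2e:
  stage: test
  image: node:10-alpine
  services:
    - name: selenium/standalone-chrome   # Selenium-controlled Chrome, reachable as "chrome"
      alias: chrome
    - name: mongo:4                      # backend database for the app, reachable as "mongo"
      alias: mongo
  script:
    - npm ci
    # the e2e config is assumed to point at the Selenium hub: http://chrome:4444/wd/hub
    - npm run e2e
  # extract the coverage number from the test output so GitLab shows it in the UI
  coverage: '/Statements\s*:\s*(\d+\.\d+)%/'

build-image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # registry credentials for Kaniko, using GitLab's built-in CI variables
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    # Kaniko builds the Dockerfile entirely in user space – no privileged runner needed
    - /kaniko/executor --context "$CI_PROJECT_DIR" --dockerfile "$CI_PROJECT_DIR/Dockerfile" --destination "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```

Tagging the image with the commit SHA is what later lets the pipeline promote exactly the image it tested.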
For that, we're using Trivy, another open source tool. There's a nice option here: you can override the entrypoint of the Docker image within GitLab CI, so you can execute the Trivy scan yourself. In this case we make it return exit code 1 on findings, so that the pipeline fails. Maybe let's show that quickly. I'm just going to show the result from the last pipeline. As you can see, there are warnings, because we are using Node 10 and its image has an older Alpine underneath. You always have to assess whether the risk is tolerable for your project. You can look at the Trivy result and decide whether it's okay to ship, for example if you're not using curl or something like that. But you have a basis to evaluate the Docker image you're using. That's a typical case, where you have an upstream image, like the Node upstream image, and they have not upgraded the Alpine underneath to the latest version.

Once we've built the Docker image and checked that its security is okay, we want to deploy the very same image to production. But we still don't want to give our users privileges on our runners. So what do we do? We use crane, which is another open source project, to copy the image we built before, push it to the GitLab registry, and promote it to the latest tag, so we can deploy it with Helm, for example. This avoids rebuilds, and it also ensures quality, because we tested the image before and then deploy exactly the image we tested.

Another nice feature you might have recognized is error tracking, integrated within GitLab as well. We're using that heavily; we also made a lot of contributions in that area to GitLab itself. It's absolutely worth having your code instrumented with error tracking software like Sentry, so you recognize issues before your users notice them. That's quite a nice thing. So let's give it a try here.
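The scan-then-promote flow can be sketched as two jobs; the stage names, severity filter, and image references are illustrative, and the Trivy flags may need adjusting to the version you run:

```yaml
container-scan:
  stage: scan
  image:
    name: aquasec/trivy
    entrypoint: [""]                 # override the entrypoint so we can run the scan ourselves
  script:
    # --exit-code 1 makes the job (and so the pipeline) fail on findings
    - trivy image --exit-code 1 --severity HIGH,CRITICAL "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

promote-image:
  stage: deploy
  image:
    name: gcr.io/go-containerregistry/crane:debug
    entrypoint: [""]
  script:
    - crane auth login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # copy the already-scanned image instead of rebuilding it – no privileges needed
    - crane copy "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" "$CI_REGISTRY_IMAGE:latest"
```

Because crane copies the image by digest from registry to registry, the bytes that were scanned are exactly the bytes that get the `latest` tag.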
As you can see, that's the todo application from the beginning slide. I can add tasks here, as long as the server is running. But now we have a patch which throws an error. We're going to run this; it takes a while.

A question on Docker-in-Docker: with dind you need privileged containers, or you have to expose the Docker socket to the executor, which means having complete access to your Docker daemon. That's a security risk, of course. Yes, and if you have thousands of people on the same build infrastructure, you want full isolation between all of them, and that's why we like to avoid dind. We had a lot of requests on this as well, but now with Kaniko we are quite happy, and the experience for the users is the same.

Back to the error tracking. That's how the application should work; as you can see, tasks are added. But if I apply this broken patch that I prepared before, which just introduces an error in the app itself, and I try to add something now, it doesn't work. If we look at Sentry, we can see that there is now a crash. So we noticed that the application crashed, we can look at the details, and we can also visualize this in GitLab itself; it's neatly integrated. It uses the Sentry API to capture the error, and now you can see it here. So we can see the errors before the users do and fix them.

Yeah, one thing is doing the security scans for each merge request and each build, but usually you also have an application deployed and running. That's why we run scheduled pipelines to scan for vulnerabilities on a daily basis and get notified by email if new vulnerabilities have been found. That's an essential piece: keeping an eye on security not only while building the software, but over the whole life cycle, also in the later stages. Thank you. That's it.
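A minimal sketch of such a daily scan job, assuming a pipeline schedule is configured in the project settings; the job restricts itself to scheduled runs with `only: schedules`, and the failure email comes from GitLab's normal failed-pipeline notifications:

```yaml
scheduled-scan:
  stage: scan
  image:
    name: aquasec/trivy
    entrypoint: [""]
  only:
    - schedules                      # run only from the daily pipeline schedule
  script:
    # re-scan the image that is currently deployed; a finding fails the pipeline
    - trivy image --exit-code 1 "$CI_REGISTRY_IMAGE:latest"
```

This way an image that was clean at build time still gets flagged when a new CVE is published against one of its packages later.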