So the next talk is by Iñaki and Agustín on Salsa CI. Thank you. OK. Thank you. Hello, everyone. Hi. Well, straight to the question: how long does it take you to realize that you have uploaded a broken package to the archive? Too long? Maybe minutes, if you get rejected by the ftpmaster checks, say because you uploaded with the wrong distribution; getting rejected for that is probably not a good idea, but at least the feedback is fast. Maybe hours; it depends on whether you uploaded the package right before the scripts run, install your package into the archive, and all the magic behind the Debian walls happens. Usually it takes about one day, more or less; it depends on your package, of course.

We think the Debian infrastructure is really great, really awesome. But first you need to upload your package and tag your changes, and only then do you get feedback on the changes you made. The problem is that you, as a Debian developer, have to switch context to another package, or maybe back to your real life, and switching context is expensive. You also need your own setup on your machine, or your own infrastructure, to build and test your package. And if you get a contribution, you have to take the patch, apply it on your machine, test it, and then do the upload to get feedback from the Debian infrastructure.

In my case, when I started in Debian, for my first packages I ran a ton of tests before uploading. My first package was a flight simulator; don't ask why. For my first releases I built the package, installed it on my machine, and played with the flight simulator a little bit. Yeah, it works; then I uploaded. But after three or four releases I stopped doing that, because it is a repetitive task, so you don't do it for every package you build. And over time I started maintaining more than 20 packages, so you cannot do that with all your packages all the time.

So, let me check my notes. The idea is that every time I spend time on Debian, I want to make it worth it; I want to deliver some value to Debian. I don't want to wait until the next day for feedback. I want to feel like, OK, I did something for Debian, and I can go do something else, and when I stop working on Debian I know my contribution added value. We took the idea of continuous integration from the internet; actually, the GitLab page has a really good definition: continuous integration is the practice of building and testing each change automatically, as early as possible. That is the definition we put on the slide.

In the beginning, we started at our former company. It was really small; we were about 20 people, and I was in charge of building the operating system. They build satellites, so I was building the operating system for the satellites, and every time someone needed to do a release, I was building the root filesystem on my machine. It was totally chaotic. The company started growing and growing: in the beginning we were 20 people, and one year later we were about 100.
I did all the tests before the satellite launch, and it was awesome for the first satellite. But then we started sending more satellites, and it wasn't fun anymore. The responsibility was really too much for me, and the workflow between developers was really hard; I was in the middle of the operating system and the releases. We didn't know what continuous integration meant at that time; that is why we put this slide here. When we changed the model of the satellite, we added many boards to it. In the beginning there was only one board, but then the satellite became more complex; right now it has about seven boards. So we needed integration tests: if I change a software component in the satellite, what happens with the rest? We built an infrastructure with JTAG for the embedded nodes, and every time you pushed a change, all the frameworks were built and uploaded to the embedded nodes. The operating system was Debian, of course, so the Debian packages were installed into the root filesystem, and then we ran integration tests. In the beginning it was just to see if everything was working, but in the end it was something more thorough, smarter, I guess. Around that time we started using GitLab, so when Salsa was deployed in Debian, we thought: OK, we have a lot of experience doing this. We walked that path starting from knowing nothing, and we ended up with a really great CI, one that everyone in the company trusts. Every time someone pushes a commit, they can see the feedback and how the infrastructure is set up. There is nothing hidden between your commit and your test: you can see the logs, you can see the commands executed. So you can fix the test if the test is broken, and you can fix your commit as well, for sure.

We think continuous integration adds a lot of value to Debian; well, we are trying to get there. Why? Because trusting your CI pipeline makes it much easier to get contributions from outside, not only from your core developers. Anyone can fork your project and push a commit, and they can see the results, including whether the commit breaks the package; that depends, of course, on the tests you have in your package. If you make your pipeline solid, something you can trust, then you can accept contributions really easily, just by looking at the changes and accepting the merge, because you trust your tests. Like I said before, with the first patch you get, you take the patch and think: oh, awesome, thank you, I am going to test it. And you run a lot of tests on your machine and do whatever you need to make sure your package is not broken. But after a few years you don't do that anymore; in my case, at least. I am talking about myself. And with CI, the contributors get feedback on their patch too. So, yeah, I think I already said that.

Hi, I am Iñaki. So what is the Salsa CI team? What did we do? We develop and maintain a recipe for building and testing Debian packages. What we made is a recipe, just a file, which your project imports and which allows it to build and test the package in a generic way.
We try to make it compatible with most Debian packages, always keeping it DRY and KISS. There is no hiding the magic: everything is explicit, you can open the definition and read what it does. And if you have a special case, for example your package has a different way of testing, it is really easy to modify, because the definition is really clean; we tried to keep it as clean as we could.

What are our goals? Well, as Tin said, the first and main goal is to detect problems before the package gets to the archive. Our main goal is to have the same services the Debian infrastructure provides us, but inside GitLab CI. Every time you push a change, we want all the usual Debian tests, the ones that normally run asynchronously on Debian infrastructure, to run on GitLab CI, with the results and the logs accessible in the shortest time possible. Another goal is a reproducible environment to build and test your package: the environment where the tests and the builds run is defined in the recipe, so anyone involved in the development of the project, meaning you, your contributors, the forks, works in the same environment you do, with the same dependencies and the same versions of everything. Also, everyone can see the recipe, as I said, and the logs too. We think that is really important for flattening the learning curve for newcomers. I started a really short time ago, I don't have more than a year here, and at first it is a lot of black magic: you push and things happen in the background. With this, we want an explicit recipe for how the package is built and how it is tested, and people know the maintainer uses the same recipe they are using. That generates some confidence in the contributors.

But what does the pipeline look like? This is what the pipeline looks like on a Debian package; it is quite wide here. There are two stages. One stage is the build; right now we build with git-buildpackage, the only build method we have, but we would like to have more, so contributions are welcome. Once the package is built in the first stage, all the files resulting from that build are passed to the next stage, where five tests run in parallel. We run autopkgtest, the same framework that ci.debian.net uses to run the package's tests. There is blhc, the build log hardening check, which checks for hardening flags and is also run by Debian. There are the Lintian checks, and piuparts, which tests package installation and removal. And there is reproducibility, for which we currently use reprotest; that is the only part that is not the same as what the Debian infrastructure does, because there reproducibility is tested with different hardware and so on, but we are working on that and would like to have exactly what Debian has. All of this takes no more than eight minutes for a regular Python package. That means that when you push, everything runs, and in less than ten minutes you have the feedback you need to know whether the package works or not; even less if it fails during the build or earlier. We think it is really, really fast. It is important to note that every job runs in a clean, reproducible environment, as I said previously, with the minimum required dependencies. When you build the package, the build itself happens in an environment that has only the build dependencies of your package and nothing else, so if you forgot to declare a build dependency that you happen to have installed locally, pulled in by something else, you will notice that kind of problem there.
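As an illustration of that two-stage shape, here is a rough GitLab CI sketch in which one build job hands its artifacts to a parallel test stage. This is not the real Salsa CI definition, which lives in the pipeline project and is more involved; the debian:unstable image, the output/ path, and the plain gbp and autopkgtest commands are simplified placeholder assumptions.

    # Illustrative sketch only; the real jobs come from the Salsa CI includes.
    stages:
      - build
      - test

    build:
      stage: build
      image: debian:unstable                 # placeholder image
      script:
        - apt-get update
        - apt-get install -y git-buildpackage
        - apt-get -y build-dep ./            # only the declared build deps
        - gbp buildpackage --git-ignore-branch -us -uc
        # dpkg-buildpackage writes to the parent directory; collect the output
        - mkdir -p output && mv ../*.deb ../*.changes ../*.buildinfo output/
      artifacts:
        paths:
          - output/                          # handed to every test-stage job

    autopkgtest:
      stage: test
      image: debian:unstable
      script:
        - apt-get update && apt-get install -y autopkgtest
        # run the package's own test suite against the freshly built packages
        - autopkgtest output/*.changes -- null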
We would like to thank the Salsa admins, because thanks to them we are able to run all of this in parallel; the infrastructure allows us to have a lot of runners at the same time.

But how do I add CI to my package? It is that simple: you only need to create a file under the debian/ folder. We use the name salsa-ci.yml, but that is just a convention, not mandatory. The simplest definition just includes two files, which carry the definition and the implementation of all the jobs I showed before (there is a minimal example below). This can of course be modified; take a look at our README. You can override each job, or use only some of the jobs: if you don't want everything to run, you can select just a few. That's all. Well, you also need to change the configuration a little, because GitLab looks for this file at .gitlab-ci.yml in the root of the project, and in Debian we want to put it under debian/; we are trying to get that made a default configuration somehow upstream.

Our future plans: we would like to build and test on multiple architectures. Right now we only do amd64, because that is what the runners are, but we would like runners on other architectures; GitLab lets you install a runner natively on an ARM board or something like that, so it would be possible. We would also like to test upstream changes against Debian, meaning give upstream feedback about the package: if they make a change upstream, on GitHub, there would be a way to trigger our pipeline, our build, and let them know whether they broke the package. Either everything is OK, or no, you broke the Debian packaging, and someone receives a mail; that is feedback we think upstream might like to have about the Debian packaging itself. We are also working on an idea to propose new releases automatically: when upstream makes a new release, when a new version shows up via the watch file of the repository you are watching, a merge request is opened on your project and the pipeline runs automatically, so you know whether importing the new upstream release breaks the tests or breaks something for Debian. Since the pipeline runs for every commit, bringing in upstream changes runs them against all your tests, as you usually would by hand, and at least you know whether anything breaks. And we would like to increase coverage: right now we have these five tests, but as Tin said, we don't have the ftpmaster checks, for example, so we would like more tests, to increase what we can check on each commit.

If you want to help: we need more use cases. Right now we have some packages running the pipeline, but we know most packages differ from one another, so we would like everyone to let their package join the dark side. Right now we have more than 330 packages, which is significant, but we would like everyone to use the pipeline. We also need to improve the documentation; if you take a look at our README, it is not the best. It is quite hard for someone who does not know GitLab CI to understand what to do. We are trying to work on that, because it really is easy, but we are not good writers, so if anyone would like to discover what Salsa CI is and document what they did to make it work, that would help a lot.
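Going back to how you enable the pipeline: that minimal debian/salsa-ci.yml looks like the sketch below. The two include URLs are the ones the Salsa CI team's README pointed to around the time of this talk, so check the current README for up-to-date paths before copying:

    # debian/salsa-ci.yml: import the Salsa CI job definitions and the jobs
    # themselves (URLs as of this talk; see the README for current ones)
    include:
      - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml
      - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml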
We would like everyone to get involved in this project. The idea is to help everyone, so it is not just ours; we would like you to bring your projects, and if something doesn't work for you, to propose a change or report an issue. "Well, I don't use gbp, I use another build method": that is the feedback we want. These are our contributors so far. In six months many people got involved, sending patches or filing issues; these are the people who have committed already, and we are really thankful to them.

Now I would like to do a small demo. Oh, yes, the computer. I picked a package at random; I tried it yesterday. The idea is to show how easy it is to add the pipeline to your project. For example, I take this project and fork it; this is not my account, so thank you, Tin. The first thing you need to do, as I said before, is change the CI config path. This particular project does not have CI enabled, so first you have to enable pipelines here; but anyway, this comes on by default, so you probably won't have to. Then, under the CI/CD settings, you have this custom CI config path; don't worry, this is all documented in the README. The idea is to put the file path you want; we use debian/salsa-ci.yml, but anything works. Confirm, save the changes, and then you can add the pipeline. This button says "Set up CI/CD", but that is just GitLab offering you a template; it simply creates a file at the path we just configured. Then you paste in this snippet, which is in the README; that is the text I have in my notes, and that is all. Commit; not to master, of course, this is just a test. You can commit to any branch: one nice feature is that the pipeline is defined per branch, not per project, so different branches can have different pipelines, and everything is tracked in GitLab. OK, I added this, and now you can go here, to CI/CD, Pipelines, and it is running. That's all. Every time you push a commit to this branch it will run: it is running the build now, and when it finishes building it will start testing the package it just built. And this is what it looks like when it builds. OK, it is spawning the runner now, but then it starts to build. Where is the...? Control-F2? F2, F3, sorry. No, thank you.

Questions? We would love to have a lot of questions. Was it that clear? I hope not.

Have you thought about scheduling the pipelines regularly over time? I was looking into it, and I think the only thing GitLab allows you to do is say "I want to run this once a month", and then it runs all the jobs on the first of the month. Is there a way to spread it out somehow? You can configure scheduled pipelines on your project, and you can run them whenever you want; you said once a month, but you can run it every hour or every day. It takes cron syntax, so you can run it as you like. But then it runs under my account, so I get the notifications; is there a way to send them to the team? No, I don't think so, because of how GitLab handles it: you are responsible for the schedule, and it uses your account to trigger it, so the pipeline runs with your permissions. As a workaround you could create a team email with a user that has the right access; it is not the best, but triggering the pipeline is just a curl call with the proper token.
So you can do that. Another question? So, I have a question: if I want to change something in this thing, do I have to copy and paste the whole YAML from you and then change it, or is there a way to override something? Maybe you can show how that works: if, for example, I want to remove one of the tests in my project, how would that work? Oh, there it is. Everything is here, but, for example, here we have a different case: we run reprotest without diffoscope, because it needs a lot of RAM, and some big projects would not build because the runners have only one gigabyte of RAM. That is how you change a test: you define a new job with the same name as the one we defined, and you make it do anything you want; there is a sketch of this after this exchange. Perfect, thanks.

So, I'm old-fashioned, and I just upload tarballs with quilt patches to the Debian infrastructure. I don't really use Git if I can help it, and I don't have all my stuff on Salsa, but this sounds quite nice. What is the easiest way to use this without having to use Git for everything, or is that not really possible? Do I have to get into this whole Git world and start building things with Git? I don't really like it; I mean, I could, I've got a system and it works, but, you know. Can dgit do it for me, since it sort of bridges the two worlds without me having to learn all of that? Can you connect this to dgit's repositories as well as Salsa's? Will that be a thing? Honestly, I don't know how dgit works, but probably. I think the point is that if you upload things with dgit, it puts them in a Git repo somewhere as well. Yeah, exactly, and then maybe the infrastructure could run on that repository too. Yeah, it is possible. Anyway, that is the view of the world from old-fashioned people who haven't really done this yet; there are probably still quite a lot of us.

There is a comment from Andrew: I just wanted to comment on the suggestion to hook it onto dgit. That sort of goes against the point, because the point of CI, at least as I see it, is to test your changes before they go to the archive, and with dgit things get pushed to dgit as they are uploaded, so it defeats the point a bit. It still provides some value, but much less than when you actually use Salsa. Thank you.

So if one of my tests breaks in that runner, how do I find out what exactly is broken? You can see the logs. It depends on what broke; if the test broke, if anything in the pipeline broke, you get a mail with the last piece of the log, and you can always open the pipeline and read the whole log. But there is no state of the runner or something that I can debug? No. There is a feature for that, but it is not enabled, because the Salsa admins are waiting for gitlab.com to enable it first, since it has some problems. In the future you should be able to debug inside the runner: GitLab made a way for you to attach to the same environment the job runs in, so you can see what happened. But they have not even enabled it on gitlab.com, so the Salsa admins are waiting; presumably there is a reason it is not enabled yet. OK, thanks. But we will probably have it eventually, and it will be really helpful. Thank you. Any more questions? We still have time. Yeah, please.
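Here is the sketch of that override mechanism mentioned a moment ago. In GitLab CI, a job defined in your own file with the same name as an included job is merged with it, and your local keys take precedence, so redefining just the script replaces what the included job runs. The job name lintian below matches one of the test jobs shown in the talk, and the replacement script is purely illustrative:

    # debian/salsa-ci.yml with one job overridden (include URLs as above)
    include:
      - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml
      - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml

    # Local keys win over the included definition, so this script replaces
    # whatever the included lintian job would otherwise run.
    lintian:
      script:
        - echo "lintian checks skipped in this project"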
So can you say a little bit more about your runner infrastructure? What does it look like, what does it consist of, how does it scale? And how can you clone it if you are not on Salsa directly? We have a lot of downstream packaging work going on, so I could imagine running the same thing for our downstream stuff on our own infrastructure, using exactly what you are using. Well, honestly, we are not the Salsa admins, but you can look at their code; they use Ansible, I think. We use what Salsa provides us: we don't have custom runners, we just use what Debian has, so anyone can use that. But you can do the same on gitlab.com and it will work fine, or on your private instance: register a Docker runner and it will do everything just as ours does. That is the main idea. OK, so it is Docker-based already? Yes, it is Docker-based. Or container-based, because maybe that will change in the future. Thank you.

Hey, I have two things I wanted to ask. The first is about your timeline for different architectures. In my case, two of my packages use Salsa CI already, but most of my package breakage is on architectures other than amd64. Do we have an idea of when Salsa will support other architectures, or is that not an option at all? Well, we would like to talk about that with people like Wookey who already have experience adding coverage for different architectures, to see if we can get machines dedicated to GitLab, to Salsa. Some architectures that do not support containers right now will probably not work, but ARM should work with GitLab runners, I guess. That would be helpful. And the other point was about your future plans for pulling from upstream projects and testing them. In my experience, even simple projects have issues with patches: if you have a few patches and upstream changes things, all those patches need to be rebased on the new changes, and I don't know how you intend to do that. Well, usually you try to get your patches out of your package, and if upstream pushes a commit that breaks your patches, yeah, you have to rebase them. Yes, but especially with packages like netdata, upstream has ideas about what the default values should be, and they don't apply to a Debian system, so we have to carry patches. That is why you would need to either turn off the patches for these upstream builds or... actually, we can talk about that later. Yeah, we should think about that. Thank you.

Yeah, you can show the demo you started earlier, whether it actually worked or not. Oh, OK. Well, this is the build job; it finished. And, for example, I can show you the artifacts. Artifacts are what GitLab calls the files that result from a job and are passed to the next one. This is what we have as artifacts: all the generated files plus, for example, the log from the build. I could show it here, but sometimes it is too big for GitLab to display, so we pipe it to a file so you can analyze it if something goes wrong. And then, oh, everything went well. For example, autopkgtest; let me show everyone. This package has tests, and the test passed; here you have the name of the test, and it passed.
This is the summary: only one test. Well, one test entry; in fact, autopkgtest lets you define tests that can do a lot of things. As you can see there, we are running plain autopkgtest; it is the same output you would get anywhere else. This is blhc; it did not find anything, so nothing happens here. Lintian shows, well, two warnings; that's overrated. Piuparts ran OK. And just to point this out: you can see this took almost three minutes to run, of which a minute and a half was just starting up the virtual machine for the runner, so it is quite fast. And this is reprotest, which also passed; the package is reproducible. Well, and that is the package. We hope you can use this and give us feedback; we hope it works and adds some value to your project. We are only trying to make our lives easier. OK, so if there are no more questions, let's thank the speakers again. Thank you.