Hello, everyone, and thanks for attending my talk. In this talk I will introduce a build infrastructure. I have been using this kind of infrastructure in our work for customers for about five years. I have also packaged it as official Debian packages and, to make it easier for you to use, put it into containers. This presentation will share how we use these containers and the infrastructure, and at the end of the talk I will give examples of how to use and how to optimize such a build infrastructure in your own environment. I need to thank my employer, Collabora: they really encouraged me to share this, to publish it in Debian, and to put the images on the web for you to use.

Let me introduce myself. My name is Andrew Lee; you can also call me by my Mandarin name, Li Jianqiu. I have been working in the open source community for about 20 years. When I started, I did RPM packaging, and I became a Debian Developer 10 years ago. Many of you may already have experience maintaining packages — how many of you have built or maintained a package?

Let's start with an overview of how this infrastructure can benefit you, by first looking at the classic infrastructure. When you maintain a package and publish it in a repository on the web, you need to build the package in a clean environment. For example, if you want to build your software project for Debian jessie on the amd64 architecture, you prepare a clean chroot, build the package in it so that the build is reproducible, and then publish the binary to the web so it can be downloaded from the repository.
If you have more customers — for example, you want to build your software project for two different Debian versions — then you need to prepare two chroots to keep the builds reproducible, and you publish twice. If you want to support more architectures, you multiply the number of builds and chroots you have to maintain. Add more distributions, and you get even more builds and even more chroots. Every time you want to release a new version, just imagine how many chroots you have to build in and then publish from. It becomes a nightmare. So how can we fix this mess?

Here is the new, containerized infrastructure. First of all, you need your own distribution somewhere on a cloud host — I choose Debian here. Then you install Docker on it, and you build the Debian OBS (Open Build Service) images. That's it. Done.

So let's see how this infrastructure benefits you. Every time it builds a package, it uses debootstrap to create a clean chroot, so every build is consistent and reproducible. There is just one source upload: you don't publish binaries yourself, you upload a source package, and it builds against multiple distributions and multiple architectures for you, which is much easier to maintain. Also, when you do a release of your project or software for a customer, you had better have a review system so QA can test it before you publish; otherwise, when users apply the update, something may be broken. The built-in review system works like branches and merge requests: you submit a request, and after it is accepted, the repository is automatically generated and published on the web for you.
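The multiplication of chroots described above can be sketched as a loop over suites and architectures. This is a hedged illustration: the suite, architecture, and package names are examples I picked, and the sbuild invocations are only echoed, since actually running them would require each chroot to exist.

```shell
# Classic approach (sketch): one clean chroot per suite x architecture.
# Two suites x two architectures already means four chroots to maintain;
# the sbuild commands are echoed rather than executed.
for suite in jessie stretch; do
  for arch in amd64 armhf; do
    echo "sbuild --dist=$suite --arch=$arch hello_2.10-1.dsc"
  done
done
```

Every new suite or architecture multiplies the number of builds again, which is exactly the maintenance burden the containerized OBS setup removes.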
For users, the benefit is that because packages are published to a repository, they can easily track updates on their systems with APT. It also has a nice workflow: there is version control in the web UI, so you can see all the changes and revisions there. When you import a source package, it automatically does the build-dependency calculation and then builds the packages. Just imagine you want to build an image for an embedded system, for example: you know which packages you need, so you just have a script import those source packages into the project, and it automatically calculates the build dependencies and starts building the whole thing. If anything goes wrong — a build fails, or a build dependency is missing — the status shows up on the web page. For instance, if the automatic build of your project shows that several packages failed, you can assign each failing package to a different engineer so they can work in parallel. Compare that with the other method, where you build everything from a Makefile: something fails, you fix that one thing, run again, and only then find the next error. You cannot really distribute that kind of work to multiple people at the same time, but this infrastructure can. And when a package is missing build dependencies, it shows as unresolvable; you can click on it and see exactly what is missing, right in the web UI. There is also access control in the web interface: you can create different projects for different engineers and teams, and define who has the right to publish, who has the right to make changes, and who does the review.

OK, after seeing all the benefits and the nice features, let's look at how to set it up.
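The build-dependency ordering that OBS computes can be illustrated with coreutils `tsort`. This is just a sketch of the idea, with invented package names: each input pair "A B" means B build-depends on A, so A must be built first, and `tsort` prints one valid build order.

```shell
# Invented dependency pairs: "A B" means B build-depends on A.
# tsort prints a topological ordering: dependencies come out first,
# which is the order a build service must schedule the packages in.
printf '%s\n' \
  "libfoo-dev foo" \
  "libfoo-dev bar" \
  "foo baz" \
  | tsort
```

In the real system this graph is derived from each package's Build-Depends field, and independent packages (here `foo` and `bar`) can be built in parallel on different workers.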
OK, what you need, as I mentioned, is your distribution with Docker and Docker Compose installed. Then you fetch the container images from the Git repository and follow the steps in the README.md — they are very simple. You build the images locally, and then you bring them up with the `docker-compose up` command. Once you have done this, you need to configure your DNS to point the OBS hostname at the host where the containers are running. In the sample here I just use the hostname obs-api, because I built it on my laptop for the demo.

Once the images are built, brought up, and DNS is connected, you can just type the hostname into a browser and you will see the web front end. The default account is Admin, and the password is opensuse. After you log in, you can do the configuration and create projects with the admin account, but doing everything on the web is complicated, so we created a simple script to help you set it up. Just go back to the repository you checked out earlier, switch into the OBS test folder, and you will find a test DoD shell script provided there. DoD is short for "download on demand". This infrastructure builds your distribution, but if you want to build your packages against Ubuntu or against Debian, just imagine how many packages there are: you cannot keep a full mirror on your own infrastructure, it takes too much space. So it has download on demand built in: it automatically fetches the dependencies from the Ubuntu or Debian repositories, which saves space on your infrastructure. You just run the script.
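Going back to the DNS step for a moment: for a laptop demo like the one above, pointing the hostname at the containers can be a single hosts entry. This is a sketch with assumed values — `obs-api` is the hostname used in the demo, and the loopback address assumes the containers run on the same machine.

```
# /etc/hosts (sketch) — point the OBS front-end name at the container host
127.0.0.1   obs-api
```

On a real deployment you would instead publish a proper DNS record for the cloud host running the containers.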
The script points to your OBS front-end host — in my sample that is obs-api. You run the command, and the script automatically creates the download-on-demand projects for two different Debian versions. You can also modify the script to fit your needs, for example to add other distributions; it is very easy. It then automatically fetches a package from Debian — the hello package — and submits it into a test repository for you. Once it is done, go back to the web front end and you can see the updates: the two DoD projects, Debian 8 and Debian 9, are created, along with the test project and the hello package. Let's click on the hello package to see its status. This hello package is going to be built against two different Debian distributions and two different architectures. At first it shows as blocked — that is because the DoD is working. Click on it and you can see the DoD is going to download 151 packages needed to build the hello package. Meanwhile the build is in progress; one architecture is already done, and it shows the statistics there.

OK, let's look at the components. When I made the images, I split this infrastructure into multiple images. The front end is a Ruby on Rails app with its database; that image is called obs-api, as you can see in the repository. The backend services are the obs-server image and the obs-worker image. The worker is the builder: if you want to build your packages for different architectures, you need workers for those architectures to connect to the server. The Dockerfile is already there, so it is easy to look at it and add more workers for the architectures you need.

Now, most of us who work on development or packaging don't like uploading packages through a web front end.
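The components just described map onto a Compose file along these lines. This is a hypothetical sketch, not the actual file from the repository; the service and image names simply follow the talk's obs-api / obs-server / obs-worker naming, and the port and privilege settings are assumptions.

```yaml
# Hypothetical docker-compose.yml sketch for the three OBS images.
version: "2"
services:
  obs-api:            # Ruby on Rails front end, with its database
    image: obs-api
    ports:
      - "443:443"
  obs-server:         # backend: source server, repo server, scheduler
    image: obs-server
  obs-worker:         # the builder; add one worker per architecture needed
    image: obs-worker
    privileged: true  # workers need to create build chroots
    depends_on:
      - obs-server
```

To support another architecture, you would add a further worker service (or run workers on machines of that architecture) and point them at the same obs-server.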
So there is a command-line tool, osc, together with its dput plugin, which lets you do that from the command line: you can submit a package, download a package, or branch a package. Here is a sample of the common workflow we use. The first time you use osc you need to do some configuration, but it will walk you through it. You run osc with `-A` (uppercase A) pointing at your OBS API front end, plus a command — `ls`, for example, if you just want to list what is available on the OBS server. The first time you run this, it asks for the configuration: you type your username and password here. If you created an account on the web front end, you can use your own; in this demo I just use the Admin account and the opensuse password. It will also ask about the certificate, and because this is just a demo, we trust the self-signed certificate. Then you can see the `ls` command works: there are the two DoD projects and the test project.

Next I use `osc checkout` (`co` for short) on test to check out the test project, and you can see that the hello package also gets checked out from the project. Let's switch into the hello package folder and look at the files inside. Now for the common workflow: usually, to update the package, we need to extract the source package, so we use `dpkg-source -x`. After this command you can see a new folder has been created containing the source code of the package, called hello-2.10, and you can go into the folder and see the source.

Then you start to update the package. This requires Debian packaging skills; if you don't have them yet, here is a good start: Debian has the New Maintainers' Guide. Just go to the URL.
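The first-run configuration that osc asks for ends up in `~/.oscrc`. A minimal sketch with the demo values from the talk could look like the following — the hostname and credentials are the demo's defaults, and you should change them on any real instance.

```ini
# ~/.oscrc (sketch) — demo credentials; change these on a real instance
[general]
apiurl = https://obs-api

[https://obs-api]
user = Admin
pass = opensuse
# demo only: accept the self-signed certificate
sslcertck = 0
```

With this in place, plain `osc ls` talks to your own instance without the `-A` flag each time.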
It is a very, very nice guide that explains the basics of Debian packaging. After you have made your modifications, of course you need to prepare a new release of your package, and that means updating the changelog. You use the `dch` command with the `-i` option to insert a new changelog entry. When you run this command, dch gives you a template, and you should read the maintainers' guide I mentioned to learn how to fill it in properly. Here in the demo I append a +a1 revision for the OBS talk demo, and I update the distribution to test, because the package is in the test project. Then you just save and exit your editor and go back to the command line.

Next you use `dpkg-buildpackage`. This command generates the source package: the uppercase `-S` means build a source package only, and `-d` means don't check the build dependencies, because we don't want to install all the build dependencies on our laptop or desktop — we want them in the containers, on the OBS infrastructure. So we just generate the source package with this command. Then you go to the parent directory, and you can see a new revision of the source package has been generated. Of course, we don't need the old one anymore, so we just remove it. Then you use `osc ar` — `ar` is short for "add and remove" — and you can see the old version gets deleted and the new version gets added. Then you use `osc ci` (checkin, `ci` for short) with a commit message, and it sends the commit to the OBS server.

Now let's check the progress back on the web page. Go back and reload it, and you can see there is a new version: the source package is there, and OBS starts processing it to generate the binary packages.
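The changelog entry produced in the demo would look roughly like this. It is written out by hand here so the format that `dch -i` generates is visible; the maintainer name, email, and dates are invented placeholders.

```shell
# Recreate the demo changelog entry by hand to show the format that
# `dch -i` produces; maintainer name/email and dates are placeholders.
mkdir -p hello-2.10/debian
cat > hello-2.10/debian/changelog <<'EOF'
hello (2.10-1+a1) test; urgency=medium

  * Rebuild for the OBS talk demo.

 -- Demo Maintainer <demo@example.com>  Mon, 05 Feb 2018 10:00:00 +0000

hello (2.10-1) unstable; urgency=medium

  * Previous upload.

 -- Demo Maintainer <demo@example.com>  Mon, 01 Jan 2018 10:00:00 +0000
EOF
# newest entry first; note the distribution field is now "test"
head -n 1 hello-2.10/debian/changelog
```

The key changes versus the previous entry are the appended `+a1` revision and the distribution field set to `test`, matching the OBS project the package is uploaded to.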
Once it completes, it shows a green "succeeded", and you can click on that to see the whole build log. There is one more command: the osc dput plugin, which has syntax similar to the dput command Debian developers use. With just one command you can upload a source package. For example, we fetch another source package, and then we run `osc dput` with the target project, test, and the package's .dsc file (here version 1.2.39). After this command the source package is uploaded into the test project. Again, we check the progress: go back to the OBS front page, and you can see the update — the new package is there. Click on it and you can see it has started processing; of course, it needs to download through DoD as well, so it takes some time.

So far I have shown you the workflow and the setup; now let's look at how to optimize this, as we do a lot at work. For instance, we use merge-o-matic, which we forked from Ubuntu's merge-o-matic project. It does continuous packaging integration automatically. Just imagine you create a distribution based on, for example, Ubuntu, and a new update appears in the Ubuntu repository. This software detects that there are two different revisions, with the newer one in Ubuntu, and automatically submits the merge to OBS for you — you can see the merge-o-matic user submitting the new update. You can also hook OBS up to your bug tracker, for example; since you host it yourself, you can modify the source code to add such hooks.
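A bug-tracker hook like the one just mentioned could be sketched as follows. Everything here is invented for illustration — the result variable, package name, and bug number would come from the OBS notification, and the actual tracker API call is left as a comment because it depends entirely on which tracker you run.

```shell
# Hypothetical OBS post-build hook: on success, update the matching bug.
# All names and values are invented placeholders for illustration.
pkg="hello"
result="succeeded"     # would come from the OBS build notification
bug="1234"             # bug id recorded for this package update
if [ "$result" = "succeeded" ]; then
  # a real hook would call the tracker's API here, e.g. via curl
  echo "closing bug #$bug: $pkg built successfully"
else
  echo "leaving bug #$bug open: $pkg build $result"
fi
```

The point of self-hosting is exactly this freedom: you can wire build results into whatever tracker, CI, or notification system your team already uses.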
Once your package builds successfully, you can trigger a script that updates the bug status in your bug tracker. You can also integrate OBS builds with Jenkins — for instance, you can build a package directly from Git. In this example, we build the Linux kernel package from Git: we have multiple developers working on the Git repository, and when they submit a merge, it triggers a Jenkins job that automatically builds the new package. And how does it build? It follows the same common workflow I showed you before: generate a new source package and submit it to OBS. You can even build your distribution image with Jenkins as well. For example, we keep the image definition in Git; when we modify the image customization and submit the merge, it triggers a Jenkins job that automatically builds a new disk image. Once the image is built, you can integrate with LAVA. LAVA is developed by Linaro and is an automated validation system for real hardware; it is behind KernelCI, a project that automatically builds and boot-tests kernels. We use it to verify the image builds and to run tests on the image, to check that the new image has no problems.

OK, time for a recap. From this talk you should already have an idea of how much time and resources a team can save by using such an infrastructure to manage your packages or your distributions. Just remember that you have seen how this infrastructure can benefit you, how to set it up, and how to use the components. I need to thank everybody who was involved in this; without those people, I couldn't have made it come true. Thank you. Any questions?

Q: If I want to clone your repository, do I just use admin/opensuse?
A: You mean clone my repository? No, that's Git.
A: You need to go to the GitLab interface and then git clone it. Yeah — git clone from the repository.

Q: So even though the password is opensuse, it's actually a Debian build, isn't it?
A: It's Debian packages, but the software is developed by openSUSE; the upstream is openSUSE. I just kept the default username and password without modification, but you can change that yourself.

Q: [question about building for other distributions]
A: I haven't done that before, because my focus is on the Debian packaging side. I have one project where, funnily enough, I provide both the RPM spec file and the Debian .dsc file in the same source package, so it builds on multiple distributions and multiple architectures. But I have never done that here, so maybe you need to provide something that OBS supports. In our customer projects we mostly base on Debian, so we modified the OBS packages and customized them to be able to build a whole Debian-based distribution. That was the purpose, and I never tried the others. It's probably possible — maybe check the OBS technical manual. I think it might be possible, but I don't know, because I'm not an OBS developer. OK, any more questions?

Q: [question about embedded images]
A: Yes, we use this to build embedded images too, not only desktop. It is very powerful, because it builds multiple packages in parallel at the same time, which is much faster than one Makefile building one image. And when you catch multiple failures, you can distribute the tasks so that many different engineers can fix them much quicker.

Q: [question about using the public openSUSE instance]
A: I think openSUSE offers one, but it doesn't fit our needs, because you cannot build whole distributions there, and it only provides packages. You cannot do the integration, because you are not the one hosting it yourself: you cannot modify the code to hook up your bug tracker or your own Jenkins and everything. So it's better to self-host.
Yes, yes. And if you need more to suit your needs, you can get the Docker images and the example DoD test script — it's all there, and you can just modify the script to fit your needs. OK, I think that's time. Thank you.