My name is Andrew, and I'm going to talk about how I've been managing the build and integration infrastructure of a Debian derivative. First, some bits about myself. I started contributing to Debian in 2007; some years later, in 2013, I became a Debian Developer, and two years later I started working for Collabora, a company which basically does open-source consultancy and is sponsoring this conference. I only started doing actual packaging in 2010, and it basically started as a joke: a friend of mine claimed you cannot package something for Debian in half an hour, and I tried to prove him wrong. Well, I didn't manage to package that package in half an hour, but I've been packaging ever since. I never attempted to run any bits of real Debian infrastructure; I've only used mini-dinstall to publish binary packages for my users to test, so I barely know how to properly manage real Debian infrastructure. But then I started working on a project at Collabora called Apertis. This is a Debian derivative which is tailored to automotive needs; originally it was developed to run on infotainment systems, basically the machines which do the navigation and play the radio for you.
But it is in fact fit for quite a wide variety of electronic devices in the automotive area. The project provides quite a lot of infrastructure: code hosting, its own code review tools, package build services, image generation services, and also automatic testing infrastructure. Even though it is a Debian derivative, it is based mostly on Ubuntu, not directly on Debian, though it takes several packages directly from Debian, for example systemd. And on top of all of that, it provides a set of its own software frameworks, packages, and APIs for automotive needs.

As I mentioned, we use systemd for process tracking. We heavily use AppArmor to enforce policies on applications; we take AppArmor from Ubuntu, but we extend it with policies for lots of applications already in Ubuntu and Debian, and also for our own applications and packages. We use Flatpak, thanks to Simon McVittie, for safe and efficient deployment of applications. We use Wayland for graphics, so there has been no X.org since, I think, last year, and obviously we use GStreamer for multimedia.

Some people may ask: why do we use Ubuntu and some bits of Debian, and not just Debian, which is the universal operating system, entirely free software, developed by a community of individuals and not by companies, as often happens? All of those things are great, but unfortunately Debian stable moves a bit too slowly for us: once it's released, the changes to stable are minimal and it quickly becomes very outdated, and then when the new release is out, the changes are often quite significant. On the other hand, unstable breaks a bit too often for us to base on it. Ubuntu, meanwhile, does releases more often, pushes updates more often even in the LTS releases, and moves slightly faster than Debian stable. And even then, they have quite a large install base,
so they get lots of testing.

Now a bit about the infrastructure we have in Apertis. The core of this infrastructure is the Open Build Service, or OBS, which by the way is packaged for Debian thanks to Andrew Lee. We store the Apertis-specific sources in git; at the moment we use cgit, but there's a plan to use GitLab for this. From git, the Apertis-specific sources go into OBS, where the rest of the packages are stored: mostly packages from Ubuntu and Debian, which go there either directly or with some modifications. OBS builds the binary packages, which go into apt-compatible package repositories and then get installed into the images which can be used directly on the devices. We use Jenkins to manage image builds and other continuous-integration parts, we heavily rely on Phabricator for project management and bug tracking, and we use LAVA for testing the actual images on actual devices.

Now something about the Open Build Service itself. Just like sbuild, OBS uses its own code to create the chroots in which packages are built, in a clean, well-defined environment: OBS resolves the dependencies from scratch every time, installs the build dependencies, and builds the package. It stores all packages in a revision-controlled manner, so you can see how a package has evolved, and you can check out any older revision if you need to revert and build again; this package storage actually resembles Subversion in many ways. It provides access control: for every project and every package you can choose whether it is accessible for reading, writing, and so on. Packages may have a maintainer role assigned to them, so one user can manage a given package, similar to what we have in Debian. And even if a user doesn't have write access to a certain package, there's a branching feature, so a user can clone the package into their own sandbox, develop the package there, and then submit a merge request; even non-maintainers can contribute
changes and test them. This branching feature also very much resembles Subversion branching and merging, because it is based on directory-like namespaces.

In OBS, Apertis is split into multiple components. Unlike Debian, there's no main, contrib, and non-free; the split is done differently, and every component is its own OBS project. The components are target, development, HMI, SDK, and snapshots, and there are also helper repositories. Target holds the packages which get installed on the device itself. Development holds mostly additional packages which are needed to build what's in target. SDK holds tools which are needed only for development; they are installed into special SDK images, where users can also install additional packages from the helper repositories. HMI is a special component with the software for the human-machine interface, which is what is used in infotainment systems. And finally, snapshots is a special component used to store and build development versions of packages straight from git.

In OBS you can specify dependencies between components, so when packages from one component are built, they may depend on packages from other components. For example, development uses target; actually, it's the other way around: development can use packages from target, but target uses packages from development to be built, and SDK depends on development. You can select multiple components when you are on the live system, but the SDK sort of assumes you also have development and target.

When OBS builds a package, it is published into internal OBS repositories, of which you normally have just one per project. Those internal repositories are not in apt format, so we use reprepro to make them accessible to apt. The internal repositories are used by OBS to build other packages within the same project.
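As a rough sketch of what republishing with reprepro can look like, a `conf/distributions` file defines the apt distribution that the OBS output is exported into. The codename, components, and architectures below are invented for illustration; they are not the actual Apertis configuration:

```
# conf/distributions -- hypothetical reprepro setup
Codename: 17.03-target
Components: target development sdk hmi
Architectures: amd64 arm64 armhf source
SignWith: yes
```

The .deb files produced by OBS would then be fed into this repository, for instance with `reprepro includedeb`.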
When a package is built and published into the internal repository, it can be used as a build dependency for other packages in the same project. This makes it very easy to do full rebuilds: you just add an additional repository, which is not published into the apt repositories and which depends on the main repository, so packages can be rebuilt and the results of the build discarded. That makes it easy to detect situations where some packages stop building after a while because of changes in their dependencies, in the build order, or for some other reason.

To work with packages we've developed a number of workflows, which are different for packages imported from Ubuntu and Debian and for our own packages. Packages from Ubuntu and Debian to which we make no changes at all are imported into OBS either manually or using an OBS feature which lets you copy packages from elsewhere. If the changes to a package are quite minimal, which happens often, we just commit the changes directly in OBS, keeping the patches in DEP-3 format. All of those modified versions get a version suffix similar to what Ubuntu does, but instead of ubuntu1, ubuntu2, ubuntu3 we put co1, co2, and so on. We also use a fork of Ubuntu's Merge-o-Matic tool to pull new updates from Ubuntu LTS: it can handle simple merges and automatically rebases our changes on top of what Ubuntu has. Certain packages are kept in git to make the merges easier, and in git we use both Debian standards and our own approach.
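The DEP-3 format mentioned here is just a set of pseudo-header fields at the top of an otherwise ordinary patch. A minimal hypothetical example (the description, author, and dates are all invented):

```
Description: Lower the default watchdog timeout on automotive targets
 Downstream-only change; upstream prefers the larger default.
Author: Jane Developer <jane@example.com>
Origin: vendor
Forwarded: not-needed
Last-Update: 2017-01-30
```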
For non-native packages we use DEP-14: we keep the upstream code in upstream branches, with Apertis-specific branches for the Apertis packaging, and when we release a new version of a package we add an Apertis version tag. We use git-buildpackage and its gbp pq command to manage patches.

For native Apertis packages we use a slightly different approach. We don't keep the Debian packaging and the source code separate; we keep both on the master branch. When we create a new release, to be able to push updates for the previously released distribution, we create a release-specific branch, on which we put changes specific only to that release. We use two sets of tags: Apertis-specific tags for changes specific to the packaging, and plain version tags for releases of the upstream code. A normal upstream release usually gets two tags, one being just the version number and the other being the Apertis version number with co1. If later on we need to make changes to the packaging only, we just add Debian packaging tags, and if we change the actual code or apply patches, we bump the upstream version number.

We have a Jenkins instance which, every time something is committed to git, picks the top commit of the branch and builds it in a controlled environment, which is not the same as OBS. At the moment this environment is updated manually, from time to time, when the build dependencies change. We use this approach because we didn't want potential unrelated build failures to cause failures to build our code.
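The double-tagging scheme for native packages can be sketched with plain git in a throwaway repository. The version numbers and the exact tag naming (`v1.2.0`, `apertis/v1.2.0-co1`) are my invention for illustration; the talk only specifies that upstream releases and packaging revisions get separate tags, with a co suffix on the packaging side:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q

# an upstream release gets two tags: the plain upstream version
# and the packaging version carrying the co1 suffix
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "release 1.2.0"
git tag v1.2.0
git tag apertis/v1.2.0-co1

# a later packaging-only change bumps only the packaging tag
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "packaging-only fix"
git tag apertis/v1.2.0-co2

# list all tags recorded so far
git tag -l
```

A change to the actual code would instead bump the upstream version and produce a fresh pair of tags.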
For example, if bash becomes uninstallable, we don't want our code to fail to build because of that. When the build succeeds, our Jenkins generates a source package, which is submitted to OBS and committed to the snapshots component. OBS builds the package once again in a clean chroot, and if the build succeeds and it was a release, meaning it was tagged as a release, it submits a merge request on OBS to the component from which the package originally came. Jenkins uses the build-snapshot script written by Simon; mostly this script has been used to build packages and to create the source packages for upload to OBS. The script will probably get packaged for Debian as well, because it is useful not only for Apertis and has many other uses, but so far it hasn't been submitted.

We also have a procedure for new patches, which are submitted on Phabricator. Before they are reviewed by actual humans, Jenkins applies them on top of the branch and builds them in the same environment, and if the build fails, the submitter can immediately see that there's something wrong and that they need to change the code.

That's basically it about packaging; now, image builds. These are also controlled by Jenkins, the same Jenkins instance as is used for packages and everything else. We use the Linaro image tools, which build images in a multi-stage process, and we keep hardware-specific components separate from hardware-independent components. For anyone familiar with how the Linaro image tools work, this basically means we have something called OS packs and hardware packs. An OS pack is the part of the root filesystem with the packages which are specific to an architecture but not to a specific device, so it can be shared by multiple devices running on the same architecture.
For example, the i.MX6 Sabre Lite and the Raspberry Pi. A hardware pack is the part of the root filesystem where the device-specific files are installed, like firmware, a device-specific U-Boot, or maybe a device-specific kernel. The two get combined to produce a set of images for every device on a given architecture. We create multiple OS packs for different types of images; as you can see on this slide there are at least target and development, there's also an SDK image, and a number of other images. Since we support three architectures at the moment, amd64, arm64, and 32-bit ARM (armhf), we produce quite a lot of them.

So the image build process builds the hardware packs and OS packs, then combines them into the actual images which can be run on the devices, and generates sysroots for the SDK. The SDK is based on Eclipse at the moment; it allows developers to develop applications in a more user-friendly way, and they need sysroots of the system for this. After the sysroots are built, Jenkins triggers tests on the LAVA instance, so the images are installed onto the devices and tested, to verify that they actually boot and do something useful. In fact we use the autopkgtest infrastructure for part of this, and we also use it for certain packages.
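For context, autopkgtest discovers the tests of a source package from its `debian/tests/control` file; a minimal hypothetical declaration looks like this (the test name and restrictions are invented):

```
Tests: smoke
Depends: @
Restrictions: allow-stderr
```

Here `Depends: @` means the `smoke` test script runs against the binary packages built from that source.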
Then a special job scans the package changelogs in the image and closes the bugs which are fixed by the new package versions.

There are lots of challenges in maintaining all of that with something other than the standard Debian infrastructure. I'm going to start from the bottom. First of all, OBS isn't quite like sbuild: it builds packages in a slightly different way, which often doesn't matter, but at times there's some difference which makes certain packages fail to build from source while the very same packages build quite fine in Debian. One such difference is that OBS ignores the Essential flag, and it needs manual overrides to specify which packages must be pre-installed before any build dependencies are installed. Sometimes we mess those up, and the package builds fail in very funny and interesting ways.

The next thing which is sometimes difficult is that Merge-o-Matic can handle simple package merges, when the changes do not conflict with each other; but when they do, it fails and forces the maintainer to resolve the conflicts manually, and this is quite difficult because it doesn't provide any meaningful conflict description. Well, it provides something to start with, but it's quite difficult. Using git helps, because git can resolve many of the conflicts MoM can't, but then you can't put the whole distribution into git.
Well, you can, but it is quite difficult. We don't have many packages in git; it's about a couple of dozen packages which we maintain ourselves, plus packages from Debian and Ubuntu with our own changes, but we can't put everything there. And when packages are removed from Debian or Ubuntu, it's a bit difficult to keep track of what was removed and remove it from Apertis as well.

As I mentioned previously, we plan to provide GitLab to make it easier for potential contributors to contribute, because at the moment it's just a cgit instance, and it's not easy for contributors to create their own forks of packages, do the work, and then submit it back; they need to go through all the official ways of contributing. There's now work in progress to automate the release creation, because it involves quite a lot of manual work, updating packages and creating new branches, and this is something I'm going to automate. There's also work in progress to stop using the Linaro image tools, because they are quite difficult to maintain and they sometimes fail, in part because of specific bugs in the Linux kernel which are very difficult to trace and fix; we now have a custom tool which is much simpler and handles image creation much better.

And basically we're going to shift focus to become a common platform for more automated systems, not just infotainment as it was until now, so that Apertis can be used for a wide range of devices. Basically, this is it. If you are interested in Apertis, this is the web page where you can go and learn more about it. Questions?

Q: Have you considered using testing as the middle ground between stable and unstable? Sorry: have you considered using testing as a sort of rolling release between stable and unstable?

A: Well, testing is definitely better than unstable, but first of all the decision wasn't mine.
From what I can see, there are certain challenges in using testing as well, because in many ways it's basically unstable, slightly delayed: things still change there quite often and quite a lot.

Q: So, something more stable than testing, but more lively than stable?

A: Fair enough; I can expand on that a bit. For a while we were using non-LTS Ubuntu, so it was updating like every six months, and we decided that we were essentially spending too much time rebasing: it was too much of a moving target, and we weren't getting enough benefit from the more frequent updates to justify it, so we dropped down to using only the LTS releases. Yeah, that's another reason. Rebasing to new releases, and not even new releases, just pulling updates from the same release within its support cycle, is quite a lot of work; with testing it would be constant work which needs to be done and doesn't often bring benefits.

Q: I'm working on Ubuntu, and sometimes I'm interested in some of this. Do you want me to stand up? Oh, sorry, I didn't know you're trying to film me. It sounds like you do most of your testing on the images that you produce rather than on the packages, like the individual package uploads. Is that right?
Q: Sorry: most of the testing you do is based on the actual produced image rather than at the packaging level. I'm wondering if you've thought about using something like britney, or running autopkgtests on the packages as you upload them, or whether you've ever done that before. As a bit of history, in Ubuntu we introduced this maybe four years ago or so. Previous to that, developers would just upload their stuff straight into the development release, kind of like we still have with unstable in Debian now, and you get problems like arch skew, or just random broken packages, or half-done transitions making it through to what is essentially the product you're trying to give to your users: for us the development release, and for you... maybe you get broken images from time to time if you have half-done transitions. I'm wondering if you've ever thought about introducing more of the Debian-style testing and release-management stuff into your workflow.

A: Well, first of all, we also run the package tests, the debian/rules tests basically, and since we did the integration of autopkgtest into our test infrastructure recently, we can technically run the already-existing tests in the other packages which come from Ubuntu and Debian, and there was a plan to do that. We basically need to somehow separate those tests from the rest of what we have, and it's quite a lot of work to deal with the failures which happen due to changes introduced in Ubuntu or Debian. So yeah, there were plans to do this; I think it's been implemented, but we haven't switched it on.

Q: So we do this in Ubuntu.
When we introduced this, we started redirecting uploads to a new suite: instead of developers uploading directly to the development release, the uploads are automatically redirected to a staging area, and then we have britney running, and only once britney thinks the packages are good enough are they copied into the thing that we build the release from. For you, that would be like: when somebody uploads to OBS, the upload goes into some other area, and when the automated tests pass, the packages are migrated into another repository, which is the one you build your product from. So if something fails the tests, you can see the results, and somebody has to fix it before it makes it through to the place you build the products from. I don't know if that's interesting to you, but for us, I feel it's given us a lot more confidence in the things that we're producing, especially in the presence of tests: at least we know that they've gone green, or somebody has looked at them and overridden them.

Okay, anybody else want the mic?

A: If I can just respond to that point: we started running more autopkgtests and things like that for the stuff we pull in from Debian and Ubuntu, but the problem with that is that not all test failures are equal. We have a lot of packages in the distribution; for some of them, if the tests fail, it's like, well, now the product is broken, and for some of them, if the tests fail, it's like, well, do we even care? We're using like 1% of this package, and if the test failure is in the other 99%, it's really not even worth our time to identify it, let alone fix it.
So we have to be quite careful about making sure that we only test the things that we would even want to fix, and that we don't waste developer time on debugging things that don't actually have a significant impact.

A: Yeah, I personally spent quite a lot of time trying to figure out why certain packages failed. There are some build dependencies of packages we use which are basically part of the SDK image only; those were basically packages written in Java, and some of them would randomly fail because of test failures. I spent quite a lot of time figuring out why the tests failed, and one of the packages started failing because it was suddenly 2017: the package was not designed to fail in 2017, but its authors never thought it would live that long, and some test was not expecting the new Unix time or something like that. So sometimes disabling the tests in some leaf packages helps, in fact, because we don't need to test all of the packages which are just build dependencies for something we don't use all the time. If we had enough developer time to make all the tests for all the packages pass, it would be amazing, but we just don't, so we have to prioritize. Anything else? Okay, thank you then; thanks for coming, and that's it.