Okay, next up we have Raphaël Hertzog, who will be speaking about Kali Linux and their experience in tracking the testing distribution. Welcome.

Thank you. Hello everybody, and thanks for the introduction. As announced, I'm going to present Kali Linux to you today. We'll cover first what it is, because many of you probably don't know about it; then the choices we made to set up our infrastructure; the workflow we use for our packaging work; the tools we have set up for quality assurance; and the last part will be dedicated to the problems we have because we chose to be based on Debian testing — the issues, some workarounds, and also some improvements we would like to see on the Debian side.

Kali Linux is a Debian derivative created in 2013. It's focused on penetration testing and contains lots of security and forensic tools, so it's used by security experts doing audits — by script kiddies too, but that's another problem. It's really the successor of BackTrack. Many of you know about BackTrack, and it's the same people doing both. BackTrack was the first try: it was somewhat hackish, not built cleanly with packages. Kali is the rewrite of BackTrack based on Debian, with Debian packages. Not all of them are clean, but at least they are cleanly separated.

The company behind Kali Linux is Offensive Security. The company does penetration tests and security trainings, so Kali Linux is really their toolbox for their security audits and penetration testing, but at the same time it's also a marketing product: they give it away freely on the internet, they are known for this, and it brings them customers for their trainings.

Kali Linux is rather popular. We have more than 100,000 downloads of the ISO image for each release. We have many mirrors all over the world. The forums and the IRC channel are very active, so there's a large user base. And since it's well known in the security field, many upstream authors of security software use it as a reference platform. This is rather useful for us, because since they develop on Kali, their software tends to work on Kali, and we have fewer problems integrating it. It doesn't mean that they do clean work, but at least it works, and we can help them do something cleaner in terms of packaging.

What I like about this is that it means we have a very large user base of Debian testing users, because Kali really is Debian testing. On the Debian side we tend to say that Debian testing is for advanced users; on the Kali side, we do not give any such message. So we have some support issues, but it shows it's also realistic to give Debian testing to end users.

My role in Kali Linux: I've been a Debian developer for a very long time, and I've been working as a consultant since 2004 in my own company, Freexian. I've been working with Offensive Security since the start of Kali Linux. They found me through the Debian consultants list, and after we discussed it, I worked for a full year before Kali Linux was even announced publicly. Since then I've done lots of things: at the start, mainly creating the infrastructure for the packaging, and packaging lots of software. There is so much software — about 300 packages — that I trained a few other people from the company so that they could help in the process.
And now most of my work is on the last part, monitoring the health of the distribution, because, as we'll discuss later, we have checks to ensure that it works; and since every day we get new packages from testing, we have regular regressions and things to fix.

So, what do we use for infrastructure? Hardware-wise, it's many rented servers and virtual machines on the internet, in several places. We have one machine where we host our repository — an internal machine, not really public — and a sort of public mirror of it, archive.kali.org, which is the reference machine used by all mirrors. We have a mirror redirector based on MirrorBrain: when you install Kali, your sources.list file points to http.kali.org, and from there you are redirected to a mirror close to you. We have mirrors that we manage ourselves, and also mirrors contributed by universities, companies, and so on.

We have four build servers: one for each of the ARM architectures, and one each for amd64 and i386. We have one server mostly dedicated to quality assurance, and one where we run the configuration management. The configuration management server is central, because most of our services are integrated with it. We use SaltStack as the configuration management tool, so that we can easily deploy services on new servers in case of need. And we use SaltStack's event features, which basically allow you to inform one host of something that happened on another host, to coordinate operations. For example, every day we build the daily ISO images on our build machines, and when a build is finished, it informs the central repository host, which downloads the ISO image to make it available to users.

We created some Salt formulas — basically ready-to-use recipes for full services. We created three of them, though we mainly use the sbuild one; it's a layered solution. The debootstrap formula just runs debootstrap on a given distribution: you only have to give the distribution name, and the formula has all the information about which main mirror it should use, what components are available in the distribution, and so on. The schroot formula builds on top of debootstrap to add schroot integration, and the sbuild formula builds on top of schroot: it creates the chroots with the names expected by sbuild and with the appropriate symlinks ready. We use that to set up our build daemons, obviously. We also contributed fixes to other formulas that we use.

The package repository is managed by reprepro. It accepts uploads by SSH — source uploads from developers, obviously, but also binary uploads from the build daemons. This is rather usual. It's also this machine which enqueues build jobs on the build daemons through SSH (there's a sketch of the idea below). This is a rather simplistic solution, but I'll come back to it when I talk about the build daemons, because we're using rebuildd: basically, there's no central service listing which builds need to happen, so we directly send build operations to each build daemon, and we have a single builder per architecture. The server also centralizes the ISO images that we build; it imports packages from Debian, runs britney to create a consistent distribution out of what we get from Debian and from Kali, and pushes to the public archive.
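To make the push-style scheduling concrete, here is a minimal sketch of how the repository host could enqueue jobs on the per-architecture builders, assuming a hypothetical `rebuildd-job add` command reading "source version distribution" on standard input; the real rebuildd interface, host names and our actual script may well differ.

    #!/usr/bin/env python3
    """Push-style build scheduling: a minimal sketch, not the actual Kali script.

    Assumes one builder per architecture, reachable over SSH, and a
    hypothetical `rebuildd-job add` command; details are illustrative.
    """
    import subprocess

    # Hypothetical mapping: one build daemon per architecture.
    BUILDERS = {
        "amd64": "build-amd64.internal.example.org",
        "i386": "build-i386.internal.example.org",
        "armhf": "build-armhf.internal.example.org",
        "armel": "build-armel.internal.example.org",
    }

    def enqueue(source: str, version: str, dist: str, arch: str) -> None:
        """Send one build job to the builder handling `arch`."""
        job = f"{source} {version} {dist}\n"
        subprocess.run(
            ["ssh", BUILDERS[arch], "rebuildd-job", "add"],
            input=job, text=True, check=True,
        )

    if __name__ == "__main__":
        # Example: rebuild a Kali-specific package on both x86 architectures.
        for arch in ("amd64", "i386"):
            enqueue("kali-meta", "2016.2.0", "kali-dev", arch)

The point is merely that, with one builder per architecture and a small package set, a direct SSH push replaces a whole wanna-build-style scheduler.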
So the public archive is just nginx serving the files over HTTP, plus rsync to make the files available to downstream mirrors. We use the ftpsync script that Debian uses for its own mirrors, so it's based on SSH triggers and notifies downstream mirrors when they have to update themselves. Our rsync access is restricted to official mirrors only, and we manage that through Salt, from a list of official mirrors that is stored in the Salt data. The same data is reused by the mirror redirector service.

The mirror redirector runs MirrorBrain, which is an Apache module. It uses a PostgreSQL database to know which files are available on each mirror, and it watches the status of each mirror and redirects you to a working mirror close to you. The definition of "working" in MirrorBrain is, by default, only that HTTP is available, so we go a bit further and hooked in another script to disable mirrors which are no longer in sync with the main archive (see the sketch below). MirrorBrain is not yet in Debian, but upstream provides debian/ packaging files, and I would like to see this software in Debian. I think I promised its upstream author that I would package it, but I never kept my promise yet — so if you want to help me keep my promise, I'll gladly take some help.

MirrorBrain has been working relatively well for us, but it has some rough edges. I get many notifications which are not really interesting, for example when downstream mirrors are unavailable only for a short period of time. And it has a rather annoying bug that I worked around, related to how it handles deleted files: when you delete a file from the main archive, it may still be present on downstream mirrors, but MirrorBrain will immediately start returning HTTP 404 errors instead of redirecting to whatever mirror still has the file. I worked around this by ensuring that we keep deleted files for a few days before actually removing them.

For the build daemons we use rebuildd, a Debian package created by a French guy, Julien Danjou. It's rather simple — it has no complicated features, but it does its job. It uses an SQLite database to keep track of what it has to build, and it builds it. There's no way to handle builders across multiple machines; it's basically just a list of packages to build on a given builder. So we feed the source packages to build directly from the machine running reprepro to the builders. This is not a problem for us because we use the binary packages from Debian unmodified, and we have only about 400 packages which are really specific to Kali and that we have to build. So the build workload is not so high, and a single machine per architecture is fine for this.

The build machines also build ISO images with live-build. Again, we have official releases and daily builds made available. I know that live-build is officially orphaned in Debian, but we use it in Kali, we'll keep using it, and we will keep maintaining it in Debian. At least we will fix anything that is broken. We might not develop new features and the like, but we will make sure it does not get removed — at least not until its replacement is as feature-complete as live-build, because so far live-wrapper is really light on features.

For quality assurance, we have a private Jenkins server with many jobs. We run tests only on the amd64 and i386 architectures, so we don't have tests for the other architectures. It was initially set up by Holger Levsen, with a setup similar to what exists nowadays on jenkins.debian.net.
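As a minimal sketch of the mirror-freshness script mentioned above (not the actual Kali code): it compares each mirror's Release file against the reference archive and flips its status in the MirrorBrain database. The table and column names (`server`, `identifier`, `baseurl`, `enabled`) are assumptions about the MirrorBrain schema, and the choice of the kali-rolling Release file as freshness marker is illustrative.

    #!/usr/bin/env python3
    """Disable out-of-sync mirrors in MirrorBrain: an illustrative sketch."""
    import urllib.request
    import psycopg2  # MirrorBrain stores mirror metadata in PostgreSQL

    REFERENCE = "http://archive.kali.org/kali/dists/kali-rolling/Release"

    def fetch(url: str) -> bytes:
        with urllib.request.urlopen(url, timeout=30) as resp:
            return resp.read()

    def main() -> None:
        reference_release = fetch(REFERENCE)
        conn = psycopg2.connect("dbname=mirrorbrain")
        cur = conn.cursor()
        cur.execute("SELECT identifier, baseurl FROM server")
        for identifier, baseurl in cur.fetchall():
            try:
                # A mirror is "in sync" if it serves the same Release file.
                in_sync = fetch(baseurl + "dists/kali-rolling/Release") == reference_release
            except OSError:
                in_sync = False  # unreachable counts as out of sync
            # Take stale mirrors out of rotation; re-enable them once they catch up.
            cur.execute("UPDATE server SET enabled = %s WHERE identifier = %s",
                        (in_sync, identifier))
            if not in_sync:
                print(f"disabled stale mirror: {identifier}")
        conn.commit()

    if __name__ == "__main__":
        main()

A real version would tolerate mirrors that are mid-sync, but the idea is the same: redirect users only to mirrors that match the main archive.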
And we have a public bug tracker, obviously, which is a Mantis setup. I think Mantis is gone from Debian now, but well.

Let me speak a bit about the structure of the distribution: what repositories we have, what meta packages we have created, and what Kali-specific packages we have — Kali-specific being both packages forked from Debian and packages that really exist only in Kali.

So how do we create our releases? We have our Kali packages in a repository called kali-dev-only; that's where we upload our packages, and it contains only our own packages. Next to it we keep a plain mirror of Debian testing, and from there we build kali-dev: we combine Debian testing with kali-dev-only, and kali-dev-only takes precedence — if a package is in both repositories, you get the one from kali-dev-only (the combination logic is sketched below). Obviously this breaks quite often: when, for example, a transition finishes in Debian testing, we have to do the same transition in kali-dev-only. So to ensure that we have something consistent for users, we run britney on top of kali-dev, and the result is named kali-rolling. Our britney configuration is rather simple because we have no delays to wait for and no RC bugs to check, so basically it boils down to installability checks and making sure that the packages are available on all architectures.

We have created meta packages. For those of you who don't know what that is: it's a simple way to install a set of related packages, through an empty package whose sole purpose is to have dependencies. For example, we have kali-linux-full, which defines the default system that you get installed from our live image. But we have a whole set of topic-based meta packages as well: software-defined radio, tools using your GPU, wireless tools, web tools, forensic tools, voice-over-IP tools, password cracking tools, RFID.

NetHunter is a specific one. NetHunter is basically a phone image on which you can install Kali to do funny things, like plugging the phone into a computer: say you have a friend — or rather a target — and you ask if you can charge your phone on their USB port. You stick it in, and the phone becomes a network device which becomes the default outgoing route. So you monitor everything through your phone, but you route the traffic back to the original interface, so you can snoop on everything. It can also become a keyboard, so you can send keystrokes and take control of the computer. It's a really nice project that they created, and this meta package contains all the dependencies you need to set this up on a phone.

We also have some desktop-oriented meta packages, mainly used in the creation process of ISO images: you can create a custom ISO image for a specific desktop, use those meta packages, and get something rather well integrated with Kali. We have two special packages: kali-linux-all, which is a meta package of meta packages — when you install this one you have everything; it's too huge for most people, so we use it for testing, because it's a simple way for us to install all our packages — and kali-linux-top10, with the ten most popular packages in Kali.

As for Kali-specific packages, we have a few packages of our own. kali-defaults is basically default configuration files for the web browser, but also for the desktop, with gconf, gsettings and everything related to that.
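Going back to how kali-dev is assembled: here is a minimal sketch of the "kali-dev-only takes precedence" semantics. This is purely illustrative — in reality reprepro does the combining — and it only looks at package names and versions from uncompressed Packages files.

    #!/usr/bin/env python3
    """Sketch of the suite-combination semantics, not the real implementation:
    given the Packages files of kali-dev-only and of the Debian testing
    mirror, compute which version of each package ends up in kali-dev.
    """
    import sys

    def parse_packages(path):
        """Yield (package, version) for each stanza of a Packages file."""
        name = version = None
        for line in open(path, encoding="utf-8"):
            line = line.rstrip("\n")
            if not line:
                if name:
                    yield name, version
                name = version = None
            elif line.startswith("Package:"):
                name = line.split(":", 1)[1].strip()
            elif line.startswith("Version:"):
                version = line.split(":", 1)[1].strip()
        if name:
            yield name, version

    def combine(kali_dev_only, debian_testing):
        merged = dict(parse_packages(debian_testing))
        # Kali packages override Debian ones of the same name, whatever the version.
        merged.update(dict(parse_packages(kali_dev_only)))
        return merged

    if __name__ == "__main__":
        for pkg, ver in sorted(combine(sys.argv[1], sys.argv[2]).items()):
            print(pkg, ver)

This also shows why the scheme breaks after a Debian transition: a stale package in kali-dev-only keeps shadowing the newer Debian one until we rebuild it.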
We also have kali-menu, which uses the freedesktop menu specification to create a Kali-specific menu, where all the usual entries are moved one level down and where we put all our Kali tools in the main menu. kali-root-login is a set of hacks: Kali's default user is root, and you can't log in as root in KDM or GDM by default, so this basically diverts some PAM configuration files to allow it. kali-meta is where we build our meta packages, and kali-archive-keyring holds the key used to sign our repository.

We also fork quite a few packages. The minimal one is base-files: since you want to be recognized as a derivative, you must modify /etc/os-release, /etc/dpkg/origins/default and things like that, for various reasons. desktop-base is where you change the desktop background picture, and rootskel-gtk is for the branding within the installer.

We have a few desktop features: a modified version of the GNOME Shell applications-menu extension supporting nested menus — it used to support this, but a recent GNOME version dropped it, so we developed something to keep that feature for ourselves — and in gnome-terminal we added a patch to restore transparency support, which also got removed by GNOME recently. We fork the linux kernel packages, mainly to add a single patch which enables wifi injection on many brands of Wi-Fi cards. I would like to get this one into Debian, but I'm not sure it's actually possible; we discussed it once already, and I don't think it was well received.

We have a fork of init-system-helpers — basically a fork of update-rc.d to disable most services by default. It's a hack, and I'm rather happy that we have found a cleaner solution for the future. We recently contributed — with the help of Andreas Henriksson, I believe — support for preset files in systemd in Debian: dh-systemd was always using `systemctl enable` instead of `systemctl preset`, so we fixed this, and you can now use systemd preset files to disable services by default (there is a sketch of the mechanism below). At least it works for packages that ship systemd units, which is not all of them, which is why we still have to keep the update-rc.d fork for now.

Obviously we have a customized debian-installer, mainly to put a generated preseed file into the initrd. Related to this, there are two udebs that we modified: net-retriever, to make it use the package containing the key for our repository, and another one to change the default hostname. When you want to debootstrap Kali — obviously Debian's debootstrap doesn't know about Kali yet — we have a fork to support this, and a few open bug reports against debootstrap as well. debian-cd as well, which has some distro-specific data that we have to add. And we have a modified live-build with EFI boot support, which we contributed back in a bug report, but it never was applied; it is not entirely satisfying, which is why I never committed it directly — I wanted someone to review it, but since then Daniel stepped down, so it's somewhat languishing.
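Coming back to the preset files mentioned above, here is a small sketch of the mechanism (not Kali's actual packaging): a preset file shipped by the distribution decides whether `systemctl preset <unit>` enables or disables a freshly installed service. The file name and the policy below are hypothetical.

    #!/usr/bin/env python3
    """Illustration of the systemd preset mechanism: first matching line
    wins, so "disable *" as the last line flips the default for every
    unit not explicitly enabled above it.
    """
    from pathlib import Path

    PRESET = """\
    # /usr/lib/systemd/system-preset/90-kali.preset (hypothetical name)
    enable systemd-timesyncd.service
    disable *
    """

    def install_preset(root: Path = Path("/")) -> Path:
        target = root / "usr/lib/systemd/system-preset/90-kali.preset"
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(PRESET)
        return target

    if __name__ == "__main__":
        # Write into a scratch tree so the example is safe to run.
        print("wrote", install_preset(Path("/tmp/preset-demo")))

With dh-systemd calling `systemctl preset` instead of `systemctl enable`, a derivative only has to ship such a file to get a services-off-by-default policy, without forking any maintainer scripts.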
So that's it for the Kali-specific packages and forks. Now, how do we do our packaging? We do everything in Git repositories; you can find all our packages on git.kali.org. We use what I consider the standard and best tools nowadays: git-buildpackage, pristine-tar to store the original tarballs, the 3.0 (quilt) source package format, and short dh rules files — all rather standard.

When we have a package forked from Debian, we maintain a Debian branch, importing with git-buildpackage's import-dsc on that separate branch; then you check out the Kali branch and merge directly from the Debian branch. It works rather well, in particular if you have configured dpkg-mergechangelogs, which avoids most of the conflicts on changelog files.

Another packaging-related workflow is britney, because britney needs some manual care. Most of the migrations are automatic for us, because we are really, really close to testing, and testing is already really consistent, so you don't have many problems. But you still have issues from time to time: when Debian used a force hint to bring a package in, we have to use similar hints — for example for packages which are not installable on i386 or the like, which are forced in on the Debian side too.

So what do we do on the quality assurance side? As I said, we have a Jenkins instance, and it runs mainly four kinds of jobs. The simplest test we do is installing our meta packages in minimal chroots. That's the test that breaks most often, usually because either a package needs to be rebuilt in kali-dev-only, or a package got removed on the Debian side — I'll talk about this later. We have an upgrade test: even though we are a rolling release, we do snapshots every four to six months, and we test to ensure that if you started with the last snapshot, you can upgrade up to the current state. We ensure that our ISO build process works, because many of our users customize their own live-build images, and we want to be sure that it always works. And we also test installation from the ISO image, although this is a rather superficial test: I think Holger had great plans but did not manage to finish them at this level; we kept the job running, but I think it's not checking much.
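As a minimal sketch of the first kind of job, the meta-package installation test — not the actual Jenkins job definition — this debootstraps a minimal kali-rolling chroot and tries to install one meta package in it, failing if apt cannot resolve the dependencies. It must run as root, and it assumes the Kali archive keyring is available on the host (otherwise add --no-check-gpg to the debootstrap call).

    #!/usr/bin/env python3
    """Sketch of a meta-package installability smoke test."""
    import subprocess
    import sys
    import tempfile
    from pathlib import Path

    MIRROR = "http://http.kali.org/kali"
    CHROOT_ENV = {"DEBIAN_FRONTEND": "noninteractive",
                  "PATH": "/usr/sbin:/usr/bin:/sbin:/bin"}

    def test_metapackage(meta: str, suite: str = "kali-rolling") -> bool:
        with tempfile.TemporaryDirectory() as chroot:
            subprocess.run(
                ["debootstrap", "--variant=minbase", suite, chroot, MIRROR],
                check=True,
            )
            # debootstrap leaves no apt lists behind: configure and update first.
            Path(chroot, "etc/apt/sources.list").write_text(
                f"deb {MIRROR} {suite} main contrib non-free\n")
            subprocess.run(["chroot", chroot, "apt-get", "update"],
                           env=CHROOT_ENV, check=True)
            result = subprocess.run(
                ["chroot", chroot, "apt-get", "install", "-y", meta],
                env=CHROOT_ENV,
            )
            return result.returncode == 0

    if __name__ == "__main__":
        meta = sys.argv[1] if len(sys.argv) > 1 else "kali-linux-full"
        sys.exit(0 if test_metapackage(meta) else 1)

A test like this breaks as soon as any dependency of the meta package disappears or becomes uninstallable, which is exactly what we want to catch daily.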
We would like to go further in terms of quality assurance, because security software is very specific, sometimes hard to understand, and for a packager without specific knowledge of the domain it can be hard to ensure that a tool works. So we would like to be able to synthesize this knowledge into tests, then rely on those tests, and obviously hook the results into britney. Right now, nothing can block packages that we upload to kali-dev-only from reaching kali-rolling, and we would like to have this sanity check in the middle: we would upload to kali-dev-only, it would run autopkgtest, and packages would only be accepted into kali-rolling if the tests succeeded. But we're not there yet.

So now the more interesting part for Debian: the problems that we have, related to Debian or not. If you want to look them up, you can check the bugs that we have filed. Not all of them are related to the infrastructure — many of them are about specific packages — but a few of them are, notably related to reprepro. As was said in the previous talk, reprepro has no integration with britney: britney outputs a file called HeidiResult, which is really the list of binary packages that you want in the resulting distribution, but reprepro can't use this — reprepro has a set of commands. So we have written a script which translates the HeidiResult file into a large set of reprepro calls (there's a sketch of the idea below). It works fine, but it's not really nice; I would rather have something integrated.

Reprepro also has no feature to keep the files of deleted packages around for a few days: when a package is deleted, it goes away from the mirrors immediately, and this causes problems for us, as I explained with MirrorBrain. So again we hacked something, with snapshots: reprepro has a snapshot feature, where you can snapshot a distribution at a given time, and it will then keep the files around as long as the snapshot is there. Except that the snapshot feature is not really complete: you can't really remove a snapshot, you have to individually remove the references that keep the files around. So it's again not really clean.

One of the most annoying problems that I have with reprepro is this: sometimes in Kali we upload a new upstream version of a Debian package before Debian does, so we push a new .orig.tar.gz file. When Debian updates later, its tarball is sometimes not byte-for-byte identical — either because the upstream tarball has been repackaged, or because it has been downloaded from another place: sometimes you can download a release from GitHub directly as a generated tarball, and sometimes from PyPI or another source, and the results don't match. So you have conflicting files, and reprepro does not allow this. It's sane to not allow this, but reprepro has no feature to help solve the problem, and we have to solve it: the only solution is to manually remove our Kali version from all the suites where it is present and copy back the Debian version. And it gets really hard because, since we want to keep files for a few days, we have those snapshots — and there is no way to modify a snapshot. So it's really tricky; it would be nice to have something better here.

Somewhat related is another problem: outdated Debian packages. It would be nice if we did not have to push new upstream versions on the Kali side, because the new upstream version would already be in Debian unstable or Debian testing. But it's not always the case, and when it happens it's mainly due to maintainers missing in action. So part of our solution so far has been to create a pkg-security team, to take over packages which are not really well maintained in Debian — so we're starting to do more work within Debian through this team.
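To make the britney/reprepro glue concrete, here is a minimal sketch of such a translation script — not our real one. It assumes HeidiResult lines of the form "package version architecture", with "source" as the architecture for source packages (the real format may carry more fields), and it only prints the reprepro commands instead of running them; a full version would also remove from the target suite whatever is no longer listed.

    #!/usr/bin/env python3
    """Translate a britney HeidiResult into reprepro invocations (sketch)."""
    import sys

    def heidi_to_reprepro(heidi_path, basedir="/srv/repo",
                          src="kali-dev", dst="kali-rolling"):
        """Yield reprepro command lines that copy the wanted packages into dst."""
        for line in open(heidi_path, encoding="utf-8"):
            fields = line.split()
            if len(fields) < 3:
                continue  # skip malformed lines
            package, version, arch = fields[:3]
            if arch == "source":
                # copysrc copies the source package between suites.
                yield f"reprepro -b {basedir} copysrc {dst} {src} {package}"
            else:
                yield f"reprepro -b {basedir} -A {arch} copy {dst} {src} {package}"

    if __name__ == "__main__":
        for command in heidi_to_reprepro(sys.argv[1]):
            print(command)

It works, but you can see why native britney support in reprepro would be nicer than shelling out hundreds of individual copy commands.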
Now for the problems which are most specific to Debian testing. The main one is packages being removed. We have our meta packages: they list packages from Kali, but they also list a lot of packages from Debian, and some of those get removed. One of the common reasons is release-critical bugs which are not handled on the Debian side — because, again, the maintainer is missing in action, or maybe because the maintainer does not care as long as it gets fixed before the release: it might not be a problem for him if the package is out of testing for a while. We have no solution yet for this, but we would like to add a supplementary Jenkins check which would basically monitor the output of how-can-i-help, restricted to the testing-removal warnings that it emits (sketched below). There are multiple ways to get this information, but how-can-i-help recently got a machine-parsable, stable output, so we can rely on it if we want. Our idea is mainly to install kali-linux-all, so that we have all our packages, then run how-can-i-help and filter the output that we get.

A second reason explaining why packages get dropped is the QA team requesting their removal: the package has a few bugs and a low popularity-contest count, so if the maintainer seems to be missing in action, why keep the package? It's true that a few packages got removed that way. Debian is simply not aware of packages being used in derivatives; we should aim to fix this, maybe through tracker.debian.org advertising packages which are relevant in the context of derivatives — I don't know.

And the last reason is the release team kicking packages out of testing to finish a transition. Sometimes a package needs changes in unstable, and when that doesn't happen quickly enough to be ready with the other packages, the package gets dropped from testing. It gets back in when it's fixed, but it still means a few days where the package is not in testing.

The second problem is packages which are broken. It often happens with partial transitions. What this means: you have a package A depending on package B with a minimal version, and for some reason the minimal version is not correct anymore — it should have been increased, but that was missed, either in the tooling or because the maintainer did not read the changelog carefully, things like that. And if both packages have not been uploaded on the same day, the first one will migrate to testing before the other, and until the second one migrates, the package might be broken at runtime. Such partial transitions create untested combinations that end up not working. It happens quite frequently in the context of GNOME updates, because GNOME does somewhat staggered transitions. (Only five minutes left? Okay.)

So that's a problem for us. We also have problems with upstreams not caring about backwards compatibility: when the package is uploaded to unstable, the situation is not really perfect, but over the months, until the freeze, such problems tend to be fixed — whereas in testing you obviously have a few weeks or months where the situation is problematic. And one last reason: bugs might be filed too late, or with an inappropriate severity, so that the package still migrates. We also often have problems with regressions in new kernel releases: even though Ben only uploads a new kernel once the .1 stable release is out, we still have a few regressions.
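Returning to the testing-removal monitoring idea: here is a sketch of what that Jenkins check could look like. It is not an existing Kali job, and I am not reproducing how-can-i-help's real machine-parsable format — the "--all" flag and the "testing-autorm <package> <reason>" line format below are assumptions for illustration only.

    #!/usr/bin/env python3
    """Sketch: flag testing-removal warnings for packages we depend on."""
    import subprocess
    import sys

    def testing_removal_warnings(relevant: set) -> list:
        # Run on a host where kali-linux-all is installed, so that
        # how-can-i-help sees every package we care about.
        output = subprocess.run(
            ["how-can-i-help", "--all"],  # flag name is an assumption
            capture_output=True, text=True, check=True,
        ).stdout
        warnings = []
        for line in output.splitlines():
            fields = line.split()
            # Hypothetical format: "testing-autorm <package> <reason...>"
            if (len(fields) >= 2 and fields[0] == "testing-autorm"
                    and fields[1] in relevant):
                warnings.append(line)
        return warnings

    if __name__ == "__main__":
        # Package names of interest on stdin, e.g. piped from dpkg-query.
        relevant = set(sys.stdin.read().split())
        found = testing_removal_warnings(relevant)
        print("\n".join(found))
        sys.exit(1 if found else 0)

The Jenkins job would then turn red as soon as a package that one of our meta packages needs is scheduled for removal from testing, giving us a few days to react.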
So, on to our wishes. I already spoke of britney integration with autopkgtest and of the missing reprepro features. But what we would really like is for Debian in general to care more about testing. At the individual developer level, I mean taking care of your own packages in testing and ensuring that problems get fixed: if you know that something doesn't work, take care to use urgency=high so that it gets fixed in two days instead of five. I would also like the release team to acknowledge that testing is more than a tool to build stable — it's also used by derivatives and by others, there are real end users using it — so please try to be more open to accepting quick and temporary fixes; even if a fix is only in place for four or five days, it's still useful. But this can obviously only happen if there are developers who care about this and are ready to prepare such uploads, so it would be nice to revive some kind of team which really cares about the status of testing.

I would like reportbug to behave better on derivatives, because it reports bugs directly to Debian without warning the user. Still for the release team: I would like some way to gather data about transitions. I would like to know if Kali is affected. We have the transition trackers on the Debian side, but I would like to be able to check on the Kali side whether we are affected too, without having to run my own tracker and copy all the tracker setup — and to be informed when a transition has been completed, so that I can check in a timely manner whether we have something to rebuild.

We would like some application bundling support, because quite a few of our packages — for example web applications written in Ruby on Rails — use many, many gems in different versions, not necessarily the ones which are packaged in Debian, so we currently end up bundling the gems with bundler inside our packages. I believe it would be nicer if we could rely on something like a mix of Flatpak, snappy and the like that would be generic enough to cover web applications and services: both for the case where we can't deal with the application properly with Debian packages, and for the case where we know the application to be unclean, doing nasty things on the system, so we want containment for those.

I would like a proper MirrorBrain package, as I said. And more volunteers for the new pkg-security team we created, which was announced recently on debian-devel. That's it.

[Session chair] Perfect timing, I guess. We have a few minutes left for questions, maybe.

[Audience] Just so you know — I don't know if you were picking on live-wrapper a little bit, but I've just this minute got the live-wrapper branch building with the UEFI support. It has isolinux and GRUB support. That's all I had.

[Raphaël] Okay. Well, it's not the UEFI support that we lack in live-wrapper; it's rather all the hooks and the runtime configurability that we have with live-build.

[Session chair] We have a question from IRC, from Pede. He says: "Comment to the speaker: it is possible to change the desktop background and the installer pictures without forking desktop-base and rootskel-gtk. We do so in Debian Edu; check the debian-edu-artwork package."

[Raphaël] Okay.

[Session chair] I think we can have one more question, as we're about to finish. Is there any other?

[Audience] I have a question. As a developer, I'm actually following the package tracker — which you're also quite heavily involved in — very closely. Why do I only see one derivative showing up there?

[Raphaël] You mean why do we see only Ubuntu on the Debian tracker? Basically, nobody else did the work to
provide data on their side. But actually, I would like each derivative to have its own tracker instance, so that I can build features in the tracker that would export data that each derivative could match back in its own tracker. At least Kali has its own tracker instance, and we use it to compare our packages with Debian's: if you go to pkg.kali.org, there is a link to a derivative page, and you will see what we have only in Kali, what we forked from Debian, and so on.

[Session chair] Okay, so thank you very much — we have to stick to the schedule as much as possible. Thank you.