So a few months ago I started putting together a wiki page. Microphone? Give me the microphone. Thank you. I put together a wiki page that references the various tools such as dh-make-perl, gem2deb, pypi2deb, npm2deb, cabal-debian, and dh-make-golang, which aim at automating the creation of Debian packages, usually from upstream package management metadata. So the goal of this BoF is to discuss the status of those tools and to identify opportunities for collaboration between them, because I have the impression that each team developed its own tool without really looking at what the others were doing. Most of those tools rely on language-specific upstream packaging systems such as RubyGems, PyPI, CPAN, etc. The general idea of all these tools is to take as much information as possible from the upstream metadata, so that in the end you get an almost ready source package.

So what does it bring? For Debian, it makes it much easier to standardize packaging practices in teams, because everything you generate you don't want to change anymore, and as a result you get packages that look really the same across different pieces of software. It also centralizes all the intelligence about the packaging, moving it from each package to a central tool. So for example in the Ruby team, which is what I know the most about, each source package becomes really simple and delegates as much as possible, stuff like how to build, which files to install, or how to run tests, to that central tool. It also makes it faster to package simple packages, because the package you generate is almost ready, so you can focus on the hard stuff that actually needs attention and not lose time doing simple and stupid stuff all the time.

Also, for admins who are not Debian developers, it's an easy path to create Debian packages for what's not yet packaged in Debian. After all, we only have about 20,000 source packages in Debian, and there are probably at least 200,000 different pieces of free software out there that people want to use. It's also a way to create backports, or to package newer versions, to match what those admins really need. It could help achieve world domination by providing a single, unified way to distribute software: you could end up in a situation where all these admins know how to create basic Debian packages using these tools and then use a local repository to distribute software inside their infrastructure, instead of using manual installs, containers, etc. The path to that is probably still quite long, but I think it's something that could contribute to the current discussions about how to distribute applications.

There are some downsides. First, it's yet another additional tool or layer to learn when you learn packaging. The other downside is that by providing tools that automate most of the packaging work, we could end up in a situation where fewer people know about the lower layers of the packaging tools. I'm not sure that's really a bad thing, but I'm sure some people will think it is. It's basically similar to the question of whether every Debian contributor should be able to dig into debhelper and know exactly what debhelper is doing, or whether it's fine to not fully understand what's below the interface.
As far as I know, the current status is described on this wiki page. So maybe what we could start to do together today is just review that wiki page, look at what each tool is doing, and try to find ways to work on specific items to improve things. I think we might find that some tools could learn from what other tools are doing, or maybe identify bits that are currently part of one specific tool and could move to a more central place, so either debhelper, devscripts, or a dedicated package.

I also tried to think about discussion topics for this session. The first one is: should we have a common front-end tool for all automated packaging tools? On the wiki page I tried to reference the various ways to do all the basic tasks. You probably want to have the wiki page open on your laptop, because I cannot zoom in; the table gets too large. I reference all the various tasks that one needs to do, and of course each tool has its own syntax and command-line parameters to do them (see the example invocations below). It could actually make sense to have one single tool that knows about all the different tools and just hides them behind an appropriate interface.

We could discuss workflows, especially features that are present in some tools but not in others. We could discuss the target situation with regard to the archive: we could try to get fewer packages into the archive, because it becomes so easy to create packages for all the long-tail packages, packages that not so many people use; maybe there's no need to put those in the archive and we could just rely on these tools for people to build them locally. Or maybe we could aim for more packages in the archive, because we now have a way to scale to packaging a really large number of packages. Something that's quite orthogonal is: what's the recommended way to create a local repository? Because if you create packages like that, you want to distribute them inside your infrastructure via a local repository. The usual tool for that is quite hard to use; if you try it you can spend a few hours of your time before you get it working, and there isn't really a simple howto, or it's supposedly obsolete. I'm interested in hearing how you do that. And then we have an additional question about whether there is an expression of copyright that is both sufficient for us to generate debian/copyright from and something we could push upstream.

Okay, so maybe let's start by going through the wiki page. I noticed that on the wiki page there are only tools targeting language ecosystems and so on. What about plugins for other programs? I'm thinking about the Postgres world, where there's a gazillion extensions around, and I think the problems are exactly the same. The scope isn't intentionally limited to language ecosystems, that's just where it started; the question is more whether there are packaging tools for such extensions or not. I don't know of any, but maybe... Me neither. I've been pondering writing one; I have an idea for something like a dh helper for Postgres extensions. Yeah, the problem space looks really similar. I wrote one for Vagrant plugins; we are just about to have the second Vagrant plugin packaged with it, it's called dh-vagrant-plugin or something. I actually wrote one for Vim plugins some time ago, but I don't think it ever got into the archive. Vim plugins?
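As an illustration of how each tool has its own command-line syntax for the same basic task (creating an initial source package), a rough sketch follows; the exact options may differ between versions, and the package names here are made up:

    gem2deb foo-1.0.gem                     # Ruby: from a downloaded gem
    dh-make-perl --cpan Foo::Bar            # Perl: fetch and debianize from CPAN
    npm2deb create foo                      # Node.js: from the npm registry
    (cd foo-1.0 && cabal-debian)            # Haskell: inside the unpacked source
    dh-make-golang github.com/example/foo   # Go: from the import path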
I think there are two parts to the problem here. One is actually packaging stuff, and then there's the fact that all these software universes have a central repository or central registry: CPAN, Vim has one, Postgres has one which is not that much used, and so on. I'm pondering how much packaging these full repositories should be automated. It's related to that long-tail question: whether we really want to have 10,000 separate packages, or one big package where you can install sub-modules, or something like that. I think the actual packaging part is less interesting, because it just works somehow; you just write a script to gather the information and put it in the debian/ files, that's my impression. The more interesting question, for me at least, is whether we want to automate pulling in the list of all existing extensions, or bringing all of them in. It could be inside the Debian archive, which sounds a bit crazy but is also possible. It could also be an external archive. I think there are a lot of people doing that, packaging all their modules and putting them in a separate big external repository. And then there are further questions related to that.

So it's true that plugin systems should also be added to the page. What can we learn from the existing tools? Actually, maybe we could just... Who is involved with dh-make-perl? Just to see if we have experts. gem2deb? npm2deb? cabal-debian? And dh-make-golang?

So, in terms of underlying helper, most of them use dh, except cabal-debian, which uses CDBS. Do you want to comment on that? Well, it's not actually cabal-debian itself; we have a separate package called haskell-devscripts, which provides the extended rules for building Debian Haskell packages, and that just historically has been written with CDBS. So of course cabal-debian creates something that uses this helper. I think by now most of it has moved into a shell script that is executed from debian/rules. It could be changed, but it doesn't really matter, because it's working.

So, all the tools look at stuff like DEBEMAIL to generate the Maintainer field or the Uploaders field. Most of the tools do build-dependency and dependency generation; that really depends on what kind of data you can get from the upstream packaging tool. What several packaging tools do is use apt-file to do a mapping between upstream library names and Debian packages (a small sketch follows below). Sorry, I'm interrupting you in the middle of a sentence. What I did, because I usually do stuff without asking: every Haskell binary package has a field in the actual Packages file in Debian (the Ghc-Package field) which has the upstream package name and upstream package version, without any Debian mangling, plus the ABI hash, which is GHC-specific. So I can actually get the mapping between Haskell package names and Debian package names from the Packages file without any guessing, without apt-file. I didn't ask anyone before putting the field into debian/control; nobody complained. That's a bit hackish compared to the apt-file approach, yeah.

Is somebody taking notes? Please try to take notes, because I won't be able to do it; I would have to review the video, and that's hard to do. The Homepage field generation is quite obvious; it's usually found in the upstream data. Short and long descriptions as well; for npm, there's no long description in the upstream data, so they cannot do anything about that.
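As a rough sketch of the two mapping approaches mentioned above, assuming apt-file and dctrl-tools are installed; the module name is only an example, and the field name follows the discussion above:

    # map an upstream library name to a Debian package via a file it ships
    apt-file search Text/Markdown.pm
    # or query the dedicated field carried by Haskell binary packages
    grep-dctrl -F Ghc-Package -s Package,Ghc-Package aeson \
        /var/lib/apt/lists/*_Packages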
Tests: actually, I think gem2deb is where it goes furthest. gem2deb has a separate binary package for the test runner, so we can for instance use it for autopkgtest without having the full helper installed. And that helps catching missing build dependencies or missing runtime dependencies, because when you build the package it pulls in a whole bunch of stuff that you usually don't have when the package is just installed. You have a few ways of specifying how to run the tests in your package: you can use rake, which is the Ruby equivalent of make, or just a plain Ruby file that loads the tests when run. You can also provide a YAML file with the list of test files, but that usually doesn't work anymore, because every test suite has a specific way of being run. The nice thing about having that is that we were able to automate running the upstream test suites as autopkgtest test suites by just calling that helper. So Ruby packages don't need a debian/tests/control file for autopkgtest, because autodep8, which is the tool you use to figure out how to run the tests for a package that doesn't declare explicit tests, just calls that test runner (there's a small example of this below).

How many languages does that work for? You mean autopkgtest? Yeah, autopkgtest has special support for Perl; it's Ruby, Perl, Node and DKMS. So when the table says no all over, it's not necessarily no; probably, yeah, that's because it doesn't need any support from the source package for Perl. The Perl support is actually that they wrote a sample control file that is used for autopkgtest. The packaging tool should add a header to debian/control, which I'm not sure it actually does, but it should be easy to add, and the same goes for Ruby. Yeah, in the case of Perl and Ruby, we actually whitelisted three or four thousand packages because we knew they were standard and it would work. So we decided to start adding the header, and at some point in the future we can drop that whitelist and every package will carry the proper information.

Some tools install docs and examples, either determined automatically or described in an optional file. debian/copyright is quite interesting, because only some packaging tools are doing that. So if we look at the other tools, can the npm packaging tool handle that? The copyright part? Okay, so for Haskell it's rather nice: the upstream metadata has a copyright field, a license field and a license-file field, and often they are correct. So in this case we can use a stanza that refers to the upstream files: we take the copyright from the upstream data, we take the license name from the upstream data with a mapping to the DEP-5 names, and then we put that file in as the license text in the copyright file. For debian/*, what I decided to do is to say copyright is held by the people mentioned in debian/changelog, so there's no need to update the copyright file with our names. I also find that there's no point in claiming copyright on the debian directory of an auto-generated package, but some people want it there, so using this line you have the same text for all your packages and you don't have to worry about mentioning yourself.
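Relating to the autodep8 point above, a minimal sketch of how the automatic test discovery can be exercised, assuming autodep8 is installed and run inside an unpacked Ruby source package (the package name is made up):

    cd ruby-foo-1.0/
    autodep8        # prints the autopkgtest control stanza it would generate
    # gem2deb-based packages can describe their tests in debian/ruby-tests.rake,
    # debian/ruby-tests.rb or debian/ruby-test-files.yaml; the generated test
    # support is advertised via a "Testsuite: autopkgtest-pkg-ruby" header
    # in debian/control.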
Yeah, and sometimes there's copyright data missing, but... Sorry, what was the question? How does copyright generation work in npm2deb? It depends on how well the upstream specifies it. For some packages it was quite easy, but there are some packages which bundle files that shouldn't really be there, so I think it depends on the package. It's a good start, but it needs some work. Okay. Do you have a field for the license in the upstream metadata? Yes, usually, thanks. Is nobody scanning all the files to generate debian/copyright? Is anybody scanning the upstream sources? How hard would it be to generate a basic debian/copyright file by scanning all the files? licensecheck from devscripts is supposed to do that. Last time I checked, it didn't get the copyright from a README file, which is where most upstreams put it today, but that may have changed. I think there's some copyright generation tool around, but I forgot the name. It's not in devscripts. There is something totally automatic, it comes from some packaging team, but I totally forgot the name. license-reconcile? Yes, right. Where does it come from? It's a Perl module, Debian::LicenseReconcile. I think I tried it once but didn't try hard enough to get it working. That one is also supposed to update the copyright years in debian/copyright, which is probably the most annoying and useless task, so it's handy to have a tool do that.

debian/watch: most of the packaging tools create a debian/watch file. Something interesting that dh-make-golang does is creating the git repository, using pristine-tar, with the packaging tree ready to work on, because you usually want to do something like that anyway. I don't know if it creates it also on the remote side or only locally. It's only local, and when you want to push it to your remote repository there isn't a single command you can use for that. dh-make-golang also fills in the git tree metadata; it's really cool that it does that. I don't know what that is, actually. There are two tools, including the Node.js packaging tool, that create the ITP bug template.

Any other rows that should be listed there that aren't, to compare the packaging tools? Maybe the amount of work needed to get the package finished in the average case: does it really work, or is it just a rough starting point? Maybe whether it generates working packages or packages with lots of boilerplate that needs editing. It probably depends on the culture of the upstream community, whether the copyright is right and whether make install installs crap or not.
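A minimal sketch of the licensecheck scanning approach mentioned above, using licensecheck from devscripts; the output still needs manual review before it can become a debian/copyright file:

    # recursively scan the source tree and report detected licenses
    # and copyright statements
    licensecheck --copyright --recursive .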
Maybe, what's the experience here? In my experience it really works, except for the description; there are always a few things you have to do on the package. Several times I got a working package including the upstream test suite running; sometimes the test suite is somehow broken, so you disable it and you still get a working package, for some definition of working. Yeah, because when you run it just to create a package for a local installation, when the test suite fails it gives you the chance to just continue with the package; if you just want to install it on your laptop and don't really care too much, you get a working, if untested, package.

I tried to use that tool and it failed or didn't work the way I wanted it to, but I don't remember the details, unfortunately. I'm writing a new one, so it will be available soon. For now it's only on my computer, but it's already a lot better than stdeb: it generates dependencies, writes the packaging files, creates the binary packages, among other things. It's not ready yet, but it works for most packages; I just didn't test it very carefully because I started working on it only recently. Just to add to that, I'm also using the Python packaging tool heavily locally and I have good experience with it; only with big packages, like the Shinken monitoring system, did I run into some difficulties with user creation and stuff like that. So usually it works. I don't think the result has the quality to be uploaded directly as packages, but I'm not a professional user.

Do you want to comment about any other tools? I wrote the PEAR one for PHP, and another one as well. Is the PEAR one the one inside pkg-php-tools, and is it used by the PHP team? The problem with PHP is that most upstreams moved away from PEAR to Composer, so for PEAR it works, like with everything it's not a problem, but I don't know about Composer because I stopped doing PHP packaging. You can use the PEAR one until there's something for Composer; the packages are pretty similar. There is another tool which is not language specific, and which is not made by someone in Debian, which is FPM. It's widely used in some communities because it's very easy to use: you give it a directory and it turns that into a Debian package. The packages are quite horrible, there are no binary dependencies generated, almost no configuration file management, but it can run on any computer, I mean it can even run on OS X, so it's pretty popular for that, and you don't need to know Debian. It knows how to handle Python, but it is not integrated with our tools; it handles Python by just knowing that you need to put everything in /usr/lib/python2.7. It handles Ruby, I suppose, the same way. I know a lot of people using it because they find Debian packaging too complex and FPM reduces it to one command.

There is mh_make as well, which is worth mentioning, for Java packages using Maven. There is also dh-make-elpa, which can be used for Emacs packages with the new dh-elpa infrastructure. We could change, for example, dh_make to be a meta tool that detects, for example, Python or Ruby and calls the other tools, but that seems quite broad; or at least have dh_make suggest using the other tool if the package looks like something specific. That's something that debdry is already doing. Yes, dh_make is the default tool.
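A rough illustration of the FPM workflow described above; the directory, package name and version are made up, and exact flags may vary between FPM versions:

    # turn a plain directory tree into a .deb in one command
    fpm -s dir -t deb -n mytool -v 1.0 /opt/mytool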
I think debdry is something that we probably should talk about, because it's a very nice idea. For those who don't know: first of all, debdry is a way of not storing the output of these tools in your version control system, but rather the changes you need to make on top of them. Basically you run the tool, then you make your modifications as needed, and debdry makes sure only the modifications are stored, so you don't have any non-handwritten data in your version control system. It also automatically chooses the right tool, so it's a very convenient tool to use. But Enrico, as you might have noticed, doesn't seem to be here; he had this idea, he created the first prototype, and he's hoping somebody else will take it over. So maybe this is the right audience to motivate people: if you think this is a great idea and it should really be pushed forward, maybe somebody here wants to take it over and continue working on it, because I think it would benefit any of these teams that use such tools. Are you using it? No, it doesn't quite do everything I want, so that's why I'm actively looking for someone to take it over. It is packaged in Debian, so it's just debdry, and it has a git repository.

Are there any upstreams that include the debian/ stuff? The Linux kernel just produces working builds, as far as I know. Maybe that's something we could do: reach out to each upstream repository and talk to them about providing a debian/ directory automatically generated by those tools, or at least metadata that is useful for us.

There are several more rows on the wiki page, like which script identifies the format of the generated files, or whether the tool recommends cme updates of the packaging; I don't think it's useful to go through the full table, unless we look at things which are not supported, or where it's interesting that they are supported. Debianizing an unpacked upstream source: npm2deb cannot do that, but I don't think it's very important. Debianizing without building the source: that's just a way to speed up the process, so it helps to have that. For Perl there are multiple upstream tools that just work out of the box; they work on Debian and other systems as well, like CPANPLUS and cpanminus. So maybe it just doesn't work for that one for some reason, but in general they do. Can you go up a bit on the list? It says that Perl is not supported for some use case, which I either don't understand or is just not correct. So this is just about how you get the upstream package, the upstream archive. Really? When I tried, I couldn't get this to work. Basically it provides a way to do the whole chain of taking the upstream source, debianizing, building, installing, which we don't support yet.

So I think this question is interesting: how to refresh an already created package. Perl has dh-make-perl --refresh, which I think overwrites some of the files but not all of them, and tries to merge some of them, is that correct? I'm not sure whether it merges them or just recreates them. So it just overwrites everything except what has already been edited, like debian/control and debian/changelog? Yeah, that's true. What I've been using recently to do that, when I take a Ruby gem whose Debian package is quite outdated, is a graphical diff and merge tool to refresh the packaging; it's actually quite nice to compare each file and try to minimize the amount of changes compared to the generated files. Refreshing the packaging like that is actually quite fast.
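For the refresh case just mentioned, a minimal sketch, assuming an already packaged Perl source tree checked out in the current directory; how much gets merged versus overwritten depends on the dh-make-perl version:

    # regenerate the autogenerated parts of the packaging in place
    dh-make-perl refresh .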
What I do since last week, because we have tools that are managing the repository so the packaging is not stored inside git, is this: I take a separate empty git branch with no ancestors, I run cabal-debian on the old version, then I upgrade to the new version and run cabal-debian again; in between I committed, so I have one commit that represents the change that would happen to a completely unmodified, automatically created debianization directory. Then I git cherry-pick this temporary branch onto the main branch, and if I get merge conflicts I can use my usual git foo to resolve them (a schematic sketch follows at the end of this section). Do I understand this correctly, that what you're basically doing is re-creating the original version in order to do a 3-way merge? That's right. And does it work? Yeah. And the good thing about going back to the generated files is that you pick up all the changes to good packaging practices automatically; that's quite important to keep all the packages in sync within a team. Something that I've been wanting to do, but I don't think is feasible yet, is to never modify the debian directory at all, but rather patch the upstream before running the tool, to make sure the upstream metadata is in the perfect state you want it to be, so that cabal-debian or whatever it is will always generate good output. But I think we're not there yet. Are you pushing your changes to the upstream metadata back to the upstreams? Well, we haven't really tried that; right now we're fixing the output, and fixes to descriptions are not always upstreamable at all, they have a slightly different scope. I guess we should probably do that, it's quite important.

Okay, the same table mentions cme, that's Config::Model. I've never used it; I don't know if someone can comment on this magic. I think it's a kind of meta language to describe configuration files and changes to them. The good thing is that it already has a description for the usual packaging files, so you can just say, okay, refresh the control file. Refresh from what, what does it do? Maybe it's outside the scope, but one thing that cme does is comparing the versions in your control file with the latest stable: for example, if you have a versioned dependency which is already satisfied in stable, it can remove the version from it. I think it basically checks all the things that are described, for example in the control file, and fixes them if there are changes needed.

Okay, we are kind of running out of time, so does someone want to discuss one of those points before we wrap up? If nobody has anything else... I think when it comes to the question of having more or fewer packages, we should rather go for more packages, because of all the infrastructure we are building around them, with continuous integration and reproducible builds, and having sources available in a way that they won't easily vanish, plus all the other benefits we get from this ecosystem. I think we should try to keep that alive also with automatically generated packages. My observation with having lots of packages in Debian is that, also due to these tools, it has become easier to create a new package in Debian than to keep a package maintained, so the maintenance cost is actually higher than the cost of adding a package. I see people joining the team saying, oh, I want this package added, and they add the package and they disappear after two weeks, and I have to take over. So actually I am looking for a good policy regarding what gets into Debian and what does not, but I don't have a solution for our team yet.
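Going back to the cabal-debian refresh workflow described above, a schematic sketch of the throwaway-branch idea; the branch name, package name and versions are made up, and unpacking of the upstream tarballs is elided:

    git checkout --orphan autogen   # throwaway branch with no ancestors
    git rm -r -f -q .               # start from an empty tree
    # ...put the OLD upstream source in the working tree, then:
    cabal-debian                    # unmodified, auto-generated debian/ dir
    git add debian && git commit -m 'auto debianization of foo 1.0'
    # ...replace it with the NEW upstream source, then:
    rm -rf debian && cabal-debian
    git add debian && git commit -m 'auto debianization of foo 2.0'
    git checkout master             # back to the real packaging branch
    git cherry-pick autogen         # replay only the 1.0 -> 2.0 delta
    # resolve any conflicts with the usual git tools, then delete the branch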
I would be interested if there is some consensus on how you decide which of these many libraries you package. I think the approach of removing packages from testing works: if a package is unmaintained it won't be updated, but the team can orphan it and then I can just have it removed; that's what I do.

Okay, on the repository generation: I use a local script with apt-ftparchive and GPG and it works well. Although I don't think it literally "just works": you need to write the config file, and then it just works. Maybe that was just my impression, but I think it's a really handy tool, and if you really want a good solution, apt-ftparchive is it. Maybe the bad bit is that you really need to learn it first, to know what you should be using in different situations, but it works pretty well (a minimal sketch follows at the end).

So, is there anything in particular you're thinking about that you can share with us: is there any use in merging the tools together? I think it's mostly about reading some metadata file and then spitting out files, which doesn't seem to have much potential for merging, because it should be simple. The other question is: if team A, just to not mention specific teams, uses their language to spit out these files you mentioned, and another team uses their own other language to do the same, and then they are told that there is some third tool which is not theirs, is that good for them? I don't know, just asking. I think that for the language-specific packaging parts there's no need for merging, because each team basically has its own language; but on the other hand, for stuff like copyright, there are a lot of things we could share, because you basically read metadata, which is usually a YAML file or something, and then generate output from that, and I think it would be good to share that. cme does part of this, actually; maybe it doesn't support everything, or maybe not that broadly, but some parts. So you're recommending that all these tools use cme to implement that part? It may be worth it, potentially. I'm just considering that we have 20 different tools, each generating the control file in their own way; if it's basically the same thing, then it may be worth sharing.
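On the local repository question raised earlier, a minimal sketch of the apt-ftparchive approach; the path and key handling are only an example:

    cd /srv/local-repo
    apt-ftparchive packages . > Packages   # index every .deb below this directory
    apt-ftparchive release . > Release     # generate the Release file with checksums
    gpg --clearsign -o InRelease Release   # sign it with a local key
    # clients then add something like:
    #   deb [signed-by=/usr/share/keyrings/local-repo.gpg] file:/srv/local-repo ./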