Welcome to the git-buildpackage BoF. The idea behind this session is basically to find out — git-buildpackage has gained quite a few features here and there over the last year — what kind of features you are using, what kind of features you are actually missing at the moment, which features don't work as expected for you, and how they hinder or support your workflow. And maybe, along the way, to learn about the different workflows that are out there at the moment.

So, just to get into it: the idea was to find out which are the most commonly used tools from this package. You will probably all be using the `gbp buildpackage` command itself. So, who is using git-pbuilder? And those of you who are not using git-pbuilder — are you using sbuild? — Yeah. — Okay, I would love to hear how you are doing the sbuild integration, because I think it's not perfect at the moment. — Pretty good, actually. — Okay, but we don't have a `--git-sbuild`, so are you passing `--git-builder=sbuild`, or setting `builder = sbuild` in the configuration? — Yeah, something like that. — I don't use the integrated version, just plain pbuilder. — Me too. — I set `builder = sbuild`. — And is there a reason for that? — I dislike the default of always setting a tag after the build. — I think it doesn't set a tag by default; well, maybe it did in the past. — That was something I was annoyed with, so I went back to pbuilder, and basically I'm using git-buildpackage only with `--git-tag-only` after the build, if I'm satisfied. — Okay, so you're maybe not building with git-buildpackage at all; you just tag with it. — That's what I do as well. Typically I build a lot of times before I'm satisfied. — Yeah, that's the point.
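The builder setups mentioned in this exchange live in gbp.conf; here is a minimal sketch — the option names are current git-buildpackage ones, the values are just the ones people mentioned:

```ini
# ~/.gbp.conf — sketch of the setups discussed above
[DEFAULT]
# replace the default debuild invocation with sbuild
builder = sbuild

# equivalently, for a single build on the command line:
#   gbp buildpackage --git-builder=sbuild
#
# and to only create the release tag once a manual build is satisfactory:
#   gbp buildpackage --git-tag-only
```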
So what kind of pbuilder setups — plain pbuilder directly? The thing that takes most of the time is unpacking and installing all the dependencies. Are you using several chroots with pre-installed dependencies, or just an SSD and waiting for it — or even a tmpfs?

So, who's using the patch-queue handling, `gbp pq`? — Okay, that's a lot. — Sometimes. — Still fewer people than the tools above. So who's using `gbp dch` to generate the changelogs? Okay, that's many more people, actually. Who's using `gbp pull` and `gbp clone`? Only very few people. — And I'm missing `gbp push` there. — Yeah, I know about that one.

So, who is using the `gbp import-dsc` / `import-dscs` commands? — I have used that. — Regularly? — Yeah, indeed. When I worked on a package proposal for the first time, I used it to start using gbp. Basically I'm using it pretty often to work on patched packages — I usually work on things like security updates, importing an older version to get some history to work against. — In Kali, we're using it to keep track of the Debian packages: the Debian branch is the managed baseline. We set `debian-branch = debian`, import the .dsc onto it, and then we do our work on a kali branch. — So in Kali, if a package has a git repository, do you use that one, or are you always using import-dsc? — Well, almost always import-dsc, except for stuff like debian-installer or whatever, which is actually Debian-specific.

So who's using `gbp create-remote-repo` for creating remote repositories? That's not very — not at all many. Okay. And that's the new `gbp config` command, to find out about the current configuration and so on. — I think I requested that, but I hadn't noticed that it's there yet. — It doesn't matter; I'm using it now, so — okay.
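The derivative workflow just described could be sketched like this — the branch names are the ones mentioned in the discussion, the package name is made up:

```shell
# import the Debian source package onto a dedicated 'debian' branch
gbp import-dsc --debian-branch=debian foo_1.2-3.dsc

# then do the derivative's own changes on a separate branch
git checkout -b kali debian
```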
So we heard that `gbp push` is missing. Anything else that drives you crazy, missing as a subcommand? — Well, there are a few wrappers around gbp used in the pkg-perl team. We have a tool called `dpt`, from pkg-perl-tools, which works similarly with subcommands, and it has for example also an import-orig subcommand which calls `gbp import-orig` but also tries to figure out whether there's an upstream git repository, which it then writes into debian/upstream/metadata automatically, pulls in via the upstream-repo option, and sets everything up for using that feature. — Okay. So I can imagine there may be more wrappers around gbp in the pkg-perl-tools package whose features could move over to gbp proper. — Yeah, or make them more widely applicable, if they're working well. — So basically pulling in the whole upstream git history if it's available. — Yes. — Which is actually quite nice, because it's two steps at the moment. — I'm also using the dpt push, which is what I would like to see in gbp already, even for non-Perl packages, and it works well for me: it looks in debian/gbp.conf for the tag format and only pushes the Debian tags, not the upstream tags. — Okay, that's quite useful then.

Then the other thing — now that we've had all the commands — things I use very often: I'm usually importing the orig tarballs with `gbp import-orig --uscan`. I think somebody mentioned that before; it's actually very nice, especially if upstream signs its releases, because it's then uscan's job to verify the signatures. If not, you can just pass import-orig the URL, and it will download the tarball and import it, so you don't have to do that in two steps. Is anybody using that, or is everybody running uscan separately anyway? — I'm using that quite often; I like it really a lot. — I'm using it via dpt import-orig, so that's what I get indirectly. Okay.
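The two import variants just described might look like this — the package name and URL are placeholders:

```shell
# let uscan locate the new upstream release (and verify its signature,
# if debian/upstream/signing-key.asc is set up), then import it in one go
gbp import-orig --uscan

# or hand import-orig the tarball URL directly
gbp import-orig https://example.org/releases/foo-1.3.tar.gz
```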
Just a very quick question: could I use import-orig for that as well — is that the same, or not? — Yes. One thing we did for jessie was to remove all the `git-` prefixed commands. Before, we had both `gbp import-orig` and `git-import-orig`, and we dropped all the `git-` variants, so there's only `gbp import-orig` — `git-import-orig` shouldn't be there anymore in jessie. — Okay. — That's because the newer tools only ever had the `gbp` form, the old tools had both, and we had about three years of transition. — So what I learned on my jessie system is still valid? — Yeah.

One question there: if you run `gbp buildpackage`, the "package" part is somehow doubled. Will `buildpackage` become the default subcommand, so you can run gbp without it? — I've actually been thinking about that, but then you'd have one exception, and I didn't want to call the tool just `gb` — it would become very short and could clash with anything else. And, if not absolutely necessary, I don't want to go through another name change, because for people who are busy it's just annoying.

One thing I use quite often with `gbp pq import` is the time-machine option. It takes your patch queue and, if it doesn't apply on the head of the branch, goes back in history until it finds a point where it applies. Sometimes you have a patch queue which is outdated: you import a new orig tarball and your patch queue no longer applies, because you forgot to update it first. So you can just go back in history, find the last point where it applied, and take it from there.
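A sketch of the time-machine usage just described — the step count is only an example:

```shell
# try to apply the patch queue on HEAD; if that fails, walk back
# up to 10 commits on the Debian branch until a commit is found
# where the whole queue applies
gbp pq import --time-machine=10
```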
It also makes things simple if you have a package from somebody else and you don't know where the patch queue would apply — you can just use that option. The number, say `--time-machine=3`, says "go back no more than three commits". — So does it do that linearly? Does it go back on the Debian branch? — No, it does it linearly, because the usual use case is: you have an up-to-date patch queue, you import a new orig tarball and just forgot to update the queue first; you pull the new upstream and it brings in a lot of commits. If you have the full upstream history in the repository, we should maybe make it smarter — any volunteers for these kinds of things, just let me know; especially with the upstream history present, this would speed things up a lot.

— (If you speak up loud enough it will be picked up by the camera.) About the time-machine option: how does it choose the descent path to track? I mean, if you have an upstream git repository on a branch, and you also check the tarball imports into the upstream branch, then your merge into the Debian branch has two parents, and the upstream merge also has two parents. If you go the right way, it's true that you only have to go back, say, three commits for your patch queue to apply perfectly; but if you go the wrong way, it won't apply at all. — Usually, if you use import-orig to import the tarballs, the Debian branch is your left-hand parent, so it just goes down the first-parent path. It's not very clever about trying different possibilities; it just takes the first parent, and there you have it. — And if the point isn't there, because the merge was done with some other tool, it won't find anything? — No, not yet. The basic idea behind the time machine was exactly that simple case. So if nobody besides me is using it — well, if you're using it and it doesn't find the point in history, it should just report that and you can try by hand. — But what if you
use it because I forgot to rebase the patch queue before importing the new upstream release — that's how I got there. — Okay, but if you have a repository where it doesn't work, then maybe just publish it somewhere, if possible, and I can look at it. Other questions? This is a BoF, so it's fine if you're just discussing among each other and I'm just sitting here.

— I have a more general question: what about when you are your own upstream? — Sorry? — If you are your own upstream — it's not some remote code, you have some small script you want to maintain in a Debian package and just build it from git. There's no import-orig, no upstream, no uscan. — No, but you don't need one. What I usually do when I'm my own upstream, if I don't want to keep the Debian history on just one branch: I do my upstream development on the master branch, and I have a debian branch; whenever I do an upstream release I just merge master over to the debian branch and build. That's the way I do it at the moment.

That basically brings me to the next thing: you might nevertheless want a pristine tarball. When I build packages where I'm my own upstream, I use `pristine-tar commit`, which commits the pristine-tar delta onto the pristine-tar branch, so that if I want to regenerate the tarball at a later point, I can regenerate a byte-identical tarball. Especially in these cases, or when you don't use an upstream tarball at all, `pristine-tar commit` is kind of your friend here, because it just saves you the extra pristine-tar step.

We've been talking about `gbp config` before. There are two other commands which came up earlier in the workflow part, which are `gbp import-dsc` and `import-dscs` — for these kinds of uses. This will be partially superseded by dgit, because if you have a git view of the archive, you can just use dgit directly. But it's supposed to stay here for a long time,
and especially since Kali and others are using it outside of that context, it's fine anyway.

There's bash completion at the moment, and there's zsh completion. The options are quite long, and sometimes you have to put a branch name there, and you don't want to remember how the branches are named. The zsh completion is currently unmaintained, and I'm not using zsh myself, so I wonder if there's anybody here who wants to pick up the zsh completion. — Well, it works for me so far. — Okay. Maybe it would be a good point to move it to zsh upstream then, since they maintain a lot of completions upstream. The usual rule of thumb is: if the software's own upstream is maintaining the completion anyway, it's better there; but if it's not maintained, it's probably best maintained with zsh itself. So we could have a look at moving it over, because I can't even tell if it's upstreamable code or if it needs a complete rework or something like that. — Yeah, well, I'm in the team packaging zsh — there's even one more member here — but that's not my area of expertise. — Yeah, but it would actually be nice, because at the moment I think it's just broken, is that right? — Yeah. — So it would be nice to have that fixed.
— I haven't noticed that. On jessie it works fine; maybe it broke on something newer. — Yeah, maybe it's just broken in unstable.

Some of the recent news — since the last time I talked about this, which I think was in New York: as we saw before, we have the one `gbp` super command now, which wraps everything else, and the `gbp config` command, which is not that old. And there's the proposed git naming scheme, DEP-14, which I think would actually be a nice thing to discuss here, because I think it's important for downstreams who look at our history, and important for us when we clone another repository and want to find the actual branches the work is being done on. So, who knows about DEP-14? — Okay. — What do you mean by "know"? — From my observations, most of the repositories you come across use different schemes, and I think what is proposed in DEP-14 makes a lot of sense in many, many situations. The first idea is basically to put everything into the `debian/` namespace — that's basically step 0, if I understand it correctly — so all Debian-related work goes there, and the upstream work stays in the `upstream/` namespace. Or rather — do you want to talk about it? You know it much better than I do.

— Yes. The idea, as you said, is to not mix things up too much, because the habit of using the master branch for the Debian packaging is a bit awkward when you want to import the upstream git history into the same repository: you always have to think the other way around. So the idea was to canonicalize the names — `debian/` branches, `debian/` tags, and so on — to be able to easily share a git repository between Debian and Kali, in my case, and with other derivatives too. There is a written description; I agree it makes a lot of sense, but not everybody agrees yet. I think it would be nice to move git-buildpackage over to this naming scheme. I don't know if we'll be able to convert all the tools to follow it, but since git-buildpackage is where much of the work happens, moving it at least seems like a good first step. — I should basically do a new round of discussion; I recently committed the results of the last set of discussions. So maybe a good step would be to ship, in the git-buildpackage configuration, a block where you only have to uncomment some lines to get a conformant naming, and then announce that the default will switch at — I don't know — whenever it's decided. If you make it easy to change, you make it easier to switch.

— Typically you can assume that the current branch is the Debian packaging branch. If there is a gbp.conf, by all means complain when the debian-branch setting does not match — if you set it explicitly, gbp should respect it — but if it's not set anywhere, the default should be the current branch. — I also had this experience once — just a second — because if you have, for some reason, different Debian branches, it's strange: things somehow get merged between the branches, and you always have to take care of this particular entry, and by default this setting would have to be amended every time. While I was working, I was in one work tree and wanted to import a new upstream version, so I just ran `gbp import-orig`; but since the default debian-branch is master, it checked out master and tried to merge the upstream branch into master — while actually I was importing a separate upstream branch, intended for a separate Debian branch, and I forgot to pass the option. I think it does respect `--debian-branch` on `gbp import-orig`, but since I forgot it, it merged onto the wrong branch. So the default should be to merge onto the branch you're on — not the default value, I mean, but the default behavior should be to merge onto the branch you're on, if it's a packaging
branch. We can add some checks — looking for a debian/ directory and that kind of thing — and if it's a packaging branch, you can have the debian/gbp.conf there, so it's not a problem; but if you're on some other branch, you can get into trouble. — Yes, except it's just annoying to hardcode the current branch name in a file, because when you merge it somewhere else, you have to edit it every time — or not every time, but still. — I don't get it: if you have the name of the current branch in gbp.conf and you merge this branch into another branch — say, another work tree of the same kind — it will complain anyway. — That's an option: treat it just as documentation of which branch import-orig eventually merges back into. — Then I'd pass `--debian-branch` explicitly every time, even though the value is there; it's just documentation otherwise. — But basically I do have to write down the upstream branch I'm tracking in debian/gbp.conf: when I'm on, say, the 1.6 and 1.7 packaging branches separately, I want to track the upstream branches separately, so I take the time to write that down; but I don't see the need to write down the debian branch.

So, what about the upstream namespace? I think DEP-14 at the moment says something like `1.12.x` if you're tracking the 1.12.x series — I mean, that's fine for most packages; it matters as soon as you need to track multiple upstream branches. — The idea is to use something below `upstream/`. As a default there's a suggested name, and for a specific series a name based on the version, like with `.x` as the last number, which you're free to use. — Yes, but basically it would be nice to standardize there. And I think the problem with the current default is: if you start with plain `upstream` by default and then later switch to `upstream/`-something, it would mess up people's repositories. So the question would be: should we then go for `upstream/`-whatever by default, and what would that be? — So, what can you actually
derive from the branch? You can try to figure out the distribution you're building for. If you follow DEP-14, gbp will try to find a pbuilder environment matching the branch name: if you're on a branch called `debian/experimental`, it will try to find a pbuilder environment called experimental; if you're on a sid branch, it will find sid; on a jessie branch, jessie. — A few weeks ago it didn't work at all for me. — Then show me the bug, please, because I'm using it all day; it should actually work. What was the problem? — It basically couldn't find the pbuilder base tarball; I wasn't using the embedded pbuilder setup but my own environment. — That might be the case. But did you use git-pbuilder before at all? — I invoked pbuilder directly myself. — Okay, that shouldn't make a difference, but then we should have a look and try to figure it out.

Some other things which are still ongoing and which I hope to finish: there's a fork of git-buildpackage which allows you to build RPM packages — anybody using that here? Anybody else? (By the way, I checked what I wrote: the default suggestion in DEP-14 is to use `upstream/latest`.) We have part of this RPM support in the Debian package already; we also have a spec-file parser, so you can do these things with RPM packages. If you're in a situation where you build Debian packages on Fedora or whatever, you can just use it for that, and getting the rest of the tools merged is ongoing at the moment. And we have some other changes, like bare repository support — so that you can import tarballs into a bare repository — and detached-head support, which you need when you're building from Jenkins and these kinds of things.

Then there's the merge mode "replace", which was mentioned before, I think in the skills exchange. The idea is basically that when you merge in your upstream version, which you have on your upstream
branch, then sometimes what you want on your Debian branch is exactly the upstream version plus your own debian/ directory. You basically don't care about any merge conflicts; you don't care whether upstream ships a debian/ directory — you don't want any merge conflicts caused by it. You just take the newly imported upstream version, put it on your Debian branch, and add your own debian/ directory on top. — Wait, I was missing that — thanks! — We might need some more tooling, maybe to move the debian/ directory over between versions, but I think it sometimes makes things much easier, because if upstream has Debian stuff in there, this totally helps.

As I said, ongoing changes at the moment: there's the work on merging the full RPM support, and we have a work-in-progress branch which builds and runs the tests with Python 3 — but I've not really used this myself yet, so if anybody wants to jump in and thinks Python 2-to-3 porting is the nicest thing in the world, you're more than welcome. — What do you mean exactly by support — to provide an API for applications? — Basically code migration: git-buildpackage is written in Python and at the moment it runs with 2.7, so it's about migrating that. The other part is about merging in the pieces needed to be able to build RPM packages; the RPM work was mostly done by Markus Lehtonen over the last couple of years.

So, what I'm currently working on: what's really bad about import-orig at the moment is that if you run out of disk space or these kinds of things, you have to rewind all the branches yourself. The idea is basically to be more clever than that and clean up after ourselves when we fail between the different steps — importing the tarball, merging to the branch, creating the tag. That's basically the worst failure mode: it's usually pretty easy to eat up all your tmp space, and then it fails and you have to roll everything back by hand. That's
basically the motivation for adding that. — My common failure is that the merge fails at the end. — Hopefully we'll have that resolved now; I only learned about it just now. — One thing I only noticed quite late: when I fix up the upstream branch and re-run, it usually complains about the tag, but I never noticed that it will also redo the pristine-tar delta. — That's basically what pristine-tar does: it doesn't care about what's on the branch, it just commits the delta again. So the idea is to clean up — that cleanup is implemented; that's the current work in progress, what I was working on yesterday evening for half an hour. The next thing I want to add, because otherwise it's kind of annoying, is checking up front the tag that would be applied at the end. Sometimes I actually get a successful import, but — for instance, because I repack my upstream tarball, decide the repacked tarball is not the one I want to use, repack it again and reuse the name — the tag already exists and the import fails at the very end, even though that's known immediately from the name, so it would be easy to check in advance.

So, if you can come up with anything else — maybe more documentation? There is pretty good documentation, but, coming from where I come from, a lot of people have a hard time understanding the workflow. — Most of the workflow is very well documented on the website, but I don't think everything in the Debian package is well documented. — I mean, the workflow part is very well documented, but there are parts missing, and I'm not that much of a good documentation writer. So if anybody wants to jump in here and document the workflows, that would actually be quite nice, because I think that's really something we could improve on. And I think the manual doesn't have a single picture — something showing what the branch layout looks like. So if there are any
volunteers who'd like to write SGML DocBook, that would actually help a lot; otherwise I just think it won't happen in the near future. I tried handing some documentation writing over to somebody once and it went quite badly, so it would actually be nice to have somebody jump in there.

— Coming back to the rollback: it would be nice if import-orig were idempotent. It checks: "okay, I want to import this version, and I already have this tag", and it checks: "okay, the file I would generate is the same as what is already committed" — then it would say "already committed and matching, I'm done." — Yeah, basically like pristine-tar not doing the same work over again. But I think pristine-tar is unmaintained currently, which is a bad thing in itself, because if we're using it that much, then we kind of have to pick it up.

— Not terribly important, but it would be convenient if import-dsc would support the Vcs-* fields to figure out which repository it is. — I think there's a bug filed for that one; I don't know.

— Just one more thing, back to the time machine: I just got the impression that maybe it would actually make more sense to go back on the upstream branch if you have failing patches, because the failure is caused by some upstream commit. — The idea behind the time machine was to find the last Debian version where this patch queue still fits, because that's your view of things. But you could actually go back on the upstream branch and find the place where it broke — it may be easier to fix the breakage if you can see where the patch stopped applying. That would be another nice possibility.

So, some things don't work terribly well, and there are bugs as well. I'm basically usually very happy with what `gbp pq` does when I export the patch queues, but it doesn't really respect the DEP-3 headers: if there is a DEP-3 header which says where a patch comes from, whether it's forwarded, and so on, and you import it and you
re-export it again, it will just basically forget most of the information that's in there. — There is another tool for a similar purpose called git-debcherry, in the gitpkg package; that one has recently got support for storing exactly this information in git notes. So maybe that's something for the git-buildpackage patch-queue tool too: store the DEP-3 headers in notes, because they get updated over time anyway — the Forwarded field set to "yes", or bug references, for example an Ubuntu bug report being added — and the note stays attached to the commit that carries the patch. — Well, I think we already filed a bug report against git-buildpackage to fetch the notes on pull; I think it has all the information and references in there. I saw that it's needed for git-debcherry, but I didn't figure out that it's needed for the notes as well.

— Just one thing: could we make git-buildpackage ignore the `.pc` directory completely? At which place does it care about it? — If you have one, git-buildpackage will say "there are uncommitted changes" — there's a `.pc` folder. — So you were building from a patched tree, or a project which uses quilt; even if all the patches get unapplied, the `.pc` folder is still there. — You could add it to .gitignore in order to build. — Exactly, but that might run you into other problems: if you're using .gitignore all the time, you might end up ignoring other things that should have been committed. Basically, I think we could handle it. So, is anybody going to file these bugs? Because otherwise I will forget.

Anyway, one thing that came up, I think in the skills exchange session before, is supporting DFSG-clean branches and these kinds of things. Does anybody want to share how they handle it — upstream tarballs containing non-DFSG-free stuff like images and so on? Are you using the filter options? — Yeah, we use the Files-Excluded field from the DEP-5 copyright format: you can specify excludes, listing
all the files in the tarball that you don't want to have in a DFSG-free tarball, and uscan handles that: it downloads the tarball, strips all these unwanted files and repacks it. So it fits nicely into the existing workflow. — Is this already in jessie, in stable? Because I remember some discussion that there were parts missing somewhere, and I'm not sure about the current state. — It works. — Oh yeah, very good.

Is anybody maintaining packages with multiple upstream tarballs? — Not anymore. — Not anymore, because there's no support for it in git-buildpackage at the moment, and I kind of haven't come up yet with a sane workflow. Do you want to use submodules, or detached histories for the different tarballs, or git subtrees, which is the new hot thing, or something like that? The more I think about it: if you do it with different trees, it just gets more complicated than using submodules and building the additional tarballs from those. — Some support would be really nice, because in my experience many projects tend to bundle other libraries — I know it's bad, but sometimes I want to build a package even if it's not fully unbundled — and they have the bundles as git submodules in the GitHub repository. If you could generate the supplementary tarballs from what they were built from, packaging with the corresponding component tarballs — basically just using one repository, not caring that they are different projects, and dealing with it at build/export time, or using submodules. — That's basically the point where I usually give up and say "let somebody else handle the problem," because I can't. I did not think about it too much. — Sorry, you go first. — Maybe we could have multiple upstream branches, one per component, and — well, the thing is, it needs a bit more thought, maybe also with uscan, to see how we can track them. But in the short term, what would be nice is simply to be able to take an existing git repository with submodules and
generate the supplementary tarballs: we only track the references, but the missing parts should be able to be generated from the git submodules. — Well, my case is a slightly perverse one. I just have a package with many similar tarballs, no processing needed; I just need to throw them in and create many very similar packages from them. So I don't care how the tarballs are stored — I can store them externally; I'll figure out how to fetch them, and I just store a few bits internally. I just don't want git-buildpackage to get in my way, and it does at the moment — or, from the little I remember, I think it does; svn-buildpackage had to be patched for this to work. I think git-buildpackage gets in my way mainly in getting the tarballs, but I'm not really sure. — So maybe we can just sit down and try it out.

So, there are unit tests within git-buildpackage: if you want to submit a patch, you can run the tests, and please also extend them so they cover your new use cases. There are some component tests which are currently stored in an extra submodule, so if you do a `git submodule update`, the component tests will be run as well. And you can build the documentation, in case you really want to add some workflow descriptions and these kinds of things directly: it generates the HTML pages and the man pages, and there are some epydoc-generated pages for the Python parts — especially the repository-related classes are actually documented quite well, which might help if you want to contribute some patches.

So that's basically everything from my side. — Time's over. — Time's over, so — no other questions, then? Is it one minute or over? Okay.
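The contribution steps mentioned at the end (unit tests, component tests in a submodule) could be sketched as the following session — the repository URL is a placeholder and the exact test-runner invocation is an assumption, not something stated in the session:

```shell
# fetch the sources (URL is a placeholder)
git clone https://example.org/git-buildpackage.git
cd git-buildpackage

# pull in the extra submodule holding the component tests,
# so they are run alongside the unit tests
git submodule update --init

# run the test suite (assumed runner; the tree may use nose or pytest)
python setup.py nosetests
```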