So, I'm happy to moderate, and I'll say up front I haven't done a lot of prep work, but I'm thinking of picking up where we left off. How many of you were here for the previous GBP skillshare session? And how many of you were not here for the previous GBP skillshare session? Okay, cool. Right, so it's a mix. So, come on in and sit down, no need to stand in the doorway. We'll probably cover some of the same territory that we covered in the previous session, but probably people will also have other things to share, because they're really cool too. So, the basic idea is to spend the 45 minutes that we have talking about our GBP workflows. One possibility is to pick a package, and I could drive a console or something if people wanted to tell me what to do, and we could look at packaging one. That's something that we didn't do at the previous skillshare. People could call out and tell me what to do. I won't do everything that you tell me, but I will do things that look reasonable. If you want to take as an example, like, packaging glue, hella or whatever, to show how it works. We can also run through some of the highlights of last time. What do people think would be useful? What about something that's already in here, where we could use it to understand the tags and all this stuff, where the packaging is already done? Off-screen, okay. Okay. That one does not use the official tag mechanism that we just mentioned. So then we'd have to fake it and make the tags ourselves. Let me ask another question first. Who here currently uses Git for their Debian packaging, whether with git-buildpackage or not? So, I think that's nearly everyone now. No, not everyone. Okay. And who here does not currently use git-buildpackage for their packages?
So, I'm assuming that the folks, even the video team, you guys might not care about git-buildpackage, but I'm assuming that the folks who are here for the session are here because they're interested in learning how to use git-buildpackage. And sometimes I think starting with the thing that has the most complicated corner cases is not the way to go for those folks. But maybe, but there aren't that many. I don't know. I don't want to scare people away if they're not used to the base case. Yeah, I'm not sure what the right next steps are. Do you have a package in particular, for example, that you want to look at? A few areas you're thinking of for this? Well, I think the problem you're referring to is that upstream doesn't maintain proper tagging. Yes. Like, for example, I have a package where they tag with dates instead of versions, so all the automatic stuff doesn't work. Like, they release a version, 0.9.0, and the tag is not called 0.9.0 but 2015-05-something, some date. And I need to maintain separate tags to translate those dates into versions in order to get at least some kind of automation. And that's what I'm doing, but I don't know if that is the best approach; hard-coding it every time you invoke git-buildpackage is also... Well, you're only hard-coding it every time you invoke gbp import-orig, right? In which case you have to have looked at upstream anyway. I mean, as a packager I try to look at upstream's code; I don't lose sleep over every single commit, but I do try to look at upstream's code. So I have to do something before I'm doing any import-orig. And so then I probably... I mean, the easiest thing for me to do is to know what the names of the most recent tags are. Yeah. So in terms of automating that, I'm not sure... No, no, well, but on subsequent releases, I mean, the problem will recur on every release. Yeah. So, every import-orig.
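A minimal sketch of that date-to-version translation idea, using only plain git in a throwaway repository (the version number and date tag here are invented):

```shell
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "upstream release 0.9.0"
git tag 2015-05-04                 # upstream's date-based release tag
# Add a parallel tag in a conventional format pointing at the same
# commit, so the release can always be found by version later.
git tag upstream/0.9.0 2015-05-04
git tag -l 'upstream/*'
```

Both tags point at the same object, so the extra tag costs nothing, and keeping it in the packaging repository preserves the date-to-version mapping for the whole history of the package.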
I say, oh yeah, and the upstream-vcs-tag option for this one is, you know, closing-purple-unicorn. Yeah. Exactly. Whatever they chose. But then that's just... You just supply an extra argument for one release each time. You're automating... No, no, I'm not automating it. I'm just creating my own tag, also for me to have as a reference for future use. Like, you know, I will forget about this, and then I would have to look in the changelog for which version it was, or in the Git history for which commit it corresponds to. If I maintain my own additional tags with a sane naming convention, then I always have this at hand for the whole history of the package. Right, that makes sense. And if I supply it only on each import, then the information is lost after the import, or not lost, but deeply hidden in some way. Have you shown those tags to upstream? No. Why not? Well, they don't really have... Well, I could do that. I think you should. So I've actually had an experience where upstream was using an underscore in their tags, like v-underscore-version-number, and the version number had all the dots replaced with underscores as well. And I looked at it and I was like, why are you doing this? It's a pain in my ass to figure out what the right tag is. And I did a little revision-control archeology to figure out why they'd done it. It was because they had converted from CVS, and CVS won't let you put dots in your tags because it would look like a revision number. And so I just wrote upstream and said, I think you're doing this because that was the CVS convention, and it would be much easier for me as your downstream packager if you used this other convention instead. And here's a couple of git commands you can run: for the new version you just released that's got the underscores, could you run this command, and that would put another tag on it.
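If rewriting upstream's tags isn't an option, gbp can also be pointed at the odd tag format directly; a hypothetical debian/gbp.conf fragment for an upstream whose release tags look like v1.2.3 (the mangling variant for underscore tags is only in some gbp versions, so check gbp.conf(5) before relying on it):

```ini
[import-orig]
# upstream tags releases as v1.2.3:
upstream-vcs-tag = v%(version)s
# some gbp versions also support character mangling in the substitution,
# e.g. mapping '.' to '_' for upstream tags like v1_2_3:
# upstream-vcs-tag = v%(version%.%_)s
```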
And then for future releases maybe you could think about switching to the new convention. And they were like, oh, yeah, sure. Yeah, no problem. I guess it makes sense to talk to upstream in this case, because it also seems they can't decide between the date version and the real version. Yeah, so it's not only pushing, it's somebody who says: hey, it matters to me, and if it doesn't matter to you guys, just do it the way that matters to me. Yeah, that's one of the things that I definitely recommend: we should actually help fix the broader ecosystem that way, by saying, at Debian we like sane packaging practices. It actually reminds me of a completely different thing, which is this tradition at Debian where we write the wiki page for the how-to and it only covers, strictly, the absolutely politically correct case, and nothing more. And then we have this kind of session where all sorts of things are flying around, but we're not capturing them anywhere that's actually helpful. No, we are. Well, right, but that's not nearly as easy to access and find as a how-to or something. Am I making sense a little bit here, right? There are so many special cases that you would like to have notes on, you know, by the way, by the way, by the way. And in the wiki there are, you know, three tutorials for git-buildpackage in particular, and they're just slightly different, not really compatible. So you follow one and then try to follow another one, and later it just breaks. So I don't know if that's worthwhile. I mean, it could be a worthwhile use of time. We did that in the Haskell group at DebCamp, right? One person just spent the whole DebCamp, me, really, just rewriting the tutorial so that there was just one which covers several cases, with notes. Well, I guess it depends on who's reading the tutorial. I sometimes find that if I'm reading a tutorial that's huge, I'm just like, whoa, I don't have the time to take all this in.
So maybe you've got good ways of writing a tutorial that allow those special cases to be broken out and skipped over by people who aren't encountering them, and otherwise it's just, you know, wall of text, eyes glaze over. But in general, I think that's a valid point: if there are exceptions, often you need a bit more detail. But another point: anyone in the room who started with git-buildpackage while being quite new to Debian packaging altogether? Most of the tutorials are geared towards migrating from, like, no version control at all, or from svn-buildpackage and so on. So the case where you start from scratch is not covered in a lot of tutorials; there's a huge focus on converting existing packages to git-buildpackage. That's just my observation. I think this would be something that could be improved. Yeah, I just wanted to say it sounds like what you're saying is that the great thing about tutorials is that there are so many to choose from. No, but I totally agree with them. It's really confusing, because you follow one tutorial and then you realize, oh wait, it doesn't cover this little bit that I need. And then you pick another tutorial, oh, it has that bit, and then it doesn't work, because the workflows are different, and so they're not always easily combinable, and I think that's the problem you're addressing. And the tutorials don't say when something is just the developer's personal preference and when it's something you have to do that way. And I have the same kind of problem. I use the tool only rarely, so every time I have to go read the documentation, because I don't get how it works, and if I don't use the exact same invocation, it doesn't work. I mean, as a community of users of git-buildpackage, are we saying that we want there to be one canonical workflow, so that the user... No, I'm seeing violent yeses and violent noes.
So, I mean, again, we have these two camps. Is that what we want? I'm not even sure that's really possible, is it, to have one unified workflow? For git-buildpackage? For Debian packaging as a whole, sure, that's out of the question. I mean, Debian packagers, no. But, I mean, we have the git-buildpackage developer here, who can say: I'm just not going to bother with this particular workflow, and if you want to do that, go do something else. And, I mean, in some ways, I don't know what your policy is. Yes, basically, as long as... I think there are several things. You don't want to support every workflow, basically. And I think there is one workflow, which is the one in the tutorials and on the wiki pages, which is kind of the workflow I think most people are using. And if there is something else, like, for example, you don't want to store the upstream tarball in Git but you want to have it separate, and it's reasonable to support that without an endless amount of code and ifs in every place, then I think that's perfectly fine. And everything else we can discuss on the mailing list or the bug tracker or something like that. So if it doesn't work the way you think it should, then it can be a bug, or it can be missing documentation, or maybe the tool just can't do what you want. And if it can't do what you want, it might be out of scope for the tool, because it's a layer above or below: it can't drive Bazaar, only Git, for example; or it might be something where you want to kick off, like, ten Jenkins slaves which build all the packages, which would probably be a layer above. So that's the area where I think git-buildpackage should do its job, and where it doesn't, we should fix it, and maybe it's not very clear where the boundary is at the moment. Exactly, that last thing. "The tag must be named v1.0." One tutorial reads like that, and you don't know whether that's a limitation of git-buildpackage or not.
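For what it's worth, the tag name in that last example is not a hard limitation: the branch and tag formats gbp expects are configurable. A sketch of the relevant debian/gbp.conf keys (the values shown are, as far as I know, gbp's usual defaults):

```ini
[DEFAULT]
# branch and tag layout that gbp commands will look for
upstream-branch = upstream
debian-branch = master
upstream-tag = upstream/%(version)s
debian-tag = debian/%(version)s
```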
So it would be nice to just flag that. It's like The TeXbook, you know, with the dangerous bend signs saying: here, this is where you learn about the choices that other people have made. That's all right; that's really, really helpful. Whereas if there is something which is really fundamental, I don't mind that the tar files are stored where you say, and that's a choice you made, but then say so: make it clear you really have to do this, because otherwise gbp is not going to work for you at all. Yeah, that's really good. It happens with other things too; it's just a matter of getting it done. I mean, in some sense, the idea of supporting extra workflows is not just about whether it's more code to review and maintain. It's also about the other people who are writing documentation and tutorials and whatnot: oh, now they have to think about this other case. So sometimes it's more useful for the community, I think, to say: fewer workflows. Yeah. Yeah, I think that, yeah, they should be comfortable with this. Yeah, yeah. By the way, a question: where does gbp dch get the upstream version from? Sometimes it guesses correctly. What it actually does, and there's an improvement coming on that one, is basically look for the last merge from the upstream branch. It goes there. So where does this information come from? From the upstream tag? Yeah, exactly. So actually, it matters what the upstream tag looks like. It matters that there's basically the same version number in there as in the one for your Debian package, but it doesn't matter whether it's just the version or upstream-slash-version; we have some variance in that. You just have to tell git-buildpackage what the format is. What it looks like. I don't. But maybe it's configurable. Yeah, so it has a default there, where it says upstream-slash-version-number, and that's the default.
But if you want to have a format which includes the Debian version, you can specify that instead. I think it's not documented; at least I didn't find it in the gbp dch manpage. I think at least the upstream-tag option is documented at this point. Yes, you can specify it. It's too small. Yeah. But I know that I didn't have to. Yeah. And I read those, and there's one which says you should add tags like upstream-slash-version, which, if you do the import yourself, of course it would have. It could have come from upstream, though I don't know how upstream would put versions with "upstream" in them. No, it basically depends: if you use gbp import-orig for importing the orig tarball, it will just do everything for you, and the tools are aligned, so if you put all this information in the default section, it's right for all the tools. So if gbp import-orig creates the tag, gbp dch will obviously be able to read it. But in cases where upstream uses Git, then you obviously have to tell the tool how the tag is named. It does not work for me; I always have to figure out where I merged last and then tell it manually. Yeah. So if that is the case, then just publish your repository and, kind of, how you do it, and then we can have a look. Okay. Yeah. But that is the kind of remark which is helpful in a tutorial, right? It'll say: do gbp import-orig. Yes. Now it knows your version number. Okay. Start. If it actually doesn't know your version number, here's some text that tells you what to do. So these are the configs that you're talking about. Yeah, exactly. Right? Okay. And then there's also a git... So this is... Sorry. What I'm looking at is this file here. This is the default git-buildpackage config. And it could also live in... Actually, it has a manual page; it's quite nice. It shows you the different places you can put it, in the order in which it will look. So, I personally like to use this one.
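For reference, the lookup order being pointed at here is documented in gbp.conf(5); each later file overrides values from the earlier ones (listed from memory, so verify against the manpage):

```
/etc/git-buildpackage/gbp.conf    system-wide defaults
~/.gbp.conf                       per-user
<repo>/.gbp.conf                  per-repository, not shipped
<repo>/debian/gbp.conf            per-repository, shipped in the source package
<repo>/.git/gbp.conf              per-repository, local checkout only
```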
I don't muck around with this one, because I don't want to have to sync it across the different machines that I'm working on. And this one I don't particularly like, because it affects all of the package builds that I do from a single user account, and if I go to another user account, it doesn't work. Yeah, but it's useful for the key IDs there, for example. Yes, that's the main use that I can think of. And this one is for a git repository. And this one is for the packaging to be published. And this is... And this one we're duplicating, right? Yeah. So, all of these matter, and it goes in this order, right? Yeah. So, the last ones win. No, the first ones. Really? What? Yeah, configuration in this one overrides configuration in that one. So, the last ones. It depends on the direction of the arrow. Yeah. The lowest ones. Yeah. So, maybe the last one should be promoted before the... This one should win over this one? Probably. Because that's... I mean, I don't know what's right, but making a change in this once people are already using it is... Yeah. Complicated. Although this is the room in which to make such a change, right? Yeah. So, if the other one won, you would never have a chance to override something that comes down badly from Alioth, for example. Okay. So, that's why you have that one. So, if the Alioth configuration is somehow broken and you want to, like, override something, then you can put it in your repository. And if you swapped it around, then the remote configuration would always win. Well, we started a brand-new packaging repository, and I was wondering if we're doing it right, because upstream's tagging is perfectly fine. They even sign tags, but they also provide tarballs on their GitHub page. And we have set up something to track the upstream repository, but use gbp to import the tarball. So it tracks the upstream tags, but still imports the upstream tarball, which was something I read in some manual that's lying around.
So, is this the right way, or could we simply not use the upstream tarball, because we have the tags? I think there are two schools of thought here. So, one is, like, importing the upstream tarball and using the upstream-vcs-tag option to link the upstream history to your tarball. Me, if upstream is halfway sane, I skip the tarball entirely, because I have, like, the whole history; why would I bother importing another artifact which I don't know much about? And if I keep the upstream history in git anyway, I then use a git-generated tarball. The only reason otherwise is basically that if upstream, for example, signs its tarballs but doesn't sign its tags, then you have, like, a better trust chain if you use the tarball. Sorry? In my case, the upstream tarball contains generated files which define the version of the software. So make dist makes the difference there, right? So, even though I don't need the autoconf stuff, I still need the tarball, because it does contain release-specific bits. Yeah, I think that's fine and supported. That's why I said "if upstream is halfway sane": usually you should be able to regenerate that with make dist. Yes, it is perfectly sane. It's a great upstream if it does its releases right; sometimes it does not, and then you just have to build the tarball from git. Yeah. We also have upstreams whose generated tarballs are worthless, because they base the version information on their release machinery. So, I see a couple of differences here. One of them is that I'm trying to encourage everyone to check their upstream signatures, right? And if an upstream signs their git tags and signs their tarballs, we can actually ship the signature for their tarball in Debian. As long as you don't have to filter it or something.
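The first school of thought above (import the released tarball, but link it to upstream's tag so the two histories stay connected) is typically wired up with something like this debian/gbp.conf fragment; the v-prefixed tag format is an assumption about a hypothetical upstream:

```ini
[import-orig]
# keep the bit-identical release tarball reproducible from git
pristine-tar = True
# upstream signs/tags its releases as v<version>; link the import to it
upstream-vcs-tag = v%(version)s
```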
As long as you don't have to filter it, right? And dak now supports that, and so that means we can actually get upstream signatures more widely distributed, and other people can verify upstream signatures. Whereas we can't do that with a signed git tag, because that would imply that we're willing to distribute the entire git history. And the other thing is, one of the things that's different about make dist, compared to git archive of that tag, is autotools, right? So upstream has a set of autotools that they're using to do the thing, and they generate a tarball with make dist, and it includes the autotools-generated files that they have. So are you doing autoreconf, or using upstream's autotools output there? Many people think that best practice in Debian is to do autoreconf in those cases. I'm... I don't know how they're doing this most of the time. I had a real problem with that, in that upstream has a configure and an aclocal.m4 which are broken, and I have to regenerate them. Doesn't dh_autoreconf_clean then do the right thing, the target that's supposed to clean those up? The files keep coming back and not going away, even in principle when I do make clean in my... If your upstream ships that file in the upstream tarball and you remove it, with 3.0 (quilt) it comes back, because it is in the tarball, and dh can't help there; dpkg-source couldn't delete files.
In unstable it can; now you can have a patch that removes files. Except it's not really working anyway, sorry, it's the same, I think, but it's not a good... Sorry. But besides, I mean, I actually do think that autoreconf has a point here, because Debian has its own set of autotools, and we'd like to not depend on the autotools-generated files, which are huge chunks of code that aren't really audited; relying on the pieces that are in Debian is probably better. But then there are these other things, like header files that upstream is generating manually somehow as part of the release process, which is different from autotools, and I don't know how to get at those. So I think those are good reasons to do the job properly. Well, is it reasonable to commit to the upstream and pristine-tar branches and write the pristine-tar history manually, or what? That's an idea. It's ugly. Yeah. I'm wondering, because as soon as it fails, you have to do more manual work, and redo or retry until it's working. Or don't you care, and just push the pristine-tar and upstream branches, like several upstream commits, so the next... or maybe you've already started... Exactly. So when it fails I'd like some rollbacks, but I'm cleaning up manually. So does anyone have a non-manual approach? Snapshotting .git before? I don't know, that's also sort of manual, right, just a different kind. Yeah, but it's shared into this branch and a little bit of that one; how are you going to do rollbacks for it? Sorry? Yes, I just remember the old SHA-1s, and when something fails you just reset the heads. And there are some subtleties, because it might create branches, so you don't just roll back the SHA-1, you need to handle the branches and all those kinds of things.

And first, before you try it: in our repository we need to remove files from the upstream tarball before we can use it. Is there a reason to use the tarball at all, or can we somehow remove the files when it's finally...? So I think you don't need it; you can use just the git tags. And either way, you can put the exclusions in the DEP-5 headers in the copyright file, regardless of whether you're using the tarball or not. Only when you use uscan to import the tarball will it unpack it and strip the files, if you want to get rid of them. So I'm in the lucky situation that... do you know about those headers? Yeah, we've been using them, but I was thinking about not importing the upstream tarball at all and, since our upstream tags, simply relying on the tags. So then you would do something like git archive with a filter on the archive to generate the tarball. Sorry? Yeah, so we git archive a tree-ish. Yeah, but then you could also apply a filter and say: I don't want these files. Yeah, okay. Okay. But yeah, given that there's nothing there that can't be done with the tarball, I would use the tarball. This came up at the previous discussion, and it sounds to me like there's no limit in terms of what you can do on Alioth other than the DFSG; I don't think the Alioth machine usage policy has a constraint about that. So I think, yeah, go ahead and keep that on Alioth, and if we find that it is a problem, that people want to make changes... I would go ahead and use Alioth to benefit the upstreams; keeping track of upstream's commits is a good thing. Yeah, I think there are all kinds of things that may need to come off Alioth. I mean, if there's stuff in there that's going to get Debian into legal trouble, like Metallica, or child pornography, or, you know, something even worse than Metallica...

Does anyone have any other specific GBP-related workflow problems that you've hit recently? Yeah, actually, we use the patch-queue thing, and we move our DEP-8 patches for the autopkgtests from the debian/patches directory into a subdirectory, and every time we do an export it moves them back, which shouldn't actually happen... at least, probably I have it set up wrong. Yeah, because the DEP-8-specific patches are not part of the patch series. Okay, so they are separate. Yeah, we keep them on the patch-queue branch, the patch-queue/master branch, but when you do an export, you have to git am the DEP-8 stuff and then manually remove them from the patch series. Yeah. So what's the purpose of keeping them in the package and not having them in the series file? Well, we want to be able to revise the patches. Okay, let me see if I can get this right; I'm not sure I understand the scenario. So you're using git-buildpackage with the patch queue, but you also want your patches, the ones that are in debian/patches, to have the DEP-8 headers? Some of them; we have the DEP-8 ones to support the autopkgtests. No, DEP-8... there are headers for the patches. So are you suggesting this for the subdirectories? There is a Gbp-Pq: Topic header you can attach to those patches, and then when exported they're put into the subdirectory you prepared for them. So we don't want them to be applied when we build the package; we only want to apply them when we run the autopkgtests. Yes. Okay.

I have an interesting issue, I think the same use case: upstream develops in git and everything is included in git, so you don't have to deal with a tarball at all; you obviously have a pure git workflow. So you can configure your repository locally to have an upstream remote, pull when there's actually a new version you want to update the Debian package to, and do some stuff locally. How do I communicate the fact that the upstream repository is at this other remote URL and has a particular layout? Because right now what I do is essentially write in debian/README.source how to configure this and how to update to a new version. Is there any way to automate that setup for, you know, collaborators? Not within git-buildpackage, no. Because usually, when upstream uses git and tags, you actually don't have to do much more than a git fetch, because that pulls in all the tags, and that's all it needs. But your new contributor doesn't know about the upstream repository. Yeah, sure, I understand that, but basically that's an area we currently don't cover. Would it be useful to put something in debian/gbp.conf that says the upstream repository is over there, and then you can have tooling that makes use of that? So could gbp pull then pull from the remote as well, or something like that? The Vcs fields? Those are already taken: the Vcs-* fields in debian/control are used for the Debian packaging repository, not for upstream, right? And if our goal here is to merge upstream's history with the Debian packaging, then your co-maintainers want both. So maybe use the Source header... well, really, the header in debian/copyright usually points towards a URL that explains what the project is, right? So it could point to the source code, it could point to the... It could, but other people might be using that; I definitely don't want to change that right now for some of my headers. Could they point to a description of the project which is actually useful? Yeah. I would say this is actually a missing field, in a sense. It's sometimes scary that you get a package, you just fetch it with apt-get source or whatever, and there's really no clue as to where the actual source of that package came from; it's hidden somehow. So putting it in README.source is nice, that's actually a service, but many packages don't have any README.source at all, and a formalized place where we know to look for it would be even better. Yeah, exactly, so a field somewhere would actually be very nice. Could it be part of debian/control, to make it machine-readable? So who do we need to file a bug against? Policy? Let's get the tooling in place; if the tooling exists, then we can have the policy. If we put the policy in place first, then we'd have to build the tooling. It could be something like Upstream-VCS. There's the DEP-12 proposal, which proposes a new file in the source package, the upstream metadata. I've seen that it's already in use. Okay, so it might go into that instead. Yes, and that would be a place to have the URL for the upstream history as well. Does that sound reasonable to propose? And that makes it not GBP-specific, which is what I'm going to do. But that doesn't tell you how to get the source for that particular version; that's my scary thought. We already have DEP-12... Yeah, well, if the rest of the configuration is in the gbp.conf, then once you have the repository, the repository plus the gbp.conf should be enough. No, no, my question is very simple: where does... okay, uscan should know that. What it should, actually. So it's not declarative, but... Okay, isn't it in the watch file, the file you first fetch the package with? Yes, but I think uscan is up to date, because uscan is what's used to fetch the tarball. So as long as uscan works, that's quite true, but that's the thing. Well, there's no policy requirement to have a working watch file. Well, that's because it's really complicated for some things, right? Yeah, some things are possible, but then there are also upstreams who are their own Debian maintainer and don't bother publishing one, because why would they scan for their own releases, or something else, I don't know. And you could argue that where it's complicated, it's complicated exactly because the metadata is missing. Really.

So, we only have a few minutes left. I think this has been useful; it's certainly been useful for me to see what people are doing with it. Is there a way that we want to follow up further, or are there open questions about how to continue working on this? I have one. A few months ago you had gbp rename the patches, so when I import the patch queue and export it again, the names change. Could you have an option to not do that completely? I preferred the previous behaviour. Okay, yeah, I think either there's a patch for that or... yeah. Because right now I sometimes have to change the commit message: I have to delete the numbers, and then I have to fix the commit message. I'm not sure if there's already an option for that; we'll have to check, but it's reasonable. Yeah. So, I wanted to understand... I don't understand how to use the patch queue with the DEP-8 patch headers. Me too. No, you guys... DEP-8 is the autopkgtest one; do you mean DEP-3, maybe? The headers that go in the debian/patches files; there are some headers that mention things like which bugs are closed and how the patch relates to Debian. Yeah, DEP-3. Somebody would have to write a parser to pick them up, because gbp pq doesn't handle that at the moment. So, yeah, I really like that idea, and I'd also like some kind of input: are you using those headers, would they be worth picking up? For us, we use... I think, in order to encourage use of it, one could write a pre-commit hook that only applies on the patch-queue branch and pre-populates the commit message, so you have some template, and this could actually encourage some kind of formatting. This would be optional, of course, but it might be an idea. Yeah, the problem with commit hooks is that you actually need to manually install them, right? Yeah, a pre-commit hook, of course; I'm not familiar with all the hooks, but you know what I mean: pre-populating the commit message to encourage a format. And if you're cloning the repo as another user, you don't get the hooks. That's why I would rely on that, or not rely, but it's handy. Yeah, but I wouldn't... okay, I wouldn't use it. Okay, but if you put it in a commit message and rely on that... Sorry, if you put this in the commit message as free-form text, then those lines will end up after the fold in the patches created by git format-patch and applied by git am. Okay, that might be a challenge, and I don't know about that, but I think that's a problem, isn't it? I don't know. I think it is. So this is something that people could look into and report back on, and say: hey, here's how you do it. That would be a really useful thing for everybody. So, the three of you sound like you have a couple of ideas; if you want to try to flesh something out, that seems reasonable for the three of you, and report back to the mailing list. So, we've only got one minute, we have less than one minute left. So thanks, everybody, for the useful discussion. Thanks for the talk. Yeah.