OK, right. So yeah, it's one minute to 11 o'clock, so I'm a bit surprised. Maybe that's why I'm late for all the talks. OK, so welcome, everybody. I'm really happy to see so many people here, more than I expected; I thought it would be a rather small round. Cool. So yeah, we wrote up a preliminary agenda yesterday, which is in Gobby, on the screen. First of all, we might want to do a short, quick round of introductions. Also for the people following along at home, it might be nice to be able to connect the faces to the names. So those of you who feel like saying their name out to the world, please do so now. I'm Gregor, and I've been in the Perl group for a couple of years. I'm Harlan. I just joined about a year ago. I'm David Bremner. I forget when I joined the Perl group; it was some time ago. I'm Niko Tyni, and I joined something like 2008. And I'm maintaining perl as well, at least nominally. Dominic Hargreaves; I've also been maintaining perl and have been around for a little while. Hi, I'm Seha, and I just joined the Perl group a few weeks ago. I'm Mika, and I joined the Perl group a few weeks ago. I have been a member of the Perl group, but mostly don't do anything. I occasionally upload a package, and I've been doing a very poor job of being upstream for the pod generation tools. People don't have to introduce themselves if they don't want to. My name is Josh Enders, and I'm a Perl user. OK, so I'm Colin Watson. I maintain a couple of Perl modules, and I'm apparently in the Perl group. I'm Noah Meyerhans. I'm not an active member of the Debian Perl group, but I'm here to try to maybe bootstrap some activity into it and get more familiar with what's happening and how I can help. I'm Yixuan, and I'm not a member of the group, but I'm really interested. Currently, I maintain one Perl-related package. Hi, I'm Ryan.
I was active in the Perl group a while ago, and I'm interested in getting back into it. I'm Alex. I'm currently not in the Perl group, but I do maintain several Perl modules and a lot more Perl code. I'm a Perl user. I'm intrigeri. I've been part of the Perl team for a while, focusing on the Gtk and Glib bindings and things like that. I'm Dean Zhang, and I haven't been part of the Perl group for a while, but I'm still looking around. Hi, I'm Martin, a Perl 6 contributor, new to the Debian group. And then we have someone in the back. I have apparently been a member of the Perl group for years, but only a couple of months ago I got three packages in. So I'm newly active. Thanks, everybody. So what we've seen now is that the Perl group has quite a few members. I mean, the Perl project on Alioth — I haven't looked it up recently, but I think it's something like 200 people. I wanted to look at the first item, which was taking a quick look at the team itself. So I'd like to open the floor for people to give their impressions, their feelings: how the group as a group is doing, how the situation with members is — do we have enough or not enough? Is there something we could improve in this regard: communication, cooperation, things like this? I have a few statistics, which are just some numbers, for afterwards, but I'd first like to open up the floor to others. Well, for my part, I think the group is doing OK. There might be a few less people than we used to have; not quite sure, actually. And I have had the feeling for some years now that, with the sheer amount of packages, there are bugs that don't get any attention from us — normal or minor bugs or something like that, when they are not easy ones. I know we've been forwarding them maybe better than we used to, but there's lots to do upstream, things like that, too. But not bad anyway. Good, actually.
For a couple of thousand packages or whatever we have. Is it just me, or has the number of packages grown much faster than the number of team members? Yeah, it's not just me. OK. So essentially the same concern. I mean, I'm not in a position to criticize, because I'm not active. Anyone else? I was just going to say that I remember in 2008 that the title of my talk was "666 packages". That's probably a tiny number by now. Yeah. I mean, from my point of view, I also had this impression that you mentioned, Niko, that we've lost a bit of momentum and maybe lost some contributors, mostly during the 2012-2013 period with the long freeze. So what I did the other day was to try to see whether this impression can be backed up by numbers. I looked at Git commits — git shortlog, which David told me to use — and counted the number of people making a commit in each of the last three years. And the result is that the numbers don't support my impression. The three years each run from September 1st to August 31st, and the number of people making at least one commit to one of our 3,000 repositories — or rather, to the repositories' debian/ directories — is not changing a lot: from 83 to 71; last year was a bit lower, and now it's 84 again. And then I've counted, totally manually, the number of people with more than 10 commits and with more than 100 commits, and these numbers are actually quite stable. I might also have missed a day in between or something; I'm not sure if the since and until are inclusive or exclusive. But it's just to give a rough idea of the numbers. Gregor, do you have statistics for uploaders? I was asking Gregor if he had statistics for the number of active uploaders, which is always a bit of a worry of mine. But I know there's maybe three or two active uploaders. No, there are more. OK, so maybe I worry too much. Yeah, so the question remains: is there something we need to do in attracting new contributors and retaining existing contributors?
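As an editorial aside: the counting described here can be reproduced with git shortlog. The exact options the speaker used are not stated, so the invocation below is an assumption; this self-contained toy example just shows the mechanism of counting distinct committers touching debian/ directories.

```shell
# Build a throwaway repository with two committers touching debian/.
set -e
dir=$(mktemp -d); cd "$dir"
git init -q .
git config user.email a@example.org; git config user.name A
mkdir debian; echo 'Source: example' > debian/control
git add debian && git commit -qm 'First commit in debian/'
git config user.email b@example.org; git config user.name B
echo 'Section: perl' >> debian/control
git commit -qam 'Second commit in debian/'

# git shortlog -sn lists each committer once with a commit count;
# counting the lines gives the number of distinct contributors.
# A date range (--since/--until) would restrict it to one year.
git shortlog -sn HEAD -- debian/ | wc -l
```

With two different committers this prints 2; over the team's 3,000 repositories, summing distinct names across all debian/ directories gives numbers like the 83/71/84 mentioned above.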
I would like to say that my main involvement is as an occasional downstream of yours, because I often find myself dealing with the Perl version transitions in Ubuntu in response to corresponding things in Debian. We've recently just brought in 5.20. And my impression from that side of things, as somebody who deals with this downstream of various Debian teams, is that the Perl team is one of the best organized. The transitions are generally excellently well prepared. There are a small number of things to clean up, but there always are; that's fine. I think from that point of view, the question is whether you folks can cope, and just avoiding burnout. But as far as whether you're doing a good enough job, I think you're doing an excellent job. So please keep it up. Well, regarding attracting new contributors, and maybe more diverse contributors, there are these low-hanging fruit sessions that we've had this year. We'll talk about those a bit later today, so I'll defer that for now. I would like to put in my contribution, as a new project member, on what I found to be barriers to entry. It was hard for me — it shouldn't have been, in retrospect — to find the best practices for doing packages the Perl group way, which would be distinct from what I would have done on my own. It was also not clear to me how I could get a package which was already maintained individually into the Perl group. I had been invited several times to hand it over, but I wasn't sure how I could help the process and become an active group member at the same time. Maybe it's a matter of educating the people who are outside the group about how easy it is to get in and what they have to do in order to follow the group's standard practices. Giving a talk at DebConf is probably a great way to start. I guess it could be a short document: if you know how to use IRC and you show up on the IRC channel, everything else follows.
That, I think, is how to become an active member of the team. I am not discounting that we do have some documents, but maybe they need to be more publicized or something. But I wanted to point out, for anybody who is thinking about joining the team, that what we're talking about here is a communication issue, and not it actually being difficult to join the team. Just show up, and people will walk you through. Actually, that's maybe one of the few ways I'm still active in the team: giving people some hints on how to get started when they show up on IRC. Okay, so I guess we jump to the next topic. I guess many of you have heard about this project from Enrico called Contributors, which is about collecting in one central place all contributions from everyone who contributes something to Debian — be it a commit, an upload, an edit in a wiki, a translation, whatever. And we as the Perl group currently don't add our data to these contributions; we don't have a separate data source there. Tincho knows a bit about it and might give us a status update. I was trying to find the Gobby link to give a good overview. If you haven't seen it, this is a pretty cool project, and it got started last year; I was a bit involved in the idea before that. But basically, the idea is to get sources of information from different places and say: okay, this person has been doing something for some team. It's a way of recognizing who has helped in any way, and I think it's pretty cool. So I just got signed up by Enrico — didn't have much of a choice — to maintain the Git data source, which is basically still not there. I will also do a special data source for the pkg-perl group; that's what I arranged with Enrico yesterday. I just need to sit down and do it. But basically, what you will have in a few days is that every time you make a commit, you will appear as a contributor up to that date.
And then the Git and the pkg-perl sources — I think that's it; I can't say much more. Okay, looks like that's taken care of already. Thank you. The URL for the whole thing is contributors.debian.org, if you want to look around there. Okay, so the next one is a more technical one: autopkgtest, DEP-8, Debian Enhancement Proposal 8 — the idea of having the built binary packages tested, also integrated into ci.debian.net, the continuous integration service, I guess. We're not doing this right now, and Niko and Dominic wanted to say something about it. Yeah, well, I mainly wanted to keep it on the agenda. We discussed it briefly last year in Vaumarcus, and I think we had a thread about it last winter, November or something like that, and came up with a few technical ideas — "solutions" might be a bit strong a word, anyway. As I see it, it should be mostly trivial to get most of the pkg-perl module packages to just declare that their own test suite can be used as an autopkgtest suite as well. There probably are some exceptions, where tests need to write to the t/ directory and stuff like that, but most of it should be straightforward. So I guess we should just revisit that, and maybe have a hacking session or something while we're here; I don't know if we'll find the time. Well, does anybody know how complete the ci.debian.net testing is? How often does it run the tests, and so on — what's the status currently? Antonio is not here; I think Antonio arrives tomorrow, so you should probably ask him then. But it apparently runs regularly now, and for my own packages there are many logs already. And I think the goal was to always run when a dependency package changes, and it appears to do that. Yeah, that's the main thing: when the reverse dependencies change, it re-runs the tests. Nothing more on this topic, I think.
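For context, a DEP-8 declaration lives in debian/tests/control. A minimal sketch of the kind of declaration being discussed might look like the following — the test name and the extra dependency here are made-up examples, not the team's actual setup:

```
Tests: upstream-test-suite
Depends: @, libtest-simple-perl
```

Here `@` stands for the binary packages built from the source package, and debian/tests/upstream-test-suite would be an executable that runs the module's own test suite (for instance via prove) against the installed packages rather than the build tree — which is exactly the "declare that their own test suite can be used as an autopkgtest suite" idea above.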
Yeah, I guess sitting together and hacking on it would be nice. Does someone want to take ownership of this action item, as in pushing and reminding others? I can take ownership for the duration of DebConf, but then I'll probably forget about it. So, okay — because I want to learn about autopkgtest, this is a good opportunity. Thanks. The next thing is more of a short piece of information, also started last year at the same BoF: the idea of integrating upstream Git repositories into our packaging Git repositories, so that the package can be built by adding the debian/ directory on top of the upstream tag. To make this easier, there are now some scripts in the pkg-perl-tools package — well, in the Git repo of the package; the latest version is not uploaded yet. You can then just type dpt import-orig, which will use uscan, create the remote, and should do everything that's needed. What's still missing is integration into our fancy mr config, so that the remote is added when you clone the repository — well, fetched, updated. It also needs to go into dpt checkout, which is the subcommand of dpt to, well, just check out the repository without caring about URLs. So I think that should also, or might also, be a task for our hacking time, which we have in abundance. Yeah, I've worked with David on it and talked about it with intrigeri, and I think I can take the responsibility of reminding you guys to sit down with me. Okay, so that's it from me on this point. One of the recurring topics for discussion, which we have each year — well, for many years it was the switch from Subversion to Git, but since we switched to Git three years ago, the next recurring topic is: how do we manage our patches? Yeah, I just wanted to keep this one on the agenda too, because I just hate committing patches to Git.
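Stepping back to the pkg-perl-tools workflow just mentioned: put together, it would look roughly like this. This is a hedged sketch pieced together from the description above — the package name is invented, and only the two subcommands actually named in the discussion are used:

```
# Check out a team repository by name, without caring about URLs
# (hypothetical package):
dpt checkout libfoo-perl
cd libfoo-perl

# Download the new upstream release with uscan, import it, and set up
# the upstream Git remote:
dpt import-orig
```

As noted above, the mr-config integration (adding the upstream remote automatically on clone) is the part that is still missing.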
And I mean, editing the patches there, and forgetting to do quilt push after checking out the repository and then building. I know dpkg-buildpackage will apply them for me, but sometimes I do it the old way. But anyway, there's talk of these things elsewhere in the project at the moment too. I think the Python group is looking at alternatives, and there's a thread on the debian-devel list as well. It looks like there's broader interest in this. For what it's worth, we've been using git-dpm for patch management in the perl package for a couple of years at least, maybe more, and it's working great. The idea is that it creates temporary branches from the patches; you edit those with the normal Git tools, and then you transform them back into patches. It's fine. There's git-buildpackage and its patch-queue thing, and I think there's a third one — debcherry. One of my apparently increasing number of tasks for DebConf is to actually upload debcherry. My one user is sitting across the room and he seems happy, so I'm 100% approved already. Can you say in two sentences what debcherry is? The idea of debcherry is to treat the Debian branch as an integration branch, where your packaging commits are just part of the Git history, and then to extract those as patches at package build time. Well, there are all kinds of trade-offs. The trade-off here is that it's the most natural way of working if you came to Debian from Git, rather than the other way around. Manoj wants to advocate my software more strongly than I do, I think. That was a very bad explanation — probably because he wrote the software. If you're an end user and you're using Git to develop, you create your upstream branch, either from upstream's own branch or from what you import using the tarball. For any patches that you carry, you create your own feature branch and you work with it.
Then you create the master branch, by default, which has your ./debian in it, and this is where you merge in all your feature branches to have something you can build and test. You never worry about ./debian/patches, because debcherry comes in and takes care of all that for you. You just say: go ahead, build my package. While it is being checked out and built, ./debian/patches is created. You never have to worry about pushing the build artifacts; you never have to worry about editing patches. You work with feature branches, and you work on your packages as if you were just using pure Git for development. I've got a paper, which I will share on the debian-devel mailing list after I get back home, where I compared git-dpm and debcherry, and the history and bisectability improve by an order of magnitude if you're using debcherry, because there are no fake ephemeral branches to get in your way of bisecting what you did. There's nothing to get in the way of sending features back upstream either, because each one is a feature branch by itself. I really think it shaves off about 30% of the time that I used to spend packaging things. This software sounds amazing and I must try it. Okay. So I guess we won't take a decision right now, in this minute — I mean, we have a tradition of taking some years until everyone is happy. But how do we proceed from here? That's probably the question. If you want to look at examples, look for three packages that have my name in them as uploader; they have been using debcherry for this. So I guess that kind of brings us to the question of how uniform we need to be within the team in terms of packaging consistency. Because in one sense it would be nice to be able to demonstrate different approaches within the group, but on the other hand, the whole point of having a group is that we have some level of uniformity.
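The feature-branch workflow described a moment ago can be sketched with plain git. To be clear: debcherry itself would do the patch export automatically at build time; the explicit git format-patch step below only illustrates what it extracts, and the package and file names are made up:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q repo && cd repo
git config user.email you@example.org
git config user.name 'You'

# Upstream branch: the imported tarball or upstream's own history.
echo 'original' > code.txt
git add code.txt
git commit -qm 'Import upstream version 1.0'
git branch upstream

# One feature branch per patch you carry against upstream.
git checkout -qb fix-typo upstream
echo 'patched' > code.txt
git commit -qam 'Fix typo in code'

# Back on the packaging branch: add debian/ and merge the feature branch,
# so the tree you build and test already contains the fix.
git checkout -q -
mkdir debian
echo 'example (1.0-1) unstable; urgency=medium' > debian/changelog
git add debian
git commit -qm 'Add Debian packaging'
git merge -q --no-edit fix-typo

# debcherry's job, done by hand here: export the feature-branch commits
# as debian/patches entries at build time.
git format-patch upstream..fix-typo -o debian/patches >/dev/null
ls debian/patches
```

You never edit anything under debian/patches yourself; the feature branches remain the single source of truth, which is what makes bisecting and forwarding patches upstream straightforward.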
So I don't know what the solution to that is. Maybe we should have some kind of list of ongoing experiments or demonstrations that we can refer to, so that we know to treat those packages with care. We already do this for the packaging helper which shall not be named. So I think we could probably adopt some convention that a package is a packaging experiment or something, with the implicit promise that the person doing the experiment will take care of that package and not expect other people to deal with it. Yeah, and maybe list it on one of our wiki pages, so that it's also visible without stumbling upon it. I don't know how formalized it needs to be; we can just document some convention. I mean, maybe it would also be nice to have some IRC sessions. I guess my main concern when we switch techniques is that we have to make sure that, at least in theory, 200 people — I mean, I've seen 80 people commit something in a year — that all those people are taken along and feel comfortable. So, yeah, we need some training. And I like to learn something on IRC when someone teaches me there, so maybe that's also a way to go: to do a demo, to try out something. A comment from IRC: dam suggests maybe using README.source to document any crazy experiments. Yeah. Does that seem reasonable to everyone? Yeah, totally. Okay, so, well, we made some decision, I guess. Progress! Ooh! A decision? I know, don't tell the technical committee. I'll volunteer the first three packages that demonstrate debcherry, and I'll write up a README.source to say what I've done. And really, if you're using debcherry, there isn't much you actually put into Git — I mean, the whole point is that you don't do anything and let debcherry handle it. Thank you. Okay, and can you send a message to let the Perl group know on the mailing list? Good. Do you want to do the same with git-dpm?
Yeah, I guess I should look at using git-dpm for one or two packages. One does come to mind, but I'm not sure if that's the one to start with. It's one of those which would benefit a bit from proper patch handling anyway; it has all kinds of... yeah, there's a bunch already, patches in a few branches, so, yeah. Okay, cool. So the next one is a short status update on the hardening side of things we've been working on. Yeah, so, we've made good progress this year in ensuring that many or most of our architecture-dependent packages are built using the hardening build flags. I have no idea what the current status is exactly, but it's quite good. What I would love — and this is more of a call for help, because I won't do it myself — is to be able, as a team, to track our progress in this respect: to know if we are doing better or worse than before, and to be able to easily identify action items, that is, what's left to do, possibly ordered by popularity or something. One good basis for this could be Kees Cook's scripts that generate distro-wide statistics about it. The thing is, these scripts basically only look at each package and increment counters; in the end, they have no memory to tell you which package was hardened and which was not. So this should be a simple matter of Python programming for anyone interested — it should be a quite easy task, and it's been documented in great detail in the Tails bug tracker: what's needed, some leads, and how to do it. If someone took on this task, it could be very useful for any Debian packaging team or maintainer who would like to make progress on this, and it would be useful for Tails too, and possibly for other Debian derivatives. If you're interested, talk to me. I think there's a lintian check or two that track whether those hardening flags are in effect, so it might be enough to just count those over time, but it's not very elegant anyway.
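The "memory" that the counter scripts lack could be as simple as storing one list of hardened packages per run and diffing consecutive runs. The proposal above is to do this properly in Python, but the core idea fits in a few lines of shell; the package names here are invented example data, not real scan results:

```shell
# Hypothetical per-run output of a hardening scan: one hardened package
# per line, sorted (comm requires sorted input).
set -e
dir=$(mktemp -d); cd "$dir"
printf 'libbar-perl\nlibwx-perl\n' > hardened.prev
printf 'libbar-perl\nlibfoo-perl\n' > hardened.now

# comm -23 prints lines only in the first file: packages that were
# hardened last run but not this run, i.e. regressions.
comm -23 hardened.prev hardened.now
# comm -13 would print the newly hardened packages instead.
```

With the example data above, the regression report is just libwx-perl — exactly the kind of "this package lost its hardening flags" signal asked for below.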
I'd just like to point out that with the 5.20 perl transition, in the current version of the packages, it looks like there are three packages that have lost the hardening flags, and it would be very nice to have a system that would point them out quickly. In this case it was mostly by luck that Gregor or Damyan noticed it with the libwx-perl package. Yeah. Okay. Just one sentence: in my experience, the lintian check for hardening is not totally accurate — or at least it doesn't totally agree with what blhc, the build log hardening check, does. Yeah. There still seem to be a lot of false positives, where it thinks that you're not building with hardening when you in fact are. And I think a lot of that is that it's really difficult to detect whether you have hardening turned on for some of the string functions. There are just a lot of code patterns that will generate non-hardened calls without any hardened calls in the same binary, and then lintian can't figure out that you have the hardening flags turned on. That affects several of my packages, and I've manually double-checked the compiler flags on every single object file, and I'm quite certain that they're built with hardening flags. There's also another source: the build log checks — there's something scanning the build logs at the moment, I think. There's a link somewhere to the build log checks, where the other errors show up as well, and I think it notices when the hardening flags are missing too. So there might be another approach there. Yeah, and if someone is into writing Python code, talk to intrigeri. Excellent; we have something like nine or eight minutes left, but we are getting closer to the bottom of the list. So: low-hanging fruit sessions.
So, last year it was proposed to have monthly low-hanging fruit sessions, that is: we meet on IRC on a day scheduled in advance, and we spend a few hours together working on low-hanging fruit — things that we can do a lot of, to clean up our plate and get a quick feeling of working together and making progress on small repetitive tasks that are not always fun to do. I think we worked on forwarding patches last year, on hardening, and on handling one transition too, I think. And we've done, I think, three or four such sessions in total, and then it petered out, mostly because I stopped coordinating it. So my question would be: do we want to start it again? Do we want to change the format? And how do we make the organization lighter, so that we don't stop doing it again because of the paperwork overhead? If anyone who participated has any feedback — whether they would like to start it over or not, or possibly change the format — this would be a good time to share those feelings and ideas. Well, I liked the meetings and I would like to get them going again. Anyone else? Comment from IRC: Damyan says they're totally great. I was going to echo dam's IRC comment, but also throw in my own opinion, which is that I think this would be a really useful way to get new people involved in things. The problems are self-contained and concrete; they're easy to wrap your head around and get something done. So I think it would be really useful just to make sure that these sessions are well publicized — easy to find out about — and see who shows up. Is it worth even publicizing them beyond the current group, to the project at large? It could be a way of getting more people involved, given that we have a possible difficulty getting new people in who then actually stay active. Okay, five minutes, so please be short. I had a couple of other things I wanted to mention; I don't know if now is the time — which one is on your list?
For the low-hanging fruit sessions, we should probably first say one sentence about having a date per month; then I'll get to that. Yeah, to make the organization lighter, my proposal would be that we choose a fixed date in the month and do it every month on that same date, and then we can refine the time details later. So pick the 17th or the 23rd or whatever; we can do that later on, via mail. Is that fine with you? Okay, so we have two issues left, which are not really ours alone and are on the open tasks list anyway, so Wookey, please use the last two or three minutes. Okay, so we had a couple of things come up at the bootstrap sprint last week, one of which was the fact that it's very hard to cross-build Perl, which makes it hard to bootstrap new architectures, because Perl is important and has to happen quite early on. Neil Williams did a load of work on this about a year ago, with which we hacked around the problems to some degree via the perl-cross-debian package, which just stores the results that you would get if you were able to run things the way Perl expects you to on the foreign machine. But anyway — does anybody care about this problem and want to help try to improve matters? Upstream seemed quite enthusiastic to have some improvements when Neil talked to them, but because he suddenly got a job, all this action just stopped, and I haven't seen whether anything's changed in the intervening year or so. That would be a question for Dom or Niko, I suppose. I do know that the state of cross-compilation upstream is probably a bit better than this time last year, particularly because I think the Android port was done that way, so it's probably worth me or Niko looking at whether there's anything we should be integrating or doing. We think some of that stuff may actually have filtered through, and there have been some improvements. Okay, that would be good.
So Helmut Grohne is running a thing called rebootstrap on Jenkins every day, trying to see what happens. At the moment he gets about 40 packages in, with a lot of hackery and bodgery, and one of the problems is Perl. So anyway, something to bear in mind. And the other thing is that at the last DebConf we talked about the multiarch embedded-interpreter problem. I don't know how many people here understand that issue — yeah, exactly, none of us. But it turned out that the solution we came up with last year doesn't actually work, because of the way dpkg operates, so it's still broken and we don't really have a good solution yet. And I know you've sent plenty of mails saying, well, I did this, is that good or bad — and everyone just went silent. So I'm not exactly sure what it is we've ended up releasing, and what level of breakage and multiarch Perl crossness we have achieved, because I haven't really looked or tried. For the 5.20 packages I've made the perl package itself Multi-Arch: allowed, and perl-modules is now Multi-Arch: foreign, and that's as far as I dared to go. And the long mails you mentioned have been sort of... well, it's not encouraging. One more thing: I did see Guillem mention in some mail of his that he's going to fix all the remaining multiarch bugs before jessie. I'm not sure if he was serious, but I suppose he's going to revisit this as well. I don't know. Okay, we have one minute left. I'm wondering, is anyone of you using Carton, the Bundler-for-Perl approach? Yeah, it's like: you're developing some Perl project and depending on CPAN stuff, but you don't want to package it yet. So the idea is to just install it into your working directory, and if it works, you carry on from there. Which is quite useful for continuous integration, with developers having their own environment available before actually doing any packaging work, like when depending on new versions.
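For those who haven't seen Carton, the workflow being described looks roughly like this. A hedged sketch: the module in the cpanfile and the application name are made-up examples, not anything from this discussion:

```
# cpanfile — declare the project's CPAN dependencies:
#   requires 'Plack', '1.0031';

carton install                     # install the modules into ./local,
                                   # not system-wide
carton exec -- plackup app.psgi    # run the app against the bundled
                                   # modules in ./local
```

The point made above is exactly this: the dependencies live inside the working directory, so CI and individual developers get a reproducible module environment before anyone has packaged anything for Debian.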
And I'm feeling like it's not really that commonly used, so I'm wondering if it could get some attention, maybe. Carton is Miyagawa's work, and it does actually generate Debian packaging. I wouldn't necessarily say it's of such high quality that you would want to put it straight into the distro, but the intention is that if someone is running their services on Debian, they could use Carton to make a standard bundle of all the Perl modules that they need, and then when they deploy, they can actually have that bundle create Debian packages and install them. It's kind of like Python wheels: you have to debate whether you really want to be using that in the distro. It's in some ways a competing way of distributing Perl modules. For those interested, it's packaged. I'm aware that it's packaged. But it has its flaws, so I'm wondering if more people are using it. So, I'm getting some scary looks from behind and some signs that we're out of time. You guys can go on for a bit longer. So yeah, well, thank you. Let's call it a BoF. Let's meet again next year, and before that on IRC, and then this afternoon. Thanks.