Hello, everyone. So I think it's time to start this annual Perl BoF. Note-taking is going on in the Gobby document. I will check IRC for people who are not here. Hi, people who are not here! Maybe the people on IRC who are here for the BoF can wave or something. Oh, there's no stream? There should be a stream. Oh, you mean maybe Gregor means a stream with a screen. We, of course, have to handle the fact that IRC is faster than the stream, so questions or comments might be flowing in a bit later. Hey, Gregor. OK, so maybe the time difference with the stream is a bit high.

OK, so Gregor was nice enough to prepare most of the Gobby document, so maybe we can start with the start of it. So, welcome again. Maybe everyone wants to introduce themselves?

Well, I can start. I'm Clément, also known as nodens. I'm quite new in the team; I think I joined maybe two years ago, something like that. I'm not a DD, but I try to help where I can, and I also maintain a couple of packages.

Hi, I'm intrigeri. I joined the team a while ago. I mostly focus on removing crufty packages and on maintaining what you need to develop applications using modern technologies in Perl, like lightweight, modern object orientation and type validation systems, along with a little bit of toolchain work, like hardening and reproducible builds.

My name is Takatsugu Nokubi. I mainly maintain Japanese text processing modules, like NKF or KAKASI. Most of those modules depend on shared objects, so they are architecture-dependent.

So it's my turn. Well, I'm just Dan, your average lowest-level Perl user. I really wouldn't be using Perl; I'd be using Python, like all the young people, but I couldn't get the Python to fit in the one-liner in the makefile, so I ended up using Perl. And now it's too late.

Hello, I'm Rhonda. I have been using Perl since ages, like last century.
I never really was directly part of the Perl team, but I was more or less always hanging around. I'm maintaining the IRC bot for DebConf, which is written in Perl. And I'm starting a new job in September which will also involve a lot of Perl, so I may be more tightly connected in the future.

So, no one wants to introduce themselves on IRC? We'll come back to that if needed. Just for the sake of the people on the stream who don't follow IRC: we have a bunch of people there, carnil, dam, Gregor, ntyni, and Xavier Guimard.

Here's my issue. See, I use both the experimental and the unstable Debian versions, and every time I use aptitude there's always some gunk about Perl not being fully ready to be installed. Only about half the days of the year is everything finally up to date; otherwise there are always some held packages falling back. I was hoping you would get your act together and update the whole thing at the same time, and not have pieces dragging around, so that aptitude doesn't always have some unresolved gunk. Is there any possibility you could please do that?

We can't say much about the issue in general, but it would be interesting to report specifically whenever it happens to you: note it, file a bug somewhere, or talk about it on IRC. I did, and as I said, because I use experimental I should be used to the problem and not complain about it. I suspect it's because when a new major version of Perl, like 5.28 right now, is prepared, it's a transition, and the transition is prepared in experimental. When it's ready, and most of the issues have been fixed, it's uploaded to sid, and then the transition process, the migration to testing, is started. So it's kind of on purpose that experimental is a place where issues are identified and fixed before they affect everybody else. Okay. And ntyni is also commenting.
We only upload Perl itself to experimental, so it's not really feasible to have all 600 or so architecture-dependent packages rebuilt in experimental at the same time. So yeah, that's mainly the issue. Okay.

So this year we had a couple of sprints, one in Hamburg during the mini-DebCamp, and the other one during DebCamp, right before DebConf. For Hamburg, the report has been written and is available. The sprint was mainly three people attending, to plan for the Perl 5.28 transition, add some autopkgtests, or fix them when they don't work so that they don't harm testing migration, fix a few bugs, and mostly move to Salsa. That was apparently most of the work. Sadly, I did not attend.

For DebCamp, well, the pages have not been generated yet, so we have a problem with the report that we will fix; we have the Gobby notes. It was the removal of a bunch of GTK2-dependent packages in preparation for Buster, where GTK2 will not be available, so preparing that step back. Some reproducible-build problems, a bunch of new upstream releases. Well, the usual.

I'm just curious: if we had an equivalent Python maintainers meeting, would there be many more people or fewer people than this? Sorry, I did not get that. Say, if in the neighboring room there were a Python maintainers meeting, would there be lots more people, or fewer people, or what? I have no idea. I think these teams are structured and socially organized very differently, and also in terms of how the maintenance of the interpreter itself and the maintenance of the library modules are done, coordinated, or worked on together. I think it's quite different; I'm not sure we can compare. But later during this BoF we will provide some statistics, some metrics, about how many people do what. Then you can ask the Python folks how it is for them, and you can compare. I'm curious about the results.
Yeah, I'm just curious. It seems like these two things are the most closely overlapping in functionality, and I was curious, if a new user comes in and they only have time to learn one and not the other, should they learn Python or something? Yeah, that's...

So, regarding the sprint in Hamburg, Gregor was commenting. It was just three people: Dom, ntyni, and him. Dom and ntyni were mostly working on the 5.28 release, and Gregor was fixing way too many Alioth-to-Salsa documentation things. His feeling was that it was very productive, but more people would have been nice, to which Dom and ntyni agreed. carnil was also there, but mostly for the security sprint, so he was not able to contribute very much.

Regarding DebCamp, we were maybe four people participating in person here. Very few team members attended DebCamp this year, for very diverse reasons. From my vantage point, it was also very productive, and more people would have been nice. But so far, I think we can still say it was successful; we just need more people at sprints. Gregor was commenting on the DebCamp sprint: he tried to follow, but it was of course a bit difficult because he had to work, plus the time difference. But it did help him and motivate him to work from home. Thank you for that, Gregor.

Apart from sprints, we also have Low-Hanging Fruit sessions. Those happen every month, on the 21st of every month. If I remember correctly, we alternate the time between... oh yes, we're looking at that further down; OK, we'll come back to that.

So, plans for next year. Well, I guess it's very probable that we will have a sprint at DebCamp. The question is, will there be another springtime sprint, and where and when? I think there were also people at the SnowCamp, but apparently not from the Perl team. Any propositions? Well, I don't have any apart from DebCamp.
It's usually complicated for me to have another sprint during the year. Let's see if we have... yeah, Gregor has a bit of a streaming delay; well, not a problem, but a delay. He says: I won't do any logistics work and it's unlikely I'll manage to attend, but I feel that it's easier, and it probably increases the chances that more people can attend, to piggyback on an existing event like the one in Hamburg.

Yes, okay. So Gregor is saying that there will be another mini-DebCamp in Hamburg, so yes, that's a very good candidate for having a sprint, for people who can go there. It will happen in early June, on the Pentecost weekend, so from the 5th to the 9th apparently, but that has to be verified. You can probably ask Holger; he would know about it. ntyni said that having the sprint in Hamburg worked quite nicely for him, but a spring sprint would also be fine for him. Well, I personally think it's easier to gather people if there's also another event. Gregor agrees with ntyni.

So, the Low-Hanging Fruit sessions. The LHF sessions happened each month on the 21st, alternating between 5 UTC and 7 UTC... well, 17 and 19 UTC. Because of time differences we alternate: even if most contributors and participants of the team are in Europe, it might be easier for some if it's at 17 UTC, even if it's maybe... well, I know it's harder for me when it's at 17 because I'm usually not back from work. Did anyone here, well, besides me, attend at least one LHF this year? I guess not, then. Okay.

So this year, ten happened, and two were cancelled for lack of participants. In the two previous years, it was eight happening in 2015-2016 with two cancelled, and 11 happening in 2016-2017 with two cancelled. So ten this year is not too bad; we're in the middle. On the other hand, the participation is getting lower: we have an average of three participants, against 6.25 participants in 2015-2016 and 5.72 in 2016-2017. So, yeah.
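Since scheduling came up: a recurring slot like this (the 21st of each month, alternating between 17:00 and 19:00 UTC) can be expressed with two iCalendar rules on a two-month interval. This is a hypothetical sketch, not the team's actual ICS file:

```text
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//LHF//EN
BEGIN:VEVENT
UID:lhf-odd-months@example.org
DTSTART:20180121T170000Z
RRULE:FREQ=MONTHLY;INTERVAL=2;BYMONTHDAY=21
SUMMARY:pkg-perl LHF session (17:00 UTC)
END:VEVENT
BEGIN:VEVENT
UID:lhf-even-months@example.org
DTSTART:20180221T190000Z
RRULE:FREQ=MONTHLY;INTERVAL=2;BYMONTHDAY=21
SUMMARY:pkg-perl LHF session (19:00 UTC)
END:VEVENT
END:VCALENDAR
```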
All in all, the topics are... it's low-hanging fruit, so we try to do all the maintenance stuff, the things that need to be done regularly: new upstream releases, adopting packages, fixing bugs, adding autopkgtests, and sometimes different stuff like cross-package QA work. Also discussions; this year it was a lot about the migration to Salsa, which was also a lot of work.

So, next year, well, I think we should continue, and maybe try to recruit more people. The alternating times help, maybe. I mean, we do it on the 21st of each month, so that it's not always the same day of the week, and there are more chances: if you can't make it on one specific date, you can come another month. Anyone having any comment on that? Do we continue as we do?

I'm okay with that. Even if the timing is not perfect for me, it means I can at least attend a few per year. I didn't manage to attend any last year, but just being able to read the report, the list of things that people have worked on, has helped me keep in touch with the team and motivated me to do some work in the days following the session I had missed. So even for people who are not there, it might be useful. I mean, if it's too frustrating or depressing to have only so few people, I won't be the one blocking if people decide to stop them. Okay.

ntyni is okay with the current scheduling. Dan says a fixed schedule with alternating hours seems to work nicely, and Gregor says the 21st is fine for him. So I guess we have a consensus on keeping things as they are. Gregor is also wondering if we should change to one fixed time. Well, I personally have no strong feeling about it. I know that it did happen often for me this year that it was either the wrong day or the wrong time, but I mean, that's just bad luck. So yeah, I would say keep it like this. Maybe the question is whether the initial reason why we had these different times still holds.
I'm not sure, in terms of the current team composition, but still having different times, and being open to more different time zones, may help potential new members. Exactly; let's be welcoming. Dan is also wondering why we would change to one fixed time. Gregor is fine, but he's wondering if 5 UTC is too early for some. Yeah, well, I don't think we can find a better arrangement, personally. I mean, the alternation works; otherwise there would probably have been even less participation. Gregor is coming to the same conclusion.

OK, let's move on to team status, and some stats for last year. We have a script for this in the scripts repository, at the root of the Perl team's Salsa group. We have 60 people with at least one commit in the last year; it was 58 in 2014-2015, 56 in 2015-2016, and 54 in 2016-2017, so it's pretty stable. 19 people have more than 100 commits, so yes, we can tell that it's only about a third of the active people. Usually every year we run another script and try to ping inactive members, to ask whether they still want to be in the team. But since we moved to Salsa this year and only migrated the people who were active, we don't really need to do that. Gregor says that the ICS file for the LHF sessions is good until 2038, so let's not change the schedule. No comment on that.

So, Perl 5.28: it's in the works. We have a transition bug to keep track of it, #902557, if anyone wants to have a look. We have the rebuild logs and rebuilt-packages repo on perl.debian.net, and there's also a wiki page for the Perl 5.28 QA coordination. Good people are adding stuff to the Gobby notes, because I did not follow that very closely. Only two blockers are left, in uwsgi and collectd, but there has been no archive-wide rebuild yet, due to lack of disk space on perl.debian.net. Can this be fixed with Debian sponsorship? I think it would be a good idea to at least ask.
Could the people who maintain perl.debian.net tell us if it could be fixed with Debian sponsorship? Roughly 3,000 reverse dependencies of Perl have been rebuilt there continuously since the sprint in May. Let's wait out the streaming delay, to see if someone wants to comment on IRC. We can come back to that if needed.

The next item is the migration from Alioth to Salsa. So we migrated in May; I think it went quite nicely. Most of the problems were actually about changing all our documentation and a lot of scripts, and all the repositories, and we're still kind of working on that. Gregor was making some optimizations in the mr config, for instance. I don't know if a lot of people use mr; I know that Gregor and I do, to keep all the repositories up to date without updating every repository every time, which is not nice on your machine or on Salsa.

Any more comments on the transition? So, ntyni says we should go ahead anyway, once the known bugs are fixed. I'm okay with that. Any objections? I think going ahead means uploading to sid. Currently no one is strongly against. And ntyni says that if we skip the full archive rebuild, it will take maybe a couple of weeks, because both blockers have fixes and workarounds. Okay, so maybe we could, of course it depends on the release team, but maybe we could check with Dom about the disk space problem, see if we can do something about it, and if there's nothing easy or trivial, let's go for it, put that in the report and ask for comments on the mailing list. I mean, not everyone can attend the BoF; other people might have things to say about that.

We have 15 minutes left. So, back to the migration to Salsa. Most of our documentation, including infrastructure and stuff, is now fixed, I think. We still have no real replacement for PET, which is a shame, but fortunately tracker.debian.org is going to get the features we need the most. Kanashiro is mentoring a Google Summer of Code student, Arthur, to do that.
And from what we saw, it looks good. So I think we will be able to manage without PET, and it will actually be better, I think. Apart from that, I'm personally very happy with Salsa and all the shiny new features it has. Is anyone missing something from Salsa? Something we should fix, maybe?

Oh, and Gregor is saying, about missing PET: please try dpt new-upstream in pkg-perl-tools. Thanks, Gregor, I will. And Gregor thinks there is still a script to clean up local repos that is still tied to Alioth, the FSFS script. So we have to check that. Well, do we really need it, and should we replace it? I have no idea what it does. Me neither; I did not even know it existed. But if help is needed on that, I can have a look.

Oh, and Gregor is wondering if it's possible to turn off project creation via the web interface. You might be able to do that with permissions; I would have to check the GitLab documentation, I'm not sure. Because whenever someone creates a project on Salsa using the web interface, it creates it all wrong, because we want all the projects to have the same settings. There's a script to do that in pkg-perl-tools, dpt-salsa-create-repo, and really everyone needs to use it whenever a new package is created, unless web project creation does the right thing. Yeah, well, the thing is, it would actually be nice to disable project creation from the web, because I don't think it's possible to automate all the stuff regarding permissions. It's possible to have those inherit properly, but all the settings with the webhooks for the KGB bot and things like that, I don't think that's possible. And then, if some projects are created with the web interface, it's difficult to know which projects are okay with regard to the Salsa configuration and which are not. So yeah, I can check the GitLab thing and ask the Salsa admins. I guess that's it for Salsa.
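As an aside on the mr setup mentioned earlier: one way to keep a large collection of team repositories from being updated on every run is mr's lazy-skip feature. This is a hypothetical sketch (the repository name is made up, and this is not the team's actual generated config):

```ini
# ~/.mrconfig (sketch)
[perl-team/modules/packages/libfoo-perl]
checkout = git clone https://salsa.debian.org/perl-team/modules/packages/libfoo-perl.git
# 'skip = lazy' makes 'mr update' ignore repositories that were never
# checked out, so a full run does not hammer your machine or Salsa.
skip = lazy
```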
And Gregor is commenting that the script he was talking about before removes repos from your disk which were removed from Alioth, for removed packages. So, okay, that might be useful. It's especially useful for people like me who use mr and have all the repos on their disk. People who only get the Git repos they work on probably don't need it, so probably not a lot of people are using it. I'm using it, but I'm not going to spend time now optimizing the mr run time or disk usage by 2%.

So, the last item: projects for next year, items for discussion. We had ideas at the DebCamp sprint. Well, I'm going to investigate this GitLab thing, to check if we can do that. And there was this hardening flags status: in our LHF checklist we have a hardening-flags item that not a lot of people know about, and every time I look at it I have to think about what it was again. It's about adding the hardening flags to every package, in debian/rules, using DEB_BUILD_MAINT_OPTIONS. The idea is to make that systematic. Well, we can continue doing it as we did, that is, I do it every time I look at a package. Maybe we can do some kind of mass hardening activation; now that Salsa is in good shape, it will probably be easier.

We definitely want to finish the removal of the libgtk2-perl dependencies. That's on its way, so we can only hope it will be done before the freeze. And yes, there will be the PET replacement. I think next year we'll probably also have to think about the mailing list, because we have a replacement for the Alioth list, but we might have to change again, switch to something else, for instance using tracker.debian.org, after Buster is released, because we don't know how long this will be maintained. Dom was saying it would be okay to maintain it for the release plus one or two, but no more.
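For reference, the hardening item from the LHF checklist boils down to one export in debian/rules; a minimal dh-style sketch:

```make
#!/usr/bin/make -f

# Ask dpkg-buildflags to enable all hardening features
# (relro, bindnow, PIE, fortify, stack protector, ...).
export DEB_BUILD_MAINT_OPTIONS = hardening=+all

%:
	dh $@
```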
We will have to think about that in advance, so that it's not too painful. But that can be an item for next year's BoF. Personally, I have no other proposition. What happens sometimes is that this kind of discussion actually takes place after the BoF, when we have the report and people reply on the mailing list, which is not so bad. Yeah, and ntyni is saying that changing the Maintainer field in all our packages is going to be painful in any case. That is true, but we will probably have to do it at some point. So we'll probably look at the options, like using tracker.debian.org instead of a mailing list.

Gregor is asking if anyone has ideas on how to motivate more people to run autopkgtests before uploading. Oh, I think maybe ntyni has an idea. Well, we can always try to do that with CI, but that will be painful with a lot of packages, and I don't think it would fix the problem Gregor was talking about. I think the main problem is that the usual workflow goes straight from update to release to upload, and by the time the CI would run the autopkgtests, the package is already accepted and in sid. So, I mean, the fact that running autopkgtests is, for example, a one-line config setting to enable in sbuild is maybe not very well advertised. Maybe we should fix the documentation? Whoever sets this up locally could identify issues in the docs and maybe update or improve them, I don't know. I think maybe we could start by asking people who don't do it why they don't do it; there's probably a reason, and then we can fix that. Ask on the mailing list, I guess: does anyone not use autopkgtests for the packages they maintain, for a particular reason? I know I do use them, now that I have a working setup. Yeah, dam is suggesting something triggered by a commit of a changelog entry targeting unstable. I wonder if we can have a message from Salsa whenever we push, like: have you run the autopkgtests?
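The one-line sbuild setting alluded to here is presumably the `$run_autopkgtest` knob in `~/.sbuildrc`; a sketch, assuming the schroot backend (see sbuild.conf(5) for your version's details):

```perl
# ~/.sbuildrc (fragment)
# Run the package's autopkgtests after a successful build.
$run_autopkgtest = 1;
# Options passed to autopkgtest; everything after '--' selects the
# virtualisation server, here the schroot backend with sbuild's
# %r (release) and %a (architecture) placeholders.
$autopkgtest_opts = [ '--', 'schroot', '%r-%a-sbuild' ];

1;  # .sbuildrc is Perl code and must end with a true value
```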
You know, intrigeri is pushing to us a demonstration of using the GitLab pipelines for CI. But yeah, as was said, it might often happen too late. I mean, the default autopkgtests we have are not very complicated or anything, but they are still useful.

Thanks everyone for participating, and thanks especially to those who are in Europe and had to wake up at crazy hours to attend remotely. Thank you.