All right, let's get started. Welcome to the yearly Haskell BoF. The slides are much the same as last year's, but the numbers have changed. Like last year, I'll try to keep the talk part short so that we have more time for proper discussion. But I'll still say a few words: a quick overview of the Haskell team for those who are not already active in it; then a few things that have changed recently, during DebCamp or shortly before, fancy new stuff; and then open discussion. I have a few bullet points of things we could discuss, but of course there's room for more.

So, we have a Haskell group, and we maintain the Haskell compiler, the Haskell libraries, and a few Haskell programs that have been packaged. Historically we were only doing libraries; we took over the compiler later, because maintaining the compiler and maintaining libraries are fairly different things to do. But we do it now, and it works.

I like these numbers. They were brought up to date with the same script as last year. This is packages in the archive per maintainer, and we are at the top, but of course the numbers are far too high: what is actually counted is binary package and version combinations across architectures that have ever existed. If we exclude the versions from the count, we are only behind the Linux kernel and the Perl team. So they have more packages, but we update ours more often, at least if you count the binNMUs; sorry about the load that puts on the buildd servers. So we are in third place in the archive, which is still good, but that's still counting binary packages. If you've used our packages before, you know that there are three binary packages per source package, so source packages are maybe the right metric. And there we're in third place too, now; we surpassed the Ruby Extras team. Again, this counts source packages that have ever existed, so we don't maintain 800 packages right now; there are some old ones in the list that have since been removed. These are just SQL queries on projectb, the SQL database containing the archive information. And of course Perl is far, far ahead. I was once part of the Perl group, so I don't mind them being first. That's OK, a little bit.

This is the number of uploads done per group member in the last six months. And the numbers are not very healthy, I would say. So this is one thing for the discussion: how can we make sure that there are more people at the top? I mean, the number of people is good; it's just that the number of uploads is not very evenly distributed, and there are reasons for that, besides. This is new-package uploads, and this is any package upload. It depends on when you run it, of course; I ran it half an hour ago. (I had 20 packages, maybe something's not counted?) OK, we can have a look at that; I think the tendency is still right. Yeah, the tendency might still be a bit different.

OK, so we seem to be very efficient in terms of team size versus uploads; we do a lot of uploads per person. And that's partly one of the benefits of packaging Haskell, because Haskell packages are very homogeneous. They are very consistent in what you get from upstream, and they have good metadata.
They usually build, if the dependencies are fulfilled, so we can package stuff very efficiently: very little work per package. Otherwise you couldn't maintain so many packages with so few people. Also, if it compiles, it works: if I change something in a dependency and rebuild, then by Haskell's type system, as long as it builds, I'm fine; I don't have to test it. We do run the test suites where they exist, but otherwise I'm happy to upload an untested Haskell package to the archive, as long as it compiles. What slips through are things like spelling mistakes.

Then, there are people now working on something like a distribution, not on the Debian side of things but on the Haskell side of things. It's called Stackage. It's run by FP Complete, a company investing in Haskell. And what they're doing is basically our job: they are creating a curated set of packages, a set of packages that are known to work together; all dependencies are fulfilled and everything builds. And it's great, because that's what we've been doing, and they're doing it better. As in: they have better QA tools, better automation, they are quicker, and they are focused only on Haskell. So the plan that I came up with, and that nobody has complained about so far, is that we just reuse their work. For the packages that are part of Stackage, which is not all of them, unfortunately, we simply take whatever is in the latest long-term release of Stackage, where long-term means a few months, so not too long. This way, for a large portion of our packages, we don't have to worry about whether this package is compatible with its dependencies in this version. So that's quite a time saver for us.

These minus points are the same as last year, and Stackage is kind of a remedy for this negative one: sometimes upstream doesn't give perfect metadata, and then we have build failures that we have to cope with, maybe things we have to patch. For everything in Stackage, this is not a problem anymore, which is good. And for everything outside of Stackage, if it becomes a problem, we can kick it out, we can ask upstream to join Stackage so that it will not be a problem in the future, or we continue doing the manual patching and working around issues. And then there's the old problem of having to rebuild everything when we move to a new compiler version. But we seem to be coping, and neither the release team nor the wanna-build team has ripped off my head since we started doing this, so it looks like they're coping too.

So, for those who are new here, a little bit about how we work. Basically, we try to always maintain this invariant per package. We have a package plan, and I'll come to that on the next slide, which is basically a list of packages and versions: something that we believe is a consistent set of packages, and that's basically our to-do list, what we want to upload to the archive. The plan, of course, can be at most what's on Hackage, but it can be ahead of what we have in our packaging repositories. That, in turn, can be ahead of what's been tagged, because it's still being worked on and hasn't been uploaded yet. And that should always be ahead of, or equal to, what's in the archive. This simple formula, spelled out below, guides our work. So if there's a new version of a package that we want to upload to Debian, we first make sure that the package plan lists it and everything's fine.
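Spelled out, the invariant is roughly this chain; this is my own plain-text rendering of what the slide presumably shows, so take the exact notation with a grain of salt:

    Hackage  >=  package plan  >=  packaging repository  >=  tags  >=  archive

That is: the version on Hackage is at least the version in the plan, which is at least what is committed in the packaging repository, which is at least what has been tagged, which is at least what is in the archive.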
Then the next step is to actually do the packaging: create the changelog entry, change the control files. We have a tool that tells us which packages need this step, and we have another tool that helps us with the step, and then we build it. And when we upload it, it gets tagged, so that's this step here. That step should only be an inequality if the upload fails because somebody messed up (you'll see it in the rejects), or because the package is NEW, and going through NEW takes a while; so this should be an equality in most cases. You can interrupt me if you have questions, I don't mind.

Okay, the package plan is a kind of neat thing that we have and that probably not many other teams have. It's a readable text file: packages, versions, and then maybe some annotations like don't run the test suite, because for whatever reason we don't want to run it. A sketched excerpt follows below. And then we have a tool, which is this one that I ran here, which checks the package plan and tells us about things like: this package has been removed from the archive by now, and I should have removed it here as well. Everything in capital letters is an error; anything else is something to do, like these packages that we want to have in the archive but don't have in the archive yet, so we need to package them. Then it runs cabal-install, which is the upstream package tool, comparable to CPAN or whatever you know, and we just ask it: can you install these seven hundred packages in these particular versions, fulfilling all the internal dependencies? And if it says, yes, I could do that (we're running this as a dry run), then we're happy. And if not, we get an error message, and we have to do something. So this is the package plan; it turned out to be quite a useful tool. We can add patches to upstream metadata, if we have to. We can also add Cabal files for packages that are not on Hackage, which is an exception, but happens in a few cases. So it suits our needs.

All right, the next agenda item is a little update on what has happened recently. I guess I can show this: the Debian VCS field check. Right now, 89.72% of all Debian packages that have a Vcs-Darcs field have a broken Vcs-Darcs field. That's because we were basically the only ones using darcs for package maintenance, and now we've switched to Git, so our darcs repository has disappeared, and we haven't yet updated the Vcs fields in the control files. That's something we have to do soon. So now we're using Git. And we do it slightly differently from most other people: we have all packages in one repository. I think it's a good thing. You don't have to worry about creating repositories, setting up hooks, whatnot, and you can do commits that affect multiple packages at once; when bumping the standards version in 700 packages, I don't want to see 700 commits for it. I'll stop here; I can explain the choices even more if there's discussion about that. I'll only note that it's a Debian-only repository layout, so there are no upstream sources in it. There are better tools, I believe, to get upstream sources into your directory. So this has been done, people have convinced me to do it, and slowly the tooling also catches up, so we're adding more tooling to work with the Git repositories.
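To make the package plan format concrete, here is a hypothetical excerpt. The package names and versions are illustrative, and the annotation syntax is a guess, not copied from the real file:

    async         2.0.1.6
    attoparsec    0.12.1.6
    some-package  1.2.3     notest    # hypothetical "don't run the test suite" flag

The consistency check described above can then be approximated with plain cabal-install and its dry-run mode, something along these lines:

    cabal install --dry-run async-2.0.1.6 attoparsec-0.12.1.6 some-package-1.2.3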
Another new thing that I've been proposing, and that we are slowly following, is the concept of key packages. Previously, what happened was: say a new version of git-annex pulls in some random, somewhat obscure package, and the next version drops the dependency again, but by then we already have it in Debian. Should we just keep it in Debian? Probably not, because the only reason it was in Debian in the first place was as a dependency of a certain program that we want to have in Debian, and unless there are other good reasons to keep it, maybe we should just remove it again; similar to how, on your machine, automatically installed packages that nothing depends on anymore get autoremoved. We don't want to accumulate cruft. So the idea is that we mark packages that we want to have in Debian in their own right as key packages, and we do this marking in the package plan. This one, for example, is, for whatever reason, a key package; a sketch of what such a marking might look like follows below. Then we can use the package plan to find out: what packages do we have in Debian that are neither key packages nor dependencies of key packages? And then we can look at them and say, well, let's just get rid of it; let's not grow the package set for no good reason. Packages that are part of Stackage are excluded from the removal, so basically everything on Stackage is considered to be a key package, not because it's important, but rather because it's simple: everything on Stackage causes no work for us, or very little, so let's just keep it in. It's a sign that upstream is somewhat active, and it's okay. It also means we don't have to mark hundreds of packages as key.

During DebCamp, we turned our tools into a proper package: documented, collected, assembled. Our tools are a little bit better now; previously there were scripts lying around in random repositories, and now we have a proper package. More about that in a moment. Before DebCamp, I backported Cabal and GHC itself, 7.8, to jessie, so in that sense our stable release is a bit more useful for development. You won't get libraries from backports at the moment, but you can at least get your compiler and cabal-install and install whatever you want, rather than being stuck with a much older compiler. Then we have the latest compiler, which was released only a few months ago, in experimental, so we're very much up to date there. And we have started the packaging work to support GHC 7.10 and the corresponding Stackage release, which is Stackage LTS 3.0. We did that on a branch of the package plan and the packaging repository, and we are basically making good progress. There's a dozen packages that still need looking at, and looking at can just mean we're going to remove them. I'm also waiting on coordination with the pandoc maintainer so that we can test pandoc's reverse dependencies, but it's only a few packages. So I guess we can maybe just upload everything to experimental by the end of the week; might be a good goal. Then we have something to work on here.

Why experimental first? There are a few new packages, and some of them are deep in the dependency tree. So if we went to unstable right now, we would have a large number of packages in unstable uninstallable until the NEW processing has happened.
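As an aside, on the key-package marking: I don't have the real syntax at hand, so this is only a guess at what such an annotation in the plan might look like:

    some-package  1.2.3   key    # hypothetical "key package" marker

The point is just that the marker lives in the same plain-text plan, so the same tooling can compute the set of key packages plus their dependency closure and flag everything outside it as a removal candidate.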
So, back to the transition: if we upload to experimental first and wait until the FTP masters have approved all the Haskell packages that are NEW, then we can rebuild everything for unstable and upload GHC 7.10 and all the seven hundred other packages in one go to unstable, and have a very short time of Haskell being broken in unstable this time. I've been saying that before every transition, and it always happens that something goes wrong and it's broken for at least a while; but at least we can try.

Chris has worked on documentation, which is a very good thing. The wiki now has a very useful page, highlighted here, on getting started, which gives you the steps you have to do for creating a new package, upgrading a package, these kinds of things. I'm constantly making changes to the page because we are developing our tools. So this is a perfect segue to the next slide. You can run apt-get install pkg-haskell-tools and you get a command dht, Debian Haskell Tools, which has subcommands, because commands with subcommands are the way to do it this century. And here's the list; we can just briefly go through it, and a sample session follows below.

Okay. cabal-debian is a tool that's actually being developed outside of Debian. It's by David Fox, who in his company uses Debian and Haskell extensively. So he created a tool to create quite nice Debian packages from Cabal descriptions, and he's collaborating with us to make the output suitable for the official archive, and we keep making progress there. In some cases, you can actually take the output as is, without further ado; it mostly depends on whether upstream has put the copyright information in the right place. If not, you just have to modify debian/copyright and debian/control, and at least the dependencies are being generated correctly in most cases, which had been the main difficulty in upgrading packages. So the dht wrapper around it looks at the existing Debianization and finds out whether we run the test suite or not, what flags we pass, and so on, so that in even more cases you get the correct output directly when you upgrade. You still have to check it manually; for example, if flags or testing depend on the architecture, these things are not mapped easily, so the output will still not be quite right.

Then there's a small general-purpose tool which you point at a debian/ directory and an upstream tarball, and it creates a Debian source directory without even unpacking the upstream tarball. Nothing you have to use directly in this context, but maybe nice to have if you need something like this.

dht init creates an initial packaging using cabal-debian, and it looks into the package plan to find out which version you want. So basically, if the package plan says this is all right, you just say dht init foo, giving it the Cabal name, and it produces the packaging for the Debian package, haskell-foo.

make-all is like a script, except it's a proper program, because it's written in Haskell; that's the difference between a script and a program, I guess. It rebuilds all packages from the packaging repository in a subdirectory, making sure that only the freshly built packages are used and not what's in the archive. So you get a clean test of all packages. I don't have a good number, but I think it takes somewhere between six and ten hours to build everything; you can maybe speed it up a bit.
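Concretely, a session with these tools might look roughly like this. The package and subcommand names are the ones described above, but the exact invocations are reconstructed and may be off in detail, and foo is of course a placeholder:

    apt-get install pkg-haskell-tools
    dht init foo      # initial packaging for Hackage package foo, producing haskell-foo
    dht make-all      # rebuild all packages from the repository against each other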
And make-all is basically what I want to use to make sure that transitions like GHC 7.10 are properly tested locally before we commit to them. It's also something you can use to get dependencies installed, or at least built, on your machine when they are broken in the archive, because I haven't rebuilt them or because something is stuck in NEW or whatever. So you're not stuck waiting for that before you can continue with the other packages, as was the case before. It doesn't take much space; it's less than a gigabyte for all the built packages, so it just takes time, and you can leave it running. It notices changes and only rebuilds what has changed, so if you keep the directory up to date, it might actually be a convenient way of building packages.

mass-change is a handy script for when you have to apply a change to many or all packages. You give it a description of the change, you give it a shell command on the command line, which is usually running perl or sed on certain files, and then the list of directories you want to apply the change to. It goes into each directory, makes sure that the git state is clean, applies the change, and sees if it actually changed something. So it's idempotent; you can just run it again, it doesn't matter. And if there is a change, it creates a changelog entry and commits to the repository. A sketched invocation follows at the end of this list.

tag creates a tag, following our tag convention; nothing fancy about that.

upgrade is similar to init. It uses cabal-debian, and it uses the package plan to find out which version you want to upgrade to, then uses cabal-debian to update the Debianization, so you get the new dependency specification in the build-depends. And it does a fancy thing with git; you'd need that napkin again, we had a napkin drawing for this. It creates an empty branch that's unconnected to the rest, runs cabal-debian on your old version, which might override some of your manual changes, for example to the changelog or the dependencies. Then it updates to the new version, runs cabal-debian again, takes that commit, and cherry-picks it on top of your changes, so that hopefully you have less manual work to do, since changes to the dependencies don't have to be redone by hand. Still, it might produce merge conflicts in some cases, and you should still check the result afterwards.

upload is just a convenience script to upload, tag and push. You pass it a changes file and it signs the changes file; it signs it in a temporary directory, so as not to interfere with make-all, which would otherwise maybe rebuild the package if you signed the source file, the .dsc file. And then it uploads it and tags it correctly.

And the last two are basically scripts to tell you where one of those inequalities means there is something to do. what-to-build looks at the packaging repository for any packages that have a changelog entry marked for unstable or experimental, so marked as ready to be uploaded, but no tag yet. So if somebody who can't upload themselves pushes a change to the repository, I can run this script and figure out: ah, something new to do for me; then I build it and upload it. what-to-upgrade compares the packaging repository with the package plan and finds anything where that other relation is not an equality; those are the packages that I would then run upgrade on. And that's it. You can add more scripts to the tool and add them to the package if you want, of course.
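For illustration, a mass-change invocation might look something like this. Only the general shape (a description, a shell command, a list of directories) is from the talk; the argument order is a guess and the change itself is just an example:

    dht mass-change \
        'Bump standards version to 3.9.6' \
        'sed -i s/3.9.5/3.9.6/ debian/control' \
        haskell-*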
Is the package plan test script part of this? Yeah, right now the package plan and the repository are mostly separate. The package plan has its own repository: you have the packages file, and you also have the test-packages script directly there. I guess that could also be changed; there's no really good reason to have it separate, so we could probably do it. And people probably should, because it would make it more discoverable from the main page. There is one complication, which is that the test-packages script retains some state of its own. One part of it is that it copies your current installation, your installed package versions, so you can only run it for your own system. You can't run it on unstable for 7.10, even if you switch the branch; it doesn't work. But that's also true for some of these tools, like cabal-debian: if you're targeting GHC 7.10, you would have to run these scripts, or at least some of them, in a schroot that has 7.10 installed, because they query the current compiler for information about the versions of the bundled dependencies. So it's not just a script; it also has state. Yeah. Okay. I'll try to reduce the amount of state it has.

So, I was again talking more than I wanted to, but okay, we are at the discussion. Some working questions that I brought, some of them left over from last year: do we have a good policy on what we want to package in Debian and what not? Currently, the situation is that we package whatever we, as maintainers, individually think is worth having, and then keep it there until the end of time, or until it's causing problems. Then there has been a discussion on dynamic linking: whether we should support it or not, how we support it, whether there should be different package names or just a system of Provides and Depends. Is there actually a benefit in dynamic linking, and does it outweigh the cost? I guess that's a good discussion to have. And then, of course: can we change our workflow so that the load is spread more evenly? And for those who are new and want to contribute, here are some things that you could do beyond just updating packages. So this is the end of my prepared part, and I'll happily open the discussion now, on these items or anything else. Thank you.

So, the question of what should be packaged. I really like the idea of the Stackage LTS stuff; I think that's very good. But there's still the question of what we, the Debian Haskell Group, package, versus how we deal with stuff that people who aren't in the Haskell Group have packaged that's built out of Haskell. Pandoc is a really good example here. We have things that depend on packages that pandoc produces, but pandoc is not packaged by us. So we have an external dependency. It's annoying; it's really annoying. You know, I can understand Jonas's desire to not necessarily be subsumed into the Haskell Group. But by the same token, one really good question is: should we package anything that build-depends on something outside of our set? Or should that be handled separately? I don't know if we actually get to ask this question, because pandoc is there, pandoc is maintained by Jonas, and we have packages that build-depend on pandoc. And I think we want to have these packages.
I almost feel like there perhaps needs to be a separate set, because at the moment we've got a set of packages that consists of a strongly connected, as it were, set of things that we can reliably maintain and manage as a group, and some other stuff that we can only reliably maintain and manage as a group provided its external dependencies stay in line. Well, technically pandoc is maintained by the group; it's just not part of it in the practical sense. If there's a problem in pandoc, you are therefore allowed to fix it. It's just a bit separate, because Jonas is taking care of it, and it's also used by lots of non-Haskell people, so for things like user support it's good that there's a dedicated maintainer for it. It's just an annoyance in our workflow. Okay, but that doesn't answer the question: is it the only such package? It's the only one, in that sense. You're right, yes. There's only one other Haskell library package outside the team, which is Agda; it also contains a Haskell library, but it doesn't have any reverse dependencies. And actually, I am told that Agda is going to move into our packaging repository.

Okay, a few people here I actually don't know. We just talked at lunch; you are new and you want to contribute, that's great. So, what about you? Well, I haven't packaged anything with Haskell. I'm interested in Haskell, but I haven't done any packaging of it; mostly I'm just interested. So, maybe in the future, but not right now. I have a few packages that I haven't been treating well lately; first I have to get a good foundation before I go further. What makes you come here? Curiosity, of course, and the basic assumption that I'd get a general understanding. Okay. Well, I did try to package something at some point, and I remember there were two different wiki entries which helped you transform a Cabal package into a Debian package. I managed to produce something, but it wasn't good enough to put on the server; that was my original idea, I wanted to put something on a server, and I couldn't compile it on the server, so I needed to compile it somewhere else. For now it wasn't anything that could have been uploaded to Debian, and that's why I wanted to get into it properly, and I thought this was a good opportunity to maybe try it now.

So, as a user, what do you think we could do in the packaging of Haskell in Debian to make your life better? Well, it makes things very accessible, and there are quite a few packages, if not that many. Is there anything that you want that we don't have? Because if it's easy enough to use, you can just get going and you don't have to learn all that stuff.

And a script that I think would be nice for someone to write would be one that double-checks the co-installability of the contents of the archive. Because in jessie, we've had a mistake. The mistake in jessie manifested in a problem that I saw earlier this week: we had two packages that were independent and built against different instances of binary, because we did a last-minute fix that sorted something out with binary. Having a script that we can run against the archive to ensure that we're not getting into a situation where we need multiple versions of things installed simultaneously would be helpful. Hit that problem once: oops, a mistake. More than once: not okay.
I'm not sure that's actually possible, because I think the situations that lead to that are when you have a package with a very odd dependency, like it requires a newer version of something that comes with GHC, and you make an exception to the rule that there's only one version per package. And that's a logical consequence: otherwise, you would have to rebuild everything with the packaged version of binary, and that wouldn't help, because binary is a GHC-bundled library. It comes with the GHC package, and it's not easily upgradeable. Right; at least being able to know where the limits are would be nice. Okay. We should just try to think of possible extra tools. Yeah, let's maybe come back to that.

Now, dynamic linking; that was the initial question, so I guess we should start there, because it is somewhat controversial and involves work. For those who are not aware of it: currently, every binary has the Haskell libraries statically linked in. Everything it needs is linked into it; it's one big blob, maybe a few dozen megabytes in size, but it doesn't have any dependencies into the Haskell world anymore. So all the build dependency problems, the dependency problems that we have to manage for library packages, we don't have for the binary packages. Even if something is broken and in flux, people can still install pandoc. That's the current state of affairs. What we could do instead is link them dynamically; GHC supports that now. We already have the dynamic libraries in the -dev packages; we would probably move them to a separate package that contains only the .so file, the dynamic library. And the question is: what are the benefits, what are the disadvantages?

The benefits that people want to have from it are smaller binary sizes, because the libraries are shared, and if they have multiple different Haskell programs running at the same time, they save memory. Another plus is that it's easier to specify data dependencies. For example, a library may ship some data files that it requires at runtime. Right now we have to jump through some hoops to make sure that every program that links this library inherits the correct dependency on the data package from the library. If it's a dynamic library, you can have the dependency directly on the dynamic library package, and everything is easy. Then, security updates also become easier: you can rebuild the library, and every binary package using it either is broken or gets the security update.

Then the downsides: the size of the Packages list will increase for every user, and I've heard people complain about the three packages per source package we already have; a fourth package per source package might make that noticeably worse. Then we get breakage in unstable for programs, unless you have two packages for each program, one statically linked and one dynamically linked; also maybe not the nicest thing. This one is basically cosmetics: installing a package will now pull in lots of packages with weird names. People might be put off by that, but apparently we can hide that to a degree. Anything else? Yes: we might get upload amplification, in the sense that if we update a library package, we may have to update a whole bunch of program packages as a result. That's actually another trade-off; the compiler-level distinction at play here is sketched below.
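For reference, the static-versus-dynamic distinction being discussed looks like this at the compiler level. These are standard GHC flags; the file name is just an example:

    ghc Main.hs             # links Haskell libraries statically, from the .a archives
    ghc -dynamic Main.hs    # links against the shared Haskell libraries, the .so files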
Here is the other choice you have when you split the package into a package holding the .so and a package that has the rest: instead of just providing a package name with a version number, you can actually generate the package name with the version number in it. Now, that has a great advantage for stable, where you're not updating very often: programs that were linked against an old version of the library will still be there and will be fine. But of course it has the disadvantage, in a rapidly developing situation, that you accumulate lots and lots and lots of these libraries. And more importantly, every single upload goes through NEW, and you can't use binNMUs and the wanna-build features to rebuild stuff anymore. Yes, so that's another trade-off. Currently, many of you know this, but I regularly look at the output of a script which tells me what Haskell packages need to be rebuilt in the archive, and I schedule binNMUs for them; I have special access for that. And that's a very cheap way of rebuilding stuff compared to doing a sourceful upload, and I would hate anything that has less automation than this. So changing package names, unless we can do it in a binNMU, which is not supported at the moment, seems to be out of the question. Yeah, because in that case, what will happen is that you get lots of sourceful uploads. No, a binNMU can't change binary package names; you have the package names in debian/control, so you're actually manually changing the control file to change the package name. Yes, you can automate that, but it's still a sourceful upload. Don't the Java people do something like that? It's a matter of frequency, I guess. Other people do it, but I wouldn't want to.

So what is Haskell-specific about these minus points? Just that Haskell is changing more rapidly, and that historically GHC only offered the one way. And it's changing under such pressure because every minor version bump upstream is the equivalent of an SO name bump. There are no stable ABIs; there are ABIs, but they're changing, they're not stable, that's the problem. Another plus, by the way: you can link a non-Haskell .so against a Haskell .so; you can't link a non-Haskell .so against a Haskell .a. So you could have a Haskell shared object that exported symbols to other libraries. Can you do that today? Today you could depend on a -dev package and use the .a that comes with it, with all the dependencies of those libraries statically linked into it; but you can't statically link that into an .so, because it's not compiled as PIC. So we end up here: our Haskell libraries are not position-independent, so you can't link them into an .so. So the .a's aren't built as PIC? Right. So we're providing .so's that we don't use? We do use them: GHC requires them if you load something with GHCi from a Haskell program, so GHC is using them internally. Right, but we're not using them in the distribution itself. We still need them to be there.

Okay, I think we're out of official time. Is there anything in this room afterwards? I don't think so, but the video team wants people to leave, right? I don't know. I mean, we could just continue the BoF as a dinner BoF: go to dinner together, at least those who are not scared off by now, and see what comes out of it. Yeah, I guess I again talked too much, so we couldn't really get through much. I think we have more social things anyway; we have a few, I think.
Okay. Then I'll officially call this the end, and thank you all for attending. Thank you.