Welcome everybody. Now we have the "running autopkgtests for your package" BoF with Paul Gevers and Antonio Terceiro. Please enjoy it.

Hello everyone, welcome. This is a BoF session, so we are going to have a discussion. We invited people to come over from the #debci channel. You can also add questions to the pad and we'll answer them. I asked yesterday on IRC and we got a few questions to start with, so that's what we're going to do.

So the first question in the pad is: where can I start? How can I find out how to write them? If you go to ci.debian.net, the documentation link in the page header points to all the documentation we have for the CI, and there is a tutorial with the initial steps. There is also a recording from DebConf — I would say 16 maybe, or 15, I don't even remember anymore — but in the last DebConfs there were a few sessions on this topic, and we have recorded tutorials for that as well. And then, if you know packages that are similar to yours — say you are packaging a Python library — you might want to look at other Python libraries, and so on. Does anybody else have input on that question?

Well, what we regularly see with autopkgtests is that they run the upstream test suite. Typically you would also do that during the build, as a build-time test suite. But quite a few of the build-time test suites test against the just-built code; with autopkgtests, you want to test against the installed binaries, so that sometimes needs a tiny bit of work. But typically, if a package already has a test suite, there is a big chance that it is suitable to run as an autopkgtest as well.

Yes. For example, for Ruby packages, the test runner moves the local library directory away, so that it forces the test suite to load the code from the installed packages. For Python libraries, if you have a separate test directory, you can copy it to a temporary directory and run the tests from there, to make sure the test code loads the code from the installed package instead of from the source tree.

One of the simplest things to do — I wouldn't call it a full autopkgtest, rather a superficial test — is to just ask the program for its help output. Then at least you know that the binary can be installed and replies with the help. It's not great and it's not much, but it can catch a tiny bit: it already tests that the dynamic linker can load all the libraries and find the right symbols, that kind of thing. It's better than nothing, and it can be useful if you have nothing else.

Another question is: is it practical for every package? Actually, I would say that nearly every package that does something — that is, doesn't contain just data — should in principle be testable. Obviously some are much easier than others, but as I said, if you can just print the version or the help, then it's a start.
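As a minimal sketch of such a smoke test — "foo" here is a hypothetical binary:

    #!/bin/sh
    # debian/tests/smoke -- deliberately shallow: it only proves that
    # the binary installs, that the dynamic linker can resolve its
    # symbols, and that it prints its usage and version.
    set -e
    foo --help
    foo --version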
Julien, Shengjing, do you guys have any input on that, any impressions or thoughts you want to share?

Not exactly, because the Go packages just ship the source of the Go libraries. So I don't know whether running tests is good for the Go libraries; I'm not sure about that. Yeah, the Go libraries ship source only; they are only used to build stuff against them. So to run the tests, you have to build the library again — and not only build it, but also the tests that upstream includes. I'm not sure that's really useful for now.

What I think would be useful is to build a small test application that uses the installed library. For example, pkg-config has an autopkgtest that uses pkg-config to build a small application, linking it against a package, to check that the pkg-config file is correct. Yeah — I may be mistaken, but I think Lipsy is also doing something like that. When I wrote an autopkgtest for a library package, that's what I did: building a small application that links against the library and uses something from it. It's a good starter, I think.

Is there an easy way to add another test case to the package while still benefiting from autodep8? Yeah, that's a great question. If you add a debian/tests/control file to the package, we will append it to the result of the automatic detection. So you say "Testsuite: autopkgtest-pkg-<type>", and if you also have a control file, then it will use both. So you can add an extra test beyond the predefined ones; that works just fine.

There was a tiny bit of discussion on IRC about doing just the --help test, and I was commenting there that autopkgtest's control file has a restriction for that. Actually it should perhaps have been named something else, but there is a restriction to mark the test as superficial. The restriction is called "superficial", which means that care is taken that the migration software knows how to judge the value of the test: basically, your package doesn't get the age reduction for a passing test, but the test still helps. If it suddenly starts breaking, there is something going on. So in that sense, it's a good test for reverse dependencies.
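A sketch of that combination, assuming a hypothetical Python library package python3-foo (the Testsuite value and the restriction name are the real syntax; the package and test names are made up):

    # debian/control (excerpt): keep the autodep8-generated tests
    Testsuite: autopkgtest-pkg-python

    # debian/tests/control: extra tests, appended to the generated ones
    Tests: smoke
    Depends: python3-foo
    Restrictions: superficial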
Does anyone want to add anything on this topic? Okay, I guess we can move on.

So the next question would be: is there an easy way to run tests in CMake packages? Does anyone have experience with that? I don't. In general, autopkgtest should be testing the binary packages, so I'm not sure how that plays with the build system building the source. I know that some test suites need to build the actual code first to then run the tests, and that can be tricky. But ideally, you'd want the tests to run against whatever is installed, and not against the source tree. I don't have any special wisdom on CMake, so if anybody finds out, please comment on the pad.

This reminds me of an idea I already had in the past but never went ahead with, which is collecting helper scripts in some binary package that we can reuse across autopkgtests. I remember someone doing something similar for Autotools in the past: building only the tests, not the actual application, and building the tests against the installed code. I don't remember the details, but I think that's useful knowledge, and we are interested in knowing about that.

I think after the latest MiniDebConf, Paul made an effort to collect all the tips and discussions we had in the last few DebConfs and put those on the wiki page. Is that the CI wiki page or the autopkgtest wiki page? I think it's the CI wiki page. So there's a lot of previous knowledge on the topic there. And if you find new patterns, or stuff that you think is useful in packages other than your own, we are interested to hear about it. So please drop by in #debci on IRC or debian-ci@lists.debian.org. We want to know about it, so that when people ask the same question, we can recommend it.

Do you have particular examples of CMake things that go wrong? I mean, the build-time tests we can run at build time, like we normally do, and autopkgtest is for the tests we want to run against the binaries. So if the package has stuff which only runs on the source tree at build time, then we just do that at build time; I'm not sure what we need to care about doing afterwards. So, the tests at build time and the autopkgtests obviously serve a different purpose, but that doesn't mean they have to be different tests. If the tests that you run at build time can also test the installed binaries, then that's great.

Yeah, true. I don't know if there's a specific example of a CMake problem where tests only run on the source tree and won't run against the binaries. But if you have tests like that, you just run those at build time; and, as you say, there may well be tests in the set which can also be run against the binaries.

I think on the pad there's an answer, from Samuel I believe: he actually has a patch to apply, to make the package link against the installed library rather than the just-built library. Right, yeah. I mean, I have seen things where the tests assume the build path. So sometimes it needs a bit of work. Exactly — sometimes you might need to argue with the build system to make it more flexible. And for instance, in one of my packages recently, we had to build a piece, and then some stuff got built in the build tree that you didn't want to be there, because you would then be using that instead of the installed package. So you build the tests, delete the thing you didn't want, and then run against the installed package. It's ugly, so you always have to weigh whether it's worth testing.

One of the great things about autopkgtests versus build-time tests is that they can catch cases where changes in other packages break your package. That's really what the migration stuff is for, and that's really why you'd be interested in actually running autopkgtests. So if your build-time tests need a bit of persuasion to run as autopkgtests, but they actually exercise your dependencies, that's still great.

I guess the fact that we discourage rpath is one of the things that makes stuff work in either context. Quite a lot of build systems carefully stick rpaths in everywhere, to try and make things work in one particular way, and then we spend our time taking them back out again, because that's generally wrong. Right. And I guess there are similar bits and bobs; I've seen people do strange things with ldconfig, to test in situ rather than in context.

I'm seeing there's a lot of discussion going on in the pad about this. So I guess if we have examples of cases where the build system makes it hard to run the tests later against the binaries, because upstream only ever thought they would run at build time, we should probably go and fix that — and then beat upstream up, to explain to them why that's not right.
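Pulling those threads together, a sketch of a test that builds a tiny program against the installed library rather than the build tree — libfoo, foo.pc, foo.h and foo_init() are all hypothetical placeholders, and the test would declare Depends on libfoo-dev, gcc and pkg-config:

    #!/bin/sh
    # debian/tests/build-against-installed
    set -e
    # Work in a temporary directory, so nothing from the unpacked
    # source tree can leak into the build.
    cd "$AUTOPKGTEST_TMP"
    cat > use-foo.c <<'EOF'
    #include <foo.h>
    int main(void) { return foo_init() == 0 ? 0 : 1; }
    EOF
    # Compile and link against the *installed* headers and library,
    # which also verifies the pkg-config metadata.
    gcc -o use-foo use-foo.c $(pkg-config --cflags --libs foo)
    ./use-foo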
All right, can we move on to the next topic? I think there's a next question, following the order in the pad.

So, there is a script that Michael Banck wrote after the last MiniDebConf, called dv-autopkgtest, which is a helper script to run autopkgtests on porterboxes and in local schroots. That's something useful that people might want to look at. Sometimes you have regressions on hardware you don't have, so it's useful to have a way of running autopkgtests on porterboxes. As you know, we can't do just anything we want on porterboxes; there are some workarounds you need, to be able to install the test dependencies in the schroots in a way that's not by hand. So this script that Michael wrote can probably help with that. It's on Salsa; the link is on the pad. That reminds me: there is a merge request open on that, to make the script a tiny bit easier to use. Right — I haven't checked; we should probably follow up on that as well.

So the next question relates to Michael's script: is there something similar for normal packages? Similar as for porterboxes, but for the normal architecture — I assume x86, 64 bits. Someone commented at the bottom saying that we do have an x86 porterbox. And if you are running on your own machine, then you can just run autopkgtest itself. If it's a small package that doesn't have a huge dependency tree, you can just install the package and its dependencies on your main system. Otherwise, you can use the LXC runner, which is the same thing we use for the CI, or you can also use KVM. And you can run the tests directly on your host system; when I'm developing, I usually do that, unless it's something that requires a bunch of packages that I don't want on my system.

Is there an easy way to set up something similar to ci.debian.net? Yes. For example, debci uses LXC for the test environment. So if I want to set up LXC on my laptop, is there anything special to do, apart from installing the normal LXC? Yes — you can install debci itself; it's available in the archive. You just install debci and run the debci setup, and it will do the exact same thing that runs in the infrastructure. It's currently not in testing, though. It is not; there were some issues in the middle, but we can get that fixed. This is actually documented in the tutorial on the website. So you just install debci and run the setup, and it will create a container and configure everything in the exact same way — it's the very same code that runs in the infrastructure, so you get exactly what we have on the service.
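For reference, that local path boils down to something like the following sketch — the exact setup command is in the tutorial on ci.debian.net, and the package name and container name here are examples:

    # Full local debci-style setup, as documented in the tutorial:
    sudo apt install debci
    sudo debci setup

    # Or test a single package directly with autopkgtest:
    autopkgtest foo -- null                             # on the host itself
    autopkgtest foo -- lxc autopkgtest-unstable-amd64   # in a container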
Next question: when is ci.debian.net going to start supporting isolation-machine? How can one help with providing isolation-machine support? So, we need to work on debci to write the glue code that creates the VM images and runs them. I think that code is even already there; we just need to find some time, and worker nodes that can actually do nested virtualization. I'm hoping that after the conference is over, I'll have more time to look at this again. But if you want to help, come by IRC and talk to us, and we can investigate it together. I think we are really close — I remember getting the code part ready; we just need hosts that actually support nested virtualization, so that it's not dreadfully slow.

Yeah, because I think Martin once proposed creating the hosts on the fly and SSHing into them, right? That was, I think, what he proposed for AWS. Yeah, we could do that. But then we'd probably end up locked into the AWS calls, right? Yeah, we need to investigate how to do that in a way that doesn't lock us in.

Would the ARM workers already support that better than the amd64 workers we currently have? I have no idea. Because basically, if we go for isolation-machine support, we actually need it on all the workers of one architecture. Do you think we want to move everything to it, or add it as an option? Well, if we do that, we have to think about how. Because if there's a test that requires isolation-machine, we don't want to run it half of the time on a worker that doesn't have isolation-machine support: the test could become flaky just because sometimes it runs on a machine with isolation-machine support and runs all the tests, and sometimes on a machine which doesn't have it, where only a subset runs. That would be bad. So I think either all the workers need to support it, or none — or we need a clever way, at the debci master, to schedule the task to the right worker, and I think we currently have nothing in place to be able to do that. The master would need to know the restrictions of the test, which currently it doesn't care about. So you'd need quite a bit of changes to do a sort of in-between setup.

Right. So maybe it's easier to just migrate everything, make sure it works, and just use isolation-machine — at least per architecture. Obviously you can have one architecture that has it and another that doesn't, but not half of the workers, unless we really do the scheduling properly. If we need to, we can spend some of the money that's there. Sure — but I think it's mostly developer time that's needed, to design the thing. Yeah, that's my point: it's probably easier to just spend money and get infrastructure that can run the stuff, than to add code to conditionally run the packages that need isolation-machine only on VMs and route them to the right workers. Yeah, let's see. But is it always the same person that's asking? I don't know, I didn't check.
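For context, this is the restriction in question: a test declares that it needs a real machine in debian/tests/control, and locally you can already satisfy it with autopkgtest's QEMU backend. A sketch, with a made-up test name and image path:

    # debian/tests/control (excerpt)
    Tests: kernel-module
    Restrictions: isolation-machine

    # Run locally in a VM, e.g. one built with autopkgtest-build-qemu:
    autopkgtest foo -- qemu ~/autopkgtest-unstable.img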
So, I think you can take the next question. Yeah — I found a question slightly higher up which already has an answer, so I'll quickly do that one. Is there a location in policy for test-only binaries, which would need to be in a binary package if we're going to run them in an autopkgtest? One answer already there is that test binaries aren't generally useful. But part of the point I wanted to make is that the stuff you need to run a test can just live in the source, because the full source tree of the source package is unpacked and available to the autopkgtest. And if you need an executable, you can just build it: build a test binary and then run that. So I don't think you need test-only binary packages, if that's what's meant there. And obviously, since it's just in your source, it can be anywhere in your source.

I think some of the GNOME packages do have test binaries; I remember Simon mentioning this. And actually, two days ago I was attending the Linux Plumbers Conference microconference on testing and fuzzing, and there is someone working on a proposal to extend the FHS with a test-related location. It would be something like /usr/test/bin, or /usr/local/test/bin, or /opt/test/bin. So I think it is going to show up at some point; I'm trying to follow that.

What I see is that we have /usr/libexec/installed-tests; there's stuff like GTK in there. Yeah, that's the place where GNOME puts this stuff. And yes, I have seen a couple of binary packages like that — I think mostly for test data. But I've also seen issues with that, because those binary packages were built from a different source package, and then it's tricky to keep the dependencies — the versioned dependencies — correct, such that you actually match the version of one source to the version of the other source, and things like that. So in that sense, it typically makes more sense — except of course if the data is huge — to have it in the same source package.

Right, I see your point, because we don't have a way of specifying that. And obviously, in the past, the trivial answer was just to fetch the data from somewhere on the internet, but with our desire to be more self-contained, and at least be able to run tests without needing the internet, that's a more annoying thing to do.

Yeah, I was going to say that we can't use the same mechanism we use in debian/control to say "this binary depends on this other binary with the same version". There we can make a program depend on the corresponding library that comes from the same source package and specify the exact version — ${binary:Version} or ${source:Version}, I always confuse them — and you can do that in debian/control because the variable gets expanded. But I don't think you can use the same thing in debian/tests/control. So you could be running the test binaries from a different version of the package, right? Yeah, but if it's from the same source package, then at least for britney that's okay, because we typically test at the source package level.

I've seen this problem where a source package was actually introduced only to contain the data, because the data needs much less updating, and that reduced the size of the source tarball. But I don't think there's a one-size-fits-all answer to this; it all depends on the circumstances.
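To illustrate the difference: in debian/control, substitution variables can pin two binaries from the same source to each other, while debian/tests/control has no equivalent. Package names here are hypothetical:

    # debian/control (excerpt): works, the variable is expanded at build time
    Package: foo-tests
    Depends: libfoo1 (= ${binary:Version}), ${misc:Depends}

    # debian/tests/control (excerpt): no substitution happens here, so
    # the exact version cannot be pinned the same way
    Tests: integration
    Depends: foo-tests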
So we have another question here, following the order in the pad — line 58. I think you can take this one. So the question is: what's considered superficial? Some concrete examples would help. Running the program with --help, certainly. But what about building an application using an installed library, for example — should I declare that superficial? There are a few answers already there, but I guess you can answer as well.

Well, just building the application, I think, is still a bit simple. But building the application using the installed library and actually letting it do something with the library — that's not superficial. I mean, if you have a library doing operations on graphics, and you build your test application and perform an operation, to test that your API call actually works, I think that's enough to not be superficial. Currently, at the release team, we are really at the level where we say: just --help is too little. Testing that the package installs is really not what autopkgtests are meant for, so we don't want a /bin/true command as a test. But anything remotely more than --help is probably okay — except of course, if LibreOffice would just do "libreoffice --help", that would really be far off.

But yeah, so it depends; it's not so straightforward. So far we are trusting the judgment of maintainers. Yes, that for sure. And nobody cares except for the release team at this moment, I guess. Yeah, but I think every time we find some bogus test, we do report bugs. And just having /bin/true as your test — that's RC, so don't do that.

Next question: how would you test GUI packages? Yeah, I've seen a lot of autopkgtests run under xvfb-run. I've never tried it myself, but apparently that happens quite a lot. Another thing that GNOME, I think, is doing in its own tests is using the accessibility bus to test, and that has the great additional feature that you're testing the accessibility of your GUI. There are a couple of examples already on the pad: it says bambam is a nice example, and it's calling xdotool — never heard of that, but maybe great. Dogtail can also be used. Apparently Qt packages even have integrated support; that sounds awesome.
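A minimal sketch of the xvfb-run pattern, with a hypothetical GUI binary foo-gui:

    #!/bin/sh
    # debian/tests/gui-smoke -- run the application under a headless X
    # server for a few seconds; if it crashes on startup, the kill
    # fails and so does the test. A tool like xdotool could then be
    # used to inject keyboard and mouse events.
    set -e
    xvfb-run -a sh -c 'foo-gui & pid=$!; sleep 5; kill $pid'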
Next question: what about standard tests shipped as autopkgtests? For example, a debian/tests/cmake-foo would try to create and build a simple CMakeLists.txt, to check that the package is found and its version is right; or a debian/tests/pkg-config-foo would try that "pkg-config --libs" or "pkg-config --cflags" work for foo. I wonder, wouldn't it be worth adding this kind of logic to autodep8? Probably, yes. Pino is saying he's using this quite a lot, which basically means that if there's a pattern, it should probably land in autodep8. Yes. So autodep8 generates the control file, and the code that handles this needs to be in some other package. But yes, if people find common patterns that repeat across packages, we want to know about them, because we want to centralize the implementation — that's why autodep8 exists. We have a centralized control file generated for Ruby, Perl, Python, DKMS, R, Go, and a few others that I don't remember right now. That's the idea: we should have common testing infrastructure, and add support for those types of packages to autodep8. And actually, autodep8 doesn't contain the test code, and regularly doesn't need to — it just knows what to call. Quite a few of these languages have their own package that knows how to run the tests for that language; autodep8 knows that this package exists and can call it, and this is integrated with autopkgtest. Yes — we don't want to own the actual test runners; we want the corresponding teams to own them. autodep8 is just the glue that tells autopkgtest how to run the tests for a package, without maintainers having to duplicate the same debian/tests/control file in a thousand packages.
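A sketch of what such a generic CMake check could look like, for a hypothetical package exporting a CMake config named Foo:

    #!/bin/sh
    # debian/tests/cmake -- check that find_package() locates the
    # installed package, without compiling anything.
    set -e
    cd "$AUTOPKGTEST_TMP"
    cat > CMakeLists.txt <<'EOF'
    cmake_minimum_required(VERSION 3.10)
    project(foo-autopkgtest NONE)
    find_package(Foo REQUIRED)
    message(STATUS "Found Foo ${Foo_VERSION}")
    EOF
    cmake .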
I love the next question, Antonio. Yes: how can we help, in development or otherwise? So, there is development needed: autopkgtest needs development help — there's even a WNPP bug open for that; autopkgtest has a Request For Help open, right? Yeah. autopkgtest is Python, and debci is Ruby, so we need people who want to help with those. But we also need help to maintain the system — to maintain the infrastructure. We have all the infrastructure automated with Chef (which will be migrated to a similar tool in the future, but it's Chef at the moment), and you can, for instance, bring up a couple of VMs and have a copy of the infrastructure locally that you can play with. And yeah, come by IRC or the mailing list and talk to us. I'm planning to organize some better way of getting people to help — maybe periodic meetings, maybe some other way of mobilizing people so they know how they can help. Yeah, I think we should have a regular meeting, live like this one, maybe. Yes — it's a lot more practical these days, so I think we should do it.

I see in the IRC backlog that many workers die frequently. So maybe we can try to move the workers to the cloud, or to something like the throwaway setups other CIs use: provision a VM, run the test, and throw the VM away. Then no worker will die, because every time you're provisioning a new one. Yeah, we could move to that. That's what Paul was talking about earlier: using some API to create VMs, run the tests, and throw them away. AWS supports that; we just need to find a way that doesn't lock us into AWS — or any other provider, for that matter. Yeah, we also run on Packet, and now one more, so that's already three, and then we have one at IBM, I guess. So we already have four that we would need to support. Yeah, that's a good question.

Next question: I have seen cross-arch autopkgtest patches fly around, for example on the at-spi2-atk package — possibly useful to document and factorize. I think there's a merge request from Ubuntu to help support that, actually. Yes, there's the cross-test support for autopkgtest itself: merge request 69. Right.

Well, this is one of the areas where it's clear, I guess, that people can help. There are quite a few people who provide merge requests, but for a lot of the areas that autopkgtest has code for, I feel very uncomfortable to actually merge, because it's stuff that I don't know how to do or how to test. Basically, there are now a couple of people that I just trust, and I hit merge without even knowing what the patch is really doing. I mean, we got a couple of patches from Simon, and I just say: please just push, if you do that thing. There are a couple more people that are actually working on it, and for those requests the people know what they're doing, and that should just be said. And if they want discussion, they should state that: "this is something that I think makes sense, but I'm not sure". The problem with the whole autopkgtest code base is that it was written and long maintained by other people — it was Ian and Martin who did a lot of the stuff, and now they are not so much involved. So it's mostly Antonio and me, but a lot of it I just don't know how to judge, and it's very difficult to hit the merge button when I don't know what the change does. So if people chime in, comment, and help review the code — or just say "I like this thing" or "I don't" — that's already a great help. I guess the merge requests that have popular demand are the easier ones. And it helps that, for almost a year now, we have been running a bunch of the pending patches in testing. Right.

So I guess we should just hit the merge button, but maybe somebody actually — I was hoping that Ian would sort of say: well, just give me access and I'll commit the pieces that need it. Or you, or... I don't know. Yeah, we need to review that list of open merge requests and do something about them, not let them sit there forever. Yeah, but I'm always hoping that somebody else whom I trust comments on stuff before I have to hit the merge button — which is a weird thing to do, when you think about it that way. Yeah, but we can't wait forever, right? We did this in the recent past: we merged something that looked okay, but then it broke something. And then we just reverted it; nobody dies from that. That's true. So we should probably go with — what was it — "release early, release often", or something. Yeah, that's true. Deploy daily into production: just merge the master branch daily into the production machines. Well, currently on the CI we have the "eat your own dog food" policy, so we're running the Debian packages. We run them from unstable on stable, though.

So, thanks a lot, everybody. It has been a great BoF, and see you next time. Bye. Thank you.