The topic of this BoF is piuparts, and we would like to know what sucks. Envel wants to start.

As mentioned in the description of the talk, what I find most difficult is parsing the output. What I would really like is: please tell me if there are any errors, and nothing else. piuparts output looks like it has debug enabled at level 2. It has two levels of debugging output enabled; one is a very custom level that I invented myself, called Dump. I've recently sent to the debian-devel mailing list a proposal for a new format which, to summarize, would have three forms of output. One would be, to standard output, a summary of the tests that are being run, and for each test, whether it succeeded or failed. Since there are a fairly limited number of tests, this would be a fairly short bit of output, and the style of the output would be very similar to what Lintian produces. So hopefully this would fix the problem of finding out whether this thing worked or not. The second output would be the current log file, but it would not go to the screen; it would go to a file. It's a useful thing to have just in case something does go wrong, and some of the things that go wrong are things in the environment (something runs out of memory, for example), which can be hard to detect except from the log file. And the third thing would be, in case there is a test failure, to optionally produce a snapshot of the virtual environment, the chroot, so that it's easier to debug. This would be optional, because such a snapshot would be somewhat large and it would be a bad idea to produce it always; but for the people who want to debug the problem, this would provide some additional help. So, does anyone think this is a good idea?

Definitely.

Does anyone think it's a bad idea?

I look at it differently, because I don't look at one single package being tested, but all 25,000 in Debian. So I definitely want to have flags, so I can say: this test failed because the test environment didn't have network, which would not be a failure in the package; and other stuff which happens occasionally when you run it all the time. And also to be able to categorize different kinds of bugs, like files still owned after purge or something.

Yeah. And the Lintian-style output should make your work easier as well. It will not just say that something went wrong; it will be slightly more specific. Any other issues anyone has ever had with piuparts?

No.

That would be a very short BoF.

Well, I'll try again. I should have reported this as a bug, probably. I find sometimes, I hope my memory doesn't fail, that I tell piuparts to ignore some files, but it still changes its exit code, or marks the run as failed, because there were files ignored.

But that would be a bug. Yeah, that should be a bug.

So I'll try again. How many of you use piuparts regularly? I don't. I'm lazy and stupid, so I don't use piuparts. James?

I'm not sure if this is a problem with the package itself, or with the way that it's run on piuparts.debian.org; I'm not sure quite where it happens. But I saw a failure the other day where the package failed to install because it depended on a new binary in that new version of the package, so that wasn't available in the base repository. Is it to do with the way that the packages are supplied to piuparts for the run? Or does my explanation make sense?
Could be: when the binary is newly built, or the package to be installed is architecture: all and depends on an amd64 binary which is not yet built on amd64, which is what the machine runs, then the test will fail. But it will be detected and automatically redone seven days later. Ideally, of course, it would be detected before running the test, but whoever wrote the code didn't think of all the cases. All the things that are wrong in piuparts are my fault; all the things that people enjoy are his achievements.

Do you get contributions? Yeah.

So, just looking at the way piuparts builds its chroots here, and apologies if this is a well-known thing: I see that it does a full debootstrap and then pulls stuff out until it gets back down to only priority: required. Firstly, is it intentional that that means every run of a package through piuparts will detect any errors that are present when purging priority: important packages? And secondly, have you thought about just using a debootstrap variant that only installs required in the first place?

To the second question: no, I haven't thought of that. I'm stupid. I shall file a bug then. To the first problem: it's fairly endemic to piuparts at the moment that it's very hard for it to detect problems in, well, in this case it would be in the preparation stage, which could be dealt with. But in the actual testing phase it's very hard to detect the difference between a problem in the package itself and a problem in a package that it depends on. And I would love to find a way to fix this. I spent several minutes thinking about this in 2005. I didn't come up with anything.

Presumably you'd have to find out what the sequence of operations was going to be and intercept after each one; so do each of the dpkg operations in sequence and then test the cleanliness of the chroot. But presumably at early stages you'd find a lot of crap left over deliberately.

The other thing is that I currently recommend that people use the --skip-minimize option, which doesn't minimize the chroot, because frankly these days this doesn't actually find any real problems. Well, it finds them very, very rarely. And it uses a lot of time, so there's no point in doing this every time. I have been thinking of the possibility of using python-apt, instead of running apt as a separate process (well, automatically running apt as a separate process), which might give me more granularity as far as catching errors.

My experiences with python-apt elsewhere have been excellent nowadays. In former times you had to do rather more messing about than perhaps you might want. You do find yourself having to re-implement bits of the apt-get front end, but often that's what you want if you're doing this kind of thing.

A problem with dependency resolution is or-depends, because there it's not deterministic what gets installed and what has been tested before. Normally a package is only tested if all its depends have been tested, but if they are or-depends then it is enough if one alternative is satisfied. That should be improved. The correct solution would be to test every permutation of dependencies, if that is possible. And we would love to have some more hardware to do this; a few data centers would be sufficient. But yeah, given the complexity of the dependency tree in Debian, the testing we do is never going to be complete. But it would be good to at least cover all the usual cases that users actually encounter.
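To make the combinatorial point concrete, here is a toy sketch in Python. This is not piuparts code; the package names and dependency groups are invented, and real dependency fields are of course more complicated.

    from itertools import product

    # Each inner list stands for one "a | b | c" alternative group in a
    # hypothetical Depends: field of a single package.
    depends = [
        ["default-mta", "exim4", "postfix"],
        ["libgl1-mesa-glx", "libgl1"],
        ["python2.5", "python2.4"],
    ]

    # Every choice of one alternative per group is a distinct install
    # scenario, so the count grows multiplicatively with each group.
    scenarios = list(product(*depends))
    print(len(scenarios))  # 3 * 2 * 2 = 12 scenarios, for just one package
    for choice in scenarios[:3]:
        print(" | ".join(choice))

Multiply that across tens of thousands of packages and the "few data centers" quip above stops sounding entirely like a joke.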
I don't think users really encounter a problem where they removed an important package, for example. The initial versions of piuparts didn't do the chroot minimization; I only added that after I had gotten the basics of the tester to work, when I went overboard and decided that I would find all the bugs ever made in Debian, thereby introducing a lot of problems. So I'm planning on rewriting piuparts from scratch, because every time I touch the current code I break it. piuparts, being an automated testing tool, has no test suite whatsoever. There used to be one, but then I formatted my hard disk. So we started collecting bad packages, so that we have something to test piuparts against.

I don't want to stand up. You could sit on the table. No, that will break. Okay.

After I've rewritten piuparts I will try to keep the command line interface compatible with what currently exists, because this will make things easier. Skipping this minimization phase is one of the things I want to make the default, because it's not useful enough for everyone to run it by default. And if we want to run it on the piuparts.debian.org machine, then it's easier for us to add an option than to have everyone else add an option when they use it.

So, I'm deciphering my handwriting, but I think I have written MBF, and I have no idea what that means. Mass bug filing. Oh yes, you wanted to talk about that.

Yeah, that's it, basically. There are some bugs which are more frequent than others. And there was the release goal for lenny of a piuparts-clean archive, and that has not been reached. There are still packages which are not clean, but I'm not testing lenny anymore, so I don't really know how many there are. And I would like to discuss which bugs should be filed with either important or serious severity, and then do mass NMU campaigns to fix them. To discuss this properly I would need a list of those bugs, which I don't have in my head. Owned files not removed after purge: I think it's annoying, but it's really just cosmetic. There's other stuff, like files in /usr/local, which I think is so ugly that I really want it gone, but maybe it's still better to file those as important. And other bugs like this. But I think this should really be discussed on debian-devel; mass bug filings should be discussed on debian-devel. And then it's easy to write them.

I added quite some functionality to piuparts by writing small bash scripts which mostly grep the log files, because there is no flag indicating the error. And it's easy to extend these scripts to then file 50 bugs at one time, because it's just the same log file to grep for the error.

And I think it would be good to have, in the system that we have, a semi-automatic bug reporting facility for certain kinds of stuff: the system generates a bug submission email but doesn't actually send it; a human needs to look at it and confirm that it's worthwhile sending.

Hello Ian. Can you tell me what's wrong with piuparts?

Yeah. Something I would like to do at some point in the future would be to add support for other kinds of virtualization than chroots, for example KVM and Xen. Because one of the things that makes me nervous about piuparts, and running it across the entire archive, is that the devices in a chroot are shared with the host, and if there happens to be something that writes to /dev/sda, then my host is slightly screwed.
Anyone else feeling this would be a good idea? Good.

Once that is done, I would also like to do things like allowing the packages to start services, because at the moment it's not safe to allow the packages to start services in the chroot; there's all sorts of things that can go wrong that way. And this means that many code paths don't actually get tested, in postinst for example, or preinst or prerm and so on.

I just wanted to say that I also see another use case for using virtualization: if your package in any way depends on a kernel. I've just run it, and it says: oh, cannot install the kernel, because bootloader, and so on and so on. So even if it's not a problem of security in running the host system, it's just not flexible enough in just a chroot. So I totally agree with that proposal.

Then something that was suggested to me: if we are running inside KVM anyway, would it not be worthwhile to run tests that test the functionality of the package? Ian?

I have a system which is able to do this, and it's been running in my cupboard for the last year or so. But there's nothing really useful being done with the output. Firstly because I don't have a huge amount of time to read the reports that it emails me. And secondly because, well, what I've decided to do very recently is that I'm going to put it up on a web page and see if I can get it on the QA page for your package. And that might get the more motivated people to write some actual tests. Because at the moment there are no tests, as far as I'm aware; well, there are probably a couple of test packages where the tests haven't been removed, but there are basically no functional tests that are wired into my system.

So what tests do you run?

Well, I run tests that are provided by the package, and if the package doesn't provide any tests then I don't run any tests. I do build it.

Are the tests provided in the debian/rules file?

No, there's a separate spec for how to provide the tests.

So, as background: these tests are tests that test the installed package, if I remember correctly. And the question I have is: is this something that would be worthwhile integrating with piuparts?

Well, yes, there's a good deal of shared stuff. It may or may not be useful to completely merge the two things, but I've got some pretty funky virtualization stuff, and I think it basically runs all the time on Xen. And I think the functionality is probably there; what you need to do is get the interface usable, to a state where piuparts can use it, and then it works for anybody who's got a Xen install. It's a reasonably flexible, pluggable virtualization thing: if you just want to run it locally and you don't have Xen, you can use a chroot, or you can use Xen if you've got Xen, or whatever. So I think that, in terms of a detailed discussion, it's more a case of us both sitting down and reading each other's code and documentation than a thing for this BoF.

Okay, fair enough. So as regards functional tests, I've looked at Ian's framework before, and agree that it's necessary to test the installed package rather than some build-time thing that may or may not completely fail later. But as a pragmatic matter of getting lots of tests running, I was struck recently by the general usefulness of dh_auto_test, which is a fairly new thing in debhelper 7, and I wonder if that would be useful to wire in.
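For context: with debhelper 7, dh_auto_test is one of the steps run automatically by the dh command sequencer, so any package using the minimal three-line rules file gets its upstream test suite run at build time with no extra work (note the indent before dh must be a literal tab):

    #!/usr/bin/make -f
    %:
    	dh $@
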
Anything that uses debhelper 7 will be using it, probably by default, but the test output will only really wind up in the build log, and it might be useful to have it somewhere a little bit more central.

How does that work? I mean, dh_auto_test, what does it do?

It looks around for various tests provided by upstream that it might be able to run. At build time. It's heuristic. At build time. All of the dh_auto commands are; they're called that because they're heuristic in some way.

Yeah. Integrating this with piuparts would be difficult, because piuparts... but certainly there's a lot to be found somewhere and acted upon. I've been missing functional tests, because any test at build time tests the upstream software, and what we are interested in here is testing the packaging of the upstream software, I mean presuming already that the upstream software works. So I'm surprised to find out there's a framework, and I'm really interested in that.

Right, so it's called autopkgtest. And I'm not entirely sure whether the version that's in the archive or my public tree is exactly the same one as I'm running, but you can email me about it. The big problem with it is precisely that if you've got some functional tests that you want to hook in, it'll provide you with a framework for running them, and if you upload the package with the tests in, I'll even run them for you occasionally, but nothing really very useful will happen with the output at the moment.

Could we perhaps find a situation where, if there is a problem running those tests, the maintainers and uploaders are automatically emailed about it?

I asked about this on debian-devel and there was some hostility to, well, it wasn't exactly clear what the hostility was towards. Well, perhaps, yes. Another thing to do might be to use the PTS email preferences system, and I'm not sure whether this would be a thing that people would be subscribed to by default, but certainly I could already very easily just email the PTS.

I think if you put it on the PTS page, like piuparts is now integrated there, then people either can pull from there or get an email if they want to. But piuparts also doesn't mail failures to the maintainers, because if there's a bug in piuparts there would possibly be hundreds of mails sent and hundreds of people annoyed, so I would rather not do it.

Right. So if you break some lower-level package, you do really notice this: if libc is broken, then I see this in my logs, and I get a whole pile of emails saying, well, you know, this is broken and that is broken and the other is broken. And actually I'm not entirely sure that would be too bad a thing, because firstly the person who broke it would suddenly get lots of attention, and secondly the bug would suddenly get lots of attention.

On the other hand, the autopkgtest tests are added to the package by the maintainers, so I would expect them to want to know about failures.

That's true. However, at the moment, because it actually does a build, it builds and installs the resulting package before it discovers that there are no tests. I mean, I did that deliberately, so I am doing build and installability testing already; and if you want to do functional testing as well, there's a whole framework for defining that.
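As a rough sketch of what declaring such a test looks like: under the autopkgtest spec, tests are declared in a debian/tests/control file, something like the stanza below. The test name here is invented, and field details may differ between versions of the spec:

    Tests: smoke
    Restrictions: needs-root

Each listed test is then an executable shipped in debian/tests/ (here debian/tests/smoke) that exercises the installed package and signals failure by exiting non-zero.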
I don't think email should be sent, just email; I think bugs should be filed, and if that results in emails, then I'm fine with emails. And to file these bugs there need to be tools to analyze the output, and then humans to validate it, because otherwise we have automatic bug filing, which is probably...

Well, at the moment I don't have the effort to deal with that kind of output. I mean, essentially what's happening at the moment is that the emails are going into my mailbox and I'm ignoring them, because there are too many of them and I can do better stuff. I should fix the program to make these emails available to other people, and that should be higher on my list than going and chasing up individual bugs.

piuparts has at the moment 300 bugs in sid, or 350 or something, so if I file them manually and I file one a day, I'm done in a year, which is not so good. But on the other hand, when Lucas started doing piuparts mass runs on the archive two years ago, he filed several hundred bugs which were all the same, and those got fixed and the archive became cleaner. I think filing bugs is the way to go, and dealing with these log files is why I'm at the moment not working on making the performance of piuparts.debian.org better, by using KVM or containers or whatnot; I rather need to spend time dealing with the output which is there at the current speed. And the solution I see there is to get the log files analyzed by more people than just me.

So can one currently just browse the output of piuparts, rather than hunting through it for each of the 50 packages I maintain?

Yeah, there's piuparts.debian.org, which has all the log files, and there are pages for each source package, and there are also pages for each maintainer or uploader. So there's a Colin Watson page somewhere with all your packages.

I mean, this may sound very lazy, but if anybody fancies getting on the CC for these emails that I'm currently getting and ignoring, and thinks that they have the time and effort to look them up... It's very formulaic, and it may only take 10 to 15 minutes each, but if you're getting 10 a day, that's rather too many to deal with. And some days it's none and some days it's 50.

Perhaps a mailing list, for the time being? Yeah, having them public would be good, I agree. debian-devel is a mailing list. debian-devel-announce.

On the whole I think emailing the package tracking system would make the most sense, because it's an existing mechanism, and people already know how to subscribe and unsubscribe, or people know where to find the documentation on how to do this; because it's weird, but it's there and it's worthwhile using. And in fact this makes more sense than emailing the maintainers directly, because then other people interested in that package can also subscribe easily.

How easy would it be to extend your stuff to support, say, KVM?
Pretty damn easy, I would say. The functionality that I don't know how to provide in KVM, and that's not because I think KVM will find it difficult, is essentially snapshotting and resuming a VM. Now, this is something that QEMU can do, so I imagine KVM can do it, but...

No, I've done it.

So you need to be able to... because you don't want to have to boot the VM every time you run a test, so you want to snapshot it in a kind of booted, running state and then resume it.

That's an interesting speedup that I will have to copy.

Yeah, so I'm able to instantiate, you know, wipe all output of the previous test and instantiate a new VM, and it takes a couple of seconds.

Sure. A couple of seconds is about as fast as it takes the piatti machine to unpack a pre-made tarball for a chroot. So in fact on piatti it makes no sense to keep using chroots. I learn new things every week. When I was younger it was every day, but I'm older now.

Right, and because of this approach, the tests don't run in a chroot; they run in a properly, normally booted system that's, you know, good enough to run an SSH server, because that's how the test machinery talks to the testbed.

Sure, and I'd like to have that as well. Well, actually, I also want to test the case where we don't allow services to be started within the virtual environment, because when I first started doing that, it brought up a number of issues in a number of packages.

Every time you start doing some kind of systematic testing on a large code base, you find new stuff. Yeah, some of that stuff is quite exciting. Running it with a properly booted system does seem to help a lot. I do have problems with kernels, though: anything that attempts to install a kernel I can't really cope with, because of the way that the Xen guest boots; I don't have it set up to use pygrub.

In terms of wiring this into piuparts, it's just a question of what interface... do you have an internal interface to how your chroot handling works, or is it all just scattered through the code?

Let's assume that piuparts doesn't exist yet, because I am going to be rewriting it.

Right. Yeah, in that case you should probably look at the interface I've defined, which is basically: you run this program and then you sort of talk to it to tell it what to do. You have a child process which acts like a server for the testbed, and you can send it commands like: open the testbed and grab it for me, throw it away again, tell me how to SSH into it.

I looked at autopkgtest years ago, when I was still actively working on piuparts, and I thought it was very impressive, and the interface was defined by you in a very good way, but I didn't have time to actually look at integrating it, which I probably should have, but time goes.

Fair enough. So maybe you and I could sit down today. Sure; I'm not here tomorrow, so today.

We've had a lot of talk between Ian and Colin; does anyone else want to say anything? What's the one thing you've always wanted to have from an automated package tester? That's not as silly a question as it could be.

This is slightly facetious, but: always, that other people are running it. As in, you know, testing one's own packages is one thing, but what you really want is that Debian as a whole is using it effectively. And so the most important thing seems to me to be outreach, and making sure that it's in a state that people are running it as a matter of routine, or at least looking at the output as a matter of routine. I would really love to see everyone run all the available automated testing on their packages before an upload.
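Concretely, the pre-upload routine being wished for here might look something like the following, assuming the current command-line interfaces (the package name is a placeholder, piuparts needs root, and its options may change with the planned rewrite):

    # after building the package:
    lintian ../foo_1.2-3_amd64.changes
    sudo piuparts ../foo_1.2-3_amd64.deb
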
Before an upload is good, but at all... It's better than nothing. Even "at all" is fairly hard to achieve right now. One of the problems is that Xen and KVM and piuparts tend to be fairly rough on a netbook, and a number of people are still relying on low-end machinery to run things; that's why we have...

I intend to do that, really: to file those bugs which are there now, these 300, and then there will be whatever, 5 new bugs a month, and I can easily file those; and if there are 30, then it's a mass bug filing.

But it would be, sorry, it would be a really good thing to have some kind of way of, not making sure, not tracking, that everyone does this, but having a culture where doing this is a natural part of what you do anyway.

It should be integrated into pbuilder, and then pbuilder runs piuparts automatically after the build. debuild runs lintian after building a package; you could perhaps have that happen as well. Then I will not have to file 5 bugs a month, but suddenly it will become 10 and 20.

Something else I wanted to say is that the faster a test method is, the more likely I am to run it. Lintian is just about on the edge: debuild runs it, so I put up with it, and it's sufficiently useful that I put up with it. If it took 5 times as long, then I would probably control-C it most of the time.

Yeah, I know that feeling. autopkgtest is really rather on the slow side for that, because typically what it does is take some version of the package and build it, and maybe not necessarily the package that you are testing: if you've got functional tests, often you find that you have bugs in the tests, so you need to be able to update the tests and still test the old package, and that means that sometimes you end up building two versions of the package, and it's all a bit slow, really.

Something that we perhaps could do is have debuild not run piuparts, but check whether there is an existing problem, and check the changelog to see whether it has been fixed.

So yeah, I do have a sort of slightly sophisticated scheduler, because it takes so long; my expected time to scan the whole archive is months. I've got a thing that tries to guess what would be a good package to do next, based on: does it have a bug open that was related to autopkgtest, and if so, well, why bother, since the bug hasn't been closed; or what version was it the last time we tested it, and has it been updated since? That kind of question.

Yeah. What's your estimated time for a run over the entire archive?
A month. So yeah, all of these things are slow.

But I'm sure on that hardware it can, with KVM snapshots, be improved by a factor of 3 or something, because it untars the base system all day.

Yeah, that seems like a thing that should be possible to optimize away, possibly using the virtual machine snapshotting. Right; the time-consuming part is entirely the build, and that means I haven't really optimized the infrastructure very much. But if all you want to do is install the package, then the total overhead is 5 to 10 seconds, maybe, in terms of selecting a package, instantiating the VM, SSH-copying the package across, installing it, and throwing the VM away. Very little overhead.

Yeah, you could basically do that in the time it takes to install the package. Yeah, if we assume 10 seconds per package, then that would be 8000, 8500 per day, which is very impressive. I think I want to do some coding now.

So, any other blue-sky kind of suggestions for automated testing of packages? Has anybody looked at integrating any of the desktop testing approaches, perhaps more into autopkgtest than into piuparts, because you'd be talking about functional testing at that point?

I have no idea if anyone has looked at it, but it definitely should be done. Things like, you know, start the program up, press a couple of buttons, see if it explodes. See if it starts up at all, in fact, come to that. It's amazing how many times you find problems even if the only thing you do is start up Firefox, because that tests an entire stack of things that can go wrong. Back when I was slightly younger, the stack on an MS-DOS machine was: there was a BIOS, there was sort of MS-DOS on the side, and then there was your code. These days we have slightly higher stacks of abstractions and code and stuff that gets buggy all the time.

And many of them are wobbly and fall over at the slightest provocation. I was not referring to Compiz here.

But yeah, abstractions and libraries, as I said, are really good, but there is a problem with making sure that everything works together. And sometimes you have problems where, and I had one case early on in piuparts testing, I forget what packages they were, package foo would work when installed, having gone through the entire testing, and package bar itself would work, but bar depends on foo, and foo breaks if bar is installed, resulting in weirdness. And this is the kind of thing that would be nice to catch before users catch it in a release.

But that's what I was saying: even if people aren't running it before upload, which obviously helps anybody running unstable, and avoids slowing down developers and so on; even if people don't do that, catching things before they get into a release is hopefully the ultimate goal.

I would like to catch them before they go into testing, and then automatically or semi-automatically file a bug saying that there is a serious kind of problem that piuparts or another package test has found: please don't let this package go into testing until it has been fixed.

You mentioned the expected time to run over the whole archive. It seems to me that perhaps a slightly more interesting question is: can you keep up with the flow in unstable?

Easily. We can do about a thousand packages per day, and it's a fairly rare day when a thousand packages get uploaded. Excuse me while I go and make some uploads. Please do.

autopkgtest is more in the order of a hundred packages a day, depending on whether today's list of packages includes OpenOffice. Yeah, so that's a bit iffy. But basically, often you upload something, and then it won't quite get around to testing it, and then somebody will upload it again, so it's a bit behind, one version or so, but it degrades relatively gracefully, I think.
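As an aside, the snapshot-and-resume trick discussed above can be approximated with stock QEMU/KVM tooling; a sketch, with the image names and snapshot tag invented:

    # copy-on-write overlay, so each run can throw away its writes
    qemu-img create -f qcow2 -b sid-base.qcow2 testbed.qcow2
    # boot once, then in the QEMU monitor: savevm booted
    kvm -hda testbed.qcow2
    # every later run resumes from the saved, already-booted state
    kvm -hda testbed.qcow2 -loadvm booted
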
Out of interest, what kind of hardware do you run it on?

At the moment it's a dual-processor P3 866. How much memory? A gig, I think, and nearly all of that is assigned to the testing guest, obviously.

So, on the order of the same as the piatti machine, I think. No, piatti has 4 gigs now, I think, and 2 gigahertz per core or something. Okay, what do I remember. And the memory really makes a difference. Yeah. piatti does huge amounts of IO, so memory helps.

Yeah, if somebody wants to give me access to a nice fast machine, then I can stop having this thing whirring away in my under-stairs cupboard making the house warm. We might be able to point you at someone who has access to a machine. Yeah, but with that load on the machine, we cannot rebuild the archive as fast for every rebuild test. Yeah, well, we'll see. But Debian has access to more hardware.

It would actually be nice to have this test run on more architectures, because, for example, I know that right now it doesn't work on alpha; I got a bug report, but I cannot test it myself beforehand.

That is certainly true. The way the packages are tested by piuparts is that I wrote, well, I wrote piuparts, then I wrote piuparts-master and piuparts-slave, and the piuparts-master tells the slave which packages to test. And we could run a pair of these, master and slave, either per architecture, or have a setup, which would be slightly faster, where only architecture-specific packages get tested per architecture. But if you already have a bug where an architecture: all package, or sorry, an architecture: any package, fails on alpha but not on some other architecture, then it might be worthwhile to look into having more architectures supported by this. The question is then where we get the hardware. Well, Debian has a lot of money. But where we get the hardware to test very slow architectures, or packages, that is really the issue.

The binary packages should be the same on all architectures, and I see the greater risk of difference in the fact that there are still manually built packages accepted into the archive; the uploaded packages are not rebuilt on the autobuilders, and I think that should be changed, to make the architectures more similar. And sure, of course it would be great to test all packages on all architectures, but I think the first step should be to test the architecture-specific packages on each of the other architectures. If we can get that, then we could also add things like: if you get such a bug report, you can request testing of that package on all architectures.

Buying alpha hardware is slightly hard these days, unfortunately, because they make very great heaters. I once had a machine with an alpha CPU that burned four hard disks because the ventilation wasn't good enough.

So, in terms of other architectures: if we've managed to get something autopkgtest-like working with KVM, like the sort of Xen booted-system snapshot thing, if we get that working with KVM... KVM is essentially just an accelerator for QEMU, and this means that in principle we could boot an entire system, an alpha system or m68k or whatever, under QEMU, and do all of the tests, the whole build, everything. It would obviously be a bit slow, but maybe useful.

That's a very good point, I hadn't thought of that. And slowness is irritating, but having it work slowly is better than not having it at all.
And again, if we only test architecture-specific packages that way, plus other things by request, then hopefully no one wants to build OpenOffice under QEMU instead of on m68k. Possibly. It is faster. See, I've learned two things this week. But anyway, yeah. Sure, that sounds like a good idea.

Is there anyone here who would like to be spammed with all the test results of their own packages, every time something happens?

If there's a failure, absolutely. But not successes. Or something that looks odd. But yeah, if it's just "everything is peachy", well, it's not that exciting. I mean, at least for me.

Okay, I would find it very exciting to know that my package installs. I would assume that if it didn't, I would get a mail. Oh yeah, your packages actually have users.

Yeah, exactly what Lars said. I do sometimes wonder: is anybody but me actually using this? And then sometimes you discover bugs and you think: either everybody who uses this program hasn't upgraded to this version yet, or nobody is using it. And so getting a mail saying "I tested this package and it installs", that would actually be useful.

Maybe it would just be enough to send them by default, with some easily procmail-able header, to the package maintainers. Before you came, we actually discussed sending them to the BTS, which has a filtering system. Right, exactly, and that's the optimal way to do it.

Yeah, now that you're here, and you know everything about the bugs: if we wanted to start doing automated bug reporting from piuparts and autopkgtest, and possibly Lintian and other things, is there any advice you can give? Is there anything we should beware of?

Yeah. The main thing that would be useful is to set up a set of usertags, and a user that you all use, that's common to you, that identifies these bug reports. That way it's possible for somebody who is interested in them, like you are, because you want to see what's going on with them, whether they've been fixed, what people are doing with them, to select this whole set of bugs. So when you submit them, include a User header for whatever makes sense for piuparts, and then usertags that are appropriate: for example, that it failed installation or something like that; totally up to you, of course. That way you can select this whole set of bugs very easily, and you can discern between the different types of errors and see whether people are fixing them and everything. And that's the only advice I could give. I mean, people are going to respond and say... I assume that if you're finding these sorts of bugs, they're actual real bugs, and users would be finding them and filing them, and at least in this case it's a comprehensible report, as opposed to a user telling you about some random failure that doesn't tell you why it failed, and it actually failed because of a different package.

Yeah, like dpkg. Yeah.

Something I once thought about was whether it would be worthwhile to have a non-usertag tag indicating an automatically filed bug.

I mean, I wouldn't have a real problem with it. My general method of dealing with proposed new tags is to require that they first be implemented as usertags, and that basically forces the impetus onto other people. But if you have a set of bugs that are filed with an appropriate usertag, that have a cohesive existence, and there's a rationale behind it, then generally I'm open to adding new tags. I just generally won't add new tags until somebody has proven to me, at least, that they're going to use them, because otherwise it just makes things more complicated and it's a waste of time.
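For illustration, a submission following that advice might start like this; the user address and usertag names here are invented placeholders, not established conventions:

    To: submit@bugs.debian.org
    Subject: foo: fails to install in a clean sid chroot

    Package: foo
    Version: 1.2-3
    Severity: important
    User: piuparts@example.org
    Usertags: piuparts failed-install

The User and Usertags pseudo-headers in the body are what let anyone later pull up the whole set of piuparts-filed bugs, or just one error category, in the BTS.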
And that can be implemented as a usertag: "automatically filed bug". Yeah, it's at least a start. I mean, if a bunch of people use it, then it's definitely something that we could think about.

The reason I'm thinking about that is that filing even semi-automated bug reports is a fairly large amount of manual work, and manual work tends to be something we do too much of in Debian. Yeah, it doesn't scale. So yeah, maybe that's something that could definitely be done, I mean, if it ends up being useful; I wouldn't have a problem with it.

Okay, I guess we are done, one minute early, possibly 90 seconds. Thank you, everyone.