So this BOF is about wacky ideas, things that would be nice to have. This is mostly a workshop; the project page is basically only there so people can make suggestions, and most of the work will happen on the whiteboard. The idea is to collect all the ideas we have, discuss whether they are worth pursuing or not, and in the ideal case come up with a plan for implementation, or perhaps even a transition plan. I brought a few ideas with me, and so, I believe, did some others. I think we'll kick this off with a one- or two-minute presentation of each idea, and then together we can discuss whether it should be pursued or not. OK, I'll start with my first idea, which is package build dependencies. The problem is that some packages are interdependent on each other when building. The worst case in this regard is the toolchain: you first have to build a small compiler without any operating system support, a so-called freestanding compiler. Then you use this compiler to build the kernel and the kernel header files, and those to build the libc. Then, with all the header files together, you build the real compiler, so you basically have to go through the compiler source tree twice. In fact it's even more complicated, due to things like libgcc, which the libc has to link against but which in turn links against the libc, so you actually go through the gcc source tree four times. The idea would be to allow, in each package stanza of the debian/control file, an additional Build-Depends line. If you can satisfy the build dependencies for all the packages, then you call the usual binary-arch and binary-indep targets. If you cannot, you satisfy as many as possible and call per-package binary-&lt;package&gt; targets to build just those packages out of the source tree.
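A sketch of what the proposed per-binary-package build dependencies might look like in debian/control. Note that a Build-Depends field inside a binary stanza is the idea being proposed here, not something current Debian Policy supports, and the package names are purely illustrative:

```
Source: gcc-bootstrap
Build-Depends: debhelper (>= 5)

Package: gcc-freestanding
Architecture: any
Build-Depends: binutils
Description: stage-1 freestanding compiler, buildable with no libc

Package: gcc
Architecture: any
Build-Depends: binutils, libc6-dev, linux-libc-dev
Description: full compiler, buildable only once the libc exists
```

A builder that cannot satisfy the second stanza's dependencies would build only `gcc-freestanding` via its per-package target, use it to bootstrap the libc, and come back for the rest.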
Then you don't clean out the tree; you unpack the next source, and so on. This way you can bootstrap automatically, which comes in handy if you build a cross-compiler toolchain, which has basically the same setup procedure. Since you already have a fully functional host system, you may be interested in building these in an automated way. So that would be the first idea. Would anyone else like to present the next idea, or should I continue? I think the thing about architectures is something you seem to have saved for later. Yeah, we'll talk about that here. You're the expert. Well, I'm basically considering myself the moderator, more or less; the idea was to base this on audience participation. I have an idea, but it's not related to this; I'll hold it for later. There's no formal topic, just things that would be nice to have. Would it be alright if I used the whiteboard? Sure, that's what it's there for. We do have some flipcharts around the place somewhere if that would be useful. Just go ahead. Let me outline briefly the problem that I have and my proposed solution for it. As a Debian user, I love Debian: so many packages, everything works, everything's great. When I do have problems, it's always the same usual suspect, which is hardware that does not have a driver already shipped with Linux. That leads to one of two scenarios: don't use the hardware, or use a proprietary driver in order to use it. I don't like proprietary drivers, because it's been my experience that bad things happen with them, like kernel lockups.
The issue, however, is that I can't just go to the manufacturer and say: look, I don't want your software, I just want the hardware. Give me the hardware, and give the specifications to the Linux kernel folks, because I trust them to provide a working driver for your hardware. The problem there is, I'm just one person. Why would they care? Just go away. So I have a proposal for resolving this. I want to create a service that runs at boot time. It's strictly voluntary; I'm not saying it runs by default. It's a feature where at boot time you can say, I'm going to run this service this time. The service is a bootstrap operation on top of the discover service. At boot time, discover, or something like it, detects all your hardware, determines which modules drive your hardware, loads them into the kernel, and everything works great. I propose a package that provides a service, which I call, should I use green? The green marker, fine. I call it informed-vendor. In the list of services that run, I want it to run after discover, so that all the modules which can drive the detected hardware are loaded, up through networking, and I have access to the internet. Then right there I want the service to tell the user: I'm going to pause here for a moment. I see all the hardware available on your system. The following hardware I have drivers for; the following hardware I don't have drivers for. Would you like me to inform the vendors of the devices you don't have drivers for that their hardware doesn't work, and that as a result you're not going to use their hardware and will recommend this other hardware instead? Now, the idea here is not nagging the hardware vendor; it's informing the hardware vendor. So this part is going to inform the vendor of unsupported hardware.
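The informed-vendor pass described above can be sketched roughly as follows. This is a minimal illustration with invented names and sample data, not an existing tool: the driver table, the device list format, and the notice wording are all assumptions.

```python
# Sketch of the proposed "informed-vendor" step: given the devices that
# discover found and a table of known drivers, split them into supported
# and unsupported, and draft the voluntary notice to the vendor.
# KNOWN_DRIVERS maps PCI vendor:device ids to kernel modules (sample data).
KNOWN_DRIVERS = {
    "8086:4222": "iwl3945",
    "8086:27d8": "snd_hda_intel",
}

def partition_devices(detected):
    """Return (supported, unsupported) lists of (id, description) pairs."""
    supported, unsupported = [], []
    for dev_id, desc in detected:
        (supported if dev_id in KNOWN_DRIVERS else unsupported).append((dev_id, desc))
    return supported, unsupported

def vendor_notice(dev_id, desc, user_count=1):
    """Draft the polite, user-approved note to the hardware vendor."""
    return (f"{user_count} Debian user(s) own your device {desc} ({dev_id}), "
            "which has no native Linux driver. Please release specifications "
            "to the kernel developers, or we will recommend supported hardware instead.")

detected = [("8086:4222", "Intel wireless"), ("1234:abcd", "FooCorp winmodem")]
supported, unsupported = partition_devices(detected)
```

In a real service the device list would come from discover or lspci, and the notice would only be sent after an explicit yes from the user, as the proposal insists.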
I also want to set up another website with a database behind it. It takes a snapshot of your hardware, sees all the stuff you have running, takes an MD5 hash of the MAC address of your ethernet card, or network connection I should say, and sends it off to the database, so that Debian can keep a running total of all the hardware that all the Debian users have. So when, for example, the hardware vendor whose device doesn't work for me says, I'll ignore you, you're only one person, Debian can say: no, he's not just one person, we have 20,000 users using your hardware. Are you sure you really want to anger all those 20,000 people by not providing a native Linux driver for your hardware? So that's the idea: make the vendor aware, voluntarily, on the user's initiative. This isn't meant to be spam. It's not automatic, and it doesn't run all the time. It's strictly voluntary; it requires user input saying, yes, I want you to inform the vendor, and it's just another option on your GRUB or MILO menu. But it also lets Debian know the true statistics on all the hardware out there that's currently being used. I think the statistics approach is the more useful part of this, since the vendors are probably just going to ignore one or two mails. But what if they had 20,000 emails one day, each saying: I'm sorry, your hardware doesn't work, I'm going to recommend my users buy this other hardware instead? Yeah, but when it's a form email, it will just be treated as spam. In any case, the Ubuntu people have made a hardware inventory system that, I think the first time you run it, asks: do you want to send your hardware profile to the Ubuntu database? So they already have such software, graphical only right now, and a database. We should coordinate with them in some way, I think, so we can put real pressure on the vendors.
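The anonymised submission described above, identifying a machine only by an MD5 hash of its MAC address, could look roughly like this. The record layout and field names are invented for illustration; only the MD5-of-MAC idea comes from the proposal itself.

```python
# Sketch of the anonymised hardware report: the machine is identified
# only by an MD5 hash of its ethernet MAC address, so the central
# database can count users per device without knowing who they are.
import hashlib
import json

def hardware_report(mac, devices):
    """Build the submission record; 'devices' is a list of PCI/USB ids."""
    # Anonymous but stable per machine: the same box always hashes the same.
    machine_id = hashlib.md5(mac.encode()).hexdigest()
    return json.dumps({"machine": machine_id, "devices": sorted(devices)})

report = hardware_report("00:1c:23:4a:5b:6c", ["8086:4222", "1234:abcd"])
```

Because the hash is stable, repeated submissions from the same machine deduplicate on the server side, which is what makes the "20,000 users" running total honest.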
That would be at least a start. Okay. Well, this is not just a Debian thing; it applies to any distribution. Yes. I was going to make a point: there's a positive side to it as well. Besides the hardware that isn't supported, you can also say, look, there are 20,000 users who have card A that isn't supported, and actually there are 5 million users using products from manufacturer B, or whatever. You can start to estimate the size of the Linux market, which is something that holds a lot of vendors back: they know it's out there, but they don't really know how big it is, because nobody has actually attempted proper statistics. There's the Linux Counter, but that's probably a severe underestimate. It's effectively a popcon for hardware. Yeah, that's right. And it's different from the Ubuntu machine database, because that's something they keep to themselves, so we don't actually learn anything from it. You'd work out the CPU, work out the other statistics too. And if you randomise the identifiers, or use something generic like that, it's not tied to individuals. So should we do it? Should we, in fact, just step on both? There are a couple of caveats, of course. There's one operating system, Vista, that has quite a lot of hardware support compared to us in a few problem areas, but the fact is that the Linux kernel, as it is, has the best device support overall. So I was a little worried that we might be sending out a message like, we're really struggling here, when the fact is we're not, I think. Well, you can emphasise the positive part of it as well, you know, like the 5 million users.
Yeah, yeah. If we gather the statistics, we would also present on the web pages: this and this hardware doesn't work, but all of this long list does. That's right, the website shouldn't start from the negative perspective, just a list of the things that don't work. Sometimes that is the more useful list. Indeed, but you want both to be presented. I'm working on support for people to get a laptop and get the best out of it, and I came up with quite a similar idea, so this could be helpful. Today, if you have a laptop, many laptops share the same devices, PCI and PCIe devices and so on, and this service could also be used to inform the user: okay, you have this device, and if you want, here is the page that lists all your devices and shows how to configure each one to get the best out of it. Finally, it could also feed the wiki: today people have to document by hand what you do to get a given machine's wireless to work, and this could support both the statistics and a one-click explanation of how to get the machine working. I just thought it could also be cool to collect reliability data, on hard drives for instance. It'd be great to know which ones are reliable, which hardware is better. So there are lots of things that could come out of this. Google published something like this before. Yeah, but Google didn't say which hard drives are more reliable than others. Yeah, I guess that's information it's more useful for them to keep secret, while we all buy the bad ones. If you do have hardware and it's not recognised by a driver, you're obviously going to need to be able to look up what it is, to know which vendor it comes from.
But likewise, you could then also tell the user that a new driver is available that they can upgrade to, to get the most out of the machine. Likewise, you could know which version of the driver is in which kernel, and with a blacklist say: actually, the driver you've got, the one present in your kernel, is known to be faulty; don't use it, or go for another version, or things like that. So there's a lot of feedback you could provide based on this. Integrating with apt-listbugs or the BTS, say, to warn the users who are tracking stable. Generalising it, it would be a service that creates an online database from information available on the system, including possible upgrades, much like a package tool but a bit more generic, with plugins. This could fit in with Enrico's work on libept, actually, because that system has pluggable data collection. It could fit. You'd want some annotation support on the website as well: oh yes, this motherboard does actually work, but it's mislabelled. Even within the hardware that has driver support, some of it doesn't really work, and you can't always find that out in advance. Most people who see this information will see it because they already bought the hardware and are running the service. Sure, but it could help the people who are one step behind that. Consider buying a new laptop: you think, I don't know how to choose one; I want one where everything works. That's actually quite a hard thing to determine. It would help if this could map back to a list saying this laptop contains all these chips. But the thing I really like about the idea is the other side: if I already bought the hardware and it does not work, I would like a list that says, okay, we know it does not work, and here are all the alternatives that do work. Don't waste my time if you're broken; I'll get one of the ones that's known to work. I'm much happier that way.
There's a big problem with that: it's very hard to identify exactly which bit of hardware is going to work, because you can't rely on the model, you can't rely on the description, you can't rely on the manufacturer, and you can't rely on the vendor. Sometimes you almost have to test the circuit board itself to see whether you're going to get a one or a zero, pick it up and see which chips you've got, because even the FCC IDs are unreliable as to which hardware it actually is. It can partly be resolved if you have this kind of reporting procedure, but the variation even within a single ThinkPad model makes it difficult from the user perspective: they say, well, I've got a model 647 Belkin DTSA34, and you might still have to tell them it depends exactly which revision they have. I was just thinking it could attract other communities too; there are sites all about showing off the specs of machines, so you could get some people interested just for the geek value of it. You're connected to the web right now, aren't you? I have this web page; can you pop it up on IRC? Yes, I'll be right back. There, that's an attempt at this, and the idea is to give the user support, to tell them what works. If your computer is covered, you go from the start page to, say, ThinkPad, and then the model page. Currently it's maintained manually, because I want to do one thing at a time. The first part helps the user who wants to buy a laptop: it says, okay, this works, this works, this works. And if everything works, there's a rating on the page. We end up with a logo, a rating bar; currently it's an icon saying something like three stars, meaning the laptop works reasonably well.
Four stars means most things work and it's nearly perfect; two stars, you're in trouble; five stars will come soon, we hope. And below, it says how to install. Today, if you search the web for a laptop model plus Linux, you mostly find install reports, and laptop-specific sites seem to be the most concentrated place for that information. But those reports are often very old: someone installed it early on, the hard way, and the page doesn't tell you that now, a year later, it just works. And usually it's not on the wiki. I've read many of those pages, and that's why I came up with this idea. When it's a mailed report, people who have comments have to write mail or something, whereas if it's on the wiki, or in a proper tool backed by a database, it can be kept current. I think the important thing for something like that is to have the date of the last change and last update at the top of the page, or a note like, this information matches the current release. Because even on ThinkWiki, which is maintained quite well, you'll still find outdated information. OK, so to sum it up: some such projects already exist, but the nice-to-have would be getting them to work together. There's also Greg Kroah-Hartman's kernel driver project, where they made a general offer to device manufacturers to write drivers for them. Also, Kenshi Muto has written a hardware compatibility check tool where you input the output of lspci and it tells you which kernel drivers you need and whether the device is supported. That information is generated directly from the kernel, as far as I know. So I think we could also persuade him to list from which kernel version onwards a device is supported, or something like that.
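A checker in the style described above, taking lspci output and reporting driver and first supporting kernel, might work roughly like this. The support table contents and function names are invented for illustration; this is not Kenshi Muto's actual tool.

```python
# Rough sketch of an lspci-based compatibility check: parse `lspci -n`
# style output into vendor:device ids and look each one up in a table
# recording the driver and the first kernel version that supports it.
import re

# Illustrative sample only; a real table would be generated from kernel sources.
SUPPORT_TABLE = {
    "8086:4222": ("iwl3945", "2.6.24"),   # (driver, first supporting kernel)
}

def parse_lspci(text):
    """Extract vendor:device ids (four hex digits, colon, four hex digits)."""
    return re.findall(r"\b([0-9a-f]{4}:[0-9a-f]{4})\b", text)

def check_support(text):
    """Map each detected id to its support entry, or None if unknown."""
    return {dev: SUPPORT_TABLE.get(dev) for dev in parse_lspci(text)}
```

Recording the first supporting kernel version, as suggested in the discussion, is what lets the tool answer "upgrade your kernel" rather than just "unsupported".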
So actually what we need is the latest lspci data, some unified minds, and a listing. Another idea: this could be a live CD. Yeah, take it for a test drive before you buy. You could even come with a USB stick. You just go into the shop, ask the salesperson, can I just boot this CD on this machine for a test drive, and it scans the machine and gives you a report of what works. Yeah, that would be really nice. Any takers? It would be quite simple to hack debian-cd to do that, really, or the Debian live-helper. Okay, any volunteers? You could see it doing the scan and then sending everything to a database. Yeah. The live CD is easy, in fact. The web stuff, that's probably the big bit. I mean, okay, setting up a database is not difficult either. I could do it, but I don't have the time for all of it. Some things just need doing. Yeah, of course. So if anyone's interested, maybe we can set up something. What would also be nice is getting hold of the existing data. There are databases online that show the latest twenty entries that were submitted, though not an overall picture, and when you look they're all broken in places. It's quite a hard database to get right, because you have USB, you have PCI, you have tons of manufacturers, tons of sources of information conflicting with each other. This could easily turn into quite a big project, couldn't it, really? Yeah. So let's think about this. That's why I brought the idea here rather than going it alone: I think it's necessary. This machine is connected to the internet, and everyone would be able to bring in submissions at once.
So, the tags and all that kind of stuff, I throw that in here as well. Yeah. Is anyone running this now? Yeah, the cameras are. Well, I think this is the point where we should look for a volunteer to coordinate things. Yeah. Well, I'll go in for the live CD part. Okay. Anybody else working on it, that would be interesting. Well, me too. Okay. Take names for it, and we'll look for them when we start talking about it next week. We'll set up a Debian Live image for it. Can you do that, Simon? Sure. But isn't there the question of whether you want to do this within Debian or as something distribution-neutral? A Debian-wide project or a Linux-wide project? I think it should eventually become distribution-wide, because you always want to share; we all face the same hardware problems. It depends on which aspect. Clearly the detection and reporting stuff is general and should be upstream. But for the database, you might want to ask, does Debian run on this, and that's somewhat Debian-specific. And with Debian Live, it will have to be Debian. So I think you'd be able to do both. Absolutely. How about we just put our email addresses down, get on some mailing list, and go from there? Yeah. There are lots of other people doing related things; somebody will know who. So, shall we pass round a piece of paper? I think it would be better to write an email to the relevant mailing list with the summary. Yeah. Good idea. Okay. Good one. Next idea. May I do one? Yeah. Go ahead.
Well, let's discuss this idea. It's pretty simple. I really don't like meta packages. A meta package is like the gnome package: it contains practically nothing, but it depends on all the GNOME packages used for the system. This has a major drawback: if you install the gnome meta package, it installs a lot of stuff, and if you remove one of those packages afterwards, the gnome meta package gets removed too. That's annoying, because on the next update you won't get a new piece of software that becomes part of the GNOME suite, if one comes out. So the idea is to use debtags to enable the packaging system to subscribe to some kind of channel of packages, like an RSS feed of packages. You could configure such a channel to match every package carrying the debtag suite::gnome, and then you'd be asked: hey, there's this new package in the GNOME suite available, do you want to install it too? You could always say yes to that, and remove unneeded packages when you have to free some disk space. So that's it. I've been trying to get Enrico Zini to talk with me for half an hour to explain how we could implement that, and I haven't managed yet, but I would like to experiment with it. I would also like some input on which packaging tools in Debian this could fit into, because I don't know where it should be integrated: into dpkg at some point, into aptitude, into a completely new tool, and how do you coordinate between all of those? Are the packages installed that way marked auto for aptitude, and so on. Is this a good summary? The first part is already done, actually. Yeah, you could sum it up like that. I was wondering if anything was missing. You're missing the rationale, but...
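The channel idea above can be sketched in a few lines. This is a toy model, not an existing apt feature: a channel is just a debtags query, and the tool offers any matching package that isn't installed yet. Package names, tags, and function names here are illustrative.

```python
# Sketch of the proposed debtags "channel" subscription: a channel is a
# tag query, and the tool offers any matching package not yet installed.
# Removing one package no longer breaks the whole set, unlike a meta package.
PACKAGE_TAGS = {            # sample slice of the debtags database
    "gnome-terminal": {"suite::gnome", "role::program"},
    "evince":         {"suite::gnome", "role::program"},
    "xterm":          {"role::program"},
}

def channel_members(query_tags, db=PACKAGE_TAGS):
    """All packages carrying every tag in the channel's query."""
    return {pkg for pkg, tags in db.items() if query_tags <= tags}

def pending_offers(query_tags, installed):
    """Channel members not yet installed: what the tool would offer next."""
    return sorted(channel_members(query_tags) - installed)
```

When a new package gains the suite::gnome tag, it appears in `pending_offers` at the next update, which is exactly the behaviour the meta package fails to provide once you deviate from it.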
Yes, the rationale is that meta packages are annoying as they stand, which is true. There's an issue, though: there's a difference between the set of packages in the GNOME desktop task, or ubuntu-desktop or whatever, and all GNOME packages known to man, which is a much bigger set. Debtags will presumably tell you about everything GNOME, but it won't tell you about the standard desktop subset. In Ubuntu especially, you want to say: the standard desktop set, apart from all those Asian fonts. We could probably add a new value to the current vocabulary's facets. You've got suite::gnome at the moment; you could split suite::gnome in two, a main or core one and the rest. But I think there is a real issue there: you don't necessarily want everything. Well, GStreamer uses a sorting of its plugins, good, bad and so on, but that's a functional level, not "are you going to need this?". You'd have to have some gstreamer-common, gstreamer-uncommon, gstreamer-rare, or something like that. I suppose the point is debtags has these facets, so there could be a facet for, I don't know, popular. Popular, that's it, thank you. The next step after that would be to replace tasksel with something using this, actually, so you could install whole suites at the end of the installation process. So, yeah. This opens up a big issue about how you integrate aptitude with debtags, or whether you use a different tool; is it such a different way of looking at the world, or not? I saw Enrico's searchy tool thing, which is quite funky; I forget what it's called. You search for things and it reduces your set until you get something you want, which is quite different from having a big list of everything by name.
I've been trying to build a kind of pretty flexible framework where you could specify any debtags query: say, I'm a Haskell programmer and I would like every single new Haskell library to be installed. Right. Okay. I have a question from experience. When a new release comes, sometimes, for example with CPU frequency scaling, one implementation gets chosen and installed by default, and people who upgraded don't know that this particular package is now preferred and installed by default; they miss the mainstream change. Yeah, like we want minimal and standard meta packages, which have the same problems. It's very annoying: on every box I've installed, the first thing users ask is, please take all of those fonts away, and removing them breaks the meta package straight away, which defeats something users actually want, namely just getting the standard things when they update. So it's broken really quite badly. Yeah. In general, that's a big problem. It might be worth some automated testing on the difference between a freshly set-up system and a system that has been upgraded from the last version. There is quite a lot that slowly rots if you upgrade through five versions, because there's now a new standard package and there's no good mechanism for that to propagate. Often you'd quite like to be given the choice and say yes: I just want the standard thing. I don't want to carry on using what I was using three versions ago merely because it's installed; it's not a big deal to me. So the question is, does debtags actually solve this problem? It can, in theory. What we need to make it happen is the notion of this common package set.
You would need a tag that would be, I don't know, preferred or recommended, but we tend to have our flame wars in this community, and every time we have to choose something, I don't know. Well, it would be the same flame war all over again. Because debtags tries to define just aspects of a thing; it's not trying to say "this is preferred". Preferred is actually an attribute of Debian, not an attribute of each package on its own; it's a different level. It's a bit tricky, actually. This aspect of "please give me what's default" falls between all the stools, and I'm not sure where that information should live. It's like the concepts: I want to know about all new Haskell packages; I want to know about new standard packages; or packages I've got that are no longer the default. If the final goal is to replace tasksel, or provide an alternative for it, you have to include that information alongside what is already present in the tags. And actually, people are making these choices anyway now. The choice that you get a particular version of the CPU scaling software installed by default has been made by someone. So all we want to do is reflect that in the metadata, so that if you're arguing about what's preferred, you can say: somebody decided, in the sense that that's the one you get by default. We also have Priority: standard in the packaging system, which I would normally use for this; the big problem is that if a new Priority: standard package turns up, it's not automatically installed. Unless the user says no, I would assume that should happen on an upgrade. That makes sense; it's an apt thing. I'm going to carry on with this stuff, but I'm quite busy at the moment; maybe in spring next year or something. On your question about being informed of all these aspects in aptitude: I recently discovered you can search for a lot of things from the command line, and aptitude has grown the facility to search on a lot of these aspects.
It would be better to say: I want a tag. The framework is mostly there for it. "I want to be informed of all new Haskell packages" would be easy, and it should be easy to track; the framework is all there to do that, in a manner of speaking, by writing a tool in the style of a feed reader. Instead of a web browser you'd have a tool that keeps track of new packages, and you can ask for packages that are both new and carry a given tag, for example. The only problem is that each package has only one new flag. So basically you have to view the full list for all the groups you're interested in, then say "forget new" and start over; there's no way to say, I've flipped through the list of new Haskell packages, keep the new flag for everything else. That would be an RFE. When we handle this in 64 Studio, we have a meta package called 64studio which is basically all our personal favourites and recommendations, and that's updated along with all the packages; it pulls in new packages when you do an upgrade. I wonder if you could actually subscribe to such meta packages: for example, there's a bunch of users who all agree these are the kinds of packages they want, like all the Haskell developers, so you subscribe to their meta package. Yeah, and actually an individual could then just publish on their own site: these are my personal recommendations; if you subscribe to my personal feed, you get the same packages as me. This connects with another topic, which is reviews: we don't have reviews of packages yet, but we're going to talk about that at debian-community.org. There are also people working on Amazon-style recommendations based on popcon, along the lines of "people with a popcon profile like yours also installed...", so we will have that data present. That's definitely something this could tie into.
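The single-new-flag complaint above suggests an obvious fix: give each subscribed feed its own memory of what it has shown. This is a toy sketch of that idea, with invented class and method names, not a feature of aptitude:

```python
# Sketch of a per-feed "new" marker: each subscribed feed remembers which
# packages it has already shown, so clearing "new" for the Haskell feed
# leaves every other feed's unread list intact.
class PackageFeed:
    def __init__(self, name, query):
        self.name = name
        self.query = query        # predicate selecting this feed's packages
        self.seen = set()         # per-feed replacement for the global new flag

    def unread(self, available):
        """Packages matching this feed's query not yet shown to the user."""
        return sorted(p for p in available if self.query(p) and p not in self.seen)

    def mark_read(self, packages):
        self.seen.update(packages)

# Hypothetical feed: all Haskell library packages, by naming convention.
haskell = PackageFeed("haskell", lambda p: p.startswith("libghc-"))
```

In a real tool the predicate would be a debtags query rather than a name prefix, and the seen set would persist on disk between runs.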
Yeah, and beyond the personal level I suppose you could do project ones: say you're working on a program with eight other people, you can share your tags, and that way each person doesn't have to build the set up from scratch. They essentially get just your flavour of it. It's quite a powerful idea, potentially, being able to say "use this package set" as an easy button: just get me that set. Yeah, I mean, that's what we currently do for our commercial customers: a sort of recipe that builds the exact structure they want and generates the ISO image to give them an installer, and also an apt source if they just want single packages. That works even for the smallest groups, like half a dozen people, and they get their own ISO. If anybody wants to find out more about that, just ask us, basically. It's a mixture: there was a paper on CDDs, Custom Debian Distributions, and then we took some pieces from PDK, which is Progeny's development kit. Okay, I'd just comment that you could use live-helper for that sort of stuff too. Right, we can talk about that; but yeah, so it's a bit of CDD, a bit of PDK, and we actually contributed some improvements to PDK, so it will resolve dependencies for you automatically, enabling you to build a working, installable set. Do you think your own package set would work that way? Yeah, I mean, we have some packages and a scheme for picking them.
So I mean there's the 64studio package, which is what you get when you download the main ISO from our site, but there are also variants of that, which are most or all of those packages plus a few others that are specifically needed for a particular device. They may be making a device without a proper keyboard, like a handheld, in which case they need a virtual on-screen keyboard that normal PC users wouldn't need, so that would be in their variant, all managed through PDK. So where do you keep the differences between your version of a normal package and the Debian version? They're all on the server, on the 64 Studio side. Yeah, but what if you've patched the source to change something? Well, most of the patches go back upstream, and some of the ones that don't go upstream go straight into the Debian packages. So you're not keeping changes which are never going to be in Debian because they're just special for 64 Studio? That would probably be handled by a script rather than by patching the package itself. Because that's the bit I've been trying to work out: how to keep and how to manage your little bit of difference. We've been asked about that; I'll share a bit with some people. Yeah, we use a mixture of trac and other tools. Actually this is a CDD discussion, which we're supposed to be having some other day, I'm sure. Is it? I'm curious, but it's an interesting topic. If you have versions of packages that are slightly different for a customer, you basically need to mark them as slightly different. Yeah, I mean for the most part we're just fixing bugs. So you fix the bugs for your customers? Yeah, that's the simple case, but there's a lot of other stuff people want changed which isn't necessarily generally applicable. Yeah, somehow. What I do is I have a special repository for a customer, and its priority is higher than Debian's. Yeah, but then if you have an installed package, you don't know whether it came from Debian or it's from you.
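The "repository with priority higher than Debian" trick mentioned here is what apt's preferences mechanism provides. A sketch, with the origin label invented for illustration:

```
# /etc/apt/preferences — prefer packages from the vendor's own
# repository (origin "64studio" here is illustrative) over Debian's,
# even when Debian carries a higher version:
Package: *
Pin: release o=64studio
Pin-Priority: 1001
```

Pin priorities above 1000 even allow apt to downgrade to the pinned repository's version, which is why a vendor override typically uses a value like 1001 rather than, say, 900.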
Well, definitely. For the user it matters to know where a package came from. Would it be an idea to have some sort of tag on the packages, like: this is the variant built for customer foo? Yeah, we do something like that. I don't think this is really available, but you could do it; we actually put it in the version. So for example if we fix a bug today, we then backport it to Etch, because that's what we're based on, and then we append something like ~bpo to whatever the version is, the way backports does, so you can see just from looking at the package version whether it's strictly Debian or a backport. Do you change that in the changelog as well, or just in the version? It's just in our backport version, so when you look at the Debian version you get the plain Debian version; you're not going to see the bpo part, because that was only in the backport's version. True, but they change the maintainer too. It's more than that, there's Origin. Yeah, but Origin is in the archive, not inside the package. Okay, I thought there was some other field. Can we go on? Yeah, I think I'm going to do one of mine then. The problem right now is that we have some packages, like kernel modules, that have to be built against each kernel that we have. So if we have five kernel images, then each kernel module package needs to be built five times, and if another kernel image is added, we have to rebuild the module source against the new list of kernel images. This cannot be done in an automated way, because Debian policy disallows modifying the control file at build time: it would modify the list of binary packages to build, and if you have twelve autobuilders uploading their version of the package with different lists of binary packages, the archive software wouldn't know which packages should actually exist in the archive.
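Going back to the version-suffix trick for a moment: it works because a tilde sorts before everything, including the empty string, in Debian version comparison, so the rebuilt package always sorts below the eventual plain Debian revision. For example (1.2-3~bpo40+1 is the backports-style numbering; dpkg exits successfully when the relation holds):

```
$ dpkg --compare-versions '1.2-3~bpo40+1' lt '1.2-3' && echo lower
lower
```

So the backport or vendor rebuild is automatically replaced as soon as the real Debian version reaches the user's configured suite.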
The idea would be to have some sort of template, like in C++, where you can say: for each package that matches linux-image-something, we build another package called foo-modules-whatever-we-just-matched. Or: for each architecture that exists in the archive, we build a package gcc-whatever-we-just-matched, so for example we could produce cross compilers like that. This is a huge coordination effort. Are you suggesting an automated way of doing it? The idea would be to push all this to the autobuilders. So if you upload a new kernel image, the autobuilder would know, by some black magic that still has to be specified, that we see a new linux-image-something but we don't have foo-modules-something, so we need to build that; it gets the source for foo-modules and calls binary-foo-modules-something, so a per-package build target again. Sounds reasonable to me. I know it's easy to compile your module for the kernel you're running, but it's still a bit of a burden. I'm even thinking about having some sort of personal autobuilder: if you're running a custom kernel on your machine, you tell your personal autobuilder where to find your kernel, and it builds all the module packages for your kernel so you can install them right away. This could also be done to build the kernel packages themselves, so for each linux-config-something we build linux-image-something, and in order to build a kernel you just make a small package that contains the config file, and the autobuilder will pick it up and build the kernel for it. Is it really necessary to extend debian/control, instead of just making some meta script that will do that for you and generate the new control file for the new package? We are not supposed to modify the control file at build time. Yeah, but you can generate a new source package with another control file. Yeah, but that still needs manual processing.
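None of this syntax exists in dpkg; purely as a sketch of what such a templated stanza might look like in a debian/control-style file, with every field name invented:

```
# Hypothetical template: one binary package per kernel image in the archive.
Package-Template: foo-modules-${match.1}
For-Each-Package: linux-image-(.*)
Architecture: any
Depends: linux-image-${match.1}
Description: foo kernel modules for kernel ${match.1}
```

The same regular expression could then be applied in reverse by the archive software: when a linux-image-X package disappears, every foo-modules-X generated from this template is known to be removable too, which is exactly the bookkeeping discussed next.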
If you have a regular expression that you can use to generate the list of binary packages that should exist from the list of binary packages that already exist, then this could be implemented in the archive maintenance software, so that if a kernel package is removed, okay, we just remove the kernel package, and applying all the regular expressions we also know that we have to remove all the foo-modules packages for that kernel version as well. Yeah, but we already know that. In a sense, yes, but we don't know it in an automatic way that the build system and the archive system can use; it's personal knowledge. Basically, presently it works like this: you update the control file by hand and upload the new version, and in the changes file of the new version you have the list of binary packages that are supposed to be built, and if a package is dropped from the list, the archive maintenance software removes the output of that package. You cannot do this if you rebuild the control file at build time, because then you would be dropping the packages for the other architectures, for example; this would confuse the archive maintenance software. That's the problem. I think we also can't have substitutions in the package name field, since the key for the substitutions is the package name field at the moment; that's another meta level of it. How hard is this to do? It requires changes in the archive software, it requires changes in the .changes file format, it requires changes in the .dsc format, it requires changes in dpkg-gencontrol, and it probably requires changes in CDBS and debhelper. That level of hard, but it can be done. Any other wacky ideas? I have a slightly less wacky idea. Yeah, go ahead. It's actually hardly wacky at all, because it only affects individual packages and it doesn't break anything that people already have, but it's very useful for users, with no transition needed. If you have used the OpenVPN package, or ifplugd, and maybe others, you
will have noticed that the init script supports not only the usual start and stop like any other init script, but also start and stop of a single instance of the daemon. For example you can run /etc/init.d/ifplugd start eth0 and stop eth0, and it just affects this one instance of the daemon, per interface; OpenVPN runs one process per VPN. So besides the regular start and stop for everything, I can control single instances, and I think this is very useful. I think there are several daemons with use cases for having several separate instances, for example to listen on different IPs, or to have a test server for your web server: you have a regular web server running on port 80 and a test server running on another port, and you still want to control both using the regular init scripts, because they get started at boot and stopped at shutdown and that kind of stuff. So what I propose is very simple: for every package where it makes sense, which is most daemon packages, besides the usual start and stop and reload and restart, have an optional instance parameter, and then, depending on the kind of package, have an easy way to create new instances. For example for OpenVPN you just have a new config file, /etc/openvpn/nameofinstance.conf, and it detects these; for Apache you would probably need something different, like an /etc/apache2/instances directory, with one configuration directory per instance in there. So if a daemon is configured by a file, you just have a second copy of the file with a different name in a well-defined place; if the daemon is configured by a directory, you just have a copy of the directory in a well-defined place. For something like Apache we could suggest using default as the name for the instance that we already have, so adding new ones doesn't break anything, and maybe, if you don't want all instances started by default at boot, you have a list of instances to start in /etc/default/packagename. Yeah, I think having a formal way to do this would be quite an
advantage, so you can have a tool that you can use to manage it. What we are missing to do that is an interface you can query somewhere to know the list of the different instances; there is no way to do that yet. I think there is a runlevel editor somewhere where you can, quite comfortably, have a matrix of services against runlevels or something like that; basically this would be an extension of that, where you can open a folder below each service with its instances. It might be difficult to get the instances into the runlevels then, but having a formal way for that would be nice. This mechanism would still bind all instances to the same runlevel and the same sequence number, because it's still the same init script, which is usually okay. If you are trying to run an Apache, and then an OpenVPN tunnel over this Apache, and another Apache that binds to the interface the OpenVPN defines, then you are doing something exotic anyway. So this is very limited in scope; it just means that if you have a daemon where it makes sense to have several instances, then it's about implementing that in the init script. It's very useful for the user, and the semantics as used by OpenVPN are, I think, very safe. I think it would be interesting to push that into policy as optional. I think this could be pushed into policy, since currently no packages would violate it: if they don't support it, they just ignore the extra argument. So it could even be general policy. The way we solve this problem in Gentoo, and I kind of like this way, is that we have a general init script; openvpn makes very good use of this, because you can have several VPNs up and down. When the user wants it, they create another init script: they create all the config files and then symlink to another init script which you supply, openvpn.instance, and then you can start those scripts independently, you can add them to runlevels, whatever. So it's just a
few lines of black magic where you parse the name, and from that you find the concrete config directory. The interesting question is that when creating a new instance, you have to know how to create one: maybe by creating a new config file in a specific directory or something. If there was a formal way to create a new instance, for example if the package shipped a program or a debconf script that lets you configure the new instance, creates a config file for you and then starts the instance, that would be a nice thing to have. But usually you actually just want a config file, and the format changes per daemon, so there would have to be a special little bit of magic for each. I'd rather keep it simple; I think that's unnecessary. If the user wants to run several instances, he knows what config files to copy. I don't know, it actually took me quite some time to find out about this option in OpenVPN; I didn't know it was there. I'm sure it's documented in /usr/share/doc somewhere if you need it. Automating the creation of a new instance is a separate thing, I think; that would be different tools. One example would be to have a list of virtual machines, and then tools to create the new config files. Yeah. So all you have to do is convince the maintainers of everything that's got an init script in it, everything that you could reasonably want to run more than one instance of. While we're talking about init scripts, I can make an announcement about another idea that's already being implemented, at DebCamp. A lot of init scripts are very much the same: all those init scripts don't have any special requirements, they just copy the init script from somewhere else and change the binary, and it's not very neat and it's not very useful; it's hard to change things in init scripts and it's hard to fix bugs in init scripts. So what some of us are hacking on right now is called metainit,
which is a small file that just describes how you start your daemon; in the simplest case it's just the command line of your daemon. You put it in your source package, you run the debhelper tool for metainit, it installs this metainit file, and then scripts create the real init script files from the metainit file, optionally for other init systems as well if they're installed. So you're immediately supporting all supported init systems, and you don't have to worry about writing the init script yourself anymore, and as a sysadmin you can install custom daemons by just writing one line in one file in /etc/metainit. This is more than a wacky idea; it's actually something in progress, and you can find it on wiki.debian.org/metainit. If you have a package with an init script and a daemon, I'm very interested in finding more guinea pigs, test cases, so just talk to me about metainit. M-e-t-a-i-n-i-t, yeah. That sounds good. And there's some code that's already in a package; I've changed my first package myself to use it. It probably still has some bugs, but I'd like to see more support for it; it can actually become more or less the standard way of doing init scripts for anything that doesn't have special requirements. It does not support instances yet; it has to be thought through how this can be generalized, but it might be the place to support instances for daemons in one place later. Does this automatically help with the init.d dependency problem? No. Okay. Alright, that's it. Okay, anyone else? Okay, go ahead, we're gonna have Rocky do this one. Okay, what do you think?
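The per-instance convention from the earlier part of this discussion can be sketched as a small POSIX sh init-script skeleton. Everything here is illustrative: the daemon name, the config layout (/etc/mydaemon/<instance>.conf), and the echo standing in for real start-stop-daemon calls are all invented for the example.

```shell
#!/bin/sh
# Sketch of an instance-aware init script, modeled on the
# ifplugd/OpenVPN behaviour discussed above:
#   /etc/init.d/mydaemon {start|stop} [instance]
# One config file per instance: $CONFDIR/<instance>.conf
CONFDIR=${CONFDIR:-/etc/mydaemon}

# List instances by scanning for *.conf files in the config directory.
instances() {
    for f in "$CONFDIR"/*.conf; do
        [ -e "$f" ] && basename "$f" .conf
    done
}

# Placeholder for the real work (a real script would run start-stop-daemon).
do_action() {
    echo "$1 instance $2 (config $CONFDIR/$2.conf)"
}

main() {
    case $1 in
        start|stop)
            if [ -n "$2" ]; then
                do_action "$1" "$2"          # act on the one named instance
            else
                for i in $(instances); do    # no instance given: act on all
                    do_action "$1" "$i"
                done
            fi
            ;;
        *)
            echo "Usage: $0 {start|stop} [instance]" >&2
            return 1
            ;;
    esac
}

if [ $# -gt 0 ]; then
    main "$@"
fi
```

With /etc/mydaemon/eth0.conf and /etc/mydaemon/tun0.conf present, a plain `start` acts on both instances, while `stop eth0` touches only one, which matches the ifplugd-style semantics in the discussion; existing single-instance usage keeps working because the instance argument is optional.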
Okay, I'll make it quick. In the Bits from the DPL this morning there was a mention of needing to get more contributors to things like artwork and graphics and so on, to make the distribution sexier. This has come up in 64 Studio, because we have quite a few users who would like to get more involved in contributing things like that, not only artwork in the graphics sense but also things like sounds. I had a request yesterday for some new music for a game in Debian that actually has copyrighted music, which we've suddenly become aware of, so that music needs to be replaced, otherwise the package is going to have to go. So the problem is that at the moment, as far as I'm aware, there is no class of membership available to non-programming contributors to the project. You have Debian Developers and everyone else, which is like non-members, basically. So my wacky idea is a new class of membership: Debian something, not Debian Developer but Debian whatever it might be. Translators need the same kind of support; translators are another example. It's about giving people a title? Well, at the moment, say you contribute a man page: you can't upload it yourself, because you can't access the machines, you don't even have commit access to put it into SVN. What you are describing, and I want you to finish to get the full picture of what you had in mind, is very like what Holger proposed with debian-community.org; the problem he described was the same as yours, and the idea was to create a portal to get people more into Debian when they are mere users or small contributors. And we're not necessarily talking about small contributors: in the case of a translator you might have somebody who is going to upload a great deal of material, an artist who can do a great deal of work, not necessarily a minor
contributor, but there's no mechanism for getting those people in as project members, because they have to prove they are programmers first, and maybe they are like 90% artist and only 10% programmer. I think it's been that way for several years and there's been a void there. What's missing is not necessarily package upload rights or machine access; it might just be being able to upload a translation or a piece of artwork, because at the moment they have to go through a Debian developer who is already busy with their own stuff, and that's why it's such a huge problem. We found with translations that the fundamental pain is that a translation requires a new upload. Holger's idea was to have some kind of karma level, so each contribution would earn points and people could have a high score; that was proposed back in February, but he was still discussing with people how to get the whole thing running, so we should bring this idea up again during the Debian Community BOF. I think it's not just about karma points; that's a good idea, but I think the sort of peer approval system that exists for Debian developers is a really good one, even if it does take a long time to get in. Should we in fact just have a different set of tests? Are we going to give people the same level of access through some kind of process where they basically just show that they're not complete strangers, or do we need a different mechanism? It seems to me we have it already with team-maintained packages: if you have team-maintained packages in a Subversion repository, you can have your artwork person commit a new icon directly to that, and then a Debian developer can do the upload. Basically the same goes for translators as well. So this is basically a technical problem: if there's a mechanism for people to upload their contributions, which are not in the form of a
Debian package, because that is not what they produce, some automated mechanism can prepare it in a form that a Debian archive can understand, and then you basically need a Debian developer to formally sign off on it, because it's an upload, but you can still have an approval system. What you propose is basically an uploaders category as opposed to full members; that's a social thing. Basically what's required is the ability to upload something which amounts to a recipe for a small change. And there are other skills where people need to have an involvement; for example, some skills we've traditionally handled very badly, or not at all, and those people will never make an upload, but they might have exactly the skills to fill the gap, so you need to be able to bring those people into the organization where they can be peer approved. Yeah, and that's a good example of something which is more fundamentally different. The big problem with the uploaders idea is that basically it gives people the right to upload but not the right to vote, so they are second-class citizens. We need it the other way around: we need people that are able to vote but not necessarily able to upload. They didn't pass our Tasks and Skills, so they don't upload directly, but instead they go through some sub-project; still, they have the right to vote as formal members, because this is what they do in their life. You can sign up as a guest and get access to whatever, but it's hard for outside people to even see that that's the route, so you don't have to come up with some new rules; there have been a lot of people who came in that way and then got mentors as well. If I have someone who is spending maybe three days a week doing hard work for Debian, I would like that person to have the right to vote like all of us. Yeah, there's a recognition aspect to this; it's no
good to just say that if somebody is spending enough time on their bit they get membership; they want recognition too. But going back to making the desktop sexy, I'm a little bit concerned that we're going to go back to some bad old days where we start skinning OpenOffice and putting a whole lot of artwork on our desktops. For me the desktop is dead, and the only thing that people should be using is a web browser, and here we are encouraging this sort of sexy image with flashy chrome and themes and icons. Well, you could at least put a nice background behind the web pages. Yeah, and there will always be another page. And sexy is not just about looks, it's also how it works for normal users. So yeah, maybe it's best to call it getting non-developer members, or something. What's your idea? There would be a lot of account holders in the wider community; I think there's a lot of creativity to share. I think the perception from the outside is that Debian doesn't actually welcome non-developers. That's what people think, and it's not actually true, but as you say, it's not obvious where you go, because 100% of Debian members are developers, or 99%, whatever; it's perceived as a developer organization. Okay, you do one, I do one. Okay. Yeah, I have another evil one; you go first, because yours are hard. If we have an ABI change in the compiler, then we have to change all the library packages, or rather recompile them and re-upload them, and make sure that no one is using the old and the new versions in the same process image. At the moment this all has to be done by hand, because you cannot change the soname, since it's not a change in the library ABI per se, it's a change in the compiler ABI, and you have to change the package name somehow to force
people to upgrade things together, and changing the package name automatically is blocked by the policy of not changing the control file at build time. In the current process, all the library package maintainers have to change their packages and re-upload them, and coordinate these uploads with the autobuilder maintainers, so you basically have to have a big flag day where you upgrade all the autobuilders to the new ABI, and from then on people only accept libraries with the new name, and before that you cannot upload libraries with the new name. So on one particular day you basically upload all the libraries at once, and that's a lot of coordination work that could be automated. The idea is to split the Depends field into Depends and Link-Depends. The Link-Depends field works exactly like the normal Depends field, except that the ABI field, which is also new, would have to match on both sides. It defines, for example: if a package's ABI depends on the C++ ABI, then it would list the C++ ABI, and at build time that would be expanded to the version of the ABI actually used, like 1002 for the current one. Basically, dpkg-gencontrol just calls a plugin for each ABI that is listed, which finds out which ABI was used in this package build and inserts that into the control file, and the package manager on the user's system enforces that only packages with matching ABIs may Link-Depend on each other. We have to distinguish between normal dependencies that don't care about the ABI, like a tool that I'm going to call, and things that I'm going to link against, where I do care about the ABI; that's why I want to split these up. Wouldn't this work perfectly for your problem with kernels and modules too? Because a module can depend on a kernel ABI, and you'd see that your modules do not support that kernel ABI, so you wouldn't need to do that whole template thing. The problem is that we have multiple kernel ABIs in the archive at the same time, and
it will still install the right module packages, and there will be a way to see automatically which ones you need to rebuild and which ones you need to add. Basically, here we don't want to introduce new package names for each new ABI, whereas with the kernel packages we do, because there we want multiple options available to the user; here we basically don't want multiple options, because we just want to make a transition from one to the other, as smooth as possible. For example, I think the autobuilder could learn that when a library package is uninstallable because the ABI of a dependency changed, it automatically gets a binary rebuild. So if all goes well, an ABI transition would be: upload a new compiler, and the autobuilders solve all the rest. What do we have to change? Just dpkg, right? dpkg, and apt, and the archive. Yeah, the archive software would have to handle that. I think for unstable we don't care, because the autobuilders would sort it out; unstable is then broken during a transition, but maybe it could be done in a staging area or something like that. So the difference from the templates is whether we want multiple packages at the same time; the problem there is that you can't have multiple versions of the same package installed. Yeah, and these are deliberately conflicting packages, since they have the same package name. Okay, any other? Do you have an agenda for that? Someone maybe tried to make one some years ago. It seems to me the thing to do with this is just implement it as an extension; we don't have to use it, do we, it's something you can just choose to use. Or do we have to force it on everybody on a flag day? We should just make the facility available and then start pestering people to use it. Once it exists we might try it; actually, we could use it on the next transition. Yeah, I hope that's not in ten years. Okay, my idea is that, from a Gentoo perspective, the
dependencies are way too static. There are some ways to specialize based on architecture and stuff like that, or for the kernel, for libc, but this is still not a very generic solution. On the other hand, there are quite some packages that could make very good use of deciding automatically what dependencies you want. For example there are quite some naming conventions, like name-dev, name-doc, name-dbg and so on, and there is, as far as I am aware, no way to say to the package management system: hey, I am a developer, I will be compiling things, so for everything that I want installed, I also want the headers and the static libraries. It should be absolutely trivial to add something like the Suggests field with a flag, like: these are doc dependencies. So if I had some flags, like the USE flags that Gentoo has, in the apt config or wherever, then the package manager could act on those suggestions. In the same way, you have several packages compiled GNOME-flavoured, KDE-flavoured, console-flavoured, so you could say: I want to install this package, and if you find it GNOME-flavoured, then better for me, install the GNOME flavour, so I don't have to choose by hand. I have a nasty idea, sorry. When you are saying I always want all the -dev packages installed, the matching -devs for the library packages, so that you have the matching header files, for example: that relationship is in debtags; you could use debtags to know which -dev package belongs to which library. I have another nasty idea. The idea would be to allow richer dependency specifications: not only AND and OR, and we don't even have proper AND with parentheses at the moment, but an implication operator, a conditional. So for example you could say: if you have GNOME whatever, then install the full GNOME stuff, or if you have KDE, then install the full KDE stuff. So this would be prioritizing not only by order but by the flags you have set up as well. I'm actually thinking about real
Boolean algebra here, but that's just the beginning. No, that actually works quite well; we have that in Gentoo dependencies, it's absolutely nice. So you're saying make the dependencies smarter? Yes, much smarter, because there is absolutely no point in imposing one decision on every package and user: if he's using GNOME, he'll probably want as much as he can in GNOME. But so far the OR operator is very static in how it behaves. This is useful if we want packages targeted for the GNOME desktop or the KDE desktop, basically, or if you want documentation, or if you want support for Perl or for Python; some packages are split up like this. There are quite some things we could pick up here, but what we've got is entirely static. It's not entirely static, is it? You can choose which alternative you take. It has a really interesting use case: you could have a metapackage, for example, that pulls in support for some language, and GNOME has a dependency that says: if this metapackage is installed, we also pull in our language files for that language. Yeah, mostly conditional dependencies, basically. The idea would be to have a full Boolean algebra on dependencies, like you can say: depends, A implies B. If you do that, you're in a quite huge kind of world, because what happens if you install the package later? You know the package will get installed or removed; what happens then? I think with languages you actually want something different; languages are orthogonal to packages. You could use this metapackage trick to sort of hack that, but I think that's a bad solution, actually. I don't think it's a good idea to make that Boolean decision based on an installed package name, because sometimes you want to have a package installed just for testing, for playing with it, but you want to have your system configured in a completely different way. So it would be better to have
it based on some fields that the user selects, and not based on what packages you have installed. So yeah, the language thing is a bad example. I see no kind of support whatsoever in Debian for this today; and what happens if I enable the option only after installing the package, which at install time decided not to pull in the dependency because of the conditional? Yeah, that's where the Boolean algebra would solve it, I think, because if you have depends A implies B, and you install A, then you pull in B. But what if I install B later, what happens to A then? Nothing. But you could have some mechanism; I think you mean something else: what if you have a flag set, and then you install A, which implies B, and then you change that flag, so you don't want the feature any more; what happens to B, what happens to A in that case? That's the problem; the package manager would have to sort it out. Basically, depends A implies B would allow another package C to say that A depends on B if this package C is installed; that's basically the same thing. So we are talking about context, basically, aren't we? I mean, the GNOME package could express a relationship between the language-support metapackage and the language support for GNOME, which would be an implication relationship: either the metapackage is not installed, or both are installed, or just my package is installed. I mean, the problem we sometimes have with users is shifting context. They install the distribution, for like a week they're happy, and then after a week they discover that they need a -dev package because they need to build something. Quite a common one is they want to build a proprietary graphics module, and they need kernel headers, and so their context switches from someone who doesn't need development packages to someone who does need
development packages, so they don't know how to manage that, because they weren't thinking about development packages when they installed. Yeah, if you had a metapackage like install-all-development-packages, and all the library packages conditionally depended on their development packages, then you'd install the metapackage and get all the development packages; that would work. Yeah, but you don't have to make them only global flags; you can have overrides, like: for this package, turn on this and that flag. For build dependencies we have that: apt-get build-dep installs just the build dependencies for that package, and it will bring in even the tools. Yeah, but that only works for build dependencies, not for other development packages. Yeah, you're right. There are currently all kinds of implicit assumptions that the dependency tree is fairly static, and if we make it smarter like this, some of those assumptions break, and I'm not quite sure which, as you said; it does seem like a good idea, but there are things to think about, exactly what happens and whether it really solves things. Right now, okay, the question is: should we think all of this over over lunch? And we can do lunch.