Why "Hell Freezing Over"? Because that's about when dpkg 2.0 would ever get released, if anyone's followed the recent developments. So anyway, good morning. I'm going to be giving you a little talk on dpkg and the possible, probable features coming up in later releases. Obviously the title refers to dpkg 2.0. The way I'm going to do this is: if you have any questions at any point during the talk, just stick your hand up. This lovely gentleman will run to you as fast as he can with the microphone, probably fall over, and then I'll repeat the question.

So, just going to quickly check: everybody here does actually know what dpkg is, right? No? Okay. Yes, in a sort of not-quite way. Good. Probably you're all aware of the history of dpkg as well. It was originally a Perl script written by a couple of people including Ian Murdock. And a certain individual known as Ian Jackson, who is probably falling asleep just up there as we speak, decided to write the current C implementation we've all been using for about the last ten years. It's been maintained by a variety of different individuals over the years, from people like Guy Maor and Ben Collins to Wichert Akkerman, who's also sitting there. So I have the original author and previous maintainers in the room, and I'm going to be feeling intimidated all through this talk. So yes, it's the package manager we all use in Debian. It's the thing that actually sticks the stuff on your system. You should all know this.

What the talk is going to be is looking at where we can take dpkg. The existing code's worked for ten years. It's got quirks. It's got bugs. It's got problems. But by and large the actual design has been pretty good. The code isn't, but the design is. So the first important slide is actually what isn't going to change, because it does actually work. The most obvious thing we're not going to change is the package states: things like half-installed, installed, configured, all the various little states packages go through. These fundamentally work. We don't really want to go messing around and try to RPMize it or anything like that. The fundamental basics of dpkg, the way it unpacks packages, the way it then configures them and so on, are very elegant. They've served us well for ten years and I think they'll serve us for another ten or twenty. The design's very good. On the other hand, the process of moving between a couple of these states could be cleaned up a little, which is pretty much the core of this slide.

One thing I'm actually going to do is a quick little demo here, because I want to see how many people knew you could do this with dpkg, as an explanation of why the package states are really good and useful. Hopefully I've got a terminal here somewhere. That's not the terminal on the screen. Woohoo! I need to be root; that's the one disadvantage of a live demo. How many people here have ever encountered the situation where you've got a .deb on your disk and you want to install it, and you don't really know where the dependencies are and stuff like that? Everyone's encountered this situation? How many people know how to deal with it? Tom, how would you do it? That's one way. Or you could just tell dpkg to unpack it, and that will do half of the job.
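Roughly, the demo amounts to this. The package name and version here are just stand-ins for whatever .deb you have on disk:

    # Unpack the .deb without configuring it; the dependencies don't
    # need to be installed yet:
    dpkg --unpack hello_2.1.1-4_i386.deb

    # The package is now in the "install ok unpacked" state:
    dpkg --status hello | grep ^Status

    # Later, once the dependencies have been sorted out:
    apt-get -f install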
You know, this one comes up in a later slide: the fact that it takes half a week to read its own database. No control-Gs and control-Ds and control-Cs like some of Ian's other software, sorry. No, it's just taking ages. It's taking a very great many ages. So there we go: it will unpack the package and just leave it unpacked. If you actually look at the status of the package at the moment, it's been left in "install ok unpacked": we want it installed, nothing's gone wrong, and it's currently unpacked. And as Tom said, you can then just ask apt to fix things up, and apt will take slightly less time and will want to install a bunch of new packages for it.

So it's a kind of cute way of working. The fact that you can unpack packages without their dependencies on the system, because you only need the dependencies to configure them, is actually what buys Debian its great ability to move between different distributions, pick between them, and so on. You don't mind waiting for the database to be read? There you go. So you can remove it. If you want to downgrade it to the previous version, you can do that. The states an individual package goes through are elegant, they're simple, and they buy us a lot. One of the general complaints tends to be that dpkg isn't atomic: if you try to do an upgrade and you don't have the dependencies, it "leaves you in an invalid state". In fact it's a perfectly valid state for a package to be in. It's just unpacked, or half-installed. It's just not configured yet. It lets you go and fix the dependencies and then just configure the package, without having to find the .deb you installed in the first place. The layer of user-level atomicity on top of that is something apt and dselect and other front-ends can provide, rather than dpkg doing it internally.

Other things that aren't going to change: the existing inter-package relationships, things like Depends and Recommends and Suggests and everything else. They work. I think we've got a much richer metadata format than things like RPM, we've had a good run with it, and in general it's fundamentally not that broken. It's not broken at all, actually, I don't think. There are a few tweaks and improvements we can make. A quite often requested one is versioned Provides, and there are requests for things like architecture-specific dependencies and conditional dependencies that we can extend. Some of these have problems of syntax, basically just making them not suck to use, but in general they work. We might add a couple of new relationships. One common one that we've talked about quite a lot is Breaks. Breaks was actually one of Ian's proposals from long, long ago that's just never been implemented. Basically, Breaks is to Conflicts as Depends is to Pre-Depends. At the moment a lot of people make their package Conflict with a buggy version of another package, to avoid the two being installed at the same time. But what that actually means is that you have to rip the other package out first, remove it completely, and only install the fixed version later during the upgrade, which is quite evil. A Breaks would allow the two to be unpacked, but not configured, at the same time, so an upgrade could be done a lot more smoothly.
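To make that concrete, here's a sketch of how a Breaks field might look in a control file. The field is still a proposal at this point, and the package names and versions are invented for illustration:

    Package: libfoo1
    Version: 1.2-1
    Breaks: bar (<< 0.9)

Where today's "Conflicts: bar (<< 0.9)" forces bar to be removed before libfoo1 can even be unpacked, this Breaks line would only stop those versions of bar being configured while libfoo1 is installed. As for the conditional dependencies mentioned a moment ago, they can be approximated today with dummy packages: have dummy-a and dummy-b each declare one of the dependency sets, and write "Depends: dummy-a | dummy-b" in the real package. That trick comes up again in the questions at the end.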
There's also Enhances, which will be a new field. Some people who follow policy might not think so, but in practice dpkg's support for Enhances is nonexistent, and actually I think dpkg could support it quite usefully. If a package declares an Enhances on another package, there's no particular reason why that package's postinst couldn't be run with an enhance flag: if you say you enhance another package, you get the opportunity to act on it. This can be quite useful for things like spell checkers, where you have a generic language-pack, spell-checking dictionary thing which Enhances OpenOffice and Mozilla, and its postinst catches the enhance argument and goes away and does whatever it needs to hook itself into said packages. (There's a sketch of what that postinst might look like below.) But again, the existing relationships won't actually need to change.

Sorry? Hello? It is indeed. Yes. It's not new at all. No. No. Where are we going? Yeah, it's backwards. Suggests? Sorry. The question was: is Enhances semantically different from Suggests? It's Suggests in the other direction. If A Suggests B, then in theory B Enhances A, but Enhances also gives B and A the opportunity to know about it when each one is installed. That's the intended theory.

So yeah, the maintainer scripts and their arguments again. I think we've got a pretty good, rich way of dealing with package installation: preinst, postinst, prerm, postrm. Those four basic scripts cover things, and we've managed to stick arguments into them to give us pretty much all the flexibility we need. It could be a little better documented, and there could be some examples to hand around, but there's nothing fundamentally wrong with it. I think stuffing it all into one big script, or splitting it out into a hundred, would actually make things a lot worse than they are now. So again, I don't think we really need to change anything like that. debian/rules won't change. It's an executable makefile, and I think that's served us pretty well. I don't think sticking everything into one file, or into multiple files, is going to make anything easier. Makefiles are reasonably flexible. dpkg doesn't particularly care that it's a makefile, to be honest; it can be any executable, but policy mandates a makefile. Again, I don't see any reason to go from that to a different source format, like a spec file or anything like that.

The only thing, and there's a little note at the bottom there, is possible changes to the source package format, which is where the tarballs and patches actually go. Anyone who's been following debian-devel-announce will probably know about the Wig and Pen format that Brendan O'Dea and Elmo and myself came up with in a pub one night, which is how most good things actually get developed. It's not really changing much; it's just improving little bits around the edges where they need it. Basically, allowing the diff to become a tarball of patches is the fundamental change. Which, again: the existing stuff was pretty well designed. The one thing I did say at the top is that we're keeping the existing package states, so a package goes from unpacked to installed; if you break it, it can drop to half-installed, and it can be de-configured and stuff like that. It generally works. It's given us the great flexibility to build things like apt and dselect and stuff, which I think nobody in the room would deny.
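Here's that Enhances postinst sketch. It's a minimal sketch only: the "enhanced" argument and the helper command it calls are invented for illustration, since dpkg passes nothing of the sort today:

    #!/bin/sh
    # postinst for a hypothetical myspell-en package whose control
    # file declares: Enhances: openoffice.org, mozilla
    set -e
    case "$1" in
        configure)
            # normal configuration work goes here
            ;;
        enhanced)
            # hypothetical: dpkg tells us the package we Enhance ($2)
            # has just been installed, so hook our dictionary into it
            hook-dictionary-into "$2"   # made-up helper command
            ;;
    esac
    exit 0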
We have probably the most unparalleled and unrivalled ability to upgrade and downgrade, and you can switch between Debian distributions within this model, and it generally seems to work. The one thing I did want to look at is how you unpack and how you remove packages. That kind of stuff, at the moment, is a little hairy. Sometimes it doesn't quite work right, and anyone who goes through the bug tracking system will see long lists of bugs where things like diverts do exactly the wrong thing in most circumstances. There's probably a way to clean this all up. Sorry, I'm just getting hallucinations now. Must be the Red Bull. The basic idea is two basic clumps of change, which I'm going to go through in two slides. If I'm on the right screen. Next slide please. Thank you. That's actually quite evil, isn't it? That really didn't render at that resolution.

So, the two basic ways of expressing the changes I want to make to the unpacking and removing of packages. The first can be classified as filters. The basic idea is to split the unpack and remove process into two. The first step will be taking the data about the file you want to install or remove and passing it through various filters to transform it. A dpkg divert would be a filter that takes the data about the file to install and changes the path appropriately. The stat override would again be a filter, altering the ownership and permissions of the file and its metadata before passing it on to the thing that actually installs or removes it. Filters can be built into dpkg (the default ones will be things like diverts and stat overrides), they can be provided by the system administrator as a script on disk or something like that, and they can also be shipped in a package, so a package that knows it wants to do mass filtering of its own file list could ship a filter. It would probably be a bit evil if people did that too much, but there are a couple of good reasons for it. It means we can do mass diverts for things like FHS transitions and stuff. Because the filter is a script, you can say "anything beginning with this path gets moved here", and the second-stage install and remove process sees the result. So you could move the whole of /usr/lib to /usr/lib/i486-linux as a mass divert with this system. And then there's no particular reason you'd need to change the packages at all: you could probably do multiarch just by making sure multiarch systems diverted the whole of /usr/lib and the whole of /usr/include into the right directories. So that could be one way of doing it. It's cute. It's evil, but it's cute.

Relocate? You can do that kind of thing with this. He was making a joke about RPM's relocate. Was it? What were you saying there? Do you mean relocatable packages? Yes. So the question was: with this, could you do relocatable packages? The answer is yes. The package itself could put a filter in to do whatever it needs to relocate things to different hierarchies and stuff. But the idea here is to split the unpack and remove processes into two stages. The first is: given the package and its list of files, modify that list and the data about those files to fit the system it's going to be installed on.
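As a rough sketch of how small such a filter could be. The interface here is invented (nothing says filters will really read file records on stdin), but the idea is a script that rewrites paths before the second stage ever sees them:

    #!/bin/sh
    # Hypothetical first-stage filter: reads one record per file to be
    # installed on stdin, writes possibly-rewritten records on stdout.
    # Here: divert the whole of /usr/lib into a per-architecture
    # directory, as in the multiarch example.
    sed -e 's|^/usr/lib/|/usr/lib/i486-linux/|'

Because the second stage only ever sees the rewritten paths, diverts, stat overrides and whole-tree relocations all collapse into the same mechanism.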
And then there's a second stage, coming up in a minute; we'll move to that slide. So the second stage, after we've filtered everything, is this thing called classes. Now, just a random question, and I'm not expecting any hands: has anyone actually used, in earnest, the Solaris package management tools? A couple. Anyone used them enough to know about classes in Solaris's package manager? Oh good, I can claim to have invented it then. The term is just stolen directly from Solaris, and we'll almost certainly not end up using these two terms; they're just convenient for this talk.

So the idea is: after you've passed the package's file data through the filters to turn it into what you actually want to put on the disk, the job of the class is to put it on the disk, or take it off again. Every file is run through its class. Don't confuse this with object orientation; it's just a really bad name, and I apologise. Basically, every file in the package will be given a class, defaulting to the default class. An example of another class is conffiles, which exist today. Another is documentation: you can create a documentation class. You can create translation classes. You can create whatever classes you like. What the class does is select which piece of code drops the file onto the disk and which piece of code takes it off again. So by putting things into the conffile class, the three-way conffile merge could be run instead. It gives us an elegant hook; internally it's really just two function pointers. But, just as with filters, some classes can be built into dpkg, some can be provided by the system administrator as a shell script or whatever and configured, and some can be provided in the package. So a package could invent a class for some of its files and provide a script to lift those files out of the package, put them on the disk in the right place, and do whatever trickery is needed to achieve that.

Classes run after the filters, so they only see the filtered metadata: everything works through diverts, everything works through stat overrides. There wouldn't need to be any special magic to make diverted conffiles work, because the conffile handling takes place after the filtering; it would just work. And so on. Again, they're customisable, and the class assigned to each file falls back to the default. One of the nice things you get is two special types of class you can invent at this point. You can have a do-nothing class, which can be installed by the system administrator: a class script that literally does nothing. These can be used on embedded systems to just not install the documentation. You have a class script for the docs class which does nothing, and you end up with the package on the disk without its documentation.

Yes? I'll repeat: the question was whether you could use globs to assign classes to whole hierarchies and things like that. There are various possibilities there. The details of the implementation will hopefully be as elegant and as simple to use as possible, but the underlying thing is that every file to be installed or removed goes through this process.
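A minimal sketch of such a class script, with the method names and calling convention invented for this writeup:

    #!/bin/sh
    # Hypothetical class script for a "docs" class on an embedded
    # system; dpkg would call it once per file in the class.
    case "$1" in
        install)
            # $2 = destination path; deliberately do nothing, so the
            # documentation never lands on the disk
            ;;
        remove)
            # still clean up, in case the file was installed earlier
            rm -f "$2"
            ;;
    esac
    exit 0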
You get optional content, so you could have the do-nothing class for /usr/share in multiarch packages, excluding the entirety of one hierarchy you don't want. Or you can have remove-only classes. A remove-only class would be a class that only implements the remove method, so you could register log files in your package, not install them, but have a remove process that takes them off the disk again when the package is purged.

Yep? "But what happens if, after I have installed a package, I change the filter and class configuration and then I remove it? Is it only half removed, or is this information stored anywhere?" That's a very good question, actually. So the question, in case anyone didn't hear it, was what happens if you change your filter and class configuration. The truth is I haven't quite decided yet. Obviously, changing the filter and class information affects what the package looks like on disk. There's a possibility of being able to refresh a package and re-run it through the machinery, but you might need the original .deb for that, or at least parts of it, or you just wait for the next upgrade. It's no different from changing a divert now. I'm tending towards the view that it should just work; it should do the right thing in all circumstances. And it's actually fairly easy to do that: you de-configure, unpack and configure again, and it should just work over an entire system. There's no particular reason, with this model, that it shouldn't be easy. So you could change an embedded system into a full system just by restoring the documentation in place, which again is kind of useful.

Both of these things require that we can store a bit more information about the package itself, and particularly about the files in it. So we end up with a whole section on metadata, which is a horrible little term; everything becomes a metadata problem at some point or other in its life. To be able to do things like know what filter ran, know what happened, know where we put a file and whether we put it on the disk at all, we need to store data about it. So we keep the record of what we do now in the status file, but we also keep a record of every file that's supposed to be, or could be, in each package. We keep its file name, and what its file name was before the divert filter and the stat override filter touched it. We record the permissions it's supposed to have, and its permissions before the stat override. We take its MD5sums, or SHA-1, or something that won't be broken... broken this week, anyway. So we can store all that information.

Obviously we probably need something a little faster than the RFC 822 format we've been using until now. To be entirely fair, we needed something a little faster about three or four years ago, when Debian reached a certain size. But there's absolutely no reason why we can't keep the status database as text. It's very useful; you can open it in a text editor and stuff. So possibly an index for it, or something along those lines. Anyone who's heard Tridge give a recent talk might have heard him mention that I've been looking at LDB and the TDB back ends he's written for Samba, which are extraordinarily fast and efficient ways of indexing this sort of thing while still having an RFC 822 text file alongside it, so you can just edit the text and the binary data gets put back into place. There's some quite cute stuff he's done with that.
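To make the per-file metadata concrete, a record might look something like this. The field names are invented for illustration, not a proposed format:

    Path: /usr/lib/i486-linux/libfoo.so.1
    Original-Path: /usr/lib/libfoo.so.1
    Class: default
    Mode: 0644
    Original-Mode: 0644
    Owner: root:root
    MD5sum: d41d8cd98f00b204e9800998ecf8427e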
But the basic idea, obviously, is to extend the existing status information to include information about the files. We'll also allow later additions to the database, so you'll be able to register a file from the postinst. With the addition of classes, and being able to say "this is a log file, it's not shipped by the package, but it should be removed", this becomes a little less necessary, but there are still reasons why you might want to say "this file has just appeared, I've just created it, register this information about it". So you can stick additional entries in the database later, and of course you can just ship the information in the package.

The other nice thing you can probably do is create a signature over the metadata of the package: take all the files in the package, their checksums and so on, and sign that. You can compare it to the same signature that comes in the .deb, and you get a very cheap tripwire-like process. You take the files on the system and the digests on the system, build them all together, sign them, and compare that to the same data in the .deb, and the signatures should match, which some people seem to think is important. It's also kind of useful, probably more useful, when you've got a hard drive on the way out; it's a very good way to see where the hard drive has dropped holes. But it does create the first possible use for signed .debs that I know of which isn't already satisfied by signed apt repositories.

So there are the three basic changes to the unpack and remove processes. Rather than having a single thousand-line function which does the whole job of unpacking a package, we split it up: pick a filter, run the files through the filters; pick the class, run them through the class. It becomes quite simple and elegant, and we can actually get repeatability out of it, which will hopefully get us a bit more reliability between upgrades as well. There are other things I'm just going to mention, but are there any questions about that first?
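You can approximate the digest idea today with the md5sums lists most packages already ship; the signed per-package metadata described above is what would make it trustworthy rather than merely convenient. For a package like hello:

    # Paths in .md5sums files are relative to the root directory.
    cd /
    md5sum -c /var/lib/dpkg/info/hello.md5sums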
"The thing about signatures on those is that they're actually useful, since you can verify even if you don't have the complete repository available. But it's also not just about trust, it's about mistrust: you can say, I trust your archive, but not everything that this specific maintainer does, so I can just invalidate his key." Right, there are other uses for them. Signed .debs have use cases above and beyond pure verification of a mirror; they're actually about trust of the individual builder. You can choose not to trust a particular build, and so on. Yes.

So that gives you the basic rundown of the unpack and remove stuff. On to the next slide. We started off with what's not going to change, so we kind of go back to what's not going to change; a big part of what I want to do is not change a lot of things. These are things that people have suggested, or probably shouted, which we're probably not going to do. And when I say "probably not", I mean we're not going to see them. This slide's especially for Bdale: anyone who follows Formula One will know HP gave the Williams team a hell of a lot of money and a lot of computers, and they came up with this really bad nose for the car that just didn't work. So there we go: things we won't see in dpkg 2.0, and I'll explain why.

A lot of people seem to think that a source RPM-style single file is the way forward for source debs: one file containing all the tarballs, all the patches, the instructions to build it, the metadata and everything. Sounds initially fine. Easy to download: you can point someone at it and they just have to fetch one file. Upload, now, is where it gets interesting. Let's take a small package that a couple of people maintain: OpenOffice. Every single time you change a patch, you'd have to upload all of the original source again. We don't have to do that at the moment; you just upload a new diff. So that hurts the poor ADSL line, or hopefully multiple T1s, that the OpenOffice maintainers presumably need. X is another example: every little change to the patches would need the original source uploaded again. And there's another person who tends to sob when you mention this. It's Elmo. He cries about his poor little mirrors, which would be getting terabytes of data as all the new upstream sources came in, a thousand times a day. Yeah, we don't want single-file source packages. Okay, they're easy to download, but quite frankly apt-get source is easy. In fact, apt-get source is easier, because you don't have to ferret around and learn the structure of the pool. Developers like us probably do know the structure of the pool, but most people shouldn't need to; give them a little tool, and apt-get source is fine.

The other thing is that source RPMs, if anyone's ever used them, are a bitch to unpack. You need RPM on the system, and then you generally need to be root, because it wants to stick them in /usr/src, and you need to be root to build the things. They're actually not that great. dpkg-source at the moment is pretty usable: you can run it as a normal user, and you don't need any special tools. You can just grab the files off the pool yourself and untar them and patch them by hand if you want. It's not too bad.
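That is, the whole fetch-and-unpack dance is just this, with the package name and version as stand-ins:

    # Fetch the .dsc, tarball and diff from the pool and unpack the
    # source tree, all as a normal user:
    apt-get source hello

    # Or, with files you've grabbed off the pool yourself:
    dpkg-source -x hello_2.1.1-4.dsc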
Yeah, Ben? You don't build RPMs as root? How do you do it? Do you have to change anything about the configuration to do it? He said he builds RPMs not as root, but he's had to change the configuration of things to let him do it. So it's possible, but it's a bit difficult.

So, two-file source packages. The current Debian source package, as I'm sure you know, is three files, or two: a tarball, a diff and a .dsc, where the diff is optional in the case of native packages. I'm guessing what you'd want for two files is the .dsc somehow encoded into the tarball or the patches, and that loses you the metadata attached alongside. The thing is, you don't need the .dsc if you don't want to use dpkg-source. It's entirely possible to download just the tar and the diff and do it yourself; the .dsc is an optional component. So to the extent that the efficiency argument assumes people are downloading the pieces themselves: they can unpack them themselves too, because they're built with everyday tools. I tend to think having the separate .dsc is nice. It's like having a README on an FTP mirror of your upstream source, or release notes alongside it; it actually turns out to be quite pleasant. One thing I would like to put in, though, is the last changelog entry in the .dsc, because that's one thing I keep doing: going to source packages and trying to read the .dsc to find out what actually changed. That probably belongs in the .changes file, but we don't stick those on the source mirror, so there's something there that could be improved. But in general I think the current source package format is about right, other than the tweaks we've already started with Wig and Pen, which support a few of the more evil things everyone tries to do.

I think everyone's read that announcement, the description of Wig and Pen? Or stick a hand up if you'd actually like me to explain the differences. Okay, a couple of people. Wichert is obviously keeping up to date with the package. The basic idea of Wig and Pen: it's a pub in Canberra, and if you're ever there, do go. It's also the name of the source package format. The idea is that rather than a tar and a diff, you allow a tar and a tar. A lot of people now are shunning the idea of shipping a single monolithic diff with all the changes, and are instead moving towards shipping a collection of patches in the debian directory. The second tar just lets them tar up the debian directory with those patches, ship that as a tarball, and have dpkg-source automatically apply those patches when it's unpacked. So it takes away a lot of the hard lifting you have to do at the moment. It also allows multiple original tarballs, so a glibc would become a set of upstream tarballs and a debian-directory tarball. It's a slight improvement. It doesn't break the existing tools, because it's backwards compatible; the current source package format is still technically correct under it. But it allows the people who do need extended things to avoid some evilness, like tearing apart upstream tarballs, or creating .orig tarballs that actually contain changes because binary files can't go in a diff. It lets us get rid of some of the evilness we do there.
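So a Wig and Pen style glibc upload might look roughly like this; the file names are purely illustrative:

    glibc_2.3.5-1.dsc
    glibc_2.3.5.orig.tar.gz                # first upstream tarball
    glibc-linuxthreads_2.3.5.orig.tar.gz   # an additional upstream tarball
    glibc_2.3.5-1.debian.tar.gz            # the debian/ directory plus
                                           # patches, applied on unpack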
So again, as I touched on earlier, I don't think we want a spec file to replace the debian directory. Personally I think having the separate files of the debian directory isn't that complicated: you can read exactly the file you want. If you imagine having the changelog and the control file mixed in together with the rules and everything else, it would just get a bit unwieldy. Having the separate files works, it's worked for us for ages, the tools parse them, and I don't really think it's that ugly. Does anyone else here think the debian directory is particularly ugly? "I think the spec file has one initial advantage: it's just one file to copy around between packages and edit. But I've been in a situation where I worked for a company that was doing packaging for different operating systems, and after a few weeks even the most cynical thought the Debian way was the best, simply because it was the easiest to use. You just edited the one file you needed; you didn't have to go page, page, page, page through a spec file to find the line you were looking for." And added to that, spec files then stick the postinst and the prerm and everything else in there too.

File-based dependencies is another one, but I'm going to do the fourth one first: postinst.d. Anyone who's followed the long discussions that occurred as a result of it? The SELinux people, that's right. Basically, the idea was to have directories, postinst.d, preinst.d, blah blah blah, and all the scripts in there would get run every time any package's postinst was run, every time any package's preinst was run, and so on. That's evil: you could quite easily clock the PID counter several times during a single run. But there's also the question of why you'd want this. If you actually want to know when another package is installed, in this model you would Enhance it. And then there's also this idea of triggers. A trigger is something like ldconfig: ldconfig could provide a script, and a package would say "I need ldconfig run at some point", and when the run is finished, dpkg activates all the triggers and runs the script once. So you run it just once and get rid of the legwork. We can do this for scrollkeeper: things that aren't needed until the end of the process and only need to be run once for a hundred packages can move to this trigger mechanism. (There's a sketch of the idea below.) Anyone who's read the BTS will probably know I was speaking to the SELinux people about this quite a lot, and it turned out, of course, that what the SELinux guys really wanted was for us to stick the SELinux context-setting code into the same place where dpkg applies permissions to files, which is exactly what we did, and Manoj did that work, and then did it again when he tested it. The question was, would that be a filter? A filter could certainly add the context information; a class could certainly apply it. That was very much part of the design.
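Here's that trigger sketch. None of this existed in dpkg at the time, so every name below is illustrative of the shape the mechanism might take rather than a real interface:

    # ldconfig's package declares, in some control file, an interest
    # in a trigger named "ldconfig".
    #
    # A library package's postinst, instead of running ldconfig
    # itself, just activates the trigger:
    dpkg-trigger ldconfig

    # At the end of the whole run, dpkg fires each activated trigger
    # exactly once, e.g. by calling the providing package's postinst
    # with something like: postinst triggered ldconfig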
I have ten minutes, so very quickly on to file-based dependencies and why we're not getting those: they suck. Anyone who's used RPM will know that in RPM you can depend on the existence of /bin/bash. Now try doing this when you've got apt involved. This is one reason why yum and things like it take forever to run: you have to find out which packages contain /bin/bash, blah blah blah. I think we can find much more elegant ways of doing this. One of them is this idea of feature dependencies, which is a really unfinished brain dump of a horrible thing that we're trying to find the elegance in. One of the ways I design things, which scares the living crap out of Elmo, is that I chase down the full-on crack route first: as you might guess from these slides, I find out what I really don't want to do first, and find out what I do want to do somewhere in the middle of that. So we've been talking about these feature dependencies, and they're not right yet, but I think there's a much more elegant solution to the problems file-based dependencies are trying to solve. I think the same about the soname dependencies RPM also supports. I think we can come up with something a hell of a lot better. I don't know what that is yet, but I'm willing to entertain ideas if anyone has any, even if it's just a policy on package names, to be honest, which is what we've done so far, though we've had some problems with that in the past.

So how do we get there? Wonderful. 1.13 is the current version of dpkg. Just a quick thing, because I always get asked why it's 1.13: 1.10 was the previous CVS branch where we did our work, and 1.11 was the name given to CVS HEAD, which didn't build, let alone work. So when I started branching off and doing my own stuff, I created a new development branch which went into experimental. I didn't want to call it 1.11, because CVS HEAD claimed that, and 1.12 sounded too stable, so it got called 1.13, the next free number in line, basically. So we keep uploading that to unstable, we keep applying patches to it, it carries on much as it has, and then in a few months' time we'll freeze it and go through a freeze cycle and a translation cycle, which will probably produce a 1.14. Most of the point of this is to test how long it takes to release dpkg again. We may do another cycle, so we may get a 1.16 in etch, or a 1.18, or a 1.20, or we may only get a 1.14 in etch. But the idea is to make sure dpkg is frozen and stable at the point we need to stabilise the release, because we want to get etch out the door. And then, in parallel, the 2.0 work, the work outlined on that slide, will take place, probably initially in a private archive, then maybe experimental, and it will all be done in parallel. The new code will only arrive once it's been heavily tested, and 2.0 isn't even scheduled until etch+1, so we're looking at a fair way down the line: basically a three-year development process.

There are some of the problems people have with the code. The fact that the main function of dpkg is in one file, is 1,058 lines long, and is just one never-ending function. The C-based exceptions melt people's minds, except Ian's; Ian likes them. Somebody broke the never-free malloc a while ago, which did some brain damage to dpkg's memory handling. And there have been requests for a library codebase as well, the idea of having a libdpkg that apt could link against. And that brings us to the end. Hello, let's go back. Yes, oh right, sorry. The question was: given those four bullet points of people's complaints, what am I going to do about them? The process_archive() stuff is taken care of by the filters and classes: by moving the work into smaller functions, the filters and the classes break that function up into small logical chunks that make sense. The C exception stuff I don't really know about. I kind of like it; people who hack on dpkg get used to it after a while and actually start missing it in other software. But then again, maybe there's a way of cleaning up the code a little so that it doesn't jump back to a completely different point in the program the way dpkg sometimes does at the moment.
The never-free malloc being broken means we need to fix the memory management of dpkg anyway. It needs to work on smaller systems; it needs to work on larger systems. There are probably fixes to be done there, probably restoring some of the original intent of the code and fixing it properly in other places. And a library-fied codebase is definitely something; I write all my software as a library with a shell around it, so dpkg is going to gradually move towards that, possibly. Yes? Sorry, the question was: does that mean I plan on rewriting dpkg-source and, by inference, dpkg-dev, not to be a twisted mass of Perl? Yeah, most likely it will get rewritten at some point. These plans are just about dpkg itself; next year at DebConf 6, or at DebConf 7, I might start talking about that. This is a three-year plan, and this is the first six months to a year of it, so we'll look at those kinds of changes a little later down the line. But the fundamental things here are the idea of these filters and these classes, and improving the way we unpack and remove packages.

So that pretty much does it. We'll just move on to about two minutes of questions... five minutes of questions; that clock is running faster than mine. Anyone who wants to get involved: the website address is www.dpkg.org, which is the wiki, which basically anyone can brain-dump stuff and ideas onto, and it's pretty much open to anybody who wants to log into it. The mailing list, debian-dpkg@lists.debian.org, is pretty much a discussion list now, mostly translations and ideas and stuff like that, so anyone who wants to, subscribe to that and join in. Bugs go to a different mailing list; join that if you want to help fix bugs, and please, fix bugs. And lastly, that's the URL where these slides will probably be made available, if you can actually read it. That's it, really. Any questions? Everyone's asleep. "Do you know that the DNS for dpkg.org is broken?"
Yes. The question was, do I know the DNS for dpkg.org is broken? The answer is yes; my ADSL line is down. I would love a secondary, Matt. So the question was how I feel about expanding the metadata in the control file and putting more stuff in there. Basically, it depends what it is. You said you wanted to talk anyway, so we'll talk about what kind of things you want; I'm feeling ambivalent about it. I don't want to add huge amounts of crap in there that isn't needed; we need to know what it is. Package description translations? Sorry, can you speak up a little, I can't hear you at this point. Package description translations. So this is the translation of the metadata in control. Yeah, I'd love to do it. Can someone come up with a solution that doesn't involve underscores everywhere, and we'll work it out. The main problem there is shipping the translations, to be honest. I'm open to suggestions on that; I don't know much about translation. Now, Christian Perrier, who is sitting around there somewhere, has single-handedly, as a one-man task force, handled the translation of dpkg for about the last six to nine months, coordinating all the translators, and he's working on solutions for man pages too. He's an excellent guy, and I'm sure he's going to have some ideas. Talk to him and he'll talk to me; in my language, probably, is a good idea there.

"Hi, I have a few questions; sorry if I forget some of them. One of them: are we still keeping the dpkg database as a text file?" There will still be a text file available. It may not be the primary thing that dpkg reads every time, but certainly if the text file is edited, the binary data would be regenerated, and likewise if the binary data changes. The text file is not going to go away. "Second one: are we going to be able to erase classes? For example, define a documentation class that you keep as long as your system has space, and then, when you run out of space, erase that class?" Yeah, I think you could do that. That's one thing you could do by rewriting the class that way. "And the other question is about conditional dependencies: could we add new dependency fields, like Depends1, Depends2?" At that point, why don't you just use dummy packages? Conditional dependencies are something I will happily entertain when we can come up with a syntax for them, but you can do them today with dummy packages. You create a dummy1 and a dummy2 package, depend on "dummy1 | dummy2", and the dummies declare the dependency sets. So you can do it today; it's just slightly ugly, but I haven't seen a less ugly way, and additional dependency fields like that seem to me just as ugly. "And what is the other kind of conditional dependency you were talking about? When you mentioned conditional dependencies, you said there were two." The other thing people want is architecture-specific dependencies, so they can say "Depends: libc6, on i386 only", which people currently use a control.in to generate. There's probably a way of doing that; the codebase already lets you do it, it's just commented out. Okay, thanks.

Any other questions? So everybody's either stunned into silence, fearing for their lives, or fallen asleep. That's good. Okay then, that's it. Anyone who wants to come chat to me, I'm going to be hiding in a corner somewhere, so definitely come down and speak to me, and I'm sure I'll entertain whatever ideas you have. Thank you.