Welcome to the second installment of the Wacky Ideas BoF; the first one was already quite productive. Should I begin with one idea, or would one of you like to start? Okay, I'll start; I think this would be a good idea. The idea is that for cross-building, we have the problem that build dependencies don't currently say whether they are for the host or for the target machine. If you want to cross-compile, you need all the libraries built for the target machine and all the tools built for the host machine, which the current Build-Depends field doesn't distinguish. The solution would be quite obvious: another control file field. I'd like to call it Build-Depends-Tools, and of course the -Indep variant of that as well. That would just allow us to distinguish the two, and the default is to install stuff in the normal Build-Depends for the target, because most packages mostly list libraries. If you're doing a native build, you just install all of the build dependencies; if you're cross-compiling, you install the tools on the host machine and all the rest for the target machine.

So with Build-Depends-Tools, that's actually for the native dependencies then, yes? Yes. What about swapping that around and doing Build-Cross-Depends? You leave the existing Build-Depends line intact and just list the ones that are needed for cross-compiling. Could also be done. I thought it would be better this way, because most packages only list a single tool, debhelper, and a lot of their other dependencies are actually libraries, so the new field would be smaller. Yeah, I know what you mean. You're thinking of those packages that just list, say, GTK, and then all the runtime dependencies get filled in by dpkg-shlibdeps. Yeah.
The idea is that you could, for one thing, make the transition easier, because you could just go through the entire archive, look at those packages that have debhelper or CDBS listed, automatically generate a patch that moves those to Build-Depends-Tools, and that would catch 80% of the archive. If you kept those as they are and only specified Build-Cross-Depends, you wouldn't need to change any of the packages; you would only need to change the ones where problems crop up. Yeah, could also be done. The important thing is not to have to touch most of the archive. Yeah, that's the point. And I've already got a little bit of support for Build-Cross-Depends in the tools; you can look at that. It currently looks for it in debian/xcontrol as a new control file, but we could easily put that into debian/control. Yeah, with the XS- prefix; that's been done before, it's probably the way to do it. It's a question of whether it's easy to have it there.

How many things do we have to change to implement something like this? Is it just dpkg and devscripts, or is there a whole lot of stuff? I think the only things that really care are the autobuilders and dpkg-buildpackage. If we leave Build-Depends as it is and add a new field, that's fewer changes, because only the scripts that are interested in the new field need to look for it, and those are the cross-compilation tools. Okay, makes sense.

Anyone against that, or anything we've forgotten? It will duplicate certain parts, but I can't see how you can avoid that, because we can't tell from the Build-Depends line which entries are actually needed by the cross-compiler doing the build. What if you try to push this as a change in the way packages are done properly? Yes, if you push the change into policy, it would actually mandate that everybody states which of their build dependencies are which.
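For concreteness, here is a hedged sketch of the two shapes being discussed. The field names Build-Depends-Tools and Build-Cross-Depends are the proposals on the table, not anything dpkg actually supports, and the package names are made-up examples:

```
# Proposal 1: split host tools out into a new field (hypothetical)
Build-Depends: libgtk2.0-dev, zlib1g-dev           # built for the target
Build-Depends-Tools: debhelper (>= 5), bison       # run on the host

# Proposal 2: leave Build-Depends intact, add only the cross list (hypothetical)
Build-Depends: debhelper (>= 5), bison, libgtk2.0-dev, zlib1g-dev
XS-Build-Cross-Depends: libgtk2.0-dev, zlib1g-dev  # built for the target
```

Under proposal 1 every affected package changes; under proposal 2 only packages that actually get cross-built need the extra line.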
That's not easy to actually determine. Even for my own packages, where I'm upstream, I couldn't be absolutely sure that I could specify the cross dependencies correctly, and that's for packages I wrote myself, let alone the ones I maintain that were written by other people upstream. And it's not obvious how you would identify which of the build dependencies are actually needed by the cross-compiler at build time and which are also needed at install time; the install-time ones are fairly well picked up by the existing scripts. How would a maintainer who thinks they would never cross-compile something know what to put in there?

The only way I've found so far to determine that with any reliability is to actually run a cross build in a chroot and see where the package fails, because at the moment there aren't any checks. We could quite easily give people a tool to do a cross build using qemu or something; it's actually hard to set up a massive cross-builder. It's basically analogous to telling people to use pbuilder to check that you're not relying on things on your own system as opposed to actually specified dependencies; we could provide essentially the same technology for cross builds. Right, but unless maintainers do that, there's no real way for them to find out. I have a script that could do that. I could separate it out and make it a separate package on its own, so it's easier to install without having to install all the rest. qemu already exists; maybe it'll do the right thing.

If it's a -dev package, then it's probably a cross build dependency. Probably, but not necessarily, and we don't want to introduce entries that aren't actually -dev packages. Yeah, exactly; so that alone won't work. Yeah, but that's build-essential territory anyway. So the problem with that input is that we're throwing information away, essentially.
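The "-dev probably means a cross build dependency" heuristic just mentioned can be sketched in a few lines of shell. This is a hedged illustration, not a real tool: the function name is made up, the package names are examples, and as the discussion notes the heuristic is wrong in both directions for some packages:

```shell
# Hypothetical sketch of the "-dev implies target dependency" heuristic.
# Real classification needs more care: some host tools ship as -dev packages,
# and some cross dependencies don't.
classify() {
  case "$1" in
    *-dev) echo target ;;  # library/header packages: built for the target arch
    *)     echo host   ;;  # assume anything else is a build tool for the host
  esac
}

classify zlib1g-dev   # prints: target
classify debhelper    # prints: host
```

A pass like this over the archive could generate first-draft patches, with the chroot cross-build catching the cases it gets wrong.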
I mean build-essential in many ways. Yeah, it would be nice to get rid of build-essential, so that you can actually specify everything. I'm not sure the gain from not specifying glibc is worth anything, and it makes for big pain if you try to build against a different libc. Doesn't shlibdeps take care of that for us? Well, not if you've written a dependency on glibc by hand. That's about runtime, though, not about the build dependencies; that's the problem. Also, you don't want 10,000 packages specifying libc and then have to do a libc migration. Well, suppose one day you've got a libc7. Most source packages would work just fine with libc7. Yes. But if the control file says Build-Depends: libc6, then they won't. Is the package name today good enough to just mean any old glibc? Actually, that's a problem, because glibc uses libc6 as a package name and not libc-dev. Yeah, but most library packages these days have unversioned -dev packages. Yeah, and that comes down to whether you want to allow the two versions to be installed at the same time for a transition, whether you want to have libc6-dev and libc7-dev installable on the same system, or whether you want a flag day. You can see his brain exploding. Okay. Anything else about this? A related issue is install-time dependencies. Yeah.

Can I make a suggestion that would go slightly towards this? If there were a standard way of naming the host compiler, or, well, I suppose you really mean a standard way of naming the build-machine compiler when you're building cross. We have one; that already exists. Yeah, yeah. What?
No, no, what I mean is a way for the package source to say: this piece I want to compile with the native compiler, because I'm going to run it during the build, rather than ship it in the package. Yeah, you can set that by passing... autoconf already supplies both. Autoconf gives you both gcc and the target gcc. Does it? Yeah. Well, the key thing is whether packages use it. Your build system should say which compiler it meant, and people don't use that very well because they haven't really ever thought about it. The facility is there, most of the time. What's the default if you just say gcc? It assumes that build equals host: it determines the build machine and then defaults host to that. If you write $(CC) in your makefile, which compiler do you get? The native one. You get the native one, so that's precisely wrong. Yeah, yeah. Which means that a package maintainer who is aware of this problem, and would like to make it easy for the cross people by letting them set an environment variable that points at the right compiler... Like CC. Like CC, that's it. Yeah. Basically, all packages built with debhelper and using its helpers already do this right, and I think CDBS also does this right. Yeah.

You can query that with dpkg-architecture; there's a variable, DEB_HOST_GNU_TYPE. It's standard stuff that you pass to configure: you pass --host equal to the dpkg-architecture query of DEB_HOST_GNU_TYPE, and then you do the same for the build type and pass that as --build. And from then on it's all automatic; all you need to do is pass those.

So if I was using autoconf, how does this work? You use the host type as a prefix in front of the compiler name. So I have to say that prefix before gcc everywhere in my makefile? No, you can redefine CC. So I have to say CC equals this, once.
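The idiom being described is the standard debian/rules fragment; a minimal sketch (simplified, with only the relevant lines):

```make
# Query the GNU triplets once; dpkg-architecture knows both sides.
DEB_BUILD_GNU_TYPE ?= $(shell dpkg-architecture -qDEB_BUILD_GNU_TYPE)
DEB_HOST_GNU_TYPE  ?= $(shell dpkg-architecture -qDEB_HOST_GNU_TYPE)

config.status: configure
	./configure --build=$(DEB_BUILD_GNU_TYPE) \
	            --host=$(DEB_HOST_GNU_TYPE) \
	            --prefix=/usr
```

When building natively the two triplets are equal, so configure behaves exactly as before; when cross-building, autoconf picks the prefixed cross tools automatically.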
If build does not equal host... yeah, if build does not equal host. You're kind of not thinking about this from the right point of view. Think of it from the point of view of a lazy maintainer who doesn't want to get involved with all this crazy crap, right? What they want is to say: fine, there are three places in my package where I need to use the native compiler rather than the target compiler, and I don't want to change all the references to the compiler in my rules file; I just want one change. Use CDBS, right? And I don't want to use some crazy framework either. There is a page which defines the things you should change. I think the answer is: set CC once at the top, and if you need a build-machine compiler, which only very few packages actually do, define another variable for it.

So it's silly that the default is the native compiler, when the default ought to be the cross compiler. Yeah. That's a historic default, and it's not going to change, because of the cost of changing it. But the native compiler is right in the majority of cases, because if I'm doing a native build, then the native compiler and the target compiler are the same, and it doesn't matter which I choose, because it's the same compiler. Yeah. And autoconf does this right: if --build and --host are the same, which is the default, it uses CC or gcc, so in that case it detects that it's not cross-compiling and uses the native compiler as CC. These are arguments to configure. Yeah. In the non-autoconf case you have to handle that yourself; that is basically what autoconf is for. Yeah. But it would be much nicer if this could be set somewhere so you didn't have to thread all of this cruft through your rules file and then through autoconf. But it does this automatically.
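The "one change at the top" the lazy maintainer wants is the classic conditional in debian/rules; a minimal sketch:

```make
DEB_BUILD_GNU_TYPE ?= $(shell dpkg-architecture -qDEB_BUILD_GNU_TYPE)
DEB_HOST_GNU_TYPE  ?= $(shell dpkg-architecture -qDEB_HOST_GNU_TYPE)

# Pick the cross compiler only when actually cross-building,
# so native builds are completely unaffected.
ifneq ($(DEB_BUILD_GNU_TYPE),$(DEB_HOST_GNU_TYPE))
CC = $(DEB_HOST_GNU_TYPE)-gcc
else
CC = gcc
endif
```

A package that genuinely needs to run some generated code during the build would additionally define something like a BUILD_CC pointing at the native compiler; that name is just a convention, not anything standardized.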
If you watch the actual... this is a bar conversation. If you watch the actual build process, you'll find that if you use ordinary dpkg-buildpackage, it will specify build and host for you using the environment variables. You only need to specify build and host yourself if they're actually going to differ from what you're currently running on. Why don't we talk about this in the bar? Okay. Yeah.

Anything else about... build and host, host and target. It's confusing. Yeah, exactly; you think you know what you're talking about and still get confused. Yeah, the big problem is... host is the machine you're building on, target is the machine you're building for? No: GNU uses build as the machine you're building on, host as the machine the thing will run on, and target as the machine you're building code for. So they have three. That's because if you're building a toolchain, there are three. Yes, yes. The GNU terminology is basically unintuitive unless everything you build is a toolchain. Any normal mortal who hasn't spent ages thinking about it is pretty much guaranteed to get it wrong: they just look at the names and go, that must be host. No, actually, it's not. That's bad. Okay.

So we need to decide whether we are going to try to make this a real change, put these fields in control files properly, or just add an XS- field, which is kind of easy. I think, since policy only follows existing practice, we'd have to introduce this as an XS- field in the control file, and if we get enough mindshare then we can standardize it. The XS- facility was provided for exactly this, so that seems like the ideal way, rather than having a new file. Whether you put it in the control file as XS-whatever depends on where you want the information to end up. Our scripts will be reading it, so it doesn't really matter where it is, as long as tools that don't handle it ignore it.
If we want to do that, then we have to add two lines, one for the host and one for the target, because the normal Build-Depends would have to be split. But if we duplicate some of it anyway in a Build-Cross-Depends line, or XS-Build-Cross-Depends, you can leave the original Build-Depends as it is. The fewer changes we make to dpkg and to control files, the easier it is to get the next update through without patches, really. Yeah, that's true. And it is kind of nice to have the two things separate: one which is everything, the other which is the difference. I suppose. So, yeah, let's try that.

Yeah, though the prospect of replacing the Build-Depends line would be better than XS- stuff; otherwise tools have to make a regex match against X followed by a series of letters, and also against the thing without the X prefix, and then you have to change it all again later. Yeah, that could be done. The problem is that if certain packages need the same thing both for the host and for the target, then you need a way to specify the two. That is true. Then you have to write it in three places: in the original Build-Depends, in the one, and in the other. Yes, yeah. Does that actually happen in practice? I'm not sure it does, in the first 85 packages we've dealt with. It could, for example, happen for something like zlib, which could be needed by some build tool because it needs to compress something during the build, and also be needed on the target to uncompress what was compressed during the build. So I think that's actually quite likely to happen. I'm inclined to add two lines. Yeah. If you just add two lines which replicate all of the information and just replace Build-Depends for your cross builds, that will mean that if the Build-Depends is somehow wrong, you'll have an extra place to fix it.
That's one of the ideas behind the debian/xcontrol file: that we have another control file where we can introduce these features and have a pre-processing step that builds the normal control file from it, removing the extra fields. Yeah, I'm not sure about that. It does seem excessive: we've already got mechanisms for generating debian/control, and now the xcontrol file, and there's also a mechanism for adding extra fields named XS-something. That's two different mechanisms. Yeah. I think we should scrap one of them. There you go. Details. Okay. We've already discussed this, so I think we should do so. Okay.

This only affects a small number of packages, is that true? Yeah. It would be nice to find a way where you did not have to change 18,000 packages. Well, we change nearly everything anyway, because when we cross-build we change the library dependencies; nearly everything has to change a bit. Even so, we can automate the generation, because the information is there; but writing it down separately, rather than deriving it, has been needed for essentially every package we've done. Yeah. Once we're through with the mass of them, we can also tell people to go fix their packages. But at the moment we have 85 packages. Yeah. Okay.

Next idea. Do we have any more ideas? Of course. I have a really bad idea. Yeah. So, at the moment we lack a revision control system. Well, it's not quite true that we lack a revision control system: everybody is using the archive as a revision control system, and the archive is a stonkingly bad revision control system. It's only slightly better than the alternatives that people have decided they prefer. For example, there are the forty million different patch systems; there's the format which nothing knows how to generate.
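The xcontrol pre-processing step described above, generating a plain debian/control by dropping the extended fields, can be sketched as a small filter. Everything here is illustrative: the field names are the hypothetical ones from this discussion, and the real xcontrol tooling may work differently:

```shell
# Sketch: strip hypothetical cross-only fields (and their continuation
# lines) when generating debian/control from debian/xcontrol.
gen_control() {
  awk '
    /^(Build-Cross-Depends|Build-Depends-Tools):/ { skip = 1; next }
    /^[[:space:]]/ && skip { next }   # continuation line of a dropped field
    { skip = 0; print }
  ' "$1"
}
```

Tools that don't know about the extra fields then never see them, which is the same property the XS- prefix gives for free.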
There are people uploading source packages containing revision control metadata, with the kinds of RCSs that keep their repo in the working tree, like Bazaar, and I think git does this as well; sometimes you can find source packages in the archive that have entire revision histories inside them. The whole thing has become a complete clusterfuck. So my proposal, and this is going to be completely controversial, is that the Debian project should choose a revision control system and replace the source archive with it. This was proposed in Ubuntu as a source format. It's a lot easier in Ubuntu, because Mark Shuttleworth can just decree that everyone will use his favourite revision control system, and he doesn't need to have an enormous three-year-long flame war. That's why I'm proposing this in the Wacky Ideas BoF and not on some mailing list. So obviously this is not going to fly, but I would like to seed everybody's minds.

You should realize that if you describe our development workflow to almost anybody else, they will boggle, right? You go to a BSD developer and say, so what do you do to make some change, and they will look at you like you're completely barking and say: didn't that go out sometime in the late 70s? And the answer is yes. Well, okay, people were still doing it in the 80s, but by that point it was already looking long in the tooth, and people had realized that revision control systems were the way forward. Also, it makes dealing with derived distributions of all kinds really painful.
So we've got the embedded people, who are doing effectively a derived distribution where they carry a patch to every package, and they've got some crazy, crazy clusterfuckery. And we've got Ubuntu, who've copied everything and applied their own patches, and every six months they take all those patches, and they've got some crazy cron job that diffs them all backwards and forwards and tries to wedge the patches back in properly. And we've got all the other derived distributions, who don't even bother with any of that and just copy everything once and then fly off into the sands until they die. The whole thing is just a complete disaster. I'm sure we must be able to do better than this. So I want everybody to go away and think of workflows and good ways of doing things, and how to get from where we are now to somewhere that's not a complete nightmare.

So next year we have ten new systems. Ten new systems, and then we can have something like a bake-off. We already have... The key thing is whether we can come up with an interface. One of the things that we lost ten years ago: you could download a source package; you could write a program that would download all of the source packages, unpack them all, and do some kind of code analysis on all the .c files. And you can't do that any more. The reason you can't is that you run dpkg-source -x on some package and you get a bunch of tarballs. What? What the fuck?
Or you get a bunch of code that exists, but in fact you have to take these patches here, which can only be applied by running some code from over here and some code from over there, and it doesn't work on stable anyway. It used to be that, given the name of a binary on my computer, I could take that binary and find the source code it was made from, entirely automatically, and there were standard tools to do it. Now there's no standard tool to do it. You don't notice this much if you live in your little world with your package and maybe a couple of others that you care about, but if you're trying to do embedded work, or you're trying to derive like the Ubuntu developers do, or you're trying to do anything cross-archive, it's a complete nightmare.

Nowadays I'm using rsync. My standard approach is: unpack the package, and I never build in that tree, because the rules files are too unreliable and the patch systems are too crazy. So you rsync to a build tree, and you build it, and you see how it goes wrong, and you edit the build tree, and then you have some kind of crazy diff run to recreate one of these patches, if there's a patch system, or some other thing, and then you rsync again with --delete -a, and then you rebuild. You rebuild all the packages from scratch every time you make a change. I had better tools than this in 1989 on my home computer. It's just incredible.

So basically we need one tool; we need a standard interface. I don't care what's behind it: there needs to be a standard interface for all the usual operations, which are: get this particular version of the source, and when I say get the source, I mean make a directory on my disk containing the thing that actually gets fed to the compiler, so that I can edit it in my editor without having to do any weird shit. We
do have all these patch stacks, for example in dpatch; right, those would have to be modelled properly. Well, the reason people are using these things is that they want the features: the native archive format is too feature-poor, it's too poor a revision control system, so people pick a different revision control system which is better for them but worse for everybody else. That's a perfectly normal economic thing for people to do, but it would be nice if we imposed some kind of structure on the situation. The obvious answer is to have a revision control system that has all the files in it, or even half a dozen revision control systems, provided we have a standard interface. In theory you're allowed to write your Debian changelog in a different format; nobody's ever written a parser for a different format, but you could do something like that, where you say get the source for this, and it goes and looks up and finds that the source of this is kept in git, or in darcs, or in tla, or maybe it's in the normal archive.

Is it practical to put the whole thing in one system? Because it's a bit large. You could have a separate repository for each package; I don't suggest a particular layout. If you did just put the whole damn thing in one git repository, that would definitely not work, because when you wanted to check out the source of hello to fix a syntax error, you'd have to download the source for X as well and everything else; with git you always get the whole tree.

A lot of projects use a revision control system upstream; it's then only a case of tying the upstream revision control system to the Debian packaging. We don't necessarily need to mirror all of that in our own system, because there is some form of tag on the upstream. Our own patches and our packaging, with the rules scripts and all that kind of stuff, those we want. I mean, that's why
people are having these patch systems: because they want those under revision control, and I want them under revision control too. For the packages where I'm upstream, I wouldn't do anything but keep the whole thing in a revision control system.

Let me propose a crazy idea to solve this. What if we designed something like a backend which would actually work in the same way as the upstream revision control system, so you have a mirror revision control system which is the same as the upstream one? So if the maintainer or the upstream uses svn, you've got a backend for that and you have the package in svn; but if the maintainer uses git, the same thing with git. It's also crazy, but you would have to put them behind an interface, and that interface would have to live on your own system, because having it indirect through the Debian servers all the time would be painful. Launchpad has something like this: they have some crazy thing that imports all the repositories that anybody's ever told them about into their own giant Bazaar instance; each one is a repository, and they sync, they have some kind of import mechanism, so they have a local mirror of what the svn has.

If you have a look on Alioth at the team-maintained packages, each team sort of has its own way of working in its own revision control system, but between teams they're actually quite different. I think what we're sort of proposing here is that you take one team model on Alioth and try to apply it across Debian, so we've got a consistent approach, so you aren't stuck with some divide at the team level. Ideally, I'd like not to have to change everybody else's workflow, because that means I'll have to change my workflow as
well, because the compromise that everybody comes up with will not be the one I like, and anyway, getting that through Debian's cat-herding process would be impossible. So I think the most we can hope for, at least at the beginning, is some kind of unified interface where certain operations can be performed in a standard way. In particular, you want to be able to get a package and fix a package in a standard way that doesn't involve having to know all sorts of complicated stuff about the way the team structures its workflow; you get to interact in a standard way with maybe some subset of these operations. Where are the interfaces? That's the question, isn't it? Because we already have a standard interface for getting a package and fiddling with it; it's just that the fiddling-with part can be very different. Right; what I mean is, if the package is, say, mainly C code, that what you get is something you can grep. A lot of the time you want to fix a bug in a program, and a lot of bugs are quite easy to fix if they're happening to you, and you don't want to have to learn all about the way this program's packaging behaves and all this other kind of stuff. Really, we've made it very hard to work on packages that you're not intimately familiar with.

What about the upstream difference? I've got a couple of projects where, as well as looking after my own, I keep a debian directory in the upstream CVS; I don't put it into the tarball, but I do check it out and build the packages from it. It's my own way of doing it; a lot of you probably do the same thing. But the actual upstream CVS is very, very bare: it only contains the Makefile.am, the C files, the configure.ac and a few other little bits. The actual tarball is much, much bigger; it must be at least ten times the size, because it's got the Makefile.in and it's got the actual generated configure and libtool and all the rest of that stuff. Now, if
you're actually going to do this, you've got to have some kind of way of determining how to get from the upstream CVS to a usable package, where they've run autogen and done the first configure. Well, if you've got those tools, those tools could be installed when satisfying the build dependencies; we've already got an automatic way of doing that. The risk is, with that disparity between the upstream CVS and the tarball the maintainer sends to Debian, your diff.gz could be huge just from regenerating all those files, because you could be running autogen under a more modern version of autoconf. That's what I'm saying: what we have at the moment, this .orig.tar.gz and .diff.gz in the archive, often isn't useful to anybody. If the package is really maintained in some revision control system, then ideally you want the convenience of a standard way of finding that repository and checking out the right version from it, rather than being given some tarball: maintain a field in debian/control.

What you almost need is: in debian/rules we've got the configure target, we've got the binary target; what we keep talking about is an intermediate target, dpatch does it, but not all the patch systems do, an intermediate target which does whatever you need to do: unpack all the tarballs and apply all the patch systems, so we end up with plain source code just before the build. There are two problems with that. The first is a practical problem when you're editing your package: you run this unpack target, fine, then you edit the code, you build it, fine, it works, it builds the binary, you test the binary, the binary fixes the bug, now you go dpkg-buildpackage, yada yada yada, and upload it, and your change is lost, because what you edited is an output file. That's one reason why my workflow involves running rsync twice now, because that means
that I never have a file I edited get randomly splatted. And it happens, it happens, right? If you do this a lot, you end up with a workflow that means it never happens to you: you put lots of disk in your computer and you make loads of copies. The other problem is that it involves executing code from the package, and there are lots of things that you would like to be able to do to a package without having to do that. For example, you're scanning a package for security violations, and it would be nice if you didn't have to instantiate a fresh virtual machine, unpack the package, run the crazy unpacking script, which might download stuff from the internet, and then discover that it's rooted your testbed. You would like to be able to write a program that can examine the source code of every Debian package without having to trust that those Debian packages aren't booby-trapped. Well, wig&pen is partially there, but it's not really sufficient, because it's trying to be its own revision control system, isn't it?
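Pulling together two threads from above, the "maintain a field in debian/control" suggestion and the "get the source for this, wherever it's kept" interface: here is a hedged shell sketch of such a dispatcher. The function name, the way the VCS type and URL arrive, and the URL itself are all illustrative; devscripts' debcheckout, driven by the real Vcs-* control fields, later ended up doing roughly this:

```shell
# Hypothetical dispatcher: fetch a package's source from wherever it lives.
# In real life the vcs/url pair would come from a control field, e.g.
# "XS-Vcs-Git: git://git.example.org/collab-maint/hello.git" (illustrative).
get_source() {
  pkg=$1 vcs=$2 url=$3
  case "$vcs" in
    git)     git clone "$url" "$pkg" ;;
    darcs)   darcs get "$url" "$pkg" ;;
    archive) apt-get source "$pkg" ;;              # plain old archive fallback
    *)       echo "unknown VCS: $vcs" >&2; return 1 ;;
  esac
}
```

The point of the sketch is the shape: one verb the non-maintainer learns, with the per-team choice of RCS hidden behind it.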
And it's not a distributed revision control system, and it doesn't have merge tracking. If you look at wig&pen plus the Debian archive as a revision control system, then any revision control developer who looks at it will go: why have you invented your own crazy revision control system, when there are at least half a dozen revision control systems that people are actually using in anger, and at least three of them are usable, though people disagree about which three?

So how do you fix this? I just want to change the source; is the only way to fix that to do away with the magic unpacking part? I think so. That diversity was one of the strengths of Debian, I guess, when we grew from thousands to tens of thousands of packages: people were enabled to do whatever they wanted. But it's now turning into a weakness, because we don't have any consistency.

Originally these patch systems were only used for very large and cumbersome, difficult packages where it was practically impossible anyway. Take X: X was an enormous source package, and it would take hours and hours and enormous quantities of disk space to build, and unless you were some kind of wizard you basically couldn't unpack it and fix even the most trivial bug in any X program; the build dependencies were a nightmare, the build system took ages and was very fragile, and it was just hopeless. So there wasn't much resistance: when the X people said, oh, we're going to have a crazy patch system as well, there wasn't much resistance from anybody, because all the people who would have been hurt by it weren't touching X anyway. And now every tin-pot little package with a 10k-line C program is using a patch system, and they're all using different patch systems.

I was forced to use one recently, because I wanted to incorporate the original package and its docs with some of
the docs on the wiki and you can't make a patch that would just like stick all this stuff in there I don't know why you can't patch file files or something you can't patch files away I couldn't get rid of stuff so you have to go oh there's this magic system which will let me take this table and that table and magic and things so now I'm doing it too only because I couldn't do it the original way because it's shit as you observe so why are you complaining about it well I'm just saying everybody's doing it and they all have a good reason but yeah there is a problem so the pack system world has kind of consumed people aren't there's not a great deal of new innovation and fancy new exciting pack systems coming out the latest because it puts me around for a little while now and it's the point now where you can sort of they all work in roughly the same kind of way the same kind of basic idea and it would be possible to take the quilt machinery out of the source package and put it in some developer tool instead and have the source package have some kind of field of the control file saying basically just say that our source package has stopped being one format and we have six supported source package formats one of which is a Git repository and one of which is a where the Git tree is in the archive and you download it and it's a Git checkout with the repo history and one of them is a is a deep patch thing so is that repository then going to only hold our packaging code and our patches and then we get the tar ball from the original source no obviously not as much as possible but the important thing is that it's machine readable that you basically have a new and decarative yeah but can we know that we will have to support all Russian controllers through all our screens well if the version control system that upstream are using is something that they pulled out of their rear toenail three years ago and nobody else can use it then somebody has to lose and the maintainer gets 
We've already got this problem with crazy orig tarballs and the like. In that case you could actually fall back to just regular orig tarballs, and there's no reason why the current source format wouldn't be one of the options. So what has to happen next is that someone has to sit down and think about which operations a non-maintainer really needs to do; and once you've settled those operations, we'll soon discover that non-maintainers want to do all sorts of things, things that aren't supported by the chosen revision control system, and we'll have to say: I'm terribly sorry. Anyone in tears? Well, I wrote the original dpatch, so I suppose it's my fault. That's right, it's all your fault. dpatch isn't my fault, but I created the void that dpatch filled. I'm just going to say two things. First, keep it simple for newbies: I'm new to this whole group and I don't really understand everything you're talking about. Abstracting away complexity often doesn't work, but if you can make it simple, it will be easier for other people to join. The other thing I think is important: it's of course hard to get everybody to change their workflow, but you can probably get them to change once. If you can make up your minds and decide "we want to do X", it's going to force everybody to change, but it's a one-time change. Don't kid yourself: you will never be able to change the way people work on a general level unless you change the tools which everybody is using, and that basically reduces to dpkg, apt and aptitude, because those are the only tools which everybody uses. Even if people don't want to change, eventually they will be forced into changing their workflow, because otherwise there is no pressure: if what they are used to still works, they won't move, because you don't see the benefits unless you try. Don't underestimate, as well, the number of packages in the archive that haven't even been touched for the last two or three years; even where people want to change, it still takes them a year to get around to it. We still have seventy-odd packages using the old tools, I think. Yes, but as I said: if you can make a change, it's a one-time thing; everybody is going to grumble about it, but six months later they have all forgotten. Speaking of long memories: has anyone ever tried closing one of the bugs I filed? Do we still have four-digit bug numbers? I think so, yes; I think 789 was closed recently. One thing I was going to say: I see a lot of non-RC bugs that sit around for month after month, and I don't know if anybody cares or if anybody is working on them. It depends on the maintainer. I can get the source code, I can get it to compile, I can fix it. I think the culture has changed; there used to be people who got very stroppy about others just fixing things, and it's a lot better now. Better because the release managers kind of enforce an NMU policy. Exactly, it's a real culture change; some people still get stroppy, but the fact that we retain a level of control means people are a bit more relaxed about it, and we're not going as far as "everyone can do everything". I frequently get these emails from people saying: are you doing anything about this, shall I NMU it? And I always write back straight away and say thank you very much, I'll look at your patch at some point.
And then three weeks later I go: what the fuck. And then you feel obliged to do it properly. Exactly. The last time this happened to me, somebody had conditionally enabled something on ARM; but even so, that kind of thing generates a more dynamic culture where people get around to things. I think that one of the worst ideas Debian came up with was ownership of packages; it created a kind of responsibility, but on the other hand it created a barrier against just fixing bugs, and that's why I think we still have four-digit bug numbers. No, that's not why we have four-digit bug numbers. If you look at the four-digit bugs: if anybody cared, if even the submitter cared enough about the bug to spend the hours that would probably be required to fix it, then it would have been fixed. I've got dozens of four- and five-digit bug numbers open, and they're not easy; they're all things like "I want the behaviour to change in this way that's incompatible with everything else, but isn't broken". It's not laziness that these things aren't getting fixed; there's an actual problem with feasibility. On the other hand, there are cases where you cannot touch a package because the maintainer is really, really possessive of it. I've never encountered a maintainer who actually refuses a patch. Then you've never met me. The polite answer is: offer to NMU. Provide a patch, say "what do you think of this patch?", and if they don't reply, or if you feel like it, you can say "I'd be very happy to prepare an NMU, and I'll assume that's fine by you unless you tell me otherwise within the next couple of weeks". Sometimes you get back a reply saying your patch is completely bogus, but at the very least you've put them on the spot and you've been helpful about it; you've learned, because you did it wrong. And sometimes they were wrong and you were right. Surely not. Okay, I've tried to condense everything into a single sentence: basically what you need is a declarative language that defines what to do to properly unpack the source. Really it has to be little more than an identifier saying which more complicated method is to be used, to select between the different mechanisms, and maybe indicate a version. Patch management, rather than pure unpacking. We want to have three or four different standard patch management arrangements; we're going to have to allow people to add their own, and we're going to have to tell them: you can't use a new one except in unstable, because not being able to unpack the source on stable would completely fuck backports. There will be some resistance there: why can't I use my shiny new feature yet? Because it's shiny and new, and therefore not known to work properly yet; it doesn't work on stable. If you have a package whose encoding of the source cannot be decoded by anything in stable, then you won't be able to unpack it without reformatting it first. And you won't even be able to start fixing a package if you can't unpack the source. All the other problems can be worked around by saying "oh god, no, not that feature": particularly if you're just doing it locally and you just need the thing to work right now, you delete all the stuff that's not compatible, you compile it, and 90% of the time that actually works. But you can't even start on that if you can't unpack it. If it's using a new version of dpatch, we just have to backport the new dpatch package to stable, and then in the meantime you could put the new format in unstable. That would be cool: backporting the unpack support rather than whole new versions.
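The "little more than an identifier" idea can be illustrated as a dispatcher: the source package declares its format, and the unpack tool only selects a handler, refusing cleanly when it doesn't recognise the name, which is exactly the stable/backports concern raised above. The field name, the format names, and the handlers below are illustrative assumptions, not a description of any tool that existed at the time of this discussion.

```python
# Sketch of a declarative source-format dispatcher. The "Format:" field
# and the handler behaviours are hypothetical, illustrating the idea that
# an identifier selects the unpack method rather than encoding it.

def unpack_plain(srcdir):
    return f"dpkg-source-style unpack of {srcdir}"

def unpack_quilt(srcdir):
    return f"unpack tarball, then apply quilt series in {srcdir}"

def unpack_git(srcdir):
    return f"git checkout of {srcdir}"

HANDLERS = {
    "1.0": unpack_plain,          # today's tar + diff behaviour
    "3.0 (quilt)": unpack_quilt,  # tarball plus a quilt patch series
    "3.0 (git)": unpack_git,      # the source is a git tree
}

def unpack(control_fields, srcdir):
    fmt = control_fields.get("Format", "1.0")  # default: current format
    try:
        handler = HANDLERS[fmt]
    except KeyError:
        # A stable tool that doesn't know the format must refuse cleanly,
        # not half-unpack -- this is the backports concern above.
        raise SystemExit(f"unsupported source format: {fmt}")
    return handler(srcdir)
```

The point of the table is that adding a seventh format touches only the tool, never the packages already using the existing six.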
That's the trouble: there's a new RCS which isn't in stable yet, and the one in stable is incompatible in practice, and that's true of pretty much every RCS that you might actually use. If you're using bzr you have terrible trouble trying to do anything; if, like me, you keep a bzr archive in a Debian chroot, it's just impossible to use stable's bzr on anything to do with Ubuntu, because Ubuntu are all using some shiny new repository format, shiny in ways that I don't follow, and the old bzr just falls over on it. Okay, new topic. Eddy, you want to say something about tdebs? Sure; I'm not taking credit for all of it, but I'm proud of it. This whole thing started in September last year, at the i18n meeting, looking for a way to provide selectively only those translations which the user desires to install on his system. Initially we thought the best way would be to make independent packages; well, not full .deb packages, because a .deb per translation would blow up the Packages file: instead of 18,000 packages you would have a couple of million. The calculation is easy, you just multiply the number of packages by the number of languages, and there are 103 languages or so; it really would blow up the Packages file. So another proposal was to have separate index files, which would specify things like the filename, and the packages would have some other suffix instead of .deb: "tdebs", translation debs or something like that, and they would specify the language and so on. Anyway, I wrote a proposal along those lines; it's the translation-debs page on the Debian wiki. iBars also wrote a proposal which came after that discussion, and recently Michael Bramer came up with a slightly modified version. But it was the first discussion which actually convinced Michael that having fully separate packages for translations is not that great an idea, because you have the same problem with documentation and with development files; if you do it only for translations there's not much benefit outside the people interested in i18n, but if you do it in a way that others can benefit from, you can do something that targets more than just translation. So, to go to the proposal I wrote: initially the idea was to have special packages holding localization material, which can be .mo files, or, say for games, audio files that are in French, the guys shouting in French instead of English, you get the idea. So they can also be architecture-dependent, which is why separate packages seemed like a good idea. But in the end we thought we should go back and look at the dpkg v2 proposal made by Scott James Remnant, I think two or three years ago, of defining classes for the files within a package. For a package you would define classes that say: these are translations, these are documents, these are man pages, and you apply filters on them at installation time, or at some point at build time. The first use case was for the embedded people to use the remove filter, because they want no man pages, no documentation, no translations; and that part is actually working, it was finished just this last afternoon. Now, the problem is that although this is a really nice feature for the embedded case, it's not such a nice feature for a person who just wants to select translations. There is potential to extend it the other way around: because it's a simple pattern match, either on install or, if we add it, on build as well, you could choose which files go into which package, or actually drive the package splitting from it within the dpkg build tools. So you were thinking that, based on the filters and the file classes, we could generate separate packages entirely automatically within dpkg; whenever we settle where the translation files go, that could be automated so that maintainers wouldn't need to change anything. Let me bring everyone up to speed on the problem with the naive approach, a regular .deb from which you split out the translations and just remove them at installation time: it won't work on its own, because the translation can get out of sync with the program, and if the strings don't match, gettext just ignores the translation. Theoretically yes, but in practice it has happened; some were broken at some point. I'm suggesting that our tools should prevent that from happening. Yes, but you cannot patch gettext to do that. I'm not suggesting patching gettext. The message ids are the original strings, so when you're building a tdeb you can tell statically from the tdeb whether or not it's type-safe, and we can just make the build fail if it isn't. Yes, please. I have another question, which is related, regarding package versions.
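The "type-safe" check described here, that a translation must refer only to strings the program's current template actually contains, can be sketched by comparing msgid sets. This is a toy parser handling only simple single-line msgids; a real implementation would use proper .po machinery (multi-line strings, plural forms, headers).

```python
import re

def msgids(po_text):
    """Collect the non-empty msgids from a (simplified) .po/.pot text."""
    return set(re.findall(r'^msgid "(.+)"$', po_text, re.MULTILINE))

def is_type_safe(pot_text, po_text):
    """A tdeb build could be made to fail unless every translated msgid
    still exists in the package's current template."""
    return msgids(po_text) <= msgids(pot_text)

# Example data: one translation matches the template, one has drifted.
pot = 'msgid "Hello"\nmsgstr ""\n\nmsgid "Quit"\nmsgstr ""\n'
po_current = 'msgid "Hello"\nmsgstr "Bonjour"\n'
po_stale = 'msgid "Helo"\nmsgstr "Bonjour"\n'  # out of sync with the program
```

Because the msgids are literally the original strings, this check needs nothing from the binary package: it is a static property of the tdeb against the template, which is what makes failing the build (or the archive upload) feasible.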
My understanding, and I'm not a developer, is that if there's a security update, the version changes, so the tdeb's version would change, which means downloading it again even though the content is exactly the same. Wouldn't it be possible to have some intelligent hash of the original material to be translated, and keep this hash in the control file? It would have two benefits: you don't have to download the translation again if it hasn't changed since the last security update; and, to address the problem described before, if it doesn't conform you just ignore it. I didn't quite understand the question. On one hand there is always the problem that, even if you did three or four translation updates in a tdeb, when a security update or a regular upload happens in unstable, it boils down to whether the maintainer merges your changes back from the tdeb; if he doesn't include your translation, your work is lost. Assuming he does include it, you have a new version, and the user downloads the new translation, which is basically the same as the previous one. Yes, but I think avoiding that would actually be harder, a bigger process, than just letting it be downloaded again. What I mean is having a hash inside the control file: instead of keying on the version, a hash of what should be translated. But that doesn't contain any version information, so you don't know whether the next version has the same hash; and when you do an upload of the Debian package, how do you know the hash is right for the new version, and how do you order one hash against another? Because the binary package includes the hash of what should be translated. That doesn't work; you would have a broken workflow, because you build the Debian package now, then several translation updates come later, and what do you do? You would have to rebuild the Debian package so that it contains the right hash. It could be stored separately. Actually, having the translations tied into the original binary package is just wrong. If you have a security update, you would rather issue the security update now, even if some translations get broken and revert to English, and have the translator fix the new strings later. And for the dependency you want the opposite of the usual: nobody would use "greater than or equal"; you want "less than or equal", so the most recent available translation can always be used even if the package gets a couple of versions ahead. No, I think that's over-complicated. gettext ignores stale translations, so that should be okay; nothing goes badly wrong if the translations and the main package are installed an hour out of sync, you just don't get some of the translations. You might not even notice, and if you do notice, it's not dreadful. The way I was thinking this can be made to work is: when you install the package, you get the .deb, and then afterwards the translations; you definitely want that to be automatic, but if for some reason it doesn't work, if there's some kind of problem, it's still fine to go ahead. That's exactly the character of it, and that's one of the main criteria for whether two files should be in the same package or not: is it absolutely essential that they get installed together? In this case it's absolutely not essential, so they should be in separate packages; they're kind of optional dependencies. And the criterion for whether they should be in the same source package is: are they maintained in the same workflow, by the same people? They're not, so they shouldn't be in the same source package either. One problem is that we don't have a good way of handling a source package from upstream that contains material we have split out. And another problem: say you have released etch; this is the case for French, and I probably shouldn't say this, but they have 100% translation coverage, or close to it. Nobody is insane enough to think that an update of a translation would break a package, but right now they can't update translations in a stable release at all; with tdebs they would be able to do pure translation updates for stable too. So that's also a goal. So we decided, in our radicalism, that these things are completely separate, and we want to introduce a new system. It depends on how we manage the tie between the package and its translations: ideally we match them up just loosely enough that most of the time it works, so the install just finishes automatically. That will require a little magic, but if you think of all the bookkeeping that goes into individual packages, a lot of it is unnecessary here; we don't care, for example, if a translation package has one of its files overwritten. Exactly: if something happens to a localization package on the system, it's not fatal.
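The hash idea raised above, fingerprinting the translatable material rather than the package version so that a security upload which changes no strings does not force re-downloading every tdeb, might look like the following. Where the hash would actually be recorded (a control field, a separate index file) is exactly what the discussion leaves open; the functions here only show the mechanism.

```python
import hashlib
import re

def template_hash(pot_text):
    """Stable fingerprint of the translatable strings in a template:
    the sorted msgids, hashed. Code-only changes leave it unchanged."""
    ids = sorted(re.findall(r'^msgid "(.+)"$', pot_text, re.MULTILINE))
    return hashlib.sha256("\n".join(ids).encode("utf-8")).hexdigest()

def tdeb_still_current(recorded_hash, new_pot_text):
    """Skip re-downloading the tdeb when the strings are unchanged."""
    return recorded_hash == template_hash(new_pot_text)
```

A security rebuild that touches only code yields the same hash, so the already-installed tdeb is kept; a change to any msgid changes the hash and triggers a fetch. The ordering objection raised in the discussion stands: hashes compare only for equality, so versioned relations still need the package version.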
The reason dpkg records every file a package installs is so that if a package overwrites some file and you delete that package later, dpkg knows what was there; but we don't need that for a translation. Which means the .list files of tdebs would not need to be read when you're just installing a normal .deb: you don't read the tdebs' .list files, and you don't worry if some other package clashes with one of their files, because at worst you lose a little bit of a translation. You still wouldn't want a tdeb to overwrite other packages' files, so a tdeb still has to read the other packages' .list files; it's the reverse direction you can skip. And the reason you don't want them read is that if you have a system with lots of translations, some kind of demo system where you install them all, the .list data would dwarf everything else, and dpkg would have to read it every time it starts. That's basically the situation today, since everybody installs all the translation files; but at the moment the translations are not one file list per package per translation. If dpkg had to read a .list per package per language, you would have a serious problem very soon, whereas in a normal scenario you don't have more than about four languages on a system. It can also be interesting for a terminal server, which is the case where you install everything and share it with everybody. Well, there you might want something like: you don't install the translations until you install the PHP magic web interface, and just before that you enable them; basically you need to be able to automatically install these translations like a shadow of the original package. Do these translation packages include the translated package descriptions, or is that separate? No, that's a different thing; that's a separate proposal, the translated descriptions work from the i18n people. I've long been of the opinion that the extended descriptions shouldn't be in the status or Packages files at all; even the English extended description should be a separate file, because it causes performance problems, and you don't normally care about having precisely up-to-date descriptions. Right, and it means that Romanians have to download the English text when they didn't need it, whereas you could have a Romanian Packages file that contains the English text only where there is no Romanian translation. Okay. Another proposal, and although Michael is not here I would like to take it into consideration: maybe we could take this tdeb idea and extrapolate it to classify documentation and similar things in .deb packages, and turn most of them into this kind of shadow package. When you want to install a package you just say "I want dpkg", and if you enable some switch or option you say "I want dpkg plus the documentation and the development files and every man page and all the rest". There are some nice possibilities here. I think we should be careful when we design this not to rule that out, but also careful not to enable it immediately, because if we're not careful we will create an explosion. At the moment we have a bit of a problem in the archive with maintainers who think their package needs to be split into six sub-packages; if we go down this route, each of those six sub-packages will be split into four packages itself, and there will be 24 files in the archive for each, each separately tracked and separately installed. Putting man pages and similar stuff in with translations doesn't make sense, but we do have to have automatic machinery that checks, and it can't live only client-side at build time; it has to run at the point of archive acceptance. Something has to check, firstly, that all of the gettext strings are type-equivalent, and secondly, that the package only contains translation files, or man pages, or whatever it's explicitly allowed to contain. Why don't we do that at build time as well? Do we check twice? Archive acceptance is the security check; the build-time check is so that the uploader gets told now rather than in six hours' time. I think we need a multi-layer approach. On the first level there are packages, and there's the Packages file and everything; then there are sub-packages that contain some component not everybody wants, where the sub-package is large enough to make that worthwhile; and on the third level, only translations currently have this special case, though documentation is another possibility. The current arrangements for documentation are a bit clumsy, but if we for example invented a strict machine-readable rule for package naming, the whole documentation question might just go away. I could imagine turning the architecture-independent part of a large package into a sub-package; we sometimes do that with data files already, ad hoc. Translations are really special, and we should be clear that we're doing something special: translations are a hard problem, because every package has a hundred potential sub-packages, and the user needs to be able to select which columns of that table to have. Also, I think it would be a good idea to advise people who want to pursue this to look at the wiki page, because there are many other issues raised in the proposal; we've talked for six or seven hours about this. I still didn't read it. You're lazy, and you're asking me questions instead.
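The archive-acceptance check proposed here, that a tdeb may contain only the classes of files it is explicitly allowed to, reduces to matching the package's file list against a whitelist of path patterns. The patterns below are assumptions about where such files would live; the real policy would be part of the proposal.

```python
import fnmatch

# Hypothetical whitelist for a tdeb: gettext catalogues and
# localized man pages, nothing else.
ALLOWED = [
    "./usr/share/locale/*/LC_MESSAGES/*.mo",
    "./usr/share/man/*/man[0-9]/*",
]

def reject_list(file_list):
    """Return the files that would make the archive refuse the tdeb."""
    return [f for f in file_list
            if not any(fnmatch.fnmatch(f, pat) for pat in ALLOWED)]
```

Run once at build time so the uploader is told immediately, and again at archive acceptance as the actual security boundary, matching the two-layer arrangement described above.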
The point, then, over the three levels: the first level is packages; the sub-packages have separate files in the archive but don't appear themselves in the Packages file; and on the third level there's some kind of manifest describing further splits of a single file in the archive. For example, one could combine several languages into one file: if each translation is 4 kilobytes, we just combine 25 of them, users download 100 kilobytes and extract only the one or two languages they want; you combine that with the filters and install only part of the file. I think filters are a good idea; they're a powerful general technology that helps with this stuff. But we want to be very careful: every time we introduce fancy new features into our packaging system, with a thousand developers, one percent of them go off and do something completely mad, and that's ten completely fucked-up packages which everybody else has to deal with. Related to filtering, there could also be very interesting things for a system administrator to do: they could define their own classes of files and say, for this class of files I don't want any backups, or for this class of files I don't want the intrusion detection system to check them, all kinds of fancy things. In theory we already have files classified by their position in the filesystem. Yes, and that classification is mostly good, but not 100% perfect: for example, localized data like audio files for a game live in odd places, sometimes in a shared data package, maybe a media-common or media-fr package or something like that, because there isn't a good place to put them. So the question is whether each package shouldn't have some sort of manifest that lists all the files in the package and declares what they are, like "documentation" or "translation"; where you can't tell from the file name, the manifest calls it out. And the cases where you can't tell from the file name are exactly those where the filesystem layout doesn't define a special location for that kind of file. Are you suggesting that instead of inventing a new interface to deal with that problem, we should fix the layout? That's one possibility that we haven't properly explored; there might be reasons why we wouldn't want to. We could talk about that for several hours, and it's not really specific to translations. The problem is that some files, say English documentation or English man pages, go to completely different directories, so either we end up with a huge list of directories, all the i18n infrastructure under /usr/share/locale, /usr/share/man, /usr/share/info and so on, plus per-package directories if you have media files which are localized under /usr/share/<package>. And it's true that you can't always tell the language from the file name: a human can tell, but a computer can't really, because where in the file name the locale is encoded is not well defined. It can be defined, though. Would it make sense to allow a package to declare, in some control file, a file name pattern with a special symbol marking where the locale goes, meaning "in this package the locale is characters five and six", or wherever? Every maintainer would define that once, because they want the translations to just go away to the translators, and let them deal with it. Are there packages that have non-standard names for languages, so that this doesn't help? Because if you've got a filtering system that's plucking the language out of the file name, and some package needs a translation table for its language names, this is going to go wrong. I don't think there are many, and that mapping is something that can be worked into the package build file. So you can have a tdeb contain a manifest, as Simon suggests, which defines which files belong with which languages, and for most packages it can be inferred. The alternative would be to have this information in the main package: if the main package adds some directory where localized files go, it would have to declare that directory, like /usr/share/my-package, and where in it the locale sits. As for how many packages use a really non-standard way of naming locale files: there are not that many. They either use the two-letter code, or language.encoding, or code_territory.encoding, or something like that; there are only, say, four or five schemes of defining it, because otherwise nobody could cope. Nobody calls English "UK"; some packages have in the past, but if they do, there's usually still some standard encoding underneath, something that's easily mapped. Still, this is not the kind of computation you want to perform during the install. What the installer wants is a very clear declarative rule that says "does the user want this file?", and that means the file has to be tagged with something that matches a set of desires, a manifest class. We could also simply file bugs against the offending packages. No; it's not reasonable to ask a Debian maintainer to rewrite the i18n support in some giant crazy package that was written by astronomers ten years ago. But a lot of the i18n support is embedded in the standard tools anyway, and there is a fairly standard format for how most applications do it. The other reason you want this information in the tdeb, and not in the corresponding .deb, is version skew: you want to be using the information from the tdeb that you are installing, and not from whatever .deb you happened to have at the time you unpacked it.
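The "four or five schemes" of encoding a locale in a file name can be captured with a couple of patterns: the standard /usr/share/locale/&lt;locale&gt;/ layout, and the app.&lt;locale&gt;[.&lt;encoding&gt;].po style found in source trees. This is a sketch under those assumptions; the odd packages discussed above would still need the per-package declaration.

```python
import re

KNOWN_SCHEMES = [
    # /usr/share/locale/fr/... or /usr/share/locale/pt_BR/...
    re.compile(r"/locale/(?P<loc>[a-z]{2,3}(?:_[A-Z]{2})?)(?:\.[^/]+)?/"),
    # app.fr.po, app.pt_BR.po, app.sr.UTF-8.po, and .mo equivalents
    re.compile(r"\.(?P<loc>[a-z]{2,3}(?:_[A-Z]{2})?)"
               r"(?:\.[A-Za-z0-9-]+)?\.(?:po|mo)$"),
]

def guess_locale(path):
    """Best-effort guess: which locale does this file belong to, if any?"""
    for scheme in KNOWN_SCHEMES:
        m = scheme.search(path)
        if m:
            return m.group("loc")
    return None
```

As the discussion concludes, this kind of pattern-plucking belongs at build time, feeding a per-file class tag into the manifest; the installer then only consults the declarative tags, never the regexes.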
The question is also whether all packages really cope with version skew and don't fall over. I think gettext can handle it; gettext just falls back. Anything that can't handle it will have to stay with plan A until we can fix it. I mean, it's a very serious problem if an application can't cope with missing translations. It doesn't break; it just falls back, in the gettext case. Yes, but I know some packages have crazy magic of their own. Well, for that case, if a package doesn't handle missing translations, you simply say "for this application, for this package, I'm not splitting out a tdeb", and we don't split it. That's still better than the current situation: at least we haven't made anything worse, and maybe we'll fix that problem later, or come up with some workaround. Exactly. I don't think it's necessary to stop it exploding. Can we take stock: do you think we have enough info to rewrite this page now? We don't have much to work with. You're going to go away and write this page with what we have. There were many ideas floating around, and we have most of them on tape, but five minutes isn't enough; they're only going to go through a table. No, that's just not right. A meeting like this should be used to produce decisions, so the question is: do you have a clear idea which decisions we have reached? Well, let me summarize; you'll have to take my word for it. What I wrote initially is quite good, but we haven't quite decided whether we want to do something like sub-packages; it's not clear, but it seems like a good idea. There are also man pages and documentation besides the translations: if you split translations, man pages and documentation are candidates too. Okay. So for the first step we want just translation debs, and we try to leave the design decisions open so that it would be easier to do the doc-split packages later. Don't: if you try to put that into this proposal, you'll get all sorts of other people interested, and they all have different ways of doing documentation, and that's not what you want. True, but docs only multiplies the problem by 2, as opposed to by 100. Yeah, it's a very different kind of thing; we might want to do it eventually. Okay, let me just figure out what I want to say. There was also the idea that for some cases it would make sense to have just one tdeb file with all the translations, if the translation files are really small, say 4 kilobytes per translation. That would mean doing something about this situation: you provide some manifest file and say "if you want to install this kind of translation, fetch the common tdeb and filter things out". So the question I have is: if these translation files are so small that we're going to bundle them all in one package and just download it, how much extra cost are we imposing on users if we say there's no facility to avoid installing all of it? Probably not very much; each translation isn't that huge, although it adds up over a significant proportion of the archive. You could also do things like having a package with ten tdebs, with 10% of the translations in each. The problem is that you would end up doing what Ubuntu is doing: uploading, from time to time, a huge 100-megabyte package. Ubuntu does it quite differently, though: their language packs are cross-package, a whole column in the table, whereas our tdebs are one cell in the table, maybe a subset of a row, but they don't span rows.
We've got a whole bunch of KDE documentation files which are pushing 30 or 100 megabytes because they contain all the languages and all the graphics for screenshots in all the languages. If you split those out into tdebs, you fetch the 5 or 10 tdebs for just the relevant languages; whereas for a gettext .mo file, which is just text, you can have a single tdeb. So perhaps: if a manual has lots of screenshots in it, you have a separate tdeb for each language; some sort of size threshold. That's interesting: if you do it like that, you don't have to invent machinery that sits on the user's system and filters out what should be installed, and then, when the user says later "actually, I wanted the Canadian French as well", because there's someone who insists on Canadian French and not French French, the UIs have to do something and the computer gets a headache trying to work out which packages it needs to re-download and re-install. It's just really painful. We're betting that if the translations are small enough to bundle, it's okay to just install them all. Right: if you're happy bundling them in the same tdeb, you're probably happy installing them all together, and if not, split them to match. The problem is that you would have to change the semantics of the package file name. The translations file, the per-language index, needs to provide more than one thing: you download the Romanian translations file, and it tells you not just which version of each tdeb you want, so package name and version number, it also has to tell you which language code to download; there will be some token in the file name. And you need that anyway, because if you ask for Canadian French and it turns out there are no separate Canadian French translations, who knows why, you'll have to download the French translations anyway, and you'd want to do that with a mapping in the translations file rather than having two identical tdebs. Well, I think that's going too far. I was just looking at this for the dpkg filtering: gettext supports having a very customized locale fall back to the generic French by default, so for something like French Canadian you would actually say to the user "in your case, you want French and French Canadian", and you fall back all the way. It's implicitly supported within gettext, so it's easy in that way. You might say something like: if you're installing dpkg and your locale is French Canadian, which we can tell because we're downloading the French Canadian translations file, then that file says "dpkg, version number, French" and "dpkg, version number, French Canadian", because you may need to download several tdebs. The more pessimistic point of view, the one you can't get wrong, is that where there isn't a specific file, you download the more generic file anyway, and gettext will itself fall back from Canadian French to French French, except where the Canadian French file contains only four translations rather than a full translation. So it's one tdeb per language: you download one tdeb per language, except where a specific one exists, in which case you get redirected to the same file. Just a second. I think the sanest way to do it is something like: you want to install your system in French Canadian, and at that point you follow the instruction manual, which says explicitly that if you want a fallback language as well, you activate both French and French Canadian.
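The redirection being described, where the fr_CA and fr entries point at the same file so only one download happens, might be sketched like this; the index layout and the tdeb file names are invented for illustration, not an existing format:

```python
# Hypothetical per-language "translations index": for each language, a map
# from package name to the tdeb file that actually carries the translations.
# Both fr_CA and fr point at the same file when no separate Canadian French
# translation exists.
INDEX = {
    "fr_CA": {"dpkg": "dpkg-t_1.14_fr.tdeb"},  # redirected to the generic file
    "fr":    {"dpkg": "dpkg-t_1.14_fr.tdeb"},
}

def tdebs_to_fetch(package, locale):
    """Return the tdeb files needed for a locale, gettext-style:
    try the full locale (fr_CA), then fall back to the base language (fr)."""
    wanted = [locale]
    if "_" in locale:
        wanted.append(locale.split("_")[0])
    files = set()
    for lang in wanted:
        entry = INDEX.get(lang, {}).get(package)
        if entry:
            files.add(entry)
    return sorted(files)

print(tdebs_to_fetch("dpkg", "fr_CA"))  # ['dpkg-t_1.14_fr.tdeb'] -- one download, not two
```

The key property is that the client stays dumb: it resolves both languages through the index and deduplicates on file name, with no notion of fallback built into the package manager itself.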
Actually, dpkg just has a list of which tdeb languages you want, say French Canadian and Romanian, and in fact it shouldn't have any notion of fallback at all: French Canadian might well be in the same tdeb. Exactly, in which case there's a file where it looks things up, and it says: for dpkg, the translations files French and Canadian French were downloaded, and these two have to map to the same tdeb. That means the file name must not say "Canadian" in some cases; in other cases you put something like "Language: French, French Canadian" into the index so both resolve to the same file. You should make this file much smaller. Well, we've got two index entries pointing at one tdeb; we were worried we'd get billions of files, so is that worse than the complexity of worrying about random subsets of things? We've got to be able to put a million files into the archive. Just a second, I think we missed something: if we do it this way, every language has to have its own translations file, so you cannot specify "French plus something else" in one; a single tdeb is then listed in two different translations files. So you're suggesting there should be two translations files both pointing at the same tdeb file. Yes, and that actually makes sense. The filter stuff is something we don't understand yet; we'll do it if we decide we need it. What about filtering inside the tdeb itself: instead of just control and data members, split the data by language, so you don't have to extract and then filter? It wouldn't gain you very much, because the data has to be split anyway, and that would all be done automatically while building the tdeb. But the point is that filtering isn't just the overhead of complicating the package format: if you filter the package at install time, then when the user later decides to add another language, the package system gets a headache and has to re-download. And all of this is just to avoid having something on the user's disk that we've already decided we're prepared to download despite not really wanting it. In the real world that doesn't matter; in the embedded world it would be a problem, but there aren't any translations in the embedded world anyway. Okay, a million files, a quarter million, is the space an issue down here? Probably, yes. Okay, so shall we move on to the next one? Thank you.
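For contrast, the install-time filtering the room leans away from would look roughly like this; the language list, the file layout, and the idea of dpkg filtering at unpack time are all assumptions made for the sake of the sketch:

```python
# Sketch of the rejected filtering idea: the package manager is configured
# with a list of wanted languages, and when unpacking a bundled tdeb it
# keeps only the matching translation files.  Hypothetical configuration
# and the usual usr/share/locale/<lang>/LC_MESSAGES layout are assumed.
WANTED = ["fr", "fr_CA", "ro"]

def keep_file(path):
    """Decide whether a path inside the tdeb matches a wanted language."""
    parts = path.split("/")
    if "locale" in parts and parts.index("locale") + 1 < len(parts):
        lang = parts[parts.index("locale") + 1]
        return lang in WANTED
    return True  # non-translation files are always kept

files = [
    "usr/share/locale/fr/LC_MESSAGES/dpkg.mo",
    "usr/share/locale/de/LC_MESSAGES/dpkg.mo",
    "usr/share/doc/dpkg/README",
]
print([f for f in files if keep_file(f)])
```

The objection raised above applies directly: once the de files are discarded at unpack time, adding German later forces a re-download of the whole tdeb, which is exactly the headache the per-language tdeb split avoids.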