Is that Matt Taggart talking about free standards? Thanks. So this is a BOF, and what I would mostly like to do is have a discussion about free standards and how they affect Debian. But I prepared some slides to give some background for people who may not be familiar with what's going on in the free standards world first. So what I'd like to do is go through those, which should only take a few minutes, and hopefully we can get the discussion going and start discussing things from there. So I'm Matt Taggart and I work for Hewlett-Packard. One of the things that I do for HP is I'm HP's representative to the Linux Standard Base, and I look after free standards related stuff for HP. Okay, so the first thing I'd like to do is talk about the various different free standards efforts that are going on in the community. A lot of these are associated with a group called the Free Standards Group, which was actually created after the LSB as kind of an umbrella organization to help foster the LSB and other nascent free standards efforts. So the Free Standards Group has various different workgroups. The one that most people know about is the Linux Standard Base. People in Debian are very familiar with the Filesystem Hierarchy Standard. In addition, there's also an OpenI18N group that works on internationalization issues; LANANA, which people might know about; OpenPrinting; Accessibility; DWARF; and a few other workgroups that are just getting started. I'm going to cover these in more detail in a minute. So the FSG consists of members from the corporate world, the nonprofit world, and individual sectors, and it's pretty easy to become a member of the FSG. If you're a corporation, it's just a matter of paying money; nonprofits and individuals are able to join pretty much for free as long as they can show that they're contributing. Bdale points out that SPI is a member of the Free Standards Group. So it's a nonprofit, run by a membership-elected board of directors. 
There are nine directors, and there's a certain number of seats explicitly assigned to each of the three different membership sectors. I think there are a couple of corporate directors, a couple of nonprofit directors, and maybe four people who just represent individual open source developers. And the Free Standards Group maintains a small staff of an executive director, and they hire some contractors who explicitly work on some of the workgroups, in cases where it's nice to have somebody employed full time to look after things. The various workgroups are basically open source efforts, and sometimes it's nice to have somebody who's able to dedicate their full time to really help keep things flowing. Okay, so probably the FSG's largest workgroup is the Linux Standard Base. Just a couple of quick points about the LSB. The LSB is a binary development standard. This is something that differs a lot from POSIX and other UNIX standards, in that it's not just about source compatibility and source APIs. It's about ABIs, and about being able to have binary portability of applications across Linux distributions. The LSB is kind of interesting in that it specifies interfaces and not implementation. So for example, the LSB will say something like: you must have a library called libc, and it must have these symbols in it. It doesn't explicitly say you will have glibc version 2.3.2 or something like that; it just specifies the interfaces. So the interesting thing about that is that it allows for competing implementations. Right now all the LSB compliant distributions are using glibc, but if somebody wanted to, they could go off and implement their own libraries for these various different things. So for example, if a commercial Unix like HP-UX or AIX or Solaris wanted to go off and implement the LSB, they wouldn't necessarily have to do it with glibc. 
They could write their own libraries that would also be compliant. The idea is that this allows us to have competing implementations moving forward, so we don't ever get locked into one particular implementation. Okay, so the LSB produces various different things, the most important of which is the written specification. There are also test suites that will help you check your runtime implementation to see if you're compliant with the written specification. The workgroup also produces a sample implementation, which is useful for people who are developing LSB applications, because they can run them on top of the sample implementation as a way of testing that their application is indeed binary portable. There are also development tools to help you develop LSB applications, and then there's also a certification program through which LSB runtime implementers and LSB application implementers can actually get certified as being compliant. Okay, so as far as how this affects Debian: sarge was nearly LSB 2.0 compliant. There were a couple of libc symbols that we differed on. There were a couple of other tests that we failed that were actually deficiencies in upstream, which the certification program granted exceptions for. So... Deficiencies in the standard, or deficiencies in the upstream source? Well, deficiencies in the upstream source, but I would also say that's a deficiency in the standard, because the standard shouldn't have specified something that wasn't in upstream. Okay. And etch is nearly 3.0 compliant. 3.0 added a couple of things, namely libstdc++, and so there are still some things being sorted out there. So Jeff Licquia has been working a lot on tracking this stuff lately and trying to figure out what we need to do. Jeff, do you want to talk a little bit about how that stuff is working? Sure. You have no more than five minutes to talk about the magic linker hack. Yep. Is that one live? Hello? Okay. 
As Matt said, I've been working on actually doing a lot of the LSB testing: finding bugs, filing bugs, and trying to figure out some of these problems. At one time, Progeny was able to certify a version of their woody-based Componentized Linux against LSB version 1.3, and I believe that since then, Roger So has done one against 2.0. Is that right? That's what I thought. The issues surrounding getting LSB compliance basically have to do with making sure that you pass the tests. The specifications are mostly POSIX plus a few additions like the FHS and so on, but really the tests determine whether they will give you certification or not. Yes. One question from me would be: wouldn't it be possible to run the LSB 3.0 tests, or start to run them right now, something like once a week, and put the results on a web page? Because that would enable us to push people toward LSB 3.0 compliance and show where our current release falls short. Glad you asked, actually. Let me repeat that for the microphones: the question was, why don't we run the tests for LSB 3.0 and put them on the web page? The answer to that is, we do. First of all, Matt has a web page where he collates a list of LSB bugs, which I guess he's getting ready to pull up. Also, I have made some of my test results available on hackers.progeny.com/~licquia. That's L-I-C-Q-U-I-A for those of you without omniscient spelling capabilities. So go ahead and hit that; there should be an LSB, and there should be an LSB 3.0 in there somewhere. The results here, most of these are from the LSB 1.3 era. There it is, at the bottom. Right now I have results for Progeny's woody-based release and 3.0, the latter of which is basically a sarge as of two weeks before release. 
I will be adding more results against newer stuff, including regular sarge. I would really like to see this — etch LSB compliant, with the exception maybe of some MTAs we don't care about. Well, etch is definitely important. We definitely want to make sure etch is LSB compliant and can be certified after release. However, there is also a significant amount of interest, especially in the derivers' community, in having an LSB compliant sarge. I don't disagree with that, but of course I'm now speaking about how to manage the next Debian release. Right, so that's certainly an issue: LSB compliant without a footnote, basically. So I wanted to address the question too. Right now, running the test suites is kind of an interactive process and needs to be done by hand. Jeff and I have basically been the only people running it, and I haven't been able to keep up recently, but here, I'll show you the testing results. For some of the older testing results, I would generate reports in a nice table format, doing what I was calling an annotated summary of the results. So I would go through the failures, and inline I would put these color-coded notes about whether or not each one was actually a problem. You can see a lot of these are green, which means there are actually waivers for them from the certification program, because they're either a test suite deficiency or a specification interpretation issue. There are a couple in here that are red that we need to figure out. So we're doing all this by hand, because the upstream tests require that you answer a lot of questions when you invoke them, that kind of thing. So the interesting thing is that somebody from SuSE actually wrote an expect script that wraps around all this and feeds it the answers that you want. 
And I haven't had a chance to take a look at it yet, but what you alluded to is a good idea, which is that what we really ought to do is have this stuff running on a weekly basis, where we take a chroot — a full system running stable — and keep it up to date. Actually we could have multiple chroots, one for unstable, one for testing, and then have automated tests running. And all it's going to take to do that is for somebody to sit down and set it all up. So a couple of dedicated machines, one for IA-64 and one for IA-32, and if I could get PowerPC and S/390, maybe I'll set that up. The idea is that we should be able to run it on all the LSB architectures on a regular basis. That's easy enough — I have access to almost all architectures, at least the architectures we are running, and I can set up such a thing, and any of these machines can run it, say, once a week, because I'm running a build daemon as well. So that's sufficient, as far as I understand it. So what we need to do then, I think, is get this automated with the expect stuff, and make it reproducible. Yeah, and ideally put it in a package or something, with the cron job, so you just install it and it runs for you. Because it would be nice for more than just Debian to be able to run this too. You know, there are people who are deriving their distributions from Debian; it would be cool if they had an automated way to run the tests. Right now, I have some instructions back on this main page here. Let's see where it is. I have instructions here on what it takes to run the test suite. So right now, when people ask me how to run the test suite on a Debian-based distro, I can point them at this document. Upstream provides the test suites in RPM format, so you have to install them with alien, and you have to do some other trickery to get them to work. 
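The weekly chroot-and-cron idea described above could be sketched roughly like this. This is only an illustration: every path, the script name, and the commented-out chroot invocation are invented for the example; the real TET runtests invocation and chroot setup differ.

```shell
#!/bin/sh
# Hypothetical weekly LSB test-run wrapper (all names are made up).
set -e

RESULTS_DIR="${RESULTS_DIR:-./lsb-results}"   # where journals get collected
STAMP=$(date +%Y-%m-%d)

mkdir -p "$RESULTS_DIR"

# In a real setup this step would run the suite inside an up-to-date
# chroot, something like:
#   chroot /srv/chroots/sid /opt/lsb/test/runtests < answers.txt
# Here we just record a placeholder journal so the flow is visible.
echo "LSB test journal for $STAMP" > "$RESULTS_DIR/journal-$STAMP.txt"

echo "results stored in $RESULTS_DIR/journal-$STAMP.txt"
```

A crontab line like `0 4 * * 0 /usr/local/bin/lsb-weekly` (again, a made-up path) would then run it every Sunday and publish the collected journals.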
And actually that's something that's being worked on upstream as well. So there are instructions in here on how to run the major test suites. Okay, so let me go back to the slides. Are there any other questions about that stuff? I have just a point of curiosity: why are the tests so interactive, given that it's pretty obvious that something like a conformance suite is something you're going to want to be able to automate? Yeah, so the tests are based on TET, which is something provided by The Open Group. Basically what the tests output is a journal file and actually a written report, and that's how you do certifications. The kind of questions it's asking are pretty easy to automate — it doesn't matter if it's sitting behind that expect script. It's asking you: what is the name of the product that you're running on? What is the name of the engineer who is going to be submitting these reports? And then there are a few other questions for information that the tests need in order to be able to run, that kind of thing. It has to prompt you for the root password, which is kind of tricky when you're automating with that expect script, you know. It sounds like the kind of stuff that a config file would be able to handle. Yeah, so ideally this would be solved upstream, and I'm trying to fight that battle in the LSB workgroup to get that fixed. But for now we have to work around it. Strictly speaking, it is possible to generate that config file through other means. As long as the config file exists, a lot of the questions get skipped. The main issue is making sure you get the config file just right. And generally it's easier to just say, okay, I just want to run the tests, so I just type run tests and answer the questions. So some of it is laziness, as with many things in the computing world. I also wanted to talk a little bit about something that I've been working on recently. 
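To see why those fixed questions are easy to automate, here is a toy stand-in for the interactive runner. The real TET suite's prompts (and SuSE's expect wrapper) differ; the script name and answers below are invented purely for illustration.

```shell
# A fake interactive test runner: asks two fixed questions, like the
# TET-based suites ask for product name and submitting engineer.
cat > fake_runtests <<'EOF'
#!/bin/sh
read -r product
read -r engineer
echo "Testing $product (submitted by $engineer)"
EOF
chmod +x fake_runtests

# Feed canned answers non-interactively, the way an expect script would:
printf '%s\n' 'Debian sarge' 'J. Licquia' | ./fake_runtests
# prints: Testing Debian sarge (submitted by J. Licquia)
```

Prompts like the root password are the awkward case, since they may read from the terminal rather than stdin — that is where expect earns its keep over a plain pipe.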
It was my observation when working with woody — I did a set of updated packages with patches that made woody LSB compliant, and submitted that. It was quite extensive, and so Joey Schulze kind of freaked out when he saw it and said no way. As far as stable proposed updates, you mean? Yes, for the LSB stuff, for a point release of woody. Which is certainly very understandable. They were extensive changes to libc, changes to... Well, ABI changes, right? So we don't allow any ABI changes. So that's understandable. So the idea at the time was, well, this is kind of too bad; we'll have to do this for sarge. Well, sarge has now been released, and as has been observed, sarge has a few problems with the LSB as well. This is caused in many cases by the needs of some of the other distributions requiring certain updates — bugs that are fixed in libc and so on. A lot of these have come in and are now required by the LSB, and the other distributions don't have a problem, but when you freeze your base for, what, nine months or so, it kind of makes it difficult to do an LSB update. It makes it difficult to do, say, a glibc update to fix a bug that the LSB kind of needs. Also, part of the current LSB test scripts test the X Window System, and those tests require X.org's Xvfb. This is because there was a bug in XFree86's Xvfb that the LSB test suite now triggers, causing the test suite basically to fail and report completely bogus data. With all of these issues, and the fact that there still is a lot of interest in sarge LSB compatibility, I began working on something that I think will mitigate a lot of the problems and actually could make it possible to make sarge LSB compliant. And this is a dynamic linker hack that I posted about in my blog, and I think some of you may have heard of it. Before you move on to that — that's going to blow people's minds — can you please file a bug, tagged sarge, against xvfb? I'd like to get this taken care of in a point release. 
We might have to do a point release. Actually, I know we're going to do a point release of XFree86 for sarge anyway for a security problem, and it's not just me working on it — Frans Pop has volunteered to work on it as well. So, you know, because there's more than one person, there's a non-zero chance it'll happen. I'd really, really like to roll in anything that would help us with the LSB, especially if it's small, and this sounds like some trivial bug fix. Yes, we have another boss, Joey. Yeah, well... Just to say, yes, we definitely will do that. I've been wanting to talk to you about this, by the way, but, you know, conference being what it is. I'm across the hall, man! Well, that's true too. We work at the same place. Yes, go ahead. Another thing: I would also really like to see bug reports. You mentioned, for example, the glibc update for sarge before the release — actually, we had a glibc update weeks before the release. Anything — if you say you need it for LSB compatibility, please open a serious bug against glibc and we would have allowed an update. Well, this is a much more serious change than the updates that were going in. We're talking about glibc 2.3.3, basically. So, you're basically saying it would have been needed for glibc. Yeah, we're talking basically — we're missing four symbols, but yeah, technically we would have needed to move to the new version. That sounds not so bad. Yeah, and it's mostly just because we froze — so even though updates were going in, we had still frozen on a version from a very long time ago. Of course. Well, this might have gone in up until two and a half months before the release. For the next release, we'll have a very short base freeze — it seems like about three months or so. I've heard that before as well. This time we'll do it. I've heard that before as well. There's only one way we can do it: if you all stop saying, oh, we heard it before, and really believe it. It works only if you work together. 
And not if all of you just say, we heard it before, it won't happen. And I'm really getting sick of hearing that. Mind control, man. You need mind control. Yeah, we would do this with etch plus one. Yeah, and I'm not sure where the bug got dropped, because, you know, when we run the test suites, whenever we turn up problems, we file bugs, and I'm not sure exactly what happened here. Because we do have a bunch of them filed, and we actually have a nice thing that came out of this: we have an lsb Debbugs tag, so we can tag them — well, it's a pseudo-tag. Are they all at serious? I don't know the answer to that question. We filed them all at normal, and there was some intent to upgrade them closer to the release. We calculated, like, eight months ago that we wanted LSB compliance at the time of the freeze, and given the glibc versions it might not have happened anyway. But that suggests to me that all of those bugs should have been at priority serious for a while. Although at the time, the criterion was LSB 1.3, and sarge slipped out long enough that it became 2.0. I'm not trying to beat anybody up, but from here forward, please make all LSB compliance issues serious. Yeah, so what we really need to do is talk to the release team and go ahead and declare now, you know, 3.0 compliance is the goal, and that means make those... Or possibly 4.0, depending. Yeah, well, 4.0 is going to be 18 months out. So how about: the relevant LSB compliance at the time of the freeze? Yeah, that might be workable. Start them at serious, and if people want to argue a severity down, we can have that discussion, but that should be the default. Closer to the end of the release, we'll have a discussion. Just leave it alone and say... So just assume that relevant LSB compliance at release is a release expectation, once the release team and everybody else figure it out. Yep. Okay. The suggestion was that we should make sure that all LSB compliance issues are serious bugs or higher, so that they're release critical. 
The idea being that if people disagree, that's a debate that can be hashed out, but that should be the default assumption, so that we make sure we have the relevant LSB compliance at the time of release. Yeah, if we're going to deviate, let's do so consciously instead of accidentally. So one more point about the lack of 2.0 compliance. We had filed bugs on everything the test suite had turned up, and the missing symbols were something that was turned up by a new test that the LSB added in 2.0, which does nothing more than just look at the libraries and make sure all the symbols are there. So the LSB test suite does not have 100% coverage. As a matter of fact, its coverage is pretty poor, so it's not testing that everything is there. We were compliant with the tests that were being run, but this one caught us by surprise because of the stuff that we didn't have. So hopefully we'll do better next time. And this leads to the dynamic linker hack, the famous dynamic linker hack. One very nice thing about the LSB — well, first of all, I should probably back up a little bit. How many people know what ELF is, at least outside of the context of Tolkien novels? Okay, good. For those that don't, ELF is the binary format that is used by Linux executables. The interesting thing, which I don't think a lot of people know, and I certainly didn't know until I started working with this stuff, is this: you've all heard of the hash-bang syntax for scripts. The same thing exists for binaries. The ELF header contains essentially a reference to what it considers to be its interpreter. And on most executables that you'll see — if you just run gcc and get yourself an executable — it will have, in that particular field, a link to /lib/ld-linux.so.2. That's the famous dynamic linker. 
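You can look at that interpreter field yourself. readelf (from binutils) prints the PT_INTERP entry of the program headers; the snippet guards the call in case binutils isn't installed, and the exact path shown varies by architecture and distribution.

```shell
# Show the ELF interpreter recorded in a binary's program headers.
# /bin/sh is just a convenient ELF executable to inspect.
if command -v readelf >/dev/null 2>&1; then
    readelf -l /bin/sh | grep interpreter
    # typically a line like:
    #   [Requesting program interpreter: /lib/ld-linux.so.2]
else
    echo "readelf not available"
fi
```

Running `strings` on the same binary, as demonstrated in the talk, shows the identical hard-coded path, since it is stored as a plain string in the file.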
The dynamic linker is run; it then loads all of the relevant libraries, resolves all of the dynamic library hooks in the executable, and then passes control to the executable. In the LSB... I just ran strings on a standard binary, and you can see the very first thing that shows up is this hard-coded path to the linker, pointing at /lib/ld-linux.so.2. One interesting thing about the LSB is that it mandates that LSB compliant executables must not use that particular interpreter. They have to use the interpreter /lib/ld-lsb.so, and then a number — the number corresponding to the version of the LSB that the particular program is claiming to comply with. Most systems, including Debian by default, simply create symlinks for all of those /lib/ld-lsb.so.* names, and if you're LSB compliant, the symlink just works. The hook was provided so that people who did have issues with providing the LSB — for example, if they were running a little behind or a little ahead — could provide a separate dynamic linker that could link in separate libraries just for LSB applications. And that is essentially what I am working on right now. I have got a hacked-up version of glibc 2.3.5 that builds a dynamic linker that treats libraries in /lib/lsb at a higher priority than all other system libraries. So if you have a library that has LSB compliance issues, you simply build a new package with the changes you need to make it compliant and put it in /lib/lsb or /usr/lib/lsb, and then LSB applications will pick it up. The test suites will use it instead of the regular system libraries, and you should be able to pass the LSB without having to change the ABIs of the underlying distribution. That's basically it in a nutshell. 
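The symlink arrangement described above can be tried out in a scratch directory rather than the real /lib. The linker filename below is the IA-32 one from the talk; on other architectures it differs, and the empty stand-in file is obviously not a real linker.

```shell
# What most distributions do for ld-lsb: point the LSB interpreter
# names at the native dynamic linker.  Done here in a demo directory.
mkdir -p demo-lib
: > demo-lib/ld-linux.so.2          # stand-in for the real linker

# One symlink per LSB version the system claims to support:
for v in 1 2 3; do
    ln -sf ld-linux.so.2 "demo-lib/ld-lsb.so.$v"
done

readlink demo-lib/ld-lsb.so.3       # prints: ld-linux.so.2
```

The linker hack replaces these symlinks with a real, separate dynamic linker binary at the ld-lsb.so.* names, which is what lets LSB applications get a different library search order than native ones.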
My current status with it is that I have the dynamic linker part working, but since it's part of glibc, I'm also working on making sure the glibc that goes with it works, and there are issues with the fact that the compiled locale information is incompatible between 2.3.2 and 2.3.5. So it's mostly a compatibility issue. When I run the tests against my hacked-up glibc with the dynamic linker hack, I get somewhere in the neighborhood of several thousand test failures, nearly all of which are "I can't find my locale". But the test failures like the ones that were up on the screen earlier — most of those go away by using 2.3.5. So the hope is that in the future, etch will release with compatibility just built in, and we won't have to do this at all; /lib/lsb and so on will be empty. But given the fact that we have yet to release in such a fashion, if it turns out that we have a problem that we need to fix, and perhaps the release team is unwilling to change etch's ABI, we have a way of working around the problem. And Debian's not the first to use this particular hack. The LSB team added it back in the 1.0 timeframe, and since then I think both Mandrake and Red Hat have had to use it. In our case, we're a little bit behind the game, in that we're using an older glibc. In Red Hat's case, at one point, they were too far ahead, using a newer glibc, and had to provide the old one — kind of an oldlibs kind of thing — in order to make sure that it was still there. So it was good vision by the LSB team originally to realize that this was going to be an issue and to provide this way to do it. Questions about that? So the question I have, and part of the discussion we can have, is whether or not such a hack would be suitable for adding to a Debian point release. 
You know, we could set it up in an alternate apt archive. This is like adding a package to the distribution. That's correct. So basically what it would allow us to do is not break the normal libc ABI on the system, and just have this package installed alongside. Based on the conversations I've had — please repeat, Jeff — Bdale was saying that it seems like it will be possible for this release. I think that's very true. I know that Joey was very, very interested in LSB for woody, and the main problem was that the updates were just too much. But he actually went to some length to try to figure out if there was any way we could get any of it in, back in the woody timeframe. So if we can do this without having to change anything, it seems like a slam dunk to me. Now obviously I can't speak for him, but anyway. All right, so I'm going to move on to the rest of my slides, and at the end, if there's still time, we can come back to additional questions. Okay, so we talked about the LSB. A quick highlight of the LSB chapters, just so people are aware — these are just the chapter titles, because people often ask me what's in the LSB. The introductory elements talk about the fact that basically we're leveraging a lot of existing standards, so the LSB really is taking advantage of POSIX and the Single UNIX Specification. Basically, anywhere we could point to an existing standard that was already being used by Unices and was pervasive in the community, we just pointed at that. So the LSB really only has to spell out things that are above and beyond those existing standards. There's a chapter on ELF; it talks about the linker stuff and points to the ELF specifications for what the binaries actually should look like. There's a chapter on base libraries. These are mostly the libraries provided by glibc, or the ABIs provided by them, so libc, libm, those sorts of things. 
Utility libraries are things like libz and a few other things. Commands and utilities specify things that you can expect to find. I think I pointed this out earlier, but the LSB is a development standard, so it doesn't really talk about what kind of utilities you're going to have on your system — these are only things that developers can expect to have on the system, so the set of commands is actually fairly small. It's mostly just things that developers would expect to be able to use in install scripts, and to have their packages use to manipulate the file system, things like that, so it's pretty small. The execution environment chapter talks about things that applications can expect to find on the system, and that includes things like the FHS and where they should install things. System initialization is really interesting, and this actually came up in HMH's talk earlier. The LSB specifies dependency-based system initialization, so the idea is that it doesn't explicitly specify System V init scripts; it basically specifies how to declare things with dependencies, in order to allow for other init implementations. It standardizes how users and groups work, UID ranges and GID ranges, that kind of thing. This was kind of interesting because Debian and Red Hat based distributions were slightly different in how they dealt with those ranges, and that has since been cleaned up, so the distros are all the same now. And it talks about the LSB package format and how to install LSB packages. Okay, another one of the FSG workgroups is the Filesystem Hierarchy Standard. I think everybody in Debian is familiar with this, and I quote from their page: it's "a set of requirements and guidelines for file and directory placement under UNIX-like operating systems." So currently, Debian policy specifies that we should be FHS 2.1 compliant. The release managers have contacted me and said, okay, what do we think? Do we want to move forward? 
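The dependency-based initialization mentioned in the chapter list above is expressed in init scripts through a standard comment header. Here is a minimal skeleton — the service name and its dependencies are invented for illustration, and a real script would implement restart, status, and so on.

```shell
# A toy init script carrying the LSB init-info header that dependency-
# based init systems read to order startup.
cat > demo-init <<'EOF'
#!/bin/sh
### BEGIN INIT INFO
# Provides:          demo-service
# Required-Start:    $local_fs $network
# Required-Stop:     $local_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Example of an LSB init header
### END INIT INFO
case "$1" in
    start) echo "starting demo-service" ;;
    stop)  echo "stopping demo-service" ;;
    *)     echo "usage: $0 {start|stop}"; exit 2 ;;
esac
EOF
chmod +x demo-init
./demo-init start     # prints: starting demo-service
```

Because the ordering information lives in the header rather than in numbered rc symlinks, an alternative init implementation can compute the boot order itself, which is exactly the flexibility the talk describes.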
Because FHS 2.3 has been out for a while now, and I think that's going to be the goal for etch, and we need to start looking into doing that. The main new features in FHS 2.3 are /srv, which is intended for a lot of the things that we have been putting in /var — so currently in Debian we use /var/www. The idea is that any data that is going to be used by a service that's being exported from the system, you should put in /srv. /media is kind of a replacement for the /mnt and /cdrom kind of hacks that all the different distros do differently; the idea is that you have /media, and underneath there you have mount points for all your removable media. And then the other thing that it adds is lib64, for architectures where you have both 32 and 64 bit libraries. We in Debian were already working on the multiarch stuff when this got added, and I said, hey, you know, is it really a good idea to add the lib64 stuff when we think we might come up with a better way to do this? But Red Hat and SuSE had already embraced lib64, so they felt, yeah, it's a good idea to go ahead and add it. At some point in the future we'll go ahead and specify multiarch in the FHS, and lib64 will stay in there as an option for a while until it can eventually be deprecated. Do you think that will cause problems for us? Could you repeat that? The question was whether I think that will cause problems for us moving to 2.3. We can just put it in there and make it a symlink — the standard doesn't say anything about implementation. Basically, what it spells out is just that if an application attempts to install to there, it needs to do the right thing. So if we make it a symlink and it drops things in elsewhere, that's fine. dpkg follows symlinks when it's unpacking, so that should be fine. 
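The symlink point can be seen in a sandbox directory standing in for the root filesystem: anything written "into /srv" really lands wherever the link points (here a stand-in for /var/www).

```shell
# /srv implemented as a symlink, demonstrated in a sandbox.
mkdir -p sandbox/var/www
ln -s var/www sandbox/srv           # relative link inside the sandbox

# An application installing "to /srv" actually writes into /var/www:
echo 'hello' > sandbox/srv/index.html
cat sandbox/var/www/index.html      # prints: hello
```

Since the FHS constrains only where applications may look for and place data, not how the directory is backed, this kind of transition symlink satisfies the standard while the real data stays where the distribution historically kept it.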
And eventually, if we do multiarch, we'll be able to make it a symlink to the right multiarch location, and it should do the right thing too. Okay, the next workgroup is LANANA. This is the Linux Assigned Names and Numbers Authority, and LANANA was basically created to solve namespace issues. I think one of the main things it was created for originally was the Linux device list — basically how device numbers are assigned to different drivers — so that was the first thing they started with. In Debian this affects MAKEDEV, devfs and udev. But it's turned out that having somebody to manage namespace issues has been a very useful thing. So the other thing it gets used for now is that the LSB delegates any namespace issue to it. This includes things like where an LSB package that is going to install into /opt can install, so developers can register a location in /opt to make sure there's not going to be a namespace collision. The same goes for package names, what they call their init scripts, what they call their cron stuff, that too. And then the other thing that I found on their web pages is the Linux Unicode assignments, which, if I understand it correctly, is a way of mapping character sets to Unicode in the kernel. Okay, OpenI18N — I18N is internationalization, but this workgroup looks at internationalization, localization and multilingualization. It does provide written specifications, and these span all levels, from the kernel to base libraries — locale details, that kind of thing — all the way up to desktop level stuff. It's orthogonal to what the LSB is doing and what a lot of the other workgroups are doing, in that they kind of have their fingers in all places. They also provide tools, implementations of utilities, that kind of thing, and also do some outreach to work with governments. 
And Roger So, who I think is here at the conference but not in the room, is on OpenI18N's steering committee. Okay, there are some other FSG workgroups I'll just briefly mention. There's an accessibility workgroup, working on accessibility issues, namely things like libatk and associated stuff, and also doing a lot of government-related work. There's a DWARF group that specifically looks after the DWARF debugging format, and also an Open Cluster Framework, which is trying to get all the people in the cluster space together to share tools and keep everybody from inventing their own stuff. Okay, another group that's only loosely affiliated with the FSG but has been doing a lot of good work is freedesktop.org, which I think everybody here is really familiar with. They're working on standards for windowing toolkits, window managers, X extensions and desktop features, things that a lot of the distros have already been doing and Debian is used to: the way applications should be able to drop in menu information and MIME settings, system tray icons, how cut and paste, drag and drop, and the trash work, that sort of thing. They don't actually publish formal ABI standards; they do publish some API stuff, but they basically rely on the LSB to take what they produce and run with it. So the LSB has a desktop subcommittee that is taking what the FSG, excuse me, what freedesktop.org is producing and making ABI standard modules out of it that can be included in the LSB. Okay, one thing I wanted to point out is the difference between free standards and existing standards, and why the FSG is different from ISO or POSIX or the Open Group, that kind of thing. Free standards are developed as free software projects, in the way we're all used to dealing with free software projects.
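To make the earlier freedesktop.org point concrete, here is a sketch of the drop-in menu-entry convention: an application ships a .desktop file into a shared applications directory, and any compliant desktop picks it up. The application name is made up, and a sandbox stands in for /usr/share.

```shell
datadir=$(mktemp -d)   # sandbox standing in for /usr/share
mkdir -p "$datadir/applications"

# A made-up application drops its menu entry here; GNOME, KDE, and
# other compliant desktops all read this same format, so one file
# serves every environment.
cat > "$datadir/applications/example-editor.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=Example Editor
Exec=example-editor %f
MimeType=text/plain;
Categories=Utility;TextEditor;
EOF

grep '^Name=' "$datadir/applications/example-editor.desktop"
```

This is the kind of shared convention the LSB desktop subcommittee then turns into a testable standard module.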
So they use open, publicly archived mailing lists; basically all the participation is open, anybody can attend phone conferences and face-to-face meetings and can get CVS access, that sort of thing. But the other interesting thing is that we don't allow for any IP encumbrance or things like what's called RAND, which stands for Reasonable And Non-Discriminatory. You might remember a while back the W3C got into a whole world of heat over trying to add some stuff to a standard that required some sort of license fees or something like that; I can't remember what the technology was, but fortunately that got shot down, and free standards in general have a policy against doing any such thing. So the idea is to provide a no-strings-attached development environment, so that anybody using the standard is free to implement it however they want. About RAND: well, first, how many people in here already know what was meant by RAND when the W3C was toying around with it? Okay, not terribly many. The idea was that some of the big vendors came to the W3C and said, we know it's not fair for Microsoft to do things like predatory pricing, so as long as everybody in the business gets to pay the same license fee for implementing, that's "reasonable and non-discriminatory." And of course this was just a poison pill against free software, because distributions like Debian can't afford it, and it's not consistent with our free software guidelines to pay a fee just to implement a standard. So, yeah, to repeat what he said: the RAND stuff was explicitly an attempt to divide the commercial implementers, who were willing to deal with such a clause, from the free software people, who ethically wouldn't want such a clause and who also, to some extent, couldn't afford it. So, free in both the free-beer and the free-speech senses.
Okay, one other thing I wanted to talk about is that the FSG is in kind of an interesting position, and the LSB sometimes takes a lot of heat from free software developers who say: why do we need this? The LSB is really just about making an environment for proprietary applications. So what the FSG attempts to do is walk a balancing act between the free software community and the people we're trying to attract to Linux in order to grow the number of people using free software. It has to live in the middle: basically we're trying to balance free software ideals, developing free standards in a way that meets those ideals, against not wanting to piss off proprietary developers, because we don't want them deciding they're going to take their toys and go home and implement their own standard that isn't as free as we'd like. It's kind of hard, because there are many of us who would like to see the LSB be more of an open source project and do things in the proper way, but many of the people funding the FSG, and hence the LSB, are big companies that are trying to push things the other way. So it's an ongoing struggle. Question? Yeah. Is there anything else that tries to solve the same problem, like some big companies trying to impose one? Well, there hasn't been yet, fortunately. One of the interesting things about the LSB is our license criteria, which are actually stricter than the Debian Free Software Guidelines in that we say we won't allow anything that precludes proprietary implementation. So the LSB never adds anything that is under the GPL or any license like that, because what we want to provide is a development environment where proprietary developers are free to develop too.
I suspect that if we ever did add something under the GPL, making it so that proprietary developers could no longer use all the interfaces in the LSB, then the LSB would fork and the proprietary people would go off and create their own standard. That's why we have the license restrictions we do. Bdale? Every once in a while, some group somewhere comes up and says: wouldn't this all be simpler if we had a single reference implementation of what a Linux base system is supposed to be? We'd just put that out and get all the distributions to use it, and then there would in effect be one standard Linux. The problem with this is that it messes with that whole freedom-of-choice thing, which is core and fundamental to the way our community behaves and expects to behave. The other problem is that most of the people who propose this are coming from the other side of the equation; their motivations are not exactly pure. What they're trying to figure out is how they can build one instance of a non-open-source, non-free-software application and make it available to as many of us as possible. It's okay for that to be their motivation, but it doesn't necessarily lead them to have a lot of expertise in how to build good Linux distributions, or to make good technology choices about what should or should not be in the base. So I've actually been a very strong supporter of the LSB model of addressing the problem, because this notion of building standards that people can build competing, different implementations against is one that has worked very well for open networking protocol standards and other things like that in the past. I can't imagine that the Internet would have become what it is today if it had been developed in some less open, less egalitarian manner, with only one implementation. Yeah, only one implementation of the protocols, by one technical group.
So this really, in some sense, is all about accepting the hassles and the extra process overhead that come from the LSB creating a specification rather than just a single body of code; there's obviously more work involved in doing it this way. But the hope is that in the end you preserve the fundamental behavioral freedoms that are part of what's made this such a differentiated, valued and successful thing for all of us. And yeah, that's the single largest request we get: people come to us and say, I just want there to be one Linux. They're coming from Windows, they're used to only supporting one thing, and they don't like the fact that there are all these different distributions. We get this request a lot, and a lot of people try to solve this problem, and as far as we can tell, the only way to solve it properly is the way we're doing it with the LSB. To name a couple of past projects: there have been things like United Linux, the Linux Core Consortium, UserLinux, that kind of thing. For all of these, at least one of their goals has been to be the pervasive core that everybody builds on top of; they want to solve the problem by having one set of bits and one implementation, because that's seen as kind of a holy grail that everybody wants. But then we'd lose the diversity that we have with all the distributions. Even Windows isn't just one build. Yeah, so his comment was that even Windows isn't just one. I've heard that before too: people point out that you have several versions of 95, several versions of 98, several versions of 2000, and people think it's one version but it's really several, and software developers have to test across all of those as well. But to some extent, since they're all controlled by one party, it's still kind of one implementation, even though the ABIs may change from release to release.
So yeah, they have a similar problem, but they seem to be winning the marketing war, at least in making everybody think it's all one and the same. Okay. Well, let me just hit this for about ten seconds, and then I think we're done; this is my last slide. We're always looking for new standards ideas, so as Debian people, if you see things that have emerged as a good standard, or places where there are competing standards and we really ought to come up with one solution, think about implementing them in a way that lets you write a specification neutral of any one implementation. One thing in particular I've been talking to Kibok about is, hey, when you're doing this new dpkg stuff, if we can write it in such a way that it's really not Debian-specific, maybe we can get everybody to adopt this new package format, that sort of thing. And basically, it's all about gathering consensus: you need to involve all the parties in the process, or you're not going to get anybody to adopt it, because if people aren't involved, they're just going to ignore it. Okay, that's it, I think we're out of time. Are there any last questions before we go on? Okay, thank you.