Hello, welcome back. We apologize for the delay — we were having a power issue — so we should get on as quickly as possible with Keith Packard's talk on X.Org.

Okay. I actually put together two talks for the conference this year. This isn't really a talk; this is supposed to be a BOF. I'm trying to get comments and concerns from the people who are here, to figure out where we should be going in terms of packaging and distribution and development model, and to get some ideas from people who have done larger projects than even X.

So today I am officially an X Window System architect, because I put it on my slide, and I'm actually allowed to admit that I currently work for Intel. Intel has a large open-source organization focusing on Linux. We do drivers for all of the Intel hardware. We have a plan of record of providing open-source, free software drivers for as much hardware as is used in Linux. So if you find a piece of Intel hardware that's used in a Linux environment that doesn't have a free software driver, let me know and I'll try to fix it. I've been working on that quite a bit.

Okay, so this is supposed to be a BOF. I wanted to tell you what we're doing and how we're doing it, and then I wanted to find out if there were any questions and comments from people who are suffering through X right now. The X development model has changed pretty dramatically in the last couple of years. I wanted to tell you how we're doing things now, how appearance is totally different from reality, as always, and how you can be involved if you want to.

Right now, for X.Org development we have the X.org host name — the shortest one on the planet, the only single-letter domain that I know of. It's pretty cool. Unfortunately, we don't really have active control of it; the X.org domain is kind of locked up in a single machine hosted at MIT. So all of the X.Org development is currently happening on freedesktop.org, because those are machines that I own, and so I get to say what people do with them. They're hosted at Portland State University on machines donated by Google last year. The developers collaborate, as is usual, on IRC and email, and currently all of the development is dominated by free software people. X.Org development used to be dominated by commercial Linux and commercial Unix vendors, and even Windows PC X server developers. We fixed that: we pretty much hijacked X.Org development, and now it's all pretty much done by friends of yours and mine. Which is pretty cool. It's a major coup, I would say.

Yeah. We've got a bunch of stuff going on. Some of our key projects: obviously we're continuing to work on the X server — there's a whole bunch of little projects going on inside there, and I'll talk about those in a little while. Then there are some special projects that people may have heard of, AIGLX and Xgl. I wanted to talk about the difference between those two, and then some other recent work in Xlib and XCB.

Might as well talk about that one first. Xlib was originally written in 1987 and hasn't substantively changed since then, except that people have been hacking the bejeezus out of it. It was originally written as a single-threaded library, and around 1990 people figured out that computers were going to be multi-threaded pretty soon, and so they hacked in support for multiple threads. They hacked it in in a pretty gross way. The code was actually semi-designed so that threading could be implemented by replacing the insides of the library and leaving all the API bits alone. But instead of doing that, people decided to hack the insides to support multiple threads. That turned into an utter disaster. If anybody's ever tried to run Xlib in a multi-threaded environment — actually using multiple threads to talk to a single X connection — you'll know just how bad it is. Mozilla actually has an environment variable option to turn on Xlib threading support, and when you do that, Mozilla locks up about two seconds after it starts. So that lets you know the state of Xlib's threading support.
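To give a flavor of how bolted-on that support is, here's a minimal sketch of what multi-threaded Xlib use looks like today: the application must call XInitThreads() before any other Xlib call, and in practice people lock around bursts of calls by hand anyway.

```c
/* A minimal sketch of multi-threaded Xlib use as it stands today.
 * XInitThreads() must be the very first Xlib call in the process;
 * forget it and the retrofitted locking is silently absent. */
#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    if (!XInitThreads()) {          /* enable the retrofitted locking */
        fprintf(stderr, "Xlib threading not available\n");
        return 1;
    }

    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    /* In each worker thread, bracket bursts of calls by hand anyway,
     * because the built-in locking has a history of deadlocks. */
    XLockDisplay(dpy);
    XFlush(dpy);
    XUnlockDisplay(dpy);

    XCloseDisplay(dpy);
    return 0;
}
```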
To fix that, we're actually going to throw Xlib in the trash, and we have a migration plan for it. It's pretty cool — we actually get to replace one of the key libraries in your system with something newer. That's a great project. Some students at PSU and their professor put together a new library called XCB, which is just a protocol binding. It doesn't do anything other than map the X protocol into an application API, so you have an API that emits protocol. It doesn't have any keycode transformation, it doesn't have any event queue management, it doesn't have any input method hooks — it doesn't have any of this cruft. It's just a protocol binding. One of the cool things is that the professor is a software engineering professor, and they actually formally proved the threading model in XCB correct. So instead of a system which has been hacked into not working, we actually have a system which was designed to function from the get-go. Kind of a novel approach in free software. Yeah, I know — we always start with /dev/null and start typing until it does what we want, and stop when it does. In this case, they actually started with a Z formal proof of correctness. That's actually worked out pretty well.
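As a rough sketch of what "just a protocol binding" means in practice — every call simply emits protocol requests, with no hidden machinery behind the API (simplified, error handling omitted):

```c
/* A rough sketch of XCB's style: every call simply emits protocol. */
#include <stdlib.h>
#include <xcb/xcb.h>

int main(void)
{
    /* connect to the server named by $DISPLAY */
    xcb_connection_t *c = xcb_connect(NULL, NULL);
    xcb_screen_t *screen =
        xcb_setup_roots_iterator(xcb_get_setup(c)).data;

    /* CreateWindow and MapWindow requests go straight onto the wire;
     * there is no hidden event queue, keymap, or input-method layer. */
    xcb_window_t win = xcb_generate_id(c);
    xcb_create_window(c, XCB_COPY_FROM_PARENT, win, screen->root,
                      0, 0, 320, 240, 0,
                      XCB_WINDOW_CLASS_INPUT_OUTPUT,
                      screen->root_visual, 0, NULL);
    xcb_map_window(c, win);
    xcb_flush(c);

    /* wait for any one event, then exit */
    free(xcb_wait_for_event(c));
    xcb_disconnect(c);
    return 0;
}
```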
Let's see. I wanted to talk a bit about the difference between AIGLX and Xgl. How many of you have heard of AIGLX or Xgl? How many of you have actually tried to run Xgl? A stunning two people. Yeah, okay — and Rico's seen it actually running, on somebody else's machine, with massive display corruption. Okay, so Xgl is getting a fine, excellent reputation for quality and supportability. I love that. It's great.

Xgl is an X server written in GL, which is to say that it's just a regular X server where the entire back end uses the GL API for all graphics. It's a neat idea, one I originally thought would be well received by the maintainers and developers of the closed-source GL drivers. I thought it was a pretty neat hack, and that it would be a good way of reducing our overall code maintenance. It turns out that it's actually not as huge a win as I thought it would be.

One of the advantages that GL offers today, for a GL driver developer, is that the API doesn't have any guarantees for pixelization on the screen. Which is to say: if it looks good, it is good. That's generally a pretty good plan for a graphics system — looks are really what you're after here; you're not interested in exact mathematical accuracy. And as a result, GL implementations have huge fudges all over the place. It's like, well, this doesn't get quite the right answer, or exactly what you would expect it to do, but it looks the same, or it looks as good, or sometimes it looks better. For instance, we now have anisotropic filtering in a lot of environments. We didn't used to have that; GL didn't really even advertise it for a long time, and implementations said, well, this is pretty easy to do in hardware now — let's implement it even though it's not in the spec, make it the default, and see what people say. So here you have technology advancing ahead of the specification, and it's able to do that because the specification doesn't rigidly define what it does.

In contrast, the X specification says precisely what pixel values are going to appear on the screen. Does anybody think this is a good idea? We have one person who still thinks precise pixelization is a good idea. Well, you're wrong, and the whole crowd stands against you. Precise pixelization was an interesting idea. It was the idea that you could come up with functions that specified exactly what you wanted to appear on the screen, and by doing so it would be easy to implement, because you knew exactly what you were supposed to do, and it would be very easy to test. That was one of the key benefits of precise pixelization: it was easy to test.

It turns out that people want to do different things, and in particular they want to cheat with their hardware. They want to make things go faster, they want to not worry so much about the exact contents of the screen, and they want to be able to implement things in a wide variety of different ways. For instance, most 3D hardware doesn't use a Bresenham algorithm for the edges of polygons; it uses DDAs. DDAs are a lot faster because there's no feedback required to implement a DDA — it's just a simple add. And as a result, a lot of 3D hardware uses DDAs for polygon rasterization, or at least used to; I don't know what it does these days. The problem is you can't use a DDA if your specification says Bresenham and has to match exactly, because occasionally you get the wrong answer. Even though it goes ten times faster, you can't use it. Which means that a lot of X implementations are dog slow for any actual graphics, like lines or polygons, because they have to follow the specification exactly.
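To make the DDA point concrete, here's a hedged sketch (the plot() hook is hypothetical): a DDA edge walker needs one fixed-point addition per scanline, with no per-pixel error feedback, which is why it pipelines so well in hardware — and why its rounding can occasionally disagree with Bresenham's exact answer.

```c
#include <stdint.h>

void plot(int x, int y);  /* hypothetical output hook */

/* Sketch of a DDA edge walker: one 16.16 fixed-point add per scanline.
 * Bresenham instead carries an error term that feeds back into each
 * step, which is exact but serializes poorly in hardware. */
void dda_edge(int x0, int y0, int x1, int y1)
{
    int dy = y1 - y0;
    if (dy <= 0)
        return;
    int32_t x  = (int32_t)x0 << 16;
    int32_t dx = (((int32_t)(x1 - x0)) << 16) / dy;  /* slope, precomputed */

    for (int y = y0; y < y1; y++) {
        plot(x >> 16, y);  /* truncation here may differ from Bresenham */
        x += dx;           /* no feedback: just an add */
    }
}
```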
So, getting back to Xgl: one of the problems with Xgl is that it's trying to implement the X specification, which is a fixed-pixelization specification, on top of a very fluid library. The developers of the GL library were kind of in a quandary: they really want to be able to mutate the GL library and change what it draws on the screen, and yet in order to support Xgl they have to provide some level of guarantee of pixelization. Not a great plan. The GL API is also enormous — there are a bazillion functions and a lot of code that implements them — and saying that X is implemented on top of GL is equivalent to saying that the entire GL API can now be used by the X server, which is to say that your X server can use any piece of the GL API at any time. To a GL implementer, that says that in order to provide precise pixelization, all of a sudden the entire GL API has to be fixed in some relationship to the X server implementation. As the X server implementation isn't constrained to a subset of the GL API, all of a sudden your entire GL API becomes part of your X server. And so the people who implement closed-source GL drivers were actually kind of terrified by this notion that all of a sudden their entire GL API was going to be exposed and required to be constant over time, so that people could implement X servers on top of it.

The goal of Xgl is different from the implementation. The goal of Xgl is to get us away from using the tiny little corner of your graphics hardware that implements 2D functionality. The goal was to take over the other 95% of your graphics chip — the part that implements 3D functionality today — and use that to accelerate your X server. Well, there's the obvious way to do that, the way we thought about originally, which is to just use the GL driver and the GL API. There's another, simpler method, and that is to use the 3D hardware from your 2D driver: you implement a subset of a 3D-like driver, using the 3D hardware, that implements the 2D operations you need. Alternatively, you can use bits and pieces of the GL API to accelerate pieces of the X server, and then use direct hardware rendering for other pieces. In other words, you can write a GL-driver-specific 2D driver that uses GL functions for some of its operations and direct register writes for others.

And that's where AIGLX really comes in. AIGLX is an acronym for the Accelerated Indirect GLX project. The original open-source GL implementation, called Mesa, was ported to 3D hardware in about 1999 by then-Precision Insight, now Tungsten Graphics. They implemented the Direct Rendering Infrastructure, where you could accelerate GL to 3D hardware inside a window system — inside X in particular. The way they did that was to have all the GL commands go straight through the GL API and directly to the hardware, and then the driver would communicate with the X server about which pieces of the screen it was supposed to draw to. So the X server had, in effect, almost nothing to do with the direct-rendering GL driver. All it was doing was sitting there telling the GL library what piece of the screen the window you were drawing to appeared on, and when it moved. It was a pretty simple architecture — well, a pretty complicated architecture, but simple in principle.

The problem was that when you had a network connection, you didn't have direct access to the hardware. There was no way to accelerate rendering, because the X server didn't know about GL; all the X server knew about was where the windows were on the screen. So when you talked over a network, GL was really slow. What Accelerated Indirect GLX does is take that accelerated GL driver and stick it inside the X server, so that when you talk to the X server over the network and send GL commands over the network, they can be accelerated. It's a really simple-sounding change, and in fact the Direct Rendering Infrastructure had always, in theory, supported it. But nobody had ever bothered to implement it.

Well, why did we do it now? Why this sudden change? We had direct rendering, games were running plenty fast, people were pretty happy with how things were working. Why is accelerated indirect GLX all of a sudden interesting? It's interesting for precisely the same reason Xgl is interesting: it gives you the combination of X and GL rendering in the same package. All of a sudden, your GL rendering can communicate with X objects.
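Incidentally, the direct-versus-indirect split described above is visible right in the GLX API itself. A hedged sketch using real GLX calls: the last argument to glXCreateContext requests a direct-to-hardware context, glXIsDirect reports what you actually got, and before AIGLX the indirect path was the one with no acceleration at all.

```c
#include <GL/glx.h>

/* Sketch: the `direct` flag in GLX context creation. With AIGLX the
 * indirect path can finally be hardware accelerated too. */
GLXContext make_context(Display *dpy, XVisualInfo *vis, Bool want_direct)
{
    GLXContext ctx = glXCreateContext(dpy, vis, NULL, want_direct);
    if (ctx && !glXIsDirect(dpy, ctx)) {
        /* Indirect: GL commands travel through the X server —
         * which is exactly the path AIGLX accelerates. */
    }
    return ctx;
}
```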
Before, GL rendering and X rendering were entirely separate. You had the GL world and you had the X world, and the only place they intersected was in the framebuffer, when the pixels actually went out to the DAC and out to the screen. Otherwise there was no communication. So when we started looking at composited window systems, where the contents of windows are manipulated by an external application, and you start talking about how you manipulate the contents of windows that contain GL graphics — and how you manipulate the contents of windows with GL graphics — all of a sudden the X and GL overlap becomes much more obvious. All of a sudden the objects that you're manipulating with GL are the same as the objects you're manipulating with X: you're manipulating window contents, you're manipulating the fundamental framebuffer, and you're starting to merge the two together.

In the Direct Rendering Infrastructure that we have today, there's no way to communicate between these two worlds. The object sets are totally discrete, and in fact the memory allocation in your framebuffer, with any of the free software drivers today, actually splits the video card right down the middle and says: okay, X gets half and GL gets half, and they never communicate about the contents of that memory. Which means that you can't talk about a GL object from an X application, and you can't talk about an X object using the GL API.

Accelerated Indirect GLX and Xgl both merge those two worlds, so you can talk about X objects with the GL API and about GL objects with the X API. In particular, both of them implement a new extension called texture_from_pixmap, which is to say: we take an X pixmap, or an X window's contents, or any X object, and we convert it into a GL texture. All of a sudden you have the first integration between these two object sets — you can manipulate X contents with GL. And that's exactly what Xgl does, and that's exactly what the new Metacity hacks are doing in terms of GL-based compositing managers: they're taking X content and manipulating it with the GL API. So all the glitz you see in the Compiz demos, all the glitz you see in the Red Hat AIGLX demos — those are all using this very simple notion of being able to take X contents and manipulate them with GL, and that's where all the whiz-bang comes from. It's one tiny little extension that implements one tiny little function, and you get all this huge leverage.
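Here's a hedged sketch of how a compositing manager might use that texture_from_pixmap extension (GLX_EXT_texture_from_pixmap); it assumes the fbconfig was chosen with GLX_BIND_TO_TEXTURE_RGBA_EXT and that the server advertises the extension, and all error handling is omitted:

```c
#include <GL/glx.h>
#include <GL/glxext.h>

/* Sketch of GLX_EXT_texture_from_pixmap as a compositing manager might
 * use it: wrap an X pixmap in a GLX pixmap, then bind its pixels as an
 * ordinary GL texture. */
GLXPixmap texture_from_pixmap(Display *dpy, GLXFBConfig config,
                              Pixmap pix, GLuint tex)
{
    const int attrs[] = {
        GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
        None
    };
    GLXPixmap glxpix = glXCreatePixmap(dpy, config, pix, attrs);

    /* resolve the extension entry point at runtime */
    PFNGLXBINDTEXIMAGEEXTPROC bindTexImage =
        (PFNGLXBINDTEXIMAGEEXTPROC)
        glXGetProcAddress((const GLubyte *)"glXBindTexImageEXT");

    glBindTexture(GL_TEXTURE_2D, tex);
    bindTexImage(dpy, glxpix, GLX_FRONT_LEFT_EXT, NULL);
    /* the pixmap's contents are now sampled as a GL texture */
    return glxpix;
}
```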
Now, why do we need to do this with AIGLX? Why can't we do this with DRI? Well, in the DRI world, remember, the framebuffer is split down the middle — GL gets half, X gets half — and they can't communicate. With AIGLX you have one process that can actually see both object sets: you have the X server, which knows about all the X objects, and all of a sudden it also talks to the Direct Rendering Infrastructure, so it knows about the GL objects too. So AIGLX actually physically copies the contents from one part of the framebuffer to another, to move the contents of the X pixmap into the texture object. Xgl implements this in a slightly different way: in the Xgl world, all of the pixmaps in your server are already textures, so texture_from_pixmap in the Xgl world is really simple — it just says, oh, those are the same object. So now we have tied the semantics of these two systems together in this one simple extension, and they're functionally exactly the same in terms of the glitz you can put on the screen.

How are they different? Xgl is an X server implemented on top of GL. It is not a standalone X server at all. What does the GL implementation on your X server require today? It requires an X server, because GL doesn't have any way to configure the framebuffer or do mode selection, it doesn't have any way to do the initial memory partitioning, and it doesn't have any way to manipulate a cursor on the screen. So Xgl today requires that another X server be running on your machine. If you run Xgl, you actually first start X and then you start Xgl. You can imagine the resource consumption issues in this environment, especially because X splits the framebuffer memory in half and Xgl uses no X objects — it's strictly a GL-based application. So you've already lost half your framebuffer memory just by starting Xgl. Plus you have to configure and manage two entirely separate X servers: you have to configure X, the underlying X server, and then you have to configure Xgl on top of it. So this is layering complexity.

So Xgl today is a really cool demo, because it demonstrates what you can do with GL-based compositing. It was easy to put together, it didn't require any cooperation among giant entities in terms of what the hardware abstractions were going to look like — it just used the GL API — and it provides a lot of interesting bling. But it looks like a more rational approach today is to move in an AIGLX fashion, where we can take the functionality of the X server and the functionality of GL and get them integrated, in terms of hardware support, in a single X server, and then move additional functionality into the GL API in the future.

What do I mean by that? In the AIGLX world, what have we done? We've got the GL API embedded in the X server so that we can accelerate all of these indirect GL calls. What is an additional advantage of this environment? Well, one of the cool things you can do now is that all of a sudden the X server can make GL calls. They don't have to be in response to GL requests by applications — they can be in response to whatever it wants to do. So if you want to accelerate some operation, and your driver says, oh, I already accelerate that with my GL API — well, that's pretty cool: we have an accelerated operation through GL, and instead of duplicating the implementation and doing it again in the 2D world, we can just call the GL functions from the driver.
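A hedged sketch of what "call the GL functions from the driver" can look like — this isn't the actual AIGLX code, just an illustration of the idea: a core X solid-rectangle fill expressed with nothing but scissored clears.

```c
#include <GL/gl.h>

/* Illustrative only: a 2D solid-rectangle fill expressed as GL calls,
 * the way an AIGLX-style driver could reuse its GL path instead of
 * duplicating a 2D rasterizer. (The Y-axis flip between X's top-left
 * origin and GL's bottom-left origin is glossed over here.) */
void fill_rect_gl(int x, int y, int w, int h,
                  float r, float g, float b)
{
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, w, h);        /* clip the clear to the rectangle */
    glClearColor(r, g, b, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT); /* the fill itself */
    glDisable(GL_SCISSOR_TEST);
}
```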
So I get all the advantages of the Xgl world, in terms of being able to take advantage of the GL implementation that we have, plus I get all the advantages of the existing X world, where I get to incrementally refine the system and improve it over time so that it does what we want. And I don't have this sudden jump from one X server to two X servers, with a goal of eventually eliminating the underlying X server. It's a smoother migration path. The end result of both of these approaches is exactly the same: we have an X server that is implemented largely in terms of the GL API and that runs standalone on the hardware.

Okay, I think I have more slides here. Another status report item. About five years ago we started modularizing the X Window System. It has traditionally been implemented as a giant ball of source code, and when you got a release, you got the tarball from hell. It had everything in it: applications, libraries, the X server, documentation, specifications, fonts — everything, all in one giant tarball. So if you look at the old Debian packaging for X, it had about 50 or 60 binary packages that all came from the same source package. So when anything in that source package changed, what did we ship? Oh, we shipped about 50 or 60 new binary packages. Which meant that every time you made a small change in X, every single Debian user got this huge download bolus — and every Red Hat user and SUSE user too. Everybody got to update their entire system every time anything changed, even in a small way.

We did 7.0 last year. That was joined at the hip with 6.9, the last monolithic release; we wanted to do two releases that synchronized the code base before we dived off in this modular direction. Soon — weeks, not months — we're going to do a 7.1 release, and this is the first standalone modular release. It's kind of the first acid test of whether our plans are going to work, because it's now done in modular form. Which is to say, the packages that we're putting together for this modular release really are totally independent, and are released independently before the roll-up "katamari" release is made. So we're taking all these little tiny packages, most of which haven't changed, and rolling them up into a big release.

So when those get dumped into Debian and it moves forward to 7.1, essentially all that happens is that a lot of existing packages that haven't changed get rebranded: all of a sudden they're 7.1. Now, what the Debian X maintainers have done is actually switch the numbering system over from this 6.9/7.0 numbering to the underlying package numbers. So you'll see X server 1.1 — or 1.0.2 right now; I don't remember the exact version number. So when we roll this stuff and you get a new version of X, you'll notice that things like xmodmap, which haven't changed since 7.0, don't get a new package: the package included in 7.1 is exactly the same as the package included in 7.0, with the same version number. So Debian won't roll that package at all for this transition. The only things that are going to roll over are the packages that are different. I don't know when Ajax said he was going to get it done, but it's going to be pretty soon.
He's been rolling version numbers of a couple of other packages that have changed. The cool part is that now that Debian's at 7.0, the 7.1 transition is going to be a piece of cake — really, they'll do it soon. Anybody remember the great, how long did it take, 18 months to move to some XFree86 release? The 4.1 release took like 18 months or so. Yeah. I hope to never see that happen again.

The cool part is that quick fixes are now possible. It's really easy for any Debian developer to say, oh, I need to fix my video driver. It used to be that the way you'd fix your video driver was to download the entire X source code, rebuild the whole thing, and get 50 new binary packages. And you'd think, oh, cool — now what do I do? Well, today it's really easy. I actually demoed this a couple of days ago. Bdale had trouble with his laptop: the video playback wasn't working quite right on an airplane flight, and everybody knows what the most important application is — video playback. So, both as a favor to Bdale, who has always done cool things for me, and as a test of whether the modularization system was working, I decided to fix his bug, because I already had the fix in my own source code. All I had to do was two apt commands: first to get the source for the video driver, and then to get all the build dependencies for the video driver. I hacked the source for the video driver, ran debuild, and bang — I had a .deb, and I sent it off to him. It took about five minutes to download the necessary packages and about two minutes to do the build, and all of a sudden he had an updated .deb and his bug was fixed. So this is actually working: we have a demonstration of the modularization system helping Bdale watch his movies on the airplane. Of course, I also sent him a binary of the driver that I'd built on my machine, not using the system, because I didn't know it would be this easy. So he actually got two giant blobs of stuff in the mail. And of course one uses the .deb, because it's way easier to install.

Okay, so — I'm just going to check the time here — what's coming in 7.1? AIGLX is included; Xgl is included as well, so you can play with either of those technologies. There is a change in the driver ABI, so every single video driver is going to get revved for this release. This is the first time we intentionally changed the driver ABI. Yeah, everybody's laughing — it used to be that the driver ABI would get accidentally changed. Somebody would make a commit to some file, and about six months later you'd notice: oh, all of the old binary drivers don't work anymore. Well, why didn't we notice this before? Why did it take six months to figure it out? It's because we had a monolithic release, and when you touched a header file and typed make, it would rebuild all the parts of the system that depended on that header file — which means that if you changed the driver ABI, all of your drivers would recompile automatically. It's like, well, this is cool: I don't have to worry about dependencies, because make figures it out for me. Well, cool for you, the developer. Not so cool for the poor schlep, the user, who got a new X server binary but did not get a new driver, because nobody noticed the driver ABI had changed — nothing was changed in the driver itself. And so the poor schlep gets a new X server, but not a new driver, and it doesn't work. It's like, well, what happened? Well, the ABI changed, but nobody noticed. For 7.1 we actually intentionally changed it. So instead of the normal thing, where a release changes the ABI and you get broken drivers by accident, this time the ABI changed on purpose, and everybody gets new drivers. So yeah — at least we know ahead of time that it's going to change.
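To illustrate the mechanics — and this is a hypothetical sketch, not the actual X.Org loader structures or macros — the idea is that each driver module carries an ABI version stamped in at compile time, and the server's loader refuses modules whose major number doesn't match its own, instead of crashing later:

```c
/* Hypothetical sketch of an ABI handshake between server and driver
 * module; the real X.Org loader uses its own structures and macros,
 * but the principle is the same. */
#include <stdio.h>

#define VIDEODRV_ABI_MAJOR 1   /* bumped on purpose for this release */
#define VIDEODRV_ABI_MINOR 0

struct module_info {
    const char *name;
    int abi_major;   /* compiled into the driver binary */
    int abi_minor;
};

int loader_check(const struct module_info *m, int server_major)
{
    if (m->abi_major != server_major) {
        fprintf(stderr, "%s: built for ABI %d, server has %d\n",
                m->name, m->abi_major, server_major);
        return 0;    /* refuse to load instead of crashing later */
    }
    return 1;
}
```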
We also did a bunch of little fixes that were kind of pending. NVIDIA had a couple of fixes we wanted to get in. Composite needed a fix so that Xv could work in a composited environment — so you could watch your movies in a composited environment, have translucent movies, so you could watch more than one at a time. What's the technical term for watching more than one movie at once in translucent mode? That would be called orgy mode. Yeah. So we're hoping to be able to actually implement some translucent video stuff, and I have an updated Intel driver that I'm working on — that I'm making my minion work on — that has the ability to do textured video. So I should be able to demo multiple video playback in a couple of days. He did it at a weird spot, though.

Okay, I work for Intel today. Why do I work there? Well, Intel is the only provider of video drivers right now that supports its own video hardware with free software drivers, and Intel has done that since the i810. Intel has a corporate policy of providing free software drivers for its hardware, and the chipset group has actually been contracting this work out to Tungsten Graphics since days of yore — since the i810. So Intel has been paying an external entity to do the i810 driver, and they've been doing a pretty good job. Obviously their goals and Intel's goals aren't exactly aligned, so the driver has had some rough spots occasionally. We're working at Intel now to pull some of that development in-house and do more of the work — more of the refinement and release management of that driver — using Intel staff, so we can get a better handle on distribution integration, driver quality management, testing, documentation, that kind of stuff.

So we're working on a couple of new projects right now. We're working on switching the Intel driver — how many of you have an Intel graphics chip in your laptop? Yeah, quite a few. How many of you do not have a BIOS that knows about the native screen size of your panel? In other words, how many of you are using 855resolution hacks today? Yeah, a bunch. Well, the cool part about the BIOS is that the BIOS knows how big your panel is. There's a little table in there that has the panel configuration; it knows exactly how big it is, and Windows uses that to find out how big your panel is, to know how to set the appropriate mode. But what Windows doesn't use is the other part of the BIOS, the part that actually does the mode setting. There are BIOS calls for this — if you've ever used the vesafb driver in the kernel, you'll note that you can set arbitrary graphics modes from the kernel. The kernel does that using code in your BIOS to set the video modes; it just says, oh, Mr. BIOS, please set this video mode. Well, these days Windows doesn't use that code. Any bets on how well tested and supported the BIOS mode selection code is in most laptops these days? Yeah — it's never been run. So all of you who are suffering with 855resolution right now — or whatever that program is called, 915resolution?
Yeah — all of you who are suffering with that right now are suffering because Windows doesn't use that code anymore. Windows goes directly to the hardware. The Intel driver is the only driver today that uses the BIOS as its primary and sole method of mode selection. Why does it do that? Well, it does that because the Intel chip itself doesn't talk to most of the output devices on your laptop. Anything before the 915 chipset — the 855, 845, 810 — has no laptop output support in the chip. Does this seem insane to you? You have a mobile chipset that can't actually talk to your panel. Well, okay, it's a little crazy. What they do instead is have this little dongle on the side that the chip talks to through a channel called DVO. So there's this external chip, not made by Intel, that you have to program to get to the LVDS — the local panel stuff — or TV out, or external DVI, or all that kind of stuff. The only things the 855 supported were VGA and this DVO stuff.

The problem is that there are a bunch of these external chips, and they are all programmed in slightly different ways. So the Intel driver used the BIOS, because the BIOS was supposed to know how to program this thing for the panel. Turns out it didn't, but a laptop manufacturer would just say, oh, Mr. External Chip Vendor, please tell us how to program your chip, and they would hack the Windows driver for the laptop they were shipping so that it would know how to program the chip that was in that laptop. So the reason the Intel driver didn't use direct register programming was that Tungsten Graphics didn't want to spend the money and time necessary to have drivers for all these external chips. Well, we're going to bite the bullet and actually implement drivers for all these external chips — and provide source code for them, of course. So we're hoping to get some reasonable stuff working in the near future. If you have a 915 or 945 chip today, we have a driver that works on those chips, and if you're talking to a VGA spigot, the driver will work on older chips as well. But 855 or 845 with DVO output — that's not working yet. We hope to have that working in a couple of months, so it's not going to be included in 7.1, because it's not done yet.

Okay, the last thing I wanted to talk about was source code management. X has had a long history with a variety of different source code management systems. Before 1986 it was the usual source code management system of: oh, we had a backup yesterday — let me see what changed since yesterday by restoring the backup and doing a diff. Not such a great system. That was the 1985 era. For X11 we started using RCS for our source code management, which was pretty cool because you could actually see what changed between different versions of files. It was pretty cool.
Then in about 1992 or 1993 — no, worse than that, in 1994 — they switched over to ClearCase: the source motel, where your code checks in but it never checks out. XFree86 wisely decided not to use a closed-source nightmare of a source code system like ClearCase, and used CVS. So we have a wide variety of different source code repositories: RCS, ClearCase, CVS — the usual adventure.

Right now our development is largely done in CVS. But I got sick of CVS, because it's slow and buggy, so I decided to switch to git. I switched Xlib over to git one day and sent mail saying, oh, by the way, Xlib development is now done in git. And I got a kind of underwhelmed response from the community. They said, what the hell? I said, oh yeah, you guys didn't know? I've been looking at git for about six months. I guess I forgot to tell you. Oops. So yeah, it's kind of a forcing function at this point — I'm just moving more and more stuff from CVS to git. I had to write a new CVS-to-git conversion tool, because the available ones weren't sophisticated enough to handle X.Org CVS repositories, with their interesting history. So now I actually have a pretty stable tool — that's kind of an additional side benefit, right? Oops, giant itch to scratch; new tool developed. Now I can transition pretty sophisticated CVS repositories to git in a lossless fashion. I think I could probably take that same tool and import CVS repositories into Mercurial or bzr or anything else like that. The main challenges in this tool are not in interfacing with a modern SCM — they're entirely in figuring out what happened in CVS. So it would be interesting to see if I could take that tool and move CVS stuff to bzr or Mercurial, and see if that would work.

I'm hoping to finish this transition this year. Stale stuff will probably just stay in CVS, because until people change it, it doesn't have to move. So we're going to have a mixture of CVS and git, probably for the coming years, as people move stuff over. And if people try to move stuff to yet another SCM, I'll smack them — not because I think the other ones are bad, but because having more than one for a project is probably a bad idea.

So that's the end of my stuff. Do people have comments and questions? Oh, we have the microphone runners, who are now selecting who gets to talk.

Question: Will you think that I don't appreciate the driver that you provided through Tungsten Graphics if I ask you this? The last programmer's reference manual from Intel for the integrated graphics — for the 845 and 915 and 945 and all that, is it coming? Can you comment?
Yeah — documentation for Intel graphics. The reason Intel doesn't provide documentation right now is that it costs money to publish documents in an external form. That's the only reason; there's no intrinsic desire not to publish documentation. The chipset group doesn't have a very strong motivation to do the support work required to provide free software drivers — frankly, their key focus is the Windows market. By moving the driver development in-house, to the open-source part of Intel, one of our goals is to help them with the process of preparing documentation for external consumption. So I can't promise anything, but we have five or six open reqs for technical writers in our group whose sole job will be to take Intel technical documentation and prepare it for publication. So I'm hoping — but nothing's happening yet.

Question: With this sort of standard, universally available infrastructure, how does that stack us up against Vista and OS X in terms of the malleability and flexibility of the underlying window display mechanism?

Yeah — Mark asked, if we provide AIGLX, how close do we get to the capabilities of Vista and OS X? And the answer is: in a graphical sense, we're there. We have a GL-based graphical environment that fully provides as much acceleration as you could ever possibly want from the hardware, as long as GL keeps advancing. From a configuration and customization perspective, we're not even close: in terms of the ability to switch monitor outputs, the ability to support TV out, the ability to do hot-plugging of monitors and input devices, we have a lot of work to do yet. But AIGLX gets us closer on the graphics output piece, and it doesn't hinder our ability to innovate in these other areas. It's a piece of the puzzle.

Question: Is there much point in moving so much stuff to GL when such a large part of graphics hardware still has no featureful free drivers today, and it seems the two main manufacturers are still not willing to provide open source drivers?

Well, that's another one of my secret plans in working at Intel. The two main manufacturers you're talking about, ATI and NVIDIA, actually currently provide less than half of the chips used in desktop Linux systems today. Intel's fraction of that particular market segment is pretty significant — it's well north of 50% today. One of my goals in pushing the quality of the Intel graphics drivers, and helping push the quality of Intel graphics hardware, is to make it clear that you can have free software drivers, to help the Linux market grow — and to show that if you want to play effectively in this market, you have to play by the rules. So I'm hoping to encourage ATI and NVIDIA, by example, into providing free software drivers for their hardware.

But in terms of moving more functionality to GL: NVIDIA in particular has actually been doing a pretty good job of supporting their hardware with drivers. They have 7.1 drivers available today. So even if you're stuck with a binary driver, at least it works. I'm not saying it works beautifully on all hardware — laptops in particular are not very well supported — but they're already using a lot of the 3D hardware in their 2D driver.
So AIGLX provides a way to use GL, or GL-like stuff, to accelerate particular drivers; it doesn't force driver authors to use GL for acceleration. Where NVIDIA has 2D acceleration for a lot of the new graphics — the stuff that requires or encourages the use of 3D hardware — they're going directly to the hardware today, and that's still possible. It's just additional engineering effort for them. So, like I say, I'm trying to encourage them to provide documentation and source code for their chipsets by demonstrating that a significant fraction of the Linux market is unwilling to use closed-source drivers. That's my real goal — that's one of my real goals in working at Intel. It's not to make Intel graphics better; it's to make the Linux desktop better.

Question: What will happen with people that have old video cards without 3D support, once Xgl is the standard server?

Like I said, I'm hoping not to have Xgl as the standard server. That's one of the advantages of staying with our traditional model and adding the ability to accelerate particular drivers with GL operations: it means that existing X server operations can continue to use the 2D parts of the hardware where that's the only thing available. So your applications will continue to work as they do today, and we won't have this sudden performance drop when people switch X servers. What AIGLX offers is the potential to use GL to accelerate operations on hardware that supports it. It doesn't require a fully functional GL implementation to have a high-performance X server; you can still have an X server that goes straight to the hardware and goes as fast as it does today. Of course, as applications start using fancier and fancier features, older hardware is going to be harder and harder to support, but there's not a lot I can do about that other than encourage people to make sure applications don't depend on those features. In particular, the glitz that you see in things like Compiz, or some of the AIGLX-based GL compositing managers — none of that's required to run applications. It's just prettier. About the only thing we have now that is required is the ability to do anti-aliased text, and that dramatically improves the quality of the presentation on the screen and doesn't actually take a huge amount of CPU, even in software. Yes, I see the time sign. So the plan, again, is to make sure that old hardware continues to work as well as it does today.

Question: A guy named Jonas on IRC asks, is there any chance that Intel will release graphics cards such that the free graphics drivers can be used on non-Intel machines, such as AMD64?

I can't really say anything about Intel's hardware plans — even if I knew anything. His name is pronounced "Yoonas", by the way. Yep, I have a second question.

Question: Is there any work going on with XDMCP to make it use less network, so that it works faster?

Oh — using X over the network. We aren't doing anything inside X.Org right now, but of course NX has done a bunch of work in that area. I did a bunch of research a couple of years ago to try to figure out why X was slow over the network, and it turns out that, using raw X over the network, even without compression, the network traffic isn't a performance problem for X. The only real performance problem is latency. If you run X over SSH, it's pretty well compressed; it doesn't take a huge amount of network bandwidth, and it's actually pretty usable.
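As an aside, this latency point is exactly what XCB's request/cookie design addresses: you can issue a batch of requests before waiting on any reply, paying roughly one round trip instead of N. A hedged sketch:

```c
#include <stdlib.h>
#include <string.h>
#include <xcb/xcb.h>

/* Sketch: intern several atoms with one round trip's worth of latency.
 * All requests go out first; replies are collected afterwards. */
void intern_atoms(xcb_connection_t *c,
                  const char **names, xcb_atom_t *out, int n)
{
    xcb_intern_atom_cookie_t cookies[64];  /* assume n <= 64 */

    for (int i = 0; i < n; i++)            /* fire off every request */
        cookies[i] = xcb_intern_atom(c, 0,
                                     (uint16_t)strlen(names[i]),
                                     names[i]);

    for (int i = 0; i < n; i++) {          /* then harvest the replies */
        xcb_intern_atom_reply_t *r =
            xcb_intern_atom_reply(c, cookies[i], NULL);
        out[i] = r ? r->atom : XCB_ATOM_NONE;
        free(r);
    }
}
```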
One of the things we need to add to X is some lossless image compression, so you can transmit images to the X server without sucking bandwidth. Maybe we'll do that.

The next question was about whether we're going to integrate NX into X a little more tightly. I don't really have any particular plans for that right now. NX works pretty well as an external agent, and it does a lot of external caching to disk, which means I certainly don't want it integrated into my X server, which runs as root. So having it as an external agent seems like a nice security policy right now. It doesn't seem to need to be integrated — why would we integrate it? It's working pretty well as a standalone project.

Question: Would it be an idea to add some functionality to the X server that allows for limited interactive response without consulting the application about everything? For example, if I hover the mouse over a button, the application needs to highlight the button if it wants to do so.

Okay, so the question is about restructuring things so that we have actual interaction built into the X server. A bunch of people have tried that in different windowing environments — if you've ever seen Sun's NeWS windowing system, for instance. The problem is that it actually requires the developer to become a network protocol developer as well, because your application is now split between the server, where the interaction is occurring — and which obviously has to be programmable — and the client, where the application intelligence operates. So you have to be able to program the X server in some programming language, you have to develop a custom network protocol that communicates those interactions to the application, and then you have to write your application in a separate language. To do application development in that environment you have to do three things: develop a network protocol, program in the X server customization language — whether that be PostScript or Scheme or Tcl — and then write your application in C or C++ or Java. It's a huge effort for application developers to undertake, and I haven't ever seen a system successfully implemented in that model. It'd be cool if they could, but I see it mainly as a toolkit problem — most of the toolkits already handle things like highlighting buttons on mouse-over, or deactivating other radio buttons.

The questioner followed up that this is the level of interactivity he would like to see in the X server, to have a responsive user interface even when the network is a bit laggy. Like I say, the problem is that then you have to program the X server. Yeah, it's a toolkit problem — no, it's not a toolkit problem, it's an application developer problem.

I think we're almost out of time. We can take one more comment or question.

Question: Is there any way to extend X in such a way that you can disconnect and reconnect sessions?

Ah, session disconnection and reconnection — that's a good question. I wish we had a lot more time. "Repeat the question" — oh, I got a sign.
They have signs for everything these days. The question is about session disconnection and reconnection — application disconnection and reconnection. There are actually proxies right now that support that. The problem is that there's a bunch of state in the X server that you can't recover if the connection is severed abruptly. The particular state in question is pixmap contents. Right now you can't transparently fix applications to handle the network disappearing — the connection to the X server disconnecting suddenly — because applications expect pixmap contents to persist across the connection. That's the only state we have right now that we can't recover. Fixing applications, and toolkits, to be able to recover from that would mean we could do this without a proxy X server in the middle — which is what people do today: they put a little proxy X server in there that holds the pixmap contents and replays the pixmaps when the connection is re-established. But we can't do that yet today. So an extension that would allow pixmaps to be damaged and restored would be nice, and I think that's all we need.

Question: I've seen a demonstration of this happening with GTK, and it's quite transparent to GTK applications — there's a GTK API that basically allows you to just detach applications.

Sure. The problem with the GTK API is that it requires that the X server be disconnected gracefully, and anything that requires a graceful disconnect from the X server is not very interesting to me, because the real question is: how do you recover from a network failure? I think that's the most interesting question. So yeah, you can do it today gracefully; you just can't do it in the failure cases.

Question: How will things like XCB interact with NX and other protocol optimization extensions or proxies for X?

I'm hoping that XCB will actually improve NX performance, although NX has been pretty carefully tuned to eliminate the dependence on latency by proxying a significant portion of the application state onto the client side of the wire. So, in effect, I don't think XCB will make NX worse, but I don't think it will make it significantly better either.

I think we're out of time today. I have another session tomorrow; if you have more questions, I can talk less tomorrow and we can continue this discussion then. Thanks much.