Good morning. Good afternoon. Let's see. I'm going to talk about what's going on at X.Org right now. I'm sort of wondering if you want to bring that up slightly. Is this still moving? Like that? Is that still automatic? Yeah. Okay. There was quite a bit of feedback before. So usually I talk about X.Org in kind of an abstract fashion; I talk about what we are doing. This year I really want to start focusing on who we are, and I hope I'm able to get people's pictures up on the screen in the coming years. This is a picture of Adam Jackson, who has been working very hard on X.Org release management for the last year or so. He just finished X.Org 7.2, which came out last week. It contains X server version 1.2. It contains an Xlib based on XCB, and a bunch of other cool stuff. Actually, this is one of the first releases that really started taking advantage of the modular release process. Do you have a question up there? Oh, we're just hand waving. I don't think we have a camera, so you can't be on TV. Adam actually released the X server itself, version 1.2, way back in early January, but the full X.Org release wasn't ready at that point for documentation reasons. So it's kind of interesting: we started seeing people pick up release 1.2 of the X server right away rather than wait for the full roll-up release of X.Org 7.2. Coming close on the heels of this release, I'm going to do an X server 1.3 release in the next month or so. We're trying to push the pace of releases on the individual modules so we can get features out into the distributions in a more timely fashion, instead of on a six-month schedule. What are some of the features in X.Org 7.2? We did a lot more auto-configuration work, so the system will auto-detect monitors and auto-detect input devices. You can run a reasonable X server now with no xorg.conf file at all.
It doesn't always do what you want, but it usually does sensible things, which is good. We're moving further in this direction to get rid of the configuration file. One of the keys to getting rid of the configuration file, of course, is being able to change things after the server has started, so you can change the configuration without having to restart the server. Now, the first of many pictures. The one thing I noticed in collecting pictures for this talk is that most of the X.Org developers apparently spend a lot of their time drinking. I know that's hard to believe. This is Dave Airlie at a conference. He has two beers and two hard ciders, and he's going at it quad-fisted at this point, apparently. Which is appropriate, because Dave Airlie works on a lot of different pieces of X.Org. He's been working on the Radeon driver with R300 and R500 support. He's in a very unusual position, because he actually has documentation for all the ATI hardware as part of his other job, which is building hardware and software for gambling devices. So he has to be very careful: it would be terrible for any of the documentation he has for the ATI chipsets to leak out in the form of source code, so he has to be very careful with his development efforts on the Radeon driver. Another person who has been working on the Radeon driver is Benjamin Herrenschmidt, here — there's my cursor. This was from, I think, LCA — linux.conf.au. It's a nice meeting. So what's been happening in the Radeon driver? The Radeon driver is not supported at all by ATI at this point, so this is a pure reverse engineering effort by these developers.
They go in and take the existing closed-source Radeon driver, write applications that drive it, and watch what happens on the other side to try to deduce how the hardware works. It's a tedious process; a little documentation goes a long way, and they have none. At this point the reverse-engineered R300 driver is fairly usable. The R500 driver is just barely getting started; they hope to have some 2D and mode setting code relatively soon. I think Michel Dänzer has a pretty good demo of the R300 driver now, so if you catch him in the hallway, you can get him to show you what the open source R300 driver is up to. Dave's been working on the RandR 1.2 support, the resize-and-rotate stuff that I can show you in a while, and Michel's been working on some of the more 3D stuff. This is Michel Dänzer, who's here somewhere — there he is. I took this picture earlier today, so this one doesn't have any beer in it yet. Then again, it was about 10 o'clock this morning; a little early to start drinking, at 10 or 11. Moving on. Another important driver effort is the attempt to reverse engineer the NVIDIA hardware to build an open source NVIDIA driver. The project doing this work is called Nouveau. I asked the Nouveau developers to send me pictures; none of them did. Fortunately I found a picture of Jeremy, who was apparently in an intense state of inebriation — otherwise he wouldn't have a full hand like that. There's something about 3D graphics and drinking. The Nouveau project is actually starting to make some progress now. It's not a driver usable by normal users at this point. I think what we learned from the Radeon R300 reverse engineering project was that a driver like this is unusable and barely functioning for an amazingly long period of time.
And then all of a sudden there's a big knee in the development curve, and it goes from unusable to very usable in a short time. I think the Nouveau project is getting fairly close to that, so I expect that in the next six months or so people will be able to use a Nouveau driver, at least for 2D graphics if not for 3D as well. They're getting solid polygons painted on the screen right now, and that's a big step towards actually getting textured triangles onto the hardware. Apparently they need the new texture memory management system that Tungsten Graphics has been doing for the Intel chips, which is also moving into the Radeon driver; they're going to need to migrate that into the Nouveau driver to get texturing running at all. So these guys are brave souls. They haven't gotten any death threats like the Radeon driver developers apparently have, but I don't think NVIDIA is very happy about their efforts either. And now, a happier frame of mind. This is Eric Anholt, one of my co-workers at Intel, who's working on the Intel driver right now. He's drinking; I think those four empty glasses are probably his. Yeah. So the Intel driver is making really good progress these days. We actually have the combined efforts of two companies working on it. Tungsten Graphics has been under contract from Intel for a long time to support the Intel driver, and they continue to do a lot of work on it, particularly in the 3D space. And then Intel itself now has a team of about a dozen people working on the open source driver, and Eric is one of the key developers there. So, what? You've never met Eric? Conferences are partly a way for developers who are working on the same code base to get together and meet each other, which is kind of cool. And apparently I was supposed to bring Eric to this conference — I apologize; I had actual work for him to do.
So what's going on in the Intel driver right now? The big change over the last year is that Intel has built a team internally for driver support and driver development. That involves QA, it involves development, and it involves support for the OS vendors: we're actually interested in getting Intel driver support into the existing operating system distributions. I have half a dozen QA engineers making sure it works on legacy hardware, so if you have an i830 laptop, you can run the new drivers. That's a big part of our effort, and it's been an important thing for us to do. The big piece of work in the Intel driver has been the RandR 1.2 development. The RandR 1.2 work is not limited to the Intel driver; that's just where it started, because I had to start with something, and that's what I'm paid to work on. One of the interesting and exciting changes that started in the Intel driver in the last year is the new texture memory management system. With any UMA hardware — any AMD or NVIDIA or Intel hardware that uses system memory for graphics — we used to think UMA was kind of a bad thing because it was a performance problem, but it turns out to have some pretty phenomenal advantages in terms of uniform memory management. One of the things we're able to do now is take the GTT — it's basically an MMU for the graphics chip — and map any physical memory dynamically into the graphics aperture. So with the new DRI infrastructure for memory management, I can actually draw to any page in memory. No longer do I have fixed allocations of texture memory and fixed allocations of 2D pixmaps. When we finish this work in the next year or so, we'll be able to draw to all of physical memory — and virtual memory as well, in the future. So that's kind of exciting.
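As a very loose illustration of what that graphics MMU buys you, here is a toy model — all names invented, nothing like the real driver code — where scattered physical pages get bound into one linear aperture that the graphics chip sees as contiguous:

```python
# A toy model of a GTT (graphics translation table): an MMU-like table
# that lets the GPU see scattered system-memory pages as one linear
# aperture. Names are illustrative, not real driver code.

PAGE_SIZE = 4096

class GTT:
    def __init__(self, entries):
        self.table = [None] * entries   # one slot per aperture page

    def bind(self, slot, phys_page):
        """Point an aperture page at an arbitrary physical page."""
        self.table[slot] = phys_page

    def translate(self, gpu_addr):
        """Resolve a linear GPU address to (physical page, offset)."""
        slot, offset = divmod(gpu_addr, PAGE_SIZE)
        phys = self.table[slot]
        if phys is None:
            raise ValueError("GPU fault: unmapped aperture page")
        return phys, offset

gtt = GTT(entries=4)
# Scattered physical pages appear contiguous to the GPU:
gtt.bind(0, phys_page=7)
gtt.bind(1, phys_page=2)
print(gtt.translate(4100))   # -> (2, 4): second aperture page, offset 4
```

The point of the real thing is the same as the toy: once any page can be mapped, texture memory and pixmap memory stop being fixed carve-outs.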
We're finally moving out of the limitations of video card memory and into a space where we have pretty much unlimited memory, which is kind of cool when you start thinking about compositing thousands of windows, each taking millions of pixels. Here's another developer on the Intel chips, Keith Whitwell, who did all of the 3D development on the 915 and 965 chipsets. Another Tungsten Graphics developer. And I wanted to have a picture of the poor gentleman who did the original Intel mode setting code. The Intel contract he was working under specified that he was to use the BIOS for mode setting. And then they said, oh, but we want some additional mode setting support: we want to be able to do things like rotate the screen, or have different modes on the internal and external displays. The BIOS doesn't support any of that. So he had this amazing Rube Goldberg mode setting adventure, where you use the BIOS a little bit for some primitive mode setting and then stack all this code on top of it. Amazing limitations, and an amazing implementation. Unfortunately, we've now thrown all of his code away; I hope he's not mad. I'm really impressed with what he did, and certainly the architecture inside the driver that he put together has been a solid foundation for us to work on. This points at a big change over the last couple of years. Originally, X drivers were all based on direct access to the hardware. When you wanted to program a video mode, you would go in and tweak the VGA registers directly; you'd reprogram the hardware directly through the VGA ports. Because all of the cards were basically VGA cards, this worked really well. They were all very similar: you had the ET4000 or whatever your video chip was, and it all looked like a VGA. You could reprogram the video mode with a really, really small piece of software.
Then along came Windows 95 and Windows 98, and somewhere in that era the video cards all grew their own mode setting hardware. In order to program a new video mode, you had to know a lot more about the video chip. And it became common for X drivers — at least a few X drivers — to start leaning on the video BIOS itself for mode setting. By that point the video BIOS had grown some extensions and knew how to do linear frame buffer modes, so you could get the video BIOS to bring the card up in a mode that the X server could actually use. It was kind of neat, because you had to write very little code in the X server to get something running: you asked the video BIOS to program a mode and, bang, you were running in a frame buffer, and then you just wrote a little acceleration code to get an X server up. One time I needed to do a performance analysis on a new video chip, and I had an accelerated X server running on that hardware in about three hours, using the video BIOS to do the mode setting. Well, that was great for about a year — it was interesting for about a year — until we discovered that the video BIOS didn't do everything we wanted. And the one problem with the video BIOS is that if it doesn't do what you want, you can't do it at all. For instance, what I'm doing right here: I have slides up on this screen, and on my laptop I have my notes. This is one machine running two screens. The video BIOS is no good for this; you can't do it. What Alan Hourihane had to do for the Intel driver was use the video BIOS to program the mode and then go fix it up by touching the registers directly. That gives you the "best" of both worlds, where all of your video BIOS bugs are exposed and all of your mode setting code still has to know a lot about the hardware. So you've got all the bugs in the BIOS and all the bugs in your own code mixed together, causing all kinds of problems.
So this is Thomas Winischhofer, who's an attorney in Austria. He said, for the SiS driver, that this was a bad idea — and he was right; he was way ahead of the rest of us. He came out and said: for the SiS driver, we're not going to use the BIOS anymore, we're going to switch to native mode setting, because there are a lot of things I want to do that the BIOS can't do. So he converted the SiS driver three or four years ago to use native mode setting. He struggled with that transition for a long time. Another person working in this space — this picture is not visible at all, I apologize — is Luc Verhaegen; I don't know if he's here today. He's probably off listening to another talk right now. He's doing the same thing to the VIA driver. Except there are two VIA driver projects right now: the old via driver, which is based on BIOS mode setting, and the new unichrome driver, which uses native mode setting. They're trying to figure out how to keep the capabilities of the old via driver — how to not lose those capabilities and yet still have native mode setting, because there's a lot of stuff that Luc's driver doesn't do yet. So we're working through some political issues there; we actually have two VIA drivers at the moment because of this. And the Intel driver: over the last year I transitioned it from BIOS-based mode setting to entirely native mode setting, because a bunch of the requirements — the things I wanted to do with RandR 1.2 — just couldn't be done the old way. This is Daniel Stone. Okay, this is a pretty accurate picture. I had a lot of picture choices for Daniel; this is the one where he looks the least drunk. Daniel is an Australian who's currently working up in Finland, so he's moved from plus 42 degrees C to minus 43 degrees C. Yeah, he really needs to be drunk.
He works for Nokia right now. If any of you have seen the little N800 handheld computers — what do they call them, personal internet tablets? Yeah, okay. They have a pop-out stand now; apparently they know their market. So Daniel's a really funny guy. One of the requirements for the N800 tablet was to be able to use Bluetooth input devices: Bluetooth keyboards, Bluetooth mice, that kind of stuff. And he said, oh, and I'm going to be able to use multiple keyboards. Well, in order to do that, I have to go off and fix the entire input infrastructure so it can handle hot plugging of input devices. So he took this tiny little feature that his product management handed down — we need to support Bluetooth keyboards — and said, oh, I have to do all this work in the open source project. This is a classic engineering story, where you take a tiny feature required by your product and it blows up into a huge project in the open source community. So he's spent the last year digging into the bowels of some of the nastiest parts of the X server: the X input extension, the XKB keyboard extension, and all of the input event processing inside the X server. What he's done is ripped all that code out and replaced almost all of it at this point, I think. So you can start an X server with no mouse or keyboard connected to it, hot plug all the devices after the server has started, and have it dynamically determine what kind of keyboard and what kind of mouse are connected. It also gives you the ability to add additional keyboards, so you can actually have two keyboards and be going at them simultaneously. I guess that takes you from two-handed to four-handed typing; not sure whether that was addressing a market or not. Which is cool, because now you can plug additional keyboards into your laptop and change layouts: my laptop has a Japanese keyboard layout, and I can change it.
I can use a French keyboard layout instead. In addition, I can switch back and forth between two different keyboards, which is very useful — and use them at the same time. He has a friend; I think I have a picture of him as well. Yeah, there he is. Peter. Hi, I'm sorry the picture is not very in focus. Peter is an Austrian who's moved to Australia. It's kind of funny: Peter lives in Adelaide, and Daniel comes from Melbourne and now lives in Finland. So we've got some long-distance swapping going on. Peter is working at a research lab at the University of South Australia — there are three universities in Adelaide; I'm impressed they have that many. He's working on multiple pointers. He's not interested in using multiple mice on the screen at different times; he's interested in using multiple mice on the screen at the same time. And I wish I could show you — I wish I had a movie of him doing this. I've actually watched him operate three mice simultaneously, which is really funny: he's got one in each hand, and he really is doing the other one with his chin. It's cool. Yeah. Apparently he can also operate multiple mice with one hand, clicking buttons across them. And he's written a new window manager that allows you to grab the two corners of a window separately and move it around. Yeah. Oh, yeah. So the question is: what if you have a multi-touch screen and you put all ten fingers on it — what do you get? Well, because Daniel has this hot plugging input architecture, and because Peter's stuff supports multiple cursors, it could work. I don't know. One of the interesting things about Peter's work — and I think one of the most aggressive things about it — is that he's trying to make multiple cursors work using the core protocol.
So he's trying to make it so that you can send mouse press events to two different windows, and let two different windows each think they have the mouse — so you can be operating two menus simultaneously. He's trying to figure out a scheme so that our existing architecture, which all believes there's only one mouse, will work with multiple mice. And I think he's got a really hard problem. Pieces of it work okay; most of it doesn't work so well. Most window managers, for instance, get really confused when you click in two windows and try to drag them simultaneously. He found one that works: Fluxbox, apparently. That's amazing. I don't know, it's pretty cool. So I got to meet him in Australia at LCA, and we brought him over for the X Developers Conference — another heavy-drinking Austrian. I know, it's a shock. Here's another Portland native. One of the things in X.Org 7.2 is a change that most people will never notice, but it's very fundamental to the system. We've lived for nearly 20 years now with Xlib, the C binding for the X protocol. Xlib was written in an era when we had one CPU for pretty much the entire lab, so this notion of thread handling in applications wasn't such a big deal. The other problem is that Xlib was written before the era of shared libraries. I know it's hard for you to imagine, but in 1987 there were no shared libraries on any system, so every application had to link statically against every library. The result of that was kind of odd: it meant that, in order to keep program linking from breaking, pretty much every new feature went into Xlib itself. So Xlib grew things like internationalization support and XKB support and color management. As a result, Xlib has a couple of pretty bad problems: it can't really do multiple threads, and it's bloated. So a team at Portland State University, led by Professor Bart Massey, has been writing a new X protocol binding called XCB.
Xlib is a megabyte; XCB is 30 kilobytes. So it's on the order of a factor of 30 smaller. The big change for X.Org 7.2 is that Xlib now uses XCB. When you run an X program that talks Xlib, your X application is actually using XCB for the underlying transport. Does this change how the program works? Not at all. What it does do is enable application developers to start migrating their applications to the native XCB interfaces. Eventually we can shed Xlib entirely: we can get rid of that megabyte of memory, we can make our applications thread-safe — and then, world domination. And here's another of the students, who's not photographing very well today. This is Jamey Sharp, who's done most of the XCB implementation. XCB actually uses an XML description of the protocol, so you have a textual format that describes how the protocol works. That means you can generate the C bindings for the library. You can also generate documentation for the protocol, because you can put documentation in the XML descriptions. You can generate a Lisp binding for the protocol from the same XML, or a Python binding, or whatever you want. The other neat thing is that we're going to be able to generate the server-side protocol interpretation code as well: byte-swapping code, all this stuff we can derive from the description of the protocol. Until now, all of this has been done by hand: every Xlib marshalling function and every server un-marshalling function was written by hand. Over the last year, Coverity and other people have discovered half a dozen or so bugs in the X server — buffer overruns and other simple bugs like that — caused by bugs in our request un-marshalling code, which is all written by hand.
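To give a flavor of the approach — with an invented XML schema, not the real xcb-proto format — here is a toy generator that turns a protocol description into a C struct, the way XCB derives its bindings mechanically instead of writing the marshalling by hand:

```python
# A toy version of XCB's approach: describe a protocol request in XML,
# then generate code from it instead of writing it by hand. The XML
# schema here is invented for illustration; the real xcb-proto format
# is much richer.
import xml.etree.ElementTree as ET

SPEC = """
<request name="CreateWindow" opcode="1">
  <field type="uint32" name="wid"/>
  <field type="uint16" name="width"/>
  <field type="uint16" name="height"/>
</request>
"""

C_TYPES = {"uint32": "uint32_t", "uint16": "uint16_t"}

def generate_struct(xml_text):
    """Emit a C wire-format struct from the XML request description."""
    req = ET.fromstring(xml_text)
    lines = ["typedef struct {",
             "    uint8_t opcode;  /* " + req.get("opcode") + " */"]
    for field in req.findall("field"):
        lines.append("    %s %s;" % (C_TYPES[field.get("type")],
                                     field.get("name")))
    lines.append("} %s_request_t;" % req.get("name"))
    return "\n".join(lines)

print(generate_struct(SPEC))
```

From the same description you could just as easily emit documentation, a Lisp or Python binding, or the server-side un-marshalling and byte-swapping code — which is the whole point.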
So if we had started with this descriptive format for the protocol and automatically generated the C code, we could have saved ourselves six potential local root compromises in the last year. I don't expect that number to go down in the next year; I expect to keep having issues like that until we convert to the XML descriptions and use them for code generation. Another XCB developer is Ian Osgood, who's been working on a bunch of the extensions in XCB, which have been converted over as well. At this point I think XCB covers pretty much all of the X extensions, so it really is a complete replacement for Xlib. It's ready for use, and your systems will have it pretty soon. Let's see what else we have. Other work over the last year: this is Matthew Allum over here. He's been working on a couple of things. Matthew's been working on Xephyr. Xephyr is kind of a cool little project. We used to have an X server called Xnest, where you could run an X server locally and the Xnest server would forward all of the requests over to another X server, so you'd have a window on another X server that contained an X session. That was kind of cool. One of the problems with Xnest is that it created remote objects for all of the things you were doing. So when you created a window in the local Xnest server, it would shadow that and create another window in the remote X server. All the image formats and fonts available were the ones available from the remote system. So there were a lot of limitations with that, and no particularly good reason for them. Xephyr instead is a dumb frame-buffer-based X server: it paints the screen image into a local frame buffer and then copies those images over the wire to the remote X server. So you get a couple of advantages.
You get all of the whizzy new extensions that the remote side may not have, because Xephyr is just rendering into the dumb frame buffer locally but supports everything itself. And the other cool thing is that you should be able to disconnect from it and connect up a new X server, or paint to multiple X servers — that's something to work on in the next year. One thing it did gain this year is the ability to dither down from a 24-bit Xephyr screen: you can send that out over the wire to a 16-bit remote X server and dither the result down, getting some bandwidth reduction that way, which is kind of useful. Matthew works at a company called OpenedHand, which has been working on the N800 device and a lot of other embedded devices, which is pretty cool. So they're a little consulting company that does embedded X work. Matthew's also been working on fixing up the X resource extension and the xrestop tool. He does a lot of embedded work, and what's his biggest problem in embedded work? He always runs out of memory. So he's very interested in analyzing and tracking resource usage of applications in the X system. He's been maintaining the X resource extension, which lets you see which clients are using how many resources, and also the external monitoring applications, so you can monitor resource usage in real time. The other thing Eric's been working on is EXA, which is a new acceleration architecture. What? Oh no, that wasn't it — that last picture of Eric was GUADEC, which was down in Vilanova, Spain last summer. Did you know you can get a giant pitcher of sangria there for like two euro? So: he's been working on a new acceleration architecture called EXA. The whole goal of this is to make it easier to write really fast X servers. And in the last couple of months, with the new migration scheme we're doing and some other new algorithms for accelerated rendering, it has come together.
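The damage-tracking migration idea behind this can be sketched as a toy model — invented names, nothing like the real EXA code — where a pixmap lives in "video memory", we record which rectangles the accelerator touched, and on eviction we copy back only those rectangles instead of the whole image:

```python
# Toy model of EXA-style damage tracking: keep a system-memory copy of
# a pixmap, note which regions the accelerator modified while it lived
# in video memory, and on eviction copy back only those regions.

class Pixmap:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.sys_copy = [[0] * width for _ in range(height)]
        self.vram = None
        self.damage = []          # list of (x, y, w, h) dirty rects

    def upload(self):
        """Migrate into video memory (copy the whole image once)."""
        self.vram = [row[:] for row in self.sys_copy]
        self.damage = []

    def hw_fill(self, x, y, w, h, value):
        """Accelerated drawing: touches vram and records damage."""
        for row in range(y, y + h):
            for col in range(x, x + w):
                self.vram[row][col] = value
        self.damage.append((x, y, w, h))

    def evict(self):
        """Copy back only the damaged rectangles, not all pixels."""
        copied = 0
        for x, y, w, h in self.damage:
            for row in range(y, y + h):
                self.sys_copy[row][x:x + w] = self.vram[row][x:x + w]
            copied += w * h
        self.vram, self.damage = None, []
        return copied

pix = Pixmap(1000, 1000)
pix.upload()
pix.hw_fill(0, 0, 16, 16, value=1)      # e.g. paint a cursor
print(pix.evict())                      # -> 256: pixels copied back
```

Only the 16-by-16 cursor rectangle makes the trip back to system memory, not the full million pixels — which is exactly the saving this scheme is after.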
It's pretty clear now that we can do all of the drawing we need to do with the graphics accelerator. It used to be that we had a whole bunch of rendering that needed to be done in software, with these glorious, grand migration schemes: when you needed to do something in software, you'd carefully pull the image out of the video card, render to it in software, and put it back. Eric came up with a much simpler scheme. Everything can be rendered with the accelerator now, so what you want to do is get everything onto the video card; the only reason you don't put everything on the video card is that it doesn't have enough memory. So now he has a simple scheme for getting everything onto the video card and then noticing where things have been rendered, so that when you need to page an image out and put another image in — because you ran out of memory — you can track which areas of the image in the video card have changed and pull just those little subsets back out. Imagine painting a giant thousand-by-thousand window on the screen and then updating the cursor in the corner. You push the entire million pixels onto the card and paint the cursor with the hardware. Then you say, oh, I need to draw this other window now, so you have to take this whole million pixels back out of the frame buffer and put a different million pixels in. But if you kept a copy of those million pixels from before you painted the cursor, all you need to copy back off-screen is the cursor — the part that was modified on the card. That's what this new EXA damage tracking stuff does. So I'm hoping that EXA can become a viable acceleration architecture in the next year — probably sooner than that — and we can take advantage of the video memory you have and the acceleration we get from this simpler architecture. I wanted to talk a
little about the OLPC. I know Jim Gettys gave a presentation this morning about OLPC. As an artifact, it's not any more important to me than any other PC architecture; but in terms of the development process and what their platform targets are, I think it's very important. They have some very serious resource limitations in their environment, and because of the large number of units they're going to be deploying running this software, their interests suddenly become very important to us: they're going to ship a large number of units, and all of their units are very resource constrained. So Jim is starting to push a lot of people into reducing the amount of memory applications use and making sure they have scalable rendering architectures, so we can cut down the CPU resources needed and applications can be used on a smaller, lower-performance platform. I think that's really important. And Jim of course works with J5, another OLPC developer, who's working on application development and trying to build a UI that can work in this resource-constrained environment and still provide a rich experience for their target audience. Which is pretty cool. This is David Reveman here — I don't know, can you see him at all?
No, not so much. David Reveman showed up — what was it, about 2002 or something — and put together a bunch of accelerated rendering code for Cairo, and he took that and actually built an X server out of it, called Xgl. Over the last couple of years he's become a very responsible citizen; it's kind of frightening. He started out as a wild kid. He's built Compiz, the compositing manager — I assume most of you have seen Compiz demos of some kind, a 3D compositing manager. I don't know which one I can run here, but yeah, that's the life of developing an X server: it mostly doesn't work. He's put together a couple of things in the last year. He's been stabilizing Compiz; it has a nice plugin architecture now, so he can add new compositing modules. But over the last four or five months he's been starting to finish off one of the last problems with the composited desktop, and that's transforming input as well as output. If you've ever seen the Exposé feature in Mac OS X: one of the things you cannot do, when you have the windows shrunk down, is manipulate the windows in their transformed state. Which is kind of okay for Exposé — you can drag windows around and look at them, but you can't manipulate them in that small state, and for Exposé that's fine; Exposé is just trying to let you pick among a large number of windows and navigate a larger space. But if you're talking about an environment where windows stay transformed — a tilted environment where you're transforming images for projection — you really want to be able to manipulate the windows in that transformed space. And David is finally solving that last problem in our composited environment by doing some server-side input transformation. I've talked at various conferences about input transformation schemes; none of them have worked. But it looks like David is really going to succeed, and it's exciting to see the final piece of our composited, transformed desktop falling into place relatively
easily, with no work on my part, and that's always the best part of it: other people just saying "here's the solution you need". We had X.org board elections this last fall, so we have a new board. Eckhart is one of the continuing members of the board; he's actually presenting in another hall down the way, so he's not here, but I wanted to show you his picture. I don't actually have pictures of the rest of the board members, which is kind of sad. On the board, we've made a couple of changes and we're hoping to make more. The X.org board is encouraging people: if you're a group of X developers and you want to put together a meeting to do some concentrated development, we're interested in hearing from you. We can help you find a place, we can help with travel funding, renting a room, that kind of stuff. So if you want to do concentrated X development, come and ask us; we can help you put that together. We're running a couple of conferences this year: XDC, the X Developers' Conference, was in Palo Alto a couple of weeks ago, and we're hoping to put together another X developers' summit later this fall, hopefully in Cambridge, though that's not set yet. Matthieu Herrb is an OpenBSD developer who's at the conference this week. One of the things that I've noticed over the last year is that he's really taken a lot of responsibility for responding to the security alerts that we get, and making sure they get driven to completion: the bugs are filed, the bugs are responded to, and the disclosure happens on time. I really want to thank him for that. OpenBSD certainly gets a lot of credit for its ongoing support of OpenSSH development, and in this particular case X.org is getting a huge advantage from their focus on security and stability by having them help us with our security issues. Here's Carl and Daniel drinking heavily at... oh, sorry, Carl's not drinking... at DDC last year in Ottawa, and here's Jamie and Carl and Mark at DDC in Ottawa. That was a fun time. Okay.
Ajax's first picture doesn't have him drinking, but there he is; this might have been at DDC or something. And there he is again. So I have some time for questions or comments, if you have any. The question is whether Intel will offer graphics documentation, and I'm not going to start talking about future potential. Yeah, other questions or comments? Yeah, so the question is what about moving drivers into kernel space, and I'm hoping to be able to work on this next year. There are a lot of different pieces to a video driver, and there are pieces that we already have in the kernel today: if you look at the way our 3D drivers work right now, we have a significant kernel component. That kernel component manages interrupts and DMA transfers, and then there's some resource sharing between applications sharing the video card that can't be done anywhere but in the kernel; you have to manage interrupts in the kernel. The other piece that I'm hoping to move into the kernel in the next year is mode setting: not because I think mode setting belongs in the kernel, but because I want to be able to support suspend and resume, and I want to be able to support panic messages coming up on the console. So I'm working on an architecture to move mode setting into a kernel driver that supports DMA, supports interrupts, and supports mode setting, so I can actually have a comprehensive single kernel driver for the video card. What I will leave in user space is the generation of contents for the ring buffer, the generation of textures, the generation of coordinates, just like we always have. So we keep the kernel/user split that we've had for the DRI and the X server for a long time, where the user-mode drivers generate the content and the kernel-mode drivers manage the content as it's delivered to the video card. Now, one of the things that most kernel drivers have, in a typical driver environment, is that the kernel driver provides a uniform interface to user space, where
user space sees the same API for all video cards. Take a disk controller: what does a disk controller do? It provides block-level access, and so every disk controller looks exactly the same. For graphics cards that doesn't seem appropriate, so I'm not going to provide a uniform API to user space for the graphics card; we're going to leave the bulk of the data generation in user space, because of the amount of transformation you need to convert a common API into card-specific data, and doing that in kernel space makes no sense. In particular, one of the things you need to be able to do to drive a graphics card right now is compile programs for GLSL, the GL shading language, or even the older ARB vertex programs and ARB fragment programs. Those are all programming languages; in an abstract sense, the common API is not a common instruction set, the common API is these programming languages, and you actually have to compile them. The only way I could provide a uniform API to graphics cards is by putting a compiler in the kernel, which makes no sense at all, so I'm not trying to do that. Similarly, when you're generating vertices for video cards, the vertices need to be in a card-specific format, which is a tremendous amount of code, and the amount of data you're transferring is relatively large. So in order to have a common API for all of that, I would essentially have to have GL as my kernel API, to take all the vertex formats that you want to transform into the card-specific data format; again, that's too broad a kernel API. By having a kernel API that focuses on taking the instructions that the user library generates and queuing them for DMA to the card, just doing the DMA transfers in kernel space, I get the security, I get the sharing, and I get the advantages of kernel space, without loading the kernel with an entire implementation of GL. So that's the plan, and not only is it how we've done it, it's how a lot of other systems
have done it as well; graphics cards are much different from any other device, and there's frequently much more power there than we can process. Oh yes, the framebuffer device is going to continue to exist; I'm hoping to just build it on top of this architecture. The architecture has mode setting and memory allocation, so doing the framebuffer on top of it is pretty easy. Yeah, other questions? The Open Graphics Project: people building open-source graphics hardware. I haven't seen their hardware yet; certainly I'm interested in having drivers for their hardware in X.org and in cooperating with them. I don't really know if there's more to your question. Yeah, what about Xgl, is it dead? Can you hear me? The question is about Xgl, the X server based upon GL. I've had some pushback on that from the writers of GL drivers, and one of the problems that we've hit, and that I kind of agree with, is that GL is a very fluid spec, and in GL, what looks good is correct: what looks good on the screen is considered correct by most people using a GL application. GL applications are used in animated environments mostly, and so the quality of pixelization that they're interested in is much more related to the performance with which they're able to get pixels up on the screen than to which pixels they actually draw. Which is to say, GL implementations typically focus much more on performance than on adherence to some abstract specification, and as a result, when you try to do a 2D X server on top of a GL library, you don't get any pixelization guarantees; in fact, oftentimes the look of the same GL execution on two different GL libraries is dramatically different. That's okay if you're playing video games; it's okay to an extent if you're doing CAD drawing; it's not okay when you're doing most 2D applications, where you're looking at text being displayed on the screen, or looking at icons being drawn on the screen. Most people are not
interested in having things be several pixels out of whack. So the problem is that in order to build an X server based upon GL, you need a solid base to build on, and GL libraries don't provide a precise enough specification, or a constrained enough implementation space, for 2D applications to sit on. When you build an X server on top of GL, what you end up doing is artificially constraining what the GL libraries you use can do, in terms of the flexibility of rendering algorithms and the flexibility of polygon pixelization. All of a sudden the GL libraries feel very constrained in how they can explore the performance-versus-quality space in their implementations, so several GL authors have said, "we wish you wouldn't do that, because it makes it very hard for us to write the GL library that satisfies the performance requirements of our other customers." So, I don't know; maybe at some point GL will become well enough specified that it makes sense. Right now our 2D needs are very primitive: I need to be able to fill solid colors, fill gradients, and paint textures. The number of operations that I need is very small, and it doesn't take very long to write a 2D driver for a typical card, so the amount of code saving that we're getting is not very large, and the amount of pain that we're putting on the GL implementers is fairly high; right now I don't think it makes a lot of sense. One of the architectures we are looking at is to retain the existing 2D API but allow specific driver writers to leverage their 3D implementations for the execution of 2D operations. That's a project called Glucose, where a particular driver author (say you're writing an Intel driver) knows how their 3D library works, knows exactly what it does to satisfy 3D; so instead of writing another big pile of code specific to your hardware, you can just call into your 3D library, and because we can now call GL functions from inside the X server with AIGLX, you can implement 2D operations
using GL calls. But because we're making this a decision for the driver writer, instead of a global decision for all operations, a driver writer gets control of which pieces are implemented using this GL code and which pieces are implemented with custom driver code. So instead of making a big switch to a GL-based X server, we can make a slow migration, where we slowly use GL in the areas where it makes sense, especially when you talk about the Render extension, where the pixelization requirements are different than the core X protocol. So I don't know; right now it looks like the combination with AIGLX makes the most sense. I think we have maybe one last question. Yep? Sorry, there are a million projects out there... is it cool? Thanks very much.
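The server-side input transformation described earlier, for manipulating windows in their transformed state, comes down to mapping each pointer event through the inverse of the transform the compositor used to draw the window. Here is a minimal sketch of that idea; the function names and the 2x3 matrix layout are invented for illustration, and the real server code would work on protocol events rather than plain tuples.

```python
# Sketch: mapping a pointer event from screen space back into window
# space when the compositor draws the window under an affine transform.

def invert_affine(m):
    """Invert a 2x3 affine transform [[a, b, tx], [c, d, ty]]."""
    (a, b, tx), (c, d, ty) = m
    det = a * d - b * c
    if det == 0:
        raise ValueError("transform is not invertible")
    ia, ib = d / det, -b / det
    ic, id_ = -c / det, a / det
    # Inverse translation is -M_inv * t.
    itx = -(ia * tx + ib * ty)
    ity = -(ic * tx + id_ * ty)
    return [[ia, ib, itx], [ic, id_, ity]]

def to_window_coords(transform, screen_x, screen_y):
    """Map a screen-space pointer position into window-local coordinates."""
    inv = invert_affine(transform)
    wx = inv[0][0] * screen_x + inv[0][1] * screen_y + inv[0][2]
    wy = inv[1][0] * screen_x + inv[1][1] * screen_y + inv[1][2]
    return wx, wy

# A window shrunk to half size and offset to (100, 100), Expose-style.
shrink = [[0.5, 0.0, 100.0],
          [0.0, 0.5, 100.0]]

# A click at screen (150, 150) lands at (100, 100) inside the window.
print(to_window_coords(shrink, 150, 150))
```

With output transformed in the compositor and input transformed in the server like this, clicks land on the right widget even while the window is shrunk or tilted.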
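The kernel/user split proposed in the answer about moving drivers into the kernel can be sketched as follows: user space generates card-specific commands for the ring buffer, while the kernel only validates, queues, and DMAs them, never generating content itself. Everything here is hypothetical, including the opcodes, register numbers, and the idea of a "safe" register range; it is a simulation of the division of labor, not any real hardware or DRM interface.

```python
# Sketch of the proposed kernel/user split: user space builds
# card-specific command buffers; the kernel validates and queues them
# for DMA. All opcodes and register ranges are invented for this sketch.

# --- user-space side: generate card-specific commands ---
def emit_fill_rect(x, y, w, h, color):
    """Encode a hypothetical card-specific 'fill rectangle' command."""
    return [("WRITE_REG", 0x2000, color),          # fill color register
            ("WRITE_REG", 0x2004, (x << 16) | y),  # origin
            ("WRITE_REG", 0x2008, (w << 16) | h),  # size
            ("KICK", 0, 0)]                        # start the engine

# --- kernel side: validate and queue, but never generate content ---
SAFE_REGISTERS = range(0x2000, 0x3000)  # 2D engine registers only

def submit(ring, commands):
    """Validate a command buffer and append it to the ring for DMA.

    Returns 0 on success, -1 if any command touches a register the
    client may not program (e.g. DMA or interrupt setup registers).
    """
    for op, reg, _ in commands:
        if op == "WRITE_REG" and reg not in SAFE_REGISTERS:
            return -1  # reject the whole buffer; nothing is queued
    ring.extend(commands)
    return 0

ring_buffer = []
ok = submit(ring_buffer, emit_fill_rect(10, 10, 64, 64, 0x00FF00))
bad = submit(ring_buffer, [("WRITE_REG", 0x0010, 0)])  # forbidden register
print(ok, bad, len(ring_buffer))  # prints: 0 -1 4
```

The validation step is where the security and sharing benefits come from: the kernel never needs to understand GLSL or vertex formats, only which registers and memory a client is allowed to touch.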
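The Glucose idea at the end of the talk, letting each driver choose per operation between custom code and a GL-backed path, amounts to a dispatch table the driver writer fills in. The sketch below fakes both backends with plain functions; the class and hook names are invented, and the real thing would route through the driver's GL implementation via AIGLX inside the X server.

```python
# Sketch of Glucose-style per-operation dispatch: the driver writer
# decides, operation by operation, whether to use custom 2D code or
# route through the card's GL stack. Both backends are faked here.

def gl_composite(op):
    # Stand-in for an implementation built on GL calls via AIGLX.
    return "composite(%s) via GL" % op

def custom_solid_fill(op):
    # Stand-in for hand-written, card-specific 2D code.
    return "solid(%s) via custom driver code" % op

class Driver2D:
    """A driver's 2D acceleration hooks, chosen per operation."""
    def __init__(self):
        # This hypothetical driver trusts its GL stack for Render
        # compositing but keeps a custom path for solid fills.
        self.hooks = {
            "composite": gl_composite,
            "solid": custom_solid_fill,
        }

    def render(self, op_name, op):
        return self.hooks[op_name](op)

drv = Driver2D()
print(drv.render("composite", "over"))
print(drv.render("solid", "blue"))
```

Because the table is per driver rather than global, migration to GL-backed rendering can happen one operation at a time, exactly the slow migration the talk describes.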