They couldn't ship this closed source driver with their product, so we had a driver update process where we provided the means for you to download the drivers from ATI's website, but we couldn't ship them with our product. Basically, that was due to the fact that it loaded a kernel module which was closed source into a GPL kernel, which violates the GPL, so we would have been in big trouble if we had provided it on our CDs or whatever. So this was a bad situation for both sides, for consumers and users and also for the Linux distributors, so we put pressure on ATI to change the situation. Now, users are usually very vocal in public and express their disappointment about such a situation. The Linux distributors, mostly technical partners of ATI, don't usually chide their partners in public, but of course there were discussions going on behind the scenes.

Well, another thing that appeared, I think it was last year, was the Avivo driver project, where some people in this room were involved. Daniel Stone. Daniel? Where's Daniel? Okay, Jerome? Is Jerome here? Oh, there's Jerome. It was the attempt, a pretty successful attempt even, to reverse engineer what the closed source Catalyst driver and the BIOS do to initialize the hardware. So it was basically a mode-setting driver that helped to overcome this problem, and it also served as a means to put some pressure on ATI, a signal to the world: we are doing something, we don't necessarily need your information, we can do our own driver. And then, when the R6xx generation came out, there was virtually nothing. There was no driver that could deal with it: there was no open source driver, and there was no open source driver in sight.
Also, the Catalyst driver, the closed source driver from ATI, was not able to support this hardware under Linux at the time the hardware was released. It supported it under Windows, of course, but not under Linux. And for AMD, the company that purchased ATI, this was an occasion to look into the situation. AMD actually had a very long history with open source drivers and with supporting the community in writing drivers for AMD hardware. So AMD was different here, and this kind of helped to change the entire situation. We had a long history of a good relationship with AMD, because we kind of started with the x86-64 project: we did the port of Linux to x86-64, which was a very successful project, and through it we had a pretty close relationship with AMD. So AMD came to us and asked what ideas we had for creating an open source driver here. So we looked at it, met up together, and put together a proposal of how this could be done and the different steps involved. Well, one point in there was to have community involvement early on, so: release something pretty soon. Somebody would have to do the first bits and then go and share them, so that people can jump in at once. Another thing that we asked for was documentation for the chips that is available to everybody very early on, without any strings attached. At certain times there used to be a program for open source developers where they could sign an NDA to get documentation from ATI. I think this is kind of messy for both sides: ATI needs to keep track of the NDAs, and that's something we didn't want, we wanted to get around this. So, following our proposal, AMD asked us to jump-start a driver, to create an initial driver that could then be released to the public together with an initial set of documentation. I think it was around July, mid-July.
We got the first drop of documentation from ATI, which at that time was still under NDA because it required some cleanup, it needed to be looked over, and we started to write this driver. Okay, before I talk about the details of the driver: I think John already had a slide on this, so I can be brief. John has a diagram of the different components that are involved in graphics, and he had three components on it. One is the driver inside the X server: the X server contains the driver that does the mode setting and the 2D acceleration. We have two different acceleration architectures in the X server right now. One is XAA, the XFree86 Acceleration Architecture, which has been around since even before XFree86 4.0 was released, so it's pretty old. It basically accelerates the core X protocol; there's not much, or pretty much nothing, to accelerate any newer rendering functionality like the X Render extension. Then we have EXA, which is newer and is supposed to help accelerate X Render. It doesn't do so much acceleration for the core protocol anymore, because that's not used that often anymore. A lot of drivers support both architectures, because XAA is considered the more stable, more reliable one; EXA is the more experimental one, and in many benchmark results you still see today that XAA is still a little faster than EXA. I don't want to go into this too much; I think John went into this already. The drawing we do here we can do either through classical register programming or using the command processor. Another thing that the X server does is set up the DRM; basically, memory management. Actually, we are looking into a new model using a TTM-based memory manager at the moment, but today the X server is still in charge of allocating the memory buffers, and all these allocations are static.
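The two submission paths just mentioned, direct register programming versus the command processor, can be sketched roughly as follows. This is a minimal illustration of the command-processor idea only: a ring buffer of command dwords with driver-owned write and GPU-owned read pointers. All names, sizes, and the packet layout here are invented for the sketch, not taken from ATI documentation or the real driver.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical command-processor ring: the driver writes command
 * packets into a ring in memory and bumps a write pointer; the GPU
 * consumes from a read pointer. Layout and names are invented. */

#define RING_SIZE 256  /* 32-bit dwords; must be a power of two */

struct cp_ring {
    uint32_t buf[RING_SIZE];
    uint32_t wptr;  /* driver-owned write pointer (dword index) */
    uint32_t rptr;  /* GPU-owned read pointer (dword index)     */
};

/* Free dwords, keeping one slot open so wptr == rptr means "empty". */
static uint32_t cp_ring_space(const struct cp_ring *r)
{
    return (r->rptr - r->wptr - 1) & (RING_SIZE - 1);
}

/* Emit a packet into the ring; 0 on success, -1 if it doesn't fit. */
static int cp_emit(struct cp_ring *r, const uint32_t *pkt, uint32_t ndw)
{
    if (cp_ring_space(r) < ndw)
        return -1;
    for (uint32_t i = 0; i < ndw; i++) {
        r->buf[r->wptr] = pkt[i];
        r->wptr = (r->wptr + 1) & (RING_SIZE - 1);
    }
    /* Real hardware: write r->wptr to the CP's write-pointer
     * register via MMIO here to kick off processing. */
    return 0;
}

/* Tiny self-check doubling as a usage example. */
static int cp_demo(void)
{
    struct cp_ring r = { .wptr = 0, .rptr = 0 };
    uint32_t pkt[3] = { 0xC0001000u, 0x1234u, 0x5678u }; /* fake packet */
    if (cp_ring_space(&r) != RING_SIZE - 1) return -1;
    if (cp_emit(&r, pkt, 3) != 0) return -2;
    return (int)r.wptr; /* 3 after emitting three dwords */
}
```

The classical alternative, register programming, would instead poke each operation's parameters into MMIO registers one by one; the ring approach lets the CPU run ahead of the GPU.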
All this is done inside the X server. Another thing the X server does is the context switching between 2D and 3D applications, because they need different hardware state; this is also done in the X server. Another thing that's done in the X server is programming the video overlay and scaler for video. This is something that we don't intend to do anymore on RadeonHD, but I'll get to this later. The second part is a kernel module, the DRM module, which basically provides the means for getting commands to the hardware. It does basic command checking, so that no illegal commands get to the hardware, because once you can send arbitrary commands you can have the graphics chip do all kinds of nasty things to your machine; and it also takes care of DMA. The third piece is the DRI client-side module. It's a module that gets loaded by libGL when you run a 3D application, and it pretty much converts the OpenGL state to hardware-specific commands. And today, with hardware that no longer contains fixed-function units but shaders, it contains the back ends, the assemblers and compilers, for those shaders.

So, to help understand the next slide a little better, I used a graphic that I picked up from John's slides. I checked: this is okay, John, that I'm using this graphic. It's not very detailed and not completely accurate, but it shows most of the parts that we have been programming in the RadeonHD driver so far. So, to give an overview, and there are some parts missing here: first you have the memory sequencer, which reads the video data, the scanline data, from the frame buffer; and the CRT controllers, the CRTCs, which provide the timing information, provide the timing signals, and are driven by the PLLs, so the pixel clock also comes out of there. The data that comes out of there then gets sent through the display pipeline, where the gamma correction, the color correction, is done in those color lookup tables.
Then, further down the pipeline, you have a scaler, which can rescale the image if your screen, like on a laptop, has a different physical size than the image in the frame buffer you want to view. That's the scaler engine. There is also a video overlay engine drawn in here. This is something which we will probably never use, because the overlay engine, at least on R5xx and R6xx, does not have a scaler anymore. You could overlay video data in YCbCr and similar formats, but you can't scale it to the size you want, so whatever video you have, you have in its original size. This part is probably never going to be used. Further down is an output crossbar. We have two independent CRTCs and two independent display pipelines, so we can have two streams of data going from the frame buffer through the display engine, and here in the crossbar they get distributed to the different outputs. Now, if you look at a video card today, it will usually have two jacks on the back. Oftentimes both are DVI jacks, but it depends on the card.
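The crossbar boils down to a routing decision: which CRTC feeds which output block. A minimal sketch of such routing might look like the following; the register packing, the output list, and the one-bit-per-output encoding are all invented for illustration and do not reflect the real register layout.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical output crossbar: each output block has a one-bit
 * source-select field saying which of the two CRTCs feeds it.
 * The packing into a single fake register is invented here. */

enum crtc_id   { CRTC_0 = 0, CRTC_1 = 1 };
enum output_id { OUT_DAC_A, OUT_DAC_B, OUT_TMDS_A, OUT_LVDS, OUT_COUNT };

/* Return a new crossbar register value with `out` routed to `crtc`.
 * Pure function: takes the old value, returns the updated one. */
static uint32_t crossbar_route(uint32_t reg, enum output_id out,
                               enum crtc_id crtc)
{
    if (crtc == CRTC_1)
        return reg | (1u << out);    /* select CRTC 1 as source */
    return reg & ~(1u << out);       /* select CRTC 0 as source */
}

/* Read back which CRTC currently feeds a given output. */
static enum crtc_id crossbar_source(uint32_t reg, enum output_id out)
{
    return ((reg >> out) & 1u) ? CRTC_1 : CRTC_0;
}
```

Keeping the routing a pure register transformation like this makes it easy to test the logic without touching hardware.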
Most DVI jacks provide two signals: an analog signal for the old monitors, with an adapter you can plug on, and also a digital signal for digital panels, for digital monitors. So you have analog outputs for normal CRTs and digital outputs for DVI, and then there are outputs you don't see on the outside: you have LVDS for the panels, the flat panels of laptops, and you also have television out. All these different kinds of outputs, and how the signals get to these outputs is controlled by the crossbar. There is also a TV encoder here: when you hook up a TV, there is some funky rescaling of the data so that it fits your TV norm. Actually, we don't have much documentation on this part, or not any documentation at all, so this is pretty much a black box for us, and this picture is not completely accurate; it seems that you can assign this to both jacks.

Okay, well, now where are we? What's the present status of our RadeonHD driver? We have pretty much usable mode setting; we can support most of the outputs. TV out is not completely done yet: we have most of the pieces coded up, but we can't yet control the TV controller you've seen on the previous slide. We will be able to control this eventually, and I will come back to this on a later slide; all the pieces for it are in place. We can drive all kinds of digital outputs: we can drive DVI, we can drive panels. We have support for RandR 1.2, which is pretty much standard today. We have support for acceleration, both XAA and EXA, at least for R5xx. You've probably heard from John's talk that on R6xx, any generation after R5xx, the dedicated 2D engine is gone, so you have to use the 3D engine. Now, there is an emulation layer on the later hardware which pretty much emulates the 2D engine, but this needs to be initialized with the right state, and the code for this is not yet in the driver, so we don't have that yet. In fact, the code for this will probably be in the tcore code that's going to be released, so we can pick it up there and do 2D emulation also for R6xx.

What we did in the driver very early on: we looked at the different subsystems you've seen on the previous slide and separated them out in software. We looked at the older X drivers, where everything was somehow clumped together, and that was just a huge problem, because the original design that was used assumed just one single output per card. So the first step we did was to separate out all these subsystems, for all the different hardware pieces: separate subsystems for the PLLs, for the CRTCs, for the sequencer that reads the data from the frame buffer, for the color transformation, and for the outputs. We separated out all of them: the DACs for analog, the LVDS for panels, the TMDS for DVI and HDMI. We also added support to abstract the connectors, the physical connectors, because you have the situation where you have different outputs connected to one connector; this is the case in the DVI world, where you have an analog signal and a TMDS on one connector. But you can also have the other thing: one output shared by several connectors, which is the case between analog and TV out, which usually use the same DAC. So it's very helpful to have that structure here. From this separation, the separation of the different subsystems, we looked into how to map this onto RandR 1.2. So we didn't start from RandR 1.2 and try to map the hardware onto it: the first step was to map the hardware into software, and then to check how we can interface this into the abstraction model of RandR 1.2. This kind of separation helps because whenever a new engine or a new piece of hardware comes around, it's very easy to plug it in. And also, the hardest thing in driver development, especially in graphics driver development, is to get the bit-banging right, especially
when your documentation kind of sucks. So writing those pieces is probably the biggest part of the work. The infrastructure on top is also a lot of work, but it doesn't change that much, and there are not that many new generations, so it's a good idea to have those pieces reusable: when you change to another environment, you can carry them along. So it's kind of nice to have them abstracted out. A lot of the old drivers don't have this, which makes it very painful to do things like RandR 1.2: if people take one of these really old drivers, it's really painful to break things apart and stick them back together so that they fit RandR 1.2. So we abstracted these things out from the very beginning, so that we are very flexible here.

Well, another thing, of course: the code to read the EDID from the displays is also separated out into a different subsystem. Also, we added a software abstraction for monitors. Something like this is also part of RandR 1.2, but we found that we needed to do some different validation than RandR 1.2 does, so we put a layer on top of it; this is purely a software abstraction. Another thing, John talked about it already and I'll have to talk about it on a later slide too: we have a layer for AtomBIOS, which deals with the BIOS. So this separation really proved to be very useful and very flexible, and the integration of RandR 1.2 proved to be very simple. Now, our initial driver didn't support RandR 1.2 right from the outset, because we first wanted to get all these different sub-modules right, and we just plugged them together somehow, hard-coded, to see how this works; later on we integrated this into the RandR 1.2 model. Also, the design is very portable: we strictly separated the driver-private parts and hardware-related functions from the interfaces to the other parts of the X server, to the device-dependent X code or even the device-independent layer, DIX. Somebody was asking just a few days ago on the mailing list if it was possible to port the driver to... what was the operating system called? It's another open source operating system I'd never heard of. If you take all those bits and pieces out, you can take them and port them over to a completely different system. Maybe he misunderstood what I was referring to: I mean the abstraction between the driver-level, low-level functions and anything that interfaces to the layers above, because you don't want DIX functionality way down inside your bit-banging pieces; if you want to port it someplace else, you would have to change all those things. So, yeah, I'm not sure if anybody would want to port this to PageRide; I don't know what the status of PageRide is at the moment. Is there anybody actually still working on it? Yep? Okay.

[Audience question] Right, right. Oh, like you guys have... are you going to use ATI hardware? Okay. I mean, we will be seeing more and more embedded ATI hardware, embedded in a system-on-chip or something like that, and that will probably be targeted by Keith Bryant. I mean, it could be interesting to have those things, some of these abstraction pieces, be shared across drivers. I think what very few people know is that another area where ATI is pretty strong is in hardware for embedded devices; Jeff may know more about this. And that's the other thing: we're kind of curious to see how this area of the story is going to develop.
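The separation of outputs from physical connectors described above can be sketched as a pair of data structures, with the hardware-facing functions behind a small ops table so the low-level pieces stay free of X-server dependencies. All type, field, and function names here are invented for illustration; this is not the real RadeonHD code, just the shape of the idea.

```c
#include <assert.h>

/* Sketch of separating outputs (DAC, TMDS, LVDS) from physical
 * connectors (DVI-I, VGA, ...). Names are invented for this sketch. */

struct output;

struct output_ops {                  /* low-level, hardware-facing */
    void (*power)(struct output *o, int on);
    int powered;                     /* recorded state, for the demo */
};

struct output {
    const char *name;                /* "DAC A", "TMDS A", "LVDS" */
    struct output_ops *ops;
};

/* A connector can expose several outputs (DVI-I: analog + TMDS),
 * and one output can sit behind several connectors (shared DAC). */
struct connector {
    const char *name;
    struct output *outputs[2];
    int noutputs;
};

static void fake_power(struct output *o, int on) { o->ops->powered = on; }

/* Power up every output behind a connector. */
static void connector_enable(struct connector *c)
{
    for (int i = 0; i < c->noutputs; i++)
        c->outputs[i]->ops->power(c->outputs[i], 1);
}

/* Usage example: a DVI-I connector carrying both a DAC and a TMDS;
 * returns how many of its outputs are powered after enabling. */
static int demo_enable_dvi_i(void)
{
    struct output_ops dac_ops  = { fake_power, 0 };
    struct output_ops tmds_ops = { fake_power, 0 };
    struct output dac  = { "DAC A",  &dac_ops  };
    struct output tmds = { "TMDS A", &tmds_ops };
    struct connector dvi_i = { "DVI-I", { &dac, &tmds }, 2 };
    connector_enable(&dvi_i);
    return dac_ops.powered + tmds_ops.powered;
}
```

Because nothing above the ops table knows about DIX or any other X-server layer, a port to another operating system would only need to replace what sits above these structures, which is exactly the portability argument made in the talk.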