I did GTK work in the past, and Python contributions as well. So I come from an open source background, but not really from a hardware-related or hardware-dependent background. I've been working with Linaro for this last year, setting the team up and trying to understand what the problems are that we're trying to face. And I've been looking at this embedded GPU problem with a lot of interest, because I think it's at the heart of the difficulty that ARM has had in becoming a mainstream platform for people developing open source. So I'll preface this by just talking about what I think the problems really are. I think there are actually a number of different problems, depending on who you ask what the problem is. But I'll look at them from a few perspectives. First, the OEMs, the people that are building these devices and giving them to end users to use. OEMs are trying to build ARM-based devices; Nokia is a good example, and I've gone and spoken to them. Other examples are people that are trying to do ARM-based netbooks, and Canonical has done a ton of projects like this before and failed in most of them. So these OEMs are trying to build products that run proper Linux. And when I say proper Linux, I mean non-Android Linux: Xorg and so on. And they have a really hard time building a device which performs well. The second problem, which is a problem I'm more familiar with, and I think David is also very familiar with this: we want to have open source drivers so that we can work on the hardware, so that we can debug it, so that we don't get stuck figuring out, oh my God, is the bug that I'm running into happening within this blob or within the kernel itself, and so on. And GPU vendors won't provide an open source driver stack.
So those are the three angles of the problem. The last one is a bit of a fake problem: the true problems for the GPU vendors are that they would like to sell more silicon, and they would like to spend less money carrying the burden of maintaining the software. So it's a bit of a fake problem, the third one, but I put it there because at least it states things as diametrical opposites. OEMs are trying to sell devices and they can't get them done, so they cancel the project; the performance is terrible. We want to have open source drivers because we want to have the rights, you know, we want to be able to go out and debug and improve the performance of our devices, and GPU vendors won't do that. So let's talk about why they won't do that. Just a second portion to that preface: who are the people involved in this? OEMs such as Motorola, Dell, HTC, Samsung, Genesi; there are tons of companies out there that are trying to put out ARM-based devices that use embedded GPUs and are failing to do that with straight Linux. Linux distributions are another important party in this problem, so people like Chrome OS and Android, but also people like MeeGo and Ubuntu, who are trying to put Linux on ARM devices and ship them, are running into this. SoC vendors are all the people that actually sell the silicon: TI, Qualcomm, NVIDIA, Freescale, ST-Ericsson and Samsung are examples that I put up there. There are other cases, like Renesas, and actually smaller SoC vendors as well that put out ARM-based cores. GPU IP vendors are the people that actually own the IP that goes into the SoCs, and I listed the three big ones here that I think are the important ones. There are others, so Vivante is another example. There is VIA, who acquired S3.
I don't know if all these companies are still selling IP, but in general they're companies that have sold graphics IP in the past, and there is graphics IP out there. And then there's the Linux and Xorg engineering community. So I think those are the actors involved. These are people that, if you go and ask them, what do you think about this embedded GPU stuff in Linux? They'll say, oh my God, this is terrible, nothing works, it's super bad. I think all the people that are here, if you go and ask them, they'll say there is a problem. I don't know if they'll agree on what the problem actually is, but they'll say there is a problem. Right. So why am I interested in helping solve this problem? In Linaro, we have a bunch of different working groups that look at different portions of upstream open source that we want to make better. There's a graphics working group, which is run by the guy on the left, Jesse Barker, who came to us from ARM. And Paul McKenney, who leads the kernel working group, is also here this week. Hi, Paul, how are you doing? So he's also helping us understand what the problem is and what possible solutions would be. Now, we know that there are a number of problems that involve non-free GPU code, but we're not exactly sure which ones to focus on and how to solve them yet. And so I actually don't know anything about what I'm talking about here. The truth is that the facts here are either Jesse's or Paul's, and I've tried to interpret them in a way which I can understand, so that I can go out and talk to the vendors afterwards and explain, OK, this is actually what we should be trying to fix, or this is why we can't do things this way. As I said, I don't come from this background, so I don't actually know a lot about the hardware or the history behind it. So I did some research going into these problems to try and figure out: what does the architecture look like?
Why are people providing these binary drivers? Why don't they just open source everything? And why are people having performance problems? On the first one: it actually took me a long time to understand exactly how these things fit together. In the PC world, I'm used to going out and buying a graphics card which comes from a vendor. It's a discrete card, it has its own memory, you plug it into a PCI or AGP port, and it works. In the case of embedded GPUs, the vendors are actually just IP vendors. All they sell is IP, and some silicon vendor goes out and takes that IP and puts it on a chip. So it's actually a little bit hard to figure out who you have to blame for the problem of non-free drivers. Do you blame the silicon vendor, who's the guy selling the chip? Or do you blame his supplier, who's the guy providing the graphics IP? That makes the problem a little bit more complicated. Now, the graphics IP vendor isn't really motivated to provide specifications or source code. He doesn't really feel the pain directly, because he's going to license his IP to somebody who's only going to be selling silicon years from now anyway. So there's not a lot of pressure on him right now to open source stuff. There might be pressure for him to open source the stuff he's doing in the future, but then he gets into problems, because, and I'll talk about patents a little further down the line, they justify it by saying there are lots of patent issues, or potential patent issues, in this area: we don't know if we're safe to publish this stuff. So that's one of the areas there. Now, the OEMs who are shipping Linux-based devices are buying the SoCs and the IP and putting it together and selling it anyway. And Android has shown just what a great success they've had following that strategy.
So, not talking to the open source community at all, not providing anything at all in terms of specs and drivers, and still shipping products, and lots of them. Today, many more products run Android than run regular Linux. And I consider that to be a big problem for us, a big hole in where we're going. So I want to also understand that and try and figure it out. And because of these things, open source doesn't get any better. Nobody there is actually trying to make open source any better. They take the stuff, they sell it, they benefit from some interaction with it, but open source as a whole isn't getting any better. So that's the first part: I was surprised at just how complicated it is and how much people don't see eye to eye when they're describing the problems. The second thing, and I just use this picture to illustrate it, is about the embedded world, or the SoC world, where there's one thing which I think is really interesting. Arnd Bergmann likes doing these interesting analogies: in a regular PC system, every memory is like a little bathtub, right? You have a bathtub which is on the graphics card and you have a bathtub which is attached to the CPU. In the embedded world, or especially in modern ARM-based processors, it's more like a jacuzzi where everybody is in it together. You have the GPU and the CPU using the same piece of memory, but it's not just those two pieces like you might see with integrated graphics on a regular desktop PC. You also have modems, GPS devices, hardware codecs, cameras that are all reading and writing from the same memory. And that actually puts us in a more complex situation; the problem there is more complex to understand and to solve. So in general, an ARM-based device will allow multiple different peripherals to access memory and modify it.
The best example that I can describe, and actually Paul was the one who came up with this, is that if you have a camera and you're doing a preview like this, the camera is actually writing to memory and that's getting rendered without interrupting the CPU. And this is a bad example because this is a Symbian phone, but some people here have N900s and so on, and this is the way that, if you want to get to a low-power device, that's how you do it. You get the camera to preview data and display it on the screen without actually waking up the CPU, because the CPU doesn't need to do anything. It only needs to do something when you actually press the button or an alarm goes off. So some form of shared memory management is required. Normally what these SoCs provide is some form of user space library plus a kernel-side memory manager component. In general, because these devices don't have memory management units, they don't have page tables or anything like that; the devices just write straight to physical addresses. And in general, they need increasingly large portions of contiguous memory, because if you're processing camera data and shipping it over the wire, there's a lot of data involved. So a lot of contiguous physical memory needs to be made available. Whether that's a long-term solution or we should tell people to fix their hardware, I'm open to us discussing that, but at the moment that's the situation we're in. And the GPU is one key participant here. So I'm trying to show this as a systemic problem. It's not just the GPU that's at fault here, it's actually the GPU together with the CPU and all these other pieces of hardware that often need binary drivers themselves, and that need to write to memory in the same sort of ways, or be managed in the same sort of ways.
Okay, so the third thing that surprised me was about specifications. I was actually surprised when I went and looked at this and saw that, wow, Intel and AMD did provide GPU specs. If you go on the Xorg site, you can actually see the AMD/ATI specifications. And if you go to the Intel site, there actually are specifications for the Intel-based chips. Now, I'm not 100% sure about this, because it's very hard to understand what the product lines are, how they base off each other's chipsets, and what are variants of what, but I know that you can get actual specifications and code at least for the GMA 900 and 950, all the way up to the X3100. I don't know if you can get it for the GMA 3500 and 4500. Can you? Can you? All right, so you lost me right at the beginning there, but I think you told me that we don't actually have specs even for the GMA 900 to X3100. And we do actually have drivers that support the 900 and 950, right? Right, so we don't have specs for those, but we do have drivers for them, though not necessarily for stuff which is unreleased, right? Right, and it's a question of support and quality as well, and how much acceleration you actually get out of it. You're right. And for AMD, I know we have R300 to R700 specs, but I don't know about the new stuff. Right, right, they're sort of the same, yeah? Right, and just writing the specs is very expensive as well. Okay, so I've had this conversation now, both with ARM and with Imagination, and I've asked: what are the first steps we can take in the direction of open sourcing this stuff? And I've cited this as an example: we do have specifications for these. I'm not sure if this is what we should be asking them for, I'm not sure if this is the first thing that they should be providing us with, but it's probably the place where they're most comfortable starting to do something.
And in general, when I discuss that, it's about opening up to patent risks, and I think there is some competitive advantage that they still believe in there, but at least they don't say it. The real reason they state is that it's a patent issue. So, about patents: there's a non-trivial set of GPU-related patents, and I did a cursory search on this. There are about 2,000 patents today registered with the word GPU, or with the letters GPU, in them. And I think the reality in this market is that everybody probably infringes on somebody else's patents; you could say that about everybody that's playing there. But there's a difference here, in that Intel, AMD, and NVIDIA are pretty big companies, and in general you wouldn't see them suing somebody else over graphics IP, because it doesn't make sense: if you're a big company and you produce a lot of IP yourself, you probably don't want to be suing somebody else, because it's likely that you're infringing as well. So in general, I don't think those companies would be the ones to sue. But Imagination and ARM are pretty small. I mean, Michael thinks that NVIDIA's legal department is bigger than ARM itself, and Qualcomm's legal department is definitely much bigger than ARM itself. So these are pretty small companies. And then there are these things that we dub patent trolls, and they may own some of these key patents. One of the examples that's been cited is Rambus, which is still around, even though they're not actually shipping anything these days. So why are they still around? What's their interest? What sort of business are they in?
Now, because it's patents, the vendors will tell me that the reason is patents, but they won't even investigate the issues themselves, because you're not supposed to investigate if you are an IP vendor: if you are found to be infringing knowingly, it's much worse for you. So patents might be the real reason. They might actually be telling us the truth; it might actually be the real reason why they don't give us the specs or the code. I don't know if that's the truth or not, but that's what they're saying today. I'll talk about what we should do in the face of that later on, but it's one thing which has been repeated back to us over and over again. And then there's the Mali case, where Paul and I have had a number of calls with them, and they've actually gone out of their way and said, okay, we will produce specifications for these portions, we will provide you with open source code for this, but not for that. So there is a dialogue, but I'm not exactly sure where we want to drive that dialogue. And that's why I'm asking for help. Okay, and the fifth thing is just how many goddamn binaries you actually have to get to make the system go. And this, I think, is actually amazing. In general, what they provide to you is an OpenGL ES and EGL implementation. So it'll be a binary library that they provide, and usually there'll be a shader compiler mixed in there as well, because without a shader compiler this doesn't actually work. Normally, some of these vendors have provided, and I think this is a step in the right direction, an X driver. So at least they're thinking about, okay, what do I do in an X world? This is the device-dependent X that they provide to us: something which maps what X is doing to something hardware-specific. And so there'll be some GPU support in there, sometimes some hardware blitter support. If you have an i.MX51, for instance, there's a hardware blitter in there.
So some of these SoCs have interesting capabilities; very few drivers actually use them, or use them well. Jesse was telling me this story about when he used to work at SGI, that the amount of effort that went into writing an X driver was gigantic. The drivers they would write would be 10 or 20 times the size of the drivers we have today. SGI, of course, took X very seriously and invested the effort, but the performance gains that they got were also impressive. So it wasn't diminishing returns: they were writing X drivers 20 times the size to get the performance out of the hardware they were shipping. Anyway, what else do people provide? DRM or DRM-like device-dependent code. Often not exactly DRM, but something which patches into the kernel and allows DRM-like functionality. Hardware codec drivers: this is not really a GPU thing, but sometimes they'll do part of the decoding on the GPU, so they'll provide you with a driver which does that. And often they have a separate unit which does hardware decoding, and they'll provide a binary driver for that too. Then, in general, they'll provide a few open source kernel shims to drive all that. And to wrap it up, a generally free memory management component, which all of them have reinvented. So UMP is what Mali provides. There's HWMEM, which ST-Ericsson provides, and which is then wrapped into UMP because the driver actually talks UMP. There's CMEM, which TI provides. There's PMEM, which I think Android uses. So there are a couple of different implementations of this, and we should probably try and figure out what they're trying to do and convince them to converge on a standard. And just how much goddamn binary: this is just an example of what you get when you get the PowerVR bits that drive an OMAP3 and OMAP4. Well, yes, OMAP3 and OMAP4, roughly.
There may be one more incarnation for OMAP4, but this is more or less what they're shipping, or at least a subset of it. There's a shader compiler in there in the middle. There's an EGL component, GLES, and they provide OpenVG as well. And then there's a bunch of stuff which I have no idea what it does, and I don't know if anybody else does either. Well, they probably do. Anyway, so, thinking about potential solutions: what are my possibly clueless predictions about this? Because it's been reinvented so many times, and people will keep on doing it, and there is a lot of open source there, some form of memory management API will be necessary, and it will need to address the fact that people need to allocate possibly large contiguous areas. Paul has been thinking about this. There's some tie-in here to memory regions, and it's actually kind of complicated, but there are a lot of different components that can go into the solution. I think that the vendors will still ship binaries, possibly forever, if they have a good enough team. So I think that we will coexist with binary drivers potentially forever. That's actually one good question that I wanted to ask, since I don't understand the history behind this well: there is an open source ATI driver, but there is still FireGL, right? Right. And in particular because of NVIDIA, I think vendors will always ship binaries. They have very big, very good software teams. I've gone in and spoken to the people there, and they're doing lots of really good cutting-edge work. So I think that they'll always ship binaries. Maybe there will be an open source implementation, but I think there will always be binaries there. And so, if we're living in a world where people will always ship binaries, how do we ensure protection and debuggability?
You know, that is at least one of the practical consequences of having to live with a device which has a binary running on it. Paul has thought a lot about this, and maybe you should come up here at the end and talk about what you think we should be doing. But there are a couple of different suggestions on things we could do, and Paul has actually outlined some of the hardware impacts they would have. One of the solutions is having something like a memory protection unit or a full-blown IOMMU, which would allow all the devices that are talking to this memory to see virtual memory instead of having to talk to physical memory. That removes some of the risks and some of the difficulties in scaling. But I'm not sure if that's the right solution. My point here is just that we should try and figure out together what we would like to ask the vendors for, and then Paul and I will go there and actually talk to them and make it happen. So those are my suggestions. Are there other components to the solution? I'm not sure what they are. One of them could be trying to unpeel this patent onion, drilling down a little bit more into that. I'm not sure if patent pools are the answer, or if going to the vendors and saying, look, if you do open source, then we can have some sort of relationship with a patent pool that will make it easier for you or protect you in that situation. I'm not sure exactly. Reverse engineering is something which has been done in the past. Does it get us somewhere? Does it actually improve dialogue? Does it make dialogue more difficult? I'm sure that people will do this anyway; I'm not suggesting that we should stop doing reverse engineering, but is it a core part of what we should be doing? Should we be looking at this in order to solve the problem? And what else has been tried? I'm not sure what the history was behind Intel or AMD.
Whether it was a lot of conversations, or just a lot of patience. Reverse engineering does get you somewhere, at the point where the driver that we've brute-forced into existence is so compelling on its own that it's not worth them continuing to invest any engineering effort into developing a closed one. Once we have something that's 90% as good as an embedded chip driver, we're good. And we're not even close to that for the NVIDIA and Radeon chips yet. So I think that, yeah, reverse engineering is really the only way forward for a lot of these chips, but it's not a pleasant solution for a lot of reasons, not least because it sort of rewards people for not having played ball in the first place. Right? Just to note another difference: it's pretty much the market segments. When you look at graphics cards for PCs, they're selling into a Windows market, and Linux is a 0.1% game, basically nothing in that market. In the embedded GPU market, they need us. You have to remember, that's the big difference from my point of view: there is no other game in town. It's Apple iOS or Linux. You will have some Windows vendors, but Windows just isn't there in that market. They don't have that excuse. They're trying to use that as one of the possible reasons, but it's not the game. They're using Linux because Linux is the operating system of choice. It's not the same as the x86 market. Is there no future market share? Tablets and so on, you know, they're coming, they're all coming to this. It'll be there for a few years; I mean, it's three to five at least, and probably longer than that. It's a completely different game in terms of money. And, you know, I know from the x86 market that AMD cannot afford to spend any more money, because Linux just isn't there. But in the ARM market, Linux is there everywhere.
That's true, but the amount of money in the embedded market is quite a bit smaller per unit than in the PC market. It's like an order of magnitude, my friend, you know that. But no, just from the point of view of dealing with the vendors, the position of power is different. The Linux community needs to talk to them from the point of view of: you are coming to us. Giving them the power is a lot of our problem, because we're so used to having to ask for stuff from the x86 days. No, let me temper that a little bit. Yes, it's not appropriate to talk to the embedded vendors in the same way that we talked to the x86 vendors. However, I disagree with your idea that we have all the power. That's not true. We should have it; we've just probably given it away too quickly. I think the situation is a little more complicated than that, right? I mean, there are a lot of things they can do in a lot of different ways, even if they use our code base, depending on how things play out and what they want to put up with in terms of their own development, right? Yeah. But I agree, I have more hope for the embedded market. But at the same time, we do still have to be pragmatic, right? Yeah. But I would say they're not lying about the patents. I totally agree that that is their problem. I just don't know why they don't face the same problem in every other area: ARM is a CPU manufacturer, other people make CPUs, and they still tell us how to program their CPUs. Because... Sort of. They tell us. But they give us pretty much a lot more than that. Right, that is definitely true. And they face these problems too. So on this topic, actually, I have some bonus slides: what if we did nothing? We could do absolutely nothing in this area and just say, you know, this problem will solve itself. And maybe it will. But let me postulate what I think will happen.
So I think what normally happens in this sort of situation is: somebody's trying to use this binary thing, it requires a kernel shim, and they provide this patch that will talk to it. And upstream says, oh, this is rubbish. Go back home, this is horrible code, I'm not gonna take it, it only talks to this binary that you're providing anyway. So normally what happens is that upstream will reject the code submission. Now, the distributions themselves hate redistributing binaries. They hate this with a passion, because, first, users keep finding bugs which the distribution can't debug or fix. So they have all these reports in the bug tracker which are like, oh, I have this binary thing in here and it modifies the whole behavior of the system, and they're like, fuck, I'm not gonna deal with this. Second, keeping up with the ABI is horrible, because they need to do security updates as well. And so if you have this binary thing that's dependent on a kernel shim, and the ABI bumps and the kernel shim breaks, they can't afford to put out an update which is gonna break everybody's computers. So distributions hate this stuff. And OEMs end up suffering with performance and reliability, so they pressure the IP owners to either fix it or provide the source code so somebody else can go and fix it. And eventually the IP vendor gets tired of dragging these patches around, too many people are complaining to them, and eventually something happens: they provide specifications or source code or so on. But could this case be a bit special? One point here: without GPU support, a mobile device or a desktop is kind of useless. And if anybody has picked up a Sharp NetWalker, the performance there is really, you can't run anything on it properly. It's just very slow. With the screensaver that comes with it, you can enable GLChess, and then all of a sudden it will be horrible.
I don't even know if I agree with all of this. The one thing that I would caution about is the bit about the support being state of the art and the binaries hurting performance. Many ARM devices are strange in that the GPU is useful if you're playing Angry Birds, but if you're scrolling the UI around, it's not that great. Android is basically software-rendered. It's true. And the GPU comes into play for Flash and for games, and not for a lot else. So... Well, modern SurfaceFlinger will use the GPU, and now there's talk about doing actual GPU acceleration within the browser, and then moving on to WebGL after that. It's more complex than that, I think. To the extent that WebGL and things like that change the support landscape for what you have to be able to do in your device, the GPU will advance, but the HTC Aria I've got in my pocket is not going to be that device. So I don't know, because if I go and talk to the ARM people, the Mali guys, they're also interested in stuff like OpenCL and moving compute dramatically over to the GPU. So I think what you're saying is that in the embedded space, the GPUs are not as good or not as useful. I think that's true maybe up to now, but it probably won't be true for the next five years. Right now, the device that you get has a different balance of computing performance between the two units than we're used to. And if you want to do something like OpenCL, you're not necessarily doing that on your cell phone. There may be cases for doing ARM compute nodes out in the world, but fundamentally compute nodes are power-heavy devices. It takes electricity, it takes power to do math. So what kind of embedded device are you thinking of? Are you thinking of a cell phone, or are you thinking of a blade full of ARMs?
So with the modern ARM CPUs, the stuff that they're putting out these days, the A9s and A15s, the silicon vendors are not really coming out with a range of application processors. They're going out with one of them that they're probably gonna use everywhere. Some of them will have stuff like SATA on chip, for instance, and those will be more focused on server workloads. But in general, it's gonna be the same core and the same GPU and everything. So I think we'll see less specialization going forward. There will be fewer discrete units that the silicon vendor puts out, because it's very expensive for them. For TI today to put out 100 different OMAP4 variants is very expensive. So I think there won't be that much variation. That's what I think, but I'm not entirely sure. But anyway, let me just, yeah, okay, sure. Sorry, it's just, coming back to Android and your previous slide when you were talking about distributions: isn't Android a completely different situation? There are no distributions pushing against having binaries or anything. It's a different case, in that the vendors are shipping Android on their devices as they see fit. So there's nothing really pushing back against them doing that. Right, I'm saying that Android is sort of circumventing that. So this is sort of challenging. There's no other game in town for Linux on all of these mobile phone devices, and that's where the millions of units are coming out. Right, but that's sort of one of the arguments for why I think we should temper this idea that we're in a position of a lot of strength, because the truth is that the OEMs today sell all these devices and they say, what difference does it make if I provide these specifications or a driver anyway? I'm gonna open myself up to a greater patent risk, and it's not gonna make any difference in the graph of shipping products. And so I think that we have to be balanced there.
I don't think that anybody here actually knows what we should do about the problem, but I think at least we know some portions of it. Anyway, valid or not, the patent argument is hard to challenge and the concerns aren't gonna go away by themselves. So figuring out what we should tell them with regards to the patents, or what we should do there, is I think a good point. And I think that today in the GPU case, nobody has a very strong incentive to change. Problems solve themselves when there is enough of an incentive to change, and in this case, I don't think there is enough of one. So anyway, when I go and talk to a distribution like Chrome OS, I have a conversation there and they say that Linux and Xorg are really what the problem is. I go and I talk to my friends over in the Android building and it seems to be much easier; the results are really good, it performs well. One dangerous outcome of this is that the OEMs end up thinking that way, which I think is a very upside-down way of looking at this. And one thing which I think is really interesting is this trend of OEMs picking up Android kernel branches to ship on non-Android devices, like the Palm webOS devices do. Shipping an Android branch of the kernel even though you're not using a lot of the Android functionality, just because the Android branch is better maintained or it's easier for you to integrate a vendor binary into: I think that's a really weird trend that we're seeing. But the final thing I wanted to say here is that the GPU vendors do want guidance on what to do. They are actually asking us, and Paul and I and Jesse have spent lots of time talking to them. So there is surprisingly good dialogue there, and they are interested in improving the situation. They come back to me always and say, look, it has to be a practical thing.
Okay, you can't just go back there and say that everything has to be free, period, and we're not gonna talk to you until you provide everything. It has to be a practical dialogue. And again, if I've got it wrong, and I've seen some interesting things come out of this that I didn't know about, what else could we do? What are the different things we could do? That's the end of my presentation, and now I really want to hear what you think. But I mean, Xorg goes out of its way to maintain an ABI for the lifetime of a particular server release, and historically much longer than that. I'm not really sure, from a process perspective, as someone who's been the Xorg release manager once or twice, what did I do wrong? What am I to blame for here? I've heard this a couple of times now and I have no idea what I've done wrong. Yeah, I also think the Android thing: Android's new. They've got a few years to work on this, and in a couple of years' time, when they start getting version skew and try to use a new kernel, this shit's gonna hit them and they're gonna worry about it then. They're not worrying about it now, but it will happen. I don't think they've got enough time. We changed the ABI a lot; they've got much longer periods of stability. But they are gonna break it at some point. Something will change. So specifically on your point about the X components, what I've heard, and I'm not sure whether this is true, is that they say, look, we can't provide open source, and because of that, it's very hard for us to understand exactly how to write a driver that works very well. That's at least what I've heard. And so unless they are in a situation where there is a maintainer who is willing to sign an NDA, it's very hard for them to actually understand what they need to do.
And the fact that there are so many different acceleration models within X also makes that complicated, because it's hard to understand exactly which one you should crib from if you're trying to crib from one when you're writing your own. Okay. So it's more... That's what they tell me, at least. So basically I'm hearing that there's no X device drivers book, right? What's that? Sorry, I didn't hear you. That there's no book for X device drivers. You can go to Barnes & Noble and buy an outdated book about how to write Linux device drivers, and you can look at that for a bit and have kind of a rough roadmap, but there isn't a comparable document for X drivers. Or to the extent that there is, and I know that there kind of is, it's out of date and doesn't really tell you the things that you're interested in. And I mean, the secondary thing, of course, is that X is a terrible window system and it shouldn't be running on your phone. But if that's the thing that you decided you wanted to do, then yeah, you may want better documentation on it. But it's a very, very small interface; there's not a lot in an X driver anymore. And I have difficulty understanding why it's easier to look at the Linux kernel code and bolt your interfaces into that than to do the same thing to X, which is, by any metric, less code. Less to read. Less to understand afresh. I think it's not that they can't produce a driver, it's just that when they do, it performs really poorly. That's really what the problem is. Well, yes, X is a terrible window system and it's not something you should be running on your phone, but if you wanted to do that... Well, that's an interesting statement there. Is that an official statement? Should we take it back as an official statement? There's a question there, sorry, on the side, somebody with their hand up. Right next to you, sorry.
Sounds to me like you sit down at the table with the vendors and the vendors ask, do you want OpenGL, OpenCL? Do you want the shaders? Do you want the offloading for the video decoding? And you say yes, but I want the drivers as well. Could you sit down with them and say, I want the drivers on a new core that doesn't have all these features? One that just does, like, 98% of what I want. Just say I want a newer, simpler core with the docs up front. I'm not sure I understand. Paul, what are you saying? Yeah, say I want a simpler core that does 98% of what I need. Don't they sort of give us that already with, like, an unaccelerated framebuffer? That's pretty much all that's not patented. The reason I'm passing this microphone around is that your voices alone are not being picked up by the AV system. So if you've got a question, put your hand up and I'll get the mic to you, thank you. So I guess the question, and I don't know the answer to it, is: is there a useful subset, something that you aren't gonna get hammered on when you release it? Maybe there is, maybe there is. I don't know, I haven't looked at all the 2,000 patents. If you're talking about something with a web browser on it, with today's standards, or Android or something like that, like you say, you only need a framebuffer. Most of it's done in software anyway. Why can't you make that the de facto chip that you pick? Well, I think that's what you end up getting today. That's right. Yeah, and it's slow. So can't they tack on the little bits you want over time without the patent risk? No, 3D is pretty much patented from every angle. So screwed. Yeah. One thing that they've talked about is they'd like us to merge the kernel drivers and leave the user space alone, but I've been making the argument back to them that that's just not going to happen. There's no point doing that.
People think that those of us in this community are being difficult about this, but it's like, what's the point in us maintaining something when we're getting no use out of it? The burden of maintainership is where the cost is. If you're going to give us this code to put in the kernel, but we can't actually do anything with it, why do we want it in the kernel? Can you not just keep maintaining it yourself and keep it outside the kernel, because we're not getting any benefit from it? If you're complaining about your drivers being slow and we discover, oh, it's because the interface between your kernel piece and your binary piece is really crap, we can't change that to tune your driver, because we can't change the binary piece. If you come at it from the point of view of who's maintaining this, whose time is being spent on it, why put it in the kernel? Where's the benefit? One possible outcome, if we presented that to the GPU vendors in the embedded space, is they'd say, okay, we'll supply a maintainer. Now, problem solved? What would be your reaction to that? I'd wait and see. But the thing is, that doesn't help us. We still can't use that code to fix things. We can't tell them anything. They're asking us, why are the drivers we write crap? Well, we don't know, because you won't tell us. Because you won't tell us. Well, most of the open drivers are crap as well. So I'm not saying the potential isn't there. And the problem is, if you've got a set of defined interfaces between all of these components... there's four common memory managers, you think? Yeah, just a point. Now, I think this is the answer.
I'm probably wrong, but the way to get embedded IP vendors and SoC vendors to support open source more is to do something like some of the initiatives that have been done with OpenMoko and the open hardware sort of thing, where we own the IP, we've got the driver access, and we write the drivers ourselves. We go to that effort, which is a lot of time and effort, about the same amount of effort as reverse engineering. But we release devices that then sell in volume, not only to hobbyists, but as consumer devices as well. If we beat them at their own game, and we start making more money than they are, they'll sit up and take notice. It'll hit their bottom line and their profits, and they'll change their behavior towards open source. Well, I think your point, where we'll actually see something happen, will be OLPC. I think they're currently the best hope for doing something with this, because they have a good line on trying not to be profit driven, trying to do the right thing and get the driver source. I'm not sure, but as with everything with OLPC, they won't do that until... like the same problem they had with the wireless driver on the first one: until they're actually shipping the hardware, they can't release the driver. So we're not going to know whether that's going to solve the problem or not. I also think that once one vendor breaks, there'll be a lot more incentive. I agree. And I think the trick is to break one of them. And the easiest one to break, to be honest, is probably Qualcomm, because their hardware is very like AMD's hardware and we already know how AMD hardware works. So I can probably write a driver for a Qualcomm device more easily than I can write a driver for a Tegra. Just shout, I'll repeat it. Let's make this problem less bad.
Work on tools that sit at the kernel/user-space boundary and capture the command streams as they go back and forth, because that's how we figured out Nouveau and that's how we figured out R300 back in the day. And those tools are valuable even from an open source development perspective, because then you have the ability to trace the command stream as it goes back and forth. So if you're wondering, from a code perspective, if you can do anything to make this better: yeah, better tools, better tool chain, it's always good. I think another area, if you have dialogue with all of the vendors, is to try and get all of the vendors into one place, sit them down, and actually get them to admit that they all have the same problem. They all think they have a better implementation than the other guy, but pretty much all of them are probably pretty crap. We know this because we've never seen anything that was closed source get opened that wasn't crap. I'm not saying open source is always better, I'm just saying we've never seen anything transition that wasn't complete crap. They all think they have secrets that are important; they're probably not. But I think if you sat them down you could probably get some sort of agreement, at least on minimizing their costs in terms of common memory management. Why have we got four? Can you guys do better? Especially if you're all buying Imagination's IP, putting it in different chips, and all running different memory managers, you know? Because even the GPL code we've gotten for the kernel is horrendous: it's got shims, it's got operating system abstractions, and there are chunks of it from PowerVR and chunks from the vendor, all shoved together, and you're going, I couldn't make sense of this if I tried. And I've seen code for Intel, for their SGX, containing patches with bits of Nokia and TI in it.
And I think the Nokia code has Intel stuff in it from Poulsbo. It's like somebody just gives them a big load of code and they just ship it. Yeah, I think that's probably true of the stuff that they've provided as free software, at least. I don't know if that's true of the actual EGL and GLES implementations or the shader compiler. I'm not sure about that. We've heard about some of those. Yeah, it may be true, but I'm not so sure about that part of it, at least. Just to add to the points that were made up the back here, which I think were passionate, but very good. The problem is, while they control the hardware and we want to use it, we're always going to be held to what they want. When we can actually own the IP, or at least design the chips ourselves, even if we don't fab them, then we can start writing our own drivers. Just asking them to play ball and be nice is never enough. When they want us to sign these NDAs and write these drivers, we all have to get together as a team and say, no, we're not going to do that. And then there are no good developers left to write these drivers, so the drivers get worse and worse and worse, and eventually they've got no choice but to open up these things. Right, I agree that... I think that's the future. Having buyer power is a great lever, I agree. Unfortunately, at the moment, none of these vendors are actually feeling any of the pain. That's why there's no TI, sorry, no Imagination engineer coming up here and saying, hey, let's work together on this piece of kernel technology. It's because at the moment they really don't feel any of this pain themselves.
They have enough of a team to maintain the drivers themselves, even if they have to produce out-of-tree drivers, binary modules which they can rebuild when they ship. They have enough software expertise to do that themselves. That's why they don't feel the pain themselves, and that's why I think the solution has to be a little bit more subtle than that. Yeah, I think if you can get them to talk to each other as well as talk to the Linux community, you might find there's a lot more commonality between them. They may at least be able to say, well, we can start with a decent memory manager, or we could figure out whether the current memory managers are good. We've got a couple. Yeah, so this is one thing which I have discussed in the past with Jesse: we could do an embedded graphics summit, maybe a one-day summit tagged onto another event. I'll tell you one thing that was interesting about starting up Linaro. The first meeting I had, before Linaro was called Linaro, was with a number of the vendors, and when they got together it was a really bad conversation. They wouldn't say anything. They said, look, we're never gonna talk to each other about any of these subjects that you want us to talk about. And I was very disheartened in that meeting. I said, wow, this is terrible, I don't wanna work on this project, nobody wants to even talk to each other. And we know that there are big problems, that they're gonna be competed out of the marketplace anyway in a year if they don't work together. And the reality is that since we formed Linaro, they actually have come together and, surprisingly, discussed a lot of the stuff that they do internally with regards to power management, which I think was something they would have considered to be crown jewels at the time.
So I think that there is good dialogue, and they do see that they need to do something about it, but I don't know if they know yet what they need to do. So you're right: maybe having something like a summit, bringing these people together to talk about things, is one of the right solutions, or at least one of the first next steps we should talk about. Anything else? All right, you're free to go for beer. Thank you. Thank you.