Yeah. Good morning. Welcome to Taiwan. This is awesome. It's been a couple of years since I was last in Taiwan, and it's glorious to be back. The food, of course, is amazing. The people are friendly. The climate is warm and inviting. Maybe a little more than warm and inviting. I'm going to be talking about some work that Valve has been paying me to do as a contractor on making gaming better in the Debian environment and other Linux environments. Valve has a reasonably long history now of working with the Linux community. They are, of course, a commercial software company, and they do a lot of closed-source software. But they've realized that there's a significant and profitable market in selling their proprietary software on top of free software stacks. So there's a boundary between what they hold dear, keep private, and sell as commercial, closed-source software, and the areas where they actively support the free software community. I'm on the free software side of that boundary, trying to make the Linux desktop environment better and to support games of all kinds, including Valve's games. And that's what I want to talk about today: some of the work that I've been doing. Let's see if I can get this to work. So this is about making games work better in Debian. This is not about games in Debian; there was an awesome talk by Marcus yesterday, and if you want to learn about the status of free games in Debian, you should go listen to that one. This is all about hardware support: getting the operating system, the hardware drivers, and the 3D graphics ecosystem working better for Debian, and getting games working better there. I'm going to talk a bit about the current gaming environment, hardware support, and API support; there's a transition going on in the world in API support.
And then later on I'm going to talk about the work that I've been doing in particular, and some work that other people are doing, to try to improve the status of the world. Those are all system software pieces. Apparently I'm a system software engineer, because I've been working in system software now for about 40 years, so apparently that's what I do. So I'm going to be talking about the layers under the application. Playing games: games on Linux today. How many of you play games on your Debian desktop? A couple, yeah. I'm not among that set. I don't really play video games. The best video game ever is the program called Emacs, right? That's where you get to change how the computer works, and it's like, wow, that's a pretty cool video game. Other people enjoy a different kind of video gaming, and it's fun to watch them play. I'm of a generation where we didn't really have twitch games when I was a child, so I never learned how to do that, and my skills are lacking. But it is fun to watch, fun to participate, and fun to think about the problems that these interesting interactive graphical applications bring to the system. That's where I find my delight: in solving the problems of making these things work, because they are highly interactive applications. It used to be that the only highly interactive graphics applications we had were things like flight simulators, and it turns out you can apply all the same technologies we used for flight simulators and bring them into gaming. The goals are very similar; some of the thought processes are very different. Of course, there are lots of free games available on Debian. Marcus gave a really good overview of what's going on.
There's a whole group within Debian, the Debian Games group, which works on bringing more games into the Debian environment. There are also lots of non-free games, including all of those available from my customer, Valve. Valve has supported their Steam platform on Linux for quite a while now; I think it was 2012 when they started doing that. That means there are, I think, three or four thousand games (I don't know the exact number) available through the Steam store that will play on your Debian system without requiring any proprietary software other than the games and the Steam environment itself. There are also a number of legacy Windows games that you can play today, things like EVE Online. You can actually play that through Wine today, and those work surprisingly well, and that support is improving over time as Wine gets better and better graphics support. There's no fundamental reason why a Windows game running through Wine couldn't offer the same performance on Debian as it does on a Microsoft platform. So that's something some of you can play with as well. One of the important reasons for thinking about that is that there are some really old games that are really awesome, right? There are old Windows games that you can't get the source code to anymore. Nobody has it; nobody's going to be able to recompile it. And Wine offers a remarkably good platform for running those. And we support that, of course. So the first API I want to talk about is OpenGL. This is a really popular 3D API, developed by SGI as their graphics library back in 1989 or 1990 (I'm not sure of the exact year), so long, long ago. It eventually became decoupled from SGI hardware and generally available.
Shortly after SGI released the initial specification for OpenGL, there was a free software implementation called Mesa. So we've actually had a freely available implementation of this industry-standard API for probably 25 years now. At first, as with many free software projects, it was just something done in spare time, kind of a toy to play with. It did software rasterization, it didn't take advantage of the hardware, and so it wasn't very fast. But it proved that we could come up with a completely compliant and extensible implementation of a very sophisticated 3D rendering API in our environment. Today, Mesa is one of the leading OpenGL implementations in the world. At this point there is really just Mesa and NVIDIA's proprietary stuff; those are the two premier OpenGL implementations. On AMD and Intel hardware, we're at OpenGL 4.5, and nearly 4.6; I think there's one more extension to be done for OpenGL 4.6. So at this point we're no longer lagging the OpenGL world: Debian provides support for current OpenGL applications. It's no longer the case that you have to say, oh, I can only play older games because Debian doesn't support the APIs necessary for modern games. In reality, today Debian has full support for pretty much every API an application is going to need. We also have support for the embedded version, OpenGL ES, at 3.2, which I think is the current version of that standard, and that's available on Intel and AMD platforms. Mesa has a tremendous advantage because it's a free implementation that people can play with: we actually have the OpenGL ES and OpenGL API implementations in the same library. We don't have two implementations, an ES library and a GL library; they share the same implementation.
So any place you see OpenGL, you will also see OpenGL ES, which is pretty cool. They are different; they have some, you know, stupid little semantic differences between them. But because of the common code base, we keep both of them quite actively maintained. Another really good platform that we support quite well in Mesa is the Raspberry Pi, using the Broadcom chipsets. My close friend Eric Anholt actually works for Broadcom, and his entire job is supporting free software Mesa drivers for the Broadcom platform. So if you have a Raspberry Pi, you can actually get a free software stack for all the rendering. There are still some firmware requirements for mode setting, and he's working on that. That environment is a challenge, but it's certainly an interesting challenge to work on. So we have competent support for OpenGL ES on that platform, and a lot of OpenGL, but the existing Raspberry Pi hardware is limited in some fundamental ways, and you can't actually do OpenGL 4.5 on that hardware. It's surprisingly close, though, and there's a lot of cool work that's been done there. There's also a reverse-engineered NVIDIA driver called Nouveau, which supports OpenGL 4.0-something; I think it's probably 4.2 or 4.3, and it may be higher than that. The fundamental problem with Nouveau is that there's no support from NVIDIA for it at all. It offers limited performance and hardware compatibility, so it usually only supports older hardware, and doesn't support it at full performance. In particular, it has severe limits in how it can do thermal management and memory timing changes. So while I love the Nouveau project, because NVIDIA doesn't support it, it has some fundamental limitations, which makes it very difficult to use, especially in a gaming environment.
So if you want to do gaming in Debian, you really have two strong choices, both of which are supported by the hardware manufacturers with free software, and those are Intel and AMD. So I recommend you purchase that hardware. If you have a Raspberry Pi, you can play there too; it's a little more difficult, and getting the free software drivers running on a Raspberry Pi takes a bit of work. Right now there are some other efforts, mostly reverse engineering: Vivante, Adreno, and the Tegra X1. These are all being reverse engineered, and they have various levels of OpenGL support. If you want to play with these, it's fun to play with. But again, these are reverse engineered; they're not supported by the hardware vendors. So unless reverse engineering is what you really want to do, and that's an interesting thing to play with, I wouldn't recommend trying to use these for gaming, because they just aren't well supported. But it is an awesome task to go in and figure out what the opcodes mean and what the various bits in the rendering engine do. If that's the kind of video game you like, here are some awesome platforms to play that game on. I think that's a fun thing to play with. There's a new API coming to town. It's called Vulkan. How many of you have heard of the Vulkan rendering API? Okay, awesome. It's very different from OpenGL in some ways and very similar in others. It's much closer to the hardware: it exposes a lot more of the hardware's vagaries and variances to the application. And of course game developers think this is awesome, because then they can tweak their games for the underlying platform. It actually has some advantages over OpenGL in that the behavior is much more tightly specified, and there's a test suite that implementations are required to pass to get branding. That means the variance in Vulkan behavior between vendors is exposed in the API and smaller than what you see in OpenGL.
In OpenGL, many games are designed to work only, or at least primarily, with NVIDIA hardware, or potentially with the AMD closed-source drivers. As a result, when you run them against Mesa, Mesa says: hey, you're not following the spec here, I'm going to throw an error, and your application is not going to work. Developers, of course, think that's awesome, because our implementation adheres much more rigorously to the standard than the NVIDIA implementation does. Game players, on the other hand, aren't very excited by that, because their games just don't work: the games are expecting the looseness of the NVIDIA implementation. Vulkan, on the other hand, is much more tightly specified, so the variance between implementations is smaller. Intel and AMD are the two supported Vulkan platforms right now. So it works on modern Intel chips (I think the implementation starts around Skylake, so not older Intel chips) and modern AMD chips. That's largely because of the requirements that Vulkan places on the chip, in terms of what it does and how it operates, so it's not likely that we're going to get support for older chips at any time, just because the hardware can't do it. It is a lower-level API. It offers significant advantages to applications in terms of being able to manage the hardware and squeeze out every bit of performance. As games migrate from OpenGL to Vulkan, you can watch them improve in performance. In particular, they take much less CPU time inside the library. In an OpenGL implementation, there's a vast semantic gap between what the API provides and what the hardware does, so the library has to sit there and effectively figure out what the application wanted the hardware to do through the API, and then stand on its head and turn around three times to get the hardware to do what the application wanted.
Vulkan narrows the semantic gap between the application and the hardware, and reduces the CPU overhead everywhere. So it's a pretty nice API. I've been doing a bunch of work in Vulkan because one of the things Valve is doing is aggressively moving a lot of their code to Vulkan, because of these performance and functionality advantages. Let's see. Non-free APIs: NVIDIA ships binary-only drivers. I'm not going to ask; it's just too embarrassing. I don't want to know who uses the binary drivers. There is not very much collaboration with the Debian community, and if you install these drivers on your system, you will get a lack of support. If you install these drivers on a Red Hat system: oh, sorry, you installed NVIDIA binaries, we offer you no support for that. So you can understand what people think about these. One of the big things that I started doing at Valve was working with head-mounted displays. Head-mounted displays are fun. You put them on, the real world disappears, and you can construct whatever virtual world you like. Obviously the use is for virtual reality. Inside the head-mounted display there's an IMU, a little inertial measurement unit, that figures out where you are in space and what direction you're pointing. They use these things called lighthouses, little cubes that sit up on the wall or on a shelf, to orient the device absolutely in space, and it's pretty cool. Valve did almost all of the design of this hardware, and it's a pretty cool design. I don't know what they've published on it, but I got to go play with the hardware designers, who showed me how this thing worked, and it was like, wow, there's nothing inside this box, and it provides you sub-centimeter resolution in position with basically no hardware, which is always cool. Inside the display there is a single panel; I don't remember the exact resolution of the panel.
It's something like 2K by 1K, a strange resolution. There are a bunch of optics in there, because the panel is about four centimeters from your eyeballs, and I don't know about you, but I can't focus that close anymore. So they put optics in there to make the panel appear further away from you, and to make the fusion of your vision work correctly with two separate views. The result is that there's a bunch of distortion in the image coming into your eyeball from the panel: the panel's image doesn't just get translated straight into your eye, there's a bunch of optical distortion. They could have spent a pile of money and put a nice thick, heavy lens stack inside the head-mounted unit to make the optics nice and undistorted, but they decided, as is often the case in our world, that it was cheaper and better to do that in software. So, yeah, thanks guys. As a result, there's actually a steaming pile of software that sits between the application, which is constructing these two eye views, and the actual presentation of the image to the user, and that's called the VR compositor. The application generates a view for each eye, and then the VR compositor takes those images and actually gets them onto the display. So we've interposed another piece of software here. Now that I've introduced the current state of the world: there are a couple of things in my world that are the big issues we're working on right now. Obviously we're doing a pile of software for virtual reality support, and then there's this broader issue that occurs across all gaming: stutter. How many of you have played a game where, as you pan across the scene or move forward, you see the scene jerking occasionally, stopping and moving in weird ways? How many have seen that when playing a video game, yeah? Reasonably common.
One of my friends from Croteam actually put out a post on that, and I thought I had a link in the presentation; I'll see if I can find it. It describes what the problem is, why it hasn't been fixed and isn't getting fixed any time soon, and where the problem stems from, and I'm going to talk about that in a minute. The other change that's happened in the last 10 years or so is that people have moved from direct-display or clipped window systems to composited window systems. That makes it possible to do all kinds of shiny eye candy in your desktop, and it makes gaming really hard, and I'll talk about why that's such a pain. So, virtual reality in Debian. There are three main virtual reality systems. There's OpenHMD. This is free software for head-mounted displays. It doesn't have great support for current hardware, though there are some reverse engineering efforts being undertaken. I think the data that's in the HTC Vive is pretty easy to parse; it's got a JSON or XML description of the display. Figuring out how the lighthouses work and how all that data is transmitted is a bunch of work, and I would love to see more support for that in OpenHMD, but it's not quite there yet. There's another software stack called OSVR, Open Source Virtual Reality. It's somewhat open, and what they're trying to do is take more of the stuff that's in the SteamVR environment and somehow interpose a different software stack. I really haven't looked at OSVR; I just found out about it last week and started poking around. But OpenHMD is actually pretty well documented, and you can use it with existing head-mounted displays and start playing with virtual reality totally in free software, which of course is awesome. The one that I've been helping with, but not really working on, is SteamVR. It's closed source, but it does have full support for the HTC Vive headset.
So if you're playing in Debian and you don't mind using a little bit of closed-source user space software, you can play with the HTC Vive headset using the Steam stuff, which is pretty cool. That does all the Lighthouse support, so you have absolute position and orientation. Okay, so last year (I don't think I presented this at DebConf, but I presented it at a bunch of places) I talked about display leasing, and the goal for this was to solve that composited window system problem, and more, by getting the window system out of the way for head-mounted displays. We've actually got that all implemented now. It's available in the upstream Linux kernel; it's available in the kernel in Debian stable; it's in the X window system sources that are in Debian stable; and most of the work is available in Mesa as well. So this part's all done. This adds support for taking a monitor in your environment and saying: hey, window system, that's not yours anymore. Let go of it; let me borrow it and communicate directly with the kernel. This means the virtual reality environment can manipulate that display directly and get rid of a lot of latency and a lot of uncertainty in the environment. That's been surprisingly successful. We have it up and running now, and the SteamVR application supports it. OpenHMD should be able to support it without too much work; it's really not that big a deal. I'm hoping it's going to become more prevalent. I would like to figure out whether we can use this for non-HMD games as well, because of the performance advantages it offers. Microsoft Windows has a similar system where you get to do something like this in their environment.
I would love to be able to adopt a lot of the same APIs and functionality from their world and see games using this directly, because you save a millisecond or two per frame, and for those of us who work in interactive graphics, that's a lot of GPU time to get back. You can do a lot of cool stuff with that. So that's all available now. What I did to support SteamVR was really just to implement about half a dozen Vulkan extensions. But of course, when you implement an extension in an API, you have to implement all the framework and scaffolding underneath that API to make it work. So I started down in the Linux kernel and added some stuff there; we added stuff to the X window system, and now we've added stuff to the Mesa Vulkan implementation. There's a little more work that I need to do here. There's an extension called VK_GOOGLE_display_timing that lets us get some information about presentation that we need. Khronos is working on a similar extension; I don't think that's finished yet, but eventually it will come out, we'll get it integrated into Mesa, and that will help finish off this work. And then there's another mechanism I'm working on within Khronos that's going to help the VR system not get too far ahead. Right now the VR system just starts presenting frames and hopes that each frame is going to take less than one frame time to compute and display. But it has no way of knowing whether that's true, because the Vulkan APIs have this gap. It's like: ask us to present a frame and we'll present it sometime, and then sometime later it will appear to the user, and you can't find out when that happened. So there's a bunch of work going on to get that support into Vulkan. Okay, I want to talk a bit about stutter. It's a jumping or jerking in animation, and there are really two fundamental causes of it.
One of these really surprised me, and that's the one we're working on fixing. You can either draw the wrong thing at the right time, or draw the right thing at the wrong time. In an animation, what you're trying to do is present correct motion on the screen: you present a frame, you wait 16 milliseconds, you present another frame. You have to predict what you need to present in each frame so that you draw the correct thing there. Drawing the right thing at the wrong time usually comes from drawing too much in a frame and missing a frame, but it can also come from not having enough control over when frames are presented, and ending up presenting a frame early. So that happens as well. Here's what happens when you misestimate, when you draw the wrong thing at the right time. In this little demonstration, the top row is estimating the time between frames as exactly 16 milliseconds apart, and so it's smoothly drawing the object at the correct location in every frame. The wrong one is misestimating the display time interval: it's saying, oh, the frame time between these two frames is 20 milliseconds, so I need to draw this thing further to the right, because the animation has moved further to the right. And so you see a discontinuity between where the object is drawn, given the time it was actually presented to the user, and where the system thought it would be presented. This is just a gap in the API support for display timing. Applications have no way of knowing when frames are going to be presented, and they often misestimate when that will happen, so you see these little jumps. Even though the application is drawing 60 frames per second, it drew the wrong thing, and it looks terrible.
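That misprediction can be written down as a toy model. The 16 and 20 millisecond times come from the demonstration above; the velocity is an invented number, and this is an illustrative sketch, not any real engine's code:

```python
VELOCITY = 0.1  # object motion in screen-widths per millisecond (made up)

def drawn_position(predicted_display_ms):
    # The game draws the object where it should be at the time it
    # *predicts* the frame will reach the user's eyes.
    return VELOCITY * predicted_display_ms

def on_screen_error(predicted_display_ms, actual_display_ms):
    # Gap between where the object was drawn and where it should have
    # been at the moment the frame was actually displayed.
    return drawn_position(predicted_display_ms) - drawn_position(actual_display_ms)

# Interval predicted correctly: zero error, smooth motion.
smooth = on_screen_error(32.0, 32.0)

# The app guesses a 20 ms interval but the frame goes out after 16 ms:
# the object lands 0.4 screen-widths ahead of where it belongs, and the
# animation visibly jumps even though no frame was dropped.
jump = on_screen_error(36.0, 32.0)
```

The point of the sketch is that the error is pure prediction error: the frame was on time, but its contents were computed for the wrong moment.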
The other thing that happens is a rendering underrun, which is to say you queue too much drawing before the frame needs to be displayed. Because these days we can typically only start displaying a frame at the top of the refresh interval (we can't start scanning out a frame in the middle of a refresh cycle), we have to wait for the next refresh cycle. So now we've computed a frame, and we can either not display it at all, or wait a frame and display it late. Those are your two choices. This happens for several different reasons. One way it happens, obviously, is if the scene complexity increases dramatically: you fire a bunch of bullets, you have lots of explosions on the screen, and the GPU sits there computing lots of fire. That's a really common case, and some people actually think it's kind of amusing: you get a lot of explosions and all of a sudden the scene goes jerk, jerk, jerk, and it's like, wow, exciting things are happening, because we can barely see what's going on. So that's the classic mechanism. Another mechanism these days, of course, is when you're playing your game on a laptop and all of a sudden the CPU and GPU are getting too hot. The system says: hey, things are getting warm here, let me throttle down the computational resources so we shed some heat. This is something the game cannot easily predict. Things get warm, the operating system throttles down my resources, and all of a sudden I'm starting to miss frames. This is kind of a new thing for game developers, who are used to having 300-watt GPUs that never slow down, and it's actually becoming very common.
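The underrun case is really just quantization to refresh boundaries. A minimal sketch, assuming a fixed 16 ms refresh (real 60 Hz displays are closer to 16.7 ms, and the real presentation path has more moving parts):

```python
import math

REFRESH_MS = 16.0  # one refresh cycle at (roughly) 60 Hz

def scanout_time(render_done_ms):
    # A finished frame can only start scanning out at the next refresh
    # boundary; there is no mid-refresh presentation.
    return math.ceil(render_done_ms / REFRESH_MS) * REFRESH_MS

# Finish with a millisecond to spare: shown at the next boundary.
on_time = scanout_time(15.0)   # 16.0

# Run one millisecond over budget: the whole frame slips a cycle.
# Your only choices are to show it 16 ms late or drop it entirely.
late = scanout_time(17.0)      # 32.0
```

A one-millisecond overrun costs a full 16 milliseconds on screen, which is why small spikes in scene complexity or thermal throttling produce such visible jerks.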
This happens a lot in mobile gaming, especially on your phone. If your phone starts, you know, roasting your hand, you might notice it slowing down. The other thing that happens in composited environments is that all of a sudden the window system presentation mode changes. Something happens on the screen: you go from full screen to windowed, or some window pops up over the top of your application. All of a sudden the window system says: well, instead of just putting you in an overlay, I'm going to have to composite a bunch of stuff, and I'm going to steal about four milliseconds of your GPU time this frame to display information. So now you only get 12 milliseconds instead of 16 to compute your frame, and you drop a frame. And that really sucks. We need to find ways to fix this problem, because smooth animation is useful in a regular desktop gaming environment, but when you think about a VR environment, it's critical. In VR, when you put the headset on, the real world disappears, and you're counting on the computer to keep the image in front of your eyeballs looking stable so that you don't fall over, or other bad things happen. So when we talk about fixing stutter: it's useful on the desktop, it's critical for VR. Obviously I have a big interest in making this work for my VR work, but it's also really important for smoothness and fluidity in regular desktop gaming. We obviously need a couple of things to help the application get some idea of whether it's about to start underrunning on the GPU, whether it's trying to do too much computation on the GPU. We need better measurements of how much GPU time frames are using; extensions for that are becoming available, and we've had that ability for a while.
One of the things that's missing from just knowing how much GPU time you're consuming is the ability to relate when GPU rendering finished to when the display cycle started. Vulkan and OpenGL don't really provide any relationship between those two times, so you have no idea how much spare time you have. So we're adding some extensions to Vulkan to report when vblank happened in the past and when it's likely to happen in the future, and also to provide some information about when presentations actually happen. If I asked for an image to be shown to the user, I really want to know: did you actually manage to get that on the screen at the time I asked for, or was it late? Should I start scaling back the number of objects in the scene so that I can hit my vblank targets more often? So we're doing a bunch of work in Vulkan to fix this, and there's a bunch more work in the window systems. When you're trying to accurately present something to the user, the data that you want is not when some random operation happened inside the GPU. Oh, I managed to copy this image into the frame buffer, awesome; I don't really care when that happened. The time I'm interested in is when the photons started traveling from the monitor to the user's eyeball, right? That's the relevant time, because that's what the user sees. I don't really care when I passed the image to the window system. Which means that computing accurate presentation times isn't something we can do up in the rendering library: we have to engage the rendering library and the window system and the kernel, so that everybody works together to get this information back to the application. Applications also have to be able to control when their frame is to be displayed.
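The kind of policy this feedback enables can be sketched as follows. The (desired, actual) pairs stand in for what a presentation-timing extension such as VK_GOOGLE_display_timing reports per frame; this is a plain-Python model of the decision, with an invented miss budget, not the Vulkan API itself:

```python
def should_scale_back(history, refresh_ms=16.0, miss_budget=0.1):
    # history: per-frame (desired_present_ms, actual_present_ms) pairs,
    # as reported back by the presentation path. If more than
    # miss_budget of the recent frames landed a full refresh cycle (or
    # more) past the requested time, the app should reduce scene
    # complexity to get back under its vblank targets.
    if not history:
        return False
    misses = sum(1 for desired, actual in history
                 if actual - desired >= refresh_ms)
    return misses / len(history) > miss_budget

# Nine frames on time, then three a full cycle late: 25% misses,
# well over a 10% budget, so it's time to draw less.
timing = [(float(t), float(t)) for t in range(0, 144, 16)]
timing += [(160.0, 176.0), (176.0, 192.0), (192.0, 208.0)]
scale_back = should_scale_back(timing)
```

Without the actual-present-time half of that pair, an application literally cannot compute `actual - desired`, which is the gap in the current APIs being described.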
They need to be able to say: well, I need this scene complexity in order to show all the cool art I paid a lot of money for, so instead of reducing scene complexity, I'm going to go from 60 frames per second to 30 frames per second. As long as I know that I'm going to display on every other refresh cycle, and each frame will be displayed for two cycles, I can at least offer an accurate, if not as smooth, experience. But that means the application has to be able to trust the window system to delay a frame by a certain amount so that the frame is presented at the correct time. And when frames are displayed, the application needs feedback: did it work? Did the window system actually display my frame at the correct time, and did that information come back to me? So those are the two critical pieces. We're getting pretty good at the first of these. There are APIs in Vulkan and OpenGL and the X window system that let you tell the system when to present a frame, and that control is starting to work pretty well, except in composited window systems. In a composited window system, the application provides an image to the window system, and then the window system does a bunch of computation with it: it puts overlays on it, it merges it with other images on the screen to construct the scene for the user that includes all the applications in the environment. There are two obvious, common composited environments in our world: the X Composite extension in the X window system, and Wayland. The X Composite extension seems more complicated because it involves an external compositing manager which may do arbitrary computation, but Wayland is just as complicated; it just integrates all that complexity into the window system server instead of putting it in a separate process.
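Dropping to a controlled 30 frames per second then just means requesting every other vblank explicitly. A sketch, assuming a fixed 16 ms refresh and a window system that honors requested display times:

```python
REFRESH_MS = 16.0  # assumed fixed refresh interval

def half_rate_targets(first_vblank_ms, frame_count):
    # Request one presentation every *two* refresh cycles, so each
    # frame stays on screen for exactly two cycles: motion is half as
    # smooth but still accurate, instead of an unpredictable mixture
    # of 60 Hz frames and dropped frames.
    return [first_vblank_ms + 2 * REFRESH_MS * i for i in range(frame_count)]

targets = half_rate_targets(0.0, 4)  # [0.0, 32.0, 64.0, 96.0]
```

The whole strategy depends on the window system actually delaying each frame to its target time and reporting back whether it hit it; without that trust and feedback, the schedule is fiction.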
These two environments are very similar with respect to this problem, because the application's images come in and the compositing system takes them and does stuff, and it can do a lot of computation in the process. That means the amount of time available for the application to render using the GPU is reduced. How much is it reduced by? Well, it depends. If the application is running full screen, the composited window system may do nothing at all; it may just hand that image right off to the kernel, so the application may have all 16 milliseconds of GPU time. But if the user pops up a little dialog box over the application down in the corner, the compositing system says: wait a minute, I have to take the application's image and this little text chat window and paste them together before I hand them off to the window system. Now all of a sudden we're talking about copying the application's image into another buffer, blending the chat window over the top of that, and then handing the resulting image to the operating system. That can take a lot of extra time and a lot of extra synchronization, so the application's rendering time gets reduced by a lot. Contrast that with a classic clipped window system, where every application gets a set of pixels on the screen, so when you want to present a scene from an application, all you have to do is copy those bits into the frame buffer. There's no arbitrarily complex computation; every application has a pretty much fixed overhead. And the real problem is not the overhead but the variance, and applications have no way of knowing it. So one of the bugs I'm trying to fix this year is an actual bug in the X Composite extension related to this frame timing stuff.
When you have the Composite extension running and you're presenting an image from an application, the window system delays the copy until the time the application wanted that image on the screen. And you think, awesome, it's going to delay it so it gets displayed at the right time. Well, no, because it doesn't display to the screen at that point. It copies it into the off-screen buffer and then tells the compositing manager, hey, there's new contents over here. So now the compositing manager has to say, oh, let me construct my scene image from that. It takes the application's image, which was handed to it at the moment the application wanted it shown to the user, does whatever blending it wants to do, constructs a final image, and hands that back to the X server, which means that image is not going to be displayed until at least the next frame. So when you run the Composite extension with a 3D application today, you are guaranteed at least one frame of inaccuracy in your frame timing requests. That's kind of bad. And of course, the application gets told that its image was presented when the initial copy occurred. So the application always gets its image displayed at the wrong time and is always reliably lied to about the time it was presented. There are errors in both halves of this, and I'm working to fix that this year. The way I'm going to try to fix it is to copy the image immediately, tell the compositing manager, and then somehow associate the compositing manager's presentation with that copy. I've got a couple of ideas. I think I'll do something ad hoc at first and then come up with something a little more principled later.
So I want to associate the compositing manager's presentation of the entire scene with the application's request to present its image: tie those together so that the compositing manager's presentation happens when the application wanted its image presented, and the application gets told that its image was presented when the compositing manager's presentation actually occurs. I'm hoping that's going to resolve those issues fairly nicely. I have a further goal beyond that, which is to separate the primitive, simple compositing operation from the construction of decorated content on the screen. Instead of the compositing manager doing all kinds of crazy decorations everywhere, we have this really simple operation: I've got these applications, they're stacked on the screen, I want to blend them together and send them to the video hardware. I want to pull that compositing step into the X server so that the X server has tighter control over it. The first advantage is that we'll be able to make better timing predictions when we have to use the hardware. The second advantage is that if your hardware has multiple video planes it can scan out from, I'll be able to use those for your games as well. And that means I get to basically not interact with the compositing manager at all when I want to re-display an application's image, which means the jitter, this variance in display time, is going to go down a lot. That's my longer-term goal, and I'm trying to get it done this year. So, in summary: support for gaming is obviously improving. There are a bunch of companies working really hard to make sure that we have competent gaming support in free software. The free software implementations of the OpenGL and Vulkan APIs are very well supported by AMD and Intel, and that's getting better all the time.
As I say, we've now caught up. We're at the leading edge of API development now instead of trailing by years and years and years, which we used to do. The VR stuff is coming along; I'd love to see more work in OpenHMD to get free software HMD support really available. It's time at this point to solve this long-term stuttering problem, and that's really what I'm focusing on for the next six months or so. We used to solve this by just buying faster computers, but game requirements are now getting heavier faster than hardware is improving, so we need to come up with principled mechanisms to make sure that games can do the right thing even when their resources are constrained. Thank you very much. I really appreciate the opportunity to speak at DebConf. I love coming to this conference; of course, it's my people. Thank you again, and have a great time for the rest of the week. We've got about five minutes for questions. Anybody have a question? Come on down to the microphone down here. OK. Sorry, I didn't follow the presentation very closely, but my question is: has this basic v-sync issue in the free or non-free drivers been resolved, especially in multi-screen setups? Multi-screen v-sync? Yeah. So we have the advantage right now that the system knows which monitor the application wants to be presented on. When your application is presented on a particular screen, we have the tools and the APIs necessary to make sure v-sync happens on that monitor. We just need to solve some of the stuttering problems to actually get the timing accurate. So yes, we have the infrastructure available, and the applications don't have to change; we just have to fix the underlying implementations. OK, thank you. Yeah.
I suspect, like everybody else, that we're all ready for lunch. Thank you again for coming. I'm around the rest of the week if anybody wants to ask me questions about what I've been up to or anything else; I'd love to talk to everybody here. Thank you very much. Let's go find some lunch.