All right, hi everyone. First of all, sorry about the delay, and sorry to the FOSDEM organizers for making them look bad; it's all my fault. I'm also happy that I have a pretty short talk, because I think we'll still manage on time, and if not, we'll maybe run a little bit over with questions, if people have any. So, my name is Erik, I work at Collabora, and I'm going to talk to you about what's happened in the last year of the Zink project.

One year ago I was here at FOSDEM presenting Zink. For those who don't know the Zink project, it's an OpenGL implementation on top of Vulkan, using Mesa 3D and the Gallium interface. What we had back then ran OpenGL 3.0, or at least exposed it. It turns out that with more tests running, we were far off on a bunch of the details: we were failing a whole lot of Piglit tests. I don't have the numbers, but it wasn't great, although it was working for some applications.

This fall, Zink got upstreamed and became part of Mesa 19.3. So it's no longer living in a branch; it's now shipping. It's not compiled by default, but you can enable it the same way you enable any Gallium driver. We've also started getting contributions from other people: I think five people have contributed to Zink now, versus two last year, which is encouraging. Everything now happens upstream, including the issue tracker, so there's no point in filing issues against my fork any longer. We've sadly had to revert from the OpenGL 3.0 stuff, so we're now only exposing OpenGL 2.1; two major features from the last prototype need some re-engineering.

Since last year, we've added some features that are quite nice. We now properly support control flow in shaders, so you can do all of your ifs and switches and loops and so on.
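Enabling zink at build time works like enabling any other Gallium driver, via Mesa's meson options. A hypothetical invocation (the build directory name is arbitrary, and you would normally list zink alongside whichever other drivers you need):

```shell
# Add zink to the Gallium driver list when configuring Mesa (19.3 or later).
meson setup build/ -Dgallium-drivers=zink
ninja -C build/
```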
We properly forward point sizes when they're not written by the vertex shader. We also do alpha testing, transform feedback, and conditional rendering support, because we re-engineered the way some things are emitted — and honestly, I'm a bit surprised the old code worked at all, because it failed in a lot of cases.

I'm also doing much more structured testing than before. These numbers are actually a little out of date: they were current three days ago, but I've since improved some things. We have about 3,000-something tests passing. I think we're down to almost half the failures now, and even fewer crashes, so it's probably closer to an 80 percent pass rate right now. A big bulk of the remaining failures has to do with unsupported edge flags, which we need to do something more clever about; right now we just throw an error and give up. There are also some failures due to unsupported line stipple. For line stipple we have an easy way forward, because there's a Vulkan extension for better line rasterization, now exposed by, I think, both the ANV and RADV drivers, that would allow us to forward line stippling. I have some experimental patches for it, but there seem to be some differences between how this works in Vulkan and in OpenGL, so it didn't pass a whole lot of tests. It started stippling, but there's some more to it.

Performance is something I keep getting asked about, and I keep trying to avoid the question a bit. It's not great, but it's also not really the main focus of the project. I'm not saying we don't care about performance, but I'm trying to prepare us for a future where we don't need OpenGL anymore. I don't think this is going to be super relevant for the next five years, and by then machines will be a little bit faster and more of the high-end stuff will have been ported, so I think we won't care that much. But of course, it's nice to get the performance we can.
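For reference, the line-stipple semantics that would have to be forwarded (or emulated) work roughly like this in OpenGL: a fragment counter runs along the line, and a 16-bit pattern, stretched by a repeat factor, masks fragments. A minimal sketch of the GL rule — illustrative only, not zink's actual code:

```python
def stipple_mask(pattern: int, factor: int, n: int) -> list:
    """For n fragments along a line, return whether each is drawn.

    OpenGL's glLineStipple rule: fragment s is kept when bit
    ((s // factor) mod 16) of the 16-bit pattern is set (LSB first).
    """
    return [bool((pattern >> ((s // factor) % 16)) & 1) for s in range(n)]

# Pattern 0b0101 stretched by factor 2: two fragments on, two off, repeating.
print(stipple_mask(0b0101, 2, 8))
# → [True, True, False, False, True, True, False, False]
```

The Vulkan extension mentioned above exposes the same factor/pattern pair, which is why forwarding is the easy path; the remaining test failures come from corner-case differences, not from the basic rule.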
Yeah, so not much has changed there. I don't do any systematic numbers myself, but Phoronix did a benchmark of Zink on top of RADV versus RadeonSI, and I was surprised by the results. Some of the tests are almost on par; I'm guessing those are cases that are GPU-bound, where for some reason we happen to do things that are okay for the GPU. But some are pretty far off, and there we're talking about maybe 25 to 33 percent of the performance. So still usable, but not anything you'd want to game on, for instance. Just from how Zink is engineered, I think there's quite a lot that can be done for performance. We're using a very simple translation model, where we're not trying to be terribly clever, and I think at some point we're probably going to have to start being a bit more clever.

So, some of the stuff I'm working on. Next on my to-do list is bringing up OpenGL 3.0 again. These slides are already out of date, because I added back instanced rendering and texture buffer objects over the last couple of days, but the conditional rendering and transform feedback stuff needs some more work. I also want to start testing for OpenGL ES 2.0. I suspect that we're already there in terms of features; we probably fail some tests and have some bugs, but I think we should have all of the required functionality. I haven't spent any time on it yet, but after I've landed GL 3.0, this might be what I look into next.

This is all moving slowly at the moment, because I'm working on Zink as a part-time R&D project at work, and I'm super busy with some other paid client work at the time. I don't have a great way out of that; it's going to be the case for some months going forward. But if you need Zink to move faster, there are two options: you can either work on it yourself, or you can hire Collabora to work on it for you.
Yeah, I think we're at the point where, to move this forward faster and more robustly, we need to find some paying customers, someone to spend some proper time on it. All right, that was my talk. So, I managed, didn't I? Yeah. One minute over. Yeah, sure.

So I'm curious, what is your mid-to-long-term goal? Do you aim to support all versions of OpenGL, or only the modern ones?

Yeah, okay, sorry. So the question was whether I plan on supporting only old or only new OpenGL versions — what the long-term plan for Zink is. The long-term plan, from my perspective — this is going to take a while — is full OpenGL support, as much as we can. I don't care too much about hypothetical Vulkan drivers, for instance, so I'm pretty happy to use extensions where that gets me out of a problem. But I don't see anything in the way if, for instance, someone has a Vulkan driver and wants OpenGL on top of it; we might have to implement some lowering for some of the stuff. I foresee quite a lot of fixed-function features being lowered to geometry shaders in the future, for instance — stuff like edge flags. So for the long-term goal, I don't see a reason not to go for the full OpenGL 4.6 compatibility profile, but it's going to take a while. Anyone else?

Two topics at the same time, looking at how much work it is and how you would like to be hired to work on it: is this Linux-only, or is it usable on other platforms? One platform one could think of would be the Mac, where OpenGL support has been deprecated, and something like that could actually help developers, even corporate developers.

Okay, so the question was about platform support, and whether it's Linux-only or works on other platforms as well. It works on Linux and on macOS. I don't have a Mac and I don't test on Mac, but we have users who run it on top of MoltenVK, on top of Metal, on the Mac.
So there are people who do this, and there are some interesting people trying it for some kind of big projects. I don't think anyone is doing this in production, though. As for Windows: right now the implementation is tied to Mesa and its window-system integration, and Mesa doesn't, to my knowledge, have any Windows integration apart from the software rasterizer stuff. Zink has the ability to hook in as a software rasterizer, basically as a memcpy into a frame buffer, and I guess in theory that could be wired up, but I think it's going to be pretty terrible for performance reasons, so I'm not planning on going down that road.

Are there features missing from Vulkan which prevent you from implementing things?

Right. So the question was whether there are features missing from Vulkan, and whether we would add extensions for them. I don't see a reason why we can't add extensions, especially since I'm primarily targeting the Intel ANV driver, because that's the hardware I have, and that driver happens to live in the same tree, which is super convenient. So at some point I might look into that. I'm more interested in this for compatibility than for features, though, because some things are just a little bit too crazy to implement without extensions. But I think the answer is yes, we could go down that road; we're just not actively pursuing it yet. This could very well change, for instance, if we get a paying customer who does care about performance. I think that's a likely outcome at some point, that we get some benchmark results or goals that we need to reach. All right.

How hard do you think the translation is? Does the OpenGL API map more or less directly onto the Vulkan API?

To answer that question truthfully takes a somewhat long answer, because we're not implementing OpenGL; we're implementing the Gallium interface.
The Gallium interface is much closer to something like Direct3D 10. We're taking a pretty naive approach, where we're kind of just pulling the handbrake whenever the semantics break. So right now we're translating pretty directly, I would say. For instance, Vulkan has the concept of render passes, and we're just starting and stopping them whenever commands aren't allowed inside a render pass. So we're not trying to reorder things and track dependencies. That is a horrible idea, and it's definitely going to have to change, but for now it works much better than I feared. I think one of the reasons for that is probably that I'm currently targeting desktop GPUs, largely, and they don't really benefit from render passes as much as tile-based renderers do.

So the question is how I deal with shaders, and this is, I think, one of the more interesting parts of Zink, at least for me. Okay, I have to wrap up soon. In Mesa we get shaders as GLSL or ARB vertex programs; they get converted into an IR and then into NIR, which is a generic IR in Mesa. And NIR finally gets translated to SPIR-V, which gets handed over to the Vulkan driver.

Right, so the question is whether going from NIR to SPIR-V and back again could be avoided, because the Vulkan driver, if it's a Mesa-based Vulkan driver, will translate the SPIR-V back into NIR. So we're doing a little bit of a dance there. Yes, I think that could be avoided; this, I think, is an interesting point to look into for making an extension. That being said, NIR isn't really made for serializing — there is a NIR serialization layer, but it needs to serialize to the exact same version, because NIR changes over time. So you would have to guarantee that the drivers are built from the exact same tree.
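The "start and stop render passes whenever commands aren't allowed inside one" scheme described above can be sketched as a tiny state machine. This is illustrative pseudologic only — the class and method names are made up, and the real driver records actual Vulkan commands into a command buffer:

```python
class Recorder:
    """Lazily begin a render pass for draws, and end it whenever a
    command that is illegal inside one (e.g. a buffer copy) is recorded."""

    def __init__(self):
        self.in_rp = False
        self.cmds = []

    def _set_rp(self, wanted):
        # Toggle render-pass state only when it actually has to change.
        if wanted and not self.in_rp:
            self.cmds.append("BeginRenderPass")
        elif not wanted and self.in_rp:
            self.cmds.append("EndRenderPass")
        self.in_rp = wanted

    def draw(self):
        self._set_rp(True)   # draws must happen inside a render pass
        self.cmds.append("Draw")

    def copy_buffer(self):
        self._set_rp(False)  # transfer commands must happen outside
        self.cmds.append("CopyBuffer")

r = Recorder()
r.draw(); r.draw(); r.copy_buffer(); r.draw()
print(r.cmds)
# → ['BeginRenderPass', 'Draw', 'Draw', 'EndRenderPass',
#    'CopyBuffer', 'BeginRenderPass', 'Draw']
```

This is also why the approach hurts tile-based GPUs more than desktop ones: every extra Begin/End pair can force a tiler to flush and reload tile memory, while a desktop GPU mostly doesn't care.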
I think it would be possible to negotiate on, say, the NIR version or the Mesa git hash, and avoid the round trip in that case. And yeah, I think that would be interesting; it would maybe save us some time. I haven't seen any profiling data indicating that shader compilation is currently a problem, but I'm guessing it might become a problem once we fix some of the other, worse problems. All right, time is up, so thank you all for all the questions.