All right. Well, thank you all for coming. Also, thanks to Luke for organizing this devroom, though I don't think he's in the room right now. Thanks to him anyway.

A brief introduction about myself. My name is Nicolai Hähnle; the "hn" sound is a bit difficult for me as well. I am somebody who likes both the theoretical and the applied. I have a PhD in math and did research on discrete optimization, but I have also always liked to contribute to open-source software, and I actually got started with graphics drivers around 15 years ago, when I reverse-engineered the 3D programming interface of the Radeon R300. So it's fitting that today I work in AMD's Linux open-source graphics driver development group. My main focus is the Mesa OpenGL driver; I also do some work in LLVM and occasionally some other things. But in this talk, I will give you a high-level overview of our graphics stack on Linux.

I like to start with a picture that shows the high-level components. At the bottom, in red, you have two kernel modules: Radeon, and amdgpu, the more modern one. They do the mode setting, the memory management, all that. Above that, you have user space. There is the thin libdrm layer, and on the left-hand side the components that are part of the X server. The X server has the general device-independent code and input code, but it also has some graphics-device-specific code. In our case, there are Radeon and amdgpu X server drivers (DDXes) which correspond to the kernel modules of the same name. So if you're using the Radeon kernel module, you'd use the Radeon DDX, and the same with amdgpu. Then, to the right of that, you have a bunch of driver components that implement graphics APIs like OpenGL and Vulkan. There are parts that live in the Mesa open-source project, under the OpenGL and multimedia umbrella, as I like to think of it. There is the R600 driver, which is for older, pre-Graphics Core Next (GCN) graphics cards.
And there is the RadeonSI driver, which is for the more modern cards, since, let's say, 2012. It builds on LLVM for its shader compiler back-end, same as the RADV Vulkan driver, which also lives in the Mesa project and was developed by community members. Then to the right, you have some other drivers, developed by AMD, that have more of a Windows origin. There is what we call the AMDVLK Vulkan driver, which is the same driver code base that Windows users get. And there is another OpenGL driver, which is really mostly relevant for workstation workloads.

Now, there are many ways to slice and group this diagram. The one that is maybe most relevant for FOSDEM is what's open source and what isn't. You see that almost everything is open source. There is a story about the Vulkan driver, and I'll get to it in more detail: it's open source with a caveat that has to do with this SCPC component. And there is the closed-source OpenGL driver, which, well, is closed source and looks like it's going to stay that way.

Another way to slice the diagram is by what kind of hardware is supported. I mentioned GCN, which is kind of a breaking point for us: 2012, a new generation of hardware. Everything that's in green supports those cards; what's in red is only for the older cards, and you see that the Radeon kernel module sits in between the two. In the same vein, in slightly different colors now, there are legacy components that are not necessarily going away, because there is still old hardware out there and it should still be supported, but we are not doing new feature development on them: the Radeon kernel module, its DDX driver, and the OpenGL driver for older hardware. And this problematic SCPC component we would also like to phase out.

Okay, so this is hopefully clear to people in the room. If you do have questions, please ask them. Some major milestones of the last year.
There is the big story of upstreaming a new display driver, called DC, in the amdgpu kernel module. There is the kind of Christmas present of open-sourcing the Vulkan driver. There is something about package delivery that I will talk about later. We achieved OpenGL 4.5 conformance in the open-source Mesa driver, and we managed, and continue to manage — we have been doing this for some time — to deliver zero-day support for new hardware in open-source drivers. There are some caveats I'll get to, but there was an open-source release and you could get it to work. It has to do with the display driver.

Okay, so a brief overview of this Radeon/amdgpu situation, where the Radeon module was kind of red and green at the same time. In the middle, you see the hardware generations. The Radeon kernel module is the one that was always there. But about five years ago, more or less, there was a decision inside AMD to do Linux right and to do open source right. Part of this decision was to say: back then there was this closed-source driver called fglrx, and we don't want that anymore; we want one single kernel module which works with all our drivers. That is basically how the amdgpu kernel module got started. Its initial development was during the Sea Islands generation, the time when we really committed to it was with Volcanic Islands, and then the support was back-ported. So the idea is that amdgpu is really for everything that's GCN, and Radeon should at some point be phased out for GCN and only used for the older cards. Accordingly, amdgpu only supports the more modern features. All the Vulkan drivers work only with amdgpu, because certain features are lacking in the older kernel module. It also supports our compute stacks, and it has a GPU scheduler.
So the idea is really that, going forward, you should use amdgpu for all the modern cards, which is not the case by default today, just because there might still be some bugs. We think it works, but people get really angry when you break their system when they update their kernel, so changing the defaults will maybe still take some time. But you can switch with kernel command line arguments since 4.13.

Right, so one of the milestones I mentioned was upstreaming DC, the new display driver. Why did we want to do that? Implementing a display driver is actually a lot of work. There is a lot of magic going on, and you have to talk to hardware engineers. Being able to share a common code base with other operating systems really helps to support all the more advanced features you expect from a display driver these days: audio via HDMI, advanced DisplayPort configurations, that kind of stuff. So the decision was made that an existing display team within AMD should be brought into the open-source world. If anybody has ever been in contact with such a project, you know that that's very, very difficult. I think it took longer than we at AMD hoped for, of course, but I think we're in a good place now. The display team has really arrived in the Linux kernel community, I would say. But it was a challenge: it's a huge code base, around 130,000 lines of code. It was first published as cleaned-up open source basically two years ago, and it took almost those two years to actually get it upstream. This leads to the caveat I mentioned before, because the new display driver supports almost all of GCN, but it is required since our very latest generation of hardware, the Vega generation. So at the time Vega released, there was full open-source support for Vega.
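To come back to the kernel command line switch mentioned earlier: a sketch of what such a configuration can look like, assuming a GRUB-based distribution. The exact parameter set depends on your hardware generation and kernel version, so check your kernel's documentation before relying on this.

```shell
# In /etc/default/grub (then regenerate the grub config, e.g. with update-grub).
# For a Sea Islands (CIK) card, tell Radeon to stand down and amdgpu to take over:
GRUB_CMDLINE_LINUX="radeon.cik_support=0 amdgpu.cik_support=1"

# For a Southern Islands (SI) card, the analogous pair:
GRUB_CMDLINE_LINUX="radeon.si_support=0 amdgpu.si_support=1"
```

After rebooting, `lspci -k` should show the card bound to the amdgpu kernel driver instead of radeon.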
It was not upstream yet. Now everything is upstream, and going forward it will all be upstream, so I think we're in a really good place now with the display driver.

All right, now let me talk a bit longer about the big news from December. We have been saying for a long time that we would have an open-source Vulkan driver. The internal team worked hard to make that possible, and now it has happened. You can download it from GitHub and build it yourself, and some people have done that, so it should work. It is the same code base that the Windows Vulkan driver uses, and actually it is largely even the same code base that the Windows DirectX 12 driver uses, because internally they share a lot of code in a common library called PAL, the Platform Abstraction Library. I'll get to that in a moment. It supports all the GCN-based GPUs with the amdgpu kernel module. There is an official support level for certain distributions, but you are people who can compile things yourselves, and it should really work on all the distros. There is no particular reason why it should only work on Ubuntu or Red Hat; that is just what the Vulkan team has committed to supporting. If you use anything else, go download it and try it out. It should work.

Yes, the Vulkan driver should also work with 32 bits, because on Windows many games still use 32 bits. I see no reason why not — I hope I'm not saying anything wrong, but I think it should work. I don't know if there are games on Linux that use Vulkan and are 32-bit, actually. Okay, yeah, that's a fair point.

A brief comparison. We are now in the funny situation that there are three different Vulkan drivers you can use on Linux with AMD graphics cards. There is the open-source AMDVLK driver, and there is a closed-source variant of this AMDVLK driver. Why is there a closed-source variant?
The difference is the shader compiler back-end. The open-source variant uses LLVM as its shader compiler back-end, but the closed-source variant uses an internal shader compiler back-end that for various reasons was not open-sourced and will likely never be, because the thrust is more towards going to LLVM. And then there is the community-developed RADV driver, which lives in the Mesa repository.

This is, on some highlights, a comparison of these drivers. The two outside ones are open source. The AMDVLK driver is supported by AMD; RADV isn't. Community contributions can go to both of the outside ones. I don't know if there has been a contribution to the AMDVLK driver so far, but the Vulkan team would like to have them; they are very open to that and would welcome them. So you are very much invited. The closed AMDVLK variant is in a funny situation, because of course you can't really contribute to it, but it's the same code base as the open one, so your contributions might end up running for Windows users as well. AMDVLK lives in its own repository on GitHub, whereas RADV lives in the Mesa tree. The open-source ones use LLVM. Our QA, of course, only looks at our own driver. As for support for new GPUs: obviously the Vulkan team we have internally gets a head start in implementing features of new GPUs, and it will be able to have zero-day support as it has had in the past. With RADV — I mean, the people working on RADV are really good guys, they also contribute to our OpenGL driver and I really like working with them — but they are at an inherent disadvantage there. There is also a lot of tooling that AMD produces, especially for game developers, so that they can look at their frames, analyze their performance, et cetera, and the official driver supports it.
Well, Windows support, you see — theoretically, the open driver could run on Windows if you added the parts that are missing, which are not open source yet, so...

So the question was whether this driver could live inside Mesa. With my Mesa hat on, I would say it doesn't really make much sense, because it would be a bit of an alien element in there. It's not a plan at the moment, because it's too disjoint. Even when Intel wrote their Vulkan driver, people asked whether it should really be inside Mesa, but they had a good reason, because they share the shader compiler, whereas here that is not the case; only the LLVM parts are shared. Wait for my other talk, and there will be more details there.

Okay, a brief look at the architecture. This is, from top to bottom, what is running when a Vulkan application runs. There is the Vulkan loader, which is provided by Khronos, or actually developed by LunarG, but the details don't matter; it's vendor-independent. It loads the Vulkan driver, and the Vulkan driver internally has two parts: the part called XGL, for legacy reasons, which lives in its own repository, and the part called PAL, which also lives in its own repository. The interesting thing is that most of the knowledge about what the hardware actually looks like lives in this PAL component. The Vulkan API translation component, of course, also has some background knowledge about the hardware in it — you couldn't just take it and use it for, I don't know, an Intel Vulkan driver; that wouldn't make sense — but most of the hardware details are down there, or in the shader compiler, or pipeline compiler, as it's called here.

This is a slightly more detailed view of what PAL contains. By "client", the diagram means the Vulkan front end that translates Vulkan to PAL. Then there are operating system parts — I was told not to go over there.
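As a side note on how the loader finds the driver it loads: on Linux, the Vulkan loader discovers installable client drivers (ICDs) through small JSON manifest files. A sketch of such a manifest, with an illustrative library path and API version:

```json
{
    "file_format_version": "1.0.0",
    "ICD": {
        "library_path": "/usr/lib/x86_64-linux-gnu/amdvlk64.so",
        "api_version": "1.0.66"
    }
}
```

Manifests typically live under /usr/share/vulkan/icd.d/ or /etc/vulkan/icd.d/. With several drivers installed side by side, the loader can be pointed at one explicitly via the VK_ICD_FILENAMES environment variable, e.g. `VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/amd_icd64.json vulkaninfo` (the manifest filename depends on how your driver package was installed).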
On the right-hand side, there is a fairly clean design where the source code structure mirrors the hardware structure. There is a component for the graphics IP, where the actual 3D rendering happens. There is a component called OSS, operating system services, which is basically where the DMA engine and the transfer queue live, for those who know Vulkan. And there is a video encode/decode part; there are some AMD extensions for that which are implemented there.

Something should be said about the development process. The way it currently works is this: there is the repository on GitHub, and contributors are welcome to submit pull requests, which are added into the internal code base where development happens. From there, a regular automated code cleanup process removes things like references to new hardware that has not been released yet, and then there are weekly pushes back into the GitHub repository. From there, people are free to take it and compile it themselves, and distributions would be very welcome to package it, obviously. Our official packages, though, will probably, for the time being, not be derived from the GitHub code base; maybe I'll say something about that later. There is also a separate LLVM branch, for stability, because there are specific patches that need to be cherry-picked. But its development follows a somewhat different model, because in the case of LLVM, LLVM is the upstream, and the ideal is really to get the required changes into upstream LLVM as much as possible.

Okay, future plans. There is optimization work going on. Right now, the situation with the shader back-end is that the LLVM-based one is not quite up to par with the closed-source SCPC one, but the goal is to get them equivalent, or maybe even LLVM better, so that we can really say: yes, the LLVM-based one is the one we distribute everywhere.
And of course, in the future there will be feature support; new GPUs will come out. The Vulkan team is aware that getting a proper open-source process going is not always easy, and they do want to make sure that the external contribution process is ironed out and works properly.

All right, I do have some more slides about how we actually deliver our drivers to our users. The main thing to take away here, really the most important thing, is that we want things to be upstream first, as much as we can do that, so that people can just get their distributions and everything will work out of the box with AMD hardware. For the most part, this is actually where we are today. But there are some cases where we need to provide our own packages. This happens because of specific customer engagements, and it happens when new hardware comes out: new hardware doesn't always align with when the distributions pull from upstream. For that, we have our own packages, and since the end of 2017 we provide them in a kind of layered approach, where there is an all-open core onto which additional closed-source packages may be installed. This works on a release calendar that's shared with Windows; it has version numbers like 17.50, then 18.10, which will be the next one, and so on. Right now, you can download a big distribution-specific tarball from the AMD website. It contains packages: for Ubuntu it contains debs, for Red Hat it contains RPMs. Those can be installed via a script, or, if you're adventurous, even by hand. And those are the distributions we support there.

And just to remind you of the picture we had at the beginning, looking at it from the perspective of what's in the all-open core and what's in the pro add-on:
The all-open core consists of everything that's in green, and all of that will also be present if you're using the pro add-on. Specifically, the pro add-on contains the workstation OpenGL driver, but multimedia will still use the Mesa code. Currently it also contains the closed Vulkan driver, because of the compiler part, which is still closed source, but of course the plan is to transition that to the fully open-source LLVM basis.

All right. So, I'm almost done, a bit ahead of time on purpose, so that people have time to switch rooms if they want to leave, because I have another talk now as well. It will be much more technical; I think it's very interesting, but maybe the target audience is a bit less broad. The main thing to take away from this talk: there was a conscious decision made five years ago within AMD to try to do Linux and do open source right. It's been a long process, and not everything has been 100% figured out yet, but I think we're in a very good place right now. If you buy an AMD GPU today, it will work with the standard packages, maybe with the tiny caveat that if it's very, very new, your distribution might not have support for it yet and you might have to compile your own upstream bits. But I think we've come a long way, and we're going to continue on that path. So thank you for your attention.

Yeah, maybe a question? "Are there any plans to merge the AMD Vulkan effort with RADV?" That is a very good question, and in some sense one that I can't answer, because I work on neither project. I think right now the situation is maybe one of friendly competition; having both drivers makes both of them better. In the end, we'll see how things play out. Merging is not really something that makes sense, given how the code bases differ, but taking the best parts of each — certainly, and that depends on the people who are potentially interested in it.
The Vulkan team is aware that the process may still need to be worked out, and if you're interested in these kinds of things, try to talk to them.

Yeah, it may have been slightly longer. I wasn't at AMD back then personally, and I don't know at what time the effort really started and at what time it was being talked about. I'm not a historian on these things, I'm sorry.

Yeah, you in the back? Again, I wasn't there, but my guess is that it was a mixture of things: on the one hand, better brand recognition in a small but technologically important corner, but also, there are just customers who say, we want open-source drivers, give them to us, we want them for support reasons.

Okay, there was maybe another history lesson over there. So the question was whether this is really a commitment to move to open source. Well, maybe we can take that offline; I don't know what you're referring to, but I think that if you look at the signs fairly, they point in the direction of a serious commitment. And there are good reasons for it: like I said, there really are customers asking for this. Now, sorry, you were next.

So the question was about documentation availability for GPUs. There is absolutely documentation out there. If you go to AMD's site and search for — I don't know what the right keyword is — developer resources, there is a page that has lots of CPU documentation and GPU documentation, including the ISA and everything. It's not as well documented as we would like. Sometimes you may also have higher expectations of our internal documentation than what it really is. We just don't have an infinite amount of money, and it takes time to go through everything and make sure things are cleaned up if there is something that can't be released. We try, but in the end, the code is the best documentation. Okay, maybe one more question, and then we go to the next talk.
Well, at least with OpenGL, Wayland has worked out of the box for quite some time. For Vulkan, I don't know. This was mentioned to me by someone on the Vulkan team, and I didn't verify what precisely is needed. It is possible that some small extension is missing. In the end, it shouldn't be much, because you just need to export surfaces in the right way. Yeah, okay.

There is an alternative to CUDA already, and it's called ROCm. It works. Well, the problem is that it's not just CUDA, right? There is this whole infrastructure of machine-learning frameworks, and, well, we're working on it, but it's an uphill battle, unfortunately. Yeah, ROCm — there was recently some trademark thing, but look for Radeon Open Compute, and that's the CUDA-like thing. Okay. Ah. Thank you.