So hello everybody, I'm Alejandro Piñeiro, and I'm going to give a presentation about ARB_gl_spirv, about the work in progress bringing this extension to Mesa. But first I would like to make some announcements. The first one is that XDC has been announced: it will be in A Coruña in September this year, and you can follow all the updates on Twitter. It's not really related to this talk, but I would also like to mention beforehand that the Intel Mesa drivers for Linux are now conformant. I don't know if everybody knows what that means: Khronos, which manages OpenGL, has a test suite, and if you pass all the tests you get certified as conformant, and Intel got this conformance since day zero, or day one, as you prefer to see it. That includes passing the tests for the extension that I am going to explain in this presentation. So, what are the topics that I am going to cover? I will make an introduction to what this work is related with. I will also make a summary of the development history: what we found when we started, what issues we had and why, and how we solved those issues, which led to the technical decisions, to the work we did. I will also mention how we test the work that we do, and finally give a summary of the current status and future plans. So, introduction. Who is doing all this work? The initial work was done by Nicolai Hähnle — sorry if I pronounced that wrongly. Right now it is myself, Eduardo Lima and Neil Roberts from Igalia, and this work is supported by Intel. So, I... [Audience: Can I interrupt for one second? We only have audio on one side.] Ok. Is this better now? Hello, hello, one, two, three, testing, testing. Ok, sorry. Ok, introduction. I am going to make a brief introduction to the main terms that I am going to use. The first one is probably something everybody is already used to: GLSL. It is the OpenGL Shading Language.
It has been used as the OpenGL shading language since 2004. It is a kind of C-like language. One of its features is that the shader source code is shipped with your program, so it is not exactly private. I mean, even if you try to hide it, in the end the driver needs to be able to get that code. Some people didn't like that. Here we are all open-source minded, so for most of us that is probably not a problem, but for other people in the industry it was a problem.

And then we have SPIR-V. It was initially introduced as just SPIR, for OpenCL. In this case it is a binary format, so it solves this privacy problem. Initially it was based on LLVM. Then SPIR-V was announced in 2015 and became part of OpenCL 2.1 and Vulkan — that is probably the reason they added the V — and in that case it stopped being based on LLVM; now it is just one spec.

So what we had at that moment is that we have OpenGL and Vulkan, and each one has a different shading language: one was using GLSL and the other one was using SPIR-V. But at the same time people wanted to use both; people were porting from one to the other. So some kind of interoperability between them was needed. The first step was an extension, but this one was for Vulkan: GL_KHR_vulkan_glsl. This extension defines how a Vulkan program can use GLSL. But in this case it's not an extension for Vulkan itself; it's a kind of front-end extension. It means that, for example, if you have a compiler that is already doing GLSL to SPIR-V conversion, this extension defines the scope for that compiler — what we expect from that translation. After this one they provided the equivalent for OpenGL. That is the main topic of this presentation: ARB_gl_spirv. This one defines two things. One is the ability for OpenGL to load SPIR-V modules, and the other is the same kind of front-end definition about which features from GLSL we want in this case.
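Since SPIR-V is a plain binary format — a stream of 32-bit words — a loader can sanity-check a module before handing it to the driver. The following is only my own minimal sketch, not code from Mesa; the function name is made up, and it just illustrates the SPIR-V 1.0 five-word header layout (magic, version, generator, ID bound, reserved schema):

```c
#include <stdint.h>
#include <stddef.h>

/* A SPIR-V module starts with a 5-word header:
 *   word 0: magic number 0x07230203
 *   word 1: version (e.g. 0x00010000 for SPIR-V 1.0)
 *   word 2: generator magic
 *   word 3: ID bound (all result IDs are < this value)
 *   word 4: reserved schema, must be 0
 */
#define SPIRV_MAGIC 0x07230203u

int spirv_header_ok(const uint32_t *words, size_t nwords)
{
    if (nwords < 5)
        return 0;   /* too short to hold a header */
    if (words[0] != SPIRV_MAGIC)
        return 0;   /* not SPIR-V (or wrong byte order) */
    if (words[3] == 0)
        return 0;   /* ID bound must be at least 1 */
    if (words[4] != 0)
        return 0;   /* reserved schema word must be 0 */
    return 1;
}
```

Reading the magic word is also how tools distinguish a SPIR-V blob from GLSL text: there is no name or metadata required beyond this header.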
So, in this case it's at the same time a driver and a front-end extension. And there is also another small extension called ARB_spirv_extensions. The reason is that the previous extension — the main one — is based on SPIR-V 1.0. So what happens if we want to extend that functionality? This extension allows exposing which extra SPIR-V features we have, and the spec implies that in some cases we will get some OpenGL-specific SPIR-V extensions.

Related with these extensions, and related with this front-end role, Khronos, in addition to the extensions, also maintains a reference front-end: glslang, which for example uses the definitions of these extensions to create SPIR-V binaries from GLSL. And they also maintain SPIRV-Tools, which are tools to manage SPIR-V modules: assemble, disassemble, validate, etc.

So — if you are familiar with OpenGL you already know that GLSL has been used for a long time. One of the reasons they define these extensions is to reduce the scope. For example, the first extension that I mentioned removes a lot of features — most of them old GLSL features — and adds some Vulkan-like features. And then we have the OpenGL extension, which uses that one as a base but at the same time changes quite a few things. Yes, I know, it's confusing; it was confusing the first time that I read it, because the second extension removes some GLSL features, removes some Vulkan features, adds specific ones and tweaks existing ones. So, for example, subroutines were removed in both. But atomic counters, for example, were removed in the first one and then re-added in the second one. And the spec says that: it has a list of features that were removed in the previous extension but are now included again. But the big change here, in my opinion, is that in SPIR-V the names of the variables — plain variables, uniforms, UBOs, etc. —
are considered like debug information: they are optional, and ARB_gl_spirv maintains that. So, for example, a front-end could take a GLSL shader and create the binary without any names. That means that in OpenGL, when you load the SPIR-V, it needs to work without any names, because they are optional. In fact — I will mention this later — our approach is: as names are optional, we are not implementing support for them for now. Probably we will implement it in the future, but for now we are not taking into account the names that come in the SPIR-V binary, because everything needs to work without them. It's also a way to test that we handle all of this correctly. If you don't have names, then you need to use locations, bindings, indexes, etc.

So, I'm going to go through a kind of development history of how this went. The pre-history: how the interest in implementing this extension started. It started in 2016 — all these things that I have in quotes are emails that people sent to mesa-dev. I don't know the name of that person, they used a kind of nickname, but they sent an email asking about the interest in implementing this extension. Several driver developers started to say "well, I would prefer this way", "or that other way", etc. Then, almost a year later, Nicolai sent a request-for-comments thread to mesa-dev. It included a starting point with some code, and it also had some questions. It was focused on RadeonSI, and some people started to jump in about the approach, but it didn't go too far from there. It also mentioned testing slightly, but the thread was not really long. So Igalia, the company I'm working for, jumped on the implementation of this extension around September last year, mainly because Nicolai at that point didn't have plans to continue working on this. But we used his code as a reference, and it was really, really useful as a starting point. He also had code for piglit.
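To make concrete what "working without names" looks like: a shader written for the SPIR-V path carries explicit layout qualifiers, so the application can address everything by number. This is a small made-up GLSL example of that style — the identifiers `u_scale`, `color_block`, etc. only exist in the source, since a front-end may strip all names from the binary:

```glsl
#version 450

// Explicit location: the app sets this uniform through location 0,
// never through the string "u_scale".
layout(location = 0) uniform float u_scale;

// Explicit binding: the app attaches the buffer at UBO binding 2,
// with no name lookup involved.
layout(std140, binding = 2) uniform color_block {
    vec4 base_color;
};

layout(location = 0) out vec4 out_color;

void main()
{
    out_color = base_color * u_scale;
}
```

Without the `location` and `binding` qualifiers there would be nothing left to identify these resources once the names are gone, which is why the front-end part of the spec makes them mandatory.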
I will explain what piglit is later, but yes, it's a test suite, a different test suite for OpenGL. So our first steps were integrating his code with the Intel Mesa driver. We started to use the CTS tests for this extension in order to evaluate what was missing.

One of the things his code was based on was NIR. I don't know if everybody is used to this terminology, but in compilers, when you have a language, you usually create a kind of intermediate representation. Sorry — I am not a person with a compiler background, so sorry if I don't explain this properly — but basically you parse the language and you get an intermediate representation, and there are several approaches to that representation. In Mesa they had one, and at some point Intel decided to create a different one, NIR, and started to use it — there are different reasons, and there are plenty of talks around if you are interested — and then other drivers started to use it too, along with other utilities based on NIR. So, for example, in the Intel Mesa driver we have a chain like: you start with the GLSL shader, it is parsed into the GLSL IR representation, then this is translated to NIR, and finally it is converted to the more low-level backend IR. And when Intel wrote the Vulkan driver they created a SPIR-V to NIR pass, bypassing one of the intermediate languages that were around. And why didn't they just remove that older representation? The reason, at that moment, is that there isn't a linker based on NIR. The linking is done on the previous intermediate representation, that is GLSL IR. So NIR is this intermediate representation, but the objects that reach it are already linked. And again, a definition of a compiler topic from a person without a compiler background.
If you go to Wikipedia there is a kind of generic definition of what linking is: when you compile, you usually compile different objects, and then you need to merge them together, and that is the linking — more or less, you get the idea. In Mesa, the GLSL linker does two main things. It gathers info from all the objects, because for GLSL shaders you can query a lot of state information from them. And that information is also used to validate, because you can have one shader and another shader using the same UBO, and the definition should be the same, so it needs to be validated.

So the first approach, the one from Nicolai, was trying to reuse the GLSL IR linker as much as possible. Taking into account that we have NIR, what he did was take only the variables, convert them to GLSL IR variables, try to reuse as much of the IR linker as possible, and then after that use the NIR shader as usual. That was a really good approach for bootstrapping: it allowed having a lot of the linking already done, so it allowed testing the code with several cases and getting some shaders working. But — sorry, that part comes later.

So, after all this introduction, we started to code, we started to get into the dirt. The first approach was trying to get the CTS tests passing, for two main reasons. The main reason to use the CTS tests was that they covered most of the spec as it was intended, and at the same time there weren't too many tests — there were like eight — so it gave us a limited scope we could handle. The other thing — I mean, probably other people on our team expected it, but at least I didn't — is that we needed to add more features to the SPIR-V to NIR pass. But if you think about it, that is normal: the thing is that, at that moment, the SPIR-V to NIR pass was focused on Vulkan — it was focused on
what Vulkan needed: it had only the features that Vulkan needed, but there were some SPIR-V features defined in the spec that were left out because they were only needed for OpenGL, and that includes atomic counters, transform feedback and tessellation. We also needed to do some tweaking on how the SPIR-V to NIR pass handled UBOs and SSBOs. Oops — I don't know where I am... ok.

So at that moment, when we started to code, trying to get the SPIR-V path working, we started to think about the linking. The thing is that, as I said, this trick of converting the NIR variables to GLSL IR variables allowed us to grab the IR linker support — for example, we got atomic counters working using the IR linker initially. But at the same time it is artificial, because all the information is already there, in the NIR shader. And there is a bigger problem: in GLSL, the linking of things like UBOs is based on names — across the different shaders, the names should be the same. Explicit binding is optional and, without an explicit binding, the linker needs to set a binding for that UBO. But now, as the names are optional — and optional means that it needs to work without them — it's the opposite: explicit binding is mandatory, so we need to base the linking on explicit bindings and forget the names. There are also some layouts that are not supported, so for example for UBOs right now we can only use one layout. And that means that, for example, all UBOs are active — in the GLSL linker there is a lot of code just to evaluate whether a UBO is active or not. So in the end, for UBOs and other features, there isn't too much of the GLSL IR linker left to reuse.
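For context, the driver-side half of ARB_gl_spirv (which later became core in OpenGL 4.6) is driven from the application with glShaderBinary plus glSpecializeShader instead of glShaderSource plus glCompileShader, and with explicit bindings instead of name queries. This is only a sketch of that path — error handling is omitted, the `spirv`/`spirv_size` buffer is assumed to have been read from disk, and it obviously needs a live GL context, so it just illustrates the entry points:

```c
/* Sketch: loading a SPIR-V fragment shader via ARB_gl_spirv. */
GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);

/* Hand the binary module to GL instead of GLSL source. */
glShaderBinary(1, &shader, GL_SHADER_BINARY_FORMAT_SPIR_V_ARB,
               spirv, spirv_size);

/* The "compilation" step for SPIR-V: pick the entry point and
 * optionally fill in specialization constants. */
glSpecializeShaderARB(shader, "main", 0, NULL, NULL);

GLuint prog = glCreateProgram();
glAttachShader(prog, shader);
glLinkProgram(prog);   /* this is where the linking discussed above runs */

/* No glGetUniformBlockIndex(prog, "block_name") here: with names
 * optional, the UBO is attached through the explicit binding that
 * was declared in the shader, e.g. binding = 2. */
glBindBufferBase(GL_UNIFORM_BUFFER, 2, ubo);
```

The last two lines are the practical consequence of name-less linking: everything the application used to look up by string has to be reachable by a number baked into the shader.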
So the thing is that, at that moment, we were finding that we would need to write a lot of new code for the linker, and we started to wonder whether it was really worth adding that support to the GLSL IR linker, especially since we don't have a GLSL IR representation — what we have at that moment is the NIR shader. Additionally, other developers — here I mention Timothy Arceri, but probably there are others — were adding some helpers related with linking based on NIR. So we decided at that moment to switch, and we started to write a linker based on NIR. On the slide I list all the reasons that I mentioned. And for the people worried about it: it's true that it's a lot of work, but the scope is really well defined, because ARB_gl_spirv does not include all the features that the current GLSL linker needs to support. The thing is that, as I said, GLSL has been around since 2004, so the Mesa linker right now needs to support features coming from more than ten years ago.

So at that moment we had a clear plan: doing everything based on the new linker, and focusing on getting the CTS tests passing. As I said, they were covering most of the spec — specifically a lot of the corner cases related with mappings, and some other things — and there weren't too many of them. The tricky part here is that, in the same way that our implementation was a work in progress, those tests were also a work in progress. So we spent — well, on the slide I say "some", but I can say a lot of — time testing, reviewing, and submitting feedback and even fixes for those tests. One of them reached version 21, so it was a long, long review and feedback loop. Especially because we were not the only ones doing the review: we were doing it from the Intel Mesa driver side, but there were also people from AMD providing feedback.
So, as I said at the beginning, in the introduction, we got those tests passing around October, November — I don't remember exactly when. And that was enough to get the CTS conformance run passing, at least in relation to those tests. So at that moment we started to clean the code, we started to submit patches to mesa-dev — I will mention more about that later — and the code was clean enough for the submission.

So some people will start to wonder if that was enough — I mean, if we are passing the conformance tests, does that mean that everything is done? No. We were passing the CTS tests, but it was not production ready. The CTS tests were focused more on following the spec — really on all the specifics of the specification — but they didn't test execution too much. Probably they will extend that later, but I don't know; probably they will keep the current focus and assume that, if the specifics of the extension work, everything else is covered by the other tests. I don't know what their plans will be.

At that moment we had two options for how to approach things. One, as mentioned, was improving piglit, and the other was working on a GLSL to SPIR-V back-end. So, piglit is another test suite. This one has been public, open source, since the beginning, and it's used a lot by the Mesa project — ok, there is also a typo there: it says "heavily used by piglit developers", and of course piglit is used a lot by piglit developers, but what I wanted to say is that it is used by Mesa developers. One of the utilities it includes is shader_runner. shader_runner uses a text file where you put the source code of the shaders, the values for the uniforms, for the UBOs, etc., and then you can also put the expected outcome, so when you run the test with shader_runner it is checked at the end. The main advantage of this is that it's really, really easy to write a new test. And Nicolai added glslang support to piglit. It's mainly two things.
The first is a new script that parses these shader_runner text files and then uses glslang to create the SPIR-V binaries. It also tries to fix the shaders. The thing is that, as I said before, the spec also defines what kind of GLSL you expect when you do the conversion. So, for example, explicit locations for uniforms are mandatory. But a lot of the piglit tests don't include a location, because they are old and at the time that was not needed. So this script tries to fix the shaders, assigning locations and a lot of other things, while keeping what they want to test. And the other thing that Nicolai added to piglit was support in shader_runner for loading the SPIR-V binaries. So you have an option, and then you can run — I think I mentioned that before — you can switch, basically. That means that you can run one test by default using GLSL, and just by passing one option you load the SPIR-V binary instead. And as I said, you can write tests easily.

For us it was a really, really useful tool, because you could write one shader, run it on the already-working path, and then, just using a new option on the command line, run the SPIR-V binary. Well, we love it; we use it a lot. While our main objective was passing the CTS tests, I think these piglit features were what we used most to get the code evolving. We also added some features ourselves. The thing is that, as I said before, ARB_gl_spirv says that things should work without names. And piglit, in order to feed the data, was using the usual OpenGL approach: for example, for UBOs it was using the name of the UBO, asking for its index, and with that index finding where everything is and putting the data there. But with SPIR-V we don't have names, so we needed to change how the data is fed for UBOs and SSBOs — well, no, sorry, for SSBOs it was already in place, only for UBOs.
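To give an idea of the shader_runner format mentioned above: a test is a single text file with sections for the shaders, the data to feed, and the expected result. This is a made-up minimal example in that style; the exact set of `[require]` lines and the command-line switch that selects the SPIR-V path were still evolving during review, so treat the details as illustrative:

```
[require]
GLSL >= 4.50

[vertex shader passthrough]

[fragment shader]
#version 450

layout(location = 0) uniform vec4 color;
layout(location = 0) out vec4 out_color;

void main()
{
    out_color = color;
}

[test]
uniform vec4 color 0.0 1.0 0.0 1.0
draw rect -1 -1 2 2
probe all rgba 0.0 1.0 0.0 1.0
```

The same file can drive both paths: by default shader_runner compiles the GLSL source, and with the SPIR-V option the shaders are converted to binaries first, which is exactly the "write once, run on both paths" workflow described here.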
And finally, about all this work on piglit, both from Nicolai and from us: it is not clear if it is going to go upstream. The thing is that, when it was debated in the past, some people had doubts about changing shader_runner, and especially some people had doubts about adding a dependency on glslang to piglit, for different reasons.

The other approach, first mentioned in May, was to create a GLSL to SPIR-V back-end. The idea would be that Mesa itself would include a way to convert GLSL to SPIR-V, and as a bonus it would provide an alternative to glslang, which right now is the main way to get GLSL shaders compiled to SPIR-V. And just in case some people think that glslang is just a Khronos tool for amateurs: for example, for the last version of the Doom game, I think that they used glslang to convert from GLSL to SPIR-V. The other advantage would be optimizations, in the sense that the GLSL compiler has several optimizations. [Audience question] Right now this is not a proposal yet because, as I have said, it's not as easy as just a switch, since how you feed the data is also different. So at least for the linking — I mean, all the tests that involve linking — we could do this as you say, running them twice. But yes, there are a lot of details about how this testing will work, because it's not just running everything twice. In fact, one of the things Nicolai was doing that I didn't mention was also creating a kind of blacklist — this and this and this test pass, the others don't — so, extending that approach, the idea would be running twice only the whitelisted tests. But yes, the testing right now is mostly focused on getting the development of the extension going, and at some point we will need to define a better policy. In fact, as I said before, it is not clear if all these changes on piglit will go upstream, because with this approach you don't need changes on piglit to get the SPIR-V binaries.
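As a rough illustration of the tooling discussed here — glslang as the standalone GLSL-to-SPIR-V compiler, and SPIRV-Tools for inspecting the result — the command lines look like the following; the exact flags depend on the installed versions, so this is just a sketch:

```
# Compile GLSL to SPIR-V under OpenGL (ARB_gl_spirv) semantics;
# -V instead of -G would target Vulkan semantics.
glslangValidator -G shader.frag -o shader.frag.spv

# SPIRV-Tools: human-readable disassembly, and standalone validation
# of a single module.
spirv-dis shader.frag.spv
spirv-val shader.frag.spv
```

The `-G`/`-V` split mirrors the two front-end extensions described earlier: the same compiler enforces a different feature scope depending on which API the binary is meant for.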
Because, as I said — well, it's on the next slide, not this one. The idea would be that internally — I don't know, probably through an environment variable... You ask how do I pronounce Mesa: we are Spanish and we pronounce it "Mesa", but probably "May-sa" is ok too. So, internally, with something like an environment variable, instead of the normal chain — GLSL to GLSL IR and then to NIR — you would internally do GLSL IR to SPIR-V and then SPIR-V to NIR. That way, two things get tested: SPIR-V to NIR, and also the support for this extension. The thing is that internally you would also need to do this fixing that piglit was doing: this shader doesn't have an explicit location, so internally we create one for this test. That would allow running the piglit tests without changes, but the way to fill the data would still be different. I don't know, probably in the end there will be some kind of compromise: something like not using glslang to create the SPIR-V, using this approach instead, and then making a small change on shader_runner to fill the data for the tests.

In any case, regarding why — if people ask — we are not using this yet: it's not finished. As I said on the previous slide, Ian Romanick sent the first version in October and some people started to review it, but it's not finished; the series is still not fully reviewed, it's still not merged, and it's still a work in progress. For example, one thing that was missing was UBOs, so in my case, as I was working on that, I needed to rely on piglit.

Ok, so... I don't know how much time I have. Ok, ok. Sorry if I was speaking too fast, but it's what happens when you are in the presentation. So, ok, where are we right now: as I said, we got all the CTS tests passing, and we have almost everything partially covered.
We have uniforms, we have atomic counters, we have UBOs, we have tessellation, we have transform feedback. So — well, when I say plenty of programs working, I obviously mean tests, because the current support is not enough for a real application. And the small extension, ARB_spirv_extensions, is fully complete, but that one was really, really small.

So, what's missing? For all these features, what's missing is polishing them and testing them. Also missing are arrays of arrays, which is how OpenGL calls multidimensional arrays — we have some support, but we still have something to fix in the SPIR-V to NIR pass. We miss multisample. We need to add more validation: the thing is that you can validate that a SPIR-V binary is correct, but that validates each module on its own; at the end we will need some kind of cross-validation when you are using different modules, one module for each stage. And obviously more testing — and more testing means not only adding more tests but, as we were just mentioning before, agreeing on something: agreeing on what we do, on the policy, on whitelisting, etc. It will be — I mean, it will not be trivial.

As for the upstreaming plans: right now we have around 80 patches, more or less, and the plan is sending small series now and then. We sent an initial one in November — I don't know, it was like 20 or so patches — it's partly reviewed, but it's still pending the final review. We sent the fourth version in January. And I think that's all; I hope I haven't forgotten something. So, do you have any questions?

[Question about OpenCL] No, no, sorry — we were really focused only on OpenGL. But yes, I know that there were some threads on mesa-dev related with OpenCL, but I think that they are still using LLVM. Yes, I know that it's curious that there are several people working on some kind of SPIR-V support, but right now we are only focused on OpenGL. And in fact, as far as I know, the OpenCL case is really outside the scope of the OpenGL support, so I think
that in the end the support will probably be quite different for OpenGL and OpenCL. It would be awesome to share some of this, but at the same time, as I said before, this extension defines the scope for OpenGL, so I'm not sure that the OpenCL needs will be the same — even with SPIR-V in both, I'm not sure that the needs will be the same.

[Question, partly inaudible] ...and, well, thank you for the initial work — it would be somewhat rude not to say that in a presentation. This is the reason we use shader_runner so much: writing a test with shader_runner is really easy — one minute, thirty seconds if it's really small. With CTS it's different: the other advantage of shader_runner is that it's just modifying a text file and running it, while with CTS you have to write the C++ code, compile it, and add some support to CTS for this. [Inaudible comment] Ok, it's good to know.

[Audience:] The idea of "let's just run everything twice in a different mode" — the shader cache is another thing that has no specific piglit tests, and its testing strategy is to run everything once and then run it again with the shader cache enabled. And, you know, with piglit you can't shard out the tests in any way to scale it to lots of machines, and it's unreasonable for developers to wait two and a half hours whenever they touch anything.

In any case, just one thing: we are talking about this running twice, but internally we are also adding tests, and we are using them internally — we are not just executing the other tests two times; we are adding tests, internally, for us. So what I mean is that probably it will be both things: probably when we get these tests mature enough, and if they approve the glslang or SPIRV-Tools dependency on piglit — and I don't know if they are going to — we will add, let's say, about one hundred tests specifically for SPIR-V. So the idea of running the others twice would be to extend the coverage. What I mean is that — or at least it is what I
would like to do — is having some tests only for this feature, because it's impossible to test everything by running the others twice. For example, I have like ten UBO tests that could probably go there: a basic UBO test, one with an array, one with two UBOs — and I think that it makes sense to include them, and not just rely on the existing tests, because, as you say, that would be problematic. Those tests — for example, I remember that when I was working on the SSBO support I started using the existing SSBO tests, but those tests were not using explicit locations, so I needed to fix that. Ok, for the first step we will do that, but then I will write my own tests, just for me. So I think that in the end it will probably be a compromise — well, at least I would like it to be something like that. Ok, so, any other questions? Ok, so I think that's all. Thank you.