So, it's past five, so we can start. This talk is about IGT GPU Tools: a little bit about the past, what's happening right now, and what we are planning for the future. I'm Arek, I work for Intel's open source graphics center, and I'm a maintainer of the project.

First, to give you some context: IGT GPU Tools is a set of tools and tests for testing DRM drivers. We are targeting the kernel APIs, not OpenGL or anything in that regard; it's just KMS, memory management, command submission. It started as Intel GPU Tools, but it outgrew that, because KMS is generic: it shouldn't be vendor specific, it shouldn't be tied to one manufacturer. And since a lot of other drivers implement the same thing, and should behave the same way, there's no reason to duplicate the effort by implementing the same test suite again. So we dropped "Intel" from the name and renamed it to IGT GPU Tools to be more welcoming, and I think it's doing better nowadays.

There's also one quite interesting project called VKMS, virtual KMS: a virtual KMS driver in the kernel that you can set some knobs on, and it pretends to be a display. It was developed by Google Summer of Code students, and the development of that kernel module was test-driven using IGT, which is quite an achievement in my opinion. The interesting part is that they helped us quite a lot, because here and there we had code that wasn't really generic and didn't work well with other drivers, and they helped quite a bit with ironing that out, so nowadays we run much better on any hardware. And yeah, if you are interested, or you work on VC4 or amdgpu, and you want to run IGT more, or you see some rough corners, then please let us know. We work for Intel, so we cannot do everything for you, but we are there to help.

So, talking about the governance.
We have just two maintainers for the full project (some projects have just one, but we have two), and 67 people in total with commit rights to the repository. Most of the commits don't go through the maintainers: if someone with commit rights reviews your change, or you cross-review each other's work, they can just push it. And some of those people are not from Intel: we have people from AMD, people working for ARM, people working for Bootlin who have commit rights. Maybe in the future we'll even see a maintainer who is not from our company; it would be nice to see that happen.

As for some statistics: in 2018, fewer than 200 patches out of 860 were pushed by the maintainers. Everything else was pushed by people with commit rights cross-reviewing that stuff, and this model is working quite well. We modeled ourselves after i915 and drm-misc, which also have a couple of maintainers and a lot of people with commit rights doing the work. (Okay, graphics problems; I guess I'm standing in the wrong spot or something. Okay.)

We try to be as open and as non-limiting to people as possible, and this has proven to work pretty well, because we haven't had any serious incidents. There were a couple of accidental pushes, someone forgot to add Reviewed-by tags, but other than one or two reverts there was nothing really worth mentioning. People are generally trustworthy; we haven't had to revoke any rights or anything like that. And we see quite a bit more contribution by companies now: AMD, for example, was working on color management in KMS, which is quite nice, and Bootlin is working on Chamelium; more on that later.

So, the current state.
We've tried to contain the Intel-specific parts, because in the past IGT was not only tests but quite a few tools as well, like graphical overlays for performance statistics and all that. We still have those, but the major chunk now is the tests. We used to have just one flat directory with all the tests in it, all the Intel stuff and all the KMS stuff mixed together. So to tidy this up, we moved our stuff to be a little bit separate and made sure that the generic stuff is the first thing you will find. That played out quite well.

We have 50 KMS test binaries with over 2000 subtests. This is testing everything from setting up the CRTCs and routing through the pipes and planes, to doing flips, atomic modesets, legacy modesets, testing the cursor planes, testing the universal planes API, and making sure that rotation works. Most of that we verify by reading back CRCs, or you can use Chamelium, a display emulator by Google: an FPGA with a couple of serializers and deserializers that lets you read out DisplayPort or VGA directly. Most of the tests can be used on quite a lot of hardware; there are a couple of issues with CRCs on certain platforms, but I think I have something on that later. (Is that my laptop, or is this something for the speaker? That's interesting.) So most of those 2000 subtests can be used on any vendor's hardware; they shouldn't be specific to anyone.

It's also heavily used, because it's part of the CI that Martin presented about, and it basically executes 24/7: we run six million subtests a week, so we have a lot of results to parse. And whenever you make any contribution to IGT, you don't have to take our word for it when we say something is breaking us, because you get the results from us.
If any change you make breaks anything on our side, you'll get automated emails. So you don't depend on the maintainers or other people to tell you whether you can contribute something; you rely on the automated systems, and you can check all the results that we generate. All the test results, all the logs, all the parsing, filtering, and bugs filed through IGT are available publicly. Martin spoke much more on that topic, so you can just watch his presentation or try out those links. Everything is documented: the types of runs, the test lists, how we execute, and why we execute that particular stuff.

We also went through a couple of changes recently to be a little bit more modern and friendly. We switched from autotools to Meson, though it's not a complete switch. Mostly we were just annoyed by the build time, and everyone in our area was switching to Meson: DRM, Mesa. It's much faster: from clone to built binaries it's four times faster on most hardware I tested at home. It's much easier to maintain, and much more readable when it comes to configuration. We still have autotools because we have a couple of people using Debian stable, which doesn't have a recent enough version of Meson yet, and Meson is still undergoing quite rapid development, so we're waiting for it to stabilize enough. We'll keep autotools on the side, but the main and supported build system is Meson.

The new runner. Previously, for executing things, we were using Piglit, because Piglit has first-class support for running IGT tests and collecting their results. But there were a couple of annoyances. Whenever we wanted to run it, we had to also deploy Piglit; we had to have Python installed everywhere; and trying to cut out all the tests from the Piglit distribution that we are not interested in was also kind of annoying.
We also had issues trying to implement a couple of features, because they didn't fit well with the overall Piglit model and would have constituted basically a rewrite of the core features. So we wrote something that is mostly compatible on the command-line level and on the output level: it generates basically Piglit-like JSONs, but it's much more focused, smaller, and dedicated to IGT's needs. The redistributables that we use in our CI system have seen a 95% size reduction, which is nice and speeds up deployment. It also has a smaller CPU footprint, because it's written in C, so you don't have garbage collection kicking in, or whatever other slowdowns Python causes. This is really important for some of the performance-sensitive tests we have, like suspend tests and power management tests, where unpredictable noise is definitely something we don't want.

We have full journaling. When test execution starts, we keep journal files that we append to, flushing the writes to the file system. So if the machine dies at any point, we know more or less what happened and which test was being executed. With Piglit that wasn't always the case, and doing proper journaling there would have been much harder. We also have much better handling of incompletes because of that: whenever a hardware hang happens, we can recover the state, continue execution from the next test, and collect quite a bit of logs.

We also implemented aborting on serious kernel taints. That was an issue we had seen when executing a lot of tests that touch the kernel APIs and put them in different states: sometimes we hit some WARN_ONs, which still allow us to keep executing but are almost a lighter version of BUG() asserts. The kernel continues, but its state is kind of uncanny. So now the runner checks the kernel taint between each test.
So if we see any of the taint flags, like the bad-page taint saying that something went wrong with memory management, or the taint saying that the kernel warned on something, then we log that information and abort further execution. That cuts the noise down quite a bit, and kernel drivers tend to be quite noisy.

We also implemented a "multiple" mode for executing several subtests in one exec. As I mentioned before, we have 50 KMS binaries with almost 2000 subtests, and the tests are written so that you can run the full binary or just certain subtests of it. But in the past, because of how the runner was implemented, we were always running one subtest at a time. That meant all the initialization code was executed for each subtest, wasting quite a bit of time. So this is a huge speedup, because the runner now understands the IGT notation of subtests, and if you have consecutive subtests from the same binary, it can squash them into one exec, which is a huge time saving, especially when there is expensive setup and restore work involved.

We also migrated to GitLab. Like most of the kernel-adjacent projects, we are mailing-list oriented: we take contributions on the mailing list, and that's where you get the results. As soon as you send patches, 30 minutes later you get the CI results saying whether it's working or not. But we started shifting towards GitLab. The first step was just hosting the repository there; then we started using the CI/CD pipeline. That means we can move some of the things that were happening behind the curtains at Intel into the public space. We also contributed cross-compilation jobs, so we now cross-compile for ARM. That used to be quite a pain for a lot of ARM people, because we didn't really test that IGT even compiled for ARM, and it turned out to be broken quite often. Now we notice quite fast when that breaks.
And I think that should make those people much happier and more willing to work with us. Because of that work, we also distribute Dockerfiles, which are an always up-to-date way of communicating dependencies, because they get updated with the tree. Before, we had a list of a couple of dependencies for one distribution, using that distribution's package names, and figuring out what you had to install was usually a couple of invocations of configure or meson and seeing what's missing, which was quite annoying. Since we have this cross-compilation, the compilation testing, and the comparison between the two build systems, we have Dockerfiles for Debian and Fedora, I believe. This is a pretty good and always up-to-date way of communicating dependencies, because if they ever change we'll see a build failure. We also try to trim them down: whenever we drop a dependency, we trim it out of there. Recently, for example, we dropped the OpenSSL dependency, because we use a different implementation of SHA-1.

We are also now considering issue migration. Currently we are using freedesktop.org's Bugzilla, and it's going to be EOLed because of all the hassle it involves, so we'll migrate all the issues from there. We are also considering switching to merge requests, because, as I said before, we are mailing-list centric, but IGT has the possibility of being one of the very first projects in the DRM space to start leveraging merge requests. I know that Mesa is way, way ahead of us on that, but they're not as tied to the kernel.

And there are a couple of discussions happening recently about making IGT the DRM test suite.
There are many companies with test suites testing similar stuff internally, as I mentioned before; we had a couple of those at Intel as well, but we managed to converge on IGT, and I hope we can do something similar for the whole DRM subsystem. You know, Khronos is extremely nice because you get full specifications and you get conformance test suites. For the kernel APIs, even though they are common and shared, we don't have that. How do you even know what the expectations are? The documentation is also not perfect, not exhaustive. So having expectations and behaviors documented as code, which can be referred to both by user-space developers and by driver developers to make sure they conform to something, is a good thing.

The DRM subsystem maintainers recently proposed that IGT tests could be required for new APIs. Whenever a new user-space-facing API is added in the KMS area, it should be backed by an IGT test, if that's feasible. Of course, there are a couple of corner cases where IGT is not perfect and needs more work, so they don't want to block on that; they don't want to make people struggle. But if it's possible, then yes, please back it up with IGT tests, which would be nice.

There were also a couple of concerns from people not running on traditional PCs. For example, some of the GPUs don't have CRC support. Our tests, especially the KMS tests exercising planes and pipes, are heavily CRC-centric: you set some state and then compare the CRC against a golden value, making sure that this is actually what we display. We also have a way of drawing stuff using Cairo, then calculating a CRC out of that and comparing it with the values the hardware gave us at the end. So some of the hardware doesn't have the required capabilities, but there are writeback connectors which should allow us to do something similar.
There are still some discussions in that area: how to handle it, where it should be implemented, whether the kernel should mimic the CRC interfaces or whether IGT should be aware of the difference. But whatever the maintainers agree on, I think we'll be fine.

So yeah, if you work on any driver, if you deal with KMS, if you have any issues or problems: please write regression tests, run IGT on your driver, and just talk to us. We are on IRC, we are on the mailing lists; there's also a MAINTAINERS file in IGT if you want to talk to us more privately. We are more than happy to make this work for everyone, but that cannot happen without other people, because we cannot do the job for every vendor and every driver. We are mostly focused on us, but we want to help others get there too.

Any questions? Okay, thanks.