Good morning, noon, or evening, wherever you are. So this is about the media subsystem in the kernel, and particularly, how do we test it? I assume that not everybody is very knowledgeable about the media subsystem, so the first step is to give a bit of history and show what features there are, at a very high architectural level, so that it's a bit clearer why it is quite hard to test this subsystem. It's pretty old: the first drivers appeared all the way back in 1996, 27 years ago. Time flies. bttv was a PCI card that you could use to capture TV, so it had a tuner, and you could just tune into a channel and capture it. That one still exists today. It's still in the kernel, it's still maintained, and in fact it's currently undergoing quite a bit of work, because it's the last remaining driver that is using an old framework that we want to get rid of. So I'm mentoring someone who is doing that work, replacing it with the new framework. It's actually a really nice card. If you want to capture, say, record video from an old VHS tape, then this is pretty good hardware, and that is quite amazing for a 27-year-old piece of hardware. One other that didn't fare so well was a black-and-white webcam. It produced about postage-stamp-sized images in black and white, or monochrome, using a parallel port. That driver is gone; it no longer exists. These first drivers didn't use a framework; they just did everything themselves. So Video4Linux version 1, yes, there was a version 1, appeared around 1999, but it had lots of shortcomings. Three years later, they developed version 2, which we still use today with lots of additions and improvements. I started contributing about a year later, so I missed the development of that API. But it's now 21 years old, and I think for an API it's not doing too badly. About 12 years ago, the first version of the API was removed, so you won't see it anymore in the kernel.
So this also proves that, yes, you can remove an API from the kernel, but it takes a lot of time. These first drivers were all fairly similar: TV capture, video capture, webcams, all video capture with a simple pipeline, nothing particularly complicated. But then you got smartphones, built around SoCs, and they wanted to use a sensor and do lots of complicated processing steps. So you started to get these really complicated devices with lots of processing blocks inside, and we needed support for that. Around 10 years ago, we finalized the first version of that. It was called the media controller. It did lots of different things, but above all it allowed you to control what is actually inside the hardware. Now the media subsystem, drivers/media, is not just Video4Linux. It is also digital video broadcasting. In Europe, that's the DVB standards; in the US, it's ATSC. They all fall under the same sub-subsystem of the media subsystem, which we call DVB. And also infrared. They are all part of media because, in the beginning, DVB was completely separate, but then you started to get tuners that did both analog tuning and digital tuning, so both subsystems had to use the same hardware. These days, it's all part of media. Infrared remote controls might seem an odd one, but most of the TV capture devices came with a remote control. I think about 90 to 95% of all supported remote controls come from these types of devices. That's why it's part of media. I'm not discussing DVB and infrared in this talk; I'm just saying they're part of media. There's not a huge amount that can be tested there, and there's not much done in that area either. The last addition, which I made about seven years ago, was support for HDMI CEC, Consumer Electronics Control, and I'll get back to that later in this talk. So I'll be concentrating, for roughly the first two-thirds of the talk, on Video4Linux.
A terrible name, by the way. If I could go back in time, I would tell them: don't do this, because you end up talking about the "Video for Linux... Linux subsystem". It's an awful name, but it's stuck, so I have to live with it. But it's actually an API that supports a huge range of features: video capture, video output, tuning for TV tuners. It has VBI capture and output. VBI stands for Vertical Blanking Interval. With analog video, it goes all the way back to your old cathode ray tube systems, where you have an electron beam showing the picture, and then the beam has to go back from one corner to the other. During that time, you have the opportunity to send metadata. It's a sort of sideband channel that happens during the blanking time of the video. It's used for closed captioning in the US; in Europe, it's primarily used for teletext. I think it all still exists and is still operational, so if you have an old TV, it will still be using this. We have support for that. The API also supports memory-to-memory devices, primarily codecs, H.264 and H.265, but a memory-to-memory device can also be a scaler or a colorspace converter. So you give it a frame, compressed or not, it gets processed, and you get something else back. Then radio support, maybe a bit old, but it's there. Why radio support? What does that have to do with video? It comes back, again, to the old TV cards. They all had a tuner, and if you already had a tuner for video, then it was easy to add a tuner for radio. So consider radio as TV without the video: you're just left with the audio part. And not only radio capture; it is even possible to support radio transmitters. We have a few of those; they tend to be little USB sticks that allow you to transmit on a particular frequency, and then if you have a radio, you can listen to it. Software defined radio is supported as well; I'm not going into detail about that. And RDS, the Radio Data System, which is sort of VBI for radio.
So it's metadata, traffic information, that is part of the FM radio signal. Analog radio is slowly dying out, particularly in Europe, I think. I haven't seen any patches or anybody talking about this in ages, so it's probably dying out slowly. And there is device topology, the media controller as we call it. That is for these very complex pipelines: it gives you a view of all the blocks inside the hardware, and it allows you to change links between them, so you can bypass certain processing blocks if you want to. There is also low-level sub-device control, which allows you to, for example, directly control a sensor chip. Again, that's part of the support for these complex hardware devices. And then, strangely enough, touch devices. We had some laptops with specific hardware, and they wanted to have some debugging support. Basically, it's a grayscale picture of the touch points: where you touch the panel shows up in a grayscale picture. It was used for debugging, to see what is going on. A somewhat obscure feature, but, you know, it's basically a picture, so it's part of Video4Linux as well. There are, I think, only two drivers that actually use this. What is also interesting about media hardware is that it's never, or almost never, one chip. It's a constellation of all sorts of different devices. You typically have a DMA engine, you have a sensor, you might have a video receiver, tuners, infrared, some muxers. The sky is the limit; hardware designers are infinitely inventive. But the key point here is that you have a whole lot of devices that all have to work together. So a typical driver, if you look at it, has what we call a bridge driver. That's the top-level driver, typically a platform, USB, or PCI driver, and that usually does the DMA. It's sitting on a bus where it's able to transfer the video data into memory, or the other way around for a memory-to-memory device.
This is where the DMA engine is typically implemented. And then there is a whole lot of additional devices on the board, or possibly inside an SoC, that are all discovered through a USB ID, a PCI ID, the device tree, or whatever. The bridge driver will load all of those in, until they're all there, and then it is finally able to register everything, and you're ready to actually use the device. So this is quite important: it is not just a single driver. You typically have lots of different drivers that all work together. And internally in the kernel, the APIs between these different hardware blocks are more or less standardized. That allows you to easily swap out a sensor, for example, for another model, without having to rewrite the whole bridge driver. With such a large feature set, we have lots and lots of ioctls. Video4Linux itself has 82 ioctls defined as of today. The sub-device API, for sensors and low-level things like that, has 25. The media controller has eight. So, well over a hundred ioctls. That sounds terrible, but luckily devices only have subsets of features. You have a number of core ioctls, around 20. Again, this too sounds worse than it is, because some of these ioctls actually translate into a framework, and a driver only has to implement, say, three functions, while everything else is handled inside the framework. So you might have eight ioctls, but you have to do a lot less in the driver. There are also ioctls that replace older ones, and then we always provide glue code that translates the old ioctl into the new one, so the driver only has to support the new one. So it's not quite as bad as it looks. Still, there are about 20 core ioctls, and then, depending on the feature set, you have a number more: say, six ioctls if you have video inputs, and another six if you have an output. If you don't have a tuner, you can save eight ioctls; if you don't care about analog TV, that's another eight.
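In practice you can see which subset a given device implements by querying its capabilities. A quick way, assuming the v4l-utils package is installed and a device node such as /dev/video0 exists (here it could be provided by the vivid test driver discussed later), is:

```shell
# Print the driver, card and capability flags of a V4L2 device;
# the capability bits determine which groups of ioctls apply.
v4l2-ctl -d /dev/video0 --info

# List the pixel formats the device supports for video capture.
v4l2-ctl -d /dev/video0 --list-formats
```

The device number varies from system to system, so check `v4l2-ctl --list-devices` first if /dev/video0 is not the device you expect.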
So in practice you basically have a subset of all these ioctls. It's still quite a big API, covering very different things, and it depends on what exactly your hardware does how much of it you have to implement. So how on earth do you test such a wide variety of ioctls and features? That is really what this talk is all about. Let me start with a caveat: this is what we do. It may not be the best way; perhaps you can think of better ways. This is just what we have today and how it works for us, and I think it works reasonably well. It's certainly not perfect, but we can get the job done. So the problem with hardware like this is, of course, as I said, the vast variety of hardware out there. And not only that: a lot of this hardware is very difficult to obtain. If you want to make an application that is able to work with webcams, there's no way you can get all possible webcams, because a lot of the older ones you can only get on eBay, or not at all. If we're talking about the complex video pipelines, those are often development boards that you may not be able to get as a private person; you might have to be a company, and they might be very expensive. So there's just no way. I'm one of the maintainers; I've been doing this for 23 years and have several drawers full of hardware, and even I don't have full coverage of all the features. And anyway, if you get an SoC, say a development board or a single-board computer, it takes a lot of time to set everything up, and it tends to keep breaking. So it does not scale. It's not an option to just buy everything in sight and build a big test farm, because that doesn't work. And especially if you just want to make an application that works with a webcam, you can't buy everything; it's not possible. That also means that if we want to test APIs, we don't have the hardware to cover it all.
Speaking as a subsystem maintainer: if I make changes in the media core frameworks, I would like to know that there are no regressions. If I'm not able to get all the hardware, how would I do that? And even if I had all the hardware, it would take ages to run all the tests on all the different devices. So again, it does not scale; this is not workable. Related to that: say I'm a driver developer and I'm making a new driver. How do I test my driver and know that I implemented all the ioctls that I have to, that they're all implemented correctly, and that I covered all the corner cases? I would really like to have something for that. And as an application developer, it comes back to the earlier points I made: I can't have all the different hardware out there, so I need a way to test my application and see if it can handle hardware that is different from what I have today. Perhaps instead of a simple webcam, I'm talking to an HDMI video receiver. How would I test that if I can't get the hardware? And finally, and this is particularly an issue with HDMI CEC, which we'll talk about later: it might not just be your own device that you are working with or making a driver for; you might also want to verify that a remote device implements everything correctly. So there are lots of things you want to do, and they're all blocked by the fact that it's not possible to get all the various types of hardware. So what we came up with is a number of things. First of all, and that was actually to protect our sanity as maintainers, we created a compliance test utility that driver developers can run against their driver to verify that it is compliant. In fact, if you submit a new driver today, or are making major driver changes, the compliance output has to be included in the cover letter of your patch series.
As a subsystem maintainer and code reviewer, it's marvelous, because if it passes, then I know that all the standard stuff is already tested by the utility. I don't have to review for corner cases where a field isn't filled in or isn't correct, because that will be caught by the compliance test. It's of course great for the driver developers, because they get a lot more confidence in their work, and for me it's great because it saves a lot of time in code review. The main thing I need to take care of as maintainer is to look for those things that I know the compliance test, for one reason or another, doesn't catch. It doesn't catch everything: the number of permutations is simply huge, so not everything can be found, and certain things you can't actually figure out from the application side. But this is a huge help for us. So that's one part. The second part deals with the hardware. Since we can't ask people to buy all these different hardware devices, the next best thing is to make it ourselves and emulate it. So we have a number of what we call virtual drivers, which is not really the right name: the driver is real, but the hardware that it emulates is virtual. But we've been calling them virtual drivers and the name stuck. They emulate hardware, and then you can do whatever you want; you can emulate the wildest things. And we worked hard to make these drivers support as many variations as possible. That is fantastic, because now, as an application developer, you can just load such a driver and test your application against it. For us, it means that we can test core framework changes against those drivers using the same compliance test: if it passed before, then after the change it still has to pass. It all comes back to giving a lot more confidence, and helping to avoid having to deal with large hardware farms or anything like that.
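A minimal session with a virtual driver might look like the following sketch, assuming a kernel with the vivid module available (CONFIG_VIDEO_VIVID) and v4l-utils installed; the device numbers vivid gets assigned will vary:

```shell
# Load the vivid virtual driver; it creates /dev/videoN nodes
# that behave like real capture/output hardware.
sudo modprobe vivid

# See which video nodes were created and by which driver.
v4l2-ctl --list-devices

# Exercise the emulated hardware, e.g. capture 60 frames using
# memory-mapped streaming I/O.
v4l2-ctl -d /dev/video0 --stream-mmap --stream-count=60
```

Any V4L2 application can then be pointed at the same node exactly as if it were real hardware.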
And finally, a nice advantage of emulating hardware is that you can do error injection. You can, for example, emulate what happens when the device is suddenly unplugged, or when there are errors during video capture. That is also very helpful in testing whether your application is robust enough. The compliance tool, v4l2-compliance, is the main workhorse that we have. It started 15 years ago; that's when I first began writing it, because I was sick and tired of having to do the code reviews. It's kind of like dry swimming: you don't really know whether you forgot anything. The initial version, I think, just verified three boring ioctls. It took six years to finally get the tests for video streaming in, which is one of the main key things you want to test, and another year to test all the various combinations of video formats, that is, the format the video ends up in, in memory, because there are many different ways you can encode video in memory. Also the crop and compose combinations, where you crop bits from the picture and only copy that into memory. It gets very complicated very quickly when you have these things. So it was also very difficult to write these tests, and they're certainly not perfect. But again, it's good enough: if a device passes this, then I have a lot more confidence in it. You may wonder: why does it take six years? Basically, writing test code is hard, and it's kind of boring. You'd much rather be working on bleeding-edge fancy new stuff than trying to test all these boring things. That's also why the first version just had a few ioctls. I thought, okay, let's just start with something, and then bit by bit extend it and make it bigger and bigger. It took quite a long time before it really took off. Currently, and it depends a bit on the driver that you're testing, there are about a thousand tests being performed. A lot of these are very simple.
For example, testing that a field is set up properly, things like that. What is important is that if we create a new API in Video4Linux, then it also has to be documented. Very important, and documenting your API is a very good way of figuring out whether your API is actually understandable and usable. If you have to write pages and pages of "in this situation do this, in that situation do that", then it's probably not a very good API. If you can write a nice description that makes sense, then it's probably a much better API. And the next best thing after writing documentation is to write tests. You have to be able to write a test for it as well. So you need a way of detecting that the API, the ioctl, is actually available, know what it can do, handle all the corner cases, and have all the corner cases documented: does it return errors, and what sort of errors does it return? The combination of writing documentation and writing a test is really helpful in gaining confidence in your new API. What is also important to note: the compliance test is actually more strict than the Video4Linux specification. It assumes that a driver is using all the correct core frameworks and is completely up to date with the latest best practices of the specification. Since Video4Linux is 21 years old, things have changed over time, so certain old practices are still allowed, because you may have an old kernel with an old driver that still uses that old method; but in a new driver, you don't want them. The compliance test always assumes that you're up to date. You're using the very latest kernel, not even the released kernel: you're using our staging kernel, with the new features coming in for the next kernel. So you really have to be at the latest and greatest kernel tree of the media subsystem. And you need to get the latest code from the v4l-utils git repository, because that's where this utility lives, and we keep the two in sync.
So when we make changes in the staging tree for the kernel, we also update v4l-utils so it understands the latest additions in that kernel. We always keep those two in sync, and you need to be at the same level if you want to run it. As I said, it's hard and time-consuming to write tests, so keep it as simple as possible. Most of the tests are basically a simple fail-on-test macro with a condition: if the condition is true, then it will just return an error. In the text output, it will just show up as you can see there: a fail, then the source file and the line number of the test, and the condition that it's testing for. It doesn't explain what is going on, because that's way too much work. You have to go into the code and look it up: what is it testing here? If you're lucky, there is a comment; most often there isn't, and then you have to dig a little bit deeper, or ask me. But you don't actually run the compliance test all that often, only when you work on a new driver or something like that. So you could spend a lot of time on very fancy failure messages that nobody really uses; it's much better to keep it simple. At least, that is my philosophy when it comes to testing these things: it's more important to have the tests, even if they are a bit obscure and you may have to ask someone what exactly is being tested. If it turns out to be really confusing for users who need it to check their new driver, then you can always add some more comments. But most of the time it's actually fairly easy to understand what is happening. And so I have a couple of questions on this topic. The first one is: is there a way to test a media device, or do individual video and sub-device nodes need to be passed? Can I see the question in the chat window? Yes, you can see it in the Q&A box. Q&A. Yes, you can test the media device.
So you have a choice, depending on the options that you give. You can just give lowercase -m and the media device, and it will go through all the devices found in the topology of that media device. But you can also check individual video or sub-devices. Both are possible. It's really handy, and we use it; I will come back to that later in the demo. We actually use it in a regression test that we run every day, where we just pass the media device and it tests all the devices inside. Is there a way to associate a kernel version with a compliance test version? For example, if we are developing on 6.1, which version of the compliance test do we use? No. This question occasionally pops up. Most of the time the compliance test will work fine, even though it's built for a later or staging kernel. It will typically work fine on, for example, a 6.1 kernel, but there may be failures because some things have changed. It's usually okay, but not always. The main purpose of the compliance test is two-fold. First, it's for ourselves, as subsystem maintainers, to make sure we don't introduce regressions; for that, we have to be at the very latest bleeding-edge staging tree. Second, it's for people writing new drivers, and there you actually want to be on the latest kernel, because you want to use the latest features. Version detection is something we never actually worked on. If we tried to detect "oh, this is a 6.1 kernel, so these and these tests I shouldn't run", then the code becomes terrible spaghetti code and very hard to maintain. The way it's been developed is really to keep the maintenance load as low as possible, because it's annoying to write tests: a lot of work, and you want to keep it as simple as possible. That also means that, if you look at these fail-on tests, a lot of them are done by topic. So you test all the input and output ioctls.
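The invocation choices mentioned above look roughly like this (option spellings as I understand current v4l-utils; device numbers will differ per system):

```shell
# Test every interface found in the topology of a media device.
v4l2-compliance -m /dev/media0

# Or test a single video device node ...
v4l2-compliance -d /dev/video0

# ... or a single V4L2 sub-device node.
v4l2-compliance -u /dev/v4l-subdev0
```

Run `v4l2-compliance --help` for the full and authoritative option list of the version you actually have installed.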
But if there is a failure early on, then it will just abort that test, and at the top level it will say: the input/output tests failed. It doesn't try to be smart about it and continue; it just fails that topic. Quite often there are knock-on effects in later tests, because certain information is stored, for example the number of inputs: if there was an early failure, the number of inputs would be zero, or at least different from the actual number, and later tests might fail on that. So if you are running this, you always start with the first failures and fix them first, then rerun and see how much that fixed later on. Again, it comes back to keeping the threshold for writing new tests as low as possible, since that is the most time-consuming part. I'm lazy. Okay, I admit I'm lazy. It would be nice to spend a lot of time making really fancy tests, but that just doesn't scale. If I may add, that's exactly what we do with the kernel tests as well, kselftest and KUnit and all of those, because you do not want to tie release information into the tests: we want to be able to take the tests, run them on any kernel version, and get results. If the test is newer and you're testing on an older kernel, the one thing to ensure is that the tests will always gracefully exit and say: I can't test this feature. That is what we do in the kernel. So that's exactly what v4l2-compliance does also, just to add that. Thank you. Right, so that was the compliance test. Now for the second pillar of testing in the media subsystem: the test drivers, the virtual drivers that emulate hardware. The main one, which most people know, is the vivid driver. It actually came from a much older driver, vivi. If you go to a really old kernel, 2.4 probably, it will still have that driver.
I think it was originally contributed by some German magazine that wanted to have a test driver for video that sort of emulated a webcam. But it was quite limited, and at some point, I can't really remember when exactly, I was sick and tired of it, ripped it out, and replaced it with vivid, which is far more capable than the old vivi. It does video capture and output, vertical blanking, radio, software defined radio, metadata, touch capture, even HDMI CEC emulation. And it's quite close to what real hardware of that type would do. Well, there's no real hardware of this type, because this is an insane piece of hardware that combines almost everything you can throw at it. But it is excellent. I'm very pleased with it, and I will give a very quick demonstration of it later. It's really neat, though it's not perfect: if you're interested, there are a number of volunteer projects for this driver, and for some of the others as well, where we would really like someone to improve it even further and make it even closer to what real hardware would do. But it's a pretty neat driver, and at least in Debian it's enabled in the distributed kernel, so you can easily load it and run it. The others we have are vim2m, a memory-to-memory video scaler; vicodec, a memory-to-memory video codec test driver, which is both a decoder and an encoder, and even what we call a stateless decoder. I'm not going into detail, but it's great for testing codec APIs. vimc is more of a complicated-video-pipeline type of driver. And, very new, the visl test driver for testing stateless codec APIs. So we really have a fairly good set of test drivers for emulating a variety of hardware, and this allows us to run regression tests using these drivers that cover quite a large part of the Video4Linux API. It's really the only way you can do that. There is one question in the Q&A about when is the right time to run the compliance test.
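All of these live in the mainline kernel, so on a kernel with the test drivers enabled as modules you can load them like any other module (module names as in the current drivers/media/test-drivers tree):

```shell
# Virtual hardware emulators from drivers/media/test-drivers/
sudo modprobe vivid      # capture/output/radio/SDR/touch/CEC all-in-one
sudo modprobe vim2m      # memory-to-memory scaler/format converter
sudo modprobe vicodec    # memory-to-memory codec (FWHT-based)
sudo modprobe vimc       # configurable media-controller capture pipeline
sudo modprobe visl       # stateless decoder test driver
```

Each module registers its own /dev/video*, /dev/media* and related nodes, which you can then enumerate with `v4l2-ctl --list-devices`.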
Well, it doesn't hurt to run it early. Most people, I think, run it at the end, when they are satisfied with their code, but it doesn't hurt to run it earlier. However, if you are still implementing ioctls that are needed, then it will keep failing. What is true is that the sequence of tests roughly starts with the core ioctls and then builds up; streaming is, I think, pretty much at the end. So you might be able to verify "do I have all the core ioctls correct?" relatively early on, and then "do I have all the input ioctls correct?", building it up a little bit. But it really depends on your driver and how you've been developing it. Again, it doesn't hurt: if you get lots of failures and you know it's because you haven't implemented something yet, then you just postpone it. It definitely has to be done before you submit to the mailing list, because we want to see it. But how often you run it during your development is up to you. If you run it, say, once a day, or whenever you have a new feature, it gives you a bit of confidence if it passes. For us as maintainers, the only requirement is that it's part of the cover letter. Obviously, if you post it with lots of failures, then you're not going very far with your submission. And if you have questions about failures, just ask me; I'm very happy to answer, that's not a problem at all. Okay. Demo. So I wanted to first show off... oh, that's my webcam here, that's not what we need. This is actually the vivid driver. I hope it's readable. This is qv4l2; it's sort of a GUI Swiss Army knife for drivers. If I run it, this is what you get. What is quite nice is that we made a test pattern generator in the kernel that these test drivers, these virtual drivers, can use. They all use the same code, and it's fairly extensive, so we have a whole bunch of color test patterns.
You can do all sorts of interesting things with it, depending on what you want to test. You can even move it around. Let's stop that, because it's very annoying. It has an OSD where it shows the time, frame numbers, and some more information about what is happening. You can see here that vivid's controls offer all sorts of interesting test things: you can have a square in the middle, or some special patterns that cause problems. You can trigger a disconnect. You can inject an error condition for a buffer; there you see the error appearing. So there are all sorts of things you can do here to emulate real issues. And, okay, I admit I had fun with this: this is actually the TV input, so it's emulating the tuner as well. Here you see the frequency, and if you change the frequency, hey, suddenly you lose the color information. Just like a real TV: when you get a bit too far from the optimal frequency, the first thing to go is the color, and if you go a bit further, you get a static image. So sue me, this was fun to do. And it was very easy, actually. The other thing: it has a webcam input, and the webcam actually goes up to 4K, so you can test that. HDMI: again, a huge number of resolutions that you can use here. So that's what it looks like, a GUI Swiss Army knife for these types of devices, nice to play with. But we are here for the compliance test. So this is a run of the compliance test for vivid, for the default video node. This is what you get: 113 tests, all good. Well, that's what I really hoped would happen. You can also test streaming. So it starts streaming, and it tests various combinations of streaming: using no polling at all, using select to wait for an event when a buffer is available, and using epoll as well. We had some subtle issues with epoll in the past, so this is a good test to have. There are also some tests for blocking waits. All sorts of combinations there.
And, let me make sure I'm using the right one... yeah. So this is using the media device. The media device contains the whole topology of all the devices it has; this test just checks the media controller, so that's the ioctls for the topology, et cetera. One thing to note at the start here: this is the version of the compliance test, and this says that it's a 64-bit architecture and that time_t is 64-bit. On 32-bit architectures, time_t can be 32-bit or 64-bit, and when we added support for 64-bit time_t on 32-bit architectures, we added support in the compliance test to verify that everything is correct. So here you can see how it was compiled. And this is the SHA of the last commit from which it was built. If you post a cover letter containing the output of v4l2-compliance, I always check the SHA to make sure that you're using the latest version. All too often, people just use a version obtained from a distribution or whatever, and those are always too old. So I always use that to check that you actually have the right one. Now, if I use lowercase -m, then it will start going through all the various devices. I'm not letting this run, because it takes too long, but I have the output of the tests. As I said before, we use this test... no, wait, let me show something else first. As part of v4l-utils, we also have a contrib/test directory, and there's a test-media script. That is what we use in our daily builds. It actually runs through all the virtual drivers, runs v4l2-compliance on them, and does all sorts of other tests, for example unloading a driver unexpectedly. That takes about 17 minutes. And there you can see the real power of all these tests, because it starts out with the vivid driver and then tests all these various devices: video0, video1, they all do different things, so they all get different tests.
And if we go all the way to the bottom, yeah, here we go, we have a summary. These are all the vivid devices, and you can see a summary of all the tests that are being done. For a complete test of vivid, you get almost a thousand tests. It's done twice because there are two vivid instances; one is single-planar, the other... no, they're configured in different ways, so we want to do both. There are CEC tests being done as well; I'll get more into that later. And then the vim2m driver, vimc, vicodec. At the end, you will see that there are a bit over 3000 tests being done in order to verify that there are no regressions in the core frameworks, and of course no regressions in these virtual drivers either. The whole run takes about 17 minutes, and it's very useful because we've caught a lot of issues with it: things you're developing where you don't realize you actually broke something, and it's typically caught by this test. It relies heavily on v4l2-compliance to do all the actual work. Any questions? Before I continue, are there any questions about v4l2-compliance? Okay, it looks like a question just showed up in the Q&A: how do streaming tests work for media devices? Will all video nodes be streamed? No, that's a limitation of v4l2-compliance. It will stream all the video nodes, but one by one; it won't try to stream on multiple video nodes in parallel. The number of permutations and the complexity of the tests is basically what makes that very difficult to do. If you're volunteering, feel free, but it's really hard, I think, to do that right. One case where it does happen is for memory-to-memory devices, of course, because you always have to give it something in order to get something back, so there it's definitely done at the same time. But a common other case would be analog video capture and vertical blanking capture at the same time, and we don't do that. It is something I would ask about.
Again, that's standard definition, so it doesn't happen very often anymore, but if you made a driver that does that, then you would always be asked: have you tested that? It just gets very difficult; the number of combinations becomes insane when you try to do that. Next question: monitoring the latest API changes and updating the compliance tests correspondingly sounds like a very hard task — how do you do this? Well, first of all, I'm one of the media maintainers, so I know about API changes; we will have discussed them, and quite possibly I made them myself. So the monitoring part is easy. As for updating the compliance tests: whoever is adding the API is responsible for implementing the compliance tests. So if I proposed it and it was accepted, then I have to write the tests; if someone else did it, then I require that they make a patch for it. It's part of the work of adding an API, which is why adding an API is one of the hardest things to do. It's hard because it's difficult to design it in a way that isn't outdated next year. You're trying to be future-proof; at least you want to make something that stands the test of time and still works five years from now. I've lost track of my chain of thought now... oh, yeah. The other part that is difficult is making sure you've covered all the corner cases and all the error conditions. Does it make sense? Can you document it? And again, if it's difficult to document or difficult to write tests for, then that's an indication that your API probably needs some improvement. So it's just part of the work of adding an API; that's the way it is. Next question: how does the compliance test know how to configure a media pipeline? Only for the virtual drivers. Specifically, of the virtual drivers we have today, only vimc needs this, and there we know the topology. It's actually not even the compliance test; it's just the test-media script that sets it up.
If you have to deal with such devices, you will most likely have to use libcamera. I'm not very experienced with it, I'm not an expert on it, but I assume they have a lot of tests in libcamera as well. It's just a limitation of the compliance test: yes, if you want to use it, you would have to configure the media pipeline first before you can run it. So, next part. This won't take as much time as the previous one: HDMI CEC, consumer electronics control. First of all, what is it? I'm the proud maintainer of what is probably the slowest bus in the kernel, which goes at a blistering speed of 400 bits per second, with 10 bits per byte of payload, so that's 40 bytes per second. I'm very proud of having made that API. It is a pin on the HDMI connector, which makes it even more bizarre: you have these 600 MHz pixel clocks sending pixels at a blistering rate, and then there is this single pin going at 400 bits per second. It's pretty insane. But it all originated in the old days of video recorders, where what they wanted was: you put in your videotape, it would start playing, and the TV would automatically turn on. That's when this was designed; at least the physical layer comes from those times. They had a microcontroller, almost certainly, that was polling the bus, and those weren't very fast, so you were limited to this speed. And when they designed HDMI, for some reason I still don't know, they decided to just take that protocol and incorporate it into HDMI. It's only the low-level physical layer that they copied, or the lower levels of the protocol; the high-level protocol messages are quite different. But the idea is exactly the same: you have a Blu-ray player, you put in a disc, and the TV and AV receiver all turn on automatically and can all communicate with one another. When I made this little subsystem, the ioctls were not a problem. There are about 11 ioctls.
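Going back to the bus speed for a moment, the arithmetic above can be checked directly. The 10-bits-per-byte figure, counting 8 data bits plus the EOM and ACK bits per block, is my reading of the framing:

```python
BIT_RATE = 400       # bits per second on the CEC line
BITS_PER_BYTE = 10   # 8 data bits plus EOM and ACK bits per block

bytes_per_second = BIT_RATE / BITS_PER_BYTE
print(bytes_per_second)  # 40.0

# A short message (header block + opcode block) is therefore on the order
# of 50 ms on the wire, ignoring the start bit and signal-free time.
msg_bytes = 2
tx_time_ms = msg_bytes * BITS_PER_BYTE / BIT_RATE * 1000
print(tx_time_ms)  # 50.0
```

Fifty milliseconds for two bytes puts in perspective just how slow this bus is next to the pixel clock on the same connector.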
Most of those ioctls are very simple. I wrote a compliance test, cec-compliance, and the amount of work needed to test the ioctls is quite limited. That was never the problem with CEC. If you look at the specification, CEC is very clearly a committee project, a committee standard, with lots of legacy stuff, and fairly poorly defined. They improved it a bit in HDMI 2.0, but it's still not the greatest, most precise specification around. To make it worse, especially in the beginning, a lot of vendors added custom messages. So it would work if everything came from brand X: you used a display from brand X, a Blu-ray player from brand X, and an AV receiver from brand X, and it would all work. If you replaced the AV receiver with one from brand Y, it often wouldn't work anymore, because it was relying on some custom messages. Things have improved a lot; these days most devices do a fairly decent job of implementing at least the main messages in a standard way, but it remains one of those protocols that doesn't always work, especially if a device doesn't adhere to the proper implementation. So one of the things I wanted to do with cec-compliance — and I have to add here that at work we rely a lot on this protocol, so it was not just important for me but also important for my employer — was not just to test my own implementation, but also a remote implementation. I could connect it to a display, run a whole bunch of tests, and see if the display implemented everything according to the specification. The typical answer is: usually not. There are almost always some oddities or things that are not quite the way they should be. There are also sometimes discussions about what the specification actually says, et cetera. It's not the greatest specification; there are ambiguities in there.
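To make the message discussion concrete, here is the basic CEC framing as I understand the specification; the addresses and opcode used below are just illustrative:

```python
def cec_header(initiator, destination):
    """Build the CEC header block: initiator logical address in the high
    nibble, destination in the low nibble (addresses are 0-15)."""
    assert 0 <= initiator <= 15 and 0 <= destination <= 15
    return (initiator << 4) | destination

# Logical address 0 is the TV and 4 is a playback device, so a <Standby>
# (opcode 0x36) sent from the playback device to the TV starts with
# header byte 0x40.
msg = [cec_header(4, 0), 0x36]
print(hex(msg[0]))  # 0x40
```

Vendor-specific messages live in the same framing, which is why a brand-X device can send opcodes that a brand-Y device simply ignores.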
But cec-compliance is interesting and different from what is done in v4l2-compliance in that probably 90% of the code is related to testing a remote device rather than testing my own device. Again, hardware can be a bit tricky to get, so I extended the vivid driver to emulate CEC. Vivid already emulated HDMI, so it made sense to add CEC to that as well; it's useful for regression tests and application testing. And lastly, there is actually a very nice CEC GPIO driver. Remember, it's just a single line in the HDMI connector. So if you have the right device, you can hook up that CEC line to a GPIO pin on, for example, a Raspberry Pi, and then you can use the CEC GPIO driver to directly drive that pin and read it out, and use that to implement CEC. You would not normally do this. Normal CEC hardware comes in basically two forms. Either it's an IP block inside an IC that deals with all the timings: when you want to transmit something, you give it the whole message and it just sends it out; when it receives something, you get an interrupt and read out the whole message, and everything else is handled inside the IP block. Or it's a microcontroller that does basically the same thing: it actually polls the pin, but from the outside you typically have some sort of mailbox interface to, again, pass whole messages in and out without dealing with the low-level details. There are some devices — I know one Allwinner chip that was very cheap — that actually just provide a register wired straight to the GPIO pin as well. But the primary, extremely useful reason for using this driver is that you can get the whole low-level trace of what is happening on the bus, and you can do low-level error injection. For example, arbitration lost — when you're sending a message and someone else is sending a message at the same time — is a very difficult condition to test.
One of those two needs to win, and a lot of hardware doesn't do that right. You can test that using the GPIO driver. The only reason you can do this at all is that the bus is so slow: normally you can't bit-bang fast and reliably enough, but because it's such a slow protocol, you can actually get away with it. So we use this a lot for testing various devices as well; very useful. Okay, now just a short demo, because it's not terribly interesting, I think. Here you can see that if you just run cec-ctl, you see the vivid driver; this is the capture device, and it's already configured as a TV. It has a logical address, which is a value from 0 to 15, basically a nickname for the device. It's a horrible protocol; you don't really want to know too much about it. And this is the other vivid device. You know, CEC only makes sense if you have two devices talking to one another, so vivid has an input and an output device, and internally in the driver they talk to one another. So I can get a topology, and the TV device detects a playback device; it's all very nicely emulated. There are two main ways of running the compliance test. The first is -A, or in full, I think, --test-adapter. This is basically similar to what v4l2-compliance does: it tests all the ioctls, tests what happens when you give an invalid ioctl, and all the basics are tested here. I'm not continuing that, it takes too long, but this is the least interesting bit. More interesting is what happens when you point it at a remote device. Now the TV device is trying to test the remote device, and you see there are some failures here because I didn't start some helper functionality, but it is going through all the various CEC features and testing whether the remote side supports each feature, how well it's implemented, and whether everything is correct. So this allows me to test whether a display, for example, implements this correctly.
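Coming back to arbitration lost: the open-drain behavior that makes it testable can be sketched like this. It is heavily simplified — real CEC arbitrates during the start bit and header with precise timing — and the function is purely hypothetical:

```python
def arbitrate(bits_a, bits_b):
    """Two senders drive the bus at once; return who wins ('a', 'b', or
    'tie') together with the bits that actually appeared on the wire."""
    for a, b in zip(bits_a, bits_b):
        wire = a & b              # open drain: a 0 from either side pulls the bus low
        if a != wire:
            return "b", bits_b    # A read back a 0 it did not drive: A lost
        if b != wire:
            return "a", bits_a    # likewise, B lost
    return "tie", bits_a

# Initiator addresses go out MSB first, so the lower logical address
# (the earlier 0 bit) wins: address 1 (0001) beats address 4 (0100).
winner, bits = arbitrate([0, 0, 0, 1], [0, 1, 0, 0])
print(winner)  # a
```

The key point is that a transmitter must keep reading the bus while it drives it; hardware that skips that read-back is exactly the hardware that gets arbitration wrong.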
A lot of the cec-compliance code is very similar to v4l2-compliance, which makes sense since they work the same way; I used the same template, and it worked very well. And just to show the tests: here you see again all the fail_on_test checks, and that just keeps going. You basically send messages, check that what you get back actually makes sense, and if not, you fail on it. So exactly the same method is used here as in v4l2-compliance, and it keeps the code simple. There are lots of tests here. Writing tests is hard and annoying, so you want to make it as easy as possible for yourself. I mean, I went into programming to let the computer do the work, ideally not me. That never works out, by the way, because you always have more to do. Some resources: the main Linux media infrastructure API documentation, the main kernel tree, the v4l-utils repository, and a bunch of mailing lists. That's it for me. Any questions? There is one that just showed up in the Q&A; let me drink something first. Is CEC implemented as V4L2? Well, first you need to talk about the direction. If you have an HDMI output, it is typically a DRM/KMS driver. It doesn't have to be; you can make Video4Linux HDMI output drivers, and we actually have one. That is ideal if you're just sending video, because then you just give it a frame and it outputs it over HDMI, and you don't have to deal with tearing or any of the other complications that DRM/KMS has. You don't have a GPU or anything; it's very simple: here's the data, here's the frame, send it out. But that's very rare. Almost all devices just have DRM/KMS with some sort of GPU or framebuffer, and that is what you use for HDMI output. For capture, it's Video4Linux, nothing else: if you want to capture HDMI, you go through Video4Linux, that's the API for that. CEC is something that is valid for HDMI outputs, HDMI capture, and also dongles.
So there are — I should have one here on my desk, actually, if I can find it in between all the cables. I don't have a clean desk; you can't see it here, but I can guarantee that it's not clean at all up here. I don't know if you can see it. This is a little dongle. It has a mini USB connector and HDMI input and output, and it sits in between, basically giving you access to the CEC pin over USB. So this is something you can use for devices that do not support CEC: you can add it using something like this. So CEC is shared among different subsystems, but the framework for CEC is part of drivers/media, and as I said, DRM/KMS drivers will actually use that framework as well. The main reason it's in drivers/media is that I'm a media maintainer and I have access to that; that's why it ended up there. Any other questions, perhaps? And if you want to talk to me, I will be at EOSS in Prague, the Embedded Open Source Summit, the next Linux Foundation conference. I'll be there, so if you're interested in talking and you're there, then we can do that. That's the end of June, the 26th through the 30th, I think. Looks like this is it; I hope it was useful for you. It is, it's really useful. I am mentoring 25 people this time for Linux bug fixing, and some of them are really interested in knowing how different drivers are tested, in hardware and with compliance tests. So this is very useful to them for sure, because they don't always know where to find the hardware and so on. And having you do this presentation about the virtual drivers and how we can test the API from the application side as well as the driver side is very, very helpful. Great, awesome. Well, thank you, Hans and Shuah, for your time today, and thank you everyone for joining us.
As a reminder, this recording will be on the Linux Foundation YouTube page later today, and a copy of the presentation slides will be added to the Linux Foundation website. We hope you are able to join us for future mentorship sessions. Have a wonderful day. Bye-bye.