Okay, good morning everybody. I will start off with something I don't normally do, which is explain a little bit about my day job, because that is relevant for this presentation. I work at Cisco Systems Norway, where we make video conferencing equipment, anything from desktop systems to board rooms. What makes those interesting is that people want to connect a laptop to show a presentation. So we see Windows, Mac and Linux laptops; AMD, Nvidia and Intel GPUs; HDMI and DisplayPort: a wide variety of video sources. You can also connect Google Chromecast or Apple TV, although the vast majority are presentation sources. For those laptops, we are a display; we look like a display. So we have to prepare an EDID that works for all of these permutations and gives a good result. On the other hand, we talk to TVs, PC monitors, HDMI switches and AV receivers, so we need to parse the EDID inside those displays and do the right thing. EDIDs are really important for us, and we see a vast variety of both good and bad ones. Now, if everything were perfect, I wouldn't be here and I would be out of a job. Luckily, EDIDs have all sorts of problems, pitfalls and complications. In other words: job security.

So first, a quick introduction; that gives a bit of background on why I am interested in this and why I am giving this presentation. Hopefully everybody can take away something useful from it. First, the abbreviation: EDID stands for Extended Display Identification Data. It is typically an EEPROM inside the display, it doesn't have to be an EEPROM but it quite often is, and it stores the display's capabilities. The source reads it out and determines what it can send; there is no point in sending 4Kp60 to a VGA monitor. It's pretty old: the first version is from 1994, probably based on earlier, unstandardized implementations. So say 30 years. It was originally for VGA displays, later extended to DVI, then HDMI and DisplayPort. We had a meeting earlier this week in a meeting room that actually had a VGA projector, connected through an HDMI to VGA adapter, and later someone needed a USB-C to HDMI adapter to talk to that chain. So you had DisplayPort, HDMI and VGA in one path. When you start dealing with those adapters, even with all this identification data available, you can still get really crazy things.

How to read an EDID is specified in the VESA E-DDC standard, Enhanced Display Data Channel. It is basically an I2C communication channel. Even over DisplayPort, where it is encoded over the AUX channel, it still ends up going over I2C inside the display. So you often see I2C and DDC used interchangeably. These EEPROMs are specialized; it's not a standard EEPROM. Although I think you could use a standard one if the EDID is small enough, for bigger EDIDs, more than 256 bytes, you need a specialized EEPROM because you need a special extra I2C address. I'm not going into the details; I'm just saying these are typically custom EEPROMs. And as I said, it doesn't have to be an EEPROM at all: quite often it is just a bit of memory in an HDMI receiver chip that the display writes into at boot, and which is then served over I2C. The current version that you will see is EDID 1.3. It's mandated by HDMI interfaces, and I do not know why; I've never figured out why that is fixed at 1.3. If anybody knows, talk to me after the talk, because I would like to know. The rest of the world typically uses either 1.3 or 1.4.
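To make that E-DDC readout concrete, here is a minimal sketch of reading and validating the base block from userspace on Linux through the i2c-dev interface. The bus number is a placeholder, and the segment pointer at address 0x30, which is what you need for EDIDs larger than 256 bytes, is deliberately skipped:

    /* Minimal EDID base block read over I2C (Linux i2c-dev).
     * Sketch only: the bus number is a placeholder and segment
     * pointer handling (address 0x30, EDIDs > 256 bytes) is
     * omitted. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/i2c-dev.h>

    int main(void)
    {
        static const uint8_t header[8] =
            { 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00 };
        uint8_t blk[128], offset = 0, sum = 0;
        int fd = open("/dev/i2c-2", O_RDWR);    /* placeholder bus */

        if (fd < 0 || ioctl(fd, I2C_SLAVE, 0x50) < 0)   /* DDC address */
            return 1;
        if (write(fd, &offset, 1) != 1 || read(fd, blk, 128) != 128)
            return 1;
        /* Every 128-byte EDID block must sum to 0 modulo 256. */
        for (int i = 0; i < 128; i++)
            sum += blk[i];
        printf("header %s, checksum %s, %u extension block(s)\n",
               memcmp(blk, header, 8) ? "bad" : "ok",
               sum ? "bad" : "ok", blk[126]);
        close(fd);
        return 0;
    }

Byte 126 of the base block, the extension block count, is something we will come back to.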
DisplayPort interfaces are typically based on 1.4. The differences between the two are really minor: some flags and bits have changed meaning, and a few requirements from 1.3 were dropped in 1.4. It's all really small stuff.

Most EDIDs consist of 1 to 4 blocks, and each block is 128 bytes. The first block is the EDID base block, defined by VESA in the EDID standard, and there may be up to 255 extension blocks. Each extension block has an identifier that maps to the standard describing it. You typically only see CTA-861 extension blocks, mandated by HDMI, and DisplayID blocks, defined by VESA, which is what you typically see for DisplayPort. In practice you see different variations; I will go into that a little later.

First, signaling: how does reading the EDID actually work? You connect a laptop to a display. I'm sticking with HDMI, mostly because I know it best and because it is what you typically see on video conferencing equipment, but most of this is also true for DisplayPort. So you connect your laptop to the display. The laptop drives the +5V pin, and the display sees that something got connected. It responds by driving the hot plug detect pin to a high voltage. The laptop detects that it is high, which means it can safely read the EDID: there is an EDID, it is not being modified at the moment, the contents are stable. Then the laptop reads the EDID over the DDC, a.k.a. I2C, lines, parses it, figures out a safe video resolution to use, and then it can send video to the display.

Inside the display I've shown it here as an EEPROM; that is where the EDID comes from. In most cases the display powers the hot plug detect and the EEPROM itself, using its own power. There are some cases where the 5 volts from the laptop actually powers the hot plug detect and the EEPROM. That allows a laptop to read the EDID even when the display is completely off. You don't see that all that often anymore. For example, we have been using an EDID emulator, just a little gadget, not a real display, it only emulates an EDID, and that works that way. I would say about 98% of all the displays we see power it themselves, and there's a small percentage that uses this loop-through idea.

Hot plug detect has its own issues. First of all, the spec describes what it considers low and what it considers high: 0 to 0.8 volts is low, you shouldn't read an EDID; 2 to 5.3 volts is high, you can safely read an EDID. And of course the obvious question is: what do you do with the remaining voltages? The spec doesn't say. One problem with meeting rooms is that you often have long HDMI cables, which means the voltage can drop a lot. So if you want to support those long cables, you usually put your threshold exactly in that undefined area, because it's always better to get a picture where possible than nothing. An interesting thing about long cables is that manufacturers make a lot of effort to make sure the video gets through, but they don't pay enough attention to these voltage lines. So you can have a long cable where the video is fine, but if you actually measure the voltages, your laptop might not even see the display because the voltage drops too much. Even though the cable could carry the video, you don't see that there is anything connected at all.
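Given those numbers, the threshold itself is trivial in code; the whole trick is where you put the cutoff. A sketch, with the spec limits in the comments; the 1.4 V midpoint is purely illustrative, every product tunes its own value:

    /* Hot plug detect classification. The spec only defines
     * 0-0.8 V as low and 2.0-5.3 V as high; everything in
     * between is undefined. A pragmatic source puts its
     * threshold inside that undefined band so that long,
     * lossy cables still register as "connected". */
    enum hpd_state { HPD_LOW, HPD_HIGH };

    enum hpd_state hpd_classify(double volts)
    {
        return volts >= 1.4 ? HPD_HIGH : HPD_LOW;   /* illustrative cutoff */
    }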
Or the laptop does see it, but it can't read the EDID, because the DDC lines have the same voltage issues. So it's an interesting thing: if you buy long cables this can be an issue, and it also depends on the laptop, on where precisely the video source puts its own threshold. Long cables: not all that easy.

Another thing: if the hot plug detect pin goes low for more than 100 milliseconds, the source is supposed to re-read the EDID, because it can have changed. It surprises me how often this does not happen. Either the source only reads the EDID at the first connection and doesn't expect it to change afterwards, or it has a debouncing algorithm that is set too large. This is definitely something you need to take into account when you're writing, say, a DRM driver. Displays often toggle hot plug detect when they switch inputs or go from on to standby, or vice versa. When you connect or disconnect a cable you can get pin bounce, so you see hot plug detect toggling very quickly in a short time. These are all things you need to handle when you are writing software, because they can all happen.

So that was the electrical part. Let's now get to the meat of the story: assuming that electrically you were actually able to read the EDID, you now start parsing it. I'm using an example EDID based on a 5K monitor, but it's a Frankenstein EDID: I removed some stuff that is not interesting and added some things that are, so you won't find it in any actual database. But 5K monitors are a very nice example of a lot of the issues you will encounter.

We start with the base block. All of this, by the way, is output of the edid-decode utility, which I will come back to at the end of the presentation; it parses an EDID and shows it in human-readable form. It starts off very simply: EDID 1.3, vendor and product identification. I removed the guilty parties here. Actually, they're not all that guilty; they were constrained by some of the issues related to what they wanted to do. Then, basic parameters and features. First thing, the maximum image size. That is supposed to be the size of your physical panel, in centimeters. First of all there is a limitation of 255 centimeters, and yes, you can get larger displays, and no, they can't store the exact size in here. The other issue is that vendors quite often have a whole series: basically the same hardware, just panels in different sizes, and they don't bother to update the EDID between the sizes. So you can't fully rely on this; it is an indication only.

Next up, color characteristics, and here we go straight into the deep end. The color characteristics are the colorimetry of the panel: how does it represent red, green and blue, which color space exactly is it using? PCs on the desktop normally default to sRGB, which is the standard color space. The whole color space topic is a full presentation by itself, so you just have to believe me here. As long as the color characteristics stated here are the same as sRGB, there is no problem: everybody is in agreement. The problem occurs when they differ, because when you are transmitting RGB over HDMI, the lovely CTA-861 standard, which is pretty big, has a big table, and in there is a small footnote telling you that when you send RGB over HDMI you are supposed to use the colorimetry from this base block. However, only MacBooks do this. Everybody else just sends sRGB.
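To make that comparison concrete: the color characteristics are stored as 10-bit fixed-point chromaticity coordinates spread over bytes 0x19 to 0x22 of the base block. A decoding sketch that compares them against the sRGB primaries; the 0.005 tolerance is my own arbitrary choice:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* One 10-bit chromaticity coordinate: 8 high bits from 'hi',
     * 2 low bits taken from 'lo' at bit position 'shift'. */
    static double chroma(uint8_t hi, uint8_t lo, int shift)
    {
        return ((hi << 2) | ((lo >> shift) & 3)) / 1024.0;
    }

    void check_colorimetry(const uint8_t *edid /* 128-byte base block */)
    {
        /* sRGB primaries and D65 white point, as x/y pairs. */
        static const double srgb[8] = { 0.640, 0.330, 0.300, 0.600,
                                        0.150, 0.060, 0.3127, 0.3290 };
        static const char *const name[8] =
            { "Rx", "Ry", "Gx", "Gy", "Bx", "By", "Wx", "Wy" };

        /* Byte 0x19 holds the two low bits of each red/green
         * coordinate, byte 0x1A those of blue/white; bytes
         * 0x1B-0x22 hold the high 8 bits. */
        for (int i = 0; i < 8; i++) {
            double v = chroma(edid[0x1b + i], edid[0x19 + i / 4],
                              6 - 2 * (i % 4));
            if (fabs(v - srgb[i]) > 0.005)   /* arbitrary tolerance */
                printf("%s = %.4f differs from sRGB (%.4f)\n",
                       name[i], v, srgb[i]);
        }
    }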
So if you try to calibrate a display against a Mac compared to a Windows or Linux machine, you will get different results. And the flip side of that is: what does the display do? Does it expect the source to use these specific color characteristics, or does it expect sRGB? I have absolutely no idea. It's only this year that an amendment to CTA-861 was released with which you can explicitly signal "I'm sending sRGB" or "I'm sending RGB using the display's color characteristics". Since it's so recent, nobody is using it yet, but I'm hoping it will pick up, because it's really annoying that this is undefined. One thing I forgot to mention at the start: on behalf of Cisco I have been part of the CTA-861 workgroup for the past five years, trying to improve the standard, and this is actually one of the things I managed to get in. It was also one of the changes that went through quickest; everybody just agreed: "yeah, this is undefined, this is a mess, let's fix it".

Next up in the EDID: the established timings. This is a bit of archaeology. Remember, 1994: this is one of the first things that was defined, so you get lovely resolutions like 640x400. Yes, you can use them, but this is old stuff. Slightly later in time, the standard timings: now we go up to some more decent resolutions; it's beginning to look like something. And then you get the detailed timing descriptors, where it becomes really interesting: these are the normal timings that people actually use. Usually the first detailed timing descriptor should match your preferred resolution, and your native resolution as well. Notice, however: this is a 5K display, and that timing is not 5K. Why is that? Because when this was designed, nobody could imagine a 5K display, so the maximum width and height is 4095. The best you can do as an EDID writer is to put in something close with the same aspect ratio, and that's what they did. No fault of the EDID writer; it's just a limitation of these old base blocks. (There is a decoding sketch below that shows exactly where this 4095 limit comes from.) Notice that the detailed timing descriptor also has an image size, a physical size. The chance that it matches the image size earlier in the base block is close to zero. If there is an image size at the beginning, ignore this one. If there is no image size at the beginning, this one is undefined; you can use it, but be careful.

You see it says "detailed timing descriptors" here, but there is only one, and after it come other things. There are four detailed timing descriptor slots in the base EDID, but usually only one or two are used for actual video timings; the other slots are abused to represent other information about the display. So you have a display range limits descriptor, which gives you the minimum and maximum vertical and horizontal refresh rates and the maximum pixel clock. Now, there can be many timings defined in an EDID, and if you actually go through them all, figure out the minimum and maximum ranges, and match that against this descriptor, the descriptor is usually wrong: there are almost always one or more timings with lower or higher refresh rates. Unless you have software to check this, it's very hard to do manually. Then there is a product name, a product serial number, and the number of extension blocks that follow. That was the base EDID.

For this 5K monitor, the next one coming up is block 1, a so-called block map extension block. Terrible name. It is basically an index of the extension blocks that follow. Now, there are only four blocks in total here, so you have just two indices. That means you have wasted about 120 bytes of precious EEPROM memory on a pretty useless extension block.
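As promised, the byte layout of a detailed timing descriptor. Each DTD is 18 bytes, and every dimension is stored as 8 low bits plus a 4-bit high nibble, 12 bits total, which is exactly where the 4095 ceiling comes from. A decoding sketch of the fields discussed above (sync positions and the flags byte omitted):

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the main fields of an 18-byte detailed timing
     * descriptor. 12 bits per dimension: hence max 4095, and
     * hence no way to express 5120. */
    void decode_dtd(const uint8_t *d)
    {
        /* Pixel clock: little endian, in units of 10 kHz. */
        unsigned pixclk_khz = (d[0] | (d[1] << 8)) * 10;

        unsigned hact   = d[2] | ((d[4] & 0xf0) << 4);
        unsigned hblank = d[3] | ((d[4] & 0x0f) << 8);
        unsigned vact   = d[5] | ((d[7] & 0xf0) << 4);
        unsigned vblank = d[6] | ((d[7] & 0x0f) << 8);

        /* Physical image size in mm, again 12 bits per axis,
         * independent of the size in the base block. */
        unsigned hsize_mm = d[12] | ((d[14] & 0xf0) << 4);
        unsigned vsize_mm = d[13] | ((d[14] & 0x0f) << 8);

        double refresh = (double)pixclk_khz * 1000.0 /
                         ((hact + hblank) * (vact + vblank));

        printf("%ux%u @ %.2f Hz, %ux%u mm\n",
               hact, vact, refresh, hsize_mm, vsize_mm);
    }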
The source could simply have read the whole EDID, looked at the type of each extension block, and figured it out by itself. The block map comes from the EDID 1.3 standard, which mandates that if you have more than two blocks, the second block must be this specific extension block. And remember that HDMI requires EDID 1.3: if you want a valid EDID for an HDMI interface with more than two blocks, you are forced to put this block map in block 1. It has been wasteful since the beginning, but because HDMI is stuck on 1.3, it's the only way you can do it.

A nice complication: some older transmitter hardware might read only two blocks of an EDID, so it gets the base block and then this useless extension block, and it never sees the blocks that follow. To be honest, that's pretty rare these days, but it is something to take into account when you write an EDID: not all hardware will be able to read your full EDID. A source that reads just the first block will make different decisions than a more modern one that can read the whole thing. So put at least some decent values into your first block.

Now, HDMI clearly thought this was silly as well, so, I think it was last year, they came out with an amendment that added a special data block to the CTA extension block, and if that is present it can override the number of extension blocks. So the base block would still say "I have only one extension block", then the source reads the CTA extension block, and that says "no, you don't have one extension block, you have two; ignore the block map". Yeah, you're all laughing; that might be why I haven't seen this in any EDID yet. By the way, when this EDID was made, this mechanism didn't exist yet, and I wonder whether there are any implementations at the moment that would actually parse it.

So we get to the CTA-861 extension block. Notice here: there is a video data block with all the various resolutions that are supported. CTA defines video identification codes, VICs: basically byte values, where every value means a specific timing. But above that you see "native detailed modes: 1". That means the first DTD in the EDID is supposed to be the native resolution. Remember, the first DTD didn't match the 5K resolution, so this would indicate that 3840x2160 is the native resolution, and clearly it isn't. On top of that, one of the VICs also sets its native bit, so now you have two native resolutions that conflict, and neither of them matches the actual native resolution.

Then you get some audio information: hey, there are two speakers, left and right, lovely. Some colorimetry: which other color spaces are supported. And then you get the video capability data block, which contains a very important bit: "RGB quantization: selectable". I said this is a Frankenstein EDID; the original didn't have this, I added it for illustrative purposes. RGB quantization is the bane of my life. When you are sending RGB out as a source, there are two ways of doing it. The first is that 0 is black and 255 is full brightness: full range. The other is that 16 is black and 235 is full brightness: limited range. Consumer electronics equipment, basically everything related to TV, uses limited range. That goes all the way back to the old CRTs and the original TVs; a long, long, horrible history. PCs didn't care about that and could do full range from the start. So of course you have more levels and less banding; full range is a good thing in my view, but consumer electronics is limited to, well, limited range.
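The mapping between the two ranges is simple; a sketch of the standard 16-235 scaling for 8-bit RGB, with my own rounding choices:

    #include <stdint.h>

    /* Full range (0-255) to limited range (16-235), 8-bit RGB. */
    uint8_t full_to_limited(uint8_t v)
    {
        return (uint8_t)(16 + (v * 219 + 127) / 255);
    }

    /* Limited range back to full range, clamping values outside
     * the nominal 16-235 window ("blacker than black", etc.). */
    uint8_t limited_to_full(uint8_t v)
    {
        int x = ((int)v - 16) * 255;
        if (x < 0)
            return 0;
        x = (x + 109) / 219;    /* rounded division */
        return x > 255 ? 255 : (uint8_t)x;
    }

Every failure mode described next is source and sink disagreeing on whether this scaling has already been applied: apply it twice and blacks wash out, skip it entirely and shadows get crushed.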
As long as consumer electronics was completely separate from the IT world, there was no problem. But these days every laptop has an HDMI output, and TVs take IT equipment as an input just as easily as a Blu-ray player. The worlds have collided, and now you have a problem. The CTA-861 standard says: if you are sending a consumer electronics timing, 720p, 1080p, 4K, basically all the things you get when you put in a Blu-ray disc, then it is limited range by default. If you are sending an IT timing, 1920x1200, those types of things, then it is full range. The chance that both source and sink do the right thing based on these defaults is pretty remote, 50-50 at best, and it gets even worse when you add all sorts of adapters in between. PC monitors tend to behave differently from TVs: they tend to assume full range, while TVs assume limited range. Different graphics drivers, even on different OSes, make different decisions. It's horrible, basically. I'm pretty sure this setup here is wrong, by the way. I haven't tested it yet, but I give it an 80% chance that this display will assume full range while my laptop is sending limited range.

I'm not the only one who saw that, and in CTA-861 revision H, sinks are required to set the selectable RGB quantization bit. That means they accept that the source can explicitly signal whether it's sending full range or limited range, and they will parse that. If you are a source and you detect that the EDID supports this, you really should signal it explicitly. In fact, the recommendation is to always signal it explicitly. What can go wrong? It will at least be as wrong as it already was, and with luck it's actually better. RGB quantization... I could make a whole presentation about this as well; I'll keep it at that.

Next up, you have a vendor specific data block. This one is defined by the HDMI specification and carries a bunch of information, including a weird list of HDMI VICs. Remember, CTA defined the video identification codes. When the first 4K displays came out and HDMI wanted to support them, the CTA standard didn't have corresponding VICs. So apparently, instead of talking to the CTA guys and saying "hey, we need these", HDMI made their own. There are only four HDMI VICs that they defined: basically 4K at 30, 25 and 24 Hz, plus a 4096x2160 cinema timing. An interesting thing here, going back to the previous topic: if I signal one of those HDMI VICs, what is the default RGB quantization range? They didn't define it. You can go through the whole spec and it says nothing about the RGB quantization range. Later on, CTA added proper VICs for these timings, and still later the HDMI specification said yes, they are interchangeable. So then it became clear: since these are CE timings, consumer electronics timings, the default is limited range. But in the meantime there were lots of drivers that assumed full range; I'm actually pretty sure that the Intel driver started out with full range and later had to switch to limited range. All because they didn't define it, and we had major problems figuring out what to do in our own products because of this.

These HDMI vendor specific data blocks indicate that you have an HDMI interface, but that gets tricky when you have HDMI to DisplayPort adapters, or the other way around. It is not clear at all whether those are supposed to remove these blocks or add them. I find the specifications for such adapters mostly non-existent; I think every interface standard just points at the other one to figure it out, and nothing actually happens.
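Since several CTA-861 data blocks have come up now, here is a sketch of how a parser walks the data block collection inside a CTA extension block. Each block starts with a 3-bit tag and a 5-bit payload length; only a couple of tags are handled here, just enough to show the structure:

    #include <stdint.h>
    #include <stdio.h>

    /* Walk the data block collection of a 128-byte CTA-861
     * extension block. Byte 2 is the offset of the first DTD;
     * the data blocks occupy bytes 4 up to that offset. */
    void walk_cta_blocks(const uint8_t *ext)
    {
        unsigned dtd_offset = ext[2];

        if (dtd_offset < 4)
            return;    /* no data block collection present */

        for (unsigned i = 4; i < dtd_offset && i < 127;) {
            unsigned tag = ext[i] >> 5;     /* top 3 bits */
            unsigned len = ext[i] & 0x1f;   /* payload length */
            const uint8_t *p = &ext[i + 1];

            switch (tag) {
            case 2:    /* video data block: one byte per VIC */
                for (unsigned j = 0; j < len; j++)
                    printf("VIC %u%s\n", p[j] & 0x7f,
                           /* for VICs 1-64 the top bit marks
                            * a "native" timing */
                           (p[j] & 0x80) ? " (native)" : "");
                break;
            case 3:    /* vendor specific: 3-byte OUI, LSB first;
                        * 00-0C-03 is the HDMI VSDB */
                printf("vendor OUI %02X-%02X-%02X\n",
                       p[2], p[1], p[0]);
                break;
            default:
                printf("tag %u, %u byte(s)\n", tag, len);
            }
            i += 1 + len;
        }
    }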
For once, the remainder of this extension block is nothing special: some HDR information, some more detailed timings, and that's it.

Now we get to the last extension block, and hey, 5K, amazing! The only extension block type that can actually represent 5K: the DisplayID extension block. That is why this particular display has four blocks in total: the base block, then they are forced to use the block map, then the CTA block, and then the DisplayID block, just to tell everybody "hey, I support 5K". Notice, though, that it says "preferred", but still nothing about this being the native resolution. They could have done that, DisplayID has support for it, but for some reason it wasn't added here. So even though you can figure out from this EDID that the display supports 5K, it still doesn't say that 5K is the native resolution.

Somewhat related to those adapters: the same question comes up when you mix CTA and DisplayID blocks. Which has priority? What should you do when there are conflicts? Not defined. I've tried to bring it up, but for now it's not defined. There are several reasons why you might want to use both, even on a DisplayPort interface. CTA is very handy: you can represent lots of video timings in a quite concise manner, and it is very well understood. You might have a display with both HDMI and DisplayPort inputs, and you want to keep the EDIDs as similar as possible, so you want to reuse that information. If you do, make sure there are no conflicts between the two; if they all say the same thing, it will work out well. If you get conflicts, well, what do you do? To support these 5K displays we had to do quite tricky things in our parsers: you want to make sure it is not a broken EDID, that it really is a good EDID, and that 5K really is a valid resolution that we want to support. So I think this EDID has a lot of interesting issues.

Let's now look at how you parse and check EDIDs. I maintain the edid-decode utility. It goes all the way back to 2006, was maintained by Adam Jackson, and was one of the utilities included with X11. By 2017 it had become quite outdated, and we wanted a really good parser that we could use and that could also check EDIDs. It already had some conformance checks, but they were by no means complete; very limited. So I took over maintenance in 2018 and have been adding support for all the latest features and improving the conformance checks, and it's now pretty good. I won't say it's perfect; especially in the conformance checks there are still things that can be improved. There are so many standards, and a lot of these requirements are small print. But if you want to check an EDID and see how well it is implemented, this is a pretty good utility. There is also a web page, updated daily with the latest version of edid-decode, where you can just drag in an EDID and it will parse it and show the results. Useful options for edid-decode: -c does the checking, and you will see warnings, failures, and occasionally a parse error; -n shows the native resolutions according to the rules in the EDID; -p shows the preferred timings, again based on those rules. You will have noticed in the earlier outputs of this utility that all timings are shown with all their parameters split out: the utility knows the exact, precise video timings they represent. That meant it was very easy to also add some options where you can
just let it do the calculations: I want the CVT timing for a certain resolution, I put in the parameters, and the program gives me the exact timing. Those of you that have used X11 may remember that it also provides the cvt and gtf utilities for these calculations; I've discovered that those are wrong in some cases. This is all based on the actual standards. It even has the very latest timings, OVT, optimized video timings, introduced in a very recent CTA-861 amendment. So this is completely up to date, as far as I know.

So what happens if you run this and check the preferred video timings for that 5K monitor? If you just parse block 0, you get that first detailed timing; remember, that is wrong. If you parse block 0 and the CTA block, you get the first DTD again and the first VIC, 1080p: again, no 5K. Only when you parse block 0, the CTA block and the DisplayID block do you get 5K. So the result depends on the parser, on what it understands and what it does with it. If you do the same for the native resolution, it's not there at all; as I mentioned before, it's just missing.

CTA has since added support for handling preferred and native resolutions. Native resolutions are very new, that came out in February this year; preferred resolutions came a bit earlier. This is a test EDID that I manually crafted, which basically uses all the different ways you can define timings. As you can see, CTA now supports DisplayID timings as well, you can use them there, and those can go up to 5K. It has optimized video timings, which have a resolution ID stating what the resolution is and can then support various refresh rates for it. This is all part of the CTA standard; you can read up on it. There is a video format preference data block, where you can list your preferred resolutions in order; those are basically references to timings you defined before. And finally, there is a native video resolution data block that says which timing represents the native resolution. If you run edid-decode on this one, you actually get the right information: a CTA parser that understands the native video resolution data block will report the proper native resolution, and that is what you want. But this is all new; before it all filters through to the actual drivers that parse this and the display manufacturers that implement it, we are probably several years further.

Resources: the edid-decode repository is part of linuxtv.org, where we keep the media subsystem trees and utilities. The EDID, E-DDC and DisplayID standards are freely available from VESA, and the CTA-861 standard is freely available from the CTA.tech website. There is my email address, and that's it about EDID. Nicely on time, and I have a few minutes for questions.

Question: On one of the slides there was 12-bit or 14-bit color defined for the 5K monitor. What is that about? How does that fit into 8 bits?

Let me see... oh yeah, this one supports 12 bits per component: deep color. I haven't talked about that at all, but it effectively means that the bandwidth, the link frequency, goes up in order to accommodate it. If you look here at the maximum TMDS character rate at the bottom of that section, in megahertz: that is the number of pixels per second that you pass through, not the actual link frequency.
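The arithmetic behind that answer, as a small sketch: for deep color the TMDS clock scales by bits-per-channel over 8, while the pixel rate stays the same.

    #include <stdio.h>

    /* HDMI deep color: the link runs faster by bpc/8 while the
     * number of pixels per second stays constant. */
    double tmds_clock_mhz(double pixclk_mhz, int bits_per_channel)
    {
        return pixclk_mhz * bits_per_channel / 8.0;
    }

    int main(void)
    {
        /* 1080p60 has a 148.5 MHz pixel clock. */
        for (int bpc = 8; bpc <= 12; bpc += 2)
            printf("%2d bpc: %.2f MHz on the wire\n",
                   bpc, tmds_clock_mhz(148.5, bpc));
        return 0;
    }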
If you go to a higher bit depth, the actual link frequency goes up as well, but the number of pixels that you transfer remains the same. And of course the cable has to support that; if the cable doesn't, you are out of luck.

Question: What about a properly implemented adapter? Say the display is DisplayPort and the source is HDMI, so there is an HDMI to DisplayPort converter in between.

The adapter is supposed to add these HDMI vendor specific data blocks. Whether it does, you don't know. The good ones will, the bad ones won't.

Question: Why does limited range exist at all?

Limited range, yes: it started out in the analog world. You never had precise black and white levels, it was just a waveform, and they simply said this voltage is black and this voltage is white. Because it's analog you get overshoots, and when you digitize the signal you may want to preserve those overshoots; that's why you have this extra headroom. There are also interfaces, SDI and some serial variations for transferring video, that actually use the values 0, 1, 254 and 255 to signal metadata, so there too you want to stick to limited range. This is all broadcast TV stuff, but hey, that was the world for many, many years. So that is all reflected here.

Come to me afterwards, I don't want to go into that now. The answer is yes, but it's a "yes, but" answer. Okay, too late, I have to stop, sorry. Thank you very much.