Hi, my name is Guy Ducos, and I'm part of Philips. Together with Technicolor, we have co-developed a dynamic HDR solution that is now standardized as one of the new features of HDMI 2.1. So right here is a demonstration of dynamic HDR.

Does that mean every scene has a different setting in the HDR?

Exactly. The idea of dynamic HDR is that together with the single video stream, you are also transmitting dynamic metadata that explain how to optimize each individual image for any display on the market. Among the HDR displays on the market, you will have 1,000-nit displays, 700-nit displays like the OLED behind me, or 300-nit mainstream TVs at a lower price. And you need to support these TVs in optimizing each image for their specific capabilities. That is the goal of the dynamic metadata that is part of dynamic HDR.

So does that mean it can recognize what kind of scene it is and then set the peak luminance differently in each scene?

It's not really the peak luminance that is set differently. What is set differently is how you distribute the bits across the brightness and the colors, depending on the image, so that for each image you see all the details and all the colors as much as possible on the specific display that you have at home.

So what are the bits you're talking about that are being spread around? Like you say, it's about the colors.

It's about the colors, and it's about the brightness. Again, the content, for example, will be shot at 4,000 nits, and the display will be at 700 nits. In that case, the question is how to map the 4,000-nit content down to the 700-nit display capability. And you want to do this mapping differently for each image, whether the image is a bright image, a dark image, or a mixture of both, so that you keep as much detail as possible visible on the display.
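The per-image mapping described above can be sketched with a toy tone-mapping curve. This is a minimal illustration only, not the actual SL-HDR1 or Philips algorithm; the extended-Reinhard-style curve and all names here are assumptions made for the example.

```python
# Toy per-image tone mapping: NOT the SL-HDR1 algorithm, just an
# illustration of squeezing 4,000-nit content onto a 700-nit display.

def tone_map(luminance_nits, content_peak=4000.0, display_peak=700.0):
    """Extended-Reinhard-style curve: the content peak lands exactly
    on the display peak, while mid-tones are compressed only gently."""
    x = luminance_nits / display_peak   # input in display-relative units
    w = content_peak / display_peak     # the "white point" of the curve
    y = x * (1.0 + x / (w * w)) / (1.0 + x)
    return y * display_peak             # back to nits

print(round(tone_map(4000.0), 1))  # 700.0: content peak hits display peak
print(round(tone_map(100.0), 1))   # 87.9: mid-tones mostly preserved
```

Dynamic metadata then amounts to choosing such curve parameters per image, so a mostly dark frame can get a gentler curve than one full of bright highlights.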
And that's why for each image you have different dynamic metadata that help the TV optimize the picture for its specific capability.

So when you take a camera and you film an HDR video, you're actually shooting for a peak luminance?

Usually, you grade the video on a reference monitor, like a Sony BVM reference monitor or a Dolby Pulsar reference monitor. And that gives you the peak luminance to which your content has been graded, indeed.

And so you just do a basic calculation, 4,000 divided by 700?

It's much more complicated than that. It's a kind of machine learning, artificial intelligence process that is optimizing for each image: understanding which details have to be kept, and how to optimize the tone mapping of the image for the specific display capability. So you start from the peak luminance of the content, 4,000 nits for example, and go down to 100 nits if you want to reach SDR, or any value in between. For example, here we have an OLED display, so this display is optimizing the picture for its peak luminance, which is 700 nits.

And so basically, the standard you're describing runs on this box, on the TV, or on both?

The feature we have developed can run either on the set-top box or on the TV. The interest of HDMI 2.1 is that instead of the set-top box doing part of the image calculation and the TV doing another part, the set-top box can now transmit the dynamic metadata over the HDMI cable so that the TV does the complete image optimization. That's the idea.

So it's a big deal to do dynamic HDR. HDR on its own is great, but dynamic is even better; it's important?

For sure, because, again, moving forward the movie studios and all the content providers want to shoot content as bright as possible so that it's future-proof, while today the displays are still rather limited in terms of peak brightness.
So if you want to avoid completely saturated or clipped images on the existing displays, you need to support these displays with dynamic HDR capabilities and tell them how to optimize each image for their display capability. Because there are so many TVs that do HDR, and every one of them has a different peak luminance; every one is different.

So you're kind of displaying the content in a backwards- and forwards-compatible way?

Exactly, that's indeed the case. Here, with a single stream plus the dynamic metadata, you can adapt each individual image to any display on the market. And the interest of the Technicolor SL-HDR1 technology that is mentioned and demoed here is that the video stream being transmitted is an SDR video stream: a high-quality, backwards-compatible SDR video stream for legacy TVs.

So is this the one you're talking about? SL-HDR1, standardized. But you said it's a single stream, so how is it different from HLG?

HLG and HDR10 are two static HDR solutions, meaning that they use the same EOTF and a single curve for all the images. The TVs then have to try to optimize all the images, and they do a kind of average optimization because they are not analyzing each image individually. Here, the difference with dynamic HDR is that the images have been analyzed individually. Their optimization has been calculated upfront in the head end, and this information is sent together with the video stream to the TVs so that they can use it to re-optimize each image for their display capability.

So how much content out there is dynamic HDR so far?

Well, there are different versions of dynamic HDR that are also standardized in HDMI 2.1. Technicolor is one of them, but you also have Dolby Vision, and you also have HDR10+ from Samsung. So dynamic HDR is coming globally as a need to re-optimize the pictures for each TV on the market, and that is now widely available.
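The contrast drawn above between static HDR (one curve for the whole stream) and dynamic HDR (per-image adaptation) can be sketched roughly as follows. This is a deliberately oversimplified illustration, not the HDR10, HLG, or SL-HDR1 math; the clip-versus-scale behavior and all function names are hypothetical.

```python
# Oversimplified contrast between static and dynamic tone mapping.
# Hypothetical sketch only, not any standardized HDR transfer math.

def static_map(nits, display_peak=700.0):
    # One fixed curve for the whole stream (static HDR): anything
    # brighter than the display simply clips, so highlight detail
    # above 700 nits is lost.
    return min(nits, display_peak)

def dynamic_map(nits, frame_peak, display_peak=700.0):
    # Per-frame metadata carries frame_peak, so the curve adapts:
    # a frame already within display range passes through unchanged,
    # while a bright frame is compressed so its own peak lands
    # exactly at the display peak, keeping highlights distinct.
    if frame_peak <= display_peak:
        return nits
    return nits * display_peak / frame_peak

# A 2,000-nit highlight inside a 4,000-nit frame:
print(static_map(2000.0))           # 700.0 (clipped, detail lost)
print(dynamic_map(2000.0, 4000.0))  # 350.0 (relative detail kept)
```

The point of the sketch: under the static curve, a 2,000-nit and a 4,000-nit highlight both land at 700 nits and become indistinguishable, while the per-frame mapping keeps them apart.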
So just to display HDR content better, you don't need to film the movies again differently, right?

It depends on the standard. For Technicolor, indeed, the idea is that we calculate these dynamic metadata just before the distribution encoder. The production can be done in HLG, in HDR10, or in S-Log3; this is fully independent. You just calculate and transmit the dynamic metadata at the head end, before distribution.