Welcome to our next talk on the first day of Divock 2022. It's about color management. I think this is one of the most famous mysteries in computer science, apart from printers, one that only very few people really understand. So I'm very happy to introduce Amy as our guest today. Amy is from Argentina and this is his first talk at Divock. He did many talks before, but this is the first time at Divock. And he is a member of the Krita development team. I think many of you know Krita as an application, and it's interesting to get insights from a core developer who really understands color management. I'm very excited to learn a little bit more about that. As usual, ask questions; we collect questions in the pad, and after the talk we will go through them. I'm sure there will be some of these questions, as I would have many myself. So, as usual, let's start, and I will be here later to go through the questions that come in. So Amy, handing over to you. Enjoy, and yeah, educate us about color management and ICC. So, you know my name; I'm part of the Krita development team, and the topic of this talk stems from something that I found very curious during work, and which I call the last frontier of ICC profiles. A few bits about myself: I am a full-time contract developer working on Krita. I started with a pretty theoretical background in color science, but over time I specialized in many things that are not very well covered in courses, for instance build systems and cross-platform support. For this reason, you will find my patches in a lot of free open source projects out there, for instance Homebrew, OpenColorIO, and so on. If you want to check any of these things out, you can visit my website, which is amyspark.me. So, for the motivation of this talk: you know that I work on Krita, which is a free open source painting application, part of the KDE suite.
We provide an extensive toolset for artists and image manipulation professionals, and our main aim is to support as many platforms and stores as possible. Currently we support Linux, Windows, and macOS, and we are working on Android support. To achieve all this, we need a fairly exhaustive test suite, because the main aim is to catch as many errors as possible before they land in production, in order to, of course, save tears. This is very important because our main branch, which will be Krita 5.1, currently clocks in at more than 4,500 C++ source files plus binary objects, which amount to almost a million lines of code. And this complexity increases very rapidly once you start factoring in the compiler flavors, the library versions, the operating systems we support, the CPU architectures we support, and so on. But the main question is why this business of the test suite is important. It is because, until recently, our test suite couldn't test one of the many color spaces that we support. And this is an outlier, this last frontier, that is called YCbCr. We couldn't test YCbCr because there are simply no free ICC profiles available for it. If you go around on the Internet, you will find only two known instances, both of them copyrighted, dating from 1996, and for obvious reasons we can't use them in free open source software. But oh well. By this time, I expect you will have many questions. For instance: what's YCbCr, or a color space, or an ICC profile, or just, well, any of the confusing terms that I have used. And this is what I will be answering during this talk. For this reason, I have separated it into four main sections. First, I will give you a primer on color management. Then I will introduce you to the YCbCr color space, why it exists, and why it is important for Krita. In the third section, I will show you how to turn the YCbCr specification into your own ICC profile. And finally, we will make a brief recap of what we have seen.
So, for the first section, color management. Abhay Sharma gives a very formal definition: he calls color management the use of hardware, software, and processes to control and adjust color among different devices in a digital imaging system. That's very complicated, so I split it into three main questions. First, why do we need color management, as developers and as users? Secondly, how are devices color managed? And finally, how are colors specified in a color management system? For the first question, why do we need color management? The aim of color management is to achieve what is formally called device-independent color translation. In other words, how can I have this block of color on my screen and make it look the same in a print job? This is important because each device, be it a printer, a screen, or your smartphone, measures colors in a different way. Formally, color management's aim is to produce an identical representation from different input devices, be it a scanner or a camera, and then make it look the same on different output devices, in this case a screen or a print job. Second question: how are devices color managed? We need to answer the three C's of color management. It starts with calibration, which means setting your device up to a known, desirable, and most importantly repeatable state that you can go back to if things go wrong. Secondly, you need to do characterization, which means measuring how your device responds to color inputs, then describing that response in a device-independent manner, and finally storing it in what we call a device profile, which usually travels along with whatever image the device makes. And finally, the most important step for us as Krita developers is called conversion. That is, simply: take the image, take the source profile, and transform the image to fit the destination profile. Third question: how are colors specified? They are specified as coordinates in what we call a color space.
From a geometrical point of view, a color space is an n-dimensional space, one dimension for each channel, that turns light stimuli, meaning the colors that we see, into vector coordinates. If you have done web development, you have already run across at least three color spaces: RGB, which is red, green, and blue; HSV, which is hue, saturation, and value; and finally HSL, which is hue, saturation, and lightness. From a mathematical point of view, a color space is defined as a coordinate system in which we define a subspace, and each supported color is mapped to a single point inside that subspace. The set of supported colors is the color space's gamut. To construct a color space, we need to specify two things. First, a set of three independent reference stimuli that we call primaries; for instance, in RGB they are pure red, pure green, and pure blue. Secondly, we need to define the white point, that is, the color of the light source that was used either in the scene or in the measurement of the primaries. These colors are represented by what we call chromaticity coordinates, and if we plot them in a chromaticity diagram, the triangle that is formed reveals the space's gamut. For instance, the sample that we have on the right is the gamut of the YCbCr profile, according to one of the specifications that we will see in the third section. Now, there are two main architectures, but nowadays we use the one that is specified by the International Color Consortium, which is called open loop color management. In open loop color management, all the calculations are done in a single space that is called the profile connection space (PCS), which is an intermediate and device-independent color space. Under open loop, each transformation to and from each device is mapped to a single transformation to the PCS.
For instance, instead of defining a wide set of pairwise transformations, say from a camera to a scanner or from a scanner to a printer, we only define a single transformation from the camera to the PCS and one from the PCS to the destination device. According to this, the ICC color management architecture has four key components. First, the profile connection space that we will be using. Secondly, the color management module (CMM), a software library that does all the hard work and which usually comes with your operating system, though of course there are also proprietary vendor offerings available. Third, you need a device profile, which contains all the data to transform between the PCS and your device, and which we will be making in this talk. And finally, the rendering intent, because when you map between different color spaces there can be cases where you start from a color in one space and there is no equivalent color in the other, and of course the color management module needs a built-in way to account for this. So, how do we process color values? ICC profiles in the current specification, which is 4.3, have two main ways in which they can process colors. The first one is called matrix plus tone response curves (TRC), and the second is color lookup tables. Matrix plus TRC combines a three by three matrix with tone reproduction curves, which you can also call gamma curves, or OETFs, as we will see later. This kind of profile can only describe RGB or grayscale color spaces, and most importantly, the matrix is not stored directly. You do store the tone response curve for each channel, but the matrix is derived from the chromaticity coordinates of the three primaries. This is the flow of a device color space to PCS conversion with matrix plus TRC: you start from the device space, you apply the tone response curves to linearize the channel values, and finally you apply the matrix to convert from your device space to the PCS.
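As a rough sketch of that flow, here is what a matrix plus TRC conversion might look like in both directions. This is illustrative only: it assumes a simple power-law curve and uses the well-known sRGB linear-RGB-to-XYZ matrix as a stand-in for whatever matrix a real profile derives from its primaries.

```python
import numpy as np

# Stand-in matrix: linear sRGB -> XYZ (D65), shown purely for illustration.
# A real profile derives its matrix from the primaries' chromaticities.
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def trc(v, gamma=2.2):
    """A simple power-law tone response curve: linearize the channel."""
    return np.asarray(v, dtype=float) ** gamma

def device_to_pcs(rgb, gamma=2.2):
    """Matrix + TRC, forward: linearize each channel, then matrix to PCS."""
    return RGB_TO_XYZ @ trc(rgb, gamma)

def pcs_to_device(xyz, gamma=2.2):
    """Reverse direction: inverse matrix first, then inverse curves."""
    linear = np.linalg.inv(RGB_TO_XYZ) @ np.asarray(xyz, dtype=float)
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)
```

A quick sanity check is that a round trip through both directions returns the original device values, which is exactly the "well-behaved" property discussed later in the talk.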
And to go from the PCS back to the device color space, you reverse the direction of the transformation: you start from the profile connection space, you apply the inverse matrix, you apply the inverse tone response curves, and finally you end up in the device space. The most important thing here is that all these calculations are done automatically by the color management module, so you do not need to make any extra effort as a profile designer. Now, the second alternative that I mentioned is color lookup tables. These are designed for n-channel color spaces, for instance the printer space, cyan, magenta, yellow, and black, or for more complex color conversions for which a matrix plus TRC is simply not enough. The first difference from matrix plus TRC is that each transform direction is explicitly stored in a separate tag inside the ICC profile. Also, there are two main ways in which to store these transformations. The first one, which is the standard and also the required version, stores them at 8- or 16-bit unsigned integer depth, and the tags are called A2B0, to go from the device to the profile connection space, and B2A0, to go from the PCS to the device color space. On top of that, you can override these with floating point depth transformations, whose tags are called D2B0, from device to PCS, and B2D0, from PCS to device color space. Note that the latter override the A2B0 and B2A0 transforms only if your color management module supports them. Now, the integer versions, A2B0 and B2A0, are what we call color transform structures. These can have up to five elements, in four possible and fixed ways to combine them. For instance, for the device to PCS direction, we can use: a set of three tone response curves called B; or M, matrix, and B, where M and B are both sets of three tone response curves, which is the closest one to matrix plus TRC; or A, then a color lookup table, then B, where again A and B are tone response curve sets.
And finally, the most complex one, but also the most expressive one, and the heaviest one in terms of storage: A, color lookup table, M, matrix, and B. For the PCS to device direction, you reverse the flow of the transformation that I mentioned earlier. It is important to note that, unlike matrix plus TRC, this version has a fourth column in the matrix that allows you to apply an offset to the transformation. And this is what it looks like when you transform with a color lookup table from device to PCS. You start from the device space. You linearize with the set of A curves, one for each channel. The lookup table takes the n channels and outputs a set of fixed three-channel values. You apply the M curves, you apply the matrix plus the offset, you apply the B curves, and finally you end up in the PCS. To go from the PCS to the device, you reverse the direction of the transformation: you apply the inverse B curves, the inverse matrix, the inverse M curves, you invert the color lookup table, and you finally end up in the device space after applying the A curves. Now, the alternatives, which are D2B0 and B2D0, are floating point color transforms. The first difference that you will find in the specification is that these transforms allow you to use any component that you want, be it a matrix, a tone response curve, or a color lookup table, as many times and in any order you wish. The second difference you will find is that the viewers you will be using, for instance Apple's ColorSync, do not support them. And the third one, which will affect us the most, is that the supported parametric tone response curve types are far fewer. We will see why this is important in the last section. And the final ingredient of an ICC profile is the illuminant. ICC profiles are expected to represent all colors under a very specific source of illumination, which is called the illuminant, or white point.
The ICC expects you to use, in particular, one which is called D50, which theoretically represents the warmth of the light shortly after dawn or before dusk. But, as you may expect, many color spaces, like YCbCr, use other white points. For this case, we need to specify what is called a chromatic adaptation matrix, which adjusts the color values from the specified illuminant to the corresponding ones under D50. These are always easily searchable online; for instance, Elle Stone has done a lot of great research on this subject. And this is the equation that is followed: you start from the value in the PCS under the device viewing conditions, you apply the chromatic adaptation matrix, and you end up with the final value that the ICC expects under D50. Now, for the YCbCr color space. YCbCr is what we call a device-independent color space encoding. Device-independent, because its specification is fixed in two standards and does not depend on a particular device. Color space, because it is a mathematical transformation of RGB. And encoding, because it specifies, first, the digital encoding method it will be using, for instance 8-bit unsigned integer as well as floating point, and secondly, the range of values, which depends on each of those options. Why does YCbCr, such a complex space, exist? It exists to digitally encode color signals for your television. According to Michael Tooms, it was meant to cover, first, the need to retain color balance between different input devices, in this case cameras, because if the colors drift between different cameras, that is going to be trouble. Secondly, it was meant to maintain compatibility with monochrome, that is to say grayscale, TV sets, and for this reason the color was split into luminance and chrominance signals.
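As a sketch of that adaptation equation, here is the widely published Bradford-adapted D65-to-D50 matrix applied to an XYZ value. The coefficients are reference data from Bruce Lindbloom's tables, not something shown in this talk, so treat the exact numbers as an assumption of this example.

```python
import numpy as np

# Bradford chromatic adaptation matrix, D65 -> D50 (Lindbloom's tables).
D65_TO_D50 = np.array([
    [ 1.0478112, 0.0228866, -0.0501270],
    [ 0.0295424, 0.9904844, -0.0170491],
    [-0.0092345, 0.0150436,  0.7521316],
])

def adapt_to_d50(xyz_d65):
    """Adapt an XYZ value measured under D65 to its equivalent under D50."""
    return D65_TO_D50 @ np.asarray(xyz_d65, dtype=float)
```

A quick sanity check for any adaptation matrix is that it maps the source white point onto the destination white point: the D65 white (0.95047, 1.0, 1.08883) should land approximately on the D50 white (0.96422, 1.0, 0.82521).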
And thirdly, but not least important, it needed to be as efficient as possible, because if we take a single 1080p stream, for instance the one carrying this talk, and send it as uncompressed 8-bit RGB, we would need to send roughly 170 megabytes each second. This is impractical nowadays, and it was impossible 20 years ago. So what does YCbCr do in this regard? It transforms your RGB pixels into three signal values that we call Y, Cb, and Cr. First, Y is the luminance signal, which we see here as the grayscale plane of the image. Intuitively, our eyes are more sensitive to the green component, so Y contains a large portion of the green value of the pixels; of course, there is a smaller contribution from R and B. Cb and Cr together form what we call the chrominance signal. These two are complementary color difference signals, because they arise, in the case of Cb, from blue minus luma, and in the case of Cr, from red minus luma. All of this is standardized by the International Telecommunication Union in two separate recommendations. The first version, which is called BT.601 and was last updated in 2011, covers what we call standard definition transmission: resolutions less than or equal to 480p, as well as legacy analog TV sets. It was obviously designed for compatibility with the legacy monochrome TV sets, and it targets, of course, the D65 white point. This is the matrix, with three separate equations, that transforms from R, G, and B to Y, Cb, and Cr. The second alternative, last updated in 2015, is called BT.709. It is a revised version that targets high definition, as well as high dynamic range, transmissions, and for this reason it drops legacy compatibility in exchange for an accurate luminance response that comes from measurements of our eyes. Again, this targets the D65 white point, and looks like this.
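Both matrices on those slides can be derived from just the two luma coefficients, Kr and Kb, of each recommendation. A small sketch, where the coefficient values come from the published standards and everything else is illustrative:

```python
def rgb_to_ycbcr(r, g, b, kr, kb):
    """Generic gamma-corrected R'G'B' -> Y'CbCr, given luma coefficients."""
    kg = 1.0 - kr - kb
    y = kr * r + kg * g + kb * b        # luma: weighted sum, mostly green
    cb = (b - y) / (2.0 * (1.0 - kb))   # blue-difference, scaled to [-0.5, 0.5]
    cr = (r - y) / (2.0 * (1.0 - kr))   # red-difference, scaled to [-0.5, 0.5]
    return y, cb, cr

def bt601(r, g, b):
    """BT.601 (standard definition) coefficients."""
    return rgb_to_ycbcr(r, g, b, kr=0.299, kb=0.114)

def bt709(r, g, b):
    """BT.709 (high definition) coefficients."""
    return rgb_to_ycbcr(r, g, b, kr=0.2126, kb=0.0722)
```

Pure white maps to (1, 0, 0) in both variants, and pure blue maps to Cb = 0.5, which matches the floating point ranges discussed later in the talk.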
Now, for the range of YCbCr: its standard use was in analog circuitry, and for this reason, in the case of 8-bit unsigned integer, the Y value is expected to range from 16 to 235, and Cb and Cr from 16 to 240. The extra room is there to allow the analog carrier signal to over- or undershoot without issues. Now, both matrices that I gave earlier, as well as the ICC, do not operate on unsigned integers, but operate in floating point. So you may be asking what the range is there, and the answer is that only BT.601 explains it, in a very specific section of the standard: Y ranges from 0 to 1, and Cb and Cr range from minus 0.5 to 0.5. Now, for the gamma correction: YCbCr takes and returns not the RGB signal directly, but a gamma-corrected RGB signal. This is accounted for in the earlier equations by the prime mark in R', G', and B'. This gamma correction is formally called the optoelectronic transfer function (OETF), and accounts for nonlinearity in the image sensor and in the display device, yielding a legal signal. Now, there is little information on YCbCr's actual usage, so we can consider an alternative that stems from the old-school CRT television sets. These television sets follow a power law that relates the emitted light to the driving voltage, with a given value that is called gamma and depends on the device. The official recommendation is defined in BT.1886, last updated in 2011, and defaults to gamma equals 2.4. Now, is crafting your own profiles unheard of? No. For instance, obtaining a minimal sRGB profile is of interest for image practitioners, for instance Facebook. Okay, so we need to wait for him to reload. Okay, yeah, let's wait for some seconds. Let me come back. We'll let him know that he lost connection. Okay, but we can still see the slides, so I can try to take over and start to explain what I'm... I guess I'm unable to do that. Okay. Good. Well, again, if you have questions, use the... Okay, good.
Use the pad to ask your questions. We already got two questions in there, I think focusing on the user perspective. Okay, and then wait for him to come back. So, what I can remember... Okay, here we are. Yeah, back. Okay. Okay. Yeah, I think this is something most of us have experienced before, so let's cross fingers and he'll be back in a few seconds. Okay, it looks like he is back. Okay, yes. Okay, good. Okay, go ahead. So, we were saying: crafting our own profiles is nothing that is unheard of. For instance, Facebook and game developers are very interested in shipping minimal sRGB profiles. And Elle Stone has also researched a topic that we call well-behaved profiles, which are meant to be round-trippable, numerically stable, as well as standard-compliant profiles. What about YCbCr? It's easy to make a profile out of primaries plus a white point, like RGB, but YCbCr is not one of those profiles. The conversion is done in two separate steps: we need to apply the matrix, then the gamma curves, which leave us in linear RGB, and only then can we apply the matrix plus TRC to end up in the profile connection space. Is it really possible to make a profile for our own use? Yes. The core transformation flow is like this. To go between YCbCr and the PCS, we need to apply the color transformation matrix to end up in RGB, then linearize it with the gamma correction curves, and finally apply the transformation to the PCS. And of course, because the ICC expects D50 and we are operating under D65, we need to use the chromatic adaptation matrix. For this reason, we will be implementing the A2B0 plus B2A0 color transformation pipelines. To go from YCbCr to the PCS, we need to adjust the input range, because of the 0 to 1 versus minus 0.5 to 0.5 discrepancy that we saw earlier. Then we need to apply the YCbCr to RGB matrix; in this case, we invert the one that is given in the specification.
We apply the tone response curves, and finally we apply the RGB to PCS conversion matrix. This is a fixed, known matrix that you can source online. For the A2B0, expressing the tone response curve is easy, because the curve type we need is directly supported. But for the D2B0, this is not the case. A2B0 and B2A0 support the piecewise curves that we saw earlier; however, the floating point transforms cannot represent the second segment of the optoelectronic transfer function. I don't know if it was an oversight on the ICC's part, but this means that we need to use a sampled curve. We also have to account for the fact that we must apply the offset factor after or before the conversion matrix, depending on the direction. So, for the A2B0 version: first we need to invert the specification's RGB to YCbCr matrix, then bake the complete YCbCr to RGB step into a color lookup table. Then we apply the tone response curves, which go directly into the M element, and finally we apply the RGB to profile connection space matrix in the matrix element. For the D2B0 version, all the steps can be expressed individually, and this brings significant savings in space. But unfortunately, because of the need to sample the optoelectronic transfer function that we saw earlier, we lose approximately 10% of the potential savings that we could have gotten otherwise. For the PCS to YCbCr direction, things are a fair bit simpler: we apply the PCS to RGB conversion matrix, invert and apply the tone response curves, and finally apply the RGB to YCbCr matrix, which is taken straight from the specification. And finally, we adjust the output range. Because we cannot chain two matrices in a single transform, for A2B0 we need to fold this adjustment into the matrix above; but in D2B0 and B2D0, it fits directly inside the matrix's fourth, offset column.
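Putting those steps together, here is a minimal numerical sketch of the device-to-PCS direction. It assumes BT.709 luma coefficients, the BT.709 two-segment transfer function, and the standard BT.709/sRGB linear-RGB-to-XYZ D65 matrix; the actual profiles encode this as A2B0/D2B0 structures rather than as code, so this is only an illustration of the math.

```python
import numpy as np

KR, KB = 0.2126, 0.0722          # BT.709 luma coefficients
KG = 1.0 - KR - KB

# R'G'B' -> Y'CbCr matrix built from the coefficients; we invert it
# for the YCbCr -> PCS direction, as described above.
RGB_TO_YCBCR = np.array([
    [KR, KG, KB],
    [-KR / (2 * (1 - KB)), -KG / (2 * (1 - KB)), 0.5],
    [0.5, -KG / (2 * (1 - KR)), -KB / (2 * (1 - KR))],
])

# Linear BT.709 RGB -> XYZ (D65); the D65 -> D50 adaptation would follow.
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def oetf_inverse(v):
    """Invert the two-segment BT.709 OETF to linearize a channel value."""
    return v / 4.5 if v < 0.081 else ((v + 0.099) / 1.099) ** (1 / 0.45)

def ycbcr_to_pcs(y, cb, cr):
    """YCbCr (Y in [0,1], Cb/Cr in [-0.5,0.5]) -> XYZ under D65."""
    rgb_prime = np.linalg.inv(RGB_TO_YCBCR) @ np.array([y, cb, cr])
    rgb_linear = np.array([oetf_inverse(v) for v in rgb_prime])
    return RGB_TO_XYZ @ rgb_linear
```

Note that it is the two-segment shape of `oetf_inverse` that forces the sampled curve in the floating point transforms, as mentioned above.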
For the last step, chromatic adaptation, this is a very easy step, thank goodness, because this is the D65 to D50 transform, a very well-known transform that is already used, in particular, in sRGB. Alternatively, a good color management module can also apply this directly to your profile. So, for the conclusions. As you have seen, color management is a very complex beast, a seriously complex beast, and we covered a very small primer, the uniqueness of the YCbCr color space, and how to go from the YCbCr specs to an ICC profile. You will have noticed that we didn't cover the actual implementation, because that would make for a whole different talk. For this reason, I have made the full profile generation code available on GitHub, at github.com slash amyspark, in the YCbCr ICC profiles repository, and there are extra references at the end of these slides. All this effort is already live on Krita Next, which is our nightly alpha build. Again, this is a nightly alpha build; we do not intend it to be used in production, but you are free to do so. If you want it, you can get a copy at krita.org. Thank you for watching. I am open to any questions you may have. Thank you very much, Amy, for the talk. Actually, it looks like a complex topic, and even more complex than expected in the first place. I got two questions I can ask from the chat. The first question is: how does Krita's color management compare to other graphics and image tools? Do they all have similar CMS engines? Is there something good or bad or just average, or what should I look at as a user? Compared to Photoshop, for instance, we at Krita do not have access to Pantone color sets, the official palette specifications. However, we have access to a color management module that is called Little CMS, by Marti Maria from HP, and we are currently working a lot on the optimization part, something that we perhaps couldn't do if we were using a proprietary module.
From another point of view, we cover spaces that other applications do not have, for instance XYZ and Lab directly, which are both profile connection spaces, as well as YCbCr, which is simply unheard of in any other application. Okay. And the second question is, I think, coming from a user perspective. Can you recommend some good online resources, or maybe step-by-step guides, from a user's point of view, to take care of all these steps in color management, without going into all the details of the specifications? Is there anything you can recommend, or maybe people can ask you? There is a very good book from Wiley; I don't recall the name right now, but I can share it in the recap talk later. And of course, there is our Krita manual at docs.krita.org. Okay, good. So I think we are done with that. So Amy, thank you very much. I think we have a follow-up BigBlueButton room where you can be joined for some more time to get more questions. Maybe we can demo some stuff and people can have a look into it. That would be great as well. So I think that's it for now. Thank you very much. We ended a few minutes early; that's good, I need some time to relax and digest all this complex stuff. So thanks again, and hope to see you again next year, maybe with the next evolution and next round of updates. Thank you. Thank you.