Oh, it was loud. I didn't expect that. OK, so let's maybe get started. Welcome to this workshop. First, I would like to say thank you. Thank you for being here at the conference and, of course, at my presentation, especially since we are in Amsterdam. As we know, this town has so many interesting things to offer, but we here in this room know that the most exciting thing to do right now is talking about color management.

And color management is all about consistency, as you can see on this image. This is a little bit more consistent. It's an interesting thing, and I would like to show you an example, because color management involves hardware, software, and knowledge. When I say hardware, I mean displays: how they react to the signal. Software: how does the software handle the colors? And of course our knowledge: do we know what to do with all of this?

Let me show you the complexity of color management on one very simple example. Let's say we take a photo, a photo that looks like this, and we want to save it as a JPEG file. Then we want to bring this JPEG into Blender's compositor and pass it through the Color Balance node. And then we want to save the result as a JPEG image and, of course, display it on the monitor. Pretty simple stuff. But let's take a look at what happens under the hood.

Before the image is saved as a JPEG file, an adjustment like this is applied to it. This is called gamma encoding. Then it's saved as the JPEG file. When we bring it into the compositor, the reverse curve is applied to the image, so the process we did is reverted. Now we bring the image into the Color Balance node and leave the default lift/gamma/gain settings. And what happens? Inside this node, you guessed it: the image gets gamma encoded again. Then our adjustments are performed, and before the image leaves the node, it gets gamma decoded again. Then we save it as a JPEG file, so the image gets gamma encoded again, for the third time. And when it goes to the monitor, the monitor decodes the gamma, so we see exactly the same image as before.

So this is the whole process. Do you see any logic in this, any sense? Why would anybody want to do so many things? This is independent from us; we cannot do anything about it, it just happens. And believe it or not, I will tell you something interesting: this whole machinery exists for one purpose only, just to correctly manage the non-linear response of CRT monitors.
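To make this round trip concrete, here is a minimal sketch in Python. It assumes a plain power function with a 2.2 exponent as a stand-in for the real sRGB curve, and the default node settings are treated as an identity adjustment; the step numbers mirror the chain described above:

```python
def gamma_encode(v, gamma=2.2):
    # Approximation of sRGB encoding as a simple power function
    return v ** (1.0 / gamma)

def gamma_decode(v, gamma=2.2):
    # The inverse: back to linear light values
    return v ** gamma

linear = 0.18                               # scene-referred pixel value
jpeg_value = gamma_encode(linear)           # 1) encoded when saved as JPEG
in_compositor = gamma_decode(jpeg_value)    # 2) decoded on import (linear workflow)

tmp = gamma_encode(in_compositor)           # 3) encoded again inside Color Balance
# ... lift/gamma/gain adjustments happen here (identity at default settings) ...
node_out = gamma_decode(tmp)                # 4) decoded on the node's output

jpeg_again = gamma_encode(node_out)         # 5) encoded a third time when saved
on_screen = gamma_decode(jpeg_again)        # 6) the monitor decodes it for display

assert abs(on_screen - linear) < 1e-9       # we end up exactly where we started
```

At the default settings, all six steps cancel out exactly, which is the point: a lot of work for nothing visible.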
Yeah. OK, just a few words about myself. My name is Bartek Skorupa. I'm a Blender Foundation certified trainer, I sometimes publish video tutorials and write add-ons for Blender, and I run a little post-production facility where we do visual effects and 3D work, mostly for TV commercials.

Before I dive into explaining this whole procedure, I would like to talk about something more general. What is color? And to talk about color, let's talk about light. What is light? We can say that light is a wave. To be more specific, there are two waves, electric and magnetic, and they are coupled, perpendicular to each other. Every wave can be characterized by its wavelength, which is the distance between the peaks of the wave. We are surrounded by waves of various wavelengths: some may be as tiny as the size of an atom, but we can also have electromagnetic waves 100,000 kilometers long. And our eyes? We can see just a small fraction of the whole spectrum, somewhere between about 400 and 700 nanometers.

Of course, it is very uncommon in nature to encounter such a simple wave, just a sinusoid. More often the waves look like this: a combination of several wavelengths that influence one another. That is one way of graphing what a wave looks like, but we can also see graphs like these, called spectral power distributions. On the x-axis we have the wavelength, and on the y-axis we have the power at each wavelength.

Now, how do we humans see color? We have three receptors in our eyes called cones, and they are responsible for color vision. On this graph we can see how those cones respond to wavelengths. The one on the left, marked with the letter S, is mostly sensitive to wavelengths of about 440 nanometers, and of course it also picks up the surrounding signals. We can see that the responses overlap, so a given wavelength can trigger one, two, or even all three of those receptors.

So our color vision is very lossy; we cannot distinguish between colors very well. And here I would give a definition of color: the shape of the wave, the spectral power distribution, is the color. Different spectral power distributions mean different colors. But because our vision works as it works, there can be two, three million completely different colors that stimulate our cones in exactly the same manner, and we will simply not distinguish between them.

This is bad news, but it's also good news, because it lets us create our displays very simply. We can make them emit just three colors of light, and mixtures of those colors give us all of the color sensations we can experience in real life. So yes, all displays are created this way. Now, imagine we wanted to create a monitor for another species, like a dog, for example. Dogs have an even lossier color vision than humans; they have only two cones. So to make a dog see all the colors it can see, we would need just two primaries, not three. And what about this monster, the mantis shrimp, or Creative Shrimp, maybe? This guy has something like twelve cones, so it can see colors we can't even imagine exist. A monitor for this guy would be much, much more complicated than one for us humans. But we're creating displays for humans, so three colors is enough.

Let's go back in time to the ancient times when only CRT monitors existed. There is a little problem with this monitor: its response to the signal. You can see the graph on the right; the response approximately follows a curve like this, which means that if we sent the image on the left directly to the CRT monitor, it would darken it, and this is not what we want. But this is a well-known error; it has been calculated, so it's very easy to compensate for. And we do compensate for it before we send the image to the monitor: we apply a curve like this, and this process is called gamma encoding. Then the image gets sent to the monitor, the monitor darkens it, and we see exactly what we want to see.
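As a small worked example, assuming a hypothetical CRT gamma of 2.5 (a commonly cited figure):

```python
signal = 0.5
displayed_raw = signal ** 2.5       # ~0.177: sent directly, the CRT darkens it
encoded = signal ** (1.0 / 2.5)     # ~0.758: gamma encoding before sending
displayed = encoded ** 2.5          # ~0.500: the CRT's response cancels out
```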
And now the question: when do we apply this gamma encoding? The decision was made that this encoding should be applied to the image itself. So in the popular image formats, like JPEG, PNG, Targa, TIFF, the gamma encoding is included in the values of the pixels. The pixel values no longer represent the real values of light intensity. And there is a standard called sRGB, which almost everybody uses, and which best mimics the response of a CRT monitor. So: pixel values don't represent light intensity.

But one day, more modern monitors appeared. Do they have the same problem as CRTs? No, they don't; they can react to the signal linearly. So why on earth do we still live with the response of CRT monitors? I don't even remember when I last saw a CRT monitor. Well, at the time there was a majority of CRT monitors and just one modern monitor, so who should adapt to whom? Obviously, circuitry was built into the modern monitors to make their response non-linear. We break the response on purpose. That is what happens. Then more and more modern monitors appeared, CRTs are almost gone, but we still live with this sRGB stuff.

Now, again: pixel values don't represent light intensity. So what? Is there a problem? We have the image, we can display it, we see the colors exactly as we want. But there is a problem, and there are two approaches to it. The classical approach is to simply ignore it, forget about it, and just work on the values even though they don't represent light. The other approach is the linear workflow: before we do any adjustments to the image, we first fix the data, then do all the operations we want, and apply the gamma encoding afterwards.

Let's take a look at the classical approach, and let me first show you an advantage it has. When we look at these two gradients, which one appears more evenly distributed? We feel that the upper one is evenly distributed and the lower one is not. And it may surprise some of you that the one above is sRGB and the one below is linear, so the lower one actually represents an even distribution of light intensities. This brings us to human vision and the human response to light intensity: we don't respond to light linearly. We distinguish darker tones better than lighter ones. From nature's point of view this is fine, because we don't care about the difference in light intensity between the sun and the sky, but we do care when we are trying to find something in the woods; it's better for us to see darker tones well.

So here is the question: what is middle gray? We would normally say that if black is zero and white is one, then in the middle we have 0.5, or 50%. But in fact, for how we perceive light, middle gray is 19%: 19% of light intensity is seen by us as halfway between zero and 100%. And 50% in sRGB is about 21% when we are talking about light values. This is why the sRGB gradient looks more appealing to us: 21% is pretty close to our perceived middle gray of 19%.
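We can check these numbers with the actual sRGB transfer functions. Here is a sketch using the standard piecewise formulas from the spec:

```python
def srgb_to_linear(v):
    # Standard sRGB decoding (EOTF), piecewise per the specification
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    # The inverse: standard sRGB encoding (OETF)
    if v <= 0.0031308:
        return v * 12.92
    return 1.055 * v ** (1.0 / 2.4) - 0.055

print(srgb_to_linear(0.5))    # ~0.214: 50% in sRGB is about 21% light intensity
print(linear_to_srgb(0.19))   # ~0.473: perceived middle gray lands near 50% sRGB
```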
So for example, when we apply a curves adjustment to our image: take a look at the shape of this curve, a very popular adjustment, the S shape, which increases the contrast a little bit. We brighten the bright colors and darken the dark colors, and you see that the middle of the curve, the point we perceive as middle gray, is in sRGB values pretty close to 50%. So this shape of the curve looks right and behaves the way we expect.

But, there's always some but. Here is an image, and we want to make it twice as bright. What should we do? Thinking logically, to make something twice as bright, let's simply multiply the values of the pixels by two, or do something equivalent: add the image to itself. So we take this image, take a second instance of it, use the add operation, and we get this. It's blown out, and this is not natural behavior; this is not how the image would look if it were lit by twice as much light. So when working classically in sRGB color space, we have math where two plus two equals ten.

So we invented crazy blend types, blending modes, like for example screen. Whenever I hear the 'screen' blending mode, I want to scream, in fact. This is strange math; screen is sometimes called 'add without clipping'. And you see how this looks: it also doesn't work the same as if we had added the light values together. Let's look at some other blending modes that exist. Overlay, for example: you see that we have a condition, whether something is below 0.5 or above 0.5, so it simply works differently on each side. And this is very important, because that 0.5 is pretty close to our perceived middle gray, since it is 50% in sRGB. So these blending-mode algorithms work well in sRGB space. Soft light: again, you see the same condition.

Now, how about something like this: lift, gamma, gain. Here it is called what it should be called, lift/gamma/gain, but in other programs you may see it labeled shadows, mid-tones, highlights, which is not precise. The lift/gamma/gain algorithm works such that when we adjust lift, we don't operate only on the shadows; we operate on the whole range of colors, but on the shadows the most. The same applies to gamma (mid-tones the most) and gain (highlights the most). This formula, this algorithm, also works best in sRGB color space.

So how about the linear workflow? In the linear workflow, the first thing we do is fix the data: we gamma decode, we un-gamma the image. Blender, for example, uses a linear workflow, which means the values we operate on are the real values of light. So we can very easily add two images together and get exactly the result we want. Then, before we output to some file format, we apply the gamma again so that our monitors can display it correctly. So this is the linear add: this is what would really happen if we added the values of the light. And here is a comparison. Maybe the difference, especially between screen and linear add, is not that obvious, but believe it or not, there is a difference. Okay, I messed up the slide order because I already talked about this.
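For reference, here are per-channel sketches of the blend formulas I mentioned, for values between 0 and 1; the exact definitions vary slightly between applications:

```python
def screen(a, b):
    # "Add without clipping": never exceeds 1.0, but not a physical sum of light
    return 1.0 - (1.0 - a) * (1.0 - b)

def overlay(a, b):
    # The 0.5 threshold assumes 0.5 is perceptual middle gray, i.e. sRGB values
    if a < 0.5:
        return 2.0 * a * b
    return 1.0 - 2.0 * (1.0 - a) * (1.0 - b)

def linear_add(a, b):
    # In linear (scene-referred) space this is the physically correct "add the light"
    return a + b
```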
Oh, but what about lift/gamma/gain in linear space? In Blender we have the node called Color Balance, where we can use the lift/gamma/gain operation, and because we know it works best in sRGB space, this is, I think, the only node that does the following: it converts the linear image into sRGB, makes the adjustments, and then on the output it un-gammas it again. A pretty crazy thing, and that's why we have all the mess I showed you at the beginning. But we shouldn't use lift/gamma/gain directly in linear space. We should never do this; it mangles the data, it shouldn't be used, period. Instead, if we want to work in linear space and have similar behavior, we should change the formula to offset/power/slope. I will not go into the details of how those algorithms work, but this is the better one to use in linear space.
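For the curious, though, here is a rough per-channel sketch of both. The offset/power/slope form follows the standard ASC CDL definition; the lift/gamma/gain formula shown is just one common variant, since applications differ in how they define it:

```python
def offset_power_slope(v, slope=1.0, offset=0.0, power=1.0):
    # ASC CDL-style grade, intended for linear (scene-referred) values
    return max(v * slope + offset, 0.0) ** power

def lift_gamma_gain(v, lift=0.0, gamma=1.0, gain=1.0):
    # One common formulation, intended for sRGB-encoded values: lift raises the
    # whole range (shadows the most), gain scales it (highlights the most),
    # and gamma bends the mid-tones the most.
    return ((v + lift * (1.0 - v)) * gain) ** (1.0 / gamma)
```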
What about the blend types? We have blend types in Blender, and they are copies of the blend types that exist in other programs, like Photoshop at its default settings or After Effects at its default settings. Blender was not always linear, and when those blend types were implemented, they worked fine. Then Blender became linear, but the blend types stayed untouched. So some of them no longer make sense. Screen, for example, is nonsense now, because we have the proper add in a linear workspace. And overlay will not work correctly, because now 0.5 is not middle gray; it's just 50% of the maximum intensity. So it doesn't work as it should, and the same goes for soft light and some other blend types.

Okay, color management in Blender. How is color managed in Blender? This shows us that color management in Blender is not treated, in my opinion, as it should be. The settings for color are scattered all around. We can see settings here, for example, in the Scene tab, where we have a Color Management panel with several options. When we import images, so here we are in the Image Editor, we can for example change the color space, how the image is interpreted. When we are creating materials, here again we have something to do with color management: how we interpret the data and how it will be handled later. So it's a little bit crazy: one place, a second place, a third place. And here we can choose between 'Color' and 'Non-Color Data', which really doesn't say much about what we are doing. In my opinion it would be better to use the proper names, like sRGB, Linear, Rec.709, or something like that. 'Color' means we interpret the image as being sRGB; 'Non-Color Data' means we interpret it as being linear. That's the difference, but it is not very clear; you simply have to know this to be able to work with it correctly.

Okay. Now, we know that human vision is not linear. And sometimes we would like to take more dynamic range than we normally have and somehow squeeze it into the space of the monitor. Of course, if we simply took a bigger range, with values way, way beyond one, and squeezed it linearly into the monitor's range between zero and one, that would be stupid: we would lose detail in the shadows and devote a lot of data to the highlights, where we don't see much difference anyway, even when the differences are huge. That's why we have something called log encoding. It has been with us for ages; I don't know when it was used for the first time.

When we encode logarithmically, we get an image like this, which looks really flat. But what is good about this image is that we keep the detail in the shadows and we keep the information about the detail in the highlights. A pretty small adjustment to the image can bring all of this together, and we don't get clipping; without logarithmic encoding we would probably have clipping in the sky.

Now, log encoding in Blender. This is something I would like to show you. Let me first go to Blender 2.79, and this is the default Blender 2.79. Thanks to Troy Sobotka, we have Filmic Log Encoding in Blender. It has been implemented in Blender 2.79, but in my opinion it has been implemented the wrong way; this is not the way Troy intended. Here we have 'Default' and so on, but we should have something else: we should have this, and here we have Filmic Log Encoding. This is Troy Sobotka's original design. So to make Blender 2.79 work as it should with Filmic Log Encoding, we should still go through the whole procedure we used in Blender 2.78: simply install the configuration manually. Yes, manually, thank you. Then it will work properly.

So let's imagine, oh, this is the default 2.79, this is my stuff. You see that the lights illuminating this object are way too bright; it doesn't look very nice. But if we change this to Filmic Log Encoding, we are simply grabbing more data and squeezing it into the range between zero and one, and of course the image looks very washed out. That is okay, because afterwards, after the render, we can do the final color correction, the final color grading. Here we have the setting called Look, where we can choose some contrast, and then we are looking at approximately what the final image will be.

The procedure when using Filmic Log Encoding is this: the encoding happens after everything else has been done. When we do compositing and so on, that happens before the encoding is applied. So when we are using curves, combining render passes and so on, we can work normally; then the encoding happens; and if we want final adjustments, final color correction, we should render out to some file format, the best being 16-bit TIFF, and then color correct that 16-bit TIFF, but this time using sRGB encoding. And when we save images to EXR, even with those looks applied, that data is not passed along: we simply get 32-bit information covering the whole range of the scene.
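For scripting this setup, something like the following should work. This is only a sketch assuming the stock Blender 2.79 names; with Troy's original configuration installed, the view transform names differ (for example 'Filmic Log Encoding Base'):

```python
import bpy

scene = bpy.context.scene

# View transform and look names depend on the active OCIO configuration;
# these are the stock Blender 2.79 names.
scene.view_settings.view_transform = 'Filmic Log'
scene.view_settings.look = 'None'   # keep the flat log image for later grading

# For the graded hand-off, the 16-bit TIFF mentioned above:
scene.render.image_settings.file_format = 'TIFF'
scene.render.image_settings.color_depth = '16'

# Saving to OpenEXR instead bypasses the view transform entirely: you get
# the full 32-bit scene-referred linear data, with no look baked in.
# scene.render.image_settings.file_format = 'OPEN_EXR'
# scene.render.image_settings.color_depth = '32'
```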
So I hope that now we all at least understand why all these crazy things are happening, and maybe it will help us understand the differences. Perhaps somebody has noticed that something done in Photoshop looks completely different from the same thing done in Blender. I don't want to judge which workflow, linear or sRGB, is better; both have advantages and disadvantages. But it's simply good to know the differences and good to know how to handle these things. So thank you very much.

No, wait, let me show you, because the first question, from Sebastian, was: what is the difference between the default configuration in 2.79 and Troy's original configuration? Of course, the name, that's one thing. But here, for example, when I set the Look to None, it looks exactly the same as Base Contrast. Yes, Base Contrast; let me maybe increase the power of my sun lamp. Base Contrast, None: no difference whatsoever. So I cannot get a really flat log file for color correction later. That's how it works.

And the other thing: False Color is here in the 2.79 default settings as a view transform, which is where it shouldn't be. Take a look at the difference, bam. When I switch to false color, I am looking at the false colors of sRGB. False color, by the way, is a color-coding system that shows which parts of the image are lit more and which are lit less. When we see white areas, it means we have overexposure at those points. So to set the power of our lamps, it's sometimes good to use this lookup. And now look here: False Color sits here, so we can have either Filmic, or False Color, or sRGB, which is called 'Default' here. This is bad. I don't even know if this Filmic is the real Filmic. And again: Base Contrast and None, no difference, while we do get differences when we use the other contrast looks. This is why I claim it has been implemented wrongly: you don't have access to the flat log data, and that is what you want.

Yes? Okay, yes, there is a workaround, because of course I can save the file as EXR and then I will have the raw data. I misused the word 'raw'; you're 100% right, those are the raw data, all the values above one and so on. What I mean is that I want to export the real log-encoded data, and that is something I cannot do with the default Blender 2.79 configuration. It's impossible. Those are the data I was talking about: a flat image, completely flat, without any contrast. And here we can see the difference between Base Contrast and None. This is what I call raw. Raw log, let's say, something like that.

Sorry, I didn't hear the question. I must say that I don't always use this log encoding; in some cases I do, in some cases I don't. Log encoding is good when you have big differences in lighting conditions, for example sun from the outside and some lamps inside a room. Without log encoding it is impossible to properly grab all of that range and squeeze it into screen space. But sometimes, when I am creating packshots that are very evenly lit, there's really no use for log encoding. So it's an option, not a must.

Yes. That was a long question; let me try to repeat it somehow. It is all about consistency. For example, you want to work cross-platform. Of course, when you want to work on the real values of light, you save everything to EXR and you have access to all the data without any changes. Sometimes, however, you get material from a camera, let's say an Arri Alexa, that uses logarithmic encoding. Then you want your data coming from Blender to be consistent with that other data. So when you want to work in another application, like Nuke, you use this Filmic Log Encoding with the None look, because that gives you data that is maybe not exactly, but very similar to what comes from the camera. Then exactly the same adjustments can be applied to one image and to the other, and they will work together. Yeah, yeah.
Okay, so thank you very much once again.