All right, welcome, and good morning everybody to this presentation at the Blender Conference about our path to high-end 3D vehicle visualization, which I gracefully subtitled "a year of a journey through this referred mess".

Who are we? We are a design company based in Austria with three subsidiaries, in China, the United States and Germany. We are specialized in the fields of transportation design, product design and brand development, and we connect all of that up under one roof. To do that, we are more than 200 creative people at the moment in our main office, coming from 30 different nations. The company name, if I didn't mention it, is Kiska.

So where were we one year ago? We had already been doing 3D visualizations for online configurators for our main client, the KTM sports motorcycle company; you might have seen them around if you went on the website and tried things out. Approximately a year ago they approached us and asked if it would be possible to generate, let's say, more sophisticated, higher-resolution material for them. They were aiming at replacing the photo shootings for the press kit material, the informative material that's printed in folders and that kind of stuff. And they were constantly challenging us with an animation from a competitor, asking if we could do something similar at the same quality level.

We had some time to kill, and we came up with the animation which hopefully plays in a couple of seconds here. Don't worry, there is no audio to it; we just focused on the visuals. We took the 3D dataset which we already had in place for the configurators, put it into an environment and rendered out this 30-second clip, which is a simple box-mapped animation. So that's where we already were one year ago.

Now, a lot of this presentation is going to be about Troy Sobotka. We actually never met the guy in person.
We just happened to be in contact with him via email and via chat rooms. But how come we even knew the name? Last year I gave a talk here about using Blender in the design business, and at the end I posted a wish list for the Blender developers. The last point was actually about the documentation, and half as a joke I put his name into the presentation. I was asking if he was in that day; I was hoping to actually meet up with him and discuss a few things which we came across during the visualization process. His name came from two of the, in my opinion, most important wiki pages regarding color management on the Blender wiki: one about color management in general, and the other about handling the alpha channel, which was really giving us issues all the time. While reading through those pages we had the impression that this guy seemed to have the answers to the questions which we still had. He wasn't at the conference itself, and I went back home to Austria, but one week later he happened to send an email to our reception, asking to make contact and to answer a couple of the questions.

A little sidestep to filmic Blender itself. The first thing I was able to trace down in its history was a question asked on Blender Stack Exchange; the link is posted here. The user was asking how to render a wide dynamic range in Cycles and how to properly handle it. I was a bit curious about the question, because I wondered what was wrong with it; I actually had no idea what the real issue was. In the answer you can see a couple of the images which I'm presenting here.

One big problem with the implementation until 2.79 was what's called the default view transform.
It's an sRGB transform which basically does the nonlinear part of transforming the pixels in the image, but it doesn't handle the higher-level scene-referred data at all; it simply clips whatever happens above it. Filmic, as many of you already know, I guess, uses a much higher dynamic range and maps a much, much higher top value to the 1.0 display-referred value that you are going to judge on the screen. And the main issue with the default view transform is basically that if you're using it, you are basing your lighting decisions, when you're creating an image, on something which in terms of scene-referred values has been lit by candlelight. You're using far too low intensities, and the shaders never actually do what they were coded for.

The second thing which filmic Blender addresses is the desaturation part. We all know this effect from taking a photo: we overexpose it, by accident or on purpose, and we get these huge white, blown-out areas. But if you think about this from a mathematical point of view, it's actually really complex. Think about an RGB triplet which only has intensity in one of the three channels, like (0.8, 0, 0) in this example. Raise it up really high, throw any number of curves at it: it's not going to change in either of the other two channels. It cannot; it's mathematically impossible. Filmic resolves that by using a three-dimensional lookup table, which is stored somewhere on your system once you have it installed, and it calls on that whenever the intensity values rise above a certain level.
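To make this concrete, here is a small plain-Python sketch (no Blender involved; the clipped gamma curve and the blend amount are just illustrative assumptions, not filmic's actual math). Any per-channel tone curve leaves a pure-red triplet pure red, no matter how hot it gets, while a cross-channel operation, the kind of thing a 3D lookup table can encode, moves all three channels:

```python
# A per-channel tone curve touches each channel in isolation.
def tone_curve(value, exponent=1.0 / 2.4):
    # any monotone 1D curve will do; a clipped gamma curve as an example
    return min(value, 1.0) ** exponent

pure_red = (0.8, 0.0, 0.0)

# Push the intensity way up, then apply the curve channel by channel.
hot_red = tuple(tone_curve(c * 100.0) for c in pure_red)
print(hot_red)  # (1.0, 0.0, 0.0) -- green and blue stay zero, still pure red

# A cross-channel blend towards white (a crude stand-in for a 3D LUT entry):
def blend_towards_white(rgb, amount):
    # every channel is pulled up, including those that started at zero
    return tuple(c + (1.0 - c) * amount for c in rgb)

white_ish = blend_towards_white((1.0, 0.0, 0.0), 0.7)
print(white_ish)  # (1.0, 0.7, 0.7) -- only this kind of operation can reach white
```

No matter which exponent or curve shape you pick for `tone_curve`, a zero channel stays zero; that is the mathematical impossibility mentioned above.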
This is a comparison from the same question on the same topic. At the top is the default view transform; you can see the scene-referred value ranges here, 0.4, 0.8, 1.6, so the pixels are getting really hot, but the colors do not change at all; they stay just the same. When you use filmic and you run the intensities up, first of all you don't hit the 1.0 red that quickly, but second of all, when you raise it further it turns into white, and white is only achievable if you change all three channels at the same time.

So, down the rabbit hole we went. If you know Troy, you know that he's got a chat room on Stack Exchange which he titled "The Rabbit Hole". Its sole purpose is to discuss color and color management issues. We hooked up with him, and one of the first questions that he asked me was: RGB (1, 0, 0) — what does that mean? If you had to guess, what would you say? A red color. A red color: that's the main answer you get if you ask that question, but the sad thing is, it actually means absolutely nothing unless we know the metadata behind it. If we know the chromaticities of the three primary colors which we are using to represent color, if we know where the white point is, and if we know what kind of view transform has been used to map those colors into the value range, then we know if it's a red or not. And especially, we know — and that's the question you need to ask yourself — which red. Unless we know these three pieces of information, we don't even know if we are within a color space, because if you're just working with linear values, you're not in a color space. I'll come to that in a little second.

What you see here is an xyY chromaticity diagram.
It has a rather cryptic name, and its explanation goes back to an experiment which was done in the year 1931. The main thing to point out is that our way of mixing color has three primary colors, which sit at certain locations in what is known as the absolute color space, and there is a white point that they refer to. When you're dialing these values, you are basically shifting your target point around inside of that triangle. The main thing to know is: you cannot get outside of the triangle, no matter what values you put in, unless you put in negative values, which is anyway a bad idea. If you are within the triangle, you can say that your color is within gamut; if you are outside of it, it's out of gamut. Out of gamut is an issue, because you cannot mix that color.

This is just a reference image with a second color space. The outer black triangle is the Ultra HD TV one, Rec. 2020; the inner one is the one that you're actually seeing today, an sRGB transform. Just for reference.

So why do we have all of these color spaces? Each kind of monitor that we use has certain hardware, that hardware mixes the colors using LEDs, and those LEDs have primary colors. By definition, if you are on an sRGB monitor, they need to have a very certain shade of this color. Unfortunately, in reality, due to manufacturing tolerances — how much money goes into this LED — it can differ quite dramatically. That's why, when you buy a monitor, you sometimes see in the spec sheet that this monitor has 80% coverage of sRGB. That's a cheap monitor.
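To illustrate that a triplet only gets its meaning from its primaries, here is a small Python sketch using the published linear RGB-to-XYZ matrices for sRGB and Rec. 2020 (both with a D65 white point). The identical triplet (1, 0, 0) lands on two different chromaticities, that is, two different reds:

```python
# RGB -> XYZ matrices (linear light, D65 white point); rows are X, Y, Z
SRGB_TO_XYZ = [
    (0.4124, 0.3576, 0.1805),
    (0.2126, 0.7152, 0.0722),
    (0.0193, 0.1192, 0.9505),
]
REC2020_TO_XYZ = [
    (0.6370, 0.1446, 0.1689),
    (0.2627, 0.6780, 0.0593),
    (0.0000, 0.0281, 1.0610),
]

def chromaticity(rgb, matrix):
    # project the triplet through the matrix, then normalise XYZ to (x, y)
    X, Y, Z = (sum(m * c for m, c in zip(row, rgb)) for row in matrix)
    total = X + Y + Z
    return (round(X / total, 3), round(Y / total, 3))

red = (1.0, 0.0, 0.0)
print(chromaticity(red, SRGB_TO_XYZ))     # (0.64, 0.33)  -- the sRGB red primary
print(chromaticity(red, REC2020_TO_XYZ))  # (0.708, 0.292) -- a much deeper red
```

Same numbers in, two different points on the chromaticity diagram out; without knowing which matrix (i.e. which primaries and white point) applies, "RGB (1, 0, 0)" tells you nothing.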
I'll quickly skip over this one, because all I really want to say about it is: scene-referred data is what I used to think of as the energy values when you are rendering. When you do a rendering in Cycles and you get values back, which you then hopefully output to an OpenEXR image, those values are the scene-referred energy values of the pixels. The display-referred data at the bottom is the result of that data being run through the display transform: sRGB is a display transform, filmic is a display transform. So display-referred data sits at the end of that chain, and in our case it's mostly non-linear color that we are using.

So it was time to really get down to it and ask Troy some questions, things which had been bothering us all the time. Most of the questions were actually more about the alpha channel and less about color, and what we also learned is how he answers questions. I'm referring here to a question which was also asked on Blender Stack Exchange, by a different user; it's the exact same question we had, and we didn't find the solution ourselves, but with the help of Troy we figured it out.

Imagine this: you have an object with a shadow catcher behind it, you have a background of an arbitrary non-black color, and you want to alpha-over that in Blender. You do, and you get this image; it looks great, that's what you want. Then you save that out as an 8-bit PNG, so you have an alpha channel in the image. You load it up in Photoshop, or in our case the web browser, which is the most painful compositing application you can imagine; you give it the same background color, you alpha-over by using layers, and you get this. Who of you has run into that issue? None of you? A few maybe... one in the background, great. Did you figure out why? Great, this is your day.

So we came back to him, asking him that question. The first thing he asked us was: I can see your renderings online.
Somehow you are handling it; how do you do it? And we said: basically, we split out the color information and the alpha channel, and then we manually color-manage the alpha. His answer was quite specific on that, because by definition the alpha channel is always strictly linear. There is never a non-linear transformation thrown at it; it doesn't matter if you're in 8-bit or in something else, it's always linear.

A few days later he came back to me and said: this is amazing, what a puzzle you gave me. I don't understand why I didn't come across this earlier, and it's so fascinating. But because it's so fascinating, I'm not going to give you the answer; you should figure it out yourself. But I'll give you a few hints. The hint he gave me was what I said earlier, that alpha is always strictly linear, and this is the key to understanding what the problem is. I'm not going to bother you with the formulas, because there's not so much to say about them. The only reason I put them here is: if you put in the values of a shadow catcher object and you do the math, you will find that in both cases you get the same result. It's absolutely identical, because the RGB is always black. And I said, I still can't figure it out. A few days later I had this moment where I actually realized what's happening.

If you look at this graph here, I've tried to explain schematically what it is. We have a background color.
We have a linear image. The top line here, by the way, is the linear composition, if you use the Blender compositor for instance. If you just look at the schematic view and judge the data: the image RGB data is linear, the alpha channel is linear. So we have a linear image, we alpha-over that over something linear, then we run it through the sRGB display transform and throw it at the screen, and everything's good.

When you save out that image, you actually save it out at this location, because the image has the non-linear transform burned into it. Then you open that in Photoshop or in the web browser, but the components which you get are a non-linear RGB and a linear alpha, because by definition alpha is strictly linear. You take that, you alpha-over it over the background, and what you get, of course, is a shadow whose coverage has been composed linearly over pixels which have been composed in sRGB space. That's why the shadow is darker. The only real chance you have is, before you save the image out, to do exactly what we did: hand-color-manage this channel. There's no other option. It's not the nicest of all fixes, but it's pretty much the only one that works. Keep in mind that this only works if you are in an sRGB color space; if you are in another color space, you need to handle it accordingly.

Another thing we came across is this. We always had a couple of render layers which were masking each other out: the red dragon is masking out the metal one, the metal one is masking out the red one. We use alpha-over to merge them, then we alpha-over that over the white background, and we get ugly white seams. Now, if you think that's a Blender problem: try the same in V-Ray with Nuke as the compositing application, and it looks exactly the same. It's not a software bug; in this case it's actually a simple user bug.
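Coming back to the shadow example for a second: the mismatch can be reproduced numerically. Here is a small plain-Python sketch; the pixel values (a half-opaque black shadow over a mid-grey background, one channel only) are made up for illustration:

```python
def srgb_encode(x):
    # the sRGB display transform: linear value in, non-linear display value out
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1.0 / 2.4) - 0.055

shadow_rgb, shadow_alpha = 0.0, 0.5  # a pure shadow pixel: black, half coverage
background = 0.5                     # linear background intensity (one channel)

# Correct path: alpha-over in linear, THEN apply the display transform.
correct = srgb_encode(shadow_rgb + background * (1.0 - shadow_alpha))

# Broken path: the saved file carries sRGB-encoded colour next to a linear
# alpha, and the browser then composites in display space with that alpha.
broken = srgb_encode(shadow_rgb) + srgb_encode(background) * (1.0 - shadow_alpha)

print(round(correct, 3), round(broken, 3))  # the broken shadow comes out darker
```

The correct composite ends up noticeably brighter than the broken one, which is exactly the darker shadow from the slide; the hand-managed fix amounts to compensating the alpha channel for that transform before saving.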
We were doing the wrong thing: when you set up render layers like that, the alpha operation to merge those two is not meant to be alpha-over. In fact, you simply add all the components together. The node group in the schematic view down there shows how to do it: alpha A plus alpha B gives you a perfect alpha in the end, which you can then use to alpha-over the result over the background, and the seams are gone. More information about these compositing operations is linked here. And one side comment: if you're using Natron or Nuke as your compositing application, their Merge node has a mode called "disjoint-over" which does all of this for you; you just need to switch to it and it works.

Briefly about monitor characterization and profiling. That's also something we did together with him, which is quite ridiculous: a guy sitting in Vancouver, I believe, telling us remotely how to correctly calibrate our monitors. And it worked out. The only thing I want to point out here is this image, which is also in the Blender wiki. If you look at those lovely blue and green colors, stare at them for a few seconds, and then take a closer look, you will realize that this chart here blends perfectly with the top and blends perfectly with the bottom, because it's the exact same color. It's an optical illusion, of course, and the effect is called chromatic context.

The reason I put this in here is for two scenarios. One: sometimes people think they can just throw up an image and then hand-tune their monitors, and that's the calibration. Please don't. These hardware devices are really built for a reason, because your eyes are relative; they always take the surrounding colors into the judgment. And the second thing: maybe you've run into the issue that you render something overnight, you start color correcting in the morning, you go for lunch, two hours later you come back, you start color correcting again, and you think your decisions from the morning were all wrong. You start to counter-correct, and you counter-correct and counter-correct. Then you go for dinner, but unfortunately your client wants it for tomorrow, so in the night you counter-correct what you did in the afternoon, and the next day you start the process all over again, because what you did in the night is not correct. If you happen to run into this, check the lighting conditions in your office space, because chromatic context has an influence there. Whether it's tungsten light or neon light or daylight or morning light does make a difference.

More information can be found in this Stack Exchange question here, which explains how to do the profiling and how to actually handle it. The ICC profile which you get is not used by Blender by default. This is actually correct handling, because other applications in the VFX industry don't use it either. The question linked before explains how to properly implement the ICC profile you get, because you need to do something with it.

Cool, that was the first part of the presentation. Now I believe it's time to look at some output examples. Everybody still awake? Okay. Let me start with the KTM 1290 Adventure S. I put this project in on purpose.
It's actually a bit more than a year old, and the reason to put it in is that this was the last project we did without filmic. So this is the default Blender view transform at its best; compositing was done in Nuke, and there was a lot of color correction done to it. What you see as the result is not bad, but you always have the impression that there is a high number of light sources pointing at it from all directions, and this is the effect which you usually see in the older Blender renderings which are around. When you start looking at the higher-intensity areas, like this area here, you will notice that there is just white left; there is no color information that could have been used. And this was always the death you needed to die with the default view transform: either you focus on nice darks and skip the bright areas, or you focus on the bright areas and you totally lose the dark ones.

With filmic, that actually changed. The 250 and the 390 Duke, which comes later, were the first projects where we used it in production. It's the same guy who did the renderings, and we could already see the change in the results after a week of using it; it was really amazing how much it influenced them. Here on the 250 Duke I want to point out the white parts together with the black parts, especially where they sit next to each other, like in the reservoir bottle down here or in the spring. This albedo color and this albedo color and this albedo color can finally be the same. This was not possible before; we had to cheat all the time. Things which are not so obvious, but which you also get with filmic, are these beautiful soft roll-offs on smooth surfaces; it looks much more natural than it did before. And also details like, again, white part, white part, bright chromey areas in the headlight, and the white spring: it's all the same thing.

So you still need to color correct, of course, because not every lighting decision you take is going to be correct, but it's just much less hassle, and things just behave. You know, when you try to change the exposure by dramatic values, you still get something that works.

One last thing to point out, which we learned when doing these renderings: when you do press kit material, there are legal conditions attached to it. You cannot just fake some text on the tire anymore; you can't just write, I don't know, "Kiska performance tires" on it, like we did in the beginning. It has to be the real text, because if you do something wrong, somebody can sue you; they can claim you're selling something which doesn't exist.

The 390 is from the same project. I just put it in to show what we do and what the retouchers do — in German they are called Reinzeichner, and I didn't find a proper translation for it; for us they are a bit like color graders. If you slide across here: this is the rendering that we give them, and that's what they make of it, which then goes out for printing in the folder. What you can see is that it's really color correction and intensity correction at a small level; there are no cables photoshopped in, no detail added. All of that is in the render already, and that's only possible because using filmic actually gave us the time to do that.

The next project is the 690 Duke R. It's also special, because it was basically about revising an old set of data and trying a few things out. So what did we do? First of all, this is the first bike where we used the Principled Shader. The other ones were done using our own PBR implementation, which was great, and everything was fine, but the Principled Shader — congratulations to all the people who contributed to it — this one is really, really good. Also in terms of noise clean-up behavior, when you wait for a few seconds, it's really consistent.
It's better than the old components which we had, at least I feel. The second reason to do this project was to test whether we could use Natron as the compositor. For the press kit images we are not using the Blender compositor, because we really need the OCIO color nodes for the output section; that's why we use either Nuke or Natron, because there you have them. And yeah, it worked out. Was it fun? That depends on the resolution. I mean, this is 6k by 4k in the original material; you need to wait some time, let's put it this way.

Finally, what we can say is that using filmic really brought the fun back to the compositing stage, because you've got so much room to play. You can really change things last minute, and that was something we had a hard time doing before. I think you can see that in the results.

The last project I want to show is the 1290 Super Duke R. It's also a bit of an older project, but there was a funny story about it, because this one happened to be featured in The Red Bulletin, which is a magazine published by Red Bull. They usually only focus on racing stuff: if somebody wins the next rally, they put it in; if there's a MotoGP bike, it's for sure in that magazine; if somebody jumps from space down to earth, they put it in — but not production bikes. This one was the first one they ever printed in their magazine, as far as I know at least, and I'm not sure if the guys knew that what they were printing was actually a Blender rendering, through and through.

So, finally, what is there to say? A huge thank you to all the developers who are developing this awesome tool. Blender is fantastic, and what they did in the last year is an amazing journey. Thanks for the shadow catcher to everybody who contributed.
I put in vodka for Sergey, who asked me yesterday which one he wants — and then he tells me he doesn't drink vodka, so I need to find something else. The Principled Shader: amazing tool. The denoiser has changed the game. And also everything that went into ID remapping, because we use that through and through, we really use it a lot; the day it stopped you from losing just about everything when one reference wasn't found was a day we marked red in the calendar.

Also a big thank you to Troy Sobotka, because he supported us through the year in every way that he could and gave us really a lot of insight into topics which we had only heard about before. It was really great to have these conversations. I was in the chat room with him a lot, in the evenings and at night, and my girlfriend already thought I was having a secret love affair or something. No, it was just color discussions.

And finally, thank you for coming and joining this presentation. I hope you learned something, and I hope you enjoyed it. Thanks a lot.