When we map color values to numbers from our darkest color to our lightest, say, black being 0 and white being 1, there's the question of how to scale the values in between. If we stay in accord with how light acts in the physical world, such that as the number doubles, the amount of light doubles, we get a scale like the one on the bottom here. The problem with this scale, as you can see, is that in the upper range everything looks approximately the same to human perception. Humans just can't distinguish between very bright things nearly as well as they can distinguish between grades of darkness. So if we want a scale that better accords with human perception, we'd have something like the top here, where the values scale with their perceived brightness rather than their actual physical brightness in terms of quantity of light. On this scale, 0.5 looks like the halfway point between 0 and 1, even though the actual halfway point in terms of physical light would be considerably brighter. Not only does this perceptual scale better match what we expect, it effectively gives us more precision where it matters, because we can make finer distinctions between dark shades. Conversely, we can make fewer distinctions among bright shades, but we can't perceive those differences as well anyway. This perceptually linear scale is often called gamma space, whereas the physically linear scale is called linear space because it's linear in terms of the amount of actual light. Most displays today use the sRGB (standard RGB) color space, which is approximately equivalent to gamma with an exponent of 2.2. The relationship between linear space and this standard gamma space is that we get the linear space equivalent of a gamma space value by raising it to the power of 2.2. For example, for 0.5 in sRGB, its linear space equivalent is 0.5 raised to the power of 2.2, which is approximately 0.218.
Conversely, to go in the other direction, to convert from linear space into sRGB, the standard gamma space, you raise the linear space value to the power of 1 over 2.2. So, for example, if we take the linear space value 0.218 and raise it to the power of 1 over 2.2, that gets us back 0.5. The diagram here shows these two curves. If the straight dotted diagonal is our linear space values, then the top curve shows their gamma space equivalents. But if the dotted diagonal line is gamma space values, then the bottom curve shows their linear space equivalents. The problem for us in 3D rendering is that we typically do our lighting calculations in linear space because we want to correctly model how light is additive: if we, say, double the amount of light on a pixel, we want the brightness to double in the physically correct way. That's why we use linear space. The problem is that our linear space color values are being read by an sRGB display, which interprets them as gamma space, and so they're going to be displayed darker than we actually want. Again, note in the two scales that for a given value, the gamma space color is considerably darker than the equivalent linear space color. To fix this, we should apply gamma correction: we convert our color output from linear space to gamma space by raising all the color values to the power of 1 over 2.2. We can see the effect of this in Advanced Lighting Example 2, gamma correction. At first it's disabled, but if I hit space, it'll enable, and notice that the scene is considerably brighter. This is off. This is on. This is gamma corrected. The simplest way to get gamma correction in OpenGL is to call glEnable(GL_FRAMEBUFFER_SRGB), which automatically applies gamma correction to the frame buffer before it's displayed.
If though we want custom control over the gamma correction process, which in some cases we do, then instead of enabling GL_FRAMEBUFFER_SRGB, we can apply the gamma correction ourselves in our fragment shaders. In this example, before we do our frag color output, if gamma is enabled, then we convert our colors from linear space to gamma space by raising them to the power of 1 over 2.2. So that puts our colors in gamma space. Another thing we're doing is changing how light attenuates, because the physically correct thing to do is to attenuate by 1 over the distance squared. But when we don't correct for gamma, that makes things look too dark, and so typically the attenuation used is 1 over distance. If we are using gamma correction, though, then we should do the more physically accurate thing and attenuate by 1 over distance squared. There's one more gamma related issue, and that is the question of our textures. Are the colors of our textures defined in terms of gamma space or linear space? Typically they'll be in gamma space, because that's what artists prefer to work in. So if we're doing our rendering in linear space, we need to convert the textures into linear space. We could do that conversion here in the fragment shader when we read from the texture, but of course, rather than redo the conversion every frame, it's better to do it up front when we load the texture. Conveniently, in OpenGL, when we load a texture and specify the internal format, if we specify GL_SRGB or GL_SRGB_ALPHA as the internal format, then OpenGL will do the conversion from gamma space to linear space for us. Until now, we've only been using GL_RGB and GL_RGBA, for which OpenGL does no conversion; it just loads the texture as is. So in our example here, we're actually loading the same texture twice, once as is, and once with the sRGB internal format. Then in the rendering, when gamma is enabled, we use the sRGB-loaded texture.
Let's look at that example one more time. Here gamma correction is turned off, and now it's enabled. Now let's look at Advanced Lighting Example 6, HDR, which stands for high dynamic range. What's happening here is that at the end of this tunnel, the lighting calculations are producing RGB values that exceed one, but thanks to the HDR tone mapping we're performing, these overlit pixels aren't blown out. We can see the texture detail. If I disable the HDR tone mapping, we get this. Everything is blown out, and we can't discern much detail. Now, to apply this tone mapping, we could do it directly in the fragment shader that renders our polygons, but we're going to implement it here as a post-processing effect. In that case, the color attachment of the frame buffer we do our original rendering into can't use the default RGB format, because in that format, values outside the range 0 to 1 are automatically clamped. In this frame buffer, we want to store the values that exceed one, and so the internal format we're going to use is RGBA16F, as in 16-bit floats, and in this format, the values are not capped at 1. So that's how we set up the frame buffer. We render into it, and then we use the color attachment of that frame buffer as a texture to render onto a quad for our final output image. In the fragment shader for rendering onto this quad, we apply our post-processing effects. In this case, first we do the tone mapping using a simple algorithm called Reinhard tone mapping, where we take the color value and divide it by that color value plus one. So for example, black, which is all zeros, would still be zero. 0.5 becomes 0.5 over 1.5, which is 0.33. 1 would be 1 over 2, so it becomes 0.5. And as our values exceed one, this ratio will approach the limit of one but never reach it.
Because the numerator here is always smaller than the denominator, the result value never reaches one. Note that this mapping makes everything darker, but small values, like say 0.1, don't get scaled down by much. Medium values, like say 0.5, get scaled a bit more, but it's the values close to one, and those that exceed one, which get scaled the most. So our darker to medium colors get compressed into a somewhat smaller range, but the bright colors close to one get compressed a lot more, and that creates space for values that originally exceed one. These bright values that exceed one are getting compressed into a relatively small range, but at least now they're all less than one. Having applied our tone mapping, while we're at it here, we can also apply gamma correction, and that gets us our final output color. Another algorithm is the so-called exposure tone mapping. In this algorithm, we negate the color, multiply it by some exposure value, which in our example will just start off as one, but we'll see the effect of raising and lowering it. We then take that product and pass it to the built-in exp function, which finds the natural exponentiation, that is, the value of e raised to this exponent, and then we subtract that from one. This is what the curve of the exp function looks like, and because we flip the sign of the color, the inputs to exp are always going to be negative or zero. So for zero, zero, zero, total black, we'll get back one, but then as our color value gets larger and larger, even potentially exceeding one, exp is going to approach the limit of zero. So exp here always returns a value between zero and one, larger values for darker colors and smaller values for brighter colors, and that's why we subtract it from one. Exposure values between zero and one will effectively darken our colors, but exposure values greater than one will make them brighter. So now if I rebuild and run the program again, the result looks pretty similar to what we had before, though not exactly the same.
You would notice the difference if we compared them side by side. Our default exposure here is one. We can raise it with E to make things brighter, but I can also make things darker with Q by holding that down, and you can see in the top left the exposure number going down. It's at about 0.5 right now. We can also use an exposure modifier with Reinhard tone mapping. Here I'll rebuild. Particularly when we raise the exposure, here raising it up to two, you'll note that the effect is not as pronounced as it was for the same exposure value with the so-called exposure tone mapping. But as you can see, the concept of the exposure modifier still works. Here's example seven, which demonstrates a bloom effect. I'll toggle it off, on, off, on. What's happening here is that we're rendering to two color buffers: one rendered as normal, but into the other, we only render colors above a certain brightness threshold, and everything else remains black. We then blur the image in the second buffer, and for our final image, we additively blend the blurred image onto the regular render. This does mean that a lot of pixels get their color values basically doubled, but after we apply tone mapping, the resulting colors are only a little brighter than the original. So now we're going to need a frame buffer with two floating point color attachments, again 16-bit floating point. And we're going to make sure that the wrapping clamps to edge so that our blur doesn't wrap around, Pac-Man style. We don't want that. Note that we have to call glDrawBuffers and specify that we're outputting to two color buffers rather than just the usual one. Then when we render the scene, there's really nothing here we haven't seen before. In the shaders, the vertex shader is the same as usual, nothing special. But in the fragment shader, note that we have two outputs, not just the usual frag color. We also have bright color for the second color buffer.
And we do our lighting as normal, basically, ending up with a result color. But then we compare that result color against another constant color, and the comparison is done by taking the dot product. When that dot product exceeds one, we write the color out to the second buffer; otherwise we just keep it black. Either way, we always output the color to the first color buffer. Note though that the glowing boxes in our scene are being rendered with a different fragment shader. For these boxes, there's just a uniform light color rather than any lighting calculations. But again, if the dot product of that color with the constant color exceeds our threshold of one, then we also output it to the second color buffer. Next, to blur the second buffer, we're using a so-called Gaussian blur. This is a two-pass blur in the sense that it separately blurs in the horizontal dimension and the vertical dimension. So here in this loop, we're actually iterating 10 times, applying the two passes five times each: horizontal, then vertical, then horizontal, then vertical, back and forth. In each pass, we're basically doing a post-process effect of rendering from one buffer into another. For this, we've created an array of two frame buffers, called our ping-pong frame buffers, an array with indexes zero and one. And in C++, when you use true where an integer is expected, it's interpreted as 1, and false as 0. So again, we're rendering into one of these ping-pong frame buffers while reading from the other. In the fragment shader, the Boolean uniform horizontal specifies whether this is a horizontal pass or a vertical one. In a horizontal pass, we're sampling from 10 different texture coordinates in the same row and blurring them together. In vertical passes, we make 10 different samples in the same column and blur them together. Note the five weights that are used; the values diminish left to right.
In each iteration of these loops, we're getting a pair of pixels, starting with the two adjacent pixels and then working outwards in both directions. We want the inner pixels to contribute the most and the outer pixels to contribute the least, and that's why the weights diminish in each subsequent iteration. So that's the gist of this Gaussian blur; I won't belabor the details here. The advantage of this technique over what we've seen before is that in the convolution kernel variant, for each pixel we read nine samples: the sample for the pixel itself, and then a sample in each of the eight directions. For the blur we want here, though, each pixel needs a wider area of samples, in this case a 10 by 10 grid, which would mean 100 samples per pixel, and that's just too much. With this Gaussian technique, we can effectively blur each pixel over a larger grid by separately blurring in the vertical and horizontal dimensions. It can be shown that this process gives similar results, but instead of 100 samples per pixel each time we blur, we're effectively only doing 20, 10 in each dimension, and so it's much cheaper. Anyway, once we've applied our blur, we then need to render our final output. We do so, of course, into the default frame buffer, and we bind two textures: the first is the non-blurred image, the regular render, and the second is the ping-pong buffer we last applied our blur into. Then in the fragment shader for this final compositing, we sample from both of the buffers, and when bloom is enabled, we add the two colors together, apply tone mapping, do gamma correction, and that is our final output. And that's how we get this bloom effect. It's on, off, on, off, on.