At a point in our scene, if we render a cube map and then convolve that cube map, which basically means to blur it, the convolved cube map is a good approximation of the ambient indirect light in the scene that reaches that point. So when rendering objects near this point in space, we can get ambient light values by sampling from this cube map using each surface point's normal.

In this example, we're rendering the spheres again, except instead of just using a fixed ambient value, we're sampling the ambient value for each point from a convolved version of the cube map you see rendered as our skybox. On this side of the spheres, they're also getting hit by four direct lights, so the difference is a little subtle, but if we go around to the back, you can see that the spheres are significantly brighter than they were before, particularly the spheres down at the bottom, the less metallic ones, because they pick up more diffuse light. This more accurately grounds the objects in the scene, because they're picking up the indirect light bouncing off everything in the scene back here. The side-by-side comparison makes the difference clearer: on the left, we just have a low fixed ambient value, but on the right, the ambient value is being read from a cube map.

Once we have our irradiance map, our ambient environment map, making use of it is quite straightforward. In the fragment shader for rendering our spheres, we have the samplerCube uniform, and then to do our ambient light calculation, we sample from the irradiance map using the normal of the point. We mix this with the albedo, but we also need to know what proportion of the ambient light is diffuse rather than specular, so we use the Fresnel-Schlick approximation once again. There's no actual single point light source here, but for the sake of this calculation, we imagine the light source to be at the view vector reflected off the surface, so n effectively serves as our halfway vector; that's why here we're plugging in n·v. F here effectively represents the proportion of ambient light energy that is specular, and from that we can compute kD, the proportion of the light energy that is diffuse. To get our final ambient value, we factor in any ambient occlusion from our AO texture, and then our final color is the ambient plus Lo, the radiance from the direct lighting. So using the irradiance map in this example is fairly simple.
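As a sketch in GLSL, the ambient part of that fragment shader might look roughly like this (the uniform and variable names are illustrative, and N, V, F0, albedo, ao, and Lo are assumed to be computed earlier in the shader, as in a standard PBR setup):

```glsl
uniform samplerCube irradianceMap;   // the convolved environment cubemap

vec3 fresnelSchlick(float cosTheta, vec3 F0)
{
    return F0 + (1.0 - F0) * pow(1.0 - cosTheta, 5.0);
}

// ...inside main(), after accumulating Lo from the direct lights:
// there's no single light vector here, so n stands in for the halfway
// vector and we plug in n·v.
vec3 kS = fresnelSchlick(max(dot(N, V), 0.0), F0);  // specular proportion
vec3 kD = 1.0 - kS;                                 // diffuse proportion
vec3 irradiance = texture(irradianceMap, N).rgb;    // sample by the normal
vec3 diffuse    = irradiance * albedo;
vec3 ambient    = kD * diffuse * ao;                // factor in ambient occlusion
vec3 color      = ambient + Lo;                     // ambient plus direct lighting
```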
To be clear, though, again: irradiance maps are only really accurate for rendering the precise point at which the irradiance map was captured in the scene, but for rendering points nearby, they're still fairly accurate, especially if the geometry captured in the environment map is far away from that point. If your environment map is captured from very close geometry, then as we move away from the capture point, the environment map gets less accurate more quickly. So these irradiance maps work better when they're captured in the middle of a large room rather than a small room, or even better in the middle of a large open environment. One trick to get more accurate results is to generate irradiance maps at different points in your scene, and then interpolate between them when rendering points in between.

Anyway, let's look at how to actually construct our irradiance map. In this example, we're using an environment map captured from a photo, and it's provided not in cubemap form, but rather as an equirectangular map. It's a single rectangular texture, but it represents the scene in all directions, and there's a simple formula that translates a three-dimensional direction vector into UV coordinates we can use to sample from this map. Now, we could just use this formula to sample from the equirectangular map directly, but the formula adds a bit of overhead at runtime, so it's better to convert the map into a proper cubemap and use that instead.

The first thing we need to do is load the environment map, which is in the HDR format. Fortunately, STBI knows how to load it, and because it's HDR, the color values can exceed 1, so we want the internal format to be 16-bit floating point. We then set up the framebuffer into which we're going to render the cubemap, and we create six textures, one for each of the faces. Note, though, that we're not yet attaching any of these textures to our framebuffer; we'll do that when we render. Here we're setting up the projection transform and the view transforms for the six faces, and then here we do the actual rendering onto the cube faces, and each time we do, we attach a different one of the faces as the color attachment. Note that we set the viewport to 512 by 512 because those are the dimensions of each face.

As for the shaders, in the vertex shader we're outputting the world position, and then in the fragment shader, the interpolated world position is the direction vector for that pixel of the cubemap. So we have this function here, sample spherical map, which takes in a three-dimensional vector and spits out a UV coordinate, which we use to sample from the equirectangular map, and that gets us the color for the pixel of the cubemap.
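Condensed into C++, that capture pass might look something like this. This is a sketch, not the example's exact code: captureFBO and envCubemap are assumed to have been created already (a framebuffer with a depth attachment, and a GL_RGB16F cubemap with six 512 by 512 faces), renderCube is a hypothetical helper that draws a unit cube, and the file name is a placeholder:

```cpp
#include <glad/glad.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <stb_image.h>

// Load the equirectangular .hdr image as floating-point data.
stbi_set_flip_vertically_on_load(true);
int width, height, nrComponents;
float *data = stbi_loadf("environment.hdr", &width, &height, &nrComponents, 0);

unsigned int hdrTexture;
glGenTextures(1, &hdrTexture);
glBindTexture(GL_TEXTURE_2D, hdrTexture);
// HDR color values can exceed 1.0, so use a 16-bit float internal format.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, width, height, 0, GL_RGB, GL_FLOAT, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
stbi_image_free(data);

// One 90-degree projection plus six view matrices, one per cube face.
glm::mat4 captureProjection = glm::perspective(glm::radians(90.0f), 1.0f, 0.1f, 10.0f);
glm::mat4 captureViews[] = {
    glm::lookAt(glm::vec3(0.0f), glm::vec3( 1.0f,  0.0f,  0.0f), glm::vec3(0.0f, -1.0f,  0.0f)),
    glm::lookAt(glm::vec3(0.0f), glm::vec3(-1.0f,  0.0f,  0.0f), glm::vec3(0.0f, -1.0f,  0.0f)),
    glm::lookAt(glm::vec3(0.0f), glm::vec3( 0.0f,  1.0f,  0.0f), glm::vec3(0.0f,  0.0f,  1.0f)),
    glm::lookAt(glm::vec3(0.0f), glm::vec3( 0.0f, -1.0f,  0.0f), glm::vec3(0.0f,  0.0f, -1.0f)),
    glm::lookAt(glm::vec3(0.0f), glm::vec3( 0.0f,  0.0f,  1.0f), glm::vec3(0.0f, -1.0f,  0.0f)),
    glm::lookAt(glm::vec3(0.0f), glm::vec3( 0.0f,  0.0f, -1.0f), glm::vec3(0.0f, -1.0f,  0.0f)),
};

// Render onto each cube face in turn, attaching it as the color attachment.
glViewport(0, 0, 512, 512);   // match the dimensions of each face
glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
for (unsigned int i = 0; i < 6; ++i)
{
    // ...set the shader's projection/view uniforms from captureProjection
    // and captureViews[i] here, then:
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, envCubemap, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    renderCube();
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```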
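And the fragment shader for this conversion might look like the following sketch, with the sample spherical map function doing the direction-to-UV translation (the two constants are 1/2π and 1/π):

```glsl
#version 330 core
out vec4 FragColor;
in vec3 WorldPos;   // interpolated from the vertex shader; the direction for this pixel

uniform sampler2D equirectangularMap;

const vec2 invAtan = vec2(0.1591, 0.3183);  // 1/(2*pi) and 1/pi

// Translate a 3D direction vector into UV coordinates on the equirectangular map.
vec2 SampleSphericalMap(vec3 v)
{
    vec2 uv = vec2(atan(v.z, v.x), asin(v.y));
    uv *= invAtan;
    uv += 0.5;
    return uv;
}

void main()
{
    vec2 uv = SampleSphericalMap(normalize(WorldPos));
    FragColor = vec4(texture(equirectangularMap, uv).rgb, 1.0);
}
```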
So now we have our environment map in cubemap form, and we want to convolve it, which means basically the same thing as blurring it. This convolved cubemap is our irradiance map. We create a cubemap texture for it and set up the six faces. Our color values are still HDR, so we still need 16-bit floating point, but we're going to make this blurred version much smaller, shrinking it down from 512 by 512 to 32 by 32. Because the irradiance map doesn't have any high-frequency detail, we can get away with a much smaller resolution and still get basically the same results.

Having created our irradiance map, we then want to render into it, just like we did to create the original cubemap, except now we're using the same vertex shader but a different fragment shader. Again, the world position from our vertex shader is the direction vector for this pixel, and the irradiance is the color value, which we write out at the very end. To get our convolution, our blur, imagine that we sample not just along the normal, but also average in other sample vectors from the same hemisphere. To generate these samples, we'll create a tangent space around the normal. It doesn't matter how the tangent and bitangent are aligned relative to the normal; they just need to be orthogonal. So we get our tangent, the right vector, by taking the cross product of some arbitrarily picked vector and the normal, and that gets us a tangent we call right. We then get the bitangent, which we call up, by taking the cross product of n and right.

In these loops, phi is the angle that we're rotating around the normal. We're going 360 degrees around the normal, so that's 2 pi. Our samples are then rotated up to 90 degrees away from the normal. Given these two angles, this gets us the sample vector in tangent space, and this converts it to world space, which we can then use to sample from our environment map.

Note, though, that our samples are not averaged equally. The more a sample diverges from the normal, the closer it gets to the rim of the hemisphere, the less it contributes, and at the rim, in fact, its contribution is zero. That's why we multiply by both the cosine and the sine of theta. That gets us a weighted irradiance value, which we add to the total. Then, after the loop, to get our average we divide by the number of samples but multiply by pi. Again, I won't explain the precise math, but note that the samples we took are weighted such that only the normal itself has full weight; all the others are diminished. So if we just divided by the number of samples, the irradiance value would be much too small, and multiplying by pi compensates for that.

So that's how our environment map gets convolved: the value in each direction is weighted together with all the diffuse ambient light within the hemisphere around that vector. And that's how we can render objects with ambient light that grounds them in the scene, with the indirect light bouncing off the environment factored into our rendering.
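Putting those pieces together, a sketch of that convolution fragment shader might look like this (the step size sampleDelta, and therefore the sample count, is an assumption; the example's values may differ):

```glsl
#version 330 core
out vec4 FragColor;
in vec3 WorldPos;   // the direction this pixel of the irradiance map represents

uniform samplerCube environmentMap;

const float PI = 3.14159265359;

void main()
{
    vec3 N = normalize(WorldPos);
    vec3 irradiance = vec3(0.0);

    // Build a tangent space around N; how right and up are aligned
    // relative to N is arbitrary, they just need to be orthogonal.
    vec3 up    = vec3(0.0, 1.0, 0.0);
    vec3 right = normalize(cross(up, N));
    up         = normalize(cross(N, right));

    float sampleDelta = 0.025;
    float nrSamples   = 0.0;
    // phi rotates 360 degrees around the normal; theta rotates the
    // sample up to 90 degrees away from it.
    for (float phi = 0.0; phi < 2.0 * PI; phi += sampleDelta)
    {
        for (float theta = 0.0; theta < 0.5 * PI; theta += sampleDelta)
        {
            // Spherical angles to a vector in tangent space...
            vec3 tangentSample = vec3(sin(theta) * cos(phi),
                                      sin(theta) * sin(phi),
                                      cos(theta));
            // ...then tangent space to world space.
            vec3 sampleVec = tangentSample.x * right
                           + tangentSample.y * up
                           + tangentSample.z * N;
            // Weight each sample by cos(theta), diminishing toward the rim,
            // and by sin(theta) to account for sample density near the pole.
            irradiance += texture(environmentMap, sampleVec).rgb
                        * cos(theta) * sin(theta);
            nrSamples++;
        }
    }
    // Dividing by the sample count alone would give a value that is much
    // too small, since the samples are weighted; multiplying by pi compensates.
    irradiance = PI * irradiance * (1.0 / nrSamples);

    FragColor = vec4(irradiance, 1.0);
}
```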