In this scene, we have a single point light at the center of this box, moving back and forth and casting shadows. We're capturing its shadows into a shadow map, but unlike in the previous example, the shadow map is captured into a cube map rather than just a two-dimensional texture, because the shadows are cast in all directions, and that's why we need a cube map.

So here we're setting up the framebuffer for our shadow map like we did before, but instead of a 2D texture, we're creating a cube map, and so we have to call glTexImage2D for each of the six faces of the cube map, each time specifying GL_DEPTH_COMPONENT, because we're only capturing depth information in this texture, not colors. Also remember that for a cube map, we have to specify texture wrapping in three dimensions, not just two: S, T, and R. Having configured our cube map, we attach it to the framebuffer just like we did in the prior example with the regular texture.

Now, one way we could render the shadow map is to make six separate draw calls, each time binding a different layer, a different face of the cube map, and effectively rendering only into that face for that call, using a different camera matrix for each of the six directions. However, using so-called layered rendering in a geometry shader, there's a way to do it in just one draw call, and that's what we do here.

Into the shadowTransforms vector, we push six different combinations of our projection matrix, shadowProj, with view transforms looking in the six different directions. Notice that the projection matrix this time is perspective rather than orthographic: unlike with the directional light, where the light rays all run in parallel, the rays in a perspective projection emanate out from a central point. Notice also that our field of view is 90 degrees, just like we saw with cube maps before.

When we render, all the transforms are pushed into a uniform array, shadowMatrices. There's actually no great reason we created the shadowTransforms vector at all; we could have just set the elements of this uniform array directly instead of pushing the matrices into a vector first, but whatever. We also send uniforms for the far plane and the position of the light, and then we render our shadow map. The renderScene call simply renders a bunch of cubes.

Looking at the vertex shader, it simply transforms all the vertices into world space. But our shadow map is also rendered with a geometry shader, which takes in triangles and emits triangles. For every input triangle, we emit it six times, but for each of the six copies, we set the special variable gl_Layer to a different face value, from zero to five. gl_Layer controls which layer of the framebuffer the primitive is rendered into, so each triangle gets rendered once into each of the six layers of the cube map, and each copy uses a different one of the six projection and view transforms. The clip-space coordinates of each copy of the triangle will therefore differ. A triangle that is entirely in bounds of one of the six frustums, but not the others, will effectively get clipped away in those other five, so it won't show up on those faces at all, which is of course the behavior we want.
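To make that concrete, here's a rough sketch of the setup code just described. This isn't necessarily the example's exact code: names like depthMapFBO, depthCubemap, SHADOW_WIDTH, and lightPos, and the near/far plane values, are placeholders of mine.

```cpp
#include <glad/glad.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <vector>

// Assumes a GL 3.2+ context is current and that SHADOW_WIDTH,
// SHADOW_HEIGHT, and lightPos are defined elsewhere.
unsigned int depthMapFBO, depthCubemap;
glGenFramebuffers(1, &depthMapFBO);
glGenTextures(1, &depthCubemap);

glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubemap);
for (unsigned int i = 0; i < 6; ++i)
    // one glTexImage2D call per face: depth only, no color
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT,
                 SHADOW_WIDTH, SHADOW_HEIGHT, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE); // the third wrap dimension

// Attach the entire cube map (all layers at once) so gl_Layer can select faces.
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFBO);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthCubemap, 0);
glDrawBuffer(GL_NONE); // depth only: nothing to draw to or read from
glReadBuffer(GL_NONE);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// One 90-degree perspective projection combined with six view transforms.
float nearPlane = 1.0f, farPlane = 25.0f;
glm::mat4 shadowProj = glm::perspective(glm::radians(90.0f), 1.0f, nearPlane, farPlane);
std::vector<glm::mat4> shadowTransforms;
shadowTransforms.push_back(shadowProj * glm::lookAt(lightPos, lightPos + glm::vec3( 1, 0, 0), glm::vec3(0,-1, 0)));
shadowTransforms.push_back(shadowProj * glm::lookAt(lightPos, lightPos + glm::vec3(-1, 0, 0), glm::vec3(0,-1, 0)));
shadowTransforms.push_back(shadowProj * glm::lookAt(lightPos, lightPos + glm::vec3( 0, 1, 0), glm::vec3(0, 0, 1)));
shadowTransforms.push_back(shadowProj * glm::lookAt(lightPos, lightPos + glm::vec3( 0,-1, 0), glm::vec3(0, 0,-1)));
shadowTransforms.push_back(shadowProj * glm::lookAt(lightPos, lightPos + glm::vec3( 0, 0, 1), glm::vec3(0,-1, 0)));
shadowTransforms.push_back(shadowProj * glm::lookAt(lightPos, lightPos + glm::vec3( 0, 0,-1), glm::vec3(0,-1, 0)));
```

And the layered-rendering geometry shader might look something like this, with shadowMatrices holding the six transforms described above:

```glsl
#version 330 core
layout (triangles) in;
layout (triangle_strip, max_vertices = 18) out; // 6 faces x 3 vertices

uniform mat4 shadowMatrices[6];

out vec4 FragPos; // world-space position, passed on to the fragment shader

void main()
{
    for (int face = 0; face < 6; ++face)
    {
        gl_Layer = face; // which cube map face this copy renders into
        for (int i = 0; i < 3; ++i)
        {
            FragPos = gl_in[i].gl_Position;               // already in world space
            gl_Position = shadowMatrices[face] * FragPos; // clip space for this face
            EmitVertex();
        }
        EndPrimitive();
    }
}
```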
For triangles that straddle the border between multiple frustums, the triangle automatically gets clipped and effectively split amongst the frustums it's a part of, so the parts of a triangle that properly belong to different frustums only render into those respective faces, which again is the behavior we want. This is a little confusing, so let me say it again: each triangle is effectively rendered six different times, using the six different projection and view transforms from the shadowMatrices array. Because each copy is assigned a different gl_Layer, each one renders onto a separate face of the cube, and thanks to automatic clipping, the triangles only show up on the appropriate faces.

Now, in the fragment shader, what we want to store as the depth value is the distance from the light to the frag position, but depth component values get clamped into the range of 0 to 1, so we want to scale these distances into that range. We divide the distance by the far plane distance, and that is what we manually assign to gl_FragDepth. Recall that for the directional light, we didn't manually set the frag depth; we just let the depth values get set automatically. But in that case we used an orthographic projection, where the depth values stay linear across the range of 0 to 1. Here we're using a perspective projection, so the automatic depth values would still land in the range of 0 to 1, and they would still reflect the distance from the light position to the frag position, but they'd be scaled non-linearly, and when we used those depth values to render the scene, we'd have to undo that scaling. It's just easier for us to keep the values linear, which is why we set gl_FragDepth manually.

So that's how we capture the shadow map. Now, when we render the scene and use the shadow map, we need to remember to bind the depth cube map as one of the textures. We'll also need the far plane as a uniform: because we divided by the far plane in our shadow map, we'll want to multiply by the far plane to unscale the values.

In the vertex shader here, everything is standard stuff we've seen already, except for one thing: we're doing a little trick to render the outer cube. We're using the same cube geometry for the outer cube and all the cubes inside it, but for the outer cube, which forms the walls, ceiling, and floor of our scene, we're seeing the back side of the surfaces, so the normals effectively point the wrong way. For our lighting calculations to be correct on the walls, floor, and ceiling, we need to reverse the normals of the outer cube, so we have this uniform, reverse_normals, which is set for that outer cube; when it's true, we just flip the sign of the normals. It's a little trick so we don't have to define separate cube geometry for the outer cube.

As for the fragment shader, the main function is just like we saw in the directional-light shadow mapping example. We call this ShadowCalculation function to get a value that's either 0 or 1, with 0 meaning in shadow and 1 meaning not in shadow, and so either we nullify the diffuse and specular components when the shadow value is 0, or, when it's 1, the diffuse and specular contribute to the lighting. What's different here is inside the shadow calculation: our shadow map is a cube map, so we supply a vec3 as the texture coordinates, and that vector is the vector from the light position to the frag position.
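Before moving on to that shadow calculation, here's a sketch of the shadow-pass fragment shader from a moment ago, the one that writes gl_FragDepth manually. The uniform names are assumptions of mine:

```glsl
#version 330 core
in vec4 FragPos;         // world-space position from the geometry shader

uniform vec3 lightPos;   // world-space light position
uniform float far_plane; // far plane of the shadow projection

void main()
{
    // Distance from the light to this fragment, mapped linearly into [0, 1].
    float lightDistance = length(FragPos.xyz - lightPos);
    gl_FragDepth = lightDistance / far_plane;
}
```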
Note that this light-to-frag vector doesn't have to be normalized, because the magnitude of the vector doesn't matter, just its direction. From our texture call, we just want the R component; the G and B components are actually just 0, and R is what contains the depth value. The value we stored in the texture is in the range of 0 to 1, though, and we want to unscale it back into its original range so that it's comparable against the length of light-to-frag, so we just multiply it by the far plane.

We then return 0 in the case where the stored depth is less than the length of light-to-frag. Otherwise, logically at least, they should be equal, and we return 1, but because of imprecision they're not going to be exactly equal, so we'll also want to add in a bias factor like we did with directional-light shadow mapping. In that case we were dealing with depth values solely in the range of 0 to 1, but now these values are in the range of the near plane to the far plane, so typically we're going to need a larger bias value. Here, 0.05 is actually the same value we used before, but in this case, looking around the scene, everything looks fine to me, so it seems our bias is okay.

To get soft shadows, we can use a percentage-closer filtering technique similar to the one we used before, and we're going to need offsets for our sample vector, so we store this array of offsets. It's basically all the permutations of 0, 1, and -1, except note that we're omitting (1, 1, 1) and (-1, -1, -1): those would effectively just shift the magnitude of the sample vector rather than its direction, and the magnitude doesn't matter when we sample from a cube map, only the direction does. So we omit those two vectors. Using 25 offsets does mean we're going to be doing 25 different samples, which is probably overkill; we could pick a smaller set of offsets, say 10 or so, and get basically the same results, but for simplicity we'll just use these 25.

In the calculation, we use these 25 offsets to get 25 different samples, applying each offset onto our light-to-frag vector. Note that light-to-frag is normalized here so that the offsets have a consistent effect regardless of the distance between the frag position and the light position, and we control the magnitude of the offsets with this radius factor. Without the radius, if we applied a vector like (-1, 0, 1) onto our normalized vector, we'd get a vector pointing in a totally different direction, nowhere close to where our original vector is pointing. We want vectors pointing in nearly the same direction, but not exactly the same direction, so we pick a radius factor that shrinks our offsets significantly, in this case 1/500. The larger the radius, the larger the offsets, and so the larger the area we sample from for the blur, effectively getting more blur; the smaller the radius, the tighter the sample area, and so the less blur.

Because it generally looks better to have shadows far from the camera blurred more than shadows close to the camera, we scale this radius factor based on the distance from the frag position we're rendering to our camera position, viewPos. The clamp function here sets a minimum of 0.2 and a maximum of 6 for the scaling factor.
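Putting that together, the whole shadow calculation might look roughly like this. The offset list, the exact clamp-based radius scaling, and names like depthMap and viewPos are my reconstructions from the description above, not necessarily the example's code:

```glsl
// All permutations of {-1, 0, 1} in three dimensions, minus (1,1,1) and (-1,-1,-1): 25 offsets.
const vec3 sampleOffsets[25] = vec3[](
    vec3(-1,-1, 0), vec3(-1,-1, 1), vec3(-1, 0,-1), vec3(-1, 0, 0), vec3(-1, 0, 1),
    vec3(-1, 1,-1), vec3(-1, 1, 0), vec3(-1, 1, 1), vec3( 0,-1,-1), vec3( 0,-1, 0),
    vec3( 0,-1, 1), vec3( 0, 0,-1), vec3( 0, 0, 0), vec3( 0, 0, 1), vec3( 0, 1,-1),
    vec3( 0, 1, 0), vec3( 0, 1, 1), vec3( 1,-1,-1), vec3( 1,-1, 0), vec3( 1,-1, 1),
    vec3( 1, 0,-1), vec3( 1, 0, 0), vec3( 1, 0, 1), vec3( 1, 1,-1), vec3( 1, 1, 0)
);

uniform samplerCube depthMap;
uniform vec3 lightPos;
uniform vec3 viewPos;
uniform float far_plane;

float ShadowCalculation(vec3 fragPos)
{
    vec3 lightToFrag = fragPos - lightPos;
    float currentDepth = length(lightToFrag);
    vec3 sampleDir = normalize(lightToFrag); // normalized so the offsets behave consistently

    float bias = 0.05;
    float viewDistance = length(viewPos - fragPos);
    // Scale the base radius of 1/500 by camera distance, clamped to [0.2, 6].
    float radius = clamp(viewDistance, 0.2, 6.0) / 500.0;

    float shadow = 0.0;
    for (int i = 0; i < 25; ++i)
    {
        // Stored depth is in [0, 1]; multiply by far_plane to restore its original range.
        float closestDepth = texture(depthMap, sampleDir + sampleOffsets[i] * radius).r * far_plane;
        if (currentDepth - bias <= closestDepth)
            shadow += 1.0; // this sample says we're lit
    }
    return shadow / 25.0;  // 1.0 = fully lit, 0.0 = fully in shadow
}
```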
So when our camera gets very close to the point being rendered, such that the clamp returns 0.2, we'll effectively be shrinking the radius by a factor of five. But when the camera is far away from the point, the radius gets scaled up by at most a factor of 6, giving us larger offsets and thus blurrier shadows for points further away.

So that's how we get the vectors that we sample with. Each time through the loop, we get a depth value, which, like before, has to be scaled up by the far plane distance, and based on this comparison, we add either 0 or 1 to the shadow value. Shadow, then, will be a value between 0 and 25; divide that by 25 to get the average, and that is our shadow value.

So now we have softer shadows. From further away there's a stronger softening effect, a bigger blur, but as we get closer, the blur tightens up. The effect is a little subtle, but if I come back here and don't scale the radius based on the distance to the camera, and instead just make this a fixed 6, the radius will be fixed at the largest value it could have had before. Rebuild, run the code again, and the shadows from a distance are just as blurry as they were before, but as we get closer, they don't sharpen up the way they did before. The effect, I suppose, is still a little subtle, but if you compare the two side by side, you'll notice the difference, and if we picked a different scale of values, the effect would be less subtle.