So far in our rendering we've been ignoring the question of shadows: when we render a point, we just imagine that all the light sources are hitting that point, regardless of what might be between the point and the light. If we want to account for shadowing, well, the obvious solution is to cast a ray from the light position towards the point and see if it collides with anything before hitting that point. Here in this diagram, the light ray hits the top box before it reaches the one on the bottom right, and so the light ray never reaches the point on the bottom-right box. The test for whether a point is in shadow is then simply a matter of comparing two distances: if the distance the light ray travels before it collides with something is less than the distance from the light to the point we're rendering, then the ray must have struck something before it reached the point, and so the point is in shadow. Otherwise, if the two distances are equal, the light does reach the point.

So the obvious solution to our problem is to trace rays: cast a bunch of rays from the light sources and see what they hit. Again, though, ray tracing is generally still too expensive for real-time rendering these days, and so the alternative in rasterization is to construct what's called a shadow map. In this example the light source is a directional light, so the light rays all cast down in parallel onto the scene. We can capture the depth values of these light rays if we simply render an orthographic projection from the position of the light: the depth values are then all captured in the depth buffer of that rendering. When we then render the scene proper, we can sample from this depth buffer to get those depth values and compare each against the distance from the light to the point we're rendering. This is done most simply by transforming our point here, P, into the coordinate space of the light itself. That both gets us the distance from the light to the point and effectively gets us the UV coordinates we need to sample from the depth buffer, our shadow map. Using those UV coordinates, we sample from the shadow map, getting back another depth value, here in the diagram C, and if that value C is less than P's depth, then the point P is in shadow.

So that's the general idea of shadow mapping, and we can get results like this. Here in the scene we have a single directional light casting down, and a shadow map is being generated from that position. Then, when we render the scene from our camera position, that shadow map effectively tells us which points are in shadow and which aren't. That's why these points here are only being lit by the ambient factor: they're not being lit at all by the diffuse or specular factors.

Now, you'll notice the edge here doesn't look particularly great. We could alleviate this by just using a higher-resolution shadow map, though that of course would be more expensive. There are also techniques for getting a better blur on the edges of our shadows when we render them; in this case we're using a fairly crude technique for getting a blurred edge. There are a lot of tricks to get better results, but we'll just demonstrate the basic idea.

So the first thing we'll need is a framebuffer for capturing our shadow map. We're generating a new framebuffer here and creating a depth texture, a texture of type GL_DEPTH_COMPONENT rather than RGB. It'll have our SHADOW_WIDTH by SHADOW_HEIGHT dimensions, which is just 1024 by 1024. It'll use nearest-neighbor filtering, and we'll just set its wrapping to repeat for the moment. Then we attach it to our framebuffer as the depth attachment, because for this framebuffer all we care about is the depth values.
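Before going further with the GL setup, the core comparison described above can be sketched in plain C++, independent of any graphics API. This is only an illustration of the idea; the function and parameter names here are ours, not from the tutorial code.

```cpp
#include <cassert>

// Illustrative sketch (names are ours): the shadow test boils down to
// comparing two distances measured from the light.
//   distToFirstHit - how far the light ray travels before hitting anything
//                    (this is what the shadow map will store for us)
//   distToPoint    - how far the point we're shading is from the light
inline bool inShadow(float distToFirstHit, float distToPoint) {
    // Something sat between the light and our point, so the ray never
    // reached the point: the point is in shadow.
    return distToFirstHit < distToPoint;
}
```

When the two distances are equal (within precision), the light reaches the point; that "within precision" caveat is exactly where the artifacts discussed later come from.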
We're not actually rendering out color, so we disable drawing to the color attachments by calling glDrawBuffer(GL_NONE) here. We're also never going to be reading from a color attachment for this framebuffer, so we also call glReadBuffer(GL_NONE), though since we're not doing any read operations from this framebuffer anyway, that's not strictly necessary.

Now, in the rendering loop, every frame we first render the shadow map. Because our light is directional, our rendering is going to be orthographic, so we use glm::ortho here to set up the projection matrix. The view matrix we get by calling glm::lookAt from the light position pointing towards the origin, because that's the direction of our light. We combine the two to get what we'll call the light space matrix. This gets us from world space into clip space, or actually, because the projection is orthographic, clip space and NDC are one and the same, so it also gets us to NDC. We set that uniform for the light space matrix, and before rendering we make sure to set the viewport to match the resolution of our shadow map, 1024 by 1024, bind the right framebuffer, and clear the depth buffer. Notice we don't have to clear a color attachment, because there is no color attachment. And in renderScene here, you can see we're just rendering a bunch of cubes and a flat plane, the floor, so the model matrix is set for each of them, and then in the vertex shader we use that model matrix to transform everything into world space, and from world space we transform into clip space, which again is also NDC in this case, using the light space matrix. But then in our fragment shader we're actually not doing anything, because there's no color output to generate; there's no frag color output. All we want is the depth value written into the depth buffer, which is done automatically for us.

Now the shadow map texture is full of depth values in the range of zero to one. Our light space matrix gets things into NDC, which is in the range of negative one to positive one, but z values then get scaled to zero to one for screen space, so our final output depth values are in the range of zero to one. Also remember that, unlike in perspective projection where z values are scaled into a non-linear range, in orthographic projection the z values remain simply linear.

So now we have our shadow map, but before actually using it to render our scene, let's just display it on a quad so we can see what it looks like, and that's what we're doing here. We bind the depth map as a texture, and when we render, the vertex shader simply passes the texture coordinates through as usual; nothing is going on there, and our quad coordinates are already in clip space, so we don't need to transform them. Then in the fragment shader we sample from the depth map using the texture coordinates. texture always returns a vector with r, g, b, and a components, but in this case, for our depth map, there's just one component, the depth component, which is stored as r; if you try to access g, b, or a, you'll just get the value zero. So we always want to read r here, and that gets us the depth value. Again, because our projection was orthographic, the depth value is simply in the linear range of zero to one, so we don't need to do any funny business: we just want to output a grayscale value where depth values of zero give us black and depth values of one give us white.
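That zero-to-one depth story can be mirrored with ordinary arithmetic. A minimal sketch, with function names that are ours rather than from the tutorial code:

```cpp
#include <cassert>

// Illustrative sketch (names are ours). With an orthographic projection,
// NDC z in -1..1 maps linearly into the 0..1 range the depth buffer stores.
inline float ndcDepthToBuffer(float ndcZ) {
    return ndcZ * 0.5f + 0.5f;
}

// The visualization shader just echoes the stored depth as grayscale:
// depth 0 renders black, depth 1 renders white. No extra remapping is
// needed because orthographic depth is already linear.
inline float depthToGray(float storedDepth) {
    return storedDepth;
}
```
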
So that's what this does here. If we build and run, then we see this. I've set the output resolution here to 1024 by 1024, because that is the resolution of our shadow map, and remember, this is the perspective of our directional light, which is up high in the scene pointing down towards the origin. You can see the bottom box is darker than the top one because, from the perspective of the light, it's closer: it's actually hovering off the floor, whereas the one on top is sitting on the floor. And the points on the floor itself towards the top of the window are further away from the light, while towards the bottom they get closer, which is why we have this gradient getting darker towards the bottom.

Now, to actually use our shadow map in rendering the scene: here first we render the shadow map just like we saw, and then when we render the scene, we pass in the light space matrix, because we're going to need to transform the points we're rendering from world space into the coordinate system of the light itself; that's why we pass this in, and we also make sure to set the depth map as one of the textures. In the vertex shader, all of this is like what we've seen before when we've done fog lighting, when we do ambient, diffuse, and specular lighting. There's just one new thing here: we take fragPos, which is the world-space coordinate, and use the light space matrix to transform it into light space, and we get fragPosLightSpace as one of our outputs, because we're going to use that in our fragment shader. And in the fragment shader, what we're doing in main is simply computing ambient, diffuse, and specular like we have before, so there's nothing new there. But when we combine the diffuse, specular, and ambient, instead of simply always adding them together, diffuse and specular are multiplied by this shadow value returned from shadowCalc, which is going to be either zero or one: zero if the point is in shadow, effectively nullifying the diffuse and specular so that only ambient lights the point; otherwise shadowCalc returns one, in which case we get the full diffuse and specular.

So in shadowCalc, we take fragPosLightSpace, which is in NDC, so x, y, and z are all in the range negative one to positive one. We want to rescale it so the values are zero to one, so we multiply by 0.5 and add 0.5, giving us pos, the rescaled coordinate. We then sample from the shadow map using the x and y, which are effectively the UV values to read from the shadow map; again, it's the r component that stores the actual depth value. Then we take this depth value and compare it against our fragment position's z value in light space, which is effectively the distance of that point, not from a point where the directional light is defined, but rather from the plane of light defined by the directional light. A directional light you can think of as a plane that emits rays of light in parallel, and we want the distance from our fragment position to that plane; that's what the z value effectively is. So if our sampled depth is less than that z value, then we're in shadow, and we return zero. Otherwise, if the point is being hit by the light, then these two values will be equal, and we return one.

Now, if we run the code, well, we're getting shadows, but we're also getting a lot of ugly artifacts. We're getting stuff we don't want: this moiré pattern, this stripe pattern, on parts that aren't supposed to be in shadow at all. We have this big section over here which is all in shadow even though it should be fully lit. We also have jaggy shadows: if we zoom in here, we don't get the soft effect like we had in the first demonstration. And one more subtle thing over here is that the shadowing is repeating: our shadow map is being rendered in places where it shouldn't appear at all.

This last problem we'll actually fix first, and the fix is simple. It happens because our shadow map's wrapping is set to repeat, and so when we sample for points that are totally outside the field for which we've rendered our shadow map, it just erroneously repeats over here. To fix this, we come in here, and instead of repeat for our shadow map, we simply clamp to border and set the border color to one. This means that in the fragment shader, the sampled depth value will always be one for anything outside the map, and so, as long as pos.z is never greater than one, the shadow test there will always be false and we'll get one as our output.

If I rerun the code and zoom out, well, you can see first that beyond the border of our (still ugly, moiré-patterned) shadowed region, everything outside the field of view of our light's projection, of our shadow map, is no longer being shadowed, except for this big section over here. Why is this still dark? What's going on is that in light space, the z values of these coordinates are actually greater than one, so even when the sampled value from the shadow map is one, the fragment position's z in light space is still greater, and so it's being shadowed erroneously. We can fix this in the fragment shader very simply by capping the z value of pos here at one: when it's greater than one, we just cap it at one. Now, if I rebuild and rerun, we don't have that big shadow section anymore.
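Both fixes, border clamping and depth capping, can be sketched outside of GL. Everything here is illustrative: the names are ours, and the toy nearest-neighbor sampler stands in for the texture hardware.

```cpp
#include <algorithm>
#include <cassert>

// Illustrative sketch (names are ours). With GL_CLAMP_TO_BORDER and a
// border value of 1.0, sampling outside the shadow map's 0..1 UV range
// returns 1.0: "nothing is closer than the far plane out there."
inline float sampleWithBorder(const float* map, int size, float u, float v) {
    if (u < 0.0f || u > 1.0f || v < 0.0f || v > 1.0f)
        return 1.0f; // border color
    // Toy nearest-neighbor lookup into a size x size depth grid.
    int x = std::min(size - 1, static_cast<int>(u * size));
    int y = std::min(size - 1, static_cast<int>(v * size));
    return map[y * size + x];
}

// Cap the fragment's light-space depth at 1.0 so points beyond the light
// frustum's far plane are never shadowed erroneously.
inline float capDepth(float z) {
    return std::min(z, 1.0f);
}
```

With the border returning 1.0 and the fragment depth capped at 1.0, the comparison for out-of-range points can never report shadow.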
So we've fixed that problem. Now, what about this big, glaring moiré pattern, this really ugly stuff? What's going on here is that our floating-point values have limited precision, and our shadow map texture is also of limited resolution, so when we do our calculations and compare the sampled depth against pos.z, in cases where they should be precisely equal they often aren't, because of these imprecisions. For these shadowed stripes on the ground, the depth value should be equal to the z of pos, but because of imprecision it alternates back and forth between being slightly less than and slightly greater than; hence we get this striping, and hence on this box we get this funny stair-step. To account for these imprecision problems, to get rid of these erroneously shadowed points, we can do a little hack of adding in a bias value: we take our sampled depth value and just add in a little bit of bias, so that when the depth is just slightly less than the z of our position, it ends up being equal or greater. This does mean that in some cases we'll get points that should be in shadow but aren't, but as long as the bias value here is not too large, those errors will generally not be very noticeable. Now we rebuild and rerun the code, and what do we get?
Well, we got rid of the ugly moiré pattern. But let me circle around the box here and see if things look good. Things mostly look good, but you'll notice, say here, obviously those points should be in shadow, and yet at that corner they're not. So it seems our bias value might be a little too large in some places. Now, finding the right bias value is a little more art than science: it depends on various factors, what your geometry looks like, the resolution of your shadow map, and a few other things, so often we just have to experiment to find a good bias value. But we can generally get better results if we observe that the problem is more acute for surfaces that are angled away from the light. For surfaces that face the light head-on, we don't need very large bias values, but for angled surfaces, like this side of the box in this case, we'll need larger bias values, because it's at a sheer angle to the light, which is coming from up there. So what we can do is scale our bias value based on the angle between the direction of the light and the normal of the surface, which is something we actually already compute here: we have the dot product, dotLightNormal, between the light direction and the normal. So let's pass this into our shadowCalc, and then for our bias we take that dot product, subtract it from one, and multiply that by our bias value. For surfaces edge-on to the light, whose normals are perpendicular to the light direction, this dot product will be zero, and so we'll get one times 0.05, the full bias value of 0.05. But as the two vectors more and more coincide, as the surface turns to face the light, the dot product increases towards one, and so when subtracted from one the factor gets smaller and smaller, and our bias value shrinks. We don't want it ever to scale down to zero, though, so that's why we call max here and establish, effectively, a minimum bias value of 0.005. Our bias is therefore going to be between 0.005 and 0.05, larger for surfaces that are angled away from the light source. Now, if I rebuild and run, things don't look immediately different, but if we come over here, this little section at the corner that was erroneously out of shadow is now properly in shadow.

To get a blur, we'll use a technique called PCF, percentage-closer filtering, where rather than sampling the shadow map at a single point, we'll also compute a shadow value of zero or one for eight nearby points, offset a little bit in eight directions, add all those values together, and divide by nine to get the average: that is our shadow value. This explains why our shadow calculation returns a float rather than just a boolean of true or false, because we want shadow values that are potentially between zero and one, not just zero or one. Anyway, here we're computing the bias like we just saw, using the dot product. When we sample from the texture, we use an offset defined by the x and y of this loop, which each run from negative one to positive one, so we have nine different offsets, including zero-zero for the center point itself. And we want to sample effectively from the nearest texels in the shadow map, so we compute the texel size by dividing one by the texture size of the shadow map; textureSize is a built-in function, which here should return a vector of 1024 by 1024, so our offset coordinates are scaled by one over 1024. We then sample using those offset texture coordinates, getting a depth value, and for each sample point we add to shadow a value of zero or one, using the same formula we've seen before. Shadow will end up being a value between zero and nine; divide that by nine, and that gets us our final shadow value between zero and one. So if I rebuild and run again, now we're getting some reasonably soft shadows. They're not the best-looking shadows, but they're pretty decent, better than what we had before.
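The slope-scaled bias and the PCF loop just described can be mirrored on the CPU. This is a sketch under our own naming, with a toy array standing in for the shadow-map texture and edge clamping standing in for the sampler's wrap mode:

```cpp
#include <algorithm>
#include <cassert>

// Illustrative sketch (names are ours): nearest-neighbor lookup into a
// size x size depth grid, clamped at the edges for simplicity.
inline float sampleDepth(const float* map, int size, int x, int y) {
    x = std::clamp(x, 0, size - 1);
    y = std::clamp(y, 0, size - 1);
    return map[y * size + x];
}

// Bias shrinks as the surface turns to face the light (dotLightNormal -> 1)
// and grows for surfaces at a sheer angle (dotLightNormal -> 0), never
// dropping below the minimum of 0.005.
inline float slopeBias(float dotLightNormal) {
    return std::max(0.05f * (1.0f - dotLightNormal), 0.005f);
}

// Average a 3x3 neighborhood of shadow tests around texel (cx, cy):
// each sample contributes 0 (shadowed) or 1 (lit), and the average
// gives 0 in full shadow, 1 in full light, and in-between on edges.
inline float shadowPCF(const float* map, int size,
                       int cx, int cy, float fragDepth, float bias) {
    float shadow = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            float depth = sampleDepth(map, size, cx + dx, cy + dy);
            shadow += (depth + bias) < fragDepth ? 0.0f : 1.0f;
        }
    return shadow / 9.0f;
}
```

In the real shader the 3x3 offsets are scaled by one texel (1/1024) in UV space; here they are expressed directly in texel indices to keep the sketch self-contained.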