The idea behind ambient occlusion is that a point on a surface with other geometry very close by is going to get less ambient light, because less of the light bouncing around the scene will reach that point if it's blocked by nearby geometry. So points in creases and crevices especially are going to have stronger ambient occlusion. In this example here with ambient occlusion enabled, it looks like we get some extra shadowing in the folds of the statue and in the corners of the background walls and floor.

In non-real-time rendering, we can do very high-quality ambient occlusion, factoring in the details of all the geometry. In real time, however, we can only afford a crude technique called screen-space ambient occlusion (SSAO). To determine how occluded a point is, we project out a small dome from that point in the direction of the point's normal. Within that dome, within that half sphere, we consider a number of random sample points, and we want to determine how many of those sample points are blocked from the camera by other geometry. In the diagram here, the point being considered is on top of the fender; the blue dots represent sample points that are not obstructed from the camera, but the red dots are sample points obstructed from the camera by the side of the hood. (This diagram is actually a little messed up, because where it depicts the position of the camera doesn't match up with the image on the right. The camera should actually be further to the left, but the diagram conveys the general idea.) Anyway, having determined how many samples are occluded and how many are not, the occlusion factor is the ratio of the number of occluded samples to the total number of samples. So here in the diagram, our occlusion factor is 6 over 11, because 6 of the 11 sample points are occluded. The more the occlusion, the less ambient light should reach that point.

Example 9, SSAO, demonstrates this technique. As you can see, we're not applying any albedo texture to the model here or to the walls and floor. Instead, every point is just given a diffuse color of 0.95, an off-white. We're then applying our usual Phong lighting, but with ambient occlusion factored into the ambient lighting.

Looking at the code where we render, the first thing is the geometry pass. We're doing deferred rendering here, so we're rendering into the G-buffer, and the shader for that looks much like we've seen before. Except note, we're computing frag positions and normals in view space rather than world space, as we have usually done before. For our Phong lighting, either works just as well, but when we calculate the ambient occlusion, we're going to want these in view space. Then in the fragment shader, we output the values to the G-buffer. Notice we don't have any specular component this time; we're not bothering with any specular lighting here. And the albedo, the diffuse color, again, we're just setting it to this off-white color for every pixel. (A sketch of these shaders follows below.)

So that's the first pass. Next, we want to calculate the ambient occlusion for every pixel, outputting the occlusion factor to an SSAO texture, so we need a framebuffer for that. That framebuffer is constructed up here. Nothing really new going on, except note that the format is just RED, not RGB, because we're storing just one color value per pixel. You'll notice also we're creating this blur framebuffer, and that framebuffer looks very much the same.
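Here's a minimal sketch of what such a geometry pass can look like. This isn't necessarily the example's exact code (the variable names and the normal-matrix computation are my assumptions), but it shows the key points: positions and normals output in view space, no specular output, and a flat off-white albedo:

```glsl
// Geometry-pass vertex shader (sketch): note everything goes into VIEW space.
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;

out vec3 FragPos;   // view-space position
out vec3 Normal;    // view-space normal

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main() {
    vec4 viewPos = view * model * vec4(aPos, 1.0);
    FragPos = viewPos.xyz;
    Normal = transpose(inverse(mat3(view * model))) * aNormal;
    gl_Position = projection * viewPos;
}

// Geometry-pass fragment shader (sketch): no specular component, and every
// fragment gets the same off-white diffuse color.
#version 330 core
layout (location = 0) out vec3 gPosition;
layout (location = 1) out vec3 gNormal;
layout (location = 2) out vec3 gAlbedo;

in vec3 FragPos;
in vec3 Normal;

void main() {
    gPosition = FragPos;
    gNormal = normalize(Normal);
    gAlbedo = vec3(0.95);
}
```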
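And the SSAO framebuffer might be constructed something like this (again a sketch; the names ssaoFBO, ssaoColorBuffer, screenWidth, and screenHeight are assumptions). The key detail is the single-channel GL_RED format; the blur framebuffer is built the same way:

```cpp
// Sketch: a framebuffer whose color attachment stores one float per pixel
// (GL_RED rather than RGB), since the occlusion factor is a single value.
// Assumes an OpenGL loader header (e.g. glad) is already included.
GLuint ssaoFBO, ssaoColorBuffer;
glGenFramebuffers(1, &ssaoFBO);
glBindFramebuffer(GL_FRAMEBUFFER, ssaoFBO);

glGenTextures(1, &ssaoColorBuffer);
glBindTexture(GL_TEXTURE_2D, ssaoColorBuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, screenWidth, screenHeight, 0,
             GL_RED, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, ssaoColorBuffer, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```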
We need this blur framebuffer because after generating the SSAO texture, we then want to blur it. Because we're computing our ambient occlusion with a relatively low number of random samples, the ambient occlusion map comes out pretty noisy, but we can mitigate that by applying this blur. In this case, we're using a simple blur that just averages together 16 adjacent samples (sketched below). There are surely better blurs we could use, but this is adequate for demonstration purposes. Anyway, once we've done the blur, we then do the final lighting pass, with the blurred ambient occlusion texture as one of the inputs. In that fragment shader, we do the Phong lighting as normal, except the ambient occlusion is factored into the ambient lighting.

So now the hard part: how do we generate the SSAO texture? Well, one thing the shader needs is sample points, and those are generated up here. In this loop, we're generating 64 sample points, setting each one into this uniform array. Each sample has an X and Y that are randomly in the range of -1 to +1, and a Z that is randomly in the range of 0 to 1. We then normalize these vectors, but then randomly scale their lengths in the range of 0 to 1. This gives us a random point somewhere in a dome region, where the dome has a radius of 1. Generally, though, we want the sample points to cluster near the origin, and that's what this last bit of logic is about.

So that's how we generate our samples. As we'll see in the shader, these sample vectors are going to be offsets from the point we're rendering, which means we'll need to define a coordinate system at that point. We have the normal for the point, as we always do, but we haven't defined any tangent or bitangent. Because we're using these same sample points for every single point when we generate the ambient occlusion, it's actually best if we randomly rotate the samples from point to point. So we want to pick a random tangent for each normal, and once we have a tangent, we can use the cross product to find the bitangent. To pick these random tangents, we're going to want this noise texture. It's a very small texture, only 16 pixels, 4 by 4. Notice that we're using 32-bit floats, and notice we set the wrapping mode to repeat rather than clamp.

So looking finally at the SSAO fragment shader, we're getting the fragment position and the normal, which recall are in view space here, not world space. We're getting a random normalized vector from the noise texture. Notice that we scale the texcoords by noiseScale: if we just used the texcoords without the scaling, the rotation of our samples would only very gradually shift from pixel to pixel over the whole image, making the sampling less effective. Anyway, from our random vector, we want to find a tangent orthogonal to our normal, and that's what this formula gets us. We then find the bitangent by computing the cross product, and we put the tangent, bitangent, and normal together to make a TBN matrix.

So now we want to compute this occlusion factor, where 0 means not occluded at all and 1 means totally occluded. In this loop, we determine whether each sample is occluded: if so, we add 1 to the occlusion factor; otherwise we add 0. Once we have the number of occluded samples, we divide that by the total number of samples and subtract the ratio from 1, which we set as the frag color. So in the case that none of the samples are occluded, the ratio will be 0; subtract 0 from 1, and we just get 1. In the case that all of the samples are occluded, the ratio will be 1, and 1 minus 1 is 0. So the more the occlusion, the larger the ratio, and the closer the frag color gets to 0.
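Backing up to that blur for a moment, here's roughly what a 16-sample averaging blur looks like as a fragment shader. A sketch, assuming the raw SSAO texture is bound as ssaoInput:

```glsl
#version 330 core
out float FragColor;
in vec2 TexCoords;

uniform sampler2D ssaoInput;   // the noisy SSAO texture (assumed name)

void main() {
    vec2 texelSize = 1.0 / vec2(textureSize(ssaoInput, 0));
    float result = 0.0;
    // Average a 4x4 block of texels around this pixel: 16 samples total.
    for (int x = -2; x < 2; ++x) {
        for (int y = -2; y < 2; ++y) {
            vec2 offset = vec2(float(x), float(y)) * texelSize;
            result += texture(ssaoInput, TexCoords + offset).r;
        }
    }
    FragColor = result / 16.0;
}
```

In the lighting pass, the blurred occlusion factor then just scales the ambient term, something along the lines of `ambient *= texture(ssaoBlurred, TexCoords).r` (again, assumed names).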
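The sample kernel and noise texture might be generated on the CPU side something like this. This is a plausible reconstruction, not the example's exact code; it assumes GLM and C++'s random library, and the quadratic rescaling at the end of the loop is one common way to cluster the samples near the origin:

```cpp
#include <random>
#include <vector>
#include <glm/glm.hpp>

std::default_random_engine rng;
std::uniform_real_distribution<float> randomFloat(0.0f, 1.0f);

// 64 random sample points in a unit dome (hemisphere) around +Z.
std::vector<glm::vec3> kernel;
for (int i = 0; i < 64; ++i) {
    glm::vec3 sample(randomFloat(rng) * 2.0f - 1.0f,   // x in [-1, 1]
                     randomFloat(rng) * 2.0f - 1.0f,   // y in [-1, 1]
                     randomFloat(rng));                // z in [0, 1]
    sample = glm::normalize(sample);   // a point on the dome's surface
    sample *= randomFloat(rng);        // random length in [0, 1]
    // Cluster samples near the origin: early samples get a small scale,
    // later ones approach the full radius of 1.
    float t = float(i) / 64.0f;
    sample *= 0.1f + 0.9f * t * t;
    kernel.push_back(sample);          // uploaded to the shader's uniform array
}

// A 4x4 noise texture of random vectors: 32-bit floats, wrap mode REPEAT.
std::vector<glm::vec3> noise;
for (int i = 0; i < 16; ++i)
    noise.push_back(glm::vec3(randomFloat(rng) * 2.0f - 1.0f,
                              randomFloat(rng) * 2.0f - 1.0f,
                              randomFloat(rng) * 2.0f - 1.0f));

GLuint noiseTexture;
glGenTextures(1, &noiseTexture);
glBindTexture(GL_TEXTURE_2D, noiseTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, 4, 4, 0, GL_RGB, GL_FLOAT, noise.data());
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);  // repeat, not clamp
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
```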
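And here's a sketch of the start of the SSAO fragment shader as just described. The uniform names are assumptions; noiseScale would be something like the screen dimensions divided by 4, so the 4x4 noise texture tiles once per 4x4 block of pixels:

```glsl
#version 330 core
out float FragColor;
in vec2 TexCoords;

uniform sampler2D gPosition;   // view-space positions from the G-buffer
uniform sampler2D gNormal;     // view-space normals from the G-buffer
uniform sampler2D noiseTexture;
uniform vec2 noiseScale;       // e.g. vec2(screenWidth/4.0, screenHeight/4.0)

void main() {
    vec3 fragPos = texture(gPosition, TexCoords).xyz;
    vec3 normal = normalize(texture(gNormal, TexCoords).xyz);

    // Scaling the texcoords makes the tiny noise texture repeat across the
    // screen (hence wrap mode REPEAT), so the random vector changes from
    // pixel to pixel rather than shifting gradually over the whole image.
    vec3 randomVec = normalize(texture(noiseTexture, TexCoords * noiseScale).xyz);

    // Gram-Schmidt: subtract off randomVec's component along the normal,
    // leaving a tangent orthogonal to the normal.
    vec3 tangent = normalize(randomVec - normal * dot(randomVec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 TBN = mat3(tangent, bitangent, normal);

    // ... the occlusion loop, covered next, goes here ...
}
```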
To determine if a sample is occluded, we need to compare its Z value against the Z value recorded in the G-buffer for the pixel that the sample corresponds to, so first we need to find the sample in screen space. We do so by first multiplying the sample by the TBN matrix. This TBN matrix, remember, is defined by vectors in view space, so this rotates our sample vector into view space. Our samples, recall, all have lengths in the range of 0 to 1, but the size of our sample radius is something we want to define in the shader, so we have this radius constant, which right now is 0.5. Multiplying by it rescales a sample to have a length somewhere between 0 and 0.5. Then we add the sample vector to the frag position, getting us the position of the sample in view space.

We then get that sample position into screen space: first we multiply by the projection matrix to get into clip space; we get from clip space to normalized device coordinates by dividing by W; and finally we multiply by 0.5 and add 0.5. This gets us X, Y, and Z values in the range of 0 to 1. (Technically this isn't really screen space, because in screen space the X and Y values have the dimensions of the actual output image, but it's close enough.) We can then use these X, Y values to read from gPosition, getting us the view-space coordinate of whatever was rendered at that point on screen. What we want to know is whether our sample's Z value in view space is closer to the camera than the Z value of this occluder position. Remember that in OpenGL, view-space Z values are negative in front of the camera, so we test whether occluderPosition.z is greater than or equal to samplePosition.z, adding in a little bit of bias (this constant defined up here), which avoids a bit of acne. If so, if this potential occluder is closer to the camera than our sample, then our sample is occluded, so we add 1 to the occlusion.

So this seems to work, but if you look closely, there's actually a bit of an odd problem: notice here how the hands and the feet have a kind of dark outline, as if the wall behind them were being occluded by the feet and the hands, even though the feet and the hands are nowhere near the wall or the floor. We shouldn't be seeing that dark outline. What's happening is that, for the points around the feet, many of their samples are occluded by the feet according to our current test, because the feet are closer to the camera. However, the feet are definitely not within the dome projected out of those points on the wall; they're just too far away.

What we need to add is a range check. We define this float, rangeCheck, that's going to be between 0 and 1. When the occluder position is within the dome, it's set to 1; for positions outside, it falls toward 0. Because we don't want an abrupt cutoff, we use the smoothstep function. Smoothstep is like a clamp, but it also smooths the values within the range into a curve, like seen here. So, for example, when the distance from the fragment position to the occluder position is equal to the radius, the ratio of radius to distance will be 1, and the range check returns 1. For any distance less than the radius, it also returns 1. As the distance exceeds the radius, the range check value starts shrinking toward 0.
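Putting the pieces together, the body of the sample loop might look roughly like this, continuing from the TBN sketch above. Again a sketch rather than the example's exact code; the bias value here is an assumption:

```glsl
// At the top of the shader:
uniform vec3 samples[64];      // the sample kernel, as offsets from the point
uniform mat4 projection;
const float radius = 0.5;
const float bias = 0.025;      // assumed value; avoids acne

// ...inside main(), after building the TBN matrix:
float occlusion = 0.0;
for (int i = 0; i < 64; ++i) {
    // Rotate the sample into view space, scale by the radius, and offset
    // it from the fragment's view-space position.
    vec3 samplePos = fragPos + (TBN * samples[i]) * radius;

    // View space -> clip space -> NDC -> the 0-to-1 range.
    vec4 offset = projection * vec4(samplePos, 1.0);
    offset.xyz /= offset.w;                // perspective divide
    offset.xyz = offset.xyz * 0.5 + 0.5;   // now usable as texcoords

    // The view-space position of whatever was rendered at the sample's
    // screen position.
    vec3 occluderPos = texture(gPosition, offset.xy).xyz;

    // Range check: 1 inside the dome, smoothly falling toward 0 beyond it.
    float rangeCheck = smoothstep(0.0, 1.0,
                                  radius / length(fragPos - occluderPos));

    // View-space Z is negative in front of the camera, so a greater Z means
    // the occluder is closer to the camera than the sample.
    occlusion += (occluderPos.z >= samplePos.z + bias ? 1.0 : 0.0) * rangeCheck;
}
FragColor = 1.0 - occlusion / 64.0;
```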
So if the occluder position is far outside the dome, the range check will diminish the amount of occlusion to something relatively insignificant. It'll never be totally zero, but close enough. Now if I rebuild and we look at the feet and the hands, they no longer have that outline, thanks to the range check.

So that's how we can do screen-space ambient occlusion. The last thing to say about it is that it's actually highly debatable how realistic this effect is. If you look in the real world at, say, the corners between walls and ceilings and floors, you can find some examples that seem to accord with the ambient occlusion effect, but you can probably find more examples that don't. It turns out light phenomena are extremely complicated, and this crude approximation doesn't really match reality. On the other hand, people generally do like how ambient occlusion looks, and on that basis it's perfectly valid: it tends to give our scenes an appearance of more depth and detail. Just understand that it's not necessarily realistic.