So now if you look at the basic lighting diffuse example, 2.1, you can see it has diffuse lighting. There's the light coming from the white square, but then if I go around the orange cube, notice that the lighting is hitting the different sides with different intensity because of their different angles. In fact, even on each surface (it's a little subtle, I don't know if this shows up in the video), because the angle from the point to the light is more perpendicular on the right side here than on the left side of the surface, it's actually brighter towards the right edge, particularly towards the top right, than over the rest of the surface. So be clear that this is computed on a per-pixel basis, even in cases where the difference of angle is relatively subtle.

So what this looks like in code is, first off, our vertices now each have a normal vector attribute. For our purposes, we want the normals to be exactly perpendicular to their respective surfaces. In cases where you want the lighting to transition smoothly between adjacent surfaces, you actually want these normals to not be exactly perpendicular to their surface: at each shared vertex, you want them averaged with the normals of the other triangles that share that vertex, and that way, when you compute the lighting, it'll give the appearance of a smoother transition over the edges. That's something we'll demonstrate later, but in this case we don't want to depict any illusion of smoothness; we want this to just look like a hard-edged cube. So again, all of these normals are just fully perpendicular to their surface.

Now, in the vertex shader, we're getting in not just a position but a normal for each vertex. We're also getting the uniforms of the model, view, and projection matrices, and we compute gl_Position as usual. But we're also going to need the fragment position in world space, so we apply just the model transform onto aPos. And then, to get the normal in world space, as I described, we take the model matrix and we're going to cut it down to a
mat3, to effectively eliminate the translation. There's a built-in transpose function and a built-in inverse function, and as I mentioned, we can flip the order of the inverse and the transpose; we could do the inverse first if we wanted, and we would get the same result.

So now, in our fragment shader, we are going to be computing an ambient component and a diffuse component, and these are going to be combined into one and then multiplied with the object color, and that is our output fragment color. Now, for simplicity, we're using the same light color uniform for both our ambient light and our diffuse light. In actual rendering, you generally want to configure these separately, so they wouldn't necessarily be the same color value, but here, for simplicity, we'll just have them be the same. And for the ambient, we simply multiply it by this ambient strength to scale it down. The light color is actually (1, 1, 1), so it's just a fully white light, and that's being scaled down to values of 0.1 for our ambient.

Then for the diffuse, again, we need two vectors: the normal vector and the light vector, and both of these need to be unit vectors, otherwise the dot product doesn't give us the correct result. Now, even if the vertex shader doesn't scale our normals, so they come out of it still as unit vectors, well, when we interpolate between unit vectors, the result is not necessarily a unit vector. So regardless, we're always going to want to normalize; that gets us norm. And then for the light vector, to get the vector from the fragment position to the light position, that's lightPos minus fragPos, and again, we normalize.

Now that we have norm and lightDir, we get their dot product; that's what gets us our diffuse strength, except if the result is negative, we want it to be zero, and that's what the max function is doing here. Max, given two values, returns whichever one is greater, so if the dot product is negative, we get zero as the result, effectively putting a floor of zero
on the value here. So now we have our diffuse strength, and we multiply it by the light color, and that gets us our diffuse component. Now that we have both of our light components, we simply add them together and multiply them with our base color, and that gets us our result.

As for specular light, again, the idea is that light on some surfaces tends to bounce off mostly at the reflected angle. So a camera looking at the surface from within the path of this reflected angle is going to get the full intensity of the specular component, and as the reflected light vector and the camera vector diverge, the specular effect falls off. How quickly it falls off is something we're going to govern with a constant. For some surfaces, you might want the specular highlights to appear as very large spots, so you want the falloff to be low. But for other surfaces, you want the specular highlights to be small, so you want a high falloff: you want the specular component to diminish quickly as the angle increases.

So if we call the vector of light reflected off the surface, normalized, r, and call the vector from the point to the camera, normalized, c, then, very much like with the diffuse component, we want to get the dot product of these two vectors, clamping any negative values at zero. But because we generally want the falloff to zero to happen before the angle reaches 90 degrees, we're going to raise the value to the power of some constant k. So if your exponent is just, say, 16, you're going to get fairly large specular highlights, but if you increase it to 32, 64, 128, then you're going to get smaller, more focused highlights. Because remember, once we clamp the dot product, it's going to be between zero and one. You raise one to any exponent.
It doesn't increase; raise zero to any exponent, and it doesn't increase either. But for any value between zero and one, when you raise it to an exponent, you're making it smaller, and the larger the exponent, the quicker those values get smaller. In the examples I gave, they're all powers of two. They don't have to be powers of two, but I believe the reason you might pick a power-of-two exponent is that it can be computed more cheaply: raising to a power of two can be done with repeated squaring, so it's a cheap operation. But in principle, any value of one or greater is valid for k.

Now, because specular highlights are generally meant to be focused, we typically don't want there to be any specular component once the angle between the reflected vector and the camera is greater than, say, 45, or maybe 70 degrees on the high end. But occasionally you do want quite large specular highlights, such that there should be a specular component even at angles greater than 90 degrees. The algorithm as just explained takes a dot product between these two vectors, though, and so once the angle exceeds 90 degrees, the clamped dot product is always going to be zero.

So there's a simple variant of this algorithm called Blinn-Phong, which starts with the observation that when the specular effect is at full force, because the camera vector coincides with the reflected angle, the bisecting half vector between the light vector and the camera vector, which we'll call h, matches the normal. So instead of computing the reflected vector and finding its dot product with the camera vector, we're going to compute h and find its dot product with the normal, and as h and n diverge, the specular effect falls off. The advantage here is that for any camera position on this side of the surface, any camera position where the surface would be visible, h is never going to be more than 90 degrees from n, and so the dot product
is only zero in the case where the camera is on the plane of the surface, or on the other side, where we can't see the surface anyway. So if you want very large highlights by using a low value of k, you're not going to get an ugly cutoff artifact in the specular effect like we would with the regular algorithm. There are also some special cases where this is a little more efficient, but I don't think they apply when you have a perspective projection, so they would only work for an orthographic view; I believe that's the case, but I'm not sure. The main reason to use Blinn-Phong, though, is simply that you get more correct results.

Just keep in mind that for Blinn-Phong, your k values are going to have to be larger to get the equivalent effect as with the regular algorithm. For example, if I use 32 for k with regular Phong and 32 with Blinn-Phong, the Blinn-Phong highlights are going to appear quite a bit larger. This is because, for the same change in camera angle, the bisecting vector h changes its angle to n at half the rate. So for the same setup of light and camera vectors, with Blinn-Phong you have a smaller angle, and therefore a larger dot product, and so to get the equivalent level of falloff, you need a larger value for k.

Now let's see specular lighting in action. Building on the previous example, we've now added in specular lighting. As you can see, as I position my camera in the right spot, you're seeing highlights on the surface there. It's sharper if I come over here; yeah, you can see a fairly distinct highlight, because my camera is positioned in the right spot to see it.

So all that's different in the code now is in our fragment shader. Well, first, we have another uniform for the view position, and so we have to set that on the C++ side by setting it to the position of the camera. Like the light position, it should also be in world space; we need everything to be in terms of world space for these calculations. And so here we're
doing the ambient and diffuse just as we did before, but now, for the specular, we have this constant specular strength, which simply governs the intensity of the specular effect. And as you can see here, we're using the same light color value for all three components; again, that's just for simplicity, and in real use cases you might have different light colors for the different components, but here it's all just the one value.

The last thing we need is to get this spec value, which comes from the formula I showed you. So viewDir, that's the vector from the point on the surface to the camera, so it's viewPos minus fragPos, and we normalize. And we're not doing Blinn-Phong here; we're just doing the regular specular algorithm. So we get the reflection direction of the light by calling the built-in function reflect, passing in the norm of the surface itself, because that tells us what to reflect off of. And we actually want the negative of the light vector, because remember, lightDir is the vector from the point to the light, and we want the vector from the light to the point, so by negating it, that's what we get. And notice that we don't normalize here; that's because the light direction is already normalized, so the reflection direction will be unit length. We compute the dot product of both these vectors, take the max with zero, effectively capping the minimum value at zero, and then we raise this to the exponent of 32.

If we wanted smaller highlights, we would increase this constant. In fact, I'll demonstrate with something even larger: 128. And to make sure that my updated fragment shader file is copied to the build, I need to actually come here and just make an arbitrary change, so it'll actually rebuild the whole project. Okay, come over here and run it again, and you'll see that our specular highlights are smaller points. Notice they're not any brighter. They're not more intense.
They're just more focused. If we want more intense highlights, we would up the specular strength here. So in fact, let me demonstrate that; I'll make it really bright: 1.5. Again, I have to trick the build system into rebuilding, run it again, and we'll come and see that, yeah, that's definitely more intense. In fact, we have this halo effect around it that almost looks like a lens flare, which is not realistic; I can't think of any situation where you'd want something that intense, but there we go.

The last example we'll look at in this video is 3.1, materials. What this is demonstrating is that the material itself, the surface and the properties of how it gets rendered, has been parameterized. Also notice that the color of our light is shifting over time; that's just to demonstrate the effects of different lights on these surfaces.

So, looking at the code, the fragment shader is the main thing that's changed. For the light, we've now plugged in different values for ambient, diffuse, and specular, whereas before we just had the one light color. And for the material, we now have properties of ambient, diffuse, and specular, and also shininess, which is plugged in as the k component of our specular calculation: the larger this number, the smaller the highlights. The intensity of the specular component for the material is governed by this specular value here, which is multiplied by spec. And so now we have a shader where different lights can have different colors for ambient, diffuse, and specular, and different materials can be governed by different intensities of ambient, diffuse, and specular. And notice these are vec3s, because these are RGB values, so it's really not just intensity; it's also color, because in the way we're representing light, we don't really distinguish between color and intensity.
They're really one and the same thing here; they're just tied up together as an RGB value.

Anyway, notice now that we have some uniforms where these struct values are expected, and this is interesting, because we're defining structs on the GLSL side, but then how do you pass that in from the C++ side? Well, as far as we're concerned, these really act almost like namespaces. So on the C++ side, if you want to set the ambient of the light uniform, then the string value is "light.ambient", and that's how you set it. So this is just like sending any other vec3 uniform, except in this case the uniform itself is actually a struct, and this is just one of its components, which is a vec3.