Light in the real world is a quite complicated phenomenon. There are different degrees of light intensity, there are different wavelengths of light, and how those wavelengths and intensities interact with the various surfaces they strike is genuinely complicated. But in rendering, we're going to work with a simplified conception of light, in which we imagine each point on a surface to have what we call its base color, the innate color of that point. A light ray which hits it also has its own color. Given a base color and the color of the light striking it, the two are combined by multiplication to get a resulting RGB value. So here, for example, assume we have this base green color of 0.4 for red, 0.6 for green, and 0.2 for blue, and it is being struck by a light whose red value is 1, whose green value is 2, and whose blue value is 0.2. When multiplied together component by component, the result is 0.4, 1.2, and 0.04.

Now of course, when we actually display RGB values, the component values are capped at 1; nothing can exceed 1. So here, where the green component of our light is 2.0, we can't really visualize that, so we're just showing the color with the 2 reduced down to 1. The same goes for our result: we can't really display a green value of 1.2, so it's displayed as if it were simply 1. But it's important that our lights can have components exceeding 1, because that represents, in effect, an intense light, which when we shine it onto a surface produces a result that's brighter than the base color itself. In this example, the base color starts with a green value of 0.6, and we're effectively doubling that.

This does reflect real-world behavior. Take an object that you perceive as red under normal lighting conditions, say a reasonably well-lit room. In a dark room it's black; you can't see it at all. But under very intense light, what you normally think of as red might appear to be a much lighter red, and if the light is intense enough it might even appear white. So when we think of a point on an object as having an innate base color, that's just the color we see when we shine a white light on it that's not too bright, that doesn't have too much intensity. For most materials, as you shine a brighter and brighter white light on the surface, it will begin to appear more and more white.

So when we talk about lighting and rendering, our end goal is that for every pixel, for every point on the surface, every fragment, we want to compute what the light value for that point should be. We know what the base color of a fragment should be: that's usually something we sample from a texture, or sometimes from color attributes of the vertices, or perhaps from a combination of multiple sources, like sampling from multiple textures and somehow blending those values together. We then want to multiply that base color by some light color. So we need to know which lights, with what colors and intensities, are striking this point, and if multiple light sources hit this one point, we want to combine them all together. Light very simply acts in an additive fashion, so we can just take the different light colors and add them together to get our combined light value, and that is what we multiply by the base color. So here, for example, if I have this green light and this yellow light, I add them together and I get a combined light that's 1.4, 2.1, and 0.4.
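As a concrete sketch, here's that arithmetic expressed as a GLSL fragment shader. The variable names and the two individual light colors are assumptions for illustration, chosen so their sum matches the combined light of 1.4, 2.1, 0.4 from the example:

```glsl
#version 330 core
out vec4 FragColor;

void main()
{
    // Base color of the surface point (0.4 red, 0.6 green, 0.2 blue)
    vec3 baseColor = vec3(0.4, 0.6, 0.2);

    // Two light colors; components may exceed 1.0 to represent intense light
    vec3 lightA = vec3(1.0, 2.0, 0.2);
    vec3 lightB = vec3(0.4, 0.1, 0.2);

    // Lights combine additively: (1.4, 2.1, 0.4)
    vec3 combinedLight = lightA + lightB;

    // The combined light multiplies the base color component by component;
    // when displayed, each component is effectively capped at 1.0
    FragColor = vec4(baseColor * combinedLight, 1.0);
}
```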
So another question is: for the light sources in our scene, how do we compute, for each point on a surface, the intensity and color of light coming from each light source? Well, the foundation for how this is handled, in real-time rendering at least, is what's called the Phong shading model, where we have three primary kinds of light: ambient, diffuse, and specular.

The idea of ambient light is that, in most scenes with typical lighting, light generally is only coming from a few directions, but it is bouncing off of surfaces and scattering around the whole scene such that very little, perhaps even nothing in the scene, should be totally dark. Rather than trying to account for all the ways in which light might bounce around the scene, which is something we may try to do in ray tracing but which in real-time rendering without ray tracing is just too costly, we have this ambient light factor, which is a very simple hack. We just assume that for all points on all surfaces, there's a pervasive ambient color of light hitting everything. For the ambient light calculation, there's no regard for where any light sources are positioned relative to the surface, or where the camera is; none of that matters. Every point simply gets the same amount of ambient light. Now, how you determine what the ambient light level in the scene should be, that could simply be hard set by the environment artist when they set up the scene. In more sophisticated schemes, you might dynamically compute an ambient light level from how many lights are in the scene and their average intensity. In some cases, you might not apply the same ambient light level to everything in the environment; you might have it change for different parts of the environment. But when it comes time to render a model, the ambient light component is simply a color of light that's passed in, and no further calculation is needed.

What we call diffuse light, on the other hand, does take into account where lights are positioned relative to a point on the surface. What we care about is the angle at which the light strikes the surface. If the rays of light are striking the point face on, such that they're perpendicular to the surface at that point, then we apply that light source's color with full intensity. So if my light is pure white, if it's 1, 1, 1, then it's not diminished at all when it strikes the surface fully perpendicular. If, though, our surface is parallel to the light rays, then we don't want any contribution at all, because of course the light rays will pass by the surface rather than strike it. And if the surface faces away from the light, it should also be 0. But for surface angles in between parallel and perpendicular, we want a smooth gradation from 0, no intensity, totally dark, to 1, full intensity. And so, as we'll see later, we want to scale the diffuse light component based on the angle at which light from the source strikes the point on the surface. To be clear, though, for diffuse light we're not going to take into account where the camera is positioned. We call this the diffuse component because we imagine that when this light strikes the surface, it scatters in all directions equally. Instead of the light rays bouncing off at a reflected angle, they just scatter everywhere, because the surface is rough enough to scatter the light rays in all directions. And it turns out that for most materials in the world, this is pretty much how they behave.
When light strikes a typical object, the light rays do tend to scatter in all directions fairly equally. Not perfectly equally, but fairly equally; that's the approximation in most cases. The specular component, however, accounts for light tending to bounce off of a surface primarily at the reflected angle, such that if your camera or your eye is looking at that point of the surface in the path of the reflected rays, then you see a much more intense light at that point. This has the effect of creating bright spots and making the object look shiny, because in the real world, only smooth, generally metallic surfaces reflect light this way. And you can see in the diagram here that when we take our ambient, diffuse, and specular components and add them all together, we get an object that looks smooth and shiny, like it's metallic, because of these specular highlights. So in practice, for objects that aren't meant to be shiny, there's a constant that governs the specular intensity, and we'll tweak that constant to make some objects look shiny and other objects look rough.

There are two major phenomena that this model is not accounting for: indirect light and shadows. Indirect light is the phenomenon of light from our light sources hitting surfaces and then bouncing off and hitting other surfaces one, two, three, four, however many times. It's because of indirect light that things that aren't directly hit by a light source still tend not to be totally dark. If you're outside on a sunny day and you look into the shadows, they're not pitch black, because intense sunlight is bouncing off of everything in the world in a chaotic fashion, and so it's very rare for anything outdoors to be totally dark. Unless you look into a tunnel or a deep well, some amount of light is going to bounce around and hit pretty much everything. Most variants of ray tracing done today do account for light bounces, for indirect light, but that is computationally expensive, and so until very, very recently it has not been viable at all for real-time rendering. There are alternative techniques, generally grouped under the heading of global illumination, some of which can be viable in real-time rendering, but that's a quite advanced rendering topic that we probably won't cover.

As for shadows, the Phong model, for simplicity and even more for efficiency, does not try to account for obstructions in the path between lights and points on surfaces, so it's as if our lights just shine through every obstruction, which obviously is not realistic. What can be done about this is that there are techniques for adding shadows back in: after computing the Phong shading, we can figure out where the shadows are supposed to be, and then after the fact darken our images at those points to account for shadowing. Unfortunately, the algorithms that do this are imperfect and quite expensive, so we definitely can't have perfect shadows in real time, but we can still get some pretty decent results.

Looking now at the first of the lighting examples, this one called colors, we have two boxes. The one in the top right, the small white box, is just meant to represent the light source, to visualize where the light is coming from: a single point light that emanates in all directions. And our orange box in this example is only being lit by ambient light.
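That ambient-only lighting amounts to a one-line multiply in the fragment shader. Here's a minimal sketch; the uniform names objectColor and lightColor are assumptions in the style of these examples:

```glsl
#version 330 core
out vec4 FragColor;

uniform vec3 objectColor;  // base color of the box
uniform vec3 lightColor;   // the ambient light color, here (1, 1, 1)

void main()
{
    // Same value for every fragment: no light position, no camera,
    // just the pervasive ambient light times the base color
    FragColor = vec4(lightColor * objectColor, 1.0);
}
```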
There's no diffuse or specular component, so in fact the light is irrelevant to this current box, and the light color for the ambient light is simply 1, 1, 1, so we're just seeing the base color of the box fully lit, in effect. Looking at the code very quickly, I won't go through all of it; it's just setting up two boxes to render, nothing we haven't seen before, except that now, in this example and the ones that follow, we're going to have two separate shader programs. One is for rendering the so-called lamp, that little white box that illustrates the light, and nothing interesting is going on there; it just paints all the pixels white. But our other shader, the so-called lighting shader, is for the orange box, which we're going to apply lighting to in the later examples. For now, looking at its fragment shader, it gets an object color and a light color and multiplies them together, and this ends up being the same value for all fragments of the box, so it's effectively just ambient light.

Now how do we compute the diffuse light? As I described, it depends upon the angle at which the light strikes the surface. So what we're going to need is a vector from the light source to the point, but actually we want the inverse vector, the vector from the point on the surface to the light source, and we want it normalized; we want the unit vector in that direction. We'll call this L. In the case where the surface is perpendicular to the light, the normal at that point, N, and L are going to be one and the same; they'll point in the same direction, and they're both unit vectors. However, as the light source moves or the surface changes angle, as the angle between L and N increases from 0 up to 90 degrees, we want the diffuse light to scale down to 0. Once N and L are perpendicular, meaning that the surface is parallel to the light, we want the light to contribute no diffuse lighting. And when the angle is greater than 90 degrees, meaning that the surface is pointing away from the light and the front side is not being hit by it, we want the lighting to be 0 as well.

So conveniently, it's the case that if you take the dot product of two unit vectors, in this case N and L, then if they're coincident, if they point in the same direction, you get the value 1. If they're perpendicular, you get 0. If the angle is greater than 90 degrees, you get a negative value. And as the angle decreases from 90 degrees to 0, the dot product increases smoothly from 0 to 1 (the dot product of two unit vectors is the cosine of the angle between them). So we can take the dot product, clamp negative values to 0, and multiply it by the light color, and that gets us the diffuse component for that light. In the case of multiple lights, we do this computation separately for each light, but just like always when we combine light components, we add them all together into one light color. Note, however, that the workload for this calculation scales with the number of lights; every additional light is an additional computation.

Now, for this computation we need both N and L, either both in world space or both in view space. Either way works, but because it's a little easier, for our examples we'll do it in terms of world space. To find L in world space: the position of the light is already defined in world space, but for each fragment we're rendering, we also need its world space coordinate.
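Expressed in shader code, the diffuse computation looks something like this sketch, where lightPos, FragPos, and Normal are assumed names for the world space light position, fragment position, and surface normal:

```glsl
// Inside the fragment shader, all in world space (assumed inputs):
vec3 N = normalize(Normal);               // surface normal at this fragment
vec3 L = normalize(lightPos - FragPos);   // from the point toward the light

// dot(N, L) is the cosine of the angle between them: 1 when the light
// strikes head on, 0 at 90 degrees; max() clamps away-facing angles to 0
float diff = max(dot(N, L), 0.0);
vec3 diffuse = diff * lightColor;         // add one such term per light
```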
And so what we're going to see in our vertex shader is that we have an output variable for the world space coordinate. gl_Position, of course, is always set to the clip space coordinate, but we're also going to compute the world space coordinate. Then in our fragment shader, the world space coordinates of the three vertices will be interpolated in a perspective-correct way, and so we'll have the fragment's coordinate in world space, which we need to find L. As for our normal vector N: in our models, each vertex will have a normal vector attribute, but these normals are defined in terms of the local model space. In the vertex shader we're going to need to transform them into world space, store the result in an output variable, and then in the fragment shader we'll get the interpolated normal we need for that fragment.

The question, though, is how do we transform normal vectors? You might naively assume that we can just apply the same transform matrix to our normals as we do to the vertices of a model, but it doesn't work that way, firstly because you should not translate normals. Here, for example, considered in two dimensions: the white lines are the axes, let's say X and Y, that orange line is the edge of a polygon, and the green line sticking out represents the normal, which we're also depicting jutting out from the origin, because that's of course how normals are actually defined; they're simply vectors relative to the origin. Imagine that the vertices of this edge get translated, shoved over to the right by adding to their X components. When we translate the vertices, the normal should remain unchanged: the angle of the edge has not changed, it's just shifted over. And yet if we were to apply the same translation to our normal vector, what we'd get is depicted by the red line, which no longer points in the right direction; it's not perpendicular to the edge.

Now, if we just want to rotate our normals, that actually is not an issue; for rotations, we can just apply the same rotation transform matrices. The same goes for scaling, as long as the scaling is uniform, such that the scaling factors for X, Y, and Z are the same, so that they're all being, say, doubled or tripled or cut in half. As long as they're scaled in the same way, the scaling will change the magnitude of the vector, making it longer or shorter, but it will still point in the right direction. For non-uniform scaling, however, if say we're doubling the X's but not scaling the Y's or the Z's, then we have a problem, because what we'd get is something like this: here the X's, and only the X's, are being doubled, and now the edge has a different angle, so properly the normal should be tilted counter-clockwise. But if we apply the same non-uniform scaling to our normal vector, if we double its X, what we get is this red line, which has actually been rotated in the opposite direction. So this is simply not the correct result.

Now it turns out that, given a transform M that we're going to apply to our vertices, we can get the correct transformation for the normals, even if M has non-uniform scaling, by taking the inverse of M and then the transpose, or vice versa: taking the inverse and then the transpose, or the transpose and then the inverse, turns out to always give the same result. Either way, we get a transform, potentially with non-uniform scaling, that still correctly transforms our normals. Remember, though, that we never want to apply translation to our normals, so we simply take our 4x4 transform matrix and cut it down to a 3x3 matrix (get rid of the right column and the bottom row), and then we have a 3x3 transform matrix that we can multiply with the vec3 that is our normal.
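Putting the pieces together, the vertex shader might look like the following sketch. The attribute and uniform names are assumptions in the style of these examples; note that computing the inverse per vertex is wasteful, and in practice you'd compute the normal matrix once on the CPU and pass it in as a uniform:

```glsl
#version 330 core
layout (location = 0) in vec3 aPos;     // model-space vertex position
layout (location = 1) in vec3 aNormal;  // model-space vertex normal

out vec3 FragPos;  // world-space position, interpolated per fragment
out vec3 Normal;   // world-space normal, interpolated per fragment

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    // World-space coordinate for the fragment shader to interpolate
    FragPos = vec3(model * vec4(aPos, 1.0));

    // Normal matrix: the upper-left 3x3 of the model matrix, inverted
    // and transposed, so translation is dropped and non-uniform scaling
    // is handled correctly
    Normal = transpose(inverse(mat3(model))) * aNormal;

    // Clip-space coordinate, as always
    gl_Position = projection * view * model * vec4(aPos, 1.0);
}
```

And a matching fragment shader combining all three terms, ambient, diffuse, and specular, might look like this, again as a sketch with assumed uniform names and with the ambient and specular strengths (0.1, 0.5) and shininess exponent (32) picked purely for illustration:

```glsl
#version 330 core
in vec3 FragPos;
in vec3 Normal;
out vec4 FragColor;

uniform vec3 objectColor;   // the base color
uniform vec3 lightColor;
uniform vec3 lightPos;      // world-space light position
uniform vec3 viewPos;       // world-space camera position (specular only)

void main()
{
    // Ambient: a fixed fraction of the light color hits everything
    vec3 ambient = 0.1 * lightColor;

    // Diffuse: scale by the cosine of the angle between N and L
    vec3 N = normalize(Normal);
    vec3 L = normalize(lightPos - FragPos);
    vec3 diffuse = max(dot(N, L), 0.0) * lightColor;

    // Specular: compare the reflected light direction with the view
    // direction; the exponent controls how tight the highlight is
    vec3 V = normalize(viewPos - FragPos);
    vec3 R = reflect(-L, N);
    vec3 specular = 0.5 * pow(max(dot(V, R), 0.0), 32.0) * lightColor;

    // Sum the light terms, then multiply by the base color
    FragColor = vec4((ambient + diffuse + specular) * objectColor, 1.0);
}
```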