Next, we have example 6.2, coordinate systems depth, and now we're rendering a full cube. Now that we're rendering things which may get drawn on top of each other, it's very important that we enable depth testing by calling glEnable(GL_DEPTH_TEST). If we forget to enable depth testing, this is what we get: the primitives are rendered in the order they happen to exist in our array of vertices, so primitives which should be drawn behind others are sometimes rendered after, and they end up drawn on top of things they should be behind. So don't forget to enable depth testing. Aside from that, there's not really anything new going on here. We just have a bunch more vertices to make up all the sides of our cube. And down where we render, we are rotating around an axis that is not one of the three cardinal X, Y, Z axes; that's why the cube seems to tumble in multiple directions. The only other thing that's different is that when we call glDrawArrays, we specify that we are drawing 36 vertices, not just six like before. Here in example 6.3, coordinate systems multiple, we're rendering the same cube but with multiple instances. What's going on in the code is that we've added another array, cubePositions, of vec3s, which are the positions where we're going to place our cubes. This array is not put into a GL buffer; we're not actually going to store it on the GPU side. Instead, in our render loop, we take each cube position, create a model transform from it, apply a rotation around a fixed axis but with a different angle determined by the loop counter i, set that model matrix on the uniform, and then call glDrawArrays each time through the loop. So whereas before we made just one draw call per frame, now we're making multiple. This is not ideal, because for various reasons, draw calls are quite expensive.
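The tumbling rotation around a non-cardinal axis can be sketched without any GL at all. This is the core of what glm::rotate encodes for an arbitrary axis (Rodrigues' rotation formula); the Vec3 struct and rotateAroundAxis name are just illustrative, and it assumes the axis is already unit length:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate point p by `angle` radians around the unit axis a (Rodrigues' formula):
// p' = p*cos + (a x p)*sin + a*(a . p)*(1 - cos).
Vec3 rotateAroundAxis(Vec3 p, Vec3 a, float angle) {
    float c = std::cos(angle), s = std::sin(angle);
    float d = a.x * p.x + a.y * p.y + a.z * p.z;   // a . p
    Vec3 x = { a.y * p.z - a.z * p.y,              // a x p
               a.z * p.x - a.x * p.z,
               a.x * p.y - a.y * p.x };
    return { p.x * c + x.x * s + a.x * d * (1 - c),
             p.y * c + x.y * s + a.y * d * (1 - c),
             p.z * c + x.z * s + a.z * d * (1 - c) };
}
```

With an axis like a normalized (1.0, 0.3, 0.5), the same rotation drags the cube through all three dimensions at once, which is the tumbling we see on screen.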
There's significant overhead when you issue a draw call, so generally we want to keep down the number of draw calls in any one frame. In a later example we'll look at instancing, where with a single draw call we can render the same model multiple times but with different model transforms, so the instances are rendered in different places. What we have here, however, is the simpler, more obvious solution. Looking at example 7.1, camera circle, now we have camera movement. What's happening is that our camera is moving around the origin point, circling it, orbiting, and always looking at the origin. So each time through the render loop, we set a new view matrix, defined here, and we construct this view matrix with the convenient lookAt function. lookAt takes three arguments, and the vector from the first point to the second determines what direction we want the camera to look. So imagine the camera positioned here, looking towards this point. The third argument is our up direction: for the line between those two points, the question is how the camera is rotated around that axis, and that is what the up vector effectively defines by pointing upwards. In this case, our look direction is always going to lie flat in the XZ plane, and up is straight along the Y axis, so our camera perspective is always going to be, in a sense, level with the ground. Now, for the orbiting effect, we're changing the position of the camera. By taking the sine and cosine of the time value, multiplied by a radius, we get X and Z values that effectively orbit in a circle around the origin, and we're always looking at the origin. I'm not going to go into the math of the lookAt function. It's something you could probably figure out for yourself once you realize that what we're doing is positioning the camera at a location, making it point in a direction, and fixing its rotation around that view axis with the up vector.
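The orbit itself is just this bit of trigonometry; orbitPosition and the camX/camZ names are illustrative stand-ins for the values fed to lookAt each frame:

```cpp
#include <cassert>
#include <cmath>

// Orbit the camera around the origin: X and Z trace a circle of the given
// radius as `time` advances, while Y (and the lookAt target) stay fixed.
void orbitPosition(float time, float radius, float& camX, float& camZ) {
    camX = std::sin(time) * radius;
    camZ = std::cos(time) * radius;
}
```

Because sin² + cos² = 1, the camera stays exactly `radius` units from the origin no matter what the time value is.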
Having established a transform that would place a camera in that position, remember that we don't really transform any camera. Instead we transform everything in the world around the camera with the inverse transform. So once we've computed that camera transform, we take its inverse, and that is our view transform; that is what lookAt returns. In example 7.2, camera keyboard DT, DT standing for delta time, we now have keyboard controls on WASD: I can move forward and back, and strafe left and right. In this example we need some global variables. We have first cameraPos, representing where the camera is positioned; cameraFront, a vector relative to cameraPos determining which direction the camera is facing; and cameraUp, also relative to cameraPos, determining which way our up vector points. We also define deltaTime and lastFrame. lastFrame stores the timestamp of the previous frame, and deltaTime stores the difference between this frame's timestamp and the last one: how much time has elapsed since the last frame. This is standard game loop stuff: when it comes to movement, we want to regulate the speed of movement based on how much time has actually elapsed, because of vagaries in the system. The time between frames is never perfectly consistent; sometimes it takes much longer than average to get from one frame to the next. So you should always factor in how much time has actually elapsed between frames, and that's why we get deltaTime. It's computed first thing in the render loop: we get the current frame's timestamp, get deltaTime by subtracting lastFrame from it, and then set lastFrame to the current timestamp so that next time around we have it stored. Then, for our view transform, we're using lookAt again, and the position of our camera is given by cameraPos.
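As a sketch of what that inverse amounts to, here is the view transform that glm::lookAt(eye, center, up) encodes, applied directly to a world-space point. The helper names are hypothetical; the point is that we build the camera's basis vectors and then apply the inverse of its placement, a transposed rotation plus a negated translation:

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };
static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static V3 cross(V3 a, V3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static V3 norm(V3 a) {
    float l = std::sqrt(dot(a, a));
    return {a.x / l, a.y / l, a.z / l};
}

// Transform world-space point p into camera space, as the lookAt view
// matrix would. The camera looks down its own -z axis.
V3 lookAtApply(V3 eye, V3 center, V3 up, V3 p) {
    V3 f = norm(sub(center, eye));  // forward direction
    V3 r = norm(cross(f, up));      // right direction
    V3 u = cross(r, f);             // true up (orthogonal to f and r)
    V3 d = sub(p, eye);             // inverse translation: eye goes to origin
    // Inverse rotation: project onto the camera basis.
    return { dot(d, r), dot(d, u), -dot(d, f) };
}
```

The defining property: the eye itself lands at the origin of camera space, and anything the camera looks at ends up on the negative z axis.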
The point it's looking at is cameraPos plus cameraFront. Again, cameraFront is defined relative to cameraPos, so we need to add them to get the point the camera is looking at. The up direction is given by cameraUp, which in this example is never going to change. cameraPos, though, does need to be affected by our WASD keyboard input, so down in processInput, we first compute how fast the camera is moving based on deltaTime, scaled by this constant 2.5. If we increase it, we get faster camera movement; if we decrease it, slower. In the case where we hit W to push the camera forward, cameraFront tells us which way the camera should go. In this example, you may have noticed, there is no way to change the direction of cameraFront, so it's always going to be the same. We multiply it by the camera speed and add the result to cameraPos; we're taking two vectors and adding them together. For moving forward, this means the Z value is decreasing, because moving forward in OpenGL coordinates means going down the negative Z axis. For S, moving backwards, it's the same deal except we subtract, so cameraPos's Z value gets bigger. Then for A and D, strafing left and right, we need the vector which is the cross product of cameraFront and cameraUp: those two vectors form a plane, and the cross product computes the perpendicular vector, which tells us which way to strafe. We multiply that by the camera speed and subtract it from cameraPos to go left, or add it when we push D to go right. Now, in example 7.3, camera mouse zoom, in addition to WASD to move the camera back and forth and strafe, we can use the mouse to change the forward-looking direction of the camera, and we can use the mouse wheel to zoom in and out.
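The movement math can be sketched as a pure function. The names mirror the tutorial's globals but moveCamera itself is a hypothetical wrapper; it assumes front and up are orthonormal, so the cross product is already unit length (the real tutorial code normalizes it to be safe):

```cpp
#include <cassert>
#include <cmath>

struct Vec { float x, y, z; };

// WASD movement: deltaTime scales the per-frame step so the speed is the
// same regardless of frame rate.
Vec moveCamera(char key, Vec pos, Vec front, Vec up, float deltaTime) {
    float speed = 2.5f * deltaTime;
    Vec right = { front.y * up.z - front.z * up.y,   // cross(front, up):
                  front.z * up.x - front.x * up.z,   // the strafe axis
                  front.x * up.y - front.y * up.x };
    switch (key) {
        case 'W': return { pos.x + front.x * speed, pos.y + front.y * speed, pos.z + front.z * speed };
        case 'S': return { pos.x - front.x * speed, pos.y - front.y * speed, pos.z - front.z * speed };
        case 'A': return { pos.x - right.x * speed, pos.y - right.y * speed, pos.z - right.z * speed };
        case 'D': return { pos.x + right.x * speed, pos.y + right.y * speed, pos.z + right.z * speed };
    }
    return pos;
}
```

With front = (0, 0, -1), pressing W decreases z (forward down the negative Z axis), and the cross product with up = (0, 1, 0) points along positive X, which is why subtracting it strafes left.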
So we're effectively changing the field of view. In the code we now have a few more global variables, including some describing the mouse state, as we'll see when we look at the mouse handling code. From the mouse state, we're going to update the other global variables affecting the camera state: yaw and pitch, which describe the orientation of the camera, its facing, and from these we will update cameraFront. fov, the field of view, is going to be modified by the mouse wheel, and it then simply factors into the projection when we build our perspective matrix; the field of view is no longer hard-coded, it comes from this global variable fov. Now, to handle mouse input, up at the top here we set two callbacks. The first, glfwSetCursorPosCallback, handles changes of the mouse cursor position, and glfwSetScrollCallback is for the mouse wheel. Looking at the scroll function first, since it's simple: we get two inputs, one for the x offset of the wheel and one for the y offset. The y offset is for up-and-down scrolling; the x offset is for left-and-right scrolling, because some mouse wheels on some mice do scroll left and right, but most just scroll up and down, so we only care about the y offset. For the field of view, we're going to cap it to the range of 1 to 45 degrees; it has to stay somewhere in that range, so only then do we update it. We take the y offset and subtract it from the field of view. I assume the y offset is negative when you scroll down and positive when you scroll up, and when we scroll up, we want to zoom in, which means decreasing the field of view; that's why we're subtracting here. Then, having computed a new field of view, we want to make sure it's clamped to the range of 1 to 45, and that's what this logic is about.
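As a minimal sketch of that clamp-and-subtract logic (applyScroll is an illustrative name; the real version lives inside the GLFW scroll callback and updates the fov global):

```cpp
#include <cassert>

// Scroll-wheel zoom: scrolling up (positive yoffset) narrows the field of
// view, which zooms in; fov is kept within the [1, 45] degree range.
float applyScroll(float fov, float yoffset) {
    if (fov >= 1.0f && fov <= 45.0f)
        fov -= yoffset;
    if (fov < 1.0f)  fov = 1.0f;
    if (fov > 45.0f) fov = 45.0f;
    return fov;
}
```

The updated value then feeds straight into the perspective projection in place of the previously hard-coded 45 degrees.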
For the mouse movement handling, there's this call up in the setup code where we set the input mode to GLFW_CURSOR, GLFW_CURSOR_DISABLED. This effectively captures the mouse for our window and doesn't display a cursor within it, which is what you want for an FPS: you want to be able to move your mouse around while you have focus in the window, and you don't want to see the cursor poke out of the window, and you definitely don't want to see the cursor within the window either. The callback for this gets X and Y double values denoting the position of the cursor, and for our purposes these values only really have meaning relative to prior values. From one call to the next, what we care about is how the X and Y values have changed. That's why we have the global lastX and lastY values, to record what the X and Y positions of the previous call were. For the first time this is called, though, we want the X and Y offsets, the deltas between the X and Y values, to be zero. So we have this global firstMouse, which starts as true, and on the first mouse event it sets lastX and lastY to the current position first thing and then flips itself to false so this branch won't run again on any subsequent call. I find this a little strange; I would have just set the X and Y offsets to zero for the first call, which would have had the same effect, but anyway, this gets us what we need. Once we have the X and Y offsets, we multiply them by a sensitivity factor: the larger the sensitivity value, effectively the larger the mouse movements. Then we add the X and Y offsets to yaw and pitch respectively. For the pitch, though, we want to make sure it's capped within the range of negative 89 to 89 degrees, because you certainly don't want your perspective to sort of do a backflip once you exceed 90 degrees or negative 90 degrees. That's why we're capping the pitch. And from the yaw and pitch, we now need to update cameraFront.
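A sketch of that cursor callback logic, minus the GLFW plumbing. The globals mirror the tutorial's names, but the function itself is illustrative, and the reversed y offset (window coordinates grow downward, so we flip the sign) is my assumption about the usual convention in this kind of code:

```cpp
#include <cassert>
#include <cmath>

// Mouse-look state, mirroring the tutorial's globals.
float lastX = 400.0f, lastY = 300.0f;
float yaw = -90.0f, pitch = 0.0f;
bool firstMouse = true;

void mouseCallback(double xpos, double ypos) {
    if (firstMouse) {            // first event: seed lastX/lastY so the
        lastX = (float)xpos;     // very first offsets come out as zero
        lastY = (float)ypos;
        firstMouse = false;
    }
    float xoffset = (float)xpos - lastX;
    float yoffset = lastY - (float)ypos;  // flipped: screen y grows downward
    lastX = (float)xpos;
    lastY = (float)ypos;

    const float sensitivity = 0.1f;  // larger value -> faster look movement
    yaw   += xoffset * sensitivity;
    pitch += yoffset * sensitivity;
    if (pitch > 89.0f)  pitch = 89.0f;   // cap pitch so the view
    if (pitch < -89.0f) pitch = -89.0f;  // never flips over backwards
}
```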
We create a front vector, computing its X, Y, and Z components using this trigonometry. I won't go into the details; you can figure out the geometry for yourself if you're curious. Lastly, we normalize the vector, because we want our camera direction vector to always have a length of one, to always be a unit vector. By updating cameraFront, that's going to affect our view matrix through the call to lookAt. One last thing here: it should be clear that the handlers for the mouse events are processed when we call glfwPollEvents. That's when GLFW reads events off of the event queue for the process, the events coming from the operating system, and when mouse events are pulled off that queue, that's when it calls our mouse handlers.
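The trigonometry that turns yaw and pitch into the front vector can be written out like this. frontFromYawPitch is an illustrative name, and it assumes the common convention where a yaw of -90 degrees faces down the negative Z axis:

```cpp
#include <cassert>
#include <cmath>

// Convert yaw and pitch (in degrees) into a unit-length cameraFront vector.
void frontFromYawPitch(float yawDeg, float pitchDeg,
                       float& fx, float& fy, float& fz) {
    const float toRad = 3.14159265358979f / 180.0f;
    float cy = std::cos(yawDeg * toRad),   sy = std::sin(yawDeg * toRad);
    float cp = std::cos(pitchDeg * toRad), sp = std::sin(pitchDeg * toRad);
    fx = cy * cp;   // yaw sweeps the XZ plane; pitch tilts out of it
    fy = sp;
    fz = sy * cp;
    float len = std::sqrt(fx * fx + fy * fy + fz * fz);  // normalize to unit length
    fx /= len; fy /= len; fz /= len;
}
```

At yaw = -90 and pitch = 0, this yields (0, 0, -1): looking straight down the negative Z axis, matching the fixed cameraFront of the earlier keyboard-only example.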