A cube map is a special kind of texture that's generated by taking six images of our scene from the same fixed position, but in six different directions: first front, then 90 degrees to the left and right, 180 degrees to the back, and 90 degrees up to the top and 90 degrees down to the bottom. The field of view for each of these images is 90 degrees both horizontally and vertically, so we effectively get an image of our scene in all directions from that point. These six images are then stitched together into a single texture, usually in the cross pattern you see here. With these cube maps we can render a few interesting effects, most notably skyboxes and environment mapping.

Example 6.1 demonstrates a skybox. What you see here is an ordinary textured box rendered at the center of the scene, just for visual reference, so that as I move around you can tell that I'm moving. As I translate around, you can see that I'm effectively moving around the box. We're also rendering the skybox, and notice that the skybox in the background only changes when I rotate my camera, not when I translate: as I translate left and right here, the skybox image is not changing. Also notice that, despite the skybox being rendered from a cube map that was captured as six discrete images, you can't really see the seams between the edges, and you can't really detect where the corners are, even if I look straight at where a corner should be. If we placed our camera inside a cube and rendered those six images onto the faces of the cube, it would look like we're inside a box, and yet you can see here that it doesn't really look like we're inside a box.

What accounts for this is the special way in which cube maps are sampled. When we sample from a cube map, rather than providing a U and V coordinate, we provide a three-dimensional vector: an X, Y, and Z. From this vector, it's first computed which face we are sampling from, and then the corresponding UV coordinate on that face. In detail: given a vector, we select the corresponding image out of the six, we then compute a texture coordinate on that face that ranges from negative one to positive one, and because UV coordinates are supposed to be expressed on the scale from zero to one, we then rescale to zero to one. Say the vector we want to sample runs through the front image plane, and assume also that we're in a left-handed system (unlike OpenGL), so the positive Z direction runs through the front.
Well then, the way we know that our vector runs through the front image plane is, first, that Z is greater than zero. But that alone is not enough; it just tells us it can't possibly be the back plane, and it could still be the left, right, top, or bottom. What tells us it's the front is that the absolute value of X is less than Z and the absolute value of Y is less than Z. Remember that the field of view for these images is 90 degrees on both axes, so when X is equal to Z we have a vector running through the right edge of the front image, and when negative X is equal to Z we're running through the left edge of the front image; X values that range from negative Z to positive Z therefore lie in the range of the front image rather than the left or right image. It's the same logic for the Y values: Y values between negative Z and positive Z are in the range of the front image rather than the bottom or top image. These are the criteria that tell us our vector runs through the front image plane. To calculate U and V in the range negative one to positive one, we simply take the ratios X over Z and Y over Z, and then we rescale by adding one and dividing by two, giving us UV coordinates in the range zero to one.

For vectors running through the right image plane, it's all the same logic except that the axes swap roles. Now the test that tells us we're on the right image plane is that X is greater than zero, the absolute value of Z is less than X, and the absolute value of Y is less than X; to compute U and V, the ratios are negative Z over X for U and Y over X for V. It's the exact same logic, just reoriented for the different axes. Considering one more case, the left image plane is the same as the right image plane except that we flip the signs on the axis. From here you can figure out for yourself how to compute the cases for vectors running through the top, bottom, and back images.

So now, when we render our skybox, what we want to do is keep our camera positioned at the origin but let it rotate so we can look around, and everywhere we look we want to render a pixel, so there should be some primitive in the way. It doesn't actually matter what primitive (it might as well be triangles), and it doesn't really matter what the geometry is; it just has to block our view everywhere we look. The simplest possible geometry that would entirely encompass our camera is a cube, but to be clear, the fact that our cube map was captured as a cube and that we're rendering onto a cube is actually coincidental. What matters is that, for every point on the primitives we render, what we want for each pixel is the corresponding world-space coordinate, because any coordinate can also be thought of as a vector, and so for each pixel we sample from the cube map using that pixel's world-space vector. The fragment shader for our skybox therefore looks very simple: we get as input a vec3 texture coordinate, not a vec2, because this is not just a UV value, it's an XYZ world-space vector; our sampler, notice, is now a samplerCube, not a sampler2D, but we use the same texture function to sample from it, and that gets us our fragment color.
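For reference, a minimal sketch of that skybox fragment shader might look like the following; the names TexCoords and skybox are just illustrative here, not necessarily the names used in the example.

```glsl
#version 330 core
out vec4 FragColor;

in vec3 TexCoords;           // a world-space direction, not a 2D UV

uniform samplerCube skybox;  // samplerCube instead of sampler2D

void main()
{
    // The same texture() call works, but it takes a 3D vector and performs
    // the face selection and UV math internally.
    FragColor = texture(skybox, TexCoords);
}
```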
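That single texture() lookup is where the face-selection and rescaling described above happens. The GPU does this for us, but purely as an illustration, here is a C++ sketch of the front and right cases under the same left-handed convention; the function name and face numbering are made up, and the remaining faces are left as the exercise above.

```cpp
#include <cmath>

struct FaceUV { int face; float u; float v; };   // face: 0 = front, 1 = right (illustrative numbering)

// Given a direction (x, y, z), pick a cube-map face and compute a [0, 1] UV on it.
FaceUV cubemapLookup(float x, float y, float z)
{
    // Front face: z is positive and both |x| and |y| fall inside the 90-degree frustum.
    if (z > 0.0f && std::abs(x) <= z && std::abs(y) <= z)
        return {0, (x / z + 1.0f) * 0.5f, (y / z + 1.0f) * 0.5f};

    // Right face: the same logic with the axes swapped.
    if (x > 0.0f && std::abs(z) <= x && std::abs(y) <= x)
        return {1, (-z / x + 1.0f) * 0.5f, (y / x + 1.0f) * 0.5f};

    // Left, top, bottom, and back follow the same pattern with the axes and
    // signs reoriented, as described above.
    return {-1, 0.0f, 0.0f};
}
```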
To get this input texture coordinate, in the vertex shader we simply take the vertex and get its world-space coordinate, so we transform it by the model matrix. Actually, this isn't really necessary here, because the skybox cube we're rendering onto isn't going to be transformed at all: the model matrix is just going to be an identity matrix, since the local coordinates are one and the same as the world-space coordinates in this case, so we could just assign the position attribute directly to the texture-coordinate output.

gl_Position is computed like normal, except for one thing. Our skybox is supposed to be rendered behind everything else. We could simply render it first and then render everything on top, but then every pixel we overwrite is overdraw, wasted work where we drew the skybox where it didn't really need to be drawn. So instead we do a little trick: we render the skybox last, but make sure that, for the depth test, every pixel of the skybox has the z value 1, the maximum depth value, and, as you'll see, we tweak the depth test so that rather than testing whether the incoming z value is less than the current value in the buffer, it tests for less than or equal. Effectively, our skybox gets drawn into all the pixels where the depth buffer still has the value 1, but not into any other pixels, which are the pixels where something else has already been drawn. So the fragment shader for our skybox only runs for pixels where nothing has been drawn yet, saving the wasted work we'd have if we drew it to every pixel and then drew the rest of the scene on top.

The way to make sure the z value in screen space, where the depth test is performed, is always 1 for our skybox pixels is to set the z value in clip space equal to w. When the vertices are converted into normalized device coordinates, x, y, and z are all divided by w, so if z equals w, the resulting z will always end up being 1: w divided by w is always 1. So, having computed the position like normal, we take its .xyww swizzle, and then gl_Position's z is the same value as its w. Now we have our texture coords and gl_Position, and remember that our vertex output, the texture coordinate, is going to get interpolated for each pixel (in the perspective-correct way), and that gets us the correct world-space vector for each pixel.
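A sketch of that skybox vertex shader, again with illustrative names (aPos for the position attribute, view and projection for the uniforms):

```glsl
#version 330 core
layout (location = 0) in vec3 aPos;

out vec3 TexCoords;

uniform mat4 view;
uniform mat4 projection;

void main()
{
    // The skybox cube is never transformed, so its local coordinates are
    // already the world-space direction we want to sample with.
    TexCoords = aPos;

    vec4 pos = projection * view * vec4(aPos, 1.0);

    // Set z equal to w so that after the perspective divide the depth is
    // always 1.0, putting the skybox behind everything already drawn.
    gl_Position = pos.xyww;
}
```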
Now, in the C++ code, we have the vertices of our cube, the one we actually see at the center of the scene, not the skybox, but then we also have the skybox cube vertices. Notice that the dimensions are effectively 2 on each side. That doesn't really matter, as long as it's big enough that our camera doesn't clip into it; it could be much larger or even smaller.

In our render loop we're going to first draw the cube and then draw the skybox, but when we draw the skybox, all of its z depth values are going to be equal to 1, and we want our skybox to overwrite pixels that also have the value 1, so we need to change the depth-test function from the default of less-than to less-than-or-equal. Also notice that when we get the view transform, we don't want our camera to translate at all, so we take the mat4, convert it to a mat3, which effectively drops the fourth column and row (the translation), and then convert it back to a mat4. Note, though, that this drawing code otherwise all looks the same as the cube-drawing code above, except that we set the depth-test function back to less-than, the default, for the sake of drawing our cube in the next frame. Also, our texture is not a texture 2D as normal; it's a cube-map texture, and the way that gets created is down here in the loadCubemap function.

Our cube map on disk is actually stored as six separate images; looking at the directory here, you can see back, bottom, front, left, right, and top. Those all get loaded separately in this loop. Notice that the currently bound texture is a cube-map texture, not a regular 2D texture, and when we call glTexImage2D to load the data from a file into the texture, the first parameter, a constant, specifies which of the faces this image corresponds to: the positive-X constant refers to the right face, negative X to the left, positive Z to the front, negative Z to the back, positive Y to the top, and negative Y to the bottom. Conveniently, these constants are defined as consecutive numbers, so in our loop we can select the right constant by just adding i, the counter of the loop; in the second iteration, for example, the number is equivalent to the negative-X constant. The faces, by the way, are listed here and then passed into loadCubemap: first right, left, top, bottom, front, then back. And lastly, when we set the texture parameters, notice that we have three texture-wrap parameters, S, T, and R, because for our cube maps we have three texture coordinates, not the usual two.
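A rough sketch of what such a loadCubemap function can look like; this version assumes stb_image for reading the files (the example's actual loader may differ) and takes the face paths in the right, left, top, bottom, front, back order just described.

```cpp
#include <glad/glad.h>
#include <stb_image.h>
#include <iostream>
#include <string>
#include <vector>

// Loads six images into a single GL_TEXTURE_CUBE_MAP texture.
// 'faces' is expected in the order right, left, top, bottom, front, back,
// matching GL_TEXTURE_CUBE_MAP_POSITIVE_X, NEGATIVE_X, POSITIVE_Y, and so on.
unsigned int loadCubemap(const std::vector<std::string>& faces)
{
    unsigned int textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_CUBE_MAP, textureID);   // a cube-map texture, not GL_TEXTURE_2D

    for (unsigned int i = 0; i < faces.size(); i++)
    {
        int width, height, nrChannels;
        unsigned char* data = stbi_load(faces[i].c_str(), &width, &height, &nrChannels, 0);
        if (data)
        {
            GLenum format = (nrChannels == 4) ? GL_RGBA : GL_RGB;
            // The face constants are consecutive integers, so adding i walks
            // through +X, -X, +Y, -Y, +Z, -Z.
            glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, format,
                         width, height, 0, format, GL_UNSIGNED_BYTE, data);
        }
        else
        {
            std::cout << "Cubemap face failed to load: " << faces[i] << std::endl;
        }
        stbi_image_free(data);
    }

    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Three wrap parameters, S, T, and R, because cube-map coordinates are 3D.
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

    return textureID;
}
```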
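And the skybox portion of the render loop described above might be sketched like this, assuming glm and hypothetical names for the shader program, VAO, and texture created elsewhere.

```cpp
#include <glad/glad.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Draw the skybox last each frame, after the regular cube has been drawn.
void drawSkybox(unsigned int skyboxShader, unsigned int skyboxVAO, unsigned int cubemapTexture,
                const glm::mat4& cameraView, const glm::mat4& projection)
{
    glDepthFunc(GL_LEQUAL);   // pass the depth test wherever the buffer still holds 1.0
    glUseProgram(skyboxShader);

    // mat4 -> mat3 drops the fourth column and row (the translation); widening
    // back to mat4 gives a view that rotates with the camera but never translates.
    glm::mat4 view = glm::mat4(glm::mat3(cameraView));
    glUniformMatrix4fv(glGetUniformLocation(skyboxShader, "view"), 1, GL_FALSE, glm::value_ptr(view));
    glUniformMatrix4fv(glGetUniformLocation(skyboxShader, "projection"), 1, GL_FALSE, glm::value_ptr(projection));

    glBindVertexArray(skyboxVAO);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_CUBE_MAP, cubemapTexture);
    glDrawArrays(GL_TRIANGLES, 0, 36);
    glBindVertexArray(0);

    glDepthFunc(GL_LESS);     // restore the default before the next frame draws the cube
}
```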
The next example, 6.2, demonstrates the other primary use of cube maps: environment mapping. What's happening here is that the cube at the center of our scene is now being textured with the same cube map as our skybox, but for each pixel the vector is computed as the reflection of the vector from the camera to the point, so in effect it looks like a reflection of the environment. Of course, because the environment it reflects was captured as a still image, our environment-mapped reflection won't reflect anything that changes, or anything else that's rendered in the scene, but it's still a useful technique because it's the cheapest way to get a reflective surface. In this example we have a totally reflective surface, a perfect mirror, but more commonly these environment maps are applied to less shiny materials, say metal armor or statues, in which case the cube-map sample is blended with some base color of the material itself, and you get a duller reflection effect; in practice, when the reflections aren't perfect like that, the limitations are really not all that noticeable.

Anyway, what this now looks like in code: on the C++ side, hardly anything has changed. It's just that the cube we're rendering at the center of the scene no longer has UV coordinates; it now has normals, because we're going to compute the reflected angle off the surface. So in the vertex shader we need to transform these normals according to the formula we saw in earlier examples; we also want the world-space position, and then gl_Position is computed like normal. Then in the fragment shader, I is the vector from the camera to the point on the surface, and we reflect that off the surface using the normal to get R, and that is what we sample from the cube-map texture with. Note that for this reflection to be perfectly accurate, it would have to account for the fact that the surface position is not at the point where the environment map was captured, so the vector we get is not exactly correct; but as long as the things reflected in the environment map aren't too close, if they're a reasonable distance away, the inaccuracy is not usually noticeable.
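As a sketch of those environment-mapping shaders, again with illustrative names (cameraPos for the camera-position uniform, skybox for the cube-map sampler):

```glsl
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;

out vec3 Normal;
out vec3 Position;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    // Transform the normal with the inverse-transpose of the model matrix,
    // as in the earlier lighting examples.
    Normal = mat3(transpose(inverse(model))) * aNormal;
    // World-space position of the vertex.
    Position = vec3(model * vec4(aPos, 1.0));
    gl_Position = projection * view * model * vec4(aPos, 1.0);
}
```

And the fragment shader reflects the view vector off the surface and samples the cube map with the result:

```glsl
#version 330 core
out vec4 FragColor;

in vec3 Normal;
in vec3 Position;

uniform vec3 cameraPos;
uniform samplerCube skybox;

void main()
{
    vec3 I = normalize(Position - cameraPos);  // from the camera to the surface point
    vec3 R = reflect(I, normalize(Normal));    // reflected off the surface
    FragColor = vec4(texture(skybox, R).rgb, 1.0);
}
```

For a duller material, the sampled color here would simply be blended with the material's base color, as mentioned above.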