A framebuffer object is an object made up of one or more image attachments, and these attachments come in four kinds: color, depth, stencil, and depth-stencil, the last being depth and stencil interleaved into one buffer; per pixel, that's generally 24 bits of depth followed by 8 bits of stencil. Each attachment is backed by one of two kinds of objects: a texture or a renderbuffer object. Unlike textures, renderbuffers have no mipmaps, no layers or tiling, and no texture parameters, and they cannot be sampled, so you cannot read from them in your shaders. A renderbuffer very simply stores its pixels one after the other in the most straightforward way possible, making it optimal for cases where you just want to read and write a bunch of pixels en masse; but that simple layout makes it inefficient for sampling, which is why you're not allowed to sample from one.

The so-called default framebuffer is created for us, and it has four color attachments: front left, front right, back left, and back right. Assuming we're doing non-stereoscopic but double-buffered rendering, front left is what's currently displayed on screen; meanwhile we render into back left, and when we're done rendering, we swap the two buffers. If we're doing stereoscopic rendering, such as for VR, we would use the left buffers for the left eye and the right buffers for the right eye. Either way, this default framebuffer is generally configured to have a depth-stencil attachment. That's not actually automatic, but it's something GLFW does for us, so in practice we don't have to set it up ourselves.

So far in our examples we've just been using the default framebuffer, but it's possible to create and use other framebuffers. The default framebuffer is always what ends up on screen, but we might sometimes want to render into so-called off-screen framebuffers for various purposes, such as applying post-processing effects or doing things like deferred rendering, which is something we'll describe in a much later video. In example 5.1, framebuffers, we're not rendering anything we haven't rendered before: it's just a quad for the floor and these two boxes. What's not evident on screen is that we're actually first rendering into an off-screen framebuffer, a separate framebuffer we've explicitly created, and then using the color attachment of that framebuffer as a texture to render the image into the default framebuffer. For the default framebuffer, we just set up a simple quad that encompasses our whole view and render that texture onto the quad, which has the effect of copying the contents of the off-screen framebuffer's color attachment into the default framebuffer's color attachment. You might think there'd be a more direct way to copy data from one framebuffer to another, and in fact there is (glBlitFramebuffer, for one), yet depending upon your OpenGL driver and your system, it tends to be the case that rendering a textured quad is the most efficient way to do it. The other reason to do it this way is that we can now apply so-called post-processing effects: when rendering the off-screen framebuffer's texture into the default framebuffer, we can have the fragment shader for that pass apply some fancy effects, as we'll discuss.
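As an aside, here's a minimal sketch of what that more direct copy might look like with glBlitFramebuffer; the names fbo, screenWidth, and screenHeight are assumptions for illustration, and a valid GL context is assumed.

```cpp
// Minimal sketch: copy the color attachment of an off-screen framebuffer
// into the default framebuffer. Assumes `fbo` matches the screen size.
int screenWidth = 1920, screenHeight = 1080;          // assumed dimensions
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);          // source of the copy
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);            // destination: default framebuffer
glBlitFramebuffer(0, 0, screenWidth, screenHeight,    // source rectangle
                  0, 0, screenWidth, screenHeight,    // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);   // copy color only, no filtering
```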
First, though, let's look at how to create and set up a second framebuffer. We start very simply by creating the framebuffer object itself and then creating its two attachments: first the color attachment, which will be a texture, just like we've seen before, and then the depth-stencil attachment, which we'll make just a renderbuffer object because we're not going to need to sample from it. The general rule is that if you don't need to sample from an image attachment, it might as well be a renderbuffer object, which should then in theory be slightly more efficient, though I've seen conflicting information about whether that really pans out. It turns out that the hardware is actually really good at reading and writing textures, so I believe on many modern GPUs it doesn't make much difference, but maybe I have bad information on that, so don't take my word for it.

Anyway, for our color texture here, you may note that we're not setting any wrapping mode among the texture parameters, because that's going to be irrelevant for our purposes. Notice that the format is RGB with no alpha, and the width and height are the same as our screen output. For the depth-stencil renderbuffer, we simply generate the renderbuffer, bind it so that it's active for the subsequent call of glRenderbufferStorage, which is what actually allocates the buffer, and there we specify its format, 24 bits of depth and 8 bits of stencil, and its dimensions.

Having created both the texture and the renderbuffer, we then need to attach them, so we bind the framebuffer first, and when we do so we have three options for how to bind it: GL_DRAW_FRAMEBUFFER binds the framebuffer for write ops, GL_READ_FRAMEBUFFER for read ops only, and GL_FRAMEBUFFER binds it for both. Write ops include any rendering operations, like say glDrawArrays, whereas read ops include functions like glReadPixels and a few other pixel-transfer operations that copy data from one framebuffer to another; but this does not include sampling from texture attachments. So in this example, when our fragment shader reads from the off-screen framebuffer's color attachment by sampling it as a texture, that framebuffer doesn't have to be bound at all, because the binding is not relevant for that purpose. For our purposes we're not going to do any of these read ops, so we'll just use GL_DRAW_FRAMEBUFFER.

We then call glFramebufferTexture2D to attach our color texture as color attachment zero. Again, a framebuffer can have multiple color attachments, numbered 0, 1, 2, and so on; the spec guarantees at least eight color attachments, though on some systems you can go beyond that limit. It's uncommon to need more than eight, and in this case we just need the one. We then call glFramebufferRenderbuffer to attach the renderbuffer as the depth-stencil attachment. Notice there's no number zero here, because you never have more than one of these on any framebuffer. Lastly, we can call glCheckFramebufferStatus to check whether our framebuffer is complete, meaning it's validly configured with a valid set of attachments, and in the event that it's incomplete, we print out this error message. Doing this check is not required, but it's good for debugging if you're doing anything complicated with framebuffers.
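Put together, the setup just described might look like the following sketch; the variable names and the 1920 by 1080 dimensions are assumptions for illustration, and a valid GL context is assumed.

```cpp
#include <cstdio>  // for the completeness-check error message

// Create the off-screen framebuffer object.
unsigned int fbo;
glGenFramebuffers(1, &fbo);

// Color attachment: a texture, since we'll sample from it later.
// RGB with no alpha, same dimensions as the screen output.
unsigned int textureColorBuffer;
glGenTextures(1, &textureColorBuffer);
glBindTexture(GL_TEXTURE_2D, textureColorBuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1920, 1080, 0, GL_RGB, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// No wrap mode set here; we'll revisit that once we start sampling with offsets.

// Depth-stencil attachment: a renderbuffer, since we never sample from it.
unsigned int rbo;
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, 1920, 1080); // allocates the buffer

// Bind the framebuffer for drawing, attach both images, and check completeness.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureColorBuffer, 0);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rbo);
if (glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    std::printf("ERROR: framebuffer is not complete!\n");
```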
Now, looking down at the render loop: the first thing every frame is to render into the off-screen framebuffer, so we make sure it's currently bound for drawing. We also need to make sure that depth testing is enabled, because, as you'll see, we're actually going to disable it when we render into the default framebuffer. Here we're clearing the currently bound framebuffer, the off-screen one, and then all of this is just rendering our scene as normal.

Having rendered everything into the off-screen framebuffer, we now want to render it into the default framebuffer so that we can see it on screen, so we bind the default framebuffer with zero; the integer ID of the default framebuffer is always zero. We disable depth testing, and we clear, in this case, only the color buffer; we don't care about the depth buffer because we're not using the default framebuffer's depth buffer, so there's no need to clear it. Then we do our rendering, this time using a different shader program from the one we used to render into the off-screen framebuffer, and we'll look at those shaders in a second. Our geometry is defined by this quad vertex array object, which is just a single quad, as we'll see in a moment. We bind the texture color buffer like we do for any texture, and then we draw our quad, which is actually made up of two triangles, so properly I shouldn't call it a quad; it's a rectangle.

Before looking at the fragment shaders, very quickly: what does the quad VAO look like? These are the quad vertices: two triangles whose vertices are defined in terms of clip space with no z. There are only two dimensions here because we're just doing two-dimensional rendering, and then we have the corresponding texture coordinates, so there are two vertex attributes for the quad VAO.

Looking at the shaders for our off-screen rendering: the vertex shader is nothing we haven't seen before, just very ordinary flat-lit rendering of textures, and the same goes for the fragment shader, which just outputs the texture color for the pixel. Then for our screen vertex shader, the one used to render into the default framebuffer: in this case we're not dealing with any transforms, because our vertices are already in clip space like we want, but they are vec2s rather than vec3s, so we fill in zero for z. As for our screen fragment shader, it's exactly the same as our other fragment shader: we simply render the texture onto the quad.

What if, however, we don't render the texture verbatim but apply some kind of effect? That's what we call post-processing: after rendering our scene as normal, we manipulate the color values before they reach the screen. Probably the simplest post-processing effect is to invert all the colors, which we can do by taking the vec4 returned by this texture call and subtracting it from one. This takes the R, G, B, and A values returned by the texture call and subtracts each of them from one, and that's the vec4 assigned to the fragment color, so it effectively inverts all the colors. So here I'll save and rebuild, and if I run it, all the colors have been inverted, which is not terribly interesting in this case because our original scene is all grayscale, but you can see that everything's inverted. Note that this means all the alpha values are now zero, but that doesn't matter for the output color buffer.
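For reference, a minimal sketch of what that screen fragment shader might look like with the invert effect in place; the uniform name screenTexture is an assumption.

```glsl
#version 330 core
in vec2 TexCoords;
out vec4 FragColor;
uniform sampler2D screenTexture;  // the off-screen framebuffer's color attachment

void main() {
    // Plain pass-through would be: FragColor = texture(screenTexture, TexCoords);
    // Inverting: subtract the sampled vec4 (RGB and A) from one.
    FragColor = 1.0 - texture(screenTexture, TexCoords);
}
```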
For something slightly more complicated, we can make the output grayscale. In this case our pre-processed image is already grayscale, but let's pretend it weren't. If you want the grayscale equivalent of any color value, you simply take the R, G, and B, average them together, and make that average your new R, G, and B values. Now, this simple technique for grayscale does work, but it's generally not ideal, because when it comes to our perception of color we're most sensitive to green, less so to red, and even less so to blue, and so what we actually want is a weighted average of the components, which looks more like this: multiply the red by 0.21, the green by 0.71, because it's the strongest component in the average, and the blue by only 0.07. So blue actually contributes quite little to the average, but that's what we want, because of human perception. Let me demonstrate this here: rebuild, run it, and as far as I can tell this looks exactly the same as before, which is good, I suppose, because the scene was grayscale to begin with, and you wouldn't want your grayscale filter to somehow change an image that's already grayscale.

To achieve a lot of interesting post-processing effects, we can use what's called a convolution matrix, or more commonly just a kernel. The idea is that we sample not just at our pixel coordinate but also at neighboring points: above, below, left, right, and to the top-left, top-right, bottom-left, and bottom-right. We then multiply these nine samples by the nine weights specified by our kernel and add them all together to get our final output color. So here, for example, we have a kernel that gives us a sharpen effect: we take the center sample and multiply it by nine, but multiply the neighboring samples each by negative one, and when all these samples get added together, it has the effect of sharpening the image.

Note that these neighboring points are not necessarily just a pixel's distance away. Sometimes that's what we want, but we can actually choose any offset, and the choice of offset will effectively modify the effect. So we get different effects not just by changing the weights in the kernel but also by changing this offset. In this case we're using the same offset for the vertical and horizontal dimensions, but in some cases you may want separate vertical and horizontal offsets. You also probably want to tie the offset to the dimensions of your output image. In this case we're rendering out at 1920 by 1080, so if our offset is 1/600, then in the horizontal dimension our offset is about three pixels away from the center point, and in the vertical dimension it's approximately two. If, say, we wanted to make sure the offset is exactly one pixel in both the horizontal and vertical dimensions, we'd have to compute two separate offsets using the actual dimensions of the image. For our purposes here, we're keeping things simple, so we've chosen an arbitrary offset.
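As a sketch of how such a kernel might look in the screen fragment shader, using the sharpen weights and the 1/600 offset just described (the uniform and variable names are assumptions):

```glsl
#version 330 core
in vec2 TexCoords;
out vec4 FragColor;
uniform sampler2D screenTexture;

const float offset = 1.0 / 600.0;  // arbitrary offset, same in both dimensions

void main() {
    // The nine sample points: the center plus its eight neighbors.
    vec2 offsets[9] = vec2[](
        vec2(-offset,  offset), vec2(0.0,  offset), vec2(offset,  offset),
        vec2(-offset,  0.0),    vec2(0.0,  0.0),    vec2(offset,  0.0),
        vec2(-offset, -offset), vec2(0.0, -offset), vec2(offset, -offset)
    );
    // Sharpen kernel: center times nine, neighbors times minus one (sums to one).
    float kernel[9] = float[](
        -1.0, -1.0, -1.0,
        -1.0,  9.0, -1.0,
        -1.0, -1.0, -1.0
    );
    // Weight each sample by its kernel entry and sum for the final color.
    vec3 color = vec3(0.0);
    for (int i = 0; i < 9; i++)
        color += vec3(texture(screenTexture, TexCoords + offsets[i])) * kernel[i];
    FragColor = vec4(color, 1.0);
}
```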
Anyway, to see this in action, I'll build it and run it, and you can see we get this rather ugly-looking sharpen effect. So that's neat, but if you look at the edge of the image you'll notice something a little funny going on. What's happening is that, by default, the texture we're sampling from wraps, and so we get this odd effect at the border of the image. To fix it, we simply configure our texture color buffer to use clamp-to-edge for the texture wrap parameters. So I'll build that and run it again, and now you can see we've gotten rid of that problem at the edges.

For one more example, if we change the kernel, then instead of a sharpen we can get a blur effect. This is achieved by weighting the corners at one sixteenth each, the top, left, bottom, and right neighbors at two sixteenths each, and the center itself at four sixteenths. You may note that all these weights add up to one, which is usually the case for kernels; our sharpen kernel also added up to one. If instead they added up to something larger than one, we would effectively be brightening the pixels, and if they added up to something less than one, we would effectively be darkening them. In this case, though, we don't want to darken or brighten the image, so our kernel weights add up to one. Now I'll rebuild, run it again, and we're getting blurry output.
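For reference, the blur kernel just described, as it might replace the sharpen weights in the shader sketch above:

```glsl
// 3x3 blur kernel: corners 1/16, edge neighbors 2/16, center 4/16.
// The weights sum to one, so the overall brightness is unchanged.
float kernel[9] = float[](
    1.0 / 16.0, 2.0 / 16.0, 1.0 / 16.0,
    2.0 / 16.0, 4.0 / 16.0, 2.0 / 16.0,
    1.0 / 16.0, 2.0 / 16.0, 1.0 / 16.0
);
```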