What we call aliasing is visual imperfections in a rendering, resulting from the fact that we are rendering not at an infinite resolution, but at some finite resolution. Most noticeably, aliasing manifests as jaggy edges, but the effect is actually pervasive across the whole scene. Even within polygons we have aliasing, which tends to be more noticeable on detailed textures when the camera or polygons are moving. To mitigate aliasing, the most obvious and simplest thing to do is to simply render at a higher resolution: the higher we go, the less noticeable these imperfections become. The obvious problem then is that, well, monitors only have so much resolution, and secondly, the higher the resolution we render to, the bigger the performance hit. So what we call anti-aliasing encompasses any technique meant to mitigate aliasing by some means other than simply raising the resolution.

The simplest and most obvious anti-aliasing technique is what's called supersampling. The idea is simply that we render at a high resolution, but then downsample for our final output. So say I'm running at 1080p but using 4x supersampling: internally the rendering is being done at a 4K resolution, but then that gets downsampled to 1080p. This is not only the simplest technique, it's also the most effective, but the obvious downside is that we pay all the cost of rendering at the higher resolution. So it's only really useful when we care only about image quality and not performance. Supersampling as a technique was really short-lived in games, and then we moved on to what's called MSAA, multisample anti-aliasing, which, as we'll discuss in detail in a moment, is a quite similar idea: we're sort of rendering at a higher resolution, except, as we'll see, it cuts down on the amount of fragment processing work, and so it's considerably more efficient.

MSAA, however, still incurs a significant performance hit, and so in more recent years it's generally been superseded by a number of techniques that all apply a kind of edge detection and blurring as a post-processing effect. This includes FXAA (fast approximate anti-aliasing), MLAA (morphological anti-aliasing) and its variant SMAA (subpixel morphological anti-aliasing), and then we have TAA (temporal anti-aliasing), which, as the name implies, takes into account changes from frame to frame. There are many variants of these techniques, and they have their trade-offs, but generally these are the most efficient anti-aliasing techniques, and so they're the ones most commonly used today. Arguably, then, multisample AA is a bit out of date, but it does have special support in the hardware, and it is simpler than the more recent techniques, and so it's the one we're going to cover here.

So in regular rendering, without multisampling, we have effectively one sample point per pixel: the center of the pixel itself. With multisampling, as the name implies, we have multiple samples per pixel, which could be two if we're using 2x multisampling, or four if we're using 4x multisampling, etc. 4x is probably the most commonly used, because 2x doesn't get you that much of a visual benefit, and anything greater than 4x tends to be too expensive. Let's say that we have four sample points within each pixel. For our buffers, we then need to store four values per pixel, just as if we were rendering at four times the resolution. So in terms of buffer sizes, we're paying the same cost as if we were rendering at the higher resolution.
The difference is that when we rasterize our triangles, the in-bounds check is performed against these sample points, not the center of the pixel. So for interior pixels of a triangle, all four sample points will be in-bounds, but for pixels on the edge, only one, two, or three will be in-bounds. And if you notice in the diagram here that the grid of four sample points is slightly rotated, that's usually how the sample points are arranged, because it effectively cuts across the grid of our pixels, and so it gives us slightly more effective anti-aliasing.

Anyway, for any pixel where at least one of these sample points is in-bounds of the triangle, the depth and stencil values are computed per in-bounds sample point, just as if we were rendering at a higher resolution. But even if more than one sample point is in-bounds, we only run the fragment shader once, and we do so with the vertex attributes interpolated to the center of the pixel, just as if we were rendering the pixel normally. So for this triangle and this pixel that's in-bounds, we only get one output color. But remember that for each pixel we have four color values, each corresponding to one of the sample points, and we write the output of the fragment shader to only those slots corresponding to the sample points that are in-bounds. So here in the diagram, the two left sample points are in-bounds, so the color value only gets written for those two sample points, not the other two.

So this is how the rendering proceeds, and then when all the rendering is done and we're ready to display, the four color values of each pixel get averaged together into one color value, and that is what we actually see. To be clear, then: for pixels on the interior of a triangle, all four sample points are going to be inside, and so they're going to get the same color value, and the same color value repeated four times, when averaged together, just gets you that same color value. So multisampling effectively gets us the same result for those pixels; it doesn't do any anti-aliasing for the interiors of our triangles. For the edge pixels, however, such as in the diagram, the four samples per pixel are likely going to have different color values, and so when you average them together you get a different result, which is effectively like a blur on that edge, thus mitigating the aliasing problem.

So unlike supersampling, even though we're rendering to equivalent-sized buffers, we're not paying all the same cost in terms of fragment processing: the fragment shader will typically run significantly less often than if we were using supersampling. Note, however, that relative to running without multisampling, the fragment shader will run somewhat more often, because of edge pixels where the center is not in-bounds but one or more of the sample points is. So for many edge pixels where otherwise the fragment shader would run just once, sometimes it'll run twice, or even three or four times: if you happen to have four different triangles that all intersect around the center of the pixel, the four sample points could be in-bounds of four different triangles. So on top of the extra depth and stencil testing that we're doing for all the sample points, the fragment shader is being run more, particularly in scenes where you have many small triangles and so many more edge pixels.
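To make that resolve step concrete, here's a tiny conceptual sketch in C of what happens per pixel when the four sample colors are averaged down to the one displayed color. This is not any actual OpenGL call; the Color struct and function name are purely illustrative. Note it's the same averaging a supersampling downsample performs over each 2x2 block of high-resolution pixels.

```c
typedef struct { float r, g, b; } Color;

/* Conceptual 4x MSAA resolve for one pixel: average the four per-sample
 * colors. Interior pixels store the same color four times, so the average
 * is unchanged; edge pixels store differing colors, so the average blends
 * them, which is the anti-aliasing effect. */
Color resolve_pixel(const Color samples[4])
{
    Color out = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 4; ++i) {
        out.r += samples[i].r;
        out.g += samples[i].g;
        out.b += samples[i].b;
    }
    out.r /= 4.0f;
    out.g /= 4.0f;
    out.b /= 4.0f;
    return out;
}
```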
Getting back to cost: in fact, in the worst-case scenario, your whole scene could be a bunch of tiny little triangles, such that each sample point is in-bounds of a different triangle, and so we would end up paying all the costs of supersampling. The denser and more elaborate our geometry, the closer we are to that worst-case scenario. In more typical cases, however, 4x multisampling tends to incur about a 30% performance hit on most modern hardware, somewhere in that ballpark.

Example 11, anti-aliasing off-screen, demonstrates 4x multisampling. You probably can't really tell in the video here, but the difference is noticeable to me at least, even at this high resolution: that line there is not as jaggy as it would be otherwise, without the 4x multisampling. The simplest way to achieve this is to simply have OpenGL do the work for us by enabling multisampling. But if we enable this, we need to make sure that the framebuffer we're rendering into is set up for multisampling, and for that we can lean on GLFW and have it do it for us by telling it that we want four samples. It will then set up the default framebuffer with the appropriate 4x multisample buffers.

But if for any reason we're rendering into an off-screen framebuffer, rather than directly into the default framebuffer, then this solution isn't going to do us any good: we need the multisampling to be performed when we actually do the 3D rendering into the off-screen framebuffer. So we disable this here, and now we need to set up an off-screen framebuffer configured for 4x multisampling. That's what we're doing here, creating a multisampled color attachment. Notice the type is 2D multisample, and we're specifying the number of samples, four. Our depth and stencil buffers also need to be set up for multisampling, so note here we're calling glRenderbufferStorageMultisample, again specifying 4x multisampling. Then we attach these to our framebuffer like we've done before, and now if we render into this framebuffer, it'll do the 4x multisampling.

Now, we could copy the data directly from this framebuffer into the default framebuffer to show it on screen, and in that copy operation it'll take the four samples per pixel and average them together. But if we want to apply post-processing effects, we need to render from one framebuffer into another so that we can have a fragment shader process the pixels. So we're actually going to want an intermediate framebuffer which is not set up for multisampling, and because we're not doing any 3D rendering into it, it doesn't need a depth or stencil buffer; it just has the color attachment. Then in our render loop, we render into our multisampled off-screen buffer, and then we copy from this multisampled buffer to the intermediate framebuffer. Notice we've set up, through glBindFramebuffer, to read from the original framebuffer and draw into the intermediate, and then we use glBlitFramebuffer to copy the color buffer using nearest-neighbor filtering, which is fine for our purposes here because the dimensions are exactly the same, so nearest-neighbor is going to give us perfectly accurate results. Once our image is in this intermediate framebuffer, we render from that intermediate into the actual default framebuffer, simply drawing onto a quad taking up the full screen.
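In code, the default-framebuffer route described above comes down to just two calls. A minimal sketch, assuming a standard GLFW program (window-creation details omitted):

```c
/* Before creating the window: ask GLFW for a default framebuffer
 * with four samples per pixel. */
glfwWindowHint(GLFW_SAMPLES, 4);
GLFWwindow *window = glfwCreateWindow(800, 600, "MSAA", NULL, NULL);

/* After the GL context is current: enable multisampled rasterization.
 * (It's typically enabled by default, but being explicit is safer.) */
glEnable(GL_MULTISAMPLE);
```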
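And here is a condensed sketch of the off-screen setup and resolve blit just walked through. Variable names like msFBO, intermediateFBO, and screenTex are placeholders, and error checking and the scene drawing itself are omitted:

```c
unsigned int msFBO, msColorTex, msRBO;

/* 4x multisampled color attachment: note the GL_TEXTURE_2D_MULTISAMPLE target. */
glGenTextures(1, &msColorTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msColorTex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGB, width, height, GL_TRUE);

glGenFramebuffers(1, &msFBO);
glBindFramebuffer(GL_FRAMEBUFFER, msFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D_MULTISAMPLE, msColorTex, 0);

/* 4x multisampled depth-stencil renderbuffer. */
glGenRenderbuffers(1, &msRBO);
glBindRenderbuffer(GL_RENDERBUFFER, msRBO);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH24_STENCIL8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                          GL_RENDERBUFFER, msRBO);

/* Intermediate framebuffer: a plain, non-multisampled color attachment only,
 * since no 3D rendering happens into it. */
unsigned int intermediateFBO, screenTex;
glGenTextures(1, &screenTex);
glBindTexture(GL_TEXTURE_2D, screenTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &intermediateFBO);
glBindFramebuffer(GL_FRAMEBUFFER, intermediateFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, screenTex, 0);

/* In the render loop, after drawing the scene into msFBO: resolve the four
 * samples per pixel down to one by blitting into the intermediate buffer. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, msFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, intermediateFBO);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

/* Then bind the default framebuffer and draw screenTex onto a
 * full-screen quad with a post-processing shader. */
```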
And because we're rendering rather than just blitting, the fragment shader can process the pixels in any way we like, and thus can apply post-processing effects, even though in this case we're not doing any effects. Now, as you might imagine, it's not ideal performance-wise to go through this intermediate buffer. So instead we can go directly from the original framebuffer, rendering into the default framebuffer, and in the fragment shader we would use a sampler2DMS (MS as in multisampled) and the texelFetch function to manually read the four sample color values and average them together ourselves in the fragment shader, and then we can apply the post-processing effects. In fact, because our post-processing effects now have access to the multisampled data rather than just the averaged colors, we effectively can do a kind of higher-resolution post-processing, though that could incur some performance cost, so there are trade-offs here. I'm not exactly sure which approach ends up being the most efficient; the real answer with performance, as always, is that you really just have to test and see for your use case.
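As a sketch of that single-pass approach: a fragment shader doing the manual resolve might look roughly like this, assuming GLSL 3.30 and a placeholder uniform name screenTextureMS bound to the multisampled color attachment. A real post-processing effect would then work on the individual sample values rather than just averaging them:

```glsl
#version 330 core
out vec4 FragColor;

uniform sampler2DMS screenTextureMS;  // the multisampled color attachment

void main()
{
    // texelFetch on a sampler2DMS takes integer pixel coordinates plus a
    // sample index; there's no filtering, we read each sample directly.
    ivec2 coord = ivec2(gl_FragCoord.xy);
    vec4 color = vec4(0.0);
    for (int i = 0; i < 4; ++i)
        color += texelFetch(screenTextureMS, coord, i);
    FragColor = color / 4.0;  // manual 4x resolve; apply effects here
}
```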