When working with transparency, it's important to distinguish between so-called straight alpha and pre-multiplied alpha. In straight, or post-multiplied, alpha, the true color values we're ultimately going to see are the RGB components multiplied by the alpha. But with pre-multiplied values, the RGB components as stored have already been multiplied by the alpha. So here, for example, for our straight RGBA value of (1, 0.5, 0.7, 0.5), its pre-multiplied equivalent is (0.5, 0.25, 0.35, 0.5). Notice the alpha itself is the same. While straight alpha is the more obvious representation, pre-multiplied alpha, as we'll see shortly, has some advantages.

What's generally called alpha blending is the process of compositing one color value on top of another. The RGBA value on top is called the source, and the RGBA value below is called the destination, and what we want to compute are the blended component values, the RGB and A. First, consider the formula for just blending the alpha itself: we take the source alpha and add it to the destination alpha multiplied by 1 minus the source alpha. To understand the logic behind this, imagine we're layering two transparencies, each of which filters out some amount of the light that passes through them both. Let's say the top transparency, the source, filters out 60% of the light; in alpha terms, that would be 0.6. So only 40% of the light that hits our source transparency passes through it. Now let's say the destination filters out three quarters of the light, which in alpha terms is 0.75, so it only lets a quarter of the light through. Well, if only 40% of the light makes it through our source transparency, and then only 25% of that makes it through the destination, 25% of 40% is 10%, and so only 10% of the original light is getting through both transparencies. And so in our formula here, that would be 0.6 plus 0.75 times 0.4, which is 0.6 plus 0.3, getting us a 0.9 alpha, meaning the result is 90% opaque and only 10% of the light gets through.
Now, note that if either the source or the destination has an alpha of 1, meaning fully opaque, then the result is always going to be 1. Because say our source alpha is 1: then 1 minus the source alpha is 0, multiply that with the destination alpha, and you're just left with the source alpha, 1. Or conversely, if the destination alpha is 1 and the source alpha is, let's say, 0.2, then we'd have 0.2 plus 1 times 0.8, and 0.2 plus 0.8 again gets us 1. So the formula correctly gets us a result of 1 if either or both alphas are 1. Also note that the formula is commutative: if we swap the source and destination alphas, we still get the same result. For the sake of computing the alpha, at least, it doesn't matter which we treat as the source and which as the destination.

As for the color components, the formula is not commutative, and the formula is also different depending on whether we're dealing with straight alpha or pre-multiplied alpha. Both formulas, however, get us a pre-multiplied result, and so if you want the straight result, you have to compute the blended alpha and then divide the blended color by the blended alpha. Anyway, to understand the logic of these formulas, consider first the top formula, for straight alpha. The source color component is multiplied by the source alpha, and the destination color component is multiplied by the destination alpha, which makes sense because the color's transparency is, in a sense, diminishing its intensity: if we had a 0.5 alpha for our red value, the intensity of the red we perceive should get cut in half. We don't, however, simply sum up the two color values multiplied by their alphas. Because the destination is behind the source, its intensity is also diminished by how much light makes it through the top layer, through the source, and that amount of light is again 1 minus the source alpha, so that gets multiplied onto the destination before we sum.
For example, if my source alpha is 0.6, imagine a light shining through our top layer, our source layer: with an alpha of 0.6, it's only letting 40% of the light through, so it makes sense that we multiply the destination color value by 0.4. Only 40% of that color intensity underneath is shining through the source. So that's the logic of this formula. Then in the case of pre-multiplied values, we don't need to multiply in the alphas because they've already been multiplied in.

By default in OpenGL, the output from our fragment shaders simply overwrites whatever might already be in the framebuffer for that pixel. But if we enable blending, then what's already in the framebuffer, the destination pixel, is combined with the source pixel according to a formula that's hardwired into a special part of the GPU called the ROPs, which, depending on whom you ask, stands for raster operations processors, render output units, or raster operations pipelines. No one seems to agree, but they're called ROPs. And though this logic is hardwired, we have some options about precisely what the operation is. We can select among five options for what OpenGL calls the blend equation. In the first three equations, the source and destination are each multiplied by a respective factor: there's a factor for the source and a factor for the destination, and we have a dozen or so options about what precisely these factors may be, so that's configurable too. As you can see in the first equation, on top, the products are then combined by adding them together. In the second equation, we subtract the destination from the source, and in the third equation we swap that: we subtract the source from the destination. The last two equation options don't multiply the source and destination by factors; they simply give us either the min or the max of the source and destination.
And to be clear, this is per component. So say, for min, it looks at both red components, and whichever is lesser, that is the output red component. To play around with all the possibilities, there's a really neat web page by Anders Riggelsen that uses WebGL to demonstrate compositing one image on top of another using different blend equations and blend factors. Here for the blend equation, I can pick any of the five equations. FUNC_ADD is the default, but here I'll change it to subtract, and now you can see the result. Then this top drop-down lets us select the source factor. Right now it's just GL_ONE, so you can see that the red, green, blue, and alpha of the source are all just being multiplied by 1. Meanwhile, for the destination, the components are all being multiplied by 1 minus the source alpha. But we have a dozen other options, including, say, GL_DST_COLOR: now the source components are being multiplied by the respective destination components, which gets us an odd effect, but this is something we can do. Now, in code, to configure the source and destination factors, we call glBlendFunc, but there's also glBlendFuncSeparate, which lets us independently configure what happens with the alpha. So here, for example, if I select GL_ONE, notice that now the source alpha is being multiplied by 1, whereas the other source components are still being multiplied by the destination color. So glBlendFuncSeparate is useful when you want to treat the alpha in a different way. And likewise, we normally set the equation with glBlendEquation, but if we use glBlendEquationSeparate, we can set the equation independently for the alpha. So now for the alpha it's using the min equation rather than subtracting the destination from the source. And you can see how we would set these options in code down here. Also note this checkbox, pre-multiply.
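As a sketch, the separate variants look something like this in code. These are standard OpenGL calls, but the particular factor and equation choices here just mirror the ones selected in the tool, not anything a real program would necessarily want:

```cpp
glEnable(GL_BLEND);

// RGB: src * DST_COLOR + dst * (1 - srcAlpha)
// A:   src * 1         + dst * (1 - srcAlpha)
glBlendFuncSeparate(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA,
                    GL_ONE,       GL_ONE_MINUS_SRC_ALPHA);

// RGB uses the subtract equation; alpha uses min.
glBlendEquationSeparate(GL_FUNC_SUBTRACT, GL_MIN);
```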
With it unchecked, both of the input images are straight alpha, but if I check it, now they're both pre-multiplied, which in this case doesn't have much of an apparent effect for these particular options, but in other cases it will make a noticeable difference. Let me refresh here so we can set everything back to the defaults. Notice that there are some factor options that involve a constant color. Like here, I'll select for the source 1 minus constant color. This constant color is set by another function, glBlendColor, which here in the tool we can configure under this tab, and this is what it looks like in code. So now the factor, you can see, is 1 minus these RGBA values.

Now, the question is: what blending options do we want for compositing transparent images? Well, in the case that our color values are straight alpha rather than pre-multiplied, what we normally want is the default addition equation, so we don't need to configure that. But then for the blend func, we want source alpha and 1 minus source alpha. And as long as the destination alpha is 1, which is normally going to be the case (we're going to first composite transparencies onto something that's fully opaque, and then potentially composite further transparencies on top of that, all back to front, so every time a blend is performed, the destination alpha will always be 1), we will get the correct color, but the blended alpha will be incorrect. This, however, will not matter, because looking at the formula, it doesn't involve the destination alpha: we just always assume it to be 1, and in the final output that we're going to render on screen, the alpha is disregarded anyway. (The blended color values from this formula are pre-multiplied.) So, long story short, this formula is correct for calculating the color value, but applying the same factors to the alpha channel erroneously squares the source alpha, so the blended alpha value is incorrect. That's okay, though, because it's just going to get ignored.
Now, in the case that we're working with pre-multiplied alpha, we want the source components to just be multiplied by 1, not by the source alpha, and now we get the correct alpha value regardless of what the destination alpha is. In the event that our output is not going straight to screen but perhaps is being written to a framebuffer and is going to get composited with some other image down the road, we want correct alphas in the output, so in that case this is the proper solution. For this reason, among others, we generally prefer using pre-multiplied alpha. In the niche circumstance where you're dealing with straight alpha and compositing back to front onto an initial background image with an alpha of 1, you can use glBlendFuncSeparate to get both the correct color and the correct alpha (all the blended alphas will be 1), because now the source alpha, instead of being multiplied by the source alpha, is just multiplied by 1, giving us the correct result. Again, though, generally the best thing to do, for the sake of blending and for other reasons too, is to just use pre-multiplied alpha.

Looking now at example 3.2 blending sort, what's happening here is that first we're rendering the floor, then we're rendering these solid gray boxes, and then we have these five windows with a transparent texture, which get rendered last. We make sure to sort them back to front when we render them, so that we render the rearmost ones first. Because alpha blending is not commutative, the order matters: we want the rearmost transparency blended onto an opaque destination, an opaque background, and then, working back to front, we render the remaining transparencies.
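The three configurations discussed above might look like this in code. This is a configuration sketch, not taken from the example's source:

```cpp
glEnable(GL_BLEND);

// 1. Straight-alpha sources, back to front over an opaque background:
//    correct color, but the blended alpha erroneously squares srcAlpha.
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// 2. Pre-multiplied-alpha sources: correct color AND correct alpha,
//    regardless of the destination alpha.
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

// 3. Straight-alpha sources, but with the alpha channel given its own
//    factors so the blended alpha also comes out correct.
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                    GL_ONE,       GL_ONE_MINUS_SRC_ALPHA);
```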
Now, the sorting here is done on a per-object basis, using the objects' view-space coordinates, and as long as these objects' bounds don't overlap at all, the sorting works correctly. But for two models that intersect each other's bounds, it could be that some polygons of the first are in front of the second but, vice versa, some polygons of the second are in front of the first, so you can't fully sort on a whole-object basis. Even if we sort on a per-polygon basis, there's still the problem of two polygons, two triangles, that might intersect. This is why, when we draw, we use the Z-buffer: the only way to determine what should go on top of what is on a per-pixel basis. The ultimate solution for sorting transparencies, then, is that after rendering all the opaque objects, we render all the transparencies, but instead of having overlapping pixels override each other, they all get preserved, and then, once all the rendering is done, for each pixel we sort all the color values and blend them back to front. So it is possible to correctly render an arbitrary number of transparent objects stacked on top of each other, but the details of that get a little complicated, so we won't cover them here.

Anyway, looking at this example's code, the first thing is to make sure we've enabled blending and set up the proper blending function. Our textures here are straight alpha, so the blending factors we want are source alpha and 1 minus source alpha. Then for our windows, we have this list of window positions, which in the rendering loop we're going to put into a map, with the distance from the camera to each position as the keys and the positions as the values.
Once we have this map, we can do a reverse traversal with the rbegin method, which gives us an iterator that goes backwards through the keys of the map, from the most distant to the closest, and then we render a window at each position. So the windows are rendered in order of furthest away to closest, based on their object positions.

For example 3.1 blending discard, what's happening here is that we're rendering these grass textures onto flat quads, textures which have some pixels with alpha values of zero, and in the fragment shader we are discarding the pixels with alpha values under a certain threshold. In fact, you'll notice here in the C++ code that we haven't enabled blending; we're not using blending at all in this example. Instead, in the fragment shader, you see that when the alpha of our texture color is less than 0.1, we use the discard statement, which is like a special return from your fragment shader that tells it not to write any fragment value. Now, as for why we test against a threshold rather than simply equal to 0: if I change the test to exactly 0, rebuild, and run it again, you'll see we get these annoying little white tips. What's happening there is that, looking at the texture-loading code, we're using bilinear filtering, so when you sample from the texture, it does an interpolation, an average, in most cases, of a pixel with its immediate neighbors. So on the edges of the blades of grass, where we go from non-zero alpha to zero alpha, some pixels that shouldn't get rendered have the interpolation mixing in some degree of non-zero alpha from nearby non-zero-alpha pixels, and so these pixels we don't want to see are still getting rendered: they pass the test because they effectively have non-zero alpha thanks to the bilinear filtering. You'll also notice here that, when working with transparent textures, we want to set the texture wrapping to clamp-to-edge rather than repeat, because, again thanks to interpolation, at the edge we would erroneously have some
non-zero alpha bleed through interpolation into the zero-alpha parts, the parts we don't want to see, and so in this case, for some pixels, we would strangely see sort of an outline of the border. I'll demonstrate by just changing this to repeat; we'll set the fragment shader back to use the threshold, rebuild, and run it again, and as you can see, we're getting this really ugly-looking artifact at the edge of the texture, parts that should be discarded. Because of interpolation and wrapping on the texture, those pixels at the top are being blended with pixels at the bottom of the texture, so the alphas up there actually pass our threshold of 0.1; they end up not discarded. So for transparent textures like this, generally you want to clamp to edge.