I almost forgot, I almost forgot my bit. Look, it's been so long since we've done this. It's been really long. Also, this episode is going to be about a blog post that I wrote, which you just have to forget you read, because otherwise you'll know everything. I've already forgotten. It's OK. It went in one ear and out the other. You consume blog posts with your ears. Oh, that's probably why I don't remember it. It's been a while since we've done the recording. I mean, last time we were sat here, we were in national lockdown. There was a pandemic. And now, Jake. OK. Yeah, I guess the only difference this time is it's colder. It is colder. Well, and to distract us from all that's going on, I want to talk about dithering. Yay. So originally, I wanted to look at dithering because I know it's a thing with images. I also know it's a thing with audio recording. And I was like, oh, that's interesting, the same process that applies to different areas of technology. Let's write a blog post about it. Let's do some research. And it was a massive rabbit hole and it turned out to be super long. So in the end, my blog post ended up being just about monochrome dithering in images specifically, where you only have black and white and you try to make images with just two colors. And because I wrote a blog post, it would be kind of negligent to not also repurpose it as a video, because I'm lazy. So here we are. I'm thinking now of all the blog posts I've written that I haven't also made episodes of. Are you saying that's negligent? I'm negligent. Yeah, that's what I'm saying. And now you have your next 800 episodes of HTTP203 sorted. But let's be honest, some of those blog posts didn't do very well, mate. They're not going to do well as an episode either. You don't know if you don't try. If I've got to do it, I'll do it. All right, so dithering. I'm going to use two example images in this blog post, which are these two, which I chose because I thought they have a good mixture of properties.
They have some gradients. They also have some hard edges. One is very much on the dark side. One is very much on the light side. There are some nice details that we can watch to see if they get preserved through the process. For example, on the darker image, we have the fine lines of the bridge. On the lighter image, you can see a little boat in the water. Maybe we can see which dithering algorithms will actually maintain that boat and where it will get lost. And you took both these pictures yourself, so there's no copyright issues. I did. No one can say anything about copyright issues, except if the bridges are copyrighted, which I hope they're not. Or the boats. So as I said, the whole point is that we only have two colors. We have these images, which have 256 shades of gray, which in the book... That's a different book. Is that the sequel? That's the more IT-focused version. Sorry. And because they are grayscale, I'm going to talk about these images as if every pixel has a brightness, because it doesn't really have a color. It's just a brightness. And basically, I'm going to say zero means black, one means white, and everything else is kind of in between. And so we have 256 shades. But in our output, we only want to have pure black and pure white and still kind of make that image look the same, which sounds pretty hard. And that's exactly what dithering is about. And the naive approach, I guess the first attempt that I did, is you just look at a pixel and you select the color that it's closer to. So if it's brighter than 50%, you choose white. And if it's darker than 50%, you choose black. The problem is that that looks pretty bad. If you know the image, you can recognize it. If you don't know the image, this is not really helpful. And the original definition, like the mathematical definition of dithering, works with noise.
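To make that concrete, here's a minimal sketch of the naive approach in Python. It's my own illustration, not code from the blog post, and it treats a grayscale image as nested lists of brightness values between 0 and 1, as described above:

```python
# Naive 1-bit quantization: snap every pixel to whichever of
# black (0.0) or white (1.0) it is closer to.

def quantize(image):
    return [[1.0 if pixel > 0.5 else 0.0 for pixel in row] for row in image]

# A smooth horizontal gradient collapses into a hard black/white split,
# which is why this approach throws away so much detail.
gradient = [[x / 7 for x in range(8)]]
print(quantize(gradient))  # [[0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]]
```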
So what they do is they add a bit of noise to the image and then do this process, which is called quantization, to flip to black or white. And dithering basically says: instead of flipping from black to white at 50% brightness, we choose a random value. And it sounds a bit weird, but it makes sense in a mathematical sense. So think about it: if you have an image, or just a big rectangle, that's just one color, 10% brightness. If you choose the value at which you flip randomly, then roughly 10% of the pixels will be white and the rest will be black, because the threshold is random. And so overall, the average brightness kind of stays the same. And what's also nice is you can talk about threshold maps. So instead of creating a random threshold for every pixel, you generate the thresholds ahead of time and put them in a map, and it looks like this. It's basically the same size as the image, but with random brightnesses in it. And if a pixel is brighter than its corresponding pixel in the threshold map, we go from black to white. And that's kind of nice. So why would you use a map rather than, like, Math.random? I mean, this is a map created with Math.random, but the nice thing about having it as an image like this is that it's deterministic, or repeatable, because you have your randomized values, but then you store which values you had. And it's kind of easier to reason about why the image looks like what it looks like in the end, because you can see what the threshold map is. And thirdly, this actually allows you to, and this is not really relevant for this video, but you can run it as a shader on a GPU, because you can now parallelize per pixel. You don't need to go through it one by one. And that can be quite handy. So if we use this specific threshold map here, basically comparing each pixel to the corresponding pixel in the threshold map, it looks like this. And honestly, when I did this, I was pretty surprised because, I mean, it's not great.
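Here's a sketch of that threshold-map idea, again my own illustration rather than the blog post's code. The image is nested lists of brightness values in [0, 1], and seeding the random generator is what makes the "random" map repeatable:

```python
import random

def make_threshold_map(width, height, seed=203):
    # Precompute a map of random thresholds; the fixed seed makes it
    # deterministic and reusable.
    rng = random.Random(seed)
    return [[rng.random() for _ in range(width)] for _ in range(height)]

def noise_dither(image, threshold_map):
    # A pixel becomes white only if it is brighter than its
    # corresponding threshold.
    return [
        [1.0 if pixel > threshold else 0.0
         for pixel, threshold in zip(image_row, map_row)]
        for image_row, map_row in zip(image, threshold_map)
    ]

# A uniform 10%-gray image: on average about 10% of pixels end up white,
# so the overall brightness is roughly preserved.
image = [[0.1] * 100 for _ in range(100)]
result = noise_dither(image, make_threshold_map(100, 100))
white_fraction = sum(row.count(1.0) for row in result) / (100 * 100)
print(white_fraction)  # close to 0.1
```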
We know from, like, images of the 90s that we can do better, but this is a very simple approach. And it definitely has more detail than the previous quantization approach. And that's kind of cool. I mean, the... Yeah, it's quite grainy. I imagine this would look a lot better if it were animated. Like, if you were animating the random values, your eyes would average out all of the frames and it would actually look really, really good. But as a single frame, it's quite grainy. That's actually what ray tracers do. They choose random noise and basically add up the individual images over time and average them out, and then it actually becomes the correct image, which I think is also kind of cool. But animating dithering is yet another topic that I'm not gonna get into now, because it's another whole rabbit hole. So we have the threshold map, which is random noise, and when you use it for dithering, it looks like this. And then people said, well, maybe we can be a bit smarter about that noise that we use and introduce a bit of order to the noise. And that's what is often called ordered dithering. And one of the approaches, which some people might recognize from gaming, is called Bayer dithering, which uses a Bayer matrix. That matrix is still used today in camera sensors, where it's a specific arrangement of pixel sensors, for other reasons. But that same arrangement is used here, which kind of looks like this. So you can see that there's a very distinct pattern to it. What's actually quite interesting is that in this Bayer pattern, not a single value appears twice. So all the bright pixels look similar, but they don't have the same value. And you can see that pixels of similar brightness are further apart, to actually make it a pattern. And these Bayer matrices even exist in a couple of different sizes: two by two, four by four, eight by eight, 16 by 16.
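One common way to build these matrices is recursively, deriving each size from the previous one. Here's a rough Python sketch of that construction (my own illustration, not the blog post's code), including the wrap-around tiling used to stretch a small matrix over a full image:

```python
def bayer_matrix(n):
    # Recursively build the 2**n by 2**n Bayer matrix, holding each of
    # the values 0 .. 4**n - 1 exactly once.
    if n == 0:
        return [[0]]
    smaller = bayer_matrix(n - 1)
    size = len(smaller)
    result = [[0] * (2 * size) for _ in range(2 * size)]
    for y in range(size):
        for x in range(size):
            value = 4 * smaller[y][x]
            result[y][x] = value                    # top-left quadrant
            result[y][x + size] = value + 2         # top-right
            result[y + size][x] = value + 3         # bottom-left
            result[y + size][x + size] = value + 1  # bottom-right
    return result

def bayer_dither(image, n=2):
    # Tile the matrix over the image by wrapping coordinates with
    # modulo, and compare each pixel against its normalized threshold.
    matrix = bayer_matrix(n)
    size = len(matrix)
    levels = size * size
    return [
        [1.0 if pixel > (matrix[y % size][x % size] + 0.5) / levels else 0.0
         for x, pixel in enumerate(row)]
        for y, row in enumerate(image)
    ]

print(bayer_matrix(1))  # [[0, 2], [3, 1]]
```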
Technically they also exist in bigger sizes, but because we only have 256 shades of gray, anything bigger than 16 by 16 doesn't really make sense. But at the same time, 16 by 16 is a lot smaller than the image that we are trying to dither. And so we just tile this Bayer matrix: we just wrap around the edges and fill the entire image to create our full-size threshold map. And if we use this for dithering, I actually think this looks really cool. You can see that a lot of detail is preserved. I feel like it looks detailed. And the noise is very, very structured, very ordered in a sense, which can be undesirable, but sometimes can also be a super interesting stylistic choice. So in this scenario, I actually really like the way it looks on screen, and the gradients look quite real, I think. Yeah, but as you say, it does have that unnatural feel, because you can see the tiling from the, yeah, from the tiling. The tiling, from the tiling. Yeah, I think sometimes it's a stylistic choice. It's very nostalgic for me, because I think it was very popular back in the day in games that had to render in 16 colors on, like, Windows 95 or older. And that's probably why I like it to an extent. Yeah, I remember this being an option in PaintShop Pro, which takes me back. This was definitely one of the dithering options that I remember. And it turns out that people often equate the term ordered dithering with Bayer dithering, but really any dithering that orders the noise in some way falls into this category. And there's another one that I found really interesting, which is blue noise dithering. So the original dithering, the mathematical dithering, so to speak, uses white noise, this purely random map where every frequency is represented equally. Now, blue noise is called blue noise because it has the higher frequencies at a higher intensity than the lower frequencies. So in white noise, just like white light, every frequency is represented equally.
In blue noise, the higher frequencies get a bit more power. And that has an interesting side effect, because in white noise, because it is random, there will also randomly appear clusters of white pixels and kind of voids of black pixels. And if you blur it, you can see that these rough structures kind of remain. Blue noise tries to eliminate those bigger structures and only have very high-frequency noise, which, when you blur it, gives you a very even shade of gray. And this is actually not that simple to generate. But again, because it's ordered dithering, you just basically build up a threshold map as a texture. You only need to generate it once, and then you can just reuse it and never do the computation again. And the way blue noise is defined, it will actually always tile and wrap around the edges. So you just generate, in this case I did 64 by 64, and then I just tile it to cover the rest of the image. And this is one of my favorite dithering looks, because it looks super organic. Like, the spacing always looks roughly even. Lots of detail is preserved in the image, and yet the gradients also look really, really natural. And I think this is a really interesting look. And again, it can still run on a GPU, so you could apply this in real time to images or even a game, which some people have done, and it looks kind of amazing. So this is, like, very similar to the random noise one you had before, but it's just, it's removed that roughness. Yeah, it's the spacing. I think it's the spacing that looks more even. So it is both organic and yet kind of evenly spaced, is what I keep calling it, which I hope people know what I mean. But if you look at the fog on the darker image, around the bottom left corner, it actually brings across the fade-out, the gradient that the fog had. And if you go further away from the image, it almost looks like a natural gradient. Keep in mind, these images only use full black and full white.
There are no individual gray pixels in these images. And the fact that both the sky and the fog look like they're actually fading from white to a darker color, that's the dithering at work. And I think it's kind of amazing that it works. You should link to the actual images, because thanks to video compression on YouTube, these will be using more than two colors. That is probably true. I will actually link to my blog post, because they are all in there as well. And I've got to get the view count up. Absolutely. That's kind of my job. So these have been the ordered ditherings, which are fairly close to the mathematical OG dithering. But if anyone has ever researched or googled dithering, they will probably have found a different approach, because the most popular approach to dithering is called error diffusion dithering. And that is a very different approach. The most popular algorithm is called Floyd-Steinberg. Our test images look like this with Floyd-Steinberg, which I actually don't like as much, because especially in the skies in both images, you have very noticeable lines and patterns. Yeah, it looks like someone's doing some dots and then they've kind of started falling asleep and sort of trailed off. Floyd-Steinberg is actually a really good algorithm. It just doesn't work well if you only have two colors that are very, very far apart. It looks much better if you have a palette with eight or 16 colors, so that actual gradients look more like gradients and not just like weird lines in the sky. So yeah, I think in the context of my article, Floyd-Steinberg didn't come off very well, because I only used two colors; with more colors, it looks much better. But it is a very popular algorithm, because it is easy to implement and it is quite fast. And if you have more colors, it actually looks really good. Okay, so let's talk about error diffusion and how it actually works.
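For reference while following along, the whole Floyd-Steinberg loop is short. Here's a Python sketch of my own (not code from the blog post), using the standard weights (7/16 to the right, 3/16 below-left, 5/16 below, 1/16 below-right) and a grayscale image as nested lists of brightness values in [0, 1]:

```python
def floyd_steinberg(image):
    # Work on a copy; values may temporarily drift outside [0, 1]
    # while errors accumulate.
    pixels = [row[:] for row in image]
    height, width = len(pixels), len(pixels[0])
    for y in range(height):
        for x in range(width):
            old = pixels[y][x]
            new = 1.0 if old > 0.5 else 0.0
            pixels[y][x] = new
            error = old - new
            # Push the quantization error onto the four neighbors
            # that haven't been visited yet.
            for dx, dy, weight in ((1, 0, 7 / 16), (-1, 1, 3 / 16),
                                   (0, 1, 5 / 16), (1, 1, 1 / 16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    pixels[ny][nx] += error * weight
    return pixels

# On a uniform 25%-gray image, roughly a quarter of the output pixels
# come out white, so the average brightness survives quantization.
result = floyd_steinberg([[0.25] * 16 for _ in range(16)])
print(sum(row.count(1.0) for row in result) / 256)
```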
So here's a super small four-by-four image, zoomed in, with the brightness of each pixel. And error diffusion starts with the first pixel, looks at it in isolation, and quantizes it. So it looks at the brightness: if it's bigger than 0.5, it becomes white; if it's lower than 0.5, it goes to black. But what it now does, and that's new, is it actually looks at what the quantization error is. Meaning: it was 0.6, it got quantized to one because it's white. That means the quantization error is minus 0.4, meaning the pixel got brighter than it originally was. And in this example, we're gonna take that quantization error and give half of the error to the pixel to the right and the other half of the error to the pixel below us. So because this pixel got brighter, we will make those other two pixels darker by 0.2 each, which is 50% of the error each, which I've done now. And now we repeat that process over and over and go through every pixel, line by line, top to bottom. So this would be our next pixel, and we repeat the entire process. So we actually measure the error that the quantization introduces, that the dithering introduces, and then try to correct this error by making neighboring pixels brighter or darker, depending on what kind of error we've introduced. And this is how this picture came to be. So this is, again, the Floyd-Steinberg picture. It does more than just use the pixel to the right and the bottom. It actually uses all four neighboring pixels that it hasn't visited yet, and diffuses the error very carefully, in a way that tries to avoid lines and structures as much as possible. And it succeeds if you have more than two colors. So that repeating pattern we're getting there is because, like, well, in the sky there, it's going to be close to white, but a bit gray. And so as it counts through a series of pixels, they're all snapping to white until that error builds up to the degree where it's actually, oh, we're now actually closer to black.
So that's one black pixel. And then you're back to the, you know, because that would be quite an error. So then it's definitely going to make the next pixel white, and so on and so on. Yeah, so because the sky is basically almost white, but not quite, there's a tiny error that keeps accumulating over multiple lines. And then suddenly, yay, we have a black pixel. And then we go back to a whole bunch of rows of white. So that's exactly what's happening here. And again, if we had multiple colors, like if we had white and mid-gray and black, you would just see a couple more mid-gray pixels rather than a whole lot of white and a couple of black pixels, which would look a lot better. So again, Floyd-Steinberg would work a lot better with more colors. But while I was researching this Floyd-Steinberg algorithm, I actually discovered, well, not discovered, I stumbled over a paper for another error diffusion dithering algorithm that I had never heard of before. And it is called Riemersma dithering. And it has the same approach in spirit: we traverse the image, the individual pixels, we measure the previous quantization errors, and have them affect the current pixel. Now, what is different about Riemersma is that instead of trying to find a very clever way of diffusing the error to make structures disappear, it just chooses to traverse the image differently. So we went line by line, left to right, in the previous approach. Sometimes people go left to right, and the next line they go right to left, but it's still a top-to-bottom approach, basically. Riemersma dithering uses a Hilbert curve, which is basically a very snaky, curvy line that will, in the end, visit every pixel in the image. So here I have tried to visualize this curve. The brighter the pixel, the later in the curve it appears. So you can see it goes like this. It curves in on itself, but it will visit every pixel.
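The curve itself is easy to generate with the well-known index-to-coordinate conversion, and the backwards-looking error correction can be sketched on top of it. To be clear, this is a loose illustration of the idea with made-up exponential weights, not Riemersma's exact algorithm:

```python
from collections import deque

def hilbert_curve(order):
    # Yield the (x, y) points of a Hilbert curve covering a
    # 2**order by 2**order grid, using the classic bit-twiddling
    # index-to-coordinate conversion.
    n = 2 ** order
    for d in range(n * n):
        x = y = 0
        t = d
        s = 1
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:  # rotate/flip the quadrant
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        yield x, y

def riemersma_sketch(image, history=16):
    # Walk the image along the Hilbert curve, correcting each pixel by
    # a decaying sum of the most recent quantization errors.
    pixels = [row[:] for row in image]
    height, width = len(pixels), len(pixels[0])
    order = (max(width, height) - 1).bit_length()  # smallest covering curve
    errors = deque(maxlen=history)
    for x, y in hilbert_curve(order):
        if x >= width or y >= height:
            continue  # the curve may overshoot a non-square image
        corrected = pixels[y][x] + sum(
            e * 0.5 ** (len(errors) - i) for i, e in enumerate(errors))
        new = 1.0 if corrected > 0.5 else 0.0
        pixels[y][x] = new
        errors.append(corrected - new)
    return pixels
```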
And so this is a different way to avoid building up any form of patterns or structure, because this curve is so winding that any kind of structure is kind of diffused away. And if we look at that dithering, I also really like it. I think it looks, again, very organic. The spacing is very even. You have a couple of parameters to play with, like: how many previous errors do you want to consider for the quantization? How much weight do they get? So if you don't like this specific rendering, there are parameters to play with. This is just one that I personally thought was quite nice. But again, it does a very good job at forming a gradient in the sky, going from black to white, from white to black, without having any structure and still looking organic. So where does it pass the diffusion onto? I can see the order of pixels it visits, but where does it pass the error onto? So it actually works the other way around. It kind of just quantizes, and when it wants to quantize pixel n, it looks at the m previous errors that were made. So it doesn't actually diffuse the error forward. It looks backwards: oh, what were the errors I previously made? And it adds them, with different weights, to the current pixel. It's the same approach, just backwards. But because it's a curve, we don't need to have a diffusion matrix, which the other algorithms had, because there you need to define: okay, one bit of the error goes to the bottom, one bit of the error to the right, one to the bottom right. Some error diffusion algorithms even go further than just the neighboring pixels and go two, three rows and columns away from the current pixel. So this one was actually even easier to implement. And I think the visual is really, really nice. So when it's passing on the error to a pixel, can it go above one and below zero? Or does it just clamp it at black and white? That is an interesting implementation detail.
So in my implementations, while I was dithering, I actually allowed the value to go above one and below zero for the duration of the dithering. And only when I generated the output image would I do the clamping. But I do know that, especially in the olden days, this was actually an in-place algorithm. So these dithering algorithms can work on the original image without having to maintain a copy. And in those scenarios, you would clamp throughout. I think the difference would be quite minimal, unless you have, like, huge wide areas, or you have very few colors, so the errors can grow a lot. If you have 16 colors or something, I don't think it would ever become a very noticeable problem. So that's an implementation detail, really. It's kind of up to you. And I think, is this my last slide? That is my last slide. I tried to do it with your thing. Jake, can you spot which dithering algorithm this is? Pop quiz. Well, okay. For one, this is unfair, because I'm looking at this on a tiny screen. Right, fine, fine. I'm going to look closely. Right, okay. Now I have looked. From what I can tell, again, I'm still looking at it through a mirror, I did see what I felt were swirlier patterns to the dithering. So I think it's the... The swirly one. The one that sounded, the name of it sounded a bit like you were involved in it, because it sounded like it had the word Surma in it. The Riemersma one. Well, that's wrong. It's blue noise. But it's a valiant try, Jake. We are off to a good start in 2021. I like it. And done. Did you read it? I barely glanced, and I think I went completely off-script anyway, but that's okay.