Hey everyone, it's time for another edition of the Developer Diary for my camera app. One of the things that got me excited about this project in the first place was the image filters. The filters I've made are fairly standard things you might see in other camera apps or image editing software: changing the brightness, contrast, and that sort of thing.

The filters in this app are written using WebGL. I'm not going to dive into the details of WebGL in this episode, but I recently recorded an episode of Supercharged with Surma where I wrote some code that sets up WebGL so that you can use it to manipulate an image. It's about an hour long, so I won't assume you've watched it, but there's a link in the description if you want to go and do that now.

The very brief summary of the WebGL setup is that you create a canvas element and attach some code to it that runs for each pixel and outputs what color that pixel should be. This code is called a shader, and it's written in a special language that's a bit like C. I'm not going to go into the details of the language; instead I want to talk about what my code is actually doing. The shader can look at data from the original image, only when you're talking about images in WebGL, you tend to refer to it as a texture.

So here we have the code for my shader. Up here we have a thing called a varying, which is called texCoords. These are the coordinates of the pixel within the canvas, which can be used to look into the original image, the texture; hence texCoords. This is a different value for each pixel, which is why it's a varying. You can also pass in values from your JavaScript that are shared by every pixel. These variables are called uniforms, because they are the same for every pixel. This is how we can pass through things like what the contrast or brightness levels actually are.
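To give you a feel for the shape of those declarations, here's a rough sketch of a fragment shader's top section held as a JavaScript string, ready to be handed to `gl.shaderSource`. The names here (`texCoords`, `sampler`, `sourceSize`, `brightness`) are illustrative, not necessarily the app's actual identifiers:

```javascript
// A minimal sketch of a fragment shader's declarations, kept as a
// JavaScript string so it can later be passed to gl.shaderSource().
// The identifiers are illustrative, not the app's real ones.
const fragmentSource = `
  precision mediump float;

  varying vec2 texCoords;      // per-pixel: where to look in the texture

  uniform sampler2D sampler;   // the original image
  uniform vec2 sourceSize;     // width and height of the image in pixels
  uniform float brightness;    // same value for every pixel

  void main() {
    vec4 color = texture2D(sampler, texCoords);
    gl_FragColor = vec4(color.rgb * brightness, color.a);
  }
`;
```

The varying changes per pixel; the uniforms are set once per draw from JavaScript, which is exactly the split described above.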
So here we have uniforms for all of the filter settings: saturation, warmth, and so on. This texture sampler is how we refer to our original image, and this one, sourceSize, gives us the width and height of the original image in pixels.

So how do these filters actually work? What are we doing? Well, one of the first things I do in my code is something called a convolution filter. A convolution filter is a complicated name for something that's actually relatively simple: we calculate the colour of an output pixel based on more than one input pixel. Here, I want to work out what colour a particular pixel should be when I produce my image, and the way I do that is by looking at the colour values for that pixel and some of the pixels around it. This is a 3x3 convolution, because we're going to look at 9 pixels in a 3x3 grid.

Each pixel is given a weight: how important it is to the output. This particular set of weights gives us what's known as a blur. What we do is gather up the colours of all the surrounding pixels and multiply each pixel's colour values by its weight. All of these colour values are numbers from 0 to 1, so when we add them all together we end up with a number between 0 and, in this case, 16, because the weights sum to 16. Once we've added them together with the weights, we divide by 16 to get back to values between 0 and 1.

The way you might think you'd do this is to have an array with your weights in, loop from (-1, -1) to (1, 1), look up all 9 pixels in that loop, multiply each colour by its weight, add them all to a total, and at the end divide by the sum of all the weights.
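That loop version can be sketched in plain JavaScript. This is a CPU illustration over a single-channel image stored as a flat array of 0-to-1 values, using a common 3x3 blur kernel whose weights sum to 16 (the function names and edge-clamping are my own additions for the sketch):

```javascript
// 3x3 convolution blur weights; they sum to 16.
const WEIGHTS = [
  [1, 2, 1],
  [2, 4, 2],
  [1, 2, 1],
];

// Blur one pixel of a single-channel image stored as a flat array
// of values in the 0..1 range.
function blurPixel(image, width, height, x, y) {
  let total = 0;
  let weightSum = 0;
  for (let dy = -1; dy <= 1; dy++) {
    for (let dx = -1; dx <= 1; dx++) {
      // Clamp to the image edge so border pixels still work.
      const sx = Math.min(width - 1, Math.max(0, x + dx));
      const sy = Math.min(height - 1, Math.max(0, y + dy));
      const weight = WEIGHTS[dy + 1][dx + 1];
      total += image[sy * width + sx] * weight;
      weightSum += weight;
    }
  }
  return total / weightSum; // weightSum is 16 here
}
```

A single bright pixel surrounded by black comes out at 4/16 = 0.25, which is the blur doing its job: the brightness has been spread across its neighbours.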
And that's more or less what you do, except that in a WebGL shader I'm going to optimize this slightly by unrolling the loop. That is, instead of writing the loop as a loop, I'm going to write out each individual statement on its own line, so that instead of running a loop 9 times I write out 9 instructions.

You might be wondering why I would do this. Why would I unroll this loop and have each instruction separate rather than looping over them? It's to do with how graphics cards work. A graphics card contains hundreds of GPU cores, which are great at doing mathematical calculations, but one of the things they're not as good at as an ordinary CPU is branching: doing something different each time the code runs. A GPU is used to doing the same calculations over and over and over again; it sets up a pipeline that just reads in data and writes out the output without doing any decision-making.

When we have our loop, many graphics card drivers will actually optimize it away automatically. They'll say, OK, this code runs a constant number of times; we can work out at compile time that it runs 9 times with these values, and produce that unrolled version for you. But not every driver does this. Texture reads in particular, where we're looking at the colour information of the original image, can be very heavily optimized: because we're going left to right and always pulling values out in the same order, the graphics card can pipeline those reads and make them really fast if it knows ahead of time which pixels are going to be looked up. When you introduce the loop, some graphics card drivers treat it as a branch and stop optimizing. So this could be extremely slow on some graphics cards.
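The unrolled form just writes those nine multiply-adds out by hand. Sketched in JavaScript again for illustration (in the shader, each read would be a texture2D call instead of an array index):

```javascript
// Unrolled 3x3 blur for the pixel at (x, y) of a flat single-channel
// image, assuming (x, y) is not on the image border. Each term stands
// in for one texture read in the shader: no loop, no branches.
function blurPixelUnrolled(image, width, x, y) {
  const i = y * width + x;
  const total =
    image[i - width - 1] * 1 + image[i - width] * 2 + image[i - width + 1] * 1 +
    image[i - 1]         * 2 + image[i]         * 4 + image[i + 1]         * 2 +
    image[i + width - 1] * 1 + image[i + width] * 2 + image[i + width + 1] * 1;
  return total / 16; // the weights sum to 16
}
```

Same arithmetic, same result; the only difference is that the compiler and driver can see every read and its offset up front.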
Now, most of the time this will be fast; on most modern devices it will work just fine, but not all of them. I tested on some devices where this was extremely slow, and that's why I'm doing it.

So let's have a look at the actual code in my shader for this. I described a 3x3 matrix in the example, but in my code I've actually used a 5x5 matrix, because sampling more values gives you a smoother blur effect; it's otherwise exactly the same. Here I'm using the shader language's texture2D call to look up some coordinates in my texture sampler, including the offset I'm looking at, to get the colour value for a pixel. Then down here I'm adding up all of those colour values along with their weights and dividing by the sum of those weights.

Now, you may notice that in the app I don't actually have blur as an option; what I have is sharpen. Here I've got the sharpness turned way up, but I can turn it down to nothing, and you can see that the sharpness of the image changes when I move the slider. So how am I using a blur to make an image sharper? The way that works is with something called an unsharp mask. This is a technique that was actually invented for film, back when it really was film; it's a pre-digital technique for making things sharper. You take the original image and the blurred version and find the difference between them. You're basically asking: how does this image change when it gets blurry? And then you can undo that. If a pixel gets more red and less blue when it's blurry, then you can make that pixel less red and more blue to make it sharper, which is an extremely clever way of doing this. You do a negative blur, almost.

The way it's implemented here is that I work out the difference between the texture colour and the blurred version and add it to the original texture colour, and then I scale this effect by the sharpness value from the slider.
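Per channel, that unsharp mask boils down to original + (original − blurred) × sharpness. A sketch of the per-pixel step (the function name and the clamp are mine, for illustration):

```javascript
// Unsharp mask for one channel value in the 0..1 range.
// original:  the pixel's value in the source image
// blurred:   the same pixel's value after the blur
// sharpness: 0 = no change, > 0 = sharpen, < 0 = blur
function unsharp(original, blurred, sharpness) {
  const difference = original - blurred;          // how blurring changed this pixel
  const result = original + difference * sharpness; // undo (or redo) that change
  return Math.min(1, Math.max(0, result));        // clamp back into 0..1
}
```

With sharpness at 0 the difference term vanishes and the pixel is untouched, which matches the slider behaviour described next.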
So if sharpen here, which is just a number, were zero, then this whole term would be zero as well and we'd be adding nothing; it would have no effect. And if sharpen is negative, then we're actually adding on the blurred version: we make the image more blurry. You can see that in the app: take the sharpness slider, move it down into the negative, and the image becomes blurrier than it was.

So that's applying a convolution filter, in this case using a blur to sharpen the image. There are convolution filters that do all sorts of things: emboss an image, find edges, and more. I'm just using one to increase sharpness.

Now, there are some simpler operations too. For brightness, what I do is take each colour in the original and just multiply it by the brightness value. If you have a brightness of one, nothing happens; with a brightness of two, every pixel gets twice as bright, that is, the red, green, and blue values are doubled; and similarly, a brightness of a half means everything gets darker, with all the pixel values halved.

Warmth is another interesting one. I thought this would be much more complicated, but I found a very simple formula that works pretty well: you take your warmth value, add it to the red value of the pixel, and subtract it from the blue value. It's an extremely simple formula, but it gives a pretty good effect. You can see what happens: when you increase the warmth slider, the image gets less blue and more red, giving a warmer tone, and similarly, when you decrease the slider, the image gets more blue and less red.
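Both brightness and warmth are one-liners per pixel. A sketch of the two, with an explicit clamp to keep channels in the 0-to-1 range (the clamp and the function names are my additions; in a shader the output is typically clamped for you):

```javascript
// Keep a channel value inside the 0..1 range.
const clamp01 = (v) => Math.min(1, Math.max(0, v));

// Brightness: 1 = unchanged, 2 = twice as bright, 0.5 = half as bright.
function applyBrightness([r, g, b], brightness) {
  return [
    clamp01(r * brightness),
    clamp01(g * brightness),
    clamp01(b * brightness),
  ];
}

// Warmth: positive values shift the pixel toward red, negative toward blue.
function applyWarmth([r, g, b], warmth) {
  return [clamp01(r + warmth), g, clamp01(b - warmth)];
}
```

So a mid-grey pixel with warmth 0.2 picks up red and loses blue, which is exactly the slider behaviour in the app.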
So this is obviously an extremely simple way of achieving the effect, which I found pretty surprising, and I found that for most of the effects I used in these filters, there were two schools of thought. One is an extremely simple mechanical method of achieving the effect; for warmth, that's just shifting the red and blue values by a fixed amount, and it gives a pretty good result. The other takes detailed models of how the brain perceives images and how your eye works to produce results that are a bit more natural, but more complicated. I opted for the simple way in all of these, but if you search around for how to implement these effects, you'll find some of those more complicated versions coming through as well.

The next filter I want to talk about is contrast. This was another interesting one; I didn't know how it worked before I started this project. The idea is to make the pixels more distinct, to make every part of the image a little more different from the other parts. The way we can do this is to make every colour get a little bit further away from grey. Grey is where the red, green, and blue are all at a half, halfway between white and black. To increase the contrast, if the red value of a pixel is more than a half, we make it bigger, and if it's less than a half, we make it smaller. As pixels get further away from grey, the contrast increases.

Now, this doesn't work perfectly. If your image is very dark, say, then most of the pixels will be darker than grey, and this will actually decrease the contrast: everything gets closer to black when it's all very dark. The same will happen if everything is very light.
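That push-away-from-grey step can be sketched for one channel like this, with a configurable grey point to anticipate the slider discussed next (the names are illustrative, not the app's):

```javascript
// Contrast for one channel value in the 0..1 range.
// contrast: 1 = unchanged, > 1 pushes values away from the grey point,
// < 1 pulls them toward it.
function applyContrast(value, contrast, greyPoint = 0.5) {
  const pushed = greyPoint + (value - greyPoint) * contrast;
  return Math.min(1, Math.max(0, pushed)); // clamp back into 0..1
}
```

A value exactly at the grey point never moves, values above it get brighter, and values below it get darker, which is the behaviour described above; raising the grey point means more pixels fall below it and get darkened.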
So the other thing I've got here is the ability to change the grey point, that is, what value we're using as grey. We can set the grey point anywhere from almost black to almost white. When the grey point is almost white, many of the lighter pixels will still get darker; and similarly, when we move the grey point down towards black, some of the darker pixels will get lighter. So you can choose the grey point to suit your image. Generally speaking, you want to set the grey point to the average of all the pixels in your image, so that on average more pixels are moving away from each other. I haven't implemented that automatically, though it is a thing that some image applications do; instead I've just given a slider, and you can choose whatever looks nice.

So that's how I implemented some of these filters. Things like the vignette work similarly: it makes the image darker the further away it is from the centre, and so on. There are many other kinds of filters I could have implemented that you might find in other applications, and I could have done these in more complicated ways, but I hope this is a reasonable example of how these things work.

I hope you enjoyed the show and that you'll join me next time, when we'll talk about something else. Thanks for watching my video. Remember, you can subscribe to the channel for more from Chrome Developers, or follow the links on your screen to watch other episodes of Developer Diary. Cheers!