Looks good, so I'll just start. Hello. I have VGA, which is apparently the technology you should be using for this kind of occasion. I'm going to talk a little bit about wavelets. I'll try not to mention a single formula on any of these slides; it's really just to give you an intuition. It's a couple of pictures, and I'm going to show you a couple of applications, especially in darktable, where we make some use of this. So just to give you a quick motivation: what wavelets are about is that they give you a nice way to edit the frequency content of your images. I'm sure you can do audio compression and whatever with them, but I'm interested in the image processing part. So let's take, for example, this picture, and I can do this to it: just select one frequency band and enhance it. Let me flip those back and forth a little bit to make it more visible. You can see there's a certain grain size which is emphasized in the second picture. It doesn't just sharpen the finest detail, and it's not like the clouds are popping out; it's this certain size of grain that I can select from the wavelet bands and do whatever to. And that's actually useful for a couple of things; I'm just going to show you a small subset. You can make good use of it for local contrast, which is what we've seen in that example picture. You can use it for HDR compression, because that does kind of the opposite of local contrast, right? You compress the dynamic range down to something flat and boring, so you push the details back up a little bit using a technique like that. I'll also show you denoising applications. Of course, you can just select the frequency bands of your noise and then get rid of it, hopefully mostly.
It's also useful for more esoteric stuff like monochrome conversion, where you take the color information from one of those frequency bands and do the monochrome conversion based on that. Not the finest level, because that would be noisy, but an intermediate level, and you use that for your color filter. So, to jump in, I stole a couple of pictures from Raanan Fattal's great SIGGRAPH 2009 paper, where most of this wavelet code started, because it's really a great paper. If you have time, read it. Most of you will have seen pictures like this before; it's a classic image processing example picture. What wavelets basically do, in the simplest incarnation, is decimated wavelets. For this kind of multi-resolution analysis, you take your picture and downscale it by, say, a factor of two (that would be Haar wavelets, for example), then by a factor of two again, and again, until at some point you're at the coarsest level. That's your coarse representation of the picture. And how do you come back to the full picture? You just add all the details back on. So what we're looking at here: in the upper left corner you see the downscaled coarse picture; it just stops somewhere, though you could keep going until it's only one pixel in size. In the rest of the quadrants, you see just the differences encoded. Imagine you take this little picture and blow it up to double size again; then you have a coarse estimate, a prediction of what your large picture would look like, and the differences to the actual picture are what you see encoded in the rest here. You can see how the energy of that is near the edges; that's where it's most visible. You can see some white points here. Those are the differences in x, in y, and in the combination. So now you have your wavelet transform. And now, how do you compute those differences? There are a couple of possibilities.
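The decimated scheme just described (downscale, keep the differences, add them back on to reconstruct) can be sketched in a few lines. This is a 1D Haar toy version of my own for intuition, not code from darktable or from Fattal's paper:

```python
def haar_decompose(signal, levels):
    """Repeatedly halve the signal by averaging pairs; keep the differences."""
    coarse = list(signal)
    details = []
    for _ in range(levels):
        pairs = list(zip(coarse[0::2], coarse[1::2]))
        details.append([a - b for a, b in pairs])      # detail coefficients
        coarse = [(a + b) / 2 for a, b in pairs]       # coarse representation
    return coarse, details

def haar_reconstruct(coarse, details):
    """Come back to the full signal by adding the details back on."""
    signal = list(coarse)
    for det in reversed(details):
        out = []
        for avg, d in zip(signal, det):
            out += [avg + d / 2, avg - d / 2]
        signal = out
    return signal

sig = [2.0, 4.0, 6.0, 6.0, 5.0, 3.0, 2.0, 2.0]
coarse, details = haar_decompose(sig, 3)
assert haar_reconstruct(coarse, details) == sig        # lossless round trip
```

After three levels the coarse part is a single average, and the three detail lists hold exactly what is needed to get the original back.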
One possibility is on top; that's just the simplest tensor-product version. On the top left (again taken from the same paper), you just leave away every other column in your picture, and you take the remaining ones, for example, as your coarse representation; the others are left away. In the prediction step, you take the left and right neighbors, you take an interpolation of those, and that's your predicted value. Then you take the difference to that, and that's the difference encoding. Now you've done it for the x direction; to do it in y, you just take the resulting picture and do very much the same thing there. This scheme is referred to as the CDF wavelets; that's the Cohen-Daubechies-Feauveau wavelet. The bottom version here is slightly better because it's more isotropic: you don't divide your picture up into even rows and odd rows and leave away the odd ones, you divide it up into a checkerboard pattern, right? So you use the black pixels here to predict the red ones, and then you swap roles and predict the blacks from the reds. I'll show you what that looks like later on. So let's first try the example from the first slide again. Great, now we have a wavelet transform; let's boost local contrast. The way you do that is you let the coarse picture be the coarse picture, and you just multiply all the details by, say, a factor of two. That will enhance all your details. So that's your original picture, now we enhance it, and that's what it looks like. It isn't quite what we expected: you have all those ugly halos around the edges. And the reason why is: imagine this is your signal, it starts on the left side and gets brighter to the right. Now you do the downsampling, and then the prediction step back up. What that gives you is a blurred version, right? You lose a bit of detail, so it's a little bit flatter. Then you take the difference to that and enhance it by a factor of two.
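The predict step just described, interpolate the kept neighbors and store the difference, can be sketched in 1D. The helper names are my own, and this is the plain data-independent version, exactly the kind of prediction that produces the halos discussed next:

```python
def predict_split(signal):
    even = signal[0::2]                    # the samples we keep (coarse)
    odd = signal[1::2]                     # the samples we leave away
    details = []
    for i, x in enumerate(odd):
        left = even[i]
        right = even[i + 1] if i + 1 < len(even) else even[i]
        details.append(x - (left + right) / 2)  # difference to the prediction
    return even, details

def merge(even, details):
    """Invert predict_split: re-predict and add the stored differences."""
    out = []
    for i, d in enumerate(details):
        left = even[i]
        right = even[i + 1] if i + 1 < len(even) else even[i]
        out += [left, (left + right) / 2 + d]
    if len(even) > len(details):
        out.append(even[-1])
    return out

sig = [1.0, 1.0, 5.0, 5.0, 1.0, 1.0]
even, details = predict_split(sig)
assert merge(even, details) == sig
# local contrast boost: keep the coarse part, double the details
boosted = merge(even, [2 * d for d in details])
```

The boost is literally "coarse stays coarse, details times two", which is why anything the predictor gets wrong at an edge is amplified into a halo.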
What you get is ringing at the edges. This red line is what it's going to look like, and it's not what you want. The solution to that is called edge-avoiding wavelets, and that's a data-dependent way of computing wavelets. I'm not going to go into much detail about it; just read the paper, as I said, it's awesome. What it looks like: that's the CDF thing I've shown you before, and if you construct your wavelet basis specifically for this input picture, it's going to look like that. That's the edge-avoiding wavelet equivalent. Let me flick those back and forth. What you can see here is that it still contains some detail, but not the edge information anymore. This stuff here around the silhouettes of the guys is detected as edges and is actually pushed into the coarse representation, so you don't find it anymore in the detail coefficients. Those just contain all the fine structured grain like grass and texture, but not the edges, which is exactly what you want, because now you can crank up the grain detail and you will not get halos around the edges. And that's what those basis functions look like if you visualize them: imagine you sample the picture down to just a four-by-four-pixel image and then upsample it again. These blobs are what you get as your coarse representation. You can see those folds in here; that's what the CDF interpolation does, this linear tensor-product-style interpolation that gives you those pyramids. So that's the CDF(2,2). You can see how it nicely stops here at the edges; the basis function knows exactly where to go and then just stops right there. No, sorry, I went one too far. And that's the red-black version with the slightly more isotropic interpolation that doesn't just first go through x and then y. It gives you rounder, more pleasing-looking blotches and has the same edge-stopping behavior in here. So that is great, and you can achieve very awesome effects with it.
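The edge-avoiding idea can be sketched in 1D like this: instead of a plain average, each neighbor is weighted by how similar it is to the pixel being predicted, so the prediction doesn't mix values across an edge. The weight function and `sigma` here are illustrative choices of mine, not Fattal's exact construction:

```python
import math

def edge_avoiding_detail(signal, sigma=0.1):
    """Data-dependent predict step: similar neighbors dominate the prediction."""
    even, odd = signal[0::2], signal[1::2]
    details = []
    for i, x in enumerate(odd):
        neighbors = [even[i]]
        if i + 1 < len(even):
            neighbors.append(even[i + 1])
        # weights fall off with the value difference, so a neighbor on the
        # other side of an edge contributes almost nothing
        w = [math.exp(-((x - n) / sigma) ** 2) for n in neighbors]
        pred = sum(wi * n for wi, n in zip(w, neighbors)) / sum(w)
        details.append(x - pred)
    return details

# across a hard step edge, the details stay near zero: the edge is pushed
# into the coarse representation instead of the detail coefficients
step = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
details = edge_avoiding_detail(step)
```

With the plain predictor from before, the sample right at the step would get a detail of 0.5, which is what rings when you boost it; here it is essentially zero.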
Let me just show you a few of those HDR pictures, because, yeah, you know what that looks like. It over-enhances detail quite a lot, so you can see a lot of those great details here. But the thing to take home is not that this is a particularly great image; it's that there are no artifacts around the edges. You don't see ringing, even though the effect is very much overdone in these pictures. So that works fairly well. But what is the problem with it? I brought two more pictures from Fattal's paper. This is another HDR picture with the CDF approach, and that's with the red-black one; it might be the other way around. But let me flick those back and forth. You can actually see the big structures jumping, right? You can see it up here in the ceiling, and pretty much everywhere there's this grain size to it where you've seen the big basis functions. You can see them swap places, and it gives it an unpleasant texture; if you imagine that as an animation over several frames, it wouldn't really work. And the problem is, I brought this nice Go board for you. If you take that into the wavelet basis and just throw away all the detail coefficients, this is what it's going to look like with a standard wavelet basis, CDF, as you can tell by the crosshatch pattern. And if you take the edge-avoiding one, this is what it looks like. Mostly it very reliably detects the edges here, which is great. But you have to be lucky enough to have one of those big spots land right within a stone of the Go board. If you missed it, there will be no edge for you to detect. The reason is just that those basis functions are sparsely distributed: there's a big hole between the center of one basis function and the next, and in between there's nothing. So if your feature falls between those two and is smaller than your basis, you're just going to miss it; you're out of luck. So how do you go about that? You use what's called undecimated wavelets.
So this up top here is your input image. To compute your coarse representation, which would be this image here, you interpolate all the neighboring pixels, and you do that for the next image as well. The trick is that you actually do that not only for the black spots here, but for their direct neighbors as well. So you're not making your image smaller, you're just blurring it; you have the same number of coarse pixels as you had input pixels. And this is what it looks like: to the left would be the original with a couple of details (of course, always the same picture), and then it just gets blurrier with every step, but the resolution stays. That gives you shift invariance, and now you have information for every pixel. So that's great; that solves the problem with the undersampled features. Then there's a trick called à trous, "with holes": you leave holes in between here, so you're not using these gray points. That just means you can compute it quite fast, because you're not really considering all the pixels within those increasing blur radii; you leave a few away. And you can do that because it's genuinely getting blurrier and blurrier. So that's cool. And you can combine that with the edge-avoiding bit. I brought a video of what that looks like. This is a real-time screen capture; the video just runs slowly because it's a very slow laptop here. It's a video of my hand, piped through local contrast enhancement, or, as now, leaving away all the detail. You can see what you can achieve in real time; that was one megapixel, I believe. You can add 10 or 20 years to your age or remove them, as you like. I thought it was pretty fun. And I promised to bring a denoising application at the beginning, so here it is. You're looking at R2-D2, a real photograph, with some artificially added noise, around 0.5% of the brightest pixel here, and at the top left is the inset.
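The undecimated à trous scheme can be sketched in 1D as follows; this is my own toy version for intuition. Each level blurs with the same small kernel, but with the taps spread 2^level samples apart (the "holes"), details are the differences between successive levels, and summing everything back recovers the input:

```python
def atrous_blur(signal, spacing):
    """Blur with a 3-tap kernel whose taps are `spacing` samples apart."""
    n = len(signal)
    out = []
    for i in range(n):
        left = signal[max(i - spacing, 0)]          # clamp at the borders
        right = signal[min(i + spacing, n - 1)]
        out.append(0.25 * left + 0.5 * signal[i] + 0.25 * right)
    return out

def atrous_decompose(signal, levels):
    current = list(signal)
    details = []
    for level in range(levels):
        blurred = atrous_blur(current, 2 ** level)  # holes double each level
        details.append([c - b for c, b in zip(current, blurred)])
        current = blurred
    return current, details      # coarse residual + one detail band per scale

sig = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
coarse, details = atrous_decompose(sig, 3)
# every band has full resolution, and summing them restores the signal
recon = list(coarse)
for det in details:
    recon = [r + d for r, d in zip(recon, det)]
```

Because every level keeps all the pixels, the transform is shift-invariant, which is exactly what fixes the jumping structures from the previous slide.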
So this is the original. Now, if you just use regular CDF wavelets for denoising on it, it becomes very blurry, because if you know how much noise you have and you need to remove all those frequencies, you also remove some of the edge information. But if you're using edge-avoiding wavelets, they will actually detect what is an edge and not put it into the finest-frequency wavelet bands. And that still works if you have twice the noise variance; of course it gets even blurrier, but there are still some edges in here. The challenge in doing that is, as always with denoising: what is an edge and what is noise? How do you determine that? And now you need to put that decision into the wavelet basis already. That's exactly what we do in darktable with our profiled denoise module if you run it in wavelet mode; that's exactly the algorithm behind it. The way it works is we take a profile of your camera sensor, so we know up front how much variance to expect. We know what the noise variance is, we can expect the wavelets to detect those edges, and so the separation works very nicely. There's also the equalizer module. I decided not to explain too much about it here, because there's actually a very good video by Robert Hutton; it's linked from our web page, you should be able to find it. He goes into a lot of detail, and with some knowledge about what the wavelets look like in the background, you should be able to do quite cool things with it. Just quickly: this would be the local contrast curve, so you crank up detail a little bit, and down here, that one would be what you call the wavelet shrinkage threshold. That's taking away noise: if you push that noise floor up, it denoises more. And that's the coarse coefficient and that's the fine one, so you can just choose whatever you like best. There are a couple of limitations.
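The shrinkage step mentioned here can be sketched as soft thresholding of the detail coefficients: anything below the expected noise level is set to zero, anything well above it (real structure) passes mostly unchanged. In darktable the threshold would come from the sensor noise profile; here it is just a parameter, and the function is my own illustration:

```python
def soft_threshold(details, threshold):
    """Shrink detail coefficients toward zero by the noise threshold."""
    out = []
    for d in details:
        if abs(d) <= threshold:
            out.append(0.0)                   # below the noise floor: discard
        else:
            sign = 1.0 if d > 0 else -1.0
            out.append(d - sign * threshold)  # keep only the excess
    return out

# small coefficients (noise) vanish, large ones (edges/texture) survive
shrunk = soft_threshold([0.1, -0.2, 1.0, -0.5], 0.25)
# → [0.0, 0.0, 0.75, -0.25]
```

Pushing the threshold up denoises more, which is exactly the "noise floor" control on the equalizer curve.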
I need to point out a couple of limitations, because you're going to see these artifacts: this is the original image, and depending on what you do with it, you'll get those nasty gradient reversals here, or you will get the halos that I've shown in the beginning, or, for this high-contrast edge, you might even get aliasing. That's inherent to this à trous technique that leaves away points; that's exactly the mistake we make here. We're not going to fix this, because it would be horribly slow if we did. The one thing you can do about it is fiddle with that curve to reduce the artifacts as you like for your specific use case. Sometimes a little bit of halo actually makes it look better, but we leave that up to you to decide. Yes, that's all from my side; if we have time, I'm happy to answer some questions. Maybe if Lasse in the meantime can... Did we find a solution at all? He was running out for... He hasn't returned yet. Okay, so that means we don't have a solution yet. We can still keep swapping until we have one. So I'll take one question and then we move on to the next speaker. Okay, then we do another round of applause.