Alright, welcome to part 2C of my series on photographing the Pinwheel Galaxy, also called M101. This is Nico Carver from NebulaFotos.com. And if you haven't watched part 1 and you're interested in how I did the capture of this with my DSLR, please check out the link in the description. You can just watch the first hour and change of that video before switching over to this one if you're interested in how to process using a paid program called PixInsight, which we'll be diving into right now. Before I open the program, I just want to show you that I already have my files from the DSLR organized into four folders: lights, flats, darks, and bias. A question that I often get is, how do I do this organization? Well, what I do is I just connect the SD card to the computer. You can use an SD card reader, or connect the camera with its USB cable. Then I transfer all of the files that I shot over to the computer, and I go about organizing them just by moving the files around into these four folders that I created. The way I know which file is which is that you can just click on a file and, on Mac or PC or Linux, I assume, you get a preview of what the file looks like, as long as you have some kind of program that can read the raw file on your computer. You get this nice little preview on a Mac, and if you hit the space bar, it makes it bigger. It might take a second here. But I can clearly see that this is a flat frame, while something like this is what we call our light frame, meaning the actual picture of the night sky. The darks and bias are harder to differentiate. So it's helpful to know how many you took of each, and in what order, and then you can differentiate them that way. But if you're still having trouble, on a Windows PC you can right-click and choose Properties, and on a Mac you can right-click and choose Get Info. This gives you some information about the file, including what we often call EXIF data, or metadata.
And that's basically information about the file, including the exposure time. So I can see here that this is a bias frame, because the exposure time is 1/8000th of a second. While this one, if I do the same thing, right-click, Get Info: this one is a dark frame, because the exposure time is 30 seconds, matching the light frames. OK, so you can organize your files based on how they look and based on the metadata. And the goal is to get them already organized into these four folders, which is going to help us in processing them. OK, let's go ahead and open up PixInsight. PixInsight is cross-platform, so you can get it on Linux, Windows, or Mac. I'm going to be using the Mac version here. OK, it opens up like this. We can, of course, just do File > Open to open up individual files. So if I were to do that and navigate to my desktop and to the lights and just click on one file here, it would open it up. If I click on this little radioactive icon up here, that's the shortcut for the STF auto-stretch, and I can see the file like this. And so this is just a single light. You can just barely make out M101 right there. You can see there are plenty of hot pixels, these blue and red and green really bright pixels, because I did take this on a spring night, so the sensor did heat up. The calibration frames are going to get rid of this vignetting to some degree, they're going to get rid of all these hot pixels, and we'll have a much better picture. And then we'll stack all these lights together to make our final result. Anyways, just wanted to show you that that's one option: you can do File > Open to open up a single picture. You can also use Process > All Processes > Blink to open up a stack of pictures. So this can be helpful. I'm just going to do a few here, not a hundred, because I don't want to wait too long. But it will load up all of the pictures.
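As an aside, that sort-by-exposure-time idea can be automated. Here's a minimal Python sketch, assuming you've already pulled each file's exposure time out of the EXIF data with whatever tool you like; the thresholds are assumptions based on this particular session (1/8000 s bias, 30 s lights and darks):

```python
# Sketch of sorting calibration frames by exposure time alone.
# Thresholds are illustrative, not universal: bias frames use the
# camera's shortest exposure, while darks match the lights' exposure.

def classify_frame(exposure_s, light_exposure_s=30.0):
    """Guess a frame type from its EXIF exposure time alone."""
    if exposure_s < 0.001:          # e.g. 1/8000 s -> bias
        return "bias"
    if abs(exposure_s - light_exposure_s) < 0.5:
        # Darks and lights share the same exposure, so they can't be
        # told apart from metadata alone -- use the preview for that.
        return "dark-or-light"
    return "flat"                   # flats usually fall in between

print(classify_frame(1 / 8000))    # bias
print(classify_frame(30))          # dark-or-light
```

Note that darks and lights deliberately match in exposure, so metadata alone can't separate them; that's where the visual preview comes in.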
Blink will load up all the pictures that you selected, apply an auto-stretch, and then you can step through them, either just by clicking over here or by pressing the play button, and it will step through them at whatever interval you want. So I could step through them at one second. It's just a quick way to examine your pictures. You can also zoom in with Command-plus on Mac or Control-plus on Windows, and examine the pictures going through the list one by one. Blinking them is what we call it. So this can be really useful for image evaluation, for picking out frames that don't look right, that have tracking errors, or things like that. There are other systems to do this. We're going to be talking about Weighted Batch Preprocessing, which involves some parameters to do this kind of image selection or rejection, but based on a computer algorithm rather than you just looking at every frame. The last thing I want to point out here is that you can see PixInsight did this rotation of the image so that it's vertical, and it's applying a demosaic algorithm to it so that we're seeing it in color. That's because if we go over here to where it says Format Explorer and click on the RAW format, these are all the settings associated with what it will do when it opens up a raw camera file. If I go down here to Edit Preferences, the setting that I have right now is demosaic RGB, meaning that it is not going to treat it as a raw CFA image; it's going to apply any image-flipping instructions in the raw metadata, and so on. But for calibration, you want to be in Pure Raw, which turns on a lot of these output options. It's going to use the raw CFA image, it's not going to do any image flipping, it's going to use no black point correction, no highlights clipping, and it's not going to interpolate the image, meaning debayer it with any kind of pattern. It's not going to apply any white balance function either. This is usually what we want when we're calibrating.
If you're using one of the batch preprocessing scripts, you don't actually have to worry about this, and we're going to be using one of those today, but I just wanted to mention it in case you are doing any kind of manual calibration of your files. It's best to turn on all these raw format preferences by clicking this Pure Raw button before you get going. I'm going to hit OK, and I'll just show you the difference. If we open up one of our lights now, the first thing that you'll notice is that it just opens up horizontally, so no image flip has been applied. The other thing is, if I just do the STF auto-stretch with this little radioactive button up here, meaning it's going to temporarily stretch the levels so we can see it, you can see that it looks black and white, and if we zoom in, we can actually see the Bayer pattern of the red, green, and blue pixels. That's what this is. If we wanted to turn this into a color image, there is a process in PixInsight for that, of course. We can go to All Processes > Debayer, and you can usually just leave the Bayer mosaic pattern on Auto, but I happen to know that for the Canon 5D Mark III, it's RGGB. That's probably the most common Bayer pattern. You can even choose your demosaic method; I usually leave it on VNG. You can use this as a single process, meaning we could just grab the little new instance icon, the triangle, and drop it onto this image, just like that, to change one image, or you can see that you can load in a bunch of target images. In a completely manual way of using PixInsight, we could go through each calibration step, create master files, debayer each light, and register each light with these different processes. We're going to do an easier method with the batch preprocessing script, but I just wanted to show you this in case you were wondering how it works. You can go from Pure Raw to a demosaiced, or debayered, image just like this.
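To make the Bayer-pattern idea concrete, here's a minimal Python sketch of the crudest possible demosaic, the "superpixel" method, where each 2×2 RGGB cell collapses into one RGB pixel. The actual Debayer process with VNG interpolates to full resolution instead of downsampling, but the mosaic layout it reads is the same:

```python
import numpy as np

def debayer_superpixel_rggb(cfa):
    """Collapse each 2x2 RGGB cell of a mosaic into one RGB pixel."""
    r = cfa[0::2, 0::2]                             # top-left of each cell
    g = (cfa[0::2, 1::2] + cfa[1::2, 0::2]) / 2.0   # average the two greens
    b = cfa[1::2, 1::2]                             # bottom-right of each cell
    return np.stack([r, g, b], axis=-1)

mosaic = np.array([[10.0, 20.0],
                   [30.0, 40.0]])                   # one RGGB cell
print(debayer_superpixel_rggb(mosaic))              # one pixel: R=10, G=25, B=40
```

The fact that every cell holds two green samples for one red and one blue is also why the green channel ends up the cleanest, which will matter later when we pick a linear-fit reference.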
A lot of times what you'll find when you debayer, especially a single image like this, is that the color channels are not of the same strength, so we're seeing this very green image. That's what you get when we just apply the STF auto-stretch. If I go into my Process Explorer, go down to Favorites, and open up the ScreenTransferFunction process, we can actually unlink the channels so that it's not going to stretch them all equally, apply an STF auto-stretch, and then we get this response, where it's applied a different non-linear stretch to each channel and it looks a little bit more normal. Okay, enough of this technical stuff. I just wanted to show you some of the behind-the-scenes of PixInsight. Now what we're going to do is use a script to do a lot of this stuff more automatically. So, go up to Script, go down to Batch Processing, and if you are on an older version of PixInsight, you might only see Batch Preprocessing, but I'm going to assume people are more up-to-date, because it's very easy to get the latest and greatest PixInsight: all updates are included. So we're going to use Weighted Batch Preprocessing, which is available in newer versions of PixInsight. It's a very similar script to Batch Preprocessing, but the weighted option allows us to do a little bit more with subframe weighting: basically, to give more weight to frames that have better focus, or a better signal-to-noise ratio, or something like that. Okay, I'm going to click on the Add Bias button and grab all of these raw CR2 files; there we go. Do the same thing with darks. Just ignore these FITS files down here; those were from when I was working with a different program, and I think that's what created them. So here are all of our darks; you can see it recognized they were all 30 seconds in length. Let's add our flats. And finally our lights.
Okay, and it recognized that all my lights were 30 seconds. It says no filter because these are all just DSLR images, so there aren't different filters involved. Okay, so we have lights, flats, darks, bias. Great. The next thing is here under Debayer: you can see you can put it on Auto if you're not sure, or, like I said, the pattern is RGGB for most Canon and Nikon and Sony cameras. The Debayer method I'm going to leave on VNG. And right here under subframe weighting, this is where the weighted part comes in. It says generate subframe weights, and I'm going to go into the weighting parameters. There are different presets you can pick: there's a default if you don't choose a preset, then there's the Nebula preset, the Galaxy preset, and the Cluster preset. Basically, the main difference is that with Nebula, it assumes the most important factor is the signal-to-noise ratio and the least important factor is your focus, or your FWHM, meaning how tight the stars are; it stands for full width at half maximum. With Galaxy, it weights the full width at half maximum a little bit higher than the roundness of the stars, and the signal-to-noise ratio is still just a little bit ahead of the focus. And with Cluster, it assumes the most important thing is the roundness and the focus of the stars, and that those are more important than your signal-to-noise ratio. So this is how it's going to weight your different subframes. I'm going to choose the Galaxy preset here, and I'll just hit this little X to go back. I am going to uncheck this "use best frame as reference for registration." Now, in a lot of cases you would want to use this, because it makes the most sense to register all the other frames against your best frame when you're doing star registration. But in my case, since I know there was substantial drift, because I wasn't using the best mount, I want to make sure that I'm getting as much of the field as possible.
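Those presets boil down to a weighted score per frame. This little Python sketch is purely illustrative; the coefficients are my own assumptions for a galaxy-style weighting (SNR slightly ahead of star tightness, roundness last), not WBPP's actual internals:

```python
# Illustrative subframe weighting: higher SNR, tighter stars (low FWHM),
# and rounder stars (low eccentricity) all raise the score. The preset
# just shifts the coefficients k_snr / k_fwhm / k_round around.

def subframe_weight(snr, fwhm, eccentricity,
                    k_snr=0.45, k_fwhm=0.35, k_round=0.20):
    return (k_snr * snr
            + k_fwhm * (1.0 / fwhm)
            + k_round * (1.0 - eccentricity))

sharp  = subframe_weight(snr=0.8, fwhm=2.0, eccentricity=0.1)
blurry = subframe_weight(snr=0.8, fwhm=4.0, eccentricity=0.1)
print(sharp > blurry)  # True: tighter stars win, all else being equal
```

A "cluster" style preset would simply raise `k_fwhm` and `k_round` at the expense of `k_snr`, and so on for the other presets.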
So for the registration reference, I'm going to pick this first image that I took, because I don't want it to pick a later image that maybe had a much better signal-to-noise ratio just because, after registration, most of the frame would be black, which of course gets a very high signal-to-noise ratio but would be completely offset from all the other frames. So in my case, I'm going to uncheck that and choose my own registration reference image, which is going to be the first image that I took. Okay, we can go into the integration parameters here, and we want the combination to be Average. I'm going to choose Winsorized Sigma Clipping for the rejection algorithm. You'll notice that there's also the option for Auto, so if you just want to let it pick your rejection algorithm, you can do that; it decides based on the number of images in the group. But I've found that for this kind of thing I want Winsorized Sigma Clipping, because I've tried the different options, and for my purposes, for DSLR images, I think that one works the best. Okay, you can try the large-scale pixel rejection options. I have not found those work that well for me, so I'm just going to leave them off. But I do want to apply a low sigma clipping and a high sigma clipping; I'm actually going to bring the low sigma clipping down to three deviations below the norm. Okay, so we have three and three. That's good. Okay, I'm going to close out of image integration. I don't want to generate drizzle data. I am undersampled in this case, so it would actually maybe be worth doing, but I did not dither these images, so I don't think drizzling is going to work that well. So I'm going to turn that off. I do want to apply image integration, meaning it's going to make a master light at the end here.
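For the curious, here's roughly what sigma-clipped rejection does to one pixel's stack of values, sketched in plain Python with a robust median/MAD sigma estimate and a winsorizing (clamp-to-boundary) step; the real ImageIntegration implementation differs in the details, but the effect is the same, namely that a single outlier can't drag the average around:

```python
import statistics

# Simplified Winsorized sigma clipping for one pixel's stack of values.
# Samples beyond k sigma of the median get clamped to the boundary
# (winsorized) before averaging, so a hot pixel or satellite trail
# in one frame barely moves the result.

def winsorized_mean(samples, k=3.0, iterations=3):
    vals = list(samples)
    for _ in range(iterations):
        med = statistics.median(vals)
        mad = statistics.median(abs(v - med) for v in vals)
        sigma = 1.4826 * mad    # MAD -> sigma for a normal distribution
        if sigma == 0:
            break
        lo, hi = med - k * sigma, med + k * sigma
        vals = [min(max(v, lo), hi) for v in vals]
    return statistics.mean(vals)

stack = [100, 101, 99, 100, 102, 100, 5000]   # one cosmic-ray hit
print(round(winsorized_mean(stack)))          # 101: outlier tamed
```

A plain average of that stack would come out around 800; the clipped version stays where the real signal is.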
If you didn't want it to do the integration, you could turn that off and just click Calibrate Only, and then do the registration and integration yourself, if you wanted to pick those parameters a little more closely with the full processes. But I'm just going to let it try to do everything here. For the output directory, I'm going to make a new folder in my M101 folder, and I'm going to call it WBPP, for Weighted Batch Preprocessing. Okay, so I think I've filled everything out. You want to make sure that each of these four tabs has files in them. And one thing I should point out here is that you can actually change different things, like the rejection algorithm, for each type of file. I usually only care about it for the lights and just let it do its thing for flats, darks, and bias. The one thing I want to turn off here is "Calibrate with flat darks," because we're not using flat darks; we're using bias frames. So I'm going to turn that off. That's something new in Weighted Batch Preprocessing that you want to check under the flats tab. Okay, let's go ahead and click Run. All right, that's odd, actually. So I just turned it back on, and when I click Run, it says only the master bias will be used to calibrate flat frames. That's what I want it to do: I don't want it to use darks on the flats. I guess the way to get that is actually to have "Calibrate with flat darks" turned on. I don't really get why; maybe it's a quirk of this new script. Anyways, that was the default, so I guess just leave it on the default, and that warning is fine. I only want the master bias to be used to calibrate the flat frames; I don't want my 30-second darks to be scaled. That wouldn't make any sense. So let's go ahead and click Continue. It goes through here. This is going to take quite a while because we're dealing with hundreds and hundreds of files, so I'm just going to let it go, and we'll catch back up when it's all finished.
Okay, it's all done. I'm going to go ahead and open the resulting light, which will be under the Weighted Batch Preprocessing folder, which we created, then master, and then the master light. So this is the end result of all that calibration, registration, and stacking, and it looks like this. You can see that it has a sort of overwhelming blue background when we just apply an STF auto-stretch, unless we do what I just did and turn off the linked RGB channels, in which case we get the truer response here. So with the channels linked, it looks like that; with them unlinked, it looks like that. So learn to use the screen transfer function if you haven't, because you can get much more interesting and accurate results by unlinking the channels to understand what the image really looks like. What I want to do next with this image is extract a bunch of different things from it. I'm going to extract the luminance component. There is a button for this right up here; it's one, two, three, four, five, six buttons over from the left. If you hover over it, it says "extract CIE L* component," but that means extract the luminance. And I'm just going to change the name of that view to Lum. I'll go ahead and minimize it and put it up here for safekeeping. I'm also going to extract the RGB channels. So that's the next button over; if you hover over that button, it says "split RGB channels." OK, very good. So now I have a red, green, and blue channel here. I'm only going to use these for a little bit, so I'm not going to bother renaming them. But what I want to do is run an image analysis noise evaluation on these. So I'm going to first run it here on the green. OK, the noise evaluation says 6.5. The reason I'm doing this is that I'm going to want to linear fit to the channel with the least amount of noise, and usually that's the green channel. OK, yep, the red channel had a little bit more. So green channel 6.5, red channel 7.03.
Let's try it again. Again, this is under Script > Image Analysis > NoiseEvaluation, now running it on the blue channel: 6.89. So yep, the green channel has the least amount of noise, which makes sense, because there are two green pixels for every one red or one blue pixel on an RGGB sensor. So let's go ahead and linear fit the R and the B, the red and the blue, to the G. What this is going to do is balance those channels, just like we were seeing by unlinking them in the STF auto-stretch; same idea. So this is under Process > All Processes > LinearFit. For the reference image, I'm going to use this one right here, the one that ends in G for green. And then I'm just going to grab this little triangle, the new instance icon, and drag it onto the blue channel here. This will basically change all of the pixel values to better match the response that we have in the green channel. So instead of seeing that overwhelming blue when the channels are linked, it's going to take them down. You can see the linear fit functions it applied there; for most people that math is not going to mean anything. Let me just minimize this to get rid of it. OK, and I'll also apply the green-referenced linear fit to the red. OK, very good. You can close out of that. Now I'm going to go to Process > ChannelManagement > ChannelCombination, and I'm going to recombine these three files back together. So I'm just putting the red channel in the red, the green channel in the green, and the blue channel in the blue. I want RGB color space. And then I'll hit the circle, the Apply Global button, to create a new file here. It says Image06. Let's go ahead and bring back up our screen transfer function, link the channels, and see how it did. Very good.
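Under the hood, LinearFit is solving a simple least-squares problem: find the scale and offset that make one channel best match the reference. A minimal numpy sketch of that idea (a straight-line fit; the process itself also handles rejection limits we're not modeling here):

```python
import numpy as np

# LinearFit in miniature: find a and b so that a*channel + b best
# matches the reference channel in a least-squares sense, then apply
# it. This bakes the channel balancing into the pixel data instead of
# doing it temporarily on screen like the unlinked STF.

def linear_fit(channel, reference):
    a, b = np.polyfit(channel.ravel(), reference.ravel(), deg=1)
    return a * channel + b

green = np.array([0.10, 0.20, 0.30, 0.40])
blue  = np.array([0.25, 0.45, 0.65, 0.85])   # same signal, offset + scaled
fitted = linear_fit(blue, green)
print(np.allclose(fitted, green))  # True: blue now matches green's scale
```

This is why the blue cast disappears after the fit: the blue channel's pixel values have been remapped onto the green channel's scale.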
OK, so this is now a linear-fitted image, meaning we've actually done the work of linear fitting the channels, which is basically the same thing as unlinking the RGB channels in the STF auto-stretch, but we now have a permanent version of that change, while everything we do with the screen transfer function is temporary. We can now get rid of these RGB channels. The next thing that I'm going to do is reduce some of the noise in the background. But I only want to do it in the background, so I'm going to stretch a luminance frame into a mask. Actually, let me get rid of the luminance I already extracted and extract a new one from this image, which I'm going to call RGB. Let's extract the new luminance from this one just in case; I don't think it'll make any difference, but just in case it does. So now we have RGB_L, and we're going to turn this into a lum mask, so I'll just go ahead and call it that. Let's bring up, under IntensityTransformations, the HistogramTransformation. Where it says "no view selected," I'm going to pick my lum mask. And I'm just going to stretch this out by taking this midtone slider and pushing it over to the left, then taking the shadow slider and pushing that over to the right. OK, something like that. I just want to make sure that it's protecting all the stars and any information in our galaxy, so I've made it a fairly bright luminance mask. I'm now going to apply that to this image just by dragging the tab right underneath this tab. Everything that turns red is what it's currently protecting, and we can see it's not protecting the stars or the galaxy. So we actually want to flip this mask: I'm going to go up to Mask > Invert Mask, and that's better. OK, now, so I can see what I'm doing, I'm going to go to the Mask menu and uncheck Show Mask. And then I'm going to apply some SCNR noise reduction.
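SCNR's default "average neutral" protection has a simple rule behind it: green is never allowed to exceed the average of red and blue, blended in by the amount setting. Here's a numpy sketch of the green version (a simplification of the process, but the core rule is this):

```python
import numpy as np

# SCNR "average neutral" for green: clamp green to the average of red
# and blue, scaled by an amount setting (1.0 = full strength).

def scnr_green(rgb, amount=1.0):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    g_neutral = np.minimum(g, (r + b) / 2.0)
    out = rgb.copy()
    out[..., 1] = g * (1 - amount) + g_neutral * amount
    return out

pixel = np.array([[[0.2, 0.8, 0.3]]])   # strongly green-cast pixel
print(scnr_green(pixel))                # green clamped to (0.2 + 0.3) / 2
```

With the inverted luminance mask in place, this clamping only happens in the background, which is exactly where the green color noise lives.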
So if I go down to NoiseReduction > SCNR, we'll start with green noise reduction. I'm going to go ahead and apply it at its full strength of one. And I'm going to do some red noise reduction, also at full strength. Keep in mind, we have this mask applied, so it's not hitting the stars or the galaxy. And I'll do some blue. It doesn't look like the blue has to be as strong, so I'm going to do 70%. OK, so that just took out some of the color noise. We're still getting some of the color in the background, so what we can do now is apply a light bit of desaturation with a CurvesTransformation, which is also under IntensityTransformations. I'm just going to pull up my saturation curve, which is this S all the way over here on the right-hand side, and desaturate the background a bit just by pulling down that curve a little. OK, that's looking better. Now I'm going to reset that saturation curve and flip the mask over, so I'm going to do Invert Mask. Now it's protecting the background and applying to the stars and the galaxy, and I'm going to increase the saturation of those, and I'm going to do that a couple of times. OK, the next thing I'm going to do is remove this mask, so I'll just go to Mask > Remove Mask. And I'm going to crop the image down, because I can see that down here there's not a lot of interesting information, and we also have these really aggressive registration artifacts. There's also nothing much interesting over here, and that part of the image just looks way too bright. So I'm going to crop this down quite a bit with Process > Geometry > DynamicCrop. I'll just do maybe something like that: sort of center the galaxy, and apply. OK, now that we've cropped, this luminance mask is useless, so I'm just going to get rid of it by closing the window. Close DynamicCrop. Let's go ahead and make this bigger. The view options for the windows are down here.
So this one is "zoom to optimal fit," and the one above it is "zoom to fit." So that will make it fairly big. OK, this is looking pretty good. The next thing: you can see that the image background is brighter over here and darker over here, so we're going to apply a background removal, which is under Process > BackgroundModelization > DynamicBackgroundExtraction. OK, the way this works is that you place samples on parts of your image that are background, just by clicking, and you want at least one in each corner. I also like to place one along each axis on the sides; so if you see this cross pattern, I like to put one at the end of each arm of the cross as well as one in each corner. When you have a strong gradient like this, you might also want to place a couple along it. I'm going to place a couple in here just to see if it can do anything about that unfortunate line, and then I'm going to place a couple in the center. You don't want to place any on bright stars, or on your galaxy or DSO. OK, the other thing to notice here is that when you place a sample, it's green; that's the selected sample, and that's what you're seeing over here in an inverted view. When you place a sample and it shows up as red, that means it wouldn't be included when it does the background extraction. So we want all of these samples down here to be included. The reason they're not being included right now is our model parameters right here. So I'm going to increase the tolerance up to 1.1 and resize all of my samples. You can see that when I did that, these three that were red before all turned white, because they're now included in the model, just by raising the tolerance of what can be included. But this one is still red, so I can increase that tolerance again; I'll increase it to 1.3. And that did the trick, so that sample is now included in our model. OK, you can change other things here.
You can make the samples bigger, meaning they would include more pixels, or smaller, meaning they would include fewer pixels, that kind of thing. You can change how smooth a gradient you want it to create. You can use axial patterns. You can do all kinds of stuff, and I'm not going to get into all of it. Let's go ahead and try Subtraction as the correction (the other option is Division), and we'll just hit the green check mark to execute it. OK, so the first thing we want to look at is the background model that it created. That's what it looks like. You can see that there are a bunch of lines here; it doesn't look very smooth. That's normal in a 16-bit screen representation, so if I change it to displaying in 24 bits with this little button up here that says 24, then you get a much smoother gradient. This looks pretty nice; I can X out of that. Let's see what the actual resulting image looks like. And that looks quite good. One thing that you'll notice when you do this work is that taking away the background reveals all of the crazy noise in the image. That's completely normal, and there are definitely other things we can do to work on that noise. The other thing to keep in mind is that this is an auto-stretch, so this isn't how I'm going to actually stretch the image in the end; it's just taking the pixel values and auto-stretching them, without taking into account any artistic element of stretching. OK, I'm going to go ahead and close out of DynamicBackgroundExtraction. I think that did a good job. I'll minimize this one; we don't need it, but I'll just put it over here just in case. And at this point, let's go ahead and save. I'm just going to go back to my folder here and save this as an XISF file. If you're going between different astronomy programs, you might want to save as FITS, but if you're staying in PixInsight, XISF is the default format. OK, the next thing I'm going to do is extract another luminance from this.
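DBE actually fits a flexible smooth surface through your samples; this minimal Python sketch fits only a tilted plane, which is enough to show the fit-the-samples-then-subtract idea behind the Subtraction correction:

```python
import numpy as np

# Dynamic background extraction, drastically simplified: fit a smooth
# model (here just a plane, value = a*x + b*y + c) to hand-picked
# background samples, then subtract that model from the image.

def fit_plane_background(samples):
    """Least-squares fit of value = a*x + b*y + c to (x, y, value) samples."""
    pts = np.array(samples, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs

# Synthetic gradient: background brightens toward larger x, as in the video
samples = [(0, 0, 0.10), (10, 0, 0.30), (0, 10, 0.10), (10, 10, 0.30)]
a, b, c = fit_plane_background(samples)
print(np.allclose([a, b, c], [0.02, 0.0, 0.10]))  # True: gradient recovered

# Subtraction correction would then be: pixel - (a*x + b*y + c) + pedestal
```

The tolerance setting we just raised corresponds, loosely, to how far a sample's statistics may deviate before it's excluded from this fit.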
And I'm going to do a different stretch on this new luminance. So I'll just pull up my luminance channel here, and we'll give this a nice stretch, just using the shadow slider and the midtone slider and stretching it out. OK, something like that. Then I'm going to apply that mask to the image here and once again invert it. So we have a very highly stretched luminance mask; we've inverted it and applied it to the image. I'm going to keep it applied but not show it. And I'm going to apply a little bit of MultiscaleMedianTransform. What this is about is that it's basically using wavelets to smooth out the noise. At a scale of one, you're looking at very, very high-frequency, very small noise patterns; at a scale of 32, you're looking at much bigger noise patterns. And R is just the residual, so that would be the very large structures, which can be helpful in some cases, but not for what we're doing. So I'm just going to zoom in here on the galaxy and the background so that I can pay attention to what this is doing. And I'm going to apply noise reduction, with more noise reduction at the smaller scales. So I'll do a 7.5 threshold at a scale of one, a 5.5 threshold at a scale of two, and just keep bringing these values down: maybe a 3 and a 2.5, and finally a 1.5 on this bigger one. OK, something like that; it doesn't have to be exact, and you can play around with this. One good way to play around is to define a preview area. So I'm just going to define this as a preview, and then open up the real-time preview. It then computes your multiscale median transform on the whole image, but we can look at this box once it's computed the real-time preview. OK, there we go. So now I can compare this against the real-time preview of what it's going to do.
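The threshold-per-scale idea can be sketched in one dimension. This toy Python version does a single "layer": separate a smooth part (a 3-sample running median) from the detail residual, zero out residuals smaller than the threshold (that's the noise), and recombine. The real MultiscaleMedianTransform repeats this across scales 1, 2, 4, and so on, with the per-layer thresholds we just dialed in:

```python
import statistics

# One toy "layer" of a median-based multiscale decomposition:
# smooth = running 3-sample median, detail = signal - smooth.
# Small detail values are treated as noise and dropped; large ones
# (real structure, like a star) are kept.

def denoise_1d(signal, threshold):
    padded = [signal[0]] + list(signal) + [signal[-1]]  # edge padding
    smooth = [statistics.median(padded[i:i + 3]) for i in range(len(signal))]
    detail = [s - m for s, m in zip(signal, smooth)]
    kept = [d if abs(d) > threshold else 0.0 for d in detail]
    return [m + d for m, d in zip(smooth, kept)]

noisy = [10.0, 11.0, 9.0, 10.0, 60.0, 10.0, 11.0]   # small wiggles + one star
print(denoise_1d(noisy, threshold=5))
# small wiggles flattened, the big spike survives
```

That's why raising the scale-1 threshold attacks the finest-grained noise first while leaving stars and galaxy structure alone.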
Looking at the preview, I'm checking whether it's blurring any real detail that I've captured in the image, or just applying a nice blur to the noise. And I think this looks pretty good. You can see that in this one there's a lot of high-frequency noise, and in this one some of that high-frequency noise looks blurred out. So I'm going to apply it to the actual image, just by grabbing the new instance icon and dragging it onto the image here. OK, I can close the real-time preview, and I can drop my preview by deleting it. We'll zoom back out a little bit. You can always do a before-and-after with undo and redo. I think we haven't lost any detail, but we now have a smoother, less noisy image. That's good. With this mask still applied and still inverted, I'm going to apply another round of desaturation to the background. So I'm going to go back up to Process > IntensityTransformations > CurvesTransformation, reset that saturation curve, and bring it down a bit. OK, that's good. I'm then going to un-invert the mask, so that it's back in its normal state. You can always show it again to make sure you know which version it's on; right now it's on the version where it's selecting the stars and the galaxy and masking the background. I'm then going to open up that CurvesTransformation once again, reset it, and bring the saturation back up on the stars and the galaxy. OK. At this point, it's fine for the stars and the galaxy to look a little bit garish, because when we stretch, we lose color. And with this mask applied, sometimes I'll even try to bring up just the blue saturation a little bit. So if we go under IntensityTransformations > ColorSaturation, we can apply changes across the color range just by adding points to this curve. So if I wanted to, for instance, target just the blue part of the color range and saturate it more than the rest, we could do something like that.
And that color saturation curve should bring up the blues a little bit more. OK, let's go ahead and apply a histogram transformation now. Before we do that, we want to remove our mask, so we'll go to Mask > Remove Mask, then go down to IntensityTransformations > HistogramTransformation. I'm going to reset this and pull up my RGB_DBE. Actually, it might be a good idea to save before we do this, so I'll just save this now with _nr, for noise reduction. OK. I like to reset the screen transfer function at this point and then do the stretch iteratively, so you can watch in real time what it's going to do after you apply the transformation. Basically, what I like to do is bring this very linear-looking histogram mountain over to about one quarter, then take the shadow slider and bring it right up to the edge of the histogram, which knocks the peak back down. Then I'll bring it right back up to the edge again and try to bring the peak back over to one quarter. OK, and then I'll reset my black point. All right, and this is looking pretty good already, with nice colorful stars. We can reset the black point once more, it looks like, and bring this over a little bit. I think that's a little bit too much; let me undo that. OK, I like that. Basically, we did a lot of the work while the image was still linear, so that it would be in good shape when we stretched. The main thing I'm noticing now is that there's a lot of green noise in the image, so I'm going to run SCNR green again: Process > NoiseReduction > SCNR, on the green channel, and this time at 50%. OK, that dropped it down a bit. I'm going to go ahead and change this... no, actually, I'm just going to get out of that. I'm going to extract a new luminance, again just by clicking that button up there, and I'm going to try to drop down the background a little bit with a histogram transformation on that luminance.
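A note on what that midtone slider is doing mathematically: PixInsight's histogram stretch is built on the midtones transfer function, which for a midtone balance m remaps each normalized pixel value x. A small Python sketch:

```python
# Midtones transfer function: MTF(m, x) maps [0, 1] -> [0, 1] so that
# a pixel at value m lands exactly on 0.5, with 0 and 1 fixed.

def mtf(m, x):
    if x <= 0.0 or x >= 1.0:
        return max(0.0, min(1.0, x))
    return ((m - 1.0) * x) / (((2.0 * m - 1.0) * x) - m)

# Dragging the midtone slider left (m = 0.25) brightens: 0.25 -> 0.5
print(mtf(0.25, 0.25))  # 0.5
# m = 0.5 is the identity midtone setting
print(mtf(0.5, 0.3))    # 0.3
```

So each iteration of "bring the peak to one quarter, then reset the black point" is just composing another MTF with a shadows clip, which is why doing it gently in several passes gives more control than one big stretch.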
OK, that's not working. Let's try something else. So this worked pretty well in terms of grabbing the stars. I'm actually going to push it even a little bit further, to try to completely drop out this background right here. OK. But the problem now is that while the background is completely dark in this mask, the galaxy arms are dropped out too. So I'm going to extract a new luminance (sorry, the luminance button there), and I'm going to create a range mask from that. So I'll go down to MaskGeneration > RangeSelection. The way this one works is you can drop a real-time preview here. You can see that with the lower limit and upper limit set to 0 and 1, you just get a completely white image, but if we raise that lower limit up a bit, then we start getting structures. I'm just going to do something like that, and then I'm going to increase the fuzziness. OK. And if we position that mask over this image here, we can see that the range selection completely covers the galaxy and its arms, which is what we want. Well, not quite the outer arm. OK, so I'm going to have to do a little manual work here to get that outer arm. Let's go ahead and open up Painting > CloneStamp. I'm going to click on the image to select it, then Command-click to set the source point. I'll make the brush nice and soft and quite a bit bigger, and bring down the opacity a little bit, too. I'm going to Command-click in the center here and then paint outward so that I'm picking up that arm out there. OK, looks good. I'll go ahead and apply that. OK, now I want to combine this range mask with the luminance mask, and I'm going to use PixelMath to do that. So I'll go down to Process > PixelMath, open up the Expression Editor, and just take the range mask and add it to the luminance mask. And we want a new image as the output. Let's apply it, and we get this. OK, I'm going to minimize these, and let's try applying this mask to our image now.
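As a rough model of what just happened: RangeSelection is a soft threshold on the luminance, and the PixelMath expression simply adds the two masks pixel by pixel. A numpy sketch (the exact smoothing RangeSelection uses for its fuzziness parameter is an assumption here):

```python
import numpy as np

def range_mask(lum, lower=0.1, upper=1.0, fuzziness=0.05):
    """Rough stand-in for RangeSelection: 1 inside [lower, upper],
    0 outside, with a soft ramp of width `fuzziness` at each limit."""
    lo = np.clip((lum - lower) / max(fuzziness, 1e-6), 0.0, 1.0)
    hi = np.clip((upper - lum) / max(fuzziness, 1e-6), 0.0, 1.0)
    return lo * hi

def combine(mask_a, mask_b):
    """The PixelMath expression `mask_a + mask_b`, with the result
    clipped to [0, 1] rather than rescaled."""
    return np.clip(mask_a + mask_b, 0.0, 1.0)
```

Adding the masks means a pixel is protected if it is bright in either mask, which is why the stamped-in outer arm survives even though the stretched luminance mask had dropped it out.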
OK, so this is how it's protecting. Let's go ahead and flip it around, because we want to protect the galaxy and the stars and affect the background. So this looks pretty good. The one thing is it's also protecting a little bit of the noise here, so I'm going to apply one more CurvesTransformation to the mask itself, just a sort of S-curve to tighten it up. OK, that's better. Now let's open up that CurvesTransformation again, but this time on the actual image, and I'm going to bring down both the exposure level and the saturation on the background. OK, that looks good. Let's take the mask off to see what that looks like; or actually, I'll just turn off Show Mask. OK, that looks good. Actually, I'm going to bring it down even a little bit more. Now, I just darkened the background quite a bit so that we wouldn't have the emphasis of that line. This is really personal taste, of course, and some people would find this background too dark. For me, this looks pretty good. I think especially with galaxy images you can darken the background quite a bit and it still looks good. But it's all personal taste, so maybe that was a little bit too much for some. I think it looks good, and we're going to call this a day, I think. There are other things you can do, like sharpening, but on this image I think this is enough. So let me just go ahead and remove the mask, and I'm going to save the image. I'll do Save As again and call this one final, and I'll save it off as a JPEG too; you can just do Save As and choose JPEG from there. I'll do 100% quality. OK, let's see what that looks like. Make this full screen. OK, and there's our final result out of PixInsight. You can see we have nice spiral arms, and the noise is well managed. I still see that artifact right there; I don't know how to fix that in PixInsight. You can watch my other videos on Photoshop and GIMP to see how to fix it quite easily in there.
So probably what I would do is use PixInsight to get it to this point, then bring it into one of those programs and just draw a mask in there to fix that. If someone knows how to fix something that irregular in PixInsight, let me know, because I'm not sure. But anyways, hopefully this was helpful. You can see we did a bunch of different things, often with masks, to get it to this point: dropping down the sky, saturating the stars and the galaxy. And again, this was done from Bortle 9, so you can't expect the world. But for 30-second exposures with just an unmodified DSLR under Bortle 9 skies, which means inner-city light pollution, I think this looks pretty good. If you have any questions, you can always ask in the comments. And if you like these kinds of videos and want to see more, please consider supporting me on Patreon. This has been Nico Carver from nebulaphotos.com. And until next time, clear skies.