All right, another thing I want to show you before we dive into processing here is just what a single sub looks like in each channel and then what it looks like after they're stacked. We'll see this a little bit as we go, but a lot of people are just curious about this up front. So keep in mind, this is just for this particular object, which is the Seagull Nebula, or sometimes called the Parrot Nebula, IC 2177, but it's still illustrative, I think, of what you can maybe expect out of an object that emits in hydrogen-alpha, O3, and S2. So the first thing we're going to look at here is an H-alpha single sub. This is one five-minute exposure with the ZWO ASI 1600 in H-alpha. Now let's look at it after it's been stacked. This is, I think, something like 30 sub-exposures stacked together. So there's a single, and there's a stack. A lot comes out in the stack as you increase the signal-to-noise ratio by stacking many frames together. Here's a single O3 sub, Oxygen 3. You can see there's just barely something there; it's a little bit hard to make out. Here's after we stack, and then it's a lot more evident where that O3 is. Again, I'll show you a single and then the stack. Okay, and then last, here's the S2 signal, the sulfur. There's a single, oh, sorry, this is the single, I got out of order, and there's the stack of the sulfur. All right, with that said, we can move on. Okay, now let's jump into processing. You can see here on my desktop, I have a folder called IC2177. That's the object that we'll be processing today. I took narrowband data, so I have the H-alpha frames, the O3, the S2, and then the corresponding flats for those lights, and then the corresponding dark flats for those flats. Dark flats are just like darks for your lights: they calibrate the flats, just like your darks calibrate your lights. So I have that all organized here. You can also download these files from my website, nebulaphotos.com. Okay, I'm going to open up PixInsight.
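As an aside on why the stacks look so much cleaner than the single subs: averaging N frames reduces the random noise by roughly the square root of N, so about 30 subs cut the noise roughly 5.5-fold. A tiny simulation of that idea, using made-up pixel values rather than real camera data:

```python
import random
import statistics

random.seed(42)

TRUE_SIGNAL = 100.0   # hypothetical "real" pixel brightness
NOISE_SIGMA = 10.0    # per-sub random noise level
N_SUBS = 30           # roughly the number of subs in the stack shown

def shoot_sub():
    """One simulated exposure: the true signal plus random noise."""
    return TRUE_SIGNAL + random.gauss(0.0, NOISE_SIGMA)

# Compare the scatter of single subs to the scatter of 30-sub averages.
singles = [shoot_sub() for _ in range(1000)]
stacks = [statistics.mean(shoot_sub() for _ in range(N_SUBS))
          for _ in range(1000)]

print(statistics.stdev(singles))  # close to 10
print(statistics.stdev(stacks))   # close to 10 / sqrt(30), i.e. under 2
```

The faint O3 and S2 signal that is "just barely there" in a single sub sits below this noise floor, which is why it only emerges after the stack pushes the noise down.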
Here's what it looks like when you just open it up fresh, and we're going to calibrate everything manually. So we're going to start by going to Process, go down to Image Integration, and choose Image Integration from that submenu, click on Add Files, navigate to my desktop here, go into my darks folder, and just select all of those darks to add them. So you can see I have 15 darks there in the Input Images window. Then down here under Image Integration, I'm going to change a few of these settings. Under Normalization, I'm going to change it to No Normalization. When you're calibrating darks or bias frames, you don't want to use normalization; you just want to give everything a straight weight. So also under Weighting, I'm going to say Don't Care (all weights equal 1). I do want to generate an integrated image, but I don't need to subtract pedestals or evaluate noise, so I'm going to turn those off. That scale estimator, I'm just going to leave on the default. Under Pixel Rejection, I'm going to say Winsorized Sigma Clipping, meaning that if there is anything a couple of sigma outside of the norm, it will probably just ignore those values. Under Normalization, I'm going to say No Normalization again. And for these clipping options, I'm going to turn off Clip Low Range, but leave on Clip Low Pixels and Clip High Pixels. What that means is that I don't want it to think that maybe the amp glow is something it wants to reject; we want to keep that in the master dark. So I'm going to turn that Low Range off, but keep these ones on just in case there are some stray things that it should reject. Okay, that's it. So again: Average, No Normalization, Don't Care (all weights equal 1), Generate Integrated Image, Winsorized Sigma Clipping, No Normalization. All looks good. I'm going to hit this little circle to apply global. And if you have more cores in your system, PixInsight uses those cores to work a little bit faster.
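For intuition about what Winsorized sigma clipping does at each pixel position in the stack: instead of discarding outliers outright, values beyond the sigma bounds are clamped to the bounds and the statistics are re-estimated. This toy version is not PixInsight's actual implementation, just a sketch of the concept:

```python
import statistics

def winsorize(values, k=2.0, iterations=10):
    """Iteratively clamp values more than k sigma from the median."""
    vals = list(values)
    for _ in range(iterations):
        med = statistics.median(vals)
        sigma = statistics.stdev(vals)
        lo, hi = med - k * sigma, med + k * sigma
        vals = [min(max(v, lo), hi) for v in vals]
    return vals

# The same pixel position across 10 subs; one sub caught a hot pixel or
# cosmic ray (made-up values for illustration).
pixel_stack = [100, 102, 98, 101, 99, 100, 103, 97, 100, 500]

print(statistics.mean(pixel_stack))            # 140.0 -- the outlier drags the mean up
print(statistics.mean(winsorize(pixel_stack))) # back close to 100
```

This is why a stray value in a handful of frames doesn't survive into the master, while a feature present in every frame, like the amp glow, does.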
So since we're only integrating 15 frames, this won't take very long. It does give you, in this process console, an idea of what it's doing, which can be handy. If there are any errors, it will also report them over here, and a lot of times you can use those errors to figure out where you went wrong and how to correct it. Okay, it's done. This was real time; I didn't speed up the video or anything, so you can see on a fairly fast machine this goes pretty fast. You can look at these rejection maps if you want to. I'm just going to stretch them so you can see what it didn't include in the Master Dark, meaning these are pixels that sort of look like hot pixels, but they weren't in each frame, so it rejected them. Here's the low rejection map. And then if I stretch the Master Dark, it looks like this. Now, that looks a little bit scary; it looks like there is a lot of noise there. But one thing to keep in mind is that when you apply this STF auto stretch, it is stretching the data to the extreme that it can. So while the Master Dark can look a little bit crazy like this, don't worry about it. The whole point of taking calibration frames is to remove this noise from your finished image. So this looks perfectly normal for the ASI 1600. If we zoom in, you can see these little white dots are hot pixels. And on the larger scale here, these things on the sides and in the corners are amp glow, meaning, I don't know exactly how to explain it, but the amplifier, when it amplifies the signal, is leaving some glow in the frame, and we want to subtract that out through the calibration process. Anyways, I'm going to go ahead and save this off. So I'm going to do File, Save As. I'm going to put it back into that same folder that I've been working in, and I'm going to call it Master Dark. I don't want to save it as a TIFF file; I'm going to use the XISF file format.
You could also, if you're planning to reuse this in programs outside of PixInsight, use the FITS file format. It's really up to you. If you only work in PixInsight, then there's no advantage of FITS over XISF, so I usually just use the default file format that PixInsight uses, which is XISF. And the defaults here are fine; we want to use a 32-bit floating point sample format. Okay, now that that's done, I'm just going to go ahead and close that. Then what I'm going to do is create master darks using these exact same settings, but with my dark flats for each filter. So I'm going to go ahead and click the Clear button right here to clear the list of input images and click Add Files again. And this time, I'm going to go into my dark flats folder, start with HA, and grab these 30 files. You can see I now have 30 files. These are dark flats; each one is 0.46 seconds. And again, I'm going to use the same settings, but I'm now going to integrate my master HA dark flat. So I'm going to hit the little Apply Global circle down here, and it will go through and integrate this. Again, I'm just showing this in real time, but for the subsequent filters, the O3 and S2, I might speed this up so it doesn't get too monotonous. Basically, I'm just using the same settings for my dark flats that I used for the master dark that will calibrate my lights. OK, it's done. Just to show you how this compares, there are my rejection maps; get rid of those. There is my HA dark flat. So you can see it looks a little bit different from the master dark for my lights. For one thing, we don't get these amp glows in the corners; we now get a little bit more of an amp glow at the bottom. This will look a little bit more similar to what you'd expect from a bias frame. But again, we're not going to be using bias, because we're using dark flats instead.
So I'm just going to save this off. I'm going to call this HA master dark flat and save it as an XISF file. OK, can close that. Again, I'm going to clear this list of input images, and this time I'm going to add my O3 dark flats. And again, apply global. And I am now going to speed this up, because we're going to do this for O3 and then for S2, so it's a little bit boring. All right, here's my O3 dark flat. Save that off. Close it. Clear the list. And finally, add my S2 dark flats. Apply. Save that, this time as S2 master dark flat. Close that. I'm going to go ahead and, for now, close Image Integration and open up Image Calibration. What we're going to do now is use those master dark flats that we just created to calibrate our flat frames. So we're going to start with HA. Again, I'm using the Image Calibration process; I get to that from Process, Image Calibration, Image Calibration. Up here where it says target frames, I'm going to click Add Files and go into my flats folder, starting with my HA flats. Select them all, so I have 30 flats up there. I'm going to make a new output directory: I'm going to go into my IC2177 folder here, click New Folder, and call this HA Flats Cal. Okay, I'm going to turn off master bias and turn off master flat. Under master dark, I'm going to leave that checked, but I'm going to turn off optimize. Optimize is a dark-scaling option: if you wanted to scale the dark based on exposure time, you could do that. I don't recommend ever using that with CMOS chips. You might be able to use it with certain CCDs that have a more linear response to dark scaling, but I do not recommend it for DSLRs or any CMOS chips. Calibrate, I don't need that on either. So I'm just going to go ahead and click this little folder here to select the master dark flat, which is the HA master dark flat. Okay, so we have that set: master dark, HA master dark flat.
It's calibrating the HA flats, and it's going to output them to this folder that I've called HA Flats Cal, for calibrated. With all of that set, I'm going to go ahead and hit the little apply global button. All right, you can see: Image Calibration, 30 succeeded, zero failed, zero skipped. That sounds good. Just to see that it worked, I can open up one of those calibrated flats. You can see unstretched it doesn't look like much, but if I stretch it, we can now see the flat frame there. And you can see this was taken with my astrograph, which gives a pretty flat image. There's just a tiny little bit of vignetting in the corners here; I think that's from my filter, not really from the telescope. And then there's a little bit of other weirdness. I can't really tell if those are dew spots or what's going on there, but you can see it's a pretty flat file. When we take all of them and put them together, it will become even more apparent what's going on, but this looks good. And you might also be wondering, as we're doing this, isn't there a more automatic way to do this? There is: it's under Script, Batch Processing, Batch Preprocessing. Sometimes I do use that, but it really works best if you are going to be using bias frames. I don't know if there's a way to use that automatic feature if you're using dark flats like I am here, so we have to do it the manual way. It's a little bit tedious, but once you get used to it, it's not too difficult, and you also have more assurance that you know what's happening at each step along the way. Okay, so now I'm just going to repeat that process for the O3 and the S2. So clear the list, add my O3 flats, make a new folder called O3 Flats Cal, drop in my O3 Master Dark Flat right there, and hit the circle, the apply global button. And do it one more time for the S2.
Flats, S2, I have my S2 flats here. By the way, with my system, the S2 flats are always the longest: if you remember, the HA flats were just 0.46 seconds, and my S2 flats are almost triple that, 1.26 seconds. I'll make a new S2 Flats Cal folder and drop in my S2 Master Dark Flat here under master dark, and again, apply. Okay, now that we have calibrated flat files, we are going to open back up Image Integration. Go ahead and clear this list, and we're going to take our calibrated flat files and integrate them into master flats for each filter. So what I'm adding here now are my new calibrated flats. I'm going to start with HA, using that new HA Flats Cal folder, and you can see that I'm in the right folder because each file name now has this underscore C at the end. Okay, so I have 30 flats in there. I'm going to change these settings just a little bit. Under Normalization, I'm going to change it from No Normalization to Multiplicative. I'm going to leave the weights on Don't Care (all weights equal 1), but I'm going to turn back on Evaluate Noise. And down here under Normalization, I'm going to turn on Equalize Fluxes. I'm not going to explain right now what each of these options does, but if you're interested, you can always hover over things in PixInsight, and a little tooltip will often come up and explain a lot of this terminology for you. Okay, with that done, so again: Multiplicative, Average, Don't Care, Evaluate Noise, and under Pixel Rejection I'm going to keep using Winsorized Sigma Clipping, with Equalize Fluxes. I'll also make a quick note here: if you are taking certain kinds of flats, like sky flats, Winsorized Sigma Clipping might not be the best option. You might instead want to use Percentile Clipping or something else that's a little bit more aggressive, to reject the stars in your flats.
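Loosely speaking, Multiplicative normalization scales each flat so its overall level matches the reference frame before combining, since flats differ from each other by a brightness factor (panel drift, exposure) rather than an additive offset. A toy sketch of the idea, using median matching; PixInsight's actual estimator may differ:

```python
import statistics

def normalize_multiplicative(frames):
    """Scale each frame so its median matches the first (reference) frame."""
    ref_median = statistics.median(frames[0])
    scaled = []
    for frame in frames:
        k = ref_median / statistics.median(frame)
        scaled.append([p * k for p in frame])
    return scaled

# Three tiny "flats": the same vignetting shape, different overall brightness
# (made-up values).
flats = [
    [40, 50, 50, 40],
    [44, 55, 55, 44],   # light panel 10% brighter on this one
    [36, 45, 45, 36],   # and 10% dimmer here
]

for frame in normalize_multiplicative(flats):
    print(frame)  # all three frames now sit at the same level
```

With the levels matched, the pixel rejection that follows compares like with like, so Equalize Fluxes plus Winsorized Sigma Clipping can reject genuine outliers instead of whole frames that were merely brighter or dimmer.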
I was using a light panel for this dataset, so Winsorized Sigma Clipping will work fine. Okay, and we're just going to repeat this process for HA, then O3 and S2. So here we go. All right, I can look at the rejection maps. Really, when I'm stretching these rejection maps, all I'm doing is making sure that nothing leaps out as being really odd, because you might want to re-run the process if the rejection maps look very strange; but if there are just a few pixels rejected, don't worry about it. You can just close those out. Here's our integrated HA flat, and you can see now there is more pattern here. Again, everyone's flats are going to look a little bit different, because it's about the optical system that you're using; with different filters and different telescopes, you're going to get a different response here. But for this particular combination of things I'm using, this is what my HA flat looks like. I'm going to go ahead and save it as HA Master Flat. I can close out of that. And again, just like when we were doing the dark flats, we're going to repeat this process using the same settings, so I'm not going to change anything down here, and I'm not going to reset the whole process window. I'm just going to clear the input list, this time add my O3 calibrated flat frames, hit Open, and just hit the apply global button. All right, that's done. I can close the rejection maps, stretch my O3, and yes, this does look quite funky. I don't know if it's something about the ASI 1600 or the Astrodon O3, but I often get this sort of weird reverse look with my O3 filter. The good news is that it does seem to calibrate nicely. And the other thing to remember is that this STF auto stretch is showing you the absolute extreme stretch; so while it does look funky, it's not really this dramatic. It's just an extreme stretch of the data. Anyways, I'm going to save this off as my O3 Master Flat.
And last up is the S2, so I'm going to clear the input list, add my S2 calibrated flats, and apply. All right, there's my S2 flat. I'll save it as S2 Master Flat. Can close that, can close Image Integration for now, and open back up Image Calibration. So we're sort of just going back and forth here. Image Calibration, Image Calibration. I'm going to go ahead and just hit the Reset button down here in the lower right, then add my first set of lights. So we're finally on to calibrating lights. I'm going to go into HA and select these 37 HA lights. I'm going to go into Output Files and make a new folder called HA Cal. Right here under Output Pedestal, I'm going to use an output pedestal of 800. This is in data units on a 16-bit scale. Since I was using an offset of 50 in the camera's 12-bit range, I multiply that by 16 to convert it to the 16-bit scale, and that's how I get to 800. This is a more advanced subject that I'm not really prepared to explain fully right now, but basically the reason for an output pedestal is that sometimes when you calibrate, you can get into negative values, where you get black holes in your data, and this is to avoid that. So with the ASI 1600, I'd recommend always using some sort of output pedestal here; usually between 400 and 1000 is sufficient. For my particular data, 800 I think is the right value; I'm using a gain of 200 and an offset of 50 in the driver. But you might want to experiment a little bit with the output pedestal and see what works best. For this camera, I would recommend using one. Okay, so we've set up our output directory and set our output pedestal. I can turn off master bias, and turn off optimize under master dark. I'm going to add my master dark here; this is the master dark that we created first thing and haven't used yet. I'm going to add my master flat for HA, which we just made. And so this looks good.
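The 800 figure is just a unit conversion: the driver's offset of 50 is in 12-bit ADU, while the output pedestal is specified on a 16-bit scale, so the value is multiplied by 2^(16-12) = 16:

```python
DRIVER_OFFSET = 50        # offset set in the camera driver (12-bit ADU)
CAMERA_BITS = 12          # native bit depth of the ASI 1600
PEDESTAL_BITS = 16        # scale the output pedestal field expects

scale = 2 ** (PEDESTAL_BITS - CAMERA_BITS)   # 2^4 = 16
output_pedestal = DRIVER_OFFSET * scale

print(output_pedestal)    # 800
```

The same arithmetic gives the 400 to 1000 range mentioned above for driver offsets of roughly 25 to 62.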
We have HA master flat under master flat. We have the master dark under master dark. We have calibrate and optimize turned off. Under output files, we have an 800 under output pedestal, and we've made a new folder called HA Cal to drop all of the calibrated files into. And we'll hit go, which is the little circle. Okay, and if you want a quick test of how it worked, I'm going to open up my first HA light here, and I'll also open up my first calibrated HA light. I'll just put them side by side and give each one an auto stretch. Okay, and they look pretty similar, to be honest, but you can see that this one has a little bit of vignetting, where the center looks a little bit too bright, and that's been corrected over here. Sometimes if we zoom in, we can also see where it has removed hot pixels. So I see one right there. I'm going to zoom in one more level. Sorry, I can see two hot pixels right here. So one handy thing we can do in PixInsight: I'm going to make this a little bit smaller, and I can drag this tab right on top of this one, and it zooms in to show you the same view. So what we can see here are the same stars, these three stars right here and right here. And there's a hot pixel, there's a hot pixel, and you can see in the calibrated result those have been removed. So do you have to actually do this every time? No, but it can be fun, just to make sure things are working, to open up a light frame and a calibrated light frame and inspect the differences. Close those out. And we're going to repeat this process for our O3 and our S2. So I'm going to start by going into Lights, O3, select all the O3 light frames, make a new folder called O3 Cal, leave in that 800 output pedestal, leave the same master dark in, but change out the flat for the O3 master flat. And hit the apply button. Okay, and do the same thing for S2.
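Per pixel, the calibration we just ran boils down to subtracting the master dark, dividing by the master flat normalized to its mean, and adding the pedestal to keep values out of the negatives. A heavily simplified single-pixel sketch with made-up numbers (the real ImageCalibration pipeline handles more cases than this):

```python
def calibrate_pixel(light, dark, flat, flat_mean, pedestal=800):
    """Simplified light-frame calibration for one pixel.

    light, dark, flat are raw pixel values; dividing the flat by its
    mean normalizes it so the division doesn't change the overall level.
    """
    return (light - dark) / (flat / flat_mean) + pedestal

# A vignetted corner pixel: the flat is dimmer there (0.8 of the mean),
# so the division brightens the corner back up to match the center.
corner = calibrate_pixel(light=2000, dark=1000, flat=8000, flat_mean=10000)
center = calibrate_pixel(light=2250, dark=1000, flat=10000, flat_mean=10000)

print(corner)  # (2000-1000)/0.8 + 800 = 2050.0
print(center)  # (2250-1000)/1.0 + 800 = 2050.0
```

That's exactly the vignetting correction visible in the side-by-side comparison above: the same sky now yields the same pixel value in the corner as in the center.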
Add the S2 lights, change out the master flat, make a new folder called S2 Cal, and hit apply. All right. So we have now calibrated all of our lights. I'm going to close out of that. The next thing that we can do here is use either Blink or Subframe Selector to inspect them. If you are using Blink, you just go to All Processes, Blink. You open up one of the calibrated folders and open up those images, and then just use your arrow keys to move through the images. Basically what you're looking for here is any frames that really jump out as bad. Sometimes, though, when you're zoomed out like this, it can be really hard to tell, so you can try zooming in and then moving through them again. And again, all of these look pretty similar to me. You can see that there is drift between the frames; that's because I had dithering set up, so it's moving between every frame. You can also see that this is two sessions: the early session is a lot brighter than the later session. So that's Blink. It can be useful, but what I mostly use is Subframe Selector. So under All Processes, I'm just going to jump down to the S here and choose Subframe Selector. Add all of my HA calibrated frames. I often actually just dump everything in here at once, so I'm going to dump in my O3 calibrated frames as well, and my S2 calibrated frames. The scale on this was about 1.6 arcseconds per pixel. Camera gain was 0.5 electrons per data number. It's 12-bit. That all looks fine. Subframe Selector started as a script, and I think you can still get it there, but as a process, the way it works is you have to choose the routine up here that you want it to run. So I'm going to start with Measure Subframes, and then hit the apply global button. And you can see what it's doing over here: since I added everything, we are measuring 72 subframes, meaning all of the lights that we've calibrated.
And what it does is it looks at the noise in the image, but more importantly to me, something called eccentricity, which measures the roundness of the stars, and the FWHM, or full width at half maximum, which is an approximation of the focus. So we'll let it do its thing here, and then we'll examine the results. Okay, it's done measuring. There are three different windows that it opens up here. This first one is where we put in what we wanted it to measure and give some system parameters; there's this window, which is called Expressions; and then there's this window, which is your Measurements window. The first thing that I often do is put in an approval formula here, and I usually base it on what I'm seeing in the data. Looking at eccentricity, I can see that there are just a few frames over 0.7, so that could be one cutoff. You can see over here the left scale is FWHM; the median looks to be around 3.3, and the outliers are up here, well over 4. So let's just try FWHM less than 4.5 and apply it. And you can see that took out 1, 2, 3, 4, 5, 6, 7 frames out of our 72. I'm going to go ahead and sort this table by FWHM, descending, just to see how many of those are S2 frames. 1, 2, 3, 4, 5. So it's taking out 5 of my 15 S2 frames, which leaves me with only 10. That's a little dicey. I wish I had a better data set here, but we have what we have. So let's just look at the really bad offenders. You can see we do have a number over 5, but this one has a really, really bad eccentricity of 0.941. That's O3 number 20. Do any of the S2 frames have bad eccentricity? Yes, maybe this one. Or is that another O3 frame? That's another O3 frame, number 19. Okay, so I think it's really just these two frames that I'm going to want to get rid of. Let me just check the eccentricity graph here.
You could use the output feature of this, but since there are really just two frames in this group that I want to get rid of, I'm just going to actually go into my folders, go into O3 Cal, and find those two: 19 and 20. Just double-check that those are the ones. 20, 19; they ended in 156 and 702. Yep, those are them. And I'm just going to delete those out of the folder. Okay. We could also apply a weighting here if we want to. I usually only do weightings when I have a bigger data set than this; this is a fairly paltry data set, so I don't think a weighting is necessarily called for. But a common one that I use is just 1 over FWHM, full width at half maximum. If I apply that, what happens is that frames that are better in focus get a higher weighting, and frames that are not in good focus get a lower weighting. We can then use that weighting when we integrate these files into a master, and that can be pretty useful when you're going for certain things in your image. So if I were trying to get a really nice sharp image, I might use that weighting expression. I'm not going to actually use it here, but you can do that and check out your weightings quite easily with this tool. Again, sort by FWHM; this time I'll do ascending. So I can see here that my best frame is this one. That's an HA frame, I think. Let me see. That's number 25 in the HA set right there. This is my sharpest frame in terms of FWHM, and it also has a good eccentricity of 0.460. Pretty good. Maybe this one's even better: you can see the FWHM is about as good, but the eccentricity is even better, 0.386. Number 18 in that same data set. So maybe I'd go with that one. What I'm looking for here is really just the best frame to register against: what am I going to use as my reference frame when I register all of these? So I think I'm going to use this one, number 18. You just want to take note of this.
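What the approval and weighting expressions amount to is a filter plus a scale factor: reject frames above an FWHM cutoff, weight the survivors by 1/FWHM so sharper frames count more, and pick the sharpest, roundest survivor as the registration reference. A sketch with hypothetical frame names and measurements (the real files and values will differ):

```python
# Hypothetical Subframe Selector measurements: (frame, FWHM, eccentricity).
frames = [
    ("Ha_018", 2.9, 0.386),
    ("Ha_025", 2.9, 0.460),
    ("O3_020", 5.1, 0.941),   # bloated and badly elongated -> reject
    ("S2_007", 4.1, 0.550),
]

FWHM_LIMIT = 4.5   # the "FWHM < 4.5" approval expression

approved = [(name, fwhm) for name, fwhm, _ in frames if fwhm < FWHM_LIMIT]
weights = {name: 1.0 / fwhm for name, fwhm in approved}  # the 1/FWHM weighting

best = min(approved, key=lambda f: f[1])   # candidate registration reference
print(best[0])
print(sorted(weights))   # the rejected frame never gets a weight
```

Between equally sharp candidates, the tie-breaker above is the eccentricity check described in the text, which is how frame 18 beat frame 25.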
One way to do that, if you're taking breaks in processing, is to save this off as a CSV. I'll just put it back in the same folder; I'll call it subframe selector. A lot of times I've found that I think I'll remember what I've learned here, but then I forget, and I've closed PixInsight, and all the measurements are gone, so I have to rerun it. So it's better just to save it off as a CSV at this point, so I can always come back to this data, because it really is useful data in a number of ways. And I haven't gone in depth here with Subframe Selector expressions. Maybe in another video I could talk more about this tool and how I use it, but this is just sort of an overview of narrowband processing, so we're done here. I've thrown out those really bad O3 frames. I'm going to keep the somewhat bad S2 frames, just because I don't have much data, and I'm not going to use a weighting formula; but I did find my best frame to register everything against. So I'm going to close out of this. And speaking of registration, that's where we're headed next, so we're going to go to Process, Image Registration, Star Alignment. Up here where it says Reference Image, this is where we're going to pick that reference out of HA Cal, and it was number 18. So that's the best... oh, that's not it. Oh, that is it, sorry, confusing myself. Well, I can always double-check with my CSV here. Number 18, right? Let's see. Yes, that was it: the one that had the really low eccentricity and a good full width at half maximum. Okay, so I picked that for my reference image. I am not going to go into drizzling, just because it takes a while, and even though this dataset might call for it, we just didn't capture enough frames to really adequately drizzle.
I find that you need at least 25 to 30 subframes in each channel, each filter, to really adequately drizzle, which, again, I'm not going to go into right now; but it is a good thing to do when you have under-sampled data like this. We're not going to do it, though, so I'm going to uncheck Generate Drizzle Data. The rest of these settings are fine. I'm going to go ahead and click Add Files and select everything here in the HA Cal folder. I'll output them to yet another new folder; call this one HA Reg, for registration. And hit apply global. Okay, that's done. Now I'm going to leave in that same reference image. This is important: you want to keep registering against the same image, but change out what's being registered. So now I'm going to add in all of my O3 Cal lights and make a new O3 Reg folder. And again, apply it. Okay, and then I'm going to do it for the S2 frames, leaving that HA reference frame in there. I'm going to go down to S2 Cal, select all of these, make a new folder called S2 Reg, and apply. All right, all done with image registration. So we are on to the last pre-processing step, which is the final image integration of our now calibrated, registered light frames. I know it's been a long journey already. So we're going to go to Process, Image Integration. This time, what I'm going to have you do is just reset the process: go down to the lower right-hand corner and click the Reset button, and add our HA registered frames. We're going to leave everything under Image Integration on the defaults. Under Pixel Rejection, we're going to choose Winsorized Sigma Clipping, turn on Clip High Range, leave on all the other options here, and run it. By the way, this is what I will usually run first: basically the defaults with Winsorized Sigma Clipping.
If something seems unusual, I might change it, or if I'm going for something very particular, I might be adding in drizzle data here, or weightings, or other things, but those are more advanced topics. For just your average picture, I just run the defaults on Image Integration, and they do a pretty good job. All right, here we go. This is a case where I do think it's important to take a look at the rejection maps. You can see it rejected either an airplane or some satellites there, and some stars that maybe were, I don't know what the deal is there. And then in the low rejection map, the main thing you can see is along the edges here: because of field rotation between different sessions, with the camera being rotated a little bit differently, it rejected the edges a bit. Then let's look at our final integration here. Looks pretty good to me. Zoom in and check it out. That's pretty low noise; this is 37 five-minute HA frames. Yeah, I don't really see anything off-putting about that, so I'm going to go ahead and save that as just HA, and I'm also going to use the identifier HA there. To change the identifier, you just double-click on it and set it to whatever you want. I'm going to minimize this, but keep my little HA master in my workspace here. All right, and now we just do the same thing for the O3 and the S2, and then we will be done with pre-processing. So go to O3 Reg and select all of those. All of this stuff can stay the same; apply. All right, here's my O3 result. You can see the rotation effect in the low rejection map is even more extreme there. And here's what we got. You can see that we have these black edges. The reason for that is, remember, we registered this against the HA so that they would match up, right? So that means we'll have to do some cropping, but that's okay. This looks like a nice result for the O3. And finally, we are going to do the S2.
Oh, before I do that, let me save this and call it O3, and I'll add my S2 frames right here from S2 Reg. All right, that's done. There's my S2. Save that, call it S2. Okay, the first thing I'm going to do with these three is crop away these black edges, and I'm going to show you how you can do that identically on all three frames. So we're going to open up Process, Geometry, Dynamic Crop. Push this over to the side a little bit. I'm going to click on the S2 here and then just click Reset. What that does is it gives me this crop box around the edge. Again, the way I did that: just click on the one you want to crop, then click the Reset button, and it'll give you a nice crop box to play around with. I'm going to bring in this top edge, then move my mouse outside of the box and rotate it a little bit. What I'm trying to do is get as much of the image as possible while cropping away those black edges. All right, that looks pretty good. Now, instead of just applying this, I'm going to use the New Instance icon, and I'm not going to click it; I'm going to actually drag it off onto my workspace, and you can see it now says Process 1. If I wanted to, I could rename that, if I were planning to reuse it a lot, but I'm not, so I'm just going to go ahead and leave it as Process 1 and close out my Dynamic Crop process. It'll ask, are you sure you want to cancel an active session? Yes. Now I'm going to apply this process to each channel here. Okay, so they're all now cropped. They're still registered, because remember, we've just done the same thing to each one. I can check that just by moving them like that, and the only thing I still have to check is that there are no black edges remaining. I can see on the O3 there is a little bit, so I'm going to get rid of this one and re-open Dynamic Crop. This time click on the O3, reset it, and I just have to drag in this left side a little bit.
I'm going to zoom in to make sure I'm not overdoing it, just to get rid of that little black side there. Again, I'm going to do the same thing: drag off a new instance, close Dynamic Crop, reset the zoom here, and apply this to each channel. All right, so now we have registered, cropped images: our S2, our O3, and our HA. The next thing I'm going to do, after I get rid of this, is some background extraction. I'm going to use Dynamic Background Extraction, and this nebula fills most of the frame, so I have to be careful here. I'm only going to use a few samples. I'm just going to put one in this corner, one in this corner, one here at the top, one in this corner, and one in this corner. Well, maybe there's really just no good place to put one in the middle. Maybe I'll put one right there. There might be a little emission there, but it's generally okay. And that's good. I'll go ahead and choose Subtraction as the correction and apply it. Let's look at the background just by stretching it, and then let's look at the result. Yeah, I think that improved the contrast a little bit. It looks nice, so I'm going to keep that. I'm going to push that over there and do the same thing for the O3 and S2. So it's under Background Modelization, Dynamic Background Extraction, and I'm just going to place a few samples where there aren't too many stars, mostly along the edges here, but I also want one sort of in the center. Sometimes I'm a little more careful about this, but I'm going to do it quickly because I don't want to make this video too long. Looks good. So I'm going to subtract the background and check it out. Yeah, one thing I've noticed about the O3 is there did seem to be some brightening down in that corner, so I'm glad I picked up on that. Looks good. Move that out of my way. Run it on the S2, just picking samples where it is sky background and not the nebula.
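Under the hood, background extraction amounts to fitting a smooth surface through your sky samples and subtracting it from the frame. Here's a rough NumPy sketch using a low-order 2-D polynomial; DBE actually builds a more sophisticated surface-spline model from its samples, so treat this as an illustration of the idea only, with names of my own choosing.

```python
import numpy as np

def subtract_background(img, samples, order=2):
    """Fit a 2-D polynomial background through sky samples and subtract it.

    img     : 2-D float array (one channel).
    samples : list of (row, col) positions placed on pure sky background.
    order   : total degree of the polynomial surface.
    """
    rows = np.array([r for r, c in samples], dtype=float)
    cols = np.array([c for r, c in samples], dtype=float)
    vals = np.array([img[r, c] for r, c in samples], dtype=float)

    # Design matrix of monomials x^i * y^j with i + j <= order.
    def design(x, y):
        terms = [x**i * y**j for i in range(order + 1)
                             for j in range(order + 1 - i)]
        return np.stack(terms, axis=-1)

    coeffs, *_ = np.linalg.lstsq(design(cols, rows), vals, rcond=None)

    # Evaluate the fitted surface over the whole frame and subtract.
    rr, cc = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    model = design(cc.astype(float), rr.astype(float)) @ coeffs
    return img - model
```

This is also why sample placement matters so much on a nebula that fills the frame: a sample sitting on faint emission pulls the fitted surface up and the subtraction eats real signal.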
I also don't want a lot of stars in my samples, so I'm going to keep them off the stars. Then I'm going to run the subtraction correction on it. Check out the background, check out the result. Okay, looks good. Now we are ready to apply actual stretches to these. Right now they are not stretched; if I turn off the auto stretch, you can see they are still linear. I like to do this manually, to taste. So I'm going to start with the HA and open up, what is it, Intensity Transformations, Histogram Transformation. Where it says "no view selected" down here, I'm going to choose HA_DBE. I'm going to take this middle slider and move it over, then do that again, and again. Once I get a little breathing room over here on the left-hand side, I'll bring my shadows slider over. You can see that when you bring it over too far, right there is the number of pixels that you would be clipping to black, so I try to keep that as close to zero as possible. Basically what I'm doing here is making the image more contrasty, bringing out the signal. That looks good; I'm going to stop right there for now. If I have to go further later, I can, but I don't like to do too dramatic a stretch right away. Move on to the S2 here and do the same thing: bring that over as an initial stretch, back off a little bit, one more time, reset the black point. Comparing that to my HA, you can see we already have an issue: the S2's background level is a little too bright compared to the HA. I'll deal with that in a second. I'm going to go ahead and stretch my O3 first: take this mid slider over, run that a few times, reset the black point. Okay. So now, before I combine, I'm just going to get the O3 and the S2 to the level of contrast that was so easy to reach with the HA. I'm going to do that with curves. Go back to Intensity Transformations, Curves Transformation, and open up a real-time preview here.
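For the curious, the midtones slider in Histogram Transformation applies what PixInsight calls the midtones transfer function (MTF), which pins 0 and 1 in place and maps the midtones balance point m to 0.5; dragging the slider left (m below 0.5) brightens the midtones without clipping either end. A small sketch of that shape of operation, with my own function names:

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function: fixes 0 -> 0 and 1 -> 1, and maps the
    midtones balance point m to 0.5. With m = 0.5 it is the identity."""
    x = np.asarray(x, dtype=float)
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def histogram_stretch(x, shadows=0.0, midtones=0.5):
    """Clip at the shadows point, rescale to [0, 1], then apply the MTF --
    the same pair of moves as the manual slider adjustments above."""
    y = np.clip((np.asarray(x, dtype=float) - shadows) / (1.0 - shadows),
                0.0, 1.0)
    return mtf(y, midtones)
```

Keeping the shadows point just shy of the histogram's left edge, as in the narration, is exactly what keeps the black-clipped pixel count near zero in this model.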
Then I can see what I'm doing as I'm doing it. Take this down here, take this one up. Reset. Take it down again, apply. Reset, take that sky background down one more time, bring the signal up a little bit, and apply. Okay, now I'm going to compare the HA and the S2. They look more similar in their tonal response now. So I'm going to do the O3: open up my curves, open a real-time preview, and just play around in this part of the curve. Looks like that. I'll apply it again. Okay, looks good. So now we're ready to combine these into a color image. I'm going to go to Process, PixelMath. I don't want to use a single RGB/K expression, so I'm going to turn that off. I'm going to open up my expression editor for my R channel and drop in my S2 with a double-click. G channel: my HA. Blue channel: my O3. Okay. This is what we call an SHO mapping, or a Hubble-palette mapping: S2 is mapped to red, HA is mapped to green, and O3 is mapped to blue. Under Destination, we want to create a new image. For color space, we don't want "same as target"; we want an RGB color image. And I can never remember whether I want Apply or Apply Global here. I think just Apply. Yep. Okay, we can close out of that. Here's our first Hubble-palette image of the Seagull Nebula. If you're a purist, you could just go with that. It's very green, but we can play around with it. One thing that I'm a little unhappy about with it is that the sky background is a little too colorful for me, so I'd probably desaturate the sky a little and make it a little darker. But it's looking pretty good. We have nice separation already between the channels, but I think we can improve it a little. To do that, I'm going to minimize these out of the way. We can do it with just curves. So go to Process, Intensity Transformations, Curves Transformation, and I'm going to reset this.
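For reference, the PixelMath combination above is nothing more than a channel assignment. Assuming three registered, stretched mono frames as float arrays, it can be sketched like this (the function name is mine):

```python
import numpy as np

def sho_combine(s2, ha, o3):
    """Hubble-palette (SHO) mapping: S2 -> R, HA -> G, O3 -> B.
    HA is usually by far the strongest of the three signals, which is
    why the straight combination comes out predominantly green."""
    return np.stack([s2, ha, o3], axis=-1)
```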
I can open up a real-time preview here, and then, just with the red, green, and blue curves, we can do a lot of interesting work: playing around with how much red is in the image, how much green, and so forth. Something like that might be interesting. Again, this is just really quick; you can get much more in depth. For example, if you want to mask off certain colors, there's a script up here. I don't use it very much, so I'll have to find it... Utilities, ColorMask. This script is pretty cool. You can pick a color; let's say I want to work on the yellows in the image. I'm going to click the word "yellow," choose a chrominance mask, and click OK. It finds all of the yellows in the image and creates a mask for you, just like that. Then I can really target the yellows and change just that part of the image. So this is an option. If you have never applied a mask before, you just apply it like that, and then you're working on the yellows in the image. I'm going to go ahead and get rid of that; I'm not going to do much with it right now. But it is an option if you want more control over the colors in your narrowband image; that's a good way to do it. Really, the only thing that's bothering me about this image right now is how purpley-magenta the stars look. A quick way to fix that: I'm going to go to Process, All Processes, find Invert, and invert the image. You can see that when I invert the image, all of those magenta halos turn green, and then I'm just going to zap them. I'm going to treat them as noise and go to Noise Reduction, SCNR, with Color to Remove set to green. If I wanted a really dramatic removal of all the magenta halos, I would apply this at 100%. I'm going to try 75% instead; I just find that 100% is too dramatic whenever you're using SCNR.
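The invert-SCNR-invert trick works because magenta is the complement of green: invert, and the halos become exactly the kind of excess green SCNR is built to remove. Here's a sketch of the idea using the average-neutral flavor of SCNR (PixInsight offers several protection methods, so this is one variant, not the tool's full behavior, and the names are mine):

```python
import numpy as np

def scnr_green(rgb, amount=1.0):
    """Average-neutral SCNR sketch: cap green at the mean of red and
    blue, then blend with the original by `amount` (1.0 = full removal)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    capped = np.minimum(g, (r + b) / 2.0)
    out = rgb.copy()
    out[..., 1] = (1.0 - amount) * g + amount * capped
    return out

def remove_magenta_halos(rgb, amount=0.75):
    """The trick from the text: magenta inverts to green, SCNR removes
    the green, and inverting back removes the magenta."""
    return 1.0 - scnr_green(1.0 - rgb, amount=amount)
```

At amount 1.0 a magenta star halo goes fully neutral; at 0.75, as used here, some residual color survives, which is the point of backing the strength off.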
So I'm going to bring it down a little bit, apply it, and see how that looks on the inverted image. I think that looks pretty good, but the real test is when we invert this image back. Yes, and I like that a lot. You're going to have to experiment with this; for different uses you'll want a different amount here. We probably could have even gotten away with 50%, and it still would have taken out the magenta halos while leaving some star color in the image. The amount of star color you leave in a narrowband image is really personal taste. Some people still like colorful stars in a narrowband image; other people just want pure white stars. So it's really up to you. I think this looks good. And now it's really just tweaking. If you want to play around with saturation levels, you can go to Intensity Transformations, Color Saturation. Again, this has a real-time preview, so you can bring the blues up or down and see how that looks. Maybe bring up the blues a little, bring down the greens a little, bring up the reds, bring down the magentas. Okay, it's maybe a little too strong. Let's try that. Yeah, I like it. Again, though, this is up to you. Some people like a nice saturated narrowband image; others would find this too garish and want something with a little less saturation. Another thing I might do with this image is take a luminance out of it. To get a luminance, all you have to do is click this button right here, Extract L Component, then apply that luminance as a mask and invert the mask under the Mask menu. I'm also going to turn off Show Mask here for a second, then open up Curves, reset it, open up my RGB/K curve, open a real-time preview, and just take down the sky background a little bit, along with the saturation of that sky background. Something like that.
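The luminance-mask step can be sketched like this. I'm using simple Rec. 709 weights for the luminance, whereas PixInsight's Extract L Component computes CIE L* in a proper colorimetric space, so treat this as an approximation of the idea, with function names of my own:

```python
import numpy as np

def extract_luminance(rgb):
    """Approximate luminance via Rec. 709 channel weights."""
    return 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

def background_mask(rgb):
    """Inverted luminance mask: bright (nebula) areas end up near 0 and
    are protected, dark (sky) areas end up near 1 and are fully exposed
    to the curve adjustment that follows."""
    return 1.0 - extract_luminance(rgb)
```

This is why the mask gets inverted before the darkening curve: the adjustment should land on the sky background, not on the nebula.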
Just to show you before and after on that one: here's before, where you can see the sky background is a little lighter and there's a little more blue in that corner, and here's after, with a little of the saturation taken out of the sky background and the background made a little darker so that the nebula stands out more. I'm going to remove that mask, and the last thing I'm going to do is go back into Curves one more time, reset it, go to my RGB/K curve, open up a real-time preview, and do some final tweaks with the curve. You can see what this does: a really dramatic S-curve adds a lot of local contrast to the image. You want your nebula to take on that sort of 3D effect, and this is a way to do it. But when you really stretch it like that, then of course it looks terrible, it looks garish, and you lose a lot of detail. So it's really about making fine little adjustments so that you have bright areas of the nebula and dark areas. This is just a very mild S-curve, and I might even bring it down a little bit on the top end here. Okay, there. Of course, there's always more you can do with an image. We could do something with the stars; we could have deconvolved them or shrunk them or whatever. But this is really just basic narrowband processing. I wanted to show you how to properly calibrate and register, put your narrowband image together with PixelMath, and then tweak it with mostly curves, which is one of the number-one processes to master in PixInsight, I'd say. For saving it off, we can do File, Save As, and all of the options are right in here. We can save it as a JPEG or PNG for the web; a TIFF for bringing it into Photoshop or other tools; or an XISF or FITS file for continued work inside PixInsight. One other thing I'd say about saving TIFFs out of PixInsight: for them to work in other programs, I find you really want to use 16-bit mode. Other programs might be able to read 32-bit mode, but usually not very well, or you'll get weird display issues, so I always use 16-bit TIFFs out of PixInsight when bringing them into other software. All right, that's it for this session. I'll just make this a little bigger so you can see my final result here. If you process this yourself, I'd be interested to see your results; you can share them with me in the comments on my YouTube, or you can find my contact information on my website. Thanks for watching, and if you have any suggestions for future videos, let me know in the comments. Thanks!