A couple of months ago, Cuiv the Lazy Geek approached me with an interesting challenge. He suggested that we both photograph the Pleiades, a beautiful star cluster and reflection nebula in the constellation Taurus. The interesting part of the challenge is that he would have as much time as he wanted, meaning he could shoot the Pleiades over multiple nights, while I only had one night to shoot the object. And you might be thinking, well, how is this fair? Well, there's yet another twist to this challenge: Cuiv shot from his balcony in Tokyo, where, as you can imagine, there's lots of light pollution from the city, while I shot from a more isolated location, a park in Rhode Island, that was substantially darker. Using the famous Bortle scale for sky quality, Cuiv's location is a Bortle 8, while my shooting location is a Bortle 4. And the reason this challenge is so interesting is that Cuiv is trying to see if, by putting in many, many hours on a single deep sky object, he can beat the advantage I had of a darker sky. It's especially tricky because the Pleiades is a broadband object, meaning you want to capture all the light in the visible spectrum, so that rules out using narrowband filters. But Cuiv did have some other tricks up his sleeve. I found out that he had put together a special new imaging rig for this challenge: a Schmidt-Cassegrain telescope in a HyperStar configuration, meaning he'd be shooting at f/2, while I'd be using my Askar lens, which maxes out at a focal ratio of f/4, and with a much smaller aperture. Now, I planned to use the Askar with a QHY168C camera I have, but you'll hear in my conversation with Cuiv why that didn't work out, and I ended up using a stock Canon T7. 
Before I roll that Zoom chat I had with Cuiv, I want to let you know that if you stick around to the end of this video, I'm going to wrap it up by talking about what causes walking noise, what you can do to avoid it, and how I approached diminishing its appearance in my photo, because I didn't avoid it, and the walking noise was honestly pretty bad in my Pleiades photo for this challenge. Okay, so what is this challenge, Nico? First, thanks for doing this challenge with me. I've had a lot of fun actually doing the challenge, and it's the first time we've talked just the two of us on Zoom, so it's very interesting as well. I already spoke to you about the strategy that I took: from my balcony here in Tokyo, which is actually only 500 meters from Tokyo proper, I used my C6 telescope with a HyperStar lens so that I was imaging at a focal ratio of f/2, which means I can use very short sub-exposures and I get more light per pixel per unit of time, basically, and I hoped that would give me an advantage compared to a darker area, which is what you had. I took around six nights in total: three nights with an ASI533MC Pro, that's a one-inch sensor, and then three more nights with an IMX571 sensor in a RisingCam, which is kind of a knockoff of the ASI2600MC Pro. I stacked everything together, and I had around 3,000 frames of 20 and 30 seconds each, something like that, for a total of 22 hours of data, so 11 hours at APS-C size and 11 hours at one-inch size, and I also stacked just the one-inch portion on its own. I heard that you had trouble on your side. Yes, lots of trouble. So I was using the Askar lens that you have as well. 
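Cuiv's f/2-versus-f/4 point can be made concrete: the light collected per pixel per unit time scales with the inverse square of the focal ratio. A minimal back-of-the-envelope sketch (my addition, in Python, using the f-numbers from the challenge):

```python
# Flux per unit sensor area scales as 1/N^2 for a focal ratio N,
# so f/2 collects (4/2)^2 = 4x what f/4 does in the same exposure time.

def relative_speed(n1: float, n2: float) -> float:
    """How many times faster an f/n1 system is than an f/n2 system,
    per pixel, all else being equal."""
    return (n2 / n1) ** 2

print(relative_speed(2.0, 4.0))  # 4.0
```

This is only the per-pixel comparison; total light on the object still depends on aperture, sensor, and sky brightness, which is exactly what makes the challenge interesting.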
I haven't run into any of the issues that you've had with the threads or the star issues, so it was all user error on my side. What happened was, I wanted to dither and do automatic focusing and all this other stuff, because usually when I hit the dark site, I am running three setups, and one of them is usually untracked; that one is very labor-intensive, you're there having to re-center, and so I wanted this setup to be as automated as possible. I didn't want to have to check focus and all of that, so here's the setup. Oh yes, I had exactly the same thing. So I have the ZWO EAF there and the little ZWO mini guide cam on top, and then this is all the other automation stuff, the power box and whatever, and having this on the front also helps balance; otherwise you need some kind of counterweight out there. Yeah. Okay, so I had tested this entire setup with a mono camera at home. That's a rule that I always have: test the setup at home, make sure it works, then tweak it if I need to. That's my rule for going to a dark site: test it at home. But then I did something really stupid. I said, okay, I don't want to waste time on filter changes with a mono camera and refocusing by filter, so I'm going to switch to a color camera, and I thought this was going to be easy. I have the right spacer on there for 55 millimeters of back focus. I already know that all I have to do is screw it onto the back of my Askar lens, so I did that. I get to my dark site, and I can't reach focus. I go through the entire focal range, starting at one end and going all the way to the other: no focus. And then I realized that at some point I had put an extra five millimeter spacer on there. I have no idea why, I have no memory of doing that, and I couldn't get it off. And this is a very common problem: if you are in astrophotography long enough, you'll find that spacers bind together. 
It happens on so many of my spacers; you try with a rubber grip, and the two will not come apart. So I had no spacers to get me to the right back focus with this camera, and it took me between 45 minutes and an hour in the dark to realize all this, because at this time of year in the Northeast it gets dark at like 4:30 — I mean, I guess that's true across much of the Northern Hemisphere — so it was already dark, and I was just wasting dark time. Now luckily, I did have extra cameras, I did have a plan B, but I didn't have a USB cable to control this Canon T7. So then I'm like, okay, I've wasted so much time, I'm not even going to guide, I'm not going to do any computer control other than just having the mount running. So I just started taking 30-second pictures with the T7 with an intervalometer, and that's what I ended up doing. No guiding, no dithering. I was just like, I need to get at least a few hours here, and if I try to set this all up now, I'm probably just going to waste time. So that's my story for this challenge: I broke my own rule, in a way, by changing cameras, and then I paid for it. Yeah, yeah, we've all made this kind of mistake. I remember changing a camera before going to a dark site and forgetting the T-ring for that camera; I had the T-ring for another brand. Oh, yeah. And yep, I duct taped it together. That worked, but there was a ton of tilt. Yeah, and for any beginners watching this video, by the way, this spacer binding together happens all the freaking time. I think I have two strap wrenches in my car at all times just for that, just so I know I can unbind those things. Because sometimes, as you said, rubber grips are not enough. It's a big problem. 
So do you have — I've tried strap wrenches, and the problem I've had is that sometimes with something like this, where it's only five millimeters, I can't get the strap wrench to sit on just that piece without it extending over onto the next one. Or do you have very thin ones? No, I have thick ones. They're probably like two centimeters, almost one inch thick, something like that. And I have used them on such small adapters, but it's really hard and it takes a lot of time. And if you're in the dark, in the cold... Sure. Yeah, forget about it. Yeah. Have you found any other tricks that work? I know some people say to try freezing, or... Yeah, I tried freezing. I tried, not microwaving, but putting it in the oven, you know, that kind of stuff. It does not work. And there was a long time when I didn't know that strap wrenches were a thing, so I had bound adapters at home that I didn't know what to do with. Yeah. And I would use huge pliers on them and leave huge marks. Scratches, yeah. Yep. So yeah, I was dumb. I didn't have a mechanical bone in my body; I don't know how mechanics work, except, like, worm gears. So I had no idea those tools existed. But I'm super impressed, because I think every time I went to a dark site this year, something went wrong and I wasn't able to get anything out of it. And I think that's one of the big things if you live in the city and, like me, have setups that are permanently set up — on my rooftop, in my case. At least if something goes wrong, you can do something about it in a fairly calm manner, with all of the equipment you need to fix it on the spot. Whereas when you're traveling to dark sites, there's a bit of a Murphy lottery going on, right? Definitely. Yeah. And what I've learned is not only to have backups of every cable, every possible thing. 
I mean, I have backups of mounts, cameras, everything. But also, you have your plan A in your head, and you need to know when to just give up on plan A. Like in this challenge, I probably spent too long on plan A. But as soon as you commit to plan B, just go for it, and stop fiddling, because I've found a lot of astrophotographers are fiddlers. You'll see them at the dark site, at a star party even, and I'll ask them, well, how did last night go? And they'll say, oh, I was just messing with PHD2 all night. I'm like, why? Yeah, I think we don't like to leave a problem unsolved, and so we get tunnel vision on those things. I think a lot of us would make terrible pilots, because we'd be tunnel-visioning on a single instrument the whole time, forgetting that we're rolling and falling out of the sky. Yeah, I completely get the idea. And now on my rooftop, I actually have three different setups all ready to go. That way, if I need to mess with one, I have the other two to back me up. Yes. But still, I think in the end you got about three hours, something like that? Yeah, about three hours. Yeah, that is already a lot, and I feel like three hours would beat my 22 hours. But we can see — can I see the results? Yes, I'll share my screen here. Let's see. Awesome. All right, here we go. So that's three hours, no dithering, no guiding. Yeah, just at f/4. And I did achieve the thing that I set out to do, in a way. I looked at the Digital Sky Survey imagery — that's how I always plan my shots, whether it's on Telescopius or Blackwater Skies or whatever — and I noticed this really strong dust component here, and I just thought, okay, this looks really cool, the way it sort of makes an S into the Pleiades. And I was like, that'll be a great diagonal across the frame. 
And then I can even include — so originally I was thinking of doing this at 360 millimeters of focal length, but then I saw that at 200, I could get this little Vulture Head dark nebula down there in the corner. I had to try to get that, because I haven't seen too many shots framed like this, where you have the Vulture Head to the Pleiades. Yeah, this composition is amazing. It's such an awesome shot. We often see the Pleiades with the California Nebula, right, in terms of composition, but this is so cool, plus so much dust. And I like the dust you get around the Pleiades. Some people process it so that this whole thing is just brown, but there are actually a lot of blue stars that give you blue dust, and I think that's really cool. Completely agree. I saw the same thing, even on my APS-C — because I'm at 300 millimeters, I have a tighter field of view — but still, I see it. And I was actually surprised to be able to get that from Tokyo, and to see that some of that dark nebula right next to the Pleiades is actually blue, because of the blue star there that is lighting it up. I really love that. It's a beautiful target overall. Yeah, and it makes me think about what we call a reflection nebula, which is just the stars, you know, shining on the dust, versus a dark nebula. That's true. They are effectively the same thing, right? Yeah, I think we only call it a reflection nebula if it's a nice color to us — blue and yellow are nice, and brown is sort of... Yeah, because dark nebulae are reflecting starlight too, otherwise we wouldn't see them at all. Exactly. So they're reflection nebulae, technically speaking. It's a very good point. There might be a historical component to that, right? In terms of how the nebulae were first classified. 
Well, I think with some of the dark nebulae, like the ones in the Milky Way in Sagittarius, where there are so many stars, most people process them to be almost inky black — like Barnard's E, for instance, which is almost always processed to be just black against the Milky Way. I don't know if you could process them to be bright. But when you're in these dusty regions, constellations like Taurus or Orion or Cepheus, people usually bring out the dark nebula and try to make it stand out against the sky. Yeah. One of the things with dark nebulae, though, is that from Tokyo, or from really bad Bortle zones, they become my enemy, because there's just enough signal that they start showing those slightly brighter little pixels all over the place, but not consistently. So they're a source of noise, in a way, in the image, and it's really hard to get rid of. I was imaging the California Nebula recently with the Askar lens, actually — same kind of field of view — and I could see that one quarter of my frame was dark. There were no stars left. I had to look at the sky survey to see that, yeah, there's a dark nebula there that's just hiding stuff. And I love to see this kind of stuff. And this composition is amazing. There's so much dust, the dust color is beautiful, you have the Vulture Head and the Pleiades, it's such a natural color. And that's just three hours of data, all manual. Yeah. And maybe I'll show this in my half of the video: I did have to do some hacky work in Photoshop to reduce the walking noise, you know, those diagonal lines, because I wasn't dithering. And I had it on my Orion Atlas, so the tracking was very steady. 
And when your tracking is very steady but you're not dithering, you get very noticeable walking noise, especially with a DSLR. But I noticed that the dominant color of the walking noise was red, so I basically just did a select on the red and desaturated it, and that helped a lot. That is such a cool way to do it. Yeah. I've been a victim of walking noise so many times when I was starting out, before I knew dithering was a thing, or before I was autoguiding. And yeah, that's a super good way of getting rid of walking noise. Yeah. I think a lot of people's inclination is to reach for normal noise reduction techniques. It's horrible. It's going to totally destroy your image; it's going to make it all a blurry mess if you do that. So this is really no noise reduction, because at three hours the image can't really handle much noise reduction. I'm sure yours at 22 hours can, but with this at three hours, I don't do normal noise reduction; I just try to desaturate the noise if it has a unique color component to it. Yeah. I have to say, on many of my pictures these days, I use Topaz Denoise to do some noise reduction after removing the stars; then, once the noise is reduced, I put back the stars. And I see a lot of people use Topaz Denoise as well. Actually, I have a video on the topic coming out in a few days as we're talking — probably by the time this video comes out, it will be out already — about how Topaz Denoise can introduce detail that doesn't exist. And I see that very often: some of the most amazing pictures I've seen on Facebook or on AstroBin, that have been selected as picture of the day or whatever — I look at the details of the nebula, I compare to the Hubble picture of the same nebula, and there are details that don't exist. And you recognize the trace of Topaz Denoise. 
Now, denoising has gotten much better with Topaz Denoise these days, but there are things that annoy me about some denoising techniques that are being used, and there's no denoising technique that I know of that works well on walking noise. So I think your way of doing this desaturation is really, really nice, and I love the end result. I can already tell you it's better than mine. Oh no, I don't think so. Let's switch to yours here. Yeah, let me share my screen. Okay, so I'll start with the 11 hours of APS-C data. This is how it looks. Can you see the screen? Yep, I can. Looks great. So, I mean, of course we're looking at different image scales here, but I think that your stars — actually, I've watched some of your videos about the HyperStar, and you said that the stars were probably going to be a weakness of that telescope, but I think your stars look great here. I love the color. Thank you. Yeah, I think a lot of people don't like the star spikes that you get with HyperStar, because of the diffraction artifacts, and they place their cables in different ways to avoid that. I don't care that much. And by the way, while I'm zooming in, if people are interested: on APS-C, the stars in the corners are definitely degraded. They're a bit elongated, because I probably don't have the spacing perfectly right, but they're perfectly acceptable. And this is barely cropped — only the stacking artifacts were cropped. So I'm really impressed with HyperStar so far. And yeah, this is from Tokyo. As I was mentioning, in some of the areas here on the bottom left, there in the dark nebula, there's a blue tinge, and I really love that color. Even with just 11 hours, I was quite satisfied. When I zoom in, you can see it starts kind of breaking up, because there's tons of noise that I haven't been able to really take care of, but when you look at it from afar, I'm actually super satisfied with this. 
Because it's from Tokyo. It's from a Bortle 8, Bortle 9 zone, and I didn't expect it to be that good with just 11 hours across three nights. Still, I think you managed to get more of the dust. So it's like... Well, yeah, actually, you know what we should do is crop my image to the same field as yours to actually compare them. We can both do that in our videos, just to see. I think that would be interesting. I'll send you, of course, the processed image in TIFF or XISF format, and you'll do the same, and then we can look at that. I think that your signal-to-noise ratio is definitely better than mine. You were able to bring the sky background up brighter; if we looked at mine again, you'd see it's a lot darker, and that is to hide noise, basically. Right. Which is a common technique. And in your case, I think it actually looks great — it's great for the composition. I'm wondering whether, even if you had had the freedom to brighten the background, you would have taken that freedom, because then the S might be less visible. That's true. Yeah, that's true. Because I have seen images — actually, I do a challenge on my Patreon, and last month's challenge was Taurus, anything in Taurus. A lot of people picked this kind of scene, and a lot of people were really bringing out the dust. But then, yeah, you're right, you lose the S if you just bring it all up, because there's dust everywhere to bring out. Exactly. Yeah. It's a tough call. And let's see — so now I'm going to show what I have with 22 hours of data. This is after I stacked both the ASI533 and the IMX571 data, so it's a one-inch, much tighter field of view. The noise is much more controlled, and I see more details. I was actually surprised: I thought that with 11 hours at f/2, I couldn't get better in terms of signal-to-noise ratio, but I did. And this image, the full 22-hour stack, was actually much easier to process than the 11-hour image. That's something that I learned. 
And I would never have learned it without this challenge, because I don't think I would ever have spent those additional 11 hours at f/2. I didn't think it would add anything, but it actually did. And it's interesting; it's the first time I've done a challenge like that, because normally I don't challenge myself enough. Now that I did, I'm really happy with what I got here, and I don't think a lot of people would look at that and think it was taken from Tokyo. Definitely not. I mean, this is better than any image I've ever gotten from the city — because I live in the city too, it's just that I don't often show that part of my photography on YouTube. This is better than anything I've ever done from the city. It's really quite amazing. I wonder — I don't know if the Zoom compression is making me say this, but it looks almost like the 22-hour version also has much better contrast in the high-signal areas. Is that true? It is. It is. I saw it when I was processing — and sorry, there's the JPEG compression, and then there will be YouTube compression and Zoom compression, so it's not going to look very good — but there was so much detail in the areas here, in the center, in the wispiness here, even in the high-contrast areas. And I did not expect that. I don't know why — maybe the later data had better seeing. I don't know, but it was just easier to process overall, and there was just more detail available. So the 571 and the 533 have the same pixel size, right? Same pixel size. It's basically the same sensor, but cropped, right? I'm pretty sure that when Sony builds this sensor, they just cut it into different sizes, and that's it. So yeah, it's basically the same thing, and that's why I was comfortable mixing them together. Yeah, when I got the 571, I was very surprised that it also has the same pixel size as the ASI1600, the 3.76 micron or whatever. 
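For readers who want to check this kind of sensor matching themselves: the standard formula for image scale is 206.265 times the pixel size in microns, divided by the focal length in millimeters, giving arcseconds per pixel. A quick sketch (my addition; the 400 mm focal length is just an illustrative number, not either of our rigs):

```python
def pixel_scale(pixel_um: float, focal_mm: float) -> float:
    """Image scale in arcseconds per pixel.

    206.265 comes from the ~206265 arcseconds in a radian, with the
    factor of 1000 folded in for the micron/millimeter unit mix.
    """
    return 206.265 * pixel_um / focal_mm

# Comparing 3.76 um and 3.8 um pixels behind a hypothetical 400 mm scope:
print(round(pixel_scale(3.76, 400.0), 2))  # ~1.94 "/px
print(round(pixel_scale(3.8, 400.0), 2))   # ~1.96 "/px
```

The two scales differ by about one percent, which is why frames from such cameras register together comfortably for mosaics and mixed stacks.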
I think it's 3.8 versus 3.76; I don't quite remember. Yeah, it's so close that I was like, okay, this is going to work, and I continued on with mosaic projects that I'd started with that camera. It really does help when you're at the exact same pixel scale. Very true. Yeah. And I think my back spacing was slightly different between the two cameras. The good thing with RisingCam is that they copy ZWO, in that the back spacing to the sensor is 17.5 millimeters, just like ZWO cameras, at least with the ring at the front. Yeah, let me stop sharing so we can keep discussing. Honestly, I expected your image to be amazing, but when I heard that you'd had issues and only got three hours of data, I was like, haha, maybe I'll get a better result. And then I still see that amazing amount of data in there and the really good processing, and I'm like, you got a really amazing result. You know, people could do endless videos like this on YouTube — now I wonder what yours would have looked like if we had swapped locations. Of course — my three hours with the DSLR from Tokyo versus your 22 hours from my site. It would be so interesting, as a YouTube video, if we could actually get on planes and switch spots. Yeah, maybe one day we can meet at NEAF, or whatever it's called, and try to do some astro together. That would be so cool. Okay, let's dig into my image of it here at the end. I want to focus on one thing, which is what we call walking noise. Walking noise is created by a combination of two things: fixed pattern noise, and field drift throughout the night. Fixed pattern noise is just any noise that, as the name suggests, stays somewhat consistent from frame to frame. Most color cameras will have some; it's this ugly mottle in the background. 
You know, some of it's going to be random and change from frame to frame, but some of it stays somewhat consistent. As for field drift, I'll show you: I'm just going to blink through the frames quickly here, and you can see the stars are slowly moving down and to the right. And just to show that even more clearly, I stacked everything over three hours but didn't register the frames, and you can clearly see that we had some substantial field drift — like I said, down and to the right, which matches up with drift in declination. There are a number of reasons you could have drift like this; it's most likely that polar alignment got a bit off, or maybe I bumped the mount or something. In any case, we can see that there was significant drift. And after I align all of the photos based on the star patterns, the fixed pattern noise basically turns into streaks of noise following the direction of the drift. That's what we call walking noise. Now, after I just stack and apply an auto-stretch, it doesn't look too bad — I don't really see any noise — and that's pretty normal. I find that you only really start seeing the walking noise after you push the image to bring out what you want to bring out. In this case, I wanted to bring out this S of very faint dust with the Vulture Head down here, and by really processing the image, you start seeing the walking noise. I'll show you: if I just remove the stars, I think you'll be able to see it a lot more clearly. Let me make this a little bit bigger. So you see this diagonal noise going across like this. If you want to break up these streaks of noise, there are two good ways to go about it when capturing the data. One is trying to eliminate the source of drift, because if you had no drift, then you at least wouldn't get the lines. 
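The mechanism just described — a fixed pattern that ends up shifted frame-to-frame once you register on the stars — can be demonstrated with a small simulation. This is my own sketch in Python with NumPy, not anything from the actual data; it measures the streaks as correlation along the drift direction:

```python
import numpy as np

rng = np.random.default_rng(42)
size, n_frames = 128, 60

# Fixed pattern noise: the same faint mottle underlying every raw frame.
fpn = rng.normal(0.0, 1.0, (size, size))

stack = np.zeros((size, size))
for i in range(n_frames):
    # Field drift of ~0.5 px/frame along the diagonal. Registering the
    # frames on the stars shifts the fixed pattern instead; integer
    # shifts with wraparound keep the toy model simple. With zero
    # drift, the pattern would stay put and stack into static mottle.
    s = i // 2
    frame = np.roll(fpn, (s, s), axis=(0, 1))
    frame += rng.normal(0.0, 1.0, (size, size))  # per-frame random noise
    stack += frame
stack /= n_frames

def shift_corr(img, dy, dx):
    """Correlation of the image with a shifted copy of itself."""
    a = img.ravel() - img.mean()
    b = np.roll(img, (dy, dx), axis=(0, 1)).ravel() - img.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# The stacked noise is strongly correlated along the drift direction
# (those are the walking-noise streaks) but not perpendicular to it.
along = shift_corr(stack, 1, 1)
across = shift_corr(stack, 1, -1)
print(along > 0.5, across < 0.2)  # True True
```

The purely random noise averages down by roughly the square root of the frame count, but the smeared fixed pattern survives as diagonal structure, which is why stretching hard brings the streaks out.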
You'd still have noise in the background, but at least it would stay in one place, rather than turning into these very visible streaks across the image. And two, if you use autoguiding with something like PHD2 and you dither between exposures — especially if you dither in both declination and right ascension — that can really help a lot, because it breaks up the noise patterns. Instead of forming consistent streaks, the fixed pattern is slightly shifted between each frame, and when you stack all the frames together, the noise is no longer consistently along the direction of the drift; it gets a lot more broken up and a lot less visible in the final image. So dithering is very important. It also helps eliminate some noise through the stacking process itself, because stacking throws out outliers, and if those fixed-pattern noise features aren't in the same place — they're all shifted about — the stacking can reject them as outliers in the data. But let's say you can't guide and dither, or you didn't guide and dither, like I didn't. What can we do to diminish the appearance of this walking noise in the image? Well, the first thing is — I don't know how well it will come across here, but there's a lot of green noise. So the first thing I often do with an image like this one is run a process in PixInsight (if you have PixInsight), found under Noise Reduction, called SCNR, and I run it on the green channel — under where it says 'Color to remove,' I choose green. I don't want to run it at full strength; I usually start at something like half strength. And I'll just show you the before and after here. So here is before, with lots of green noise, and here's after. It didn't hurt the colors of the image too much; it didn't really disrupt the color balance, I don't think. It really just removed a lot of the green noise. 
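For the curious, SCNR's common "average neutral" mode boils down to a simple rule: green should not exceed the average of red and blue, and the amount slider blends between the original and the clamped green. Here is a rough sketch of that formula — my approximation, so PixInsight's exact implementation may differ in details:

```python
import numpy as np

def scnr_green(rgb: np.ndarray, amount: float = 0.5) -> np.ndarray:
    """Approximate SCNR 'average neutral' green removal.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    amount: 0 = no change, 1 = full-strength green clamp.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    neutral = 0.5 * (r + b)              # the most green a pixel "should" have
    g_clamped = np.minimum(g, neutral)   # full-strength SCNR
    out = rgb.copy()
    out[..., 1] = (1.0 - amount) * g + amount * g_clamped
    return out

# A green-dominant noise pixel is pulled toward neutral; half strength
# moves green halfway from 0.6 to min(0.6, 0.2) = 0.2.
px = np.array([[[0.2, 0.6, 0.2]]])
print(round(float(scnr_green(px, amount=0.5)[0, 0, 1]), 2))  # 0.4
```

Pixels whose green is already at or below the red/blue average are untouched, which is why a moderate amount barely disturbs the overall color balance.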
Now, if you wanted to keep going with this and remove the red noise and the blue noise, you'd really want to start masking, because if you just ran SCNR on each channel without masking, you'd pretty quickly make your image monochromatic — if you keep removing too much of the color, it's not going to work very well. But usually removing green is fine, because unless you're dealing with comets, something like the Trapezium in Orion, or a planetary nebula — where, of course, you wouldn't want to do it — there's not too much green in space. And between the Pleiades and the dust, there's not much green that we have to worry about. Okay. So from here, you can still see the streaks, but I want to show you something — I'm going to open this up in Photoshop. Okay, so here's the image open in Photoshop. As soon as I add the stars back to the image, the stars immediately help break up the image enough that the noise isn't as noticeable, but I do still see it, especially in the dark areas of the image. So the next thing I do is basically just try to hide it by removing a lot of the magenta in the image, because a lot of what's left now is magenta-ish noise. So I run a Selective Color layer here, and I remove a lot of magenta from the magentas. I also do a little bit of crushing the blacks: in the neutrals, I raise the black level, and in the blacks, I again raise the black level just slightly. This helped a lot. The only real walking noise I see that's left is a little bit of a sort of orangish red, because we've removed the magenta from the reds, so what remains is maybe just a little orangish red in a few spots. So what I do there is click back to my starless layer here, and I'm going to select by Color Range. 
And I just select those parts of the image where I still see a little bit of that walking noise, in the form of a sort of orangish-red line. You can play around with the Fuzziness slider here, and you can also blur the mask that you're creating, until you basically get something like this, where you can see it's still following the direction of the drift — so it's still the walking noise — but I'm only selecting some parts of the background. So I did that by selecting by color and then working with the mask a little bit. Then all I do is desaturate those parts of the image: I brought saturation down by negative 77, and I also brought down the lightness just a little bit. This is a much more subtle change, but if you know where to look, you can see I'm really just desaturating certain little parts of the leftover noise. One little tip that I've found works well when you're doing this: if you're having trouble seeing what you're doing, you can throw on an adjustment layer that raises the overall exposure a couple of stops, work through this process of removing the walking noise, and then turn that layer back off to get back to the neutral place you were in before. With this image, I got it to this nice neutral place where I diminished the appearance of the walking noise, and then I just did some final curves work to bring the dust back up a little bit. And this is my final image. So hopefully that was helpful. Again, this was a little bit Photoshop-specific here at the end, but if you wanted to do something similar in PixInsight, you could just work with masks — you could use the ColorMask script, for instance, to select the certain colors and then desaturate them with curves or something like that. One thing that I have found, and I mentioned it, I think, in my conversation with Cuiv, is that you can't really blur walking noise. 
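The select-by-color-and-desaturate move translates to any tool that can mask by hue. Here is a rough stand-in sketch in Python using only the standard library's colorsys plus NumPy — my addition; the hue values and tolerances are illustrative, not the ones used on the actual image:

```python
import colorsys
import numpy as np

def desaturate_hue(rgb, target_hue, tol=0.05, amount=0.77):
    """Selectively desaturate pixels whose hue is near target_hue.

    rgb: (H, W, 3) floats in [0, 1]; target_hue in [0, 1), e.g. ~0.0
    for red, ~0.9 for magenta. amount=0.77 echoes the -77 saturation
    move from the video. A crude analogue of Photoshop's Color Range
    selection followed by a Hue/Saturation adjustment.
    """
    out = rgb.copy()
    h, w, _ = rgb.shape
    for y in range(h):
        for x in range(w):
            hue, sat, val = colorsys.rgb_to_hsv(*rgb[y, x])
            # Circular distance on the hue wheel.
            d = min(abs(hue - target_hue), 1.0 - abs(hue - target_hue))
            if d < tol:
                out[y, x] = colorsys.hsv_to_rgb(hue, sat * (1.0 - amount), val)
    return out

# An orangish-red noise pixel loses most of its color; a blue star
# pixel is left completely alone.
img = np.array([[[0.5, 0.25, 0.2], [0.2, 0.3, 0.9]]])
res = desaturate_hue(img, target_hue=0.03)
```

A per-pixel Python loop like this is slow on real frames; it is only meant to show the logic of masking by hue and reducing saturation inside the mask while leaving everything else intact.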
Blurring it just turns everything into a soupy mess. What you really want to do is select its color and desaturate just that, leaving the rest of the detail intact. It's sort of like trying to separate out different parts of the image, and it's a little bit of a guessing game sometimes. But again, the processes that I've found work very well for this kind of thing are SCNR in PixInsight, maybe with a mask, and Selective Color and Color Range selection in Photoshop. Okay, well, that does it for this edition. I hope everyone enjoyed this challenge that I did with Cuiv. I certainly did, and I was blown away by what he was able to pull out of Tokyo. Okay, since this video I'm sure is over 20 minutes long, you're now seeing all of my current members here on my Patreon campaign. If you want to see your name in the credits of future long Nebula Photos videos, you can sign up over on patreon.com/nebulaphotos. We now have over 500 members, so it's a big, cool community, and there are a bunch of benefits beyond just your name in the credits of long videos. Some of those benefits: I've done one exclusive video and am working on a second exclusive video just for Patreon; there are monthly Zoom chats where you can ask questions of me and other people on the chat; and there is a Discord community, which is really cool, very vibrant, with lots of stuff going on, including monthly imaging challenges with prizes and a quarterly group imaging project where we're all working on the same deep sky object together. Patreon also has lots of different communication methods — you can direct message me straight through patreon.com, and you can also do it on Discord — so there are a lot of cool ways to connect and really get involved. If you want to accelerate your learning after watching some of my videos, consider joining over on Patreon. It starts at just $1 a month, and again, the link is patreon.com/nebulaphotos. 
Until next time, this has been Nico Carver at Nebula Photos. Clear skies.