Hello, my name is Nico Carver and I'm an astrophotographer. My website is at nebulaphotos.com, and today we're going to look at what astrophotographers call narrowband imaging. The most famous application of this is the Hubble Palette, popularized by the Hubble Space Telescope. In the videos that follow this one, I'll show you how I process narrowband images to make pretty full-color pictures, and you can find the links to those videos below. I have one for Photoshop, one for GIMP, which is a free, open-source application, and one for PixInsight, which is what a lot of more advanced astrophotographers are using. Before we get into all that, though, I thought it would be a good idea to really understand narrowband first. We first have to understand a bit about the science of color and the science of color vision in the human eye, and then how we try to replicate that with cameras and monitors: basically photography, and anything in the modern age where we're trying to make a full-color image look like what we see with our actual eyes. And so we're going to start sort of at the beginning, with this thing called the electromagnetic spectrum, also called the EM spectrum for short, and I'm sure a lot of you have heard of this. It's basically a large range of radiation, from very low-frequency radio waves and microwaves and things like that, up to super-energetic, high-frequency gamma rays. And there's a very small section of the spectrum whose waves our eyes can actually directly detect. We call this section visible light, meaning the light that is visible to our human detectors, our eyes. With different sensors, though, we can now directly image other parts of the spectrum. And so you may have heard of scientific missions that are doing X-ray detection or ultraviolet, parts that we don't directly see, but sensors can.
And the interesting thing is that different animals are sensitive to other parts of the spectrum, just outside of what we think of as the visible spectrum. For instance, the lenses in our eyes block ultraviolet light. But recent research has found that in other mammals, like my cat here, the lenses don't block UV. Now, we're still not sure if that means Bubby here can really see in UV. But why does any of this happen? It's because the sensor of the eye is the retina. That's in the back of the eye, where the rods and the cones live. The cones are cells that can only detect certain wavelengths, and then our animal brain, or the cat's brain, interprets those wavelengths as colors. Most mammals are dichromats, meaning their color vision is due to the interaction of two different types of cones in the eye. Humans and many other primates are trichromats, and that means we have three different types of cones. We can characterize these cones as short-wavelength cones or S cones, medium-wavelength M cones, and long-wavelength L cones. And so you can see from this chart that each type of cone is most responsive to certain wavelengths. A simple way to think of this is that we have blue cones, green cones, and red cones: RGB. We don't have cones that see this particular color, for instance. But through the combination of the responses in the green and red cones, our brains can interpret this as yellow. Make sense? So note that the biggest area of overlap here for the cones in our eyes is in the green part of the spectrum. That's going to be important later when we get to color cameras. But we have to start with the black-and-white camera, or mono camera. This is short for monochromatic. The earliest photography was all black-and-white mono, meaning we could measure the intensity of the light across the scene, but we didn't yet have a way of reproducing color. The earliest technology for reproducing color was very interesting.
It happened before we invented color film. And so this is how it worked. You would ask your subject to stay very still and then take three photographs, each time placing a different colored glass filter in front. I'm sure many of you can guess the colors of these filters. Yes, it was red, green, and blue. And by combining the relative intensities of the light captured through these three filters, we can fairly precisely mimic what the human eye sees. Back then, the only way to display the result in color was by using three slide projectors displaying three distinct images, one on top of another, onto the same screen. But how do we do it today? Well, an LCD monitor or a flat-screen TV is basically just a light panel. The light panel itself is just white light. In front of that are pixels, and the pixels are just little polarizing filters. Those polarizing filters control the intensity of the light at each pixel site. And then again, in front of that, we have the red, green, and blue filters; each filter controls one sub-pixel within the pixel. And so one sub-pixel is lighting up a sort of pale green, and then the one next to it is a little bit darker green. When you combine all of this together, it looks like a color image. But if we use a microscope on a monitor that shows all white, we would actually see these sub-pixels. So each pixel is clearly made up of a red, a green, and a blue sub-pixel. A digital color camera works pretty similarly, but instead of a light panel in the back, we have the sensor. A sensor is just a piece of light-sensitive silicon with millions of little pixel wells attached to electronics that turn the analog source, the light coming through the lens and hitting that sensor, into a digital readout. So at each pixel site, or photosite, it says this intensity of light hit right here, based on the light coming through the lens.
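That three-filter idea is easy to see in code. Here's a minimal sketch, not anything from the video, of combining three mono frames (one per color filter) into a single RGB image; the array shapes and the 0-to-1 intensity scale are my assumptions.

```python
import numpy as np

def combine_rgb(red_frame, green_frame, blue_frame):
    """Stack three mono frames (2-D arrays) into one H x W x 3 RGB image."""
    return np.stack([red_frame, green_frame, blue_frame], axis=-1)

# Tiny hypothetical 2x2 frames, intensities in [0, 1]
r = np.array([[1.0, 0.0], [0.5, 0.2]])
g = np.array([[1.0, 1.0], [0.5, 0.2]])
b = np.array([[0.0, 0.0], [0.5, 0.2]])

rgb = combine_rgb(r, g, b)
print(rgb.shape)   # (2, 2, 3)
print(rgb[0, 0])   # [1. 1. 0.] -> strong red + strong green reads as yellow
```

Note how the top-left pixel has equal red and green but no blue, which is exactly the "our brains interpret red plus green as yellow" point from the cone discussion above.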
And so with astrophotography, a lot of times those signals are very small, which is why minimizing noise is so important. If you have a mono camera, then there's nothing in between the front of the sensor and the optics other than maybe a protective glass window. If you have a color camera, like most DSLRs today, then in front of the sensor is what we call a color filter array, or CFA. The most common color filter array is the Bayer array, named after Dr. Bayer, who worked at Eastman Kodak and invented it. The Bayer array is arranged like this. If you look closely, you'll notice that there are two green pixels for every one red and one blue. If we remember back to what I was saying about the cones in our eyes, the Bayer filter array is arranged with our eyes' oversensitivity to green light in mind. The camera, through an onboard computer, or software later if you shoot raw, demosaics the image. This in effect interpolates the colors. So for instance, if light hit this green pixel, it would then interpolate what color actually goes there, not just based on the single pixel the light hit, but on all the pixels nearby around it. And so this is how we end up with all the different colors, not just red, green, and blue pixels: it's based on this interpolation of where the light is hitting and the interaction of the pixels. OK, so to review: all sensors are actually mono sensors. But in a DSLR or a one-shot color camera, there is a filter array in front of the sensor. However, the Bayer array or color filter array is not the only way to filter light hitting the sensor to make a full-color image. Another type of imaging, often called mono imaging, is where we use a monochromatic camera. In this case, I'm using a ZWO astronomy camera called the ASI1600MM Cool. This is a cooled mono camera; it's this part right here, the red can-like thing.
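To make the demosaicing idea concrete, here's a very simplified sketch of interpolating one missing value: estimating green at a red photosite by averaging its four green neighbors. The RGGB layout and the plain averaging are my simplifying assumptions; real demosaicing algorithms are considerably more sophisticated.

```python
import numpy as np

# A hypothetical 4x4 raw Bayer mosaic (RGGB layout assumed):
raw = np.array([
    [10, 40, 12, 42],   # R G R G
    [60, 90, 62, 92],   # G B G B
    [14, 44, 16, 46],   # R G R G
    [64, 94, 66, 96],   # G B G B
], dtype=float)

def green_at(raw, y, x):
    """Estimate green at a non-green site from its up/down/left/right neighbors."""
    neighbours = [raw[y - 1, x], raw[y + 1, x], raw[y, x - 1], raw[y, x + 1]]
    return sum(neighbours) / 4

# Green estimate at the red photosite (2, 2):
print(green_at(raw, 2, 2))   # (62 + 66 + 44 + 46) / 4 = 54.5
```

This is the "not just the single pixel, but all the pixels nearby" point in miniature: every output color at a photosite borrows information from its neighbors.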
And to capture different colors, I put different colored filters in front of the sensor. I do this with this thing called an electronic filter wheel, meaning that it's a completely closed-in design. This part attaches to the telescope, and then through a computer, I can move different filters in front of the sensor. This is an 8-position filter wheel, so I have L, R, G, B, Ha, S2, and O3, and I'll get into what those different things mean. I also have near-IR, but I'm not going to talk about that. The most common ones for putting in a filter wheel like this, if you're doing mono imaging, are red, green, and blue, because if you just choose red, green, and blue and shoot those, it's like shooting with a DSLR. It's slightly more efficient, since you're using all the pixels in the sensor at once for red, then all of them for green, then all of them for blue. But you end up with an image similar to what you would get with a DSLR, because you're just shooting red, green, and blue. Another option is to shoot L, R, G, B. So you shoot luminance, where you're getting all the information, and then you mix that with the RGB. And the nice thing about shooting R, G, B is that you get really accurate star color, which you're often missing when you use narrowband filters, which is what I use a lot. And so red, green, and blue are what we call broadband filters, meaning there is no one wavelength that equals red. What the filter does is let in a pretty broad range of reds, and so we call this broadband. It's letting in wavelengths from a wave that measures 590 nanometers peak to peak, all the way up to a wave that measures 700 nanometers peak to peak. A wavelength is this: if we look at a wave, we measure the length of the wave from this point to this point, and we call that a wavelength. It's literally 590 nanometers or 700 nanometers or somewhere in there. And all of those wavelengths, our eye responds to as red.
And so we can say that this red filter here has a bandpass of 110 nanometers, meaning that anything from a red that measures 590 nanometers to one that measures 700 will come through this filter. Everything outside of that bandpass is rejected. A narrowband filter blocks more light. They look sort of more reflective, like this. A lot of times with a narrowband filter, you're letting in just a 10, or 5, or even 3 nanometer wide bandpass. So you're blocking almost all of the visible spectrum, except for the one small slice of it that you want. But why do we do that? Well, the reason is that my very favorite things to photograph are nebulae, hence my website, nebulaphotos.com. And there are different kinds of nebulae. I'm not going to explain all of them here, but a basic division is: there are reflection nebulae, where swaths of dust and other material reflect the light from bright stars; dark nebulae, which block out the light from the stars and also from other nebulae; and then lastly, emission nebulae, which actually emit their own light, either because they are clouds of excited gases where stars are being formed, or because they're part of the death of a star, planetary nebulae or supernova remnants, which are one of my favorite kinds of objects. And it's this last class, the emission nebulae, where we almost always use narrowband filters. The reason is that these nebulae emit light at very particular and known wavelengths. For instance, singly ionized sulfur, or S2, emits at 672.4 nanometers. That's it. Hydrogen-alpha, also called H-alpha, emits at 656.3, and doubly ionized oxygen, or O3, emits at 500.7. And so I have narrowband filters with bandpasses just a few nanometers wide, three to five nanometers, and the bandpasses are centered on these key emission lines. And so these essentially block out 99% of the visible spectrum.
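One way to picture broadband versus narrowband is to model each filter as a center wavelength plus a bandpass width and test which wavelengths get through. This is just an illustrative sketch; the filter names are my own labels, and the values mirror the numbers mentioned above (the 589.3 nm test value is the sodium streetlight line, my example, not from the video).

```python
# Filters as (center_nm, bandpass_width_nm)
filters = {
    "red broadband": (645.0, 110.0),   # roughly 590-700 nm
    "Ha narrowband": (656.3, 5.0),
    "O3 narrowband": (500.7, 5.0),
    "S2 narrowband": (672.4, 5.0),
}

def passes(filter_name, wavelength_nm):
    """True if the wavelength falls inside the filter's bandpass."""
    center, width = filters[filter_name]
    return abs(wavelength_nm - center) <= width / 2

print(passes("Ha narrowband", 656.3))   # True  - the H-alpha line gets through
print(passes("Ha narrowband", 589.3))   # False - sodium streetlight blocked
print(passes("red broadband", 656.3))   # True  - broadband passes H-alpha too
```

The last line is the key point: a broadband red filter also passes H-alpha, but it passes a lot of light pollution along with it, while the narrowband filter rejects nearly everything except the emission line itself.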
And blocking out this unwanted light is often a really good strategy because of terrestrial light pollution. It increases the contrast that you get. The sky appears pretty dark, almost black, and the nebulae really stand out that way, with really high contrast. If I just use a more broadband filter, like a blue filter here instead of an O3, then I'm also capturing all these LED streetlights around, and all these other things that I don't really want in my picture. And so I'm sure a question I'll get is: should I use narrowband filters with my DSLR? This is a big source of debate. What you're essentially doing is taking a filter array of red, green, and blue and then putting another filter on top of that. So it's not as efficient, but I know that people can get good results doing so. I've seen it online. I've never personally tried it, so I'm not going to comment from personal experience. I've only done narrowband imaging with my mono camera. I use my DSLRs mostly without filters, and I try to travel to darker sites to use those. Okay, the last part I want to cover I'll talk about more in the processing videos too, but I'll just mention it briefly here. Once you have captured narrowband data, as opposed to red, green, and blue data, it's a little bit less clear how to actually make an image out of it. The reason is that O3, at 500.7 nanometers, is actually somewhere in between green and blue. It's sort of a greenish blue, a teal color. H-alpha and S2 are both very deep reds. H-alpha is already a deep red, and then S2 goes even deeper, probably beyond what our eye can actually detect. So you have two basically red colors and one green-blue color.
So what's cool about this is that if you are just doing what we call bi-color imaging, where you just shoot H-alpha and O3, you can put the Ha in the red channel and the O3 in both the green and the blue channels, and you get a fairly natural response from that, which is really nice. But when you shoot three or more narrowband channels, then it gets a little bit more creative, and a lot of people find this really fun. If you're a stickler for what an object should actually look like, then it might not be for you. But what we can still see from these more creative images is how the gases are interacting, and the actual position of all the gases is still accurate. It's just that the colors are what we call false color if you change the mappings. So one of the most common mappings, meaning you're taking some narrowband data and putting it into the red, green, or blue channel, is to put the S2 data in the red channel, the H-alpha data in the green channel, and the O3 data in the blue channel. And if you think about that, that's S2, Ha, O3, or SHO for short. SHO imaging, where you put them in that order, makes some sense because sulfur is the furthest into the red, H-alpha is less so, and O3 is even less so. So the order does make sense. When you do that, and then you remove a little bit of the green, you get this really cool look of golden oranges and yellows and blues. It's sort of reminiscent of Hollywood film toning, the orange-and-blue look. And this style of imaging, where you do the SHO mapping and then remove a little bit of the green, is known as the Hubble Palette, because when NASA sent up the Hubble Space Telescope, there were all these beautiful images that came back where they shot those filters, and people, including people who work for NASA, often processed them in this way. So it became known as the Hubble Palette.
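The two channel mappings described above, bi-color and SHO, are really just decisions about which narrowband frame lands in which RGB channel. Here's a minimal sketch of both; the frame shapes and the uniform test values are my assumptions, and this leaves out all the stretching and green reduction done in real processing.

```python
import numpy as np

def bicolor(ha, o3):
    """Bi-color mapping: Ha -> red, O3 -> both green and blue."""
    return np.stack([ha, o3, o3], axis=-1)

def sho(s2, ha, o3):
    """SHO ('Hubble palette') mapping: S2 -> red, Ha -> green, O3 -> blue."""
    return np.stack([s2, ha, o3], axis=-1)

# Hypothetical uniform 2x2 frames for illustration
ha = np.full((2, 2), 0.8)
o3 = np.full((2, 2), 0.3)
s2 = np.full((2, 2), 0.1)

print(bicolor(ha, o3)[0, 0])   # [0.8 0.3 0.3] -> reddish, fairly natural
print(sho(s2, ha, o3)[0, 0])   # [0.1 0.8 0.3] -> green-heavy before any tweaks
```

The second print shows why the green reduction step matters: H-alpha usually dominates an emission nebula, so a straight SHO stack comes out strongly green until you pull that channel back.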
You'll also see it called the SHO palette, something like that. Okay, so that's really it for this intro, but I would encourage you, if you are interested in how to process narrowband images from a mono setup, to keep watching, because I have videos for GIMP, Photoshop, and PixInsight where I go through the process of stacking, registering, and putting together a full-color image using some sample data I shot with this setup of the Seagull or Parrot Nebula. It's called both things; it's IC 2177. And I'll explain a little bit more about palette choices and color in those videos, but I hope that this was a good introduction to the science behind color filters, what we mean by broadband versus narrowband, why we use those terms, and what we're actually doing when we shoot with these kinds of filters. Thanks for watching. Again, my website is at nebulaphotos.com. If you're not already subscribed to my YouTube channel, I encourage you to subscribe. Thanks very much. I'll see you in the next video.