So today's presentation is going to be about color for photographers. And what I hope to do is demystify the concept of color spaces. Now this is going to be practical; we'll be relating this to the basic photographic process. I want to define a color space, talk a little bit about tristimulus values, how a camera records colors, how printers print colors, and how we get these colors to match. Now color theory varies a lot depending upon who you're talking to. As a physicist I might be interested a lot in spectra and so forth, not so much in human vision or even color spaces. And if I was in graphic arts, then I might be concerned a lot about color schemes. In this particular presentation we are going to focus on the idea of how colors are perceived. Generally, a color space is the range of colors that something can detect or record or reproduce. And of course a human being is such an instrument, so there is a color space for human vision. In humans, all perceived colors can be differentiated by just three values. The reason for this is that there are just three different types of color-sensing receptors in the eye. And because there are three different receptors being stimulated, we call this the tristimulus response. And here's a little picture (I think you've seen this before) just showing a representation of the retina with the cone cells that pick up on color. Each type of receptor is stimulated by photons over a very broad range of wavelengths; we call the three types L, M, and S, for long, medium, and short wavelengths. Look at the diagram and see that these ranges heavily overlap. That means that a photon of a given wavelength will almost always stimulate at least two of these, and sometimes all three. It's essentially impossible to stimulate one receptor by itself. The three values can be expressed in several schemes. We can have the LMS scheme, which would simply be a measure of how much each receptor has been stimulated; RGB for red, green, and blue; HSB for hue, saturation, and brightness.
And then xyY, which is luminance plus a pair of coordinates that represent a combined response. So let's start with our LMS color space. It's the range of colors that is seen by most humans. However, it happens that some females have four different types of color receptors. They have a tetra-stimulus response as opposed to tristimulus, and as a consequence they can distinguish more colors than typical. And by the way, I should mention that women who tend to have this have a grandfather that was slightly colorblind. And there are some males that have muted receptors and distinguish fewer colors; the most common form is a slight difficulty in differentiating reds and greens, and it's usually very slight. So the human vision color space was measured. This was first done back in 1931 by the International Commission on Illumination; from their initials in French, they are the CIE. And they defined the standard color space for people. Oh, by the way, that is why women often do tend to see colors that men do not. "This green is quite different from that one," and we're just kind of staring and going, what? But yes, they do see different colors. Anyway, the CIE measured the LMS color space. And they did that by using what they referred to as the CIE 1931 standard colorimetric observer. The measurement used a particularly small, central section of the human retina, a sensitive part for looking at color. Now, who was the standard observer? Well, the standard observer was a bunch of English schoolboys. Basically, they looked at pure spectra, then tried to match the color by using adjustable filters. From the measured LMS color space, they constructed another representation, which is called the xyY color space, because intensity, luminance, heavily weights the M response, basically green, over the L and S responses, red and blue. The xyY color space is defined as the colors a normal human can see. And bear in mind, when I say normal, I simply mean the middle of the distribution of variation in color sight.
It's, you know, hardly the same for everybody. In this representation, the x value is close to the amount of red, and the y value is close to the amount of green. The amount of blue can be obtained from x and y, given that the luminance value is fixed. Don't worry about luminance; I've already told you that any color, hue and saturation, can be reduced to the two numbers x and y. So basically we can have a two-dimensional plot to display colors. And as I said, two-dimensional plots. So in summary, the range of colors for a fixed luminance is a color space. When the luminance is specified, the color space is two-dimensional. There are some alternatives. I already mentioned hue, saturation, and brightness. And if we only look at hue and saturation and not brightness, again, that is just a two-dimensional color space, just two values. And we can also use three standard color bands and define those as red, green, and blue. I don't know why I put down red twice here; I'll have to correct the slide. But red, green, and blue, which would be measured through some artificial specimens, some filters that we designate. The hues are the colors that a single photon might convey. And these are the rainbow colors, or as we would say in physics, the pure spectral colors. Let me just bring this one little chart forward back here. OK. These are the spectral hues, the hues of the rainbow: red, orange, yellow, green, blue, indigo, and violet. But I want you to notice something, and that is that in the rainbow you don't see the color brown or purple or gray. Those are actually not hues. Those are made by combinations of hues, and we refer to those in general as colors. Now, I'm going to flip this around if I can. It was Isaac Newton who first observed that light can be broken up into the spectral hues. And he also observed that all the colors you see can be made by combining two of the hues. And from this he produced the color wheel. This is the Newton color wheel.
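To make the two-dimensional idea concrete, here's a minimal sketch in Python (this code is not from the talk, and the function name is my own) of how the x and y coordinates are obtained by normalizing the three tristimulus values:

```python
def xyz_to_xy(X, Y, Z):
    """Project CIE XYZ tristimulus values onto the 2-D chromaticity plane.

    x and y locate the color on the diagram; the luminance Y is carried
    separately, which is why the plot can be two-dimensional.
    """
    total = X + Y + Z
    return X / total, Y / total

# Equal-energy white: all three responses equal, landing at the center.
x, y = xyz_to_xy(1.0, 1.0, 1.0)
print(x, y)  # both are 1/3, near the middle of the diagram
```

The key point is that scaling all three values by the same factor cancels out, so brightness drops away and only the two chromaticity numbers remain.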
And it displays all the colors of a given fixed luminance. The spectral hues, the ones in the rainbow, start here with red and go around this side of the color wheel down to blue. That's the idea. The section in the lower left between red and blue, the magenta area, is in fact not found among the spectral hues, and is only seen when you combine various degrees of red and blue and perhaps another color. Okay, I'll put this away. Put it up there. Okay, I mentioned the pure spectral colors were the outer rim of the color wheel. In our xy plot, they form this rim here. And let me see if I can get my little pointer up. There's a little red dot that I just put here. This is the extreme red, going toward the green, coming around down toward the blue. So you can see that in this particular kind of plot, my x-axis basically corresponds to the amount of red, not precisely; it's actually a fairly complicated algebraic relationship between this and the response on the retina. And the y-axis is like the response of the green. So here, we'll put some colors on there. And you can see that we have red and green and blue at the outer limits of the diagram, gray kind of toward the center. Brown is sort of a mix of red and green, leaning a little toward the red. Purple is between the blue and the red. In fact, we call this lower line here, from here to here, the line of purples. Oh, by the way, let me go back one. Inside are the colors we can see. Outside are stimulus ratios that are mathematically possible, but you won't perceive them. And the reason is that there is no way to get your receptors on the retina to respond in these ratios, or produce these ratios in your mind. So in this area out here, basically you would say there are colors that would not be differentiated by a human being, unless you used some extraordinary measures, such as finding a way to stimulate receptors individually. You know, perhaps a tight little laser beam that would target just a single receptor.
Now on the xy plot, we can show how mixing two or three primary colors will produce a non-primary color. For example, mixing red, green, and blue to make gray. By mixing two of the spectral colors in a desired proportion, we can produce any color inside the spectral curve. Green can be a primary; it depends upon the color scheme you wish to use. You know, printers will use cyan, yellow, and magenta, mixed to make black, whereas physicists and photographers will use red, green, and blue. The other way to look at it is that color saturation can be a position along the line from the spectral color to the central gray. And again, by treating it as moving along that line from a sharp spectral hue toward gray, we can produce almost any color. Almost, because there's still a problem with the line of purples. So we can produce any color of a given luminance by mixing the appropriate pair of hues, and we can also produce any color by desaturating a hue, except for the purples. Let me go back here real quick. This line here, between red and blue, is my line of purples. And for there, I basically have to mix three different hues in order to get any part of the purple region. Oh, and I had a slide for that. I forgot I had a slide that shows the line of purples. This lower line here, you see, is that line, for magenta and the other purples. Now, white, gray, and black are the same color; they're just different luminances. So we call the center of the plot the white point. It might more properly be called the gray point. In any given scene, white is simply the brightest gray and black is simply the darkest gray. You will often find that the black is not so black and the white is not so white. Okay, so just to reiterate, the human color space is bounded by the spectral curve in the xy plot; inside it are the stimuli humans can see as separate colors. You can only see the inside; you cannot differentiate stimuli that fall outside. It doesn't mean those can't exist. It means you can't see them with your retina.
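That desaturation picture can be sketched as a simple interpolation between a spectral chromaticity and the central gray point. This is a simplification (light actually mixes linearly in the underlying tristimulus values, not exactly in x and y), and the numbers below are only illustrative:

```python
def mix(c1, c2, t):
    """Blend two chromaticities: t=0 gives c1, t=1 gives c2.

    A simplified sketch; it conveys the idea of sliding along the line
    from a pure spectral hue toward the central gray.
    """
    return tuple(a + t * (b - a) for a, b in zip(c1, c2))

red = (0.70, 0.30)     # rough chromaticity of a spectral red
gray = (1 / 3, 1 / 3)  # the central gray (white) point
half = mix(red, gray, 0.5)  # a half-desaturated, pinkish red
print(half)
```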
Okay, so the CIE diagram is shown here. And as I said, this was first measured back in 1931 by having young English boys adjust filters to match pure spectral hues. There are various things marked on here. You see that black line in the middle? That's what thermal radiation would give at different temperatures. You recall we talk about red-hot steel and orange-hot steel and so forth. This is also the same, incidentally, for the spectra of stars in terms of their surface temperature. So that's your thermal radiation. So that was for human vision. Now what happens when you have a camera or a printer? Cameras will use color filters that are the analog of the receptors of the human retina. And the choice of those filters will determine what subset of the human color space will be recorded by the camera. So the first thing to consider is that the camera is always going to record fewer colors than a human being can see. Early in digital imaging development, HP and Microsoft collaborated to come up with a color space that would be suitable for digital cameras and displays. And this was in 1996. And as you can see, I said ancient history, because from a computer and electronics standpoint, that was a long time ago. These values are called standard red, green, and blue, or sRGB. Bear in mind, though, that when I say RGB, I do not mean exactly red, green, or blue. The IEC, about three years later, accepted sRGB as a default standard for imaging. And in that standard, the color in each of the channels is divided up into an 8-bit number. Now that seemed to be adequate, in that human beings could not really differentiate smaller divisions. And that applies to devices that use specific filters for sensing the color and specific dyes for displaying the color. Now here it is shown on the CIE gamut. By the way, gamut is another name for range. It actually comes out of medieval music, where it originally meant the lower limit of the vocal range, and in modern times it came to mean the whole range.
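Since the 8-bit channels came up: each channel value is passed through a nonlinear transfer curve before being quantized. Here's a small sketch of that encoding step as defined in the IEC sRGB standard (the helper name is mine):

```python
def srgb_encode(linear):
    """Apply the sRGB transfer curve and quantize to an 8-bit value.

    The piecewise curve follows IEC 61966-2-1; 'linear' is a channel
    intensity in [0.0, 1.0].
    """
    if linear <= 0.0031308:
        v = 12.92 * linear
    else:
        v = 1.055 * linear ** (1 / 2.4) - 0.055
    return round(v * 255)

# 18% gray, a common photographic reference, in linear light:
print(srgb_encode(0.18))  # prints 118, a middle-of-the-range code
```

The curve spends more of the 256 codes on dark tones, which is part of why 8 bits per channel turned out to be adequate for human viewing.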
Now, you see these triangle vertices I've marked on the diagram: here, here, and here. Those represent the filter values that serve as a reference for sRGB. Or, if you prefer, the primary illuminants, if I were using three filtered lights. And I've marked down the x, y, and Y values. Now that's kind of small. Let's jump back to here. All of this area out here, in the green around here, is color that will not be recorded correctly by the sRGB scheme. All this information you might see in nature, you're not going to see in a photograph. So it's just a small triangle inside that full CIE gamut. Now photographic films had a much larger extent. And that is one of the complaints that photographers had in the transition to digital: that the colors were becoming less vibrant, more muted. You would think maybe we can do better. I mean, 1999 was a long time ago now, and we might have better filters and better display technology and so forth. A year before the IEC made sRGB a standard, Adobe came up with their own color space, Adobe RGB. And the specific reason was they looked at sRGB and said, that's just too small. And Adobe does have a lot of clout, so they could compete with their alternative to the IEC sRGB. But I have to emphasize that the international standard is still sRGB. They made a little error in their first try back in 1998, and they revised it. That means you can now be confused by having two different versions: Adobe 1998 and the official current Adobe RGB. So now we have three different standards: sRGB; Adobe 1998, which a lot of stuff was recorded in; and the current Adobe RGB. Now here's the Adobe color space. And you can see that it's considerably larger, and in particular it captures a lot more greens than sRGB does. So it gets more greens and blues. It is also designed to work with ICC color management tools for printing.
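You can see how restrictive that triangle is with a quick point-in-triangle test. The sRGB primary chromaticities below are the standard ones; the check itself is my own sketch, using the sign of a cross product at each edge:

```python
def in_gamut(p, tri):
    """True if chromaticity p = (x, y) lies inside the triangle of
    primaries, judged by the sign of the cross product at each edge."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    signs = [cross(tri[i], tri[(i + 1) % 3], p) for i in range(3)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

# The sRGB primaries as (x, y) chromaticities:
SRGB = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]

print(in_gamut((0.3127, 0.3290), SRGB))  # D65 white point: True
print(in_gamut((0.15, 0.70), SRGB))      # a saturated green: False
```

The saturated green in the example is exactly the kind of natural color the talk says sRGB cannot record.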
You will find that most professional cameras will allow you to choose between sRGB and Adobe RGB when recording an image. And here I show them together. And you can just see that the sRGB triangle is inside the Adobe one. Now, more recently we've come up with a new color space called the ProPhoto color space. Now why did that happen? It happened because in digital cinema they can record and display more colors than we could previously with the filters in professional cameras. In particular, for digital cinema, there is laser projection technology that can display a huge range of colors. So Kodak came up with ProPhoto. And it's actually even a little bit larger than the old chemical film color space. And I should note that it's actually been adopted by Adobe as the working color space in the Adobe Lightroom editor. Let's see if I can bring forward the picture of that. Okay, just to show you. Let me scoot it over a bit. This is the ProPhoto color space. It still misses some greens, but it does capture much, much more of the human vision color space than the previous standards. One rather interesting aspect is that you can actually encode into ProPhoto regions of the plot that represent colors that cannot be differentiated by human vision. About 12% of the ProPhoto color space is like that. Let's put this away and come back to here. Okay, now let's see what happens when we're using a pattern of filters to record a color with the camera. Well, a green filter, such as this one, might be measuring a broad area that's above this green fuzzy line here. The blue filter might be measuring a broad area that is to the lower left. The red filter might be measuring a broad area below the red fuzzy line. And the color I want to measure is somewhere in the overlap of those three filters. When I've measured that color, I will come up with some values. And I've given an example down here of 0.5, 0.6, and 0.2. Just remember that these really aren't red, green, and blue.
They're close, but not really the same thing. Now we're going to print with three inks. So I have my three measured values of RGB, and I happen to have three inks that are called red, green, and blue, but again, not exactly. So here on my CIE diagram, I've marked down where the inks might be. We have a green ink, a red ink, and a blue ink. And I've put down the color that I want to print. Well, I print it, and guess what happens? The printed color doesn't look a bit like the recorded color, simply because my inks and my filters don't match. I was just very naive. I thought that if I measured something with camera filters called red, green, and blue, and then printed it using inks called red, green, and blue, it would somehow look the same. It doesn't. What we have to do is mathematically alter the recorded color values to match the reproduction to the original. And that's the reason we have to use a standard color space. So we know that cameras record colors by using color filters; I won't get into it in this talk, but they use a pattern called the Bayer pattern, or dichroic prisms. The camera processor, right inside your camera, will adjust the digital values to conform to sRGB or Adobe RGB, or HDTV for that matter. And when you print, the printer driver (say, for example, the Canon driver for Canon printers) will adjust the digital values to conform to the ink colors. Precise work requires that you set up all these adjustments correctly. Here's the same subject, same camera, but done with two different color spaces. I'll tell you the truth, I'm kind of lying to you here, because I had to manipulate this to get the effect that I wanted to portray. But on one side you can see what the Adobe color space might give, and on the right you can see what the sRGB color space might give. And if you look carefully you can see that, yeah, I've kind of muted the greens and I've kind of broadened out the red in each one.
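At its simplest, that mathematical alteration is a 3-by-3 matrix applied to the recorded triple. The matrix values below are invented purely for illustration; real matrices come from the measured device and color space profiles:

```python
# Hypothetical camera-to-standard matrix; these numbers are made up.
CAMERA_TO_STANDARD = [
    [0.90, 0.10, 0.00],
    [0.05, 0.90, 0.05],
    [0.00, 0.15, 0.85],
]

def convert(rgb, matrix):
    """Map a device-recorded RGB triple into a standard color space
    with a linear transform, as color management software does."""
    return [sum(matrix[i][j] * rgb[j] for j in range(3)) for i in range(3)]

recorded = [0.5, 0.6, 0.2]  # the example values from the talk
corrected = convert(recorded, CAMERA_TO_STANDARD)
print(corrected)
```

A printer profile then applies the inverse trip, from the standard space into ink amounts, which is how the recorded and printed colors can be made to agree.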
If I looked at them separately, I might not even notice anything was wrong. It's only by comparison that I see that one has a bit better range of color than the other. Now besides sRGB and Adobe RGB, and I mentioned ProPhoto, there are other color spaces. There's one called ColorMatch, there's the PAL system for television, the HDTV system, NTSC, and so forth. And each of these has a different purpose; it's for different devices. The iPhone color space is unique to itself. It's been designated as Display P3. It's close to sRGB, but it's not the same. And that again is handled for you invisibly, because when you import the images, the software you're using is going to adjust between these different color spaces. Just so you know, here's the HDTV color space, again shown within the CIE gamut. So you have to choose the color space you want to use, and you have to be consistent. Stick with that choice. Each device you use is calibrated to match its inherent color space to your chosen color space. Such matching is done by using specimens such as this color card. You would take a photo of it, you would display it on your screen, and then you would adjust your screen parameters so that the colors match the original. And that's actually usually done by an automatic system. There is a little photosensor placed against the screen that looks at each little square of color as it's displayed, and the software then makes an algebraic adjustment to the display parameters to get everything to match. And many professional photographers will go through this exercise for each display and each printer that they use. Three-ink printers, as we've already seen, can only do colors inside the triangle that's mapped out by their inks. Six-ink printers will have an irregular hexagon like this. They can do more, and therefore will produce a higher-quality print for you.
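A toy version of that matching step: read a patch that should be neutral gray and derive per-channel corrections from it. Real calibration fits a full profile across many patches; this just shows the idea, with made-up readings:

```python
def gray_patch_gains(measured, target=0.5):
    """Per-channel gains that would bring a measured gray patch back
    to neutral; 'measured' is the device's (R, G, B) for that patch."""
    return [target / m for m in measured]

def apply_gains(rgb, gains):
    """Scale each channel by its correction gain."""
    return [v * g for v, g in zip(rgb, gains)]

# Hypothetical reading of a gray-card patch that came out slightly warm:
reading = [0.55, 0.50, 0.45]
gains = gray_patch_gains(reading)
print(apply_gains(reading, gains))  # back to a neutral gray
```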
But when you calibrate a given printer for a given color space, you're doing all this mapping of the outside to the inside, trying to get the color spaces to fit. You don't ever have true color fidelity. You may think there is, but it will always be somewhat different. So the colors you see in a movie, or on a video display, or in a print are not true to what you would see in the original scene with your own natural vision. By the way, color grading is the choice of a color space. For filmmakers it's a very big deal. If you look at the differences between films from different eras, you will see there's been quite a change in color grading. There was a time, I think in the 50s, when all the colors really popped; they were strongly differentiated. These days they're a bit more muted, I think. And if you really want to see what the Mona Lisa looks like, the only way you'll ever know is by going to Paris. You're not going to tell it from a book. So: we see colors a camera cannot accurately record. The camera records colors we cannot display. The display shows colors we cannot print. But fortunately, human color adaptation comes into play. It kind of saves us. And the reason for that is that the visual cortex will automatically average a scene to gray. It simply assumes that all colors are more or less evenly represented, so it will adjust its color perception so that the average of the scene is gray. Now, if you've ever worked in a semiconductor fab, as I have, you know that they use yellow light to protect the light-sensitive films coating the wafers and circuits. And it takes a very short amount of time after you've stepped into that room for the yellow to disappear and for you to feel like you're just seeing normally. It's only later, when you bring something outside, that you suddenly say, oh my goodness, this thing that I thought was green in fact was blue. You do adapt to it. There's also what we call the Land effect, after Edwin Land, the inventor of Polaroid.
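Camera auto white balance borrows exactly this averaging trick from the visual cortex. The "gray world" algorithm below is a standard simple version, sketched here with a toy three-pixel scene of my own invention:

```python
def gray_world(pixels):
    """Gray-world white balance: assume the scene averages to gray,
    as the visual cortex does, and scale each channel to make it so."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    overall = sum(means) / 3
    gains = [overall / m for m in means]
    return [[p[c] * gains[c] for c in range(3)] for p in pixels]

# A tiny scene with a yellow cast (R and G high relative to B):
scene = [[0.8, 0.8, 0.4], [0.6, 0.6, 0.2], [0.4, 0.4, 0.1]]
balanced = gray_world(scene)
# After balancing, the three per-channel averages come out equal.
```

Like the eye in the yellow-lit fab, it fails in the same way too: a scene that genuinely is mostly one color gets pushed toward gray anyway.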
And that is that gray areas will tend to fill in with complementary colors. So if I have small bits of gray surrounded by color, that gray will tend to take on a hue complementary to the surrounding colors. Okay. So one thing that's useful is having a standard language for color communication. And that's implemented several ways. I've already shown you the color swatches that we use, and also the color sensors that we put over top of the displays to calibrate them. I should also mention the Pantone Hexachrome color system. Excuse me. If you work in the graphic arts, you are probably very accustomed to seeing someone mark down a Pantone designation for an area. Maybe they'd have a yellow patch or something, and they would draw an arrow to it and say: not what you see, but instead this Pantone color. Okay, so that covers what I wanted to hit on today. We'll open it up for questions. I think I missed some of those that you had put into the chat, so let's go ahead and re-ask if you wish. Okay, nothing anyone wants to ask about? Actually, yes, you can. Some animals, instead of having tri-stimulus vision, have just two types of color receptors. And for them, such a diagram would basically become one-dimensional. You know, they would not have a lot of differentiation. And, of course, we mentioned that some human females have four types of receptors, and the CIE diagram would not be correct for them either. They would have a much broader, differently shaped diagram. Well, basically, for those animals that can see into the UV or the infrared, you would take the spectral curve and extend it further than we show on the diagram. So there would be considerably more area toward the lower right than a human being would have. Well, there's always the philosophical question: am I really seeing the same colors that someone else sees? We think we do because, excuse me, my throat's going. We think we do because the colors that seem complementary to us seem complementary to other people as well.
But, you know, you just don't really know what's going on inside another person's head. And that would also be true of an artificial intelligence. One thing I didn't even touch upon today was the question of color sampling. When you have a camera sensor, you're using an array of filters so that different photoreceptors in the sensor concentrate on different colors. For most digital cameras, we have what is called the Bayer pattern, which uses an array of two by two pixels, four pixels altogether, where two are devoted to green and the other two to red and blue respectively. Interestingly enough, some cinema cameras use an array with just two colors, I believe a blue and a green, and they synthesize the red from those. And then we also have some very high-end video cameras with little prism cubes that separate out complete red, green, and blue images. If I were looking at luminance alone, I could do a considerably more sensitive and finer gradation than by doing color. And, of course, in photography people often speak of the fact that a monochrome picture, a black-and-white picture, seems to have much better gradation than a color photo. That's part of why people still like old black-and-white films and black-and-white photos. The grain in the film depends on what film you're talking about, but the total number of pixels in many digital sensors now matches the available grain resolution. We're coming up to the point where we'll soon have in common use 70-megapixel, 100-megapixel cameras, even in our cell phones. I don't have a timetable for that, but I think you can expect to see very big things happening. I think that chemical photography will continue for quite some time, simply because people like the feel, and quite literally the texture, of it more than digital reproduction. You know, it's interesting you mention music. Back in the 70s, when digitized audio was first introduced, there were many audiophiles I knew who complained.
They said that the digitization frequency of 44 kilohertz was quite audible to them; they said it was like a buzzsaw going off in their heads. Today, I'll run into people who will tell me that, oh no, they have this perfect CD from a digital master, and they refuse to use anything else to judge and qualify audio systems. So there's been a change of perception like that. Yes, there was a time when, for audio, you would see a little designation on the media that would say DDD or ADD, indicating whether the recording, mixing, and mastering were digital or analog. It was a very big deal for people who were into that, arguing about which variation was best. And we do have people with supernormal vision, too. In my much younger days, not anymore, but in my much younger days when I was working in integrated circuit fabs, I was often able to see details through the microscope that others could not. Any additional questions about color? I don't know of any of the current research on that, but I would assume that it probably does change with age. Well, perhaps next time I can do one on some other technical aspects, in particular a bit more on spectra and luminance. And for printing monochrome, I suggest getting a printer that includes a photo gray ink. Well, thank you everyone for having me here today. And, you know, hopefully we can do some additional things in the future. Goodbye everyone. I guess I'll sign out now, and I will see you all, I guess, next week.