 I believe this is a good moment to begin. First, I'd like to welcome everybody to the Science Circle. I think most of you are here in virtual reality. It's a nice Sunday morning here, but I know that for many of you it is much, much later. In fact, it could be the dreaded early Monday morning. So thank you for coming, and I hope that what I present will be of interest. I'm going to swing around here so I can see the slideshow. Today's presentation is called Optics for Photographers. I was motivated to do this presentation because I had noticed that a lot of young people, many people entering photography, didn't really have much of a background in optics. They were learning things from camera manufacturer ads and common knowledge, but they didn't really know how optics worked. So I thought that I would present optics at a practical level. I'm principally going to be using ray tracing. I'm going to start with the pinhole camera. Pinhole cameras are both interesting and fun to use, and they illustrate a number of basic principles. Then I will relate pinhole optics to practical lens optics, bringing in the ideas of focal length and field of view. We'll also talk about ISO, which is the gain of the image sensor, and aperture and shutter speed. Someone is saying that my voice is fading in and out. Okay, I don't know what's changed here; it was okay earlier. Let's just see if it stays together. Okay, so let's make a pinhole camera. To start with, we'll take a body cap and drill a hole right through the center, basically ruining its use as a body cap. We'll cover that with a bit of foil, secure the foil with tape, and then use a fine needle to make a pinhole in the foil. Then we mount that to our camera. Now we'll just set up for a shoot. You can see I just sort of set up on my balcony here and provided two little mannequins. So here is a picture that you can get with a pinhole. 
I want you to notice that, of course, it has lousy resolution, because it is just a pinhole. But all sections have the same lousy resolution. There is no feeling of a change in focus throughout the image, and that's referred to as having a flat image. Let's see why that is, using ray tracing. So here I've set up a little virtual model, shown to you in virtual reality: a pinhole and my two mannequins. I'm going to take away the camera body so you can see the sensor and its relationship with the pinhole and our subjects. We can trace light rays from each position on the subjects through the pinhole to the image. Oh, and I should just mention that light rays are a convenient but fictional construct. We can use them mathematically; of course, in modern physics, light behaves entirely differently. Okay, so each point on the object is mapped to a corresponding point in the image. You can think of pinholes as mapping angles. So we're going from three-dimensional coordinates in the real world to a two-dimensional representation. You lose a dimension. However, when our vision analyzes the image, it has to cope with that loss of information, and that is the source of a lot of optical illusions. For example, here I've drawn a little quadrilateral between the heights and the positions of my two models. It shows clearly that in the image they seem to be different sizes, but in the real world they're the same size. Let's take a look at some of the features of this representation. You say my voice is still breaking up? Is it a bad problem? Well, I'll just continue on. Okay, so it's basically going to be all about triangles, drawing our light ray lines from the objects through the pinhole to our image sensor. Magnification... oh, let me go back for a moment. I just wanted to mention some of these things. These red lines here; let me see if my little laser pointer is working. Okay, there we go. 
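The pinhole's angle mapping, and the size illusion it produces, can be sketched in a few lines of code. This is an editor's illustrative sketch, not from the talk; all coordinates and distances are made-up example numbers:

```python
# Pinhole projection: each 3D point maps through the hole to a 2D sensor point.
# The pinhole sits at the origin; the sensor lies image_distance behind it.

def project(point_xyz, image_distance):
    """Map a 3D point (x, y, z), z > 0 in front of the hole, to sensor (u, v).

    Similar triangles give u/x = v/y = image_distance / z; the image is
    inverted, hence the minus signs.
    """
    x, y, z = point_xyz
    return (-x * image_distance / z, -y * image_distance / z)

# Two same-height mannequins at different distances, sensor 50 mm behind the hole:
near_head = project((0, 300, 1000), 50)   # 300 mm tall, 1 m away
far_head = project((0, 300, 2000), 50)    # same height, 2 m away
print(near_head)  # (0.0, -15.0): 15 mm tall on the sensor
print(far_head)   # (0.0, -7.5): half the size, though the object is the same
```

The farther mannequin images at half the height, which is exactly the same-size-looks-different-size illusion described above.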
This red line along here and here reaches to the edges of the sensor here and here. That represents my field of view for this given optical imaging element, which is the pinhole. Then my object size is between these yellow rays here, and the image size is over here. I have an object distance along this path, my image distance along this path, and also, of course, a sensor size. Okay, so the magnification, we define, is the image size divided by the object size, and that happens to be equal to the image distance divided by the object distance. Now, it doesn't matter, excuse me, whether I'm using a pinhole, a lens, or a curved mirror; that relationship still holds. I should mention, however, that for complex lenses we don't necessarily measure from the center of the lens group, and if you look above the slideshow I happen to have put some additional graphics that show the difference between simple lenses and complex lenses. The one at the very top is a simple lens, and we have what we call a principal plane running through that lens, which is the plane from which you do all your measurements. You measure your image distance and your object distance from that principal plane. However, if I were to look at something like a telephoto lens, which you'll see in the ray tracing on the left-hand side, the pair of lenses, in this case a convex and a concave lens, has provided a combined focal length that is much greater, and it has also moved the principal plane of the combined lenses out in front of the physical lens group. This is the reason that telephoto lenses on a camera can be physically shorter than their specified focal length. So let's return to the slideshow. My voice is gone and back? I'm not changing my position, so I don't know what's happening. How much have you missed? Let's continue on. The field of view, as I indicated already, is determined by the triangle formed by the image sensor and the image distance. We have a formula for all this. 
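The magnification relation just stated (image size over object size, equal to image distance over object distance) can be written out directly. A minimal editor's sketch with hypothetical numbers:

```python
# Magnification: m = image_size / object_size = image_distance / object_distance.
# The same ratio holds for a pinhole, a lens, or a curved mirror.

def magnification(image_distance_mm, object_distance_mm):
    """Magnification from the distance ratio, via similar triangles."""
    return image_distance_mm / object_distance_mm

def image_size(object_size_mm, image_distance_mm, object_distance_mm):
    """Size of the image on the sensor for a given object and geometry."""
    return object_size_mm * magnification(image_distance_mm, object_distance_mm)

# A 300 mm tall mannequin 2000 mm away, with the sensor 50 mm behind the pinhole:
print(magnification(50, 2000))        # 0.025
print(image_size(300, 50, 2000))      # 7.5 mm tall on the sensor
```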
1 over the object distance plus 1 over the image distance equals a constant. That constant will turn out to be 1 over the focal length, but you should notice that our pinhole camera doesn't actually have a focal length. Let's go back to our triangles for a moment. If I increase my object distance, just move this object outward, that will decrease the image size, thereby decreasing the magnification. Conversely, if I increase my image distance, that's going to increase the magnification. However, because I did not change my sensor size or the sensor location... well, let me change one. I'll decrease my sensor size. That is going to decrease the field of view, that red line. If I flip back for a second, you can look at the red lines here, and then where I have reduced the sensor size, the red line is now down here. You can see that's reduced my field of view. So field of view is your sensor size versus the distance of the sensor from the principal plane of the optics. Now, suppose I add more pinholes. Well, I think it's going to be fairly obvious: that's going to double my image. So now I have a pretty difficult-to-view image; each mannequin is in two positions now, superimposed. Yes, double trouble. And here we are showing what is going on with ray tracing. Each pinhole generates a separate image of its own. Now, suppose I took a small prism and put it over one of the pinholes, as you see here. That could bend one set of my rays and merge those two images. Suppose I had a plate that had a lot of pinholes. I could put a prism over each one. And as you can see, the further out I get from the center, the steeper the prism has to be. Do a whole array of those, and line them up so that the pattern of each one is in line, and you can see that I'm actually forming a lens shape, just like that. So you can actually think of a lens as being an infinite number of infinitesimal pinholes, each with an infinitesimal path-bending prism. 
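The thin-lens relation just given, 1/object_distance + 1/image_distance = 1/focal_length, is easy to solve for where the image forms. An editor's sketch, with an assumed 50 mm lens for illustration:

```python
# Thin-lens equation: 1/d_o + 1/d_i = 1/f, solved for the image distance d_i.

def image_distance(focal_length_mm, object_distance_mm):
    """Image distance for a thin lens of the given focal length."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

# A 50 mm lens focused on an object 1 m (1000 mm) away:
print(round(image_distance(50, 1000), 2))  # 52.63 mm, slightly beyond f

# As the object recedes toward infinity, the image distance approaches f:
print(round(image_distance(50, 100_000), 3))  # 50.025 mm
```

The second print shows why, as the talk says later, a lens focused on distant objects has its sensor sitting essentially at the focal length.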
And I could do the same thing for mirror optics, but because the rays fold back on themselves, that's really hard to draw. And so there we have developed a lens without even using the calculus. All right, so actually we did use the calculus, but it was well hidden. Okay, so it's still all about triangles, but now something special has happened. Because I have rays that come in through different positions on my imaging element, there is one and only one point where they converge and where the image would be in exact focus. And that allows me to characterize the imaging element, in this case a lens, as having a particular focal length. Okay, right here. The pinhole element, to reiterate, did not have a focus. Every object at any object distance was equally resolved. The thin lens has a focus, which means that there is only one image distance where an object will be in exact focus. However, we don't actually need exact focus. An acceptable resolution would allow a range of object distances and a corresponding range of image distances to be considered as in focus. What is that acceptable resolution? Well, an infinitesimal point will be resolved to some minimum size by the optics; this size is called the circle of confusion. An object position whose resolution equals that circle of confusion is at the limit of acceptable resolution. The human eye has a circle of confusion of about one-fifth of a millimeter, 0.2 millimeters, at a viewing distance of 25 centimeters. So, you know, roughly a comfortable viewing distance for, say, a book or a picture you would be holding in front of your eyes to see lines that are spaced no closer than... And that is roughly about 1,020 pixels by 12... Also, it happens to be something like an 8 by 10 if I use a 35-millimeter sensor to generate this... Okay, are we still getting a lot of breakups? Okay. Do you want me to log out and re-log in then? Okay, I'm going to do that. Hang on. Okay, I'm back. Does this sound a little better? 
Okay, so continuing on: if we're using a 35-millimeter camera sensor, the allowable circle of confusion is about 0.04 millimeters, and most often we'll use a value of 0.03 millimeters. Bear in mind that this is referring to the acuity of the human eye looking at the final image, whether printed or displayed, under typical circumstances. This does not indicate the sharpness of the camera. If you had an exact focus, it would be considerably sharper, even though the pixel size of the sensor or whatever may prevent you from being able to see it. It does indicate the degree to which sharpness can be reduced before it impacts... I'm going to adjust my preferences; we'll see if that makes a difference to the voice. Okay, just for a point of information, here is a chart of the circles of confusion for different combinations of format and sensor size. As I mentioned, for 35-millimeter it is about 0.03 or 0.04 millimeters. Okay, so let's talk about depth of field. The range of acceptable distances based on the circle of confusion is the depth of field. On the object side, we call it depth of field; on the image side, we call it depth of focus. We can define depth of focus as the range over which the image sensor can be moved while keeping a fixed object in acceptable focus. We can also give the alternative definition: depth of field is the range of object distances that would be in acceptable focus for a given fixed sensor position. So let's take a look at our triangles again. We have our focal length here. I'm going to characterize the relationship of object distance to image distance. As I move my object away, my image distance has to come closer and closer to the focal length. And if I move the object to an extreme distance, the image distance gets very close to the focal length. The mechanics of that is that on your camera lens there is a pin that sits in a helical slot on the internal barrel of the lens mount. When you rotate the focus ring, it is moving that barrel back and forth in distance. 
So remember that the field of view is determined only by the sensor size and the image distance. But we now see that for objects effectively at infinity, the image distance is determined by the focal length. So that's the relationship there. And I mentioned the magnification before; just to reiterate, it's the ratio of image size to object size, and I won't go over that again. I'll just keep going. Okay, so what is that focal length? Now, here is one of our problems. And is a significant number of people not hearing me now? Photographers, and they're abetted in this by the camera manufacturers, will quote the focal length of a lens as the value that would give an equivalent field of view on a 35-millimeter sensor. An example of this is that people often refer to the wide lens on an iPhone as being 28 millimeters. But in reality, it is a 4.25-millimeter lens that is paired with a 5.7-millimeter sensor. Now, why is that important? For the iPhone, if I focus on an object that is one meter away, the depth of field, that's our acceptable range of resolution, is in the range of half a meter all the way out to infinity. For an actual 28-millimeter lens on a DSLR, if I focus on an object that is one meter away, the depth of field will be a narrow range around one meter, with the near limit at about 0.9 meters. Now, when you have a configuration of aperture and focus distance such that the far limit of the depth of field goes to infinity, the near focus distance is called the hyperfocal distance. The hyperfocal distance is the closest distance at which a lens can be focused while keeping objects at infinity acceptably sharp. When the lens is focused at this distance, all object distances from half of the hyperfocal distance out to infinity will be acceptable. Alternatively, you can refer to it as the distance beyond which all objects are effectively sharp, all the way out to infinity. Now, there are several ways of determining how to focus for the hyperfocal condition. 
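The iPhone-versus-DSLR contrast above falls out of the standard hyperfocal formula, H = f²/(N·c) + f, where f is the focal length, N the f-number, and c the circle of confusion. An editor's sketch; the 0.03 mm circle of confusion follows the talk, while the f-numbers and the phone's 0.004 mm circle of confusion are assumed illustration values:

```python
# Hyperfocal distance: focused at H, everything from H/2 to infinity is
# acceptably sharp.

def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """H = f^2 / (N * c) + f, all lengths in millimeters."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# A real 28 mm lens at f/8 on a 35 mm sensor (c = 0.03 mm):
H = hyperfocal_mm(28, 8, 0.03)
print(round(H / 1000, 2), "m")  # ~3.29 m; sharp from ~1.65 m to infinity

# A phone-style 4.25 mm lens at f/2 with a tiny sensor (assumed c = 0.004 mm):
H_phone = hyperfocal_mm(4.25, 2, 0.004)
print(round(H_phone / 1000, 2), "m")  # ~2.26 m: nearly everything is in focus
```

The short focal length enters squared, which is why the phone's hyperfocal distance is so close and its depth of field so enormous.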
The best way is to simply focus on something that is at infinity, and that will give you a decent approximation of a hyperfocal setup. Oh, what is the f-stop equivalent of an iPhone camera? Let's see, I have that here somewhere. It's a normal f-stop, around f/2; I don't have the exact number. Okay, so a lens might be equivalent in terms of field of view, but not in terms of depth of field, and most especially not in terms of the hyperfocal distance. In most shooting situations, your iPhone actually has everything in focus. So to get a bokeh effect, the iPhone has to fake it, and it does that very cleverly. What is bokeh? In photography, it is the aesthetic quality of the blur produced in the out-of-focus parts. This is especially popular for portrait photography. You want your portrait subject to be in good focus, maybe a little soft toward the edges, but you want everything else to be blurred out, to almost just be a pattern of color that frames the subject. That is controlled by controlling the lens aperture. Now, the aperture changes the active diameter of the lens, the pupil. Increasing the aperture diameter increases the amount of light reaching the sensor by the square of the diameter. So if you double the aperture, you increase exposure by four times. Decreasing the aperture decreases the apparent lens diameter, making it more pinhole-like, and as you may remember, for our pinhole, everything is in the same focus. This is characterized by the f-number. We know that the total light is going to be proportional to the square of the ratio of the diameter to the focal length. So we simply let the f-number be equal to the focal length divided by the aperture diameter. F-number settings are called f-stops. We have some conventional stop values, as you see in the list here: 1, 1.4, 2, 2.8, and so on. Each step to a higher f-number decreases exposure by approximately one half. ISO is our gain. 
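The f-number definition and its effect on exposure can be sketched directly from the relations above. An editor's illustration with made-up lens values:

```python
# f-number N = focal_length / aperture_diameter; the light gathered goes as
# (diameter / focal_length)**2, i.e. as 1 / N**2.

def f_number(focal_mm, diameter_mm):
    """Conventional f-number of a lens."""
    return focal_mm / diameter_mm

def exposure_ratio(n_from, n_to):
    """Relative exposure when moving from f/n_from to f/n_to."""
    return (n_from / n_to) ** 2

print(f_number(50, 25))          # 2.0: a 50 mm lens with a 25 mm pupil is f/2
print(exposure_ratio(2.8, 2.0))  # ~2x: opening up one stop doubles the light
print(exposure_ratio(2.0, 4.0))  # 0.25: stopping down two stops quarters it
```

This also shows why the conventional stop values step by a factor of about 1.4 (the square root of 2): each step changes the light by a factor of two.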
Of course, in the days of film, that was the photosensitivity of the film emulsion. These days it is the amplification of the image sensor. Some standard values were established, and we have certain preset values for it that are similar in effect to f-stops: in this case 100, 125, 160, and so on, where every three steps doubles the exposure. So roughly, if I go down a stop in aperture, I am able to compensate for that by changing my ISO by the equivalent steps. Okay, oh, by the way, someone asked if the f-stop was for a specific distance. No, it's not. The f-stop is independent of focus distance. It is true that when you change focus, the image distance changes a little bit, but by definition we simply refer to the f-stop as being the focal length divided by the aperture diameter. Okay, now, there's an optimal gain setting for any given image sensor, and usually that's going to be ISO 100, though not all the time; different cameras will vary in this regard. There are some basic rules, though. You should expect on a sunny day to be using ISO 100 or 200. On a cloudy day you want to go up to ISO 400, and indoors or in low light, higher still. The shutter speed also comes in preset values of fractions of a second: 1/100, 1/125, 1/160, 1/200, and so on, and again every three steps halves the exposure. There is a well-known rule to minimize the appearance of handheld shake: you have to use a shutter speed that is faster in seconds than the inverse of the focal length. With a 200-millimeter lens, you must use a shutter speed that is shorter than 1/200 of a second. Well, concerning the ISO speed: as you increase your ISO, yes, you will be increasing the amplification of the sensor, and there will be a point where it starts to be a diminishing return. For a modern camera, you know, I've gone up to 1600 easily without getting too bad an effect. Now, if I go up to 64,000 or much higher, yes, it gets quite noisy. 
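The compensation idea above, trading aperture against ISO or shutter time, is reciprocity: exposure changes are additive when counted in stops (doublings). A small editor's sketch, with example settings that are assumptions, not from the talk:

```python
import math

# Exposure reciprocity: one stop less aperture can be offset by doubling the
# ISO or doubling the exposure time. Stops are log2 ratios.

def stops_of_change(aperture_ratio=1.0, iso_ratio=1.0, time_ratio=1.0):
    """Net exposure change in stops; positive means a brighter exposure.

    aperture_ratio is new_N / old_N (a larger f-number N means less light,
    and light goes as 1/N**2, hence the factor of -2); iso_ratio and
    time_ratio are new/old.
    """
    return (-2 * math.log2(aperture_ratio)
            + math.log2(iso_ratio)
            + math.log2(time_ratio))

# Stop down from f/4 to f/8 (-2 stops), raise ISO 200 -> 800 (+2 stops):
print(stops_of_change(aperture_ratio=8 / 4, iso_ratio=800 / 200))  # 0.0
# Doubling the exposure time alone adds one stop:
print(stops_of_change(time_ratio=2.0))  # 1.0
```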
But if I'm working in a low-light situation, there's not much else you can do; you make the best image you can. Okay, just to note, a better way to minimize handheld shake is to eliminate the hand, and that's to use a tripod. And it is also true that modern lens stabilizers help a lot. Basically, they help by about two of the preset steps of shutter speed. So, if I was using a 200-millimeter lens with a stabilizer, I might be able to shoot at a shutter speed as low as 1/60 of a second. However, a tripod is better. Now, if you do put your camera onto a tripod, make sure you turn off the lens stabilizer. The reason for this is that even when there's no shake, the stabilizer continues to hunt and try to stabilize the lens, so it can actually start adding a little bit of shake even on a sturdy tripod. So, let us say that we're shooting landscapes. What you want to do is control your aperture and use the optimal ISO for your sensor; the shutter speed will be set automatically. This is where the aperture priority mode on your camera should be used. On the other hand, if you're shooting moving objects, you want to control the shutter speed. So you'll set your ISO to whatever you're going to need for the given light and then set the shutter speed to what you need, and you'll be wanting to use your shutter priority mode for that. Okay, then we have the question of our lens: wide, normal, or tele. Now, a lot of people say, well, if it's a distant object, I always want to use tele; if it's a portrait, I want to use wide; and so on. That's not really true. The first thing you should do is choose where you want to stand in relation to your subject to get the perspective you desire. Then choose a lens that captures the field of view you need. You want to minimize the amount of cropping you have to do on the final image. So, many people will use what is considered to be a telephoto lens for doing portraits that are fairly close up. 
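The 1/focal-length handheld rule and the stabilizer's benefit combine simply: each stop of stabilization lets the exposure run twice as long. An editor's sketch; the "two stops" figure is the rough rule of thumb from the talk, not a manufacturer specification:

```python
# Handheld rule of thumb: exposure time should be shorter than 1/focal_length
# seconds. A stabilizer effectively buys extra stops, each doubling the
# usable exposure time.

def max_handheld_time(focal_mm, stabilizer_stops=0):
    """Longest usable handheld exposure in seconds under the 1/f rule."""
    return (1.0 / focal_mm) * (2 ** stabilizer_stops)

print(max_handheld_time(200))                      # 0.005 s = 1/200 s
print(max_handheld_time(200, stabilizer_stops=2))  # 0.02 s = 1/50 s, near 1/60
```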
Someone mentioned that the aperture setting can affect the lens sharpness, and that is true; there is an optimum f-stop for a given lens. Okay, now, perspective is based on where you stand in relation to your subjects, and that's only indirectly related to the lens focal length. Here's a scene with the telephoto. And here is the same scene from the same position with the wide. And here is an image that has been cropped from the wide; you can see that's almost identical to the one that we got with the telephoto. So, my perspective is set by where I'm standing, and the amount of cropping that I'm doing is determined by my lens focal length. Okay. I just thought I'd show off some pictures at this point, but we'll break for a moment and let you ask some questions. Let's see what we have here on the chat list. Well, yes, and that's why you want to avoid cropping in the post-processing stage. So, for example, in this particular image here, and let's see now, this was taken with 28 millimeters. Suppose I'd wanted to focus in on part of it; well, I'd crop for that. And it should also be mentioned that there is a distortion at the extreme wide angles that takes place from the effect of the perspective. Okay. Does anyone want to throw in a quick question before I move on? Okay. Let's see. Okay. This was a typical situation for the light in this case. And this again is 130 millimeters for a similar scene. And this is... let me just take a quick look here to see what this one was. Okay, this was done with a 44 millimeter, which a lot of people consider to be a normal type lens. Okay, that's another 130-millimeter shot. Okay. And this is a 28-millimeter shot. And this one is just for amusement: this is my sister looking at the mighty Rhine River. Well, she's actually looking at a small stream coming off the Rhine; I believe the actual Rhine is on the other side of that embankment. Okay. And let's see, this was taken with 56 millimeters. 
And by the way, it should be pretty obvious at this point that I was using a zoom lens, given all the different focal lengths. What do I mean by that? Well, photographers refer to prime lenses as fixed-focal-length, top-quality lenses, usually with fairly wide apertures available. Zoom lenses are often of very high quality and produce great results, but there's this viewpoint out there in the professional world that prime lenses are always going to be just a little bit better, so there's a bit of turning up the nose at zoom lenses. You know, every photographer wants to get a cover, and I've only had one cover in my life. This is a little equipment image I took that finally ended up on the cover of Solid State. It occurs to me that I didn't tell you about my background. I'm actually a physicist. I'm retired now, but I worked most of my career in industry. So, yeah, here's the only cover I ever got, and this was taken at 1000 millimeters at f/8, and the exposure can only be described as "a little while." But of course, this was August 21st of 2017, during our great eclipse. Okay, so I'm open for questions. Hmm. Well, concerning the quality of lenses, you know, there's always going to be a little bit of debate about glass versus plastic. As it happens, although lenses will often have a front and rear element that are made of glass, so they can hold up well under handling, you'll find that in a lot of modern lenses the interior optical components will be plastic, and they work just fine. The plastics give the designers a greater selection of optical indices, and so you'll find quite a bit of plastic optics inside. Well, the reason that zoom lenses were often considered to be telephoto lenses is because initially that's the focal lengths they came in, although I, you know, have around here a 15 to 70 millimeter, I believe. But yeah, we also have them that go up to 100. Okay, film versus digital. 
Well, you know, before the mid-1990s, I was all film, and for my black and white I did my own developing; I basically put together a little bathroom darkroom and did that. When digital cameras became available, I started using them, and frankly, I find that everything I could have done with film, I can do with digital. Though there's quite a bit of controversy over that. There are some schools where the photography professors have really poo-pooed digital photography and said that real artists will only use film and so on. I think they are quickly dying off, both philosophically and physically. Digital is here to stay. Now, there was a question about the new mirrorless cameras. The significance of the mirrorless design is that the flange distance, from the point where you mount the lens to the image sensor, is only 22 millimeters for their mirrorless cameras, whereas it was 44 millimeters for their DSLRs. Having that reduced flange distance is a big advantage for the optical designers; they have a lot more freedom to design quality lenses because of the smaller flange distance. The early reports are that the new series lenses live up to that promise; they are very good lenses. The other thing is that dropping the flipping mirror and the prism from the camera body saves both cost and complexity. There are some real advantages to the mirrorless design. Okay, let me simply define it this way. I'm going to say that a telephoto is any lens that has a focal length longer than 70 millimeters, whether zoom or fixed, it doesn't matter. A zoom lens is any lens, regardless of the range of focal lengths, that is able to move between focal lengths by the turn of a control ring on the lens. So you can therefore have zooms that go wide to normal or normal to telephoto, as well as fixed wide lenses. Okay, now, concerning moisture on the lens, you know, I don't know. I will say that internally their seals are good enough that you should never get any moisture onto an internal element. 
As for the external faces, both the rear and the front, you know, it's hard to say. Fogging can be a problem, and you do have to watch out for it. Well, I don't know quite how to respond to the question of how to make one's ass look not so fat with a lens. However, it is very convenient to always blame the photographer. Okay, there's a question I missed about a bellows extension between lenses. Okay, well, basically these extenders go between the rear element and the mounting flange. They simply move the lens further out, to make the lens behave as if it had a longer focal length. Just a second there, I was looking around for a picture I had, which I didn't include. But I've taken my pinhole element and put it onto extension rings to show how it acts with the extension rings, as if it were about a 150mm lens, by simply increasing the distance between it and the image sensor. Okay, someone mentioned Vaseline on the lens. Well, to tell you the truth, I would never do that. You know, I think that if I was going to try something like that, I would use a clear wax rather than Vaseline; you'd put a little film of wax on the front surface. But for the most part, if I wanted a little softer appearance, I would do that either with a large aperture and slightly off focus, or I would do it with post-image processing. Of course, they used to use gauze and all kinds of things to soften up pictures. Oh yes, you could add a filter, and I've had haze filters and artificial fogs and that kind of thing that I've used in the past. These days, in terms of filters, basically I think there are only two types of filters that you absolutely must have. The first is a nice set of neutral density filters. You want the neutral density filters because you may find yourself in a well-lit situation where you really want to use a wide aperture; using a neutral density filter, you can do that. 
The other kind of filter I suggest you have is a polarizer. You cannot duplicate the effects of a polarizer in post-processing. There are many situations where having a rotatable polarizer on the front of your lens can really give you an interesting image. Yes, the polarizer can cut glare, especially if you're looking at reflected sunlight off surfaces of water or glass and so forth. I myself like to carry around sheets of polarized plastic, which I will handhold in front of my lens. That way I don't have to have a set for all my different lens diameters, and it's easy for me to just, you know, slap it up there and rotate it around and so forth. Okay, now, for film, someone asked about different chemistries. No, I've never done silver versus platinum. I basically put away all my chemical darkroom equipment a couple of decades ago now and moved on. The only thing that kind of bothers me a little bit is that it was fun doing the black and white film, and I sometimes wonder if I'm missing some of the effects I might have gotten, now that I use digital photography. But, you know, it was also just a royal pain. I did manage to ruin a sink once with the acetic acid stop solution; it went right through the ceramic. Well, you can buy infrared cameras, you know, and they have specialized uses; people do wildlife photography at night and so forth. UV photos? Well, UV does not have a great range in the atmosphere, so probably not so much. Okay, it looks like we're into our last couple of minutes. Are there any additional topics along this line that you would like to hear? Okay, well, thank you all for coming to the Science Circle today, on a Sunday, and for many of you very early in the morning, and I suppose for some of you very late at night. I'm glad to have had this opportunity, so thank you very much. Now, as I said, in regards to disproportionate body parts, you can always blame the photographer. Thank you.