I believe this is a good time to begin. First, I would like to welcome everyone to the Science Circle. I know that many of you have been frequent attendees of the Science Circle, and there may be a few new ones. Here in virtual reality, it's a nice Sunday morning, but internationally, some of you may be in quite different time zones, including the dreaded early Monday mornings. So thank you for coming, and I hope what I present here today will be of interest. I'm going to swing around so that I can see the slideshow. Today's presentation is called Optics for Photographers. I was motivated to put this presentation together because I had seen that a lot of young people, new people entering photography, did not have a background in optics, and I feel that a basic understanding of optics can be very useful for the photographer. We're learning things from camera manufacturers and their advertisements and what is common knowledge, and some of this is inaccurate. I will principally use ray tracing to illustrate optical properties. I will start with a pinhole camera. Pinhole cameras are both instructive and fun to use, and I have seen some courses that have used them for artistic purposes. I will then relate pinhole optics to practical lens optics and develop the concepts of focal length and depth of field, the ISO (which is the gain of the sensor), and the effects of aperture and shutter speed. Okay, so let's make a pinhole camera. To start with, we'll take a body cap and we will drill a hole right through the center, basically ruining its use as a body cap. We'll cover that with a bit of foil, secure the foil with tape, and then use a fine needle to make a pinhole in the foil, and then mount that to our camera. Now we'll just set up for a shoot. And you can see I just sort of set up on my balcony here and provided two little mannequins. So here is a picture that you can get with a pinhole.
Now I want you to notice that, of course, it has lousy resolution because it is just a pinhole, but all sections have the same lousy resolution. There is no feeling of a change in focus throughout the image, and that's referred to as having a flat image. Let's see why that is using ray tracing. So here I've set up a little virtual model being shown to you in virtual reality. You can see the pinhole and my two mannequins. I'm going to take away the camera so you can see the sensor and its relationship to the pinhole and our subjects. We can trace light rays from each position on the subjects through the pinhole to the image. Oh, and I should just mention that light rays are a convenient but fictional construct. We can use them mathematically, but of course in modern physics light behaves entirely differently. Okay, so each point on the object is mapped to a corresponding point in the image. You can think of pinholes as mapping angles. So we're going from three-dimensional coordinates in the real world to a two-dimensional representation. You lose a dimension. However, looking at the image, one of the capabilities of human vision is that we can analyze the image and restore the lost information. And that also is the source of a lot of optical illusions. For example, here I've drawn a little quadrilateral showing the heights and positions of my two mannequins. This shows that in the image they seem to have different sizes, although in the real world they are, of course, the same size. So let's take a look at some of the features of this representation. It's basically all about triangles and drawing our light ray lines from the objects through the pinhole to our image sensor. Magnification... oh, let me go back for a moment. I just wanted to mention some of these things. These red lines here, let me see if my little laser pointer is working. Okay, there we go. This red line along here and here reaches to the edges of the sensor here and here.
That represents my field of view for this given optical imaging element, which is the pinhole. Then my object size is between these yellow rays here and the image size over here. And then I have an object distance along this path and then my image distance along this path. And also, of course, a sensor size. Okay, so the magnification we define as the image size divided by the object size. And that happens to be equal to the image distance divided by the object distance. Now it doesn't matter. I could be using a pinhole, a lens, or a curved mirror. This relationship will still hold. I should mention, however, that for complex lenses, we don't necessarily measure from the center of the lens group. And if you look above the slideshow, I have placed some graphics that show a telephoto lens, a simple lens, and a simple lens with objects at infinity. The one at the very top is a simple lens. And we have what we call a principal plane running through that lens, which is the plane from which you do all your measurements. You measure your image distance and your object distance from that principal plane. However, if I was to look at something like a telephoto lens, which you'll see in the ray tracing on the left-hand side, the pair of lenses, in this case a convex and a concave lens, have provided a combined focal length that is much greater. And it has also moved the principal plane of the combined lenses out in front of the physical lens group. This is the reason that telephoto lenses on a camera can be shorter than the focal length of the lens. The field of view, as I've indicated already, is determined by the triangle formed by the image sensor and the image distance. We have a formula for all this. One over the object distance plus one over the image distance equals a constant. That constant will turn out to be one over the focal length, but you should notice that our pinhole camera doesn't actually have a focal length. Let's go back to our triangles for a moment.
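Those triangle relations can be put into numbers. Here is a minimal sketch, not from the talk itself, assuming a simple thin lens with everything measured from the principal plane; the 50 mm lens and 2 m subject distance are just illustrative values:

```python
def image_distance(focal_mm, object_mm):
    """Thin-lens relation 1/do + 1/di = 1/f, solved for the image distance di."""
    return 1.0 / (1.0 / focal_mm - 1.0 / object_mm)

def magnification(object_mm, focal_mm):
    """Magnification = image size / object size = image distance / object distance."""
    return image_distance(focal_mm, object_mm) / object_mm

# A hypothetical 50 mm lens focused on an object 2 m away:
di = image_distance(50.0, 2000.0)  # ~51.3 mm behind the principal plane
m = magnification(2000.0, 50.0)    # ~0.026, i.e. the image is about 1/39 life size
```

Note how the image distance is only slightly longer than the focal length once the object is many focal lengths away, which is the behavior the talk returns to later when discussing focus at infinity.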
If I increase my object distance, just move this object outward, that will decrease the image size, thereby decreasing the magnification. Contrariwise, if I increase my image distance, that's going to increase the magnification. Moving the object, however, changed neither my sensor size nor the sensor distance, so the field of view is unchanged. Now, if I do decrease the sensor size, that will decrease the field of view. You look at the red lines here, and then I go where I have reduced the sensor size. The red line is now down here. You can see that's reduced my field of view. The field of view is your sensor size versus the distance of the sensor from my principal plane of the optics. Now, suppose I add more pinholes. Well, I think it's going to be fairly obvious. That's going to double my image. So now I have a pretty difficult image to view, but as you can see, everything is doubled. Yes, double trouble. And we can show what is going on with ray tracing. Here, each pinhole allows a separate ray from a given object point to two different points in the image. Now, suppose I took a small prism and I put it over one of the pinholes, as you see here. That could bend one of my rays and merge those two images. Suppose I had a plate that had a lot of pinholes. I could put a prism over each one. And as you can see, the further out I get from the center, the steeper the prism has to be. The whole array of those prisms can be lined up so that the image from each one falls in line. And you can see that I'm actually forming a lens shape just like that. So you can actually think of a lens as being an infinite number of infinitesimal pinholes, each with an infinitesimal path-bending prism. I could do the same thing for mirror optics, but because the rays fold back on themselves, it's really hard to draw. And so there we have developed a lens without even using the calculus. All right, so actually we did use the calculus, but it was well hidden.
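The field-of-view triangle described above, sensor size against sensor distance, reduces to one line of trigonometry. A small sketch, with a full-frame 36 mm sensor width and a 50 mm image distance chosen purely as example numbers:

```python
import math

def field_of_view_deg(sensor_mm, image_distance_mm):
    """Full angle of view from the sensor size and the image distance.
    For distant subjects the image distance is approximately the focal length."""
    return 2.0 * math.degrees(math.atan(sensor_mm / (2.0 * image_distance_mm)))

# Horizontal field of view of a 36 mm wide sensor at a 50 mm image distance:
fov = field_of_view_deg(36.0, 50.0)  # about 39.6 degrees
```

Shrinking the sensor or pushing it further from the optics both narrow this angle, which is exactly what the red rays in the slide show.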
Okay, so it's still all about triangles, but now something special has happened. Because I have rays that come in through different positions on my imaging element, there is one and only one point where they converge and where the image would be in exact focus. And that allows me to characterize the imaging element, in this case a lens, as having a particular focal length. Okay, right here. The pinhole element, to reiterate, did not have a focus. Every object at any object distance was equally resolved. The thin lens has a focus, which means that there is only one image distance where an object will be in exact focus. However, we don't actually need exact focus. An acceptable resolution would allow a range of object distances and a corresponding range of image distances to be considered as in focus. What is that acceptable resolution? Suppose we define a circle of confusion that is the smallest diameter in the image that can be discerned when the image is viewed or displayed. Any point in the object that is resolved to this circle or smaller can be considered to be in focus. The human eye has a circle of confusion of about one-fifth of a millimeter, 0.2 millimeters, at a viewing distance of 25 centimeters. So, you know, roughly a comfortable viewing distance for, say, a book or a picture you would be holding in front of your eyes. At about arm's length, you would see lines that are spaced no closer than that one-fifth of a millimeter. So think of an 8x10 photo being held at arm's length. You would be seeing about 1,000 by 1,270 resolvable points or so at that distance. If we're using a 35 millimeter camera sensor, the allowed circle of confusion is about 0.04 millimeters. And most often we'll use a value of 0.03 millimeters. And bear in mind that this is referring to the acuity of the human eye when you're looking at the final image, whether printed or displayed, in typical circumstances. This does not indicate the sharpness of the camera.
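The 8x10 figure above is just arithmetic on the eye's 0.2 mm acuity, and is easy to check. A quick sketch (the counts are approximate, and depend on the exact viewing distance assumed):

```python
# How many resolvable points fit across an 8x10 inch print,
# given the eye's ~0.2 mm circle of confusion at a comfortable viewing distance.
INCH_MM = 25.4
coc_eye_mm = 0.2

points_w = 8 * INCH_MM / coc_eye_mm   # across the 8-inch side
points_h = 10 * INCH_MM / coc_eye_mm  # across the 10-inch side
# -> roughly 1,000 by 1,270 resolvable points
```

This is why a modest print held at arm's length cannot reveal all the sharpness a well-focused camera actually captured.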
If you had exact focus, it would be considerably sharper, even though the pixel size of the sensor or whatever may prevent you from being able to see it. It does indicate the degree to which sharpness can be reduced before it impacts human vision. Just as a point of information, here is a chart of the circles of confusion for different combinations of format and sensor size. And as I mentioned, for 35 millimeter it is about 0.02 or 0.03 millimeters. Okay, so let's talk about depth of field. The range of acceptable distances based on the circle of confusion is the depth of field. On the object side we call it depth of field; on the image side, depth of focus, which we can define as the range over which the image sensor can be moved while keeping a fixed object in acceptable focus. We can also give an alternative definition, which is the range of object distances that will be in acceptable focus for a given fixed sensor position. So let's take a look at our triangles again. We have our focal length here, which ties together the object distance and the image distance. As I move my object away, my image distance has to come closer and closer to the focal length. And if I move it to extremes, such as putting the object almost at infinity, the image distance becomes almost equal to the focal length. The mechanics of that is that on your camera lens, there is a pin that sits in a helical slot on the internal barrel of the lens mount. When you rotate the focus ring, it is moving that barrel back and forth in distance. So remember that the field of view is determined only by the sensor size and the image distance. But we now see that for objects effectively at infinity, the image distance is determined by the focal length. So that's the relationship there. And I mentioned the magnification before. Just to reiterate, it's the image size to object size. And I won't go over that again. We'll just keep going. Okay, so what is that focal length?
Now here is one of our problems. Photographers, and they're abetted in this by the camera manufacturers, will quote the focal length of a lens as the value that would give an equivalent field of view on a 35 millimeter sensor. An example of this is that people often refer to the wide lens on an iPhone as being 28 millimeters. But in reality it is a 4.25 millimeter lens that is paired with a 5.7 millimeter sensor. Now why is that important? For an iPhone, if I focus on an object that is one meter away, the depth of field, that's our acceptable range of resolution, is in the range of half a meter all the way out to infinity. For a 28 millimeter lens on a DSLR, if I focus on an object that is one meter away, the depth of field will be about 0.96 meters out to 1.02 meters, which is an extremely narrow range. A very limited amount of the object space will be in focus. When the focus distance, aperture, and focal length are such that the in-focus condition goes out to infinity, this is called the hyperfocal setting, and the corresponding focus distance is commonly called the hyperfocal distance. When set to the hyperfocal setting, everything beyond the hyperfocal distance will be in focus. There are a couple of different definitions for hyperfocal, but they amount to much the same thing. There are some apps available from the app stores that you can use to calculate the hyperfocal distance for any given aperture and focal length. The easiest way to set it up physically is simply to focus on an object at infinity, and that will give you the greatest range of depth of field. Oh, what is the f-stop of an iPhone camera? For the wide-angle lens, it is f/1.8, and for the telephoto lens, it is f/2.4. So two lenses might be equivalent in terms of field of view, but not in terms of depth of field, and most especially not in terms of the hyperfocal distance. In most shooting situations, your iPhone has everything in focus from 1 meter out to infinity.
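The behavior described here can be sketched with the standard approximate formulas for hyperfocal distance and depth of field. The lens, aperture, and circle-of-confusion values below are illustrative assumptions, not the talk's exact figures (a 28 mm lens at f/2.8 on a full-frame body with the 0.03 mm circle of confusion mentioned earlier):

```python
def hyperfocal_mm(f_mm, f_number, coc_mm):
    """Approximate hyperfocal distance: H ~ f^2 / (N * c)."""
    return f_mm * f_mm / (f_number * coc_mm)

def dof_limits_mm(subject_mm, f_mm, f_number, coc_mm):
    """Approximate near and far limits of acceptable focus
    for a subject at distance subject_mm."""
    H = hyperfocal_mm(f_mm, f_number, coc_mm)
    near = H * subject_mm / (H + subject_mm)
    far = H * subject_mm / (H - subject_mm) if subject_mm < H else float("inf")
    return near, far

# A 28 mm full-frame lens at f/2.8 (CoC 0.03 mm) focused at 1 m:
near, far = dof_limits_mm(1000.0, 28.0, 2.8, 0.03)
# near ~ 0.90 m, far ~ 1.12 m: only a narrow slice of the scene is sharp
```

Running the same formulas with a short focal length and a proportionally tiny circle of confusion, as on a phone sensor, pushes the hyperfocal distance down to a meter or two, which is why the phone keeps nearly everything in focus.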
So to get a bokeh effect, it has to fake it, which it does very cleverly. Bokeh is the effect of having your central principal subject in focus, while the rest of the image, most especially the background, is out of focus. This is very popular for portraits, where you want to frame the subject with an out-of-focus background, almost just a pattern of color around the subject. That is controlled by controlling the lens aperture. Now the aperture changes the active diameter of the lens, the pupil. Increasing the aperture diameter increases the amount of light reaching the sensor by the square of the diameter. So if you double the aperture, you increase exposure by four times. Decreasing the aperture decreases the apparent lens diameter, making it more pinhole-like. And as you may remember, for our pinhole, everything is in the same focus. This is characterized by the f-number. We know that the total light is going to be proportional to the square of the ratio of the diameter to the focal length. So we simply let the f-number be equal to the focal length divided by the aperture diameter. F-number settings are called f-stops. We have some conventional stop values, some of which I've listed here. Each step reduces the exposure to about 0.7 of the previous value, and every two steps approximately halves it. Our next parameter is the ISO setting. This was related to the chemical photosensitivity of film in the days of emulsions and chemical photography. Today it is related to the amplification of the image sensor. These values were established by the International Organization for Standardization, hence ISO. And it comes in preset values, which I have listed here. And every two steps roughly doubles the exposure. So roughly, if I go down two stops in aperture, I am able to compensate for that by changing my ISO by two steps. Oh, by the way, someone asked if the f-stop was for a specific distance. No, it's not. The f-stop is independent.
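The square law behind the f-number can be made concrete. A small sketch, assuming only the relation stated above, that light gathered scales as the inverse square of the f-number:

```python
def relative_exposure(f_number):
    """Light reaching the sensor scales as 1 / N^2 (up to a constant factor)."""
    return 1.0 / (f_number ** 2)

# One full stop is a factor of sqrt(2) in f-number and a factor of 2 in light:
ratio = relative_exposure(2.8) / relative_exposure(4.0)
# ratio ~ 2.04: f/2.8 admits about twice the light of f/4
```

This is why the conventional f-stop sequence marches in multiples of the square root of two: each full stop in f-number halves or doubles the exposure.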
It is true that when you change focus, the image distance changes a little bit. But by definition we simply refer to the f-stop as being the focal length divided by the aperture diameter. There is an optimum gain setting for any given camera and image sensor. Usually this is about ISO 100. This optimum value does vary from camera to camera. It's not always 100, but usually. Now, when you're outside on a sunny day, ISO 100 is usually best. On a cloudy day, you want to use ISO 400; indoors, 800; and in low light, 1600 and up. The shutter speed also comes in preset values of fractions of a second: 1/100th, 1/125th, 1/160th, 1/200th, and so on. And each step decreases the exposure by approximately one quarter. There is a well-known rule that to minimize the effects of shake from handheld cameras, you should use a shutter speed that is, in seconds, the inverse of the lens focal length. So, for example, if I had a 200mm lens, I'd want to use a shutter speed of 1/200th of a second or faster. Modern image stabilizers, which basically will move the lens in response to any movement of the camera, help in this regard by about a factor of 4x. So, if I'm using a stabilizer, I might be able to handhold a 200mm lens even with a 1/50th of a second shutter speed. A better idea is to use a tripod. A tripod, however, should not be used with an image stabilizer. Be sure to turn your image stabilizer off if you mount the camera on a tripod. The reason for this is that the stabilizer will continue to hunt for a stable condition, even though the camera itself is now rock-steady, so it actually will add a little bit of motion blur in that situation. For most shooting situations, controlling the aperture will be your most important consideration. This is certainly true for portraits and for landscapes.
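The handheld rule of thumb above is one division. A sketch, where the 4x stabilizer factor is the rough figure quoted in the talk rather than a guarantee of any particular camera:

```python
def min_handheld_shutter_s(focal_mm, stabilizer_factor=1.0):
    """Rule of thumb: slowest safe handheld shutter time in seconds is
    1 / focal length, relaxed by the stabilizer's improvement factor."""
    return (1.0 / focal_mm) * stabilizer_factor

t_bare = min_handheld_shutter_s(200)        # 0.005 s, i.e. 1/200 s for a 200 mm lens
t_is = min_handheld_shutter_s(200, 4.0)     # 0.02 s, i.e. 1/50 s with stabilization
```

Note the rule was coined for 35 mm film; on a crop-sensor body you would scale the focal length by the crop factor first.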
So, in those situations, you want to choose your aperture according to the depth of field you're looking for, and you want to set your ISO to the optimum gain, and then let your shutter speed be adjusted to get the correct exposure. To do this, you'll use the aperture priority mode on your camera. This is also called the Av mode. If you were shooting something that involved stopping motion, then using a fast shutter speed is the most important factor. And for those purposes, you would want to use shutter priority, or Tv mode: set your shutter speed, choose your aperture, and then increase your ISO gain so that you'll have the correct exposure. Concerning lens selection, the first thing to do is decide where you're going to stand in relation to your subjects to get the perspective you want. Then choose your lens to crop the image as close as possible to the desired final result. You want to minimize the amount of cropping needed in post-processing. It's important to remember that perspective in your image is a matter of where you stand. It is not really a function of your choice of lens. If taken at the same distance in relation to the subject, a wide-angle lens and a telephoto lens will produce the same perspective. To demonstrate this, we'll see how a wide-angle shot can be cropped to resemble a telephoto shot. Here's a scene with the telephoto. And here is the same scene from the same position with the wide. And here is an image that has been cropped from the wide. And you can see that's almost identical to the one that we got with the telephoto. So my perspective is set by where I'm standing. And the amount of cropping that I'm doing is determined by my lens focal length. Okay, I just thought I'd show off some pictures at this point, but we'll break for a moment and let you ask some questions. Let's see what we have here on the chat list. Yes, it is true that the more you have to crop, well, you're throwing away pixels, so the less sharp the final image will be.
And this is the reason that you want to choose your lens to match the field of view as closely as possible to the desired final image. But you will always do some cropping to optimize the aesthetic qualities of that image. This scene and the one that follows are similar in subject except under very different lighting conditions. This one obviously being, I think this was, sunset. And this one being midday, both 128 millimeter focal length shots. Okay, this was done with a 44 millimeter, which a lot of people consider to be a normal type lens. Okay, that's another 130 millimeter shot. Okay, and this is a 28 millimeter shot. And this is an amusement. This is my sister looking at the mighty Rhine River. Well, she's actually looking at a small stream coming off the Rhine. I believe the actual Rhine is on the other side of that embankment. And let's see, this was taken with 56 millimeter. By the way, you can probably guess that I was using a zoom lens for these photos because I was hopping around so much in focal length, as opposed to a prime lens. Photographers consider a prime lens to be a top quality fixed focal length lens of a fairly large aperture. Zoom lenses will have a smaller maximum aperture because of their complexity. And generally they're felt to be of lower quality, although there are some really very good zoom lenses available. Just a little bit of turning up the nose at zoom lenses. Now this is a cover. Every photographer dreams of getting a cover on a magazine. And in this case, an equipment shot I had taken showed up on Solid State Technology last year. It occurs to me I haven't mentioned my background. I'm actually a physicist. I'm retired now. My background has been in atomic and condensed matter physics and process technology. And this one comes from my process technology life, which has occupied me for about the past 30 years. Oh, here's an eclipse. Well, we all know when that occurred. That was August 21st of 2017.
And that was done with a thousand millimeter lens. Aperture of f/8, and the shutter speed was for a little while. But yes, that's my eclipse photo. So that about finishes it. Are there any final remarks people would like to make or ask? Oh yes, that would be the glass versus plastic debate. In a modern lens, while the front and rear elements will often be of glass, because it's a good hard material that will resist a lot of rough handling, the interior elements will tend to be of plastic. The plastics offer a better range of optical indices and achromatic correction. So it's actually going to make for a better quality lens by using some plastic elements. But generally this will only be done on the interior of the lens group, the exterior front and rear pieces being of glass. In regards to film versus digital, well, I started doing my own developing back in 1978. Black and white; I set up a little bathroom-style developing setup. And I stuck with doing my own processing, even did a little Cibachrome color, up until about the mid-1990s as digital cameras became available. Then I switched over to digital, and I found that I'm able to do with digital almost everything that I can do in a darkroom, with maybe some exceptions. I know there are a lot of art schools out there that continue to pooh-pooh digital a little bit and insist that real artistry can only be done with chemistry. But I think they're wrong. I think that digital is just fine. A more recent controversy is the question of mirror versus mirrorless cameras. In the DSLR camera, you have a mirror that flips up between viewing a scene and taking the exposure, and a pentaprism that is used to set up a viewfinder for the photographer. In a mirrorless camera, these elements are gone. And as a consequence, the flange distance, which is the distance from the mounting ring for the lens to the image sensor, is much smaller in a mirrorless camera. In a regular DSLR, it can be anywhere from 40 to 45 millimeters.
In a mirrorless camera, we're looking more like the 20 millimeter area. So half the distance. And this gives the lens designer a lot more freedom in designing those lenses. They can do much better. In the new Canon R series, and not that I'm trying to promote Canon here, but in the new Canon R series, the R lenses are already reputed to be very superior to the previous lens offerings. So not only is digital here to stay, but I believe that mirrorless cameras will also be here to stay and will become the professional choice in the future. I'm going to say that a telephoto lens is any lens that has a focal length of more than 70 millimeters. And I will say that a wide angle lens is any lens that has less than, let's say, a 40 millimeter focal length. And I'd put the normal range in the 40 to 70 millimeter area. Also, a lot of people tend to confuse zoom with telephoto. Not all zoom lenses are telephoto. You can have wide angle zoom lenses as well. For example, I have a 28 millimeter to 70 millimeter zoom lens. As mentioned before, the plastic elements, when they're used, will be in the interior of the lens group, completely sealed up. And if you have any moisture getting in there, you've got real problems. The exterior lenses still tend to be glass. They'll be hard-coated with anti-reflection coatings. And moisture can be a problem on these external elements, but basically you just have to live with that and bring along a nice microfiber cloth to keep those clean and dry. I see that someone has asked about their bodily proportions not looking the way they want them to look in a photograph. And you can always blame the photographer for that. And it is true that if you're doing a wide angle portrait and you get in nice and close, there's going to be a lot of distortion. So blame the photographer. Okay, now there was a question I missed about using a bellows extension.
A bellows extension or extension rings, I tend to use extension rings, will move a lens out from the mounting element. They simply sit between the rear of the lens and the mounting flange. And this increased distance will make the lens behave as if it were of a longer focal length. Of course, it also simultaneously reduces the effective aperture. I did have a photo, which I didn't use, where I took my pinhole and put it at the end of a set of extension tubes just to show this effect. Basically, I can turn my pinhole from acting like a 22mm lens into acting like a 120mm lens with such extension tubes. By the way, bellows are also used to tilt lenses for the purpose of correcting the perspective of structures such as buildings, where it is necessary to point the camera at an angle to a very large structure, which normally causes quite a bit of perspective distortion. But by tilting the lens to be parallel to the building, you can correct for this. And that's kind of a complicated topic. Maybe if there is interest, we can hit upon that in another tutorial. Someone mentioned using Vaseline on a lens. I'll tell you the truth, I would never do that. If I wanted to try that, I would probably mount a clear filter on the front of the lens and muck that up rather than the lens itself. By the way, for inducing certain types of image softening, people have used such things as gauze and even pantyhose to alter the hardness of the image. Concerning filters, I would like to mention that most filter effects these days I will simply do in post-processing. But there are two filters that I think cannot be duplicated in post-processing. First of all, you would like to have neutral density filters available. And the reason for this is that sometimes you'll be wanting to use a wide aperture, and you might be wanting to use a fairly long exposure, and you want to keep the ISO near the optimum for the camera.
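The effect of extension on magnification and on the effective aperture can be sketched with the standard close-up relations. This assumes a simple lens set at its infinity focus; the 50 mm lens and 25 mm tube are hypothetical numbers, not the pinhole example from the talk:

```python
def magnification_with_extension(f_mm, extension_mm):
    """For a lens at infinity focus, adding extension e gives magnification m = e / f."""
    return extension_mm / f_mm

def effective_f_number(f_number, magnification):
    """Close-up light loss: the effective f-number grows as N * (1 + m)."""
    return f_number * (1.0 + magnification)

# A hypothetical 50 mm lens on a 25 mm extension tube:
m = magnification_with_extension(50.0, 25.0)  # 0.5x life size
n_eff = effective_f_number(2.8, m)            # f/2.8 behaves like roughly f/4.2
```

This is the light loss the talk alludes to: the further the lens is racked out, the dimmer the image on the sensor, so exposure must be increased to compensate.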
Well, then to control the exposure you need to control the amount of light. You can add neutral density filters to the front of the lens so that you can have those conditions without overexposing. The other filter is a rotatable polarizing filter. And this is very useful, especially when doing outdoor scenes, to control the contrast of diffuse sunlight and also to control the glare reflecting off water surfaces and other highly specular reflections in your image. Back in the days when I did chemical work, I stuck mostly to the fairly standard chemistries that were available. As I had mentioned, I did mostly black and white, and I also did a little bit of Cibachrome, but I didn't try anything too fancy. I myself like to carry around sheets of polarized plastic, which I will handhold in front of my lens. That way I don't have to have a set for all my different lens diameters. And it's easy for me to just, you know, slap it up there and rotate it around and so forth. Okay, now for film, someone asked about different chemistries. No, I've never done silver versus platinum. You know, I basically put away all my chemical darkroom equipment a couple of decades ago now and, you know, moved on. And the only thing that kind of bothers me a little bit is that it was fun doing the black and white film. I sometimes wonder if I'm missing some of the effects I might get, now that I use digital photography. But, you know, it is also just a royal pain. I did manage to ruin a sink once with the acetic acid stop solution; it went right through the ceramic. Well, you can buy infrared cameras, you know, and they have specialized uses. You know, people do wildlife photography at night and so forth. UV photos, well, UV does not have a great range in the atmosphere, so probably not so much. Okay, it looks like we're into our last couple of minutes. Are there any additional topics along this line that you would like to hear?
Okay, well, thank you all for coming to the Science Circle today on a Sunday, and for many of you very early in the morning, and I suppose for some of you very late at night. And I'm glad to have had this opportunity, so thank you very much. Now, as I said, in regards to disproportionate body parts, you can always blame the photographer.