So I'll just say a few words, Joe, because when we do this in Trieste it's really a school and we don't usually do a lot of introductions, but I'm going to do it anyway, just so everybody knows, in case you haven't looked up Joe Shaw's biography. He's a Distinguished Professor of Electrical and Computer Engineering, an affiliate professor of physics, and the director of the Optical Technology Center, all at Montana State University. He's won many awards, including the G.G. Stokes Award from SPIE and a graduate student mentoring award, and he received a Presidential Early Career Award. He's a fellow of Optica (OSA at the time) and also a fellow of SPIE. And something you may not know, but now that I know it, for next time we're in person: Joe is also a musician who plays guitar, bass, and a little bit of saxophone. Joe, we need another guitarist, we need a bassist, and we can use another saxophone player. So next time we do this in Trieste, I'll get you and Stefan Hell to come down. I tried, with Gene, to get Stefan Hell here to play sax, but it didn't work; he wasn't biting. At any rate, that's it. There are just so many awards. Anyway, I'm going to give it to you, Joe. It's your show. You've got two talks, two lectures, and however you want to do it, I'll just let you go. So I'm turning it over to Joe. Good. Thank you. Thank you, Joe. It's wonderful to be here with you all. I wish I were in person so that I could see each of you; it's easier when I can see your faces and respond to what you're thinking and what you're saying. I don't mind if people ask questions as I go. This is a school, so this is not a lecture for me to give to you; it's for me to give to you and for you to ask me. So please ask questions if you have them. Joe, I assume that's okay if we do it that way.
Absolutely. As I said, you're in charge. And I think maybe they can raise their hand and, Abdul, you can handle that. Yeah, I'll handle that. Anyone can raise a hand and ask in between. Okay, perfect. Thank you. So if I have the schedule right, and I think I do, the first lecture this morning is going to be about lidar for autonomous vehicles, the second will be atmospheric lidar, and then tomorrow we're going to talk some more about different lidar applications. This first topic is very new. Lidars have been used since the 1960s; of course, we celebrated yesterday the International Day of Light, which marks the anniversary of the first laser, and as soon as the laser existed, people started using it for lidar. It didn't take very long at all for somebody to get that idea. So here's what we're going to do: we'll talk about some basic principles and the calculations for how we compute lidar signals. Then I'll talk about geometric factors and resolution. Then we'll talk about different kinds of lidars, including how to scan them, and we'll end with a brief discussion of coherent lidar. The autonomous vehicle lidar world is very different from any other lidar world and poses unique requirements, and I will try to review those. But first, let me tell you where I'm coming from. I live in Bozeman, Montana, in the United States, and that's right here. Yellowstone National Park, a famous place that some of you may have heard of, is right here, just south of my home. It turns out that I'm actually talking to you from Arizona, way down here, because I had to go visit my parents and help them; they're getting pretty old. So I'm in Arizona right now, but I usually live in Montana.
And my home, Bozeman, Montana, is a great place. The university has great facilities, but what I like to brag about is the outdoors. I like to ski, both cross-country and downhill, and hike in the summertime. It's a good place, and if you are ever in this direction, I would be happy to have you come visit. The video is working, at least for me; I hope you can see it playing. I will play it a couple of times so that if you don't understand it at first, you might start to. This is a video that was created with lidar data from a car, and it's played back faster than it was recorded, of course. Okay, it looks like it's just playing over and over again; maybe I can pause it. The thing that you might notice is just how dark the world is. You feel like you're driving at night with poor headlights, and that's because the lidar only sees, in this case, forward. It doesn't look to the side and it doesn't look back. In the more general case of a lidar on an autonomous vehicle, the purpose of the lidar is to act as the eyes of the vehicle. There's a famous quote from Elon Musk saying that lidar is a fool's errand; he thinks lidars are too expensive and too complex and will never be successfully used for self-driving vehicles. But I, and many others, disagree with him, because lidars give you the ability to see at night, and to see farther away than with just a camera, especially in poor conditions. I think the combination of cameras and lidars is the best approach. So the principle, or the requirement that people usually agree on for self-driving car lidars, is that the lidar needs to scan the scene around the vehicle and update that scene approximately 10 times per second. Think about that: that's a lot of data, and a lot of pointing the lidar in different directions.
So that's a very different requirement than lidars for atmospheric measurements, like we'll talk about in the next hour. And the requirements then drive the design of the system. Not only that, but the system has to be small (it has to fit on the vehicle), and it has to be fairly low cost. The goal in the community, I think, is to get these lidars down to, speaking in US dollars, maybe a few hundred dollars, and right now we are not very close to that price point. So we have a lot of work to do to develop lidars for self-driving vehicles. Okay, this is the general layout of how a lidar would be designed for this application. There are a lot of sophisticated things to think about with the optics, but here they're just shown as a single lens. Let's start with a pulsed laser; this is what drives the system. Not all lidars use pulsed lasers, but I do want you to know that many, and maybe even most, do. So we will talk about pulsed lasers, and at the end of the day I'll talk about a different kind of lidar that does not use pulses. The pulsed laser sends out laser pulses that are collimated and directed by the optics; there will be scan mirrors and that kind of thing. The light goes out, hits something in the world, and bounces light all over the place. That's the thing to remember: the light goes all over the place, and only a tiny bit of it comes back to the receiver. The distance between the target and the receiver can be written as the speed of light times delta t, divided by two: R = c·Δt/2. And what is delta t? It's the time that the light takes to travel, and you divide by two because the light goes out and back; it travels the path twice. So if we measure delta t, we just multiply by the speed of light, divide by two, and we get the distance. So what does a lidar do?
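(As a quick aside, that out-and-back timing relation, R = c·Δt/2, takes only a couple of lines of code; the function name here is mine, not from the talk.)

```python
C = 3.0e8  # approximate speed of light, m/s

def range_from_tof(delta_t_s):
    """Round-trip time of flight to one-way distance: R = c * delta_t / 2.

    Divide by two because the pulse travels out to the target and back.
    """
    return C * delta_t_s / 2.0

# A return arriving 1 microsecond after launch came from ~150 m away.
print(range_from_tof(1e-6))
```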
Fundamentally, it measures distance. We can measure much more than distance, but distance is the fundamental measurement. Okay, so the next thing is the electronics. In this case, single-photon avalanche diodes are shown; there are a lot of photonic devices being developed to help lidars be more effective and lower cost. And I'm going to skip over some of the supporting circuitry. Here is the key difference between the lidars I'll talk about this morning and the lidars I'll talk about one hour from now: instead of recording the whole waveform, a self-driving car lidar will typically use a time-to-digital converter. This is just an electronic chip. When the laser fires, an electronic pulse is created that starts the counter, and when the optical pulse comes back and triggers the electronics, the chip records the time between the start and the stop. That's the time used over here to calculate the distance. So we don't know what happened between the optics and the target; all we know is the time it took the pulse to hit some target and return. That's an important difference from what I'll talk about in the next hour, and the reason is that recording the whole waveform becomes very expensive, with significant trade-offs between the bit depth (the dynamic range) and the speed. Okay, so that's what we're measuring: delta t, the time between these two pulses. Now, I'm not going to give you a whole hour of equations, but we do need to start with some. Let's start with a cartoon. We have a laser over here that puts out some power, the power of the source. It has a divergence angle, theta sub B, and the range to the hard target is R. The beam has a cross-sectional area, A sub B; A sub T is the area of the target, and rho sub T is the reflectance of the target. And then the light comes back.
It's collected by some optics with receiver area, focal length f, and detector area. Those are the parameters we will use. The first version of this is going to be with the receiver field of view filled by the target and filled by the laser beam: the field of view of the receiver is completely full of reflected light. In the next case, we will change that. Okay, the first thing I do when developing an equation like this is think in terms of radiometric quantities. I'm using the optical engineering convention, which some of you will be familiar with and some of you won't. In that convention, we use E to represent irradiance, in watts per meter squared. So we have the power of the laser divided by the beam area, the power of the source divided by the beam's cross-sectional area, which is going to be a function of range, and then we have to multiply by the transmittance of the atmospheric path. Now I can replace the area of the beam with pi times the radius squared, and the radius is just the range times the divergence angle, as long as I start from zero beam size at the lidar; that's an easy modification. So all of a sudden we have a range dependence. This is where the range dependence comes in: from the divergence of the laser beam. So we are already proportional to one over R squared. Okay, so now we have the irradiance incident on the target. We reflect it by multiplying by the reflectivity, and then we turn it into radiance, L; radiance is watts per meter squared per steradian. So we have to divide: the numerator is the reflected irradiance, and the denominator, pi, is the projected solid angle of a hemisphere.
That says that the light gets scattered equally into all angles of the downward-facing hemisphere. That's the assumption I made to keep this simple: this is a Lambertian target, which means the radiance is scattered uniformly into all angles. That's not true for most materials or most objects, but it's a good, simple approximation. If it's not a Lambertian surface, we have to use something else here, called the bidirectional reflectance distribution function, or BRDF. That's fine; if we need to do that, we can, but for simplicity I'm going to use the Lambertian assumption. You insert this equation for E sub t, and now we have this equation on the right. The next step is to propagate that radiance from the object through the throughput of the receiver. The throughput is the area times the projected solid angle. There are two different versions we can use, but I will use the area of the receiver, which is the pupil area of your collecting telescope, and Omega will be the projected solid angle of the field of view. Since the field of view is full, we can define the solid angle with the detector area and the focal length. We have to multiply by the transmittance of the atmosphere again, because we're coming back, and we have to multiply by the optics transmittance. I say "approximately equal to" here because I'm using a small-angle approximation for the solid angle: the projected solid angle is approximately equal to the detector area divided by the focal length squared. There are exact equations that I derive in my longer class, but we don't need them, because this is a very good approximation as long as the angles are small. Okay, so then we can put this equation into that one and we get our final answer. Let me move the faces on my screen so I can see. I do two more things.
I replace the receiver area with pi times the diameter of the receiver squared over four; that's just pi r squared, written with the diameter. And I recognize that the focal length divided by the diameter of the pupil is the f-number of my receiver. So in this equation I have used those two; they're not approximations, just two equalities. Now I have it written in terms of the dimensions and parameters that I usually know for my system. Note that it is proportional to the laser power, so the easiest way to increase the signal is to use a brighter laser. But of course that can cause problems, because we cannot send out unlimited laser light: for a self-driving car in the civilized world, we have to use eye-safe lasers. The detector area is in the numerator. The things in the denominator include the range squared (so as the range gets larger, the signal gets smaller very rapidly), the divergence angle, and the f-number. And notice that we have T sub a squared. That's the atmospheric transmittance squared, because we go out and back, so we get multiplied by it twice. Okay, the main thing I want you to recognize here is that we are proportional to one over range squared, because in the next version that's going to change. So let me move to the next version. This figure is exaggerated, of course, but just to make the point: the only thing we change is to make the receiver field of view much larger, so that now the field of view of the receiver is larger than the target and larger than the beam of light illuminating the target. What happens in this case? This is not a normal situation; we usually don't try to design lidars this way. But with radars, it's almost always this way, because of the longer wavelength and larger diffraction angles. And so with radars,
this form of the equation that we will develop here is very common, but with lidar it's not so common. Still, it's important to understand. Okay, we start with our reflected radiance equation, which is the same equation from the previous slide, because the transmitter is the same. So we start with the radiance reflected by the object. Now we multiply by the throughput, the A-Omega product, also known as the étendue, of the receiver. The only change I have to make here is in the solid angle: instead of using the solid angle of my full field of view, which is larger than my target, I have to use the solid angle shown in red. The red lines indicate the solid angle I need to use: the solid angle subtended by the illuminated spot on the target, which is the area of the target divided by the range squared. Notice that we already have range squared up here, and we have range squared here, so when we put the two together, the whole equation is proportional to one over range to the fourth power. So with lidar, we either have one over R squared or possibly one over R to the fourth. This is even worse, isn't it: if we double the range, we get two to the fourth, which is 16 times less power. Our signal falls off very rapidly with distance in this case. All right, that's pretty much all the slides with heavy-duty equations. Well, I guess I have one more. If we take that equation for the power and divide it by the noise-equivalent power of our detector, we can calculate the signal-to-noise ratio. Then we can solve this equation for R (we have R squared in the denominator, so we just solve for R), and we say the maximum range is this whole ratio, where we use a signal-to-noise-ratio threshold.
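The filled-field-of-view link budget and the threshold-based maximum range I've just described can be sketched numerically. This is a minimal sketch under the Lambertian, small-angle assumptions above; all numeric inputs below are illustrative values of mine, not numbers from the slides.

```python
import math

def received_power(P_s, rho_t, T_atm, T_opt, A_det, theta_div, f_num, R):
    """Filled-field-of-view lidar equation (signal proportional to 1/R**2).

    Chains the steps above: irradiance on the target, Lambertian
    reflection into pi steradians, then the receiver throughput written
    with the detector area and f-number.
    """
    return (P_s * rho_t * T_atm**2 * T_opt * A_det) / (
        4.0 * math.pi * (R * theta_div)**2 * f_num**2
    )

def max_range(k_signal, nep, snr_threshold, falloff=2):
    """Solve SNR = (k_signal / R**falloff) / NEP = threshold for R.

    Use falloff=2 for the filled case above, and falloff=4 for the
    underfilled (radar-like) case.
    """
    return (k_signal / (nep * snr_threshold)) ** (1.0 / falloff)

# Doubling the range quarters the received power in the 1/R^2 case:
args = dict(P_s=10.0, rho_t=0.3, T_atm=0.9, T_opt=0.8,
            A_det=(25e-6)**2, theta_div=0.5e-3, f_num=1.4)
ratio = received_power(R=200.0, **args) / received_power(R=100.0, **args)
print(ratio)  # ~0.25

# With 1 uW equivalent signal at 1 m, 1 pW NEP, and an SNR threshold of 100:
print(max_range(1e-6, 1e-12, 100.0))             # ~100 m for 1/R^2
print(max_range(1e-6, 1e-12, 100.0, falloff=4))  # only ~10 m for 1/R^4
```

The last two lines show how severe the one-over-R-to-the-fourth penalty is: the same hardware that reaches 100 meters in the filled case reaches only about 10 meters in the underfilled case.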
So maybe we make the decision that our algorithms are reliable only for a signal-to-noise ratio greater than 10, for example. Then we put 10 in here, calculate all these numbers, and find the maximum range. Notice that the maximum-range equation has a reflectivity in it, obviously, because a brighter object is easier to see than a dark object. Right now, lots of companies are trying to tell us lots of things about how good their lidars are, and you very frequently see claims that this lidar can detect objects out to 300 meters, while somebody else says theirs only detects out to 200 meters. Well, that doesn't mean anything by itself. We have to know what the object is. Is it reflective? What is the reflectivity? Is it Lambertian or not? We can also do the underfilled case, and then we get a fourth root: the same kind of equation, just solved for R from the one-over-R-to-the-fourth dependence. There also is a minimum range. At the most fundamental level, the minimum range is limited by how fast we can start the electronics, but it's also limited by the overlap between the receiver and the transmitter, and I'll explain that next. So there is a maximum range, but there is also a minimum range, and many people do not realize that. Okay, these are examples I took from a journal paper, on the left, and from a company, on the right. These are good examples: they're shown with the information that you need to understand them. So let's start with the left one. These are calculations for a small lidar with a three-centimeter-diameter aperture, a target with 30% reflectance, and a laser divergence of 0.5 milliradians. You use the equations I just showed you to calculate the signal. What they're doing here is expressing their signal in terms of photons; my equation was in terms of power, and power is energy per time.
So, joules per second. What you have to do is divide by the photon energy, h-nu, Planck's constant times the optical frequency. That turns the equations I gave you into photons per second; multiply by the time over which you are collecting light and you get photons. That's how to convert those equations to this graph. This is a logarithmic vertical axis, which makes the one-over-R-squared behavior become linear; in fact, the range axis on the bottom is logarithmic also. So on this log-log plot, we get straight lines showing the signal as a function of range. What these different lines are for is different pulse energies: a one-nanojoule pulse energy gives this purple line, and one microjoule, a much larger energy, gives this line up here. To read this graph: this solid line is the NEI, the noise-equivalent irradiance, and they are giving it as a photon equivalent. Hopefully most of you know that kind of terminology. When you see something called the noise-equivalent power, it is the optical power required to create a signal-to-noise ratio of one; in other words, here it's the irradiance required to produce a signal-to-noise ratio of one. And the signal-to-noise ratio is the mean signal divided by the standard deviation of the noise, that is, the standard deviation of the signal, which we call the noise. We don't usually want a signal-to-noise ratio of one, but it is a really good reference point. So this is a signal-to-noise ratio of one. And then somebody came along and said, we want a larger signal-to-noise ratio, so we're going to increase it up here. The horizontal dashed line represents the threshold, and that's the threshold the user has to decide on, based on how reliable they think their algorithms are.
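The power-to-photons conversion just described is simple to code. A minimal sketch, with rounded physical constants; the 905 nm wavelength is my illustrative choice, not a number from the plot.

```python
H = 6.626e-34  # Planck's constant, J*s (rounded)
C = 3.0e8      # speed of light, m/s (rounded)

def photons_in_pulse(pulse_energy_j, wavelength_m):
    """Divide pulse energy by the photon energy h*nu = h*c/lambda."""
    photon_energy_j = H * C / wavelength_m
    return pulse_energy_j / photon_energy_j

# A 1 nJ pulse at 905 nm carries roughly 4.6 billion photons, so even a
# tiny return fraction can still contain a countable number of photons.
print(photons_in_pulse(1e-9, 905e-9))
```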
So in this case it's saying that we need to detect about 100 photons to be reliable. Well, with a one-nanojoule pulse, that means your maximum range is, what, maybe 20 meters. With a 10-nanojoule pulse energy, we get a larger range, maybe 90 or 100 meters. If we go all the way up to one microjoule, then we intersect these two lines at almost 1000 meters. But you may not be able to do one microjoule, because you either cannot afford that laser, or it takes too much electrical power, or, more likely, it puts out so much light that it's not eye-safe. We cannot drive a car down the street in a city, or even in the countryside, with lasers that are going to blind people. Okay, so these are the kinds of design curves that we use as lidar designers to determine the performance of our lidar. On the right is the same type of curve, but in this case we have a logarithmic vertical axis for amplitude, which is proportional to the signal, and a linear horizontal axis for distance, so we see curves instead of straight lines. Now we're not using different laser pulse energies; instead, these curves are all for different targets. So the top one is for reflective tape. Reflective tape has a very high reflectivity, but even more important, it is designed to be retroreflective. So this is a retroreflective target. And say our threshold was 10 to the one; that's what this number is, right here. If that were our threshold, the retroreflective tape would be detectable out to a range of 100 meters. This is for an actual commercial lidar. So we are going to be able to detect that type of material out to 100 meters, but you cannot assume that everybody is wearing a coat or a shirt made of retroreflective tape, so you need to detect people wearing dark clothing.
And so you need to look at some of these other curves. Let's go all the way to the bottom: a Kodak gray card, 18% reflectance. This card is made by Kodak and is commonly used for calibrating the auto-exposure systems of cameras; 18% reflectance is about the average of the world around us, so it's a typical value for everything in the world. They're probably making these measurements in the lab. The Kodak gray card is quite Lambertian; the reflective tape is not Lambertian, it's much better than Lambertian. With the Lambertian 18%-reflectance object, you can see that if our threshold were 10 to the one, we would only have a maximum range of about 10 meters. This might be a very nice little lidar, but it's not going to see things effectively very far away, and these are the kinds of objects we have to see in real life: things like people wearing dark shirts. So you can see the problem that we have. Depending on how fast you're driving and how complex the scene is, self-driving lidars are usually designed to have 200 or 300 meters of range. That's the challenge. These quotes down at the bottom are not talking about this system; they're quotes from different marketing materials that I put here just to show you the kinds of things I see people saying, or writing, about their lidars that don't make any sense. "Designed for highway scenarios with road tracking out to 80 meters, lanes to 150 meters, and objects to 250 meters." Well, what is an object? What kind of object? Is it this, or this, or something in between? So this isn't enough information. Here's a quote that says they detect something at a distance of up to 200 meters. Well, again, they're not giving you enough information. And then, "can track objects up to 100 meters away." What kind of object?
So you can see that the companies need to give us more information if we're going to do a good job of comparing one lidar to another. This is a big problem right now: there are no standards in this business. Lidars are just being built, and people are telling us how good they are, but they're not using the same standards. There is a group of us in the process of trying to do some experiments and generate some standards for autonomous vehicle lidars. Okay, so let's talk a little bit about geometric factors. The first one is overlap, illustrated here. Here are my transmit optics, and here are my receive optics. The transmit beam diverges as it goes out, and the field of view of my receiver gets larger as it goes out. At this distance, they start to overlap, and at this distance, the entire laser beam is contained within the field of view of the lidar. This is called the full-overlap distance; this is the partial-overlap distance. And at ranges shorter than that distance, we really don't get any signal at all. So this is the minimum distance, and then this is the full-overlap distance. This is something you could solve partly by using monostatic optics, which means you use the same optics for the transmitter and receiver. But that does not solve the problem entirely, because when an object is up close, the receiver is not focused properly, so you don't get all the light; it's still only a partial overlap, even with monostatic optics. The next geometric quantity we should talk about is range resolution. We said we're measuring range; how accurately can we measure it? Do we measure range perfectly, or is there some uncertainty? The answer, of course, is that there is uncertainty, delta R. This is if we are not limited by the electronics, and actually, until about five years ago, many if not most lidars were limited by the electronics.
But if the electronics are fast enough, then the range resolution is limited by the bandwidth of the receiver. That's a theoretical statement that doesn't make a lot of sense unless we talk about what this bandwidth is: it's the electronic bandwidth, not the optical bandwidth, first of all. In a pulsed laser system, that means delta R is c times delta t divided by two times the refractive index, ΔR = cΔt/(2n), where delta t is the pulse width of our laser: how long the pulse is in time. A typical value for a lot of my lidars is about a nanosecond, and that means the range resolution is about 15 centimeters in air, with n equal to one. In water, that gets compressed and becomes smaller, and we'll talk about some lidars used in water later; I guess that's tomorrow. However, for self-driving cars and trucks and other autonomous vehicles, we're looking at hard targets. This value is the best range resolution you can get for distributed scattering, meaning scattering from particles in the atmosphere. But if we're looking at hard targets, we know that the pulse hits front-first, so we can detect the edge of the pulse and improve the range resolution from 15 centimeters to maybe one or two centimeters. So you will fairly frequently see people talking about autonomous vehicle lidars with a range resolution of a couple of centimeters; that's what we can do, depending on how sharp the edges of the laser pulse are. Okay, this next one is not very important for self-driving car lidars, but it can be an issue, so let's talk about it. The maximum unambiguous range is the range we can measure with one pulse before the second pulse starts giving us a signal. So say I have pulse number one, shown in blue, and pulse number two, shown in red.
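A quick numerical check of the pulse-width-limited range resolution discussed a moment ago; a small sketch, with the function name my own.

```python
C = 3.0e8  # approximate speed of light, m/s

def range_resolution(pulse_width_s, n=1.0):
    """Pulse-width-limited range resolution: delta_R = c * delta_t / (2 * n)."""
    return C * pulse_width_s / (2.0 * n)

print(range_resolution(1e-9))          # ~0.15 m in air (n = 1) for a 1 ns pulse
print(range_resolution(1e-9, n=1.33))  # ~0.11 m in water: compressed, as noted
```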
I understand that some people may not see the difference between those colors, so I will call them number one and number two. We send out pulse number one, and it reflects from a far-away object and starts coming back. Meanwhile, we send out pulse number two, and it reflects from a nearby object. It's possible that we will get both pulses back to the receiver at the same time, and we will be confused: we won't know if we're looking at pulse number one or pulse number two. The important thing is that you have to make sure the time spacing between pulses, tau, is large enough that pulse number one is gone, the signal having faded away to zero, before we launch pulse number two. The maximum range for that to be true is the speed of light times the time between the pulses, divided by two. In historical atmospheric lidars, this was not a problem, because we very rarely fired pulses more often than 10 or maybe 100 times per second. But with a lot of these lidars for autonomous vehicles, of course, we're trying to scan the whole scene about 10 times per second, so we're shooting laser pulses much faster, and quite often we're using diode lasers that can be fired at a rate of, let's say, 10 kilohertz, so 10,000 times per second. That makes this problem become important. Okay, this next one is something you hardly ever see in textbooks or papers, but it turns out that for a self-driving car lidar it's extremely important. Say we have a lidar with a single laser, and that laser is being scanned with a scan mirror. The laser pulse comes out, bounces off the scan mirror, reflects off the object, and comes back. If the scanner is rotating rapidly, by the time the light reaches the scan mirror again, the mirror has moved, so the received light is not received: the backscattered light misses the field of view of the lidar.
So this only becomes a problem if the scan mirror is moving very quickly, but again, we're trying to do very rapid scanning with self-driving car lidars. So this turns out to be a very major problem for autonomous vehicle lidars. Over on the right side I have an example. Say the field of view of our lidar receiver is 10 microradians. That's very small, but that's what we do with lidar receivers very frequently: we only have microradians of field of view, and the reason is that the wider the field of view, the more background light you get, and the more background light you get, the harder it is to detect your laser light. So let's start with a 10-microradian field of view and assume a range of 150 meters. Using the speed of light, three times 10 to the eight meters per second, 150 meters means the propagation time from the lidar to the object and back is one microsecond. The signal will be lost if the scan rate is higher than the field of view divided by that time, one microsecond. That gives 10 radians per second, which is about 573 degrees per second, or 1.6 revolutions per second; call it two revolutions per second. That means the mirror is spinning twice a second, which is not very fast; there are many, many laser scanners that spin faster than that. So we have a big problem if we actually try to do this. What this tells you is that you really don't want to design a lidar to see the whole world around you with one beam; you really want to use multiple lasers. And that's why so many designs now use multiple lasers, especially with diode lasers, where we can build an array of diode lasers. Okay, so let's talk about lasers for autonomous vehicles. The lidars need to be small, so we need our lasers to be small, and therefore we almost always use diode lasers. Sometimes we'll use a fiber laser, but that's still a diode laser with fiber.
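The two pulse-timing limits just discussed, the maximum unambiguous range and the lag-angle scan-rate ceiling, can both be sketched in a few lines; the function names and the 1 MHz example are mine.

```python
import math

C = 3.0e8  # approximate speed of light, m/s

def max_unambiguous_range(pulse_rep_freq_hz):
    """R_unamb = c * tau / 2, where tau = 1 / PRF is the pulse spacing."""
    return C / (2.0 * pulse_rep_freq_hz)

def max_scan_rate_rev_s(fov_rad, range_m):
    """Fastest mirror rotation before the receiver field of view sweeps
    past the returning pulse: rate < FOV / round-trip time."""
    round_trip_s = 2.0 * range_m / C
    return (fov_rad / round_trip_s) / (2.0 * math.pi)

print(max_unambiguous_range(10e3))        # 15000 m at 10 kHz: no problem
print(max_unambiguous_range(1e6))         # 150 m at 1 MHz: a real constraint
print(max_scan_rate_rev_s(10e-6, 150.0))  # ~1.6 rev/s, as in the example
```

The last line reproduces the slide's example: with a 10-microradian field of view at 150 meters, a single scanned beam cannot rotate faster than about 1.6 revolutions per second, which is why multi-laser arrays are so attractive.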
And diode lasers are nice because they're small, they're efficient, they're easy to modulate, and they're low cost. We can use vertical-cavity surface-emitting lasers (VCSELs) to get superior beam uniformity, but they have low optical power, which limits the range. Edge-emitting diodes have highly elliptical beams, so we need some prisms and lenses to make the beam circular. And the emitting area is small, which means that because of diffraction we have high divergence, so we need collimating lenses. These are all the problems of diode lasers. Down here on the bottom left is a plot of the amplitude of the diode laser beam versus angle, and what it shows is that one dimension has a very wide beam and the other dimension has a very narrow beam. That means we have a highly elliptical beam, and we have to fix that. Fiber lasers fix it: they use a seed diode to generate the wavelength that we want, and a pump diode so that we can generate more power. They're brought together with a wavelength-division multiplexer in this case, and some doped fiber provides optical gain. Then we can have one stage, two stages, any number of stages of amplifiers, and out of that last stage we can generate a nice beam; coming out of the fiber, the beam is also nice and circular and symmetric. This is commonly done in the 1550-nanometer wavelength range, because all of this hardware exists at low cost thanks to optical telecommunications. And again, I have to mention laser eye safety. There are many plots we could show for eye safety, but this general one shows the maximum permissible exposure in joules per unit area versus wavelength, with curves for different pulse widths, or pulse durations.
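The high divergence from the small emitting area, mentioned a moment ago, follows from diffraction: as a rough sketch, a source of size d diverges by an angle on the order of lambda over d. The emitter dimensions below are my own illustrative values, not from the slides.

```python
import math

def divergence_deg(wavelength_m, aperture_m):
    """Rough full-angle diffraction divergence, theta ~ lambda / d."""
    return math.degrees(wavelength_m / aperture_m)

# Hypothetical 905 nm edge-emitting diode: ~1 um fast axis, ~100 um slow axis.
fast = divergence_deg(905e-9, 1e-6)    # tens of degrees: needs collimation
slow = divergence_deg(905e-9, 100e-6)  # a fraction of a degree
print(fast, slow)  # the two axes differ ~100x: a highly elliptical beam
```

The hundred-to-one ratio between the two axes is exactly why the prisms and lenses mentioned above are needed to circularize the beam.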
If you pick one of these pulse durations, let's say one microsecond, you see that in the visible wavelengths the allowable energy is much lower than over here in the near infrared. What that tells you is that there is an advantage to using 1550-nanometer lasers, because we can put out much higher pulse energy without causing a threat to a human observer. Of course, the threat we're talking about here is that the light is focused by the lens of our eye onto the retina, and if you focus high energy onto the retina you will burn it and cause permanent damage. Because of this, many people argue that 1550 nanometers is the best wavelength to use, since we can use higher energy. But it's not always the case that it's better. The two wavelengths most commonly used for autonomous-vehicle lidars are 1550 nanometers, using optical telecom devices, and about 800 to 900 nanometers, using standard near-infrared diode lasers. Okay, I need to move a little more quickly here, and I don't need you to understand all the details. The purpose of this plot is simply to show you that even back in 1992, when this old picture was made, people were thinking about how to do lidars with multiple lasers. There are 25 diode lasers here, and this is a rotating scan mirror. This is a way of getting around that lag-angle loss problem we talked about: instead of scanning one laser to see the world, now we are scanning 25 lasers in a vertical fan of beams, and they scan horizontally all at once. From the very early days this has been the approach. A similar approach was commercialized later. This is a picture of a commercial lidar that you can still buy today, made by Velodyne, which is one of the companies out there selling lidars for autonomous vehicles.
It uses 64 diode lasers in a vertical fan, pointing from downward up to horizontal, and this top goes around in a circle continuously, so the idea is that you can use it to see the world around a vehicle. They claim a 120-meter maximum range, although they don't say for what reflectivity, and a range uncertainty of two centimeters; that's about what we talked about earlier. That same company and other companies are moving in new directions these days. The big rotating lidar costs about 70 or 80 thousand dollars. That's more money than the car, so you can't afford to put it on a car. So they are making a smaller version, a very small version with a wide-angle lens so that they can see behind a vehicle, and small handheld versions. The idea in the industry these days is to make smaller, lower-cost lidars so that you can have multiple lidars instead of just one: replace one big expensive lidar with multiple small low-cost lidars. And how do we scan the beams? One option is with a MEMS (micro-electromechanical systems) mirror. This figure shows a monostatic arrangement, where the same optics are used for the transmit beam and the receive beam, and this one is bistatic, where we transmit the beam off the scan mirror and collect the light with a separate receiver. Why not use phased arrays instead of moving a mirror? In radar we just change the phase of the antenna elements: putting a phase gradient across the array, whose Fourier transform is a tilt, steers the beam, so just by electronically tuning these elements we can scan it. That's what these are here: phased-array antennas on these Navy ships and on this airplane. Most radars in the modern world use this technique now, because the wavelengths are long and the devices are easy to make. In the optical world, though, it is much harder.
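The phase-gradient steering just described for radar can be sketched numerically: with element spacing d and a phase step of delta-phi between neighboring elements, the beam steers to sin(theta) = lambda * delta-phi / (2 * pi * d). The numbers below are my own illustrative values, and they hint at why optics is harder: optical emitters are difficult to pack at half-wavelength pitch.

```python
import math

def steer_angle_deg(wavelength, spacing, dphi):
    """Beam-steering angle of a linear phased array:
    sin(theta) = wavelength * dphi / (2 * pi * spacing)."""
    s = wavelength * dphi / (2.0 * math.pi * spacing)
    return math.degrees(math.asin(s))

# Radar-like case: 3 cm wavelength, lambda/2 element spacing, 90-degree step.
print(steer_angle_deg(0.03, 0.015, math.pi / 2))  # 30 degrees

# Optical case: 1550 nm light but a (hypothetical) 2 um emitter pitch,
# larger than lambda/2, so the usable steering range shrinks and
# grating lobes appear.
print(steer_angle_deg(1550e-9, 2e-6, math.pi / 2))  # ~11 degrees
```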
The devices end up being too small because the wavelengths are so small, and we end up being limited by grating lobes, so we can get about 23 degrees of scanning for an optical phased array in one axis and three or four degrees in the other axis. This is a challenge; lots of people are working on it and there are some clever ideas out there, but fundamentally the physics is preventing us from making easy, low-cost optical phased arrays. One solution is shown in this patent by a recent company: if you can only make a phased array that scans over 20 degrees, why not make them small and cheap enough that you can put many of them together? You don't need one phased array to cover the whole 180-degree scan angle; you can use many different phased arrays. Another solution is being explored by this company. I am not here to promote any company, I'm just giving you examples, but this is a very interesting one because they're using dispersive optics. Think of a prism: if you pass light through a prism, then as you tune the frequency or wavelength of your laser, you change the scan angle. That's what they're doing: using dispersive optics to do angle scanning with a frequency-tuned laser. Okay, let's talk briefly about flash lidar. Flash lidar is where you send out a cone of light to illuminate your entire scene all at once. These are pictures from a commercial company that sells flash lidars, illustrating their use. One advantage is that you don't need as much stability in your system as you do with a mechanical scanner, but the huge disadvantage is that you have greatly reduced energy density per pixel, because you're spreading all your light out into a large cone.
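The per-pixel energy penalty of flash lidar can be made concrete with a sketch; the pulse energy and pixel count here are invented, purely for illustration.

```python
def energy_per_pixel(pulse_energy_j, n_pixels):
    """Flash lidar spreads one pulse over every pixel in the scene at once."""
    return pulse_energy_j / n_pixels

pulse = 1e-6  # 1 uJ pulse (illustrative)
# Scanned lidar: the whole 1 uJ lands on one point at a time.
scanned = energy_per_pixel(pulse, 1)
# Flash lidar illuminating a 128 x 128 pixel scene in one shot.
flash = energy_per_pixel(pulse, 128 * 128)
print(scanned / flash)  # 16384x less energy per pixel, hence shorter range
```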
But for near-range lidar this is a really good way to do it, because it allows you to see everything in the near field of the lidar immediately. Again, in my field of remote sensing, we believe that you never solve all of your problems with one sensor; multiple sensors are always required. Here's an example of that: a space probe that is landing autonomously. It has a Doppler lidar for navigation, a flash lidar for landing, and a laser altimeter, also for landing. Why does it use three different lidars? The laser altimeter is a single beam that tells you the distance to the surface when you're very far away. The flash lidar only operates when you're close to the surface, and it gives you an image like this so that you can very quickly adjust where you're landing and avoid a rock or a crater. The Doppler lidar is for measuring velocities as you're moving across the surface. So we need three different lidars for three different purposes. Okay, I want to briefly mention Geiger mode. Everything I've talked about so far has been linear-mode lidar. Geiger mode is a nonlinear mode where you drive an avalanche photodiode with a very large reverse bias. The advantage is that a single photon incident on the detector gives you a flood of electrons, so you have very, very high sensitivity, and very high sensitivity means you can have a very small system. You get very high range resolution because of the sharp pulses that come out of this detector, and we don't need any kind of analog processing. But the disadvantages are that we only get a single return per pulse, and, I think the biggest one, detector dead time: the Geiger-mode detector gives you a pulse and then has to reset, and that takes time. But people are very smart and are working around these problems, and Geiger-mode lidars are looking to be very, very powerful.
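The dead-time penalty just mentioned can be sketched with the standard non-paralyzable dead-time model, in which the detector is blind for a fixed reset interval after every count. This is my own illustration; the 1 microsecond reset time is an assumed value.

```python
def measured_rate(true_rate_hz, dead_time_s):
    """Non-paralyzable dead-time model: after each count the detector is
    blind for dead_time_s, so the measured rate saturates at 1/dead_time."""
    return true_rate_hz / (1.0 + true_rate_hz * dead_time_s)

dead = 1e-6  # 1 us reset time (assumed, illustrative)
print(measured_rate(1e4, dead))  # ~9901 counts/s: almost no loss
print(measured_rate(1e7, dead))  # ~909091 counts/s: heavy saturation
```

At low photon rates almost nothing is lost, but at high rates the detector saturates near one count per dead time, which is why dead time dominates the Geiger-mode trade-offs.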
Here's an illustration that shows how linear mode operates. If you have a bright laser pulse, it's easy to distinguish from the noise; a weak laser pulse gets buried in the noise, and an even weaker pulse gets buried even more. With Geiger mode we get a bright detector pulse regardless of whether the optical pulse is weak, medium, or strong, so we can detect single photons. I'm going to skip these slides; we're just about out of time, but I have one last thing I want to talk about, very briefly: coherent lidar. With coherent lidar we actually make use of the coherence properties of the light. We transmit a laser beam and pick off a piece of that transmitted light as the local oscillator. We mix it on a detector with the scattered light, and the detector responds to the square of the summed fields, which gives us a sum and a difference frequency. The sum frequency is too high to detect, but the difference frequency is our signal: the optical signal frequency minus the local oscillator frequency. The advantages of coherent detection are that it's insensitive to background light, because we are only coherent with our own signal; the dominant noise is shot noise; and we get automatic Doppler measurements. But the cost we pay is that it requires very precise alignment, we have a very limited field of view, and we have more system complexity. I would point out that these disadvantages largely go away if we implement the coherent lidar all in fiber; then the alignment becomes much simpler. In that slide a few minutes ago, I showed you that the spacecraft had a flash lidar and also a Doppler lidar. The Doppler lidar uses the coherent method. What it's doing is splitting its signal three ways, so that it has three different beams.
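The coherent mixing described above, where the detector responds to the square of the summed fields, can be sketched numerically. This is a toy simulation with made-up, drastically scaled-down frequencies; it just shows the difference frequency appearing in the slow, detectable part of the spectrum.

```python
import numpy as np

fs = 100_000            # sample rate, Hz
t = np.arange(fs) / fs  # one second of samples
f_sig, f_lo = 1200.0, 1000.0  # toy "optical" frequencies (scaled way down)

# The detector measures intensity: the square of the summed fields.
intensity = (np.cos(2 * np.pi * f_sig * t) + np.cos(2 * np.pi * f_lo * t)) ** 2

spectrum = np.abs(np.fft.rfft(intensity))
freqs = np.fft.rfftfreq(len(t), d=1 / fs)

# Look only below 1 kHz (the slow part a real detector can follow),
# ignoring DC; the sum-frequency terms live far above this band.
band = (freqs > 10) & (freqs < 1000)
beat = freqs[band][np.argmax(spectrum[band])]
print(beat)  # 200.0 Hz = f_sig - f_lo, the difference frequency
```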
You can see on the left here that there are beams number one, two, and three. That way we are able to measure three different components of the vector motion, so that we get the velocity vector out of our measurement. I don't have time to go into this in detail, but there's a method that uses CW, continuous-wave, lasers. We chirp the frequency, so that it goes up and down, and the copy that comes back is delayed in time. The coherent mixing gives us a beat frequency, so the Fourier transform gives us the beat frequency, which we can interpret as range. The beat frequency is in fact proportional to range, as this equation shows, so we can measure range by measuring the frequency of our beat. The range resolution is inversely proportional to the chirp bandwidth, delta f, so if you can figure out how to chirp a laser over a large optical bandwidth, you can achieve very, very fine range resolution. The only other thing I want to show you is that we also get velocity for free. I'm going to let this play; this is a video that was created by some of my former students, who operate a company selling self-driving-car lidars using this FMCW technique. As this person moves away, the lidar is on the left, looking to the right, so this is blue-shifted and this is red-shifted, and you can see the velocity being determined. And I see that Henry Pinedo has his hand up, so I'm happy to try to answer that question. Okay, there is a question from Henry. Henry, you can unmute and ask the speaker directly. Yes, thank you. I didn't want to interrupt while you were describing the movie. I would like to know what type of lidar is used for wind velocity estimation, which is something that I am trying to learn more about.
And what types of solutions of the kind you presented are used, and how feasible are they to build, for estimating wind velocities? The goal is just to estimate the wind energy potential, basically. Yeah, it's a good question and very timely. In the next section we will talk about atmospheric lidars, and I will talk a little bit about wind lidars. The method I just talked about can be used: many wind lidars use coherent detection, because you get the Doppler shift automatically. There are also some incoherent methods for detecting winds. I can't remember if I have slides about that in my atmospheric section, but I think I might, and there are companies that make these; it's a fairly mature technology. So why don't we hold more details on that question until the next section, when we talk about the atmosphere. Good question. I see that we have Dr. Consortini with us today too. Yes, I have two questions; one is very small. In the beginning, when you presented the formulas, there is the square of the atmospheric transmission, and then there is another T sub zero. What is this? That's the transmittance of the optics. You know, with our interference filter and all these things, that might only be 50 percent. I see. And the second question is different. These lidars work in the near infrared in one case, or farther into the infrared in the other case, so over such short distances there should be no effect from the turbulence, from the atmospheric turbulence, unless there are special situations. Let's say in an airport, where you are behind a plane; in that case this could be a big disturbance. Is this true? I think it is true, and you know much more about turbulence than I do, but it's true that over such short distances it would be pretty negligible most of the time. If we were behind an airplane, if we had a region of high thermal turbulence,
I think it would become very important, and I think in that case the coherent lidar would suffer the most, maybe. Thank you. Yeah, it's a very interesting point. Okay. If anyone still wants to ask a question... okay, there's another question from Hugo. Hugo, you can unmute and ask directly, please. Thank you very much for the talk, very informative. On the very last slide you mentioned frequency-modulated continuous wave. In this case, instead of using fast Fourier transforms, would you use chirp-Z transforms, which are a more general form? Yeah, good question. I don't know right off the top of my head. I guess technically we could develop that theory in terms of Z transforms. I know that when we implement it, we implement it with a fast Fourier transform, but I think it would be an interesting exercise to develop the theory using Z transforms; I think you would get the same kind of result. That's a good point. I'll put my email in the chat so that if anybody wants to send me additional questions, they can. There's also a question in the chat: is there any equivalent to synthetic aperture radar? The answer is yes. I have never worked on one, but it's called synthetic aperture lidar, and there is some work being done in that area. In the basic ranging application, how do changes of the index... oh yes. Changes of the index of refraction change the speed of light, and so all of the ranging equations have the index in them. Fluctuations of that index can manifest as turbulence, and that's what Dr. Consortini and I were briefly discussing. Okay, there is another question: Joseph Shaw, if you can answer, what kind of lidar is used for the detection of PM 2.5 particles in the environment?
That really brings us into the atmospheric lidar world. We can measure backscattering at multiple wavelengths, a minimum of two, and by looking at the ratio of the backscatter signal at the two wavelengths you can get some information about the particle sizes. This has actually been done using Nd:YAG lasers at 532 nanometers and 1064 nanometers; the ratio of those two wavelengths will tell you some information about the particle size, and that's relevant to the PM 2.5 question. Okay, thank you so much for all the answers to the questions. We will talk more about that topic. Yeah, sure. Still, if any participant wants to ask a question, he or she can raise a hand so that I can let you ask the speaker directly. Okay, I think no more questions for now. So, Joe, should we stop, or should the talk continue? Well, that's up to Joe Shaw. I think we can just keep going. Okay, that's great. That led right into it with those questions, so it's good timing.