Okay, so thanks, everybody, for coming back after those great lectures; we're going to have some more right now. The first lecture today is given by Professor Randy Babbitt, also of Montana State University. He's a professor of physics and director of Spectrum Lab. He's won quite a few awards and has 81 papers and 18 patents, so you can tell we're talking to a technology person. I think you and Joe have known each other for quite some time. Oh yeah, we're peas in a pod here. So anyway, we're really happy to have you here. By the way, is John How hooked up here? I don't see him. Well, I was talking to John How today; he was one of our lecturers yesterday. He liked this so much we're thinking about making a program, you know, especially the thing with the honey bees detecting landmines, something kind of global. Maybe there's a spin-off coming out of this for next year's Day of Light celebrations. All right, so anyway, Randy, I'll just let you go ahead and take over here. Let's talk about coherent LiDAR and digital holography, or whatever is on my schedule. That's the game plan. You let them know whether they can ask questions or should raise their hands. Okay, no problem.

Well, let me share a screen here. Yeah, no problem, just interrupt and ask questions. And you can hear me just fine, is that correct? Yeah. All right, let me get my screen shared, and which one do I want, screen two, there we go. You see my full screen? Okay. All right, and then pop that one up here.

Hi, I'm Randy Babbitt, and as I said, interrupt me anytime with questions; Abdu will monitor that, is that right, or someone will keep track. Okay. I am a professor of physics, I've been here 25 years, and I'm director of Spectrum Lab, which I'll talk a little bit about at the end, along with Montana optics companies, if we have time. I'm going to talk today about coherent LiDAR and digital holography, which are two fairly distinct things, and in the end I'll bring them together. So I'll start with LiDAR, direct versus coherent detection. For coherent LiDAR I'm going to concentrate on FMCW, frequency-modulated continuous-wave LiDAR: the principles, the advantages and disadvantages, and chirp generation, which used to be a major problem but is second nature these days, and what it can be used for, everything from metrology of small objects to long-range LiDAR. Then I'll switch gears and talk about digital holography. I'll cover image-plane digital holography and pupil-plane digital holography; to talk about pupil-plane we need to learn a fair amount about optical wave propagation, so I'll go into that, how we recover the image, and how you can refocus after the fact, after the recording. Then the requirements, the signal-to-noise ratios, and the depth of field you get with digital holography. And finally I'll talk about FMCW digital holography, which is something we've developed here at Montana State: a way of doing range-selective digital holography by combining FMCW LiDAR with digital holography. So let's start by thinking about direct-detection LiDAR.
In general, you send out an intensity pulse. It could be a simple pulse or a modulated pulse, but something goes out in time and hits a target, and what you get in return is a convolution of what you sent out with the target's range profile, so the return can be quite complicated. But if you send out a pulse and the target is just a flat surface that is small with respect to the range dimension, you basically get a pulse back. So we send out a pulse of width t_p, wait a time tau, which is the time it takes the pulse to get from the transmitter to the target and back to the receiver, so tau = 2R/c, and you get a pulse return, assuming the target acts roughly like a delta function. Your range resolution is essentially your ability to measure time accurately, and in this case that's roughly set by the width of the pulse, Delta R ≈ c t_p / 2. I'll say a little about the Rayleigh limit in a second and why those two are related. So if you have a six-nanosecond pulse, you get about one meter of resolution, and you need a detector with better than about 150 MHz of bandwidth to see it. If you want something on the order of one-centimeter resolution, you need roughly 60-picosecond pulses and a 15 GHz detector. So pulsed LiDAR has an issue: range resolution requires high bandwidth.

I want to say a little about range resolution, precision, and accuracy; you have to keep those apart. Resolution is really your ability to resolve, and it's essentially a Rayleigh criterion: it's when two targets can be resolved rather than sitting on top of each other. Here I'm showing three cases: two targets separated by more than their range resolution, two separated by exactly twice the range resolution, and the case where they become one blob, which is when they are separated by one range resolution. Here are the separated individual target returns, and if you combine them you can see that when the separation between the two targets drops to the range resolution, they look like one blob and you lose their individual character. So that's your range resolution, and in essence it's the pulse width times c divided by two.

Your precision can be much better, though. If you have a single pulse and you want to know how precisely you can locate that target, and you have good SNR, you can find the center of that blob to roughly the range resolution divided by the square root of the signal-to-noise ratio. So you can zero in on the center. It does assume you have only a single target and good timing resolution, but with good signal-to-noise you can get much better precision than your range resolution.

Accuracy is another thing, and it depends on many factors: your timing accuracy, your knowledge of the propagation through the optics and through the atmosphere or whatever medium you're going through, and the index of refraction. The difference between assuming the index of air is 1 and it really being 1.0003 or 1.0006, depending on your temperature, wavelength, pressure, and humidity, matters.
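As a quick sanity check of the pulsed-LiDAR numbers just quoted, here is a minimal sketch of the resolution and precision rules above. The pulse widths and SNR value are illustrative, chosen to match the figures mentioned in the talk.

```python
# Quick check of the pulsed-LiDAR numbers quoted above (illustrative values).
c = 3.0e8  # speed of light, m/s

def range_resolution(pulse_width_s):
    """Range resolution of a pulsed LiDAR: dR = c * t_p / 2."""
    return c * pulse_width_s / 2.0

def range_precision(resolution_m, snr):
    """Single-target centroid precision improves as dR / sqrt(SNR)."""
    return resolution_m / snr**0.5

print(range_resolution(6e-9))      # ~0.9 m for a 6 ns pulse
print(range_resolution(60e-12))    # ~9 mm for a ~60 ps pulse
print(range_precision(0.9, 100))   # ~9 cm centroid precision at SNR = 100
```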
If you assume the wrong index, you can get considerable errors: at a kilometer of range that's on the order of a 0.3 meter error.

Now let's talk a little about direct versus coherent detection. Here is a light field returning from a target: an amplitude envelope delayed by the round trip to the target, riding on a sinusoidal carrier that is also delayed. The assumption here is that the bandwidth of the modulation on this pulse is less than the bandwidth of your detector, which in turn is much less than the frequency of the light. So you can't detect the optical frequency itself, but you can detect the intensity envelope, and in direct detection that's what you do. A detector is really a square-law detector: it squares the optical field and then averages over the integration time of the detector, which is roughly one over the detector bandwidth. If you square a cosine, you get a DC term plus a term beating at twice the optical frequency; the averaging removes the twice-optical-frequency term, and all you see in direct detection is the envelope squared.

In coherent detection, you mix the optical field with a local oscillator. You take the delayed return signal and mix it with a local oscillator, which we write as a cosine, and typically you do this at a different frequency: heterodyne detection rather than homodyne detection. So you mix the return signal, which is at the laser frequency, with another laser, or a shifted copy of the transmit laser, that is slightly offset, and you look at the intensity of the combined field. That intensity contains the intensity of the return pulse, the intensity of the local oscillator, which is flat in amplitude, and the cross term. The sum-frequency term, f_LO + f_laser, averages out, so only the difference frequency survives. You get back a signal with a beat on it whose envelope is the envelope of the return pulse. If your LO is much stronger than your return, which is what you want when doing coherent detection, you can ignore the return-intensity term, and you have a nice DC offset plus an amplified return signal: up in direct detection you had A_R squared, here you have A_LO times A_R, so you effectively gain a factor of the LO amplitude over the return amplitude. The signal sits on a carrier at the offset frequency, which is nice because it's removed from DC and easier to detect, and you can adjust that frequency to match whatever detector you're using. There's also a phase term, which we usually ignore, but it's actually interesting: on the one hand, added phase can destroy your signal, but on the other hand you can use it for things like frequency-shifted holography, or to study vibrations or Doppler motion of a target. So that phase term can be useful in many cases.

Pictorially: in direct detection you basically just get a blob back, the envelope squared of your return pulse; in coherent detection you still have the envelope, but it's modulating a cosine and riding on a DC background.
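Here is a toy baseband model of those two detection modes, written directly from the formulas above once the optical-frequency terms have averaged out on the square-law detector. The envelope, LO amplitude, and offset frequency are illustrative assumptions, not values from the talk.

```python
import numpy as np

# Toy model of the detected intensity (baseband, after the square-law detector
# has averaged away the optical-frequency terms). Numbers are illustrative.
t = np.linspace(0, 1e-6, 10000)                  # 1 us observation window
A_R = 1e-3 * np.exp(-((t - 0.5e-6) / 0.1e-6)**2)  # weak return-pulse envelope
A_LO = 1.0                                        # strong local oscillator
df = 20e6                                         # heterodyne offset f_LO - f_laser
phi = 0.3                                         # residual phase (range/motion dependent)

I_direct = A_R**2                                 # direct detection: envelope squared
I_coherent = A_LO**2 + A_R**2 + 2*A_LO*A_R*np.cos(2*np.pi*df*t + phi)

print(I_direct.max())                             # ~1e-6: tiny
print((I_coherent - A_LO**2).max())               # ~2e-3: boosted by ~A_LO/A_R
```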
This is just a pictorial way of looking at what we were showing before. The one issue with that phase term is that if the target is moving on the order of a wavelength, a displacement of lambda over four flips that cosine into a sine, which can be a problem: vibrations or motion can distort your signal or diminish your return if you integrate it. But as I said, the nice thing is that the phase term can also give you Doppler measurements or vibrometry; if the motions are small, you can study the vibrations of a target through that phase term. I just wanted to mention that, I won't go further into it.

Now the signal-to-noise considerations. For direct detection, your signal is governed by the return power, the duration of the return pulse, your quantum efficiency, Planck's constant, and the optical frequency; that sets the number of photoelectrons. The noise, in the simplest picture, is the shot noise, the square root of the number of return photoelectrons, plus the in-band thermal noise. So your signal-to-noise is the number of return photoelectrons divided by the square root of that number plus the thermal noise, everything expressed in photoelectrons. To become shot-noise limited, the shot noise of your signal has to be much greater than the thermal noise, so you need very strong returns to be shot-noise limited in direct detection.

In coherent detection, your signal is the square root of the number of LO photoelectrons times the number of return photoelectrons. Your noise is the square root of the LO photoelectrons, which dominates; strictly it's the square root of N_LO plus N_R, but we usually drop the N_R, plus the thermal noise. So the signal-to-noise is the square root of the number of return photoelectrons times the square root of N_LO, divided by the square root of N_LO plus the thermal noise. The interesting thing is that with a really strong LO, the square root of N_LO factors cancel, and you get shot-noise-limited detection, SNR roughly the square root of the number of return photoelectrons, even when the number of return photoelectrons is much less than the thermal noise. So you can approach single-photon-level detection even in the presence of thermal noise. That's one nice thing about coherent detection: the amplification takes you to shot-noise-limited signal detection.

Now let's talk about what it takes to get good resolution with pulsed LiDAR. As we said, the range resolution is a function of the pulse duration, which is inversely related to the pulse bandwidth, and to the detector bandwidth you need to measure the pulse. So one way to increase your resolution is to increase the bandwidth, i.e. shorten the pulse. If you don't increase your power, that's less energy, so to keep your signal-to-noise the same you need intense, brief pulses. That can get expensive and technically difficult: brief optical pulses can cause optical damage, they can be less eye-safe, they require high-bandwidth detectors to see them, and there can be distortion and jitter in creating them.
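To see the shot-noise-limited point numerically, here is a minimal sketch of the two SNR expressions just described; the photoelectron counts are invented for illustration.

```python
import numpy as np

def snr_direct(n_sig, n_thermal):
    # Direct detection: signal = N_sig photoelectrons,
    # noise = sqrt(N_sig + N_thermal)
    return n_sig / np.sqrt(n_sig + n_thermal)

def snr_coherent(n_sig, n_lo, n_thermal):
    # Coherent detection: signal = sqrt(N_LO * N_sig),
    # noise dominated by LO shot noise: sqrt(N_LO + N_thermal)
    return np.sqrt(n_lo * n_sig) / np.sqrt(n_lo + n_thermal)

n_thermal = 1e4                            # assumed in-band thermal photoelectrons
print(snr_direct(10, n_thermal))           # ~0.1: ten return photons buried in thermal noise
print(snr_coherent(10, 1e8, n_thermal))    # ~3.2 ~ sqrt(10): shot-noise limited
```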
So that's why we go to FMCW: it's a way of increasing bandwidth without shortening the pulse. FMCW is based on chirped pulses. What's a chirped pulse? Actually, I think Joe should do this, he did a really good chirp sound the other day, but I'll try; listen to Joe's recording, his is better. A chirped pulse is a pulse that starts at one frequency and ends at a different frequency, ramping linearly from the starting frequency to the end frequency over the chirp bandwidth B, and it does that over a duration tau_C. You'll see this chirp bandwidth and duration a lot, so remember them. The chirp rate kappa is the ratio of the bandwidth to the duration, and you'll see that a lot too. Mathematically you can write the field as a cosine with phase 2*pi*(f_0*t + (1/2)*kappa*t^2); the frequency is the derivative of that phase, so the instantaneous frequency is f_0 + kappa*t, and that is the linear chirp.

All right, so let's look at how FMCW LiDAR uses that. The transmitted pulse has phase phi_T(t) = 2*pi*(f_L*t + (1/2)*kappa*t^2); I've converted to angular units here, with the factor of 2*pi. The received pulse comes back a time tau later, so it's the same expression with t replaced by t minus tau. When you do coherent detection, you beat the two together, and what you end up with is essentially the difference of those two phase terms. If you get rid of the high-frequency terms, by filtering or averaging the intensity a little, there's a phase term that depends on the optical frequency, f_L*tau, and then there's this very interesting term, 2*pi*kappa*t*tau: a beat at frequency kappa*tau. And tau is related to the range, tau = 2R/c, so that beat frequency is proportional to the range. If you measure the beat frequency of the coherent detection, divide it by kappa, multiply by c and divide by two, you get the range. That's how you convert FMCW signals.

Let me talk about the typical setup first. You have a chirped laser; you split it into a transmit beam and an LO. You send the transmit beam out through a circulator, which is nice because the transmitted pulse goes out and the return pulse comes back on a different path. The return beats with the LO, you digitize that, and I'll talk in a moment about the requirements on the digitizer and detector, and then you Fourier transform. So you send out a chirp, it comes back as a delayed chirp, and the delay in time causes a frequency difference between the transmitted pulse and the return pulse. That difference is the beat frequency kappa*tau, and because it's a linear chirp that beat frequency is constant, so you get a constant tone during the whole pulse.
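Here is a minimal numerical sketch of that constant-tone beat and the processing described next (FFT, then conversion to range). The chirp parameters, sample rate, and target range are illustrative, not values from the talk, but they are chosen in the MHz-class-detector regime discussed.

```python
import numpy as np

# Minimal FMCW range simulation (illustrative numbers).
c = 3.0e8
B, T = 2e9, 1e-3                     # 2 GHz chirp over 1 ms
kappa = B / T                        # chirp rate
fs = 10e6                            # MHz-class detector / digitizer sample rate
t = np.arange(0, T, 1/fs)

R_true = 42.0                        # target range, metres
tau = 2 * R_true / c
# Beat signal after coherent detection: a constant tone at kappa*tau
beat = np.cos(2*np.pi*kappa*tau*t)

spec = np.abs(np.fft.rfft(beat * np.hanning(t.size)))
f_axis = np.fft.rfftfreq(t.size, 1/fs)
R_axis = c * f_axis / (2*kappa)      # convert beat frequency to range

print(R_axis[np.argmax(spec)])       # ~42 m
print(c / (2*B))                     # range resolution c/(2B) ~ 7.5 cm
```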
If you take a Fourier transform of that constant tone, you get a beat frequency that can be converted to a range. So here's your fast Fourier transform, with the units converted to range, and you get a very nice sharp peak with a range resolution of c over 2B, which is the same range resolution we had with pulsed LiDAR, c over twice the bandwidth of the brief pulse. But now the pulse is much longer than a brief pulse would have to be: the range resolution comes entirely from the bandwidth, not the duration. The duration of the pulse enters through kappa and through your ability to measure that beat frequency well.

All right, so what are the advantages and disadvantages of FMCW LiDAR? You now have a range resolution set by your chirp bandwidth, but durations that can be much longer than one over the bandwidth. You can have a multi-gigahertz chirp over a millisecond, so the duration can be anywhere from a thousand to a million times one over the bandwidth; that's typical. That allows you to put a lot more energy on the target without much peak power. The peak power can be quite low, but you have a long pulse, so a lot of energy in the pulse, and that higher energy gets you better SNR. There's less chance of optical damage because the pulses are relatively weak: you can have a one-watt signal, but it's on for a long time. Going to long pulses rather than brief pulses is also more eye-safe. The detector bandwidth requirement is no longer the bandwidth of the chirp or the pulse; it's only the bandwidth needed to measure the beat frequency, so you can use much lower-bandwidth detectors: you can get the equivalent of gigahertz resolution with megahertz-class detectors, and likewise digitize in the megahertz range. And because the chirps can be well characterized, which I'll talk about in a bit, you have much less distortion and jitter. The disadvantage is that chirped sources are a little more expensive and technically difficult to produce, but they're becoming ubiquitous: the volume of FMCW sources is getting large, and there are new techniques beyond the one I'll describe for producing FMCW laser sources, so the cost is coming down and the performance is getting better.

When I first got to Montana State, I did nothing on LiDAR or anything like that. I was working on something called spectral holography rather than spatial holography, which is really a form of spectroscopy. We wanted to read out a material very quickly, and I wanted a chirped pulse that could cover 10 gigahertz in a millisecond; it took ten years to develop that readout capability. I'm not going to go into it, I'm just giving the history. One of the issues we ran into is chirp nonlinearity. I was working with chirped pulses in the 80s, and you would basically ramp the current of your diode or sweep a grating. The problem is it wasn't linear, and if the chirp isn't linear, if it wanders along some path, then when you send it out and get the delayed return, the beat frequency varies: you get a medium beat frequency, a big beat frequency, a small beat frequency.
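As a quick illustration of what that varying beat frequency does to the range peak, here is a sketch comparing a perfectly linear chirp with one whose chirp rate drifts during the sweep. The drift model and numbers are assumptions for illustration only.

```python
import numpy as np

# Illustration (assumed numbers) of how chirp nonlinearity smears the range peak.
c = 3.0e8
B, T = 2e9, 1e-3                     # 2 GHz over 1 ms
kappa = B / T
fs = 10e6
t = np.arange(0, T, 1/fs)
tau = 2 * 50.0 / c                   # target at 50 m

def beat(rate_drift=0.0):
    # rate_drift: fractional drift of the chirp rate across the sweep
    kappa_t = kappa * (1 + rate_drift * (2*t/T - 1))
    # instantaneous beat frequency ~ kappa_t * tau; integrate to get the phase
    return np.cos(2*np.pi*np.cumsum(kappa_t) * tau / fs)

for drift in (0.0, 0.05):            # perfect chirp vs 5% rate drift
    spec = np.abs(np.fft.rfft(beat(drift) * np.hanning(t.size)))
    print(drift, spec.max())         # the peak collapses as energy spreads over range bins
```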
What that varying beat frequency does is greatly broaden your peaks and reduce their height. Your signal is spread out, and your range resolution is essentially destroyed. You can have a gigahertz chirp that should give meter-type resolution, but because of the chirp noise you get errors approaching a kilometer; it's not good. Tom Mossberg, who was my advisor when I was a graduate student, and his team came up with an idea for stabilizing the chirp. You take a fiber delay, send your chirp through that interferometer, and measure the beat. The laser doesn't have to be stable for long, only over the duration of the chirp; you compare the measured beat frequency to some reference beat frequency and feed that back so the laser constantly keeps the beat equal to your reference. That feedback system linearizes your chirp.

We took that technique and advanced it to the point where we took a grating-tuned laser, which can sweep terahertz, and did terahertz scans over milliseconds. This is work done by Pete Roos and Randy Reibel; Randy was one of my students, and Pete and Randy were working in Spectrum Lab. Pete is also a Ph.D. graduate of MSU, and Zeb was an MSU undergraduate, so all these people are MSU graduates. They came to me and said we can do not just 10 gigahertz over a millisecond, but a terahertz, and with sub-100-kilohertz linearization, and they came up with a technique for doing that. That was the good thing. The good-and-bad thing was that they said, well, we could care less about spectral holography, we're going off to do LiDAR work. So Randy and Pete formed Bridger Photonics, which then spun off Blackmore Sensors, which then became Aurora. Major economic development came out of this in the Bozeman valley.

Here's a little of what the results look like. If you take the grating-tuned laser, just sweep the grating, and look at the linearity errors, they're on the order of half a gigahertz: if you just chirp the laser unlocked, you get half-gigahertz errors as you sweep in time. But if you lock it, that little trace right down there, and here's a blow-up of it, you can get that noise down to the order of 100 kilohertz or better. What that looks like in relative range resolution is that rather than the noisy, meter-type resolution you get unlocked, you get a nice spike with the equivalent of about 50-micron resolution. Here's an example of that 50-micron resolution at roughly a meter of distance. They tested it by moving a target with a precision actuator and showed they could measure 10-micron steps: with about 47 microns of resolution, they achieved about 86 nanometers of precision, which means that because they knew it was a single target, they could locate its position down to sub-micron levels. They used that for some fun things like profiling: they could look at a penny, look at the word Liberty on the penny, and get surface profiles of it.
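One rough way to connect the locked and unlocked numbers above: treat the usable resolution as c/(2*B_eff), where B_eff is limited by the linearity error when the laser is unlocked and by the full sweep when it is locked. The ~3 THz sweep below is an assumption chosen to match the quoted ~50 micron figure, not a number stated in the talk.

```python
# Rough mapping from chirp linearity to usable range resolution.
# The 3 THz sweep is assumed to match the ~50 um figure quoted above.
c = 3.0e8

def resolution(effective_bandwidth_hz):
    return c / (2.0 * effective_bandwidth_hz)

print(resolution(0.5e9))   # unlocked: ~0.5 GHz linearity error -> ~0.3 m
print(resolution(3e12))    # locked:   full ~3 THz sweep usable -> ~50 um
```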
They could also go into Lincoln's head and look at the wrinkles in his hair. Let's see, what else? They also looked at contact lenses and analyzed the profiles of contact lenses being made by a Montana company called WaveSource. And Bridger Photonics, which collaborated with us, looked at long-range measurements with these LiDAR techniques: they showed that at a kilometer they could get 100-micron range resolution, and they went out to 14 kilometers and still got sub-millimeter resolution, and that was really turbulence-limited. So: very precise, high-resolution measurements at long distances.

Another nice thing about FMCW, a good-and-bad thing, is that the beat frequency also shifts with the motion of the target. Say you had a target out there moving toward you; that would upshift the return. So the delay causes a frequency beat that's a function of range, and then the motion causes a Doppler shift on top of it, which also shifts the beat. That gives you a range-Doppler ambiguity, and you'll hear about range-Doppler ambiguity a lot with FMCW techniques, whether it's radar or LiDAR (by the way, FMCW has been used in radar for a long time, and range-Doppler ambiguity has always been an issue). You can solve it by doing up and down chirps. Here's the case without Doppler: with an up-down chirp, here's your transmit, here's your return, and the beat is the same on both legs when there's no motion. But if the target is moving, the return signal is upshifted in both places, and that upshift reduces the beat on the up-chirp and increases it on the down-chirp. You can use those two measurements to separate the range beat, the part proportional to kappa*tau, from the speed, the Doppler shift. So the up and down chirps remove the range-Doppler ambiguity.

Blackmore Sensors, which spun out of Bridger Photonics and became Aurora, came up with a double-sideband FMCW technique. I don't want to go too far into it, but you have two sidebands chirping, a plus and a minus sideband, which you get if you electronically chirp an electro-optic phase modulator, and you add a frequency shift with an acousto-optic modulator. If you go through all the math, which I won't (you can look at their patent), then from a single measurement, rather than having to do an up and a down chirp, you get two peaks: if the target is stationary you get these two peaks, and if there's a Doppler shift one moves one way and the other moves the other way, so you can analyze the pair of peaks to get the same information in a single measurement. They took that technique and did a lot of interesting things. First, with ordinary scanned FMCW of stationary targets, they showed the high precision they could get. But here is the Montana State stadium. This isn't just a picture; it's a point cloud from a scanned FMCW system, where they've measured the distance to every point here and plotted it.
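Before moving on, here is a minimal sketch of the up/down-chirp bookkeeping just described: the sum of the two beat frequencies isolates the range, and their difference isolates the Doppler. The chirp rate, wavelength, and beat frequencies are illustrative assumptions.

```python
# Sketch of how up/down chirps separate range from Doppler (illustrative numbers).
c, lam = 3.0e8, 1.55e-6              # assumed 1550 nm operating wavelength
kappa = 1e12 / 1e-3                  # assumed 1 THz chirp over 1 ms

def range_and_velocity(f_up, f_down):
    """f_up = kappa*tau - f_D (up-chirp), f_down = kappa*tau + f_D (down-chirp)."""
    f_range = 0.5 * (f_up + f_down)  # Doppler cancels
    f_dopp = 0.5 * (f_down - f_up)   # range beat cancels
    R = c * f_range / (2 * kappa)
    v = lam * f_dopp / 2.0           # positive = moving toward the LiDAR
    return R, v

print(range_and_velocity(272.42e6, 277.58e6))  # ~41 m, ~2 m/s toward the sensor
```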
One nice thing about these point clouds, and I don't have movies of this one in particular, is that you can rotate them and look at them in 3D. Likewise here, you can rotate this one and look at it in 3D. One way you can see that this is 3D: the shadows you see here are not shadows of the sun but shadows of the LiDAR. The LiDAR was sitting over here, looking this way, and they were able to turn the image after the fact, turn the point cloud, and look at it from a different angle. That's one nice thing about the point clouds you can make with FMCW: you can turn them all around and get real 3D. They also showed they could do range and velocity together. Blackmore became Aurora, which is very much into automotive LiDAR, which Joe was talking about the other day, and here is an interesting scene where you can see the cars going by and people walking by. What you'll see is that sometimes the people turn blue or red; that's whether they're moving toward you or away from you. The LiDAR is here; blue is coming toward the line of sight, red is going away from it, and things moving across the line of sight stay white. So you get the range and the Doppler velocity together, and this is the basis of a lot of their work on automotive LiDAR.

And how do I get beyond that? There we go. All right, now let's totally shift gears to digital holography. But are there any questions about the FMCW at this point? No, Randall, thank you, for now there is no question. OK. So let's totally shift gears to digital holography, and let's cover all of digital holography in about 15 minutes.

In digital holography, you also send out a field that hits a target, and the scattered light field comes back and hits the camera. At the same time, a reference beam hits the camera; we can describe the reference as a constant field times a cosine, a plane wave. The object beam is much more complicated: it's some optical field whose direction we don't actually know, with some complicated phase and some complicated amplitude at the camera. That is the thing we're trying to measure: what is the amplitude A(x, y) and the phase phi(x, y) of this field scattered from the object? We do that by interfering the object field with the reference field. We beat them together and capture the intensity, and again the intensity is the time-averaged square of the sum of the object and reference fields. If you look at the cross terms, you have a product of the object field with the reference field; there are also DC terms, which we typically ignore because we can filter them out thanks to the modulation: the object-reference cross terms ride on a modulated fringe background, so by doing a Fourier transform we can filter out the DC terms and pick out only the terms of interest. The reference term is a constant, so from the cross term we can pull out the object field itself; I'll say a little more about that. I just want to say, whenever I say camera, I really don't mean a camera with a lens. I'm talking about a lensless camera, a focal-plane array; when I say camera, it's really just the sensor, not the sensor plus lens.
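A compact way to write the recorded intensity just described, in standard off-axis holography notation (not necessarily the speaker's exact symbols):

$$
I(x,y) \;=\; \bigl|E_O(x,y) + E_R(x,y)\bigr|^2
\;=\; |E_O|^2 + |E_R|^2 + E_O E_R^{*} + E_O^{*} E_R,
\qquad E_R(x,y) = A_R\, e^{\,i\,\mathbf{k}_R\cdot\mathbf{r}} .
$$

The cross term $E_O E_R^{*}$ carries the full complex object field, amplitude and phase, on a fringe carrier set by the reference tilt $\mathbf{k}_R$, which is what the Fourier-domain filtering described below isolates.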
All right, so a plane wave on its own is just a uniform background; here's a rendition of a plane wave. When two plane waves meet, you can see patterns of high and low intensity: fringes that form gratings with a periodicity set by the angle at which the two beams meet and the wavelength of the light. When the reference beam and the object beam meet, they form thousands of gratings on the camera. There are two geometries in which we can do the beating on the camera. We can do it with no lens in the system, which is called lensless holography, also pupil-plane holography, meaning you're far enough from the target that you're essentially in the Fraunhofer limit and looking at the pupil plane; or we can put a lens in, image the target onto the camera, and do image-plane holography. I'm going to talk about image plane first and then pupil plane.

In image-plane holography, you've taken the target and imaged it onto the camera, and you've formed an interference pattern that contains the interference of the object with the reference, its complex conjugate, and some DC terms. If you take that combination on the camera and do a two-dimensional spatial fast Fourier transform, then because of the k-dot-r term in the reference, one cross term shows up here, the other shows up there, and the DC terms show up in the center of the Fourier domain. If you put a mask around one of the cross terms, blocking the other one and the DC terms, and then inverse Fourier transform, you get your target back, and the nice thing is you've got not just the intensity of the target but the full complex information, amplitude and phase, of the field at that point.

The setup looks a little different from what I just showed. You take a laser; you want to control the polarization, because when the beams meet at the camera you want them to meet with the same polarization. You form a reference beam and you want it nicely spatially filtered: the better the spatial filtering and the more uniform it is, the better the reconstruction. You take a transmit beam, hit a target, image that onto the camera, record it, and then do a lot of post-processing. That's basically the same thing I showed a second ago.

Oh no, wait, let's talk about this one: that's not image plane, this is pupil plane, I'm sorry, lensless pupil-plane recording, so change that word to pupil. We just talked about image plane; in pupil plane there is no lens. What you see on the camera is a blur of the target, beaten against the reference signal. You take that blur of the target, beat it with the reference, and then in processing you add in a phase factor, which we'll talk about in a second, that focuses the beam when you do the Fourier transform and actually produces the target; then you can mask that out and get the real and imaginary parts. To understand this, we need to know more about how optical field propagation works, and what I'm going to describe very briefly is how to get from the object plane, which we'll call U1, to the camera plane, which we'll call U2. That's based on Huygens' principle: every point becomes a spherical wave, so a plane wave is really just a bunch of spherical waves that reproduce the wavefront, and spherical waves work the same way.
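Before continuing with propagation, here is a minimal sketch of the image-plane, off-axis demodulation just described (FFT, mask one sideband, recenter it, inverse FFT), run on a synthetic object field and a tilted plane-wave reference. Everything in it is an illustrative assumption, not the speaker's actual processing code.

```python
import numpy as np

# Off-axis, image-plane hologram demodulation on synthetic data (illustrative).
N = 256
y, x = np.mgrid[0:N, 0:N]

# Synthetic complex object field (smooth amplitude, slowly varying phase)
obj = np.exp(-((x - N/2)**2 + (y - N/2)**2) / (2*30**2)) * np.exp(1j*0.02*(x - y))

# Tilted plane-wave reference; the tilt sets the fringe (carrier) frequency
kx, ky = 40, 26                                   # carrier, in FFT bins
ref = np.exp(1j*2*np.pi*(kx*x + ky*y)/N)

I = np.abs(obj + ref)**2                          # what the camera records

# Demodulate: FFT, mask the sideband carrying obj*conj(ref), recenter, inverse FFT
F = np.fft.fftshift(np.fft.fft2(I))
cy, cx = N//2 - ky, N//2 - kx                     # that sideband sits at minus the carrier
mask = np.zeros_like(F)
r = 20
mask[cy-r:cy+r, cx-r:cx+r] = 1.0
recovered = np.fft.ifft2(np.fft.ifftshift(np.roll(F*mask, (ky, kx), axis=(0, 1))))

# recovered should match the complex object field (amplitude AND phase)
corr = np.abs(np.vdot(recovered, obj)) / (np.linalg.norm(recovered) * np.linalg.norm(obj))
print(corr)                                       # close to 1
```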
Back to propagation. If you take Huygens' principle, say that every point on the object becomes a spherical wave, ignore the cosine inclination factor (which is hard to explain, so we usually just set it to one), and take this integral to get what's produced at the new plane U2, you get the equation here. If you then make the Fresnel approximations, assuming the range is large, you can simplify k times r, set the cosine to one, and you get the Fresnel diffraction integral: the field U1 is multiplied by this term and this term, and you get the field at U2. You can continue to make approximations. If you break this up into x, y coordinates and make some further assumptions, and I'm going to go quickly here because I'm running out of time, you find that in Fresnel propagation you have the field of the object multiplied by a quadratic phase term, inside something that looks a lot like a Fourier transform: an integral over x1 and y1 of a function of x1 and y1 times e to the minus i 2 pi x1 x2 over lambda z, so x2 over lambda z plays the role of a spatial frequency. You're really doing a fast Fourier transform, and then there are quadratic phase terms on the back end that you have to take care of as well. So the result is a Fourier transform of the field with some phase terms added in.

What makes that interesting is that when you're doing pupil-plane holography and you capture the return, you can basically undo those phase terms and get the electric field back. Here's the field you would get from Fresnel propagation; you record it in the intensity of your hologram, and you pull out the object field by doing the Fourier transform, but before you do that, you multiply the hologram by the opposite of that quadratic scale factor, which is really a focusing factor. You undo the defocusing and pull out the image. Now the interesting thing is that the range in that factor is the range to the target, and this processing picks out that range: if there were two targets at different ranges, you can set the range here to whichever target range you want and bring that one into focus. So you can actually focus on different objects after the fact, depending on what you put up front when you do the Fourier transform. I'm not going to cover the Fraunhofer case. This slide shows how, if you change that focusing factor, you can bring targets into focus at 20 centimeters, or out of focus at 12 centimeters, which is pretty blurry, or at 30 centimeters, also blurry; you can change what you're focusing on. If there had been a target at 30 centimeters somewhere else, it would have shown up in focus.
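Here is a small sketch of that "focus after the fact" idea: propagate a synthetic object field to the camera plane, then numerically back-propagate with a chosen range z and see which z brings it back into focus. For simplicity it uses an angular-spectrum propagator rather than the single-FFT Fresnel form on the slide, and the wavelength, pixel pitch, and ranges are assumed; the refocusing principle is the same.

```python
import numpy as np

# "Focus after the fact" with a simple angular-spectrum propagator (assumed parameters).
N, dx, lam = 512, 10e-6, 1.55e-6           # grid size, 10 um pixels, 1550 nm
fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)

def propagate(u, z):
    """Angular-spectrum propagation of field u over a distance z."""
    H = np.exp(1j*2*np.pi*z*np.sqrt(np.maximum(0.0, 1/lam**2 - FX**2 - FY**2)))
    return np.fft.ifft2(np.fft.fft2(u) * H)

# Object: a small bright square, 20 cm from the camera
obj = np.zeros((N, N), dtype=complex)
obj[N//2-10:N//2+10, N//2-10:N//2+10] = 1.0
z_true = 0.20

camera_field = propagate(obj, z_true)       # the complex field a hologram would give you

for z in (0.12, 0.20, 0.30):                # refocus at different assumed ranges
    img = np.abs(propagate(camera_field, -z))**2
    print(z, img.max())                     # sharpest (highest peak) at z = 0.20
```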
All right, digital holography requirements: a stable laser; spatial stability, because a moving target will wipe out your fringes; some post-processing, and if you want to do it in real time, you'd better have a fast CPU, FPGA, or GPU on the back end; a good local oscillator; and the camera requirements, which I'll talk about very briefly: the pixel size and the number of pixels. If you look at your object, it has a feature resolution delta_1 and a size D1, and your camera has a size D2 and a pixel pitch delta_2; the two are related. The resolution you want on the object dictates the size of the camera you need, and the pixel pitch you need is a function of your range and the object size. If you analyze all this, you come down to these requirements: for an object of size D1 at range R with wavelength lambda, your pixel pitch needs to be about lambda R over D1, and the total camera size is a function of the resolution you want on the target: if you're trying to achieve resolution delta_1 on the target, you need a camera of size about lambda R over delta_1. So higher resolution requires a bigger camera, and bigger objects require finer pixels. That's in addition to a requirement on the angle between your object beam and your reference beam, which can be at most about lambda over twice your pixel pitch: remember, the bigger the angle between your reference and your object, the finer the fringes, so you need finer pixels to sample them. As for signal-to-noise, when you're doing digital holography with a strong LO, the LO terms cancel just as before, and you can get shot-noise-limited detection in digital holography, just like in coherent detection with FMCW.

Depth of field: the depth of field of holography is really the same as conventional imaging. For diffraction-limited imaging it's a function of the range: the longer the range, the bigger the depth of field. What is depth of field? It's the range over which the object stays in focus. Here's an object in focus; change the range by three centimeters and it's pretty much still in focus; continue to change it and it gets more blurred. At that point it's no longer the diffraction-limited depth of field, because the feature sizes are much greater than the diffraction limit, but eventually you can't separate targets: they become blurs, and if you had two targets at different ranges, it would be hard to distinguish between them. That's why we combined FMCW with digital holography: we don't want the depth of field of conventional holography, we want the range resolution of FMCW LiDAR.

So some of what we've done recently is to combine digital holography with FMCW techniques. We took an FMCW beam, sent it out to our target, and it came back delayed. If we just put that on the camera, there would be a beat: the fringes would be moving, and the camera would integrate them to zero. So what we do is take our transmit beam and offset it so that the beat between the return light and the LO on the camera is zero, and we get a nice stable hologram. That offset is a function of the range of the target, so we can pick the range at which we want to look at a target. Here is the setup we tested this with. Our source is no longer a CW laser.
It's a frequency-modulated chirp. We send it through two acousto-optic modulators, which let us adjust the frequency shift between the transmit and reference beams at the camera. We put out two targets: one was a star pattern, and 12 centimeters in front of it we put a screen on a lens mount. I won't go into the details, but if we just have the star target out there, we see very little difference between conventional CW digital holography and our FMCW digital holography. But when there's a screen in front of it, here's the conventional digital holography, and with FMCW we can either focus on the star target, in which case the lens mount is blacked out, or use that frequency shift to look at just the scatterer in front, so we actually have range-selected digital holography. These were about 12 centimeters apart, and we have range selectivity here on the order of two centimeters, which you cannot get in conventional holography. The really interesting thing is that you can see the star behind the scatterer in front of your target: here's the conventional CW digital holography, where we can barely see the star pattern, but with FMCW digital holography we can pick out the return from the star and eliminate the scattering.

Just a little bit about Spectrum Lab. We're part of Montana State University, about a million dollars a year. We help develop technology and help commercialize it and grow companies; we're part of the university, so we're not commercial ourselves. We work on coherent LiDAR and imaging and on networks, and some of our past work has been on spatial-spectral holography. Montana has on the order of 35 optics companies in Bozeman, over a thousand employees now, so we kind of like to think of Bozeman, the Gallatin Valley, as the photonics valley of the U.S. We have the most optics companies per capita, but we don't have many people, so when you divide by almost zero, you get a big number. Any questions?

Thank you so much, Randall, for this wonderful and informative talk. For the participants, anyone who wants to ask a question, please raise your hand. Okay, I think there is one question from Shehzad Ali. Okay, Shehzad Ali, you can unmute and ask your question, please. I'm not hearing anything. Okay, then the next question is from Liru. Liru, you can go ahead.

Hello, thank you for your presentation, it's very interesting. I have some questions. The first one is, how can we analyze the noise? I think in the data there are many noise sources. How can we analyze the noise and the accuracy? Thank you.

How can you analyze the noise and accuracy? How do you measure the accuracy at which the system is working? You have to have another way of calibrating your targets and knowing the range distances between targets, so that's one way to characterize the accuracy. Is that what you're asking, the accuracy of the system?

Maybe, how can we analyze the height accuracy? Because in the LiDAR, in one place there are many different point clouds; how can we decide which point cloud gives the real height?

The only way to really do that is to compare multiple point clouds with one another, or use some sort of reference to measure against; I think that's the only way to measure the accuracy. The accuracy also depends on your chirp rate, so you need a very accurate chirp rate, and that's usually done by comparing it to a reference or producing it digitally.
Thank you for your opinion. You're welcome. Any other questions from the participants? People are waiting to raise their hands. Randy, Tom Mossberg is your advisor? He's my graduate advisor, yes. No, we have another question; Shahzad Ali, you can ask now.

My question is, what is the distance distribution of the random points in the acquisition environment?

Say that again, please. Shahzad Ali, can you please repeat your question?

What is the distance distribution of the random points in the acquisition environment?

The transmission is being garbled; can you say that one more time? I got "distribution" of something. Shahzad, can you please say it slowly, because it's a bit noisy from your side or maybe from the system.

What is the distance distribution of the random points in the acquisition environment?

What is the distribution of distances in the targets we were looking at? That scene was actually San Francisco, so the point cloud covered an entire street, probably half a kilometer or so. The measurements we were doing ourselves were tabletop measurements, on the order of a meter of distance, but they've done several-kilometer measurements as well. It depends on how long you're willing to wait for the scan and integrate points; that determines how large a point cloud you can build. Does that answer your question? Shahzad, did you get your answer? Okay, I think the network on his side is a bit noisy. Any other questions from the participants?

Yeah, okay. Hugo, you can unmute and ask, please. Thank you very much for this presentation, very informative. I have a quick question. I've tried making an open-source Raman spectrometer known as RamanPi using a Raspberry Pi. What were the requirements for the optical circulator for the FMCW?

What are the requirements for the circulator? Those are commercial products you can get. The requirements are typically about isolation: light is supposed to go one way and then the other way, and if you're transmitting a strong beam, you don't want a lot of that strong beam leaking directly onto the return path. So you want isolation of 30 dB or higher, typically. You also want to be able to transmit a lot of power, so you'd like to be able to put up to watts through those circulators. Am I answering the question you're asking?

Yes, I think so. With the optical circulator I have for the DIY Raman spectrometer, I had a serious power issue, which led me to believe that was probably the main problem. Thank you very much. Yeah, what kind of powers are you putting through it? It was around three watts from the laser. Yeah, you get into trouble around there, I think. Yes, thank you very much. You're welcome.

Okay, I don't see any other raised hands, so it's over to you, Joe. I guess we're going to take a picture really quick; how are we going to do that? I'm going to stop sharing. Okay, now every participant should turn on their video camera and keep it on for a minute, because there are many participants, so I have to take screenshots: first the first screen, then the second, then the third and fourth.