I'm not scanning for insects or finding landmines or mowing lawns with inflated-tire four-wheelers. That sounds pretty exciting and fun. So, let me start my presentation here. Okay, everyone can see that now? Great. Well, glad to be with you. Joe, thank you for the invitation. I should say, Joe Niemela. And Joe, Sean, and I actually recently did a project together where we were looking at how much radiation you can actually get off of a radiative emitter. We did the theoretical studies of that, because we wanted to see if we could improve dew collection capabilities using just naturally occurring radiative emission, the way that the earth cools. That was a fun project that Joe and I worked on together. So I want to talk a little bit about compressive sensing lidar. Probably many of you aren't familiar with compressive sensing, so I'm going to spend quite a bit of time explaining what compressive sensing is, and then we'll talk about how we're using it for lidar. I want to acknowledge the Air Force Office of Scientific Research for supporting this work. This is primarily work that I did at the University of Rochester, and these are the students, on the left, who worked on those projects. This is my group at the University of Rochester; the people involved are Daniel Lum, Greg Howland, Chris Malarkey, and Sam Knarr. Now, a little bit of background: I enjoy working on quite a few different projects, ranging from atomic physics to fundamental quantum mechanics to precision measurements and quantum information processing, and then things like autostereoscopic 3D, volumetric 3D, and cloaking. But today I want to talk about compressive sensing. We're still doing some compressive sensing, but now we're working on nitrogen-vacancy-center magnetic field imaging, so we've moved on to something a little bit different.
So first I want to talk a little bit about what compressive sensing is and some of the applications we've done with it. Compressive sensing really got going in the mid-2000s. Digital signal processing had been going on for decades, but there was something called the single-pixel camera that really got things moving and got compressive sensing going in the digital signal processing field. Then I went to MIT and saw a really interesting presentation by Lincoln Labs, where they were looking at scattering through camouflage: for example, can you see through camouflage? They were using arrays of single-photon detectors. After I saw that, and after I first read about compressive sensing, I thought: maybe we can use compressive sensing with a single-pixel camera to do imaging. Now, what do I mean by a single-pixel camera? That doesn't quite make sense at first. Normally when we think of a camera, we think of a 10-megapixel array of detectors: light hits all those individual pixels and forms an image. But can we do better than that? Can we actually use only a single pixel and get a high-resolution image? That's what I really want to focus on. So I'm going to take us back a little bit, and I want to talk about entropy first of all. When we define entropy, we say there's some random variable x with a distribution p(x). The simplest thing you can think of is a coin flip: x is the outcome, heads or tails, and p(x) is the probability of heads and the probability of tails. And H(x), the entropy, is how much uncertainty there is in a particular coin flip.
If I have an unbiased coin, equally likely to come up heads or tails, then the probability of each outcome is one half, and each contributes minus one half times log base two of one half; when you sum both options, you get one bit. When we look at entropy, what we're really doing is counting the symbols we might have in a particular random variable. For example, the English language has 26 symbols, and if you include a period or a space you start getting more symbols. If I know the number of symbols and the probability per symbol, then I can find the entropy of that distribution. So let me ask you a question; we'll make this a little interactive, and I hope that's not too hard to do virtually. How many bits of uncertainty are there in a single character of the English language? What is my entropy per character? Okay, probably the easiest thing, and I'm sure many of you thought this, is to say the probability of any one letter is 1/26, take log base two of 1/26, sum over all the letters, and you get somewhere between four and five bits: four bits is 16 characters, five bits is 32 characters. But the entropy is actually less than that. Entropy is the amount of uncertainty we have in a particular letter: if I just have text written out, the probability of getting an E or a T in English is much higher than the probability of getting a Q or a Z.
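The coin-flip and uniform-letter numbers quoted here are easy to check in a few lines of Python (an editor's sketch; the function name is illustrative):

```python
import math

def entropy_bits(probs):
    # Shannon entropy H(X) = -sum p(x) log2 p(x), in bits
    return -sum(p * math.log2(p) for p in probs if p > 0)

coin = entropy_bits([0.5, 0.5])                # unbiased coin flip
uniform_letters = entropy_bits([1 / 26] * 26)  # 26 equiprobable letters

print(coin)             # 1.0 bit
print(uniform_letters)  # ~4.70 bits: between 4 bits (16 symbols) and 5 bits (32)
```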
Because different letters occur with different probabilities, the entropy is actually smaller than the four-and-a-half bits we first computed. But it's actually much, much less even than that. So I want you to play a game with me: Wheel of Fortune, a game that was really popular in the United States, and still is; I just haven't watched it in a long time. I'll give you an example. I have two words here, with one letter circled. How much uncertainty do I have in knowing that particular letter? Any thoughts? This probably isn't a great setting for interaction. But probably you thought: whatever the uncertainty is for any single letter. Now, if I do this... sorry, I'll be right back, I just need to grab a tissue. Any thoughts on what that letter might be? Anyone? H? That's a very good guess, because the most common word in the English language is "the", but it's not H. Anyone got it now? "Oh, I got that one." I would hope Joe got that one. Now I probably don't need to tell most of you what that letter is, as long as you're familiar with the people in the audience. For most of you, there is no uncertainty in that letter anymore. And that's a really important point: even though I'm not showing what that letter is, there is no more uncertainty for you, and that is because of correlation. There exist correlations among our symbols such that the amount of uncertainty is greatly reduced compared to what we would normally compute, and so in reality there's actually only about one bit of uncertainty per character in the English language.
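That correlation-driven reduction is exactly what general-purpose compressors exploit. A quick check with Python's standard library (the sample text is deliberately repetitive, so it compresses far better than natural prose would):

```python
import zlib

# Correlations between symbols are what a compressor exploits: redundant
# English text compresses far below the ~4.7 bits/character of the
# uniform 26-letter estimate.
text = ("the quick brown fox jumps over the lazy dog " * 200).encode("ascii")

bits_per_char = 8 * len(zlib.compress(text, 9)) / len(text)
print(bits_per_char)  # far below 4.7 for this highly redundant sample
```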
And that one bit of uncertainty per character in English has been shown in multiple ways; in fact, when you compress English text, you get about one bit per symbol. It's a very interesting question: how much uncertainty is there really in a particular character? Now let me move on to this slide. This is a picture of my son many years ago; my son is now 17 years old. He was quite a bit younger here, in our garden. This is a 512 by 512 image. What I did with this image is simply take the two-dimensional discrete cosine transform. I'm not sure if you can see this or if it's being blocked right now; it's blocked on my computer, so hopefully it's not on yours. In the upper left-hand corner of this discrete cosine transform distribution there should be a bright set of pixels. That bright set of pixels basically tells us that when I take the discrete cosine transform of this image, almost all of the information is found in just a few components up at the left. Now, what do I mean by discrete cosine transform? All images are composed of spatial frequencies. Probably all of you have taken a discrete Fourier transform: when you take the discrete Fourier transform of, for example, a cosine, it ends up having two terms, one at a positive frequency and one at a negative frequency. In the discrete cosine transform we only keep the positive-frequency amplitudes, so it is actually a sparser representation of an image than the discrete Fourier transform. The discrete cosine transform looks at spatial frequencies. For example, look at this spatial frequency right here: I have a dark, then a bright, then a dark, then a bright, then a dark, then a bright, and then a dark.
So this part looks like a relatively high spatial frequency, but if I look along here, notice that it's almost uniform all the way up, so that would be a very low-frequency term. What you can see is that there are a lot of low spatial frequencies in this image. Now, 512 by 512 is a huge domain, about 260-something thousand pixels, and if I take the discrete cosine transform I have the same number of coefficients in that domain as well. But if I take the 81 most important elements, set everything else to zero, and take the inverse discrete cosine transform, I get this back, and interestingly you can already see a face. If I take the 280 most important elements and do the inverse discrete cosine transform, I can see a face more clearly, and it just gets better and better, to where, when I say K is 13,236, that means there are K elements in the discrete cosine transform that are important enough that if I keep only those and go back to the original domain, I need only about 5% of my coefficients to reproduce the original image to a pretty good degree. That is what it means to be K-sparse: only K elements matter, and keeping those and going back to our original domain is sufficient to nearly reproduce the image. That's basically the idea of compression: we keep only the K-sparse information in some sparse domain from which we can recreate the image. This is the idea behind how JPEG operates. JPEG 2000, for example, uses wavelet transforms, which are even better than the discrete cosine transform in terms of sparsity.
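The keep-only-the-K-largest-coefficients idea can be sketched in plain Python on a one-dimensional signal (an editor's toy example: the signal is built to be exactly 2-sparse in the DCT basis, whereas a real image is only approximately sparse and needs the 2-D transform):

```python
import math

def dct(x):  # DCT-II, unnormalized
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct(X):  # inverse of the DCT-II above
    N = len(X)
    return [X[0] / N + (2 / N) * sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                                     for k in range(1, N))
            for n in range(N)]

# A smooth signal built from two cosine modes: exactly 2-sparse in this basis.
N = 64
x = [math.cos(math.pi / N * (n + 0.5) * 2) + 0.5 * math.cos(math.pi / N * (n + 0.5) * 6)
     for n in range(N)]

X = dct(x)
K = 2
keep = set(sorted(range(N), key=lambda k: abs(X[k]), reverse=True)[:K])
X_sparse = [X[k] if k in keep else 0.0 for k in range(N)]  # discard the rest
x_rec = idct(X_sparse)

err = max(abs(a - b) for a, b in zip(x, x_rec))
print(err)  # negligible: K coefficients reproduce the whole signal
```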
So now let me go back here for a moment. When we talk about imaging, what we usually do is image and then compress; but now what we want to do is compress while we're sensing, not after. If you think about compression, what it does is remove inter-pixel correlations. It's like when we talked about Joe Niemela: you knew that the J and the E in Joe were important in helping you know that it was Joe. Those symbols were all correlated, and we can remove those correlations to get down to the sparsest amount of information we need to reproduce the fact that his name was Joe. The way we do this in imaging is to decompose into some decorrelated transform basis, like the discrete cosine transform or wavelets, and find a K-sparse representation of the original image. Okay, so that's how we normally do it: we compress after we sense. Now we want to compress while we sense; why take in all of that information if we're only going to get rid of 90% of it afterward? This single-pixel camera paper really got people thinking about what compressive sensing does. I'll give you a brief math background, and then we'll go into the experiment. Suppose I have a one-dimensional signal x of length N. We do this all the time: if you look at an oscilloscope, you take a data set of measurements, so you have N data points, and that signal, which we'll just call x, is a one-dimensional signal. Then what we're going to do is transform that to a sparse basis s.
The way we do that is to say x is related to the sparse representation s through whatever my transform Ψ is; in our case that's, for example, the discrete cosine transform, the Fourier transform, or the wavelet transform, and it takes us between the two domains. Now, if x is N-dimensional and s is N-dimensional, then we have to have an N by N matrix to achieve that. In compressive sensing we also need a sensing matrix. I've talked about the transform matrix, which takes us between one basis and another; now I want to talk about the sensing matrix Φ, the matrix we measure with, which when transformed is not sparse. Φ operating on x is going to give me my measurements y, but if I apply Ψ to Φ, I'm not going to get a sparse matrix; I'm still going to get a dense matrix, which just means almost all of its elements are still there. With compressive sensing, what we do is take M random measurements using rows of this measurement matrix, and the number of measurements we need only has to satisfy M ≥ K log(N/K), where N is the dimension of the space and K is the number of significant elements in the transform domain. That can be much, much less than N. Suppose I have a 10-megapixel camera: I can do M measurements, say 100,000 measurements, with a single-pixel camera and get the same image as if I had taken 10 megapixels' worth of samples of my system. So M can be much, much less than N. This is still quite theoretical, and we'll get into a specific example in just a minute.
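Plugging the talk's numbers into M ≥ K log(N/K) shows roughly where the 100,000-measurement figure comes from (the value of K here is an assumption for illustration, and constants are dropped):

```python
import math

# Back-of-envelope measurement budget from M >= K log(N / K).
N = 10_000_000   # a "10-megapixel" scene
K = 100_000      # assumed number of significant transform coefficients

M = K * math.log(N / K)
print(int(M))    # a few hundred thousand samples, far fewer than N
```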
To solve this we do what's called an L1 minimization: we minimize the sum of the magnitudes of the sparse-basis coefficients, subject to the constraint that y equals the measurement result, where y is the measurements we actually performed. Okay, now let's do a specific example so this is a little easier to understand. Suppose I have an image; this is a relatively high-resolution image, a famous one in digital signal processing. What I'm going to do is use a digital micromirror device, which is simply an array of mirrors that are each either on or off. The on mirrors point light toward a single-pixel detector, and the off mirrors send light away from it; in the pattern, bright means on, pointing toward the detector, and dark means off, pointing away. This pattern is like one row of our sensing matrix Φ. We take that row of the sensing matrix and multiply it, element by element, by my signal; this is like my original signal x, except now everything is two-dimensional. I multiply the two-dimensional x by one row of my sensing matrix Φ, and this is what I get. Then I sum over all of the intensity; I've basically taken the product of those two things, and when I sum over all of it, I get one measurement outcome, 0.5022. What do I mean by that? If this were completely on, every single one of these mirrors on, what I would get here is 1.
When I put on this random pattern, some of the mirrors are on where the scene is dark and some are off where it's bright, and it happens that for this particular random pattern I get 0.5022 of the brightness I would get if all of the mirrors were on. Now I do another row of my sensing matrix and get another measurement outcome on my single-pixel detector. I put on one random pattern, then another, then another, then another, and I repeat that M times, where M is much less than the number of pixels. Then I can go and solve this minimization, this equation here, and reproduce the image. Here's an example where M/N is 0.05: you can still see blockiness, but with only 5% of the original size of the image it's already a fairly good reproduction of the original. By about 0.2 or 0.25 it starts to look very good as an image, and that's much less than taking N measurements. Now, why do we want to do this? Joe gave a great example of a place where this can be useful. One of the things Joe talked about in his talk was scanning his laser. When you scan a laser, what you're doing in lidar is saying: at this particular point, I want the feedback from the lidar signal; then I move the lidar and get the feedback from that point, and move it and get the feedback again, point by point.
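The whole pattern-by-pattern measurement and reconstruction loop above can be sketched end to end in plain Python (an editor's toy: the sizes are tiny, the scene is taken to be sparse in the pixel basis, and a brute-force support search stands in for the L1 minimization, which is what actually scales to real images):

```python
import math
import random

random.seed(0)

# Single-pixel measurement model, y = Phi x: each DMD pattern is one row of
# Phi, and the bucket detector reports one number per pattern.
N, M, K = 12, 6, 2
x = [0.0] * N
x[3], x[8] = 1.0, -0.7                      # a 2-sparse "scene"

phi = [[random.gauss(0, 1) for _ in range(N)] for _ in range(M)]
y = [sum(phi[i][j] * x[j] for j in range(N)) for i in range(M)]

# Toy recovery: try every size-2 support, least-squares fit on each, and
# keep the support whose fit reproduces y.
def fit(j1, j2):
    a = [phi[i][j1] for i in range(M)]
    b = [phi[i][j2] for i in range(M)]
    aa = sum(v * v for v in a); bb = sum(v * v for v in b)
    ab = sum(u * v for u, v in zip(a, b))
    ay = sum(u * v for u, v in zip(a, y)); by = sum(u * v for u, v in zip(b, y))
    det = aa * bb - ab * ab                 # 2x2 normal equations
    c1 = (ay * bb - by * ab) / det
    c2 = (by * aa - ay * ab) / det
    resid = math.sqrt(sum((y[i] - c1 * a[i] - c2 * b[i]) ** 2 for i in range(M)))
    return resid, (c1, c2)

best = min(((fit(j1, j2), (j1, j2))
            for j1 in range(N) for j2 in range(j1 + 1, N)),
           key=lambda t: t[0][0])
(residual, coeffs), support = best
print(support, coeffs)   # recovers support (3, 8) with amplitudes ~(1.0, -0.7)
```

Only M = 6 bucket-detector numbers were used to pin down the N = 12-pixel scene, which is the whole point of the technique.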
But now what we can do with this is flood-illuminate an entire area, get the signal back from all of those positions, and measure it with only a few measurements. Instead of doing, I think you said 1024 by 1024 scans, I can't remember, which would be about a million scans, we just do flood illumination with a megapixel DMD and maybe 100,000 different DMD configurations. So instead of a million scanned measurements, we don't have to do any scanning at all; all we have to do is change our digital micromirror device. And the nice thing about it is that we get about the same amount of flux back with every pattern, because we're getting essentially the same return signal. So this is resource-efficient; if you're using a single-pixel detector, it can be very, very useful. In my career I had done a lot of single-photon, high-dimensional entanglement characterization, and we were able to use this technology to dramatically reduce the work involved. In fact, I had one student whose characterization of high-dimensional entanglement took about a day; had he done the same dimensionality by scanning a single-pixel detector through the correlations, it would have taken him years and years. So it can be very efficient. These are some recent results that I thought were pretty neat, and there's been a lot more since then. Now I want to focus on lidar: how can you use this for lidar? Well, the very first thing that we did, and I don't actually have any images from this one, is fairly straightforward.
What we did is send out a beam of light; look at this system right here, it's essentially what we did. We took a pulse and illuminated a scene, and then we just had a single-photon counter, so we were interested in looking at single photons coming back. That's really useful, because with single photons you don't have to worry about quantization noise; you can measure right at the shot-noise limit, so you can sample at very low light levels. In fact, we were able to show that we could get real-time feedback from a system at very low light levels, which is pretty neat, though nowhere near as fast as Joe; Joe is doing some really high-frequency stuff, which is really neat. In this particular experiment we had multiple targets at multiple depths, and we received histograms of all the different arrival times. Then we said, for this time bin, let's reconstruct what we get, and we saw all of the objects at their various distances. For example, we had camouflage with something behind it, and we could image the stuff behind the camouflage because it had a different time signature. That was a pretty neat result, but the one I really want to talk about today is another one we did, which I thought was really neat. When my graduate student came to me with this idea, I initially thought it was all messed up; then I realized it was one of the coolest ideas I'd ever heard. He flood-illuminated a scene and then took the light coming back from it. But let me back up a little. There had been lots of people who wanted to start using compressive sensing for lidar, but they were talking about really hard ways of doing it, with nonlinearities.
Things that just made it really hard. When my graduate student came in with this, we realized everything could be done linearly, and it was very simple. Here's the scene; from the intensity alone you can't get any depths. First he simply illuminated that scene with the single-pixel camera, did all the measurements, took the total sum of the intensity for each pattern, and reconstructed with the total intensity, and this is what he got. Then he said: for a given time window, suppose the DMD is at one setting, and during that one setting we measure 100 photons. So there are 100 photons for that one pattern. He puts on another random pattern, and another, and another, and from those photon counts he can reconstruct the intensity image. But then he said: suppose we got those 100 photons, and photon one had a time of arrival of 9.6 nanoseconds, photon two 9.8, photon three 9.9, another 8.4, photon seven 8.6, photon twelve, say, 14. Let's simply take the sum of all of those times of flight for each pattern and reconstruct an image based on those sums. When you do that, what you reconstruct is the intensity multiplied by the time of flight. So if you take the intensity multiplied by the time of flight and divide by the intensity, you get the time of flight. It ended up being a very simple linear problem where you could reproduce all of the objects at their distances, just using this single-pixel technique.
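The per-pattern bookkeeping he describes amounts to keeping two numbers per DMD setting, a photon count and a summed time of flight (the arrival times below are the talk's illustrative values, taken here to be in nanoseconds):

```python
# Per-pattern bookkeeping for the two reconstructions (illustrative numbers):
arrivals_ns = [9.6, 9.8, 9.9, 8.4, 8.6, 14.0]  # photon arrivals, one DMD setting

counts = len(arrivals_ns)    # -> used to reconstruct the intensity image
tof_sum = sum(arrivals_ns)   # -> used to reconstruct the (intensity x ToF) image

# Dividing the two reconstructions pixel by pixel leaves just the time of flight:
mean_tof_ns = tof_sum / counts
depth_m = 0.299792458 * mean_tof_ns / 2   # c in m/ns; halve for the round trip
print(mean_tof_ns, depth_m)
```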
Then, just to show how useful this was, we put a pendulum in flight in 3D and tracked it as a function of time; we were able to do that at, I think, about 25 frames per second. But remember that the digital micromirror device is flashing much, much faster than that: if you want high-resolution images of a dynamic scene, you have to have a DMD that flashes very, very fast. In fact, when I presented this at Xerox a few years back, they wanted to use it to recognize how many people were in a car, so they could make sure they weren't overcharging on toll roads, and they were going to operate their DMD at 100,000 frames a second so they could get very high-resolution images at real-time frame rates. Now, this next one, I think, was also a really fun one to work on. How am I doing for time? (You've got 10 or 20 minutes.) In the last experiment we pulsed the light. But do we need to pulse light at all? We can just do frequency-modulated CW lidar, which in many cases is much cheaper and can achieve very high depth resolution. Essentially, the idea is that you sweep the frequency of a laser, and when you get a return signal, you heterodyne-beat the two frequencies, the incoming frequency and the reference frequency that you kept. The beat note then tells you the distance of whatever the light is scattering from. But what if you're flood-illuminating a scene and you have 50 objects, or 1000 objects, sending return signals?
What you get is a distribution of frequencies: you see a bunch of oscillations, you take the Fourier transform, and you see all the different beat frequencies, which give you all the different depths. Then you change your random pattern: you put one pattern on the DMD and get one set of frequencies, another pattern and another set, another pattern and another set. This is harder than getting a single number per measurement; now you're getting a whole Fourier-transform distribution with each measurement. But then you can go back, reconstruct the full Fourier transform, and from that reconstruct the full depth map of your entire scene, seeing where everything is under flood illumination. Here's the basic idea: you have some original frequency, then you have, for example, two incoming frequencies; you beat them against each other and get a differential signal that you can measure. Here's the original scene; this was all done in simulation, we didn't actually do the experiment. Here's the true depth map, and these are the reconstructed images at a 5% sample rate and at a 25% sample rate, so we could undersample and still get good reconstructions of where the objects were. We've actually done a lot of things with compressive sensing: measuring entanglement correlations; showing that we could get high-resolution images and their Fourier transforms, which maybe people didn't think was possible (this is basically like having a double slit and getting the double-slit interference, which a lot of people don't think you can do); entanglement imaging; and wavefront sensing.
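The single-target version of the FMCW idea is just a slope-times-delay relation; here is a minimal sketch with assumed sweep parameters (in a flood-illuminated scene, each depth contributes its own beat frequency, which is why every DMD measurement returns a whole spectrum):

```python
# FMCW ranging (sketch, assumed parameters): sweep the laser frequency
# linearly; a return delayed by tau = 2R/c beats against the outgoing sweep
# at f_beat = slope * tau, so each depth maps to one beat frequency.
c = 3.0e8                 # m/s
slope = 1.0e9 / 1.0e-3    # 1 GHz sweep over 1 ms -> Hz per second

def beat_from_range(R):
    return slope * (2 * R / c)

def range_from_beat(f_beat):
    return c * f_beat / (2 * slope)

f = beat_from_range(15.0)      # a target at 15 m
print(f, range_from_beat(f))   # ~1e5 Hz, and back to 15.0 m
```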
This one is actually really, really cool. It basically shows that we can take an object that's not in focus and mathematically project it to the plane where it would be in focus, by knowing the momentum and position of every photon. And we used compressive sensing to do this. So we did a lot of really fun stuff with compressive sensing, and we think there's a lot more left to do. As I was saying, we're now working with someone at the Hebrew University on nitrogen-vacancy centers; we want to be able to rapidly reconstruct the magnetic field that the NV centers are sensing and reproduce high-resolution magnetic field distributions. Okay, thank you. I'm open for questions.

Thanks a lot, John. That was great. Anyone who wants to ask a question, please raise your hand, and you can ask the speaker directly. Okay, Joe, I think we can wait a few seconds. John, could you maybe stop sharing your screen? It makes it easier to see everybody. Okay, we have one question from Malik. You can unmute and ask your question, please.

Hello, everybody. Thank you. This is a good presentation, but I have one question. I would like to know if your method can be used to image through partially obscuring objects, such as camouflage netting.

Yes, and that was the 2011 paper; we imaged through camouflage. What we did is send light through camouflage, and then for what was on the other side we would collect a time histogram, bin the histogram, and do the reconstruction of the image at a given depth.
Okay, thank you very much. You're welcome. Any other questions from the participants? We still have a few minutes; you can raise your hand and ask directly. There's one: Dixon Liani, if I'm pronouncing the name right.

Yeah, thank you very much for the good talk. I noticed you are using a digital micromirror device to implement this compressive sensing. My question is, how easy is it to incorporate this device into other existing optical instruments? For example, if I want to use it in a spectrometer, is it possible?

Yes. In fact, that's a really excellent question. I have some friends at Rice who used exactly that to build a hyperspectral camera. What they did is take the light going into a fiber feeding a spectrometer; you can think of the fiber as your single pixel. They reconstructed the image at every wavelength and so made a very simple hyperspectral camera. And I really want to point out, Dixon, that this is really simple to do, because digital micromirror devices are everywhere. They're in projectors, probably the very projector you use when you watch a presentation, and you can buy one for $100. Since they're essentially projectors, you can drive them straight from the HDMI output of your computer, and whatever is on your screen is what gets projected onto the micromirrors. With a Raspberry Pi, something very simple, you can run an experiment using this device very cheaply; for three or four hundred dollars you can have your own DMD system that operates very inexpensively. I think it's a really interesting and powerful technology.
Of course, you can buy $20,000 DMDs that have all kinds of amazing capabilities, but just to get started, it's a very straightforward thing to do: you can shine a laser directly on a DMD, and you just have your computer telling the DMD which mirrors should be on. Okay, so any other question from the participants? Please raise your hand. Yeah, we have another question from Nina. Nina, you can now unmute and ask, please. Okay, thank you for the informative talk. I'm from the Philippines. I'm just going to ask about the last part of your presentation, where you mentioned a non-focused image, and then you obtain a focused one from the correlation of the momentum and the position of the photons. I'm just wondering how you got the momentum and position of the photons in the experiment. Well, that is a great question. Let me just pull this slide back up. Can I share for just a moment here? Yes, yes, you can do that. Okay, oh, I hope I wrote that down. Oh, goodness. I'm really sorry, I thought I had the paper on there. I can get you the link. But basically, are you familiar with the light field, the idea of measuring the light field? When you measure the light-field distribution, you're getting both the momentum and the position of a photon. Let me back up just a moment. If I have an object, and I put it through a lens, then the object is going to be imaged over here. But what if I actually have my imaging device moved up? Well, we know that if I moved my imaging device here, it would come into full focus. Imagine now that I move my device, but instead of measuring just the position of all the rays, or just the momentum of all the rays,
I get both the position and the momentum of every ray. Now, why is that important? Well, if I know a ray strikes a point here at this particular angle, then I know, just because light rays travel in straight lines, that over here it's going to be at this point. So by knowing the position and direction of every ray, and its color, I can know where this is over here. And what we did with compressive sensing is we got the position information of all the photons, and then we got the momentum information of all the photons. I know that sounds crazy, but because you're doing compressive sensing, you're not getting all of the information about all the photons at once; you're doing it in a set of measurements. Then I can reproduce where the object is and mathematically refocus the image, because I get all of the information about that object at any particular point. I hope that made sense, but thank you so much. Okay, any other question? Oh, I think, Joe, we don't have any more. Okay, well, okay. Thanks again very much, John, for that really wonderful talk. It always impresses me, hearing you and also Joe speaking, how these various types of statistical analysis actually apply across a wide variety of fields. I can see that, just like in the low temperature work we do: most of it is plumbing, but what we also have to do is statistical analysis that really doesn't depend on what the actual units were. So the idea that you have a skill set for these things is pretty important, I think, for all students. Anyway, that came through. So I want to make a couple of announcements. Tomorrow we're going to do a group photo. So anybody who wants to be in the photo, comb your hair; I'm going to go get a haircut. I have to get one anyway, but just kidding. And actually I'm not kidding.
I will have a photo, but the other thing is we're going to have our series of talks; some of you submitted research abstracts. We have an hour starting at six. That's going to be kind of exciting, because these are five minute talks. You think, well, five minutes, what can I say? Well, you can actually say quite a bit in five minutes. We usually do three minutes, so we're giving you a lot of extra time. Now, you can say a lot, but you can't say too much, and you can't get bogged down in details. It's kind of a fun exercise; we do it at the colleges, we've been doing it for years. The idea is to encapsulate your ideas, really distill what you have to say, and get the message across clearly. I sent some of you an email. The best way to think about this is that you're really making a pitch. You like your research, you want to get some funding, and you just happen to run into Bill Gates. He's going to give you five minutes, but he's not going to give you another minute after that, so you've really got to make the case. That's the idea. I think it's a real fun experience; no pressure, we just have fun. But we will have a few prizes from SPIE. Oh, sorry, Optica. I've got to put another dollar in the jar every time I call Optica SPIE. And, John, what do you think, can ICO sponsor a prize? Sure. Okay, there, that's from the president. By the way, I guess I should call him Mr. President. Professor Howell, it's okay. All right, so we'll have a few prizes too. Anyway, it's kind of fun, and it's nice to hear what people are doing from around the world. So that'll be tomorrow, and we'll have the group photo as well. Okay, so that's that. And tomorrow's the last day. So anyway, thanks, John, again for giving us a talk; it was really great. This is fun. It's the first online activity I've ever been associated with.
I didn't know, but it actually works out pretty well. Okay, so anyway, I think that's it; we'll see everybody tomorrow. Thankfully my computer didn't drop out.
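As a footnote to Nina's question earlier: the refocusing step described in that answer, knowing each ray's position and slope and projecting it along a straight line to the in-focus plane, can be sketched numerically. This is an illustrative toy with made-up numbers, not the experiment's actual processing chain: rays from a single point source are recorded out of focus, and straight-line back-propagation by the right distance reconverges them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each measured ray is (position x, slope u) at the sensor plane.
p = 0.3          # true source position on the in-focus plane (arbitrary units)
d = 2.0          # distance from the in-focus plane to the sensor
u = rng.uniform(-0.1, 0.1, size=1000)   # ray slopes (the "momentum" part)
x = p + d * u                            # ray positions at the sensor (blurred spot)

def refocus(x, u, dist):
    """Propagate each ray back by `dist`, assuming straight-line travel."""
    return x - dist * u

blurred_spread = np.ptp(x)                  # spot size as recorded
focused_spread = np.ptp(refocus(x, u, d))   # spot size after mathematical refocus

print(blurred_spread, focused_spread)
```

At the correct distance all rays map back to the single point p, so the refocused spread collapses to essentially zero, while refocusing at the wrong distance would leave a residual blur; that is the "mathematically project it to where it would be in focus" idea.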