I'd like to get us going today and welcome all of you to the first Purdue Engineering Distinguished Lecture of this spring semester. The Purdue Engineering Distinguished Lecture Series started in 2019 as a way to bring some of the top visionaries, thinkers, and intellectual leaders, both in academia and in practice, to Purdue Engineering to spend some time with our faculty and students, discussing some of the grand challenges of our time in that particular discipline or area. Our distinguished lecturers not only give a seminar, which we're going to hear today from Dr. Freeman, but also engage in a thought-provoking discussion through a panel with some of our leading faculty experts here at Purdue. My name is Arvind Raman, I'm the Executive Associate Dean of the College of Engineering, and I'm really proud that we're able to host this particular Distinguished Lecture with the Elmore Family School of Electrical and Computer Engineering.

I'd like to say a few words about Dr. Freeman, our distinguished lecturer for today. William Freeman is the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science and a member of CSAIL, the Computer Science and Artificial Intelligence Laboratory, at MIT. He was the associate department head of EECS from 2011 to 2014, and since 2015 he has also been a research manager at Google Research in Cambridge, Massachusetts. Dr. Freeman is known for his work in mid-level vision, audio, and computational photography. Previous research topics include steerable filters and pyramids, orientation histograms, the generic viewpoint assumption, color constancy, computer vision for computer games, motion magnification, and belief propagation in networks with loops. He has received outstanding paper awards at computer vision and machine learning conferences in 1997, 2006, 2009, 2012, and 2019, and Test of Time awards for papers from 1990, 1995, and 2005. He shared the 2020 Breakthrough Prize in Physics for a consulting role in the Event Horizon Telescope collaboration, which reconstructed the very first image of a black hole. He is a member of the National Academy of Engineering and a fellow of the IEEE, ACM, and AAAI. In 2019, he also received the PAMI Distinguished Researcher Award, the highest award in computer vision. Without further ado, let's welcome Professor Freeman.

Well, thanks so much. That's a very kind introduction. It's great to be here. I'm sorry it's not in person; I hope to take a rain check and visit in person in the future, but I'm happy to be able to connect in some way through video, and I hope there can be some level of interaction through the chat and also in the panel discussion later. So I'm going to show slides. It's fine to give me questions during the talk, and I think Professor Peroulis will moderate those; thank you for doing that. At one point I'm going to ask an interactive question, and I hope you'll enter your answers in the chat. Okay, I hope you can all see the screen I'm projecting. Confirm you can see the screen, please. All right, great. Thanks.

So this is about the moon camera, and here's some background for it. I work at both Google and MIT, and actually, over my career, I've spent half my time working in industry and half my time working as a professor at MIT, and in some ways I can't decide which is the better job.
So I've done both, and now I'm doing both: I'm at Google and at MIT. But I do know that the very best job, unquestionably, is to be a faculty member on sabbatical. That's just ideal: a one-year period where you have complete freedom, but you know it's going to end. So it's even more special than if you were unemployed or retired; it's even better than that, because it's of finite duration. In my first sabbatical, I spent a lot of it looking at everything around me in the world and asking, can I make a camera out of that? I'd look at a tree: can I make a camera out of that somehow? I'd look up at the moon and ask, can I make a camera out of that? And somehow that idea really stuck with me. I thought, wow, it would be so cool if you could take a picture of the earth from space, from your backyard, by looking up at the moon.

So, first of all, how could that possibly work? Well, here's a picture of a crescent moon. The bright part there is kind of overexposed; that's the sunlit part, the part that's so bright. And then you see this dark part of the moon that's not lit up by the sun, but you can still see it. It's lit up by something. What is it lit up by? Well, it's lit up by reflection from the earth. So that's the goal, then: to use a telescope while you're sitting on the earth, look up at the moon, and reconstruct an image of what the earth looks like from the moon's point of view. And here's another view of the whole thing. There's the sun, and it lights up both the earth and the moon. If you're on the earth, you can look at the moon and see both the sunlit part of the moon, the bright crescent part, and, if it's a new moon or just a sliver of a crescent, this dimly lit part of the moon, which is lit up by so-called earthshine, light reflected from the earth. By the way, I should ask: if I gesture and stuff, can you see that? Do you see a little picture of me, or is it just the slides? No, we see you. Okay, good. Thanks. That's helpful.

Okay, so first I have to address the question: why is this worth focusing on? And I immediately acknowledge the point that, yes, I know we have satellites, and satellites can take pictures of the earth from space, so we don't need this trick to take a picture of the earth from space. But I think there are two really good reasons for doing this project. One is just the cool factor: it would be so cool to take a picture of the earth from space from your backyard. More than anything, it's like a conceptual art project. I'm not an art historian, but my image of conceptual art is that it's something where the image itself doesn't really matter; it's the story behind the image that matters. And so it wouldn't be this particular photo of the earth that you took that would be so special; it would be the fact that you took it while you were standing on the earth. So I would talk with people about this project, and I remember telling my department head at the time about it. I called it the dark side of tenure, because tenure allows you to pursue whatever you want, in some ways, subject to funding constraints.
And anyway, I just remember my department head really not liking that phrase, the dark side of tenure, or any acknowledgement that there could be a dark side of tenure. He thought it was terrible. But anyway, that's how it looked to me. So that's one reason to do it: just for the fun of it. The second reason is science outreach. You wouldn't learn new science from this; you wouldn't learn anything new about the earth or the moon. But it would be a wonderful science outreach project. Imagine if you could set up a protocol where anyone, like an amateur astronomer, could go into their backyard and take a picture of the earth. That would be so neat. High school kids could go and take pictures of the earth. It would, I hope, in some small way raise interest in astronomy, or in computational imaging, or in earth science. It would be like discovering a new planet for amateur astronomers to photograph. Now they can photograph Saturn, Jupiter, and so forth, but if you let them photograph the earth too, that would be so neat. So that's the second reason to do it, and perhaps the primary one. The slide at the bottom right addresses that. I'm told that this photo from Apollo 8, when the astronauts first circled the moon and took a picture back of the earth rising over the moon, really had an impact on the collective psyche, just seeing our earth from that point of view. We couldn't hope to have that kind of impact, but we could hope, in a small way, to increase awareness of the earth in space.

Okay, so that sets the parameters for it. Ah, yes, here's the interactive part. Let me briefly stop sharing and show you why this is a hard problem. Here are some props at the proper relative size. If this is the size of the earth, the moon turns out to be about this big. So here's a quarter, and here's a pea, and this is the right size of the moon relative to the earth. Now, here's the question I want your help with; let's see if we can do this. How far apart should they be, to be in scale? These are the right relative sizes; what's the appropriate distance for this size of earth and moon? If you're willing, I'm going to keep moving my hands apart, and I'd like you to enter in the chat when the separation looks right. Can we do that? Let's see; I'm looking at the chat. We'll see if this works. Okay, so here, what do you think? Too close? How about this? This? This? If you think it's the right scale, enter it in the chat. Okay, good, I got some responses; that's what I want. Great, thank you. So tell me when. I'll just keep going. Now? Now? Okay, I'll keep going. Now? Okay, good. Thank you. Full arm extension. That's great; someone is thinking ahead. Yes, this is it: full arm extension. Great, thank you. Not too far; no, actually, this is it. So here's the earth, here's the moon, and this is to scale. This gives you a sense of how hard the problem is going to be, because of what it means to sit on the earth, on this quarter, look at the pea over there, and make an image of the earth. In effect, imagine you shine a little point source of light somewhere on the face of the quarter.
If you move it around to different positions on the quarter, you should be able to look at the pea there and, by sensing the difference in the illumination of the pea as the point light source moves around within the quarter, tell where the point light source is on the quarter, just from the light it casts on the pea over there. That's a hard problem: really small angles. Okay, so that's just to give you a sense of the scale of things. Thank you. Now I'm going to undo what I did and go back to the slides. Okay, I hope we're back to slides. And here's the slide showing what we just did: an arm's length away, a quarter and a pea.

And then we've got our constraint that we want this to be something an amateur astronomer can do with their backyard telescope. Right away, that sets constraints on the resolution with which you can look at the moon from the earth, because we're not going to allow some 10-meter telescope on top of a mountain; we're going to have to do it from our backyard. So let's talk about astronomy resolution limits. Of course, you measure resolution for a telescope by the angular detail it can resolve. A degree is one three-hundred-sixtieth of a circle, a minute of arc is one sixtieth of a degree, and a second of arc is one sixtieth of a minute. The moon spans about half a degree as seen from the earth, and that's about 1800 arc seconds. So with what resolution can amateur astronomers see the moon? Well, a number of things limit the resolution, but in particular the size of the telescope, the aperture size, and the atmospheric turbulence; and turbulence is really dominant in many cases. For clear, calm skies, good viewing, I'm told that about one arc second is really the best you can do without special computational methods. So it would be great if we could solve this problem looking at the moon with only one arc second of resolution. To get a sense for how much resolution that is: that would be roughly, ballpark, a 2000 by 2000 pixel image of the moon, at approximately one arc second per pixel.

You can use some techniques, though, even as an amateur, to improve that resolution. In particular, there's a wonderful technique called lucky imaging, where you take many different photos, a stack of photos over time, say 1000. Some of those 1000 photos are taken when the atmosphere is relatively calm, so you get lucky and catch a shot with unusually clear seeing for just that moment; and if you take 1000 photos, you'll get lucky a number of times. There are standard methods, simple software packages that amateurs can download, which take a stack of photos, find the lucky images among them (you can calculate the variance of each frame as one way of scoring how lucky you were with that shot), and then average over the lucky images to get a higher-resolution image than you could normally take with an ordinary exposure. Using techniques like that, you could get maybe half an arc second, maybe even a third of an arc second of resolution from your backyard telescope. So that's the limit there.
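A minimal sketch of that lucky-imaging selection step, in Python with NumPy; the stack file name and the keep fraction are illustrative assumptions, and real packages also register (align) the frames before averaging:

```python
import numpy as np

def lucky_image(frames, keep_frac=0.05):
    """Average the sharpest frames from a stack of short exposures.

    frames: (n_frames, H, W) array, e.g. ~1000 short exposures of the moon,
    assumed already registered (aligned). Sharpness is scored by pixel
    variance: frames caught in calm air are less blurred, hence higher contrast.
    """
    scores = frames.reshape(len(frames), -1).var(axis=1)
    n_keep = max(1, int(keep_frac * len(frames)))
    lucky = np.argsort(scores)[-n_keep:]       # indices of the sharpest ("lucky") frames
    return frames[lucky].mean(axis=0)          # averaging them beats any single exposure

# Usage (hypothetical file of pre-registered frames):
#   stack = np.load("moon_stack.npy")
#   sharp = lucky_image(stack)
```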
And then if we were to allow professionals to do this task, maybe we could go down to 0.2 or even 0.1 arc seconds of resolution. But the backyard constraint we're aiming for is stricter: we want a technique that works when you look at the moon with no finer than about half an arc second of resolution. So anyway, this problem has stuck with me for years, 14 years at least; I've just kept at it. I've got to tell you the bottom line: I haven't solved it. It's my white whale project; I haven't succeeded yet. But it's been a wonderful motivator and a wonderful source of inspiration for computational imaging projects. So what I want to do is tell you about a couple of different approaches I've used in trying to make the moon camera, and then tell you about the computational imaging spin-offs that resulted from that line of thinking. Here's a list of them: the first was to look at diffuse reflections off the moon; then one involving cast shadows; one involving specular reflections of modulations of sunlight; using satellites as an image prior; and looking at intensity integrals over the penumbras on the moon caused by the earth's illumination. I want to go through a few of these and tell you the approach and the spin-off.

Okay, so the first one is looking at diffuse reflections. Let's assume the moon is a perfect sphere, just a rough sphere; we're going to ignore the craters and albedo changes and treat it as a uniformly colored sphere. One interesting thing to note is that it's a very, very rough object. A typical model for a non-shiny object in computer vision is a Lambertian reflectance function, but the moon is even rougher than that. The image we're looking at here is from a paper by Shree Nayar, where they came up with a model for rough reflection, for bidirectional reflectance distribution functions. Lambertian is on the left, but you can make even rougher models. If a surface is very, very rough, it's almost like a backscattering, retroreflective surface, where most of the light just bounces straight back at you. And you can see, as the roughness increases across the three rendered spheres, it looks flatter and flatter. Indeed, if you take a photograph of a full moon, it really is very flat and doesn't have the fall-off at the edges that a Lambertian reflectance would have.

Okay, so here are some plots. The green line is a particular model of the moon's reflectance function, due to Minnaert, and then there's the Lambertian one. A reflectance function is really a 4D function, but we're just going to look at a slice of it: we fix the reflected ray we're looking along and see how the intensity of that reflection varies with the angle of the incident light ray. On this plot, zero on the bottom corresponds to a light ray coming in normal to the surface, and 90 corresponds to a ray at grazing angle to the surface. Again, we're trying to use this to make an image, so we would like something where, if we wiggle the position of our little light source over the earth, over the quarter-sized thing, then an arm's length away we get a significant intensity change. That's what we want.
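A minimal sketch of the kind of reflectance slice being described, assuming the Minnaert form I ~ cos^k(theta_i) cos^(k-1)(theta_r); the lunar exponent k ~ 0.5 is an assumption here, since the exact exponent behind the slide's green curve isn't stated:

```python
import numpy as np

def reflectance_slice(theta_i_deg, k=0.5, theta_r_deg=30.0):
    """Reflected intensity vs. incident angle, for a fixed viewing ray.

    Minnaert model: I ~ cos(theta_i)**k * cos(theta_r)**(k - 1).
    k = 1 recovers Lambertian; k ~ 0.5 is a common lunar fit (an assumption here).
    """
    mu_i = np.clip(np.cos(np.radians(theta_i_deg)), 0.0, None)
    mu_r = np.cos(np.radians(theta_r_deg))
    return mu_i**k * mu_r**(k - 1)

theta = np.linspace(0.0, 90.0, 91)
minnaert = reflectance_slice(theta, k=0.5)    # nearly flat, then drops sharply near grazing
lambertian = np.cos(np.radians(theta))        # smooth cosine fall-off, for comparison
# The Minnaert curve changes fastest near 90 degrees, i.e. at grazing incidence,
# which is why the scheme below samples right at the limb of the moon.
```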
So from looking at this curve, you can see that most of the action is going to happen right at the edge of the moon. In other words, if the incident light ray comes from this part of the earth versus that part of the earth, you'd like to see a significant change in intensity, and that would let you back out an image of the earth by processing the picture of the moon you see. And you can see that with the Minnaert reflectance, there's very little change in the reflected light except at the very edge, which corresponds to light rays just grazing the surface, which corresponds to looking at just the very edge of the moon. Okay, so let's do that. We're going to sample right near the edge of the moon, which astronomers call the limb, and we're going to sample as close together as we can, as close to the edge of the moon as we're allowed, given our constraint that it has to be something an amateur astronomer can do. So we'll use 0.5 arc second spatial sampling. Just to give you a feel for what that means, we zoom in as we go from left to right: on the far left, you can see the curvature of the moon, and when you really, really zoom in, there you can finally see our samples at 0.5 arc seconds of separation. We're going to have to make a measurement with that precision, which is really tough to do, but we think an amateur astronomer could do it.

Okay, so that's the experimental part, and then there's the reconstruction part. You know, we live in a strange world now; I bet almost every student just uses neural networks in their daily life, that's my guess. But this is an old-school talk, and we're going to use Gaussian models. Why is that the first thing to try? It gives us insight into the problem. In the top row there, we have Bayes' rule: p(x | y) is proportional to p(y | x) p(x). X is the image of the earth that we're trying to reconstruct, which we don't know; Y are the observations, the brightnesses that we see just around the edge of the moon. So the probability of the image of the earth we're looking for is the product of the so-called likelihood term, written on the bottom there, times this prior. Is my tiny little cursor visible to you or not? It is. Okay, thank you. Imaging is wonderful because it's a linear process. If we take the image of the earth that we don't know and rasterize it into a large vector, then the whole imaging process, the relationship between the brightness at any one of those points and what we measure on the moon, can be captured by a matrix that I'll show you in a slide. This is just geometry and computer graphics that lets you figure it out. And once you have that, you can write the so-called likelihood term, which is to say you're looking for an unknown X that minimizes the difference between the rendered values and the observations you see. We'll assume independent Gaussian observation noise, so the probability of our observations given a particular image of the earth follows this form: p(y | x) is proportional to exp(-||Ax - y||^2 / (2 sigma^2)). So let's write down the light transport matrix A for rendering an image of the earth onto the moon. We'll rasterize an image of the earth. The first row I'll show you is this green row; the colors are just for finding things in this chart here.
And then you can calculate, just using computer graphics, what the brightness would be at any part of the moon if you look at it, assuming a particular reflectance function for the moon. From the telescope on the earth, we're going to look at the edge of the moon, at four different rows here, very near the edge: closer in, closer in, closer in. And for every point over here on the earth, we're going to calculate what the brightness at any one of these points at the edge of the moon would be. Here's how we're laying out that matrix: every row of the earth is one of these vertical columns in the matrix, and every distance in from the edge of the moon where we make our observation is a different one of these rows. So that's our light transport matrix, and that's the so-called likelihood term.

But it's such a weak observation that you really have to rely pretty strongly on your prior assumptions about what images look like. We're going to use a pretty standard prior here. We're going to assume, call it the round-earth assumption, that when the earth is lit up, it has this particular silhouette. And then we can just sample lots of terrestrial images, ordinary images, do a principal components decomposition, and decompose this into different modes. This is our low-dimensional representation of what any image of the earth could look like: we have sort of a DC mode, and then higher and higher order basis functions describing more and more complex variations of shading on the earth. It's very similar to a Fourier transform. You can also treat it as a Gaussian prior by looking at the variance of each of these modes over our data set, and you can make random draws from the prior model; these are some random draws at the bottom.
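A minimal sketch of building that Gaussian prior, assuming a training set of rasterized terrestrial images; the file name and the number of modes kept are placeholders:

```python
import numpy as np

# X_train: (n_images, n_pixels) matrix of rasterized earth-like training images
# (hypothetical file; assumes at least as many images as modes we keep).
X_train = np.load("earth_images.npy")
mean_img = X_train.mean(axis=0)

# Principal components: eigenvectors of the data covariance, via SVD.
U, s, Vt = np.linalg.svd(X_train - mean_img, full_matrices=False)
modes = Vt                                  # rows are basis images ("modes")
variances = s**2 / len(X_train)             # prior variance of each mode coefficient

def sample_prior(n_modes=30, rng=np.random.default_rng(0)):
    """Random draw from the Gaussian prior: mean image plus random mode coefficients."""
    coeffs = rng.standard_normal(n_modes) * np.sqrt(variances[:n_modes])
    return mean_img + coeffs @ modes[:n_modes]
```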
Going back to Bayes' rule, you can combine the likelihood term, here in the middle, which is a Gaussian centered based on our observations, with the prior, which is another Gaussian. This gives us our so-called posterior: the probability of any given image of the earth, given the observations we made of the moon. Now, the beautiful part of this Gaussian analysis is that the product of two Gaussians gives you another Gaussian. You can calculate it by doing a matrix version of completing the square, and you can calculate analytically, for any given set of observations, what the most likely image of the earth is. Really nice. And here's the answer. The different terms in this solution are the rendering matrix we had, the computer graphics part, the forward model; then assumptions about your prior model of the earth, which you can measure from your training set; and then assumptions about how much noise there is in the image of the moon you're looking at. You put it all together and get this estimate.

And here's how it works. You look at the edge of the moon, at, say, three different rows in from the very edge, and for each of those distances in from the edge, you take your observations and multiply them by this function that was calculated on the previous slide. For the first row, you take your first row of observations multiplied by this; the second row by that; the third row by that; add up all those products, and you get a number. Basically, you take the dot product of your observations with this function, and that number is the coefficient of this mode of the image of the earth in your estimate. Then you take all your observations, multiply them by this and that, take the dot product, and get a single number out, and that single number is the coefficient of mode two of the earth. And the same for modes three, four, and five. So this is a way to go from looking at the moon and measuring the brightness at the very edge to estimating the image of the earth that created that set of brightnesses at the moon. Pretty cool. We're good to go.

But we have to ask how high-resolution an image of the earth we'll be able to reconstruct if we limit ourselves to looking at the moon at 0.5 arc seconds of resolution. And unfortunately, the answer is: not that good. Let's see. Here's my MATLAB simulation. The red curve is the estimate and the green curve is the true values; we ignore the DC term for this example. You can see it captures things pretty well up to about mode eight or so, and then it basically gives up. It says, I really can't see what's going on here; I'm just going to estimate whatever my prior assumption is for these modes, and my prior says these modes are all mean zero, so I'm just going to estimate zero. This other curve tells a similar story. This is the posterior covariance: the error you expect in your estimate of each of these terms. At first it's driven by the likelihood term, where it thinks it's making more and more mistakes as it goes on, and then it finally just gives up, ignores the likelihood term, and starts guessing zero, because that's what the prior says, and that gives the least error possible. So you only get about eight numbers out of all this work. And how good a picture can you make from eight numbers? Well, if each number is a pixel, that would be about a three-by-three or two-by-two image of the earth. And indeed, here's what it corresponds to. If this is the true earth (in my simulation the true earth had only 30 modes, but even of those 30 I could only recover about eight), this is the reconstruction, assuming 0.5 arc seconds of sampling resolution and a really low-noise image with about 0.1% observation noise. So it's kind of discouraging; it's not quite there. And if I allow myself a professional telescope, really good conditions with 0.1 arc second resolution, then you can reconstruct the earth to look like this middle image, when the true image is at the bottom left. And it's okay; it's just not exciting. I'm not sure it's worth persuading a professional astronomer to make this observation, and besides, we're really aiming for the amateurs.
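Putting the pieces together, here is a minimal sketch of the closed-form posterior-mean estimate just described. The light-transport matrix is drawn at random here as a stand-in for the rendered one, the decaying per-mode prior variances are assumed, and the noise level is the 0.1% figure from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_modes = 40, 30                        # limb observations; prior modes kept

A = rng.standard_normal((n_obs, n_modes))      # stand-in for the rendered light-transport matrix
prior_var = 1.0 / (1 + np.arange(n_modes))**2  # assumed decaying prior variance per mode
sigma = 1e-3                                   # ~0.1% observation noise

x_true = rng.standard_normal(n_modes) * np.sqrt(prior_var)
y = A @ x_true + sigma * rng.standard_normal(n_obs)

# Product of two Gaussians is Gaussian; its mean is the analytic estimate:
#   x_hat = (A^T A / sigma^2 + Sigma^-1)^-1  A^T y / sigma^2
precision = A.T @ A / sigma**2 + np.diag(1.0 / prior_var)
x_hat = np.linalg.solve(precision, A.T @ y / sigma**2)

post_cov = np.linalg.inv(precision)            # expected error in each mode coefficient
# With the real, nearly rank-deficient A, the posterior variance of the high-order
# modes approaches the prior variance and x_hat falls back toward zero there:
# the estimator "gives up" on those modes, as described in the talk.
```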
So this approach didn't work out. But here's spin-off number one: this whole effort got me a big MURI grant, in part. There was a call from DARPA about doing non-line-of-sight imaging, and I'd already done all this work on reconstructing an image of something by looking at it projected onto a sphere. So we used the same math, the same calculations, and said to our DARPA sponsor, well, we're going to assume there's a sphere inside the room, and we'll allow the observer to look at the sphere, and they'll do their non-line-of-sight imaging. Non-line-of-sight means you look at a reflection of something rather than directly at the thing itself. And so we could do a lot of what the grant wanted from these calculations with the moon. Here at the left is a video of what we estimated you could reconstruct if you allowed yourself to look at a sphere situated inside a conference room, but you couldn't see into the conference room itself: you could look at the sphere and get a reconstruction like that. So it was helpful.

Now I'm really going to have to pick up the pace. Let me just tell you about one more thing, which is approach number two. We can pause for a few questions here if there's something you want to ask about approach number one or the general scheme. I don't see anything so far, Professor Freeman. So let me tell you about approach number two, because it also had a really nice spin-off. Somehow, again, I didn't give up. I don't know why I didn't give up; I really should have, but I was just really taken by this problem. And I thought, here's another way of doing it, and maybe this is going to work better: the earthshine. This one's fun because it relates to craters on the moon. Suppose you're looking at a crater that's right near the edge of the moon, and again, it's lit up by this earthshine. The front wall of the crater is going to cast a shadow onto the back wall of the crater. But what's really cool is that if you look at the fuzzy boundary of that shadow cast by the crater, that fuzzy boundary tells you what partial sums of the earth's intensity look like. So pretend you're sitting there inside the crater, looking back at the earth. This is what you would see, in some sense. Here's the earth, and this red dot is the telescope you're looking at the moon with from the earth; but here, you're sitting inside the crater looking back at the earth. If you're up at position A, the front wall of the crater is well below the earth, and the brightness you see at position A is the full brightness of the earth, the whole earth. If you're at position B, you're starting to lose some of the earth, because a little part of it goes below the front edge of the crater. And at position C, you lose about half of the earth, because about half of it is cut off by the front wall of the crater; so the brightness at position C is the sum of the intensities over this part of the earth. So if you sit on the earth, look back at the moon, and measure those brightnesses, they tell you about partial sums of the earth's intensities, and you can use those in a linear algebra scheme to estimate what the earth looks like. It's kind of like a Sudoku problem: if you know that this particular sum gives you this amount, and this other sum gives you that amount, you can figure out what the numbers are.
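A toy sketch of that partial-sum readout; the five-value "earth" profile and the noise level are made up, and the real problem is a joint linear system over several craters rather than this simple differencing:

```python
import numpy as np

rng = np.random.default_rng(1)
earth_rows = np.array([3.0, 5.0, 4.0, 2.0, 1.0])  # toy 1D earth brightness profile

# Each point up the crater's back wall sees the earth down to a different cutoff:
# the top position sees every row, deeper positions see progressively fewer rows.
partial_sums = np.cumsum(earth_rows[::-1])[::-1]  # sum from each row to the top
observed = partial_sums + 0.01 * rng.standard_normal(len(partial_sums))

# The "Sudoku" step: adjacent partial sums differ by exactly one earth row,
# so differencing the penumbra profile recovers the earth profile.
recovered = -np.diff(np.append(observed, 0.0))
print(recovered)                                  # approximately [3, 5, 4, 2, 1]
```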
I really want to skip a lot of this, because I want to show you the spin-off of it. But the idea, then, is that you look at several craters near the edge of the moon, and by looking at those fuzzy boundaries of the cast shadows, you can see these partial sums: up to this point, up to this point, up to this point from that observation, and so forth, and reconstruct the earth from those. So the name of the game is: how big are the craters on the moon? What you really want is a really big crater, one with a very long throw for that shadow edge, so that the apparent size of the fuzzy part of the shadow is as big as possible from the earth's point of view. The bigger it is, the easier it is to see from your telescope. So I made a little census of how big the craters are at different parts of the moon, and you can assume that maybe you'll find a 50 or even a 100 kilometer crater. But again, it's funny, it's as if nature is conspiring against me on this project, because with a backyard telescope (and this is a totally different method now) it's again right at the edge of what you can do. With a backyard telescope, which we'll now allow to see with a resolution of 0.33 arc seconds, and assuming we can find 50-kilometer craters, you'll get a resolution of only 2.7 pixels per earth diameter; so it looks like that. If we allow ourselves to find a really big crater, a 100-kilometer diameter crater, and look at it with a 0.33 arc second backyard telescope, which is right at the edge of what's feasible, we'll get something that looks like this: about five, five and a half pixels per earth diameter. On the other hand, if we go to our professional friends and let them do it, with 50-kilometer cast shadows you get this, and with 100-kilometer cast shadows you get this, which is reasonable. But we'd love to get an image like the bottom far right with something amateurs could do, and this method doesn't do that.

But this method had a wonderful spin-off, which was the corner camera. Professor Bouman's daughter, Katie Bouman, worked with me on it. This method got me thinking about cast shadows as imaging devices. And if I can borrow 10 more minutes from the talk, I'll take you to the corner camera. Let's see how to do this; okay, let me again go to props. This is the cool part, actually. Here's something like a reflector, and here's an object. If I have this reflector way back here, the brightness I see from here really is an integral, in some ways, over all the light from all parts of the room shining on any point here and reflecting back to me. So my question for you, let's see if we can do this again. Here's an object; I put it here, and if I take it away, it's going to change the brightness on this card a little bit. The brightness we see on this card comes from everything in the room, the light impinging on the card and reflecting back to our eye, and if we put this thing up in the field of view of the card, it changes the integral of all that light and changes the value on the card a little bit. Here's the question for you, ballpark, within a factor of 10: how much do you think holding up something like my hand or that bottle changes the light intensity coming back to my eye? What fraction does it change, versus when I don't hold it up? If you could, please put into the chat how much you think it changes. Great. You guys are great. Perfect.
And of course, it's kind of an impossible question to answer, because it really depends on the details, but you've got the order of magnitude just right. For a lot of videos we've looked at, and also for a simple analysis I've done, it's ballpark one part in a thousand. Okay, so that's cool. Now here's part two of this: suppose I put an occluder in front of it. So now this card is the ground, and this thing is an occluder, like a building edge, or the edge of a room, or a wall. This structure is everywhere in our world: a reflective ground plane with a sharp edge next to it. Now you've got an imaging system. How is that? Well, a point here integrates light from everywhere: it sees all of my face, and then stops integrating right at this angle. And a point here sees all of my face and stops integrating a little bit further along. So it's an integrator: it integrates everything up to a certain angle, where it reaches that edge and stops integrating. And I can sort of show this to you. Usually it does this with one-part-in-a-thousand light differences, but just to make everything really clear, let me use a really bright source, like this thing. Wait, there we go. Now you can see this obviously bright edge there from this point source. And how do you read out the signal from an integrator? Well, you differentiate. If we take the signal here and differentiate with respect to angle around this corner, you can read out an image of what's on the other side. It's a projection of it, because changing the height of something makes only a very small difference in the reflected value; it's really the angle that matters. And we can do a quick check that this makes sense. This is a step function, and what's the derivative of a step function in angle? Well, it's a delta function in angle. So this tells me there's a really bright source right in the direction that shadow points, and, check, that's correct. That's for a really bright source, but the same math holds even for much dimmer sources, ones you can't even see by eye; they still make this one-part-in-a-thousand change here. And if you differentiate the image reflected from the ground as a function of angle, you can pull out a 1D image of what's around the corner.

So let me just show you this quickly, and then we can go to questions. I want to share my screen another time. What's really fun about this is that it's robust and reliable, and it has this magical feel to it, which I love. There are these signals out there in the world that you can't see by eye, but you can measure them with a conventional camera, even a cell phone, and make a picture of what's around the corner. Suppose you're walking out of this building at MIT and you want to know: is there anybody walking around the corner there? That's something you might well want to know, for personal safety or other reasons. Here's what to do: you go stare at the ground there. These one-part-in-a-thousand changes, by the way, are not visible by eye, but they're straightforward to pick out if you integrate over these pie-shaped strips around the corner.
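A minimal sketch of that readout, assuming a grayscale video and a known corner location; the wedge count, the binning, and the absence of any gain calibration are simplifications relative to the published corner-camera pipeline:

```python
import numpy as np

def corner_camera_1d(frames, corner_xy, n_wedges=64):
    """Recover a 1D angle-vs-time video from a stack of ground-patch frames.

    frames: (n_frames, H, W) grayscale video of the floor near a corner.
    corner_xy: (x, y) pixel location of the occluding edge (assumed known).
    Returns an (n_frames, n_wedges - 1) array: angle horizontally, time vertically.
    """
    n, H, W = frames.shape
    ys, xs = np.mgrid[0:H, 0:W]
    angles = np.arctan2(ys - corner_xy[1], xs - corner_xy[0])  # angle of each pixel about the corner
    edges = np.linspace(angles.min(), angles.max(), n_wedges + 1)
    bins = np.clip(np.digitize(angles, edges), 1, n_wedges)    # pie-shaped wedge index per pixel

    # Mean intensity in each pie-shaped wedge, for every frame.
    wedge = np.array([[f[bins == b].mean() for b in range(1, n_wedges + 1)] for f in frames])
    wedge -= wedge.mean(axis=0)     # subtract the static ground image (dirt, texture)
    return np.diff(wedge, axis=1)   # read out the integrator: differentiate in angle
```

The two decoding steps named in the talk, subtracting the average image and differentiating in angle, are the last two lines.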
So here you can see a picture of Professor Bouman's daughter, Katie; this is being recorded by that camera you saw. Vickie was an undergrad, and now she's a grad student at Berkeley; Katie was a grad student, and now she's a faculty member at Caltech. So they're walking, and meanwhile, the camera you saw in the other picture is taking a picture of the ground here. This is the world's most boring video; let me play it for you. Okay. And then here's the decoder. Again, the way you read out from an integrator is you differentiate. These are positive values and these are negative values: this is what you multiply the ground image by to read out a 1D picture of what's on the other side of the corner, as a function of angle. You also have to subtract out the average image, because you don't want to be affected by all the dirt and stuff on the ground. And here are the results. Horizontally, along the top, is angle, spatial position; vertically is time. This is the 1D video we've recorded. It's not as revealing as a full 2D video would be, but it's something that's present everywhere, you can capture it with a conventional camera, and it lets you see around the corner. Even though it's just a 1D image, you can still tell important things from it. You can count how many people there are: there's one person there, just one trace, and you can tell whether they're moving quickly or slowly. Here's another one; how many people in this one? Well, two.

So let me skip toward the end. What's really fun is that it's just really robust. Let me show you some more examples, and then we can stop for questions. I'll show you these in decreasing order of contrivedness. Okay, here's the most contrived: we've got a bright light source, we've got Katie and Vickie moving inside a room, and we're looking at this white piece of paper below the corner. Here's the readout of the 1D video as a function of time, calculated from the video you see at the bottom left. And again, it's the world's most boring video; you can't see any change over time in it, but the signal is there to be extracted by taking those integrals along the pie-shaped wedges around the corner. Now, slightly less contrived: we'll turn off the light. It still works. Less contrived still: we'll take away the white piece of paper. Again, the same setup back here; this is the video we're processing, and on the far right is the 1D video, angle horizontal and time vertical, and you can see who's where as a function of time. Here's another one. Again, the world's most boring video, at the bottom left, is what we're processing; the top left is what's going on; and from the bottom left we recover this 1D video of this person jumping around. It works indoors, it works outdoors, it works on many different surfaces. Here's Adam and another student who worked on it, walking slowly away from the corner, and here are the traces with three different cameras: an iPhone, a Sony camera, and a Point Grey camera. You can see that as he moves further from the corner, he gets smaller: the thickness of his trace in this 1D video gets narrower and narrower, because his projected size gets smaller and smaller as he moves away from the corner. Oh yeah, and then, it's just so much fun, you can take this thing outdoors. Again, this is an outdoor video of a piece of ground, and you can't see any change in it by eye.
But as they walk around, you can calculate this 1D video of them walking around. What's also fun is that it's pretty robust. One time we did it and it started to rain. There it is. You can see the video becomes slightly more interesting because it's raining on it, but it doesn't break catastrophically; obviously the performance gets worse, but the signal is still there. How might you use this? Well, conceivably, you could use it as a pedestrian on the street, wanting to know if there's someone walking around the corner; that might be interesting to know for personal safety reasons. You might also imagine that if you're in a self-driving car, you want to know if there's a child around the corner. So we borrowed a child. Here's the daughter of a faculty member running around in a circle. Does this work for kids? We set up a camera looking at the ground, and yes, you can still see a trace of her walking around, even though she has a smaller cross-section than adults do; you can still see her as she runs around the corner. And then just one last thing: it's also fun if you have two of these. This is an open doorway. What else is it? It's a pair of corner cameras, a stereo pair: you've got one here and one there. So you can look at both of them and make a stereo picture of what's going on inside the room, even though you're not looking directly into the room. So again, this was the result of many people's work, but the initial idea came directly from the moon project; it's another spin-off from it. And just to reiterate the restrictions on this: it only gives you a 1D video of what's around the corner, and it has what I call T-Rex vision; it really requires that the thing be moving. That was part of the story in Jurassic Park: the T-Rex could only see you if you moved. This also led to a number of papers as we explored occlusion-based imaging: if you have a plant in there, and you know the shape of the plant, you can pull out a light field. A lot of fun spin-offs came from this. And that's about all the time I have.

Oh, and one other thing that was fun: they actually do have a satellite up now that records images of the earth every hour, and I was thinking, well, maybe I can allow myself to use those images as a very strong prior on what the earth might look like. So this is a very different approach. You could imagine allowing yourself to look at the moon with just a cell phone camera and measure just three numbers from it: the average color of the reflected earthshine. Can that tell you anything at all? It turns out it tells you a little bit. If you use this library of past images of the earth, and you don't use the color the earth was at that moment, you get this picture. But if you do use the color you might measure from a cell phone, you get a slightly better picture. But that's really cheating, because you've got this really strong prior from the previous images.
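One way to picture that cell-phone idea is as a nearest-neighbor lookup against the satellite archive; the file name, the archive itself, and the plain RGB distance are all assumptions for illustration:

```python
import numpy as np

# library: (n_images, H, W, 3) archive of hourly satellite images of the earth;
# loading it from this file name is a placeholder assumption.
library = np.load("earth_library.npy")
lib_colors = library.reshape(len(library), -1, 3).mean(axis=1)  # mean RGB per archived image

def guess_earth(measured_rgb):
    """Pick the archived earth image whose average color best matches the
    three numbers a cell phone could measure from the earthshine."""
    d = np.linalg.norm(lib_colors - np.asarray(measured_rgb), axis=1)
    return library[np.argmin(d)]
```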
So let me stop here. Oh, sorry, one last thing: what am I doing now? I have yet another approach, which I haven't had time to tell you about. I'm setting up little toy models in the lab, and I've hired a postdoc who's going to help me with this; we're going to try to make it work on a small-scale model and then see where that takes us. So this is a summary of some of the methods I showed and some of the spin-offs that happened. My main point is that, rather than being the dark side of tenure, this crazy project of trying to take a picture of the earth from space by looking at the moon is actually the bright side of tenure. It's really led to wonderful research projects, it's been a whole lot of fun, and we might still get it; not yet, but we might still get it. So that's why I think it's the bright side of tenure. Thanks very much.

Oh, thank you. Thank you so much, Professor Freeman. This is wonderful. Thank you. From a logistics perspective, we have about five minutes for questions. So please, all participants, you're more than welcome to put your questions in the chat. I see the first question here: could you segment the ground along the radius to get the second dimension? That's a great question. The problem is that to do this imaging, you really want a setup where a small change in the angle of the thing you're imaging makes a relatively big change in intensity on the ground. With the wall there, that's satisfied for horizontal motion. But the reflected light really doesn't change much as the thing moves up or down a little bit, if there's no occluder in the way. So I don't think it would work well; you'd pick out only a very, very low-resolution signal from the radial variations. But thank you.

Great. Maybe I can follow up with one question until we get the next one here: how would it help if you had multiple cameras looking at the ground at the same time, from an array point of view? Okay, so we're restricting ourselves to the corner camera. Well, in the model we analyzed it with, it wouldn't change things, because we assumed a Lambertian ground. Of course it's not going to be perfectly Lambertian, so you could probably get some improvement from that. The big thing that's hard about the ground is all the dirt and stuff on it; that's why we have to look only at moving things, and why we subtract out a constant average value of the ground. Anyway, I'm not sure it would buy you much to look at the ground with multiple cameras; maybe you could get some noise benefits. Yeah, thanks. That's a good question.

This is Danny Chan. It's not my first time seeing this video; I've seen it multiple times from Katie, and every time I watch it, it's just so amazing. I feel a lot of joy seeing it again. But it's also amazing to see that you've spent so many years on this problem. I guess if you asked me to work on one problem... it's really been 14 years. Wow, that's a long time. What lessons have you learned during these 14 years? Yeah. So, in my own defense, I have to say I'm not only working on this problem; I'm really doing other things, and this is a little background project. What lessons have I learned? Well, to be honest, I've learned a whole lot. I've learned a lot about imaging, and I've learned a lot about camera systems. As we all know, I think, you don't really learn something until you do it, until you really play with it. So it's like doing a 14-year-long problem set in imaging, and it's been very, very helpful for me. Even now, I have some new thing I'm not talking about yet that came out of the most recent way I've been thinking about the problem. Anyway, it's just a helpful way to think about fundamentals. Yeah.
And I've learned a little bit about astronomy and such. So, okay, thanks. We have more questions, I think, than we have time for. So let me, if that's okay with everybody, borrow five minutes from the intermission and try to cover as many as we can. One question was: how do you account for atmospheric correction? Yes. Okay, so I assume you're talking about the moon project again, not the corner camera. In all the things I've shown you, these were MATLAB simulations, and I haven't played with real data yet. I'm just saying, from reading what I see and looking at people's web pages, my guess is that when you're all done with your corrections, the image you ultimately get will have some particular resolution, like 0.5 arc seconds, and then I just start with an image at that resolution and process it. But the primary way to do the atmospheric correction, as I understand it, for amateur astronomers now, is this so-called lucky imaging method: you take a stack of photographs, make some statistical analysis of each photograph (look at its variance, perhaps), and pick out the ones with the highest variance of pixel values over the others; those will be the least blurred. Then you take an average over the least-blurred ones in your stack, and you get a nice, higher-quality reconstruction. Anyway, lucky imaging is the method used to account for atmospheric effects.

Going back to the questions in the chat: is the processing possible in real time, or is there some delay in receiving the differential signal? That's the beauty of it: it is possible in real time. We would make little demos and run them in real time, and it's just a lot of fun. Yeah.

Now the question: if you had multiple cameras facing the moon, greatly distant from one another, thousands of miles apart, would you get any improvement in resolution? I think you'd get a small improvement by having two or three cameras instead of just one, and you might get some improvement by having a diversity of atmospheric conditions. But I don't think these would be big effects; these would be relatively small effects, I think.

Have you tried cancelling the baseline, which is a recording of the scene without any movement? Would that help? Let's see; I assume this is about the corner camera. Yes, the corner camera. Well, actually, I'm not quite clear what you mean by cancelling the baseline. We do have to make an average image and then subtract it from every frame, in order to look only at the moving things. So if that's what you mean by cancelling the baseline, that is what we do.

Great. And then, regarding the corner camera, would it help to look at both the floor and the ceiling? Yes, I think it would, actually. We haven't played with that much, but yes, that's a good point, and you could do that. You could imagine there would perhaps be less interference, less stuff lying around, on the ceiling than on the floor, so it might be easier to get a nice image from the projection plane when it's the ceiling rather than the floor.
So yes, that's a good point. Thank you. I know there are more questions, but out of respect for our distinguished guest's time, we want to give him 10 minutes to prepare for the panel along with our other guests. So thank you so much. We have a 10-minute intermission, and then at 3:45 we will start an amazingly exciting panel, hosted by Professor Stanley Chan. A huge thanks to Professor Freeman, and a virtual round of applause. Thank you so much. Thank you. This is amazingly exciting, and we will see you back in about 10 minutes. Thank you so much. See you soon.