about computational microscopy for phase imaging. So I'll explain what all of this is. I hope it's general enough, but please interrupt me if I use optics jargon you don't know. Computational imaging is a fairly new field that I work in. The super simple picture is that people have worked for centuries to create cameras and imaging systems that mimic your eye, which is essentially the same thing: a lens and a sensor at the back. Optical engineers spend their whole lives designing the shape of a piece of glass so you get a perfect image on that sensor plane and can just capture it. And then a lot of other people work on taking the data that comes out of the camera and extracting information from it, assuming ideal images went through. Microscopy is no different: we're trying to relay a copy of a small object to a big sensor with the highest fidelity we can, so we can see exactly what's there, and then we do all kinds of processing to track cells or extract information from that. The idea of computational imaging is to throw away this picture of the optics and the post-processing as completely separate domains. We're asking: why are we still doing this? If you look inside a camera, a lot of money, weight, and size (your SWaP, in military terms) is spent on removing aberrations, on correcting the image so that it lands perfectly on the sensor. And what I want to ask is, do we really need all of this? Do we need to do optically what we can do digitally? Can I replace the lens with some after-the-fact computation that gives me back the perfect image after processing, and use a cheap lens that is really crappy but small and thin, or whatever other reason you might want to do that? Can we do this, and should we do this? It's not obvious how you do this kind of thing, but that's the idea of computational imaging. We actually go beyond that: we try to design imaging systems in terms of both the hardware and the software together, so we need to know both optics and image processing to do this. And I want to show some examples today of microscopes specifically that go beyond just correcting the image. We want to use cheap, simple hardware and achieve what an expensive microscope can do, but with computational imaging we can actually do more than a traditional microscope can: we can go beyond high quality 2D images and extract more information by carefully designing our optical systems and our post-processing together. The typical pipeline looks like this: in computational imaging you almost always modify the optical design (we have optics designers in our group, we do this kind of stuff), and you change the optics in such a way that you take a picture that maybe looks like garbage but contains the information you want; then you do some processing on it and get back the image you were looking for. This has become a super popular topic in photography. The next step is to start a company, and there's a whole suite of companies that all do light field imaging, which is the example I'm showing here.
But this is really just the poster child of computational imaging; it's a starting point. Computational photography has completely blown up: your iPhone does HDR now, and that's computational imaging. Your next cell phone may have an array of cameras like this that can do 3D, digital refocusing, and all kinds of things where you change the aperture after the fact. What we do in my lab is specifically for microscopy. We're looking at small things, we're doing scientific imaging; we don't just want a pretty picture, we need it to be scientifically correct. More importantly, the difference between the two is that photography is purely ray optics. All the wave-optics stuff is still happening, but it happens on the order of microns, so if I'm looking at a large scene I couldn't care less what's happening at the micron scale. I don't need wave optics; I treat everything as a ray and use geometric optics. It's easier, it's simpler, things are easy to define, and everything's linear. In my lab we're dealing with microscopes where wave optics matters: phase and diffraction happen on the size scale of the things you're looking at, so you must consider wave optics. Everything we do is a little more complicated and harder, but we need it for this application. So here's a great example; this is what I spent most of my PhD working on: the problem of phase retrieval. It's a pretty common problem and it's been around for a long time; there's already been a Nobel Prize for phase contrast imaging, so this is an age-old problem. The idea is that light is a wave: it has an amplitude and a phase, but we can only measure the amplitude squared. Intensity is a measure of power or energy, which is the absolute value squared, and that's it; the phase is gone. If you have some field A e^{i phi}, phi is the phase, and it's gone when you take the absolute value squared. So phase imaging becomes a computational imaging problem by force: you cannot directly measure phase, so you must do something to the physical imaging system to convert some of that phase information into intensity information so that it can be measured. And that's exactly what we do: we mess with this x before we take the absolute value squared, so that some phase information about the object gets into the measurement. If you imagine x as a complex field, one thing you can do is convolve it with a complex transfer function, or a complex impulse response that we call the point spread function, and you'll get some of the object's phase into your intensity measurement. How you do that is a game we have to play smartly, and how you solve the resulting problem after you've taken the measurements is the second half of the game. Here's a great example of why this is extremely important in biology, and why there was a Nobel Prize for phase contrast. These are cancer cells, and the intensity image at the top is the normal microscope image. The cells are unstained, so they're completely transparent: they don't absorb any light, so they have no amplitude variations, but they do delay the phase of the light.
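That is exactly the situation in this toy numerical sketch: a pure-phase object gives zero contrast under direct intensity detection, but passing the field through a complex transfer function before detection converts phase into measurable contrast. All the numbers here are made up for illustration, not from the actual system.

```python
import numpy as np

# Pure-phase 1D "object": unit amplitude, smoothly varying phase (made-up values).
n = 256
phi = np.exp(-((np.arange(n) - n / 2) ** 2) / 200.0)   # a smooth phase bump, ~1 rad
x = np.exp(1j * phi)

# Direct intensity measurement: |x|^2 is flat, so the phase bump is invisible.
I_direct = np.abs(x) ** 2
print(np.ptp(I_direct))          # ~0: no contrast at all

# Pass the field through a complex transfer function first (a toy quadratic-phase
# kernel standing in for defocus), then detect: intensity now varies with the phase.
f = np.fft.fftfreq(n)
H = np.exp(1j * 2000.0 * f ** 2)                        # hypothetical transfer function
I_encoded = np.abs(np.fft.ifft(np.fft.fft(x) * H)) ** 2
print(np.ptp(I_encoded))         # > 0: phase has been converted into contrast
```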
So when we can measure phase, and this is a 2D map of the phase delay caused by these cells, then we can see them, and that was the important part of phase contrast. In my lab we work on quantitative phase, which means the bottom phase map I'm showing here is not just some visualization of the phase information; it's one-to-one related to the phase delay caused by the sample. For example, this is essentially a height map of the cell, so we get quantitative information about the shape of the cells, and that makes cell segmentation and tracking a lot easier because you really have a blob where the cell is. With phase contrast you just get a sort of high-pass-filtered version of this with absorption effects mixed in, so we need to be able to separate absorption from phase to call it quantitative. There are a lot of different application areas. I'm only going to talk about the optical ones, but we also do a lot of work in X-ray, as do many other people. Here's a great example: you're used to seeing X-ray images of yourself where your bones absorb a lot of X-rays, so you see the bones with high contrast, right? If you could take a phase X-ray image of your body, you would see (for example, that's a picture of collapsed lungs) that soft tissues show up when you start doing phase contrast in X-ray. It's just a different contrast mechanism, so very different things show up when you get phase information, and sometimes it's harder to predict than others. This is an example from Lawrence Berkeley Lab, where we work with the SHARP tool; they do inspection of EUV photomasks for lithography. This is a set of printed lines, and the phase contrast image gives you a sense of the topography, so you can see variations that otherwise aren't visible if you just take an absorption or intensity image; they look essentially the same. We also do a little bit of work in X-ray security screening. We're just getting started, but this is where your baggage scanners are going: they're going to have phase contrast in a few years, and the point is you can see different things. You keep absorption contrast, but you also get phase, so different types of explosives or chemicals will show up in the X-ray screener, so you should stop carrying those when this happens. Okay, so I want to talk about this phase retrieval problem; it's the crux of everything I'm going to talk about today. The idea is that x is some complex function, say a 2D complex array representing the amplitude and phase at every point in my 2D image, and I'm trying to solve for it. The physical optical system can almost always be written as a linear operator acting on that x. The problem is that the detector is nonlinear: the detector takes the absolute value squared. But we can play with this A matrix, which describes our optical system, and change the optical system to get different operators; the constraint is that the measurement is always the absolute value squared. So I'm calling this part the optical system design.
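Written out, the two pieces just described look roughly like this. This is my summary notation, not the speaker's slides; A_k is whatever linear operator the k-th optical configuration implements, and the absolute value is taken elementwise.

```latex
% Forward model: a linear optical system followed by intensity detection
y_k \;=\; \bigl|\, A_k\, x \,\bigr|^2 \;+\; \text{noise}, \qquad k = 1,\dots,K,
\qquad x \in \mathbb{C}^N .

% Inverse problem (phase retrieval) as nonlinear least squares:
\hat{x} \;=\; \arg\min_{x}\; \sum_{k=1}^{K} \bigl\|\, y_k - |A_k x|^2 \,\bigr\|_2^2 .
```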
This is like designing the forward problem, and you would like to get as much phase information into your measurements as possible, but you want to do it in a way that's cheap, easy, and physically possible, which is not something that pops out of the math when you try to optimize this A matrix mathematically. So the optical system design is much more of an art, whereas the bottom part, find x given these measurements, is a well-defined problem. It's just a nonlinear optimization problem in our case, and we have lots of different ways to solve it because different ones are practical in different situations. The big challenge is that x is usually the size of your image, so it may have a million entries, and any quadratic quantity you want to form will be of size a million squared; good luck inverting matrices like that. So you need to be careful about how big the problem is when you run these nonlinear optimizations. Okay, so here's a great example of a really easy way to do phase contrast; this is what I worked on during my PhD. We have a microscope with an automated focus stage in z; a lot of microscopes have this so you can autofocus, or just move the sample up and down to focus on it. What we were doing was saying: if I take a couple of images at different focus positions, I can put them into this algorithm and solve for phase. We showed that this is a good phase contrast mechanism, in the sense that you can get phase information out of a through-focus stack. Somebody realized this a long time ago, but the algorithm was incredibly unstable, with severe low-frequency noise instabilities, so we worked on taking a stack of more than two images and getting all of the spatial frequencies accurately represented without too much instability. The problem becomes the typical one, but now my A_z denotes a different linear operator on the complex field for every z position, so every focus step has a different operator and I can get diversity into my data. Otherwise you should ask: how many images do I need to take? I'm trying to reconstruct both amplitude and phase; I can never do that from one image and shouldn't expect to. I should be able to do it from two, but that's very unstable, so we typically take five or so to make it a well-posed problem. Okay, going into a little more detail on what defocus is as far as this A matrix goes: we can write defocus as a convolution, a linear convolution of the complex field with some impulse response h_z that depends on the defocus step, and h_z ends up being just a quadratic phase function. This is what physics tells us. So you can digitally defocus and refocus things; you can see there's diffraction ringing here because wave optics matters. And it's a very simple model, which is good. The forward model is just this: our complex field x, convolved with an impulse response h_z that depends on the defocus distance z, and we take the absolute value squared at the camera.
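Here is a minimal numerical version of that defocus forward model, using the Fresnel quadratic-phase kernel applied in Fourier space. The wavelength, pixel size, and defocus distances are made-up placeholder values, not the actual system parameters.

```python
import numpy as np

def defocus_intensity(x, z, wavelength=0.5e-6, dx=0.5e-6):
    """Intensity of the complex field x after defocusing by a distance z.

    Implements I_z = |x (*) h_z|^2, with h_z applied as a quadratic-phase
    transfer function in Fourier space (Fresnel approximation).
    """
    n = x.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    Hz = np.exp(-1j * np.pi * wavelength * z * (fxx ** 2 + fyy ** 2))
    return np.abs(np.fft.ifft2(np.fft.fft2(x) * Hz)) ** 2

# A pure-phase object is invisible in focus but shows contrast when defocused.
n = 128
yy, xx = np.mgrid[:n, :n]
phase = 0.5 * np.exp(-((xx - n / 2) ** 2 + (yy - n / 2) ** 2) / 100.0)
obj = np.exp(1j * phase)
print(np.ptp(defocus_intensity(obj, z=0.0)))     # ~0: in focus, no contrast
print(np.ptp(defocus_intensity(obj, z=20e-6)))   # > 0: defocus reveals the phase
```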
So the optimization problem we're trying to solve is to minimize the difference between the measured intensity at the different focus positions and the intensity we would have gotten if our current estimate of x were correct; that x should have a hat on it, it's the estimate pushed through our forward model. This is a problem that's been around for a long time. One of the most popular ways to solve it is the Gerchberg-Saxton / Fienup method, which basically goes to each of these different domains, each of these operators, and replaces the data with the intensity you measured while keeping the phase: start with some initial guess of the phase, go to the next plane, replace the intensity, let the phase evolve, and it tends to work fairly well. It's essentially an alternating-projections procedure where you keep replacing the data you know, but it's a convex-style projection method applied to a non-convex problem, so it has problems. It often doesn't work: it falls into local minima, it gets stuck, it's sensitive to noise. So we need something a little more stable. What I was working on went a little further. You can normally get a good result from various algorithms that use through-focus images to solve for phase, but as soon as you add noise they all go to hell; this is an example with a very large amount of noise, and the phase result is terrible. So we were working on a Kalman filter idea. It's a pretty natural step: I have a system evolving with z position, and I would like an estimator that goes through all of these images, in this case recursively, and finds the optimal estimate of the phase, while accounting for the fact that I'm solving for the phase at the focus position but my measurements are of the field at planes away from focus. I need to account for those dynamics. And this works extremely well: it gives a much nicer result, it's recursive, and there's a very nice proof that the linearization you make to use the Kalman filter, which is built for linear systems, is a good one as long as your step through focus is small. You can't prove it will always converge, but in practice it does, because you stick to step sizes small enough that the linearization is valid.
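For reference, here is what the simple alternating-projections baseline described a moment ago might look like in code. This is a sketch of the classical, unstable scheme, not the Kalman-filter estimator the speaker's group uses, and the optical parameters are placeholders.

```python
import numpy as np

def propagate(x, z, wavelength=0.5e-6, dx=0.5e-6):
    """Numerically defocus a complex field by z (Fresnel transfer function)."""
    n = x.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    Hz = np.exp(-1j * np.pi * wavelength * z * (fxx ** 2 + fyy ** 2))
    return np.fft.ifft2(np.fft.fft2(x) * Hz)

def multiplane_alternating_projections(intensities, zs, n_iter=100):
    """Gerchberg-Saxton / Fienup-style phase retrieval from a focal stack.

    intensities: list of measured images |field|^2 taken at defocus distances zs.
    At each plane, keep the current phase estimate but replace the amplitude
    with the measured one, then propagate back to the focal plane.
    """
    x = np.sqrt(intensities[0]).astype(complex)       # initial guess: zero phase
    for _ in range(n_iter):
        for I, z in zip(intensities, zs):
            field_z = propagate(x, z)                  # forward model to plane z
            field_z = np.sqrt(I) * np.exp(1j * np.angle(field_z))  # data constraint
            x = propagate(field_z, -z)                 # back to the focal plane
    return x    # complex field at focus; np.angle(x) is the recovered phase
```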
Okay, but there's a big problem with this. The Kalman filter deals with noise by keeping track of the noise covariance, but our state has a million variables, so the covariance is a million by a million and we're in trouble. One of my postdocs found a very nice way of looking at the physics of the optics and showing that, while we make no assumption about the covariance matrix itself, the physical covariance matrix stays concentrated near the diagonal and is sparse, in which case we can significantly speed this up. For this data we got a big speedup, but I think that says more about the original method than about this one. Mm-hmm, mm-hmm. Yeah, so you can do that; here it's four chunks, 50-by-50 blocks, because this was around 2010 when I did it. The reason that doesn't strictly work is that, as you propagate, technically every point affects every other point in the defocused image, so it's not really fair to block it up. It does work to do that, but you can already see edge effects, and that's basically because you blocked it into chunks. Okay, moving on to other methods. This kind of through-focus phase retrieval is actually what was used on the Hubble Space Telescope and will be used on its successor. If you're trying to optimize noise performance, different defocus distances give you different spatial frequencies: images far from focus give you the low spatial frequencies, and images close to focus give you the high spatial frequencies with very fine spacing. So you can ask which of these images you're allowed to throw away, and what we found is that you need the far-away images and the really close-in images, but you don't need everything in between, so you can significantly reduce the data. One of the other major problems is that everything so far assumes a fully coherent complex field, amplitude times e to the i phase at every point, and the only way you really get that in practice is with coherent illumination: basically laser illumination, or a very small illumination source. So what happens with larger illumination sources? In this microscope example, you see the images blur as you go away from focus. If you've ever used a microscope, you know people don't want to use a really small source, because they don't get enough light through, so they open up the condenser aperture and use a larger source. But then the light is partially coherent, and you can't just pretend it's still coherent. Sigma is a measure of the ratio of the source size to the aperture: when it's really small, the light is fairly coherent, and the coherent model in my phase retrieval algorithm works; as the source gets bigger, the result gets worse and worse, because the data was created by one physical situation and I pretended it was something else, so I get garbage out, right? So we need to fix our forward model, basically. The idea of partial coherence is that the shape of your source matters; that's what has been illuminating the sample. People want to use a larger source for more light throughput, and it also gives you depth sectioning, which can be a good thing if you want to cut out everything that's not in the focus plane, and it gives higher resolution. I won't explain why, but you do get higher resolution when you use partially coherent light. So you want to model this. Take a single point on the source: it gives me a coherent, diffraction-ringy picture at some slightly defocused plane. But if I look at the images from every point on the source, each one gives the same picture, just shifted, because the point on the source is shifted. So I have all these shifted coherent images, and the measurement, which is my partially coherent intensity, is the coherent one, what I would have gotten with a laser or a point source, convolved with S, the shape of the source, and that blur scales as you go further from focus. So we want to put this model into our algorithm. The first step was to actually measure the shape of the source; we need to know it if we're going to add this convolution to the forward problem. So we put in a camera system, no problem, and now we're measuring the shapes of the source. This is what I showed you: the data looks like crap when I start increasing the size of the source, but if I model it properly, knowing the size of the source and knowing these images are supposed to be blurred by this amount, then I can get a nice result. Same Kalman filter solution; we just changed the forward model, and it works even for strange source shapes like these ring illuminations. So then the next step was: do we even have to measure the source shape, or can we just solve for it from the data?
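Before getting to that, here is a toy version of the partially coherent forward model just described: the coherent defocused intensity, convolved with the source shape, where the blur kernel grows with defocus. The geometric scale factor here is a made-up placeholder; in the real system it is set by the condenser and imaging geometry.

```python
import numpy as np
from scipy.ndimage import zoom
from scipy.signal import fftconvolve

def partially_coherent_intensity(I_coherent, source_shape, z, px_per_meter=2.0e4):
    """Partially coherent defocused image: coherent intensity convolved with the
    source shape, with the source kernel rescaled in proportion to |z|.
    px_per_meter is a hypothetical stand-in for the true geometric scaling.
    """
    if z == 0:
        return I_coherent.copy()                 # in focus: no blur from the source
    scale = abs(z) * px_per_meter                # kernel magnification, in pixels
    kernel = zoom(source_shape.astype(float), scale, order=1)
    kernel /= kernel.sum()                       # conserve total energy
    return fftconvolve(I_coherent, kernel, mode="same")
```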
So if you think about it, what we have is this stack of images at different focus positions; each one is the coherent picture blurred by the source shape, with the blur scaling as you go away from focus. So this is our model of the data. It's basically just a convolution problem; it's a little bit weird because of the scaling, but you can put a scaling operator in there and it's essentially a deconvolution model. The idea is that if this is just a convolution in 3D, then we could do a deconvolution: if I knew the coherent intensity, I could deconvolve out the source shape, right? So what we end up with is: our field defocuses and creates these diffraction patterns, and the whole thing, after you take the intensity, is convolved with the source shape. So our new estimation problem is to solve for x, my complex field, which is how I get my phase, but also solve for S. And it's not really obvious that you can always solve this, so we have to be very careful that the data we take has the appropriate information in it. I don't really want to get into that, but we take the approach everyone does when you want to solve for two things at once: assume one and solve for the other, then assume the other and solve for the first, just a back-and-forth method. We initialize them both; we have a nice linear direct solution that serves as a starting point and gets you very close to the correct solution. Then we solve for the source shape given the current estimate of the object, then do the same for the complex field, and keep iterating back and forth. We can prove that the error will always decrease or stay equal; we can't prove that we'll get to the global minimum, only that we won't diverge. So it's a very weak proof, but we have no idea how to solve this properly and prove you reach the global minimum without lifting it into a huge-dimensional space that's completely impractical. So here's one result. The estimated source size is plotted across iterations. I start out assuming it's coherent, so my source estimate is a small dot and my phase looks terrible; that's the original phase recovery result. But as I evolve through the iterations, I get a really nice solution for both the phase and the source, and it's correct. You can see it here: with the old data, we knew the source and solved for phase; now we don't know either of them and we solve for both. One thing you can see, if I zoom in, is that this ring source shape actually gives us the best data; for various reasons it gives higher-fidelity data, so the problem becomes easier to solve. And if you look at the top row versus the bottom, the bottom row looks clearer, right? This is a really nice example of a case where you would think it would be better to know the source shape and put it into your model, but we measured it and our measurement had error. The measured shape still gives back a nice result, but when we solve for both, we actually get a better result, I think simply because the imaging system that measured the source shapes was imperfect, so assuming the source was one thing when it was actually something else kept us from reaching the best solution we could. Whereas the second algorithm is able to get to that minimum because it acknowledges that neither one is known.
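To make one half of that back-and-forth concrete, here is a sketch of the source-update step: given the current estimate of the coherent defocused intensity, estimate the source kernel by a regularized deconvolution. The field-update half would be the nonlinear phase retrieval step (for example the alternating-projection sketch earlier, with this blur folded into its forward model), and the regularization weight eps here is a made-up value, not the speaker's.

```python
import numpy as np

def update_source_estimate(I_measured, I_coherent_est, eps=1e-3):
    """Estimate the source kernel S by Tikhonov/Wiener-style deconvolution,
    under the model I_measured = I_coherent (*) S."""
    G = np.fft.fft2(I_measured)
    H = np.fft.fft2(I_coherent_est)
    S_hat = np.conj(H) * G / (np.abs(H) ** 2 + eps)
    S = np.real(np.fft.fftshift(np.fft.ifft2(S_hat)))   # center the kernel
    S = np.clip(S, 0.0, None)                            # a physical source is non-negative
    return S / S.sum()                                    # normalize total brightness
```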
We haven't really looked too much at convergence speed. This is pretty typical, around 20 iterations; we'll usually just run it to 100 to be safe. But we haven't looked much into which source shapes and phases converge faster. With these nonlinear problems it's very difficult to predict those kinds of things; there's really no theory to say what different source shapes will do, because it depends on the object as well, which is always unknown. It's always a nonlinear situation, so it's very difficult to make theoretical convergence guarantees, or even to say how fast it will converge. No, the stopping criterion is really simple: we just tell it to stop once the residual is small, but we usually look at this plot and make sure things make sense. It can actually oscillate a little, so it doesn't always go straight toward the solution; it's very ad hoc. Okay, so now we're wondering: what is the best source shape? And we really have no idea; this is a very tough problem to solve if you don't know the object or the phase that you're trying to solve for. So the next piece I wanted to talk about uses all the same ideas, but a completely different physical system, a different microscope. Now my optical design is completely different, and my algorithm is going to change a little too; we're going to try to do new things. The new design is this LED array microscope, and it's really cool. You don't need to take any images through focus, which means there are no moving parts. Instead, you remove the illumination unit of the microscope and put in an LED array. The one we started with is from Adafruit, an Arduino-based product that is really cheap; it's literally a toy for kids. We just put it above the sample, on a stick holding it there. The idea is that we can choose which LEDs to turn on: if my sample is sitting here and the LED array is up here, turning on the center LED illuminates the object with an on-axis plane wave, while turning on an LED off to the side illuminates the object from an angle. You need to take an optics class to know that angle is the same thing as spatial frequency; illumination angle and spatial frequency are interchangeable, so illuminating off-axis is essentially a shift in Fourier space. We're going to exploit this to do all these different things. I'm not going to talk about 3D imaging today, which is an extension of this, but we do want phase contrast. It's a completely different model, but it does give phase contrast: coming in off-axis is a shift in Fourier space, and we're going to get phase information from shifting in Fourier space. But remember that the microscope has a finite bandwidth; that's why you have a resolution limit, because in Fourier space there's a finite range of spatial frequencies the microscope can collect. So if you start shifting around in Fourier space, you can mix different things within that finite bandwidth.
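For a sense of scale, here is the angle-to-spatial-frequency bookkeeping for a single LED, with made-up array geometry (pitch, height, and wavelength are assumptions for illustration, not the actual hardware values).

```python
import numpy as np

def led_to_spatial_frequency(i, j, pitch_mm=4.0, height_mm=60.0, wavelength_um=0.5):
    """Map an LED's grid index (i, j), counted from the center LED, to the
    spatial-frequency shift its illumination produces in Fourier space."""
    x, y, h = i * pitch_mm * 1e-3, j * pitch_mm * 1e-3, height_mm * 1e-3
    r = np.sqrt(x ** 2 + y ** 2 + h ** 2)
    sin_tx, sin_ty = x / r, y / r                 # illumination direction cosines
    lam = wavelength_um * 1e-6
    return sin_tx / lam, sin_ty / lam             # Fourier shift, cycles per meter

# Example: an LED eight positions off-center illuminates at roughly 28 degrees,
# i.e. an illumination NA of about 0.47, potentially well beyond the objective NA.
fx, fy = led_to_spatial_frequency(8, 0)
print(fx * 0.5e-6)   # shift in units of 1/wavelength = sin(theta), about 0.47
```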
Okay, so let's start with this. It's a linearized phase retrieval problem called differential phase contrast. I can choose what to turn on in my LED array, and there's a circle of LEDs that corresponds to the range of angles the microscope accepts. Within that circle, if I turn on half of them and take a picture, you can see these are completely transparent cells but you already get some contrast, right? You can sort of see where they are, but it's most definitely not quantitative. Then we turn on the right half and take another picture. We can show that if I add those two pictures together, it's like a normal microscope image, and now the cells become completely invisible: it's a regular image, they're not absorbing any light, so you don't see them, right? The cool thing is that if I subtract those two images, what I get is basically the first derivative of the phase information. This is not intuitive, but it makes total sense; you just need a little bit of Fourier knowledge. If I had a purely real image, no phase information, what does that mean in Fourier space? It means the spectrum is symmetric, right? By illuminating from two different sides, I'm taking my Fourier spectrum and shifting it in two different directions, and then the finite band limit cuts off different pieces of it. So when I subtract those two images, I'm comparing the left half of my spectrum to the right half. If the spectrum were completely symmetric, the left and right halves would be exactly the same, and all of the amplitude information is removed. Phase information is the anti-symmetric information, so when I compare the two, only phase information is left; you have to read all those papers to understand why it's the first derivative of phase, but it is. We can plot this transfer function, which is linearizing with respect to the object, a pretty good linearization, and it's approximately linear only along the center part. The NA is the band limit of the microscope, the range of frequencies the microscope normally allows, and along that center part the transfer function is approximately a ramp in Fourier space, which corresponds to a derivative of the phase in image space. But it's not quite a derivative, and if you look at this, you see that we actually get information out past the band limit of the microscope. This is cool, and it's because we're shifting stuff into the passband by illuminating from different angles. So what we do is compute this phase transfer function exactly so that we can invert it and keep the spatial frequency information out to twice the bandwidth; we get twice the resolution this microscope is supposed to allow. Here's an example of a DPC image: it's just a simple inversion, so you can do a regularized deconvolution with this transfer function and get back the phase. This is a quantitative phase map.
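As a sketch, that inversion might look like the following in code. H_phase here is assumed to be the exactly computed phase transfer function (which, as described, has to come from the actual source and pupil), and the regularization weight is a made-up value.

```python
import numpy as np

def dpc_phase(I_left, I_right, H_phase, reg=1e-2):
    """Single-axis DPC phase recovery.

    The normalized difference (I_left - I_right) / (I_left + I_right) is, to
    first order, the object's phase filtered by the antisymmetric phase
    transfer function H_phase; invert that filter with Tikhonov regularization.
    """
    dpc = (I_left - I_right) / (I_left + I_right)
    D = np.fft.fft2(dpc)
    phase_hat = np.conj(H_phase) * D / (np.abs(H_phase) ** 2 + reg)
    return np.real(np.fft.ifft2(phase_hat))
```

In practice the left/right and top/bottom pairs would be inverted jointly rather than one axis at a time; this just shows the structure of the deconvolution.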
Okay, and since this simply involves taking a couple of pictures, we can just go left, right, left, right and take them, in this case at 10 Hz. In fact, we do left, right, top, bottom to be more stable, so we take four images per frame. And this is a really cool video for somebody who has worked most of their life on phase imaging: I've never seen this kind of dynamics, because we've never been able to take pictures this fast at this resolution. Getting this resolution plus speed lets us see new things. If you look at the background, I'm not even sure it reads as a video; nothing much is moving on the larger size scale. But once I get down to the small size scale, if I have the resolution to see it, this would have been motion-blurred out in all my previous results, because through-focus acquisition takes a while. Now we can see it because we can image really, really fast. These are incredibly fast cells; we intentionally picked the fastest cells we have. But you can see all kinds of activity going on at the subcellular level that we just couldn't see before. I'm not a biologist and I have no idea what they're doing, but our collaborators claim they do know what's happening there. Okay, I'm going to run out of time, but I want to get to this topic, which I think is the coolest thing you can do with this LED array microscope. We're calling it gigapixel phase imaging; you'll see why in a minute. With this exact same hardware we can get these huge images: this is a phase image with a very large field of view but also very good resolution, so I can zoom in anywhere across the whole image and get this quality of resolution. This is a blood smear, and very often when you're looking at blood samples you're looking for extremely rare events, like parasites in your blood cells, that you wouldn't see unless you panned around the microscope for hours. Here we can capture it all at once and then go searching for it in the data afterwards. These are a few rare events we found inside cells in this blood smear. We don't do anything special in the post-processing, and maybe somebody here would like to get some of this data and try looking for these things in an automated way. So how do we do this? The idea is actually really simple, and it's related to synthetic aperture. Here's the Fourier space of my sample: the larger the range of spatial frequencies I can capture, the higher the resolution of the image I get, and this circle is the limit of my microscope. Optics forces you to choose: you can use a low-magnification objective to get a large field of view, but you'll have poor resolution, or you can get very high resolution with a fancy objective, but you'll have a very small field of view. We want both. This is the space-bandwidth product; optics sets it and you can't beat it. But we can beat it if we give up time. The idea is that when I use this low-magnification objective I get my large field of view, but a very low resolution image. Then I illuminate from these off-axis LEDs, and the angles of illumination we come in at with these LEDs are larger than the angles the microscope can actually collect, which means I'm shifting things in Fourier space. I get these dark field images, which light up the sub-resolution features along the direction of the shift; if I shift in the other direction, the horizontal features light up. So I'm getting information from sub-resolution structure, but I need to put it all together to build back the resolution. The idea is that we light up one LED at a time, take a picture for each one, and fill in this much larger area of Fourier space. If I can stitch all this data together, I can take the inverse Fourier transform and get back a much higher-resolution image than the one I started with; you basically get to add up the numerical apertures here, which is great.
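A quick back-of-the-envelope on that, with assumed numbers rather than the exact system specs:

```python
# Synthetic-aperture arithmetic: a low-magnification 0.1 NA objective plus
# LED illumination reaching roughly 0.5 NA (both values assumed for illustration).
wavelength_um = 0.5
na_objective = 0.1
na_illumination = 0.5

na_synthetic = na_objective + na_illumination      # Fourier coverage adds
print(wavelength_um / (2 * na_objective))          # native resolution, ~2.5 um
print(wavelength_um / (2 * na_synthetic))          # synthetic, ~0.42 um (about 6x finer)
```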
The problem is that if you want to do the inverse Fourier transform, you had better know the phase; so it's a phase retrieval problem. You can't take the inverse Fourier transform of only the amplitude of a spectrum; you need the phase as well to get the real-space picture correct. So we need to do phase retrieval, and the cool thing you'll see is that these circles are all overlapping, and that kind of redundancy plus diversity provides enough information to solve for phase, for the same reasons I mentioned before: you're creating asymmetries in Fourier space that give you phase contrast. Okay, here's a picture of the full field of view; we're using a 4x objective, and when you zoom in you have bad resolution, but you can run this algorithm, stitch the images together, and buy back all your resolution. This was one of our early results: six times better resolution than the original, as far as NA goes. The really cool thing is that we borrowed some of these algorithms from a group at Caltech, and the ptychography people in X-ray do this all the time: if you collect enough redundant data, then in the same way we used redundancy to solve for both phase and source before, we can use the redundancy here to solve for both the amplitude and phase we're trying to reconstruct and the aberrations of the microscope, because you're not going to get back higher resolution if aberrations are degrading those images. You can solve for both because they affect the images in different ways; in fact, aberration is a convolution process, so it's the same sort of model I talked about in the previous case. Our reconstruction is an iterative optimization. The basic idea is: start with an initial guess, run it through the forward problem to get the set of measurements I should have made if my estimate were correct, compare it to the actual measurements, and then, the hard part, update the estimate so you can go back and run through this again iteratively. The original algorithm, invented by that group, just replaces the intensity and hopes the estimate evolves, very similar to the Gerchberg-Saxton approach I talked about. What we did was move to a second-order optimization method; for nonlinear optimization problems that's extremely helpful, and it very often gets you out of local minima. You can see the original algorithm clearly has problems, and with our method we get a much cleaner result; we find it always looks better. There are also some new algorithms coming out that are actually provable, which we've been comparing against; they're computationally expensive, and we actually find that ours works better in a lot of cases.
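For concreteness, here is what the simple amplitude-replacement version of that reconstruction loop looks like. This is a generic sketch of the published scheme, not the second-order solver described above, and the bookkeeping (grid scaling, pupil/aberration updates) is heavily simplified.

```python
import numpy as np

def fpm_reconstruct(images, centers, pupil, hires_shape, n_iter=20):
    """Schematic Fourier-ptychography reconstruction.

    images:  low-resolution intensity images, one per LED
    centers: (row, col) Fourier-space center of each LED's sub-aperture,
             in pixels of the high-resolution spectrum
    pupil:   binary pupil mask (1 inside the objective NA) on the low-res grid
    """
    m, n = images[0].shape
    H, W = hires_shape
    F_hi = np.zeros(hires_shape, dtype=complex)
    # Initialize: spectrum of the brightfield amplitude, zero phase, placed at center.
    F_hi[H // 2 - m // 2:H // 2 + m // 2, W // 2 - n // 2:W // 2 + n // 2] = \
        np.fft.fftshift(np.fft.fft2(np.sqrt(images[0])))
    for _ in range(n_iter):
        for I, (r, c) in zip(images, centers):
            rows = slice(r - m // 2, r + m // 2)
            cols = slice(c - n // 2, c + n // 2)
            patch = F_hi[rows, cols] * pupil                      # clip by the pupil
            lowres = np.fft.ifft2(np.fft.ifftshift(patch))        # predicted low-res field
            lowres = np.sqrt(I) * np.exp(1j * np.angle(lowres))   # enforce measured amplitude
            updated = np.fft.fftshift(np.fft.fft2(lowres))
            F_hi[rows, cols] = np.where(pupil > 0, updated, F_hi[rows, cols])
    return np.fft.ifft2(np.fft.ifftshift(F_hi))   # high-resolution complex field
```

Aberration recovery would add a complex pupil that gets updated alongside the spectrum; it's left out here to keep the sketch short.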
So here's an example picture that we get, and you can zoom in to individual cells. This is the cardiac region of a dog's stomach, and we have a whole album on GigaPan where you can play with these and zoom around with your mouse if you like. Okay, so this whole stitching and phase retrieval business is called Fourier ptychography. And it's great, it's awesome, you can get this higher resolution, but everything has to be a fixed sample when you take 300 images to reconstruct your final image. We want to do in vitro imaging; basically, that's what our collaborators say, that's where the money is: if you want to do biological microscopy, you have to work with live samples. So we need an incubator; that's super easy, since we just have this LED array stuck above the sample, so we can shove the whole thing in an incubator and the cells can live. We had to do a lot of work on the algorithm to make it better so that we get decent results on unstained samples, because that's hard. And then we need to do it in real time: these cells are moving, there's motion blur, and you won't get a nice result unless you can take the data quickly. Okay, so first the hardware: we built a custom array of LEDs that's a thousand times bigger than the original one, and we brute-forced the hardware sync so that it can run at camera-limited speeds. On the algorithm side, for unstained samples, using a smart initialization scheme makes the result much, much better; I can explain why later, but basically you start closer to the actual solution, so you don't get stuck in local minima. And then we had all these weird issues because our microscope is not telecentric, so we had to change our forward model to account for the fact that our microscope is imperfect, and that helped quite a bit. So, real-time results: these are HeLa cells, which are cancer cells, dividing. If you watch the top right, you'll see this cell dividing into two. This is Biology 101: it forms all this actin, curls up, and then explodes into a bunch of baby cells. Okay, even that was still taken at eight seconds per data set. We got rid of a lot of motion blur, but as we go to faster cases there's still stuff happening; I showed you those original cells where things were happening way faster than an eight-second time scale, so even in what I just showed you, there will be some motion blur destroying some of the details. We want to go even faster, but we have a speed limit: we're doing biological microscopy, and there are specific microscope sensors you basically need to use because of sensitivity and noise, so we're really limited in speed by our camera in this case. What we can attack instead is the inefficiency in the data. I said we had to do all this overlap, and what it amounts to is that we end up taking about 10 times more data than we actually reconstruct. This is super wasteful, but if you just downsample the number of images you take, the algorithm breaks; it won't converge. So it was sort of thought that you just need all of it, and we were saying there has to be a way around this, to use the data more efficiently. What we found is that you can't avoid using every LED; you have to get each of those images, but you can multiplex them. So first we do a sort of half-and-half illumination for the center circle, and that gets you out to twice the resolution limit; instead of taking 20 images, you just take four, which is a big gain. Then every image we take after that has eight LEDs turned on at once, so what we capture is the sum of those eight images, and we take a whole bunch of these. When we take the data, the LED array looks like a disco party. Here's what we're calling ground truth: sequentially scanning through each LED. With all our hardware upgrades, that still takes eight seconds; it's 300 images, so we're at camera-limited speed. Now, when we turn on eight LEDs at a time but still take almost 300 images, we get to reduce the exposure time because more LEDs are on at once, so we get it down to one second.
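The multiplexing rests on the fact that different LEDs are mutually incoherent, so their intensities simply add. Here is a toy sketch of just that measurement model, with random patterns and synthetic stand-in images, not the actual patterns or any reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the single-LED image stack (say 293 LEDs, 64x64 images);
# in the real system these would come from the camera, one LED at a time.
n_leds, h, w = 293, 64, 64
single_led_images = rng.random((n_leds, h, w))

# Multiplexed acquisition: each frame turns on 8 randomly chosen LEDs at once,
# and because the LEDs are mutually incoherent, their intensities add.
leds_per_frame, n_frames = 8, 40
patterns = [rng.choice(n_leds, size=leds_per_frame, replace=False)
            for _ in range(n_frames)]
frames = np.stack([single_led_images[p].sum(axis=0) for p in patterns])

print(frames.shape)   # (40, 64, 64): 40 exposures instead of ~300 single-LED ones
```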
That's extremely important, but it's still not good enough. What we actually found is that because we're taking all this multiplexed data, we can just throw away seven-eighths of it: if there are eight LEDs on at a time, we can throw away seven-eighths of the data and use only about 40 images to get essentially the same result. This one has some problems because it's old, but we've fixed them since then. The idea is that you can get away with this up to about eight. You had 10 times redundant data, but you can't turn on 30 LEDs at once and take one-thirtieth of the data; it breaks down before you get to 10. So we're still taking a little bit of extra data, which is a good thing. And here are some real-time videos. Now we're doing this at one hertz, faster than one hertz actually, almost two hertz. These are neural progenitor cells moving around actively. And here's another case; they're moving around faster now. We're capturing at intervals, but each capture is very quick. And these are just some cells growing and interacting. So we can take these very large field-of-view images over really fast time scales now. That's all I wanted to talk about. This is a really beautiful platform for computational microscopy, and I really think there are a lot of places to go; this just started a year and a half ago, and there's a lot you can do with this kind of approach to imaging, where we throw out the idea of traditional imaging systems and start to think about what you can do with knowledge of both the hardware and the software. So thanks a lot, especially to my group, who did all this work. Can you repeat the first part? Could you comment on how stable these deconvolutions are in terms of noise? In your earlier example you said you could get it in the ballpark. Yeah, so we basically design our data capture such that the deconvolution will be stable. But every phase retrieval method does very poorly on the low spatial frequencies, and stability essentially goes by spatial frequency, so there's a fundamental limit: those low spatial frequencies are always ill-posed. You can see in our images there aren't too many low-frequency blobs, because we've worked really hard to make sure our data has enough information; it becomes unstable if we don't, because our transfer functions always go toward zero at the low spatial frequencies, and there's not much we can do to get away from that. So we basically arrange our data capture such that it's stable enough for our situation. For example, if you're taking through-focus data sets, the far-away images are the ones that give you the low spatial frequencies; if you just throw them out, you'll get huge blobs across the image that are completely wrong. So what do people do? They regularize it out: they just kill those low spatial frequencies, and then the result looks like it's been high-pass filtered. I would argue it's not very quantitative anymore once you've regularized out a whole bunch of your important frequencies, because those low frequencies contain all the basic information about the cell shape. So we're always fighting this. The deconvolutions are often unstable, but we try very hard to collect data such that it works out. So basically, this active illumination stuff I think is very well suited to microscopy, because you need to illuminate the sample yourself anyway.
In photography, people have done active illumination, but lighting up this whole room with my own custom illumination pattern is very expensive, and I might blind a few people doing it. So I really think microscopy is the place for active illumination. Where it has found success in photography is basically computer graphics, for the movies: you want to 3D-scan my face and my whole body so you can make an animated character that walks like me and talks like me. They do that all the time: they build a dome of lights, take pictures head on, and do all kinds of very similar things. But if I want to take a picture of that building, I'm not going to use active illumination. Yeah, so people do some active illumination for retinal imaging; Austin Roorda does it in the vision sciences. But I don't know, we haven't really thought about that. Some people are talking about this specifically for Fourier ptychography: can you come into the eye at different angles and maybe remove the aberrations of the eye and visualize the retina behind it? I think it's a cool idea, but I haven't thought about it much. The image of the blood cells that I saw in the corner, it seems to be stained? Using what? The same protocol? Oh, okay. Anyway, it's stained, so you specifically get color. Yeah, these are stained. The original stuff we were doing was all on stained samples, and the original paper used them because the unstained stuff wasn't working very well. Now we do all unstained stuff, but this is just a slide that came out of a box of sample slides that you buy, right? But I can show you the comparison. We did some work taking samples, staining them ourselves, and comparing the phase in both cases. Of course I deleted it from this talk, but I'll pull it up. You get the same answer for the phase, but of course the amplitude is nothing when you have unstained samples. Here it is, sorry; this is just from another talk. This is stained versus unstained: we took the same sample and stained it ourselves, and of course the stained intensity looks very different from the unstained, but the phase looks essentially the same. And this is actually a really important point for where quantitative phase imaging is going, especially since we've built all this on a cell phone microscope now, and in the field you don't want to stain things: it's invasive, it takes time, it's annoying. So one thing quantitative phase imaging is trying to argue is that the information you get from that stained intensity image is the same information you get from the quantitative phase, as far as seeing things and diagnosing diseases, for example; so let's use quantitative phase instead of staining. But I'm not sure we've fully convinced the biologists, because it's very difficult to change people's habits. When you have a gold standard, it takes a while to get anything changed. But there are a lot of groups working on phase imaging, and that's one of the big pushes: to convince people doing real disease diagnosis, for example, to replace their staining protocols with quantitative phase. Yeah, so this wide field of view, high resolution stuff, I think it's going to happen in digital pathology. Pathology is moving in that digital direction.
They buy slide scanners: a high resolution microscope plus a mechanical stage that scans around a two-inch slide to take high resolution pictures of the whole thing, and then they feed those through. But it's incredibly slow, because they have to autofocus at every position, and there are all kinds of problems that make it slow. I think that's the first killer application that's going to hit: replacing slide scanners for pathology. But with computational imaging, for clinical use you always have to convince people that the result is good enough to trust, and I think that's the challenge. Thank you again. Thanks. Thank you.