OK, I'm very happy to be here and to see so many people here to listen to my lectures. I've divided the material into two pieces. The first part will be about structured illumination microscopy, which is one method for, let's say, medium super-resolution: it gives you only a factor of two, as you will see. The second talk will be on light sheet microscopy, which is a method that allows you to get optical sectioning of relatively large specimens. That image was made with light sheet microscopy; it is a fly. Can I maybe first get a show of hands about your background? Because I'm not 100% sure. Who of you studied physics? That's a fair bit. Who of you studied biology? A few? Some did both. Right, and who has a theoretical physics background? Very few, OK. Who has heard about Fourier transforms before? Good, then I'm reasonably safe using them, and I don't need to spend lots of time explaining what a spatial frequency is and so on. Good, so let's jump right into it. To motivate my talk, I want to start with the classical way of thinking about optics: people have some optics and an object, they look at the object through the optics, and they get the image either directly on the retina, or on a camera, or maybe on a TV screen attached to a confocal scanner, something like this. But I want to challenge that by saying, well, maybe that's not the best way to get your information; there are plenty of other ways. Why don't we do it like this: you have an object and optics, you record some data, then you crank the data through a computer and you get your image. And then you have also measured something. There are, of course, lots of pros and cons for both ways of viewing things. But just as an example, I want to show you some state-of-the-art methods in medical imaging that are all heavily based on the use of computers.
If I gave you the raw data of any of these methods, you wouldn't be able to tell anything from it without the help of a computer and heavy data processing to get the result in the end. In microscopy, that's not so typical. But SIM, structured illumination microscopy, is one method where you actually do need a bit of computation to interpret your images and get your high-resolution image. What is it all about? I have a little demo here. Did you have a lecture on what Abbe's resolution limit is already? You even had the Abbe diffraction experiments, which are a fantastic introduction to it. That means you are aware that if you have a very low numerical aperture, you're not going to have good resolution. So if somebody sits in the back there, even with good-quality eyes, they might not be able to see what's on this slide; it might look like a uniformly gray screen, because the Abbe resolution limit of your eye is terribly low when you sit so far away and the angle of acceptance towards this slide is very small. What I did is print very high spatial frequency information on that slide. You can't resolve it because it's beyond the Abbe limit. Now the trick in structured illumination is to say: I have another slide that you can probably also not see, because it's another very high-frequency pattern. The first one was a pretty irregular pattern; this one is very regular, just straight lines, but beyond the Abbe limit if you want, or just at the Abbe limit or so. And now I'm going to put the two together, and then you get something that's called a moiré effect. If I do this, then suddenly you can see some black. Oh, maybe I have to stand here. Does it work better against the white screen? So once again: this is the one high-resolution pattern, this is the other one, and I put them together, and then it should not be black. It should be... yeah, do you see it now?
The main point is, of course, this looks like a pretty image of, who is it? Ernst Abbe, exactly. So that's Ernst Abbe, but the main point here is that the image contains lots of low-frequency content. You see large areas that are white and black, and that's why you can resolve them even at a big distance. That means we are somehow able to transport high-frequency information, by multiplying it with another high-frequency pattern, into low-frequency information. And that low-frequency information can pass through a microscope system, and then we can reconstruct. Now, I was cheating a little bit. Why was I cheating? Because this beautiful image is, of course, not what you're interested in. What you're interested in is the sample here: you want to know what's on that slide. You can't see it. You illuminated it with a special pattern, you measure this, but what you're interested in is not that measurement, it's the information about the object. That's why we need to do some math now, to find out how we can get from a measurement like this back to the object information. And there's something else I want to show you; it's a funny effect. If I shift the relative phase of the two patterns, you see that the gray values change. So there's an easy way to manipulate that pattern: simply change the phase of the illumination by moving it slightly sideways. We're changing the contrast here, and maybe we can use that for the reconstruction. You will see in a minute that this is very helpful information that makes the interpretation much easier. All right, so that you don't get bored, I'll pass these around so you can play with them. As a little hint, there are two patterns on it, one like this and one crossed; you will see a different phase, and you can try to guess who that guy is. I have more of these, it's easier. So, now I've lost my pointer.
In a nutshell: we have an object with high-frequency information on it. In a normal microscope, we can't see it because it's beyond the resolution limit. What we do is take a grating, overlay that grating on the object, and we get these relatively coarse stripes, which we can then measure in a microscope. Of course, they look a little different in practice, a bit more fuzzy than I've presented here, but you get the idea. So we can measure this, and therefore we can hopefully reconstruct our object. This is a phenomenon you see in everyday life all the time; once you're alert to it, you'll probably notice these moiré effects at least twice a day, and I'm sure many of you have seen them already. Mathematically, what do we have? It's actually not complicated, because I can describe the emission pattern as a multiplication of the object density with the local excitation intensity. Okay, I have to say one very, very important thing: I'm talking all the time about fluorescence and only fluorescence. In the coherent world of optics, things would be very different and would not work the way I present them here. But remember, we're talking about fluorescence, and fluorescence has a complete loss of phase information. That means the emitted light always has a random phase, and every time you do it, it's different. So in the end, it is totally safe to add up intensities, which means we can describe it as a linear system in the world of intensities rather than a linear system in the world of amplitudes. Okay, that's an important statement: we're talking fluorescence, and what I present today only works that way for fluorescence. I stress that because you can of course do similar experiments with reflection or scattering, but then you get no resolution enhancement compared to what you put into it.
It only works for fluorescence, where you lose the phase information. I'm happy to have lots of discussions on that later, but let's understand how it works in the incoherent world. So we have an emission pattern that is proportional to the object density, which makes a lot of sense: twice as many fluorophores, twice as much light out. And it is proportional to the excitation intensity: twice as much input light, twice as much light out as well, locally. If we translate that statement to Fourier space, because some things in optics are easier to understand in Fourier space, the multiplication becomes a convolution. I hope everybody knows roughly what a convolution is. In pictures, a convolution is drawing something with a brush. As you know, a microscope image can be modeled as a convolution of the fluorescence distribution: we convolve the emission with the point spread function, i.e. the brush. We draw the object with the brush and get a picture in the microscope, okay? But here it's the other way around: we have a multiplication in real space, and we get a convolution, the drawing-with-the-brush operation, in Fourier space. Okay, let's look at that in pictures. Well, we don't really know what the object is; that's what we're interested in, right? So I will just draw some dummy thing for it. But we do know what we illuminate with. In the simplest case, we illuminate with the interference of a split laser beam; basically, you can think of it as a totally detuned Michelson interferometer, something like this. You interfere a laser beam coming from this side with a laser beam coming from that side. What you get is what people call a standing wave: slices or planes of high intensity directly neighboring planes of darkness, and many of them, okay? That's our illumination pattern, and we make it as fine as we possibly can.
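For those who like to see this in numbers, here is a minimal 1-D sketch of the moiré down-shifting idea. The frequencies and the cutoff are illustrative toy values of my own choosing, not from the talk: a single object frequency beyond a pretend Abbe cutoff, multiplied by an illumination pattern, produces a difference frequency that survives the low-pass detection.

```python
import numpy as np

# Toy 1-D moiré demonstration: multiplying intensities in real space
# creates sum and difference frequencies in Fourier space; the difference
# frequency can fall below the detection cutoff even when both inputs
# are far beyond it. All numbers are illustrative.
N = 1024
x = np.arange(N)
k_obj, k_ill = 100, 90      # object and illumination frequencies (cycles/window)
k_cut = 20                  # pretend detection cutoff ("Abbe limit")

obj = 1 + np.cos(2 * np.pi * k_obj * x / N)   # fluorophore density
ill = 1 + np.cos(2 * np.pi * k_ill * x / N)   # excitation intensity
emission = obj * ill                          # fluorescence: product of intensities

spec = np.abs(np.fft.rfft(emission))
low_passed = spec[:k_cut + 1]                 # what the "microscope" transmits
# The strongest transmitted non-DC peak sits at |k_obj - k_ill| = 10.
print(int(np.argmax(low_passed[1:]) + 1))
```

Shifting the illumination sideways (adding a phase to `ill`) moves this moiré fringe, which is exactly the contrast change seen in the slide demo.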
Ideally, we try to send our beams in from the sides. If we want to go through the objective for convenience, we go through the edge of the aperture so that the beams enter at the highest possible angle the aperture allows, all right? These interfering waves generate an intensity distribution that you can describe by a sine squared: the interference amplitude is a sine wave, and if you square it, you get the intensity pattern of the standing wave. A sine squared can be rewritten as a constant plus a sine wave of double frequency, and this is what you see here. The constant, Fourier transformed, gives you a delta peak at zero frequency, and the sine wave gives you delta peaks at plus K and minus K, where K is the spatial frequency of that sine wave. What I've also written here is phi zero and minus phi zero, because I don't want to forget that by changing the relative phases of the two beams, we can make the pattern shift in space. We can turn a maximum into a minimum, if you like, just by putting a little delay into one arm. And that means this peak gets a complex factor e to the i phi, and this one gets e to the minus i phi. By the way, this is very generally true, because of the Fourier shift theorem: shifting anything in real space means multiplying in Fourier space with e to the i K delta x. Since this peak sits at plus K and we shift by delta x in real space, it picks up a plus phi; the one at minus K picks up a minus phi, so to say. And if we had a second peak further out, it would get two phi, the next one three phi, and so on. Okay, right. So now, as I said, to describe the emission pattern we have to convolve the two things.
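The three delta peaks and their phase tags can be checked numerically. This is a small illustrative sketch under my own toy parameters (`K`, `phi` are invented for the example): the spectrum of the shifted standing-wave intensity has exactly three peaks, and the side peaks carry e to the plus and minus i phi, just as the Fourier shift theorem says.

```python
import numpy as np

# The standing-wave intensity is a constant plus a cosine of doubled
# frequency, so its spectrum has three peaks: 0, +K, -K. Shifting the
# pattern by phase phi tags the side peaks with exp(+i*phi), exp(-i*phi).
N = 256
x = np.arange(N)
K = 8                      # pattern frequency in cycles per window (toy value)
phi = 0.7                  # pattern shift in radians (toy value)

pattern = 1 + np.cos(2 * np.pi * K * x / N + phi)
F = np.fft.fft(pattern) / N          # normalised spectrum

# Only three non-negligible coefficients: DC, +K and -K (index N-K).
print(np.round(np.angle(F[K]), 3))       # phase of the +K peak
print(np.round(np.angle(F[N - K]), 3))   # phase of the -K peak
```

The printed phases come out as plus and minus `phi`, which is the handle the reconstruction uses later: stepping the grating steps these complex weights.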
That means we have to convolve our three delta peaks with the object. In a convolution, it doesn't matter which one is which, because it's symmetric: you can think of drawing our unknown Fourier-transformed object with three pencils simultaneously, or of the three pencils being drawn with the object. It gives you the same result, right? What it really means is that we have three shifted copies of our Fourier-transformed object, attached to these three peaks, so to say, and they are summed up; that sum is our emission pattern. Of course, this would continue out here, and it's this funny sum of shifted components. That's how the moiré effect really looks in Fourier space, if you want. Is that part clear? Now I get more question marks on the faces suddenly. So where did I lose you? Now everything is fine? Okay. So we have this sum of the three shifted objects; let us call them components, object components, from now on, and the reason is the following. This is the Abbe limit. We are able to obtain information from here to here, and in it we have this funny mix. If we were able to unmix that mess, that is, from the measurements of these curves get back our object components, then we could shift them to where they really belong, stitch them together, so to say, and get a good-quality, high-resolution image. And that is the whole point of structured illumination microscopy. Of course, what we measure is not yet that; this is just the emission pattern. We now have to go through our microscope to image it onto our camera. In real space, that means a convolution; in Fourier space, and that's why we went to Fourier space, a simple multiplication: multiplication with the Fourier transform of the point spread function, which is called the optical transfer function, in this case the incoherent transfer function.
That transfer function very roughly looks like a triangular function that goes down, so what we measure are curves like this. And as you can now see, the trick is: if we move the illumination pattern, we alter the relative weight, if you want, of these three components. In this case it's not a weight in the sense of making one very strong and another one zero; it's a complex-valued weight, changing the phase of these things. But that's just as useful. So if you think about it: if we can measure three values here in Fourier space at one spatial frequency, then with three values we can solve the equation system and disentangle it to get the three components. Can you see that? It is a linear equation system. In other words, it's a very similar problem to the following. Say I have a funny balance that only works once you put more than 100 kilograms on it; that's its minimum value, otherwise it's not accurate. Now my task is to measure the weight of you three guys. How would I do that? I could put one of you on the balance, but maybe you wouldn't quite reach the 100 kilograms it requires. So we always have to put two people on the balance: I put you and you on it and write down the weight, then you and you, then you and you. I have three measurements, three combinations, but now I need math to figure out what's going on. You write down the equation system, you get a matrix, you invert the matrix, you multiply that inverted matrix with the measurements, and you get the individual weights. And that is precisely what you do in SIM as well. You can write down that matrix, and it turns out to be an extremely simple matrix in a way: if you really want to know, the matrix is a Fourier transformation along the dimension of the measurements.
The matrix you get performs, in the end, a Fourier transformation. It doesn't really matter, because we use it as a matrix anyway: we want to account for errors in the phases. If a phase is not precisely what it should have been, you want to correct for it, and that's why we really write it down as a matrix. Okay, so now let's put the puzzle pieces together. We make three measurements like this; they will show variations like you saw in our images. We Fourier transform each image and get something like this. Step one of our reconstruction is to correct for the effect of the optical transfer function; essentially, we divide by the transfer function. There's a little star on that step because we will come back to it; it's not quite that simple, but in theory we divide by it. The second step is to extract the components with the help of the inverse matrix: we take the three measurements, apply the matrix, and get back the three components individually. Yes, I shouldn't have said that, I just confused everybody. What I was saying is: if you have these three measurements, you have several axes, x, y, z and so on, but there is also the axis of the phases phi. Along that axis you can do a Fourier transform, which turns out to be exactly that matrix; it is basically the analytical solution of the matrix equation. Yeah. You might see the similarity with phase-shifting algorithms. Exactly, interferometry. It's exactly the same as phase shifting in interferometry or holography; it's very similar. Except for the last step: in these interferometric methods you often use quadrature approaches, which we don't do here, but it is very similar. Right, so again: step one, correct for the transfer function; step two, extract the components from the measurements by applying the inverse matrix. Step three is to put them where they belong, and that's easy.
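The unmixing step above can be sketched in a few lines. The measurement model and phase values here are my own illustrative choices (three equally spaced phase steps, components named `C0`, `Cp`, `Cm`), not the exact bookkeeping of any particular SIM implementation:

```python
import numpy as np

# Component separation at one spatial frequency, assuming three
# measurements D_n = C0 + Cp*exp(+i*phi_n) + Cm*exp(-i*phi_n)
# with equally spaced phases. For such phases the inverse mixing
# matrix is (up to normalisation) a discrete Fourier transform.
phis = 2 * np.pi * np.arange(3) / 3

# rows: measurements, columns: components (C0, C+, C-)
M = np.stack([np.ones(3), np.exp(1j * phis), np.exp(-1j * phis)], axis=1)
M_inv = np.linalg.inv(M)

# Fake "true" components to verify the unmixing round-trips.
C = np.array([2.0 + 0j, 0.3 - 0.1j, 0.3 + 0.1j])
D = M @ C                 # the three simulated measurements
C_rec = M_inv @ D         # recovered components

print(np.allclose(C_rec, C))
```

Writing it as an explicit matrix inverse, rather than hard-coding the DFT, is what lets you absorb small phase errors: if the actual phases differ from the nominal ones, you build `M` from the measured phases and invert that instead.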
The central component is in the right place anyway, and it turns out that it is just the ordinary microscope image you would get if you illuminated with a uniformly bright screen, so to say. Yeah, it's the sum over the pictures, and that's what you get there. That's not exciting, because it doesn't give us any super-resolution, but at least we have the normal image again. Much more interesting, however, is the component that was attached to plus Kp, because when we shift it to where it should really live, you see that we've gained a part that is theoretically completely inaccessible, because it's beyond the Abbe limit. And the same from the other side. So you see it's not some computational trick we're doing; it's really the fact that we attach our object here and measure all the way over to here. That's why, instead of reaching only from here to here as in the normal case, we now have twice as much, because we can go, so to say, from the right side of our frequencies to the left side. Right, and then we have to do something that I call weighted averaging, and this is where I correct my cheating from the first bit. Correcting for the optical transfer function is all theoretically very nice, but it's a function that goes to zero, and dividing by something that goes to zero doesn't sound like a brilliant idea. It means that close to this point you're multiplying first by a hundred, then by a thousand, then by a million, and so on. And since there is noise, and noise doesn't care about the Abbe limit, it is present at all spatial frequencies, you're multiplying the noise by a thousand and you just get noise as a result. Not so useful, right? So we have to fix that. But look at this: theoretically we still do the division, but in this weighted averaging step we can say that here the signal of the black curve is infinitely bad.
But the pink curve is pretty good here, because it comes from the middle, where we didn't have to divide by much. Here they are about equally good, and here the pink one is pretty bad, because it comes from its edge, while the black one is good, and so on. So what you do is a frequency-dependent weighted average of the two. And does anybody know, it's an extremely simple question: what do you do if you have two measurements with two uncertainties? Let's say, after a drinking contest, we both measure how long that is. I'm really drunk, so my error is pretty bad, whereas you didn't have quite as much, so your measurement error isn't so bad, right? Or say we all drank a lot and you're the only sober guy here: then you have lots of measurements with bad uncertainty and one good one. What are you going to do? You could say, I discard all the bad ones and just take the good one, but maybe the average of the bad ones is even better. So that calls for weighted averaging, but what is the precise equation you use to average two values with two different uncertainties? Does anybody know? Sorry? Almost, almost. You? Yes. So sigma is the standard deviation, and it turns out you have to weight them with the inverse variance: one over sigma squared is the correct weight to apply. That means you take one over sigma one squared times the first value, plus one over sigma two squared times the second value, and then of course you divide by the sum of the weights, one over sigma one squared plus one over sigma two squared. You can prove, for Gaussian error propagation, that this is really the optimum you can do. And of course, if there's one really good measurement, you give it a very high weight; your result doesn't become terribly much better because you've added a bad one to it, but it does become a little better. Okay, and at the end we have to do what I call apodization.
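The inverse-variance rule just stated, applied here to two made-up measurements rather than to Fourier components, looks like this:

```python
import numpy as np

# Inverse-variance weighting: weight each estimate by 1/sigma^2 and
# divide by the sum of the weights. The numbers are made up to
# illustrate the formula, not taken from any experiment.
values = np.array([10.4, 9.8])    # two measurements of the same quantity
sigmas = np.array([2.0, 0.5])     # their standard deviations

w = 1.0 / sigmas**2
best = np.sum(w * values) / np.sum(w)
best_sigma = np.sqrt(1.0 / np.sum(w))   # uncertainty of the combination

print(round(best, 3), round(best_sigma, 3))
```

Note that `best_sigma` is always smaller than the smaller input sigma: even a bad measurement improves the combination a little, exactly as said above. In SIM, the same weights are applied per spatial frequency, with the (squared) transfer function playing the role of the inverse variance.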
What I mean by that is simply that out here we are left with nothing else to average with, so we have to make sure the curve approaches zero again. We multiply with a new transfer function, if you want, that we can construct at will. We can make it any way we like; we just have to make sure it goes down to zero here. There is a reason to take particular functions there: you can do a Wiener filtering step that optimizes the signal-to-noise in that range under certain conditions, so that you do it the right way, so to say. We can discuss that later. So let me put what I've explained together again in pictures. This is a simulated object, and this is our illumination. We put the two together and get an emission pattern like this; this is the response of our sample. What does that mean in Fourier space? Well, in Fourier space we get three copies of our object, overlapped like this. Then comes the microscope imaging, so the image becomes blurry; in other words, a certain range in Fourier space is stenciled out and everything outside is lost. Now we take our three images and correct for the effect of the transfer function, which I've left out here. We disentangle the components, shift them back into place, join them together with the weighted averaging, and there is our, let's say, high-resolution result in Fourier space. Of course, we have to Fourier transform the result back to get a nice image. But you also see that this is all for one direction: in a two-dimensional image we have to do it for a couple of directions. If we do it for only one direction, we get a resolution improvement along x but nothing along y, and since you usually want a good picture, you need to repeat it. Yes, please? That is a very good question. The question was: how do you determine the spatial frequency?
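The whole pipeline in pictures can also be run end to end as a toy in one dimension. This is an illustrative sketch under simplifying assumptions of my own (three phase steps of 2 pi/3, a triangular transfer function, one object frequency beyond the cutoff, a small Wiener constant instead of a proper noise model), not the reconstruction code used in practice:

```python
import numpy as np

# Toy 1-D SIM reconstruction: simulate three phase-stepped raw images,
# separate the components, shift them back, combine with OTF^2 weights
# plus a Wiener constant, then apodise with a wider triangle.
N = 512
x = np.arange(N)
k = np.fft.fftfreq(N, d=1.0 / N).astype(int)    # integer frequency axis

kc = 40                                          # detection cutoff ("Abbe")
Kp = 36                                          # illumination frequency
otf = np.maximum(1.0 - np.abs(k) / kc, 0.0)      # triangular OTF

obj = 1 + np.cos(2 * np.pi * 60 * x / N)         # 60 > kc: invisible normally

phis = 2 * np.pi * np.arange(3) / 3
raws = []
for phi in phis:
    ill = 1 + np.cos(2 * np.pi * Kp * x / N + phi)         # stepped pattern
    raws.append(np.fft.ifft(np.fft.fft(obj * ill) * otf).real)

# Separate the components OTF(k)*O(k), OTF(k)*O(k-Kp), OTF(k)*O(k+Kp).
D = np.fft.fft(np.array(raws), axis=1)
M = np.stack([np.ones(3),
              0.5 * np.exp(1j * phis),
              0.5 * np.exp(-1j * phis)], axis=1)
C = np.linalg.inv(M) @ D

# Shift components into place, combine with OTF^2 (inverse-variance-like)
# weights plus a small Wiener constant, then apodise to zero at kc + Kp.
num = np.zeros(N, dtype=complex)
den = np.full(N, 1e-3)
for comp, s in zip(C, (0, -Kp, Kp)):
    otf_s = np.roll(otf, s)
    num += otf_s * np.roll(comp, s)
    den += otf_s ** 2
apod = np.maximum(1.0 - np.abs(k) / (kc + Kp), 0.0)
recon = np.fft.ifft(apod * num / den).real

print(np.abs(np.fft.fft(raws[0]))[60] < 1e-6)    # detail absent in raw image
print(np.abs(np.fft.fft(recon))[60] > 1.0)       # detail present after SIM
```

The two printed checks show the point of the whole exercise: the 60 cycles/window detail is strictly zero in every blurred raw image, yet appears in the reconstruction, because it rode into the passband on the illumination frequency.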
Of course, I could say we know what we do, we can measure the angle of what we put in, but you are perfectly right, it's not that simple. If you try that, it's not precise enough. You need to know that spatial frequency terribly accurately, to a few nanometers' precision over your whole field of view or so. I'll get back to that in a bit, because you have to use algorithms that determine it from the images themselves; then you get very accurate values for it. So, okay. Experimental setup: how do you actually do something like this? This is a slightly more complicated setup, but it's actually very simple. In an ordinary microscope, this is the position of the field stop, and the field stop, as you know, gets imaged into your object. So if we put a diffraction grating in this position, we get an image of the diffraction grating here. And if we now illuminate it intentionally with a laser instead of an incoherent light source, it turns out we get a very high-frequency interference pattern in there, and this high-frequency pattern can have very close to 100% contrast. In the simplest case, if this is a zero-pi phase grating, we get no zero diffraction order. That leaves us with two orders that interfere and give us the highest possible frequency as we make the grating period smaller and smaller. Eventually it will be beyond the Abbe limit and you will get nothing; actually, it will then be dark in this case. But as long as both orders enter the objective, they interfere and give you 100% contrast. So the cool thing about doing this with a laser is that we are not limited by a decaying transfer function: we are working in the coherent world for the illumination, so we are limited only by the pupil function, and as long as we get in there, we are fine. There is one little "but" there. I don't know if somebody spotted it.
The "but" is: take care of polarization. If you interfere two beams under a 45-degree angle that are polarized like this, nice try, but they're not going to interfere the way you think they would. What would happen is you'd get some funny polarization modulation across space, but not the intensity contrast that we want. So in this case you have to make sure they are azimuthally polarized, and indeed we do that in our experimental setup: we make the polarization always azimuthal to the plane in which the beams interfere. Okay, so the setup is really simple. What I've shown you here is a slightly more complicated situation, because I put in a grating that is not a zero-pi phase grating but one that also has a zero order. A normal amplitude grating would have that, or a not-quite-zero-to-two-pi phase grating, or a grating with a different aspect ratio, for example. That means you now have three orders interfering, which is interesting because along the optical axis it gives you a three-dimensional interference structure. It's no longer the standing-wave picture you had before, with those planes of interference; it becomes a woodpile structure that does this all the time. You know the effect: you would still have grating lines, but as you defocus they start to blur a little, then suddenly they reappear with a phase shift, and this repeats again and again. This effect is well known in optics; it's called the Talbot effect, a self-interference re-imaging effect that you always get behind gratings. So that's what's happening here: it's a Talbot effect in 3D, and we have a woodpile structure.
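The blur-and-reappear behavior of the three-beam pattern can be checked with a quick plane-wave sum. The amplitudes, angle, and wavelength below are illustrative values of my own, and I use the scalar approximation, ignoring exactly the polarization caveat just discussed:

```python
import numpy as np

# Scalar three-beam (0, +1, -1 order) interference along z: the lateral
# fringes blur, reappear phase-shifted, and self-image with the Talbot
# period z_T = lambda / (1 - cos(theta)). All parameters are toy values.
lam = 0.5                  # wavelength (arbitrary units)
kw = 2 * np.pi / lam
theta = 0.6                # angle of the +/-1 orders (radians)
a0, a1 = 1.0, 0.5          # zero- and first-order amplitudes

x = np.linspace(0.0, 5.0, 1000)

def intensity(z):
    # sum of the three interfering plane waves at axial position z
    E = (a0 * np.exp(1j * kw * z)
         + a1 * np.exp(1j * kw * (x * np.sin(theta) + z * np.cos(theta)))
         + a1 * np.exp(1j * kw * (-x * np.sin(theta) + z * np.cos(theta))))
    return np.abs(E) ** 2

zT = lam / (1 - np.cos(theta))       # Talbot self-imaging period
print(np.allclose(intensity(0.0), intensity(zT)))          # pattern re-images
print(np.allclose(intensity(0.0), intensity(zT / 2)))      # halfway: different
```

At half the Talbot period, the cross term between the zero and first orders flips sign, which is exactly the "reappear with a phase shift" behavior described above.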
Okay, and then we can move this. It turns out this gives you more of these orders: instead of three, in the 2D picture you have five such orders, and that means instead of taking three images we have to take five images. But since they are just different positions, it's a simple shift; the same math applies, the matrix just becomes a little bigger. Okay, good, so there we go. With that you can take really cool images. Here's an example; the first really good image was published by Mats Gustafsson. You see here the wide-field image of a tubular network in a cell, and this is the high-resolution structured illumination image. He also applied it to a three-dimensional data set. I don't have the time to really go into the 3D theory, but the transfer function goes from some funny shape like this in a normal wide-field microscope into a shape like this, where you've doubled the resolution in x, y and z, and you've filled the missing-cone problem of the wide-field microscope.
Here's one example of an image. This is part of a cell's actin cytoskeleton, and what you see is the structured illumination on it. Maybe I have to stress: when you do these experiments yourself, it's actually fairly simple to build a structured illumination microscope, but one thing is important: you should do the illumination coherently, because only then can you get this really big contrast. There are many publications called structured illumination where, if you look in detail, they used, for example, just a video projector or something like that. That's all fine, but it means your illumination structure will have zero contrast when it's close to the limit, and that's not very useful, right? So you really want to do it with a laser, because then you can get 100% contrast at the limit. I would make a distinction between true super-resolution structured illumination methods, which really try to get the biggest contrast at the highest frequency into the illumination structure, and the ones that do it incoherently. You get the same effect when you just tell the laser scanner in a confocal microscope to illuminate only every second line: you also get a structure, but it has the same problem. Once the structure becomes really fine, because it's an incoherent superposition in time, everything washes out and there is no contrast left, so to say. One more thing, the biggest mistake many people make when they try structured illumination: they do everything right, they get the first data set, and they say, I can't see my structure, there's nothing in there. And it's not true. The reason is that if you're really at the limit, this high-frequency peak, even though it's 100% there, will be dimmed down in the detection to zero, because the transfer function of your detection goes to zero there, and that's why you don't see it. So not seeing a peak doesn't necessarily mean that you haven't done the right experiment; it just means you haven't quite
applied the math yet. So don't get discouraged if you can't see the structure. Even in these images it's already hard to see, and since this is a three-beam interference, i.e. a five-order structure, the little bit of interference structure you can see here corresponds to the low-intensity order; that's only halfway to what you're really interested in, because the most interesting orders are the ones attached out here. In this case, the experiment didn't quite go to 99% of the possible illumination frequency, it only went to maybe 80%, so you can still see the position of attachment of these peaks. The reason is that the object's spatial frequency zero is very, very strong in every object, so it still pops up and you often see it, but it also depends on the sample. This is a particularly nice sample with lots of structure in it; if you use calibration beads or something like that, it's very hard to see these peaks. Yes? Yeah, that's what should be done; I have to admit it is very rarely done. So I think what you're saying is that to check it, you do it the official way, so to speak. Yeah, I totally agree. What's the difference? Well, they should show you the same structure, right? As I said, it has rarely been done. What you're suggesting is: take an NA 0.5 objective, do structured illumination, which should get you to about NA 1, and compare it with what you see directly at NA 1. I totally agree, that would be the right measurement to do. Most people don't do it, although in a way it's even simpler than doing structured illumination with a high-NA objective, because the lower the NA, the fewer problems you have with polarization: the beams come in at low angles and will interfere fine, so you don't need to worry so much about polarization. I don't even know a single publication where they've done exactly that, and I know that you can do it and you should do it, but
we haven't done it so far yeah good um so i want to show you this is how a single image looks like you take five of these images and that allows you with the mathematics to extract the plus second component that's the extracted first component that's the extracted central component minus first minus second well and then you repeat that experiment in three directions of of your structure you might take a motor and rotate your grating do it again and so on um this is just to indicate the resolution improvement now goes from here to here and that's why if you put it all together like this big puzzle you join it and now you have twice the resolution in x y and in z if you do that right in z yes yeah there's there's several ways you can control the phase of the illumination gating i.e. well the first thing is you shift it you mount it on a piezo and and you move it with the piezo if if you have no piezo for your grating you can use your object as well and actually the first experiments i i did a long time ago on on structure illumination i think is it oh yeah so that just just one second so if if i take this this would be the wide field image of our actin filaments and if i take now the structure illumination reconstruction you're getting from there to there so there's a clear resolution improvement no doubt about it but you're right it should really be proven that this is really the structure you should be seeing um this this is the first publication in in 1999 where we did this and you see the the effect again this is the the beats on a slide low resolution and structure illumination the contrast is not terribly good here and i think maybe at some point we might want to dim the light a bit i don't know if that's possible so then you can see the the image is nicer um so what you see here is another great example of Mats Gustafsson oh yeah i wanted to tell you about this thing how i did that i didn't have um i didn't have a laser first of all and i didn't have a motor 
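As a sketch of that unmixing step: with ideal phase steps of 2*pi/5 and equal order contrasts, the five raw images are a small linear mixture of the five components, and separating them is just inverting that system. This is my own minimal illustration, not code from any SIM package:

```python
import numpy as np

def separation_matrix(n_phases=5):
    # Mixing matrix: raw image p contains order m weighted by exp(i * m * phase_p),
    # with equidistant phase steps phase_p = 2*pi*p / n_phases.
    orders = np.arange(-(n_phases // 2), n_phases // 2 + 1)   # [-2, -1, 0, 1, 2]
    phases = 2 * np.pi * np.arange(n_phases) / n_phases
    return np.exp(1j * np.outer(phases, orders))

def separate_orders(raw):
    # Unmix n phase-stepped raw images (shape (n, H, W)) into the n order
    # components by solving the small linear system pixel by pixel.
    M = separation_matrix(len(raw))
    flat = raw.reshape(len(raw), -1)
    return np.linalg.solve(M, flat).reshape(raw.shape)
```

With ideal phase steps the matrix is unitary up to a factor, so this inversion is well conditioned; with real, imperfect phases the matrix changes, which is exactly what the parameter-estimation discussion that follows is about.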
When I was doing my PhD, what I ended up doing was gently tapping the sample and taking 10 or 20 images; every time, the sample moved to some random position, and afterwards I shifted all the images back on top of each other. So I had effectively randomized phases, picked the right ones in the end, and reconstructed the image. A slightly cumbersome solution to that problem, but you can get around budget problems with easy tricks.

So this is another really great structured illumination example from Mats's group. What you see here are mitochondria in a cell; these are where the energy for the cell gets produced, or rather converted from one form of energy into another. If you look closely (you can probably not see it from a distance), you can even see the substructures inside these mitochondria, the cristae, the membranes that are inside there, which is kind of cool. And this is a 3D version of the same. Here is another nice example, also from Mats's group, in multicolor: they did growth cones. This is the growth cone of a neuron, and you see actin, and the cytosol labelled in here.

OK, I'm starting to seriously run over time, but we'll see. The next bit answers your question: how do you get these parameters really right? It turns out there is a whole lot of parameters that you don't know precisely enough. The grating constant, like you said. The orientation of the grating, which might be wrong by half a degree or so. The local phase: when you shifted the grating, did you really shift by the right amount, or slightly wrong? The global phase: where precisely is the bright line of the grating in the first image? Even if you know the relative phases, you eventually need to know them globally, because only then can you join the components. I sort of skipped that part, but the extracted component might still have a phase relative to the zeroth one, and you need to know it to rotate the component in the complex plane into the right orientation. The contrast of the orders you might need to know as well, and the local illumination intensity of every image, which might fluctuate over your time series; and if there is a drift in the sample, you want to correct that too. So there are lots of things you really need to know for a good reconstruction, but you have to figure them out from the images themselves, because they can't easily be known from your experiment.

So let me first show you what happens if you don't get it right. If you do a reconstruction with slightly wrong parameters for the grating constant, wrong by maybe a percent or less, you get effects like this: there will be a bright region here and a dark region here, even though it should look like this. This is of course a simulation; it's the roof of King's Cross station in London, actually. Another thing you might notice is a sort of splitting of lines. If you correct it, you go from here to here: a much nicer image in the end.

So how do you actually do it? It turns out you can do the extraction with knowledge of the relative phases only: as long as you shift by one third of the period, you're totally fine, and it doesn't matter what the grating constant actually is; that much you can get at least roughly right. So the separation of the components works fine. Once you have them separated, you can use the overlap area, where you've essentially measured the same thing twice, and cross-correlate them; from the cross-correlation you can find out very precisely what your grating constant is. It turns out that you need to determine it to much better than one pixel in Fourier space, and simply fitting something to the cross-correlation peak, trying to get a
sub-pixel position that way, is not good enough. You need to do it iteratively: you determine the position once with a fit, then you correct for it, which you can do with the Fourier shift theorem, and try again. You do this a couple of times, optimizing the peak, and this way you can determine it very precisely; the precision is typically better than a tenth of a pixel in Fourier space with this cross-correlation, and then you're fine. And yes, that also solves the problem of the grating orientation, because you get a two-dimensional peak, an x and y position, so you know both the orientation and the length of the grating vector. Good.

Next problem: local phase, global phase, and order contrast. You could also look at cross-correlations for these, but we wanted a very robust algorithm, so we basically looked at the errors that occur when you do it wrong, and then we make corrections until this measure of error goes down. If your local illumination phase is slightly wrong, say it wasn't one third of the period but only a quarter or something like that, then you will not unmix the components correctly, and in what should be, for example, the zeroth component, you have leftovers at the positions where the zero frequencies of the object end up. Since you know very precisely where those positions are, you can quantify the error. We came up with different error metrics to measure that, and they are summarized in these matrices; we look not only at the zeroth component but also at the first and second components, at all these positions. In this one, the green entries are where you expect a peak, and the red ones are the bad ones, which you shouldn't have; and here it's the other way around: you should expect something here, but nothing there.
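A one-dimensional toy version of that iterative sub-pixel refinement (my own sketch; the real thing works on the 2D cross-correlation of the separated components): find the integer maximum, fit a parabola for the fractional part, undo the estimated shift with the Fourier shift theorem, and repeat:

```python
import numpy as np

def fourier_shift(sig, shift):
    # Shift a periodic signal by a (possibly fractional) number of samples
    # via the Fourier shift theorem.
    k = np.fft.fftfreq(len(sig))
    return np.fft.ifft(np.fft.fft(sig) * np.exp(-2j * np.pi * k * shift))

def subpixel_peak(a, b, iters=8):
    # Displacement of b relative to a from the cross-correlation peak,
    # refined iteratively: undo the current estimate, re-fit a parabola.
    n, est = len(a), 0.0
    for _ in range(iters):
        c = np.fft.ifft(np.conj(np.fft.fft(a)) *
                        np.fft.fft(fourier_shift(b, -est))).real
        i = int(np.argmax(c))
        y0, y1, y2 = c[i - 1], c[i], c[(i + 1) % n]
        # parabolic interpolation around the integer maximum, with wrap-around
        est += (i if i <= n // 2 else i - n) + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return est
```

The point of iterating is exactly the one made above: a single parabolic fit has a small systematic bias, but after shifting back by the current estimate the residual shrinks toward zero, so repeating the fit converges well below a tenth of a pixel.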
And there, and there, and so on. We then just make a metric, something like the ratio of the strength in the red entries to that in the green entries. Actually, it's a bit more involved, because it's OK to have some values here, they just shouldn't be peaky; so we measure something like the peakiness in these regions, summarize that in the matrix, and then we have an iterative algorithm that optimizes it. That sounds terribly slow, because it sounds as if every iteration needs Fourier transforms of the whole image, and that gets really slow. But it turns out you can pull this mathematics through the Fourier transforms, and with a few tricks that lets you do this phase optimization relatively quickly: since you know precisely where to measure, you can pre-correlate the image information once and then work with the correlated information. If you correct it, you see that everything is now clean and you get it exactly the way you want it; then you put the components together and you have fewer artifacts. Good question: how much does it depend on the sample? At the moment we do it every time, for every new image. But if you take a time series of a sample, I think it's not necessary to do it every time; maybe every tenth or every hundredth image is enough for recalibration. We're starting to think in that direction, because we want algorithms that run at live speed, so that you can watch cells doing their thing, live, and then you don't have the time for all these relatively complicated calculations on every frame. So we want to, say, spawn off a recalibration task every tenth image and keep showing images in the meantime; but we haven't worked that out yet.

It turns out there is actually a much simpler way to get almost the same result as what I showed you, by looking at an individual image. Because if you think about it, every individual image contains the knowledge about the phase of the illumination, the local phase, and that's all you need to know. If you take autocorrelations of that image information, the autocorrelation shows you a peak even if the pattern is not visible in the image itself, because a shifted version of the image should still correlate with itself. From these autocorrelations you can then figure out what you need to know. Kai Wicker in my group worked that out, and it works like a charm; it's a much easier way to do it, maybe not quite as robust, maybe not exactly the same precision, but the difference is only a few percent or so. So this is a reconstruction with the theoretical phases, and if you then do the phase correction, you see that these artifacts disappear, because these peaks in Fourier space disappear. The global phase error I don't want to go into; you can actually see it in the Fourier transforms a little bit, and you get these typical splitting effects, where lines become double lines. So yes, I was advertising doing it with a computer, but you also see the lots of problems you get. Normal optics wouldn't get you into that sort of trouble, apart from aberrations and so on, but if you do it with a computer and it's a complicated process, then the errors you see are also quite complicated. There's no free lunch, exactly.

Good, let me speed up a bit. This next bit was about mathematical aspects of the algorithm that we haven't actually put into it yet, but it's interesting: how can you fit a line through lots of points whose error distribution looks like this, more complicated than the usual case? This is not contained in the ordinary fitting packages.
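Coming back to the single-image phase estimation a moment ago: a heavily simplified cousin of that idea (my own sketch, not Kai Wicker's actual algorithm, which uses autocorrelations and handles non-integer pattern frequencies) reads the phase straight off the Fourier coefficient of one raw image at the known pattern frequency, relying on the strong zero-frequency component of the object:

```python
import numpy as np

def pattern_phase(img, k_cycles):
    # Phase of the stripe pattern, read off the Fourier coefficient of one
    # raw image at the known pattern frequency (in cycles per field of view).
    # Works because the object's zero-frequency component dominates, so the
    # coefficient at k is essentially (modulation/2) * exp(i*phi) * sum(object).
    n = len(img)
    x = np.arange(n)
    coef = np.sum(img * np.exp(-2j * np.pi * k_cycles * x / n))
    return np.angle(coef)
```

For a real sample the object's own spectrum at the pattern frequency contaminates this estimate slightly; that residual is part of the "few percent" difference in robustness mentioned above.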
Such fitting is nothing you get from the manufacturer, so to say. So you have to think about how to do it; it's not trivial. We haven't done it yet, but we know roughly which way to go: there is something called Steiner's theorem that lets you transform moments of inertia from one reference position to another, and that can be used to solve the problem of how to best fit a line through such a complicated error distribution. But I think this is all not so interesting here; I can put the PDF online somewhere and you can look at it if you're interested.

How to do it faster: that's what I said before, if you do the individual-image correlations, you can do it in a faster way. Here you see a comparison of the single-step algorithm with the iterative algorithm, and except at very, very bad signal-to-noise, where the iterative one is significantly better, they are comparable in all other situations; actually the single-step one even seems to be more robust in some ways, which we don't really understand. And here is the comparison between the iterative and the single-step phase estimation: no visible difference in the quality of the reconstruction. Fine.

The Wiener filter problem is still a problem. The current last step is this weighted averaging plus apodization, and that can be summarized as a Wiener filtering approach. But Wiener filtering makes fundamentally wrong assumptions about your sample: it assumes there is an equal amount of noise everywhere, which is absolutely not the case. Bright stuff has much more noise than dim stuff, because the noise is Poisson distributed, and there is no simple algorithm that accounts for that. Of course you can use iterative deconvolution algorithms, but they are significantly slower, and we are still trying to find good compromises between the two worlds. The one-Fourier-transform algorithm is the Wiener filtering, but maybe something can be done in between, and we are working on that right now: we have managed to model the error distribution through the whole reconstruction process, so we can basically predict the error in every pixel, and that will hopefully also give us better reconstruction results.

Here is an interesting theoretical problem. As I said, you can essentially choose your final transfer function, but which properties do you want, and what is the ideal transfer function to choose? It turns out this problem is very old: you can find it in a publication by Dennis Gabor from, I don't know, 1915 or so, somewhere in that range, and he recommends using a cosine-shaped curve; I think that's the green curve here. It minimizes a certain quantity, and that would be one way to go, but there are of course several solutions, depending on what you like and what you don't like. Interestingly, you can also prove something here. Somehow you think: OK, I don't want a final point spread function that goes negative, because negative intensities make no sense. No, wait, sorry, I'm mixing it up: you cannot have it monotonically decaying. That would be ideal, something like a Gaussian, monotonically decaying and frequency limited, but those two don't go together. There is no function that decays monotonically, never has a side lobe, and is frequency limited, even though that might be what you'd like. But you can of course find a good compromise, with a bit of apodization and extremely low side lobes. Fine. OK, there is also something interesting called the Lukosz bound.
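The weighted averaging plus apodization just mentioned, in its standard one-Fourier-transform form, looks roughly like this generalized Wiener filter. This is a 1D sketch of mine: the scalar regularizer `w` is precisely the flat-noise assumption being criticized here, and the cosine apodization is the Gabor-style choice:

```python
import numpy as np

def wiener_combine(components, otfs, w=1e-3):
    # components[i]: spectrum of the i-th band, already shifted to its true place
    # otfs[i]:       the effective OTF of that band at every frequency
    # Weighted average with a Wiener regularizer w (a flat-noise assumption!),
    # followed by a Gabor-style cosine apodization toward the cutoff.
    num = sum(np.conj(o) * c for c, o in zip(components, otfs))
    den = sum(np.abs(o) ** 2 for o in otfs) + w
    k = np.fft.fftfreq(len(num))
    apod = np.cos(np.pi * np.abs(k) / (2 * np.abs(k).max()))
    return num / den * apod
```

The apodization falls smoothly to zero at the highest represented frequency, which is what keeps the side lobes of the final point spread function low.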
You can apply it there, though I'm not such a big fan of it; it has to do with positivity, actually. So, this is the setup we currently have in our lab. As you see, we've replaced the grating with a spatial light modulator. You can essentially get these things from video projectors; this one is special because it uses a ferroelectric type of liquid crystal, which is very fast, so we can run it at more than a thousand frames per second. The disadvantage is that it's binary and we lose a great deal of light in it: from the laser to the illumination here, we have about 1% or 2% of the light left, which becomes a problem, because then you need to buy expensive lasers to have enough light to do something useful. With other light modulators you are much more efficient than with this ferroelectric one; digital mirror devices would also be fast enough, but then you have an amplitude modulation, which matters when you are picking out the orders you want. And there is this thing here called a passive Fourier filter, which is essentially (well, not essentially, it is) a piece of paper in which we took a needle and punched holes at the positions where we want the diffraction orders to go through. You need a bit of practice, so you waste a couple of pages of paper until you get it right, but that's OK. So this selects precisely the orders, and we have an algorithm that iteratively searches for the right patterns to display, so that we get just the light we want and no stray light from other patterns. And here is a bit of polarization control: this one here is an azimuthal polarizer, which of course you can buy from a company, but it's terribly expensive. We've now switched over to eBay mode, meaning I bought quarter-wave plates on eBay from some American company; they are just the raw material, sliced mica or something, and it was really not expensive. I think I paid 50 euros and got 20 of them, something like that; so really not expensive. And it turns out you can cut them with scissors, because this mica is almost like paper, so we cut them, arranged them, and rotated them in such a way that we get the whole azimuthal polarization with these mica plates.

Here you see some of the first results: this is raw data at a frame rate of 62 images per second, and then we worked on it more and more, and I'll show you in a minute another example where we went up to more than 700 frames per second of raw data. I have to say one thing: to get really good data you need a good camera, and we are using very expensive cameras for that. But if you are happy with slightly less quality and speed, you can use industrial-grade cameras, which are cheap, or mobile phone cameras, which are maybe even better than the really expensive ones, I don't know. What I wanted to say here is that these cameras work in a rolling shutter mode, and that is a problem for structured illumination, because ideally you want to expose, then read out your frame, then expose, then read out. You can do that with these cameras, but then they become relatively slow, because they are meant to be operated in a rolling shutter way at 100 frames per second or so. That means you have these lines, I would call them the reset line and the readout line, that run over the frame like this, and you can make the gap between them larger and larger, so that in the end a pixel is reset, read out, and immediately starts exposing again until the line comes by the next time and resets and reads out again. That's the ideal world, because then you have 100% of your time for exposure while still reading out at the highest rate. But for SIM that wouldn't work in the normal way, so we made a slightly more complicated illumination scheme to be able to exploit
that fast speed of the camera. Because if you think about it: if we only ever expose and then read out, then even if we run at just 50 Hz, half speed, we still have only a 50% duty cycle for exposure, and the faster we want to go, the worse it gets; at 90 frames per second we'd have only 10% of the time for exposure. It's a vicious cycle, in a way. So that's why we came up with... let me skip a bit; the computer needs some time, because there's a movie coming up in a second. This is the effect of rolling shutters: if you take a picture of a moving car, you get these funny effects here. But what you should hopefully see in the next slide, once it comes up, is how we solved that problem. It's a little animation that shows what we do: we basically split the frame into segments and illuminate in a way that is synchronized with the camera readout, so that the readout-and-reset line is essentially always kept in the dark; the part just before it still displays the old frame, and the part behind it already displays the new frame. Here you see the effect: this is the sequence of displayed frames. We slice our gratings up into pieces like this and display different pieces at different times, and while the readout and reset lines run over the camera like this, we display this series of seemingly complicated patterns; but if you add them up in the right way, what is actually read out is precisely the patterns you wanted in the end. Of course, in theory there is a problem at the transition line between here and here, because you go slightly incoherent, sequential now, but it turned out not to hurt; we didn't see any significant artifacts from that effect. The images we got look fine; it's just a bit complicated with the triggering and getting everything synchronized. So here you see a result: that's normal wide-field imaging, and that's structured illumination.
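A toy simulation of that synchronization idea, entirely my own idealization: a hypothetical 8-row sensor, one readout-and-reset event per row per frame time, and a display that always shows each row the pattern belonging to the frame that row is currently collecting. It just verifies that the sliced patterns integrate to the intended ones:

```python
import numpy as np

R = 8                                              # sensor rows (toy value)
patterns = [1 + np.sin(2 * np.pi * k * np.arange(R) / R) for k in (1, 2, 3)]

exposure = np.zeros((len(patterns), R))            # light collected per (frame, row)
collecting = np.zeros(R, dtype=int)                # which frame each row is collecting

for t in range(len(patterns) * R):
    line = t % R                                   # readout/reset line position
    collecting[line] += 1                          # row `line` is read out and reset now
    for r in range(R):                             # the display shows, at each row, the
        f = collecting[r]                          # pattern of the frame being collected
        if f < len(patterns):
            exposure[f, r] += patterns[f][r]

# after the startup frame, every row integrates exactly R time steps
# of its own frame's pattern, i.e. 100% exposure duty cycle
```

So although the displayed images look like a scramble of pattern slices, each fully exposed frame comes out as one clean, whole pattern.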
You can see the increase in resolution; something is wrong with the color here. So 714 frames per second was the raw data rate, which means almost 80 frames per second as the final image rate. And that's important: it circumvents drift problems, and it matters because we finally want to go to non-linear structured illumination. What does that mean? Well, here is what I've explained so far: we illuminate with a sinusoidal grating and get this effect, and it's based on this equation, the proportionality to the object density times the proportionality to the illumination intensity. But what if we were not proportional? What if there is some deformation of the response with intensity? There are lots of physical effects that can give you something like this, for example a simple saturation of the fluorescence. If you've ever worked with a confocal microscope: if you turn the laser up too high, it starts to saturate the fluorophores. What that means is that they cannot keep up with the cycling between ground and excited state, because it always takes them a certain time to emit a photon, two nanoseconds or something like that, and if lots of photons come in within those two nanoseconds, the molecules are effectively inactive; they can't respond. So you lose the linearity: if you go too bright, the fluorescence cannot follow, and it saturates. If I multiplied this curve by a thousand, it would start out like this, but here it saturates; it cannot go brighter than a certain limit, which is actually set by the radiative rate of the fluorophore. And this deformation, for those who play electric guitar: when you want to go from a Bob Dylan sound to an AC/DC sound, you crank up the amplifier, because that gives you higher harmonics, and that makes the distorted, squeaky sound. It's the same here: in Fourier space, the non-linearity comes down to a convolution with the Fourier transform of the function applied to the intensity, which turns the sine wave into a non-sinusoidal wave, and you get higher harmonics here. Luckily, the same tricks still work, because this automatically generates the higher orders, at two, three, four, five times the pattern frequency; so instead of having to take three images, you now have to take, whatever, 25 images or something like that, but you get more and more resolution, in theory infinite resolution information. Of course, in practice you have noise everywhere, and if these components are too small, everything drowns in the noise and you won't be able to resolve it. But you have fundamentally gotten rid of the Abbe-limit problem; you now have a signal-to-noise problem instead. And I would make the statement that this is the gist behind almost all super-resolution, at least one family of super-resolution methods: stimulated emission depletion (STED) microscopy you can picture the same way, getting information from beyond Abbe's frequency limit into the range of measurable frequencies. There is another family, the single-molecule methods, which work in a different way and are very rarely interpreted in the framework of the Abbe limit.

Right, so these were our first attempts at doing this non-linear super-resolution, and we didn't get very far. What you saw was the transition from wide-field to normal, linear structured illumination; now comes the transition to the non-linear version, and it's not as nice as we expected it to be. First, it takes some time, and then, I don't know, you barely notice the difference; it gets a little bit sharper, but not what we wanted. That's why we never published this. However, other groups got further, particularly Mats Gustafsson's group: they worked on it and applied it to biological samples. This was a fixed sample; again, because of the lighting here you can't see it well, but what you would see is lots of little dots. These are the nuclear proteins.
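The amplifier analogy is easy to check numerically: push a sinusoidal illumination through a saturating response and harmonics of the stripe frequency appear in the spectrum. The exponential saturation model below is a generic stand-in I chose, not the exact photophysics:

```python
import numpy as np

N = 256
x = np.arange(N)
I = 1 + np.cos(2 * np.pi * 4 * x / N)        # sinusoidal illumination, 4 cycles
linear = I                                    # linear fluorescence response
saturated = 1 - np.exp(-3 * I)                # generic saturating response (assumed)

lin_spec = np.abs(np.fft.fft(linear)) / N
sat_spec = np.abs(np.fft.fft(saturated)) / N
# the linear response has energy only at 0 and +-4 cycles;
# the saturated one also at the harmonics 8, 12, ...
```

Each extra harmonic is one more copy of the object spectrum shifted further out in Fourier space, which is exactly why the resolution keeps growing, at the cost of the harmonics getting weaker and weaker against the noise.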
They got to about 50 or 60 nanometer resolution on such a fixed sample. And then, very prominently, a bit more than a year ago, almost two, came Eric Betzig. The next one, once it comes up... sorry, my computer is really slow; there will be two movies once they play. These are done with non-linear structured illumination using photoswitchable proteins, and that is the most promising route: if you have fluorescent proteins that you can switch several times from a dark state to a bright state and back, you can use that to generate your non-linearity. That is what is done here. They actually went for a not-so-high non-linearity, but they could still get about 60 nanometer resolution, and then you can see these nice details on the actin cytoskeleton of cells that were previously not visible; you see some spiraling patterns in there, and here, for example, sprouting actin activity, and so on.

Yes, and only the ones that are not illuminated are increasing? Yes, I have to explain a little. The saturation effect I just showed you indeed has the disadvantage that you get a relatively strong central component, the wide-field image, and relatively weak higher components, and since the signal-to-noise comes from the total number of photons, you get a relatively bad signal-to-noise ratio. It's not that the resolution as such would be worse; it's just that the signal-to-noise is very compromised by this large amount of fluorescence from everywhere. But you can do it differently: we like to distinguish between positive and negative saturation. What I described is the normal, positive saturation; what you would rather like to do is saturate another transition, and that is exactly what happens with these photoswitchable dyes. What you saturate there is the on-to-off transition, switching from the bright to the dark state, and this way, instead of a pattern with very fine lines on lots of background, you essentially get a pattern with nothing, with small lines on top of it, and that is much better in signal-to-noise terms.

But there is another problem with the switching of proteins and molecules: molecules are discrete entities, and they either switch or they don't. That means you have not just the Poisson noise of the photons but also the Poisson noise of the switching events. And when the reconstruction theory, as it is done at the moment, says "this brightness is 80%", that doesn't apply to a single molecule; the molecule is either on or off. To get something close to that 80% you need to repeat the experiment many times; you basically need to push these molecules through thousands of cycles to get a good, reliable estimate of 80% for a molecule. So the higher the resolution goes, the more you get into the single-molecule regime, and you need to account for these effects. Does it make a difference if it's two-photon? It still has the problem that the molecule is on or off... no, the non-linearity? Ah, the non-linearity. Yes, you're right: molecular switching, or saturation of the switching kinetics, is just one way of doing it; there are several other ways of getting a non-linear response, and you're right that two-photon excitation would be one. It has a problem, though: it's just a square, and a square gives us just one more peak in the end, so it's not so good in that respect. But it has another, much more serious problem: compared to the wavelength you normally illuminate your fluorophores with, you now have double the wavelength. So what you're saying is basically: let's use double the wavelength, and then win back in resolution just what that costs you anyway. It's not really worth the effort; you don't gain by this effect, so using two-photon excitation for this is not recommended. Of course, exploiting two-photon SIM
if you want to go deep into tissue and maybe do something there, that's a different issue, and you should know, because we have a common project about this. Right. So for me the question is now: I've run over time quite a bit, and I wanted to show you something completely different that I think is cool; it's much easier to grasp and to watch, because it's more nice images and not so much theory, and I still have about 20 minutes for it. So I think I'd better skip over the final slides on structured illumination and then show you a bit of the other topic. OK, good. There are some image-reconstruction topics called blind structured illumination; that's about what happens when the patterns are not perfect sine waves anymore. Can we detect that and correct for it? Yes, you can, and it makes better images, and so on; you can read the publications about that. It's not really essential for understanding structured illumination, so let me summarize this part of the talk. First of all, you should have learned that linear fluorescence microscopy methods like structured illumination can enhance the resolution by a factor of two. What is particularly interesting about structured illumination is that it does this in an efficient way. Why do I say that? Because Colin, for example, published, how many years ago, the statement that you can get a factor-of-two resolution improvement in a confocal fluorescence microscope, and it's true: the upper limit expands by a factor of two. It's just that the transfer function becomes so incredibly small out there that it's really, really hard to pull that out of your signal-to-noise with deconvolution; you can try, but it's difficult. Structured illumination simply boosts these high frequencies so much that they are now easy to detect. And non-linear fluorescence methods, like non-linear structured illumination, can do better. By the way, I should mention something that hasn't been done yet: another non-linear effect is using quantum effects. For those who know about Rabi oscillations: you can try to push your fluorophores into Rabi oscillations with ultra-short pulses, and if you can do that repeatedly, you get a very nice non-linearity from it, but it's not easy. Nanodiamonds, maybe, or certain transitions of erbium, but not easy. Right, these are the collaborators for the structured illumination part, and I want to thank the people named here in my group who contributed to it, and many, many more people, on different aspects of structured illumination microscopy.

But now I want to go to the second talk. Should we have some questions now, or should we continue with the talk? Any important questions right now? Yes. So the question is whether we can do it in transmission, and it actually relates to a similar earlier question about reflection. My answer would be: yes, you can do it, but you gain nothing. What I mean by that is that transmission and reflection are always coherent methods, which you would have to describe with coherent or partially coherent theory, and in those cases you will find that, if you compare the resolution limit with something like oblique illumination, you get the same final limit. Many people don't know this, but Abbe's original paper on the resolution limit, from 1873 I think it was, was very specific. In that whole paper, by the way, there is not a single equation written the way we write it today; it's all in words, "this is proportional to that" and so on. But he basically says: you can get the resolution limit of lambda over 2 NA, the one you all know, but for the specific case of oblique illumination. He first describes the theory for head-on illumination at low NA, and there you get lambda over 1 NA as the limit. Many people don't know that, and that's why there are so many papers around claiming they have achieved super-resolution when they
essentially show they are better than head-on illumination but they say it's beyond Abbe but that's not I mean that's not the Abbe limit that we know it's lambda over 2 NA it's for oblique illumination and if you compare with that one you gain nothing by the whole structure you can think of it like this it's linear in amplitude so if you illuminate something with this wave and you were able to measure the amplitude and you illuminate it with this wave and you were able to get the amplitude with a hologram or something and now you do both at the same time what do you gain nothing and that's what you would then call structured illumination and you just get a more complicated looking amplitude you can disentangle it whatsoever but it's not helping you in resolution so scattering doesn't do the trick basically okay good let me switch over to the other presentation by the way did you figure out what this other guy was I don't know if he was old no I mean the guy on the slides that I gave around right any guesses it was Morea maybe no it was Friedrich Schiller and our university is called Friedrich Schiller University so that's that's why I put in there right so this is going to be a fast run through it because it's it's meant to take longer than I have time now but I think that the basic messages are simple to get so you might know about light sheet illumination the trick is there instead of illuminating from the top like you normally do you have a separate optics and you illuminate from the side why is that cool because you get if you make this wide enough it's a bit like maybe in a disco a laser beam that scans like this or so it gets you a nice sheet of light and you can make it relatively thin and relatively long the reason is that basically the thickness scales sort of linearly with the pupil size but the length some people call it the Rayleigh length scales with the square so that means if you if you are willing to reduce your thickness to make to reduce your 
resolution or to make the thickness twice as wide you're going to have four times longer in that and this is what is used here so you basically let's let's sacrifice a factor of 10 in width so we are talking about instead of 500 nanometer 5 micron width but then we have a factor of 100 in length so over a whole field of view we can have almost the same thickness and that's what is done here that's the ordinary way but so here you see a cleared tissue and we've been working with cleared objects so this is very old procedures from spotter holds and so on in the early 1900s they found out how you can put organic material inserting solutions and make them almost completely transparent so first you bleach a little bit with hydrogen peroxide and then you put them in these solutions and they become like a piece of glass and so this is a whole brain from a mouse that's been put in there but you can do all kinds of other things and in this case this this data set when Uli Leisner came to my lab is from such a mouse brain but what you see here is interestingly not the stuff that is fluorescing but it's the stuff that is not fluorescing so what has been done is the data was taken and then inverted in a computer and what you see here is bright is actually the dark stuff in this case this is the vessels because there's nothing inside and it's the nuclei which for some reason don't fluoresce as well because they don't contain so much protein like the rest and the fluorescence just comes from the outer fluorescence of the material after formalin fixation so it has been fixed by formalin and that cross linkers everything and that generates a little bit of fluorescence and it's enough for these sort of for these nicely embedded samples in light sheet illumination so what we've now done is we wanted to make slightly better compromise by focusing the light now a bit stronger so instead of having five micron we wanted to go down to almost one micron in thickness so of course we have 
the problem now of the massively reduced Rayleigh length, but since this method is really very gentle in terms of exposure and bleaching, we can sacrifice a bit of that. Instead of taking one image, we take several images where we move the position of the light sheet, and we move it with the help of an electrically tunable lens: you put some voltage on it, it changes its shape a little bit, and it focuses to a different position. Here you see the results. This is the head of a fly that was put in these clearing solutions, and you see the light sheet focused from here to here, then to here, and so on. You take several of these images, stitch them together, do a little bit of deconvolution, and you get a data set like this. That was the moment where I got really scared of flies, because they really look like aliens or I don't know what. If you image them at a resolution of about one micron in x, y and z, as was done here, you get pretty cool data. The raw data, I mean the 3D stack of all the images together, was 300 gigabytes before the stitching, so you need a lot of pixels. When you cut out the pieces, put them back together, and process the whole thing, it is still 8 gigabytes for one volume.

Of course the interesting bit is not what is outside but what is inside, because you have the full fluorescence data of the whole fly head, to a very high level of detail. One micron is not at all super resolution, but it's good enough to see many features; it's good enough, for example, to see that behind each of these compound eyes there are always four neurons, and you see them, and they go into the brain, and you see the switching layers in the brain. We then had some biologists segment the data and add some interpretation. Here you see the same data, sort of cut open. These are the muscle groups that operate this trunk-like thing that the fly can pull completely inside when it's flying and extend when it wants to eat some food. This looks really scary. Have you ever seen a machine that drills a tunnel? This really looks like a tunnel-drilling machine: there are these sharp tools in the front of the fly, row after row after row, and then it mashes up the apple, pumps out some saliva, and sucks in the juice with the fruit. Exactly. So this is the segmented data of the neurons; the biologists can, from the morphological features, very precisely tell apart the different segments of all these parts of the neurons in the brain. The fly has these antennae on top, and these so-called ocelli, with which it senses the light on its head; I think even humans have something left there, some sort of light-sensitive sensor inside. And then you can put the musculature, the pharynx, the digestive system, everything together.

When we had that, we thought this is really cool, this should be really interesting to museums, because they have these samples from a hundred years ago that they collected in the Amazon forest, all kinds of tiny insects and so on. Why not put them in our machine and give them back a data set at this resolution? It turned out they were moderately interested, but they didn't have any money whatsoever to pay for any of these experiments, so we didn't do any of that, which is a pity; I think it would still be nice. Here you just see the level of detail. This is actually a larva from such a fly, a fly larva, and the resolution is really one micron or slightly above, and you can see the banding patterns of the muscle: if you look carefully here, you see these stripes, which are the Z bands of the musculature. These larvae are basically made for eating; they just eat and eat and eat, so they have a lot of muscles. That's something from the rainforest, a funny animal that the biologists in Jena are interested in; it's some sort of link animal that hasn't been researched a lot. What is interesting with this guy is that it doesn't have eyes; only when it needs to find mating partners, or something to eat because there's nothing left, does it start to develop eyes and look around for where to go next. This is just about our clearing procedures: we are also able to handle complicated tissue, like this lung tissue from a mouse, which is a bit more difficult to preserve and to put in the machine. And this is to show that we can preserve the fluorescence of GFP: these are GFP-labeled neurons in a piece of brain, and you'll see in a minute how the raw data looks. Now the interesting bit is that we have the positive contrast from the labeled neurons, but at the same time we have this inverse contrast from all the dark stuff, i.e. the vessels and the nuclei. So just by image processing you can extract the dark stuff and the bright stuff, put them back together, and you get something like a multi-color image even though it was just a single-color image; that's what you see in a second. We basically take the inverse stuff, put it back together, and we get a nice sort of context for the neurons: what is in their surroundings, how many cells are there, where the vessels are going, and so on. So here you have the neurons, and the good thing is that the resolution is now isotropic, because the light sheet is so thin that it matches the resolution we have in X and Y.

Good, so that was the part I wanted to show you about plain light sheet imaging, with some cool images, and now comes, if I'm allowed to carry on, a part about a combination of Raman microscopy with light sheet imaging. First of all, what is Raman scattering? I don't know, was it mentioned in this course? Raman scattering is another effect, and if you think fluorescence is weak, well, then look at Raman, because Raman scattering is way weaker than fluorescence. What happens physically is basically the following: you have a light wave that hits a molecule, and this molecule, you can imagine, is vibrating all the time, and as it vibrates it changes a little bit its own configuration and its electron cloud. What happens is that there is a tiny modulation of a quantity called the polarizability of the material. You can polarize something rather well or not so well, and this polarizability is modulated over time: the molecule scatters a bit more, then a bit less, then a bit more again. That means the Rayleigh-scattered light that the molecule emits is slightly modulated by this fluctuation. The tiny modulation of the polarizability leads to a tiny modulation of the amplitude, and this tiny amplitude modulation means that in addition to the Rayleigh line you get a little bit of an anti-Stokes and a Stokes scattered Raman line. That, in a classical picture, is what Raman scattering is, and you might get an idea why it is so super weak, because it's a tiny effect. Now, there are even commercial Raman instruments that work like a confocal microscope: they put a laser here, they have a spectrometer, they take the data, and then they wait a little bit longer until they get enough photons, because the signals are so weak, and then they move to the next pixel, and the next pixel, and so on. So a typical Raman image of a biological sample is typically 100 by 100 pixels and takes four hours to acquire. Why?
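The classical sideband picture described above is easy to check with a small numerical toy model. All numbers here are arbitrary illustrative units of my own choosing, not anything from the actual experiment: a carrier wave is multiplied by a weakly modulated polarizability, and the spectrum of the product shows the strong Rayleigh line plus two weak Stokes/anti-Stokes sidebands.

```python
import numpy as np

# Toy classical model of Raman scattering: a weakly modulated
# polarizability multiplies the incident light wave.
fs = 1000.0                # samples per time unit
n = 100_000                # long window -> sharp spectral lines
t = np.arange(n) / fs
w_light = 100.0            # "laser" carrier frequency (arbitrary units)
w_vib = 10.0               # molecular vibration frequency
alpha = 1.0 + 0.01 * np.cos(2 * np.pi * w_vib * t)   # tiny polarizability modulation
dipole = alpha * np.cos(2 * np.pi * w_light * t)     # induced dipole ~ scattered field

spec = np.abs(np.fft.rfft(dipole))
freqs = np.fft.rfftfreq(n, 1 / fs)

# Strong Rayleigh line at w_light, weak sidebands at w_light -/+ w_vib
# (Stokes / anti-Stokes), scaled down by half the modulation depth.
for f in (w_light - w_vib, w_light, w_light + w_vib):
    print(f, spec[np.argmin(np.abs(freqs - f))])
```

The sidebands are weaker than the Rayleigh line by half the modulation depth of the polarizability, which is one way to see why Raman signals are so faint compared to the elastically scattered light.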
Well, you can crank up the laser intensity, but there's only so much you can do; eventually you're just going to evaporate your sample, you fry your sample, and that's indeed what they see. They say: we always go to the carbonization limit. You know what carbonization is: when you leave your schnitzel in the pan until you cannot eat it anymore, when your schnitzel becomes a piece of coal, that's the carbonization limit. That's pretty much where they are operating their Raman microscopes. So we thought that's maybe not the best way of doing it, going pixel by pixel. Why not use the light sheet idea and illuminate from the side? Then you get a million times more pixels simultaneously, and you can afford to wait much longer for your signal. So that's the idea: a light sheet. What are its properties? Well, the tissue tolerates more intensity, because the intensity is spread out, and it recycles the light in a way. What I mean by that is that the light that has passed the beginning of the light sheet is the same light that is used again for the next bit, and the next bit, and the next bit, so you make more effective use of your light. In a confocal, the light above and below the focal plane is wasted; you don't need it there, you only need it in focus. Here it is useful everywhere along the sheet. So it has this multiplex advantage, but that is at the same time the big problem, as you will see in a minute. So the idea is: why don't we use it for Raman imaging? But the problem is how to get the hyperspectral information when we now have a whole camera field of view and every pixel should contain something like 1,000 spectral channels. There is no camera with a thousand spectral channels that you can buy, so we had to find some other way of getting the spectral information out, and that is what is done here; I will try to go over it very quickly.

You might have heard about Fourier transform infrared spectroscopy, which is one way to measure the absorption properties of materials in the infrared, and it's usually done interferometrically; if you know how that works, you also understand how this one works. You basically take a Michelson interferometer, as shown here, and you modulate the path length of one of the two arms. If you now had some red light from the sample coming into the interferometer, imagine ideal red laser light or something like that, then depending on the precise optical path lengths you get constructive, destructive, constructive, destructive interference: you get this interferogram, if you want. The trick is, of course, that if you now have blue light you also get constructive and destructive interference, but at a different spacing, because it's a different wavelength, and that is what you use. If you have something that is red and blue together, you get an incoherent overlap of these curves, so it typically looks like this. There is one particular point where the path lengths are equal, so even for white light you always get constructive interference there, and as you move away from this point the interference quickly seems to die away. But you can measure this, and it turns out that if you do a Fourier transformation of that measurement along the optical path length axis, you can recover the spectrum, because every spectral line contributes a sinusoidal component to that overlap, and analyzing for sinusoidal components is precisely what the Fourier transformation does, so you get it back. You get an additional offset component, but that doesn't hurt you very much. So that's called Fourier transform spectroscopy. Yes?

OK, so the question is: what if it's incoherent light? It turns out it doesn't matter; that question maybe has to do with a misunderstanding of what a photon is and how photons interfere. In my world, every photon only interferes with itself, and so it doesn't matter: you can split a photon, if you want, and as long as you don't measure it, it goes both ways and can interfere with itself. So you can determine the spectral properties this way for totally incoherent light as well; of course you have to adjust everything well and so on, but it works. Let's look at this in a little more detail. Here we have a sketch: the objective, here we are in the infinity beam path, there's our interferometer, there's the camera, and this is the equivalent of the tube lens with which we focus onto the camera again, so that a pixel in object space becomes a pixel on the camera. That's fine. If we then modulate by moving one of the mirrors, you get your interferogram, you get your spectrum, you're good. Now let's look at what happens to a pixel that is not here but at some other position. What happens in this case, very luckily, is that you still get perfect interference, just at a different pixel on the camera. So this scheme, if it's adjusted right, works for a whole field of view simultaneously; of course your optical components need to be good, without too large wavefront errors and so on, but it's possible. What you need to make this really work is a couple of technical things. One is very precise control of that mirror: we are talking about interferometry, so we need something like one tenth of lambda, which comes down to a few nanometers of precision with which we need to know where this mirror is. How do we do that?
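Before the hardware: the Fourier-transform spectral recovery described a moment ago can be sketched in a few lines. This is a toy model under simplifying assumptions of mine, two ideal incoherent spectral lines whose wavenumbers are chosen to fall on exact Fourier bins, not the processing used on the real instrument:

```python
import numpy as np

# Two incoherent spectral lines, given as wavenumbers (cycles per micron).
k_red, k_blue = 1.5, 2.2
step = 0.05                        # sampling step of the path difference, in microns
delta = np.arange(4000) * step     # optical path length difference axis

# Each line contributes 1 + cos(2*pi*k*delta); incoherent lines simply add
# their intensities, giving the overlapping interferogram.
interferogram = ((1 + np.cos(2 * np.pi * k_red * delta))
                 + (1 + np.cos(2 * np.pi * k_blue * delta)))

# A Fourier transform along the path-difference axis recovers the spectrum;
# subtracting the mean removes the constant offset component.
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
wavenumbers = np.fft.rfftfreq(len(delta), step)

recovered = sorted(wavenumbers[np.argsort(spectrum)[-2:]])
print(recovered)    # ~[1.5, 2.2], the two input line positions
```

The two dominant peaks of the transformed interferogram sit at the wavenumbers of the input lines, which is exactly the statement that every spectral line leaves a sinusoidal fingerprint in the interferogram.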
Well, we send a reference laser over a corner of the mirror onto a photodiode, and then we can move that mirror over many millimeters and count fringes, one, two, three, and we know precisely where we are, as long as the laser always has the same wavelength. We use Arduinos to track where we are, so that part is relatively easy to solve. However, there is a big problem, and it's this one: try to buy a motor with the necessary precision that can move over a millimeter without wiggling its axis. They always have tolerances on their steering precision; such a motor doesn't exist. If you want that, you have to build it yourself, meaning something like three contact points on the mirror and three piezos that keep it aligned. Don't do that; it is really a lot of work. So we thought we had to come up with a better concept than this type of interferometry. It turns out there was even a commercial machine, for those who might have heard of it, from a company called Applied Spectral Imaging in Israel. They made an instrument called SKY; spectral karyotyping was the application, and they used a Sagnac interferometer for it, which is great because it's super stable, but which turns out to be completely unusable for this application, because it doesn't support enough optical throughput. It's too big, basically; you cannot fit the beams through it well enough, so you would have a tiny field of view and you wouldn't get enough spectral channels. We wanted to use that machine because we happened to have one at our institute, but it just doesn't work: it's good for fluorescence, but not for Raman imaging, where you want much more spectral resolution.

So we built our own, and let's analyze what the problem is. What we did is come up with this cat's-eye reflector trick, the corner cube retroreflector, because if you know about corner cubes, they always return beams in exactly the same direction that you sent them in. They are commonly used when you want to do field measurements over large distances: people don't want to fiddle around with adjusting mirrors to send lasers back, so they just put a corner cube there, send the laser, and the laser automatically returns to the measurement machine, and you can do all kinds of measurements with that. So why not use it for this interferometer? Then the motor can wiggle and we don't care; it does wiggle a little bit, but the beams always return at the same angle, which means they interfere at precisely the right position on our detector. Here you see the design. If you analyze it a bit more closely, it turns out that for polarization reasons you want to illuminate one particular one of the six segments of such a corner cube reflector; if you send the light in here, it is reflected twice and comes out here. We have gained two advantages: one is that it doesn't matter when one of the mirrors wiggles, and the second is that we now also have the possibility of using two outputs. We have essentially turned the Michelson into a Mach-Zehnder type interferometer, because we have the entry here, and in the second layer we get two outputs that we can send onto two cameras and record two interference signals simultaneously, one constructive and one destructive. That's good: better signal to noise and robustness against laser fluctuations and so on. This is how it's built, and when we analyzed it further, it turned out that you can cut pieces out of these corner cubes, which you can just buy from Edmund Optics or any other company, and this way we could make them fit really closely together and get a huge optical throughput. So we no longer have any problem sending the full camera image with all the spectral information through that thing.

This is how it looks in the lab, and this is the schematic: we have a laser, two watts of laser power, that we send into our sample in a light sheet configuration. This part is based on something called OpenSPIM, which you can download from the web for free and where you can find all the designs for how to build the parts; of course you need the objectives and such things yourself. Then the emission light, our Raman-scattered light, goes sideways through the interferometer onto the camera, and a motor moves the mirror. What you see here is the reference laser, which we couple over the same mirrors onto a diode to see where we actually are in our stepping, and here the big laser that illuminates the sample. And then there's a trick. You could sample in the ordinary way, which would mean two sampling steps per fringe, but then you would have to take maybe 2,000 to 4,000 images for a decent spectrum. And if you look at what you've actually done, you're really interested only in this part of the spectrum, and this part is totally empty, so you think: why waste so many images on something you're not able to measure anyway? We therefore did intentional undersampling: instead of sampling at the Nyquist limit, we undersampled by a factor of two, which leads to intentional aliasing. What we measure comes out mirrored, but because this part of the spectrum is totally empty, and we can even enforce that by putting in the right filters to make sure there's absolutely nothing there, it's fine; we don't lose anything, we just have to know how to interpret our data in the end. Here you see some examples. This is light sheet illuminated, and I've overlaid some of the spectral information: in every pixel we now have 500 to 1,000 measured spectral bands, and I've overlaid a couple of these bands just to show you that there's some interesting information in there. This is just a sample of agarose gel, and inside there are polystyrene and polymethyl methacrylate (PMMA) beads, and you see that you can nicely distinguish them spectrally with these pseudo colors
and what is shown in blue is the fluorescence, no, not the fluorescence, the Raman scattering of the water, because water also does Raman scattering. Water is actually a terribly bad Raman scatterer, but that's super good, because it gives us a chance to discriminate something inside the water. If the medium were something else, if you tried to embed the sample in benzene or something like that, you wouldn't be able to see anything, because the medium would overpower all the rest of the spectra. So here you see the water spectrum, but this is of course what we're interested in, and some of these peaks as well; that's the PMMA. If we take only the spectrum of a 2x2 pixel region, it becomes noisy, as you see. There are a few more things that I didn't tell you about. You have to make tiny corrections for the pixel-position effect: if a beam goes slightly diagonally, the path length is slightly different than if it goes straight; a tiny effect, but you have to correct it. And there are some tricks for getting spectra that are properly zero-centered: if you just take the absolute value, you get not-so-good results, but if you apply the correct phase correction to the spectra, you can simply take the real part, and that gives better results. Here's the polystyrene, also fine. You can compare the polystyrene with a reference spectrum pulled from a database on the internet, and as you see, it's precisely the same, except for one big difference here, but that is of course the water, because they didn't measure in water and we did, so that makes sense. It also turns out that our spectral resolution is better than what you get out of the database; we had narrower line widths than what you could download from the internet.

And here's some biology. This is a zebrafish eye that was lightly formalin fixed but embedded just in water, and it's illuminated from the side, and you see one of the big problems of light sheet when you have scattering tissue, especially the eye of a zebrafish. Well, what is an eye made for? It's made for focusing light, right? And it does focus light. Hmm, so it looks like we have trouble here. We could of course try to de-stripe the light sheet by illuminating several sheets from different sides, or by making the illumination go around, but that's a lot of effort. Then we thought about it: what is biology? Biology is water, right? Water, and a little bit of stuff in the water. So why not use our water signal? If you look at the water signal, that's the blue stuff, it is a very good indication of the local power density in every part of the sample, so if we divide by the water signal, we should be able to normalize our data. It turns out this works really well: all the stripes are gone, and you get a much cleaner image. Of course there's a bit of degradation in quality here as well, but at least it doesn't look so full of artifacts anymore. So luckily, biology really is just water. Then you can look at different regions in the sample, pull out the spectra, and it all makes sense. This is water; this is the so-called plexiform layer of the eye, where lots of axons and neurons run through, so it's mostly lipids, and indeed this is a typical lipid spectrum; and the center of the lens is made mostly of proteins, I think crystallins, and indeed that corresponds to protein spectra. Sounds good. And one has to say that this may look to you like a fluorescence image, but it is really Raman scattering. The original has 2000x2000 pixels; we then binned it down 2x2 to get somewhat better signal to noise, so this is roughly 1000x1000. That would be really, really difficult to get with a machine that takes many milliseconds per point, so we've gained a lot in speed this way, and it allowed us to measure, over one night, even a whole series of 50 slices of the zebrafish eye. So we now have a 3D Raman image with 1000x1000
pixels and 512 spectral channels, I think; does it say it here? I'm not sure, at least 512 spectral channels, maybe even 1,000. And then the last thing we were thinking about: can we somehow run this data set, with its incredible amount of information, it's many, many gigabytes, I don't know, 20 gigabytes or something like that, through some algorithm and pull out what is in it? Because there's chemistry in it, there are these different spectral components, so there must be some way of extracting them. It turns out that's a very difficult problem, called the blind source unmixing problem, but there are algorithms for it, and one of them is non-negative matrix factorization, and that's what we applied. We basically asked the computer to find us five components with which to interpret the data, and it came up with these components, totally unsupervised if you want. And it's quite interesting and nice that it found things we can actually identify as proteins, lipids, water, and something you would call DNA or RNA or so, which makes sense. So that's nice, and then we can pseudo-color our data set with these components, or three of them, and that's what we've done here. This is now the pseudo-color image; it looks very similar to what you've seen before, but it's pseudo-colored from the components found by the non-negative matrix factorization unmixing.

OK, so I'm now a good quarter of an hour over time. In the future we want to improve on this, and as nice as the Fourier spectroscopy is, we will move away from it again, because if you analyze how much better it really is: we are a lot faster than the commercial instruments, but if you look at the theoretical limits, for some cases we are only a factor of five faster, and the reason is that the Fourier transform sets us back quite a bit in signal to noise in the case of Poisson noise. That's why, as a next step, we are going to try some concepts called integral field spectrometers; they are used a lot in astronomy, and they work by using microlens arrays to obtain the spectrum. OK, so now we really deserve a break, I think.

Yes, I think we'll have to go straight on to the break; we listened more to Rainer rather than having the chance to answer questions, so let's thank Rainer for his wonderful talk. OK, coffee break now; we'll start again as scheduled.