Okay, so before Professor Colin Sheppard continues with his presentation about confocal microscopy, I want to say something about the poster session. Due to the large number of posters and evaluators, I want to ask you to try to present your paper in 10 minutes, to increase efficiency, okay? So prepare yourself to be ready to tell everything in 10 minutes, and then you have additional discussion. Okay, and now I invite Professor Colin Sheppard to continue the interesting discussion about confocal microscopy and super-resolution, and today there will also be the phase-contrast microscopy part. So please, Professor.

Okay, thank you very much. Yeah, so in the opening talk I gave yesterday, I went a bit slower than I thought I would, so you'll find, if you look at the talks that have been uploaded, today I've chopped some of the slides out, just a small number, but you can see those. It's all things that I thought other people would say more about later on in the course, like Rainer Heintzmann and Alberto Diaspro, who are both going to be talking about super-resolution aspects.

Right, now, what I was going to do now: I introduced confocal microscopy last time. What I was going to do is to actually derive the imaging equation, not for a fluorescence microscope, but for a coherent form of confocal microscope, just so you can see how it works. And so last time I introduced the fact that if you've got a coherent imaging system, here it's just a single lens, and you've got some object, this is the object amplitude distribution, then the image you get is the convolution of this with the amplitude point spread function of the lens, which was this thing 2J1(v)/v, the amplitude point spread function.

So now in a confocal, and well, here I've drawn it as a transmission system; to a first degree, so far anyway, it doesn't really matter whether it's a reflection or a transmission system. There is a very real difference. I'm not sure whether I should say much about that or not. But anyway, in the focal plane, when it's in focus, they're the same. But I've just drawn it like this so you can see what's going on. So we found what we've got here, the amplitude we have here. Sorry, this is a point source. We get the point spread function of the first lens, h1 here. This is then multiplied by the transmission of the object, which is placed here. But this is actually shifted, and this (xs, ys) is a scan coordinate: we're moving the sample relative to this focused spot. And then this is imaged again by a second lens, and we get this h1 times t convolved with the point spread function of the second lens, h2. And so finally then, of course, we look at the intensity again. So this is the expression for the final image intensity that you would see in this plane of the detector here. And you see this depends on (xd, yd) but also on (xs, ys). And what we do in a confocal microscope is we basically just look at the intensity on the axis, so we put (xd, yd) equal to zero. So if we put (xd, yd) to zero, we get this now, and you see that this expression is basically h1 h2 convolved with t. So you might find the mathematics, the way it works, rather surprising, because here we've got something which looks like a convolution of h1 t with h2, but we end up getting a product. And it's this product which is really nice for us, because it has the effect of sharpening up the point spread function, so we get a better resolution, and it is also responsible for the optical sectioning effects as well.
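To make that concrete, here is a minimal one-dimensional numerical sketch of the coherent confocal imaging equation just described: the effective amplitude point spread function is the product h1 times h2, and the image is the squared modulus of its convolution with the object amplitude t. The object (two point transmittances), the grid and the separation below are illustrative choices, not values from the talk.

    import numpy as np
    from scipy.special import j1

    # optical coordinate v across the scan direction; small offset avoids v = 0 exactly
    v = np.linspace(-30, 30, 4000) + 1e-9
    h = 2 * j1(v) / v                       # amplitude PSF of one lens, 2*J1(v)/v
    h_eff = h * h                           # confocal effective amplitude PSF, h1*h2

    # a toy object: two point-like amplitude transmittances separated by dv
    t = np.zeros_like(v)
    dv = 3.83                               # roughly the Rayleigh separation in v units
    t[np.argmin(np.abs(v + dv / 2))] = 1.0
    t[np.argmin(np.abs(v - dv / 2))] = 1.0

    conventional = np.abs(np.convolve(h, t, mode="same")) ** 2      # coherent, single lens
    confocal     = np.abs(np.convolve(h_eff, t, mode="same")) ** 2  # coherent confocal

    mid = np.argmin(np.abs(v))              # point midway between the two objects
    print("dip/peak, conventional:", conventional[mid] / conventional.max())
    print("dip/peak, confocal:    ", confocal[mid] / confocal.max())

With these illustrative numbers the conventional coherent image shows essentially no dip between the two points, while the confocal one does, which is the kind of two-point behaviour discussed next.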
So this confocal microscope, in this mode like this, behaves as a coherent microscope with an effective point spread function which is given by the product of the point spread functions of the two lenses. Okay, so once we know how to calculate the image, now we can calculate the image of various things. I showed some of these before. This is the image of two points. You remember I showed these top three plots before, for an ordinary conventional microscope with different sizes of condenser aperture. And now I'm giving confocal reflection and confocal fluorescence as well. And you see that this blue one, as I described before, is pretty close to the Rayleigh separation. Remember we said that here the two points are resolved, here they're certainly not resolved. And this is in confocal reflection, this is in confocal fluorescence. You see that the size of this dip gets very pronounced, especially in this confocal fluorescence case. So we can see that the imaging of two points is better in the confocal system.

Okay, now I was going to jump back and talk a bit about the history of how confocal microscopes were introduced and so on. Usually we recognize that the confocal microscope was invented by Marvin Minsky. Some of you might know the name of Marvin Minsky; unfortunately he died last year, 2016, but he was very famous mainly in the area of artificial intelligence. So why he came up with this idea is not too clear. But anyway, these pictures are taken from the patent that he filed in 1957. And you can see he didn't call it a confocal microscope, by the way; he called it a double-focusing optical system, so stressing the fact that you've got the two lenses. And you can see a transmission system, a reflection system. One of the things I wanted to point out here is, you know, 1957 was a long time ago, and so there wasn't the same technology then that there is now. You can see that from these diagrams. You see the source of light is some sort of, you know, lamp bulb. This was before the laser was invented; there was no laser in 1957. So it brings it into perspective. And you can also see the photodetector is just a simple photocell. Actually, I think the photomultiplier might just about have been invented by then, but it was only in its infancy. And the third thing, well, you wouldn't see this in the diagram anyway, but the third thing that Marvin Minsky didn't have: there was no computer. Right, and I'll say more about that in a minute.

So that was 1957, but actually it turns out that the concepts of confocal imaging had been discussed before. I'm not trying to belittle what Marvin Minsky did, because I think his was the first real confocal microscope. But I'm going to show you two examples of earlier things. This one was from Goldmann in 1940. And so this is an image taken from his system. You see this is for looking at the eye, and this image is actually a section through the eye. It's what in confocal we call an XZ image. And in OCT they call it, what do they call it? Anyone remember what this is called? A Y-scan or something. But anyway, everyone calls these things different things in different areas. But to me this is an amazing image, that you can see the cornea and the lens of the eye. I think that if you showed this to most people working in the OCT area nowadays, they'd think this was done with OCT. But it's not. It's done with this system, which is basically a sort of confocal system.
But it's actually what we sometimes now call semi-confocal, because instead of using a pinhole, it uses a slit. And so what we do is we illuminate the sample with a line of light, or actually a sheet of light. It's actually really also what's now called SPIM, the selective plane illumination microscope, or a sheet or light sheet microscope; they all become the same in the end. So you illuminate with this sheet of light, or effectively a line of light, which is where it cuts the focal plane. And then this line is imaged onto a slit here. And the slit acts like the confocal pinhole and does the optical sectioning in the same way as the pinhole, though not quite so efficiently, and I'll show some example of that later on. So it's not as good, but the big advantage is you image a whole line at a time. The big disadvantage of the confocal microscope with a pinhole is that we have to look at every point one after the other, so it takes some time to do that. If you do it this way, with a line of light and a slit, you get a whole image at one time.

But you've still got this problem: how do you record this, again before computers, before CCDs or anything like that? So actually he writes this image onto film, and the film rotates under this slit. So we write out this XZ image onto the film as we scan in the Z direction. So that's, to me, a very nice paper that describes this. It does actually give a lot of details, though the paper's not really about the instrument; most of the paper is about the medicine. There are lots of pictures of diseased eyes and all sorts of things in this paper.

But it does have another interesting feature. I'm not sure whether this was the first place this really came up, but you'll notice another aspect of this system, and that is that we illuminate along this line and we detect along this axis. So the two axes, the illumination and the detection, are separated; they're at 45 degrees in this case. And what this does, this also helps with the optical sectioning, because it means that in order to get an image, of course, you have to both illuminate and you also have to detect. So this means that you're illuminating along this axis, you're detecting along this axis, so you will only pick up a signal where these two axes cross. So as well as this confocal or semi-confocal optical sectioning, we also get another optical sectioning property which arises from this offset arrangement. And this offset arrangement, I guess, really dates back to the original ultramicroscope that was developed in about 1910. So it is a very old idea, but it's the basic principle of the slit lamp that optometrists use to look at your eye.

Okay, so that was Goldmann. The other one I wanted to show was this one from 1942. This is a Japanese paper, in Japanese, by Koana. And you can see from these diagrams the similarity with the previous ones I've shown. This is the pinhole, so this is also a confocal system. But it was a confocal microphotometer system. What he wanted to do was to measure the strength of this light accurately, and he realized that if he put this pinhole here, he could get rid of stray light, just as I described that the confocal pinhole gets rid of stray light. But from what I understand, this paper doesn't really describe a complete microscope where we're scanning and actually producing an image; it's just looking at isolated points and making measurements. So 1942, Koana.
Now I'm going to jump to a bit later than Minsky: some papers by Mojmír Petráň from Czechoslovakia. I met him just last year, so he's still active, although he must be even older than me. And anyway, this is his system. What he realized was that another way of getting around this problem of speed, that in a confocal microscope you illuminate one point at a time, which is very slow, is to multiplex: why don't we illuminate lots of points all the time? Have lots of pinholes and get an image much quicker. So that's what he did. So the illumination spots are arranged as holes here on a disc which spins round, and this same disc is used as the confocal pinhole on the way out. So the light comes in, it goes to the sample, it goes back through the disc, and you see here is an eyepiece. So you actually see a confocal image with this system, with your eye; you don't need a computer to store it or anything like that. So a very nice system that was actually originally commercialized by Petráň himself, but now there are a few companies that make these systems.

Right. So I got into confocal microscopes when I was at Oxford. I moved to Oxford in 1974 from Cambridge, and we started building this confocal microscope. And so this was actually our first laser scanning microscope. You can see it's quite a simple thing really. It's just a helium-neon laser, two objective lenses; this is a transmission system. We used a photodiode on this first one as a detector, and the sample was actually mechanically scanned through the focal spot. And to this day a lot of people still use this stage scanning. It's particularly good if you want to make some accurate measurements, because it means that the optical system is completely unchanging as you scan, right? If you use galvo mirrors, you never know what's going to happen. So that was our system. This is actually the very first image we got with this system, in 1975. It's a test chart. And you can see it looks like an old-fashioned television, and that's because it was an old-fashioned television; that's how we used to display the image, and then we photographed from the television. We were actually also developing a beam scanning system in the same year, and this is an example of that. So these pictures are all taken from this article in the Journal of the Royal Microscopical Society, and it gives a sort of history of our early work on confocal. I wrote this article, actually, when I left Oxford to go to Australia, summing up things.

Right, so... What happened? Ah, an update: Adobe Flash Player is... Perhaps I should have turned off the wireless. Right. Oh, it seems... Okay, so... This is me when I was younger. And the other members of our group: our professor was Rudy Kompfner, who was famous for inventing the travelling wave tube, amongst other things. And so he was the guy who started this work. And with us here were two students. This is Amar Choudhury from India; he was the first PhD student to work on our confocal microscope project. And here you can see, actually, we have this paper, it's 1977, Image Formation in the Scanning Microscope, where we derived the imaging performance of the confocal. And as I say here, this was actually the first paper that, I believe, ever used this term, confocal microscope. The other guy, Peter Hale here, was working on a completely different project, to do with optical fibres, which was another thing we were working on in those days.
And the remaining person of the group you can't see, because he was taking the photograph; that was Julian Gannaway, and I'll show you something about him in a minute.

So a lot of the stuff we did when I was in Oxford was in confocal reflection, not fluorescence. And part of the reason for that was it's much cheaper, because you don't need much of a laser. Whereas for confocal fluorescence, remember, this was actually before the days of air-cooled argon lasers, so you had to have a big gas laser, a water-cooled laser, if you wanted an argon laser in those days. But anyway, these are all confocal reflection images, all taken from the time when I was in Oxford. So this is a stereo pair of a pollen grain; if you cross your eyes, some people can make that fuse to produce a nice 3D image. This is a colour confocal image. This is not pseudo-colour; this is taken using three lasers, three pinholes. And you have to, of course, align all that. So you have to get a research student, lock him in a lab for a week, and eventually he can get it all lined up. But this shows, you see, it's actually imaging inside, under the surface of a leaf. Normally, if you put a leaf in an ordinary microscope, all you see is reflection from the surface. And these green things are chloroplasts. So this is the internal structure of the leaf in its real colour. This is a brain sample, all these neurons. This is another interesting one. This is a cell that's just in the process of dividing, and these are microtubules that have been labelled with gold nanoparticles, 15 nanometre gold nanoparticles. So this is quite a long time ago; around the late 80s we were doing these experiments. We went right down to actually 5 nanometre gold nanoparticles. So they were conjugated, in this case, to microtubules. I might add, of course, that nanoparticles have got the great advantage over fluorescent probes that they don't bleach, so you can look at them forever.

Right, now, so I mentioned that our first system didn't have a computer. Eventually we did get a computer. And so this, I mentioned this student last time, Ingemar Cox. He was actually a graduate in computer science, and his PhD project was to hook up a computer to our microscope. And so these are some of his results. So this paper in 1983 describes, I think, the first confocal microscope with a computer, showing how you can get optical sectioning and all these things.

Now another thing that happened when I was in Oxford was we were involved with commercializing this system. And so in 1982 we set up a company called Oxford Optoelectronics. The main market we were looking at was really the semiconductor materials type market. And so this microscope was developed. We later sold on the rights to another company and it became this Lasersharp in 1984. Eventually Lasersharp was bought by Bio-Rad, and in 1987 Bio-Rad brought out this MRC-500, which I have to admit was not our design, right? So this is a beam scanning system; ours was a stage scanning system at that time. But 1987 was when biologists really found out about confocal. And it was all because of adverts like this one, which show fruit fly chromosomes. So this is with and without the pinhole, and you see how much better it looks in the confocal image. So that's when it all started.

Now I'm going to carry on. This is now in my Australian days. So I was at the University of Sydney, in the physics department, for 15 years.
And so this was a paper that was published... Min Gu was my postdoc when I was in Sydney, before he went off and did things on his own. And this was a paper about penetration into a scattering medium, trying to show how you get improved penetration into a scattering medium using the confocal effect. And so what you have to do is optimize the size of the pinhole and so on; you can read more about that in this old Optics Letters paper. The student who did this, Tony Tannous, was from Lebanon, and he went back to Lebanon afterwards. In fact I got in touch with him, for the first time in a very long time, just last year. So he's still active.

Okay, right. So I think I've put over the impression that the confocal microscope is great; it does a lot of good for you. And certainly that must be true, otherwise they wouldn't have sold tens of thousands of these. But it still does have some limitations, and I've listed these in some detail here. I'm not going to go all through this, but you see it's not fast enough, it's too big, it costs too much, and so on and so on. The other problem is resolution, and how to improve the penetration. I've said a bit about the speed already, the fact that it's slow because we're illuminating one spot at a time. So how do you get around that? You can either use the spinning disc, line illumination, or structured illumination; these are some of the ways around that. Of course nowadays we've got all these super-resolution techniques like STED and localization microscopy. I'm not really going to say very much about those, because Rainer Heintzmann and Alberto Diaspro can say something about that.

Right, now, what other methods are there that you can use to get 3D images? Well, you can do purely digital type methods, digital deconvolution. In the early 90s there were a number of companies making systems that they called, I think one of the companies used as their trade name, 'digital confocal'. By deconvolution you can get something similar to confocal, though I think probably not as good, because eventually confocal won and the digital deconvolution really went away. Then there are things like OCT, which I'm sure you know about, but there's another type of microscope called a coherence probe microscope, sometimes called full-field OCT. This is another way of getting 3D images. Then there's multi-photon. We had an introduction to two-photon absorption yesterday from Nicoletta, and I'm going to say quite a lot more about that in a minute. So this leads especially to the two-photon fluorescence microscope and the second harmonic generation microscope. And then structured illumination is another method.

I'll say something first of all about structured illumination. Really this was first introduced by Lukosz, again a long time ago, 1963. It's very difficult to have a new idea, because all these smart guys back then invented everything you can possibly think of. So Lukosz's idea was that you would actually project some sort of fringe pattern onto your sample, so your sample is illuminated with a pattern of fringes. So certainly not Köhler illumination, where you're trying to get very uniform illumination; in fact the opposite, you're trying to get a lot of structure in the illumination. And this has the effect of basically shifting the spatial frequencies. It's like a Moiré effect that takes the high spatial frequencies and shifts them to low spatial frequencies.
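Just to make that frequency-shifting step concrete (this is the standard way of seeing the Moiré argument, in my own notation rather than anything on the slides): if the illumination is a fringe pattern of frequency f_ill and the object contains a frequency f_obj that is too high for the lens to pass, the product contains

\[
\cos(2\pi f_{\mathrm{obj}} x)\,\cos(2\pi f_{\mathrm{ill}} x)
= \tfrac{1}{2}\cos\!\big(2\pi (f_{\mathrm{obj}}-f_{\mathrm{ill}})x\big)
+ \tfrac{1}{2}\cos\!\big(2\pi (f_{\mathrm{obj}}+f_{\mathrm{ill}})x\big),
\]

and the difference frequency f_obj minus f_ill can fall inside the passband of the microscope even though f_obj itself does not.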
And then you transmit the low spatial frequencies through the microscope, and then you've got to put them back where they came from, which you do by using another grating. So this was Lukosz's idea. Nowadays this second step, the reconstruction stage, we don't normally do optically; we would do that digitally. So in a normal structured illumination microscope now you wouldn't have this second grating, just the first one. And in Lukosz's paper this is a bright field system, not a fluorescence system. Although, you see, I've actually put this quote here: the method may be used with coherent, partially coherent or incoherent illumination. So I think he was of the opinion you could do this with fluorescence as well.

Okay, now, I've mentioned confocal with a pinhole, I've mentioned using a slit instead of a pinhole, and I've mentioned using an array of spots, an array of pinholes. You never get anything for nothing in this world. So, you know, the fact that they're faster means you throw something away, and what you throw away is optical sectioning. This curve here is the axial response in a confocal microscope. We often measure this; it's a very simple experiment to do. You set up a confocal imaging system with a pinhole (it doesn't even need to be a scanning system), you put in as your sample just a plane mirror, and then you scan the mirror through the focus and the signal will fall off in this way; it comes to a peak when the mirror is in the focal position. So it's a very easy experiment to do, that one. But what you find is, if you replace the pinhole by a slit, then it gets broader and the side lobes get stronger, so the optical sectioning is not as good. And this one is for an array of pinholes: you see that here it basically doesn't decay so far; it decays a little and then it plateaus. I mention it here specifically: in confocal, this optical sectioning decays as one over z squared; with a slit, line illumination and a slit, it decays as one over z. And you all remember from mathematics that if you integrate one over z out to infinity, it diverges, whereas if you integrate one over z squared, it doesn't. So line illumination with a slit is limited in how thick the sample can be, for that reason.

Right, now, I also mentioned earlier, from that Goldmann paper, this off-axis arrangement. And one way of implementing that is by splitting the aperture of your objective into two semicircular regions. You see here there's actually a strip at the centre which has got a width of two little d, so they're not quite semicircles; they're sometimes called D-shaped apertures. And the idea is that you illuminate through one and you detect through the other. And so now you can see that these two axes are not coincident anymore, and we're going to get this additional optical sectioning that comes about from this effect. And this is demonstrated in this diagram here, where you can see that for normal confocal you get a slope of minus two, an inverse square law; it goes off as one over z squared, as I described. With this system, if this d is big enough to stop crosstalk between the two apertures, we get a slope of minus three; it goes as one over z cubed. So we get this improved optical sectioning.
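A quick way to see why those decay rates matter for thick samples (this is my gloss, not something from the slides): the total out-of-focus background picked up from a thick, uniformly fluorescent specimen is roughly the integral of the axial response over depth, and

\[
\int_{z_0}^{\infty}\frac{dz}{z}=\infty,\qquad
\int_{z_0}^{\infty}\frac{dz}{z^{2}}=\frac{1}{z_0},\qquad
\int_{z_0}^{\infty}\frac{dz}{z^{3}}=\frac{1}{2z_0^{2}},
\]

so the one-over-z decay of the slit (and even more so the plateau of the pinhole array) lets the background grow without bound as the sample gets thicker, while the one-over-z-squared confocal case and the one-over-z-cubed divided-aperture case stay bounded.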
Now I'm going to go on to two-photon. So the basic principle of two-photon microscopy is, well, Nicoletta mentioned it yesterday, that the absorption is proportional to the square of the intensity. And in a two-photon fluorescence microscope, the signal you pick up is also proportional to the square of the illumination intensity. So that has two effects. Firstly, it sharpens up the point spread function, so you get better resolution. That is true, but unfortunately, if you're doing fluorescence with a particular dye, you're using a longer wavelength anyway. So you're getting something a bit better than something that's not so good; overall, you don't get better resolution with a two-photon microscope for the same dye. But more importantly, perhaps, you get optical sectioning. Because of the squaring of the point spread function, you get optical sectioning, but you don't need a pinhole. So that's very nice. It means you can get a much more efficient system, which you need, of course, because two-photon absorption is quite a weak process anyway.

Another thing that's important: the reason why we can get these nonlinear effects is because we're focusing the light to a very concentrated spot with a microscope objective, so we can get a very high power density at the focus. Because of this squaring, you see, if you double the focused intensity, you're going to get four times the signal. So you want to compress the light as much as possible in space, but also in time. For a given amount of power, if you use a pulsed laser, you're going to get more signal than if you don't use a pulsed laser.

So we started working on two-photon, on nonlinear microscopes, again when I was in Oxford. And we had this paper that you see, which I published with my professor, Rudy Kompfner, in 1978, where we say, and I've printed it out here: in the scanning optical microscope, nonlinear interactions are expected to occur between the object and the highly focused beam of light. So using a microscope objective is what allows you to do this. And then we go on to say that nonlinear interactions include generation of sum frequencies, Raman scattering, two-photon fluorescence and others. So we suggested that you could do all of these things, but actually the particular method that we were really working on experimentally at that time was second harmonic generation. And we had a paper in the same year, 1978, which shows some images of second harmonic generation from a crystal. And what's more to the point, it demonstrates the optical sectioning, because here you see these different focus positions give completely different images.

Yeah, so that was multi-photon. This is just a pretty picture I thought you might like to see. This is an example of skin, which has been labelled with three things. Sorry, it's labelled with two fluorescent labels, and the blue is actually from second harmonic generation. You get very strong second harmonic generation from collagen, which is in skin. So this was from when I was in Australia. When I was in Australia, as well as being in the School of Physics, I was also with the Australian Key Centre for Microscopy, and this work was done as part of that Centre. This is another one I like. These are second harmonic and third harmonic pictures of my own arm. These were taken at the National Taiwan University Hospital in Taiwan. And this is a series of pictures starting from the top and focusing down into the skin. The green you see is second harmonic generation from collagen, and the purplish colour is third harmonic generation. There's no collagen in the outer layer of the skin.
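Just to put a rough number on that earlier point about compressing the light in time as well as space: for the same average power, the time-averaged squared intensity, and hence the two-photon signal, goes up by roughly the inverse duty cycle of the pulse train. A back-of-the-envelope sketch, with illustrative laser parameters of my own rather than anything quoted in the talk:

    # Two-photon signal scales as the time average of I^2. For (roughly rectangular)
    # pulses of duration tau at repetition rate f_rep, and the same average power,
    # <I^2>_pulsed / <I^2>_cw is about 1 / (f_rep * tau), the inverse duty cycle.
    f_rep = 80e6      # repetition rate in Hz (illustrative Ti:sapphire-like value)
    tau   = 100e-15   # pulse duration in seconds (illustrative)

    duty_cycle = f_rep * tau
    enhancement = 1.0 / duty_cycle
    print(f"duty cycle ~ {duty_cycle:.1e}, two-photon enhancement ~ {enhancement:.1e}")
    # About 1e5 here: the same average power gives roughly 100,000 times more
    # two-photon signal than a CW beam of that power would.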
Yeah, so this is showing the OTF for a confocal fluorescence microscope. And so with Ingemar Cox again, in this paper here, we calculated the OTF. And basically for confocal fluorescence the spatial frequency bandwidth is doubled compared with an ordinary fluorescence microscope. But you see that the response you get out here is seriously weak. The cut-off is increased from two to four, but in this band between three and four there's virtually nothing. And also, and this is true, these curves apply when you've got a very, very small pinhole. In practice, if you're doing fluorescence, you normally have to make the pinhole a bit bigger, because otherwise you don't get enough signal. And if you do that you find that again the OTF drops off, until eventually, if it becomes a very big pinhole, it's almost the same as if you didn't have a pinhole. So what this is pointing out is that although, as I've described, in principle the confocal microscope can give an improved resolution, in practice, especially for fluorescence where you're looking at very weak signals, you don't see this resolution improvement. And I'll go on to say how you can get around that in a minute.

Right, now I'm going to say something about super-resolution. So according to what we might call classical theory, the resolution of a microscope, the transfer function, the OTF or whatever, is band-limited and you can't change that. But it was Toraldo di Francia in 1952 who probably first put out the view that resolution itself is not the fundamental limit; the fundamental limit is more to do with the information that you can transmit through the optical system. And so around the 1960s Lukosz, whom I mentioned earlier, had several papers about super-resolution and how you could achieve it. So structured illumination was one of the methods, but he also proposed some other methods as well, where you basically, as I say here, increase the bandwidth by using different polarizations, different wavelengths, i.e. different colours, things like that. Then with my student Ingemar Cox we worked on a theory of super-resolution which really carried on from the work of Lukosz but included the effects of noise. So it included both this idea of information content, information capacity, and also Shannon-type noise, and we came up with this expression for the information capacity that you can transmit through an optical system. And so this is basically the bandwidth in a direction times the size of the object in that direction. So what this expression is really saying is that if you want to improve the resolution, to get super-resolution, you have to trade off something else. So you can trade off signal-to-noise ratio, time, colour, polarization or whatever. The system can only transmit so much information; it's just a case of how you use that information.

So as a result of that, this led me to come up with this sort of classification of different super-resolution schemes into different degrees of super-resolution. And I've labelled them here as you see: class 1A is the top one, class 3 is the lowest one. Class 3 I wouldn't really call super-resolution. This includes things like super-resolving filters, what are now often called superoscillations. These are methods where you improve the two-point response, but you don't improve the bandwidth of the system. Then class 2 are methods where you do increase the cut-off of the system, but you're still limited by the maximum numerical aperture of a lens.
So the methods of Lukosz would usually come into this category. Then class 1B are ones where you do indeed increase the cut-off of the system, but only by up to a factor of two. And these include structured illumination, confocal, and some other methods that I'm going to describe in a minute. And then class 1A are the ones like STED and, of course, localization microscopy, the ones that got the Nobel Prize, 2015 wasn't it, I think. For these ones the effective numerical aperture is unlimited; you can make it as big as you like. But what I wanted to point out here is that you can actually do pretty well using structured illumination or confocal. I like to think of confocal as being an example of structured illumination. Structured illumination is a very general term, which means you project some pattern onto the sample; in confocal we're using the special case of projecting a single spot.

Now, this is what it does to the transfer function. And so I start with this one. This is showing actually the 3D transfer function; I haven't said anything about 3D imaging until now. But this is what happens. In a coherent system, the spatial frequencies you can pick up lie on the surface of this cap of a sphere in spatial frequency space. For an incoherent system, you have to do the autocorrelation of this, and you end up with this thing. The transverse spatial frequency bandwidth is doubled. We get a fill-in of the 3D spatial frequencies, which means we can get a better 3D image. But we're still left with this missing cone, what's called the missing cone of spatial frequencies, that you can't pick up with this. In a confocal or structured illumination microscope, you have to do the convolution of the illumination and the detection responses, and you end up with this shape. So now the cut-off is doubled again; this is four times the width of this one. And in principle, if you could illuminate from different directions, which you can do, for example, using a 4Pi microscope, you could actually recover all the spatial frequencies within a sphere of this sort of size. So you can see that using effectively semi-classical type methods, there are no blinking dots or anything like that here, this is just pure optics, we've improved things from this cap of a sphere to this complete sphere of spatial frequencies. So you can see there's a huge improvement without going to any fancy techniques like STED or whatever.

Okay, now I'm going to say something about a couple of improvements to confocal that I've been working on over the last few years. The first was something we worked on when I was in Singapore, and it's still carrying on. It's a technique called focal modulation microscopy. The idea is that we're trying to improve the optical sectioning. And what we do is we take our laser and divide the beam of light from the laser into two parts. We frequency shift one relative to the other using some sort of modulator, and then we put these two beams separately into the microscope so they don't overlap. So just like the D-shaped apertures that I was mentioning earlier, again we've got different axes here, but it's not now for the illumination and the detection; it's for two illumination beams. So you will only get a signal... So what we do, sorry, is we beat these together. The intensity excites fluorescence from the sample.
We then detect the fluorescent light and we look for the beat frequency. So we will only get that beat frequency where these two axes cross. So it improves the optical sectioning of the confocal, basically. So that's the principle. Here it shows it done with acousto-optic modulators; you can also use electro-optic modulators or whatever. And this is an example. This is looking at chicken cartilage. This is a confocal image at a depth of 280 microns into chicken cartilage. But you see that this image is becoming blurred because of the background, because the optical sectioning is not rejecting all of the out-of-focus light. If we do focal modulation microscopy, on the other hand, this is the image. We get rid of this background and we can go deeper. Here it shows an image from, you can't see it very well with the lights on, but this is 600 microns deep into chicken cartilage. So that's one technique I wanted to show. That's still being worked on by Nanguang Chen at the National University of Singapore.

And yeah, what we found, and this was a bit of a surprise when we first realized it: I had two students who were calculating the imaging performance of this system. So I'll start from the beginning. This shows the point spread function in 3D for a confocal system. This is basically the square of what you'd get from a single lens, as given in Born and Wolf. This shows what you get for a conventional system with D-shaped apertures, and you see here it's made a lot broader. This is the transverse direction; this is a lot broader than that. This is because with the D-shaped apertures we're effectively only using half the objective, so it's going to be twice as broad. But this was with focal modulation microscopy: we find that actually the resolution is even better than in confocal. So this is quite an interesting effect. As well as getting improved optical sectioning, we also get improved resolution. And this is calculating this optical sectioning effect as we go out of focus. We measure this by what we call the integrated intensity, which is the area under the point spread function as you go out of focus. And this is showing how focal modulation microscopy decays as one over z cubed, better than you get with a confocal.

Right. And then the final thing I was going to talk about in this talk is what we sometimes call pixel reassignment. This is a method that's related to what's now called, the name that's been used by some people anyway is, image scanning microscopy. So this is a very neat thing. It all started, really, again back with my student Ingemar Cox. In 1982 he published this paper. I remember he came to me one day, he was building a confocal microscope as part of his PhD, and he said, how do I align it? How do I know when it's aligned? Now people who have actually done this themselves would know that what you do is you align it to get maximum signal, basically, right? So the pinhole is aligned to maximize the signal. However, what he found was that if you misalign the system, so here he's moving his pinhole sideways, the point spread function actually gets narrower. This was a very big surprise when we came up with this result. The signal is reduced; these curves have all been normalized. The signal is reduced, but the point spread function is narrower. But you get bigger side lobes; eventually the side lobes become huge. But an effect that we didn't show in this plot here is that the effective point spread function shifts sideways, right?
Now this is obviously true, because if you took the pinhole out altogether it would just behave like an ordinary microscope, right? So the resolution gets worse, right? And this is because all of these responses from these off-axis points are all moved sideways relative to the axis. So this is the principle: you illuminate with this spot, you detect from this spot, and the overall effective point spread function is the product of these two, which is this one. So you're really imaging this spot, not that spot. So this was one of these moments where you suddenly realize something that is quite important. It means that when you take your image as you're scanning, the information you get from different points in that detector plane is not referring to the same point of the object. So now that we use computers, you can put them all back where they came from and you end up with a better resolution. So this is what you do. These are all from these displaced points, and we shift these all back to where they should be and we just integrate up. And so this principle was proposed in this paper that I published in 1988, which was the year before I moved to Australia, actually.

And this figure shows the OTF for this system, what's now called image scanning microscopy, showing how it boosts up these high spatial frequencies. But plus, you're collecting all the light. There's no pinhole anymore. We're collecting all the light with a detector array, but we're putting all the light back where it should come from. So we get a much stronger signal as well as getting improved resolution. So I published this in 1988 and forgot all about it, and then eventually it was reinvented by Jörg Enderlein. He published it in Physical Review Letters. He did the experiment and showed that you could get this sharpening up of the point spread function using this method. So this has now become a pretty important thing, I think. There are a lot of groups around the world working on it, and there's also a commercial system from Zeiss; I'll show you that in a minute.

And this is some analysis you can find in this paper in Optics Letters, showing how the resolution changes as you change the size of the detector array, how the signal, the detection efficiency, increases, and how the peak intensity increases. And this was an amazing thing when we first realized it: you see this peak intensity of the point spread function goes above one. We're collecting all the light, but then we're squeezing it into a smaller spot, so it gets brighter. The point spread function is brighter than you can get in an ordinary microscope, which seems incredible. And this is showing how the point spread function changes as you change the size of that detector array. This is a more recent paper where we looked at the OTF. This is the unnormalized OTF, so this is basically the strength of the signal that you actually pick up; I think this is a good way of presenting the results. And on this set of curves here, the dashed curves are for a confocal system with different sizes of pinhole. You see that it starts off out here, but as the pinhole gets bigger you lose all the signal; eventually it ends up here. Whereas the solid curves are with this pixel reassignment, and actually in this case, as the array gets bigger, the resolution actually gets better. So this is for a very large array, it's this outer curve here. So as the array gets bigger, the detector gets bigger, after the pixel reassignment it gets better altogether, whereas with the confocal it gets worse and worse.
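Here is a minimal one-dimensional sketch of the pixel reassignment idea, using made-up Gaussian point spread functions rather than the real diffraction ones, just to show the mechanics: for a detector pixel at offset x_d, the effective point spread function is h_ill(x) times h_det(x minus x_d), which peaks near x_d/2, so each pixel's contribution is shifted back by x_d/2 before summing. All the numbers below are illustrative choices, not values from the papers mentioned.

    import numpy as np

    x = np.linspace(-5, 5, 2001)
    sigma = 1.0
    h = lambda u: np.exp(-u**2 / (2 * sigma**2))    # toy Gaussian PSF, same for illumination and detection

    detector_offsets = np.linspace(-2, 2, 21)       # detector pixel positions, in object-plane units

    open_pinhole = np.zeros_like(x)   # just summing all pixels, like one big detector
    reassigned   = np.zeros_like(x)   # shift each pixel's contribution by -x_d/2 first
    for xd in detector_offsets:
        psf_eff = h(x) * h(x - xd)                         # effective PSF seen by this pixel
        open_pinhole += psf_eff
        reassigned   += np.interp(x + xd / 2, x, psf_eff)  # evaluate at x + xd/2: peak moves back to 0

    def fwhm(profile):
        half = profile.max() / 2
        above = x[profile >= half]
        return above.max() - above.min()

    print("FWHM, ideal point-pinhole confocal:  ", fwhm(h(x) ** 2))
    print("FWHM, big open detector (plain sum): ", fwhm(open_pinhole))
    print("FWHM, after pixel reassignment:      ", fwhm(reassigned))

For these Gaussian toy PSFs the reassigned sum comes out as sharp as the ideal point-pinhole confocal while using all of the collected light, which is the basic point being made: the resolution of a small pinhole with the signal of a large one.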
Okay, so this is the Zeiss one. That's called the Airyscan. And I think they've sold quite a lot of these now; it's doing very well. I've got no commercial interest with these guys, although they are nice enough to mention my name every now and then in some of their things. So that's the Zeiss Airyscan.

And yeah, just to sum up, I've got a couple of things to sum up. There's a lot of controversy. I think Rainer Heintzmann might maybe say something about this, because I think he agrees with me a bit on this, that a lot of these super-resolution methods are based on a nonlinearity. So in things like PALM and STORM and STED there is always some sort of nonlinearity. And the fact that you can get an improvement in resolution from nonlinearity has been known for a very long time. Lithography, for example, is a very good example where we've known that for a very long time. Okay, so one of the conclusions I want you to take from this is that you should distinguish between true super-resolution and other methods. And this is just listing some of the things; maybe I've said enough really on this. So I think that's the last one for this talk. Any questions before I carry on to say something about phase contrast?

Thank you, sir, for the nice presentation. Regarding structured illumination, I wanted to ask if structured illumination means structuring the intensity, not the phase. I mean, sometimes we can structure the phase, like in ptychography, I don't know if you heard about it. Yeah, I know about it. It's the wave vector direction. And so, regarding the resolution, can we say that ptychography has the same limit as any other structured illumination technique, or can it go further? Yeah, I think they have the same limits. I think basically in ptychography, when you're doing this structured illumination, the structuring is done by a lens, basically, which has got a fixed aperture, so you can't beat a complete hemisphere or whatever with a lens. And the trick in ptychography is they take signals in the Fourier plane and then they do phase retrieval on them. Sure. So I don't know if you know, there was a very nice paper by Demetri Psaltis from EPFL, a couple of years ago now, where they built this system where they take the light from the sample, they look in the Fourier plane and they make a hologram of it. So they've now got the phase information as well as the intensity information. So once they know that, they know the modulus and phase, they can therefore refocus that light digitally and make a synthetic confocal. And they did this in this paper; it's very neat. And that shows a lot of the similarity between ptychography and some of these other techniques, I think.

So it wouldn't beat that resolution that you have shown, the hemispherical region; it will be just inside that region, the resolution. Sorry, with ptychography? Yeah, yes, yes. I think in principle with ptychography... Yeah, so again... Yeah, I think you probably... You can do ptychography with fluorescence, for example. I can't see why you shouldn't do that, can you? No, no, no, this is the direction of the illuminating light; with fluorescence it's always the same, you cannot play with it, I don't think so. Can we? Can we have fluorescent dark field? No, you can't. Okay, so you're not going to get that factor of two, in principle.
So I think you're left with a missing cone, probably. Thank you. Any questions? I might ask you a question then, since you seem to know about ptychography: do people do ptychography in a reflection direction as well as in the forward direction? I was eager to be the first to do this. Yes, they did; I found a paper, I could email it to you. Yeah, okay, we can talk later. Okay, so... Yeah, there's one there.

Thank you for your presentation. In confocal microscopy, as far as I know, people recommend that you overfill the objective aperture by some factor, 1.2 or 1.4. I wanted to ask if we get better resolution if we overfill and... Basically, for true confocal, for true confocal with a very small pinhole, right, you effectively square the point spread function. So that means you double the bandwidth, but the point spread function is only improved, narrowed, by a factor of the square root of 2. That's because the transfer function, as well as being doubled in bandwidth, also drops at the high frequencies to some degree. So with the different ways of measuring these things, you get a different degree of improvement. A bit confusing, but we can talk more later if you like.

Okay, now I'm going to carry on and talk about partially coherent imaging and the phase contrast microscope. I'm obviously not going to get very far with this because I've been too slow, but I've also lost my cursor again. Sorry, the last bit of time I've got, I can't carry on. What am I doing? Ah, I think it's to do with this. Ah, I've got it, I've got it. I think it's because of the AirPort again. Right, so let's, now I've found it.

Right, okay, so, yeah, I'm going to have to skip through some of this a bit quickly, and I think probably the more experimental parts and the instrumental parts are more interesting to you than the theory. Right, now, yeah, so this is worth saying. There are three main methods of phase contrast, three main classes, let's call them. One is based on a complex pupil function. So what I said is that if you've got a perfect system, you don't get phase contrast, right? So you've got to mess up the system in some way. One way is to introduce some complex function into the pupil function. The second method is to introduce an asymmetry into the system. And the third group of methods are interference methods, where you actually have a reference beam. Now, interference methods I'm not really going to say anything more about today, actually, because they normally come in another talk that I do. But these coherent methods, basically they do have some disadvantages, and they also have some advantages. So for example, the digital holographic microscope is a coherent method, right? So you don't get this improvement in resolution that you can get in an ordinary microscope by using oblique illumination. And so this is the advantage of using these partially coherent methods. You avoid speckle as well. But the question is how to get the information out and make sense of it.

Right, now, so this was this paper that I mentioned yesterday but didn't show. This is from Hopkins, 1953. This is where he develops the theory of partially coherent imaging in a microscope. And so these were these spatial frequencies, m and p and q, that I mentioned. And you see here he's got these summations. This is because it's a Fourier series, because he's looking at a periodic object. So a very important paper, this one, I would have to say.
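For reference, the partially coherent imaging result being referred to, written in the integral (non-periodic) form that comes up next rather than Hopkins' Fourier series, and in my own notation rather than whatever is on the slide:

\[
I(x)=\iint C(m;p)\,T(m)\,T^{*}(p)\,e^{2\pi i(m-p)x}\,dm\,dp,
\qquad
C(m;p)=\int S(\xi)\,P(\xi+m)\,P^{*}(\xi+p)\,d\xi,
\]

where T is the object spectrum, P the objective pupil, S the effective source (condenser) distribution, and C(m;p) is the transmission cross-coefficient that gets discussed in a moment.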
And then in our paper, the paper I published with Choudhury in 1977, Image Formation in the Scanning Microscope, what we wanted to do was to come up with the theory of the confocal microscope and compare it with a conventional microscope. So we obviously had to treat the conventional one too. But we changed these periodic objects to non-periodic ones, and so all these things, you see, are a Fourier transform rather than a Fourier series. And we came up with the transfer functions for the confocal as compared with the conventional system in this paper.

Right, now, I mentioned this before as well, about how in a partially coherent system you basically have to replace these two terms by this one. And this quantity is called the transmission cross-coefficient. So here you notice I've put these two as zero; m and p are both spatial frequencies in the x direction. And so what does it mean? It means this is actually given by the product of three circles: this is the condenser lens, this is the objective lens, and this is the objective lens again, its complex conjugate. So this transmission cross-coefficient is given by the area of overlap of three circles, where one circle might have a different radius. And if you calculate it for this particular case, where the condenser aperture is the same as the objective aperture, this is what this thing looks like. Sorry, so here m1 and m2 are what I called m and p before. You see the cut-off is this funny sort of shape.

And so I had a student, Shalin Mehta, who did quite a lot of work on trying to model these things, and I'll come on to that in a minute. This is back to the Choudhury paper again. This is showing how the cut-off of these systems varies as you change the value of this coherence ratio. Yeah, with Shalin, it turns out you can simplify things by introducing what we call central and difference coordinates. So we've got m1 and m2 and we look at the sum m1 plus m2 and the difference m1 minus m2, and similarly for the other coordinates. And if you introduce these new coordinates, the effect is to rotate this transmission cross-coefficient so that it looks like this now. And it turns out that this thing here has got a lot of meaning in terms of these phase-space type quantities like the Wigner distribution function. So this is the way we interpret the meaning of this thing.

Right, now, on the other hand, for most of what I'm going to say, this full partially coherent treatment is just too complicated to do. So we want to look at some special cases which are simpler to think about. And it turns out there are two very important special cases that are much simpler. One is, you see, now I'm down to just one quantity. This is just in the x direction, but I'm also putting p equal to zero. So this thing is what I call the weak object transfer function. This is going to apply for a weak object, and so everything becomes much simpler if you look at weak objects. The other one is this case, which I call the phase gradient transfer function, the PGTF, which is this quantity where p equals m. And this applies for an object which is varying slowly in space. So it turns out this one would apply if the first Born approximation is satisfied, and this one would apply if the Rytov approximation is satisfied. So, oh yeah, I'll skip this one and skip this one; this is just deriving why this weak phase object approach works. I'll skip this one as well.
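Since the weak object transfer function is just this overlap of circles, it is easy to compute numerically. Here is a minimal sketch (with illustrative grid size, defocus value and coherence ratio of my own, not the ones on the slides) that evaluates C(m; 0) with a paraxial defocus phase in the objective pupil. It shows the point made next: the imaginary part, which is what carries the phase information, vanishes at zero defocus and flips sign with the sign of the defocus.

    import numpy as np

    def wotf(m, S=0.6, u=4.0, n=500):
        # pupil-plane grid in units of the objective pupil radius
        xi = np.linspace(-2.2, 2.2, n)
        X, Y = np.meshgrid(xi, xi)
        R2 = X**2 + Y**2
        source  = (R2 <= S**2).astype(float)                # condenser (source) pupil, radius S
        pupil   = (R2 <= 1.0) * np.exp(1j * u * R2 / 2)     # defocused objective pupil
        R2s     = (X + m)**2 + Y**2
        pupil_m = (R2s <= 1.0) * np.exp(1j * u * R2s / 2)   # the same pupil, shifted by m
        dA = (xi[1] - xi[0])**2
        return np.sum(source * pupil_m * np.conj(pupil)) * dA

    for m in (0.0, 0.5, 1.0, 1.5):
        c = wotf(m)
        print(f"m = {m:3.1f}:  Re = {c.real: .3f}   Im = {c.imag: .3f}")
    # With defocus u = 0 the imaginary part is zero for every m (no phase contrast);
    # with defocus it is not, and it flips sign with the sign of u, which is why
    # subtracting an overfocused and an underfocused image isolates the phase part.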
Let's go on to this. So you calculate the weak object transfer function for different values of this coherence ratio, and it becomes complex. You see these are some examples: this is a very small condenser aperture, this is a very large condenser aperture. And what we want is for this imaginary part to have a nice smooth shape, because we can use this imaginary part to look at the imaginary part of the object, effectively, and therefore get the phase information. That's the idea. But we find, if you look at these curves, this becomes a nice shape, this becomes a nice shape, but in this case here eventually it becomes very, very weak; eventually, if I put this equal to one, it would vanish. So if you choose a coherence ratio around 0.5, 0.6, 0.7, you can get an imaginary part which is quite strong and quite well behaved.

So what you might imagine doing then is taking two images: you defocus one way, you defocus the other way, and subtract the two. The idea then is that the ordinary bright field image would cancel out and you'd just be left with the cross term, with a transfer function given by these sorts of curves, like that. So you see 0.6 is about optimum for this. And then we noticed, amazingly, that in this region here these are all basically parabolas, right? So digitally we can get rid of that parabola by applying an inverse Laplacian operator to the data, which has the effect of dividing by this parabola. And we end up getting something like this. So after we've applied this inverse Laplacian, this is our transfer function for the phase information that we get, and you see how it depends on the coherence ratio S. So around 0.5 to 0.7, around this region, you're getting a flat response; it all works very nicely.

So this is applying this method in practice, and this is an example looking at an optical fibre. Sorry, so this is the method I'm talking about, the weak object transfer function method, and you see we recover this parabolic shape of the refractive index very nicely. And this is applying it to some biological samples. These ones, TIE means transport of intensity equation; this is another method that I'm going to talk about in a minute.

Now, dark field microscopy. Dark field microscopy is not really phase contrast, but it's got a lot of similarity, and in fact it does give some information from phase. In dark field microscopy all you do is you put in an annular mask on the illumination side and then an aperture stop, so that the direct light doesn't get through. And well, this is what the transmission cross-coefficient looks like for a dark field system. And in particular, this is very funny, you don't get anything in this region here. The sum frequencies are not imaged; you only image the difference frequencies. What this also means is that if you've got a single spatial frequency, you don't get an image, and so you only get complicated mixtures of spatial frequencies, which is why the images are rather difficult to interpret. You very often get, for example, doublings of frequency.

But the Zernike phase contrast method is very similar. In the Zernike phase contrast method we replace this aperture stop by a phase ring. So now all we do is we change the phase of the direct light relative to the other light, and so it brings it back into view again. I'm not going to go through the maths, but this is basically how it's working. We've got this; this is our phase information that we're trying to see.
And you see that the point is we can't see this, because effectively the length of this vector is the same as the length of that vector. So what you do in Zernike phase contrast is you rotate that round until it's like this, relative to the other, and now the change in length of this is much bigger, and therefore you can see the phase contrast. So that's roughly how it works. This is what it does in terms of the transfer function. Again, you'll see that it's got, this is the phase contrast part, and these are sort of bits we don't want, like artifacts on top. I'll miss out that one. Right, yeah, okay, I don't know, this one has got displaced; this is talking about defocus, which I've already spoken about.

Now I'm going to go on to the transport of intensity equation. Yeah, so this is related to defocus. It was shown, you know, and this at first seems very, very surprising, that you can get the phase information by looking at the intensity of the light as it propagates. And so this is what's called the transport of intensity equation. It says that dI/dz, the change of intensity along the axis, gives you information about the phase of the object, or the phase of the wavefront. And so if you can measure this, you can actually recover that. This equation, well, in this form it was first derived by Teague, but it's very close to the eikonal equation, which is much, much older. And if you expand this divergence you can write it like this; often this second term is small and we can forget about it, and then we're just left with a Laplacian type term.

This was a nice picture taken by Miguel Poros, you know, Miguel Poros, who was visiting me, and we were talking about the transport of intensity equation, and this was just a beautiful scene to see. But you look at the pattern on this boat: this is the sun shining on the water, and you see these patterns here. What I wanted to show from this is that you see intensity changes as a result of the changes in height of the water. So if you think of the water as being like a wavefront, it's like converting phase information into intensity information. So this is exactly what we try to do with the transport of intensity equation: to start off from this intensity information and recover the shape of the surface of the water that's producing it.

Right, so these are some examples. The first experimental ones, I think, were done by Norbert Streibl in 1984; this is an example he gave, but he didn't do the full reconstruction. As far as I know, the first reconstructions were done by the group of Keith Nugent, and so this is an example where they've recovered the phase of a cheek cell using this method. Now, in order to do this reconstruction, normally we have to measure the intensity in three planes. At a certain time we had a project, when I was in Singapore, where we came up with a way of doing this in just a single shot using colour. So you use a colour camera, you pick up three images, the three colours are used for the three planes, and then you can get images of the phase information just from this. It is of course assuming that the object is achromatic; if the object could do different things with different colours, it wouldn't work.
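As a concrete illustration of how this reconstruction is usually done, here is a minimal sketch of the standard FFT-based solver for the transport of intensity equation, dI/dz = -(lambda/(2*pi)) div(I grad(phi)), in the simplified case where the intensity is nearly uniform so the equation reduces to a Poisson equation for phi. The function name and parameters are illustrative, and this is the generic textbook approach, not necessarily the algorithm used in the work just mentioned.

    import numpy as np

    def tie_phase(I_plus, I_minus, dz, wavelength, pixel, I0=1.0, eps=1e-6):
        """Recover phase from two defocused images taken a distance 2*dz apart."""
        dIdz = (I_plus - I_minus) / (2 * dz)             # finite-difference estimate of dI/dz
        ny, nx = dIdz.shape
        kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel)
        ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel)
        KX, KY = np.meshgrid(kx, ky)
        k2 = KX**2 + KY**2
        k2[0, 0] = eps                                    # avoid division by zero; the DC term is unknown anyway
        rhs = -(2 * np.pi / wavelength) * dIdz / I0       # laplacian(phi) = rhs for nearly uniform intensity I0
        phi = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (-k2)))
        return phi - phi.mean()                           # phase is only defined up to a constant

In practice the division by k squared is regularised, and the full div(I grad(phi)) form is kept when the intensity varies strongly, but this is the core step that the defocus-based phase methods rely on.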
This is another method I wanted to talk about, differential phase contrast, which again has quite a long history; we did a lot on this when I was in Oxford. In the scanning type of system you illuminate your sample with a spot of light, just as we do in confocal, but you look in the back focal plane, where you would do ptychography, and you place a split detector there and take two signals. So it is almost like ptychography in a way, except that the detector has just two elements. We subtract these two signals, and that gives you information about the phase gradient of the sample. This is an example, a cheek cell: you just subtract this signal from that one and you get this beautiful image of the phase information. You can do it in reflection as well; this is looking at an integrated circuit. And this is pointing out that the method is actually very, very sensitive: this is an image of a single monolayer on a substrate, one of these Langmuir-Blodgett films, so it is only a few nanometers thick, but we can detect it with this method. This is what the phase gradient transfer function looks like; it basically gives the transfer function for these low spatial frequencies. We also found that you can modify the system by changing the geometry of the detector, so we made, for example, an annular split detector, and then you get these nice images, which show up different properties of the sample.
That was all back when I was in Oxford. Then, a lot later, my student Shalin Mehta said, well, why don't we do this in an ordinary microscope, and he did some experiments on this. The idea is that you need to reverse all the rays, so you would need this source here, but of course you can't have a negative source, so what you have to do is take one image, take another image, and subtract one from the other. He called this asymmetric illumination differential phase contrast. Of course, if you want both d(phi)/dx and d(phi)/dy, you need four images in order to do the differences in the two directions. He did lots of experiments on this, getting images from biological samples and so on.
Then we came to the problem: if you know d(phi)/dx and you know d(phi)/dy, what is phi, and how do you solve for it? The simplest thing you might think of doing is just starting from one gradient and integrating to get phi, but the trouble is that doesn't work, because you have a constant of integration, you don't know what it is, and it varies with y. So we came up with a method where we can combine the information in x and y, and this was basically our algorithm; I'll leave you to think about that, or if you are interested you can come and talk to me more about it. It seems to work very well. This is an example of some cells: d(phi)/dx, d(phi)/dy, and then the recovered phase.
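The actual algorithm is on the slide, but as a flavour of how the x and y gradient images can be combined in one step without the integration-constant problem, here is a sketch of a standard Fourier-domain least-squares integration, often attributed to Frankot and Chellappa. This is not claimed to be the algorithm used in the talk, only an illustration of the same idea; the function name and parameters are assumptions.

```python
import numpy as np

def integrate_gradients(gx, gy, pixel_size=1.0, eps=1e-9):
    """Least-squares integration of a measured gradient field in the Fourier domain.

    gx, gy : estimates of d(phi)/dx and d(phi)/dy (same shape).
    Returns phi up to an arbitrary additive constant (set to zero mean here).
    """
    ny, nx = gx.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    kx, ky = 2j * np.pi * FX, 2j * np.pi * FY   # Fourier symbols of d/dx and d/dy

    GX, GY = np.fft.fft2(gx), np.fft.fft2(gy)

    # Least-squares solution: Phi = (kx* GX + ky* GY) / (|kx|^2 + |ky|^2).
    denom = np.abs(kx) ** 2 + np.abs(ky) ** 2
    denom[0, 0] = np.inf                         # the DC term (the unknown constant) is arbitrary
    phi = np.fft.ifft2((np.conj(kx) * GX + np.conj(ky) * GY) / (denom + eps)).real
    return phi - phi.mean()
```

Because both gradients enter a single Fourier-domain solution, no row-by-row constant of integration ever appears, which is exactly the difficulty with naive line integration described above.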
So DPC has been around quite a long time. It was first invented for the electron microscope; I think the first paper was this one by Dekkers and de Lang. Then for the optical case there was a paper by Stuart, which was actually just an abstract, a paper at a conference, and a patent was taken out by Ellis. We didn't actually know about these when we published our paper in 1984.
The final method I was going to talk about is Nomarski DIC. I see I'm supposed to stop, so I'll be very quick and just show something about it. Nomarski, as you probably know, is a very standard method now which uses Wollaston prisms to split the light, so it is basically a shearing interferometer. Let's shift that, I'll miss that out. This is pointing out one of the disadvantages of Nomarski: you sometimes get artifacts caused by birefringence in the sample, because the method is based on polarization. These are regions which are birefringent, and they show up in the DIC image as a phase, but they are not really a phase. I'll skip that, I'll skip that, I'll skip that.
This is another sort of approach. DIC is effectively an interferometer, so you can do effectively the same phase stepping that you do in normal interferometry: you take three or four images, do some processing on them, and you can get rid of the non-linearities in the original DIC image. That is a very neat approach, and it gives you a measurement of the phase gradient, from which you then still have to recover the phase. This is actually demonstrating what happens if you try just integrating that: you get all this streaking, which comes from the constant of integration.
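For completeness, here is a minimal sketch of the standard four-step phase-shifting reconstruction that this kind of DIC processing is based on. It assumes the four DIC images were recorded with bias phases of 0, 90, 180 and 270 degrees and returns the wrapped phase difference across the shear, which then still has to be integrated to get the phase itself; the function name and the exact bias values are assumptions for illustration, not the specific processing used for the slides.

```python
import numpy as np

def phase_step_dic(I0, I90, I180, I270):
    """Four-bucket phase-shifting reconstruction for bias-modulated DIC.

    I0..I270 : images taken with bias phases of 0, 90, 180 and 270 degrees.
    Returns the wrapped phase difference across the shear direction, in radians.
    """
    # Standard four-step formula: these combinations cancel the background
    # and the modulation amplitude, leaving sine- and cosine-like terms.
    num = I270.astype(float) - I90.astype(float)
    den = I0.astype(float) - I180.astype(float)
    return np.arctan2(num, den)
```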
Oh, and this was a neat thing we were doing when I was in Singapore. This is showing the phase-stepping DIC, getting a phase gradient image, and this is the final phase. We found that you can actually use transport of intensity on the DIC images, which was a bit of a surprise to us at first, but it seems to work very well. And this led to a rather amazing experiment where we did both on the same sample: we take four images and do the phase-shifting DIC on them and get this image, and we take three defocus measurements and do TIE, the transport of intensity equation, and recover the phase and get this image. So these are two reconstructions of the same object, performed on different data, because they come from these three images and these four images, but you see amazingly good similarity between the two reconstructions. And this is showing quantitative phase information, again looking at an optical fiber and seeing the parabolic phase structure.
Okay, I think this is the very last slide. I've gone through this very quickly, so I'm sorry about that, but this is just comparing the different methods and their advantages and disadvantages. I guess I could sum up by saying that no method has all advantages and no disadvantages, so you have to pick the method according to what you want to do. So I'll stop at that point, if anyone wants to ask something; I don't know whether there is time for a very short question.
I'm sorry, I want to keep it very short, and I don't know if it's a stupid question. This year I was at a presentation by Professor Gert Hausler from Germany; perhaps you know him. He said that when we want to see the details of an image on our computer, we had better take the derivative of the image in one or two directions first and then look at it, and he supported this claim by showing the FFTs of the two images: the image itself has an FFT with a cluster of, say, this size, and the derivative has a cluster about twice as big. So am I wrong, or is that just an illusion of improving the details? It is not really improving the details, is it?
Yes, I think one thing you might have noticed in some of the pictures I showed is that if you look at the phase gradient you see a lot more by eye than if you look at the phase, because differentiation boosts the highest frequencies, since it multiplies the spectrum by a ramp. There was a very recent paper by another of my former students, Kieran Larkin, about this, about visualizing DIC images; you might like to look at that paper, it's quite neat. But I think it is because of this effect that you are boosting the high frequencies, just to the eye, not in fact the resolution, just to the eye. I don't know exactly what Gert said, but I expect he wasn't wrong. Thank you.
So it's a little bit late, but Professor Colin Shepherd is with us for this whole period, so please, if you have questions you can ask him. Yes, please come and talk with me, I always like people talking to me. And now I have to remind you about the experimental part. Again, groups four, five and six at two o'clock in Adriatico, so you had better take lunch there to be on time. Groups one, two and three are in the multidisciplinary lab; there are two hands-on experiments written in the program, plus an additional part with Abbe diffraction, and for the UV experiment the two groups are split in half to keep the process consistent. For additional things ask Umberto, because he will explain. And please, about the poster session: it will start at four o'clock, so be sure you place your posters in the hall before leaving for the experimental part, during the lunchtime, and groups five and six, please walk over ten minutes early to be here for the poster session. And the last thing: prepare a ten-minute discussion for the evaluators, because we have a lot of things to do and we have to speed things up. And now Umberto will give you some new details, thank you. Ten minutes before, here in the lobby. The posters that still have to be printed will be printed before four, and then you can come here; Federica will bring the posters here before four, so come and collect them from the table, and then you can put them up for the poster session.