With our own eyes we cannot resolve details smaller than roughly 100 microns. So we need instruments, and one of the most important instruments is the optical microscope. With optical microscopy we can resolve details down to about 200 nanometers, which is the diffraction limit. If we apply a super-resolution technique we can go even further, so we can see something on the order of the size of a virus, in the range of 20 to 50 nanometers. If we want to see something even smaller, to study the structure of proteins for example, we can use electron microscopy or x-ray techniques, which are now a very active area of study. But with normal light microscopy, what can we study? We can study cells, animal and plant cells, and not only their size, because the size of cells is usually around 10 to 30 microns; we want to see what is inside the cells. And we can speak about bacteria, whose size is around 1 to 2 microns. But what exactly do we want to know about these small objects? Even when we speak about imaging big objects, we see them only from one side, so in order to know what is going on at the other side, the object has to be rotated. And even that is not enough, because we usually want to know what is inside the object. For macro-size objects, if we want non-invasive techniques, if we don't want to cut the object open and look inside, we usually use x-ray tomographic techniques, or MRI techniques, et cetera. We want to do something similar at the microscopic level, and in that case we speak about quantitative imaging. What do we need to get what we can call a good image? Of course we have to magnify the object in order to see it, because of the limitations of our own eyes. For example, here you see two images obtained with the same magnification, but they are quite different. Why are they different? Because one of them is obtained with a large numerical aperture and the other one with a smaller one. And as Colin Sheppard explained to you two days ago, the numerical aperture is important to capture the high spatial-frequency content of the scattered light, which is responsible for the sharpest details and the contrast we can see in our image. So the numerical aperture, the size of your lens, is important to see the image in good conditions. Another important thing is the correct illumination, and most of this talk will be devoted to the importance of illumination in the imaging process.
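As a parenthetical note, the 200-nanometer figure quoted above is presumably the Abbe diffraction limit for visible light and a high-NA objective:

$$ d \;=\; \frac{\lambda}{2\,\mathrm{NA}} \;\approx\; \frac{550\ \text{nm}}{2 \times 1.4} \;\approx\; 200\ \text{nm}. $$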
Now, we don't want to look at the image only with our own eyes. Usually we use a camera, in general a digital camera. So we need a detector with a good dynamic range and also with a small enough pixel size. And of course we need good optics without aberrations, good alignment, et cetera. The outline of this talk is the following: we will consider different types of illumination, illumination with coherent and partially coherent light. Then we will speak about what we understand by quantitative imaging, which sometimes goes by the name quantitative phase imaging. We will speak about phase retrieval methods applied to quantitative imaging microscopy. Then we will consider some proposed approaches for recovering three-dimensional objects in microscopy, with coherent and partially coherent light, and then we will speak a little bit about engineering the coherence of the illumination, that is, how we can change the coherence properties of our illumination. Okay. So first of all let us speak a little bit about the different types of light which can illuminate our object. You probably remember how light is described in the textbooks for undergraduate students: we describe it with an amplitude and a phase. But this is the case only when we consider coherent light. Coherent light means that the perturbations of the field at different points are completely correlated. If that is not the case, we have to choose another approach and consider correlation functions. In particular, if the light obeys Gaussian statistics, we only have to consider the two-point correlation function, which describes the perturbations at two points r1 and r2. So you see that even when we consider the light in one plane, which is a two-dimensional problem, the partially coherent case is much more complicated than the coherent one. In the coherent case we have the amplitude and the phase, described by the complex field amplitude, which is in general a two-dimensional complex function. In the case of partially coherent light we already have a four-dimensional complex function. And the first thing to note is that we cannot directly measure either the complex field amplitude or the two-point correlation function, which, in the case of scalar monochromatic fields, we can call the mutual intensity. Why do we call it a mutual intensity? Because if we evaluate it at the points r1 = r2, we get the intensity distribution, and the intensity distribution we can easily measure. So the problem of characterizing the light is how to recover the phase information from the intensity distribution if we consider coherent light, or how to recover the mutual intensity if we consider partially coherent light. Why do we speak about partially coherent and coherent light when we want to consider image formation in the microscope? Because depending on the illumination we get completely different images. We will consider the illumination scheme proposed by August Köhler more than 100 years ago, when he was working for this company. At that time it was very important to get a uniform intensity distribution of the illumination in the plane where you put your sample.
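To fix notation for the coherence quantities just introduced (a standard formulation, not written out explicitly in the talk): for partially coherent scalar quasi-monochromatic light the two-point correlation function is the mutual intensity,

$$ J(\mathbf{r}_1, \mathbf{r}_2) \;=\; \big\langle U(\mathbf{r}_1)\, U^{*}(\mathbf{r}_2) \big\rangle, \qquad I(\mathbf{r}) \;=\; J(\mathbf{r}, \mathbf{r}), $$

so the measurable intensity sits on the diagonal r1 = r2, and in a single plane J is a four-dimensional complex function, as stated above.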
And he found that if the lamp, which is somewhere here, is imaged into the back focal plane of the condenser lens, and the condenser lens is responsible for illuminating the sample properly, concentrating the light on it, then the image of this source never coincides with the image of the specimen that you are going to analyze. And that is a nice thing. But apart from that, we now know that this scheme also allows us, in a simple way, to change the type of illumination from coherent to partially coherent light. How? Just by closing and opening the condenser aperture, which is here. Why can we do that? Because if we apply the van Cittert-Zernike theorem, in which we suppose that the original illumination of our source is completely incoherent and is projected into the back focal plane of our condenser lens, then in the plane of our sample the coherence state, the mutual intensity of this illumination, is described by a Fourier transform of the intensity distribution we have in the back focal plane of the condenser. That is before the light passes through the object; when it passes through it, the mutual intensity changes correspondingly: we take the mutual intensity of the illumination beam and multiply it by the transmission function of our sample, so this u_s corresponds to the transmission function of the object. This transmission function we can in general consider to be a complex function, because an almost transparent sample changes the phase. It is important to say why Köhler illumination is good: if we consider the intensity distribution of this illumination, putting r1 = r2, we obtain a constant, so the illumination is uniform along the whole sample. This is what people wanted; but while the intensity distribution is uniform, the coherence, the correlation properties of the light, are not trivial. The good thing about these correlation properties is that they are shift-invariant: you see that the mutual intensity of these two points depends only on the difference between them. So we have what is also called a Schell-model beam, a Schell-model illumination beam. To describe in two words the state of coherence of our illumination, we can use the coherence parameter, which is the ratio of the numerical aperture of our condenser, related in general to the size of this aperture, to the numerical aperture of our objective. When S is almost zero, which means very small, that is, when we have closed the aperture of the condenser, we have coherent light; and when we open it we have partially coherent, or almost incoherent, light. Okay, but we can do something else. Suppose that we project a certain image, which has to be incoherent, into the back focal plane of our condenser; what we project is shown on this aperture, for example a line. Let us look at how this is reflected in the formation of the images of a simple sphere. You see that in the coherent case we have a lot of fringes; then we have a second focus; and after that, because the sphere works like a lens, we see the image of what we projected into the condenser. But now let us use the line, and see what happens in that case.
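Before that, to summarize the coherence argument in formulas (a standard statement of the van Cittert-Zernike theorem, matching what is described above; the scaling factor is omitted): with an incoherent source of intensity $I_s$ imaged into the back focal plane of the condenser, the mutual intensity in the sample plane is

$$ J(\mathbf{r}_1, \mathbf{r}_2) \;\propto\; \int I_s(\boldsymbol{\rho})\, \exp\!\Big[-\,i\,\tfrac{2\pi}{\lambda f}\,\boldsymbol{\rho}\cdot(\mathbf{r}_1-\mathbf{r}_2)\Big]\, \mathrm{d}^2\rho \;=\; J(\mathbf{r}_1-\mathbf{r}_2), $$

that is, a Schell-model beam: uniform intensity with shift-invariant correlations. The coherence state is then summarized by the parameter $S = \mathrm{NA}_{\mathrm{cond}}/\mathrm{NA}_{\mathrm{obj}}$, with $S \to 0$ the coherent limit and $S \gtrsim 1$ nearly incoherent.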
In the case of the line you still have these interference fringes in the direction which corresponds to almost coherent light in that plane, but the fringes are almost washed out in the other direction, because there your illumination is almost incoherent, or less coherent. We can do something even more exotic and combine different colors, for example green light in one direction and red in the other, and we again obtain a very confusing image. Or we can do it in a triangle form, and you get something like that. If we consider only this row, the defocused images, it is very difficult even to say that we have a sphere, right? But if you look in the focal plane, the in-focus images are all very similar. Why? Because in that case your illumination, as you remember, is uniform, and exactly as the Schell-model description tells us, the in-focus image does not depend on the coherence of your light. If you go further out of focus, you almost see what type of illumination you used. So what do I want to draw your attention to? That depending on the illumination, and in this case I know that the objects were spheres, it is quite confusing to decide what exactly I observed, what exactly is the object that I want to analyze. But maybe I can go here. This? This? Okay. Some other things about partially coherent light. Let us now decide what is better to use, partially coherent light or coherent light. I know that you have a special laboratory session about speckle imaging, but sometimes speckle is quite a negative thing. So let us make the following experiment: you have a layer of scatterers, and the task is to form the images of the spheres behind this layer. Let us use coherent light, with the imaging plane corresponding to the spheres. In both cases you see the scatterers; but if we want to see through the scatterers, to find these spheres, in the case of coherent light it is very difficult to find them, while in the case of partially coherent light it is much easier to see where they are. So probably partially coherent light is much better when your sample contains scatterers which do not belong to what you want to observe. Okay. Now let us look at the following thing. We have the same object, a diatom, defocused with coherent and with partially coherent light, at exactly the same defocus position. And what do we see? We see completely different images. So how can I interpret these images if my object is 3D? How can I find its form, its 3D form, if the images are completely different depending on the type of illumination? We might notice that the partially coherent light somehow gives better optical sectioning: the image defocuses more rapidly, and therefore maybe I can make a better guess about the object's dimension in the axial direction. So that is one of the points. One moment, I'm sorry, something happened. So, what do we understand by these words, what do we want to get from the images? Of course we want to know the form of our object, and not only the outside form but also the inside, for example of the cell.
We want to measure something: how big it is in the transverse and axial directions. And we want to know something about the composition, and when we think about the composition we usually mean the changes of the refractive index of our sample. That is more or less what we understand by quantitative imaging. But how can we get quantitative imaging, given that from direct measurements we only obtain intensity distributions? For that purpose we have to use computational imaging in general. Quantitative imaging is often called quantitative phase imaging. Why? Because biological microscopy samples have poor absorption contrast, which can of course be a bad thing. But there is also a good side: we can treat them as pure phase objects. And if we can do that, we can probably recover the phase of our image. But how can we do it if we can directly measure only the intensity distribution? We can use computational methods to recover it, and I will discuss several of them. In general there are different applications of phase retrieval methods. One of them is digital refocusing, which is the following. Imagine that you record the image of your sample, and the next day you decide that you probably should have focused in a slightly different plane. Or you want to send that image to another person, and this person decides that it is not the correct plane in which to look at this sample. What can you do in that case? If you can recover the phase of the complex field amplitude of your image, then you can do this refocusing digitally. For example, in the following picture you have the intensity distributions measured by simple mechanical refocusing, and in these images here we applied digital refocusing: we measure a stack of intensity distributions, we recover the phase, and from this recovered phase we can go and look at planes which we did not measure. That is one application of phase retrieval. Another one, which is quite simple, is the following: supposing the eikonal approximation of geometrical optics, we recover from the phase the thickness of our object, so we know more or less the accumulated optical path of the light passing through it. And from the knowledge of this form we can recover the refractive index, although in this case it is an accumulated, integrated refractive index. So let us speak about how we can recover the phase. There are different methods, and for the moment we will speak about coherent light. With coherent light we need to recover the phase directly, which is probably a much easier problem than with partially coherent light. We usually take several measurements of the intensity distribution, obtained by defocusing or by interferometric techniques. And probably the most important, or most useful, algorithms applied for this are the Gerchberg-Saxton-type algorithms, the interferometric or holographic methods, the transport-of-intensity equation, and phase-space tomography. So let us speak about the first one. This is an iterative technique proposed almost 50 years ago, and it consists of the following (the loop is sketched just below): we measure the intensity distribution of our object in two planes, the image plane and the Fourier plane.
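A minimal sketch of such a two-plane Gerchberg-Saxton iteration, in Python (an illustration, not the speaker's code; it assumes the two planes are linked by a plain Fourier transform, and all names are mine):

```python
import numpy as np

def gerchberg_saxton(I_image, I_fourier, n_iter=200):
    """Recover the image-plane phase from two intensity measurements
    linked by a Fourier transform (minimal sketch)."""
    amp_img = np.sqrt(I_image)     # measured amplitude in the image plane
    amp_four = np.sqrt(I_fourier)  # measured amplitude in the Fourier plane
    field = amp_img.astype(complex)  # initial guess: constant (zero) phase
    for _ in range(n_iter):
        F = np.fft.fft2(field)
        # keep the calculated phase, replace the amplitude by the measured one
        F = amp_four * np.exp(1j * np.angle(F))
        field = np.fft.ifft2(F)
        # same replacement back in the image plane
        field = amp_img * np.exp(1j * np.angle(field))
    return np.angle(field)         # recovered (still wrapped) phase
```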
Then we try to recover the phase of the original field distribution in the sample plane. What do we do? We take the amplitude, which is the square root of the measured intensity distribution, we may add an arbitrary phase or set it to a constant, and we take the Fourier transform. From the Fourier transform we obtain a certain amplitude and a certain phase. But we have already measured the intensity distribution in the Fourier plane. So we replace our guess of the amplitude in the Fourier plane by what we have measured, and we use the phase which was calculated in this first iteration. After that we take the inverse Fourier transform and again do the same thing: we replace the amplitude of our field and keep the phase which we calculated. We make several loops, and we hope that this iterative process converges. How can we decide when it has converged? We have to compare the amplitudes that we obtain. In general there are different methods to do it, but probably the simplest one is to compare what we obtained in iteration n with the next one; if they are almost the same, we can say that the process has converged. Sometimes it converges to a wrong solution, of course, and in order to resolve this problem we probably need more information about our sample. What do we need for this? We probably need more diffraction patterns: not only the far field, not only the Fourier transform, but also something in the Fresnel diffraction region. So what do people do? We can use defocused images, or we can use diffraction patterns from an asymmetric system. By an asymmetric system we mean that, instead of using spherical lenses to form for example the defocused images, we use cylindrical lenses, which break the symmetry of the system and provide more phase diversity in our reconstruction process. Of course, sometimes, for example in x-ray studies where people try to recover information about a protein, they use as constraints some guesses about the size, the form, et cetera, so the process is even more complicated. In most cases people use the paraxial approximation for the propagation. In the standard Gerchberg-Saxton algorithm we use the Fourier transform, and we know how to calculate a Fourier transform. But when we use Fresnel diffraction patterns, for this iterative reconstruction you need to calculate the beam propagation from one plane to another, and usually people use the paraxial approximation, that is, the Fresnel integrals. The paraxial approximation corresponds to the following. We have to solve the Helmholtz equation, which is, by the way, the wave equation for a monochromatic field. We suppose that the fast variation of our field along the direction of propagation is described by an exponential factor. Substituting this expression into the Helmholtz equation, we obtain a new equation, and we drop the second-derivative term, because we already supposed that the fast changes along the propagation distance were taken into account in that exponent. This condition generally corresponds to assuming that the wave vectors of the scattered, diffracted light make small angles with the direction of propagation.
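Reconstructing the derivation being described, in the standard slowly-varying-envelope form: the Helmholtz equation and the substitution are

$$ (\nabla^2 + k^2)\,U = 0, \qquad U(\mathbf{r}) = A(x,y,z)\,e^{ikz}, $$

which gives

$$ 2ik\,\frac{\partial A}{\partial z} + \nabla_{\perp}^{2}A + \frac{\partial^2 A}{\partial z^2} = 0, $$

and dropping $\partial^2 A/\partial z^2$, since the fast variation along $z$ is already in the exponential, leaves

$$ 2ik\,\frac{\partial A}{\partial z} = -\,\nabla_{\perp}^{2}A. $$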
So we arrive at the paraxial approximation of our equation. In this paraxial approximation we can consider not only propagation in free space, we can add lenses, and we can write the approximation in integral form. If we do that, including now the lenses, which might be spherical or cylindrical but have to be aligned, then the kernel has the following form, where the coefficients A, B, C, D correspond to the ray transformation matrix, which is quite easy to calculate using the geometrical-optics approximation. You see that when B equals zero we have the imaging condition, and the kernel reduces to a delta function with some magnification described by the parameter A. When that is not the case, when we are in the generalized Fresnel regime, we have this rather complicated formula; but if, for example, A and D equal zero, we obtain the simple kernel of the Fourier transform, and when A and D equal one we have the ordinary Fresnel transform. So this is a kind of generalization of the Fresnel transform. As I said, it simplifies a lot the propagation of the beam through a system containing different elements, which is what usually happens in practice, because we not only have the objective, after that we have to project onto our detector, and so on. Again, these A, B, C, D coefficients relate the position and direction of propagation of the beam leaving the system to the position and direction of the beam entering it. In general this is a four-by-four matrix, so it has 16 parameters, but it is symplectic, and therefore only 10 independent parameters are needed to describe it. Now let us consider an example of how we can get the phase from defocused images. Of course, to produce this defocus we can move our lens, or our detector, the CCD camera. But the problem is that mechanical movements probably introduce some misalignment. What we can do instead is put here a spatial light modulator, and we can program this spatial light modulator in such a way that it works like a digital lens. In that case we can obtain these defocused images much faster, because everything can be changed at video rate, at 30 or more frames per second, and there is no problem with the alignment of our system. So this was done: nine defocused intensity images were measured, two of which are shown here. Then, using the iterative algorithm as I explained for the case of the Fourier transform, though it is a little more complicated here because there are several intensity distribution measurements and the propagation between them is described by the generalized Fresnel transform, they obtain the information about the phase, and this is its representation. There is another problem that has to be solved: the unwrapping of the phase. From this iterative process we obtain the phase only on an interval from 0 to 2 pi, and sometimes the optical thickness of our object is more than a wavelength, so we have to apply special algorithms to make this phase consistent with the size of our object. There are special unwrapping algorithms, which are quite time-consuming and not always correct, but people are working on this problem.
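On that unwrapping step, a minimal illustration with scikit-image (one possible tool among the algorithms alluded to; the synthetic data here is mine):

```python
import numpy as np
from skimage.restoration import unwrap_phase

# Synthetic example: a smooth phase ramp exceeding 2*pi, then wrapped.
y, x = np.mgrid[0:256, 0:256]
true_phase = 6 * np.pi * x / 256              # accumulates three full cycles
wrapped = np.angle(np.exp(1j * true_phase))   # folded into (-pi, pi]

# unwrap_phase removes the 2*pi discontinuities; the result equals
# true_phase up to a constant offset.
recovered = unwrap_phase(wrapped)
```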
So can we do the same with partially coherent light? Because we said that partially coherent light probably provides some advantages with respect to speckle noise and other things. Yes, we can. We do it with a similar setup, but instead of using the spatial light modulator, which is quite expensive, a good one might cost around 10,000 euros, we used electrically tunable lenses, also called fluidic lenses, whose price is only around 600. For this particular application the only thing we need is to refocus our image. The other new thing is that we use partially coherent light. Using this setup we again obtained several images, around ten defocused images, where each image is measured in 10 microseconds, and the object wave field is then reconstructed in about 20 minutes. But what is the procedure for reconstructing the phase information of our sample when we use partially coherent light? In that case we can notice that, in this paraxial approximation, the intensity distribution in partially coherent light is related to the intensity distribution in coherent light by a convolution of the latter with, what? With the scaled intensity distribution which we have in the back focal plane of our condenser. So by making a deconvolution, if we know the illumination of our system, we are able to recover this, I would say, virtual coherent intensity distribution; virtual because we did not measure it. What is measured is the intensity distribution of the partially coherent light, which is already free of speckle. After that we can apply an algorithm similar to what I explained before and recover the information about our images. This was first tried on spheres with a diameter of four and a half microns, for different types of illumination, so different degrees of partial coherence, and the reconstructed thickness, obtained from the phase a beam accumulates passing through the sphere, matched very well with what we expected to find; and from the knowledge of the form we can also find the refractive index of our sphere. Another approach, which is close to this one but is not iterative, uses the transport-of-intensity equation. We again stay within the paraxial approximation of the Helmholtz equation, and we represent our beam by the square root of the intensity and the phase. If we take this equation and multiply it by the conjugate of our field, add the conjugated paraxial equation multiplied by the non-conjugated field, and subtract one from the other, we obtain an equation which was, I think, first derived by Teague and is known in the literature as the transport-of-intensity equation. You see that from measurements of the differences of the intensity distributions at closely spaced positions along the direction of propagation, we are able to find the gradient of the phase. In the case when your object is almost transparent, so that the in-focus intensity distribution is almost constant, we can even take it outside of these brackets, and then we have the Laplacian of the phase.
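Before moving on, here is a minimal sketch of the speckle-removal deconvolution described a moment ago (my own illustration of the stated convolution relation; the kernel scaling and the actual processing in the cited work may differ):

```python
import numpy as np

def virtual_coherent_intensity(I_partial, source_kernel, eps=1e-3):
    """Deconvolve the measured partially coherent intensity with the
    (scaled, centered) condenser back-focal-plane intensity to estimate
    the speckle-free 'virtual' coherent intensity.  Wiener-type filter."""
    H = np.fft.fft2(np.fft.ifftshift(source_kernel))  # kernel assumed centered
    G = np.fft.fft2(I_partial)
    # divide where H is large, damp where it is small to avoid noise blow-up
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F))
```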
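Returning to the transport of intensity: written out, and up to sign conventions, the equation this derivation arrives at is

$$ -k\,\frac{\partial I}{\partial z} \;=\; \nabla_{\perp}\cdot\big(I\,\nabla_{\perp}\varphi\big), $$

and for a nearly transparent object, with almost uniform in-focus intensity $I_0$, it reduces to the Poisson form

$$ -k\,\frac{\partial I}{\partial z} \;\approx\; I_0\,\nabla_{\perp}^{2}\varphi. $$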
So how can we measure this axial derivative? From the theoretical point of view it is quite easy: if we are in the focal plane, we defocus by delta z in one direction and by delta z in the other direction, subtract the two images, and divide by twice delta z. But it is not so easy in practice. First of all because the intensity distribution appears here and might have zero values, and because we have noise; so usually people do not use only two intensity distributions but many more. Also, in a recent paper it was shown that it is not necessary to measure them equidistantly; with equidistant spacing you need quite a huge number of defocused images to obtain correct results for the calculation of the phase, but you can use exponential spacing instead. In that case the images captured far from focus are responsible for the low frequencies, and those closer to focus for the high frequencies, and therefore we are able to recover the phase information from fewer images, with more or less the same precision as with an equidistant stack of intensity distributions. We have considered the transport-of-intensity equation for coherent light, but there are also generalizations for partially coherent light, in particular in the papers of Petruccelli and of Laura Waller; if you are interested you can look into them. Now let us look at another possibility to recover the phase, which is related to holography. I realized that this year it is already 70 years since the invention of holography, which was proposed, explaining image formation from this point of view, by Dennis Gabor, who received his Nobel Prize in 1971. He tried to recover the phase information, or in general he wanted to see the object; and how can you see the object if the only thing you can measure is the intensity distribution? So he asked: what is the intensity distribution that we really measure? It is the superposition of the illumination beam and the beam scattered by the object which we want to visualize. Separating the terms, he found that this intensity distribution is equal to the intensity distribution of the reference light, plus the intensity distribution of the object light, plus two terms which carry the phase information directly; somehow directly, because phase information is also present in the intensity distribution if we consider the propagation, if we capture several intensity distributions at different positions, for example. But if we work in only one plane, we see that the phase information is in these two terms: in one of them we have our object field, which is what we want to recover, and in the other one its conjugate. These two terms, which are the ones usually of interest in holographic applications, are called twin terms, because they both correspond to the object field but one of them is the conjugate of the other. This means that if one of them is in focus, the other is not; and therefore the reconstruction in this type of holography is quite difficult, because we always see both of them.
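In symbols (standard notation, with $R$ the reference wave and $O$ the object wave), the recorded intensity is

$$ I \;=\; |R + O|^2 \;=\; |R|^2 + |O|^2 + R^{*}O + R\,O^{*}, $$

and the last two terms are the twin terms carrying the phase: one reconstructs the object field, the other its conjugate.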
But how can we recover, or at least try to recover, the information about the object field? What we have to do is multiply our intensity distribution by the reference beam; physically, this means propagating the reference beam through the photographic plate recorded in the previous scheme. Then we obtain, in general, two images: one image which corresponds to our original object, and another one, which we call the twin image, which is formed here. This was Gabor's idea of how to explain it, and how to see the object after recording information that contains not only the intensity distribution of this object but also its phase. But now we can do this process not in the analog way but digitally. We also capture the hologram, and this hologram is exactly the same as one intensity distribution which we can measure with our microscope. In the analog way, to see the object we would need to illuminate the hologram again, maybe not with a plane wave but with a spherical wave in order to magnify it. But now we can solve this reconstruction problem in our computer, because we know how our beams propagate from one plane to another. So we capture the hologram in digital form using the CCD camera, and after that we try to reconstruct the field; and if we recover the field, we will be able to recover the phase information. The problem of the Gabor, in-line, holographic scheme is that your object is always superposed with this twin image, and sometimes it is quite difficult to resolve this problem. So another proposal, by Leith and Upatnieks, was not to use the reference beam which in a microscope comes for free, because it is the illumination beam, but to bring it in separately, with some inclination. In that case these two images will not lie along the same line, and therefore, using the Fourier transform, you can separate them digitally, simply by filtering the correct term in the Fourier domain. The problem of applying off-axis holography digitally is that, in order to separate the two twin images and the DC term, you need quite a big angle between the object and the reference beams. But on the other side, you have to record the hologram using the CCD camera, and the CCD camera has a pixel size which may not be small enough to capture fringes which are very close to one another. So there is a trade-off between enlarging this angle and making it smaller, in order to capture these holographic fringes in good conditions. But now let us see how we can solve the problem of phase recovery if we have an in-line hologram; it also works for off-axis holograms. This proposal is from 1997, by Yamaguchi and Zhang, and it is phase-shifting digital holography. What does it consist of? Let us apply a phase shift to our reference beam. If we again write the expression for Gabor-type holography, we have the intensity distribution of our object, the intensity distribution of our reference beam, and the modulus of the product of both of them multiplied by a cosine, into which enter the phase of the object and the phase of the reference beam. Now let us change the phase of the reference beam, using for example a mirror controlled by piezo elements.
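For reference, under the usual sign convention for the recorded intensity with a phase-shifted reference (my notation, matching the scheme about to be described):

$$ I(\alpha) \;=\; I_R + I_O + 2\sqrt{I_R I_O}\,\cos(\varphi - \alpha), $$

so that four shifts $\alpha = 0,\ \pi/2,\ \pi,\ 3\pi/2$ give the object phase relative to the reference as

$$ \varphi \;=\; \arctan\frac{I(\pi/2) - I(3\pi/2)}{I(0) - I(\pi)}. $$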
If we can shift this phase with a certain precision, for example in four steps of pi over two, alpha equals zero, alpha equals pi over two, alpha equals pi, and alpha equals three pi over two, then we are able to recover the phase information just by subtracting the intensity distributions measured at the different retardations of our reference beam. So it is a nice idea: using this holographic technique we can recover the phase of the object. There are a lot of different variations of this scheme, and it can also be applied to off-axis holography. Here you see the recovered phase of some cancer cells obtained by this method. This is the hologram, and this is the phase image; and you see that the phase image looks very strange. Why? Because it is the wrapped phase image: the phase is known only in a range of two pi, and applying unwrapping algorithms you can recover the phase image which really corresponds to your cancer cell. From these results we can measure the thickness of our object, and so say something about its refractive index, if we more or less manage to find its form. Okay, so we have considered different ways to find the information about the phase; but have we solved the problem of reconstructing our object? Can we say exactly what its form is? Can we say what the refractive index is in some part of it? No: we have information about the accumulated refractive index, and we have information about the phase, but it is the phase of the image, not the phase of the transmittance function of our object, even if the object is almost transparent. The phase of the image is not exactly the same as the phase of the object which we have to recover. Why? Because of two things. First of all, we measure here, but what we want to know is what happens there, and in between there is the optical system which forms the image of our object. So we have to do something with this information about the system which forms the image; and we also want to know something about the 3D distribution of the refractive index of our sample. In other words, we have to take into account the transfer function of the microscope, or the point spread function of our microscope, and we know what the objective lens does: it cuts the spatial-frequency content of the field which is diffracted by our sample. There are different approaches to solving this complicated problem, and for that we return again to the Helmholtz equation. But now we would like to solve the Helmholtz equation not in free space; we have to solve this equation inside our sample, where the refractive index is variable, because it is the refractive index distribution of the sample itself. So we have to consider this problem, and we can simplify it: we introduce here the refractive index of the surrounding medium in which the sample, the cell, is mounted; for example it might be water. Then we rewrite this expression as something which we know how to solve, because this part is the Helmholtz equation for free-space propagation, and we know the point spread function, the Green's function, in that case.
On the other side what we have is the difference between the squares of the refractive indices of the mounting medium and of our sample, our object. And this expression, which is in blue, is what we call the optical potential. So first of all we can solve the Helmholtz equation in free space, because the Green's function in that case is known, and after that we try to find an approximate solution for the inhomogeneous case. There are different approximations of the solution of this Helmholtz equation: the paraxial one; the eikonal approximation; and the Born approximation, also called the small-perturbation method, where we look for the complex field amplitude starting from the complex field amplitude obtained for the homogeneous equation, and this is the first or second order of the perturbation expansion of the solution. This approximation is linear with respect to the complex field amplitude, as we can see from this expression. But we can also apply another one, the Rytov approximation, which is not linear but multiplicative with respect to the complex field amplitude, and is expressed in the following way. The differences between them we will see a little further on, but maybe I should tell you now that the first Born approximation, if we cut the expansion here, does not take into account multiple scattering, while the Rytov approximation does, though only forward scattering is taken into account in that case. The simplest approximation is probably the geometrical-optics one: we suppose that the field changes smoothly on the scale of the wavelength, so we do not take into account diffraction; we use the ray approximation and obtain, in the first approximation, the famous eikonal equation, and from that we can say something about the accumulated phase, with the factor two pi over lambda, if we consider the entire path. So, does this method allow us to say something about the 3D information of our object? The answer is: partially, yes. Using, for example, phase tomography, the idea is the following. We use holograms, off-axis holograms, and here acousto-optic modulators provide the phase shifting needed to recover the phase information of our object. And again, in the back focal plane of our condenser lens we focus our beam at different points: if we focus it at the center, we have a plane wave propagating along the axis illuminating our object, and we have one projection, which corresponds to the accumulated phase in that direction. And if we put the focus somewhere away from the center, we have another plane wave, inclined with respect to the first one, so we have another projection. The situation is very similar to what we have in x-ray computed tomography, but in our case we are speaking about the phase and not the absorption, which, by the way, is also a topic of current study in x-ray tomography. Then we reconstruct from these projections; but of course we cannot record projections over the whole 180-degree interval, because the inclined wave diffracted by the sample has to enter the objective, so there is a certain limitation, and the limitation in this case is about 60 degrees.
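In formulas, the projection being described is the line integral of the index contrast along each illumination direction (the standard phase-projection form, assuming the eikonal picture):

$$ \varphi_{\theta}(\mathbf{r}_\perp) \;=\; \frac{2\pi}{\lambda}\int \big[\, n(\mathbf{r}) - n_m \,\big]\,\mathrm{d}\ell_{\theta}\,, $$

so, as in x-ray CT, many projections over the accessible angular range can be inverted with Radon-type algorithms, only with phase in place of absorption.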
Another problem is that in x-ray tomography what people measure is the projection onto the plane perpendicular to the rays, while here the CCD camera stays in a fixed direction, so we need some additional correction to the projections that we measure; that can be done. Thus, using 81 phase images, with four holograms per image, because four holograms are needed to recover the phase of each one, we are able, using a Radon-type tomographic procedure, to recover the information, in this case about the refractive index of the HeLa cell. Here you see the results, where the nucleoli, which have the larger refractive index, are shown in this green color. So this is nice: we can use this approximation to get the information. But can we do something better? Let us now return to the Born approximation and consider the first Born approximation. In the previous case we did not take into account that you have the point spread function of our microscope, so we have to include it somehow. In the first Born approximation, solving the homogeneous equation we obtain the Green's function, and then, substituting into the inhomogeneous one, we find that the first-order approximation is given by the following integral, where G is the Green's function and V is our optical potential; and our goal is to recover this optical potential. Of course we make certain assumptions when we use it: for the first-order approximation we suppose that the scattering is weak, so that the magnitude of the scattered light is much smaller than the magnitude of the incident light, and that the light is diffracted only once, meaning that we do not consider diffraction of the scattered light itself. So we can say that the calculation of this first approximation to the complex field amplitude is similar to the problem of calculating the field created by independent sources. This approximation was used with a setup similar to the one I showed you before, again with coherent illumination, because as in the previous case people use laser illumination, with the inclined beams. And using deconvolution, supposing that the point spread function of our system, the Green's function, is shift-invariant, we have a convolution-type equation, and knowing what this G is, we can recover the information by a deconvolution process. If we can write the first-order approximation in the form of a convolution, then going to the Fourier domain makes things easier: the Fourier transform of the first-order approximation is the Fourier transform of our optical potential, which is our goal to recover, multiplied by the illumination, which we may suppose constant, and multiplied by the Fourier transform of our Green's function, which is the coherent transfer function. So, measuring, or estimating from theoretical considerations, the coherent transfer function, we are able to divide by it and recover the information about the optical potential. Of course, it is very easy to say that we just divide by it, but in general it is quite a complex problem, because we cannot allow ourselves to divide by small numbers: we would artificially amplify the noise, and the results would be very strange.
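Putting the pieces just described into standard notation (a reconstruction, with conventions that may differ from the speaker's slides): with $n_m$ the mounting-medium index, the optical potential is $V(\mathbf{r}) = k_0^2\,[\,n^2(\mathbf{r}) - n_m^2\,]$ up to constant factors, and the first Born approximation is

$$ U(\mathbf{r}) \;\approx\; U_i(\mathbf{r}) + \int G(\mathbf{r}-\mathbf{r}')\,V(\mathbf{r}')\,U_i(\mathbf{r}')\,\mathrm{d}^3 r', \qquad G(\mathbf{r}) = \frac{e^{i k_m |\mathbf{r}|}}{4\pi |\mathbf{r}|}. $$

For constant illumination this is a convolution, so in the Fourier domain $\tilde{U}_1 = C\,\tilde{V}\,H$, with $H$ the coherent transfer function, and the simplest (Tikhonov-type) guard against dividing by small numbers is

$$ \tilde{V} \;\approx\; \frac{\tilde{U}_1\, H^{*}}{|H|^2 + \alpha}, \qquad \alpha > 0. $$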
So deconvolution is quite a challenging task, and there are different regularization methods which are applied in order to recover the optical potential reliably. Here it was applied to this configuration, and the inclined illumination also improves the frequency content of the images to be recovered. Using 240 holograms, they recover the fluctuations of the refractive index of the bacteria represented here, where delta n is the perturbation of the refractive index with respect to the background medium. But how can we, at least theoretically, approximate this point spread function of our objective? We know that the Green's function for free propagation is written in this form, but we also know that when the light propagates through the objective not all spatial frequencies can pass: we have limitations which are related to the numerical aperture of our objective. We also see here that this part is related to our transverse resolution and this part to our resolution in the axial direction. So we can modify the free-space expression by multiplying it by the pupil function, and by the factor that selects only the light propagating forward, in the direction of beam propagation, which is what was really measured by our system. If we now return to partially coherent light and write everything in the first-order Born approximation, we can write the expression for the mutual intensity of the light passing through the sample and through the objective, and write it in the image plane. If we write it for the coherent case, substituting the fact that the mutual intensity of the illumination is simply the product of the complex field amplitude and its conjugate, we find that the intensity distribution is equal to the intensity distribution of our illumination plus the two twin terms that we had in the Gabor picture of in-line holography, except for one thing: what we cut here is the intensity distribution of the scattered field itself. This is the price we pay for using the first Born approximation: that term belongs to the second order, which we do not consider, and because of that we require this part to be much smaller than the intensity distribution of our illumination. After that, we divide the optical potential into two parts: one corresponding to the phase part, which is just the accumulation of phase during the propagation through the sample, and another which is related to the absorption. Substituting into the formula I showed you before, we can somehow separate the part related to the phase accumulation from the part related to the absorption. Writing it out, in the frequency representation of the intensity distribution the two parts seem separated, but their transfer functions overlap; so directly, measuring only the intensity distributions in a through-focus stack and taking their Fourier transform, we cannot separate them.
We cannot recover the information about the phase even if we know the transfer functions for the phase and for the amplitude. So we need to make additional measurements, probably playing with different states of coherence of the illuminating beam. Why? Because if we look here, we have this source term, which corresponds to the intensity distribution, supposed incoherent, over the condenser aperture. So maybe, playing with that, we will be able to separate them, but we need additional measurements for different states of coherence. I will stop in two minutes and we will continue tomorrow. Here you see the optical transfer function for different states of coherence: this is less coherent and this is more coherent light. The frequency content passed by our microscope is larger in the case of partially coherent light than in the coherent case. So maybe, instead of doing the point-by-point scanning which I showed you before for the tomographic reconstruction of the 3D structure of our object, we can use partially coherent light and try to recover this 3D information in fewer shots than in the previous case. Then we have another type of approximation, the Rytov one, but we will probably return to that tomorrow. This seems a good point to stop. So, do we have some questions?

Thank you for the very nice and informative presentation. I actually had two questions. The first one is about a publication of yours I have seen, where you used partially coherent light to avoid the speckle, and then a very nice method to derive the coherent intensity function without the speckle and then to retrieve the phase. But I have also seen the opposite in the literature: putting a diffuser in the light path to produce speckle, not to avoid it, and then retrieving the phase. What would be the difference between these two methods?

There is a difference. That is like structured illumination, and so it is a different method, a totally different method.

And my other question was about the trade-off that you mentioned between the fringe spacing and the CCD pixel size. I did holography for quite a few years, and I was always wondering whether you could make the fringes so fine that they would produce moiré fringes with the CCD pixels themselves. I wanted to ask whether you have seen such an application of the holographic fringes, or have done it yourself.

No, I'm not sure I understand what you are saying.

I mean: is it possible to make the fringes so fine that they form moiré fringes with the CCD pixels, so that we could extend the resolution that way too, maybe?

I haven't seen that. I work in holographic microscopy with something like in-line holography; it is almost the same as we saw. In in-line holography you have the same stack, but afterwards you can treat it in different ways: you can treat it with an iterative process, you can treat it with the TIE equations, and we can also treat it as Gabor holography and try to reconstruct it. But not in off-axis holography. There are of course also methods to recover more spatial-frequency content, but I don't remember the publication.

Thank you very much for a nice presentation. Any more questions? Okay, so I think, hopefully, you will get some more questions after the next part.
So maybe we should stop now; it is lunchtime.