All right, thanks everyone for taking your time to join us and to listen to this interesting talk, which is a continuation of a series of talks organized by the kind folks at LINXS. This webinar is an introduction to hard X-ray coherent diffractive imaging in Bragg geometry. In the previous talks we've heard and seen how techniques such as ptychography are extremely useful: in the forward scattering regime you can study extended samples, ideally larger than the beam size, and you can raster through the sample, and the overlapping information gives you a lot of constraints to resolve fine features. What we're going to look at now is how we can use phase retrieval, in an inverse-problem scenario, to reconstruct from measured diffraction patterns in Bragg scattering geometry things such as strain within crystals and atomic displacements, and much of this can also be done in situ. Our speaker today is Dmitry Dzhigaev. Dmitry is a very exciting and interesting young scientist; I've known him personally for about two, getting to three years now, but I've been following his work for the past six or seven years. He got his PhD in 2017 from the University of Hamburg, working with people like Ivan Vartaniants and Edgar Weckert and other very famous folks in synchrotron and photon science, who help build up young people like us today to not just take over but to steer the direction of synchrotron radiation research. Dmitry is currently a postdoctoral researcher in the Division of Synchrotron Radiation Research within the Department of Physics at Lund University, and he has over nine years of experience working on synchrotron physics and materials-related problems. So I'd like to give a warm welcome to Dmitry, or Dima as I call him, and the floor is all yours, please go ahead.
Thank you for a really nice introduction, and hello everyone. Really nice to see so many people today, and I hope many of you are still following from the previous talks, because this is, as was said, a continuation of the whole series of webinars, and today we're going to talk about coherent imaging in the Bragg geometry. Since we already have some knowledge about coherent imaging, we'll just refresh what we know from the previous webinars, then cover why we should do imaging in the Bragg geometry at all, how we measure — how we obtain the data — what kind of information, and especially quantitative information, we can obtain, what limitations and challenges you can face using this technique, and what technical capabilities and facilities around the world exist for doing it. And hopefully, as a new part of this webinar, we will have some hands-on experience at the end, if it goes well — I hope so. So let's start with a recap of coherent diffractive imaging. We normally use coherent X-ray beams, produced by synchrotrons or also free-electron lasers, to illuminate structures, either extended objects or compact objects, and under certain assumptions — in the far-field regime, when the object is relatively small and we measure the diffraction at a long distance — we can record diffraction patterns with these nice oscillations coming from the shape, or the overall structure, of the object. We measure the intensities of this scattering, either in a scanning mode, ptychography, or from an isolated object, which we can then reconstruct in 3D using tomographic approaches. We face the phase problem, which is kind of funny: we need to reconstruct the original complex-valued object function, which contains the information about the material, by applying phase-retrieval algorithms, as was explained in the previous talks.
By reconstructing this function, which in this case is a complex-valued density, we can access the absorption contrast through the object, or the phase contrast for a weakly scattering object. So that's all nice, and we can reconstruct it for any point in our object in 3D. But what happens if we replace our general object with a crystal, which has an ordered placement of atoms? If we shine X-rays onto the crystal, as was discovered over 100 years ago, we get special spots on the detector behind it, which are Bragg peaks. The information contained in these Bragg peaks allows us to reconstruct the actual atomic structure of the crystal, and this goes through Bragg's law. Bragg's law is a very simple connection between the spacing d between the atoms, or the atomic planes, the incidence angle of the X-rays onto the crystal, and the wavelength. The constructive interference which forms the Bragg peaks occurs only where we are aligned with a reciprocal lattice vector of our crystal. This simple drawing shows how the experiment actually goes. Alternatively, we can describe the diffraction from the crystal by constructing the so-called Ewald sphere, which is based on two vectors: Ki, the incident wave vector, whose modulus is 2π divided by the wavelength, and Kf, the scattered wave vector, separated from Ki by 2θ_B, twice the Bragg angle for the given crystal. Taking the difference between Kf and Ki, we construct the Q vector — the momentum transfer vector in Q space. By choosing a certain orientation, the end of this Q vector aligns with the H vector of the crystal's reciprocal lattice, and in this case we can record the Bragg reflection. The width is basically set by the thickness of the Ewald sphere: the bandwidth of your radiation gives it some thickness, so you record some area around the Bragg peak on your detector.
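Bragg's law and the momentum transfer can be checked numerically in a few lines. A minimal sketch — the Au(111) d-spacing and the 9 keV photon energy are illustrative values of my choosing, not numbers from the talk:

```python
import numpy as np

# Bragg's law: lambda = 2 * d * sin(theta_B); |Q| at the peak equals 2*pi/d.
# Illustrative inputs: Au(111) d-spacing, 9 keV photons.
d = 2.355e-10                          # lattice-plane spacing (m)
wavelength = 12.39842e-10 / 9.0        # E[keV] -> lambda[m], hc ~ 12.398 keV*Angstrom

theta_B = np.arcsin(wavelength / (2 * d))        # Bragg angle (rad)
Q = 4 * np.pi * np.sin(theta_B) / wavelength     # momentum-transfer magnitude (1/m)

print(np.degrees(theta_B))             # ~17 degrees for these inputs
print(Q * d / (2 * np.pi))             # 1.0: |Q| = 2*pi/d at the Bragg condition
```

The second print is the consistency check: the Q vector constructed from Ki and Kf lands exactly on the reciprocal lattice vector of length 2π/d.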
So these are the basics of how we're going to treat the diffraction, and this is just a brief recap on the coherence. If we now take the object — making it sufficiently small, and a crystalline object — and put it into the so-called coherence volume, which is defined by the source parameters of the synchrotron, let's say: the longitudinal coherence length is defined through the monochromaticity of the radiation, and for third-generation synchrotrons it's a value around one micrometer, while the transverse coherence length is defined by how far we are from the source and how small the source size is. By putting the object into this coherence volume, so that we illuminate our object completely with a coherent beam, we can produce a coherent diffraction pattern, which in the case of a cubic structure will look like this. And basically we zoom in on one of those Bragg peaks when we go to the Bragg geometry. If we record the three-dimensional intensity distribution around the Bragg point, you will see something like this — they all look similar, but they have a special sensitivity: the sensitivity to the displacements of atoms from their equilibrium positions. In this scheme it's shown by the vector u, which is called the displacement field, and if you shine your X-rays on a place where the atoms are displaced from their original positions, you get distortions in your diffraction pattern in one way or another. By writing out, again in the kinematical approximation, the far-field scattering amplitude, we can see that in the case of Bragg scattering we become sensitive to the deformations inside the crystal, which are encoded in the phase of the complex-valued object function. The s here is the shape of the object, which is smaller than the coherence volume and describes the morphology, and the phase basically encodes the deformation.
By reconstructing the phase of the object, we can extract the displacement field — how atoms are displaced from their positions. By measuring a certain HKL reflection (because a crystal has different reflections), we access only the projection of the displacement onto that direction. So in a strained crystal structure, which can look like this, the strain can be compressive or tensile, and this deformation describes how the structure looks. By shining X-rays on this type of structure, we can see different effects, from a simple point of view. If the crystal is unstrained, the peak sits at the original position, which can be found from crystallographic tables in the literature. If there is a homogeneous strain inside the structure, the peak will just shift in Q space, showing how the atoms get closer together or further apart. And if you have an inhomogeneous strain, due to whatever reason, the peak starts to broaden or change its shape and become asymmetric. This is the most interesting part: we can actually look into this inhomogeneity of the crystalline structure, caused by lots of different effects. Some examples with very simple numerical models. If you take, as was probably already shown, a model object with amplitude 1 and no phase — so basically no distortions — you get a nice diffraction pattern, which looks like a sinc function in any dimension; a squared sinc function, since this is the intensity. However, if you consider some elastic strain — for example in semiconductor materials, when you have a lattice mismatch between two structures brought together, a misfit strain appears at the interface of these two structures, trying to accommodate the lattice constants — you get phase profiles something like this. The phase has a certain curvature, which is not linear, and in this case, when you shine the X-rays, you see an asymmetric behavior of the diffraction pattern in one direction, so this is in the plane
XY — you can see it here — but in the other direction it stays the same, so we are very sensitive to the direction and the magnitude of this effect. There can also be other defects, like point defects, or topological defects like dislocations, which can be simulated as vortex fields in the phase. Basically there are points of zero intensity where the phase is not defined, and it winds around this point, forming such vortices. This is how the distribution looks in a crystal if you reconstruct the phase with phase retrieval; the diffraction starts to split and all kinds of interesting effects appear. The same applies to planar defects, or stacking faults: the diffraction pattern starts to show certain characteristic features, and they all get intermixed, so the final diffraction pattern can look very complicated. Like this one, for example, under external stimuli — this is experimental data. If you take your pristine crystal and shine the X-rays, you see a nice fringe pattern showing perfect symmetry, but if you start to plastically deform it, the diffraction pattern moves totally away from the initial one, and it is actually a hard challenge to reconstruct this type of data. A short summary of this part — what we are actually sensitive to with the Bragg approach: we can still reconstruct the morphology; lattice spacings, basically if you have some absolute reference for your measurements; elastic strains; we can image the effects of defects and also the dynamics of defect formation inside localized crystals; and defects like twinning, or multi-phase coexistence, or surface-induced stress, or polarization-driven atomic displacements — all the effects affecting the atomic displacements inside the crystalline structure can be studied, and that's a really wide and developing field of science. Moving forward: that's all fine, we can get all this nice information, but how do we actually obtain the data?
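Before moving on: the distortion effects above are easy to reproduce with a toy numerical model, in the spirit of the slides. A minimal numpy sketch with made-up sizes and strain magnitudes — an unstrained 1D box gives a symmetric sinc² fringe pattern, a one-sided interface-like phase makes it asymmetric, and a 2D phase vortex (a dislocation) kills the intensity at the exact peak centre:

```python
import numpy as np

n = 256
x = np.arange(n) - n // 2
box = (np.abs(x) < 20).astype(float)                 # 1D crystal, ~40 px wide

# Unstrained: |FT(box)|^2 is a symmetric sinc^2 fringe pattern
I_flat = np.abs(np.fft.fftshift(np.fft.fft(box))) ** 2

# Interface-like misfit strain: curved phase on one half only -> asymmetric fringes
phi = 5e-3 * x ** 2 * (x > 0) * box                  # phi(x) = Q.u(x), toy model
I_strained = np.abs(np.fft.fftshift(np.fft.fft(box * np.exp(1j * phi)))) ** 2

def asymmetry(I):
    h = len(I) // 2                                  # index h is q = 0 after fftshift
    return np.abs(I[h + 1:] - I[1:h][::-1]).sum()    # compare +q with -q fringes

print(asymmetry(I_flat))                             # ~0: pattern is symmetric
print(asymmetry(I_strained) > 1.0)                   # True: clearly asymmetric

# Dislocation: 2*pi phase vortex -> the intensity vanishes at the peak centre
m = 128
yy, xx = np.mgrid[-m//2:m//2, -m//2:m//2]
disc = (xx**2 + yy**2 < 30**2).astype(float)
I_vortex = np.abs(np.fft.fft2(disc * np.exp(1j * np.arctan2(yy, xx)))) ** 2
print(I_vortex[0, 0] < 1e-3 * I_vortex.max())        # True: central intensity ~0
```

The one-sided phase mimics the misfit strain at an interface; a symmetric (even) phase would keep the pattern symmetric, which is why the direction of the asymmetry tells you about the sign and location of the strain.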
To continue the discussion in terms of wave vectors: how do we record the full three-dimensional information around the Bragg point in an actual experiment? In the case of forward scattering, you just rotate your sample, you get slices through the center of the diffraction pattern, and then you can stack them in 3D. But in the case of Bragg CDI — Bragg coherent diffractive imaging — you have to place your object at the center of rotation of a device which can reliably record the scattering. Here you see the incident beam, Ki, and the scattered beam, Kf, which you record at the detector; this plane represents the detector, which is analogous to a small arc of the Ewald sphere construction, and it crosses the Bragg point if you align your crystal to the Bragg condition. Then, by mechanically rotating your sample in small steps of angle Δθ, you can actually acquire these slices through reciprocal space and record the full three-dimensional distribution of intensity around the Bragg point. This is actually one of the fastest and easiest ways to record Bragg CDI data. To see it in a more intuitive way, here is an animation of how it actually happens: the X-ray beam hits the sample, and as you do this fine tilting around the Bragg condition, the reciprocal space travels through the detector plane, you record each of these slices on your detector, and you can work with the data later on. Sometimes these movements make things complicated, if you want to use some very stable environment or some special device on top of your sample and you don't want to move anything. So there is a way to record the same type of data set without actually moving anything on the sample side, by changing the energy of the X-rays while keeping the focus on your sample. This is called an energy scan.
You can start from higher energies or from lower ones; you just scan a very narrow energy range of your incoming X-rays, which basically changes the wavelength — the angle here is exaggerated — and you can likewise scan through reciprocal space and get your three-dimensional data set. It basically looks like this: the lengths of the k-vectors change, and the resulting Q-vector length also changes, so you can again map the whole 3D distribution of intensity through your detector. However, the sampling issues, and working with this data in general, are a bit more complicated — but that's a topic for a separate talk, I guess. On the technical side, how do we access these reflections in a reliable way? There are two options. One is diffractometers. This is well-developed instrumentation, produced by a lot of companies, where you have motors for both your sample and the detector, and you can set any orientation of them with respect to each other with high precision and reproducibility. That's perfect for this type of experiment, though sometimes there is a lack of flexibility, because the detector distance cannot be changed easily, and then only a customized version of the diffractometer will work. As an example, this is what the actual diffractometer at P10 at the PETRA III synchrotron looks like; it allows you to go anywhere in the full quadrant of 3D space and access any reflection or orientation of your sample you like. Of course, to reach a good scattering geometry you need to precisely move your sample. The other option is to use robot arms. This is the case at NanoMAX at MAX IV here in Lund, where the detector is mounted on a robot arm which is moved computationally — it is programmed to move on a sphere around the sample, keeping the distance constant. It's very nice and gives much more flexibility in the measurements. So what are the requirements for successful measurements?
Most important, of course, is the sampling, which was already discussed for the forward direction, so I'll just look at it briefly. You record your signal with a pixelated detector, so you sample your continuous signal with pixels, and to successfully reconstruct something with phase-retrieval algorithms you need the pixel spacing in reciprocal space to be smaller than the inverse size of your object — you should get at least two points per fringe, per period, let's say, to uniquely reconstruct the signal coming from the sample. Of course, you would like to have much more than that, to get a really nice reconstruction. This is called the oversampling criterion, and it should normally be more than two along each direction of your data. But in the case of Bragg CDI you also have a third dimension. So how do you sample the third dimension, which you actually record by doing rocking curves? This is how the initial state looks for the first position along the rocking curve. The second one is basically a rotation of this triangle by some small angle Δθ, and you can see that the Ewald sphere, in the approximation of very small angles, just moves along this coordinate by δq. This δq is related to Δθ almost directly by this relation, and again Δθ should be proportional to the inverse of the object size: the larger the object, the smaller the step in Q space should be, and vice versa. This number is just an order of magnitude for the angular step in an average experiment — it's hundredths of a degree, and that's the precision you need from your scanning stage. So here's a summary for this part. Basically, to get nice data you need to find a micro-focusing (or non-focusing) beamline with a diffraction setup, between four- and six-circle diffractometers; you can choose between beam sizes with full coherence — normally between 200 and 1000 nanometers, but with the advent of new sources this range can be extended.
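The sampling numbers above can be made concrete. A rough sketch with illustrative values of my choosing (a 300 nm particle, Au(111), ~9 keV, 55 µm pixels — not numbers from the talk), covering both the rocking-curve step and the energy-scan variant:

```python
import numpy as np

# Oversampling >= 2 per dimension: reciprocal-space step delta_q < 2*pi / (2*D)
D = 300e-9                      # object size (m), illustrative
d = 2.355e-10                   # d-spacing of the measured reflection (m)
wavelength = 1.38e-10           # ~9 keV (m)
p = 55e-6                       # detector pixel size (m)

Q = 2 * np.pi / d               # momentum transfer at the Bragg peak
dq_max = np.pi / D              # maximum allowed q-space step

# Rocking curve: a step delta_theta sweeps delta_q ~ Q * delta_theta
dtheta_max = dq_max / Q                     # = d / (2*D), radians
print(np.degrees(dtheta_max))               # hundredths of a degree for ~300 nm

# A detector pixel subtends delta_q = 2*pi*p / (lambda*z): minimum distance z
z_min = 2 * p * D / wavelength
print(z_min)                                # metres

# Energy-scan variant: delta|Q| = Q * dE/E, so the same criterion sets dE/E
print(dq_max / Q)                           # relative energy step, a few 1e-4
```

This is where the "hundredths of a degree" stage precision comes from: the required angular step is simply d/(2D), so the bigger the crystal, the finer the rocking step.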
Use flexible detector positioning and a small pixel size to get nice sampling. Then you either do rocking-curve scans, which are fast but require alignment of the sample at the center of rotation, so that you don't lose your sample out of the beam, and you need stability for certain sample environments; or you choose energy scans for improved stability, where you basically change only the energy of the beam by tuning the undulator and the monochromator — but this requires more time, and you need to work on the data more. Some further requirements and limitations. Overall, only isolated objects can be reconstructed — isolated both in real and reciprocal space: we have to actually isolate a single Bragg spot from your sample, which can be in a powder-type preparation, or just alone on some substrate. You should be sure that it's isolated, and you'd better know the orientation of your crystal, so that you know where the reflection you're looking for is; that can be supported by laboratory measurements or electron microscopy, so you actually know how your sample is aligned with respect to the substrate. You need a material which scatters enough photons — five or six orders of magnitude of dynamic range would make perfect data to reconstruct; the lower it goes, the more difficult it gets. X-ray damage: obviously, your sample should survive the beam. Your particles should also be mechanically stable under the beam: when you put a high-flux beam onto your sample, a nanoparticle can basically rotate under the beam pressure or other effects, and then you lose your diffraction pattern out of your field of view and have to search for it again — this is quite a limitation of the technique sometimes. And you also have to ensure that the sampling is adequate.
After the data is collected, the interesting things are just starting. This is basically an example of the workflow you go through to actually get to publishable results: starting with the data acquisition, you go through cleaning of the data, the phase retrieval itself, a lot of corrections for different instabilities or partial-coherence effects, coordinate transforms, and then you have to convert everything into actual quantitative values. I'm not going to cover all of them, but I want to mention the most crucial ones related to the Bragg version of CDI. The phase retrieval was already described; let's look at an example on the gold standard of Bragg CDI, which is basically a gold nanoparticle — it scatters a lot and has a nice shape — and you normally measure the 111 reflection, which is normal to the surface. You use some initial guess with random phases, and then you run this iterative process, substituting the scattering amplitude, which looks like this, in reciprocal space, and applying constraints in real space, which are the shape, or positivity, and so on. I actually want to make a small remark on how you can guess the shape of the object right away, based only on the intensity — just a small hint. You know that the measured intensity is the Fourier transform of the autocorrelation of the wave field coming out of the object, so you can exploit this fact: you calculate the inverse Fourier transform of your measured intensity, and you get an object whose extent is twice the object size in each dimension, but it's already a very good approximation to the shape and size of your particle. So you can use this as an initial guess, and you will be very close in most cases. So, the phase retrieval is done. Your next problem is basically the coordinate transformation, because you measure the data in the coordinate system of your detector, which is not orthogonal.
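The autocorrelation hint can be tested in a few lines. A minimal 1D sketch (the object here is hypothetical; real Bragg CDI data is 3D, but the idea is identical):

```python
import numpy as np

# The measured intensity is the Fourier transform of the autocorrelation of the
# exit wave, so IFT(intensity) gives the autocorrelation: twice the object extent.
n = 256
x = np.arange(n) - n // 2
obj = (np.abs(x) < 25).astype(complex)          # hypothetical 1D object, 49 px wide

intensity = np.abs(np.fft.fft(obj)) ** 2        # what the detector records (no phase)
autocorr = np.abs(np.fft.fftshift(np.fft.ifft(intensity)))

support = autocorr > 0.01 * autocorr.max()      # threshold away numerical noise
print(int(support.sum()))                        # 97 px: 2*49 - 1, twice the object
```

Halving this autocorrelation support along each dimension gives the initial shape guess that the talk recommends feeding into the phase retrieval.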
You then come back to an orthogonal space, where you can align your object with the reciprocal lattice vectors of the crystal. To do this transformation back to the so-called laboratory frame, you need to know the sample and detector angles, the exact distance between the sample and the detector, and the angular or energy step of the scan. This is how it looks in the detector coordinate system — the object is basically skewed — but after resampling onto the laboratory system it becomes orthogonal, and it's much easier to work with, because we work with matrices in this technique. As an example, gold nanoparticles normally look like this — a very symmetric, faceted shape — but in the skewed coordinate system it looks something like that. This is the reconstruction of the data from before; after the re-mapping onto the lab coordinate system it becomes more symmetric, and it actually looks more like the nanoparticle we measured. Already here you can notice one of the interesting effects we can reconstruct with Bragg CDI: you see this gap between the two parts of the crystal. There is nothing wrong with the crystal itself — it's still a whole particle — but we miss this intensity in between, simply because that part is not scattering into the Bragg condition we measured: there is a twinning defect, so the crystal orientation is different for that part of the crystal. This is one of the sensitivities you can achieve with Bragg CDI, which is not possible to see with electron microscopy, let's say. Another important aspect to correct after the reconstruction is done: if your object is three-dimensional, you will get an additional phase shift, which has nothing to do with the actual phase structure of your sample.
It comes from the difference in the path lengths of the scattered beams: for X-ray beams coming in at the Bragg angle but at different incidence positions, you get different propagation distances of the X-rays inside your sample, and as a result you get an additional phase difference in the outcome. This is important to correct, and it can be done easily, since you know the morphology of your object — you just have to take care of it and compensate, otherwise the phase will give false information on the strain or displacement fields. After all the corrections, we have our nice phase reconstruction — these are the central slices of the gold nanoparticle — and now we can go towards the quantitative results. Basically, we can extract the displacement field from the phase, as explained before, and, as we know from elasticity theory, the strain — the amount of relative distortion of the crystal — can be calculated as the gradient of the displacement field in the respective direction. And if you measure at least three non-collinear reflections from your crystal, you can reconstruct the full strain tensor inside it. So in this case, this is the displacement, how it looked after the conversion, and since our reflection was the 111 pointing upwards, we compute the gradient along that direction and get the strain values, which are pretty low sometimes. We have a nice sensitivity down to 10⁻⁴, and this is a very important aspect of Bragg CDI: you can access very subtle changes in the strain within a three-dimensional object. So, a summary for this part: obtaining quantitative data is possible only if you try to learn as much as possible about your sample before you go to the beamtime. It's still not giving you a direct answer to all the questions, but you will know where to look for the answers.
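The phase-to-strain conversion described above is essentially a one-liner with numpy's gradient. A toy 1D sketch along the Q direction, with all numbers illustrative (a hypothetical linear phase ramp corresponding to a homogeneous 10⁻⁴ strain):

```python
import numpy as np

# Strain from the reconstructed phase: u = phi / |Q|, eps = du/dz along Q.
d = 2.355e-10                      # lattice spacing of the measured reflection (m)
Q = 2 * np.pi / d                  # scattering-vector magnitude (1/m)

dz = 5e-9                          # voxel size of the reconstruction (m)
z = np.arange(100) * dz
phi = 2 * np.pi * 1e-4 * z / d     # hypothetical phase ramp (radians)

u = phi / Q                        # projected displacement field (m)
strain = np.gradient(u, dz)        # eps = du/dz, dimensionless
print(strain[50])                  # 1e-4: the homogeneous strain we put in
```

With three non-collinear reflections you would repeat this for three independent Q directions and assemble the full strain tensor.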
You should know the orientation of your crystal, and you should know your reference values if you want some absolute measurements, like the exact lattice spacing. And you have to correct for effects which affect the phase distribution after the reconstruction; additionally, you should think about minimizing the numerical artifacts coming from the phase retrieval itself. Now, examples of successful applications of these techniques. Everything started with very simple objects: the first works showing the feasibility of the approach were done by Ian Robinson on isolated gold nanocrystals — like in this case, the reconstruction of a gold nanocrystal — and the approaches for reconstructing and getting the useful information out of the data were developed on them; it was not really aimed at real-world applications yet. More recently, a lot of work has been published on single nano-objects, in the field of nanowires or nanocomposites or the different nanostructures that are typically made, where you can access the strain fields during growth, or the influence of different growth approaches. One important example is that you can study defects and their dynamics in real, working materials, like a battery cathode, which also consists of nanoparticles: you can see how the dislocations inside a particle propagate during charge and discharge, so it's actually a functioning device. Or you can study the full three-dimensional strain tensor of simple objects, but under very harsh and extreme conditions, like the ion irradiation happening in a nuclear reactor, for example. You can also go in situ, and operando imaging is a highlight of the technique: you can combine these simple rocking-curve measurements with advanced sample environments and get access to the structure while the processes are happening. This example, for instance, is the surface oxidation during a catalytic reaction on palladium particles inside a dome-shaped reaction cell.
So all kinds of combinations can be done, and that's a very exciting, and of course sometimes challenging, field — it's still in the development phase and very interesting to work in. Some words I want to say about an extension of this technique, which I think will be covered later in this webinar series: the combination of Bragg CDI with ptychography. If you want to study the three-dimensional strain distribution inside an extended object, you basically combine the Bragg CDI approach with the scanning technique: you go with your focused beam through the extended object and collect a diffraction pattern from each position. I briefly want to mention the major differences to Bragg CDI: a dramatic increase in the measurement time, which can be hours, depending on your sample and the amount of signal you can record; a dramatic increase in the data volumes — these are five-dimensional data sets, which can reach gigabytes and gigabytes in size; stability issues start to play a major role because of the measurement times; and phasing of actual three-dimensional objects can be challenging, since you have to impose certain constraints, which is not easy to do. Often you have to use model-based approaches to explain what is actually reconstructed in your experiment. Now we're coming towards the end, and I want to mention the major facilities where this type of experiment can be done. I would like to start with the APS, the Advanced Photon Source in the USA, where most of the pioneering work was done — the 34-ID-C beamline; Ross Harder is one of the contacts at this beamline if you want to try something there. Then in Europe we have a bunch of beamlines doing this, but I will mention just a few: the ESRF's ID01; at PETRA III it's P10; here at MAX IV we are developing this technique at NanoMAX; and at Diamond Light Source in the UK you can also do it.
So an overall summary about Bragg CDI: we can image isolated crystalline objects in roughly the 50 to 1000 nanometer range, with sensitivity to any effects which induce atomic displacements. It's easy to combine with advanced sample environments, and most importantly, we can study buried structures or do experiments in situ and operando. On the negative side: we get radiation damage, and there are constraints on the sample — it's a relatively narrow sample-size range to get the maximum signal-to-noise ratio and the best results; there is a limit on the strain we can actually reconstruct, so it normally cannot be more than about 1% strain; and sometimes the data analysis can be quite complex. Speaking of the data analysis: the software is still in development in many groups around the world, but I will mention those which are available online. There is PyNX, developed at the ESRF — a nice Python package which can do almost anything, with nice implementations for high-speed computation. Phaser is a MATLAB implementation of the phase retrieval done by myself. It's also important to have packages for the post-processing, to actually analyze the data you reconstruct: there is the bcdi project by Jerome Carnis, and you can also use Post-phaser, which is part of the Phaser software, by myself. With that, I hope you learned something new today, and we come to the hands-on part. I wonder how many people managed to install something, but I would like to briefly show how to work with the Phaser software, which was shared with you — if you can follow, that would be really nice. You just start the program, either by running the installation you have or from the MATLAB code, and you should see a screen something like this: the startup screen, where you can specify some experimental parameters related to your experiment.
In this case they're loaded from an initialization file, or you can type them in manually, or you can just leave the standard values which you see there. To load the data, you press "open file" and choose the dataset. The data is opened, the data should be loaded, and you can have a look at the starting screen. There is a reciprocal-space window and a real-space window, where you can look at the isosurfaces of the Bragg intensity. In this case it was a particle with nice fringes — a nicely scattering particle. Following the approach we discussed in this webinar, you need to create the initial guess. Here we can choose the autocorrelation option — this support estimate is already chosen — and then you press "create object". The object is created, and right now it's nothing more than a block which is only twice the size of our actual object. Then you have tabs here, which are used for the different steps of the data analysis, including the reconstruction tab, and inside the reconstruction tab you have all the algorithms we discussed before, plus some combinations and patterns to use. But we will work with a very simple approach, alternating error-reduction and hybrid input-output algorithms. So let's start with this one. On your laptop or computer you can turn on the GPU acceleration here in the corner, and then it will go faster. Then you basically just run the HIO algorithm. If you have MATLAB you can see the updates — it goes very fast. The twin image forms here, but it is eliminated at some point, so one of the solutions is chosen. This is already quite a good estimate of our object, and you can finalize it with error reduction, let's say 50 iterations. Here there are parameters which you can tune and play with, and you will see that some of them work and some of them don't, so you can actually play with all the algorithms and parameters. This is the number of iterations to do.
The feedback parameter sets how strong the updates should be. The sigma interval is for the shrink-wrap algorithm, which updates your support at every step: you can start from larger values, so that the support moves more, and then go to smaller ones, or you can start from smaller values if you already have a good guess of the object. Then there is how many steps should be done for this shrink-wrap algorithm, and the threshold. You can basically keep these standard values; I chose them because they give a nice result in most cases. Then you run error reduction, which will finalize the reconstruction and give a nice shape. This is the amplitude, which is proportional to the electron density, and this is the phase. Already here you can see that there is a phase gradient, these wraps of the phase which are not physical, and they can be fixed. So this is how the reconstruction looks. You can go back to the three-dimensional view and compare reciprocal space and real space; you can rotate and see the connection between the fringes and the facets of the object. To remove the phase ramp, you just press these buttons, I think it is right- or left-click, and the phase ramp will be removed. A ramp means the data was not centered in the calculation window. Right now there is no phase ramp, and you can see a nicely reconstructed particle, with some missing intensity meaning the crystal quality was not that good. This slider here allows you to change the isosurface level, so you can go through the isosurfaces of the intensity and do your own exploration. These are the basic steps to reconstruct your Bragg CDI data. If you specify a path here for saving, you can just press Save Results in the reconstruction window, and then your results are saved and you can work with them further. This is just a short introduction for you to play with; for more details you of course need to invest more time.
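Two of the steps just described, the shrink-wrap support update and the phase-ramp removal, can also be sketched with numpy alone. The parameter names mirror the GUI controls, but this is an illustrative sketch, not the Phaser code; the FFT-based Gaussian blur is used here just to avoid extra dependencies.

```python
import numpy as np

def shrink_wrap(obj, sigma=3.0, threshold=0.2):
    """Update the support: blur |obj| with a Gaussian of width sigma (pixels)
    and keep everything above threshold * max of the blurred amplitude."""
    amp = np.abs(obj)
    freqs = np.meshgrid(*[np.fft.fftfreq(n) for n in amp.shape], indexing="ij")
    # Transfer function of a real-space Gaussian with standard deviation sigma.
    kernel = np.exp(-2.0 * (np.pi * sigma) ** 2 * sum(f ** 2 for f in freqs))
    blurred = np.fft.ifftn(np.fft.fftn(amp) * kernel).real
    return blurred > threshold * blurred.max()

def remove_phase_ramp(obj):
    """A linear phase ramp means the Bragg peak is off-centre in the
    calculation window; shift the brightest Fourier pixel to the origin."""
    F = np.fft.fftn(obj)
    peak = np.unravel_index(np.argmax(np.abs(F)), F.shape)
    shifts = tuple(-int(p) for p in peak)
    return np.fft.ifftn(np.roll(F, shifts, axis=tuple(range(F.ndim))))
```

Re-centring the intensity in reciprocal space is exactly why the ramp disappears when the data is properly centered, as mentioned above.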
And then with real data there are more challenges, which actually have to be programmed, and I am happy to help you with that or discuss it after the webinar. But with that, I would like to thank you for your attention. Are there any questions?

All right, it seems everyone is still trying to think; there is a lot of interesting information we have downloaded within these 45 minutes, and our processing power is probably still getting updated. So let me ask, let's get into a few not-so-technical details. What do you think could be done to improve the sensitivity of BCDI? As you mentioned, we can only see some fraction of a percent of strain or displacement. What could be done from the experimental side, from the beam quality, and also numerically, to improve this sensitivity?

The first and very straightforward way to improve the sensitivity is to measure higher-order reflections from your crystal. But for most crystals, the further you go in Q, the more problems you get with the scattered intensity; crystals normally scatter less at higher angles. For some object sizes you can also get into dynamical diffraction problems, but maybe for nano-objects that is not a big deal. The development of the new synchrotron sources will definitely lead us into this new field of high-energy Bragg CDI, where we use higher-energy X-rays, like 50 keV or so, and can easily access the higher-order reflections while getting the same amount of scattering, because both the flux and the coherence are improved at these new sources.
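The gain from higher-order reflections can be put in numbers: the reconstructed phase in BCDI is φ = Q·u, so for a fixed smallest detectable phase, the displacement sensitivity improves linearly with |Q|. A back-of-the-envelope sketch, where the gold lattice parameter and the 0.1 rad detectable phase are assumed illustration values:

```python
import numpy as np

a = 4.078e-10     # lattice parameter of gold in metres (assumed value)
phi_min = 0.1     # smallest reliably detectable phase in rad (assumed value)

for hkl in [(1, 1, 1), (3, 3, 3)]:
    d = a / np.sqrt(sum(i * i for i in hkl))  # cubic d-spacing
    Q = 2 * np.pi / d                         # length of the scattering vector
    u_min = phi_min / Q                       # phi = Q . u -> displacement sensitivity
    print(hkl, f"u_min ~ {u_min * 1e12:.1f} pm")
    # -> roughly 3.7 pm for (111) and 1.2 pm for (333)
```

Going from the (111) to the (333) reflection triples |Q| and therefore triples the displacement sensitivity, which is the trade-off against intensity discussed here.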
But also think about your object: sometimes we measure semiconductor materials, for example, and if you have the option to go to a higher reflection at the cost of some intensity, you should weigh that and find the balance between the amount of signal and the sensitivity you need, and then choose the higher reflection to measure. You can also think about the beam: with larger beams you do not have much of a problem with divergence, but if you go to very small beams you get additional apparent strain just from the fact that the k-vector has some spread at the focal point. So you would rather use weakly focused beams for this type of approach. You can also put your detector as far away as possible and fill the whole detector, with a very small pixel size, and you get more resolution there.

Thank you. Questions from the audience? I have one, if I may. Many people in the audience may not be really experienced with the technique of CDI, and one important requirement is that the particles are isolated. Since you mention particles isolated in real space and in reciprocal space, can you explain a little more how you can illuminate many particles and still isolate them?

Yeah, that is a good question. This is one of the strengths of this approach which we can sometimes exploit: many materials appear in the form of polycrystalline or multigrain ensembles of nanoparticles, for example cathode materials in batteries, or metals, or polycrystalline thin films like in solar cells. There, all the crystallites have slightly different orientations in real space; the crystalline planes are all randomly oriented. That gives us the opportunity to shine a relatively big beam on some area of the sample and just select one direction in reciprocal space.
So we put the detector in one position and basically search for those particles which scatter into this angle. We can see that the diffraction spot is basically coming from a single particle inside the bunch of particles, and it can be representative, because there is no mechanical or structural difference; the only difference is that this particular particle happens to scatter into the direction we selected, and we can study it. So that is isolation via reciprocal space, and you do not need big angles; it can be just a slight misalignment, because the further out you go with the detector, the more the angular separation increases. You can select one or another reflection and go with that. Or you think about your sample preparation upfront: you prepare your sample so that it is either isolated on a substrate, or you cut out the piece of material you are interested in and place it manually with focused-ion-beam techniques or some other manipulation, and so artificially create a Bragg CDI sample. That is more complicated, but it is worth it if you actually want to see the structure inside, or some buried features inside your sample, and extract them.

Thank you. All right, thanks a lot, Dima. Let me continue with the discussion. Let's go down a little into the reconstruction algorithms. There are a couple of them, you know, HIO, ER, and so on, and almost every group, almost every principal investigator and researcher, makes different choices: someone will take a given dataset from gold, go with one combination of ER and HIO and get a solution which is publishable, and the next person would do something with some relaxation and ER, and that is also publishable.
Do you think there is a way that, as a community, we could try to understand the range of sensitivity of every algorithm when it comes to phase variation and the continuity of the electron density? Basically, can you say a little about when some algorithms would fail and which ones would work a little better in situations of density fluctuations or, not phase shifts, but constraints on phases, things like that?

Which particular algorithms do you mean?

Yeah, so let's say I have a crystal that has dislocations. The manifestation of a dislocation in the wavefront would be a phase jump, so you are going to have regions in which the phase varies rapidly, between plus and minus pi over two, say, and then the density will also not be smooth. Do you then automatically go to another algorithm to help with this problem? Is there some guided approach that new users in the community should be aware of when doing the reconstruction, if they have prior information about the crystal, for instance that it was indented and is highly strained? What guidance would they have in choosing algorithms?

Yeah, this is a good point. I would say that as a community, at some point we have to arrive at some kind of, not necessarily machine learning, but an approach which can minimize this error for whatever object, based on the knowledge that this type of feature in the diffraction pattern comes from that type of object, so that we get closer to the actual solution. But in terms of algorithms, or rather approaches, as you said: if you know something about your object which is more complicated than a gold nanoparticle with a single dislocation, you have to feed all this information into your algorithms.
That is the first thing: if you know the shape, as precisely as possible, you should put it in as a constraint. If you roughly know the number of dislocations, or parameters like porosity, or the strain variations you expect, anything that is at least plausible, that will definitely improve the reconstruction in the end. On top of that, there are guided approaches with genetic algorithms; I even have a slide for this. You create a large number of random guesses, or initializations, which go through this genetic-algorithm approach, where you select the best candidates based on several metrics: the normal reciprocal-space error which we minimize between the data and the object, and a log-likelihood metric which also takes the noise into account, and it can take into account that your electron density should be smooth; these kinds of things can be enforced. The best candidates are then combined with new guesses, and it goes around and around; you go through several generations of mutations of these reconstructions, and in the end you get the best candidate. This kind of approach, not just averaging but choosing the best candidates for this particular problem, could be the way. But I think that is really a big question, why you choose this algorithm and not another. At least when I began, it was more like an art: you just sit and try to guess what is best. Nowadays, I think you have to choose using several criteria.

Yeah, thank you very much. Do we have one final question, or two more? I think we still have some time for questions. So let me ask, probably, one more.
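Before the next question: the guided, genetic scheme described above, refine a population of random starts, rank them, and breed every candidate with the current best, can be sketched in numpy. This is a toy illustration with assumed names and parameters, not any particular published implementation; plain ER is used as the inner refinement for brevity.

```python
import numpy as np

def er(obj, amplitude, support, n=30):
    """Plain error-reduction refinement used inside the guided loop."""
    for _ in range(n):
        F = np.fft.fftn(obj)
        obj = np.where(support,
                       np.fft.ifftn(amplitude * np.exp(1j * np.angle(F))), 0)
    return obj

def fourier_error(obj, amplitude):
    """Reciprocal-space error between a candidate and the measured moduli."""
    F = np.abs(np.fft.fftn(obj))
    return np.linalg.norm(F - amplitude) / np.linalg.norm(amplitude)

def guided_phasing(amplitude, support, population=8, generations=3, seed=0):
    """Refine a population, keep the lowest-error candidate, and 'breed'
    the others with it (element-wise geometric mean) each generation."""
    rng = np.random.default_rng(seed)
    pop = [support * rng.random(amplitude.shape) for _ in range(population)]
    for _ in range(generations):
        pop = [er(o, amplitude, support) for o in pop]
        pop.sort(key=lambda o: fourier_error(o, amplitude))
        best = pop[0]
        pop = [best] + [np.sqrt(best * o + 0j) for o in pop[1:]]
    return pop[0]
```

The ranking metric here is only the reciprocal-space error; as mentioned above, a real implementation can add log-likelihood and smoothness criteria to the selection.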
What is your perception of the future of BCDI? Where do you think we are heading in the global scheme of things? We are seeing papers that have reported five-nanometer spatial resolution in reconstructions, and applications of BCDI to image strain and elastic and ferroelectric domains in different types of scenarios. From your experience, where do you think we are going to sit in the synchrotron community within the next 10 to 20 years, especially with the emergence of XFELs? Should the BCDI community be pushing resolution to better than five nanometers, or should we be trying to move towards the scenario of having a black box, like electron microscopy: you give me a sample, and I tell you with high fidelity what it looks like? What is your feeling?

So there are basically two parts to this question. As you said, the black box: of course our aim for the next years, especially with the new sources, is to finally develop this black box. There are a lot of people who do not want to code and to understand the data analysis from the beginning, because it takes so much time to get into all the details; they want to just bring their sample and reconstruct. That can be done with the use of not only the iterative algorithms but also machine learning and these kinds of things, where the reconstruction can be done in seconds rather than minutes; it can basically be taught, and it gets better with experience and with the amount of data we measure. So, yeah, I think this is the way that we create a microscope at some point. Most of the things in this technique can be automated, and this is done at different beamlines to different extents; aligning the particles and the measurement itself can be fully automatic.
Especially if you know the reflections you are looking for, you just type in all the matrices for the transforms, and then it goes and measures; this is one of the ways to go, of course. But this is also a technique in its own right. I think we should really focus on the actual useful information which can be obtained from materials in their native or functional state, inside devices or under processes you can actually apply, because all these proof-of-principle experiments have more or less been done, even if we still struggle with some types of reconstruction. I think it should go that way. Plus, the time resolution is one of the important things to develop, so that we can actually see how these structures evolve during an experiment, and whether we can see the difference between this sample and that sample, prepared in different ways but measured with the same technique. And in general, as a last step, I actually showed in this work that statistical analysis will in the end be a major part of the work for the scientists: you get a lot of statistics from a single beamtime, because you can measure fast and analyze fast, and then you do not struggle with deciding whether this particle looks like a real particle or not; you actually analyze the effect you wanted to see. I think we should be heading this way, so that we actually grab the quantitative information out of it and analyze that. That would be the focus.

All right, thank you very much, Dima. Thanks. Do you want to say a few things and then take it away?

Yes, well, obviously thanks to Dimitri and Edwin for this session. We are basically giving you an appointment for the next talks: we already have one date next week, when Edwin himself is going to present some specific and fancy applications of CDI in Bragg geometry. And the week after, we have a new speaker; it is not yet public on the page, but it will be soon.
It is Alexander Björling from NanoMAX at MAX IV, who is going to present how to deal with the rotation of particles in the beam; so Dimitri first, and then Alex, very timely, will tell us about this. Later on, since Dimitri also spoke today about using scanning approaches in Bragg geometry, that is, ptychography in the Bragg case, we are very lucky to have two of the pioneers of this technique: Virginie Chamard from CNRS in France, and Stephan Hruszkewycz, who has also agreed to participate, so I am very happy about that. I am looking forward to the next talks. I ask everyone to keep looking at the LINXS webpage, because new seminars will come along. Also, based on a request from one of you, I have invited somebody who has done ptychography on biological samples. So it is important that you give us your feedback, so we can respond better to your requests and your needs. With this, I thank everyone, thank Dimitri, thank Edwin, and see you next time.