So as was alluded to, I will be talking to you primarily about the new coherent diffractive imaging beamline that we're building here at NSLS-II. Let me start with a quick introductory slide. We are building this new beamline with an emphasis on Bragg CDI, so I'll talk a little bit about Bragg CDI, since you may be slightly less familiar with that variation of the technique. The beamline is designed to support different versions of CDI — basically any geometry in which you could imagine doing CDI, this beamline should be able to accommodate. In this talk I'm going to give a quick review of coherent diffractive imaging, say a little about the distinction between Bragg CDI and forward-scattering CDI, and touch on the scientific motivation behind the beamline design. Because this is pretty much an expert audience, I'll go through that material fast; it's there only to give you a window into what we were thinking about when we designed this concept, so please feel free to stop me at any point to ask questions. Then we'll get to the material that I think is more interesting for this audience: comments on the development of the CDI design — the more technical details of how we went about the design process for this beamline — and a presentation of what is actually being built right now. You all know this, I think, but I'll include it for posterity: the point of coherent diffractive imaging is to collect an image of an object without using an imaging lens. We measure the intensity scattered by the sample. Light does the same thing with a lens, but the lens interprets the field as it passes through and gives you an image at the end; we don't have that lens, by design.
X-ray lenses are very difficult to make, and they have some not-so-nice characteristics that we can talk about if necessary. To mimic the behavior of a lens, we need to recover the phase of the field that corresponds to the intensity measured at the detector — in our case, in the far field. This is essentially an ill-posed inverse problem: we measure intensity, but we need the complex amplitude, so we have to find some way to recover it. We do that with a traditional iterative solver. Here's a quick demonstration. You start with a guess of what the object must look like based on its support constraint. You propagate that to the far field, replace the calculated amplitude with the measured amplitude, and keep the phases, which are incorrect. You propagate that back to the sample-space domain, apply the support, and then repeat the process over and over, developing an estimate of the phases that correspond to the intensity measured in the far field. Finally, you use that measured intensity plus the recovered phase to get back an image of the object. That is essentially how this works, at least the way we implemented it. I have a movie, which would make slightly more sense if you were all intimately familiar with the Brookhaven icon, but you'll see the process the algorithm goes through — this run is actually error reduction (ER) plus some solvent flipping, for those who want the technical details. The panel on the right is the intensity that was fed in, and you can see gradual progress toward a pretty realistic estimate of the object, at least in this simulation.
The real reason to show this audience the simulation is this: if you don't know what the true object actually looks like — if you don't know it's the little icon associated with the Brookhaven logo — you might be tempted to stop your iterative phase retrieval here, where the estimate first starts to be recognizable as an object. One of the challenges that I think we all face, and perhaps an area for future collaboration, is knowing exactly when to stop. Do you stop at this point in the reconstruction, or at this point, where you still have pretty dominant lines that are not real? Or do you let the thing run all the way to the end and get a pretty realistic estimate of the true object, where it recovers the three tones in the logo? One of the intriguing things about CDI from a facilities perspective — and certainly one of the things I've found intriguing about it throughout my career — is that CDI is inherently multidisciplinary. You have to draw on a large variety of areas of expertise to execute a CDI experiment effectively, so it's a very nice place for us to interface with other groups within the laboratory; that's the benefit of working at a facility, coming together to solve these interesting problems. So that was CDI as implemented. Again, I'm sure you know this: John Miao, David Sayre, Janos Kirz, and colleagues did the first experiments at the original light source here, NSLS, back when it was still operating. They measured a diffraction pattern and used that pattern plus some supplementary information to recover an image of the object — and that was almost 25 years ago now. Since then, the technique has been demonstrated in a number of geometries.
There's the forward-scattering geometry, where the initial demonstration was done, where a lot of the soft X-ray work is done, where a lot of the biology work has been done, and which is now very similar in geometry to a scanning-probe X-ray microscope or a ptychography experiment. There's the Bragg geometry, which is the primary motivation for the instrument we're talking about today, so we'll discuss that a little. And you can even do these things in grazing-incidence geometry — I'm sure others in the room can tell you all about how useful that can be, especially for looking at interfaces and the physical phenomena that occur there. Generally speaking, CDI provides a means to acquire images that depend primarily on the detector. You have to pay very careful attention to how well you sample the diffracted intensity, but there's no imaging lens, and for X-rays this is particularly powerful, because an aberration-free imaging lens is very, very difficult to achieve with X-rays. It can also give you near-wavelength-limited resolution: your resolution is, at least in principle, determined by the acceptance of your detector, which again means you don't have to worry about the numerical aperture of an imaging lens and the accompanying depth of field. Our CDI beamline — I apologize for the name; it's just the one that stuck — will support all of these geometries, and that was one of the design criteria for the beamline. One of the hallmarks of this instrument is that it's designed to do Bragg CDI in addition to forward-scattering CDI, so the science case is strongly centered on ordered materials, although it's not limited to them.
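The claim that resolution is set by the detector acceptance can be put in numbers: the detector half-width at a given distance sets a maximum scattering half-angle, and the achievable half-period resolution follows from it. The 9 keV photon energy and 75 mm detector half-width below are assumed, illustrative values, not beamline specifications.

```python
import math

def detector_limited_resolution(wavelength_m, det_halfwidth_m, distance_m):
    """Half-period resolution d = lambda / (2 sin(theta_max)), where
    theta_max is set by the detector half-width at the given distance."""
    theta_max = math.atan(det_halfwidth_m / distance_m)
    return wavelength_m / (2.0 * math.sin(theta_max))

if __name__ == "__main__":
    wavelength = 1.38e-10      # ~9 keV, illustrative
    half_width = 0.075         # 75 mm detector half-width, assumed
    for z in (0.5, 2.0, 10.0):
        d = detector_limited_resolution(wavelength, half_width, z)
        print(f"z = {z:4.1f} m -> resolution ~ {d * 1e9:6.2f} nm")
```

The trend is the useful part: bringing the detector closer (or making it larger) increases the numerical aperture of the "virtual lens" and improves the resolution limit, with no physical lens aberrations in the way.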
In the forward-scattering geometry, you basically do a tomography experiment to get your three-dimensional information: you rock the sample around an axis and look at the patterns coming off in the forward direction, in an experimental geometry very similar to X-ray tomography or indeed any kind of micro-CT imaging. In the Bragg case, you have a non-trivial Q — the momentum transfer is not zero; you're off the (000) peak, if you're used to thinking about crystallographic reciprocal space. Instead, you rock the sample with respect to the incoming beam, taking different slices as you go, and collect the three-dimensional intensity distribution in reciprocal space that way. Alternatively, you can change the magnitude of the k-vector by changing the energy of the incoming field, and that also lets you sweep through the third dimension. So you get the three-dimensional information in a different way than with forward-scattering CDI, but it is still there. And this works: this is the Nature paper from 2006, where we reconstructed a lead crystallite, looked at the phase of the reconstructed real-space estimate of the object, and interpreted that phase in terms of deformation within the material. That's really the reason you go to non-trivial Q: you want to get the deformation information out of the crystal. Again, I think you're probably familiar with these examples, but these are the kinds of experiments we were thinking about when we proposed this instrument. We want to use the fact that we can go to non-zero Q to image lattice defects in materials and to look at deformation fields. This is an example from Andrew at Argonne, who looked at a battery material and isolated a defect within it.
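For a sense of the energy-scanning variant just mentioned, here is back-of-envelope Bragg arithmetic: changing the photon energy changes the wavelength, and with it the Bragg angle, while |Q| for a given reflection is fixed by the d-spacing. The hc constant is standard; the Pb(111) d-spacing (about 2.86 Å) is just an illustrative choice.

```python
import math

HC_KEV_ANGSTROM = 12.398  # hc in keV*Angstrom

def bragg_angle_deg(energy_kev, d_spacing_angstrom):
    """Bragg angle from lambda = 2 d sin(theta)."""
    lam = HC_KEV_ANGSTROM / energy_kev
    return math.degrees(math.asin(lam / (2.0 * d_spacing_angstrom)))

def q_magnitude(d_spacing_angstrom):
    """|Q| = 2*pi/d depends only on the d-spacing, not the energy."""
    return 2.0 * math.pi / d_spacing_angstrom

if __name__ == "__main__":
    d = 2.86  # Pb(111) d-spacing in Angstrom, illustrative
    for e in (8.0, 9.0, 10.0):
        print(f"E = {e:4.1f} keV -> theta_B = {bragg_angle_deg(e, d):5.2f} deg")
    print(f"|Q| = {q_magnitude(d):.3f} 1/Angstrom")
```

This is why an energy scan sweeps the Ewald sphere through the vicinity of a fixed Bragg peak: the reflection stays put in reciprocal space while the geometry of the incident wavevector changes.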
The defect is typified by this very well-structured phase in the reconstruction, which tells you what kind of defect is present inside your particle. You can do the same thing with deformation: this is an experiment from Anna, who looked at metal catalysts while changing the environmental concentration of carbon monoxide, I believe, which changes the deformation inside the catalytic particle. And you can do that as a function of time — you can watch these processes evolve, which gives you a better picture of the phenomenon, which in turn allows you to design better materials that take better advantage of what you're observing. The last one is from Jesse Clark, who, at Ross's beamline 34-ID-C at Argonne, looked at a calcite crystal and monitored its growth and dissolution in situ. These typify the experiments we were thinking about when we wanted to develop the capabilities of the CDI beamline: we want to look at deformation, we want to look at defects, and we want to monitor things in time. The carry-on effects on the beamline design are that you have to provide a great deal of space in the vicinity of the sample, to allow the beam to be properly conditioned and to allow sample setup, and you have to maximize the flux you can put on the sample and use in the CDI reconstruction, so you can push the time resolution as far as is reasonable. In fact, the time resolution of these experiments will probably be determined mostly by the interaction of the beam with the sample — and that's where we want to be: we don't want an instrument limitation, we want a sample limitation. I put in this slide, which is again the experiment that Jesse led at LCLS.
We participated in it, and the reason it's here is that it was a laser-pump, X-ray-probe experiment set up at LCLS, and this picture gives you a good idea of how crowded the sample interaction region becomes. This is the image I show people when I motivate how much space I need reserved around my sample to make sure we can actually conduct a state-of-the-art experiment. Now, changing course a little bit: we've talked about CDI and the kinds of experiments and science we want to target with this instrument, and now I want to talk about the technique itself, because despite the fact that CDI was demonstrated over twenty years ago, we're still actively developing it, and it's something I'm very interested in doing myself. In this slide and the next couple, we'll go back and think about the assumptions made during the original demonstration of CDI, how those have evolved, and how they might evolve in the future. This slide is just a reminder that, generally speaking, we assume the field incident on the sample is fully coherent. That's a little problematic, both because the field is not fully coherent and because, if you can use partially coherent flux, you get more light — and that pushes you back into the regime where the X-ray interaction with the sample determines how fast you can do the measurement. That's where we want to be. A while ago now, we did some work in Melbourne — Lachlan Whitehead was the lead on this experiment, using 2-ID-B at APS — where we could manipulate the transverse coherence of the beam incident on the sample.
We then demonstrated that not only can we recover the coherent modes present in the illumination, but we can also use that coherent-mode decomposition to good effect inside the reconstruction. The little figure on the right demonstrates that if we do this coherent-mode-based propagation — the "new method" in this figure from the PRL — we can tolerate low beam coherence with good resilience. In fact, in the high-coherence case, where you basically match the beam size to the coherence length, you get almost the same reconstruction as in the low-coherence case. Whereas if you assume inside the algorithm that the propagation is fully coherent, weird things start to happen. Even in the high-coherence case, the algorithm struggles to understand where it needs to distribute the energy inside the estimate of the object, so you get a strange modulation in the amplitude of the field emerging from the sample. But that's not physical, and the algorithm doesn't know how to deal with it, because you're telling it the illumination is fully coherent when it isn't — there's an ambiguity. And of course, if you go to low coherence and still assume full coherence, things just don't work. (The Young's double-slit measurement shown here is what characterized our partial coherence.) You can do the same thing with temporal coherence. This was another experiment at 2-ID-B, where they opened up the slit determining the monochromaticity at the grating and allowed the entire undulator harmonic onto the sample. They demonstrated that if you know the spectrum, you can make little temporally coherent bins within the undulator harmonic — that's the panel on the right — and account for them in the propagation step of the algorithm to recover the actual object, which is shown in the middle.
Whereas if you just assume fully temporally coherent light — the panel on the left — it doesn't work, because you're not actually modeling the right physics. In addition to these there's the Fresnel CDI case, where you put your sample slightly off the waist of a focused beam and use the known phase of the illuminating field to stabilize — to bootstrap, if you will — the algorithm. This has a strong positive effect on the stability and resilience of the reconstruction, and it means that, in principle, you can take less data to get a low-resolution image, because in this case you have a holographic reference sitting in the middle of the diffraction field. These three examples are the ones I pulled out to give an indication of what we're thinking about for the future of the technique: how you can design an optical system that allows you to make interesting — by which, of course, I mean helpful — changes to the illuminating field at the sample. So we've talked a little about the requirements on the instrument based on the sample, and a little about the upside of developing the technique in the future, as demonstrated by previous results. Now we move on to the meat of the presentation, which shows what we've actually done: how we've taken that inspiration, used it to develop a new concept, and are now in the process of constructing it. The beamline I'm talking about is at NSLS-II, which came online — first light — in October of 2014, eight years ago.
It's a very bright source, so the coherence properties are very good, especially below about 15 keV, because it's a 3 GeV ring. There's a lot of opportunity at NSLS-II for using the brightness of the incident X-rays to develop both high-resolution images and time-resolved images of interesting samples. I'm focused on materials science, but we're not excluding other areas; we're very interested in pursuing as many things as are applicable to our source. So, the final concept for the CDI beamline — and again, I apologize: we started off with almost ten acronyms, and CDI was the only one that tested well. So the beamline has the same acronym as the technique, which is a little confusing, but it is what it is. The CDI beamline will be located at 9-ID of NSLS-II. The optical design takes full advantage of the brightness of the source and, interestingly, can change the X-ray spot size and beam properties, which provides unique capabilities for pushing technique development and therefore generating faster, higher-resolution, or more robust images of samples. The optics are shown schematically on the left. The source in our case is an undulator with an 18 mm period. The beam reflects from a double-crystal monochromator — there's actually a filter upstream, so the first crystal sees the filtered white beam from the source — with two crystal pairs, a Si(111) pair and a Si(311) pair. The monochromatic beam is then incident on a pair of cylindrically bendable mirrors; by changing the radius of each cylinder, you change the effective focus. The beam coming off those mirrors is then incident on a pair of KB mirrors.
The KB mirrors can move independently along the beam propagation direction — they can translate longitudinally. By making this four-bounce system, composed of two bendable mirrors and two translatable fixed-figure mirrors, we essentially have a zoom optical system. That zoom system allows us to change the spot size of the beam essentially without losing flux. There's a little flux loss to imperfections in the optics, but our simulations show very similar flux in both the large- and small-beam cases we've considered. The monochromatic focused beam is then incident on the sample, which sits on a goniometer, and the scattered light goes off to two detectors, which are independently positionable. This optical design allows us to tailor the properties of the beam at the sample to best meet the needs of the experiment, and the detector motion system, coupled with the two area detectors, allows us to look at various kinds of samples. The three sketches on the right show the paradigms we're thinking about. In the first, the incoming beam is incident on a single-crystal sample, and you measure two different diffraction peaks using the two detectors. In the second, the beam hits a heterogeneous sample, and you measure in the forward-scattering direction at the same time as you measure a Bragg reflection, getting information about both the amorphous and the ordered components of the sample. In the third, the incident beam hits a polycrystalline sample, and you measure the diffraction peaks from different crystallites within the sample.
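The zoom behavior of the bendable mirrors comes down to the grazing-incidence (meridional) mirror equation, 1/p + 1/q = 2/(R sin θ): changing the bend radius R moves the focus for a fixed source distance. The distances and the 3 mrad grazing angle below are assumed for illustration only, not the CDI beamline's actual parameters.

```python
import math

def bend_radius(p_m, q_m, grazing_rad):
    """Tangential bend radius that images a source at distance p to a
    focus at distance q, from 1/p + 1/q = 2 / (R sin(theta))."""
    return 2.0 * p_m * q_m / ((p_m + q_m) * math.sin(grazing_rad))

if __name__ == "__main__":
    theta = 3e-3   # 3 mrad grazing angle (assumed)
    p = 40.0       # source-to-mirror distance in meters (assumed)
    for q in (10.0, 30.0, 60.0):  # mirror-to-focus distances in meters
        r_km = bend_radius(p, q, theta) / 1000.0
        print(f"q = {q:4.0f} m -> R = {r_km:7.2f} km")
    # The demagnification q/p then sets the focal spot size for a
    # given source size, which is why moving the focus rescales the beam.
```

The kilometer-scale radii that fall out are typical of grazing-incidence X-ray mirrors, and they illustrate why a modest change in bender curvature is enough to relocate the focus by tens of meters along a long beamline.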
If your instrumentation is good enough, you could even imagine that these are neighboring grains within the sample: if you can localize two grains and find their diffraction peaks, you might be able to provide interesting information about intergranular forces. So that's the concept for the beamline. To actually execute that concept, we conducted extensive simulations — years-long simulation campaigns. We used Synchrotron Radiation Workshop (SRW), which is supported here by Oleg Chubar. The things that drove us in particular to a tool like this are the realistic physical-optics simulations — the wave-based propagation, which is really critical to understanding how our optical system works — and the newly minted ability to do coherent-mode propagation, an emerging technique and a facilitating technology that lets us take advantage of things like the partially coherent illumination and reconstruction I showed you earlier. Oleg is the primary author, I think we can safely say, and I've put the GitHub link up there for SRW. These simulations were used to determine the positions and range of curvatures of the bendable mirrors, as well as the figure of the fixed-figure mirrors, in our final optical design — there was a lot of going back and forth to figure out exactly what would work best. In the future we especially plan to use the coherent-mode decomposition, which I think is quite exciting: it should allow us to simulate experiments very quickly and accurately, which will help users choose the incident beam properties that are best for their samples, and we can use the same decomposition inside the reconstructions to give us a leg up on solving these problems. Now some technical detail.
Our targeted spot-size range for this beamline is from about 1 micron to about 10 microns, facilitated, as you'll see later, by a very long source-to-sample distance. These are the parameters on which our final illuminating beam properties — principally the shape of the beam and the coherence properties — depend: the aperture and curvature of the vertical prefocusing mirror, the aperture and curvature of the horizontal prefocusing mirror, and the apertures of the fixed-figure KB mirrors. In principle, we can also change the displacement of the KB mirrors along the beam, which is the final column. These simulations show the roughly one-micron beam focused at its waist: you match the size of the beam in the horizontal and the vertical, and you can match the degree of coherence to it with high accuracy. In this particular case, we expect about 10^12 photons per second — that will come down a little due to physical realities, but this is what the simulations show. We also have the capability to make the beam significantly larger. I should mention, incidentally, that our beamline scientist — who may actually be in the audience, though I believe he's formally on vacation — and Oleg are particularly responsible for these simulations. In this case we said, okay, now we want a ten-micron focus, so we changed the apertures and the curvatures of the mirrors along the beam path to make this larger spot size. Although the beamline has the capability to translate the KB mirrors, in this case we didn't translate them.
This was a trial run to see what we could get away with without moving the KB mirrors around, and you can see we get very close to matching the beam size and the degree of coherence. We lose a little flux, which is reflected in the fact that our vertical degree of coherence is much larger than the beam size. But it's an interesting way in which the optical design responds to the change of spot size, and it's principally driven by the fact that we have a very long beamline. We can talk more about these simulations if people are interested; I don't want to dwell on them. We are currently building the satellite end station — I apologize, my orange box got misplaced on the slide. There's a lot of construction over here: the satellite building that will house the end station is going to be this very large, barn-like structure. The top right is the groundbreaking ceremony, so these things are actually real — they come out of the computer and acquire physical presence. In the near future, the foundations for the building will sit right here beside Building 744 at NSLS-II, and that building will house the structure. If you take an X-ray view of the beamline, if you will, you can see the entire layout: the first optical enclosure, which contains the monochromator and the prefocusing mirrors; the B hutch, situated at about the midpoint of the beamline, which contains diagnostics that tell us whether the mirrors are doing what they're supposed to be doing and whether the beam is moving around; and then the very large satellite building at the end, which houses an equally large hutch.
The nitty-gritty of the beamline is that the distance from the sample, down here at the bottom left, to the source, up here at the top right, is about 100 meters — actually designed to be exactly 100 meters. This is in contrast to a traditional scanning-probe microscopy layout, where you change the final focused spot size by changing a secondary source aperture. We don't have a secondary source aperture in this design, and since we don't, we basically preserve the flux: this beamline is extremely efficient at propagating brightness. It's designed to take a coherence fraction and propagate that coherence fraction onto the sample. We have about a 1 to 10 micron spot-size range — of course we could make it larger; this is really the range over which we're confident we have good control of the beam properties — and we can vary the coherence within that range. I think that's all I wanted to say there, although I'll take questions on it. This is what the inside of the hutch looks like. Inside, we have this very large diffractometer, designed to bring an area detector within half a meter of the sample and to move it as far away as 10 meters. The horizontal scattering angle is about 125 degrees, and the vertical scattering angle is determined by the elevation of the detector, which is about one and a half meters. So if you go to half a meter from the sample, you get about a 70-degree vertical scattering angle; if you sit all the way back at 10 meters, you get about eight degrees, nine degrees maybe. This whole arrangement is designed to let you look at samples that are comparatively large: as the fringe spacing in your diffraction pattern becomes very small, you can move your detector away. In the next slide, I'll show you the region around the sample.
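Two of the geometric statements here can be checked with a few lines: the minimum detector distance needed to oversample the fringes from an object of size D, and the maximum vertical scattering angle for a 1.5 m detector elevation. The 9 keV energy and 75 µm pixel size are assumed, illustrative values.

```python
import math

def min_detector_distance(object_size_m, pixel_m, wavelength_m, oversampling=2.0):
    """Finest fringe period at distance z is lambda*z/D; require it to
    span at least `oversampling` pixels: lambda*z/D >= oversampling*pixel."""
    return oversampling * pixel_m * object_size_m / wavelength_m

def max_vertical_angle_deg(detector_elevation_m, distance_m):
    """Maximum vertical scattering angle subtended by a detector at a
    given elevation and distance from the sample."""
    return math.degrees(math.atan(detector_elevation_m / distance_m))

if __name__ == "__main__":
    wavelength = 12.398e-10 / 9.0   # ~1.38 Angstrom at 9 keV (assumed)
    pixel = 75e-6                   # 75 um pixel (assumed)
    for D in (0.5e-6, 2e-6, 5e-6):  # object sizes
        z = min_detector_distance(D, pixel, wavelength)
        print(f"D = {D * 1e6:3.1f} um -> z_min = {z:5.2f} m")
    for z in (0.5, 10.0):           # detector distances, 1.5 m elevation
        print(f"z = {z:4.1f} m -> max vertical angle = "
              f"{max_vertical_angle_deg(1.5, z):5.1f} deg")
```

Under these assumptions, a 5 µm object already wants the detector more than 5 m away, which is one way to appreciate the 0.5 to 10 m detector travel; and the angle arithmetic reproduces the figures quoted in the talk, roughly 72 degrees at half a meter and about 8.5 degrees at 10 meters.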
We have a very long working distance there, and the hutch itself is very big — about 24 meters by 34 meters, with very tall ceilings: 15 feet 6 inches is, I think, the minimum clearance, so about a five-meter roof height. The detectors will be able to move independently on these rails, and there will always be two of them. We're planning for detectors of up to 30 kilograms — something like a four-megapixel detector — and we've designed all these parameters assuming a pixel size in the 50 to 100 micron range. The propagation path between the sample and the detectors is designed to be helium-filled, rather than trying to deal with a vacuum flight path. This is the sample interface region. The sample sits here on an Eulerian cradle, with a very long working distance: about a meter and a half between the sample and the end of the mirror tank for the Kirkpatrick-Baez mirrors, which sits on the upstream side here. On the other side you see the same thing — the beam comes in from the left through the KB mirrors, then this very long propagation, again with the helium-filled path, and onto the sample. In that area we have plenty of room for cleanup apertures, ion chambers, and beam position monitors if we need them — and if we get clever in the future, perhaps even a scattering-based diagnostic to help us monitor the wavefront as we go. One of the challenges of this beamline, incidentally, is demonstrated by this figure: I think this fellow is about five foot six, but the beam height off the floor is 1.625 meters, which is a bit high. That's an outgrowth of the fact that we've got a vertically deflecting mirror in the FOE, the first optical enclosure, so there are some challenges. All right, nearing the end: what is the CDI beamline going to do, and when is it going to happen?
It's designed to conduct state-of-the-art CDI experiments and to facilitate next-generation coherent techniques through its unique optical design and very flexible scattering geometry. It will provide spot sizes on the order of 1 to 10 microns with good control, so we can go ahead and do experiments on real materials. It's being constructed as part of a Department of Energy Major Item of Equipment; the project, called NEXT-II, is currently underway, and first light is expected in January of 2025. So if anyone has exciting experiments in mind, we can start planning them now — I'm happy to talk with you about that. I also think we can collaborate on technique development; we can always use more resources. Those are very exciting opportunities. These are the people involved — the team within the NEXT-II project: myself, of course; our beamline scientist; our engineer; and Oleg, who has done a lot of the simulation work, especially helping us set up the simulations in the first place so we could experiment with them. These are our project staff, who are helping with the nitty-gritty of actually building the beamline: the control account managers; Eric, who is doing the building design; the team handling photon delivery to the end station; and the people providing infrastructure and controls support for the beamline. All right, this is my final slide. The technique encourages flexibility: it allows you to take advantage of the illumination conditions at the sample — in fact, of anything you happen to know about the sample, or that you control about the sample's environment — in recovering the image, or the estimate of the image. And the information you can derive is very interesting.
It's very interesting because you not only get a high-resolution absorption image; you also recover the phase, which gives you phase contrast without the degradation you would see in a phase-contrast measurement at, for example, a full-field imaging microscope using a Zernike phase plate or something like that. We think — well, the literature demonstrates — that CDI is applicable to a wide range of disciplines, and this beamline, as you can see from the design, particularly targets the growth of these fields into the time-resolved regime and, in the future, toward larger functional materials or particles within larger functional materials. So in the near future we are going to have this beamline; as I said, first light is expected in January of 2025, so you should start planning experiments now. If anyone has great ideas for small modifications, I would love to work with you to make those happen. And with that, I will open the floor. Thank you.