There are two things I want to cover in this talk. One is obviously the electron ptychography, and the other is a bit of information about the detectors that helped make this practical for us. The thing that's maybe a little different is that on a modern state-of-the-art electron microscope, you can quite easily get an image at about half an angstrom resolution these days. So I'm showing you here a direct image of a crystal of praseodymium scandate. What I want to talk about today is ptychography on that particular material, and on some other materials as well, where we've improved the resolution. In fact, it turns out that instrumental blur is now below 20 picometers, and I'll show you some of the fun things that lets us image. That includes effectively looking at the Debye-Waller factor in real space, which is of interest because if I go to a defect or a grain boundary, I can then see the Debye-Waller factor at an interface, something that would be very difficult to get from a diffraction experiment. The ptychography has also made it possible for us to look at other things; here I'm showing you a very pretty picture of a magnetic skyrmion, a magnetic texture imaged with ptychography. This is the induction field. The first thing I want to do is acknowledge a lot of the folks involved in this work. On the detector side, in particular, it's the detector group of Sol Gruner, which is now run by Julia Thom-Levy, and a lot of the technical skill comes from Mark Tate. Without the detectors, we wouldn't be recording any of the things we are recording; that's been about a 15-year collaboration with Sol on detectors. On the ptychography itself, my long-suffering postdoc Zhen Chen took the experimental data; he's now a professor at Tsinghua. The reconstruction work was by Yi Jiang, who I see is on the call.
He's at Argonne now, but he used to be one of Veit Elser's grad students. Then, as we got into more sophisticated forms of ptychography, we worked with Manuel and his postdoc Michael. And for a lot of these things you need nice samples to look at; as we go along, I'll point those out as well. So I'm going to start with a quick cartoon of an electron microscope, at least the way we run the microscope. The general idea is that I use a very high-brightness electron beam. I often comment to people about the brightness of these things: if you look at the sustained brightness, it's still higher than that of an XFEL. If you're looking at peak brightness, we can argue about whether it's better or worse, but in a time-averaged sense, for what you would need to actually record an experiment in a finite amount of time, we're definitely ahead. Certainly when we're involved in designing high-brightness sources, both for microscopes and for synchrotrons and colliders, the cold field emission guns are the highest-brightness sources we know of. They don't deliver enough current to be useful as a source for a ring, but they do produce a very high-brightness beam, and they generate plenty of electrons for a microscope. Our lenses are electromagnetic lenses, and that has some drawbacks which I'll touch on in a few minutes. Nevertheless, we're able to focus the spot down to smaller than the distance between atoms. So if I scan the beam around and put it on an atom column, I get very strong scattering from the nucleus. That gives me Rutherford scattering, which I can pick up at high angles with an annular detector. As I scan the beam around, I get a lot of scattering on an atom column, less scattering off the atom column, and the heavy atoms scatter more than the light atoms. That would be the typical imaging mode.
That's what you would get in a microscope, and on a modern instrument it shows up on your screen at a TV rate. Typically, we take the small-angle scattering, run it through a spectrometer, and look at the energy loss, and then you get something that looks a lot like X-ray absorption spectroscopy. It's formally equivalent, where our momentum transfer vector plays the role of the polarization of the X-ray. So it's a linear dichroism kind of mode by default, and you can play with the scattering to get a very messy circular dichroism; it doesn't do it as well as XMCD does. But the advantage, of course, is very high spatial resolution: I can do spectroscopy from a single atom column, which is what this map is showing you, with three different elements. This is my titanium L-edge, and if you look at that L-edge very carefully, you can see the fine structure, because on this instrument our default energy resolution is 0.3 electron volts on a 300-kilovolt beam. There are monochromated systems where, if you're running at 60 kilovolts, you can get down to about 5 millielectron volts on the beam and still retain atomic resolution; you have a lot less current, but it is in fact possible. A nice way to think of these machines is that each is a small linear accelerator with a small beamline attached. By the time you've put everything on the beamline, the costs are not quite the same, but as the machines get fancier, the costs are getting up there. The spectroscopy goes all the way down to energy losses of a few volts, where you get the valence spectrum, and if I keep going down on that especially monochromated instrument, I actually get to see the phonon modes. These kinds of atomic-resolution maps we've been able to do with a corrected instrument since about 2008. So that's the normal mode of running one of these machines.
If you look at a brief history of the resolution of microscopes: for diffraction-limited imaging, we more or less hit the diffraction limit for light well over a hundred years ago. Then the electron microscopes came along, and of course resolution depends on the wavelength of the radiation and on the numerical aperture. Early on, our lenses were terrible, with very small numerical apertures, so the way you got better resolution was to work on the wavelength, and with electrons you get a smaller wavelength by pushing up the beam energy. So the highest-resolution microscopes were running up to about 1.2 MeV. If you went above that, you'd get pair production in your sample, and you already had plenty of damage at 1.2 MeV: every paper started with a title like "A Radiation Damage Study of...". So this was maybe a little too much energy to put into things, even though the resolution wasn't too bad, and mostly we settled down to about 300-kilovolt electrons and were happy with a one-and-a-half to two angstrom resolution image. Now, I'll touch on why the numerical aperture was so small, because if you look at it, the wavelength of the electron is pretty good: we're working with about a two-picometer wavelength. So why is my resolution two angstroms? The problem, of course, is that the lenses have terrible spherical aberration and terrible, terrible chromatic aberration. One of the reasons we obsess about these very narrow energy spreads is that if they get any worse, chromatic spread becomes the resolution-limiting aberration for the microscope. On a synchrotron, an energy spread of one part in 10^4 or one part in 10^5 might be typical.
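As an aside, the roughly two-picometer wavelength quoted here is easy to check with the relativistic de Broglie formula. A minimal sketch (CODATA constants; the function name is mine):

```python
import math

def electron_wavelength_pm(kv: float) -> float:
    """Relativistic de Broglie wavelength of an electron accelerated
    through `kv` kilovolts, returned in picometers."""
    h = 6.62607015e-34      # Planck constant, J s
    m0 = 9.1093837015e-31   # electron rest mass, kg
    e = 1.602176634e-19     # elementary charge, C
    c = 2.99792458e8        # speed of light, m/s
    V = kv * 1e3
    # lambda = h / sqrt(2 m0 e V (1 + e V / (2 m0 c^2)))
    return 1e12 * h / math.sqrt(2 * m0 * e * V * (1 + e * V / (2 * m0 * c**2)))

# At 300 kV the wavelength comes out just under 2 pm -- yet uncorrected
# microscopes were stuck near 200 pm (2 Angstrom) resolution.
lam_300 = electron_wavelength_pm(300)
```

The relativistic correction term matters at these energies: ignoring it at 300 kV would overestimate the wavelength by over 10%.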
If we were at one part in 10^5, my spatial resolution would probably be about 10 or 20 nanometers. The aberration that kills us in an uncorrected instrument is the spherical aberration, which focuses high-angle rays much too strongly compared to what an ideal lens would do, so I end up having to work at a defocus, and obviously the bigger my aperture, the worse the spherical aberration. Of course, if I make my aperture too small, I end up with a diffraction-limited probe, and the smaller the aperture, the bigger the diffraction blur; that's the traditional diffraction limit for incoherent imaging, lambda over the aperture size. If you plot these two terms out graphically, you can very quickly see that at small angles the diffraction limit is the problem and at large angles the aberrations are the problem. There's therefore an optimal aperture size you want to work at, and for such an instrument that optimum sits at about one and a half to two angstroms of resolution. This held us back for a long time, even though we'd known since the 1930s what the problem was. Scherzer's theorem says that if I'm using electric and magnetic potentials in free space, the potential has to obey Laplace's equation, and the potential plays the role of the refractive index. So if I'm building a lens, my refractive index has the wrong curvature to make a lens in anything other than the paraxial approximation, and that curvature guarantees positive spherical and chromatic aberration for a cylindrically symmetric lens. The conditions under which you're stuck with these aberrations are: the lens is cylindrically symmetric, which is your normal round lens; and you need a real image of the object, because you have to put a detector somewhere to see what's going on.
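The trade-off described here, diffraction blur shrinking with aperture angle while spherical-aberration blur grows as the cube of the angle, can be sketched numerically. The Cs value and prefactors below are illustrative assumptions for an uncorrected 300 kV instrument, not the speaker's numbers:

```python
import numpy as np

lam = 1.97e-12   # electron wavelength at 300 kV, m
Cs = 1.0e-3      # spherical aberration coefficient, m (assumed ~1 mm)

alpha = np.linspace(1e-3, 20e-3, 2000)       # aperture semi-angle, rad
d_diff = 0.61 * lam / alpha                  # diffraction blur shrinks with alpha
d_sph = 0.5 * Cs * alpha**3                  # aberration blur grows as alpha^3
d_total = np.sqrt(d_diff**2 + d_sph**2)      # combine in quadrature

alpha_opt = alpha[np.argmin(d_total)]        # optimum sits where the two cross
d_opt = d_total.min()
# For these numbers alpha_opt is a few milliradians and d_opt lands
# around 2 Angstroms -- the classic uncorrected-microscope limit.
```

Changing Cs by an order of magnitude only moves the optimum resolution by a factor of ~1.8 (it scales as Cs^(1/4)), which is why correctors, not better round lenses, were the way out.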
The fields need to be static, and now that people are doing ultrafast experiments maybe you can relax that condition, but you'd have to do it on very, very short time scales, which still makes it very difficult. And you don't want charges on axis, because that gives me inelastic scattering and messes up the optics as well. The way people solved the problem was to give up on cylindrical symmetry. These are the aberration correctors, which are multipole optics, things you'd recognize from a synchrotron: a lot of quadrupoles and dipoles and hexapoles, and sometimes even octupoles. You can arrange these to produce a line focus with a negative aberration in one direction, while in the other direction it's just a line focus, so it doesn't matter. You can then correct the aberrations in a power series that doesn't converge: the low orders are fairly easy to do, the high orders are very difficult. At lower beam energies these aberration correctors very quickly led to a big improvement in resolution, but because the power series does not converge, the resolution today is not a lot different from what it was in, say, 2002. There have been small increases; it's gone from about 0.5 to 0.4 angstrom resolution over 20 years, but the initial jump was a huge one. The fact that you can get an instrument with sub-angstrom resolution as a standard commercial product, of which there are many hundreds around the world, has been a huge sea change for microscopy, and almost every top-end microscope has one of these correctors in it. This is the work of Harald Rose, Max Haider, and Ondrej Krivanek, two competing teams that developed the correctors, and they were awarded the Kavli Prize for it in 2020. Ruska was awarded the Nobel Prize for the invention of the electron microscope back in 1986.
He shared it with the inventors of the STM, for looking at atoms. So this is what a modern state-of-the-art electron microscope looks like. If you look at what you can do with these things: again, you might take your high-angle scattering, and that shows you where the heavy atoms are. You can also look at the elastic scattering at small angles, which picks up a different part of the form factor, one that for electrons is very sensitive to the valence electrons, and so it helps to emphasize light atoms. This is gallium oxide: the high-angle scattering shows you the galliums, the small-angle scattering shows you gallium plus oxygen, and then you can play games with color scales to see all the atoms in the material. So there's a lot of information in the angular distribution of the scattering. If you think about it, what's really going on is that there's a diffraction pattern in this plane over here, and we're just averaging up different parts of that diffraction pattern. The question, of course, is: if I actually kept the information in the diffraction pattern, could I get more out of my data? That's obviously what we want to do with ptychography. But for electron scattering, this is actually quite a tricky thing to do, because as you can imagine, an atomic-resolution image doesn't stay stable for very long, so you need a very fast detector; otherwise, by the time I've collected the data, things have drifted and vibrated all over the place. And electrons have a much steeper angular falloff compared to X-rays, so we get much less information at high scattering angles, and the detectors need a very large dynamic range in order to collect it. So I'm going to jump ahead on this one over here.
The idea is to collect all of this angular information and then do ptychography on it. I've got a picture here of Yi and Zhen, who are the main folks who did the work on this project. The black line is the history of electron ptychography, and what I want to point out is that the first person to propose ptychography as a practical algorithm was John Rodenburg, back around 1990. He and his then-grad-student Peter Nellist actually managed to do a little ptychography on an electron microscope in the early 90s, getting a one-dimensional line profile of a silicon lattice fringe at about three angstroms. The problem with ptychography at the time was that, because the detectors were not very good, it underperformed what you could do on a top-end or aberration-corrected instrument of the same era. So ptychography ran into two problems with electrons. One is that electrons scatter very strongly, so you were very limited in what kinds of samples you could look at. The second was that the aberration correctors were already doing pretty well, so without the detectors, ptychography wasn't competitive. It was very much an area left to true believers, of which there were maybe only two or three groups in the world, and ptychography instead migrated over to the X-ray community, where the instrumental demands were more compatible with the kind of data you had to take. It was really only when we got the detectors up and running that we were able to get results that people in the microscopy community paid attention to. Our first result, which I'll talk about, was at 80 kilovolts, and that was in fact a world record that beat out even the state-of-the-art correctors on 300-kilovolt electron microscopes. Then, when we went to 300 kilovolts ourselves, we were way ahead of anything else you could do by electron microscopy.
In fact, as I'll discuss, the resolution limit we hit is no longer the instrument; we're not limited by instrumental blurring at all. We're limited by the thermal vibration of the sample itself, so we're looking at the thing that smears out the shape of our atoms: the phonons. So that's a little of the history of what I want to cover today. We'd been thinking about this for a long time; our first attempt at a high-dynamic-range detector for electrons was to take one of Sol Gruner's early pixel array detectors and put it on our electron microscope. It was one of his test chips: 16 by 16 pixels and a 16-bit dynamic range. He was a little worried about radiation damage; at that point we didn't know what would happen when we put it on the microscope. So Tom Caswell, the undergrad working on the project, took it over to our synchrotron beamline, pulled out all the stops and all the apertures, gave it a full blast of the X-ray beam, worked out the dose, ran it for two hours, and saw no damage. But when we converted those numbers to what they would correspond to on the electron microscope, it was under a second of illumination. That reflects the difference in the number of particles you're dealing with between the two systems. So here was some very early imaging of a grain structure with this first detector. The problem, we found out very quickly, was that a 16-bit dynamic range wasn't enough bits for the kind of diffraction experiments we wanted to do. So we went back to work for another two years; I think we started the project about 2004 or 2005. And yes, Chris, I can put up a picture of the difference between elastic and inelastic scattering, but that's very much in favor of the electrons; even in raw particle numbers, our brightnesses are pretty high.
So we went back and designed for a much higher dynamic range for these diffraction patterns. As we went off to work on that, other detectors were developed for electron microscopes, and they caused a huge sea change in how people did cryo-electron microscopy. It was enough of a sea change that you got some amazingly hyped coverage; this is a News and Views article from Nature boldly declaring that X-ray crystallography was over and cryo-EM was going to take over. We usually describe this period as "ribosomania" because of all the ribosome structures people were solving. And, quoting from the article here, a lot of the worst offenders for this kind of hype were the new converts from the X-ray community to the electron community. I'm not sure the claim is quite true yet, although when you look at the slopes of the curves, the number of structures solved by cryo-EM is certainly catching up with X-ray crystallography. What did happen, at least, was that this was enough of a sea change that the folks who developed the cryo-EM methods received the Nobel Prize in Chemistry for their work, including Richard Henderson, who did a lot of the early analysis of why and how X-rays and electrons scatter in very different ways; in fact, for imaging at high resolution, electrons have a big advantage over X-rays. He also did a lot of work on the detectors. What this meant in the electron microscopy community is that these types of detectors became very well known and very well established. They're very good detectors for imaging, but they're not good detectors for diffraction, and what we needed was a detector for diffraction. The detectors used for cryo-EM are the monolithic active pixel sensors, or MAPS detectors; the Gatan ones are very popular in the US, the Falcon more popular in Europe.
The detector is basically a large piece of silicon with relatively primitive electronics on each pixel, but a lot of very, very small pixels. Because the detector is very thin, it's a transmission detector, and there isn't a lot of beam spreading as the electron goes through, so you can make the pixels small. You get a lot of pixels on the chip, but there's not a lot you can do within each pixel: you can count an electron, but you can't always count quickly, because you can't put sophisticated electronics in each cell. So the MAPS detector is a very good detector for biologists: it has a large number of pixels for a TEM image, and the biologists don't use a lot of electrons, because if you did, you'd cook the sample. These row-column-select readouts work fairly well for that, and you can count maybe 25 electrons per pixel per second, which is a very low count rate. The detector we use is a pixel array detector, sometimes called a hybrid pixel detector, and there are several designs built this way; it's maybe more common for X-rays, because those very thin transmission detectors really wouldn't stop an X-ray at all. So our detectors have a lot more in common with what you might find at a synchrotron. This is a thick piece of silicon; it turns out that electrons at the beam energies we use, roughly 60 to 300 kilovolts, behave a lot like X-rays in the 10 to 20 kilovolt range. In many cases we benefit from about the same pixel size; we have a bit more spreading, so a 150-micron pixel is a good choice if you don't want to worry about the point spread function, while 50 microns would be useless at high beam energies because the beam would spread too badly. But the beauty of these hybrid detectors is that you bond complicated electronics to every pixel of the sensor layer, so you can do much more sophisticated processing. So we have fewer pixels.
We have a faster readout, and because we can do more processing, we can use more sophisticated electronics to get a high dynamic range, which for us is very important. The sensor diode itself doesn't have to be silicon; it could be other materials as well. For the EMPAD system, as I mentioned, in order to get a signal with enough information to do the reconstructions we want, I need information at high scattering angles, but I have a lot of beam current at small scattering angles. So we need a high dynamic range, and we have to be able to handle a lot of current in each pixel. For that, Sol has a hybrid detector design where you can think of each pixel as a big bucket of charge, and you're scooping charge out of that bucket with a cup. As long as I scoop charge out quickly enough, it never saturates. That's actually fairly easy to arrange, to the point that at 60 kilovolts I can put nanoamps of beam current on the system. If you work it out as a count rate, even though it's not a counting detector, we can handle about a billion electrons per second, whereas a counting detector saturates at around one to ten megahertz. The idea is we remove buckets of charge, each of which might be about 20 electrons, or several hundred X-rays, and we just track the removals in a counter. At the end of the frame, we digitize the remainder left in the bucket, and that goes into a 12-bit analog output. So now I've got 30 bits of dynamic range into my counter, and when you work out the noise level on a single electron, our usable dynamic range is well over a million to one; 30 bits is in fact much higher than that.
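The arithmetic of the bucket-removal scheme is worth making explicit. A small sketch with illustrative numbers (the 20-electron bucket and 12-bit ADC come from the talk; the 18-bit removal counter is inferred from the quoted 30-bit total, and the real EMPAD details may differ):

```python
# Hybrid "bucket removal" readout arithmetic (illustrative numbers).
counter_bits = 18        # in-pixel charge-removal events recorded per frame
adc_bits = 12            # digitized remainder of the bucket at end of frame
total_bits = counter_bits + adc_bits     # -> 30 bits of dynamic range

electrons_per_bucket = 20                # one removal ~ 20 electrons
max_electrons_per_frame = (2**counter_bits) * electrons_per_bucket

dynamic_range = 2**total_bits            # ~1e9 : 1, comfortably beyond the
                                         # ~1e6 : 1 single-electron noise floor
```

The key point is that saturation is set by how fast charge can be removed, not by the well depth, which is why nanoamps per pixel become tractable.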
So if you look over here at how the various detectors do: the pulse-counting detectors really run out of steam, as I said, at around about a megahertz or so, and our pixelated EMPAD detectors, the first generation, are fine up to effectively about 10 megahertz; in fact, well over 10 megahertz. I'm plotting beam currents here; if I remember right, one megahertz is about a tenth of a picoamp. So we're sitting over here at close to 1,500 megahertz as a count rate, and up over here, like I said, that's close to about a billion, and in fact even higher rates than that now that we've tweaked it a bit. So this is the detector we're working with today: it has linear performance over an extremely large range of intensities, which is important for us. I should mention that these hybrid detectors are also very good for XFELs, because all the photons arrive in a single bunch, but the generated charge takes a little while to get out through the silicon. So when you send in an XFEL bunch, I get to read it out over maybe a couple of cycles, because some of the charge comes out slowly, and you can have several hundred photons in a, say, femtosecond pulse and record the thing without saturation. It works very nicely for a diffraction pattern and the central beam from an ultrafast source. With counting detectors, one of the things that causes headaches, which I want to quickly point out, is that when you get close to the dead time of the detector, your counting goes non-linear, and when your counting goes non-linear, your DQE basically collapses to close to zero. And often we want to be running the detector in exactly this range over here.
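The dead-time nonlinearity mentioned here can be sketched with the standard non-paralyzable counting model (my choice of model and dead time, for illustration; the DQE collapses because the slope of measured vs. true rate, which carries the signal, goes to zero near saturation):

```python
def measured_rate(true_rate_hz: float, dead_time_s: float) -> float:
    """Non-paralyzable counting-detector model: the observed rate
    saturates at 1/dead_time as the true rate grows."""
    return true_rate_hz / (1.0 + true_rate_hz * dead_time_s)

tau = 1e-6  # ~1 MHz-class counter (assumed dead time)

# Well below saturation the response is nearly linear ...
low = measured_rate(1e4, tau)    # ~9,900 counts/s out for 10,000 in
# ... but near the dead-time limit it collapses: almost no sensitivity
# to the input rate, hence DQE ~ 0.
high = measured_rate(1e8, tau)   # pinned near 1e6 counts/s
```

An integrating detector like the EMPAD sidesteps this entirely because it never has to resolve individual arrival times.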
That's what limited us with the pulse-counting detectors. Having solved that problem with the pixelated detector, the next detail we had to think about was the frame rate for imaging. This is important because, to do ptychography, I need to record lots of diffraction patterns, and I need to record them quickly, or else I run into instabilities of the instrument. So we came up with a metric we call the maximum usable imaging speed, which tells us how fast I can run the detector if I need a minimum signal-to-noise ratio. Just to form an image, I need a signal-to-noise ratio of about five, which would be somewhere over there, and at that level, depending on the type of detector, a lot of the pulse-counting detectors do just fine for taking an image. But for a diffraction pattern, particularly a diffraction pattern for, say, magnetic imaging, I need a much higher signal-to-noise ratio, and there the pulse-counting detectors do not do well, while the EMPAD detectors, the second generation in particular, handle it beautifully at the speeds we want: we want to be running at one to ten kilohertz, and the new detector lets us do that. So that's our maximum usable imaging speed. It's an important concept, because often people quote dynamic range, and the problem with dynamic range is that if it takes you 24 hours to fill it up, it's not going to be useful for ptychography. Here is an example of some diffraction patterns recorded with the detector: you can see that at 100 microseconds the diffraction patterns look quite usable. This is at small angles, and then at high angles you can see things like the HOLZ lines and the Kikuchi bands. And even at a millisecond, this is a beautiful image, and there's a lot we can do with that.
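The spirit of the "maximum usable imaging speed" metric can be illustrated with a shot-noise-only sketch: a target SNR fixes the electrons needed per pattern, and the beam current then caps the frame rate. This is my simplification, not the published definition (which also folds in detector noise and saturation):

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C

def max_frame_rate_hz(beam_current_A: float, target_snr: float) -> float:
    """Shot-noise-limited frame-rate ceiling: Poisson SNR = sqrt(N),
    so each pattern needs at least target_snr**2 electrons."""
    electrons_needed = target_snr**2
    electrons_per_s = beam_current_A / E_CHARGE
    return electrons_per_s / electrons_needed

# With 1 pA of beam and SNR = 5, ~250 kHz is allowed in principle;
# real detector noise and saturation push the usable speed far lower,
# which is exactly what the metric is meant to capture.
rate = max_frame_rate_hz(1e-12, 5.0)
```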
A simple example: from this sort of data set you can map strain and polarization. This is in a ferroelectric, done at 100 microseconds per pixel, so these maps come out in real time in front of your eyes. So we have the diffraction data we need for the ptychography. The detector was a very compact design that didn't require huge amounts of data bandwidth or anything like that, which makes it very manageable, and we've since commercialized it; you can now buy them on Thermo Fisher (FEI) microscopes. So now we're at the point where we can think about doing ptychography, or diffraction more generally. Here is a cartoon with the pixel array detector: here's a 2D material, and we fire the electron beam through it. I collect the diffraction pattern, which you might get in a millisecond or a tenth of a millisecond, and it looks like a perfectly good diffraction pattern. I record a diffraction pattern at every position as I scan the beam around, and if I then integrate up the intensity of the central beam, I get something that looks like a bright-field image. You can now see there's a monolayer material sitting on a silicon nitride support. If I look at a Bragg beam, you can tell this monolayer is maybe made up of two different materials. And if I actually measure the position of each diffraction peak, I can produce a map of the lattice constant with a precision of under a picometer, and then a strain map with a strain precision of about 0.1%, and you can see very nicely how these two different materials, a single monolayer where the two phases are intergrown with each other, come out. So this is a diffraction pattern with a moderately parallel beam, by which I mean it's about a milliradian wide and maybe about a nanometer in diameter.
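The strain mapping described here boils down to tracking Bragg-peak positions. A minimal sketch of that relation (function and numbers are mine; real pipelines fit peak centroids to sub-pixel precision first):

```python
import numpy as np

def lattice_strain(peak_positions_px: np.ndarray,
                   reference_px: float) -> np.ndarray:
    """Fractional strain from measured Bragg-peak radii (in pixels).

    Reciprocal-space peak distance goes as 1/d, so a peak that moves
    outward corresponds to a *compressed* lattice:
        strain = d/d0 - 1 = g0/g - 1
    """
    return reference_px / np.asarray(peak_positions_px) - 1.0

# A peak 0.1% farther out than the reference implies ~ -0.1% strain;
# 0.1% closer in implies ~ +0.1% strain.
peaks = np.array([100.0, 100.1, 99.9])
strain = lattice_strain(peaks, 100.0)
```

Sub-picometer lattice-constant precision then comes down to how well the peak centroid can be localized, which improves with counts, hence the emphasis on dose-efficient, high-dynamic-range detection.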
So the obvious question is: what happens to the diffraction pattern if I make my probe smaller and smaller? The spatial resolution of the system, of course, is the wavelength over the angle, so as I make the convergence angle bigger, my probe size in real space gets smaller. But what happens, of course, is that my Bragg discs get fatter and fatter, and eventually those discs overlap each other. In fact, the point where the Bragg discs overlap is the point where the spot size becomes smaller than the lattice spacing, and that's when you actually get to form an image; that's more or less the condition for forming an image. So this brings us to ptychography. If I have a diffraction pattern where the discs overlap, what I'm looking at is this: a wave function went into the sample, an exit wave came out, and I'm measuring the square of the exit wave in momentum space on my diffraction pattern. Our detector is sensitive enough to measure not only the central beam but all of the very weakly scattered beams as well. That's important, because what I'm showing you here on the right is the full diffraction pattern from a piece of MoS2. As I scan the beam around, what happens, and you can see it in the pattern on the side here, is that where two discs overlap, there's a phase shift between the central disc and the scattered disc. That phase shift depends on position through the Fourier shift theorem, e^{ik·Δx}. So as I move the beam around, the phase information in the overlap region shifts, and you can see it going from bright to dark in the lower animation here. In the upper animation, you can see there's information inside the central disc as well. The easy way to think of that, and we'll come back to it later, is that it's literally a map of momentum transfer.
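The bright-to-dark oscillation in the disc-overlap region follows directly from the Fourier shift theorem: translating the probe by Δx multiplies each scattered beam by a phase ramp, so the two-beam interference in the overlap oscillates with the lattice period. A minimal two-beam sketch (amplitudes and spacing are illustrative assumptions):

```python
import numpy as np

def overlap_intensity(dx: float, g: float, phi0: float = 0.0) -> float:
    """Intensity where the central disc and a Bragg disc at spatial
    frequency g (1/m) overlap, vs. probe position dx (m).
    Shift theorem: the Bragg beam picks up exp(i 2*pi*g*dx)."""
    return float(np.abs(1.0 + np.exp(1j * (2 * np.pi * g * dx + phi0)))**2)

g = 1 / 3.1e-10                               # ~3.1 Angstrom lattice spacing
bright = overlap_intensity(0.0, g)            # constructive: 4x a single beam
dark = overlap_intensity(3.1e-10 / 2, g)      # half a period later: dark
```

Scanning dx through one lattice period sweeps the overlap through one full bright/dark cycle, which is the modulation visible in the animation.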
It's showing you the probability current flow of the electrons, and this momentum transfer can be related back to the force, or the potential, acting on the beam. The bright spot is basically all of the electrons being attracted to the nucleus of the atom, and as the beam moves around, the bright spot is literally pointing towards, and telling you where, the nearest nucleus is. So this is the information we want to work with when we do ptychography: we want to solve for the phase of this diffraction pattern, and as a function of position there's rich information to be extracted. The payoff is that normally our numerical aperture would be this big, and that would set the resolution; if I can collect counts out to the largest scattering angle on the microscope, I can look for a resolution improvement of maybe five times, because that's my largest scattering angle, in theory at any rate. So that's the idea behind the ptychography. The way to describe this is as an optimization problem with a forward model: I have to produce a model of the diffraction pattern and a model of the probe that produced it. So I need a potential to scatter from and a probe to come in with, and then I compare the result to my measured diffraction pattern. As long as I have a model of what's going on and I vary that model, I have a chance of minimizing until I find the right object and probe, and I have enough redundancy because I have enough overlapping data from stepping the probe around. This is what you would do in ePIE, which is a fine starting point for these things, although, working with Manuel, we went to much more sophisticated methods later on. So that's the general philosophy: as long as I can model a diffraction pattern that matches the experiment, I'm in good shape to figure out what the object was.
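Since ePIE is named here, a minimal textbook-style sketch of a single ePIE update may help; this is the generic published algorithm (Maiden and Rodenburg's extended PIE), not the group's actual reconstruction code, which used more sophisticated maximum-likelihood solvers:

```python
import numpy as np

def epie_update(obj, probe, pos, measured_amp, alpha=1.0, beta=1.0):
    """One ePIE update at a single scan position (minimal sketch).

    obj          : complex object transmission function, 2D array (modified)
    probe        : complex probe, smaller 2D array (modified)
    pos          : (row, col) top-left corner of the probe on the object
    measured_amp : measured diffraction amplitudes (sqrt of intensities)
    """
    r, c = pos
    h, w = probe.shape
    o = obj[r:r + h, c:c + w]
    psi = o * probe                            # multiplicative forward model
    Psi = np.fft.fft2(psi)
    # Enforce the measurement: keep the phase, replace the modulus.
    psi_new = np.fft.ifft2(measured_amp * np.exp(1j * np.angle(Psi)))
    diff = psi_new - psi
    # Gradient-like updates of object and probe:
    obj[r:r + h, c:c + w] = o + alpha * np.conj(probe) / (np.abs(probe).max()**2) * diff
    probe += beta * np.conj(o) / (np.abs(o).max()**2 + 1e-12) * diff
    return obj, probe
```

Looping this update over all (overlapping) scan positions, many times, is what recovers both the object's phase and the probe; the overlap between neighboring positions supplies the redundancy the speaker mentions.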
So here's a quick early demonstration of this on a 2D material, molybdenum disulfide. Conventional annular dark field imaging is fairly noisy, because there aren't a lot of electrons that scatter to high angles. The spatial resolution is twice the aperture size; you can see that in the Fourier transform. Simple differential phase contrast imaging with a quadrant-type detector, or center of mass, gives you more counts but no better resolution. If I do ptychography from the central beam only, I'm also not going to do better than the diffraction limit of the microscope, and most of your electrons are in the central beam. So this is why early ptychography underperformed normal diffraction-limited imaging. But once I can collect electrons at high scattering angles, I can start using information outside the central beam, and you can already see the resolution of the image getting better. And the higher the angle I go to, the better the resolution of the image. Finally, over here, we collected out to four times the scattering angle. Now my resolution should in theory be eight alpha, but we don't have enough counts at the highest angles, so we're stuck at five alpha. That's okay, because the microscope diffraction limit was one angstrom, and now we're down to 0.39 angstroms. What do you get when you do imaging at these kinds of high spatial resolutions? It now becomes very obvious that there's a missing sulfur atom over here. If I wanted to look at a single crystal, I would just take a diffraction pattern. But if I want to study defects, I need a real-space image. So over here is a defect, and you would have to have been very brave to have spotted it in the original image. But over here it's extremely clear that it's exactly half the intensity of the other atoms. So this was our first paper, in 2018, when Yi was still a student.
Then the next thing we wanted to do was measure the resolution in real space, because my electron microscopy colleagues don't always understand Fourier transforms. So we picked a twisted bilayer of MoS2, where some of the atoms are 1.7 angstroms apart and others are practically on top of each other. Look at the traditional electron microscope image: it's a little blurry except where the atoms are on top of each other, and the rest of them are a bit hard to figure out. Here's what ptychography does: it pulls them out beautifully. And if you look at the atoms that get closer and closer together and figure out which ones no longer look like a round dot, that gives you the resolution in real space. Again, it's about 0.4 angstroms. At the time that was the highest resolution image in the world, and so we got into the Guinness World Records for the highest resolution microscope. I think previously there was an AFM at 0.7, from one of my friends in Augsburg, and they got pushed out. So, okay, we're now at 0.4 angstrom resolution, but we're doing it in a very thin sample. Electrons scatter really strongly, so this is fine for 2D materials, but what about everything else? For 2D materials, the strong phase approximation works really well. This is even simpler than the transmission model: the exponent is a purely imaginary number, so there should be very little absorption. The only reason to have an absorption term in there is that maybe some electrons didn't make it onto the detector because they were backscattered, but practically every electron that enters the sample comes out the other side. There's very little in the way of absorption, and in fact very little in the way of backscattering. So this is the strong phase approximation.
It has a couple of problems for electrons. What it neglects is the probe: in the strong phase approximation, we're not allowed to change the shape of the probe. You can see the square of the exit wave function is the same as the square of the input wave function. So this neglects beam spreading, and it also neglects channeling of the beam itself. It turns out that if I were doing this with x-rays, the beam spreading would be a pain, but you wouldn't really have to worry about channeling and reshaping of the beam, which for electrons is the big problem. Making the strong phase approximation is effectively treating these two potentials as equivalent: the three-dimensional details of the potential are lost. In terms of the beam spreading, this has been well calculated; you can find it in both the x-ray and the electron literature. For our parameters you end up with about a seven nanometer depth of field, and if the sample gets thicker than that, you're going to run into trouble. So for thick samples, and for an electron microscope everything is thick except 2D materials, we had to go to multislice ptychography, and you can see our beam does change shape significantly over a typical sample. The multislice algorithm divides the sample up into a bunch of slices of potential, treats each slice in the strong phase approximation, then propagates and scatters again. This is a very standard technique, Cowley and Moodie from 1957, and it's been well used in the electron microscope community. The first of these algorithms proposed for multislice ptychography was Andy Maiden's, back in 2012. The nice thing about the algorithm is that it gives you a whole bunch of potentials, one per slice, so you've got some 3D information, and it gives you the probe. And this has been well used in the x-ray community, especially by Manuel's group, where you can look through fairly thick samples.
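The slice-and-propagate loop can be sketched in a few lines. This is a generic Cowley-Moodie-style multislice written by us for illustration only; the grid, wavelength, and slice spacing are made-up parameters, and the real reconstruction codes of course do far more.

```python
import numpy as np

def multislice(psi, slice_phases, dz, wavelength, dx):
    """Propagate a wave through a stack of thin phase gratings.

    psi: incident complex wave (n x n); slice_phases: list of 2-D phase
    gratings (projected potential times interaction constant); dz: slice
    spacing; dx: real-space pixel size. All lengths in the same units.
    """
    n = psi.shape[0]
    k = np.fft.fftfreq(n, d=dx)                    # spatial frequencies
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # Fresnel propagator over one slice spacing dz (unit modulus)
    prop = np.exp(-1j * np.pi * wavelength * dz * (kx ** 2 + ky ** 2))
    for phase in slice_phases:
        psi = psi * np.exp(1j * phase)             # strong-phase transmit
        psi = np.fft.ifft2(np.fft.fft2(psi) * prop)  # propagate to next slice
    return psi
```

Note that both the transmit and the propagator have unit modulus, so the total intensity is conserved: the multislice redistributes electrons, which is exactly the multiple scattering a single-slice model cannot capture.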
The problem is that for the x-rays you didn't need a lot of layers, and the layer distance was comparable to the depth of field. But for the electrons, the multiple scattering and the channeling that reshapes the beam mean you need a lot of layers. In many cases you need almost as many layers as there are atomic layers in the sample, and it's very strong scattering. So the normal algorithms didn't work for us, and it required additional regularization to stabilize. This is the regularization that Yi and Zhen were working with, which was effectively to recognize that the ptychography has depth information, but it's missing information in the vertical direction. If you've done tomography, this is literally the missing wedge. Physically, if you look at the propagator at small radial distances, there's very little phase shift in the z direction, so it's hard to get a good phase estimate, and you don't want to weight that information very heavily. The regularization comes in to recognize this missing wedge of information, and that stabilizes the reconstruction. That was essential to get it to work. For the multislice here, we were using Manuel's code modified by Yi, and here is a simulation of praseodymium scandate. It's a fairly strong scatterer on the praseodymium atoms. The single slice was going to work just fine if things are thin, but even at 15 nanometers thick it has failed, and it fails miserably at 30 nanometers. The multislice algorithm is robust throughout. That's a big deal, because this is the multiple scattering problem that was first formulated by Hans Bethe about 90 years ago. The multislice algorithm has taken care of that multiple scattering and given us a phase output that is effectively linear with thickness. So now you can look at depth information in the material.
And this works up to about 30 nanometers of sample, which is good because most TEM samples are 10 to 20 nanometers thick. The other thing that was very important was the partial coherence, which we really had to get right. Until we treated the partial coherence with mixed states, something originally formulated by Pierre Thibault, we could not get a good reconstruction. In fact, we were lucky we'd dealt with partial coherence in 2D materials before, or we would have given up on the multislice, because without partial coherence it didn't work. So there were a lot of little details we had to get right before the 3D reconstructions worked. But after a lot of hard work iterating over algorithms and data sets, with Zhen stuck in China thanks to COVID travel restrictions and able to work very heavily on these data sets, we got our first reconstruction. And it looks pretty good. When you look at the Fourier transform, there's a lot of information out to some very high scattering angles. Now, the fun thing is when you ask what this should have looked like. So here is the static atomic potential, the electrostatic potential for static atoms, and that's what it would look like. When I start to put in thermal vibrations, these potentials blur out. And when we come to the experimental data and compare my experiment to the simulation, they look practically the same. In fact, in our Science paper from last year, where we analyzed this result, we have a lengthy section actually measuring the broadening and finding that our experimental blur, the residual we can't account for by thermal vibrations, is about 16 picometers on the heavy atoms. So that's effectively our instrumental resolution, and as you can see, the limitation to the system is not the experimental blur, it's in fact the phonons themselves. That's the first time an imaging system has gotten to that kind of point.
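The mixed-state treatment credited here to Thibault replaces the single coherent probe with a set of mutually incoherent probe modes, and the modeled diffraction pattern becomes an incoherent sum over those modes. A minimal sketch of that forward model, with our own hypothetical names:

```python
import numpy as np

def mixed_state_intensity(obj_patch, probe_modes):
    """Far-field intensity for a partially coherent probe.

    obj_patch: complex 2-D transmission of the illuminated region;
    probe_modes: list of complex 2-D arrays, treated as mutually
    incoherent, so their diffracted intensities add (not their amplitudes).
    """
    return sum(np.abs(np.fft.fft2(obj_patch * p)) ** 2 for p in probe_modes)
```

Because intensities rather than amplitudes are summed, the interference fringes in the overlap regions wash out in just the way a partially coherent source does, which is why leaving this out makes the reconstruction fail.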
And in fact, if we zoom in a little, you can see some very interesting things happening. What you'll notice is that the oxygen atoms are elongated. This is the bond direction: the bond goes scandium, oxygen, scandium, it's two octahedra that get to rotate like that, and the soft direction perpendicular to the bond is stretched out, while along the bond direction it's quite stiff. Similarly, for the atoms over here the bond direction is there, and they're elongated the other way. So that's a very nice demonstration of seeing the thermal blur in the material. And because we don't require a periodic system, if I get up to an interface or a grain boundary, I should in theory be able to see how atoms rattle around at interfaces and boundaries, and that's something we're very much looking forward to getting results on soon. So that's our ptychography for imaging. I just want to point out that because the ptychography uses every electron on the detector, everything that was scattered through the material, it's actually a very dose-efficient imaging method, more dose-efficient than all the other imaging methods in the microscope we can think of to date. We're showing you over here information limit versus dose, and then the super-resolution regime where I'm doing better than the diffraction limit. You can see there's a different scaling: the scaling here is one over square root of N, and the scaling over here is actually more of a logarithmic scaling, so if I want to improve my resolution, I have to put in a heck of a lot more dose to benefit. But nevertheless, the microscopes can deliver up to about 10^8 electrons per angstrom squared in a very reasonable time, and that certainly helps push the resolution a little further than where we are today.
The other thing I wanted to come to: you'll remember there was depth information that we got out of this ptychography, so you can tell where you are in the sample. Go back to the data, and you can see over here that you can actually measure how thick the sample was. So we have a nice bit of 3D information, and our depth resolution early on was about four nanometers; with our new microscope we're down to about three, a little under three right now. That means I can now start going after individual dopant atoms inside the material. So this is gadolinium gallium garnet, or GGG, right near an interface with a film. Keep an eye on this animation over here: there's an atom that pops up, a dopant atom sitting right on the surface of the material, and you can see as I animate through, this interstitial pops up and disappears. If I just slice it in the depth direction, there it is, there's my atom. My lateral resolution is about 0.3 angstroms and the depth resolution is about three nanometers, so 3D info. Another example of this is octahedral rotations in sodium niobate, which is an interesting ferroelectric material. One of the things that's interesting is this octahedral rotation. In the normal conventional imaging mode, it's a little hard to see the displacements of the oxygen atoms. You can see there's a wiggle over there; if I draw in a line, that's the wiggle you're looking for, but you might not be happy trying to map that out from an image like this. The ptychography is a huge improvement. You can see very clearly there are oxygen atoms that are bouncing around, and the image is much, much clearer. In fact, we've undone all sorts of elastic scattering artifacts. The beauty is I can do it in three dimensions. So what you actually realize if you look at it in 3D, if you look at the animation, is that you'll see atoms slowly drifting around here.
What's happening is at the surfaces of the sample, which is only 20 nanometers thick, you can see the top and bottom surface, the top two or three nanometers have relaxed, and the actual true structure of the material can only be found in the middle of the system. The beauty of the 3D imaging is that we can now reject the surface relaxation, or, to put it another way, we can study how the system relaxes back to the bulk value. The last thing I want to touch on, because I think we're now running a little out of time, is magnetic imaging, for just a few minutes. For the magnetic imaging, what I'm looking for is the Lorentz force on the electron beam, qv cross B. If the electron beam goes through a magnetic material, the beam gets deflected to the side, and I can pick that up with a center-of-mass kind of measurement, so differential phase contrast, or I could do ptychography. The things I want to look at here are samples of skyrmions: the Bloch skyrmions are little spirals in the page, the Néel skyrmions little hedgehogs out of the page. The problem with electrons is that the magnetic scattering is very small compared to the electrostatic scattering; it's a fine-structure-constant kind of weaker. So I need a very large dynamic range in the detector to pick these things up. Ptychography is actually a very beautiful way of doing this. There's a phase shift of the electron beam, and that phase shift is just the Aharonov-Bohm phase shift of the beam. So I'm getting the phase shift from the magnetic vector potential, and I can use that to then calculate the B field. Or, if I wanted to measure currents, I could take another derivative and measure current flow in the material. This is an example of our ptychographic reconstruction of some Bloch skyrmions. The color is the direction of the B field, and the arrows give the vector magnitude and direction. And you can see very, very high spatial resolution.
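The step from the reconstructed Aharonov-Bohm phase to a B-field map is essentially a gradient. A minimal sketch, with the pixel size and sample thickness as assumed inputs and our own names; sign conventions vary, and this is just one common choice, not the group's code.

```python
import numpy as np

HBAR = 1.054571817e-34      # J*s
E_CHARGE = 1.602176634e-19  # C

def induction_from_phase(phi, dx, t):
    """Projected in-plane induction from an Aharonov-Bohm phase map.

    phi: reconstructed 2-D phase (rad); dx: pixel size (m);
    t: sample thickness (m). Returns (bx, by) in tesla, thickness-averaged.
    The in-plane B is a 90-degree rotation of the phase gradient.
    """
    gy, gx = np.gradient(phi, dx)     # d(phi)/dy, d(phi)/dx
    bx = (HBAR / (E_CHARGE * t)) * gy
    by = -(HBAR / (E_CHARGE * t)) * gx
    return bx, by
```

Taking one more spatial derivative of B, via Ampère's law, is the "another derivative" that would give a current-density map.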
There's a little singularity at the core of the vortex; we have no trouble picking that up. The ptychography resolution is much better than straightforward center-of-mass imaging. The method is very sensitive: this is a material that's only four atoms of cobalt thick, buried inside very strongly scattering materials, and we can still see the magnetic textures. The precision is enough to detect and image 2D magnets, and so my last slide is in fact a 2D magnet. This is vanadium-doped tungsten disulfide, a dilute magnetic semiconductor; the vanadium atoms put some holes into the material. If I sweep the magnetic field and take an image, you can see the contrast coming in and out from a grain that's doped optimally to produce magnetic contrast. As I sweep back and forth, you can see the contrast change as I pull the spins in plane and out of plane. So that's imaging from a monolayer, a material that's only one atom thick: this is contrast from a monolayer of spins. Okay, so let me wrap up as to where we've gotten with the ptychography. Imaging in 2D is down to 16 picometer resolution, so we get to see the thermal vibrations of the atoms themselves; that's our resolution limit. We have depth information that's letting us pick up interstitials and individual dopant atoms. And we're able to measure magnetic fields with very high sensitivity and precision. Our competition would be electron holography, and we're about a hundred times more sensitive than the holography. So I think this is going to be a very useful method going forward. Just to wrap up, the things we want to do next are, of course, looking at bond stiffness at interfaces, and then starting to image some topological currents inside topological materials. So just again, maybe now it makes a little more sense who the people are.
Many thanks to Sol Gruner's group for the collaboration we've had on electron detectors for the last 15 years; for the ptychography, Zhen and Yi; and for the algorithms, Manuel and Veit. Early on, we also had some help from Martin Humphry when we were just getting started, and then beautiful samples from the collaborators you've seen over here. So at this point, I think I've run a little bit late, but I should stop and take questions. Thank you very much, David. That was a very, very interesting talk, and Ian has the first question. Go ahead, Ian. Yeah, thank you. I'm really impressed; this is quite splendid, for all the reasons I think everybody working at ten times worse resolution with x-rays can appreciate. I was going to ask about the coherence, because that is a big deal, and it's something I thought was limiting, basically because of the difficulty of making a small enough source of electrons. But I'm curious: you said you've solved that by using Pierre's multi-mode methods, and I guess Manuel has helped with that, but what do the modes actually look like that you get out of the image? So let me pop that one up. Here, very quickly, is what the modes look like; this was an early one. In fact, it turned out that what we hadn't taken account of in this particular analysis was that there's still a little bit of point spread function on the detector. Without accounting for the point spread function of the detector, some of that got folded back into the modes, and that wasn't a very efficient use of the modes. So we needed about eight modes, and then once I put the point spread function of the detector in, we could drop down to about four modes. And if you look at the modes, this is sort of typical; once I've taken the point spread function of the detector into account, I should be getting maybe 60% in my first mode.
The other ones come out of what is usually a singular value decomposition kind of thing, so it's a little hard to think about what's going on. But what are the sources of instability for us? Chromatic aberration is one of them; at the very least, chromatic aberration gives us a defocus blur, so we've got to deal with that. The finite source size is maybe about 0.2 or 0.3 angstroms, so it's big enough to worry about at this resolution. And the other one, which is actually our biggest problem, is environmental instabilities: the electron beam is wobbling. There might be a little bit of flickering on the probe, changing the shape a little, not too much, but it might do a little. And all of that gets folded into the multi-mode decomposition. And yes, I think Chris is pointing out that this is not in focus; this ptychography is out of focus. When we work out of focus in ptychography, my position correction becomes much easier, because there's a little bit of a shadow image of the atoms in the defocused probe, and that helps with position correction. The early work was all done in focus, and with that it's very hard to do position correction. But it turns out that in focus, the limited number of pixels on the detector does show up as a kind of decoherence. So for the in-focus ptychography we didn't have to do the multi-modal decomposition; when we went out of focus, it was very important, and it wouldn't reconstruct without it. So that is a lesson that there are a lot of bad things folded into the multi-modal decomposition that it's correcting for, and sometimes it takes a bit of detective work to figure out which are real decoherence and which are other artifacts being treated that way. But I would have thought there was a strong phase component to the modes as well; I guess you've just shown them as amplitude.
But with the phase wrapping around, that's probably Zernike polynomials or something. Yeah, I do have some other plots where you can see them, and yes, the modes are strong in both phase and amplitude. I should say that in the reconstruction of the potentials, the potentials are almost pure phase and the amplitudes are very close to one. In general, if the amplitude varies by more than about 10%, something probably went wrong in the reconstruction. In fact, for electron ptychography, if you've got more than a 10% variation in your amplitude, it probably didn't reconstruct properly. And sorry to steal all the questions, but the position correction, was that a make-or-break thing? Did you absolutely have to have that to get the ptychography working, or is it an add-on? It depends. Our first one, on the MoS2, originally didn't have it. We had good data; we couldn't do position correction because it was in focus, but it was stable enough that it would run, and that reconstructed to 0.39 angstroms, so that was pretty good. The defocused stuff was where I could take advantage of position correction and tweak things up a bit more. I'd say our biggest issue is actually not so much the random displacements; correcting those definitely gives you an improvement in resolution, but it'll reconstruct without. The thing that's hardest for us is getting consistency between the real-space sampling of the pattern and what I'm trying to reconstruct from the Fourier space, because the beam drifts, or the sample drifts, and that drift distorts the real-space sampling, so you're not at the right position. A lot of the drift correction is actually correcting for sampling distortions in a fairly inefficient way, but you have to get those kinds of distortions sorted out; if you don't, you get big inconsistencies. It'll reconstruct, but it won't look as good. Yeah, thank you. Is there any other question?
Yes, I have one if I may; it's a bit technical. David, you're saying that your limit is the phonons. What temperature are you doing this at, and can you decrease it? So this was at room temperature. What's interesting with these materials is that, if you think about the Debye temperatures, the Debye temperatures are actually above room temperature, so the changes in thermal vibration are not as big as you might imagine. Maybe we'd get a factor of two if we cooled down to liquid nitrogen, but the zero-point modes are actually a very significant part of what we're already seeing. So if I went from nitrogen to helium, you wouldn't see much difference at all, but there'd be a little bit of an improvement at nitrogen temperature. Thanks. Adam, you have a question? Yeah, hi, David. That was an excellent talk; it's just amazing, blows me away. I was wondering if you could comment a little on the impact of these ptychographic methods on the question of radiation damage in TEM, which is generally one of the most limiting aspects. Is that a real game changer? We would like it to be. It might take a little bit of persuading for our bio colleagues, but this was sort of the plot I put up over here, at least looking at 2D materials. We worked out the most dose-efficient imaging mode if I'm only using one channel, and that's the purple line. The way you can read this is that I can either get myself the same resolution at almost an order of magnitude less dose, or, if I'm limited by the dose, I can get about a factor of three improvement in resolution. So that ought to make a difference. Now, what people have done with cryo-EM is they're down to about an angstrom resolution in their single-particle reconstructions already; five years ago it was two angstroms and they were competing with x-rays, but now they're at an angstrom.
I mean, it's just kind of crazy. So I don't know how much better we would do, because of the dose scaling just to get enough particles reconstructed to have enough information at those high angles; square root is not a great scaling. So at atomic resolution we may not be enough better that people would care. What about things like polymers, where cryo-EM isn't pushing angstrom-level spatial resolution? Yeah, so even cryo-EM for polymers, or when people are looking at cellular structure, where you're looking at a slice of a cell, that's where I think the ptychography is going to be very useful. And there, obviously, you want to combine it with tomography, so that combination I think is going to be very powerful. The out-of-focus ptychography can cover very large areas very efficiently, so it looks very well suited to these ultrastructure-level problems, which are not well addressed by current methods. Thank you. Is there any other question? Chadlis? Thanks. Wonderful talk, thank you so much. My background is more in x-ray and EUV ptychography, so I'm kind of unfamiliar with the electron side. But I'm curious: I know there are a lot of different tricks you can play with ptychography, with the multi-mode sort of relaxations, but one that I've seen and played around with a bit is masking the diffraction pattern to block out some region of your detector. It sounded like one of the big problems you were working on with the development of your detector was increasing the dynamic range. So I was wondering if you've played around with this sort of relaxation where you mask out part of the discs, maybe to limit the dynamic range that would be required by the detector. Sure, you're throwing away the DC of the diffraction pattern, so you're losing information, but I'm just curious if you've played with it at all. I mean, so I think for us, there's a lot of information in the central disc.
So sometimes we've done just the central disc, to see what you get there, and then we've grabbed the other beams. There are some other schemes, where you do Bragg ptychography for instance, that for other reasons, like producing a strain map of the material, we'd like to try with our setup, and there you don't need the central beam anymore. But, and maybe we're spoiled with our detectors, I don't have to mask anything out right now. So I grab everything and use everything, because I want to take advantage of every electron. But if there were reasons to speed up a reconstruction or something, then maybe we could get a first guess with just the low frequencies and get that stabilized first. I'd say our biggest obstacle right now is the time taken for the reconstructions, and it's not the reconstruction itself so much as tuning the parameters to get the best reconstruction. So Yi has written a Gaussian-process Bayesian optimization to search the parameter space and tweak that up, and one of my postdocs wrote a multi-parameter, multi-objective version of that as well. We bought ourselves a very large cluster, with terabytes of RAM and I think 320 gigabytes of GPU memory, and we use that to run many reconstructions to tune our parameters and see which one gives us the best reconstruction. So I think our rate-limiting step at this point is the reconstruction speed. Right. I could add something from the side of the x-rays: for us in the x-rays, the central information of the ptychography is indeed extremely important, and if you lose that, you give up quite a lot. The dynamic range is also quite important for x-rays as well. Sure, sure. Thank you. Is there any other question? We should wrap up soon.
We have gone quite a bit over our time, but this is also kind of a tradition of this symposium, I think. If there is no other question, I have a very quick question, because of the results that you showed, David, on the multislice. I think you said something that is maybe too generous about x-rays, that the multislice is well used in the x-ray community, and I don't think this is quite the case. We have done a few demonstrations, but Chris Jacobsen has shown the huge potential that can be gained in ptycho-tomography if you can take advantage of this multislice. So I wanted to ask, from your experience, how much do you think one can push this with real electron data? How many slices can you get out of it? And you mentioned a depth resolution of three nanometers that you managed to get, and I was wondering, how does this three nanometers compare to the depth of focus, or whatever you called it, the depth of field in the strong phase approximation? Yeah, so the way to think of the ptychography is that it has a 3D point spread function, and it actually gives you information in the z direction, but in kind of the shape of a propeller blade. What that means is that there's a range of angles within which I can reconstruct well, but I'm missing information up over there, and that missing information leads to an elongation of the object. This is well known in tomography. Back in the days when I was doing electron tomography, which we've been doing for about 20 years now, this was our missing wedge and elongation factor. When we got the early aberration-corrected microscopes and we thought about doing serial depth sectioning in annular dark field,
it became clear that the numerical aperture of my lens effectively introduced this missing wedge, which normally is set by the highest tilt angle that you recorded. Having a numerical aperture of one degree means you're doing plus or minus one degree tilting for tomography, which is not very good. When you go to ptychography, maybe I can get plus or minus 10 degrees, or plus or minus 15 degrees, and then this would be my wedge shape. So what it tells you is that if I want to do a 3D reconstruction, I've got all these propeller blades, and I have to put my next propeller blade touching the first one. What that lets me do is increase the step size between tilts, and it does so without violating the Crowther criterion. The Crowther criterion would tell me that if I had a bigger object, I needed more tilts. But because I've got a propeller blade rather than a line, I can actually go to maybe a 10 degree step size, and that translates to needing maybe 15 or 20 tilts rather than 80 or 100. So it speeds up your tomography collection by about a factor of 10. It's a difficult thing to do at atomic resolution, because you have to align everything to a tenth of an atom in precision, and your mechanical stage runs out over microns. So we're working on trying to do some 3D reconstructions. We haven't got a good one at atomic resolution yet, but at intermediate resolutions I think it would be a very dose-efficient form of imaging, because if I can drop my number of tilts by a factor of 10, that's a big deal. Yeah, absolutely. Yes, I think that was along the lines of the work that Chris was showing in one of these lectures. Well, I think we should wrap up, and thank you everyone for coming. Thank you, Dina. I echo the sentiment of Ian, who just left, that it was an awesome talk with awesome results, and I look forward to seeing more. Thanks a lot, everyone. Thank you.
Have a good morning, David. See you. See you. Thank you again. Bye.