So thanks everyone for joining today. Today we're very happy to have Professor Seamus Davis with us. Seamus is a professor at University College Cork and the University of Oxford. His research focuses on the fundamental physics of exotic states of electronic, magnetic, atomic, and space-time quantum matter. His group's speciality is the development of innovative instrumentation to allow direct atomic-scale visualization, or perception, of the quantum many-body phenomena that are the characteristics of these states. So today he's going to talk to us about machine learning in electronic quantum matter imaging experiments. So yeah, Seamus, the floor is yours. Please go ahead. Thank you very much. Thanks to everyone for being here. I'm in the Beecroft Building at Oxford this afternoon on a beautiful day. And happy Jubilee to the Queen. I hope she will be not too dissatisfied with us. But in any case, let's get on with our business here. So I want to talk about how to use machine learning to analyze high-data-volume electronic quantum matter imaging experiments. OK, so first thing, how do we image electronic quantum matter? So here's a schematic picture of the Beecroft Building. I'm over there on the left-hand side looking out at Keble. Three stories underground, more than 30 meters underground, are our ultra-low-vibration labs. And in those labs, we have a two-story space in which we can build. Here's the scanning tunneling microscope, a spectroscopic imaging tunneling microscope. It's inside its ultra-low-vibration millikelvin refrigerator, inside the first stage of vibration isolation, which weighs about 10 tons. And that's sitting on the second stage of vibration isolation, which is a massive 30-ton brick of concrete, itself supported by vibration isolators above the foundation of the building. So in terms of acoustic and vibrational noise, these are some of the quietest places on earth. I mean, a cave a mile underground is quieter.
But in any human habitation, these are some of the quietest places on earth. And we use that to do the visualization experiments. So inside one of these microscopes, you have some complicated refrigeration. Then right down at the bottom, you have the scan head. This is a drawing of the scan head. In the scan head, there is a very sharp needle. Here's an SEM image of the needle; right at the tip, there's only one atom. When I started in this field, I always thought that was a very intimidating requirement. But a Greek philosopher could have told you that if atoms exist, there's always one atom at the end of every object. So it turns out that's not too difficult to achieve. However, cooling it down to millikelvin temperature and scanning it is tricky. So we've built several new quantum microscopes here at Oxford. Here's a picture of the Gemini spectroscopic imaging microscope, which I took this morning. It's in the B2 north corridor of the Beecroft, underground. So we use such systems in the following way. You bring the sharp tip, last atom on the end, within about an angstrom of the surface of the material you want to study. Now, in conventional STM, you scan around and take a picture of the atoms on the surface. We don't do that. We turn off the feedback system, and then we can vary the voltage between the thing we want to study and the metal tip. And by varying the voltage, we vary the electron tunneling current. So we get the current as a function of voltage. If we take the derivative, dI/dV, from theory, that's proportional to the amplitude squared of the quantum mechanical wave functions of the electrons at this location, location r in the sample. So by imaging dI/dV as a function of location and voltage, we can measure basically the amplitude squared of the quantum wave functions of the electrons as a function of energy and location. The energy is controlled by the voltage, and the location is controlled by scanning the tip around over the surface.
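As a concrete illustration of that measurement chain, here is a minimal numerical sketch of recovering the differential conductance dI/dV from a current-voltage spectrum. The bias grid and the hyperbolic-tangent I(V) curve below are toy assumptions, not real data:

```python
import numpy as np

# Toy I(V) spectrum; the bias grid and tanh form are illustrative assumptions
bias = np.linspace(-0.1, 0.1, 201)      # tip-sample bias (volts)
current = 1e-9 * np.tanh(bias / 0.02)   # tunneling current (amperes)

# Numerical derivative: dI/dV at bias V is proportional to the local
# density of states, i.e. to |psi(r, E)|^2 summed over states at E = eV
didv = np.gradient(current, bias)       # differential conductance (siemens)
```

Repeating this at every pixel r of a scan is what produces the dI/dV(r, V) data sets shown in the rest of the talk.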
OK, so here's a result. This is the surface of lithium iron arsenide. The little light dots you see here are lithium atoms. And these weird-looking things are actually iron vacancies in the layer underneath the surface. But if instead of imaging the atoms, you image the differential conductance, which is the amplitude squared of the electronic wave functions, as a function of energy, it looks like this. And when we first produced images like this, people really couldn't believe them, because you're not taught to believe that the electronic structure of real materials looks like the Battle of the Somme or something. But it does. All real materials look like this. They're highly disordered. There's no perfect material except in a theory textbook. Now, you might think that that chaos is not useful, but it turns out to be extremely useful for the following reason. Suppose you have a quantum mechanical wave function, and it scatters from an impurity or a defect, and it rebounds. That produces a wave function of the opposite wave vector, meaning the same wavelength but pointing in the opposite direction. And these two, the incoming and the outgoing wave function, interfere. And they produce a standing wave in real space. And the amplitude squared of that standing wave modulates at half the wavelength, or twice the wave vector, of the original electronic wave function. So if you can image the standing waves around the impurities, then you can deduce what was the wave vector of the electrons. And since we can image as a function of energy, the way we do it is image the random disorder as a function of energy, take the Fourier transform to find the wave vector of the standing waves, and then deduce that the wave vector of the electrons is typically plus or minus half the wave vector of the standing wave. So like this.
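That last step, a standing wave at 2k implying an electron wave vector k, can be checked on a synthetic image. Everything below (image size, number of periods, noise level) is an assumed toy example, not measured data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.arange(n)
k_cycles = 20   # electron de Broglie wave: 20 periods across the image (assumed)

# |psi|^2 of the standing wave modulates at 2k (half the wavelength of psi)
row = 1 + 0.5 * np.cos(2 * np.pi * (2 * k_cycles) * x / n)
image = np.tile(row, (n, 1)) + 0.1 * rng.standard_normal((n, n))

spectrum = np.abs(np.fft.fft2(image))
spectrum[0, 0] = 0                       # remove the uniform background
qy, qx = np.unravel_index(np.argmax(spectrum), spectrum.shape)
q_peak = min(qx, n - qx)                 # fold negative frequencies
# q_peak / 2 recovers the electron wave vector k from the interference pattern
```

Despite the added disorder, the Fourier peak sits exactly at twice the electron wave vector, which is the deduction described above.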
You image the electronic structure as a function of energy, take the Fourier transform as a function of energy, and then deduce what was the wave vector of the quantum mechanical wave that was producing this pattern. OK. So go back to this highly disordered mess I showed you here, the electronic structure of lithium iron arsenide, and suppose we take the Fourier transform of that exact data set. It looks like this. It looks much more ordered. We're inside the gap of the superconductor here. When we exit from the gap, we see this nice contour here. That contour is the Fermi surface. It's the wavelength of all the electrons which are at the chemical potential of this compound. So even though you take a disordered image in real space, by using Fourier analysis and theory you can deduce what are the de Broglie waves in momentum space. Now, for any theorist listening to this, the reason why this works is the following. If you have a scattering potential V and some Green's functions representing those de Broglie waves, and you consider scattering to all possible orders, so here an electron scatters once, here it scatters twice, and so on to all orders, then it turns out, in a Dyson-like equation, you can find the new Green's function after all the scattering. It's the original Green's function multiplied by this product. And this product is a free-particle Green's function multiplied by the scattering T-matrix multiplied by a different free-particle Green's function. And the theorists in the audience know the constraints under which we can represent this sum by a T-matrix, but it usually works. If that's true, then here's the Green's function after all the scattering, and here's before. Let's subtract this from that and just get the change in the Green's function. It's just this nice triple product here: original Green's function, T-matrix, another original Green's function.
Now, the utility of that is that the perturbation to the density of electronic states is just the sum over momentum space of that triple product. It's this sum here over k. And now I've written the second wave vector k prime as k plus q. So after I sum over k here, I get some function which depends on energy and wave vector q. And that's the one we measure. So the reason why this all works is because of quantum scattering interference. And the movie I just showed you is actually an image of this theoretical product of Green's functions with the scattering matrix. A very powerful technique. Now, why do I belabor this? I'm going to show you a lot of real-space images, but it's important to understand that with this technique you can simultaneously determine what's happening to the electrons when they're stuck in real space and what's happening to the electrons in momentum space. And that's the thing which poses the question that we needed machine learning to answer. Okay. So I just showed you this one. Here's a different one. Here are wave functions of a heavy fermion compound. Like all real materials, they look like a mess. Here's the Fourier transform. That circle there is the Fermi surface of a heavy fermion metal. Actually, that's the first time the Fermi surface of a heavy fermion metal was ever observed. Here you have a different material. This is iron selenide. It's a very exotic and strange metal. You can see the disordered impurity scattering on the right-hand side. If I take the Fourier transform, it looks like this. Quite simple, beautiful, straightforward and analyzable. And we were able to discover from this that this material is an orbital-selective metal. Only the electrons from some orbitals of the atoms contribute to the metallicity. And this was the first orbital-selective metal visualized. So this is a very powerful technique. Okay, now let's talk about cuprate superconductivity.
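For concreteness, here is a hedged numerical sketch of that sum over k. The one-dimensional tight-binding band, the scalar (energy-independent) T-matrix, and all parameter values are assumptions made purely for illustration:

```python
import numpy as np

# Toy 1D tight-binding band; energy, broadening, and scattering strength
# are illustrative assumptions, not parameters of any real material
n = 512
k = 2 * np.pi * np.arange(n) / n
eps = -2.0 * np.cos(k)                  # dispersion eps_k
E, eta, T = -1.0, 0.05, 1.0             # energy, level broadening, scalar T-matrix

G0 = 1.0 / (E - eps + 1j * eta)         # free-particle Green's function G0(k, E)

# Lambda(q) = (1/N) sum_k G0(k) T G0(k+q); delta N(q, E) = -(1/pi) Im Lambda(q)
Lam = np.array([np.sum(G0 * T * np.roll(G0, -iq)) for iq in range(n)]) / n
dN = -Lam.imag / np.pi

# The interference signal is sharply peaked at q = 2 k_F, the wave vector
# connecting the two Fermi points of this band at energy E
power = np.abs(Lam)
q_peak = 10 + int(np.argmax(power[10 : n // 2]))   # exclude the q ~ 0 region
```

The peak at q = 2k_F is exactly the logic by which Fermi surfaces are read off the measured Fourier transforms in the experiments described here.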
So most people know you can put as many bosons as you like, as long as they don't interact, into a single quantum state. So that allows us to take a large number of bosons in a container, let's say a Bose gas, and by cooling it down, remove the randomization. And then eventually, as Bose and Einstein predicted, all the bosons will fall into a single quantum state, the ground state. That's so beautiful. But, you know, fermions can't do that, because one of the identifiers of a fermion is that only one fermion can be in each quantum state. So it would appear that this condensation isn't possible. However, mother nature is very, very, very clever. And what she knows is that if you take two fermions, and let's say they're distinguished by having opposite spin directions, their intrinsic angular momenta pointing in opposite directions, if you cool them down, the two fermions can bind together. And when they bind, they make bosons, and the bosons can condense. That's a charge-2e macroscopic quantum fluid. That's what we call a superconductor. Okay, superconductivity is now more than 100 years old. All superconductors are zero-resistance, infinite-conductivity, perfectly dissipationless electrical and electronic materials. And they're of tremendous potential utility for science, for technology, for the economy. And they're still a very hot topic of study. Here is the critical temperature, below which you have to cool if you want to get superconductivity. And in the first decades of the field, it was like minus 250 or minus 260 centigrade, not very useful. By the 1990s, it got up to around minus 100, minus 120 degrees centigrade. And that's in the copper-based compounds I'm talking about today. And the same is true for the iron-based compounds discovered in the early 2000s. And in recent years, hydrogen-based superconductors have reached a critical temperature near room temperature, which is psychologically very important.
It proves that room-temperature superconductivity is not ruled out by mother nature. However, those compounds have to be at millions of atmospheres of pressure before they superconduct. They're not very technologically useful. The highest-temperature ambient-pressure superconductors are the copper oxides that we're talking about today. And so there's still a big push to understand how they work. And indeed, superconductivity is extremely important, not yet for the economy, but for science it's extremely important. If you ever have an MRI, you're inside a high-field superconducting magnet. And now they're becoming very important for understanding how the human brain works, or any brain works. They're important as a source of the most sensitive quantum devices, the most sensitive photonic devices, the most sensitive astrophysical detectors, the highest-power accelerator sources, the highest magnetic fields for high-current cyclotrons, the highest magnetic fields for physics and other kinds of research. And today, in fact, high-temperature superconducting confinement is being used for tokamak fusion reactors by commercial companies. It's no longer just governments who can do this. Because of high-temperature superconductivity, commercial companies can make a compact tokamak for fusion power. And of course, as most people know, superconductivity is the source of our present capability to carry out quantum computing. All functional, commercial quantum computers at the moment are based on superconductivity. Okay, now let's talk about cuprate superconductivity. Why do we have to delve into this? Because it's the highest-temperature superconductor we have available at ambient pressure. We need to understand it. So these compounds are dominated by the CuO2 layer. That's a layer of copper, oxygen, copper, oxygen, copper, oxygen, no problem. The coppers are in a 3d9 electronic state.
They have one electron missing from the d-shell. And that means they should be a metal, but it's actually an insulator. And the reason is that the energy for another electron to arrive in the 3d10 state is about three electron volts; that's called U. And so there just isn't enough energy available. At room temperature, there's no way for one electron to hop and doubly occupy a copper site. So each 3d9 electron gets frozen at its own location in a Mott insulator. And I'll tell you in a second that this is an antiferromagnetic Mott insulator. And for the experts here, the 2p6 band of the oxygens is in between the empty upper 3d10 state of the copper and the filled 3d9 state of the copper. So the chemical potential is actually at the top of the oxygen p-band. Okay, all right. So these are insulators. So how do you make them into conductors? Well, you have to remove some electrons, and you do that by removing an electron from the top of the 2p band, which is the same as removing an electron from an oxygen atom. It creates some quantum mechanical state we don't understand, but this axis here is the number of electrons removed, or holes introduced. It's called hole doping by materials scientists. So when there are no holes, the oxygen is in the 2p6 state, but as we increase the number of holes, more and more electrons are removed from the oxygen atoms. As that happens, the insulating state disappears very quickly. And soon, with enough holes in the system, the high-temperature superconductivity state appears. This is an extremely well-established phase diagram, and the background theory for this is called the theory of a charge-transfer insulator, for the experts. However, if we want to understand the mechanism, we have to consider carefully what happens between the coppers and the oxygens. So consider the 3d9 electron on one copper and the other 3d9 electron on the adjacent copper. So here it is on one, here's its neighbor.
In between there are two 2p6 electrons in the oxygen. That's an insulator. We know its Hamiltonian. In fact, you can solve the Hamiltonian for these two electrons in this simple model. And if you do so, you'll find there's a powerful antiferromagnetic interaction between the two copper electrons. The simple Hamiltonian is Heisenberg-like, and J is a large positive number. It's equivalent to around 500 Kelvin. So what that means is there's a powerful magnetic interaction causing each spin on the copper site to be opposite to its neighbor. And we know the magnitude of that superexchange interaction. So take that picture and now consider a square lattice. Well, you could guess, and it's mathematically true, that in order to have every copper spin opposite to its neighbor, the simplest way is just to have an antiferromagnetic insulator. This is an antiferromagnetic square lattice where each electron is frozen in space and its spin is opposite to its neighbors'. That's what the superexchange theory would tell us, and indeed that's what the neutron scattering experiments tell us. This is exactly how the undoped cuprates work. So that tells us that this exchange interaction is predominant in the insulating state. Now, however, hole doping means that you're going to remove one electron from here, from the 2p6 band. So when you do that, the antiferromagnetism disappears almost immediately, and the high-temperature superconductivity appears. Phil Anderson proposed in 1987 that the reason why that happens is that the antiferromagnetic superexchange interaction survives, but instead of making a magnet, it actually binds these two electrons to make the electron pairs which are necessary to make a superconductor. So this is a very powerful and almost unique mechanism for binding pairs. And if it is the mechanism of high-temperature superconductivity, it explains why the temperature is so high.
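In symbols, the statement above is the antiferromagnetic Heisenberg model on the square lattice of copper sites. The one-band superexchange estimate below is a standard textbook illustration, not the full three-band cuprate result (which also involves the Cu-O hopping and the charge-transfer energy):

```latex
% Nearest-neighbor Heisenberg Hamiltonian; J > 0 favors antiparallel spins
H = J \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j ,
\qquad J > 0

% One-band superexchange estimate (illustrative only)
J \sim \frac{4 t^2}{U}
```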
And to understand this mechanism would give us control of these materials, and hopefully would bring us to room-temperature superconductivity. So at least for me, that's still a very exciting goal. So no problem. Antiferromagnetic insulator phase understood. Check. D-wave superconducting phase actually very well understood. Check. If you introduce enough holes, you destroy the antiferromagnetic superexchange and you end up with a correlated metal. That's also pretty well understood. Check. But in the phase diagram of hole-doped CuO2, there's another phase. It's called the pseudogap phase. We don't know what that phase is, or at least the identity of that phase is contentious. And it's part of the problem we're trying to solve today. OK. Now let's think about electronic liquid crystals. So liquid crystals are part of our lives. Pierre-Gilles de Gennes solved this problem. He got a Nobel Prize in 1991 for finding the theory of classical liquid crystals, which we all use in our working lives, in our information technology. It's a vast industry with an amazing number of applications. And the reason is these states are controllable. You can alter the properties of classical liquid crystals. Now there are two basic forms of liquid crystals. One is called nematic. It breaks rotational symmetry; all the molecules are aligned in the same direction. But it doesn't break translational symmetry. There's no pattern in real space, and so there's no finite wave vector; that's called q equal to 0 breaking of rotational symmetry. That's a nematic liquid crystal. A smectic liquid crystal is one that breaks both rotational symmetry and translational symmetry. It's got a periodicity in real space of lambda, or a wave vector of 2 pi over lambda. And de Gennes explained both of these states with his habitual genius and beauty in his theory. And we understand them very well for classical liquid crystals.
Now, just about 20 years ago, these colleagues, Eduardo Fradkin, Steve Kivelson, and Vic Emery, God rest his soul, who has passed away since, proposed that when you dope holes into a correlated antiferromagnetic insulator like CuO2, instead of getting a superconductor, you would get an electronic liquid crystal state. And they provided very good theoretical reasons for imagining that that's the case. Furthermore, during those years, much evidence has appeared that when you go from this metallic phase into this mysterious phase, as you cross this line, there's a breaking of rotational symmetry at q equal to 0, and there's a breaking of translational symmetry at finite q. That would mean that this phase here is some combination of a nematic and a smectic, if it is an electronic liquid crystal. In fact, if you just look at the empirical facts, you have to conclude that both broken symmetries, rotational and translational, exist in the electronic structure in this part of the phase diagram. So far so good. All right, now I want to examine this state using our techniques and see what it looks like at the atomic scale. So on the left-hand side, you see the picture of the atoms in one of these materials. This material is called BSCCO, but the name doesn't matter. It's a canonical high-temperature superconductor. And with conventional STM, we can see where the atoms are. Okay, now the next thing I'm going to show you is a movie of the density of electronic states as a function of energy in this identical field of view. It looks like this. So for some energies, there are no changes of the pattern. It's highly static. Then for other energies, the pattern starts to evolve; the wavelengths are changing as a function of energy the way they would for a de Broglie wave. And then again, when you go back to high energies, you return to this static pattern, whose symmetries I will explain to you.
So this way we can visualize the broken symmetries in real space of this high-temperature superconductor in that strange phase, the pseudogap phase. So let's look at this a little bit more carefully. So again, this is a picture of where the atoms are. And this is a simultaneous picture of the electronic structure. And it's a mess, no doubt about it. It's not your father's or your mother's beautiful periodic density wave, and neither is it a simple metal like copper or gold. It's some weird mixture of the two, which is very hard for a human being to diagnose. If you look at these images, it's hard to state what actually is the phase that I'm looking at here. Okay, now we can advance our understanding of it by using Fourier analysis of this image. If we Fourier analyze this image, it has four important peaks. Two peaks are the Bragg peaks. They have the same periodicity as the crystal. And there are two other peaks, which are the peaks associated with these periodic patterns, which to first order are four unit cells in periodicity. So we can separate the properties of the lattice from the properties of these longer-wavelength modulations by using Fourier analysis. So let's first focus on these two red peaks, which are the Bragg peaks. They measure what's happening inside the CuO2 unit cell. And now that's a little bit complicated, because inside the cell there are three atoms: the copper atom and two oxygen atoms. And that means there are different ways of breaking the symmetry at q equal to zero inside the unit cell. There's an s-symmetry way of breaking it, and there's a d-symmetry way of breaking it, in which case the two oxygens are in different electronic states. And although the real-space images look approximately the same, the Fourier transform is really quite different, because the symmetry of these states is different. Okay, I won't belabor that too much.
If we take an image of where the atoms are, take a Fourier transform of that image, and compare the amplitude of this peak to that peak, we find them to be virtually identical. In fact, identical. That means the lattice itself doesn't break any rotational symmetry inside the unit cell. However, if we take a simultaneous image of the electronic structure and compare those two Bragg peaks, they're different by 30 or 40%. That means that inside the unit cell, the density of states in the x direction is more powerful than the density of states in the y direction. I actually didn't believe that when we first visualized these effects. So I asked the team to look inside each unit cell and check: is the density of states in one direction different than in the other direction? And indeed it is. Inside the CuO2 unit cell, there's a breaking of rotational symmetry. So that means there's an electronic nematic state at q equal to zero in the cuprate superconductors. And this came as a big shock. Several major papers came out in 2010, 2011. This came as a big shock to the community, to realize, wow, there is an electronic nematic state. So the other two peaks are the ones which you would call a density wave, a periodic modulation of the electronic structure. It happens to be near four unit cells in periodicity. So we can look at that. You saw it here already. That's four unit cells, and that's four more unit cells. But let me turn off my movies now. So here are two completely different cuprate superconductors, chemically different, but at the same carrier density. And if we image those standing waves, they are virtually indistinguishable. So this is four unit cells, four unit cells, four unit cells. And these dark lines here are along lines of the oxygen atoms, which turns out to be important. And in the other material, we see the same thing: four unit cells, four unit cells, four unit cells.
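The Bragg-peak comparison just described can be sketched numerically. The lattice constant, the anisotropy delta, and the noise level below are assumptions chosen purely for illustration:

```python
import numpy as np

# Toy nematic density-of-states image: x-bonds stronger than y-bonds by 2*delta
n, a, delta = 240, 8, 0.2                   # pixels, lattice constant, anisotropy
rng = np.random.default_rng(1)
y, x = np.mgrid[0:n, 0:n]
image = ((1 + delta) * np.cos(2 * np.pi * x / a)
         + (1 - delta) * np.cos(2 * np.pi * y / a)
         + 0.05 * rng.standard_normal((n, n)))

F = np.fft.fft2(image)
bragg_x = np.abs(F[0, n // a])              # Bragg peak along q_x
bragg_y = np.abs(F[n // a, 0])              # Bragg peak along q_y

# Nematic order parameter from the two Bragg amplitudes
nematicity = (bragg_x - bragg_y) / (bragg_x + bragg_y)
```

In this toy the estimator recovers delta; in the experiment the analogue comes out near zero for the topograph but finite, 30 to 40 percent, for the dI/dV map, which is the q = 0 nematic signal.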
But of course there's no long-range order. So it's still difficult to understand what we're looking at here. You know, one can zoom in and examine one of these objects with subatomic resolution, looking inside every unit cell. We know where the copper atoms are; they're in black. We know where the oxygen atoms are; they're in blue. So when we do that, at the end of a long sequence of study, we were able to conclude that it is a four-unit-cell density wave we're looking at, but it's got a d-form factor, which is a tricky thing. It means that the modulations on the x oxygens are pi out of phase with the modulations on the y oxygens. No state like this had ever been seen before, but it isn't surprising to theorists that it can exist in nature. So you can see for yourself, it's unidirectional. I'm telling you that it's locked to the lattice, to a line of oxygen atoms. It's four unit cells in periodicity, and it has an internal symmetry, which is a d-symmetry form factor. And I'm not belaboring those facts idly; we need those facts for the machine learning part of the project. All right, now at last we get to the question. Are the phenomena occurring in the cuprates due to the existence of an electronic liquid crystal? Or do they just have to do with Fermi surface effects, the wavelengths of de Broglie states which exist in the momentum space of these materials? Now I haven't shown you many de Broglie states, so let me make the question even clearer. Suppose the Fermi surface of this compound looks like this, and it does: four hole-like pockets around the pi-pi point. There are flat regions here, which solid-state physicists would call nested. And that means scattering between those regions can produce a density wave. The wave vector of that density wave is the difference between the wave vectors of the two electrons. So it's this red arrow.
And so that wave vector is related to the wavelength of the density wave: q is two pi over lambda. So Fermi surface effects have these properties shown on the left. Now, an electronic liquid crystal is quite different. Its periodicity is fixed. It's set by localization of the electrons in real space. The wavelength is set by some microscopic Hamiltonian, probably the Emery three-band model for the experts. And so the q vector isn't set by the Fermi surface; it's set by the microscopic wavelength, and q is two pi over lambda in this state. So here we're trying to distinguish between electronic eigenstates which are localized, that's on the right-hand panel, or delocalized, that's on the left-hand panel, as the driver of the electronic structure in this phase. Now, here is the real-space image of the amplitude squared of the wave functions in this compound, in BSCCO, resolved by energy. As the energy changes, the wavelength of the de Broglie waves changes and even their wave vector changes. The scattering patterns change. It's all what you would expect for a metal. None of these phenomena are too strange for a metal. And in fact, this is a superconductor. We understand what should be happening here. It's a d-wave superconductor. We know where all the primary momentum-space scattering wave vectors are in the model of the d-wave superconductor. When we take the Fourier transform of the previous image, it looks like this: an energy-resolved Fourier transform. It has all of the necessary wave vectors, and they all do the mutually correct things. So what you're seeing is that there are lots and lots of states throughout momentum space. By using data sets like that, we can deduce where the Fermi surface is. So at hole density 6%, we measured where the Fermi surface is in this compound. And now, in a heroic series of experiments, Fujita-san measured what happens to this Fermi surface as a function of carrier density.
And here you see the answer. As we increase the carrier density, the hole pockets grow; we're putting more holes in. So the hole pockets grow, and the Fermi surface moves to a new location in momentum space. I'm going to show you that movie again, though. And I want you to realize that the q vector of any density wave should be diminishing, right? Here the q vector is large. As we increase the hole density, the q vector linking those locations is diminishing with increasing carrier density. So a Fermi surface picture would predict that, okay. And here's a summary of what happens to those q vectors as a function of carrier density. Now we go to an image of what appears to be the static electronic structure. It's in the same sample. It's in the same field of view. It's just at a different energy. We take the Fourier transform of this image. It has a peak near about 0.27 times 2 pi over a-naught. That means very close to four-unit-cell periodicity. Very good. Now let's measure that wave vector as a function of carrier density throughout the phase diagram. So here's the peak at low carrier density, and here's the peak at higher carrier density. So the q vector of this peak is diminishing as the carrier density is increasing. And again, that's what you would expect from a Fermi surface effect. And that's what everyone deduced from data like this, which we published about 12 or 15 years ago. However, it's more complicated than that. If you actually look with human perception at what the electronic structure is at the atomic scale, and I draw some boxes here to draw your attention to things that at least a human being would look at, if you zoom in on those objects, you'll find that they are static, okay? They don't change. They just remain where they are. They're four unit cells periodic. They're bond-centered, which means they're fixed on the oxygen sites, and they're clearly unidirectional.
And that is also a fact. So, okay, now you want to say, okay, now we have cognitive dissonance. You're telling me that a certain q vector is evolving with carrier density, but on the other hand, the real-space electronic structure is not evolving with carrier density, okay? That must be cognitive dissonance or multiple personality disorder or something. But it's actually not, right? The thing you forget is that when you do Fourier analysis, you focus on one peak, but there's data elsewhere in the reciprocal unit cell, and all of that gets thrown away. Fourier analysis throws away all the other data, which is called disorder, background, diffuse, whatever, however you want to bad-mouth it; you just throw it all away and focus on the q vector you like. But in a real material, all those other wave vectors contain information about the electronic structure. So you shouldn't make a profound deduction just from the evolution of a single wave vector. Actually, what you should do is look at the whole electronic structure, okay? So here's the question we're now trying to answer. Is the electronic structure of underdoped BSCCO and underdoped cuprates dominated by this momentum-space, Fermi-surface-style effect, or is it dominated by this real-space, electronic liquid crystal effect? And there are two famous papers which prove mathematically that you cannot answer this question in the presence of disorder by using Fourier analysis. It's provable that your traditional Fourier analysis of these images can never succeed in answering this question in the presence of disorder. And look, there's tons of disorder. So formally, you cannot answer this question by using Fourier analysis. Enter Eun-Ah Kim. So Eun-Ah Kim at Cornell proposed an alternative approach. She said, consider the raw unprocessed data. It's a real-space image, and that image changes with energy.
So in this stack of data here now, the vertical axis is energy, but they're all in the same field of view, and the wave functions look different at different energies. The question is, could you define a machine learning approach which would not focus on a single wave vector but take all the data and find out: is this set of images consistent with real-space electronic liquid crystal physics, this one? Or is this set of images consistent with momentum-space Fermi surface physics, this one? And furthermore, if you had a machine learning way of doing that, her core concept was to do that across the whole phase diagram and find out what's the universal property. And that's the primary thing that I'm going to describe now for the rest of this talk. So at the end of the day, we're attempting, following her lead, to find an unbiased identification of the fundamental characteristics of the broken symmetry in these images, very complicated images of the electronic structure. Okay, so identifying complicated images is something that billions, probably trillions of dollars and euros have gone into over the last 10 years. Everyone knows that neural networks are incredibly powerful at identifying images which have no periodicity. You know, when the security services are searching for your location, they don't use Fourier analysis, right? They snap your photograph and they use a neural network, or several neural networks, which are trained to identify your face. And so Eun-Ah wanted to use the same strategy but for electronic structure images. Okay, so they used a relatively simple architecture. I'm not a neural network person, so I looked up the description of this architecture. Each neural network is a fully connected feed-forward network with just a single hidden layer. But as you'll see, the training of these neural networks, and they trained 81 of them independently, is quite rigorous.
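As an illustration of the kind of architecture just described, a single-hidden-layer, fully connected feed-forward classifier can be sketched as follows; the layer sizes, activation, and initialization here are assumptions for illustration, not the published network:

```python
import numpy as np

rng = np.random.default_rng(0)

class SingleHiddenLayerNet:
    """Fully connected feed-forward classifier with one hidden layer:
    flattened image -> hidden units -> softmax over categories."""
    def __init__(self, n_pixels, n_hidden, n_categories):
        self.W1 = rng.normal(0.0, 0.01, (n_pixels, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.01, (n_hidden, n_categories))
        self.b2 = np.zeros(n_categories)

    def predict_proba(self, x):
        h = np.maximum(0.0, x @ self.W1 + self.b1)  # ReLU hidden activations
        logits = h @ self.W2 + self.b2
        e = np.exp(logits - logits.max())           # numerically stable softmax
        return e / e.sum()

# An ensemble is then just many independently initialized copies of this.
nets = [SingleHiddenLayerNet(32 * 32, 50, 9) for _ in range(3)]
probs = nets[0].predict_proba(rng.normal(size=32 * 32))
print(probs.shape)  # -> (9,)
```

Training 81 such networks independently, each from its own random initialization, gives an ensemble whose members agree on a category only if the feature is robustly present in the data.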
But the mathematical internal structure, by modern standards of machine learning, is not that extraordinarily difficult. Although it's pretty difficult, because there is a lot of data in each image. Okay, so the challenge is: present a neural network with an image like this, this is real data on the left-hand side, then train the neural network to identify the characteristic structures which are hidden in this disordered image, and then to give an output which tells us what's the fundamental state which is actually constructing that image. Now to do that, and they did this to some degree by brute force, they trained the neural networks to identify a wide variety of different things. So what they did is constructed training sets which comprise something very close to what we know exists in the cuprates, but they diversified it with noise: Gaussian noise, fluctuations, physical disorder, chemical disorder, topological defects. They also diversified it according to the orientation of the pattern; there's two possible orientations. And they also diversified it by the internal symmetry of the density wave. So the number of objects that the set of neural networks were designed to detect is really quite large after the whole training process. So here are some examples; these are training images. And what's happening here is they're taking a d-symmetry unidirectional density wave image plus Gaussian disorder plus topological defects, and what they're changing is the wave vector, the periodicity in real space. They're continuously changing it; different categories have different wave vectors. And the neural networks are going to be trained to identify which category is predominant in a real image of the electronic structure.
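A minimal sketch of how one such training image might be synthesized: a unidirectional cosine density wave of a chosen period plus Gaussian disorder. The actual training sets described in the talk were far richer (topological defects, form-factor symmetry, both orientations); all parameters below are illustrative assumptions:

```python
import numpy as np

def training_image(n, period, angle=0.0, noise=0.3, rng=None):
    """One synthetic training image: a unidirectional cosine density wave
    with the given real-space period (in pixels), optionally rotated,
    plus Gaussian disorder. (The real training sets also added topological
    defects and a form-factor symmetry, omitted in this sketch.)"""
    if rng is None:
        rng = np.random.default_rng()
    y, x = np.mgrid[0:n, 0:n]
    u = x * np.cos(angle) + y * np.sin(angle)  # coordinate along the wave
    wave = np.cos(2.0 * np.pi * u / period)
    return wave + noise * rng.normal(size=(n, n))

# Categories differing only in wave vector, e.g. periods of 3, 4, 5 pixels:
examples = {p: training_image(64, p, rng=np.random.default_rng(p)) for p in (3, 4, 5)}
```

Sweeping the period across categories is exactly the kind of continuous diversification of the wave vector described above.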
So under those circumstances, with that fairly massive training set, and I'll show you the people who did this work a little bit later at the end of the talk, they used a reasonably conventional training technique. Since they're training with something whose content they know, they can check whether the network is making the identification correctly. They only considered the networks to be trained if they were 99.9% accurate in detecting the image presented to them, whose characteristics were known by the training operators. But they independently trained 81 neural networks, presumably whose internal parameter sets were different even though they were trained to identify the same training set. Okay, that took about a year, actually. Okay, once that array of 81 neural networks had been trained, then we conditioned the experimental data sets, which are images in real space, very complicated images as a function of energy, multiple sets of data like this at each different carrier density across the phase diagram. And in fact, they used virtually our whole data archive, which had taken us approximately 20 years to acquire, as fodder for this set of 81 neural networks. So that was fun; conditioning all the data to match the requirements of the neural networks was also a challenge. That was for the experimentalists and the data scientists. But at the end of the day, so this is a sequence: the left-hand column is a sequence of images of the electronic structure at different carrier densities. The central column shows the Fourier transform of the data, and you can see from the Fourier transform that, gosh, you'd be hard pressed to distinguish between these images by using Fourier analysis. However, the categorization that the neural networks identified, category two, I can tell you, is the four-unit-cell, d-symmetry, unidirectional density wave.
So category two was predominantly identified over all the other categories throughout all the data sets until the carrier density reached about 20%, and then category two was no longer predominant. So what this says is that the neural networks think that there's a 4a₀-periodic, d-form factor density wave throughout all of these images. That's what the neural networks found. And furthermore, we could present these images in two orientations to neural networks which were designed to detect the density wave of only one orientation. And what that actually revealed is that the neural networks identified the density wave in one orientation far more often than in the other orientation. So that tells us that a predominantly unidirectional, d-symmetry form factor, 4a₀-periodic density wave is the dominant object in these images if you don't relapse into Fourier analysis, if you take all the data and use machine learning for the identification. All right, so at the end of the day, this suite of neural networks, which are now being used for other studies, and this strategy is being generalized, this suite of neural networks identified a carrier-concentration-independent, unidirectional, lattice-commensurate, 4a₀-periodic density wave. Because it's disordered, the name for that state is a vestigial nematic state, which had been predicted but never detected. So that was great. Actually, the neural networks detected a state which had never previously been detected. You know, if we plot the wave vector deduced from the neural networks as a function of carrier density, it's four unit cells. It does not evolve with the Fermi surface. It's a fixed property of the electronic structure.
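The ensemble decision described here amounts to a vote across the trained networks: each network assigns a category, and the category identified most often is called predominant. A minimal sketch, with hypothetical vote counts rather than the actual results:

```python
from collections import Counter

def predominant_category(votes):
    """votes: the category label each trained network assigned to one
    data set (e.g. 81 entries for an 81-network ensemble). Returns the
    most frequently identified category and its vote fraction."""
    counts = Counter(votes)
    category, n = counts.most_common(1)[0]
    return category, n / len(votes)

# Hypothetical tallies: most networks identify category 2.
votes = [2] * 60 + [1] * 12 + [3] * 9
category, fraction = predominant_category(votes)
print(category)  # -> 2
```

The same tally, run separately on the two orientations of each image, is what reveals the unidirectional character of the density wave.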
Okay, so at the end of this process, we deduced and reported that we believe the broken symmetry state in the underdoped cuprates that we can access is definitely, or at least is highly, consistent with the theories of electronic liquid crystals of a doped Mott insulator, and is not consistent with theories of Fermi surface nesting to produce the density wave. All right, and that strongly supports the hypothesis and the growing belief in this idea in our research community, which is the community of strongly correlated superconductivity, and has influenced new directions for that research community. Now, one thing to say at this point, which no doubt many of the experts in the audience might resonate with, is that many professional condensed matter physicists looked at these results and said, great, this is a new piece of information. It's a step in the right direction. It confirms the electronic liquid crystal hypothesis, et cetera, et cetera. They were willing to give credence to it, but admittedly there was a significant group of people who said, well, unless you can tell me how the neural network made its decision, I don't believe that it achieved anything. And we would have to have a self-training neural network to get around that objection. We have not reached that point yet. But with trained neural networks, this procedure is a powerful new technique for visualizing and understanding electronic structure in real space. This was a big international team. It took many years to do this. The samples across the whole phase diagram were made by the Eisaki group in Tsukuba. Much of the experiment and analysis was done by Fujita-san, who's now at Brookhaven National Lab. The data management and analysis to match the requirements of the neural networks were done by Andrej Mesaros, who's now at Saclay. And the programming for the neural networks to match the data was done by Yi Zhang, who's now in Beijing.
And Eun-Ah Kim at Cornell was the PI of this project and the creator of this fascinating idea. And so that's all I want to say on this subject. Thank you very much. Thank you, Seamus, it was really interesting. Any questions from our audience? So I had a quick one. I was wondering whether, during the evaluation of the neural network, you could see what features the network picked up on, the features that were most discriminating in terms of the classification. Right, so during the length of this project, which finished about two years ago, we did not know the answer to that. I kept asking that question and Eun-Ah kept politely telling me, we don't know the answer and we don't know how to find the answer. But now she does know the answer, which is beyond my area of expertise, but now they're reasonably confident that they know what internal parameterization this single-hidden-layer neural network requires in order to do this type of discovery. So if you want to know the details, you can send her an email and she'll be happy to tell you. Cool, thank you. There was a question in the chat, I think. Yeah, the question was, did you encounter failures of machine learning approaches while you were going towards this result? Oh yeah, right, absolutely. It was very difficult to find the optimum architecture which could indeed be trained to detect an object as complicated as a commensurate, d-symmetry, unidirectional density modulation with Gaussian disorder and topological defects. So there were a lot of challenges in the first year of the project, finding an architecture. You know, architectures for identifying human faces, you can download now. Architectures for identifying quantum density waves, you still can't download; you have to invent them yourself. So it took a while to get over that challenge.
And then, for the experimentalists in the audience, it turned out to be a great deal of labor to convert the format of the experimental data as stored in our large archive to the correct format that a machine learning algorithm can use to do the identification. So we had challenges on both sides. So if I may ask another question. Very superficially, there is some analogy between radio astronomy imaging and what you are doing. In both cases, the Fourier transform seems to elucidate rather weird patterns. That imaging process is also a big computational problem, but to my knowledge, it hasn't been attacked very well yet with machine learning. Do you feel there might be a chance to get that right as well? I presume so. So Eun-Ah's research program at Cornell has now grown tremendously, because throughout the condensed matter physics of quantum materials, there are vast data sets. You know, the archival data sets from synchrotrons for x-ray scattering and from neutron sources for neutron scattering of uncountable different materials. All of them have only been analyzed by focusing on one or two wave vectors in the Fourier transform. So now there's a great deal of interest. Of course, yes. But in the old days, all a human being could look at was a peak in the Fourier transform. So now what the quantum condensed matter people are doing is trying to generalize this scheme to access the vast archives of data which already exist. I hope that they're collaborating with their astrophysical colleagues, but I don't know about that. I'm an experimentalist myself, so I'm not involved in that part of the situation. But condensed matter people think that this could be, well, I'm not sure it's going to be revolutionary. You know, people oversell their achievements; it's probably not a paradigm-changing revolution. But I'm sure it can be a powerful new tool. It was definitely a powerful new tool for us to analyze our own dataset.
Yes, by the way, coming back to the first part of the question: it perhaps isn't really hopeless to get a self-training model of increasingly complex patterns, right? Because it seems you could generate a lot of them, and there's ultimately some system behind it, so it may not be hopeless. I believe that Eun-Ah has a strategy already developed to do that for this class of data, but I don't think it's ready to execute at the moment. Presumably when she's ready to execute, then we'll try again, presenting this archive with no preconditioning, and see what pops out. Yes, yes. Thank you very much, it was a great talk. All right, any other questions? Otherwise, it's off to the street parties. Yeah, I can't see any. So thank you so much for talking to us today. The seminar was recorded, so it will be available on the YouTube link. So thank you very much. All right, thanks very much, bye.