Okay, let us start. My name is Nikolai Prokofiev, from the University of Massachusetts Amherst. Let's continue our session on topology and strong correlations. The first talk will be given by Emanuel Gull from the University of Michigan, and he will tell us about diagrammatic Monte Carlo for real-space propagation.

Okay, thank you. Right, so this talk tries to give an overview of diagrammatic Monte Carlo methods and continuous-time Monte Carlo methods, essentially just to show you what these methods are, what they can do, and what the general outlook is. Then I'll show in a little bit more detail three basic algorithmic improvements: bold-line methods, causal methods or so-called inchworm methods, and then embedding methods that are based on these. Finally, I'll show you how we're trying to use numerical methods to extract as much physics as possible out of these simulations, to really then say something about physical systems.

Now, before I go there, I'm going to have to briefly tell you a little bit about diagrams, and I'm going to go all the way back here and start from a very simple Hamiltonian, for example this electronic-structure Hamiltonian. Typically I'll take a Hamiltonian, I'll put it onto a finite-size lattice or into a quantum chemistry basis, and the task that we're interested in is to find energies, single-particle spectral functions, two-particle susceptibilities, or anything like that, for a system in a grand canonical ensemble at some temperature and some chemical potential. Now, back in early graduate school you were taught how to do that: essentially you take your Hamiltonian, you split it into two parts, one part that you like and one part that you don't, and then you write down the grand partition function in an interaction representation of the part that you don't like with respect to the part that you do like. You expand the exponential that you have up here into a time-ordered exponential series, and you end up with an infinite series that goes over an infinite number of terms in H2, that's your perturbation term over here, of integrals between zero and beta of something that actually looks fairly complicated. Now, for each term that we have here there is an integral over an imaginary-time interval, and then, you know, if you have additional sums in H2 there will be additional sums or terms. Expressions like that look fairly complicated, but really people have worked with these expressions since the middle of the 1950s. The first step that you take is to abbreviate the individual terms in a diagrammatic language, by drawing pictures rather than writing out the expressions that I had on the previous slide. But really, whenever you see a diagram, you should think of a series similar to this one.
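For reference, the expansion being described, splitting H = H_1 + H_2 and expanding the time-ordered exponential in the interaction representation, has the standard textbook form (written out here schematically, not copied from the slides):

Z = \mathrm{Tr}\, e^{-\beta H}
  = \mathrm{Tr}\!\left[ e^{-\beta H_1}\, T_\tau \exp\!\left( -\int_0^\beta d\tau\, H_2(\tau) \right) \right]
  = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!} \int_0^\beta d\tau_1 \cdots \int_0^\beta d\tau_k\;
    \mathrm{Tr}\!\left[ e^{-\beta H_1}\, T_\tau\, H_2(\tau_1)\cdots H_2(\tau_k) \right],
  \qquad H_2(\tau) = e^{\tau H_1}\, H_2\, e^{-\tau H_1}.

Each order-k term is a k-dimensional imaginary-time integral; these are the terms that get abbreviated as diagrams.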
And then you start, typically, by approximating this series: you pick the terms that you like on physical grounds and you sum those up, either as a low-order series, if H2 is much smaller than H1, or you choose a certain class of diagrams, typically out of necessity rather than out of physical insight, that you then sum up to infinite order. There are very many methods that work like this: the random phase approximation is one of those, GW is a different name for that approximation, there's the so-called non-crossing approximation, the one-crossing approximation, you can go to fluctuation exchange or FLEX, you can do ladders, you can do parquet, and so on. All of these are methods that sum up certain classes of diagrams in such a series. Now, typically, when you work like that it is very difficult to ascertain that the results are meaningful outside of the weak-coupling limit. If H2 is not much, much smaller than H1, the approximation is typically uncontrolled, and you have to argue based on physical intuition why your solution is still meaningful.

Now, as I mentioned, this is really 1950s physics. So what is new? Well, let's go back to that series and realize that it is just a very high-dimensional series, with an infinite number of terms, all of them very high-order integrals. But we know how to do high-order integrals: we can use Monte Carlo methods to sum up the individual terms in the series and sample all of these different diagrams, all of these different terms that occur in the series. By sampling different topologies, different internal indices, and different time locations, you can then recover the value of that partition function, or measure observables with the weight that they contribute to this partition function. Each diagram, each term that I showed in this expansion, has a weight. We randomly take a diagram, we change it by inserting or removing vertices, by shuffling around lines, by rearranging internal indices, and so on, and we can use a Monte Carlo importance sampling procedure to walk through diagram space and in this way randomly sample all of these terms in the series above. And once we're doing that, we can construct estimators for, say, a density, a Green's function, or a susceptibility, and measure expectation values of those observables.

So the general idea, as I said, is: you start by identifying a convergent diagrammatic expansion. You realize that it is just a very high-order integral. You then define a Monte Carlo importance sampling procedure for diagrams, essentially making sure that you satisfy ergodicity and detailed balance, and you sample all of the diagrams stochastically. The advantages are that, as long as all of these diagrams are sampled, the only error that you have is a stochastic sampling error. These procedures are numerically exact and they are controlled. The stochastic sampling error converges like one over the square root of the number of samples, so if you're not sure whether your result is right, or accurate enough within the error bars that you have, you just keep sampling for longer and obtain smaller error bars, and with that you have rigorous control over uncertainties.
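To make the importance-sampling idea concrete, here is a minimal toy sketch in Python. It is not one of the actual continuous-time algorithms; the "diagram" is just a set of k vertex times on [0, beta] with weight density lambda^k, so that the partition function is exp(lambda*beta) and the average expansion order should come out as lambda*beta. The insert/remove updates and their Metropolis acceptance ratios are the part that carries over to the real methods; all names and numbers are invented for illustration.

import random

# Toy continuous-time expansion: Z = sum_k (lam^k / k!) * beta^k = exp(lam * beta).
# A "diagram" is an unordered set of k vertex times in [0, beta]; its weight
# density is lam^k.  Insert/remove updates with Metropolis acceptance sample the
# series; the average expansion order should come out close to <k> = lam * beta.
# (Real CT-QMC weights involve determinants or traces instead of lam^k.)

def sample_average_order(lam=1.5, beta=4.0, n_steps=200_000, seed=1):
    rng = random.Random(seed)
    times = []            # current diagram: list of vertex times
    order_sum = 0
    for _ in range(n_steps):
        k = len(times)
        if rng.random() < 0.5:                          # propose inserting a vertex
            tau = rng.uniform(0.0, beta)
            if rng.random() < min(1.0, lam * beta / (k + 1)):
                times.append(tau)
        elif k > 0:                                     # propose removing a vertex
            if rng.random() < min(1.0, k / (lam * beta)):
                times.pop(rng.randrange(k))
        order_sum += len(times)
    return order_sum / n_steps

if __name__ == "__main__":
    print("measured <k> =", sample_average_order(), " expected =", 1.5 * 4.0)

Running this prints an average order close to 6, the exact value lambda*beta for these toy parameters; the same insert/remove structure, with determinant or trace weights, is what walks through diagram space in the real algorithms.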
There is a perceived limitation here: first of all, there are very many diagrams, and it's really an infinite-dimensional space. Fortunately, the dimensionality of the integral does not enter your Monte Carlo estimates. But there is an actual limitation: if you interpret the terms in your expansion, those Taylor-series expansion terms, as weights of a stochastic sampling, you really need to have a probabilistic interpretation of them, and that means they have to be positive. If you just go and do a Taylor series, nobody guarantees you that all of the terms are positive. So typically you run into what is called a sign problem: you have some terms that are positive, some terms that are negative, and your signal disappears in the noise. So a lot of effort is actually spent on reformulating series in such a way that they have mostly positive, or entirely positive, expansions.

Now, let me show an early success that we had with these methods for fermions. What you see over here is the typical numerical problem size that we had in these methods as a function of inverse temperature. This is a single-site dynamical mean field calculation at a U of about the bandwidth; the details here are not that important. What you see is that back in 1986 the state-of-the-art algorithm, which at the time was based on a Trotter-Suzuki decomposition, scaled something like this over here. Back in 2005, with an interaction expansion, we were down to a scaling like this, and then in 2006, with the current method of choice, we're down to a scaling that is somewhere here. You see that all of these are linear in beta, but there is a prefactor of 30 or so between the 1986 method and our current state-of-the-art method. And because these are matrix operations that scale with the matrix size cubed, the linear size cubed, that corresponds to a speedup of about 30 cubed, or 27,000, or, if you think in terms of Moore's law, about 25 years of Moore's law in time to solution that we gain with these diagrammatic expansions as compared to simple Trotter-Suzuki based methods.

Another advantage is shown over here. This is the imaginary part of a self-energy as a function of frequency. You can see that if you use this method over here, you get something that has a systematic delta-tau error. That delta-tau error, plotted over here, is controlled: you make your delta tau, your discretization, a little bit smaller and you get a different result; in this case, if you make it twice as small you get the red curve over here, as you keep doing that you get the green curve, and then using those three points you can extrapolate to the exact result, which in this case we know from a different method, namely exact diagonalization, down here. In what I have shown you, by sampling these integrals, there is no delta tau, there is no concept of this discretization, and as you sample the series stochastically you of course end up directly on the right result, and with that you have an elimination of systematic errors in your problem.
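The delta-tau extrapolation he contrasts this with is the usual Trotter extrapolation: the leading systematic error of the discretized solver scales as delta-tau squared, so one fits results at a few step sizes and reads off the intercept. A minimal sketch with made-up numbers, assuming that quadratic leading error:

import numpy as np

# Hypothetical measured values of some observable at three Trotter step sizes.
# Assuming the leading systematic error is O(dtau^2), a linear fit in dtau^2
# extrapolates to the dtau -> 0 value.  (The diagrammatic/CT-QMC methods in the
# talk have no dtau to begin with, so no such extrapolation is needed there.)
dtau = np.array([0.25, 0.125, 0.0625])
observable = np.array([-0.412, -0.435, -0.441])     # invented numbers

coeffs = np.polyfit(dtau**2, observable, deg=1)     # fit: a * dtau^2 + b
extrapolated = np.polyval(coeffs, 0.0)              # value at dtau = 0
print("dtau -> 0 extrapolation:", extrapolated)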
So we can do the same thing some 25,000 times faster, and we can do the same thing much more accurately than previously. But really what gets us going in this field is that, by basing our methods on diagrammatic series and on the theory of Feynman diagrams, we can become much more general, and we can look at much more interesting or general problems. Previously we were doing single-orbital problems; now we have large multi-orbital problems. We were looking at Hubbard interactions; nowadays we do quantum chemistry using the full four-fermion interaction terms. A large system used to mean a two-by-two cluster; we're now doing hundreds of cluster sites. We can look at effective problems like Kondo problems. We can take a system and blast it with a laser and look at its time-dependent response. We can look at vertex functions, and with them at two-particle probes, phonons and screening, and, for example, superconductivity; a little bit more about that later.

So let me show you a basic example of how these methods work in practice. What you see here is an example that comes from cold atomic gas physics, and it tells you how we would typically do a simulation. You see a cluster of 18 sites that we use for solving a problem; in this case we're interested in thermodynamics, so energies, densities, entropies, free energies, all of that stuff, and we simply compute it in an approximate formulation on a small lattice. We then repeat the calculation on a slightly larger lattice, in this case 36 sites, we do 48, 56, 64, 84, all the way up to 100 sites. We know analytically the finite-size scaling of these quantities, so what we do is we take the approximate results, put them onto a plot, supplement them with the analytically known scaling, and once we know that they're on the thermodynamic-limit curve, we obtain a result in the thermodynamic limit, with an error bar, that now gives us the exact result of that infinite-lattice model in the thermodynamic limit. And we can take this and go back to our experimental colleagues.
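The thermodynamic-limit step he describes amounts to fitting the finite-lattice results against the known finite-size scaling form and reading off the infinite-size intercept. A minimal illustration, assuming for simplicity a leading 1/L correction and using entirely invented energies:

import numpy as np

# Finite-lattice results for some observable (numbers invented for illustration).
# Assuming a leading finite-size correction proportional to 1/L, the intercept
# of a linear fit in 1/L is the thermodynamic-limit estimate; in practice one
# uses the analytically known scaling form for the quantity at hand.
L = np.array([18, 36, 48, 64, 84, 100])
energy = np.array([-0.520, -0.545, -0.552, -0.558, -0.561, -0.563])   # invented

coeffs, cov = np.polyfit(1.0 / L, energy, deg=1, cov=True)
e_inf, e_inf_err = coeffs[1], np.sqrt(cov[1, 1])
print(f"thermodynamic-limit estimate: {e_inf:.4f} +/- {e_inf_err:.4f}")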
Here you see an example, again from cold atomic gas simulations, coming from very, very high temperature, where we have a high-temperature series expansion. Here you see the numerical results, the theoretical results, for a spin correlation function, a spin correlation function that shows a fairly large temperature response, which is why we're plotting it here, as we lower the temperature, or in this case the entropy. And if we supplement this by experiment, you can see that, starting out in the high-temperature regime, the experiments are fairly accurate: they stay on the theoretical curve for some time. At this point there is a breakdown, the experiment goes down, we go up, and we think we understand that this has to do with heating effects in the experiments. So that gives us more or less an indication of where cold atomic gas experiments can go, and the first interesting correlation physics, the onset of antiferromagnetic long-range order in this model, is still about a factor of two or so lower in temperature.

Now, comparing to experiments is always interesting. Another type of comparison that we like to do is to take methods that are biased, or potentially biased, in some way and compare them against other methods that are biased in a completely different way. Here, for example, you see results for the Hubbard model from on the order of ten or so different algorithms, where you see energies, in this case coming from high temperature, lowering the temperature, lowering the temperature even more, and then you see these results smoothly connecting to ground-state results that come from methods like auxiliary-field QMC or DMRG or fixed-node Monte Carlo and things like that. By putting results from methods that are completely differently biased onto the same plot and comparing them, we can learn how accurate these methods are in practice, and, you know, which results work in which regime and where they might potentially break down. This is interesting because often you hear that we don't understand the Hubbard model. What we actually find, by comparing the energies over a wide area of phase space, is that the typical uncertainty, if you convert it to cuprate units, is about 0.02 t, which corresponds to an energy uncertainty of about five Kelvin, where the big physical effects, if you think of cuprates, happen at about a hundred Kelvin. So this now gives us confidence that we can really use these methods to say something about, you know, a correlated Hubbard model at finite temperature in an interesting non-perturbative regime.

Now, diagrammatics is very interesting, and in particular you can use the toolbox that the 1950s physicists developed for us, and then through the 1960s and 70s, and you can try to adapt that analytical toolbox to numerical methods. A lot of this work has actually been done by Nikolai, who is sitting right over here. For example, you can take all of the diagrams that are sampled if you do a partition function expansion. You can take a logarithm; then you're limiting yourself to the connected diagrams. You can make it self-consistent, meaning that you throw out the diagrams that are not skeleton diagrams and limit yourself to skeleton diagrams, and if you want to you can go even farther and limit yourself to certain types of vertex functions. As you go down this route, the number of diagrams decreases quite rapidly, the sign problem decreases, and the physical insight is in many ways much easier to gain from these formulations. At the same time you're paying for this: your algorithmic complexity, as you go from here to here to here, increases vastly. Rather than just taking a determinant over all of these diagrams, you actually have to enumerate them and filter out the ones that are connected, or the ones that are irreducible, or skeleton, and so on. As you then go farther down this route, mathematical subtleties, in particular problems with self-consistency and convergence, or ergodicity in the Monte Carlo methods, start to play a role, and things may become tricky. Nevertheless, I'd like to show you a couple of standard improvements that one can make with these methods.

For example, if we do these simulations in real time and we look at the time-dependent evolution, for example of a single-electron transistor, we can say that a propagator that starts at time t and goes to time t prime is more or less the atomic-state propagator, plus, in the weak-coupling regime, one excursion, in this case a self-energy insertion that you see over here, and then of course we're going to have to sample all possible diagrams, and pretty soon those diagrams get complicated and we're going to have to enumerate just all of them. Now, this is naive, but you can imagine that you could take this as a starting point and go find your analytic friends and do the best that they can do. For example, we can say that the semi-bold propagator, which looks almost the same as this one, is the bare propagator, plus the bare propagator with such a self-energy excursion, plus the bare propagator with all of these rainbow insertions, and that leads you to the so-called non-crossing approximation. These are coupled integral equations; solving them takes maybe three seconds on a computer. These are very efficient integral equations that we can solve very, very quickly.
Now, if we say that we want the exact propagator, then why not start from the approximate propagator that already has an infinite number of diagrams in it, and supplement it with all of the diagrams that we haven't considered, for example, in this case, all of the crossing diagrams and then all of the higher-order crossing diagrams. If we do that, we recover the exact result. There are many fewer diagrams to sample, because we have already absorbed most of the diagrams with short-time excursions into this so-called non-crossing approximation. The non-crossing approximation already knows about some variant of the Kondo effect, and if it is accurate, we only need to correct it by a tiny bit, by those crossing diagrams that matter.

Now, does it actually work in practice? Well, here you see the average sign as a function of real time, and you can see that if we do it naively, just straightforwardly sampling all of the diagrams, then boom, your average sign drops exponentially as a function of real time, and pretty much you're dead in the water. If we formulate this around one of those crossing approximations, in this case the one-crossing approximation, you can see that for the same sign budget we come about twice as far in real time. Or if we take this method and now sample the third-order crossing diagrams, the fourth order, the fifth order, the sixth order, you can see that we gradually plateau, and if we can show that within those orders we're converging to the right result, or to a self-consistent result, then we can truncate this at a fixed sign. So this is extremely powerful.

And it turns out that we can actually do much better in these methods. There's causality: the propagator up to some time is pretty much the propagator up to some earlier time, plus, let's say, a bare propagator over here, a bare propagator and a self-energy insertion, maybe one that looks like this, maybe one that looks like that, or all sorts of higher-order, more complicated diagrams. And you can imagine now a diagrammatic Monte Carlo procedure that takes the exact result that we have up to some time and supplements it with a diagrammatic sampling of all of these additional diagrams, going gradually to longer and longer times. In practice, this is real time on the Keldysh contour, and I'll show you results in just a bit. The way we do this is like an inchworm: we start on a small contour, then we simulate a slightly longer one, we use that result to simulate a slightly longer one, simulate a slightly longer one, simulate a slightly longer one, and propagate on the Keldysh contour like an inchworm.
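Here is a structural skeleton, in Python, of that stepwise propagation: the propagator already known on [0, t_n] is reused to extend to t_{n+1}, and only the correction diagrams attached to the new interval would have to be sampled at each step. The Monte Carlo sampling itself is replaced by a stub, so with the stub returning zero the loop just reproduces the bare atomic propagator; everything here, names and parameters included, is an illustrative sketch rather than the actual algorithm.

import numpy as np

# Skeleton of inchworm-style propagation: extend the known ("bold") restricted
# propagator one time step at a time, reusing it when building the extension.
eps, dt, n_steps = 0.7, 0.05, 200        # toy level energy, time step, number of steps
P = np.zeros(n_steps + 1, dtype=complex)
P[0] = 1.0 + 0.0j

def sample_inchworm_corrections(P_known, n):
    # Placeholder: here a diagrammatic Monte Carlo sampling would sum the
    # diagrams that connect the newly added interval to earlier times, built
    # from the already known restricted propagator P_known[:n + 1].
    return 0.0 + 0.0j

for n in range(n_steps):
    # bare evolution of the last known value plus the sampled correction
    P[n + 1] = P[n] * np.exp(-1j * eps * dt) + sample_inchworm_corrections(P, n)

print("P(t_max) =", P[-1], " |P(t_max)| =", abs(P[-1]))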
Here's a result. What you see here is again a time-dependent problem as a function of real time. These are populations, and these are the errors on the populations, and you can see that with the naive methods we get to a time of about, well, two or so in these units, one and a half or so, too short to say something about the steady state. But as we switch from bare diagrammatic Monte Carlo to so-called inchworm Monte Carlo, the method that I just showed you, you can see that we can go to longer and longer times. The times that we can reach are given by the error that you see down here, and you can see that our error as a function of time increases a little bit, but certainly does not increase exponentially. Now, that exponential error that we had in the bare method, that is a sign problem: as you make your system larger in some parameter, the effort of getting to larger times grows exponentially. Here you can see that this is actually not the case anymore. We have in this way solved the real-time, or dynamical, sign problem. That is very different from the fermionic sign problem, but it tells you that at polynomial effort we can now reach pretty much as long times as we want.

Now, that allows us to do a lot of interesting physics. For example, what you see here is a real-time Green's function as a function of time, in a so-called dynamical mean field calculation, starting from an initial solution which is very far from the converged solution. You can see that by iteration 2 we're a little bit closer, we're actually accurate up to here, iteration 3 gets you up to here, and then iterations 4 and 5 are on top of these earlier iterations. Knowing the real-time Green's functions allows us to get spectral functions, and in particular to put error bars onto these spectral functions. Here you see a so-called voltage splitting of the Kondo peak, now obtained from currents, or from real-time Green's functions. What you see is that we have error bars on these values, and error bars on the spectral functions now really allow you to make definite statements about excitations, rather than having to rely on analytical continuation methods like the maximum entropy method, which for these high temperatures and, you know, non-equilibrium systems are really, really unreliable.

So a third type of algorithmic method that allows us to use diagrammatics in a smart way to say something about correlated systems are embedding methods, and to explain this, or to motivate it, I want to go back to the 1960s, to Luttinger and Ward. They told you that you should take a lattice Hamiltonian, and if you want to express its thermodynamic properties, in particular its grand potential, then that is essentially given by the Green's function, the self-energy, and something which we call Phi here, the so-called Luttinger-Ward functional, which consists of all linked closed skeleton diagrams built from the vertices V, these over here are the vertices V, and the propagators or Green's functions G. Once you know Phi, then the derivative of Phi with respect to G gives you the self-energy, the self-energy gives you the Green's function, and out of the Green's function you can reconstruct that Luttinger-Ward functional self-consistently. And that gives you a smart way of approximating diagrammatics, because, as Baym and Kadanoff told you in 1962 or so, if you make approximations to that Luttinger-Ward functional, rather than, say, to the Green's function or the self-energy, you can guarantee a couple of important conservation laws that, for example, make sure that you're not losing particles as you time-evolve your system.

Now, there's a very simple embedding method, or a first simple embedding method, and that is dynamical mean field theory. Dynamical mean field theory is an approximation to that Luttinger-Ward functional where you take the exact Luttinger-Ward functional and you kick out all of the terms that don't have the same orbital index at every single vertex. As you have only G_ii in there, when you take the derivative with respect to G, your self-energy is only Sigma_ii, and with that your self-energy is purely local to your orbital. This is a great method whenever the non-perturbative physics, or I should say the correlation physics, is purely local to an orbital.
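To show what "the self-energy gives you the Green's function, the Green's function closes the loop" looks like operationally, here is a minimal sketch of the single-site DMFT self-consistency cycle on a Bethe lattice with a semicircular density of states. The impurity solver is deliberately a placeholder (an atomic-limit-like self-energy at half filling); in the methods discussed in this talk, that box is where a diagrammatic or continuous-time Monte Carlo solver sits. All parameter values and helper names are invented for illustration.

import numpy as np

t, U, beta, n_iw, n_iter = 1.0, 4.0, 10.0, 512, 20
iw = 1j * np.pi / beta * (2 * np.arange(n_iw) + 1)      # fermionic Matsubara frequencies
mu = U / 2.0                                            # half filling
eps = np.linspace(-2 * t, 2 * t, 2001)                  # band energies
dos = np.sqrt(np.maximum(4 * t**2 - eps**2, 0.0)) / (2 * np.pi * t**2)
de = eps[1] - eps[0]

def impurity_solver_placeholder(g0_inv):
    # Placeholder: Hartree term plus an atomic-limit-like pole.  A real solver
    # would sample the impurity model defined by g0_inv diagrammatically.
    return U / 2.0 + U**2 / (4.0 * iw)

sigma = np.zeros(n_iw, dtype=complex)
for it in range(n_iter):
    # local lattice Green's function with the current self-energy
    g_loc = (dos / (iw[:, None] + mu - sigma[:, None] - eps)).sum(axis=1) * de
    # Bethe-lattice self-consistency: hybridization = t^2 * G_loc
    g0_inv = iw + mu - t**2 * g_loc
    sigma_new = impurity_solver_placeholder(g0_inv)
    if np.max(np.abs(sigma_new - sigma)) < 1e-8:
        break
    sigma = sigma_new

print("stopped after", it + 1, "iterations; Im G_loc(iw_0) =", g_loc[0].imag)

With a real impurity solver the self-energy fed back in changes from iteration to iteration, and the loop is repeated until the impurity and lattice Green's functions agree.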
Then there are methods, for example low-order approximations like the Hartree-Fock method, the second-order self-consistent Green's function method, or the so-called GW or RPA method, that take a certain subset of these diagrams, like these low-order diagrams or this RPA series here, and sum up these diagrams non-perturbatively. Now, because dynamical mean field theory and those methods are written in the same diagrammatic language, it's actually straightforward to take them and supplement a Hartree-Fock, a GF2, or a GW solution with DMFT, in a GF2+DMFT or GW+DMFT calculation. This is then perturbative in the non-local correlations, non-perturbative in the local correlations, and it's not able to capture non-perturbative non-local correlations. In a way it's still uncontrolled, because there's no small parameter that allows you to make it systematically more and more accurate. But you can take this formalism and gradually make it more and more systematic. This leads you to the so-called self-energy embedding theory, where you take the entire system and solve it approximately, with something like second-order perturbation theory or a self-consistent GW calculation. You then don't just take the local orbitals; you find the orbitals that are most strongly correlated, for example those near the Fermi energy, those with an occupation much different from zero or two, or, if you want to do local orbitals, those that are local, and then you treat the diagrams in that subset of orbitals at a higher level. The diagrams in that subset of orbitals correspond to a quantum impurity model. Using those diagrammatic Monte Carlo methods, you can solve these quantum impurity models non-perturbatively, and you can then insert, or embed, the solution of that impurity model self-consistently into the weak-coupling simulation. As you make that impurity subspace larger and larger, you have a small parameter, namely the number of orbitals that you take, or one over the number of orbitals that you take, and you can obtain self-energies and propagators via the Dyson equation, iterate, and obtain a result.

That sounds nice in principle; here's the math of how it actually works. The self-energy embedding Phi functional is given by the weak-coupling, GF2 or GW, Phi functional of the entire space, to which you add the strong-coupling solution in a subspace and then subtract a double-counting correction.
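Written out, the construction he describes takes the schematic form below (my notation, not taken from the slides): the weak-coupling functional of the whole system, plus the strongly correlated diagrams of the chosen orbital subspaces, minus the weak-coupling diagrams of those same subspaces so that nothing is double counted,

\Phi_{\mathrm{SEET}}[G] \;=\; \Phi_{\mathrm{weak}}[G]
  \;+\; \sum_{A} \Big( \Phi_{\mathrm{strong}}[G_A] \;-\; \Phi_{\mathrm{weak}}[G_A] \Big),
\qquad
\Sigma \;=\; \frac{\delta \Phi_{\mathrm{SEET}}}{\delta G},

where G_A is the Green's function restricted to the correlated orbital subspace A and "weak" stands for the GF2 or GW diagrams.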
And if we take this and apply it to systems where we have exact solutions, like here hydrogen molecules and chains of hydrogens, as a function of the stretching distance, we can see that, as we're adding diagrams, as we make the correlated subspace bigger and bigger, we can actually become quantitative for the stretching of these molecules, not just in the weak-coupling regime, but also as we pull these molecules apart. And pulling these molecules apart, in a way, is strong coupling; it corresponds to what physicists would call increasing your U, or going over into a strong correlation regime.

Now, dynamical mean field theory, as I've told you, is an approximate formulation. There's a different way of making it exact, or pushing it towards an exact solution, and that is by increasing the size of the impurity that you're looking at. Here I have it in a formulation called the dynamical cluster approximation, where we take the self-energy and expand it in basis functions. Those basis functions have frequency-dependent coefficients, the k-dependence is absorbed in the basis functions, and then we truncate that expansion at some expansion term. And now we have a systematic small parameter back, namely this N_c, or one over this N_c: for N_c equal to one you get single-site DMFT, and as you let N_c go to infinity you recover the exact solution. In practice you're somewhere in between: you do a sequence of larger and larger clusters and try to extrapolate to the thermodynamic limit.
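The expansion of the self-energy he refers to can be written schematically as follows (my notation): the momentum dependence is carried by N_c basis functions, in the dynamical cluster approximation simply patch functions phi_K(k), and the coefficients are frequency dependent,

\Sigma(\mathbf{k}, i\omega_n) \;\approx\; \sum_{K=1}^{N_c} \phi_K(\mathbf{k})\, \Sigma_K(i\omega_n),
\qquad
\phi_K(\mathbf{k}) =
\begin{cases}
1, & \mathbf{k} \in \text{patch } K, \\
0, & \text{otherwise},
\end{cases}

so that N_c = 1 reduces to single-site DMFT, N_c \to \infty recovers the full momentum dependence, and 1/N_c is the systematic small parameter.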
Now, in the Hubbard model, in interesting regimes like the superconducting regime, we can't go all the way to a hundred cluster sites. What we can do is look qualitatively at clusters of size 4, 8, and 16; here's a cluster of size 8, and we can look at the different phases that this model spontaneously exhibits. For example, here you see, as a function of interaction and as a function of doping, that the model is Fermi-liquid-like all the way over here, at large doping and at weak interaction. As we increase the interaction or reduce the doping, we end up in a d-wave superconducting regime, and as we further increase the interaction or reduce the doping, we end up in a pseudogap regime, and at half filling there's a Mott state. This is now very interesting, because if you go over here, at an interaction that is similar to the bandwidth, you are qualitatively in a regime that looks similar to what you find in the cuprates, and we can indeed start down here and just look at the superconducting order parameter. We see it come up, gradually detach from half filling, and then build a superconducting dome that gradually becomes smaller and eventually disappears.

With that we can do all sorts of interesting things. We can, for example, compute susceptibilities, so not just single-particle Green's functions but two-particle Green's functions. From the two-particle Green's functions we can obtain generalized susceptibilities; these are functions of three momenta, three k-points, and from those susceptibilities we can obtain information about superconducting pairing, but also make direct connection to experiment, for example look at the magnetic susceptibility and obtain from it the Knight shift, the relaxation time, and the spin-echo decay time as measured in nuclear magnetic resonance. Another way to look at these things is that we can take the susceptibilities, extract a vertex function out of them, and then ask the "why" question that theoretical physicists always want us to answer: what is it that causes superconductivity? What is it that causes a pseudogap?

Here's an extract from a calculation of a pseudogap. What you see here is a result for the self-energy: here's the imaginary part of the self-energy as a function of frequency at (pi, 0) and at (pi/2, pi/2). What you see, qualitatively, is that this has an upturn at (pi/2, pi/2), indicating metallic behavior, whereas at (pi, 0) you find a downturn, indicating Mott-insulating, or insulating, behavior. So with that you have a good metal at the node and a bad metal at the antinode. Now, from the vertex function that we get from the susceptibilities we can, via the equation of motion, get back the self-energy, and there are various ways in which you can decompose that vertex function. For example, we can decompose it into spin fluctuations, charge fluctuations, and particle-particle fluctuations, and we can simply ask: if we decompose this into spin fluctuations, is there a dominant signal, is there what physicists call a dominant channel that shows a signal? Or if we decompose it into charge fluctuations, is there a dominant signal? Or if we look at superconducting, particle-particle, fluctuations, do we find, for example, that d-wave superconducting fluctuations are prominent in this pseudogap phase? What you see here is the result of that decomposition. The decomposition is exact, you get the exact self-energy back, and if we decompose this into charge you see a more or less flat signal over here, same with particle-particle; if we decompose it into spin fluctuations, you can see that in that pseudogap picture that causes this self-energy over here, about 90% or so of the signal is caused by short-range spin fluctuations, here the (pi, pi) antiferromagnetic spin fluctuations. And that now gives us a way of looking at these fluctuations and telling you not just that, yes, we have a result that shows a pseudogap, but also that the dominant contribution to that pseudogap comes from antiferromagnetic, short-wavelength fluctuations.

With the susceptibilities, here's a different way of looking at them: we can always ask how likely that system is to undergo a superconducting phase transition, and we can do that not just at low temperature, where we actually make the system go superconducting, but we can sit at fairly high temperature and simply scan parameter space and ask, as we scan the parameter space, how likely is the system to undergo a transition. Here you find a heat map, a color plot, of the system; this is interaction, this is doping, and the red here is where the system is most likely to undergo a phase transition. What you find is that if you have an interaction that is, in this case, about two-thirds or so of the bandwidth, and you go away from half filling, the Hubbard model in two dimensions wants to become superconducting. As we change the doping you can see there's this banana that develops, and then the maximum moves out here to this doping.
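A standard way to quantify "how likely is the system to go superconducting" above the transition, presumably the kind of quantity behind such a map, is the leading eigenvalue of the particle-particle Bethe-Salpeter kernel built from the measured two-particle quantities (again my notation, schematic):

-\frac{T}{N_c} \sum_{k'} \Gamma^{pp}(k, k')\, \chi_0^{pp}(k')\, \phi_\alpha(k') \;=\; \lambda_\alpha(T)\, \phi_\alpha(k),
\qquad k = (\mathbf{k}, i\omega_n),

where \Gamma^{pp} is the irreducible particle-particle vertex extracted from the two-particle Green's function and \chi_0^{pp} is the bare pair propagator. The transition is approached as the leading d-wave eigenvalue \lambda_d(T) goes to one, so \lambda_d evaluated at fixed temperature across the interaction-doping plane gives a proximity map of this kind.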
Here's a third answer; this is now a direct comparison of susceptibilities to what we measure in experiments. What you find here is the NMR Knight shift as a function of temperature. In a Fermi liquid you would expect that NMR Knight shift to be pretty much boring and not do anything at all, maybe just be flat, until you hit the superconducting transition temperature, at which point the Knight shift should plummet towards zero. And that is exactly what we find in the overdoped regime, which for these parameters is somewhere over here: there's not that much happening to that Knight shift until you hit Tc, and at Tc the Knight shift is strongly suppressed and plummets towards zero. Now, the interesting thing is to ask what happens if we go to the pseudogap phase. I've already shown you that d-wave superconducting fluctuations are pretty much absent in this calculation. Now, if you look at these results here, you see that the NMR Knight shift goes over a maximum; that maximum is the onset of the pseudogap phase. As we push further along, you can see that the NMR Knight shift is suppressed. That suppression, as I showed you, has nothing to do with superconducting fluctuations, until it goes through Tc, and as it goes through Tc here, you can also see it in green, there's really not that much change that you observe in the Knight shift. Theorists have speculated that the suppression of the Knight shift signifies the onset of d-wave superconducting fluctuations and shows you that preformed pairs are already present somewhere over here. One of the ways that we can now use this formalism is that we can compute those fluctuations, attribute them, in this case, to short-range antiferromagnetic fluctuations, and show that up here they have nothing to do with the suppression of the Knight shift; it is only over here, in the superconducting phase, that that happens.

With this I'm out of time. I hope I could give you a little bit of an overview of how we're trying to do diagrammatics: take the 1950s formalism of Feynman diagrams, throw it onto a computer, make it do stuff; then take the analytic toolkit that people developed in the 50s, 60s, and 70s, throw that onto a computer as well, and use it as a starting point so that we only have to do the rest; and then use that toolkit to try and say something about an interesting system, for example systems out of equilibrium, here for the voltage splitting of the Kondo peak, or a superconducting system, and really try to answer a, you know, "why" question, the way that theoretical physics likes to pose them to us. Thank you for your attention.