Before we begin, I hope that you are collecting your questions for the Q&A session in the afternoon. And is the person who will be the moderator in the room? You're here. OK, so do you want to make an announcement? Same thing as yesterday: just send me your questions. I'll put my email address again over there. Send me your questions for the speakers of today, but they can also be about general things over the week, or general concepts. OK? Just life in general. Just life in general, exactly. Careers in physics. So with that, the next lecture is by Professor Nikolai Prokof'ev, who will be talking about something totally different from the first two lectures, and this is going to be Diagrammatic Monte Carlo. Not quite yet Diagrammatic Monte Carlo — Diagrammatic Monte Carlo will happen tomorrow. Today it will be about life in general. Because talk after talk, I hear statements like: oh, this quantity cannot be calculated, nobody knows how to do it. You go to the next talk, another quantity is on the screen, and people say, OK, God knows how to calculate this quantity in metals or strongly correlated systems. So today I will try to convince you that life in general is not so bad. Yes, I was not even born when some of these models were introduced; I will go on pension, and we still don't know how to solve them. But I hope that maybe the younger generation will take it over, because now, I would say, we are on a reasonable track: more or less any generic fermionic system — I'll explain what I mean — can be solved numerically on a classical computer. That will be the essence of this talk. I'll show maybe only one diagram, just to illustrate, but that's it. So today we'll discuss why, if you talk to someone who is doing numerics, they immediately mention some sign problem, get scared, and run away — unless it's a very specific, zero-measure model with some specific parameters, which can be done without the so-called sign. But generically, if I take any Hamiltonian with any density, any interaction range, people talk about something which is called the sign problem. So I'll try to convince you that the sign problem is not a problem of fermions. It's a problem of humans trying to calculate them. So let's start. The work behind the ideas I will present today was done in collaboration with Boris Svistunov and Igor Tupitsyn, who are at UMass Amherst, and also with the ENS group — well, Riccardo Rossi has now moved to the Center for Computational Quantum Physics in New York — so it's Rossi, Van Houcke, and Félix Werner. So let me start with what I mean by interacting fermions. Strictly speaking, I will be talking about something which is generic enough. This will be a Hamiltonian which contains a quadratic part in terms of fermionic operators — you can imagine any dispersion relation for any number of bands, including spins, spin-orbit coupling, whatever you like if you want to add more — and some generic two-body interactions between the fermions; again, no restrictions whatsoever on the range of interactions or on the structure of the matrix element between states a, b, c, d. You can add three-body interactions, four-body interactions, no restrictions whatsoever. But I am assuming that I am not solving a Hamiltonian where every single parameter can be anything you like.
So if you ask whether we have a solution for a Hamiltonian where every parameter is a random number you can potentially imagine — that one I don't know how to solve. But if it's a regular Hamiltonian, where you specify some functions for the dispersion and for the interactions, a translation-invariant but otherwise strongly interacting system, then, I would say, this problem is solvable. Why am I saying that the fermionic sign problem is human-made? If you take a voltmeter to an experimentalist and mention the sign problem, they will immediately give you the answer: you just connect the wires, and they will give you the resistivity. So the fermions don't have a problem. Obviously, the problem is not coming from the fermions; the problem is coming from us, if we want to compute properties of those fermions using certain methods. And that's what I will be trying to show you. Yes, we can try to compute properties of those fermions using certain specific, of course human-invented, methods, and then we can encounter a problem which will not allow us to tell what the properties of those fermions are in the thermodynamic limit. That's what I will explain first. That's what scares many people, because some of the methods are almost black boxes. But you have to pay the price for this black box, because sometimes it's totally not working. And, more or less, for any generic fermionic system there is no black box which will work: it will face the so-called fermionic sign problem, because it's based on a particular method. But what I will be trying to convey next is that, at least at the level of thermodynamics — this is where I understand how to demonstrate it — if you can measure some property experimentally, then I believe you can definitely compute the same property with the same accuracy on a classical computer for this type of Hamiltonian, including maybe higher-order terms, no restrictions. But this will not be a black box. This method will allow you to compute properties, but you have to use a lot of analytic input adjusted to the properties of this Hamiltonian, and you have to do it right. If you do it right, you'll get your answer, but you'll have to work on it analytically; it will not work just by itself. And you'll see, layer by layer, why this is important. Which means it's not that you have some magic button and theoretical physics is over — Feynman's diagrammatic technique, more or less, can be applied to anything you like, but if you apply it without thinking, nothing will work, so some analytic understanding of what you are doing will be required at all times. But once you gain this analytic understanding, you can switch on the computer, some of the properties can be computed in a regular fashion, and you can claim the accuracy of your calculation. You're no longer guessing whether this is right or wrong, or maybe not; you know the answer is such and such, with such and such an accuracy bound. Well, that's why I'm putting it here: if you throw a model at me, and I never looked at this model before, I don't know how to solve it yet. You start working analytically, and maybe after that you realize: OK, if you do this, this, and this, this model will be solved. So the problem will not be in the fermionic sign; I'll explain that the problem will be in something else. The fermionic sign will never hurt you in this setup. So let me explain what all of this means.
So let me first discuss what happens with traditional quantum Monte Carlo methods. That's where the human-invented sign problem appears. Typically it happens at the following level. You simply map a d-dimensional finite-size system of fermions — which means sometimes you have all the particles explicitly in the system, or sometimes your mapping depends explicitly on the number of particles and the size of the system — onto a (d+1)-dimensional "classical" counterpart. By "classical" I just mean, yes, you can run it on a classical computer, but I put quotes to explain a little bit later that the weight of what you are trying to contribute to your answer may not necessarily be sign-positive. If it were sign-positive, it would be just classical, without any quotes. With quotes, "classical" means the weight is a classical number, but in general a sign-alternating or even complex one. So you do the mapping, and after the mapping is done, you simulate the latter by Monte Carlo methods. Because under the mapping, you find that you have an enormous sum, so large that you cannot enumerate all the terms. So you have to find some technique which will select the most relevant terms and sum them up in a proper way, such that if you run infinitely long, the answer will be exact. That's what Monte Carlo methods allow you to do, more or less generically. For example, let me explain how this works, say, in the path-integral or some other representation; that's more or less the general approach. Suppose I have to compute the average of some quantity Q. So I have to take a trace — this is just pure thermodynamics; you can invent other examples, but they will look exactly the same. I have to take the trace of Q times the Gibbs exponential and normalize everything by the partition function; the trace of the exponential alone, this is just Z. Next, I understand that I cannot easily do it. If I can exactly diagonalize the problem, it's already solved, no problem; so I'm assuming that this is impossible because of the large Hilbert space — it's too large for me. So then I have to write down this trace in some convenient basis set. Imagine, let us say, this can be Fock space, or, if it's in continuum space, maybe momentum space. Only one of the terms in the Hamiltonian can be made diagonal, and then you have the off-diagonal part, which makes your life miserable. Still, I can use any basis set I want to write down the trace. So your Q average will be a sum over ν and μ: the matrix element of Q itself, times any possible matrix element from μ to ν of the Gibbs exponential. For the partition function alone, all you need are the diagonal matrix elements, but for the general calculation you also need the off-diagonal ones. Of course, this is still not possible to compute directly. So next — and this is a generic prescription, which leads to different techniques if you implement it in different ways — you simply say: the exponential, because it's of the same operator, can be written as a power of the exponential with a much smaller step ε. Let's call it the density matrix ρ_ε to the power p. And that's how an additional dimension appears.
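To write out the construction just described (a sketch in generic notation, with step ε = β/p and {|ν⟩} the chosen basis):

$$
\langle Q \rangle = \frac{\mathrm{Tr}\,\big[Q\, e^{-\beta H}\big]}{Z}, \qquad
e^{-\beta H} = \big(\rho_\epsilon\big)^p, \quad \rho_\epsilon = e^{-\epsilon H},
$$

$$
Z = \sum_{\nu_1,\dots,\nu_p} \langle \nu_1|\rho_\epsilon|\nu_2\rangle \langle \nu_2|\rho_\epsilon|\nu_3\rangle \cdots \langle \nu_p|\rho_\epsilon|\nu_1\rangle ,
$$

and the index running along the product is the extra, imaginary-time dimension.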
Because if I have an index to say, OK, I'm talking about this factor, or this one, or this one, I can label them, and this gives me a sense of something propagating along this axis. So p is the number of factors into which I break the exponential; of course, as ε goes to 0, the number of factors goes to infinity, and ultimately I'm exact. The advantage of doing this — yes, you pay the price of an imaginary-time dimension; that's what I will call the imaginary-time dimension, how I progress along this product — is that ρ_ε itself, because it contains a small parameter, I can calculate with any degree of accuracy as ε goes to 0; maybe even the linear expansion is enough. What people do next depends. Some people simply open those brackets: stochastic series expansion. Some people will exponentiate some of the terms, say those which are diagonal in the basis, and keep the expansion in the off-diagonal terms. What you do next with this expression and how you compute the matrix elements of ρ_ε doesn't matter; this is what gives you the different techniques. But you will end up with something which looks like this, say, for the partition function: I have a product of those factors, and I have to specify all the indices. This is a compact notation: C is the collection of all the indices ν_1, ν_2, ..., ν_p. If I specify all those numbers, I know exactly which matrix elements I have to take, and I understand that each can be computed relatively easily because ε goes to 0. Do the product. And if I can perform this summation — it's a multi-dimensional sum, and I have a lot of summations here — this is my partition function, and it becomes more and more accurate as ε goes to 0. So that's where Monte Carlo comes in. Monte Carlo comes in at this point, saying: the sum is too complex, but you can sample it. That's more or less the generic approach, and one can implement it for essentially any Hamiltonian. So now we face — well, these are more or less the names attached to different ways of computing this. You can do it for path integrals, on a lattice or in continuum, which means some of the sums may be integrals; if the index characterizing the Hilbert space is continuous, so be it. Stochastic series expansions; high-temperature expansions can be done this way; strong-coupling expansions. You can imagine any number of techniques, and they will all look the same. Now I have to calculate this, and it looks like something which I cannot yet put on a classical computer — but then you do a trivial redefinition of your averages. I define the sign as follows: w, being a product of matrix elements, is not necessarily sign-positive; it can be anything in a generic quantum system. So I separate the sign of this number and its modulus: p_C is the modulus, s_C is the sign of w_C. Once this is done, you trivially rewrite the average: I divide by the sum of those p_C's over C — let's call it the configuration space — and I multiply and divide by the same sum, trivial.
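Spelled out, the reweighting trick is (a sketch, with C = (ν_1, ..., ν_p), w_C the product of matrix elements, s_C = sign(w_C), p_C = |w_C|):

$$
\langle Q\rangle = \frac{\sum_C q_C\, w_C}{\sum_C w_C}
= \frac{\;\sum_C q_C\, s_C\, p_C \big/ \sum_C p_C\;}{\;\sum_C s_C\, p_C \big/ \sum_C p_C\;}.
$$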
But now this expression can be read as the average of q times s over positive statistics, because p_C divided by the sum of all possible p_C's is a probability: it's normalized to 1, it's positive, and if I sum all possibilities I get 1. So this looks like averaging over the configuration space, where p_C divided by its normalization is the probability for the configuration to happen. You immediately give this expression a stochastic, probabilistic interpretation. You could do an exact enumeration and sum, but that's not possible — you'd never finish in the lifetime of the universe. But I can generate C with probability proportional to this number, and that's how Monte Carlo works. So ultimately I compute those averages, of q times s, and of s alone. You see that in the denominator I have the average of the sign. If I compute those averages by generating configurations according to their weight, that's my answer. So we know exactly how to run Monte Carlo on this setup, because everything has a proper probabilistic interpretation. But now comes the problem. Just look at the expression and you say: wait a second. The sum of the p_C's is the partition function of my system with the signs of all matrix elements totally ignored — I make them positive, positive, positive. And the partition function itself is when I take everything with the sign. So that's exactly what the average sign is: the partition function of my original system divided by the partition function of a fictitious system where the sign is totally removed. Imagine doing this in the path-integral representation: you have exactly the same weights for fermions and for bosons, but for fermions the configurations where particles exchange places carry a minus sign, while for bosons they are still plus. So this is more or less the figure of merit: if you do bosons and fermions for the same Hamiltonian, the sign average will be e to the minus beta times the difference of free energies of fermions and bosons. But we know that the free energy of fermions — or the energy at zero temperature — is a finite amount per particle higher than for bosons: bosons essentially condense close to zero kinetic energy, while fermions have to keep their Fermi energy. So you immediately realize that the average sign goes to zero exponentially in the spacetime volume of the system: beta is sitting in the Gibbs exponent anyway, the difference of free-energy densities between fermions and bosons is just some number, and L^d is the volume of the system (I am doing the partition function). So you immediately realize: the moment you want to do a somewhat bigger system, with a large number of particles at low temperature, this exponential is extremely small; and if I compute it not exactly but statistically, by Monte Carlo, I will always have a statistical error bar, which only vanishes as one over the square root of the simulation time. And this error bar has to be smaller than the average sign. Because, remember, I have to take the ratio: the numerator goes to zero exponentially, the denominator goes to zero exponentially, each plus-minus an error bar. Unless your error bars are smaller than these numbers, dividing error bar by error bar is just nonsense.
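In formulas, the figure of merit just described (with f_F and f_B the free-energy densities of the fermionic system and of the sign-free fictitious one, Δf = f_F − f_B):

$$
\langle s \rangle = \frac{Z_F}{Z_B} = e^{-\beta L^d\,(f_F - f_B)},
\qquad
t_{\mathrm{sim}} \gtrsim \frac{1}{\langle s\rangle^2} \sim e^{\,2\beta L^d \Delta f}.
$$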
You will see nothing unless your simulation time is larger than the inverse average sign squared. But the sign is exponentially small in the spacetime volume of the system. So very quickly, if you try to go to large systems, you face the problem that the time you need to know the answer even approximately is simply impossible to reach in practice. So let me simply recap. Monte Carlo — I already said it in words — interprets this as a probability. If I calculate the ratio by adding configurations to the sums with probabilities given by this number, then Monte Carlo simply generates an ensemble of configurations according to their weight; that's exactly what Monte Carlo knows how to do. And then you sum over all generated configurations in the numerator and divide by the sum over all generated configurations in the denominator. But by the central limit theorem, if you add pluses and minuses, pluses and minuses, unless I include a lot of them and run for a very long time, I will never know the correct answer, because it's exponentially small. So that's the canonical version of what people call the sign problem. The method is such that if I go to a large system, I have to simulate for a time exponentially long in the system size. But I will show you that this is not the end of the story, as it is typically presented. So this was the explanation of how traditional Monte Carlo works and why people mention the sign problem in this setup. There is no fermionic sign problem yet — I'm just saying sign problem, because I didn't even specify the Hamiltonian; it can be any system you like, a frustrated magnet or something else. Any questions on this? Whenever someone mentions the sign problem in a particular numerical method, that's exactly what they mean: you divide 0 by 0, but each 0 is only known with an error bar, and in a large system this is so bad that you have to run your computer for an outrageously long time before you see the light for the ratio. Now let me ask the question a little differently. Why do I care about this sign problem? Because what I really care about — as any reasonable person will immediately realize, and this question was asked by David Ceperley, which is why we reacted to it — is the following. You have a quantity Q, and what you want is this answer with some accuracy. Obviously, you cannot solve the problem exactly: for a generic problem in the thermodynamic limit, you'll never know the answer exactly. So don't even ask me, "do you have an exact solution?" No, it will never be available, except maybe in one-dimensional systems; typically we don't have it. Generically, the exact solution is not available and never will be, so don't ask for it. All you can ask for is an accurate answer, and you specify the accuracy you want. If you say, "I want to know the property of this electronic system to 50 digits," you can keep asking for the rest of your life; nobody will answer. So you say: I want to know the property with a certain error bar. That's a reasonable request. And then I add: I want to know it in the thermodynamic limit. Why do I mention the thermodynamic limit? Because, strictly speaking, if I don't, I have to give you the system size; and you say, OK, you know the answer in a system 4 by 4, but why not 5 by 5? Why not 6 by 6? Typically we're interested in large samples.
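Before moving on, here is a toy numerical illustration of the reweighting trick above; it's a minimal sketch with invented weights (nothing physical), just to show that the estimator ⟨qs⟩/⟨s⟩ is correct but becomes noisy as the average sign shrinks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented sign-alternating weights w_c and observable q_c, purely for
# illustration; the small positive mean makes <s> small but nonzero.
w = rng.normal(loc=0.1, size=100_000)        # weights of either sign
q = np.cos(np.arange(w.size))                # some made-up observable q_c

p = np.abs(w) / np.abs(w).sum()              # positive "sign-free" probabilities
s = np.sign(w)                               # the leftover sign to measure

idx = rng.choice(w.size, size=50_000, p=p)   # sample configurations ~ p_c
estimate = (q[idx] * s[idx]).mean() / s[idx].mean()

exact = (q * w).sum() / w.sum()              # exact ratio, for comparison
print(estimate, exact)                       # noisy: both averages are near zero
```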
And that's where the sign problem will show up, because it's the scaling with the size of the system. So: I want to know the answer in the thermodynamic limit with a given accuracy. I specified it. And now the relevant question is: for how long do I have to run my computer to get this answer? That's what we call the computational complexity problem. You specify the quantity and the accuracy bound, and I have to answer how long I have to compute for that error bound. Of course, if your error bound is 0, I have to compute for infinitely long; but if you specify a finite epsilon, I have to give you an estimate of how long the computation takes. And then we say you have a problem if the CPU time you need scales to infinity much faster than any polynomial function of 1 over epsilon — for example, if it scales exponentially, as the exponential of 1 over epsilon. That means if you want to improve the accuracy by a factor of 2, you immediately go from one second to the age of the universe; you cannot do it. But if the scaling of the computation time with the accuracy bound is polynomial, we say the computational complexity problem is solved. Of course, the polynomial can be easy, the polynomial can be tough, but that's more or less the consensus in the community: you have a problem if it's faster than any polynomial; if it's polynomial, there is no problem. It means there is potentially a route to make your answer better and better: you want the accuracy 10 times better, I have to run, say, 100 times longer, but it's doable. So that's the formulation of the computational complexity problem. I already explained that the thermodynamic limit is required, because the moment you fix a small number of particles, you know that sooner or later the central limit theorem takes over and the accuracy can be improved in time scaling as 1 over epsilon squared. The prefactor can be an exponential function of n, but once I pass it — once I beat the sign problem with my computing power — the error bars will keep shrinking, with time scaling as 1 over epsilon squared, for any finite system. Of course, in practice the question is whether you can beat that prefactor before you die. That's why I have this disclaimer about why we need the thermodynamic limit: so that I don't have to stick with a very small system. That's the computational complexity problem. It sounds reasonable; it just asks whether I can realistically improve the error bars. That's all computational complexity is about. Now I have to combine the two. I see that in some methods I have extremely bad scaling of computation time with the system size; at the same time, I want to estimate how long I have to run the computer to reach a particular error bound. Let me be slightly more specific. If you want some accuracy epsilon for your quantity, and the method has the fermionic sign problem, then T_Q for this epsilon will be some prefactor T(n) — how long I have to run the computer to get an accuracy of order 100% — and then I improve according to the central limit theorem, with the 1 over epsilon squared factor. But the problem is that T(n) is an exponential function of n. And you say: OK, sorry, but we face a problem here.
Because in this definition I mentioned n, but the original formulation was: give me the thermodynamic-limit answer with a given accuracy epsilon. There is no n. So I cannot use this formula as written, because n is not specified in the setup. What n do I have to use to get the accuracy epsilon? At this point you immediately realize: since n is not specified, I have to find it in the most natural way — of course, select the smallest possible system where the systematic error coming from the finite volume is smaller than the required accuracy. Because you can beat the statistical error bars and make them as small as you want, but if you still have a systematic error from the finite size, your answer is still not accurate and doesn't yet reproduce the thermodynamic limit. So that's what you need: to properly combine the fermionic sign problem with the computational complexity question, I have to look at the finite-size scaling. That's how I will determine n. You give me epsilon; fine. Once I know epsilon, I look at the finite-size scaling — how the answer obtained for a finite system differs from the thermodynamic limit, in relative units. Given this deviation δ(n), I simply go to the smallest possible system size such that the systematic errors are smaller than the required accuracy, and this tells me the n I have to use in the calculation. So that's, more or less, the picture. If somebody gives you epsilon, don't immediately jump to some large system size just because you think it's good enough. You will start the calculation, and before you know your answer even to 100% accuracy, your time will be over. There is no budget, there is no time; you die, and you still don't know the answer. First estimate your finite-size scaling and jump immediately to this n(ε), because you don't have to do a bigger n. Remember, the scaling is exponential in n, so the smaller the n, the better. And this n is enough to ensure that the systematic errors are below the bound: if I then compute with relative accuracy epsilon, I know the systematics is also below epsilon, and I have satisfied your conditions. Which means: if you want to estimate how bad the fermionic sign problem is for computational purposes, you have to do a finite-size scaling analysis. From whatever this scaling law is, just draw two lines, determine n(ε), and compute for this n(ε). Of course, your time will now scale as e to some constant times n(ε); but how this depends on epsilon is now controlled by the finite-size scaling. So, strictly speaking, you cannot claim that the fermionic sign problem is already a problem unless you investigate the finite-size scaling. You may have a fermionic sign problem in this form, but who cares, if the finite-size scaling is very good — which probably means I don't have to compute large system sizes anyway. So let me illustrate this for traditional methods. Imagine, hypothetically — I don't know any system in nature which satisfies this — that my finite-size scaling is such that the systematic error goes to 0 roughly as e to the minus the volume of the system. If this were true, then, since I need this to be smaller than epsilon, my n(ε) would be proportional to the log of inverse epsilon.
You substitute this into the formula: my time is exponential in n, but n itself is log inverse epsilon. You find that the log of the simulation time scales as log inverse epsilon — the time is just a power of 1 over epsilon. The sign problem doesn't cause any computational complexity problem. Why? Because the finite-size scaling is so good that I never need to go to very large system sizes: you want better accuracy, you slightly increase the system size and satisfy the new bound, which means I can improve the accuracy in polynomial time. So you immediately realize that despite the exponential scaling with the number of particles, if my finite-size scaling is very good, I'm in very good shape. (Well, this can of course be written slightly better — yep, I'll come to this. Right now I'm just talking.) Now, first of all, these types of systems don't exist. Except — let's look at gapped systems with short-range correlations. There, realistically, you can have finite-size scaling which is exponential in the linear system size, not the volume. And this is fine, because once I pass the correlation length of the system, where all the microscopic physics has saturated, the finite-size errors go to 0 as I go to bigger sizes. So this will be the case of gapped systems with short-range correlations. Then I find, in the same way: given the system dimension, n(ε) will be log inverse epsilon to the power d. I substitute this into the exponential formula, and I find that t_Q scales as a power of 1 over epsilon whose exponent itself grows as log inverse epsilon to the power d minus 1. In 1D, linear scaling with the system size is the same as the volume, because d equals 1. So essentially the answer is: in any dimension other than 1, I still have a computational complexity problem. The fermionic sign problem will cost you computational complexity, because the power of epsilon depends on epsilon itself: the smaller the epsilon, the higher the power. Strictly speaking, this is not a solution of the computational complexity problem yet, but it's not too bad. The requirement is that it has to be a gapped system, so that correlations are very short and I don't really have to do very big systems. It's not too bad; it's not exponential. So that's one piece of good news: not exponential, but not a solution yet. Finally, there is one bad case. If your finite-size scaling is only a power law — you improve the accuracy by going to bigger systems only as a power law — then my n(ε) is a power of 1 over epsilon: the linear size is a power, you take it to the power d, still a power, and I put it into the exponent for the computation time. So log t depends on epsilon as a power — maybe not the power minus 1, but still a power. This is now an exponentially bad computational complexity problem. Of course, there is a star here, which says: all of this is fine under the assumption that I can reach the length scales where the finite-size scaling applies. And you can easily imagine that I'm in a gapped system, but the gap is not too large — not too small, not too large, sorry — so I first have to pass through the correlation volume, and only then do I get my favorite scaling. But for this I first have to beat an exponential of the correlation volume in d plus 1 dimensions. Once I beat it, OK, I can enjoy the scaling.
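To collect the three hypothetical finite-size-scaling cases in one place (a sketch; γ and a are constants, power-law prefactors dropped):

$$
\delta \sim e^{-n}:\quad n_\epsilon \sim \ln\frac{1}{\epsilon}
\;\Rightarrow\; t \sim e^{\gamma n_\epsilon} = \Big(\frac{1}{\epsilon}\Big)^{\gamma} \quad\text{(polynomial);}
$$

$$
\delta \sim e^{-L}:\quad n_\epsilon \sim \Big(\ln\frac{1}{\epsilon}\Big)^{d}
\;\Rightarrow\; t \sim \exp\!\Big[\gamma\Big(\ln\frac{1}{\epsilon}\Big)^{d}\Big] \quad\text{(not polynomial for } d>1\text{);}
$$

$$
\delta \sim L^{-a}:\quad n_\epsilon \sim \epsilon^{-d/a}
\;\Rightarrow\; \ln t \sim \epsilon^{-d/a} \quad\text{(exponentially bad)}
$$

— all under the assumption that the scaling regime is reached, i.e., that the correlation-volume exponential, of order exp(γ ξ^{d+1}), has been beaten first.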
But this has to be less than the age of the universe. So with this I'm done, more or less, explaining how traditional Monte Carlo methods face the fermionic sign problem. First, in 1D we are probably in reasonably good shape — but 1D is already considered numerically solvable no matter what, because you have DMRG, you have wonderful variational states; maybe the real-time dynamics is not fully solved yet, but 1D is considered essentially done numerically: you can ask any question and get good answers. If you go to higher dimensions, yes, the fermionic sign problem will lead to computational complexity, especially if the system is not gapped. Then you immediately get stuck, face exponentially long CPU times, and you cannot do it. So that's the conventional sign problem and how it works. Maybe the new ingredient I am trying to convince you of is: don't be scared of the exponential itself; be scared of the finite-size scaling. It's really the finite-size scaling which forces you to compute big system sizes, because while I'm still below the correlation length, I cannot extrapolate to the thermodynamic limit; I really have to beat that exponential first. And even then it may be very difficult to improve. So that's the canonical fermionic sign problem. Now I will switch the method. Computing properties of interacting fermions does not have to mean doing numerics in a finite volume with all the degrees of freedom sitting in that volume. We have other tools — and, more or less, the same person invented them. I will use connected Feynman diagrams to express my answer directly in the thermodynamic limit. When you do Feynman diagrams, yes, you can write down Feynman diagrams for the partition function — then I face the same problem, and some methods based on diagrams for the partition function, connected and disconnected together, do face the same problem. But I can also do the calculation for log Z, and this is what connected Feynman diagrams give you. Because you do it for log Z, the diagrams are explicitly for, say, the free-energy density. Which means I can formulate everything in terms of Feynman diagrams for the infinite system. There are no particles left. When we draw Feynman diagrams, yes, we sometimes say: the electron comes in, the electron goes out, you exchange a phonon — and it looks like I have particles. No, it's already field theory; I'm simply describing the process of constructing Feynman diagrams, and there is some vision behind it, but strictly speaking there are no particles in this formulation. If you want to do thermodynamics: you have the Hartree term — a very simple graph, just a number — but it already accounts for the mean-field behavior, where you smear everything out homogeneously. You do the Fock term: that's already another diagram. It's not that two graphs means two particles. No, it's already the thermodynamic-limit system. So, first: Feynman diagrams already work directly in the thermodynamic limit. The moment I say this, you should say: wait a second. The fermionic sign problem was formulated as exponential scaling with the particle number. The moment I say that I will compute everything using Feynman diagrams, I cannot have that fermionic sign problem, period, because there are no particles left, OK?
I have Feynman diagrams, but I cannot have the fermionic sign problem as we know it, because the particles are gone: there is an infinite number of particles from the very beginning. Hartree term — infinite number of particles; Fock term, next term — still an infinite number of particles. So this is already taken care of. Second, I will simply write down Feynman diagrams. If you look at any answer in terms of Feynman diagrams, I have to sum over diagram orders and diagram topologies, and I have to take a bunch of integrals. It looks the same as before: I have to sum, or integrate, certain weights. And this can be done by Monte Carlo in exactly the same fashion as I explained before. You formulate the summation of all possible Feynman diagrams as a multi-dimensional sum of increasing complexity, and that is something you can sample — just like before, Monte Carlo can sample any multi-dimensional sum. So that's the idea: you formulate the diagrammatic series and then you sample it by Monte Carlo methods. That's what I will explain better next lecture; that's when you will see diagrams. For now, here is an example of how this works. You write down some average, and it is expressed, for example, as a Taylor series expansion in the coupling constant — a particular example. I can reformulate it in terms of other coefficients if I decide to divide V by the convergence radius of my series: then it looks like a sum over j from zero to infinity of b_j g to the power j, and in terms of g the convergence radius is one. So that's how your answer looks in terms of Feynman diagrams. If you want slightly more of the content — why we need Monte Carlo — it's because of those coefficients b or a. If I go to sufficiently high j, there are, well, not exponentially many — there are factorially many topologies at a given diagram order. The number of diagrams is exploding: at order 10 you face about a billion diagrams. I cannot even enumerate them. So you have to sample. And for each diagram you still have to take a multi-dimensional integral — for example, over spacetime variables, if you write everything in spacetime. And there is a known expression which says exactly how much a given configuration contributes once I specify all the variables. So all of this has to be sampled: the order j, the topology, all the internal variables — everything is sampled by Monte Carlo, but the weights are known. They are not necessarily positive; they can be sign-alternating, or even complex in some systems. So I face what looks like exactly the same problem. But I already explained that the fermionic sign problem doesn't apply here naively, because there are no particles left. If I have a sign here, it's not the conventional sign problem — that's point one; I have already taken the thermodynamic limit. I will come back to this point maybe two more times, in waves. First: if the diagram weights are sign-positive, fine; if they are sign-alternating, there is nothing bad about it. Why? I will explain next.
You'll see that if I do this for a many-body interacting system of fermions in the thermodynamic limit, generically I need the sign for convergence. Because what happens for fermions, if my series has a finite convergence radius, is that the n! diagrams, integrated over all their variables, cancel each other to almost nothing. And they have to cancel to almost nothing — I'll show it later — because this cancellation is what gives you convergence of the series. This is not a problem; this is a sign blessing. If those diagrams did not cancel, their contribution to the series would be factorial, and I would have no convergence whatsoever. For convergence, the diagrams must cancel, because there is a factorial number of graphs: if none of the graphs is small, the factorial number gives a factorial contribution. It can be much smaller than factorial only because they cancel. That's sign blessing one, and it happens for fermions only — it doesn't happen for bosons. It is a blessing because I need a convergent series. And second, it turns out that, because of the sign specific to fermions, I can sum up the factorial number of topologies much, much faster than in n! time. This is sign blessing two, and I will explain it as well. So: the fermionic sign problem doesn't apply here, and everything related to the sign helps, helps, and helps. So much for the fermionic sign question. I will stop sampling topologies — that's what I will explain; this is sign blessing two — because I can sum them all up in exponential time. Despite the factorial number of graphs, I can sum them up together in exponential time, and the cancellation happens at machine accuracy, without any noise. If you want an image of what I mean: here is a typical diagram generated when we did frustrated magnetism on the triangular-lattice Heisenberg antiferromagnet. It's more or less generic — some diagram for the free energy. You have a certain number of vertices; the particular topology is specified by the pairwise connections between the points. The vertices can be spin-projection-conserving or spin-projection-flipping; the color changes if the projection flips from up to down. The point is, you sample the collection of those spacetime points — where you put them — and you can sample or sum over all possible connections between them, but I know exactly what each graph costs me. That's the beauty of the diagrammatic technique: I throw the graph at you, and you immediately return the math. This diagram contributes the product of every dot times every line. So the moment I give you the graph, every graphical element has a value — a number. The moment I say x_1, x_8, there is a certain number which stands for the propagator going from x_8 to x_1 in spacetime; the dot itself costs something. So you obtain the diagram weight as the product of all dots times the product of all lines. [Question from the audience.] If it's fully self-consistent, then I'm assuming that I take this function from a table, and I create this table after solving the Dyson equation. Yes — you can do it as a bare expansion, you can do it as a skeleton expansion. That's what I mean by Diagrammatic Monte Carlo: you sample all possible objects of this type. You sample the number of dots, you sample where you put those dots, you sample what the lines are, their types.
And then you can either sum or sample over the topologies, as you like. So that's what Monte Carlo is doing: it simply goes through all possible allowed Feynman diagrams, and they all have a weight. Then I repeat the same trick: the diagram has a weight, but it's not necessarily positive, so you introduce the modulus and the sign. If I generate diagrams according to the modulus of the weight, then all I'm measuring is the sign. But there is no statement here that the sign vanishes exponentially with the number of particles in the system, because there are no particles in the system. If I sum over all possible configurations of the lowest-order diagram, that's just the Hartree term; you go to second order, and so on. Nothing cancels to zero with exponential accuracy; there is no exponential cancellation for this sign. Of course, the average sign may go to zero with the diagram order. This is possible — but then I'm happy, and I will explain why. If you have a sign problem in the conventional sense — meaning that as you go to higher-order diagrams the average sign goes to zero — I say: wonderful. Because that sign is essentially the contribution of those orders to the answer; once it is smaller than the error bar, I don't even care what it is — I don't have to compute it. Now let me combine the two. I explained roughly the idea of Diagrammatic Monte Carlo: you obtain your answer from the series of Feynman diagrams. And now I go back to the question: for how long do I have to run to get my answer with the given accuracy? Everything is already in the thermodynamic limit, so I don't have to do any finite-size analysis. All I have to do is estimate up to what order I have to carry the summation to satisfy your error bar. So, repeating what I said before, let's consider how it works. I define an approximation: my approximation to the quantity of interest is the sum of Feynman diagrams up to order n — a truncated sum. Now, if my series converges — and that's the most important condition in the whole setup — then I know that the difference between this approximation and the exact answer (n or n minus 1, it doesn't matter) goes to zero exponentially fast, as g to the power n. I'm not mentioning power-law prefactors. For a convergent series, this is a solid statement: if I have a finite convergence radius, the error of a truncated Taylor series decays exponentially. So I know that as I go to higher and higher order, my answer gets exponentially more accurate. Now all you have to do is relate g to the power n to epsilon, and that determines the diagram order you need for an accurate approximation. And then the final statement, which I will explain in a moment: if the time required to compute all diagrams of order n is exponential in n — and this is the case — then I combine the two. The required time is exponential in n, but n itself is logarithmic in 1 over epsilon. So I see immediately that the computational complexity problem is solved.
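Putting the two exponentials together (a sketch; c is the constant in the cost per order, power-law factors dropped):

$$
|Q - Q_n| \sim g^{\,n} \le \epsilon
\;\Rightarrow\;
n(\epsilon) = \frac{\ln(1/\epsilon)}{\ln(1/g)},
\qquad
t\big(n(\epsilon)\big) \sim e^{\,c\,n(\epsilon)} = \Big(\frac{1}{\epsilon}\Big)^{c/\ln(1/g)},
$$

a polynomial in 1 over epsilon.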
So if you run the Diagrammatic Monte Carlo setup and the quantity of interest has a convergent Taylor series, the computational complexity problem is solved: I pay exponentially more for computing the next order, but it brings me an exponentially more accurate answer. Exponent versus exponent — that's a power law. (Yeah, we'll come to this; I'm just saying: for convergent series.) OK, it looks glorious: let's all go and compute Feynman diagrams and claim answers for whatever we like, with better and better accuracy. And yet we're not all doing it. For some cases we already did, but it's not universal. So here is the question, and it was asked more than 60 years ago. The beauty of the relevant paper is that it's only two pages and has only one formula — its content is now called Dyson's collapse argument. Dyson was literally saying the following. Consider some system of fermions in continuous space, interacting, say, by Coulomb forces — he was specific at this point. Suppose you want to compute your answer as a Taylor series in the coupling constant, just doing Feynman diagrams in powers of e squared. He immediately says: this can never be a Taylor series with a finite convergence radius. Why? Because if I look at my answer in the complex plane of the coupling — remember, if a Taylor series converges, it has a finite convergence radius, and the function has to be analytic at the origin; otherwise the Taylor series cannot converge — then I know it is not analytic at the origin, for a very simple reason. Flip the sign of e squared: since I am doing a Taylor series, I am allowed to look at f(e squared) as a function in the complex plane to understand its convergence properties. And you immediately realize that if the fermions were not repulsive but attractive in continuous space, they would collapse to infinite density. Pauli pressure cannot help you, because you gain density squared in potential energy and you only pay density to the power five-thirds in kinetic energy. So fermions and bosons alike, in continuous space, with on-average attractive interactions, collapse to infinite density. And who cares what your physical system is here: the problem comes from the other side of the axis, where the system is pathological, and it extends all the way to g equals 0 — which means your convergence radius is exactly zero. Why are you even doing a Taylor series? So this was, more or less, Dyson's argument. Maybe you can still pull something out if your coupling constant is very small, as in QED; but in condensed matter our coupling constant is not even small. If you have zero convergence radius and the coupling constant is not small, your series just oscillates and blows up. So it looks like a death sentence: everything I said before about Diagrammatic Monte Carlo is glorious, but it looks like it never applies. But if you think about lattice fermions, you immediately realize that everything said by Dyson does not apply to them. Take the Fermi-Hubbard model on a square lattice and ask the question at finite temperature: as the interaction goes to zero, are the properties of the Fermi-Hubbard model on a lattice analytic at U equals zero?
The answer is yes. Who cares whether U is positive or negative? It totally doesn't matter, because at finite temperature this is still a weakly interacting Fermi system regardless of the sign of the interaction. And it cannot collapse: on a lattice, two fermions per site and I'm done, I cannot create more. So nothing bad happens on a lattice; I have a finite convergence radius on a lattice. From this you immediately realize: Dyson's argument is cool, but it relies on the collapse to a point, to infinite density, and any ultraviolet cutoff prevents that. For example, you can always represent a continuous system with better and better accuracy as a lattice system, and the lattice system has a finite convergence radius. So maybe you have to reshuffle your calculation: do the lattice calculation, get your answer, and then extrapolate in the ultraviolet cutoff. You change the order of operations, and maybe this extrapolation will be easy. So first you realize that since on the lattice you have a finite convergence radius, you have a tool to address continuous systems through an ultraviolet cutoff. Everything is not that bad: even though the argument is correct, it doesn't mean you have to stop. The other point is that Feynman diagrams allow you to have self-consistent formulations; essentially no other expansion allows you to do that. Virial expansion, strong-coupling expansion, high-temperature expansion — they don't allow a self-consistent formulation, because for that you have to know the structure of your terms up to infinite order, and only Feynman diagrams know how to do it. [Question:] What about a lattice at small filling? — Yes, the continuum can be represented by a lattice at small filling, exactly. [Question:] What will happen to the radius of convergence? — The radius of convergence will change, of course. But at a given ultraviolet cutoff you can always get your answer if you work hard enough; you get the answer first and take the cutoff limit last. [Question:] Suppose we step away from half filling by an infinitely small epsilon; is the radius of convergence going to be finite? — It's already finite even at half filling, because I'm talking about finite temperature. Every word here is important: at U equals zero this is an analytic function at finite temperature; at zero temperature you have a phase transition at any U. You have to work at finite temperature, where the convergence radius is finite. I will keep building more and more tools: yes, there are cases where you say "it doesn't work here," and I will immediately explain how to get around them — that's why my personal perception is that you can always do it. So this was the most naive setup, and now I'll build up the complexity. First, you may have a finite convergence radius because you are on a lattice, or because you regularized your continuous system. Second, Dyson's argument is about a Taylor series in the bare coupling constant; but we already know that if we employ self-consistent formulations, I can get exponentials of minus one over the coupling constant from the lowest-order self-consistent graphs. So it can happen that you go around Dyson's collapse argument entirely by running a self-consistent formulation at the lowest order and doing a Taylor series for the rest. The moment you use a self-consistent formulation, you are not expanding in the coupling constant anymore.
And the famous example is, of course, weak-coupling BCS: you treat the lowest-order graphs self-consistently, and the results already contain exponentials of minus one over U. These have nothing to do with a Taylor series expansion in U, because it's a self-consistent formulation; but once I have this self-consistency, the rest can be done as a Taylor series. It has nothing to do with an expansion in the coupling constant U anymore, and I can have an essential singularity at the origin — no problem whatsoever; you keep calculating the rest, because the singularity is already taken care of analytically. We already know examples of how this works, and there is one problem from particle theory which was more or less done this way: full skeleton diagrammatics solved the convergence problem even though the partition function, or any other property, was a singular function of this type in the coupling constant. So this is possible; it totally circumvents Dyson's argument, and we already have an example of how it works. Finally — and this is what I will also try to explain at the very end — even if I have a Taylor series, computed naively without thinking, and I discover branch cuts or some pathological behavior coming all the way to the origin, meaning the convergence radius is zero — don't worry yet, because there are methods which allow you to extract the function from its Taylor series even at zero convergence radius. So keep computing your Taylor series. Whether you are inside the convergence radius, outside it, or the convergence radius is exactly zero, don't give up, because there are analytic tools which will extract the function anyway. Essentially, everything Dyson said is true, but you can always go around it, in a number of ways, including the most straightforward one: keep working with the expansion even though the convergence radius is zero — just give me enough terms, and I'll tell you the answer. Now let me go through these points slightly more precisely, with more details. I have more or less explained the global picture: you do Diagrammatic Monte Carlo; it doesn't have any fermionic sign problem as such, it only has sign blessings, because convergence and all the good properties come from the cancellation of terms. Convergence will be your biggest problem. So in this method it's no longer the sign problem; it's only convergence — whether your answer gets better and better as you compute more and more terms of the series. And there are multiple ways to achieve this, even outside the convergence radius, even at zero convergence radius. But this becomes adjusted to the analytics of the model. The Monte Carlo part is a black box: it will compute your Taylor series. But the way you analyze those Taylor series will be model-specific. OK, let me first explain why we have sign blessing one — why I need the sign for the entire setup to work at all. This is because the number of graphs is factorial. If the graphs did not cancel each other — each graph contributing something, with no cancellation — my coefficients in the Taylor series expansion would be proportional to n factorial. And if my coefficients diverge as n factorial, you can multiply them by any power of U: the convergence radius is always zero.
I cannot get more and more accurate by going to higher and higher order terms, because a_n explodes faster than u to the power n can decay. So for convergence I have to make sure that a_n is at most an exponential function of n. But an exponential is, of course, much smaller than the factorial — so the diagrams have to cancel. The moment you say this fermionic system ought to be analytic at the origin of the expansion, it necessarily means that the high-order graphs cancel. Could it happen within an order, or between the orders? Within one order I have n factorial terms; and by looking at the Taylor series, if I know it has a finite convergence radius, they have to cancel within the same order, as stated: a_n cannot diverge factorially, period. It is a Taylor series, which means the orders are controlled separately by powers of u, so the cancellation cannot happen between the orders; it has to happen within the same order, necessarily. It's a property of analytic functions. Sign blessing two comes kind of out of nowhere. It says that, despite having — for the Hubbard model, for example — n factorial squared topologically distinct diagrams, an enormous number, it turns out I can compute all the topologies together in exponential time. And somehow this was missed for several years. For several years people were doing a very stupid thing: they were sampling those topologies one by one, trying to beat the sign coming from the factorially many plus and minus contributions, and the scaling was very bad; we were more or less stuck at orders six or seven. But now we have learned that this was just stupidity and nothing else, because we didn't think analytically enough. It turns out that you can compute connected Feynman diagrams for fermions much, much faster, and the answer is as follows: inside the connected Feynman diagrams, pieces of determinants are still hiding. Here is how it works. Suppose I have to compute the connected diagrams of order n. I say: first consider all possible diagrams of order n — connected, disconnected, just pile up everything. Why am I doing this? Because all diagrams of order n, connected and disconnected together, always form a determinant. Suppose this is your interaction: fermion in, fermion out; fermion in, fermion out — say, this is spin up and this is spin down for the Hubbard model. How do I make all possible topologies? I take any outgoing line and attach it to any incoming one with the same spin. I have two choices, because it's a diagram of order two; if it were order three, I'd have three choices. Then take any of the remaining outgoing lines and attach it to a remaining incoming one, and so on until there is no choice left. Each time, you take any of the outgoing and attach it to any of the incoming: three choices, two choices, one. This is n factorial. The same for the lower row, another n factorial. Of course, some of the combinations create disconnected diagrams. But look at this: a particular topology is a product of Green's functions, G times G times G. And the moment I decide to swap the destinations of two lines — for example, do it like this, so that this one goes in the opposite direction, incoming and outgoing exchanged —
For fermions, the moment I swap the destinations of two fermion lines, I pay a sign: a minus. And the sum over all possible ways of connecting a given set of points, with a minus for every swap, is a determinant built on the Green's functions G(i,j), where i and j label the interaction vertices in space-time and the Green's functions are just the fermionic lines connecting them. One particular topology is one term in this determinant; the sum of all possible topologies, connected and disconnected, is the determinant itself. And a determinant, with its n factorial terms, can be computed in n^3 time. That's what we know from matrix manipulations: you bring the matrix to triangular form and read off the determinant, and the number of operations is n^3. In effect you organize the terms into nested brackets; constructing the brackets costs n^3, and only if you opened them would you recover the n factorial terms. So it's a clever scheme: computing all diagrams at once is only n^3, and you can imagine how much better n^3 is than n factorial.

But I need the connected graphs. So you compute all graphs, and then you subtract all the allowed decompositions of your graph into a connected piece C_m times everything else that is disconnected from C_m, and that remainder is again a determinant. Given that we are fighting exponential or factorial scaling, computing determinants costs you nothing; it's n^3, a polynomial. How many ways are there to split the vertices into a subset of m carrying the connected piece and the rest? It is a binomial coefficient, n! divided by m!(n-m)!, and summed over m this is only 2^n, up to power-law factors I won't even mention. So you only have about 2^n ways to decompose your set into the piece of size m and the rest. For the rest I don't care, it's a determinant; but the subset of m I have to select. So calculating the a's, the all-diagram sums, costs nothing, they are determinants; and I have a set of linear equations for the connected parts, but the number of equations is only 2^n, and you solve them in 3^n operations.

So that's the miracle: fermions still have something called a determinant hiding inside those graphs, and (n!)^2 topologies can be computed in 3^n operations. That's how diagrammatic Monte Carlo uses the fermionic sign to finish everything much faster. You cannot do this for bosons, because for bosons you don't have a sign: instead of a determinant you get what is called a permanent, and I don't remember the exact count, but computing a permanent costs you exponential time at best, so the scaling will be much, much worse. Plus the bosonic diagrams will not cancel, and the a_n's will definitely diverge factorially, so nothing will help you. So we have two pieces coming from the fermionic sign, and both are in our favor: diagrams cancel each other down to at most exponential growth of the coefficients, and connected diagrams can be computed much, much faster. In this sense fermions are far better than bosons if you do it diagrammatically.
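To make the subtraction scheme concrete, here is a minimal Python sketch of the idea (my own toy implementation, not the production connected-determinant algorithm; the function names and the fixed "root" vertex used to avoid double counting are my choices). It computes the sum of all pairings on a vertex set as a determinant and peels off disconnected pieces recursively:

```python
import numpy as np
from itertools import combinations

def all_diagrams(G, S):
    """Sum of ALL pairings (connected and disconnected) of the vertices
    in S: the determinant of the propagator matrix restricted to S."""
    idx = sorted(S)
    return np.linalg.det(G[np.ix_(idx, idx)])

def connected_diagrams(G):
    """Connected sum on all n vertices via the subtraction recursion
    c(S) = a(S) - sum_T c(T) * a(S \\ T), where T runs over proper
    subsets of S containing a fixed root vertex, so that every
    decomposition is counted exactly once. The total cost is ~3^n
    subset pairs, each requiring one n^3 determinant."""
    n = G.shape[0]
    cache = {}

    def c(S):
        if S in cache:
            return cache[S]
        root = min(S)
        total = all_diagrams(G, S)
        rest = sorted(S - {root})
        for k in range(len(rest)):            # |T| = k + 1 < |S|
            for combo in combinations(rest, k):
                T = frozenset((root,) + combo)
                total -= c(T) * all_diagrams(G, S - T)
        cache[S] = total
        return total

    return c(frozenset(range(n)))

# Toy usage: random "propagators" between 5 space-time points.
G = np.random.rand(5, 5)
print(connected_diagrams(G))
```

Note that individual topologies are never enumerated: they stay hidden inside the determinants, which is exactly why the factorial count never appears.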
Of course, everything I discussed, the fast calculation of graphs, the sign blessing, the cancellations, can be done for any type of diagrammatic formulation, including skeleton series and partial summations. It's just the standard diagrammatic technique, so we know how to do it. We can compute all topologies in exponential time, so we are left with an exponential computational cost for computing all diagrams up to a given order. If the series converge, you have solved the computational complexity problem.

I think I am very bad on time; I have maybe ten minutes. So let me briefly show you how things work. If my diagrammatic series converge, here is an example, a showcase. It's not the most interesting regime of the Fermi-Hubbard model: U over t is two, not too small; the temperature is the hopping divided by eight; the system is doped. Any other method will face a sign problem. Now you do it diagrammatically: you are inside the convergence radius for these parameters, and with the new ways of doing it you can realistically go very far; look at the vertical scale. Even for bosons, where you have no sign problem in path integrals, as in helium-4, good luck calculating any of these numbers with conventional methods to six-digit accuracy. Essentially, I don't know of realistic examples, even for bosons, where you can compute answers with the accuracy quoted here. So diagrammatically you can be in extremely good shape inside the convergence radius. And again, there is nothing special about the model: modify the parameters and you'll get the same picture.

Now let me briefly explain the tools, in cartoons; for all of them I first compute the Taylor series, and I'll tell you how to analyze it. Imagine you want your answer at a particular value of the coupling, but the convergence radius is truncated by, say, a single pole, and I have computed enough terms of the Taylor series. What do you do? This is called a conformal map. I simply introduce a new variable z = u/(u - u_c). If I know roughly where u_c is, then under this map the pole goes to infinity. You re-express your Taylor series in powers of z, because u can in turn be expanded in powers of z, and now you are sitting well inside the convergence radius. That's all you do: I compute the Taylor series in u, I am outside its convergence radius, but if I know what is hitting me, say a single pole, I move it to infinity by a conformal map, everything converges exponentially, and I am back to the previous statement: everything is perfect.

Of course, if it's not a single pole but several poles, maybe not of first order but higher, we already know what all of this is called: conformal maps based on ratios of polynomials, because a ratio of polynomials gives you a map that moves the required number of poles to infinity. This is extrapolation using Padé approximants: you build the Padé approximant, and the higher the order of your Taylor expansion, the higher the degrees of those polynomials, and you watch how the answer changes as you play with the degrees K and M. That's the generalization to any number of poles of any order; of course, the more poles and the higher their orders, the more terms of the Taylor series I need.
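Before generalizing further, here is a toy demonstration of the single-pole conformal map in Python (the function, the pole position, and all the numbers below are invented for illustration; only the mechanics of re-expanding the series in z = u/(u - u_c) mirrors the construction above):

```python
import numpy as np

# Toy function with a pole at u_c = 1 and a second singularity at u = -10:
#   f(u) = 1/(1 - u) + 1/(1 + u/10),  Taylor coefficients a_n = 1 + (-0.1)^n.
# The series in u has radius 1, so it diverges at the target u = -5,
# even though f itself is perfectly finite there.
N = 100
u_c, u_target = 1.0, -5.0
a = np.array([1.0 + (-0.1) ** n for n in range(N + 1)])

def compose(a, b, N):
    """Coefficients of a(b(z)) truncated at order N (Horner scheme);
    b must have zero constant term."""
    c = np.zeros(N + 1)
    c[0] = a[N]
    for k in range(N - 1, -1, -1):
        c = np.convolve(c, b)[: N + 1]   # multiply by b(z), truncate
        c[0] += a[k]
    return c

# Conformal map z = u/(u - u_c), i.e. u(z) = u_c*z/(z - 1) = -(z + z^2 + ...):
b = -np.ones(N + 1)
b[0] = 0.0
a_z = compose(a, b, N)                   # the same function, re-expanded in z

z_target = u_target / (u_target - u_c)   # = 5/6, inside the new radius
exact = 1.0 / (1.0 - u_target) + 1.0 / (1.0 + u_target / 10.0)
print(np.polyval(a_z[::-1], z_target), exact)   # both ~ 2.1667
```

The direct partial sums in u blow up like 5^n at the target point; after the map, the partial sums in z settle exponentially, with the rate now limited only by the image of the second, distant singularity.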
This can be generalized to branch cuts: whatever you learned here about poles immediately generalizes to any number of branch cuts, with any jump of the phase across the cut. This is done with ratios of hypergeometric functions, and some people have already enjoyed doing this. If a branch cut is limiting your convergence radius and you cannot reach your answer, you employ such a ratio; it is the equivalent of a conformal map, moving the branch cuts far away. You are back inside the convergence radius and you converge exponentially in z. And again, these manipulations are possible, and you can enjoy them, because you have a finite convergence radius: you see what is hurting you, and you move it away by a conformal map.

There is another tool, slightly different in spirit; it is not a conformal map. Imagine I want to know my answer here, but a pole is hurting me, a branch cut is hurting me: my convergence radius does not reach the point I need. What do I do? As a cartoon, imagine I am allowed to shift the expansion point. Of course, shifting the expansion point in u itself is not possible; that would amount to already knowing the exact answer at some other value of u. But in this spirit, imagine that I can shift the expansion point while at the same time changing the expansion parameter; I'll explain. If I shift the expansion point a little, the branch cut and the pole are further away than the distance to the point I need. A trivial picture, but it says the following: yes, a Taylor series in u may diverge, but you don't have to do the Taylor series in u.

So let me explain how this works. We call it the shifted-action tool, and it has infinitely many flavors, which means it is an infinitely flexible tool; how you use it, and for which model, depends on the model, and you have to investigate analytically first what works best. Take your action: the fermionic propagator of the non-interacting system plus the interaction term. Doing the expansion in S_int is essentially the same as inserting a counting parameter ξ in front of it, expanding in powers of ξ, and asking for the answer at ξ = 1. But I don't have to do it this way. I can identically write down another action, call it S_ξ, in which the Green's function I use for the fermions is anything I like; it is not the G_0 of the original model, I just put there whatever I want. Of course, I have to compensate for this with counterterms λ_1, λ_2, and so on, as many as you like, each multiplied by its own power of ξ, with ξ also multiplying the interaction. At ξ = 0 this S_ξ describes a very weird model in principle; it can be anything you like. Now you do the Taylor series in ξ, and there is only one requirement on all the arbitrary functions I introduced: at ξ = 1, when I take the inverse of my G tilde and sum up all the lambdas, I must recover the inverse of G_0. So at ξ = 1 the action S_ξ is exactly the same as the original one. If for this construction the convergence radius is bigger than one, it gives me the answer for the original model, but the Taylor expansion is performed around a different point, G tilde inverse.
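In symbols, the construction just described reads roughly as follows (a schematic transcription in my own notation; signs, integrals, and measure factors are suppressed):

```latex
S_\xi[\bar\psi,\psi]
  \;=\; \bar\psi \Big( \tilde G^{-1} + \textstyle\sum_{k\ge 1} \xi^{k}\lambda_k \Big) \psi
    \;+\; \xi\, S_{\mathrm{int}}[\bar\psi,\psi],
\qquad
\tilde G^{-1} \;+\; \sum_{k\ge 1} \lambda_k \;=\; G_0^{-1}.
```

Observables are expanded in powers of ξ and evaluated at ξ = 1, where S_ξ coincides with the original action; the freedom in choosing G tilde and the λ_k is what moves the effective expansion point.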
So this is the shifted-action tool, and you can do anything you like here. For example, λ_1, if it is the only term, can be based on the Hartree term, because you immediately see that these counterterms look like self-energy contributions at the level of the Green's function. You can take part of the Hartree term and keep it in G tilde, which means you will do the expansion at a different density, while the other part of the Hartree term remains part of your expansion in ξ. You are trying to reach the same point by starting the expansion from a different density. Of course, I can play the same game at the level of Hartree-Fock, so I can start the expansion from Hartree-Fock. And, jumping ahead: I can take the DMFT solution and do the same on top of it, correcting with the rest of the diagrams. You do DMFT and you think you're done? No, you're not done; you have merely initialized the diagrammatic calculation, and I can do the rest of the graphs. If the series converge in the formulation I chose, I get the exact answer; DMFT was just the starting point.

This was done here at the level of the self-energy, but it can be generalized, through a Hubbard-Stratonovich decoupling of the interaction, to any number of other channels. So I can do the same type of shift there: after Hubbard-Stratonovich I have bosonic fields, and with bosonic fields I can play the same game of shifts, which means I can screen the interactions. You can change your expansion point. In a Coulomb system, for instance, I cannot expand in the coupling constant, the second-order graph is already infinite; I have to screen.

Here is roughly how it looks, and this is what was done by Ferrero, Antoine Georges, and Evgeny Kozik. You compute the Taylor series; there is the point where you want your answer, and it is outside the convergence radius. But how do you know about the singularities? You have the Taylor series as a function of the expansion parameter: simply plot the phase of the function over the complex ξ plane, and you will see the phase winding around this point, and around that point, with more trouble coming further out; you get this map of singularities directly from your Taylor series. Then you do the shift. In this case the shift was trivial: just the chemical potential. You immediately discover that if you make a large enough shift, shifting the chemical potential by more than the Hartree term predicts, the point where you want your answer moves inside the convergence radius. That's how the self-energy contributions at different frequencies behaved as a function of diagram order: for some frequencies, without the shift, you do not converge, you increase the diagram order and nothing saturates; with the shift you are inside the convergence radius, all the answers saturate, and at order six you are essentially done; you can also do order seven for higher accuracy, and you're finished. I will slightly rush the results; who cares what the results are, they are the correct ones: the pseudogap physics of the Hubbard model for certain parameters. And you can generalize this, as I already said.
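Before moving on to the generalizations, a minimal sketch of the singularity-mapping step in Python (the toy coefficients are invented, and I use the poles of a Padé approximant as the diagnostic rather than the phase plot described above; both expose the same singularity positions):

```python
import numpy as np
from math import factorial
from scipy.interpolate import pade

# Toy series whose nearest singularities are a complex-conjugate pair of
# poles at xi = 0.5 +/- 0.5i, plus an entire (harmless) background:
#   f(xi) = 1/(1 - xi/x_c) + 1/(1 - xi/conj(x_c)) + exp(xi)
x_c = 0.5 + 0.5j
n = np.arange(16)
a = 2.0 * np.real((1.0 / x_c) ** n) + np.array([1.0 / factorial(k) for k in n])

# Poles of a Pade approximant to the series mark the true singularities:
# roots that stay put as the orders grow are real, the rest are spurious.
p, q = pade(a, 7)
print(q.roots)   # two of the roots cluster near 0.5 +/- 0.5j
```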
For example, you can include ladder summations. Mete mentioned the dilute case: in the dilute case, if I switch on strong interactions, why don't I go from the original potential to the scattering length? You simply solve the ladder, and then you formulate all diagrams in terms of the propagating ladders. This is doable. You have another formulation, but now your expansion is in the density, because U has already been taken care of exactly, to infinite order; in the dilute limit you expand your diagrams in the density. And you can screen; all of this has more or less the same setup. You formulate skeleton series, with or without partial summations, but the rest of the job is the same: you formulate the Taylor series for what remains. This can be done in a number of ways; I have only mentioned some of the techniques. If you find one of them important, enjoy it; nothing changes fundamentally, it's the same setup.

Probably the last example I would like to show is the resonant Fermi gas, the canonical model of cold atoms: fermions with spin up and spin down, coupled by a short-range potential, tuned to the edge of forming a bound state, so that the bound-state energy is zero. That is the case when the scattering length goes to infinity: you have a finite density of fermions, but the scattering length is infinite. Of course, if I make the potential more attractive, I form bosonic molecules, and at low temperature they condense; this is the molecular BEC. If I make the potential weaker, I have weakly bound Cooper pairs, and the ground state is a BCS superconductor. But when a_s is exactly infinite, I am somewhere in between: there is only one length scale, set by the Fermi momentum, and only one energy scale, the Fermi energy. This is considered to be a soup: there are no small parameters whatsoever, and you have to do all the Feynman diagrams up to high order.

That's what we tried to do, but we immediately discovered, and I'll explain by what tools, that if I sum up all diagrams of a given order, the coefficients do blow up, as n factorial to the power one-fifth. The diagrams do cancel, but they do not cancel the factorial completely; what is left behaves as (n!)^{1/5}, which literally means the convergence radius is zero. So we had to solve the problem knowing that the convergence radius is zero, more or less in the same territory where Dyson said: stop doing it.

But there was one hero on the market: Lipatov. Lipatov did this for high-energy physics, for whatever reasons, whatever his motivation was, and he noticed the following. Suppose you have some quantity that depends on the coupling g. If I want its Taylor coefficients, I use the Cauchy formula: the coefficient is a contour integral around the origin of the quantity divided by g to the power n plus one, and I can write the whole integrand as an exponential. Looks boring. But then, he said, suppose Z(g), the partition function or any other correlation function, is itself a functional integral. I substitute this functional integral into the Cauchy formula, so my Taylor coefficient becomes a contour integral over g combined with a functional integral over the fields, all under one exponential. And now I ask how the coefficient depends on n when n is very large.
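Written out schematically, in my own notation, the Lipatov trick reads:

```latex
a_n \;=\; \oint \frac{dg}{2\pi i}\, \frac{Z(g)}{g^{\,n+1}}
     \;=\; \oint \frac{dg}{2\pi i} \int \mathcal D\bar\psi\, \mathcal D\psi\;
           e^{\,-S_0[\bar\psi,\psi] \;-\; g\,S_{\mathrm{int}}[\bar\psi,\psi]\;-\;(n+1)\ln g}.
```

For large n the term (n+1) ln g makes the exponent steep, and once the fields are integrated out (as described next) the whole expression can be evaluated at a saddle point, which yields the large-n asymptotics of a_n.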
Now, if a series behaves badly, it behaves badly only because of the large-n asymptotics: the first few terms are a polynomial, innocent. Everything bad comes from the asymptotic behavior, and that is exactly what Lipatov tells us is computable. For very large n, this logarithmic term in the action is very large, but the rest is, so far, written in Grassmann variables, and Grassmann variables are neither large nor small; they are Grassmann variables. So next you do a Hubbard-Stratonovich transformation to decouple the interaction term: now you have bosonic fields, the action is bilinear in the fermions, you integrate the fermions out and get a determinant, and you can finish the job. The action is still large, but it is now written entirely in terms of complex-number fields, for which the notion of "large" applies. So you do the saddle-point solution of this action, and it gives you the behavior of the coefficients at very large n. If you implement this protocol, you discover how the coefficients behave at large n, and you can establish what is hurting you at the origin of the expansion. This is how we learned what gives us the (n!)^{1/5}: there are two branch cuts coming into the origin.

Now, finishing the story: the moment you know what is hurting you at the origin, what kind of branch cuts and what the phase jumps across them are, you can apply the appropriate resummation technique, in this case conformal-Borel, and reformulate your series so that it converges exponentially fast. So that's the bottom line: you have zero convergence radius, but if you do the proper analytic analysis of what happens in your model, you can finish the job.

And that's more or less my last plot: the unitary Fermi gas. That's the last slide I will show you. You see that you can go up to diagrams of order 48, and here is the current experimental record, a relatively good and reliable experimental answer. Before we did this analysis, this was our theoretical error bar: perfect consistency with the experiment, and still good, because the numbers barely change, but the error bar was what it was. Now that we converge exponentially, we can claim our answer much, much more precisely. So the challenge now lies rather with the experiment: whether the experiment can be done to the same level of accuracy. And we have no adjustable parameters whatsoever in computing the properties of this system; everything is fixed by the infinite scattering length. Yeah, I'll probably stop at this.
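As a postscript, to make the resummation step concrete: below is a minimal Borel-Padé sketch in Python. It is not the conformal-Borel construction used for the unitary gas (which requires the branch-cut positions extracted from the Lipatov analysis); it is the simplest member of the same family, shown on a toy zero-radius series whose Borel sum is known exactly.

```python
import numpy as np
from math import factorial
from scipy.interpolate import pade
from scipy.integrate import quad

# Toy zero-radius series: a_n = (-1)^n n!, the asymptotic expansion of
#   f(g) = Integral_0^inf e^{-t} / (1 + g t) dt.
N = 12
a = np.array([(-1) ** k * factorial(k) for k in range(N)], dtype=float)

# Borel transform: divide out the factorial growth of the coefficients.
b = a / np.array([factorial(k) for k in range(N)], dtype=float)

# Analytically continue the Borel transform with a Pade approximant;
# a denominator of degree 1 suffices here, since B(t) = 1/(1 + t) exactly.
p, q = pade(b, 1)

def borel_sum(g):
    """Laplace integral of the continued Borel transform resums the series."""
    return quad(lambda t: np.exp(-t) * p(g * t) / q(g * t), 0, np.inf)[0]

g = 0.5
exact = quad(lambda t: np.exp(-t) / (1 + g * t), 0, np.inf)[0]
print(borel_sum(g), exact)   # the two numbers agree to many digits
```

For comparison, the direct partial sums of the series at g = 0.5 start diverging after only a couple of terms; the Borel-resummed answer is stable and accurate.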