Okay, smaller numbers, stronger in spirit. We are starting our last day with the second lecture from Professor Prokofiev. Okay, I will continue kind of the same discussion we had yesterday, but yesterday I was trying to convince you that doing diagrammatic Monte Carlo is cool in the sense that if everything works, you will solve the problem no matter whether parameters are small or large. And today I will more or less explain the details of how it's done. So the general setup was yesterday, and today I will be much more precise in the details. For this I have to select a particular case and just go through all the steps of how you go from the idea of doing Feynman diagrams to the final result. Okay, one person I'm missing is Natan Andreev, because he was complaining that if someone shows him a Monte Carlo result, he has no idea how the result is obtained. So now you will know how this is done. Okay, so this is work which I mostly did many years ago, but some new incarnations are currently in progress. This was done in collaboration with Boris Svistunov and, on some of the ideas, Igor Tupitsyn. I will start very slowly, and my first part will have nothing to do with Feynman diagrams formally. It will be simply the explanation of how you can do unbiased Monte Carlo sampling of a configuration space which involves a number of integrals, and those integrals are of different multiplicity. So how do you run a Monte Carlo when you have a 30-dimensional integral, a 100-dimensional integral, and you never take those integrals; you just sample point after point after point, but those points are from configuration spaces with different differential measures. This was bothering people for a while. That's maybe why you hear about the Suzuki-Trotter decomposition, but this was a total blunder of the time. Especially if you do lattice models, forget about Suzuki-Trotter; it's just a nightmare. Somehow people didn't appreciate that everything can be done in continuous space without any systematic bias, so they introduced a finite discretization in imaginary time, which was totally unnecessary. You'll see in two slides that the problem solves itself immediately in the continuum, and you can immediately go to something which is diagrammatic Monte Carlo, because in the past, if you have a lot of continuous variables and in one step you want to change five of them, in the previous way of thinking this would mean a simply outrageously small acceptance ratio. Now you will see that all acceptance ratios are one and you can easily run Monte Carlo in this setup. And then, after I discuss how this is done in general, I'll immediately switch to a particular case where every single detail about what you do can be specified in terms of the Hamiltonian, matrix elements, dispersion relations. And this is the case of the Fröhlich polaron, which is the easiest one to explain and do in one hour. But exactly the same spirit can be implemented in the many-body setup: you do more or less exactly the same job, and then you can ask for properties of many-body systems. So that's the plan. So let me first explain what we face typically in classical Monte Carlo, and for a while you'll see I will not be writing a denominator, because at some point I will explain that if you do diagrammatic Monte Carlo you don't need denominators. You always know how to normalize the data.
I don't have to write down the ratio of two multi-dimensional integrals, so forget about the denominator for a while. If I have a classical Monte Carlo setup, I typically have to do a lot of sums and a lot of integrals. Sometimes it's only integrals: for example, for the classical Heisenberg model on a lattice you do integrals. For the Ising model on a lattice you do sums. Maybe you have to do sums and integrals at the same time. But the important part is that if you have a number of integrals, it's more or less the same differential measure which you write down here, because the number of degrees of freedom is fixed. It never changes. So if I'm talking about a lattice of M Heisenberg spins, unit vectors on lattice sites, I have to perform this one multi-dimensional integral. That's more or less the standard arrangement for classical Monte Carlo. And then you simply sample this function using whatever tools you have, which means you don't take this integral; you use some other integral which is very easy to take, and then you essentially look at the ratio between the two integrals, and that's how you compute this one. That's absolutely standard in classical. The only difference between classical and quantum is that on top of this sum, which I always have in classical Monte Carlo, I have to add yet another sum, which says that the number of those integrals, the number of integration variables, may be changing. So the difference between this one and this one I will call quantum. The other difference between classical and quantum is of course that D is not necessarily a positive number, but I already explained to you yesterday that that's not a big deal. You just separate D into modulus and sign; you can sample everything by the modulus and then measure the sign, and that's the only difference. So today I will essentially assume everywhere that D is positive, so don't worry about the sign; if you have it, we know exactly what to do. Okay, so assume D is positive. So the difference between the two is that the number of integrations changes from term to term. If I change n from zero to 10, I'll face integrals of higher and higher dimensionality. Okay, but that's the general setup. My question is: how do you compute A, which is represented by this expression? You know everything about the D's, but you have to take those integrals, you have to go through all the sums, and then you see the light at the end of the tunnel, you'll see your answer. So we have to perform this summation. And we want to perform this summation in an unbiased way, which means that if you run for a very, very long time you will converge to the right answer, and your error bars will shrink as one over the square root of the simulation time. Okay, so this term I will call the order; whether this is a diagram order or just a term order doesn't matter. So this will simply sample, or enumerate, different terms which have exactly the same order. You can call them topologies if we are talking about diagrams. And then I have a bunch of integration variables and I have to take integrals over them. This here is maybe some external parameter; it can be a continuous variable or a discrete variable, it doesn't matter.
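(What's on the slide is presumably, in my notation, an expansion of this form, with ξ labeling the terms of a given order n and y the external parameter:)

```latex
A(y) \;=\; \sum_{n=0}^{\infty} \sum_{\xi_n}
\int \! dx_1 \cdots \int \! dx_n \;
D_n\!\left(\xi_n;\, x_1,\dots,x_n;\, y\right)
```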
Okay, so now, the entire expression here I will call the configuration weight, because if I specify all the parameters, n, ξ, and all the variables x, then this point in the configuration space is fully specified; everything about those integrals and sums, all the parameters n, ξ and x's, I will call the configuration-space point, and the weight is how much it contributes to the answer, including all the differential measures. Okay, so that's what I will call the diagram weight. Okay, so diagrammatic Monte Carlo simply says: all of this can be written in the context of Feynman diagrams, but you can also have the same expression if you do path integrals or impurity solvers; you face more or less exactly the same setup. Okay, this is a particular example. What do I mean? This is, say, a diagram where you have one, two, three, four, five, six vertices: a diagram of order six. I have to specify where I put all the black points. How much will it cost me? It will cost me the product of all the lines. Let's assume that I connect the points pairwise by the dashed lines. Then the product of all the lines is exactly how much it contributes to the answer, times the differential measures for all the points. So Feynman diagrammatics is wonderful because converting graphs to math is immediate: you look at the graph and you immediately write down the math expression. What does it mean? Okay, well, this is a particular diagram for the self-energy, because that's where the electron comes in and that's where the fermion gets out; this particular case is from frustrated magnets. Now okay, let's go back. So this is what I need to do. Sometimes this is a diagrammatic expansion, as I said, but formally, just look at this: it's a set of multi-dimensional integrals and sums, and you have to do it with some known function, whether it's coming from Feynman diagrams or path integrals or any other problem in math. I don't care. So at this point there is no content in how I generated this expression. Suppose I have it and I simply have to evaluate it. So just be free of thinking about diagrams at this point, even though I will call them diagrams, because I know it's much easier for me to specialize to a particular kind of notation. Otherwise it's just a mathematical expression for what you have to do; where it's coming from we don't care, okay? So that's the setup. It can come from any other representation of the problem which you found. Now, just in the context of diagrammatic expansions, if it's not something more abstract, this type of expression has to be broken into classes of how people typically do them. Sometimes people do all possible graphs, if you think about Feynman diagrams. These are connected and disconnected graphs, and in this case, on the left-hand side, they are simply computing the partition function in some finite volume or finite cluster of sites. So in this case, whatever I compute here depends exponentially on the volume of the system, okay? That's the typical setup when you do impurity solvers or determinant Monte Carlo: you compute all possible graphs. Well, the other branch which you can take is when you only consider the connected graphs here, and then the entire setup can be formulated in the thermodynamic limit, when the answer is already for the free-energy density or for properties which are not scaling with the volume anymore. So these are already densities.
In this sense you are free from finite-size scaling; you immediately claim the answer in the thermodynamic limit. Well, we can go slightly further, because even if I do all possible graphs, this type of expression breaks into two pieces. Sometimes they are sign-positive, in which case, looking at this expression, I say: okay, I will probably sample everything. I will sample n, I will sample the integrals, and I will sample all the terms of the same order. That's what happens for sign-positive representations; for example, we do it for bosons, we do it for spins. Well, sometimes sign-positive can happen for fermions, but in this case very frequently people simply sum over all terms of the same order, because they can form a determinant. But determinants can be sign-full or sign-free. So maybe in some small subset of fermionic models we do have sign-free representations for fermions, when summing up all the topologies and forming a determinant eliminates the sign. So yes, we have those cases for specific models, especially when there is particle-hole symmetry and contact interactions. This happens very frequently, for a very simple reason: you simply have a determinant for spin up, you have a determinant for spin down, and they can be the same expression. You multiply the two, it's a positive number. When you have particle-hole symmetry, sometimes the same property emerges. But generically this never happens. So in the generic case, if you look at all Feynman diagrams, you still probably want to do the determinant when you sum up all the topologies, because the determinant is only polynomial effort. Summing up all the topologies of Feynman diagrams term by term is factorial, so with the determinant you immediately beat the factorial down to a power law. So you'd want to do the determinant. But if it's not sign-positive, the average sign of your determinant will scale to zero exponentially with volume, and you'll face a sign problem. That's the one I discussed before. Okay, here you also can take a branch. Sometimes your connected Feynman diagrams produce a diagrammatic set which is sign-positive, and this will be the case for polarons. In this case you can sample over the topologies, because in the absence of a sign there is no reason to enumerate all terms, which are factorially many, and just sum them together: without the sign there is no determinant trick anymore, so you'd have to go through all the topologies, but not all of them contribute equally. So if you sample, you are doing much better, because you'll be hunting for the terms which contribute most. In other cases, when connected Feynman diagrams are not sign-positive, I would say you'd better sum over all the topologies together. This can now be done very efficiently, not with factorial effort but with exponential effort, and there are some ideas that maybe one can beat even the exponential. So maybe the sum over all topologies of connected Feynman diagrams can be finished in polynomial time. People are trying to develop this type of scheme; it's not finished yet, so we don't even know whether the solution is there, but some ideas are on the market. So that would be for generic fermionic models. Today I will simply be doing the easiest one, this one: we do Feynman diagrams, we look at connected Feynman diagrams, they are sign-positive for polarons, and I will explain how all of this works. And I can be very precise in the details so you can follow how this happens. Okay, so we go back; that's the expression I have to do.
Forget about the sign, because for polarons everything will be sign-positive. And essentially, when you have to do this, you have to understand, you know, three pieces. One piece is: how do you sample this without any systematic bias? That's the first question; that's what I will explain. Then you have to develop some tools: what do you measure? Because, okay, Feynman diagrams for what? How do I extract relevant thermodynamic or physical information from my simulation? For example, if you simulate the Green's function, what is it good for? So you have to develop some tools for data processing: how you store the data and how you extract information. And finally, I am writing everything as if it's not normalized and there's no denominator, even though typically in Monte Carlo you look at a ratio, so I will explain to you how to do the normalization if you are in the diagrammatic space. That's relatively easy, and that's nice, because there is no potential zero in the denominator, so no small numbers. Usually the normalization is trivial and very easy. So that's the last part I will explain, okay? So, just to be sure: if you are familiar with how to do Monte Carlo for the Ising model, then this is not more difficult. It's more or less the same in spirit. So that's the typical Monte Carlo setup. You specify your configuration space. Your configuration space can be anything you like; it's a bunch of parameters. And for each point in the configuration space you have to assign some weight factor, which says how much this point in configuration space contributes to the answer. And finally, the quantity of interest: if I have the ratio, you write it down this way, and then, if I sample configurations according to the weight, I simply sum over all the generated configurations, compute my property, and normalize by the number of samples. That's the standard Monte Carlo setup. If it's the Ising model, your configuration space is: tell me whether the spin is up or down for every particular lattice point. Trivial. My configuration weight is just the Gibbs factor, and then you go and do it. Easy. If you do diagrams, strictly speaking, yes, any positive number can be written in this form if you like. Essentially, any positive number I can write as the exponential of the log of this number, and I can define, if you like, the configuration energy, but this will be just fake. So any positive number I can write down here; only the configuration space changes. Yeah, for example, this is the configuration space, but it's actually even more compact than for the Ising model. If you do the Ising model, people do, say, 500 cubed: I have to specify 125 million numbers, up or down. Boring. Here I have only 10 points and maybe, say, 10 momenta, so my configuration space is maybe 30 numbers. But if I give you all those numbers, you'll produce the graph; you'll immediately tell me: okay, do this, this and this. So it's a very compact configuration space. For each configuration I definitely have a weight, which is very easily and immediately provided by Feynman diagrammatics: the moment I specify the configuration point, you can immediately tell me how much it costs in the answer. And then I just do the same. Well, it can be, say, for polarons: I have this type of graph. If I have many-body physics and I'm looking at the free energy, then this will be my configuration space. It's the same deal: specify all the points, and I know exactly how much it will cost me.
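(In formulas, the standard setup just described is, in my notation:)

```latex
A \;=\; \frac{\sum_{\nu} A_{\nu}\, D_{\nu}}{\sum_{\nu} D_{\nu}}
\;\approx\; \frac{1}{M} \sum_{i=1}^{M} A_{\nu_i},
\qquad \nu_i \ \text{generated with probability} \ \propto D_{\nu_i}
```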
And then I just go and do the average exactly the same way as before. Yep. No, no, no, it's just fake. I'm just saying that, if you like, you can play this game. All I'm saying is that any positive number P I can represent as e to the minus E over T, with E equal to T log(1/P); I will call this E the configuration energy. Well, it's fake, so you can do whatever you like. I'm saying there's no difference in spirit from thinking about a Gibbs factor. Sometimes it's convenient, because the energies are additive, which means your weight is a product. But we have exactly the same setup in Feynman diagrams: your weight is a product of lines, which means that if I write down the energy, the energy will be additive, per line. So everything looks more or less the same way. And that's why I'm saying that in spirit, doing Feynman diagrams or doing the Ising model is exactly the same principle. Okay, now let me explain how the Monte Carlo typically runs. So if I have to do this, I will start simplifying notations, because why keep mentioning all of this all the time? So I have my point in configuration space, and the configuration space is defined by n, ξ, y and all the continuous variables. This is what I will call ν. That's a configuration point. And I contribute something from this point. So that's what I have to do. Now, how does the Markov-chain process work? I already have some configuration, which it was decided in the past contributes to the answer. The next step in the Monte Carlo will be to suggest a change and to go from configuration point ν to configuration point ν′. Okay, and you suggest this change using whatever rules. I have a lot of variables here, so maybe my rule for changing ν to ν′ is very simple: I will select at random one of the, say, continuous variables, and once I have decided that I'm talking about variable number 114, I'll suggest a different value for this continuous variable. It's as simple as that. So you can change the configuration point, and then I have to accept the change. And typically, accepting the change goes with the so-called acceptance ratio, which is based on the ratio of the weights. And the principle is very simple. If this point ν is already contributing to the sum, then this acceptance ratio simply guarantees that the next point contributes according to the ratio of the two weights. If it's the same weight, it will probably contribute the same. If it's more, it contributes more, it's more likely to be accepted, and then I am less likely to return back. So it means I will start contributing in proportion to D. I'm not saying that I will contribute exactly D; I'll contribute in proportion to D. So there will be a common factor in front of my answer, and that's why I need the normalization. But everything else will be done in a totally unbiased way. So that's the principle of Monte Carlo: you hunt for the contributions which are largest in modulus. If my D is positive, you are hunting for the most relevant contributions. So essentially, this Markov-chain process will drive you to the region with the largest possible D and the largest entropy of having those large D's. So it goes, if you want, in this fake sense, if this is my fake energy, to the region with the minimal free energy: the part of my configuration space which dominates the answer.
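(The "fake Gibbs factor" identity being used here is just:)

```latex
D_{\nu} \;=\; e^{-E_{\nu}/T}, \qquad E_{\nu} \equiv -\,T \ln D_{\nu};
\qquad
D_{\nu} = \prod_{\ell} d_{\ell} \;\Longrightarrow\; E_{\nu} = -\,T \sum_{\ell} \ln d_{\ell}
```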
Of course, if you run long, you have a small probability to diffuse away from the dominant spot and sample more and more points, maybe with smaller and smaller weights, but ultimately your answer will converge to the exact expression, okay? Well, you have to guarantee that the process is ergodic, which means the way you suggest changes allows you to go from one point ν, no matter what it is, to any other point ν′, again no matter what it is. Maybe the probability is very small, but it has to be finite, which means that in principle you will sample everything. If you give me infinite time, I'll cover the entire configuration space, okay? And then everything is decided for you: what is the dominant contribution? So that's the principle of Markov-chain Monte Carlo. But looking at our expression, we realize that I have multi-dimensional integrals, and essentially, when I suggest a change, there will be two types of updates. When I say update, it's suggesting a change; suggesting a change is called an update. So sometimes you have updates which I call of type A, when I'm staying within the same order of my term, which means the number of continuous variables stays exactly the same. In this case, you suggest a change in the integration variables. Your weight is proportional to all the differential measures, but I have the same number of them, because I'm not changing the number of integration variables. So all differential measures will cancel, and you have an acceptance ratio which is of order one. So accept it. Of course it will depend on the parameters, but there is nothing which is going to zero. Well, but sometimes you decide that you will update the configuration and change the value of n. If I change n, I have to remove or add integration variables. According to my definition, where the weight contains all the differential measures, it looks like, if I just follow this prescription and think naively, the ratio of the weights will contain differential measures. So if I, say, change m continuous variables, each of them will contribute such a measure, and maybe each is even a multi-dimensional variable, say five-dimensional; then apparently everything goes to zero if I want to do it directly in the continuum. And that's what people were thinking in the past, and that's why there was Suzuki-Trotter, or other schemes based on a finite Δτ. In two slides you'll see that all of this is fiction, because I cheated a little bit: when I was writing this, I didn't write everything. I said: definitely you have this. I'm not saying that this ratio will not show up in the acceptance ratio. Yes, the acceptance ratio is based on the ratio of those two weights, and it does contain differential measures. But there are other factors which I still have to mention, because, okay, I was making a proposal, and the acceptance ratio will also contain the probabilities with which you proposed the new variables. So let's be slightly more precise about why all of this naive thinking is totally wrong. So let's go to something which is the most important equation in Monte Carlo. It's called the balance equation. And the balance equation essentially says the following. Imagine that I have to do this sum and I want to be sure that configurations appear in my sum according to their weight, and the weight is given by D_ν. Okay? Just imagine that I have an infinite number of computers. Each computer is in a certain state: ν₁, ν₂, ..., you know, up to ν-infinity.
So essentially the number of computers is so large that they cover the entire configuration space. And I say: okay, I have achieved the golden state when every ν appears with probability proportional to D_ν, or P_ν, P can be arbitrary. Then you just group them: all the computers in state ν₁ together, and because I have so many computers, the size of this group will be proportional to P_{ν₁}; this group is P_{ν₁}, this one is P_{ν₂}, and I just keep going. Okay? So essentially you say: if I have an infinite number of computers, I don't have to do any updates, because I already have different computers representing the statistics of different configurations. Simply go through all the computers and sum up the result. So my A is a sum over computers, but each computer is already in a certain state, and this will be the same as summing over ν with weight D_ν, because if I sum over all the computers, the number of computers in state ν is proportional to D_ν automatically; I have my answer. Okay? So I don't have to do any calculation. But now imagine that you do updates, and each computer works in exactly the same way; I run exactly the same software on all computers. This computer will go to ν′, you know, this one will go to ν″ under the updates. This one goes somewhere, this one goes somewhere, okay? Because I suggest changes at random, for different configurations I suggest different changes. So all the computers will reshuffle my configuration space, but I want my software to be such that when I do this update and I perform the sum over computers again, I reproduce exactly the same answer. Okay? My updates have to be done in such a way that, as I run the simulation, this distribution is reproduced on the infinite set of computers, which means that no matter what I do, at each moment I can just sum over all the computers and I'll get my answer correctly. And now you understand: if this is true and my process is ergodic, then I can select computer number one, and this computer number one will just go through the whole configuration space, and if you run for infinite CPU time, this computer sooner or later will show up in all possible places. And because it shows up in all possible places, it will replace this sum. So essentially it's the same as when we say that ergodicity means that any system, if you watch it long enough, is representative of an ensemble with a particular distribution over ν. Okay, what do you learn from this? I look at this and I say: okay, just look, I have a distribution of computers, I do my update, or I change configurations, and I have to reproduce the same distribution. This is the requirement that under the proposed update my distribution has to be stationary. Literally saying: if I select the group of computers in state ν₁, I will look at the same group afterwards, because some of the computers will leave the group and some of the computers will enter the group. So this is still the group of computers with configuration ν₁, and its size still has to be P_{ν₁}. I cannot change the distribution by performing an update. And now you simply write down everything as a balance equation. You say: the number of computers in group ν is D_ν. You perform an update, and every computer from group ν will go somewhere; they leave the group. Well, this factor is the probability of suggesting an update, and this one is the probability of accepting it. So this will be the total flux out of ν.
You take any ν, you propose with some probability to leave, and then you accept. Okay, and this is the flux out. But then I say it has to be perfectly compensated by the flux in. I have to look at all other configurations which can enter the group. So: that's the number of computers in the other groups, that's the probability of suggesting an update which goes from another group to group ν, and that's the probability of actually accepting it. So this is the flux of computers into the group ν. And I have to be sure it's exactly the same. So that's what I said: the probability distribution of computers is exactly what I want, according to the weight, and under the updates which I perform this distribution has to be stationary. So that's the balance equation. Just one line. There is nothing else in Monte Carlo; you are more or less finished with formulating what Monte Carlo is about. So, this expression. Of course, typically solving this expression is not easy, because there is a sum. But because there is a sum, people say: okay, let's write down a far more restrictive condition, which is called detailed balance. Instead of equating the sum to this sum, they say: well, of course, if term by term you have equality, you satisfy the sum. So if I write D_ν Ω_{ν→ν′} P_acc(ν→ν′) = D_{ν′} Ω_{ν′→ν} P_acc(ν′→ν), and if I can satisfy this at the level of each term in the sum, then obviously, if I sum everything, the balance will be satisfied. So that's what's called detailed balance. Essentially, when you perform the update, from the ratio of how you accept going from ν to ν′ divided by how you accept going from ν′ to ν, you get the final answer. So that's the detailed balance equation. And if you satisfy this equation, you guarantee that you can start with any distribution, but ultimately you go to the stationary distribution where configurations appear according to their weight. So that's the bread and butter of quantum Monte Carlo. That's the only expression you have to learn for how any Monte Carlo is done. You simply try, using detailed balance, to satisfy this condition for the updates which you suggest, for how you shuffle the configuration space; if you satisfy it and you are ergodic, then your configurations will pop out with probabilities proportional to D_ν. That's it, okay? Questions, okay? Now we just make it slightly more practical. So essentially, what you want to know is the acceptance ratio. Yes, I have to satisfy this condition, but I have two numbers; everything else is protocol. So D_ν is given to me by the problem. Ω_{ν→ν′} I design; it's more or less arbitrary. I design how I want to suggest probabilities of changing ν to ν′. There are no rules: do whatever you like, as long as you are ergodic. And the only number which you have to compute is the acceptance ratio: the ratio between the acceptance probability for going one way and the acceptance probability for going the opposite way. So that's the only condition you have to satisfy. Okay, you can satisfy it in a number of ways. I have P_acc(ν→ν′) over P_acc(ν′→ν) equal to some number R; this number I call the acceptance ratio. Yes, it depends on the protocol, the model, and everything you want to do. You say: okay, what is a solution to this? The most typical solution people take: it's one if R is bigger than one, and it's R if R is less than one.
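(Written out in the notation above, the two equations and the standard Metropolis solution are:)

```latex
\text{balance:}\qquad
\sum_{\nu'} D_{\nu}\, \Omega_{\nu\to\nu'}\, P^{\rm acc}_{\nu\to\nu'}
\;=\; \sum_{\nu'} D_{\nu'}\, \Omega_{\nu'\to\nu}\, P^{\rm acc}_{\nu'\to\nu}

\text{detailed balance:}\qquad
D_{\nu}\, \Omega_{\nu\to\nu'}\, P^{\rm acc}_{\nu\to\nu'}
\;=\; D_{\nu'}\, \Omega_{\nu'\to\nu}\, P^{\rm acc}_{\nu'\to\nu}

\frac{P^{\rm acc}_{\nu\to\nu'}}{P^{\rm acc}_{\nu'\to\nu}} \;=\; R
\;\equiv\; \frac{D_{\nu'}\,\Omega_{\nu'\to\nu}}{D_{\nu}\,\Omega_{\nu\to\nu'}}
\;\;\Longrightarrow\;\;
P^{\rm acc}_{\nu\to\nu'} = \min(1,\,R)
```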
Well, and this is the complementary one, just because I have two numbers, and that's the solution. So each time, you see: if I divide this by this, it's the same condition. So either I'm using one, which means I don't even have to call it a probability, I will definitely do it; I only have to throw a random number if R is less than one; and going backwards it's the complementary pair. So that's one of the possible solutions. It's not the unique one. Of course, you can write down another one: for example, ν to ν′ can be R divided by one plus R, and the other one can be one divided by one plus R, if I'm not mistaken. Let's check: all I have to satisfy is that whatever I write down here is a probability, which means it has to be less than one; it looks like I'm okay. And if I divide one by the other, I have R, okay. I have two variables and only one condition; that's why the solution is not unique, but typically the first one is used. Then you are done. So that's the distribution I desire, those are my proposal probabilities, how I decide to change the configuration, and if I make this proposal, that's how you accept it. Okay. You compute this acceptance ratio and then you accept using this scheme or the other scheme, it's up to you.
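(A minimal sketch of the two acceptance rules just mentioned, in Python; here `R` stands for the acceptance ratio already computed from the weights and proposal probabilities:)

```python
import random

def accept_metropolis(R: float) -> bool:
    """Metropolis rule: accept with probability min(1, R).
    If R >= 1 no random number is even needed."""
    return R >= 1.0 or random.random() < R

def accept_heat_bath(R: float) -> bool:
    """The alternative (heat-bath) pair: R/(1+R) forward, 1/(1+R) backward.
    Dividing one by the other gives R again, so detailed balance holds."""
    return random.random() < R / (1.0 + R)
```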
Okay, now let me describe in slightly more detail how an update of type A will look, because an update of type A doesn't change the number of integration variables; it's the easiest one. Okay, I'll use the simplest possible scheme. I'll use some number, 0.1, 0.3, whatever: that's the probability to perform this particular update of type A. You call it with some probability: you throw a random number, and with this probability you decide that this is exactly what you're going to do. Okay, that's P_U. Then I say: select one of the variables. Maybe I'm talking about interacting fermions, so there is a position in space, a position in time, a spin, maybe some other orbital index, whatever. So I have a bunch of variables which I could change, and I select one of those variables at random; the probability will be one over the number of variables available for change. So I don't change n, but I can change any other variable I like. I just enumerate all the variables in question and say: I select any of them at random, I don't give preference to any of them. So the probability of doing exactly this thing will be one over the number of variables. Okay. Well, suppose you chose some variable z. Once you have variable z, I decide that I will propose a new value z′ using some probability density, or a probability if it's a discrete variable. We assume it's continuous space, for example; then this is a probability density. To make it a probability I should multiply by dz′ if I want, but the number of continuous variables doesn't change, so these differential measures will cancel anyway. So with this probability I select the new value. Okay, I'm done. And then I have to ask: can you tell me whether I have to accept this change or not? I selected one of the variables at random, proposed a new value z′; tell me whether I have to change the configuration. Yeah, you go to the detailed balance equation and enumerate everything you do. And I will do it at the level of the update which can take you back, because remember that I have to write down detailed balance, which says: if you change from ν to ν′, there is an update which can take you back from ν′ to ν. And I will use exactly the same scheme and the same update to go back. So: this is my original configuration, my variable is z, I propose to change something, I selected this particular variable. So that's the proposal: that's the probability of selecting one of the continuous variables, that's the probability that I will generate z′ out of, say, a continuous distribution, and then that's the probability that I'll have to accept the change. How do I go back? Well, I have my weight for the configuration where the variable is z′ in the final state. I have to call the same update, I have to select the same variable, I have to propose a change which takes me from z′ back to z, and I have to accept it. Okay, some of the numbers will cancel, but if you program something, I suggest that you write down everything along these lines: just enumerate every single step in your algorithm. It will simplify, of course, because many things are exactly the same. So you'll see that you have to accept the change from z to z′, and it will be based on the ratio of the weights. Obvious, because I want to go and hunt for the best possible answer. But it will also be multiplied by the ratio of the proposal probabilities. So that's the solution of the detailed balance equation. And the rule of thumb is: try to select those proposal functions as close as possible to the ratio of the D's. If you can do it perfectly, then your acceptance ratio will be one and no time is wasted. Okay, and sometimes, because the structure of the weights is a product, product, product, and you change only one variable, many things in the ratio of the D's will cancel. So maybe the ratio for z′ versus z is a simple analytic function, and maybe I can generate this distribution analytically, easily. Then I can easily achieve acceptance ratio one. So that's how you do updates of type A, on any variable. The solution is written; just go ahead. Okay, just don't forget that the proposal function is a probability distribution, which means it has to be normalized; it's always normalized to one, because I am talking about a probability.
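(A sketch of such a type-A update in Python; all the names here are mine, and `weight` stands for whatever D_ν the problem supplies:)

```python
import random

def update_type_A(config, weight, propose, pdf):
    """One type-A update: same diagram order, one variable changes.

    config  : list of continuous variables (a stand-in for the real diagram)
    weight  : function(config) -> D, the configuration weight
    propose : function(z) -> z', draws z' from some density W(z -> z')
    pdf     : function(z, z') -> value of the density W(z -> z')
    """
    i = random.randrange(len(config))            # select a variable: prob 1/N
    z = config[i]
    z_new = propose(z)                           # propose z' from W(z -> z')
    trial = config[:i] + [z_new] + config[i + 1:]
    # Same number of variables, so differential measures cancel and
    # R = [D(nu')/D(nu)] * [W(z' -> z) / W(z -> z')].
    R = (weight(trial) / weight(config)) * (pdf(z_new, z) / pdf(z, z_new))
    if R >= 1.0 or random.random() < R:          # Metropolis acceptance
        config[i] = z_new
    return config
```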
Now let's go to the last piece of quantum Monte Carlo, and then we're finished. If you know the first part, you know how to do any classical Monte Carlo; there is nothing else. Mathematically, in the general scheme, you are finished at this level: you can do any multidimensional integral using this scheme. Now let's go to updates of type B, which change the number of continuous variables. Well, in this case I always have two updates: if there is an update which increases the number of continuous variables, then there has to be an update which decreases the number of continuous variables, because I have to write down the detailed balance between the two. Okay, so they always form pairs: an update which adds variables, and an update which erases those variables from the configuration. Okay, so you can decide that you will change the number of variables from, say, n to n′, with n increased by m. And there is some probability for calling this particular update, because I have choices: I can call update A, I can call update B. So that's the probability of deciding that I will do this update. Okay, I've decided; I know what I want to do. Then I decide: well, I have to select the new variables from some distribution, because they're not present yet, so I have to generate them. And because I have to generate new continuous variables, I have to tell you how I do it. So I have to specify some generating probability density for producing those new variables. Okay, this can be done subject to what the current configuration is, there are no restrictions, but those are the new variables which are not present in ν yet. So that's the probability of generating this particular set of new variables. And remember that, if it's a probability density, the probability requires that I multiply it by all the differential measures. That's the probability that I will do exactly this. Mentioned; now I have to accept it. So you have to tell me whether I have to accept this proposal or reject it, and for that I have to write down the detailed balance, so I have to tell you what I do when I remove the variables. Okay: if I am sitting in a term which has order n plus m and I want to decrease it down to n, it means I have to call an update erasing those extra variables, with probability P_Ū; that's the update which erases variables. There is some number attached to this; again, it's absolutely up to you. You decide with what probability you would like to decrease the order. And then I have to accept it. I've mentioned everything, so it's time to write down the detailed balance equation. Let me mention all the steps; that's why I'm slow, but that's what you always have to do. That's the number of computers I have which are at this particular point of configuration space; this computer is sitting exactly here, and that's the number of them. That's the probability density that I am at this particular point of configuration space. The probability that I decided to go to higher order. The probability that I will generate the new variables to be exactly x_{n+1}, ..., x_{n+m}. And the probability that I will accept this change. So this is the flux out: I will leave point ν and go to ν′. What is the flux in? That's the number of computers, if you like, in this discussion, which are sitting at the configuration point of higher order; that's the probability that I will call the update which wants to erase those variables; and that's the probability that I will accept the move. Well, you already see that all differential measures cancel. I just naturally explained to you what I do, and I see that there are no differential measures left, because whenever you want to produce new variables, the probability of doing this contains those extra differential measures. They're all gone. So that's my acceptance ratio. It involves the ratio of the weights: this is the weight of the higher-order term, but with no differential measures left, because they all cancelled; it's n, m, n plus m. So that's the function itself without differential measures, and that's the function with the smaller number of variables, and it's divided by the generating function which produced those variables. That's the proposal: how, given the state with n variables, the new variables were generated. Well, of course, I also have the ratio of the probabilities with which I call the updates to increase or decrease n, okay? All differential measures are gone, and now it's your job to design a function Ω as close as possible to the typical ratio of those two.
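(In my notation, with p↑ and p↓ the probabilities of calling the raising and lowering updates, the detailed balance just described reads:)

```latex
D_n\, dx_1\!\cdots dx_n \;\, p_{\uparrow}\,
\Omega(x_{n+1},\dots,x_{n+m})\, dx_{n+1}\!\cdots dx_{n+m}\;
P^{\rm acc}_{n\to n+m}
\;=\;
D_{n+m}\, dx_1\!\cdots dx_{n+m}\;\, p_{\downarrow}\;
P^{\rm acc}_{n+m\to n}
```

so every dx appears on both sides and cancels, leaving

```latex
R \;=\; \frac{D_{n+m}}{D_n}\cdot
\frac{p_{\downarrow}}{p_{\uparrow}\;\Omega(x_{n+1},\dots,x_{n+m})}
```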
That's it. Of course, don't forget that Ω is normalized to unity, because it's a probability density, okay? So those are the efficiency rules: try to design Ω as close as possible to the ratio of the diagram of higher order to the one of lower order. Yes, it contains the new variables, but that's how you design your function, and you try to be as close as possible. Sometimes this can be an ugly expression and you don't know how to generate random variables according to this distribution. So you design some other function which is simple, analytic, but mimics all the limiting behavior and the typical behavior. Maybe your acceptance ratio will not be perfect, but it can be reasonable. So that's the rule. Yes, you have to use easy analytic functions, because you have to generate new variables fast; you don't want to waste a day generating a few variables. So that's what you have to do. And, well, we're done. Essentially, that's what quantum Monte Carlo is doing. It's only one equation, detailed balance, and I have explained everything to you. If you want to change one variable, it's very easy: design a function to change this variable and make it as close as possible to the true behavior of the model, or just do this. This will be exactly the next few slides for the polaron, because you ask: how hard is it? It depends on the model. You give me the model, I'll analyze the functions, what the Green's functions are, what the interactions are, and I'll tell you exactly how to do it, and the answer is easy. It's model-dependent, but it's easy. Of course, if you do Coulomb interactions, I'll do something else; if you give me another model, probably I'll do something else, just to be sure that my acceptance ratios are close to one. But typically this will not cost you much. Yeah. I would say they're more or less up to you. You have to measure, and those are really kind of dirty details, what's called the autocorrelation time. In diagrammatic Monte Carlo, usually the autocorrelation time is extremely short, because you go to higher-order graphs, but sometimes you return to the graph of the lowest order, and the graph of the lowest order has no idea about the other variables, in principle. Because, for example, if you do Ising spins, I have 500 million spins; that's my configuration, and maybe my update is flipping one spin, but the new configuration is correlated with the previous state. I can flip all 500 million spins, that's already 500 million operations, but because I am flipping on top of the previous state, I'm still not quite decorrelated from the previous state, even after one billion updates. And what people call the autocorrelation time is: how long do I have to run the code to obtain another configuration which looks statistically independent from the one I started with? And with Ising spins, if it's a very big configuration, I have to do a lot of updates: say 500 million spins will require maybe 10 billion updates, especially close to the critical point. Now you do diagrammatics. With diagrammatics, sometimes you are satisfied with diagrams of order one, two, three, four, five, 15, 20. What is 20? Computers do several million updates in a second, so what is 20? But remember that when I increase the order I produce new variables, and when I decrease the order I erase those variables. If you erase a variable, there is nothing left of what was there before. So: start with a diagram of order 10, go down to the diagram of order zero, let's call it zero; I have erased all the variables.
And when I grow the diagram again up to order 10, it's totally uncorrelated, because it has no memory whatsoever of what the original one was; I erased the variables completely. Okay? By going to the lowest-order graph, you erase all memory of what the configuration at high order was, and you generate it anew. So typically the autocorrelation time is not bothering us at all; it's extremely short. But formally you have to check it. Your scheme has to be reasonable: if the acceptance ratios are close to one in all updates, you're fine. If I do updates only of type A, sitting all the time in one and the same order, then the autocorrelation time of the order itself gets long. So probably make it 50-50 between the two types. That's roughly it. Yeah, that's exactly what I will be doing for the polaron, because it's a specific example: now I know the Hamiltonian, I know my propagators, I know how they depend on momentum and time, and that's what I will show you how to design. For any model you can do the same, of course assuming that your Hamiltonian is reasonable: a single-particle term, two-body interactions, three-body interactions, it's manageable. If you say it's a model where you formally have vertices with three fermions in, three fermions out, four fermions in, four fermions out, five fermions in, and all of them are present and none of them can be thrown away, then, well, good luck. But we don't have this type of problem in physics so far, at least not of broad interest. It's a single-particle term and two-body interactions; you're done with two functions. You can immediately design everything according to those two functions, because they're part of your Hamiltonian. In this case the parallelization is trivial: run exactly the same model with exactly the same setup on 100 computers, because ultimately, yes, I do have statistical error bars on my answer. Sum up all the computers, divide by the square root of N for the error bars, and you're done. No, because this sampling is trivially averaging, and the autocorrelation time is so short that after a very short time you have uncorrelated contribution after uncorrelated contribution. So it's just summing up uncorrelated contributions and reducing the error bars, whether you do it on one CPU or on 100 CPUs in parallel which are not even talking to each other, yes. Okay, yeah, I'm talking about connected Feynman graphs. With connected Feynman graphs you typically stay at orders which are not scaling with volume or anything; it's a finite set of five, 10, 20, 30. The autocorrelation time is microseconds. Okay, if you allow yourself to wait one millisecond, do it and you're fine. Yes, you can start measuring immediately; I agree, formally, yes, I am measuring immediately. There is no reason to have a thermalization time here, or allow yourself one millisecond. Yeah, that's different from something which scales with the volume. Okay, so we're done. Just one small note; probably I'm moving slowly, but okay, that's fine, I'll switch to the polaron now. Okay, so maybe you have heard about people in high-energy physics, where an entire team of maybe 20 or 50 people, for a lifetime, starting from their PhD until their pension, compute corrections to the electric dipole moment or the g-factor. They also do Feynman diagrams and go to high order. But what they do is: they have a book. The book has pages, on each page you have a diagram, and then the collaboration computes this page.
Once this page is computed down to a number, they turn the page, look at another graph, and compute that graph, and they sum up the numbers as they go through the pages. They finish, and then they recheck everything for consistency, and this takes, as I say, a lifetime effort for the entire collaboration to do, say, 1,000 graphs. And right now they're trying, with computer help, to do it for 10,000 graphs. Yeah, that's the famous project which people do for the anomalous g-factor using QED, and now they're trying to include QCD corrections. Now, what you do here in Monte Carlo: I'm not even looking at those graphs. I don't draw those graphs. I just know that I am doing an ergodic sampling and that I do it right, and I do Monte Carlo sampling of the entire configuration space at once. I change the diagram order, the topology, and the internal variables, and I go from point to point to point. I never draw diagrams one by one and I never finish them separately one by one, because you take one point from diagram number 41, one point from diagram number 10, one point from diagram number 11, you sum them up together, and I know that I am doing it the right way, and I just wait for the convergence. And in this way I can do billions of graphs alone, and this can be finished in a matter of days. In this sense I never enumerate graphs and compute them one by one; one billion graphs would be a whole library. Okay, now let me go to the practice. It depends on the problem: sometimes you have to limit the order and study how your answer converges with the diagram order. In this example, which I discuss here, polarons have convergent series. If your series converge all by themselves, you don't have to do anything: you allow diagrams of any order, but because the series converge, you automatically never go to very high-order graphs. You don't even have to worry, okay? So the polaron in this sense is somewhat easier, but it's easy to present. So the polaron problem is a typical one in physics. You have a particle. You specify some environment; I'm not going to solve for the environment, it's supposed to be given. And then I have some coupling between the particle and the environment, and this will modify the particle properties. You create a quasiparticle. You can ask how it propagates, what its effective energy is, its effective mass, conductivity, many other questions, okay? So that's the interest. You don't study the environment; you study the effects of the environment on the particle: how a quasiparticle is formed and what the properties of this quasiparticle are. Of course, one of the first and most famous examples of this is electrons in ionic crystals. You have an electron, with any dispersion relation you like. You have a bunch of phonons, again with any dispersion relation you like, any number of branches, everything is fine. And you have some electron-phonon coupling, which is formally written here, in a translation-invariant system, as the density times the displacement of the oscillator. So again, it's very generic for a typical electron-phonon coupling; you neglect couplings of higher order in the displacement. Okay, and that's more or less the picture: if I have an electron sitting in a crystal, it will push the atoms around, and if it has to move, I have to account for the lattice deforming as the electron moves. So what's moving is not the electron, but the electron plus the lattice deformation: it's a quasiparticle. Okay, that's standard, blah, blah.
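(The Hamiltonian on the slide is presumably of the standard translation-invariant electron-phonon form; this is a reconstruction, with the matrix elements left generic:)

```latex
H \;=\; \sum_{p} \varepsilon(p)\, a^{\dagger}_{p} a_{p}
\;+\; \sum_{q} \omega(q)\, b^{\dagger}_{q} b_{q}
\;+\; \sum_{p,q} V(q)\, a^{\dagger}_{p-q}\, a_{p}
\left( b^{\dagger}_{q} + b_{-q} \right)
```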
So now, this is our Hamiltonian, and I will just try to do some graphics for this Hamiltonian. If I'm talking about an electron, I'll draw a line with momentum p. If I'm talking about the phonon, I'll draw a line with momentum q, in a different color. If I'm talking about the electron-phonon interaction, I literally translate what's written here: an electron with momentum p enters and couples to the phonon; a phonon is emitted and the electron propagates with momentum p minus q. So you have graphics for every term in the Hamiltonian. And then the object of prime interest will be the Green's function of the polaron, so that's this average. It's the standard diagrammatic technique: you write it down formally like this, but then, okay, my H contains terms which are diagonal, but this coupling term is off-diagonal and I cannot solve it exactly. I can do perturbation theory, so you start doing the expansion in V. Well, the beauty of polarons is that this vertex is creating a phonon, but to close the trace, or to close the diagonal matrix element, I have to absorb this phonon; otherwise I cannot arrive at the same state at the end. Which means I have to use this vertex twice: I have to emit the phonon and I have to absorb the phonon. So you do the expansion of the exponential e^{-βH}: yes, the first-order term is minus V, but you have to do it again, and that's again minus V. So now it's V modulus squared, and everything is sign-positive. It's sign-positive just because I have to use this vertex twice. So you end up with a positive expansion in terms of Feynman diagrams. And formally, I can write down the diagrams for the polaron, for the exact Green's function: the electron propagates without ever emitting or absorbing a phonon; it can emit and absorb one; it can emit and absorb twice, in different ways; and then you go to higher and higher orders, okay? So if you don't care much, just sum up all of those graphs. Don't think about bold-line tricks or irreducible objects, because for sign-positive expansions everything will converge anyway, okay? So that's the easiest way to try diagrammatic Monte Carlo: don't use any specific tricks, just do this set of graphs. And formally, yes, I have to sum all of them to sufficiently high order, and if my coupling constant is large, I have to go to very high order to get the right answer, okay? So essentially, the Green's function is the sum of all possible graphs of this type, of this structure, with any number of blue lines, yeah? No, no, the integrals don't give you a sign. You have a sign-positive function; you integrate it over time, and it's still a sign-positive expression. Of course, you are saved here by the fact that there are no fermions anywhere: the particle propagates forward, it cannot turn around and come back, because it's just one particle. So the particle provides you the backbone; on this backbone you put red dots, which is where a phonon was emitted or absorbed, and then you connect them pairwise, and this will generate all possible graphs, okay? Yeah, the graph-to-math correspondence is easy: I give you this graph, you immediately tell me how much it costs, because for every line I have to substitute some expression, okay? So if you see a dot, it's V_{q_i}.
The other dot contributes V_{q_i} complex-conjugated, so whenever you have two dots connected by a blue line, this contributes |V_q| squared. If I have a black line, that's the electron propagator, the free-particle propagator; it's a simple exponential. Right now I'm talking about, say, zero temperature, so I'm looking at polarons at zero temperature. And if I have a blue line, that's the phonon propagator: you created a particle with energy ω_q and it's propagating. So yeah, you have a graph, everything is specified, and you know exactly how much it costs. Okay, you can write down more sophisticated graphs for other correlation functions. For example, if, in addition to a-dagger and a, you also have b-dagger and b operators. Don't ask me why I need that; it's a different question. Yeah, I also have graphs for it, and I can simulate this correlation function, if I want, from this diagrammatic set, okay? We can also write down diagrams, I probably will skip this part, for the optical conductivity, or show you a little bit later. So this is the space of Feynman diagrams in which I have to run the Monte Carlo, okay? Now we have to specialize a little bit further. So if we are looking at the Fröhlich polaron, a single electron in an ionic semiconductor, there are some simplifications. You assume that you have some band for the electron, and it will intersect the optical mode at very small momenta. When the momenta are that small, I can more or less, for most cases, not like in STO, forget about the dispersion of the phonon and say: okay, the relevant physics will happen around this momentum, and because this momentum is much smaller than the size of the Brillouin zone, I can assume that the phonon is dispersionless. So that's the famous Fröhlich model. And the coupling between the electron and the ions is of the Coulomb type: you simply say that the electric dipole moment generated by the optical mode couples to the density of the electron. So when everything is fixed, you specialize: the dispersion relation can be replaced with p²/2m. This model has no ultraviolet divergences, so all the integrals sit at small momenta anyway, and you can use p²/2m. The phonon dispersion can be taken as a constant, and the effective coupling constant, coming from the coupling to the polarization of the ionic displacement, is given in this form. So that's the canonical Fröhlich model. Now I know every single function: the electron propagator is an exponential, Gaussian in momentum, e^{-(p²/2m)τ}; I know exactly the propagator for the phonon; and I know how I couple, okay?
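(Concretely, up to constants I won't keep track of, the three ingredients of the Fröhlich model, and the two zero-temperature propagators built from them, are:)

```latex
\varepsilon(p) = \frac{p^2}{2m}, \qquad
\omega(q) = \omega_0, \qquad
|V(q)|^2 \;\propto\; \frac{\alpha}{q^2}

G^{(0)}(p,\tau) = e^{-\varepsilon(p)\,\tau}, \qquad
D^{(0)}(q,\tau) = e^{-\omega_0\,\tau} \qquad (\tau > 0)
```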
It's a simple exponential distribution. So if you give me a random number, that gives me τ' according to this distribution. It's very easy to generate, and I have to accept it, because I did an exact job: the probability density which I used to generate the new variable exactly compensates the ratio of the graph weights, because the rest of the graph is exactly the same. The weight for τ' contains the exponential of (τ' − τ_last), and the previous one the exponential of (τ − τ_last). But that's exactly the distribution function I'm using; the distribution function is exactly the same as the propagator. The acceptance ratio is one. So you say, okay, it's primitive: throw a random number, the random number immediately tells you what τ' is, accept it. That's your new configuration point. We're done.

And there is only one other update which is left. I have to change the diagram order, and this is how I will do it. I will have a pair of updates. If I have some graph — again, the number of red dots can be anything, this is just a piece of the graph — I have to go to something like this. So whatever was on this line is still here; I simply add another phonon line. So I have to describe how I add a new line, and I have to describe how I remove it if I go back.

Okay, let's do the description of the update. First, I need some probability to call the update which will increase the order; let it be P(n → n+1). And if I go backwards, I have a probability to call the update which goes from high order to low order. So those are the probabilities of calling those updates. Now, if I want to insert a new line, I have to decide where I place the first point. Place it anywhere at random between zero and τ — okay, it's up to you. Well, you can place it at random on any of the electronic intervals: I have interval one, interval two, interval three; select any of those three intervals at random and then put your first red point for the new propagator anywhere on this interval uniformly. So that's what I decided to do: I select one of the electronic intervals at random, and I select the first point, τ₂, uniformly on this interval. So I can write down the probability of making this selection — the probability that I will place τ₂ exactly here. Now I have to decide what q₂ is and what this τ₂' is. So I have to design the probability density which I will use to propose the new variables q₂ and τ₂'.

Okay, now how do I go back? To go back, I just select one of the phonon propagators and throw it away, that's it. That's as easy as it is. So the probability is one over n, where n is the diagram order. If I have n lines in the current state and n+1 lines in the final state, this will be P = 1/(n+1) — remember, my state here has n lines and the final state n+1, so this is expressed in terms of the final state. I'll just be more specific next.

So let's write down, according to the plan, the detailed balance: the diagram weight, times the probability of calling this update, times the probability of selecting the first point, times the seeding (proposal) density for the new variables, times the acceptance ratio — equals, going back, the new diagram weight D_{ν'}, times the probability of calling the reverse update, times the probability of selecting that particular phonon line, times its acceptance ratio. Okay, we found the ratio. So this part I already explained; it will just happen naturally.
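As a hedged illustration of update number one — and of why its acceptance ratio is one — here is a minimal sketch, reusing the `Diagram` class from the previous snippet. Drawing τ' from the same exponential density that the last electron propagator contributes to the weight makes the proposal and the weight ratio cancel exactly. The shift μ (kept below the ground-state energy so the exponent stays positive even at p = 0) is a standard trick, but its use here is my assumption.

```python
import numpy as np

rng = np.random.default_rng()

def update_length(diag, m=1.0, mu=-1.0):
    """Update 1: redraw the diagram's end point tau.
    tau' is sampled from ~ exp(-(eps_p - mu) * (tau' - tau_last)),
    exactly the weight of the last electron propagator, so the
    move is accepted unconditionally (acceptance ratio one)."""
    tau_last = max((l.t_absorb for l in diag.lines), default=0.0)
    rate = (diag.p @ diag.p) / (2 * m) - mu
    # inverse-CDF sampling: one random number gives tau' directly
    diag.tau = tau_last - np.log(1.0 - rng.random()) / rate
```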
Those are just numbers which are fixed for your simulation; those are configuration-dependent: the number of electronic intervals, the number of phonon lines in the final state. And now I have to design this function — this was the question. How do I design this function, given the ratio of the diagram weights before and after?

Okay, we go to the design. I write down the ratio, given the model. I have |V_{q₂}|² modulus squared, I have the phonon propagator between the two points, and I also have some change in the electronic energies between the two points. This is more difficult to calculate, so let me keep it as a number, δE. But I know that if I suggest a very, very large momentum transfer, this ratio will become a Gaussian, because I will put all the electron intervals under the phonon propagator to very large momentum, and then it becomes a parabolic function of q₂ in the exponent. I don't care what the previous momenta were; if I add a new one which is much bigger than everything else, this becomes a Gaussian. So I know that this will become a Gaussian in q₂ if q₂ is very large. And don't forget that for the Fröhlich model |V_q|² ∝ 1/q², so this 1/q² and the q² from the momentum-space measure cancel and produce a number. So the entire function is proportional to a simple exponential for the phonon, a simple exponential for the electron, and somehow I have to suggest q₂ and the spherical angles φ and χ, okay? So that's the probability density; and δE I have written as a number, but I know that for large q it goes as q².

Now I design my function, which more or less will satisfy me. So I suggest the following. Throw the solid angle for q₂ at random, anywhere over 4π, that's easy to do. And then I suggest to throw q₂ from a power-law distribution. So it will sample the characteristic momentum scale, which is the intersection between the electronic dispersion and the phonon branch — that's the characteristic momentum. Well, this is just the normalization factor. But once I have thrown q₂ from this distribution, I have to throw τ₂' relative to τ₂ from this factor: it's roughly the phonon propagator, but with the exponent also multiplied by (q₂/q₀)², so it will mimic the Gaussian if I throw a very large momentum q₂. Now, okay, this is the design; you can use any other design. I'm simply trying to satisfy the limiting cases the best way I can. A simple exponential distribution is easy to play; a power-law distribution is easy to play; the solid angle, of course, is very easy. So I can use random numbers to immediately generate τ₂', q₂, φ, and χ from this distribution function, and that's what you plug into the acceptance ratio, okay? Oh, sorry — this will not give you an acceptance ratio of one, but if you measure what happens on average, the acceptance ratio will probably be around 0.5, okay?

Essentially, we're done. We are at the level where we have finished everything. Those are the only updates you have to perform. You have a graph; make it slightly longer or shorter — that's one update, very easy to perform. And the other one: I am ergodic, because I can insert a new propagator anywhere I want — I selected the first point more or less at random, and I can throw the second one reasonably close. And I go to a higher-order graph.
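One concrete (and entirely hypothetical) realization of the proposal density just described: the solid angle is thrown uniformly, |q₂| comes from a power-law-tailed distribution centered on the characteristic scale q₀ = √(2mω) where the electron dispersion crosses the phonon branch, and the absorption time comes from an exponential whose rate grows like (q₂/q₀)² at large momentum, mimicking the Gaussian. The half-Cauchy choice for |q₂| and the exact form of the rate are assumptions of this sketch; any density with the same limiting behavior works, only the acceptance ratio changes.

```python
import numpy as np

rng = np.random.default_rng()

def propose_phonon(tau2, m=1.0, omega=1.0):
    """Propose (q2, tau2_prime) for a new phonon line emitted at tau2."""
    q0 = np.sqrt(2 * m * omega)   # electron dispersion meets phonon branch
    # solid angle: uniform over 4*pi
    cos_chi = 1.0 - 2.0 * rng.random()
    phi = 2.0 * np.pi * rng.random()
    sin_chi = np.sqrt(1.0 - cos_chi ** 2)
    # magnitude: half-Cauchy density ~ 1 / (1 + (q/q0)^2), an easy-to-sample
    # power-law tail (this particular choice is an assumption of the sketch)
    q = q0 * np.tan(0.5 * np.pi * rng.random())
    q2 = q * np.array([sin_chi * np.cos(phi),
                       sin_chi * np.sin(phi),
                       cos_chi])
    # time: roughly the phonon propagator, with the rate boosted by
    # (q/q0)^2 so that large momenta decay like the electron's q^2/2m
    rate = omega * (1.0 + (q / q0) ** 2)
    tau2_prime = tau2 - np.log(1.0 - rng.random()) / rate
    return q2, tau2_prime
```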
If I do it again, I go to yet another graph; there are no restrictions on what kind of topologies you generate. If I want to remove a line, I just select any of them, and I'm done. You have those two updates, and your code is running.

Yeah, so let me start finishing. I have more slides which tell you that you can produce more updates. Some updates will simply stretch and squeeze your graph in time — that's another update which is extremely efficient, and I can do it for any graph, making a huge change in τ, okay? And I can do it with an acceptance ratio very close to one. This is just for being slightly more efficient and decorrelating high-order graphs faster. You can do this — trivial, it's a simple exponential, acceptance ratio one. You can change the momentum of a line with acceptance ratio one. You can swap where lines attach: you see what's happening — it looks like I don't change anything except the momentum transfer, because I swapped the places of the outgoing and incoming lines.

Now, normalization. I have to normalize my data. You run Monte Carlo, you measure something, your answer keeps growing, growing, growing. That's the normalization problem. Suppose you measure a histogram, and your histogram keeps increasing, increasing, increasing; you say, okay, what is my answer? Look, I know the shape already, but I don't know the absolute value. But now imagine that in your histogram you know exactly what the first bin is. Then you say, okay, I know how to normalize my data. If I measure the entire function, but the limiting case of my function — the first bin — is known analytically, I can immediately tell you how to normalize the data, because for this special bin I know exactly what I have to get: it's a special limiting case. You just divide the entire histogram by the value of the first bin and multiply by the analytically known value. Now you have the entire shape and the normalization correct.

Remember that I'm doing diagrams of all orders, and I collect the histogram for my function from all orders. So all I have to know is what my function is doing when the diagram is of the lowest order, and this is typically primitive to calculate. For example, I have a diagram where there are no phonons at all — it's a bare electron propagator. I can normalize everything to the properties of the bare electron propagator; I can take any integral of the exponential, I know exactly how to do it. That's more or less the spirit of diagrammatic Monte Carlo: don't worry about the denominator, you will always be able to normalize your data through the simplest possible term in your diagrammatic series. You always have something which you know analytically perfectly well, okay?

Finally, suppose you don't know anything analytically about your function — it's just totally impossible. Then you can add to your configuration space an artificial diagram; let's call it the diagram of order zero. The diagram of order zero contains no phonons and no electrons; it is the number one, okay? I can arrange a Monte Carlo process — remember, I can change the number of variables as I like — which goes from having an electron propagator from zero to τ with momentum p to the diagram which I call nothing, just the number one. And I have an update to do it. I know that one is one, which means you design, if you like, a bin for which you know exactly what it is, because it's yours by design. And you arrange a joint Monte Carlo process for everything. So now I have a known reference bin which I designed.
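A minimal sketch of the normalization trick, under the assumption that the simulation histograms G(p, τ) over all orders and separately counts its visits to the zero-phonon (bare propagator) sector, whose total weight is known in closed form. All names are illustrative.

```python
import numpy as np

def normalize_histogram(hist, n_zero_hits, eps_p, tau_max):
    """Absolute normalization of a raw Monte Carlo histogram of G(p, tau).
    The zero-phonon sector has the analytically known total weight
        integral_0^tau_max exp(-eps_p * tau) dtau,
    so dividing by how often that sector was visited fixes the scale."""
    dtau = tau_max / len(hist)
    known = (1.0 - np.exp(-eps_p * tau_max)) / eps_p  # bare-propagator integral
    weight_per_hit = known / n_zero_hits
    return hist * weight_per_hit / dtau               # G estimated on each bin
```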
The rest of the function — I have no clue what it is, but it's joint statistics, so everything is related to everything else exactly, right? And then I normalize everything by the property of this designed bin. So I have my normalization.

Yeah, I'll probably stop here. Yeah, I have answers, because from the simulation of the Green's function you can extract a lot of information. If you have the Green's function, I can measure the slope; it will give me the energy. The intercept of the straight line extrapolated from the slope with the vertical axis will give me the quasiparticle residue. I also have estimators for the effective mass. I know many other things, okay? I will skip those.

You can measure other properties. For example, this is the typical comparison with Feynman's variational principle for the energy. The Monte Carlo result is of course below, because it's exact in the long-simulation-time limit; Feynman's was variational, so our data are below it. You can compute the number of phonons in the polaronic cloud as a function of, say, the coupling constant. And you see that we can easily simulate, for a coupling constant of — I think it was 17, yeah, 17 — the probability to catch a given number of phonons in the polaron cloud. The electron is roughly accompanied, on average, by say 60 phonons. Well, you have to appreciate what that means. I have to go to graphs of order 100, probably, because this distribution already has a tail, and the tail is still measurable at 90 phonons. 90 phonons means the following: you take the graph for the Green's function, cut it at some point in time, and count how many phonon lines are sitting on top of the electron. It can go up to 90, typically 60. How many topologies do I have for the 60-phonon diagrams? Maybe 60 factorial. So that's what it buys you: an exact answer when anything analytical is totally impossible.

Yeah, I'll probably stop here. There's another project which we are running now, about violating the Gerlach-Löwen condition and what polarons are doing when the coupling constant is large. But I'll probably stop here, just to keep time.
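To illustrate the slope-and-intercept estimators mentioned above: at large τ the Green's function behaves as G(τ) → Z e^{-Eτ}, so a straight-line fit to ln G gives the polaron energy from the slope and the quasiparticle residue from the intercept. A minimal sketch, with hypothetical array names and a hand-picked fit window:

```python
import numpy as np

def energy_and_residue(tau, green, tau_min=10.0):
    """Extract the polaron energy E and residue Z from the asymptotics
    G(tau) -> Z * exp(-E * tau): fit ln G(tau) = ln Z - E * tau for
    tau > tau_min; the slope gives -E, the intercept gives ln Z."""
    mask = tau > tau_min
    slope, intercept = np.polyfit(tau[mask], np.log(green[mask]), 1)
    return -slope, np.exp(intercept)
```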