OK, so let me continue the course of yesterday. I would just like to add one thing to what I said yesterday before going on. Yesterday, I defined the edge-reinforced random walk (ERRW). Let me very briefly recall the definition: if you have a graph G with vertex set V and edge set E, and if you have some initial positive weights (a_e) on the edges, then the edge-reinforced random walk is the discrete-time process on the vertices that I defined yesterday. At each step, you jump through an adjacent edge with probability proportional to the weight you see, and each time you cross an edge, you increase its weight by 1, in both directions. So that is the process. OK, so what I said yesterday is that there is a phase transition. Let me recall the two statements I gave yesterday. Consider the ERRW on Z^d — for simplicity with constant initial weights a, though that is not necessary. Then in any dimension, for a small, the ERRW is positive recurrent, and in fact it is exponentially localized. And in dimension 3 and more there is a phase transition: when a is large, that is, when the reinforcement is weaker, the walk is transient, while small a means stronger reinforcement and localization. OK, so let me give the heuristics. I said that there are two proofs of localization; let me sketch the heuristics of the proof of Angel, Crawford and Kozma. Their proof starts as follows. I explained yesterday that the ERRW is a mixture of reversible Markov chains: there is a measure on the conductances, depending on the starting point and on the initial weights, such that you can represent the ERRW by first picking conductances at random according to this measure and then running the reversible Markov chain with these conductances.
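Since the lecture has no code, here is a minimal Python sketch of the ERRW as just defined; the cycle graph, the function name, and the parameter values are illustrative assumptions, not from the lecture.

```python
import random

def simulate_errw(neighbors, a, start, n_steps, seed=0):
    """Simulate an edge-reinforced random walk.

    neighbors: dict vertex -> list of adjacent vertices
    a: initial weight put on every edge
    Each crossing of an edge {i, j} adds 1 to its weight.
    """
    rng = random.Random(seed)
    weights = {}  # undirected edge stored as frozenset -> current weight
    pos = start
    path = [pos]
    for _ in range(n_steps):
        nbrs = neighbors[pos]
        w = [weights.get(frozenset((pos, j)), a) for j in nbrs]
        # jump with probability proportional to the current edge weights
        pos_next = rng.choices(nbrs, weights=w)[0]
        e = frozenset((pos, pos_next))
        weights[e] = weights.get(e, a) + 1
        pos = pos_next
        path.append(pos)
    return path, weights

# Example: ERRW on a cycle of length 5 with strong reinforcement (a small)
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
path, weights = simulate_errw(cycle, a=0.1, start=0, n_steps=1000)
```

With a this small, the walk typically ends up crossing a few heavily reinforced edges over and over, which is the localization phenomenon described above.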
Their proof goes as follows, as I explained yesterday. What is proved is that for a small, if you take an edge e, you have a bound on the moments of the conductances. You always need to renormalize, because the conductances are defined up to a constant; so you look at the ratio between the conductance at e and the conductance at an edge e_0 at the origin. Then for s smaller than 1/4, for a small, you have exponential decrease of this moment: there are constants such that the moment is bounded by an exponential in minus the distance from the edge e to 0. And this gives the localization. So let me give some heuristics for why this is true. In fact, it seems that the heuristic is due to Spencer, and Angel, Crawford and Kozma had a very nice proof that makes this heuristic rigorous, which is a non-trivial step, of course. In fact, it is in the exercises; it is Exercise 3. It is a difficult exercise — let's say it is a three-star problem, that is why it is the third one — but I gave indications, and those who are very strong can try to do it. OK, so let me explain the heuristic. You take an edge e, and suppose that a is very small. You look at the edge-reinforced random walk, and you look at the first time you cross this edge; imagine that it is in a given direction, so that the vertex from which you arrive for the first time on the edge e = e_0 is some vertex i_0. So that is the situation. Then you look at e_1, the directed edge by which you first reached this vertex i_0. Is that clear?
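Written out, the moment estimate just described reads as follows; the constants C and c are whatever the proof produces, and this is only a transcription of the statement on the board.

```latex
% Moment bound of Angel--Crawford--Kozma, as stated in the text:
% for 0 < s < 1/4 there exist constants C, c > 0 such that, for a small enough,
\mathbb{E}\!\left[\left(\frac{x_e}{x_{e_0}}\right)^{s}\right]
  \;\le\; C\, e^{-c\, d(e,0)},
% where x_e is the random conductance at the edge e, e_0 is a fixed edge at
% the origin, and d(e,0) is the graph distance from e to the origin.
```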
So look at the configuration of the edges e_1 and e_0 at the first time you reach the vertex i_0: the weight on e_1 is a + 1, and the weight on e_0 is a. Why? Because it is the first time you reach this vertex: you have crossed e_1 exactly once, since that is the edge by which you first arrived, and you have never crossed e_0. So that is the configuration when you first arrive at i_0. Now, if a is very small, the weight on e_1 is much bigger than the weight on e_0. So before first crossing e_0, you will typically go back and forth several times along e_1. But now remember the representation of the ERRW as a random walk in random environment: it is a mixture, so you have a conductance x_{e_0} for the edge e_0 and a conductance x_{e_1} for the edge e_1, and in the mixture you choose the edges according to these conductances. So what does that mean? It means — at least heuristically — that necessarily x_{e_1} must be much larger than x_{e_0}, because you come back several times to e_1 before crossing e_0. And then you continue: you look at e_2, the edge by which you first reached the starting vertex of e_1. By exactly the same argument, you have the same configuration at the first time you reach that vertex, and this gives that x_{e_1} is much smaller than x_{e_2}. Is that clear? And you continue like this, and you see that in this way you draw a self-avoiding path, which the paper of Angel, Crawford and Kozma calls the dominating path of the edge-reinforced random walk. So that is the heuristic given in Angel, Crawford and Kozma, and that is the heuristic that they make rigorous.
So it is clear what it means: along the dominating path, you have some decrease of the conductances, so the conductance at the edge you are interested in should be much smaller than the conductance at the edge at 0. Yes? Of course, this is not rigorous, because there is a dependency between the path and the weights, so it cannot be made rigorous directly. But what is proved rigorously is the following. You fix a self-avoiding path γ, and you look at D_γ, the event that the dominating path of the ERRW is γ; so you ask that the ERRW respects this path in the sense above. And what is proved rigorously — that is the result of Angel, Crawford and Kozma — is that on this event, the moment estimate I wrote on the board holds with an exponential decrease: the bound is a constant that depends on the degree of the graph (here, on the dimension), times the square root of a, raised to the power of the length of the path. So if a is small, you have exponential decrease along one fixed path. But this estimate holds on the event that you follow this particular path, so now you have to sum over all self-avoiding paths. And that is where you need the degree to be bounded: if k is the degree of the graph — here k = 2d — then you have at most k^n paths of length n.
So you see that when you sum, you get an extra factor of k to the length of the path. The sum is then bounded by a sum, over n at least the distance between e and 0, of k^n times this constant to the power n, and the question is whether k times this constant is smaller than 1 or not. So there is a competition between the number of paths and the exponential decrease that you get along one path. And if you remember the course of Simone Warzel last week, she gave a proof of localization of the random Schrödinger operator at large disorder, by the fractional moment method. If you remember, it is the same picture: you sum over all self-avoiding paths, along each self-avoiding path you have an exponential decrease, and the question is whether the number of paths wins or not against the exponential decrease. And I hope that tomorrow I will also be able to give a proof of localization that exactly follows Simone Warzel's proof of last week, by a fractional moment bound of random Schrödinger type. I don't know if it is clear, but OK. So there is an exercise to prove this; it is not easy, but OK. [Question: About what you said yesterday — to imply recurrence, you would like to have exponential decay of the resistances, or of the conductances, sorry?] Yes. [And that is always true, then?] No, no: if k times this constant is larger than 1, you get nothing. [But I mean that there is always a regime where you have exponential decay.] Yes, that's right. I said that the ERRW is positive recurrent; in fact, it is exponentially localized, on any graph: the result is that on any graph of bounded degree, if you take a small enough, then you have exponential localization of the conductances, and hence positive recurrence, but it is really an exponential decrease.
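As a toy illustration of this competition, one can simply compare the ratio k · C · √a to 1, since the geometric series over path lengths converges exactly when this ratio is below 1. The constant C = 2 below is invented for illustration and has nothing to do with the real constant of the proof.

```python
import math

def path_sum_converges(k, C, a):
    """Crude check of the competition between the k**n count of
    self-avoiding paths of length n and the (C*sqrt(a))**n decay along
    one path: the series sum_n k**n * (C*sqrt(a))**n converges iff the
    ratio k*C*sqrt(a) is < 1."""
    return k * C * math.sqrt(a) < 1

k = 4        # degree of Z^2
C = 2.0      # illustrative constant, not the one from the proof
print(path_sum_converges(k, C, 1e-4))  # strong reinforcement: decay wins
print(path_sum_converges(k, C, 1.0))   # weak reinforcement: path count wins
```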
[Question: And do the phase of positive recurrence and the phase of exponential decay of the conductances match?] Well, that we don't know. We don't even know whether there is only one phase transition. There is a phase transition in dimension 3 and more, but we don't know whether it is a sharp transition, because there is no monotonicity. In particular, in dimension 2 it is not clear whether you could have only polynomial decay. It is expected not, but the problem is that in dimension 2 it is conjectured to behave like a random Schrödinger operator with a very long localization length: the localization length is supposed to be exponentially large when the disorder is small, so it is extremely long when you have weak disorder, and you cannot see it on computers, for example. If you look at the ERRW in dimension 2 with weak reinforcement, it is very difficult to distinguish it from a simple random walk, but it is supposed to be exponentially localized. OK, so that is the heuristic of the proof, and the good thing with this proof is that it uses nothing else than the Diaconis–Freedman theorem: you don't need the explicit law of the mixing measure. Now I would like to start part two. I would like to explain the relation between the ERRW, a process that I am going to define, the vertex-reinforced jump process; a sigma model — the supersymmetric hyperbolic sigma model, which I will also define; and some random Schrödinger operators. In particular, in the first of these steps I will explain the relation between the ERRW and the works of Disertori, Spencer and Zirnbauer about this sigma model. So did I forget something? OK, so let me first explain the first step. I will define the VRJP: what is the VRJP? It is the vertex-reinforced jump process.
So that is another process which, as I will show, is closely related to the edge-reinforced random walk, and I will then explain the relation with the sigma model. Let me say that this part is the other approach to localization that I mentioned; it comes from a joint work with Pierre Tarrès in 2012. OK, so let me define the vertex-reinforced jump process, the VRJP. You again take a graph G, and you put some weights on the edges, but now you call them W — you will see later why. They play the same role as the a, but now we call them W and think of them as conductances: positive weights on the edges that we now call conductances. So what is the vertex-reinforced jump process? It is a continuous-time process, which I will denote by (Y_s). It lives on the vertices, and it is a jump process, since the graph is discrete. It is again a non-Markov process — a non-Markov jump process, so its behavior depends on its past — defined as follows: conditionally on the past of Y up to time s, if you are at position i, the process jumps from i to a neighbor j with rate W_ij L_j(s). So what is L_j(s)? Let me be specific: L_j(s) is 1 plus the local time of the process at j. Whenever I have a process, I will denote by ℓ_j(s) its local time at the vertex j at time s: the integral from 0 to s of the indicator that the process is at j, that is, the time you have spent at the vertex j up to time s. OK, so let me explain what this means, in case you don't know what it means to jump with a given rate.
It means that the probability that Y_{s+ds} = j, if you condition on being at position i at time s and on all the past of the process, is equal to W_ij L_j(s) ds. So in an infinitesimal time, you jump with probability ds times the rate. Is that clear? That is very important. So why is it a reinforced process? If you start at a vertex i_0, at first you don't see any reinforcement: you look at the neighbors you may jump to, and nothing has moved there, because their local times have not changed, so at first you jump with the rate given on each edge by the W. But if you first jump to some neighbor, then from that point, looking back at i_0, the local time at i_0 is not 0: it is the time you spent there before jumping. So from this neighbor you see a factor, 1 plus the local time you spent at i_0, which is larger than 1: you want to come back. OK, and let me also explain that W small means stronger reinforcement, exactly as for the ERRW, and W large means weaker reinforcement. Why? It is a bit less obvious; it is a two-step argument. If W is small, you wait a very long time before jumping, because the jump rate is small. So once you have jumped, the time you spent at i_0 is very big, since it took you a long time to jump, so the factor 1 plus the local time that you see there is much bigger than 1. Is that clear? So you have the same picture: small W corresponds to strong reinforcement. So you see that it is a type of reinforced process, even if it doesn't quite look like the other one. In fact, there is something I should have said: this process was first considered by Davis and Volkov in 2004, and in fact it was proposed by Werner.
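Here is a minimal Python sketch of the VRJP as just defined. It uses the fact, implicit in the definition, that the rates out of the current vertex stay constant until the next jump (only the local time at the current vertex grows, and that does not enter the outgoing rates), so the simulation is exact; the triangle graph and all names are illustrative assumptions.

```python
import random

def simulate_vrjp(neighbors, W, start, n_jumps, seed=0):
    """Exact simulation of the vertex-reinforced jump process.

    While the process sits at i, the rate of jumping to a neighbor j is
    W[i][j] * (1 + local_time[j]); these rates do not change until the
    next jump, so the holding time is exponential with the total rate
    and the target is chosen proportionally to the individual rates.
    """
    rng = random.Random(seed)
    local_time = {v: 0.0 for v in neighbors}
    pos, t = start, 0.0
    trajectory = [(t, pos)]
    for _ in range(n_jumps):
        nbrs = neighbors[pos]
        rates = [W[pos][j] * (1.0 + local_time[j]) for j in nbrs]
        wait = rng.expovariate(sum(rates))
        local_time[pos] += wait
        t += wait
        pos = rng.choices(nbrs, weights=rates)[0]
        trajectory.append((t, pos))
    return trajectory, local_time

# Example on a triangle with constant conductances W = 1
tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
W = {i: {j: 1.0 for j in tri[i]} for i in tri}
traj, lt = simulate_vrjp(tri, W, start=0, n_jumps=500)
```

As the local times grow, the total jump rate grows with them, so the successive holding times shrink: this is the acceleration of the process mentioned below.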
I think he asked the question at a conference, and the idea of Werner was that this process would be a continuous counterpart of the vertex-reinforced random walk. So what is the vertex-reinforced random walk? Something I should have said before is that there are several types of reinforcement. The vertex-reinforced random walk is the same as the edge-reinforced random walk, except that you put the weights on the vertices: you jump with probability proportional to 1 plus the number of times you have visited the vertex, instead of the edge. So you see that it looks like this jump process. But the vertex-reinforced random walk has a very different behavior from the edge-reinforced one: it localizes on a finite set, and in dimension one it localizes on five points — that is a result of Tarrès in 2004. So the VRJP was proposed as a continuous-time equivalent of this vertex-reinforced random walk, but in fact it behaves very differently, and it will turn out to be related to the edge-reinforced random walk. Maybe that was not clear; if it is not clear, forget it. OK, so what I would like to explain is that this process is very closely related to the ERRW. But before that, I need to explain a few things. [Question: The local time is just the time spent at one vertex?] Yes: the rate involves 1 plus the local time, and the local time is just the time you have spent at the vertex i. [Including the previous visits?] It includes all the past. [Shouldn't we normalize the probability?] No, no: these are jump rates, not probabilities, so there is nothing to normalize; this is not a discrete-time process. It means that as time runs, the process goes faster and faster, because the local times are increasing; and on a finite graph, in the end, it goes at infinite speed.
OK, and that is related to what I am going to explain now. One thing that leads to some confusion about this process is that, since it is a continuous-time process, there is not only one natural way to look at it: you can make time changes. I will not be very rigorous about this; it is something simple but general, which often leads to unclear explanations. What you want to do is to change the time spent at each vertex. How can you do that? Take a function f which is smooth, strictly increasing, and bijective from [0, ∞) to [1, ∞). What you would like is to change the time of the process so that the new, time-changed process, call it Ỹ, has the following property: if you change time from s to t — so t is a function of s — then 1 plus the local time of the initial process at a vertex at time s is equal to f of the local time of the new process at the same vertex at time t. So you just change the time at each vertex, and you change it in the same way at every vertex. If you do that, you get a simple transformation of the jump rates. Indeed, if you look at how time runs for the new process, you get ds = f′(ℓ_{Ỹ(t)}(t)) dt: the way you change time depends on the position where you are. So if you substitute formally in the jump-rate equation, you see that the new process has jump rates given in terms of the W_ij as follows.
You get W_ij times f of the local time of the new process at the destination, times f′ of the local time of the new process at the origin. Why? Because the first factor corresponds to the term L_j(s), and the second to the change of time dt. So now I will use two time changes; you will see why. One is the exponential scale, which corresponds to f(t) = e^t. In this case, since f′ = f, it is easy to see that the new jump rates are W_ij exp(ℓ_i(t) + ℓ_j(t)); this process, in the exponential scale, I call X. And there is the natural time scale — you will see why it is called natural — where you take f(t) = √(1+t). In this case, the rate is (1/2) W_ij times the square root of 1 plus the local time at the destination, divided by the square root of 1 plus the local time at the origin; this process I call Z. If you don't follow this argument, all I want you to remember is that there are several time scales, and the two we are going to use are these: a process X that jumps with a rate given by the exponential of the sum of the local times at the two vertices, and a process Z whose rate is the ratio of the square roots of 1 plus the local times. The exponential one will be used to make the relation with the ERRW, and the other one to get the limit theorem. The point is that, at the end, this VRJP will be a mixture of Markov jump processes. But you see that in the original time scale the process goes faster and faster, so it cannot be a mixture of Markov jump processes: in the end, it runs at infinite speed. So you need to find the right time scale in which this process is a mixture of Markov jump processes.
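In formulas, the two time changes just described are the following, with ℓ denoting local times of the time-changed process.

```latex
% Vertex-wise time change:  1 + \ell^{Y}_i(s) = f(\ell^{\tilde Y}_i(t)),
% so while the process sits at i one has ds = f'(\ell^{\tilde Y}_i(t))\,dt,
% and the time-changed process jumps from i to j with rate
%   W_{ij}\, f(\ell_j(t))\, f'(\ell_i(t)).
%
% Exponential scale (process X):  f(t) = e^{t}, so f' = f:
\text{rate}_{i\to j} \;=\; W_{ij}\, e^{\ell_i(t) + \ell_j(t)} .
% Natural scale (process Z):  f(t) = \sqrt{1+t},\ f'(t) = \tfrac{1}{2\sqrt{1+t}}:
\text{rate}_{i\to j} \;=\; \frac{W_{ij}}{2}\,
    \frac{\sqrt{1+\ell_j(t)}}{\sqrt{1+\ell_i(t)}} .
```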
And you see that the natural scale is a better candidate, because the rate is a ratio of local times: if the local times behave in a good way asymptotically, it could stabilize. Is that clear? I know that it is very simple, but it is a confusing point. [Question: When you say the time scale, you mean after this kind of time change?] What I mean is that you can make a time change according to the relation I wrote there, such that the new process, after the time change, has these jump rates: in time dt, the probability to jump from i to j is W_ij times the exponential of the sum of the local times. [So is one much faster than the other?] X is the faster one, I think, and Z is the slower. OK, so now I want to explain the relation with the ERRW. That is, again, a question of mixtures; it is in the same work. The result is the following. Let Ỹ_n be the discrete-time process associated with the VRJP: you have a jump process, so you can look at its discrete trajectory — you forget the continuous time and just record the successive positions where it jumps. That is what I call the discrete VRJP. Then the result is that the ERRW (X_n) with initial weights (a_e) has the same law as the discrete VRJP (Ỹ_n) where you take the W_e to be independent Gamma random variables with parameter a_e. Is it clear what that means? It means that the ERRW is the same as a VRJP with random conductances — but not just any independent random conductances: they are independent (not i.i.d. in general), with W_e Gamma-distributed with parameter a_e — and then you consider the discrete-time VRJP: you take the continuous-time process, forget the continuous time, and just look at the successive positions of the process.
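Here is a Python sketch of how one could sample the ERRW through this representation: draw the Gamma conductances first, then run a VRJP with them and keep only its discrete trajectory. The graph and the parameter values are illustrative assumptions.

```python
import random

def errw_via_gamma_vrjp(neighbors, a, start, n_steps, seed=0):
    """Sample an ERRW trajectory through its representation as a mixture
    of VRJPs: draw independent conductances W_e ~ Gamma(a, 1), then run
    the VRJP with these conductances and record its discrete trajectory.
    """
    rng = random.Random(seed)
    # Step 1: one Gamma(a) conductance per undirected edge.
    W = {}
    for i in neighbors:
        for j in neighbors[i]:
            e = frozenset((i, j))
            if e not in W:
                W[e] = rng.gammavariate(a, 1.0)
    # Step 2: run the VRJP (rate W_e * (1 + local time at the target))
    # and keep only the sequence of visited vertices.
    local_time = {v: 0.0 for v in neighbors}
    pos = start
    discrete_path = [pos]
    for _ in range(n_steps):
        nbrs = neighbors[pos]
        rates = [W[frozenset((pos, j))] * (1.0 + local_time[j]) for j in nbrs]
        local_time[pos] += rng.expovariate(sum(rates))
        pos = rng.choices(nbrs, weights=rates)[0]
        discrete_path.append(pos)
    return discrete_path

# ERRW on a cycle of length 4, constant initial weight a = 1
cyc = {i: [(i - 1) % 4, (i + 1) % 4] for i in range(4)}
path = errw_via_gamma_vrjp(cyc, a=1.0, start=0, n_steps=200)
```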
OK, so I will prove that, or at least sketch the proof — I should give some proof, otherwise it is a bit empty. So let me sketch the proof, at least. The first step is what is called the Rubin construction. It is a way to define a continuous-time ERRW with the same discrete trajectories. The idea is simple but clever, and it is the following. For each edge e, you take a point process on the half-line: the first point V_0^e is 0, and then the next point V_1^e appears after an exponential time with parameter a_e; the distance between V_1^e and V_2^e is an exponential with parameter a_e + 1, and so on. So for each edge, you look at a point process whose successive gaps are independent exponentials with parameters a_e, a_e + 1, a_e + 2, and so on. You see that the gaps get smaller and smaller, but nevertheless, since the sum of the expected gaps 1/(a_e + k) is divergent, the points go to infinity: the point process does not accumulate. And you do that independently for each edge. So now, what is the continuous-time process X̃_t? How to define this process? The idea is that you put a clock on each edge. So each edge has a clock; I call it ℓ̃_e(t). So what is this clock? You have an edge, and you measure the time that the process has spent at either of its two endpoints: the clock of the edge e is the sum of the local times of the two extremities of the edge. OK? That is just a definition — a function that depends on the process. And now, each time the clock rings — so what does it mean that the clock rings?
You think of the point process as an alarm: time is running on the clock of each edge, and each time the clock value ℓ̃_e(X̃, t) hits one of the points V_k^e of the point process, the alarm rings, and then X̃ crosses the edge e. Is it clear what that means? The clock rings means ℓ̃_e(X̃, t) is equal to some V_k^e. Yes, the point processes are attached to edges, the clocks are attached to edges, and a clock measures the time spent at either of the two endpoints. Each time a clock rings, you cross the corresponding edge — and of course you can only cross an edge adjacent to where you are, since time is only running on the clocks of the edges neighboring your position. So now, the result is that this process has the same discrete path as the ERRW, and that is a very simple consequence of the lack of memory of the exponential. So why? How to see it? Suppose that at time t you are at some vertex. On each adjacent edge, look where you stand on its point process: you are somewhere between two points, at the value of that edge's clock at time t. Now look at which of these edges will ring first: by the lack of memory of the exponential, the remaining time you have to wait on an edge e is an exponential whose parameter is given by where you stand, namely a_e plus the number of times you have already crossed that edge up to time t. Is that clear? So the probability of choosing a given edge is proportional to its current weight, because you take the minimum of independent exponentials, and the minimum is achieved by each one with probability proportional to its parameter.
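Here is a Python sketch of the Rubin construction; it uses the lack-of-memory property just mentioned to resample the residual alarm of each neighboring edge afresh at every step, which is distributionally equivalent to keeping the running alarms. The function name and the example graph are illustrative assumptions.

```python
import random

def rubin_errw(neighbors, a, start, n_steps, seed=0):
    """Continuous-time ERRW via the Rubin construction.

    Each edge carries an alarm whose next ring, in the edge's own clock
    (which runs while the walker sits at either endpoint), is exponential
    with parameter a + (number of crossings of the edge so far).  By lack
    of memory of the exponential, the residual alarm can be resampled at
    every step; the edge whose alarm rings first is crossed, which happens
    with probability proportional to its current weight.
    """
    rng = random.Random(seed)
    crossings = {}            # undirected edge -> number of crossings
    pos, t = start, 0.0
    discrete_path = [pos]
    for _ in range(n_steps):
        waits = []
        for j in neighbors[pos]:
            e = frozenset((pos, j))
            waits.append((rng.expovariate(a + crossings.get(e, 0)), j))
        wait, nxt = min(waits)             # first alarm that rings
        t += wait
        e = frozenset((pos, nxt))
        crossings[e] = crossings.get(e, 0) + 1
        pos = nxt
        discrete_path.append(pos)
    return discrete_path, t

# Discrete path of the Rubin process on a path graph 0-1-2
line = {0: [1], 1: [0, 2], 2: [1]}
path, total_time = rubin_errw(line, a=2.0, start=1, n_steps=100)
```

The discrete path produced this way has the law of the ERRW: at each step, the chosen edge is the argmin of independent exponentials, hence is picked with probability proportional to a + (number of crossings), exactly the ERRW weight.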
So it has the same discrete path. This is something that has been used before: it was first used by Sellke, and it has been used several times in this context. Now, the second ingredient is a result of Kendall that, in fact, first appeared in branching processes; it was also used by Pierre Tarrès in another paper. So step two is Kendall's representation. It says the following. Take a point process of the type above: the gaps are independent exponentials with parameters a, a + 1, a + 2, et cetera. Let us call it the Kendall process — I am not sure that is the standard name, but OK. Then — again, it is a question of mixtures — the result of Kendall is that, in law, this process is a mixture of Poisson point processes. This process is not Poisson, but it is almost Poisson: it is a mixture of inhomogeneous Poisson point processes with intensity W e^t dt, where W is a Gamma(a) random variable. In general it is not stated like this, but it is a mixture of Poisson. Let me know if it is clear: this process is the same as first picking W at random according to a Gamma(a) distribution, and then running the Poisson point process with this intensity. In general, it is stated differently: W appears as something like the limit of the logarithm of the number of points before time t, divided by t, and conditionally on this asymptotic value, the process is an inhomogeneous Poisson process — in fact, a time change of a uniform Poisson process. But if you write it in this way, it is very simple. So now we are essentially done. Why? Because for each edge you have a time line, and each of these time lines has its own Gamma random variable W_e. And if you condition on these W_e, what do you see? You see that on each edge you have a Poisson point process with intensity W_e e^t dt.
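One can check Kendall's representation numerically on the first point of the process: the first point of the Kendall process appears after an Exp(a) time, so its survival function is exp(-a t), and the Gamma mixture of inhomogeneous Poisson processes must reproduce exactly that. This is only a sanity check of the identity, with made-up values of a and t.

```python
import math

def mixture_survival(a, t, n=20000, w_max=60.0):
    """P(no point before time t) on the mixture side: integrate
    exp(-w*(e^t - 1)), the Poisson void probability with intensity
    w*e^s ds on [0, t], against the Gamma(a, 1) density of w
    (simple rectangle rule; w_max truncates the integral)."""
    lam = math.exp(t) - 1.0
    h = w_max / n
    total = 0.0
    for k in range(1, n):  # integrand vanishes at w = 0 for a > 1
        w = k * h
        total += w ** (a - 1) * math.exp(-w) * math.exp(-w * lam)
    return h * total / math.gamma(a)

# The Kendall process (gaps Exp(a), Exp(a+1), ...) has its first point
# after an Exp(a) time, so the survival function is exp(-a*t).
a, t = 2.0, 1.5
print(mixture_survival(a, t), math.exp(-a * t))
```

Indeed, integrating the Gamma(a, 1) density against exp(-w(e^t - 1)) gives (e^t)^(-a) = exp(-a t), matching the Exp(a) survival function.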
But OK, let's say that the rest is an exercise: conditionally on the W, the process X̃_t jumps with rate W_ij times an exponential — an exponential of what? The t in the intensity corresponds to the clock of the edge, so the rate is W_ij times exp(ℓ_i(X̃, t) + ℓ_j(X̃, t)), the exponential of the sum of the local times at the two endpoints. You need to think a bit about it, but all the ingredients are there. Questions? OK, so that ends this proof. So now you know that the edge-reinforced random walk is a mixture of VRJPs. You also know that the edge-reinforced random walk is a mixture of Markov processes. So it is natural to ask whether the VRJP itself is also a mixture of Markov processes: if it were true, it would imply that the ERRW is a mixture of Markov processes, because a mixture of mixtures is a mixture. And it is true, and I am going to state the result; it will be the first big formula — sorry, there will be two big formulas. [Question: Can you also go the other way and represent the VRJP through the ERRW?] No, you cannot go in reverse: the ERRW is a mixture of VRJPs, but the VRJP is not a mixture of ERRWs. In fact, there are examples where it fails: there is a non-reversible version where this fails. OK, so now, the second point: what is the mixing measure of the VRJP? That is the continuation — let's say it is part (b). And that is where the second time scale is important: the first one we used to prove the relation with the ERRW, and this one will be the one in which the VRJP is a mixture of Markov jump processes. So consider the time-changed VRJP — the one called Z, with the jump rates written above. Now let us think a bit about it: imagine that somehow the process stabilizes. Say you are on a finite graph; then the local times all go to infinity.
But if the process stabilizes to something — if there is an asymptotic distribution — then it is natural that this ratio should converge asymptotically, and the limit should give the asymptotic behavior of the process. And the first statement will be precisely that it does converge, and that the law of the limit is explicit; then, conditionally on this limit, the process will be Markov. So that is the same general picture. OK, so for some reasons, I will not look at the asymptotics of the ratio itself but of its logarithm, because the formula is nicer with exponentials. So, first part: let V be finite — it is very important here that V is finite; in fact, it is not easy to pass to infinite graphs. Let Z_t start at i_0. Then the vector (U_j), defined as the limit as t goes to infinity of one half of the logarithm of (1 + ℓ_j(t)) divided by (1 + ℓ_{i_0}(t)) — the one half comes from the square roots, and dividing by the local time at the initial position is just a normalization; you need to normalize in some way — converges almost surely. So you see that the value of U at i_0 is 0: the field is rooted at i_0. And you have an explicit formula for the distribution of the limit. The formula is not that complicated; I will comment on it. There is a constant — you don't care about this constant; I said that you care about constants when they depend on the parameters, not when they are completely harmless. Then you have an interaction term, a sum over all edges of W_ij times the hyperbolic cosine of the gradient, cosh(U_j - U_i) minus 1, in the exponential — so it looks a bit like a Gaussian free field, but it is not. Then you have the square root of a determinant that I will define later. And then you have the product, over all vertices i not equal to i_0 — because at i_0 it is normalized — of exp(-U_i) dU_i.
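For reference, the statement just described can be written as follows; this is a transcription of the board formula, with C denoting the unspecified normalizing constant and D(W, u) the determinant defined next.

```latex
% Z_t is the VRJP in the natural time scale started at i_0, and
% \ell_j(t) its local time at j.  The limiting field:
U_j \;=\; \lim_{t\to\infty}\, \frac{1}{2}\,
      \log\frac{1+\ell_j(t)}{1+\ell_{i_0}(t)},
\qquad U_{i_0} = 0 .
% Its law on \{u \in \mathbb{R}^V : u_{i_0} = 0\}  (C a constant):
C \, \exp\Big(-\!\!\sum_{\{i,j\}\in E} W_{ij}\big(\cosh(u_j-u_i)-1\big)\Big)\,
  \sqrt{D(W,u)}\;
  \prod_{i\neq i_0} e^{-u_i}\, du_i .
```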
So I will comment on this formula later. This is the formula that appears in Disertori–Spencer–Zirnbauer, but I will come back to that. And so, what is this D(W, u)? I will give two forms for it. The first one is a sum over spanning trees. Does everyone know what a spanning tree is? Yes — it's a maximal tree inside the graph, connected and without loops, OK? And you take the product, over all edges {i, j} in the spanning tree, of W_ij exp(u_i + u_j), OK? So it's a sum over spanning trees, weighted not by the W but by W times exp(u_i + u_j), OK? And there is another formulation: it is a minor — in fact, any diagonal minor, maybe up to a sign — of the following matrix. In the off-diagonal entry (i, j) you put W_ij exp(u_i + u_j), and on the diagonal you take minus the sum of the row. It's clear what it means: it is the infinitesimal generator of the Markov jump process with conductances W_ij exp(u_i + u_j). It's clear? Yes. So the sum along each row is equal to 0, yes? And the determinant of that matrix is 0 in general, but you take any diagonal minor — meaning you remove one row and the corresponding column. In fact, all these minors are equal, because the sums along rows and along columns are 0; that's a simple exercise, yes? So this formula is called the matrix-tree theorem, for those who don't know it. I will not use this formula so much later. But this part is important, because it's the part that makes things difficult, OK? So is the formula clear? You have one term which is an interaction between neighbors, given by the hyperbolic cosine of the difference at neighboring vertices; and then you have this square root of a determinant, which correlates things over long distances. Yes? OK, so that's the first point.
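As a sanity check of the matrix-tree identity just stated, here is a small numerical sketch in Python (the graph, the weights W and the field u below are arbitrary test values, not anything from the lecture): D(W, u) is computed once as a sum over spanning trees and once as a diagonal minor of minus the generator matrix, and one can also check that the minor does not depend on which row and column are removed.

```python
import itertools
import math

import numpy as np

def tree_sum(n, edges, W, u):
    """D(W,u) as a sum over spanning trees T of prod_{{i,j} in T} W_ij e^{u_i+u_j}."""
    total = 0.0
    for cand in itertools.combinations(edges, n - 1):
        parent = list(range(n))          # union-find to test acyclicity
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        is_tree = True
        for i, j in cand:
            ri, rj = find(i), find(j)
            if ri == rj:                 # adding this edge would close a loop
                is_tree = False
                break
            parent[ri] = rj
        if is_tree:                      # n-1 acyclic edges = a spanning tree
            w = 1.0
            for i, j in cand:
                w *= W[i, j] * math.exp(u[i] + u[j])
            total += w
    return total

def minor_det(n, edges, W, u, root=0):
    """D(W,u) as a diagonal minor: build the generator A of the jump process with
    conductances W_ij e^{u_i+u_j} (positive off-diagonal entries, diagonal = minus
    the row sum), then take det of -A with row and column `root` removed."""
    A = np.zeros((n, n))
    for i, j in edges:
        c = W[i, j] * math.exp(u[i] + u[j])
        A[i, j] += c
        A[j, i] += c
    np.fill_diagonal(A, -A.sum(axis=1))
    M = np.delete(np.delete(-A, root, 0), root, 1)
    return float(np.linalg.det(M))

# arbitrary test graph: a 4-cycle with one chord
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
W = {e: w for e, w in zip(edges, [1.0, 2.0, 0.5, 1.5, 0.7])}
u = [0.0, 0.3, -0.2, 0.1]                # field rooted at vertex 0
```

Removing a different row and column (`root=2`) gives the same value, which illustrates why any diagonal minor works.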
Excuse me — here, the W are random, or are they fixed? OK, they are fixed. So, to be clear: the VRJP is interesting in itself. You can take W fixed, and then you have a process which is interesting in itself; or you can take W random, and in that case it gives the ERRW. But in this formula, W is fixed. So when you say that U_j exists, it exists as a random variable with this distribution? Yes, it converges almost surely to a random variable. It's like the Pólya urn theorem: if you look at the asymptotics of the Pólya urn, you have some parameter p, the asymptotic proportion of red balls, and it converges almost surely; but the limit has a distribution — it is random, and you know its distribution explicitly. So that's exactly the same picture, the general picture I described after the Pólya urn yesterday: you have an asymptotic statistic with an explicit law. And the second point is that, conditionally on this random variable, the time-changed VRJP is a mixture — sorry, conditionally it's not a mixture — conditionally on this, the VRJP is a Markov jump process, with rate what you can guess from the first formula: one half of W_ij exp(U_j − U_i). Why this formula? It corresponds to the first part, because what you get there is that the square root of the ratio l_j over l_i converges exactly to exp(U_j − U_i). So if it is a mixture of Markov jump processes, the rates cannot be anything other than that. OK, there is a bit of confusion because I take the log; but what is proved in the first part is that the ratio of the square roots converges to this value, so it cannot be anything else. If it is a mixture of Markov processes, it is necessarily of this form. Yes? So that's why this is called the natural time scale: it's the time scale in which the process is a mixture of Markov jump processes. So let me make some comments on that.
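The identification of the rates can be made plausible with a quick reversibility check (a sketch with arbitrary test values for W and u, not from the lecture): for jump rates r_ij = (1/2) W_ij exp(u_j − u_i), the measure pi_i = exp(2 u_i) satisfies detailed balance, so on a finite graph the occupation times satisfy l_j(t)/l_i(t) → pi_j/pi_i = exp(2(u_j − u_i)), and one half of the log of the ratio recovers u_j − u_i, as in the first part.

```python
import math

def rates(W, u):
    """Jump rates r_ij = 0.5 * W_ij * exp(u_j - u_i) of the conditioned process."""
    return {
        i: {j: 0.5 * w * math.exp(u[j] - u[i]) for j, w in nbrs.items()}
        for i, nbrs in W.items()
    }

# arbitrary test data: a weighted triangle and a field u rooted at vertex 0
W = {0: {1: 1.0, 2: 2.0}, 1: {0: 1.0, 2: 0.5}, 2: {0: 2.0, 1: 0.5}}
u = [0.0, 0.4, -0.3]
r = rates(W, u)
pi = [math.exp(2 * ui) for ui in u]   # candidate invariant measure exp(2 u_i)

for i in W:
    for j in W[i]:
        # detailed balance: pi_i r_ij = pi_j r_ji ...
        assert abs(pi[i] * r[i][j] - pi[j] * r[j][i]) < 1e-12
        # ... and the common flux is (1/2) W_ij e^{u_i+u_j}: proportional to
        # the symmetric conductance of the reversible chain
        assert abs(pi[i] * r[i][j] - 0.5 * W[i][j] * math.exp(u[i] + u[j])) < 1e-12
```

The flux value (1/2) W_ij exp(u_i + u_j) is, up to the constant 1/2, the symmetric conductance that reappears below in the representation of the ERRW.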
Excuse me — the plus 1 there: you had, inside the root, the plus 1? Yes, yes. But l tends to infinity, so you don't care about the plus 1. l tends to infinity — that's the local time. In the end, for t large. In the beginning, OK, but you don't care about the beginning. The thing is this: if it is a mixture of Markov jump processes, then asymptotically it has to look like this process. And the non-trivial fact is that it's not only asymptotic: conditionally, it is a Markov jump process of this form from the beginning. What I said is that if it is a mixture of Markov processes, it is necessarily of this form, because asymptotically it is given by the limit. It could a priori be anything which is only asymptotically of this form — I agree, and that's why it's not a trivial statement. I say: if it is true, it is necessarily of this form; but it is not a trivial fact that it is true. So I want to comment on that; let me make several remarks. First, this formula appeared before, in a very different context, in the same work I have mentioned several times: Disertori, Spencer and Zirnbauer, in 2006. And there it appears in a very different context — it was not related to any random process. In fact, it is one component of a supersymmetric hyperbolic sigma model. So there are a lot of terms there. If I have time tomorrow, I will explain why it is supersymmetric and why it is hyperbolic. "Sigma model" means an interaction model where the spins live on some manifold; "supersymmetric" means that it is a supersymmetric manifold; and "hyperbolic" means that it is a hyperbolic manifold. In fact, the spins live on a space called H^{2|2}, a supersymmetric generalization of the hyperbolic plane. If I have time, I will explain why this is true. And they were interested in this coming from random matrix theory — random matrices —
in fact, random band matrices, and more generally Anderson localization and delocalization. But there are absolutely no rigorous statements there, and that's why they proved localization and delocalization for this sigma model; those results are the origin of the recurrence and transience results for the processes. So that's the first comment. The second comment is that there is one non-trivial fact — in fact, the most non-trivial fact — which is that the measure mu is a probability measure. I didn't give any name to this measure; so this terrible measure there, I will call it mu_{i_0}^W(du) — I should have written it there. So the second remark is that it is a non-trivial fact, in fact the most non-trivial one, that mu is a probability measure. And there are several proofs of that. There is the original one, due to Disertori, Spencer and Zirnbauer, which is by supersymmetry. The idea is that behind this measure there is some geometry — it is invariant under some supergroup — and you can deform the measure while keeping the integral equal to 1. That is a highly non-trivial argument, related to Berezin integrals. Then there is a probabilistic proof — that's ours. In fact, we didn't know this measure before; we gave a probabilistic proof of the formula, proving directly that this measure is the limit law of some random variable, so it is a probability measure. And by now there is also a direct computational proof, related to a random Schrödinger operator that I will try to define tomorrow. And the last remark I would like to make — sorry, remark three — is the following. You see, it looks somehow like the GFF, the Gaussian free field. For those who don't remember: the Gaussian free field on the graph with conductances W, rooted at i_0, is the Gaussian vector with density proportional to exp of minus one half of the sum over edges of W_ij times the square of the difference u_j − u_i.
And then you have a normalization; but in this case the normalization does not depend on u — somehow, with this definition, it corresponds to D(W, 0). So that's a constant, really a constant normalization, compared to the formula above, where the square root of D depends on the field. So you see that it somehow looks like this measure, but it is not related to that measure directly — except when W is large. When W is large, if you look at the measure there, you see that u must be small. Why is u small? Because the field is rooted at i_0, so the value at i_0 is 0; and W is very large, so the hyperbolic cosine of the gradient minus 1 must be small, which means the gradient must be close to 0. So the fluctuations of u are very small. And if u is very small, then cosh(u_j − u_i) − 1 is more or less one half of (u_j − u_i) squared, so you recover the Gaussian interaction. And when u is small, the determinant is almost the determinant at 0. So when W is large, it looks like a Gaussian free field. But it is not directly related to the Gaussian free field — at least not that I know. So let me make a fourth remark, and then I will give a brief idea of a part of the proof — not the full proof. What is the relation with the ERRW? You see that if you have a Markov jump process with these jump rates, then the discrete-time process it induces is reversible. So the ERRW is a mixture of reversible Markov chains with conductances W_ij exp(U_i + U_j). Why do I put a plus now? Because the invariant measure is exp(2U_i): to have a symmetric conductance, you multiply the rate by exp(2U_i), and you get U_i plus U_j — where the W are independent Gamma variables, and conditionally on W, U follows this law. So it means that you have another representation of the ERRW:
first you pick W at random; then you pick U according to this measure, whose parameters depend on W; and then you take these conductances. OK, and if you do the computation, you find back the Diaconis–Coppersmith formula, after a change of variables and so on. OK, so is there any question about that? So does that mean that the VRJP is a mixture of ERRWs? No, no — it's the contrary. The thing is: if W is fixed, then the discrete-time VRJP is a mixture of reversible Markov chains with these conductances — if you don't care about the continuous time. And the ERRW is a mixture of VRJPs when you take the W random. So you have two mixtures — OK, you need to think a bit about that, but you have two layers of randomness. So does this connect to the formula that Diaconis and Coppersmith obtained? No — in fact, not in that direction. From this you can recover Diaconis–Coppersmith, but the other way is difficult: what you would like is to somehow cut the Diaconis–Coppersmith measure into these two layers, and that is difficult; there is no direct proof. In the reverse direction it works: you make the change of variables — a somewhat heavy computation. But even heuristically? No — it is not clear how to go from the ERRW to the VRJP; I don't know any way to go in this direction. OK, so let me give just a sketch of one part of the proof. In fact, the details are in the exercises too — there are three exercises; the first one is simpler, it's the same thing but for the ERRW, and the second one is the second part of this. But I will explain the idea. Sketch of the proof: assume that mu is a probability measure — that is, somehow, the difficult part. Then the idea is to do the same thing as for the Pólya urn yesterday, but it's more complicated. Somehow, it's like Bayesian statistics:
you want to understand the posterior law of the field, conditionally on the beginning of the process. OK, so assume that mu is a probability measure. In fact, it's convenient to change variables: the variables u are rooted at i_0, and it's more convenient to have variables which do not depend on the starting point. So you define v_i as u_i minus the average of the u_j — just a way to have something symmetric, so that the sum of the v_i is equal to 0. In these variables, the measure has the same form, because the gradients do not change; there are some factors that can be extracted. By a simple computation, what you get is an extra term that depends on the starting point, then the same sum — the same interaction — and the same determinant. The difference is that instead of having the exponential of minus the sum of the u_i there, you just have an exponential of v_{i_0}. So it gives an extra weight to configurations where v at the starting point is big: it has a tendency to pull the field up at the origin. So now, what is the idea of the proof? Step one: denote by P_v the law of the Markov jump process in the environment given by v, that is, with rates one half of W_ij exp(v_j − v_i). So that's just the law of this process. And the first step — a simple computation — is to compute the probability, the density, of the following event; I will not write the formula of this event. You start from i_0, and you look at the event that you jump to i_1 at time t_1, then at time t_2 to i_2, at time t_3 to i_3, et cetera, and you finish at i_n at time t_n.
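This path event will be the basic object. Its density under P_v can be sketched numerically (a sketch with arbitrary test values for the graph, the field v, the path and the jump times): the first function multiplies holding-time factors and jump rates directly, and the second implements the simplified form obtained after the cancellations described next, where the gradient factors telescope to exp(v at the end minus v at the start).

```python
import math

def density_direct(W, v, path, times):
    """Density that the jump process with rates r_ij = 0.5*W_ij*exp(v_j - v_i),
    started at path[0], jumps to path[1], ..., path[n] exactly at the given
    times t_1 < ... < t_n: product of holding-time factors and jump rates."""
    dens, t_prev = 1.0, 0.0
    for k, t in enumerate(times):
        i, j = path[k], path[k + 1]
        out = sum(0.5 * w * math.exp(v[l] - v[i]) for l, w in W[i].items())
        dens *= math.exp(-out * (t - t_prev)) * 0.5 * W[i][j] * math.exp(v[j] - v[i])
        t_prev = t
    return dens

def density_telescoped(W, v, path, times):
    """Same density after the cancellation: the gradient factors telescope to
    exp(v_last - v_first); the holding terms combine via the local times of
    the path (time spent at each vertex)."""
    ell = {i: 0.0 for i in W}            # local time of the path at each vertex
    t_prev = 0.0
    for k, t in enumerate(times):
        ell[path[k]] += t - t_prev
        t_prev = t
    hold = sum(
        ell[i] * sum(0.5 * w * math.exp(v[l] - v[i]) for l, w in W[i].items())
        for i in W
    )
    jumps = 1.0
    for k in range(len(times)):
        jumps *= 0.5 * W[path[k]][path[k + 1]]
    return math.exp(-hold) * jumps * math.exp(v[path[-1]] - v[path[0]])

# arbitrary test data: weighted triangle, a field v, and one path
W = {0: {1: 1.0, 2: 2.0}, 1: {0: 1.0, 2: 0.5}, 2: {0: 2.0, 1: 0.5}}
v = [0.1, -0.2, 0.3]
path, times = [0, 1, 2, 0], [0.7, 1.1, 2.0]
```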
So you look at the density of the probability that your process follows this path, jumping exactly at these times. Then it is rather easy to prove that it is the following product. There is a term coming from the holding times of the process: it's a Markov process, so each time you jump after an exponential time. This term is the exponential of minus the sum, over the vertices, of the local time of the path at that vertex — I denote the path by gamma, so that's the local time of gamma, the time spent by the path at each vertex — times the total jump rate out of it, the sum over the neighbors l of i of one half of W_il exp(v_l − v_i). That is the holding-time term; it is exponential because of the exponential holding times. And then you have an extra term which gives the discrete part, somehow: the product, for k from 0 to n − 1, of the transition rates along the path, the W_{i_k, i_{k+1}}. And then there is one more factor which comes from the product of the jump rates: there are cancellations, and what remains is the exponential of v at the last position minus v at the first position. That's a simple computation. And now you combine this with the measure. So, step two — and then I have finished. Step two: suppose that U is a random variable with distribution mu_{i_0}^W, and that Z-tilde is, conditionally on U, the Markov jump process in the environment U. Then you look at the law of U conditionally on the past of the process Z-tilde: you condition on the event that the process followed the path I described before. So it is the posterior distribution of U, conditionally on the beginning of the process. Then what you can prove — and it is very easy — is that it is proportional to the probability I wrote there
times the measure mu. That's Bayesian statistics; that's Bayes' formula. And now the thing is that you can combine the exponentials that are there, in the path density, with the exponentials that are in the measure, and make a change of variables. And you can prove that the law of this U is again — up to a change of variables; there is a change of variables, you have to make one — a measure mu of the same form, but rooted at the last position, and with a change of the W: you get W_ij times the square root of 1 + l_i times the square root of 1 + l_j. That's just a sketch: the idea is that you multiply the measure by this formula, combine, and make a change of variables so that you get a density of the same form. Everything is, somehow, Bayesian statistics: you have a prior law for U, you condition on an observation, and you want to compute the posterior law; and when you have explicit formulas, you can do it. OK, so thank you. Questions or remarks? So the fact that the measure is approximately a Gaussian free field — is that how the normalization is proved, in some sense? Yes, more or less. In fact, somehow that is the key behind the fact that it is normalized — in both proofs. In the proof of Disertori, Spencer and Zirnbauer, there is a supersymmetric deformation; when you deform, you converge to a Gaussian free field, and then you use the normalization of the Gaussian free field. And in the probabilistic proof, you follow the path, you have a Feynman–Kac argument, and asymptotically the posterior distribution converges to some Gaussian free field, and this gives you the normalization. So at the end, you use the fact that when time goes to infinity, the W become big, and then the measure converges to a Gaussian free field; and you use the normalization of the Gaussian free field to prove the result.
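Schematically, the conclusion of step two reads as follows (a reconstruction of the update rule sketched above; l_i denotes the local times of the observed path, i_n its final position, and the change of variables is not spelled out):

```latex
% posterior law of the field after observing the path i_0 \to \cdots \to i_n:
\mathrm{Law}\big(U \,\big|\, (\tilde Z_s)_{s\le t}\big)
\;=\; \mu^{\widetilde W}_{i_n}
\quad\text{(up to an explicit change of variables),}
\qquad
\widetilde W_{ij} \;=\; W_{ij}\,\sqrt{1+\ell_i}\,\sqrt{1+\ell_j}.
```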
Maybe it's not clear — this is a true question. When you mentioned the sigma model, the supersymmetric one: does it come from string theory? Excuse me? I mean, when you mentioned the sigma model, the supersymmetric sigma model — are there similar things in string theory? In string theory? I don't know — I think so, yes. OK, I'm not an expert, but I think sigma models are a central object in all these topics. Anyway, "sigma model" is a very general term, as far as I understand, for interaction models where the spins live on some complicated manifold. Yes. OK.