So the subject of my talk will mainly be the edge-reinforced random walk. I will define precisely what I mean by that in a moment. So what is the edge-reinforced random walk? In two words: it is a family of processes that have a tendency to come back to their past trajectory. Ok, so there is a reinforcement, in the sense that the walk wants to return to the sites it has visited in the past. And, ok, for this part you do not need anything from the school. But what I would like to explain, maybe not today but in the next lectures, is how this is related to the random Schrödinger operator, to Anderson localization. So for those who were at the school last year, this was the context of the course of Simone Warzel on random Schrödinger operators, but I will not use what she said; I will explain it in the talk. So what I will explain rigorously in the talk is how this reinforced walk is related to a certain model of field theory, a supersymmetric sigma model. You do not need to know what that is, but it is something like a hyperbolic generalization of the Gaussian Free Field, if you know it. And this sigma model is related, in a very non-rigorous way, through the context of the course of Simone Warzel, to Anderson localization. So that is the first relation to Anderson localization, and I will say a few words about what we mean by Anderson localization. What I will also state precisely in this course is how this sigma model is related to a Schrödinger operator, and that will be rigorous. Ok, and this is also related to the reinforcement. So let me just recall what I mean by Anderson localization, in a few words. You do not need to know the course of Simone Warzel.
But what is Anderson localization? It concerns the spectral properties of the random Schrödinger operator, which is of the following form. You take H_0: imagine that you are on Z^d and that you consider Z^d as a graph, so H_0 is the adjacency matrix of the graph. And you look at this adjacency matrix plus lambda times a potential, H_lambda = H_0 + lambda V, where V is given by i.i.d. potentials (V_x) on the vertices. Ok, so these are operators on the functions l^2(Z^d). And the question of localization for this operator, which was the content of the course of Simone Warzel last year, is the following. There is a general picture; I will just recall one part of what she explained last year. One important piece of the picture is that for lambda large, which means that when you have a strong disorder in the potential, what you expect, and in fact what is proved rigorously, at least under some conditions on the potential, is that this operator H_lambda has pure point spectrum. I will not define it precisely, but what pure point spectrum means is that this operator has a basis of l^2 eigenfunctions, and these eigenfunctions are localized in the sense that they have exponential decay: they are exponentially localized. What does exponentially localized mean? It means that there is a basis of l^2 eigenfunctions which are essentially of the following form: each one is localized around some point, meaning that the function may oscillate, but it decays exponentially with respect to the distance to that point, in every direction. So this is what is called the localized phase of the random Schrödinger operator: you have a basis of l^2 eigenfunctions with exponential decay.
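To make the localized phase concrete, here is a small numerical sketch of mine, not from the lecture: it builds H_lambda = H_0 + lambda V on a finite path in Z^1, where one-dimensional localization is well understood, and measures how concentrated the eigenfunctions are. The chain length, disorder strength, and the uniform potential are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 200, 5.0   # chain length and disorder strength (illustrative choices)

# H0: adjacency matrix of the path graph on {0, ..., n-1}
H0 = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
V = np.diag(rng.uniform(-1.0, 1.0, n))   # i.i.d. potential on the vertices
H = H0 + lam * V

eigvals, eigvecs = np.linalg.eigh(H)

# Inverse participation ratio: about 1/n for a plane-wave-like (delocalized)
# state, order 1 for a state concentrated on a few sites.  Strong disorder
# pushes it toward 1.
ipr = np.sum(eigvecs ** 4, axis=0)
print(float(ipr.mean()))
```

At this disorder strength the mean inverse participation ratio is far above the delocalized value 1/n, reflecting the exponential decay of the eigenfunctions.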
That is to say, as in finite dimension, you can write your operator as a decomposition into projections onto l^2 eigenfunctions. And what is conjectured is that for lambda small this fails. This is the main conjecture in this field: for d larger than or equal to 3, so in dimension 3 and more, and for lambda small, the conjecture is that this is no longer the case, and that inside the spectrum there is a part which is delocalized. I do not want to be very precise here, but it means that part of the spectrum corresponds to delocalized functions, what are called extended states: you cannot decompose the operator on a basis of l^2 eigenfunctions, but the spectral resolution of the operator contains some extended states, which look roughly like plane waves, so delocalized. And this is the main conjecture on delocalization for the Anderson model. So you see that here you have this operator H_0 plus an i.i.d. potential, and this will be the relation with the edge-reinforced random walk. In fact there are conjectures, coming in part from the correspondence that I will explain later, that for the reinforced random walk the localized regime corresponds to recurrence of the walk, and the delocalized regime corresponds to transience of the process. So this is a very, very vague introduction, but this will essentially be the content of the course. Let me say that this dotted arrow, between the reinforced walk and the Anderson model, is far from being understood, and it is not even clear that it has a precise meaning. Ok, so the idea is the following: the Anderson model can be interpreted, in a non-rigorous way, in terms of a supersymmetric sigma model, and in fact there is a sigma model which is associated with this reinforced random walk, but it belongs to a family of models which is only supposed to be connected to the one for the Anderson model. Ok, so there are many things which are far from being understood here. First, the sigma model is a toy model for the real thing; and second, even for the toy model there is no rigorous connection in mathematical terms. So I will not be able to explain everything. [Question from the audience about the blackboard.] No, this one is fine. Ok, so let me say what I will explain rigorously: these two things, the reinforced walk and the sigma model, are related to a random Schrödinger operator where the potential is not i.i.d. but has dependence at distance 1. And in fact this happens only at the bottom of the spectrum: the correspondence between the sigma model and this process will appear at the bottom of the spectrum, but this will become clearer during the talk, I hope. Ok, so all of this is rigorous, but it does not give information on the real Anderson model. Ok, so I will now start with some simple things. Before defining what the edge-reinforced random walk is, let me recall some simple facts from classical probability: de Finetti's theorem and the Pólya urn. Maybe some or many of you know this very well, but I will at least recall it. So: de Finetti's theorem and Pólya urns. Ok, so why do I recall this?
It is because the ideas contained in de Finetti's theorem and in the Pólya urn are the basis of the ideas that appear in the description of the reinforced process. So let me state de Finetti's theorem in its simplest form. Suppose that you have a random process (X_i) taking values in {0,1}. You say that the process is exchangeable if for all n and all permutations sigma of {1, ..., n}, the law of (X_sigma(1), ..., X_sigma(n)) is equal to the law of (X_1, ..., X_n). So it means that the process is invariant under finite permutations. And then there is the beautiful theorem of de Finetti, coming from Bayesian statistics. It says that if you have an exchangeable process, it is a mixture of i.i.d. samplings. Let me say it more precisely. So that is the definition of exchangeable, and de Finetti's theorem says: if you have an exchangeable process, then first, if you look at the asymptotic statistics of the number of 1's, that is, at (1/n) sum_{i=1}^n X_i, this converges almost surely, as n tends to infinity, to some random number p in [0,1]. Ok, this p can be random. And conditionally on p, the X_i are i.i.d., distributed as Bernoulli variables with parameter p. Ok, so that is the classical de Finetti theorem in its simplest form. So you see: you have an asymptotic statistic, and conditionally on this asymptotic statistic the process is simple, here i.i.d. Yes, is it clear? So now let me come to the typical example for this theorem: it is the Pólya urn. And then you will see that there is something which will somehow be the prototype of what is going to appear in the course, but in a more complicated form. So what is the Pólya urn? You take an urn with balls of two colors, let's say white and green. Ok, so you have white balls and green balls, and let's say that at the beginning you have a white balls and b green balls. And the Pólya urn is the following simple process. What you do is that you pick one ball uniformly at random in the urn, and you put it back, but together with an extra ball of the same color. Ok? It's clear: if you pick a white ball, you put it back and you add a new white ball. So in the picture, you now have four white balls and three green balls. Yes? So that's the Pólya urn. Who knows the Pólya urn? Almost everybody, good. Ok, so it is very simple to see the following. Look at the sequence that describes the colors of the balls that have been picked. Then it is an exchangeable process. Let's say that this is an exercise; it is very simple: you can easily compute the probability of picking a given sequence, and then you see that it is exchangeable. Ok, so it means that you can apply de Finetti's theorem. And there is an extra piece of information that you can get. By de Finetti you have an asymptotic statistic: the asymptotic fraction of white balls that you pick converges to the random variable p. The extra information in the Pólya urn is that the law of p is explicit: it is a Beta random variable, Beta(a,b). So what is Beta(a,b)? It is a distribution on [0,1], and its density is proportional to p^{a-1} (1-p)^{b-1}.
And of course it is supported on [0,1], and there is a normalizing constant. In general, in probability, we never write constants, but you will see, I hope, that at the end of the course you will see that in this subject constants matter a lot. So I write the constant: it is Gamma(a+b) / (Gamma(a) Gamma(b)). Ok, so I will prove this, because it takes two lines; you do not need de Finetti's theorem to prove it, it is very simple, and it will be somehow a prototype of what will come in the rest of the course. So what does it mean? It means that you can state the result about the Pólya urn differently: the Pólya urn is a mixture of i.i.d. samplings. It means that you first pick p at random and then you take i.i.d. Bernoulli variables with parameter p. I write it like this:

law of the urn sequence = ∫_0^1 (i.i.d. Bernoulli(p) sampling) Beta(a,b)(dp).

It means that you first pick p according to the Beta distribution, and then you do i.i.d. sampling. Is that clear? It is the same statement. So let me now prove this; you will see it is just two lines if you take it the right way. So how do we prove that, without de Finetti and without anything? It is quite simple; it is something like Bayesian statistics, if you did Bayesian statistics at some point. So let's start in the reverse direction: we start from this integral, and we want to prove that if you take the expectation with respect to this p, then you get the Pólya urn. So suppose that you start from the right-hand side, and look at the probability that X_{n+1} = 1 conditionally on (X_1, ..., X_n) = (epsilon_1, ..., epsilon_n), where the epsilon_i are 0 or 1. Ok, sorry, careful with the coding: you have to put 1 when the ball is white and 0 when it is green. So suppose that among the epsilon_i you have k ones and n−k zeros. Then it is very simple: you have two integrals. The conditional probability is

P(X_{n+1} = 1 | (X_1, ..., X_n) = epsilon) = ∫_0^1 p · p^k (1−p)^{n−k} Beta(a,b)(dp) / ∫_0^1 p^k (1−p)^{n−k} Beta(a,b)(dp).

The numerator is p times the probability of first choosing k ones and n−k zeros in the prescribed order, because you have fixed the sequence, integrated against the Beta distribution; and you divide by the probability of the first n values, which is the same thing but without the last factor p. Ok? Is that clear? But now, what you see is that you just have a ratio of two constants that you know very well: they are exactly the normalizations of Beta distributions. That is why I wrote the normalization. Up to the common factor Gamma(a+b)/(Gamma(a)Gamma(b)), the ratio is

∫_0^1 p^{k+a} (1−p)^{n−k+b−1} dp / ∫_0^1 p^{k+a−1} (1−p)^{n−k+b−1} dp.

Ok? So that is just a ratio of Gamma functions: the numerator is Gamma(k+a+1) Gamma(n−k+b) / Gamma(a+b+n+1), the denominator is Gamma(k+a) Gamma(n−k+b) / Gamma(a+b+n), so if you make the computation you will see that the ratio is (a+k) / (a+b+n). And this is exactly the probability of picking a white ball at step n+1 conditionally on the past in the Pólya urn, because that is the configuration of the urn: if you have first picked k white balls and n−k green balls, the urn contains a+k white balls out of a+b+n in total. Ok? It's clear? So that's a simple computation. You agree?
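Both facts used in this computation, the exchangeability of the urn sequence and the Beta predictive formula, can be checked by a short script. This is just a sanity check of mine, with arbitrary values of a, b, n, k, not part of the lecture.

```python
from fractions import Fraction
from math import gamma

def urn_seq_prob(a, b, seq):
    """Probability that the Polya urn (a white, b green initially)
    produces exactly this draw sequence (1 = white, 0 = green)."""
    w, g, p = a, b, Fraction(1)
    for s in seq:
        p *= Fraction(w if s else g, w + g)
        if s:
            w += 1
        else:
            g += 1
    return p

a, b = 3, 2
# Exchangeability: two orderings with the same counts, same probability.
print(urn_seq_prob(a, b, [1, 1, 0, 1, 0]) == urn_seq_prob(a, b, [0, 1, 1, 0, 1]))

def beta_moment(a, b, k, m):
    """Integral of p^k (1-p)^m against Beta(a,b)(dp), via Gamma functions."""
    return (gamma(a + b) / (gamma(a) * gamma(b))
            * gamma(a + k) * gamma(b + m) / gamma(a + b + k + m))

# Predictive probability of the Beta mixture after k ones among n draws;
# it reduces to the urn transition probability (a + k) / (a + b + n).
n, k = 5, 3
predictive = beta_moment(a, b, k + 1, n - k) / beta_moment(a, b, k, n - k)
print(predictive)
```

Exact rational arithmetic (`Fraction`) is used for the urn so the equality check is not polluted by rounding.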
But I did it precisely because it will be somehow a prototype of what is going to appear in the rest of the talk. So for us it will be a prototype of what I will call an explicit de Finetti theorem. You will see: I will do a more complicated proof of the same type later. So what is an explicit de Finetti theorem? It will be more complicated, in the sense that it will be related to Markov chains and so on. What is the principle of the explicit de Finetti theorem? First, you have some asymptotic statistic with an explicit and interesting law. Well, interesting is a matter of taste, but at least explicit. And the second point is that conditionally on this statistic — here it is the p — the law of the process is simpler. So it means that you have some type of Markov property of the process. Here Markov means that it is i.i.d., but more generally it will be some independence in the process: conditionally on this statistic, the process is simpler. And you will see several occurrences of this. And somehow, once you have such a picture, you do not really need to have a de Finetti theorem: if you can guess the explicit law of the statistic, then you can do the same trick as there, and you get directly the fact that the mixture has the law that you expect, that you want. So now, of course, for the Pólya urn — let me not describe it precisely because I will not need it — if you have more than two colors, say k colors in the urn, then there is again an explicit law, which is called the Dirichlet law: you have exactly the same statement but with a Dirichlet distribution. Ok, but I do not want to describe it; I will just say a word later about trees, but I do not work on trees. Ok, so that was the first point. Second point: I will now define what the edge-reinforced random walk is. You will see that it is a walk of Pólya type. So: the edge-reinforced random walk.
And I will explain what the counterpart of de Finetti's theorem is in this case: partial exchangeability. Ok, sometimes it is also called the linearly edge-reinforced random walk; you will see later why. So I will abbreviate it ERRW. So what is this object? Take a graph G with vertex set V and edge set E, a non-directed graph. And suppose that it has finite degree at each vertex, and that it is connected, otherwise it is not very interesting. On this graph you put some positive weights (a_e) on the non-directed edges. Ok, so what is the linearly edge-reinforced random walk? Let me first not give a formula but draw a picture. So imagine that you start at i_0, and that the edges around i_0 have weights a_1, a_2, a_3, a_4. What does the process do? At time zero it looks at the weights of the edges that are adjacent to its initial vertex, and it jumps along one of them with probability proportional to the weights. So imagine that it crosses the edge with weight a_1. Then it increases that edge's weight by one. So at first it jumps according to the weights; once it crosses an edge, it increases its weight by one. And it increases the weight in both directions, since the edge is non-directed. Then you see a weight a_1 + 1 on that edge, and whatever weights on the others, and again you jump with probability proportional to the weights. So you see that it is reinforced: the walk feels the fact that it has already crossed this edge, and so it has a tendency to come back. Ok, so in this sense it is reinforced. And then you continue: if you come back across that edge, its weight becomes a_1 + 2; if you then cross the edge with weight a_4, it becomes a_4 + 1, and so on. You always increase the weight of the crossed edge by one. So if you want to write a formula, it is simple.
If x is a neighbor of y, the probability that X_{n+1} = y, knowing that X_n = x and knowing the whole path X_0, ..., X_{n-1}, is proportional to the weight of the edge at time n:

P(X_{n+1} = y | X_0, ..., X_n = x) = (a_{xy} + N_{xy}(n)) / (a_x + N_x(n)).

So let me define the standard notation. a_x is the sum of a_e over all the edges e that are adjacent to x, that is, all the edges connected to x. N_{xy}(n) is the number of times, before time n, that the process has crossed the edge between x and y in the non-directed sense, that is, the number of k <= n with {X_{k-1}, X_k} = {x, y}. And N_x(n) is the sum of N_{xy}(n) over the neighbors y of x, so it counts, up to boundary terms, the number of times you have visited the vertex x. Ok, is the definition clear? So let me make a remark, an important remark which is simple and pictorial, even if not cleanly stated: a small corresponds to stronger reinforcement, and a large corresponds to weaker reinforcement. Ok, why? It is very easy on this picture. Suppose that a is very small everywhere. The first time you cross an edge, its weight becomes a + 1, and then you have to compare a + 1 with the weight a of the fresh edges. So when a is small everywhere, you have a very strong reinforcement. Ok, and when a is large, you do not feel the fact that you have increased the weight by 1. Ok, so you have these two regimes. Ok, so second remark, important: this process was introduced by Diaconis and Coppersmith.
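Here is a minimal simulation of the walk just defined, on a small toy graph of my choosing (a 4-cycle with constant initial weight 1); the code is only a sketch of the dynamics above, not anything from the lecture.

```python
import random

def errw(edges, a0, i0, steps, rng):
    """Linearly edge-reinforced random walk: jumps are proportional to the
    current edge weights, and each crossed (non-directed) edge gains +1."""
    w = {frozenset(e): float(a0) for e in edges}   # constant initial weights
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, []).append(v)
        nbrs.setdefault(v, []).append(u)
    path, x = [i0], i0
    for _ in range(steps):
        ys = nbrs[x]
        y = rng.choices(ys, weights=[w[frozenset((x, z))] for z in ys])[0]
        w[frozenset((x, y))] += 1.0   # reinforce, in both directions at once
        path.append(y)
        x = y
    return path, w

rng = random.Random(1)
square = [(0, 1), (1, 2), (2, 3), (3, 0)]   # toy graph: a 4-cycle
path, w = errw(square, a0=1.0, i0=0, steps=1000, rng=rng)
print(sum(w.values()))   # grows by exactly 1 per step: 4 + 1000 = 1004.0
```

Storing each non-directed edge as a `frozenset` makes the "both directions" rule automatic: the weight crossed from x to y is the same object as the one crossed from y to x.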
Well, I think it was also in the thesis of Pemantle, which was supervised by Diaconis, so I think that Diaconis is behind this. Ok, so Diaconis is at the origin of all these types of processes. If you read the paper, the motivation he gave was that he was walking in Paris, taking streets at random, and that he had a tendency to come back to the streets he had already seen. That is the official motivation. [He was giving the talk in Paris, right, when he said that?] Yes, well, actually I don't know. Ok, so that is the official motivation. In fact, I think there is something much deeper in his motivation: if you look at the publications of Diaconis at that time, there are something like 10 or 15 papers on de Finetti theorems. I think that is the real motivation. I guess he understood that this process was interesting because it has some exchangeability property, and it is also related to Bayesian statistics. So I think his real motivation was de Finetti theorems, and probably Bayesian statistics, which was one of his specialties. Ok, so now I come to the point of why this process is interesting. You see that there is a choice in this process: it is called linearly reinforced just because you increase the weight by one. Ok, so there is no a priori reason to increase it by one, except that you can say linear is nice; and it is not clear that for every process linear is the good choice. You could take these weights to some power, for example, which would give a different process, or times a logarithm, anything. But there is a deep reason why this particular process is interesting, more interesting than the others: it is related to de Finetti theorems. And this leads us to a theorem of Diaconis and Freedman, which in fact came before the introduction of the edge-reinforced random walk.
So that is a de Finetti theorem for Markov chains. Ok, so let me give a definition. The idea is the following: de Finetti's theorem says that a process which is exchangeable is a mixture of i.i.d. samplings; you would like something similar saying that when the process has some good property, it is a mixture of Markov chains. And that is the Diaconis–Freedman theorem. So before, I need to state the definition. Take a random process (Y_n); suppose that it lives on a graph — you do not really need a graph, since you can always take the complete graph. Then Y is partially exchangeable if for every path gamma = (gamma_0, ..., gamma_n), the probability that (Y_0, ..., Y_n) follows the path gamma depends only on the number of crossings of directed edges by gamma — and let me write something which is useless in this case, but I write it anyway: and on the starting point. So is the definition clear? It means that, on a graph, the process is partially exchangeable if this probability depends only on how many times you cross each directed edge, and not on the order of the crossings. So the probability of doing this, this and this is the same as the probability of doing this, this and this, whenever the crossing numbers agree. [Does (Y_n) have to be a walk? I mean, if you take this edge you cannot jump.] No, you do not care; I said that it is on a graph, but it can just be on a set: you can always take the complete graph, and in that case any sequence is a path. Excuse me, let me be precise: the number of crossings of directed edges is the following.
Ok, so for each directed pair (x, y), it is the number of indices k, 0 <= k <= n−1, such that (gamma_k, gamma_{k+1}) = (x, y). So the probability depends only on the family of all these crossing numbers, over all directed edges. And I will say that the process is partially exchangeable and reversible if the probability depends not on the number of crossings of directed edges, but only on the number of crossings of non-directed edges. It is clear what I mean? So: partially exchangeable if it depends only on the number of crossings of directed edges and on the starting point; partially exchangeable and reversible if it depends only on the number of crossings of non-directed edges. So in that case it depends on the family (N_e(gamma)) over all non-directed edges e, where N_e(gamma) is the same sum as before, but now you count the times the non-directed pair {gamma_k, gamma_{k+1}} equals the edge e. So now maybe you can guess what the Diaconis–Freedman theorem is; actually I already said it, more or less. So what is the theorem of Diaconis and Freedman? I do not remember exactly when this notion was introduced; I think it was in the 80s, and the theorem is also from the 80s. So the theorem is a de Finetti-type theorem. Of course, here it cannot be a mixture of i.i.d. samplings, because on a graph, for example, you know that you only jump to neighbors, so it cannot be i.i.d. sampling. But it says that if Y is recurrent, which means that it comes back to its starting point infinitely often — ok, so you need a condition about recurrence.
But if the initial point is recurrent, and if (Y_n) is partially exchangeable, respectively partially exchangeable and reversible, then it is a mixture of Markov chains. So what does it mean that it is a mixture? It means that you have a random probability on the set of Markov chains: yes, a distribution mu on the set of transition probabilities, and the law of Y is equal to an integral. Ok, let me write it a bit vaguely; I will be more precise in the reversible case. The law of Y is an integral, over the set of transition probabilities omega on the graph, of the law of the Markov chain with transition probability omega, integrated against mu(d omega). So it is a random walk in a random environment. Yes. And when Y is partially exchangeable and reversible, then it is a mixture of reversible Markov chains. In this case I will be more precise about what it means, because it is what we will be interested in. So, respectively, a mixture of reversible Markov chains. Ok, is the statement clear? So, in the case of a mixture of reversible Markov chains, it is easy to describe. Just remember that a reversible Markov chain on the graph G can be described by conductances, by a set of positive conductances on the edges. Any reversible Markov chain can be described by conductances, just through the jump probability. Sorry, I mixed up notation; let me switch to i and j. The probability to jump from i to j is x_{ij} / x_i, where x_i is the sum of the conductances x_{ij} over the neighbors j of i. And conversely, any reversible Markov chain can be described by conductances. [Is it an if and only if?] Yes, it is an if and only if; it is very easy to see the reverse direction, because obviously a mixture of Markov chains is partially exchangeable. For partial exchangeability it is if and only if, yes. So what does this mean for the edge-reinforced random walk?
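As a small check of the conductance description (my own illustration, with arbitrary conductance values): the chain with transitions x_ij / x_i satisfies detailed balance with respect to pi_i proportional to x_i, since pi_i P(i,j) reduces to the symmetric quantity x_ij divided by the total conductance.

```python
import numpy as np

# Conductances on a triangle graph: symmetric, zero diagonal (arbitrary values).
x = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 3.0],
              [1.0, 3.0, 0.0]])

x_i = x.sum(axis=1)          # x_i = sum of the conductances at vertex i
P = x / x_i[:, None]         # transition matrix: P(i, j) = x_ij / x_i

pi = x_i / x_i.sum()         # stationary measure proportional to x_i
flow = pi[:, None] * P       # pi_i P(i, j) = x_ij / (total conductance)
print(np.allclose(flow, flow.T))   # detailed balance: the flow is symmetric
```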
So if you apply this to the edge-reinforced random walk, application to the ERRW, it means the following. Ok, one point which is not completely obvious, but not difficult, is that if the graph is finite, then the ERRW is recurrent. I will explain why later; roughly, since the weights grow only linearly, a Borel–Cantelli type argument shows that the walk cannot eventually avoid part of the graph: it has to visit every vertex infinitely often. And there is another argument which is more direct. Ok, so to apply the theorem, the key point is that the edge-reinforced random walk is partially exchangeable, and in fact partially exchangeable and reversible. Why? I leave it as an exercise, but you can compute exactly the probability that the ERRW follows a fixed path. Let me take a notation that I will use several times: (a)_k denotes the rising factorial, the product a (a+1) ... (a+k−1). So it is quite easy, and that is the exercise, to prove that the probability that the walk follows a fixed path gamma is the product over all the edges of (a_e)_{N_e(gamma)}, where N_e(gamma) is the number of times gamma crosses the edge e, divided by a similar product over the vertices. Ok, it is a one-line induction, completely obvious. So you see that it depends only on the number of times you have crossed the edges, because the vertex terms can be computed from the edge terms if you know the starting point. That is an exercise, but it is quite easy, yes? [What about other reinforcement schemes?] No: for the normalization you need the linearity. Otherwise, instead of having a_x plus the number of visits in the denominator, you would have the sum of the reinforced edge weights, and that is not linear in the weights.
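The partial exchangeability can be checked directly on a tiny example (mine, not from the lecture): two paths on a triangle with the same non-directed crossing numbers, but crossings in a different order and direction, receive the same probability under the ERRW.

```python
from fractions import Fraction

def path_prob(edges, a0, gamma):
    """Exact probability that the ERRW with constant initial weight a0
    follows the path gamma, computed step by step."""
    w = {frozenset(e): Fraction(a0) for e in edges}
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, []).append(v)
        nbrs.setdefault(v, []).append(u)
    p = Fraction(1)
    for x, y in zip(gamma, gamma[1:]):
        p *= w[frozenset((x, y))] / sum(w[frozenset((x, z))] for z in nbrs[x])
        w[frozenset((x, y))] += 1   # linear reinforcement
    return p

triangle = [(0, 1), (1, 2), (2, 0)]
g1 = [0, 1, 2, 0, 1]   # crossings: {0,1} twice, {1,2} once, {0,2} once
g2 = [0, 2, 1, 0, 1]   # same non-directed crossing numbers, other order
print(path_prob(triangle, 1, g1), path_prob(triangle, 1, g2))  # both 1/36
```

Note that g1 crosses {0,1} twice in the same direction while g2 crosses it once in each direction, so only the non-directed counts agree, which is exactly the reversibility statement.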
I mean, you cannot say that if you have a factor alpha in the numerator it will also appear in the normalization; the normalization would not factor out. Ok, so the ERRW is partially exchangeable, and reversible. And so what does it mean? It means that you have a mixing measure on reversible Markov chains. So it implies that for every starting point and all initial weights, there exists a measure — depending on the starting point i_0 and on the initial weights a; on what? on the set of conductances, where you can always normalize so that the conductance of one specific edge is equal to one, say — such that the law of the ERRW with initial weights a, starting from i_0, is the integral of the law of the Markov chain in random conductances x, starting from i_0, integrated against this measure mu^{i_0, a}(dx). Ok? So now — I am very slow — you can ask what this measure is. The important point is that this measure in fact depends on the starting point, and it really depends on it. Ok? And the second thing is a theorem of Diaconis and Coppersmith: they were able to compute the measure mu explicitly. I will not write the formula here, because I will write several formulas later; it is in the exercise sheet for tomorrow. And the thing is that once you have this measure, and if you know that it is a probability measure, then you can prove directly, exactly the same as for the Pólya urn, that the edge-reinforced random walk is a mixture of these reversible Markov chains. Ok, that is a direct computation: if you can guess what the measure is, and if you can prove that it is normalized correctly. Ok? It will be in the exercises; I will write the formula there. And to be clear: this is on a finite graph G. What I state is that you have an explicit formula for this measure.
And if you know that this is a probability measure, then you can prove by direct computation that it is the mixing measure of the edge-reinforced random walk.

Ok, so I will not prove the Diaconis-Freedman theorem, but let me say one word about the proof. In fact the proof is quite clever but not extremely difficult. If you want to prove the Diaconis-Freedman theorem, that is, that a partially exchangeable recurrent process is a mixture of Markov chains, the idea is to use the standard de Finetti theorem. The starting point is recurrent, so you look at the sequence of excursion cycles. This sequence lives in the set of cycles, which is some nice Polish space. And it is very easy, from the partial exchangeability, to prove that this sequence is exchangeable in the classical sense: you can flip cycles precisely because the process is partially exchangeable. So you can apply the standard de Finetti theorem to the sequence of cycles, and it says that the cycles are a mixture of IID sequences. Then the trick is to prove from this that, conditionally on the asymptotic distribution of the cycles, the process is a Markov chain. That is the tricky part; the first step, using de Finetti, is very easy.

Ok, so let me now come to what is known. The plan of the course was to talk about the probabilistic part in the first course, but I think I will be slower than expected. So now what I would like to do is to explain what is known about this process, which is in particular the motivation for the rest, and to show that there is a phase transition in dimension 3 and more.
Ok, so, part 3: what is known on the ERRW. The first result, of course, was this result of Diaconis and Coppersmith about the explicit mixing measure. If you look at the exercises, you will see that this measure is rather puzzling, and you will see several other measures which are puzzling. But there is a special case which is much simpler, and this case was investigated by Pemantle, I think in his PhD thesis. So the first case which was treated, where the behavior of the ERRW was understood, is the case of trees. That's Pemantle, in the 80s, I don't remember exactly when.

Let me state it in the simplest case (I think it was more general). Suppose that you have a binary tree, or a d-ary tree, that you start from the root, and that you look at the ERRW with constant initial weight a. Then there is a nice simplification in the case of the tree. Why? The main property of a tree is that if you go into one branch of the tree and then come back, you have to come back by the same edge. You agree with that? That is the characterization of a tree. So look at what happens at a vertex. At the first time the ERRW reaches this vertex, the weight is a+1 on the edge coming from the root, and a on the other edges. Ok? Then suppose you choose to jump along one of the edges and later come back: when you come back, you have necessarily increased that weight by 2, because you had no choice but to come back by the same edge. So each time you choose an edge, you can consider that you increase its weight by 2 for the next time you come back. Ok? Is that clear?
So at each vertex, the situation is a Pólya urn, a standard Pólya urn, and the urns are independent at each vertex. Yes? Because the initial weights are (a+1, a, ..., a), and it is a Pólya urn where you increase the number of balls by 2 each time. But increasing the number of balls by 2 each time is the same as taking initial weights equal to half of the original ones. Yes? So you can see that what happens at one vertex is independent of what happens at the other vertices. Is that clear? So the process is governed by independent Pólya urns, and each urn is a mixture of IID sampling. So it means that in this case, the ERRW is the same as a random walk in a random Dirichlet environment. Why Dirichlet? Because the Dirichlet law is the mixing measure of the Pólya urn. On Z you have just 2 directions, so it is a mixture with a beta law. And you see that you have a push towards the origin, because the initial number of balls on the edge going to the root is (a+1)/2 and the others are a/2. Ok?

From this remark, the ERRW on the tree is a random walk in a random environment on trees. And in this case (in fact that was the main content of the paper of Pemantle) one can prove that there is a phase transition depending on a. The theorem of Pemantle states the following: suppose the degree d is at least 3. Then there exists a certain value a_0(d), strictly positive, such that if a is smaller than a_0(d), the ERRW is positive recurrent. What does that mean here? It means that it is a mixture of positive recurrent Markov chains: the proportion of time you spend at the origin is positive. Yes?
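The halving step can be checked directly: a two-colour Pólya urn with reinforcement +2 and initial weights (a+1, a) assigns the same probability to any draw sequence as the standard +1 urn started from the halved weights ((a+1)/2, a/2), which by de Finetti is the Beta((a+1)/2, a/2) mixture of IID draws. A minimal sketch, not from the lecture (the function name is mine):

```python
from fractions import Fraction

def urn_seq_prob(w0, w1, draws, step):
    """Probability that a two-colour Pólya urn with initial weights
    (w0, w1), reinforced by `step` after each draw, produces the
    given 0/1 sequence `draws` (0 = first colour)."""
    w = [Fraction(w0), Fraction(w1)]
    p = Fraction(1)
    for d in draws:
        p *= w[d] / (w[0] + w[1])
        w[d] += step
    return p

a = 1
draws = [0, 0, 1, 0]
# urn with weights (a+1, a) and reinforcement +2 at each draw...
p_urn = urn_seq_prob(a + 1, a, draws, step=2)
# ...equals the standard +1 urn started from the halved weights,
# i.e. the Beta((a+1)/2, a/2) mixture of IID draws
p_half = urn_seq_prob(Fraction(a + 1, 2), Fraction(a, 2), draws, step=1)
print(p_urn, p_half)  # 16/315 16/315
```

The equality holds for every draw sequence, since each step's probability w[d]/(w[0]+w[1]) is invariant under scaling all weights by 1/2.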
Ah, because you can couple the ERRW with independent Pólya urns: if you go into a branch, you increase the weight by one, but you know that, whatever the process does, the next time it comes back the weight is a+2. So the processes are the same: if you look at independent Pólya urns and at the edge-reinforced random walk, they have the same law, because each time you come back to a vertex, the weights of the urn are the same as the weights of the ERRW. Yes?

Sorry, but what happens if I just go one way many times back and forth, and one edge, say the upper one, I never cross? You don't care, because you will never use the next draw of that Pólya urn. If you never come back, that is not a problem: you will never use the next draw of the urn. I agree that if the process goes to infinity, you have a+1 there, but you will never have to use the next draw of the urn. So the urns and the process may not finish with the same weights at every vertex: if you are transient, at some point the weights at infinity differ, but each time you do come back they agree, and the extra draws are never used. Is that clear or not?

Maybe you have to insist that the Pólya urn is a mixture of IID, not just asymptotically. No, no, the question here is why the Pólya urns are independent. What I mean is that if you take independent Pólya urns on the vertices, run the process driven by these independent urns, and in parallel run the ERRW, then the two have the same law, because each time you come back you have the same configuration. The second point is that you may only use a finite number of draws from the Pólya urn at any given vertex.
And so you use a finite version of a Pólya urn: if you sample 17 times from a Pólya urn, it is as if you were first sampling the beta variable and then sampling 17 times IID from it. If you look at the first n steps of the Pólya urn, it is the same as taking a random environment at the beginning and sampling n times from this random environment; it is still a Pólya urn even if you sample a finite number of times. Yes? Ok.

So once you have that... I don't even understand what this Dirichlet environment is. Ok, that is because I didn't define it. The Dirichlet environment: that is what I want to explain, it is independent Pólya urns. Now, what is the Dirichlet law? It is the same as the beta law, but with k coordinates instead of 2. So it is a law on random vectors of size k, where k is the degree of your graph (it is 3 there). You have a law on random probability measures on k points, on a set of size k, and the distribution has density proportional to the product of the p_i^(a_i - 1), where the a_i are the weights. Ok?

I see, and these are not the edge weights, these are the transition probabilities? Yes, the p_i are the transition probabilities, and the edge weights appear as the parameters of the law, exactly as in the beta law: the initial numbers of balls appear as the parameters of the beta law. And if you have higher degree, it is the same. Yes?
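The Dirichlet law just described can be sampled with the standard construction, normalising independent Gamma variables; at a degree-3 tree vertex with constant initial weight a, the environment is Dirichlet((a+1)/2, a/2, a/2). A stdlib-only sketch (names are mine, not from the lecture):

```python
import random

def sample_dirichlet(alphas, rng):
    """Sample from the Dirichlet law with parameters `alphas` by
    normalising independent Gamma(alpha_i, 1) variables (the
    standard construction); density prop. to prod p_i^(alpha_i - 1)."""
    gs = [rng.gammavariate(alpha, 1.0) for alpha in alphas]
    total = sum(gs)
    return [g / total for g in gs]

# Degree-3 vertex of the tree with constant initial weight a: the extra
# mass on the first coordinate (the edge towards the root) is the push
# back to the root mentioned in the lecture.
a = 1.0
p = sample_dirichlet([(a + 1) / 2, a / 2, a / 2], random.Random(0))
print(p)  # a random probability vector on 3 points, summing to 1
```

The three coordinates play the role of the transition probabilities at the vertex, with the (halved) edge weights as parameters, as in the beta case.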
Ok, it's true that I didn't define the Dirichlet law. So, where was I... The result of Pemantle, from this representation, says that if a is smaller than this a_0(d) (the walk is more reinforced), then the ERRW is positive recurrent: it comes back infinitely often, and in fact the proportion of time it spends at the origin is positive. If a is larger than this a_0(d), then the ERRW is transient. And if a is equal to a_0(d), then it is weakly recurrent, that is, null recurrent. I will not prove that, but it is an exercise, at least a certain exercise; there is a paper about random walks in random environment on trees.

Ok, so now I would like to explain, in the time that is left, what we know in the case of Z^d. Of course, when you have such a result, you would like to know whether it is an effect of the infinite dimensionality of the tree, or whether it is also true on Z^d. Let me state what is known on Z^d. But first, let me make a remark about how to use the representation of Coppersmith and Diaconis in this case. Consider the simplest case, where the weights are constant; all the results I will state are more general than that, but suppose the weights are constant. So you are on Z^d with constant initial weights a on the edges. The question you ask, and the question Diaconis asked at the beginning, is whether this process is recurrent or not. In dimension one it is easy, with the argument of Pemantle, to prove that it is always positive recurrent. But in dimension more than one it is not easy.

So how to use the Diaconis-Freedman representation? The first idea is always the same, and it is simple: you take a big box and you start at zero. The point is that if you take a big box, you
have a finite graph, and you know that the edge-reinforced random walk on this big box is a mixture of reversible Markov chains with conductances x_e. So the idea is that you would like to control the size of the conductances. If the walk is localized, meaning positive recurrent, then you expect the conductances to decrease, either exponentially or polynomially. So what you would like to have, what you would expect, is for example some exponential decay of the conductances. For example, here is something that I will not prove, and which is not very difficult: you need to normalize, since all these conductances are defined up to a constant, so you can always divide by the conductance at the initial starting point. If you can then prove that some moment of order s strictly positive of the normalized conductances decreases exponentially with the distance, then it is rather easy to prove, by a weak-convergence argument, that the ERRW is a mixture of positive recurrent Markov chains: you take a weak limit of the mixing measures, and the exponential decrease of the conductances then implies that the ERRW on Z^d is a mixture of positive recurrent Markov chains. I don't prove that, but it is rather simple; you understand that the conductances decrease very fast.

So that is what you would like to prove, and you see that it boils down to understanding the mixing measure mu: you can consider the x_e as a random field, and you would like to have exponential decrease of this random field. You may have the impression that this is too much to ask. Maybe it is; it is not clear whether it is too much to ask or not. For instance, it is not clear whether in dimension 2 it will be
true or not. So the first result in this direction was by Merkl and Rolles. Unfortunately they could not deduce anything about the ERRW, but their result is nevertheless very nice. It is for d = 2, and what they could prove is some polynomial decay of the conductances: a bound of order 1 over the norm to some power xi(a) strictly positive, up to a constant. Unfortunately, if you had xi larger than 1, it would imply recurrence of the ERRW; when xi is between 0 and 1, it does not imply anything. But still they could prove this result. And if you remember the talk of Simone Warzel, this somehow corresponds to what she said: that for the Anderson model in dimension 2, where localization is extremely difficult to prove, it might be easier to prove some polynomial localization of the Green function using a Mermin-Wagner argument. In fact, behind this result there is a Mermin-Wagner argument; not for Anderson, of course, but for this specific model, and they use the specific form of the law mu.

Ok, I have 3 minutes, so maybe I can now state what we know. There has been progress in the last 5 years, and many people were involved in these results. In particular, and that will be the aim of the next talks, there has been some very important work coming from a very different subject, from mathematical physics: Disertori, Spencer and Zirnbauer, and Disertori and Spencer, about the sigma model; I will explain later what it is. That was around 2010. It was absolutely not related to the edge-reinforced random walk, but it provided key steps in what was found later.

So let me state what we know now; I state it only on Z^d. For any d (in fact for d = 1 it is true as well), there exists some value a_inf(d) such
that if a is smaller than a_inf(d), then the ERRW is positive recurrent, so it is a mixture of positive recurrent Markov chains. This was proved independently by myself and Tarrès in 2012, and by Angel, Crawford and Kozma, who have a very nice proof just from what was explained up to now; it is also much inspired by this work of Disertori and Spencer. So in any dimension there is a positive recurrent phase for the process.

And the interesting fact is that in dimension 3 and more there is a phase transition, and Spencer says that it is the same type of phase transition as for Anderson localization (I don't fully understand what that means, but so he says). So in dimension 3 and more, there exists some parameter a_sup(d) such that if a is larger than a_sup(d), then the ERRW is transient. Ok, it is interesting because you have a phase transition depending on the initial weight: when the initial weight is very low, you have very strong reinforcement and you get positive recurrence; but when the weight is large enough, the reinforcement is weak and the walk is transient. This result is due to Disertori, myself and Tarrès, but it is very much inspired by the work of Disertori, Spencer and Zirnbauer; there is a huge contribution of theirs in this proof. That will become clearer later.

And now, with Zeng (and then I will stop): in dimension 2 we could prove that the ERRW is recurrent, which is related to random Schrödinger operators, and also a CLT in dimension 3 and more. So what I mean is that it is possible to prove recurrence, but not positive recurrence, in dimension 2, and it is not clear whether it should always be positive recurrent. It is probably always positive recurrent, and probably always
exponentially localized, but it is not clear. And in the transient regime it is possible to prove a CLT, so you have diffusive behavior of the process. Ok, I don't know if that is clear. So the aim of the talk tomorrow will be to explain the relation with the work of Disertori, Spencer and Zirnbauer on localization and the sigma model, to introduce another process called the Vertex Reinforced Jump Process, which is related to the edge-reinforced random walk, and to explain also the relation with random Schrödinger operators. Ok, so, questions?

Yes: can you describe the measure from which the conductances are sampled, what it looks like? Can you compute its moments? Ok, I can write the formula if you want. The first part is rather simple; that is the Diaconis-Coppersmith formula. You take the product over all the edges of the graph of the conductances to the power a_e - 1, so it looks like the Dirichlet distribution that I wrote somewhere, and you divide by the product over all the vertices of the conductance of the vertex to the power (a_i + 1)/2. Then there is a subtle thing: the measure depends on the starting point, and it depends on it through a factor square root of the conductance at the starting point. This gives an extra weight to the configurations where the conductance at the initial point is large; it pushes that conductance to be large. But the difficult part is that you also have a square root of a difficult term, which is a determinant, and it can be written as the sum over all spanning trees of the product of the conductances x_e over the tree. This can be written as a determinant built from the generator of the Markov chain: you look at the matrix with entry x_ij in position (i, j) and minus x_i on the diagonal, you take some minor, and the determinant is
the same as this sum: that is the matrix-tree theorem. The difficulty is this term, because it is very non-linear, and it correlates values which are very far apart. In the case of a tree, there is only one spanning tree, so you have just one term, and that is why the measure is simple in that case. But the difficulty is that once you have such a measure, it is very hard to analyze anything, because of this term: it makes the measure non-convex. I didn't write this formula because tomorrow you will see another one of the same form, a bit simpler. And there is also an important normalizing constant.
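The matrix-tree identity just invoked can be checked by brute force on the triangle graph: the sum over spanning trees of the product of conductances equals a principal minor of the weighted Laplacian. A small sketch, not from the lecture (names and the test graph are mine):

```python
from itertools import combinations
from fractions import Fraction

def laplacian_minor_det(x):
    """Determinant of the weighted Laplacian of the triangle (vertices
    0,1,2, conductances x[(i,j)]) after deleting row/column 0.
    Hardcoded 2x2 determinant; a general graph would need elimination."""
    d11 = sum(Fraction(v) for k, v in x.items() if 1 in k)  # x_1 on diagonal
    d22 = sum(Fraction(v) for k, v in x.items() if 2 in k)  # x_2 on diagonal
    off = -Fraction(x[(1, 2)])                               # -x_12 off-diagonal
    return d11 * d22 - off * off

def spanning_tree_sum(n, x):
    """Brute force: sum over spanning trees of the product of conductances."""
    total = Fraction(0)
    for sub in combinations(list(x), n - 1):
        parent = list(range(n))          # union-find to detect cycles
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        is_tree = True
        for (u, v) in sub:
            ru, rv = find(u), find(v)
            if ru == rv:
                is_tree = False
                break
            parent[ru] = rv
        if is_tree:
            prod = Fraction(1)
            for e in sub:
                prod *= Fraction(x[e])
            total += prod
    return total

x = {(0, 1): 2, (0, 2): 3, (1, 2): 5}   # conductances on the triangle
print(laplacian_minor_det(x), spanning_tree_sum(3, x))  # 31 31
```

On a tree this sum has a single term, which is why the mixing measure simplifies there, as said above.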