Thank you very much. This morning we heard Dima speak about the Ising model, which is one of the paradigmatic models in statistical mechanics, and there is beautiful work on the Ising model not just in two dimensions but in various other settings as well. What I will be talking about now is another paradigmatic model in statistical physics, called percolation. Who of you has heard about percolation, or seen a course on it? Okay — and who has not? That's pretty good. I will still give the introduction, so don't worry, but at least I know there is something I can rely on.

Percolation is best known on an infinite lattice. What is the idea? You have a lattice with edges — I'll stick to bond percolation — so you have lots of edges, and you remove them independently with a fixed probability. That's the simplest setting. You can imagine that when you erase almost all the edges, basically nothing remains; in particular you're not going to have an infinite component after erasing nearly all the edges at random. But when you keep lots of edges, an infinite component does remain, and often this infinite component is unique. You can feel that this process is monotone: if you keep removing more and more edges, your clusters become smaller and smaller, and if you add more and more edges, your clusters become bigger and bigger. Therefore there has to be a critical value at which something peculiar happens: above it you see an infinite component, and below it you do not. That's percolation on an infinite graph — think of Z^d: Z^2, Z^3, Z^4, whatever. There exists a unique p_c, which of course depends on the graph; below it there is no infinite component, above it there is, and close to it is where all the fun happens. That's basically what Dima was saying today for the Ising model, where you also have a critical value, and essentially everything interesting in the Ising model happens very close to that critical point. Okay: above the critical value an infinite component, below it none.

I will be talking about a slightly different setting, in which our graph is not infinite but finite, and then this beautiful picture of a uniquely identifiable phase transition collapses completely. It's not at all clear how you should define the critical value, because the clusters are finite for every value of p — even at p = 1 — since you start from a finite graph. My main message today is that in many settings you can still identify what critical behavior means, but you have to be a bit more careful, and the critical value is no longer uniquely defined: there is a whole range of values for which the behavior is roughly the same. That's the main message, and we'll discuss it on high-dimensional tori. The main example, of course, is the hypercube, which is a finite graph for which it's not entirely clear how many of its edges you should remove in order to get something that is somewhat connected but not completely connected. This is work with Asaf Nachmias which was published in January this year; it was completed in 2012. Let that also be a lesson for PhD students: it can take a long time before your paper finally appears.
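To make the basic model concrete before moving on — this is a minimal sketch of my own, not anything from the talk or the papers — here is bond percolation on a finite L×L box of Z², keeping each edge independently with probability p and reporting the fraction of vertices in the largest cluster. Running it for p below and above 1/2 (the critical value for bond percolation on Z²) shows the monotone emergence of a macroscopic cluster just described.

```python
# A minimal sketch (my illustration): bond percolation on a finite L x L box of Z^2.
# Keep each edge independently with probability p and report the largest cluster.
import random

def largest_cluster_fraction(L, p, seed=0):
    rng = random.Random(seed)
    parent = list(range(L * L))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(x, y):
        rx, ry = find(x), find(y)
        if rx != ry:
            parent[rx] = ry

    for i in range(L):
        for j in range(L):
            v = i * L + j
            if i + 1 < L and rng.random() < p:   # vertical edge kept
                union(v, (i + 1) * L + j)
            if j + 1 < L and rng.random() < p:   # horizontal edge kept
                union(v, i * L + (j + 1))

    sizes = {}
    for v in range(L * L):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / (L * L)

if __name__ == "__main__":
    for p in (0.3, 0.5, 0.7):   # p_c = 1/2 for bond percolation on Z^2
        print(p, largest_cluster_fraction(200, p))
```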
The paper builds on joint work with lots of people, including a former PhD student of mine, Markus Heydenreich — another former PhD student will come up later on — my postdoc boss, Gordon Slade, Christian Borgs and Jennifer Chayes, whom I met when I was a postdoc at Microsoft Research, and Joel Spencer.

All right, so what are we talking about? First of all, the hypercube. The hypercube is very easy to imagine in three dimensions: you take the corners of the cube and connect the corners to one another. But if you're thinking about a hypercube in 76 dimensions it's a bit more difficult to imagine, and our dimension is actually going to tend to infinity, because we want large graphs. So how do you define it? The vertices are just strings of zeros and ones; think of these as the corners of a high-dimensional cube, and you connect two corners when they differ in precisely one coordinate. If you think about what a cube is in three dimensions, this is precisely what it is, and the definition holds in any dimension. This is a finite graph: the volume is 2^n — that's the number of vertices — and the degree of every vertex is n, the number of neighbors of every vertex. So n is on the one hand the dimension, but n also describes the size of the graph as well as the degree of every vertex. And it's a very nice graph because it's completely transitive: every vertex looks the same, et cetera.

Percolation is defined as before: we look at the bonds and make them, independently, occupied or vacant — occupied with some probability p that we will later choose appropriately, and vacant with probability 1 − p, independently of everything else. I think of the vacant edges as completely removed from my graph, so I have a random subgraph and I'm only allowed to walk along the edges that remain. Now, this model was first studied by Erdős and Spencer, two very big names in random graph theory, in 1979, and what they looked at was the connectivity transition: how large should p be for percolation on this graph to be completely connected, so that along the retained edges you can still walk from any vertex to any other vertex? What you see is that p has to be pretty big: it's basically one half. Let me keep a little table of what we know about the critical value, because that's what this talk is all about — how to quantify critical values. Here we're thinking about the hypercube, which I'll denote by T_{2,n}: the torus with side length 2 in dimension n. The reason for introducing this notation is that later we will generalize it. We know that the critical value for connectivity is roughly one half — that's Erdős and Spencer '79. What we would like to study, and it's actually what Erdős and Spencer proposed in their paper, is not the connectivity threshold but the emergence of a large cluster in this graph as p varies, and we will see that p has to be much smaller for that.
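Here is a minimal sketch (my own illustration, not code from the paper) of percolation on the hypercube Q_n: vertices are the integers 0, …, 2^n − 1 read as bit strings, two vertices are adjacent when they differ by a single bit flip, and each edge is kept independently with probability p. It reports the largest cluster and the number of components, which makes the two transitions discussed here — a large cluster already for p of order 1/n, full connectivity only around p = 1/2 — easy to observe for small n.

```python
# A minimal sketch (my illustration): bond percolation on the hypercube Q_n.
import random

def percolate_hypercube(n, p, seed=0):
    rng = random.Random(seed)
    V = 1 << n
    # keep each edge {v, v ^ (1 << b)} with probability p; consider each edge once (v < w)
    adj = [[] for _ in range(V)]
    for v in range(V):
        for b in range(n):
            w = v ^ (1 << b)
            if v < w and rng.random() < p:
                adj[v].append(w)
                adj[w].append(v)
    # depth-first search to collect cluster sizes
    seen = [False] * V
    sizes = []
    for s in range(V):
        if seen[s]:
            continue
        seen[s] = True
        stack, size = [s], 1
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if not seen[w]:
                    seen[w] = True
                    size += 1
                    stack.append(w)
        sizes.append(size)
    return max(sizes), len(sizes)

if __name__ == "__main__":
    n = 12
    for p in (0.5 / n, 2.0 / n, 0.5):   # below 1/n, above 1/n, near the connectivity threshold
        cmax, ncomp = percolate_hypercube(n, p)
        print(f"p={p:.4f}  largest cluster={cmax}  components={ncomp}  (V={1 << n})")
```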
Is there a hand-waving argument for why it must be one half? Yes, there is. Look at the number of isolated vertices: there are 2^n vertices, and at that value of p the probability for a given vertex to be isolated is (1/2)^n, so the expected number of isolated vertices is precisely one. By a Poisson approximation you can see that this first-moment heuristic is essentially correct. Good question. All right.

Now, one of the things we see is that the hypercube is a high-dimensional graph, where vertices have high degree, but it has a lot of geometry, and that geometry is what actually makes it difficult. So rather than studying the hypercube you could also study the complete graph, which is somewhat similar: it has size n, all edges are present, there is no geometry whatsoever, and every two vertices are neighbors. I can do precisely the same thing, namely keep each edge with probability p independently of all the others, and see what happens to the connectivity structure of this graph. This is a very old problem: it is called the Erdős–Rényi random graph, sometimes the binomial Erdős–Rényi random graph, and it was actually invented by Gilbert around the same time that Erdős and Rényi started working on the problem. What Erdős and Rényi displayed was a double-jump phenomenon. Take p of the form (1 + ε)/n, where this ε could also be negative; think of ε as relatively small but, in the first instance, fixed. If ε is negative, the largest connected component is basically logarithmic in the size — quite small, dust. If ε is positive, it contains a positive proportion of the vertices, so it's huge, like the giant component on infinite graphs: the infinite component there also contains a positive proportion of the vertices, and here the same thing is true. So this basically says that the critical value should be somewhere around 1/n. What you also know is that at the point 1/n itself something special happens: there the cluster sizes are of order n^{2/3}, and you have anomalous behavior, intricate scaling relations, critical exponents. This is interesting. So what we see is that for the complete graph p_c is roughly 1/n, but there's a little more, because you can ask for which values of p you have essentially the same scaling as at p exactly 1/n. You can imagine that if you change p just a little, you add very few edges, so you're not going to change the critical components too much; if you change it by too much, all of a sudden the whole thing clumps together and you get something a lot bigger. The precise amount of change that is allowed is of the order of a constant over n^{1/3} multiplying the probability, which is the same as saying you can add a term λ/n^{4/3}; I have a reason for writing it this way. This is sometimes called the critical window. Inside this critical window — and there are lots of beautiful results here, I'm just scratching the surface — the largest connected component is of order n^{2/3}, but in fact you have many components that are that large: if you renormalize by multiplying by n^{−2/3}, this vector of component sizes converges, in any topology you like, to proper random variables.
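A minimal simulation of this double jump (again my own sketch): on G(n, p) with p = (1 + ε)/n, the largest component is logarithmic in n for fixed ε < 0, of order n^{2/3} at ε = 0, and a positive proportion of n (roughly 2εn for small ε > 0) for fixed ε > 0.

```python
# A minimal sketch (my illustration): the Erdos-Renyi double jump at p = (1 + eps)/n.
import random

def er_largest_component(n, p, seed=1):
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p and find(i) != find(j):
                parent[find(i)] = find(j)

    counts = {}
    for v in range(n):
        r = find(v)
        counts[r] = counts.get(r, 0) + 1
    return max(counts.values())

if __name__ == "__main__":
    n = 2000                     # n^{2/3} is about 159 here
    for eps in (-0.2, 0.0, 0.2): # subcritical, critical, supercritical
        print(eps, er_largest_component(n, (1 + eps) / n))
```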
So you have really beautiful, intricate critical behavior, with funny critical exponents — let me mention two of them: the n^{2/3}, so you have a two-thirds there, and the one-third in the window width. That's precisely the kind of result I'm after, but in more general settings. Now, you can actually show that this scaling window really is the unique scaling window: if you take p slightly outside of it, then you're either subcritical or supercritical, in the sense that only in this regime are the critical components random even after taking the scaling limit, with the first and second largest of similar order of magnitude. If you approach the critical value 1/n more slowly than n^{−1/3}, you are either supercritical, when the perturbation is positive, or subcritical, when it is negative. That can be quantified in the following sense. If ε is much larger than n^{−1/3} and p = (1 + ε)/n, then the largest connected component is basically 2εn, which is substantial, and it is concentrated, and the second-largest component is much smaller, so there you have a unique largest component. Whereas in the subcritical regime — the same setting, but now ε much smaller than −n^{−1/3}, so you're away from the critical window on the left-hand side — the largest connected component is roughly of order (1/ε²) log(|ε|³ n), and it turns out the second largest is equally big, and the third largest, and the fourth largest: you have a conglomeration of many clusters of the same order of magnitude in the subcritical regime, and they're really much smaller than what happens in the supercritical regime. So the picture you should have in mind is that there is a scaling window where you see critical behavior — a rough proxy for the boundary between subcritical and supercritical behavior — and as soon as you're outside the scaling window you're either subcritical or supercritical, and everything connects up nicely and continuously. That's the picture. Now we would like to establish such a picture in more general settings, in particular for the hypercube.

All right, this question of Erdős and Spencer was picked up some time later; the first paper on it is from 1982, by Ajtai, Komlós and Szemerédi, and they basically did something very similar to what Erdős and Rényi did for the complete graph: you take p to be a constant over the degree, so (1 + ε)/n, where n you should really think of as the degree, and then you see different behavior for ε strictly positive but fixed and ε strictly negative but fixed. Here the change is rather dramatic, because if ε is positive then the size of the giant component is of order 2^n, which is rather substantial, whereas it is going to be much smaller on the other side. That's not so visible yet in this result, but in the next result, by Bollobás, Kohayakawa and Łuczak, ten years later, they really managed to show how small this object becomes. These are pretty detailed estimates,
showing that if ε is sufficiently negative, the largest connected component is basically (2/ε²) log(2^n), whereas if ε is sufficiently positive — something like 1/n — then it's roughly ε·2^n. Bear in mind that the first bound is polynomial in n, whereas the second is exponential, so you go from something polynomial to something exponential: that's a pretty big jump. So you could say, well, we aren't really close to the critical value yet, and it immediately raises the question again: what is critical? All right, let me write down what these results say. Ajtai, Komlós and Szemerédi '82 basically say that p_c is (1/n)(1 + o(1)). Bollobás, Kohayakawa and Łuczak in '92 say that p_c is (1/n)(1 + O(·)) with something polynomially small — I think the worst term is (log n)²/√n — and on the other side it's slightly different, so it's small, but not that small. What you see is that you still jump enormously by going from one side to the other, so I would say that these results are still in the barely subcritical and barely supercritical regime; but of course the question remains how to define what critical is. In their paper they actually raised the question whether the critical value is equal to 1/(n − 1). That may look a bit weird, but it's not so strange. The reason is that on any transitive graph — let's say an infinite transitive graph — you can show that p_c is always greater than or equal to one over the degree minus one. There's a branching-process comparison that you can do: in total you have degree-many edges that you might use, but in order to get to a vertex you have to eat one up, so there are only degree minus one left, and you'd better be able to continue, otherwise you're dead. So you should really think of this 1/(n − 1) as one over the degree minus one. That raises the question: is this true? Now, we were told this story by Joel Spencer when I was on a visit at Microsoft Research, and he said, you know, with the results on high-dimensional percolation that had been established at that point in time, can you say something a bit more precise? In fact he came to us with a concrete question: if you take p to be 1/n plus something over n², say, can you identify what that something should be? We started thinking about this as a mathematical physics problem, trying to figure out what critical behavior should mean; so we basically didn't answer Joel's question at that point in time, but rather started thinking about what critical should mean in this setting. So let's look again at what is special at criticality. On the complete graph the largest connected component is n^{2/3}, but that's a random variable; it's very awkward to say that p_c should be defined by the largest connected component being n^{2/3}, because it's random. So instead we take this criterion: the expected cluster size equals the number of vertices raised to the power one-third. The expected cluster size is a continuous function of p, so you can take that as a definition; then of course you still have to prove that it's a proper definition of the critical value. This is what was done in a series of papers with Borgs, Chayes, myself, Gordon Slade and Joel Spencer — sometimes these papers are referred to
by their acronym, BCHSS, for obvious reasons. What we do is simply take the expected cluster size as a function of p. It's a continuous function; it grows from one at p = 0 — just the vertex itself — to the volume of the graph, 2^n, at p = 1, and we just take the point where this expected cluster size equals 2^{n/3}. Of course this seems like a definition that I pull out of my hat, and it may not make any sense to you yet; what I'll argue in the remainder of the talk is that it is actually a relevant definition, and probably the right one. That means the challenge is to prove that the critical value defined in this way really is a critical value, and one way of doing that is by investigating the structure of the connected components above and below it.

So here is the first result, and you should really think of it in terms of what happened for the Erdős–Rényi random graph, which serves as the inspiration for our results. Let me go through it slowly. I take p to be the p_c that was just defined, and I multiply it by 1 + ε, where ε can be positive or negative, and I'm trying to figure out which ε will give me something critical. In the first result you should think of the subcritical case: I take my ε negative and much smaller, in absolute value, than the inverse cube root of the volume of the graph — so it sounds as if I'm in the window regime, but with a λ that slowly tends to infinity. Here you should really think of the relevant scale as the volume of the graph, not the degree, whereas in the definition of p_c itself the degree sets the scale. What we then see is behavior very similar to what we see on the Erdős–Rényi random graph: in particular, the maximal cluster size is of order (2/ε²) log(|ε|³ 2^n) — there should be an absolute value there, by the way, which I didn't write down — and this logarithmic term really is substantial; that's precisely what the statement is saying. Look at the left-hand side, though: is there a log there? No, there isn't, so in these results the lower bound is not as good as the upper bound. This was later improved by Asaf Nachmias and Tim Hulshof, who prove the matching statement with some constant times this quantity, not with the constant 2 — still an improvement, not quite all the way there, but you should really think of this as the correct order of magnitude. So this corresponds to the cluster sizes of the Erdős–Rényi random graph in the barely subcritical regime. On the other hand, I can also look at what happens within the critical window, where, bear in mind, we should take ε of order one over the cube root of the volume of the graph, which in this case is 2^{−n/3}. Then what we see is that the maximal component is of size 2^{2n/3}, both as a lower bound and as an upper bound. That's precisely the same as on the Erdős–Rényi random graph, where the analogue of 2^{−n/3} is n^{−1/3}, since there the volume is n.
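To make the definition concrete, here is a minimal Monte Carlo sketch (my own illustration, certainly not the method of proof in the papers): estimate the expected cluster size χ(p) = E|C(0)| on Q_n by growing the cluster of a fixed vertex, revealing edges only when needed, and then locate by bisection the p at which χ(p) ≈ 2^{n/3}, the volume to the power one-third. The sample size, the small value of n, and the bracketing interval are arbitrary choices made just to keep the sketch fast.

```python
# A minimal sketch (my illustration): locate the p at which E|C(0)| = V^{1/3} on Q_n.
import random

def cluster_size_of_origin(n, p, rng):
    # Grow the cluster of vertex 0, revealing each potential edge at most once,
    # only when one endpoint is newly reached; this samples |C(0)| under bond percolation.
    seen = {0}
    stack = [0]
    while stack:
        v = stack.pop()
        for b in range(n):
            w = v ^ (1 << b)
            if w not in seen and rng.random() < p:
                seen.add(w)
                stack.append(w)
    return len(seen)

def chi(n, p, samples=300, seed=0):
    # Monte Carlo estimate of the expected cluster size E|C(0)|.
    rng = random.Random(seed)
    return sum(cluster_size_of_origin(n, p, rng) for _ in range(samples)) / samples

def estimate_pc(n, iters=20):
    # Bisect for the p at which chi(p) equals V^{1/3} = 2^(n/3); chi is increasing in p
    # (in expectation), so bisection is a reasonable heuristic despite the sampling noise.
    target = 2 ** (n / 3)
    lo, hi = 0.0, 3.0 / n
    for _ in range(iters):
        mid = (lo + hi) / 2
        if chi(n, mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

if __name__ == "__main__":
    n = 10
    print("estimated p_c:", estimate_pc(n), "   1/(n-1) =", 1 / (n - 1))
```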
You don't write anything about χ or p in this window — why? Is it clear that this is the window, that it remains of the same order? — No? You don't write anything about the mean, the mean size of the cluster? — Oh, it's 2^{2n/3}. Right: the expectation of C_max can also be bounded from above and below by constants times 2^{2n/3}. — And what is the Ω? — Oh, this basically says that I do this for a whole regime of λ — it's actually a capital Λ — where this Λ in absolute value is bounded by some constant; that's what it means. — So this is a large constant? — Right. You should think of it like this: C_max divided by 2^{2n/3} converges in distribution to a proper random variable that has no atoms at 0 and infinity. We can't quite prove that, but this sort of looks like it: it means that if I divide by 2^{2n/3}, then I have a tight sequence of random variables, because the probability that it is larger than some large value ω is small, and also its reciprocal is tight, because the probability that the reciprocal is larger than ω is also pretty small. So it's a tightness statement. Good question. Any other questions?

All right, I started by talking about the hypercube, but as it turns out there's nothing special about the hypercube; I can also think about other high-dimensional tori. On the hypercube you take {0, 1} and raise it to the power n, but what would happen if you took {0, 1, 2} to the power n, or if you took a large cube and raised it to a fixed power? Then you get high-dimensional percolation on a torus — we are in high dimensions because for the hypercube we take the dimension to infinity. We actually address that question as well. Our results are less sharp there, but we do basically the same thing: we have a torus whose width is denoted by r and whose dimension is denoted by d, and we do precisely the same thing as on the hypercube, namely define the critical value by the expected cluster size being equal to the volume to the power one-third. Ignore the λ — just take it equal to one; λ actually plays an important role in the proof, which is why it occurs here, but for all practical purposes you can forget about it. Then we need a peculiar condition that seems to come falling out of the sky, and if we have that assumption, the results I was referring to — with appropriate modifications, because the volume of the graph changes — carry through. This condition looks very weird; it's called the triangle condition. But it's not that weird, because we know that if you do percolation on Z^d, the infinite lattice, this triangle condition plays a very important role there: it quantifies when the percolation phase transition is mean-field, meaning when the percolation phase transition on Z^d is very similar to the phase transition on a tree, which has no geometry — the phase transition of a branching process — or, if you want to include geometry, when it is the same as the critical behavior of branching random walk. So this condition has appeared before, on infinite graphs; it was invented by Aizenman and Newman and has become one of the key properties signaling high-dimensionality of percolation on a graph. What we use is an adaptation to finite graphs, because on a finite graph the triangle will of course always be finite; we want it to be uniformly bounded — that's what the condition says.
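For orientation, this is roughly what the triangle diagram looks like; the exact finite-graph formulation in the Borgs–Chayes–van der Hofstad–Slade–Spencer papers is more precise about constants and the range of p, so treat the display below as a schematic rather than the exact assumption:
\[
\nabla_p(x,y) \;=\; \sum_{u,v} \mathbb{P}_p(x \leftrightarrow u)\,\mathbb{P}_p(u \leftrightarrow v)\,\mathbb{P}_p(v \leftrightarrow y),
\]
and the finite-graph triangle condition asks that this be uniformly bounded, roughly \(\nabla_p(x,y) \le \delta_{x,y} + a_0\) for all vertices \(x, y\) and a suitably small constant \(a_0\); on \(\mathbb{Z}^d\) the Aizenman–Newman condition is simply that \(\nabla_{p_c}(0,0)\) is finite.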
Now, we actually do know this condition, after a lot of work, for percolation on the nearest-neighbor torus with the dimension large but fixed — think of d being 25 — and r tending to infinity. Then you would believe that percolation on this large torus should be very close to critical percolation on Z^d, and that's what we'll see later on; that's precisely this setting. You do have to be careful here, because I'm quantifying the critical value through this funny equation: if I think about percolation on a high-dimensional torus which is extremely big, I would believe that the critical value ought to be roughly the same as — very close to — the critical value on the infinite lattice. That's certainly not obvious from these statements; it is true, but it requires quite some proof.

All right, the next result only works on the hypercube; it's a result with Asaf Nachmias — the photo was taken at Oberwolfach — and it really shows that the critical window is what we believe it to be from the earlier two theorems. In those earlier theorems we showed that on the left of the critical window the clusters are very small, and within the critical window the clusters are of size volume to the power two-thirds; what was missing is the statement that above the critical window the largest cluster is concentrated and large, and that's what this result says. It says that above the scaling window the largest connected component is roughly a constant times ε times 2^n — identical to what it was for the Erdős–Rényi random graph — and the second-largest component is smaller, so the giant component is unique, as you would hope it would be. We can also compute the expected cluster size. So this can be interpreted as saying that the percolation phase transitions on the hypercube and on the complete graph are very alike. There is another interpretation you can give here that is actually quite relevant and plays an important role in the study of the Erdős–Rényi random graph, namely a connection to branching processes. You can really think of this 2ε as the survival probability of a Poisson branching process, and that probability is the same as taking a vertex uniformly at random in your graph and asking for its cluster to be large, whatever large means: that probability is also going to be 2ε. The remarkable statement you see here is that almost all the vertices that have a large cluster are actually in the same connected component — they are all connected to one another. That is a bit of a surprise. Is that statement clear? Okay.

Now, we're still left with the question, which I didn't even write down, whether p_c is actually equal to 1/(n − 1); there was a good reason for that guess. If we think of the tree approximation to the hypercube, we would build a tree in which every vertex has degree precisely n, and the critical value on that tree would be precisely 1/(n − 1). But it turns out that this is not the critical value, and it's not even close. Why not? Well, as it turns out, there is something like an asymptotic expansion for the critical value in inverse powers of n — you should think of n as being extremely large. Bear in mind that 1/(n − 1) is the same thing as 1/n + 1/n² + 1/n³ + 1/n⁴ and so on; summing that geometric series indefinitely gives back exactly 1/(n − 1), so 1/(n − 1) has an expansion of exactly this form as well.
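Written out, this is just the geometric series:
\[
\frac{1}{n-1} \;=\; \frac{1}{n}\cdot\frac{1}{1 - 1/n} \;=\; \frac{1}{n} + \frac{1}{n^2} + \frac{1}{n^3} + \frac{1}{n^4} + \cdots ,
\]
and the question is whether the expansion of p_c for the hypercube has these same coefficients.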
As it turns out, this asymptotic expansion is correct, but the coefficients are slightly different: the first two are the same, both equal to one, but the third is seven halves rather than one. So the critical value on the hypercube is a little bit bigger than what you would believe it to be on a tree with the same degree. That's actually not so surprising: the hypercube really isn't a tree, because there are lots of cycles of length four, lots of little squares. If you approximate it by a tree and walk along the two sides of a square, you treat the endpoints as different vertices, but they really are the same vertex, so on the tree you are over-counting that vertex by a factor of two. Of course it's not that likely that you traverse such a square, but there are very many of them, so if you take the tree value as the critical value, you are underestimating what it really is — and the expansion shows that this is indeed the case. Now, we cannot compute all the other coefficients; there are lots of them, we know that they are rational and that they exist, but we don't know what they are. We can compute the first three, and that was relevant for us because we see that the third differs from the one of 1/(n − 1). I'm guessing that the fourth one is 16, but I don't have a proof of this, and the proof would probably be very long.

All right, let me spend a few words on the proof — not too many. The proof is fairly complicated: the paper with Asaf in the Journal of the European Mathematical Society is 90 pages, and it was roughly the sixth or seventh in a row; the other ones are somewhat shorter, but they're also quite long. It's a difficult story — it's difficult to make these things precise. How do we do the subcritical proofs? We do them for several graphs at once: not just for the hypercube but also for these high-dimensional tori, and in fact the proof also works for the complete graph. So we take a relatively abstract approach, very similar to the abstract approach used in high dimensions if you want to investigate critical percolation in, say, Z^25. There we know that the critical behavior is also very much like the critical behavior of a branching process, and the tool of the trade is a combinatorial identity — an expansion technique called the lace expansion. The lace expansion is a difficult technique, but it can be understood, and at the end of the slides I will refer to a survey on the topic where we carry out the lace expansion in all detail at an expository level and also describe the progress that has been made on high-dimensional percolation, including hypercube percolation, but also some other topics. That's what I would like to say about this proof; I think that's enough.

Now I would like to spend a few minutes on the supercritical setting, which, bear in mind, we were only doing for the hypercube. In fact the results apply a bit more generally; I heard somebody speak here about expanders, and one example to which the results apply is expanders with sufficiently large girth. Percolation on an expander with sufficiently large girth has a similar qualitative phase transition, where you see the giant component emerge above the scaling window, and the scaling window has the same shape as
before. Now, if you want to formulate the precise settings under which the results apply, there are three ingredients. First of all, we need the degree to tend to infinity: we're talking about high degrees, like a d-regular graph with d tending to infinity. Then there is a condition relating two objects, random walk and percolation, which looks a little weird. Let m be the degree; I take (m − 1) times p_c — bear in mind that's roughly the first-order approximation to the critical value: if the graph were a tree this product would be precisely one — and I raise it to the power t_mix, and that should still be roughly one. Why it comes out this way is not entirely obvious, but let me describe why it is true for the hypercube. For the hypercube we know that p_c is 1/n + 1/n² + (7/2)/n³ + higher-order terms. If I multiply by n − 1, I get 1 + O(1/n²): do the arithmetic — it's the third-order term that deviates, because 1/n + 1/n² + 1/n³ + … is precisely 1/(n − 1); if p_c were exactly that, I would get exactly one, but it isn't, because p_c is a bit larger, and the deviation gets blown up by a factor n, which gives the 1/n² error term. Now, it turns out that this mixing time — the mixing time of random walk — is pretty well known: the mixing time of random walk on the hypercube is of order n log n. But bear in mind we're not looking at the mixing time of an ordinary random walk; we're looking at the mixing time of what is called the non-backtracking random walk, which is just the same as a regular random walk except that you keep track of where you were before and are not allowed to step straight back — you don't retrace your last step. It turns out that this mixing time is of the same order; it's roughly ½ n log n here. Now, if I raise (n − 1)p_c to the power t_mix, I get (1 + constant/n²) raised to a power of order n log n, and that is indeed 1 + o(1). Bear in mind it's crucial here that the error goes all the way down to order 1/n², not 1/n: if it had been (1 + constant/n) raised to the power n log n, I would be out of business. So indeed this result applies.
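A quick numeric illustration of that last point (my own check, with an arbitrary placeholder constant): an error of order 1/n² survives being raised to a power of order n log n, while an error of order 1/n does not.

```python
# Numeric check (my illustration): (1 + c/n^2)^(0.5*n*log n) stays near 1,
# while (1 + c/n)^(0.5*n*log n) grows without bound; c = 2.5 is just a placeholder.
import math

for n in (100, 1000, 10000):
    t_mix = 0.5 * n * math.log(n)            # order of the non-backtracking mixing time
    ok = (1 + 2.5 / n**2) ** t_mix           # error term of order 1/n^2: stays near 1
    bad = (1 + 2.5 / n) ** t_mix             # error term of order 1/n: blows up
    print(n, round(ok, 4), f"{bad:.3e}")
```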
Good — yes, it's believed, and this is an open problem; there are lots of open problems in this big survey. It's believed to be Borel summable, but not summable. Similar identities hold for Z^d, and there the coefficients have been established non-rigorously; the coefficients, by the way, are the same — rigorously, you should replace n by the degree of that graph, which is 2d — but then the next term is 16, the next is 105, and it starts growing rather uncomfortably, so the series probably doesn't converge. That is not known; it is believed that the coefficients are Borel summable, which would mean that if you divide through by an n factorial or something like that, the result would be summable. So the idea of treating this as an infinite sum and ignoring the error term is really not a good idea: it would probably even start giving you negative values, or whatever, if you push it too far. Good question. — Is there speculation about the function of which this is the asymptotic expansion? — Sorry, I didn't get that. — Actually, this is still an asymptotic expansion, but you have to be careful with any further speculation about the nature of that function. — Poles? — Yeah, Borel summable... — I mean its concrete nature, like where it has poles in the complex plane. — No idea. You're probably the right person to answer these questions, and to ask them; I'm certainly not.

All right, I wanted to give a bit of the ingredients of the proof — a few slides trying to highlight the key steps. There are basically four key steps in the proof. In the very first step we analyze how many vertices lie in large clusters, where 'large' is not, say, 2ε·2^n, but still pretty substantial. That's what this random variable measures: it counts the number of vertices v for which the connected component containing v — the number of vertices to which v is connected after doing percolation — is at least some k, where k will be chosen appropriately, and sufficiently large. 'Sufficiently large' here means that we know that this count, for that choice of k, has the same order of magnitude as what we believe the giant component to be: the giant component should have size 2ε·2^n, and now we have a quantity which the giant component certainly cannot exceed and which has that same order of magnitude. What we would still like to prove is that all of these vertices, or most of them, are in the same connected component. That was step one. The second step is this remarkable connection to the mixing time of the non-backtracking walk, which turned out to be a very handy technique: it allows you to take all sorts of nasty sums that appear in our proofs — nasty functions that depend on a spatial coordinate — and simply replace them by their average. Bear in mind that on the complete graph you have complete symmetry: the probability that vertex 2 is in the cluster of vertex 1 is the same as the probability that 3 is in the cluster of 1, or 4, or 5 — complete symmetry. On the hypercube that's not true, because if I take one vertex, the opposite pole is much further away from it than its immediate neighbors, so its immediate neighbors are much more likely to be in the same connected component as this vertex than the opposite pole is. On a qualitative level, what this step shows is that if you take a percolation path that is sufficiently long, then the endpoint of that path is essentially completely mixed — it could basically be anywhere — so rather than fixing that endpoint, I can replace it by a sum over all possible endpoints divided by the total number of vertices. That's very convenient: if my spatial functions no longer depend on the spatial variable, they all become constants, and life becomes much easier. That's the role that the non-backtracking random walk plays. It says something like this: if I take an r that is sufficiently large, then the probability that 0, say, is connected to x by a path of length at least r can be bounded uniformly in x, provided r is sufficiently large; so I can just replace the fixed x by summing over all possible x and dividing by the volume V. If all of these probabilities were exactly the same, then of course summing and dividing by the number of summands would give back any one of the individual values.
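Written schematically — this is my attempt to transcribe the spoken statement into a formula, not the precise lemma from the paper — the uniform bound being used is of the shape
\[
\mathbb{P}_p\bigl(0 \leftrightarrow x \text{ by a path of at least } r \text{ edges}\bigr)
\;\le\; C\,\frac{1}{V}\sum_{y} \mathbb{P}_p\bigl(0 \leftrightarrow y \text{ by a path of at least } r \text{ edges}\bigr),
\]
uniformly in \(x\), once \(r\) is of the order of the non-backtracking mixing time \(t_{\mathrm{mix}}\); the left-hand side depends on the spatial variable \(x\), the right-hand side does not, which is exactly the simplification being described.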
But this probability clearly depends on x, whereas the average is completely independent of x, so that's a nice trick; that's how the non-backtracking random walk enters the picture. And this bound here you can think of as the expected number of occupied percolation paths of length t_mix, which is roughly one: on average, if I take a vertex, there is roughly one vertex at distance t_mix from it, and t_mix is not that large. Good. Now, the proof goes by a sprinkling argument, where we say: we have some p, p is supercritical, and we think of p as consisting of two little parts — a slightly smaller value, on top of which we sprinkle extra edges. What we would like to investigate is the chance that clusters merge, so what I want is an estimate on the number of vertices that will start forming one big cluster when I raise p a little bit. For that it's crucial to know how many boundary edges there are between the clusters of two vertices, because if the clusters are touching one another and I increase p, then — provided there are sufficiently many such edges — I will make some of them occupied, and all of a sudden my two vertices are connected to one another, and we have a much larger cluster. That's the idea, but of course I first need to know that sufficiently many vertices have sufficiently many closed edges between their clusters, and their clusters should be large. That's what this statement says. We take two vertices x and y, which can be arbitrary, and I look at the number of edges between the boundaries of their clusters: I count the pairs u, u′ such that the percolation path from x to u has at most ℓ edges, the percolation path from y to u′ has at most ℓ edges, and {u, u′} is an edge of the graph. If that edge was already occupied, then x and y are in the same connected component; if it is closed and I sprinkle a little bit and make it occupied, then after that change x and y will be in the same connected component. So this counts those edges, and I want that number to be sufficiently large — don't worry about the precise formulation of the constant, it just says that the number of closed edges between these two vertices is large — and then we call such a pair good. What I would like to know is that there are sufficiently many good pairs, because after sprinkling, the good pairs start coagulating very quickly and form a giant component; if I know there are sufficiently many of them, I'm in shape for a sprinkling argument. And this statement says exactly that: bear in mind I'm already requiring that the clusters are large, so I cannot have more than (2ε·2^n)² of these pairs — 2^n being the volume — because I only have that many vertices in large clusters; but the statement says that almost all of those pairs are actually good. So almost all pairs of vertices that lie in large clusters also have many closed edges between their clusters. That's a pretty nice statement. This sets us up for sprinkling, and sprinkling basically says that we take two p's: a p₁ that is slightly smaller than the p we have, and then we increase it a little bit. We think of percolation at p as obtained by first taking the p₁-percolation and then sprinkling extra edges, with the extra p₂, onto the closed edges; if I do that in the appropriate way, so that the right equality between p, p₁ and p₂ holds, then the union of these two percolation configurations gives me back exactly my original percolation model.
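Presumably the equality referred to on the slide is the standard way of splitting a percolation parameter in two for a sprinkling argument:
\[
1 - p \;=\; (1 - p_1)(1 - p_2), \qquad\text{equivalently}\qquad p \;=\; p_1 + p_2 - p_1 p_2,
\]
so that an edge is vacant in the union of an independent \(p_1\)-configuration and \(p_2\)-configuration exactly when it is vacant in both, and the union then has precisely the law of percolation at parameter \(p\).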
That's how it's set up: we take a p₁ that is slightly smaller, we prove that there are lots of large clusters with lots of nice properties, et cetera, and then we sprinkle some extra edges on top to bring the parameter up to the right value of p, and we see that all of these large clusters merge. I will not go through the details, but this is how it goes, and that then completes the proof. This argument is very much like what Ajtai, Komlós and Szemerédi did, but they used an isoperimetric inequality instead: rather than looking at closed edges they looked at long paths, and that is wasteful — you lose a lot, and therefore you cannot get all the way to the critical value. This argument is a bit more robust.

All right, I've been saying that many of these techniques are usable rather generally in high dimensions, and that's what this next result says. It's a result with Markus Heydenreich, who is now in Munich and was my PhD student, graduating in 2011 I think. It says that if I look at the critical value for percolation on the full lattice Z^d — what I would ordinarily call the critical value for percolation — and compare it with the critical value on the finite torus, defined by the expected cluster size equaling the volume to the power one-third, then their distance is precisely of order one over the volume to the power one-third. So the critical value of percolation on the infinite lattice lies inside my scaling window, which was defined in a way that made no use of Z^d as a whole. This immediately implies that all of the results proved so far also apply to critical percolation on a high-dimensional torus, where "critical" really means the critical value of Z^d.

That's one result I wanted to mention about high-dimensional percolation. The second is the following, which is one of the most novel parts of high-dimensional percolation; it is joint with another former PhD student, Robert Fitzner. For those of you who know something about percolation, there was always this discussion about what "high-dimensional" means. Think about nearest-neighbor percolation: there was an unpublished document by Hara and Slade saying that dimension 19 was enough, but there's nothing special about 19, except for the fact that it's the largest integer smaller than 20. They started doing the analysis, and at first it worked for d equal to 48; they said, ah, that's not quite good enough, and got it down to something like 25. At a certain moment it was clear that they weren't able to get to the appropriate dimension, which is 7, because of this triangle condition — they just couldn't go all the way down — and then they said, well, you know, 19 actually sounds a lot less than 20, so let's go to 19. They worked very hard, got it down to 19, and then they stopped; that proof was never published. There are hundreds of pages of notes — a stack about this big — which I got from Takashi Hara and which were extremely useful. We decided to take this up again, because we were thinking that an analysis slightly improved on the basis of the non-backtracking random walk might get us a bit
closer to the upper critical dimension — and indeed it does. We are not able to go all the way down to dimension 7, and I don't think we will ever be able to get there with our methodology, but we can go to dimension 11, so d greater than 10. And that methodology gives us control not only over the critical exponents but also over the critical value: our upper bound on p_c is provably within 1.3 percent of the real value, which I think is pretty okay. We have no idea what the critical value in 11 dimensions is, but we can give a bound, and the bound is roughly 1/(2d − 1), up to 1.3 percent. I'm not going to explain how this is done — I'm running out of time.

Here are a few open problems; there are many more, and if you want to see a really huge list, look at this book, which is almost done. It is based on a summer school that I taught at the CRM in Montréal — the PIMS–CRM summer school, 24 hours of lectures on high-dimensional percolation — which gave rise to this book of about 270 pages; you can download the pdf from my webpage. There are lots of problems there: on boundary conditions, on oriented percolation, you name it, all sorts of other models. They might not be as big as the questions that Dima was posing this morning, but there are certainly quite a few that are quite nice and might get you going. Here are a few open problems that I personally find quite interesting. First: study the limit in probability of the largest cluster in the subcritical phase of hypercube percolation. Bear in mind this is partially resolved by Nachmias and Hulshof: they show that this largest subcritical cluster is of order (1/ε²) times a log, but they don't get the right constant — the upper bound is 2, we believe the 2 to be sharp, but we don't have a proof of that. It's a nice problem. Very much related to this, and it's actually what also inspired this question: what is the size of the second-largest component in the supercritical regime? On the Erdős–Rényi random graph this is related to the phase transition of branching processes: there's a beautiful theorem called duality — if I have a supercritical branching process and condition it to die out, then that is a subcritical branching process with an offspring distribution that you can determine exactly. On the Erdős–Rényi random graph it says something like the following: suppose I'm in the supercritical regime; now I remove the giant component and obtain a much smaller graph, and that smaller graph is a subcritical Erdős–Rényi random graph of a smaller size, with a p that you can compute explicitly, at least asymptotically, and you can show that it really is subcritical. That explains why the second-largest component of the Erdős–Rényi random graph above the critical value is logarithmic in size: it's the same as a subcritical largest component, which is logarithmic. We don't know that here; we only have very weak results on the second-largest component. We prove that it's smaller than the largest, but there should actually be a large gap — it should even be true that the second-largest component decreases with ε. We have no idea how to do that; a very nice problem. Now, of course, we have a concentrated giant component, which immediately raises the question whether there is also a central limit theorem for it; proving that may be harder. And within the critical window, we don't know that
the critical clusters are actually truly random — that they converge in distribution to proper random variables. Prove that, and prove that the scaling limit is then also the same as the scaling limit of the Erdős–Rényi random graph, as identified beautifully by David Aldous in his 1997 paper; that may be hard. And: extend this to percolation on the nearest-neighbor torus with d greater than or equal to 11. We haven't done that; it may be possible, but we don't know whether our numerical control is sufficient for it. Here are some of the papers. Thank you.

So, thank you, Remco, for this very exciting talk. Are there any questions? — Is it possible to do numerical simulations in the critical phase on the hypercube, or is the space just too big? — That's an excellent question. Bear in mind that the width of the critical window is 2^{−n/3}; take n a hundred and that's nothing — how are you ever going to see the difference? It's basically one point, and we don't know what that point is; we don't know what the critical value is. So I have no idea how to do this; it would be very interesting. I mean, we know that the largest connected component is concentrated above the critical point. — But you could, for example, add edges one by one, and stop whenever the largest component is of the size you want, volume to the power two-thirds. — Yes, you could do that, but that's not going to be enough to give you, for example, a proxy for the scaling limit of the largest connected component, because all of a sudden these quantities are deterministic. You would rather do it the other way around: fix a p inside the critical scaling window and then see what the largest connected component looks like. I don't know how to do that, so it's a good question — if anybody has an idea, I would certainly be very interested.

Any other questions? — How do you compute the coefficients in the asymptotic expansion of the critical value? — That's an excellent question. The first answer is the lace expansion: the lace expansion gives you an implicit equation for p_c that you can use iteratively, sort of inductively, by looking at the coefficients and doing asymptotics on them, and that will crunch out all of these coefficients; you could do this arbitrarily far. In terms of the upper bound — bear in mind that for this analysis we only needed the upper bound — we also have a completely independent proof of this upper bound with the seven halves that does not rely on the lace expansion, so it is technically less demanding. That proof is just computing how many points you are going to be connected to: if this expected number is very large, then the expected cluster size is quite large, and if the expected cluster size is quite large, then you're supercritical, because that was the definition of the critical point. So that one is a bit more calculus-like, again using these non-backtracking random walk ideas — but that's only an upper bound. The upper bound is typically the hardest, but still, we didn't think about proving the lower bound in the same way, or higher-order terms in this way. Does that answer your question?

So maybe a soft question for the end: can you tell us a bit more about your personal story, when you were a young postdoc at Microsoft and you discovered all of this with the others? — So, I was a postdoc at Microsoft, but that was not when this started. Microsoft Research has a very
interesting visiting program over the summer, so after I was a postdoc there in 1998 I came back several times, and all of this work was done during a summer visit in 2004. That was actually quite exciting, because for my birthday I had just been given the book "N Is a Number", or something like that, about the life of Erdős. So I was reading in the evenings in this book about Erdős, and then I would go to Microsoft, and there I would see Béla Bollobás, Joel Spencer — you know, all the people listed in that book would be there. That was very exciting. So we had a lot of fun trying to discuss this. The people who work on Erdős–Rényi-random-graph-like problems are much more combinatorial than we were at that time; we came more from a mathematical physics perspective, so we really had to learn a little bit how to speak each other's language, and a large part of the time we spent there was spent on precisely that: trying to understand what they were saying, and they trying to understand what we were saying. For example, I've sometimes heard Joel Spencer say "this is the lace expansion" — it is certainly an expansion, but it is not the lace expansion. So there are some language barriers that you need to overcome when you do these kinds of interdisciplinary projects, even within mathematics: if you're talking with people from a different community, they use the same words, but all of a sudden they mean something completely different — and people with experience in biology will probably recognize that. Okay, any other questions? If not, let's thank Remco again, and of course all of the other speakers today.