Okay, good morning, and thanks for being here. After getting this invitation, I decided that rather than focusing on one piece of work in probability, I want to give you several examples of an approach that has been useful for me, and hopefully can be useful for you: seeing connections between different areas, either within probability or in adjacent areas that use probability theory. I will give three examples, necessarily not going into all the details. These are examples I have worked on over the years, and some of them are not recent at all. The first example is intersection equivalence; that is something I worked on twenty years ago. The second example is broadcasting on trees, which I also worked on some time ago. The third, more recent, from the last five years and still developing, is the connection between cover times and a Gaussian process, specifically the Gaussian free field. These are different examples, but the common theme in all of them is that a lot of the progress is made not only by working hard to solve a hard problem (that is always recommended) but also by asking: is there a solution of a hard problem in another area that can be useful for the problem I am interested in?

For the example of intersection equivalence, let me go back to some very old background: a theorem, basically of Kakutani and Doob from the 1940s, which tells us which sets are hit by Brownian motion. To be concrete, let's focus on three dimensions; this can be adapted to any dimension, but I want to focus on R^3. So I am given some compact set A, and I want to know when Brownian motion (B will denote the path of Brownian motion) hits the set A.
I want to know when Brownian motion hits the set A with positive probability, and Kakutani's theorem says this happens if and only if A has positive one-dimensional capacity. (If the dimension d is at least three, the 1 here is replaced by d - 2.) What is the capacity of a set? It is the inverse of the minimal energy: take the infimum of the energy over probability measures on A and invert it. If I am talking about the capacity Cap_α, the relevant energy is the α-energy of a measure, called the Riesz energy: E_α(μ) = ∫∫ dμ(x) dμ(y) / |x − y|^α. These are known as Riesz capacities when you put a general α here; the case α = d − 2 is distinguished, and it is the Newtonian capacity. In two dimensions things are a little more delicate, one has to talk about logarithmic capacity, so let's stay in three dimensions. This is the classical theorem of Kakutani.

One natural question, open for many years and worked on a lot in the 1980s, is: I want the same thing for two independent Brownian motions. So I have one Brownian motion path B^1 (everything is in R^3 for concreteness) and another independent Brownian path B^2. By the way, there is an annoying point: the usual Brownian motion is started at the origin, so if a set contains the origin, it trivially intersects the path. So either I take A removed from the origin, or (better, and this is what we will do) I start the Brownian motion at a uniform random point in a box rather than at a fixed point; say B(0) is uniform in the unit cube. Then B^2 is an independent Brownian motion started at another uniform random point. It is also very classical, a theorem of Dvoretzky, Erdős, and Kakutani, that two independent Brownian motions in R^3 intersect with positive probability. But now I want them to intersect a set A as well, again a compact set A: when is B^1 ∩ B^2 ∩ A nonempty with positive probability? What is quite easy to show is that it is enough that the two-dimensional capacity of A is positive. Just to get an idea: α-dimensional capacity is closely related to α-dimensional Hausdorff measure, so if a set has Hausdorff dimension bigger than α, then it has positive α-dimensional capacity. So the set has to be genuinely bigger than, say, a plane in order to have positive two-dimensional capacity. This direction was relatively easy, essentially a second moment argument, and was well known. But the other direction, that positive intersection probability implies positive two-dimensional capacity, was open for a long time; in the 80s very good people thought about it, and already as a grad student I was very interested in this question. I didn't solve it then. The statement is true but not easy, and the first solution was obtained by Fitzsimmons and Salisbury in 1989, published in the Annales de l'Institut Henri Poincaré.
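For reference, here are the statements just discussed in symbols. This is only a transcription of the spoken definitions, with the standard normalizations as I understand them:

```latex
% Riesz energy and capacity, and the two hitting theorems discussed above.
\[
  \mathcal{E}_\alpha(\mu) = \iint_{A \times A} \frac{d\mu(x)\, d\mu(y)}{|x-y|^{\alpha}},
  \qquad
  \mathrm{Cap}_\alpha(A) = \Bigl[\inf_{\mu \in \mathcal{P}(A)} \mathcal{E}_\alpha(\mu)\Bigr]^{-1}.
\]
Kakutani, for Brownian motion $B$ in $\mathbb{R}^3$ started uniformly in the unit
cube and compact $A$:
\[
  \mathbb{P}\bigl(B \cap A \neq \emptyset\bigr) > 0
  \iff \mathrm{Cap}_{1}(A) > 0
  \qquad (\text{with } \mathrm{Cap}_{d-2} \text{ in } \mathbb{R}^d,\ d \ge 3).
\]
Fitzsimmons--Salisbury, for independent paths $B^1, B^2$ in $\mathbb{R}^3$:
\[
  \mathbb{P}\bigl(B^1 \cap B^2 \cap A \neq \emptyset\bigr) > 0
  \iff \mathrm{Cap}_{2}(A) > 0.
\]
```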
Theirs is a very impressive solution using very nice but quite fancy potential theory. They work in a much more general setting, which makes their paper a little hard to read, and there is a sentence from it that I remember. They write that they will work in the setting of special standard processes, that they won't define these precisely but see page 347 in Blumenthal and Getoor, and that roughly speaking these are processes that are left continuous in the Ray topology. So this is just "roughly speaking." In fact there is a really nice idea in their proof, which I won't go into. A few years later I found a different proof, which is the one I want to tell you about, based on this idea of intersection equivalence. The point is that a related theorem was proved by Russ Lyons in the context of percolation on trees, and that can be converted to answer this question, and to get some more.

So let's shift gears for a minute, and then we will connect back. I want to tell you about Russ Lyons' theorem, first published in 1992; the Annals of Probability paper is the one I am referring to. You can also find an account of this in my book with Russ, which is about to appear with Cambridge University Press, so you can find some exposition of this theory there, but the original papers of Russ are very well written. The theorem concerns the following question. I have a tree, not a regular tree but some general tree, and I am doing percolation on it with some parameter p: every edge is open with probability p, independently. We have some root, and we ask: what is the probability that the root is in an infinite open cluster? The answer is that, up to a constant, indeed up to a factor of two, this probability is equivalent to a capacity of the boundary of the tree. I will write a tilde on the capacity because we are on the tree. Here I want to use the following notation: I take p = 2^(−α). To define capacity as I did before, I need a metric. What is the boundary of the tree? It is the set of infinite rays, infinite paths from the root. Given two rays ξ and η, I define their distance to be d(ξ, η) = 2^(−|ξ ∧ η|); you could put e here instead of 2, adjusting α accordingly. Here ξ ∧ η is the point where the two rays separate, and |ξ ∧ η| means the number of edges from the root to this separation point. This is one of the natural metrics on the tree boundary, and the rest of the definition proceeds as above: the α-energy of a measure μ on the boundary of the tree (with a tilde to remind us that we are on the tree) is the integral of dμ(ξ) dμ(η) over the distance d(ξ, η) raised to the power α. So this is Lyons' theorem. Again, one direction is easy: showing that the probability is at least the capacity is a second moment argument. The other direction needs some kind of Markov property.
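In symbols, the theorem as just described (the factor of two is the one mentioned above):

```latex
% Lyons' theorem: percolation on a tree versus capacity of its boundary.
\[
  d(\xi,\eta) = 2^{-|\xi \wedge \eta|},
  \qquad
  \widetilde{\mathcal{E}}_\alpha(\mu)
    = \iint_{\partial T \times \partial T}
      \frac{d\mu(\xi)\, d\mu(\eta)}{d(\xi,\eta)^{\alpha}},
  \qquad
  \widetilde{\mathrm{Cap}}_\alpha(\partial T)
    = \Bigl[\inf_{\mu} \widetilde{\mathcal{E}}_\alpha(\mu)\Bigr]^{-1},
\]
and for independent edge percolation with parameter $p = 2^{-\alpha}$,
\[
  \widetilde{\mathrm{Cap}}_\alpha(\partial T)
  \;\le\; \mathbb{P}\bigl(\text{root} \leftrightarrow \infty\bigr)
  \;\le\; 2\, \widetilde{\mathrm{Cap}}_\alpha(\partial T).
\]
```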
I told you about Kakutani's theorem, and you know there are many proofs of it, but the reason it works in both directions is this: in one direction you just use the second moment, and in the other direction you use the Markov property of Brownian motion. Namely, if you know that the probability of Brownian motion intersecting A is positive, you want to show that the capacity is positive, so you have to find a measure of finite energy. What will this measure be? It will be the hitting measure of the set A by Brownian motion. You are given the data that Brownian motion hits A, you construct a measure by just taking the hitting measure, and then the Markov property of Brownian motion can be used to control the energy of that measure.

On the tree, Russ didn't exactly express his proof like that, but let me point you to a later paper, called "Martin capacity for Markov chains," with Benjamini, Pemantle, and myself, from 1995, also in the Annals of Probability, where we explain Lyons' theorem in a way that is close to Kakutani's theorem. Namely, suppose you do percolation and look at all the vertices connected to the root by open paths; let's just go to level n, so this is a random subset of a finite version of the boundary, the nth level. The nice thing is that if you go along this finite boundary of the tree, along level n, and jump left to right along the vertices that are connected to the root, this is a Markov chain. If I am at a vertex connected to the root by an open path, then to find the next connected vertex, what happened to the left is irrelevant, because of the tree structure. This Markov property is really one way to understand what makes the non-trivial direction of Lyons' theorem work; it is really the connection to that.

Anyway, once you have Lyons' theorem, the point is that it is an equivalence, and now we want to use it not on trees but in the context of Euclidean space. Well, there is a natural mapping from trees to Euclidean space: dyadic (binary) expansion. To realize a connection between trees and Euclidean space in the case of a cube, we divide the cube two by two in each coordinate, so in three dimensions we get eight subcubes, and this gives the natural structure of an 8-ary tree: every vertex has eight children. If I have some set A, let's suppose A is in the cube; we can always move the set into the cube, so we can assume we are talking about compact sets A in [0,1]^3. Then we can use the binary expansion to define a tree T_A, the tree of binary expansions of points of A. If A were the whole cube, T_A would be the full 8-ary tree, every vertex with eight children. But if A is a smaller compact set, for instance one that does not intersect a particular subcube, then already in the first generation, instead of all eight children, I have fewer children, and so on. Just by looking at the geometry of A, at which binary cubes it intersects, I can easily build this tree, and once you are in this form, Lyons' theorem can be extended.
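To make the construction of T_A concrete, here is a small sketch in code. This is my illustration, not from the talk: the oracle `intersects` and the example set (an arc of the unit circle, in dimension two to keep it small) are stand-ins, and in the talk's setting you would take d = 3, so up to eight children per vertex.

```python
# Sketch: the dyadic-cube tree T_A of a set A in [0,1]^2 (use d = 3 for the
# talk's setting; d = 2 keeps the example small).  `intersects` is a stand-in
# oracle reporting whether A meets a given dyadic cube; here A is the circle
# x^2 + y^2 = 1 restricted to the unit square, chosen just for illustration.

from itertools import product

def intersects(corner, side):
    # Does {x^2 + y^2 = 1} meet the square [corner, corner + side]^2?
    lo = sum(c ** 2 for c in corner)              # min of x^2+y^2 on the square
    hi = sum((c + side) ** 2 for c in corner)     # max of x^2+y^2 on the square
    return lo <= 1.0 <= hi

def tree_of_A(depth, corner=(0.0, 0.0), side=1.0):
    """Return the tree of dyadic cubes meeting A, as nested dicts."""
    if depth == 0:
        return {}
    children = {}
    for offsets in product((0, 1), repeat=len(corner)):
        sub = tuple(c + o * side / 2 for c, o in zip(corner, offsets))
        if intersects(sub, side / 2):
            children[offsets] = tree_of_A(depth - 1, sub, side / 2)
    return children

T = tree_of_A(6)
print(len(T))  # number of first-generation children (3 here, out of 4)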
So now I am going to look at a random set, called Λ(α), inside the cube: a random fractal, or "fractal percolation," obtained by retaining cubes with probability p = 2^(−α). I am drawing two-dimensional pictures, but think of three dimensions. I take the cube, divide it two by two in each coordinate, and retain each subcube with probability p, independently; if I remove a subcube, nothing of it will be in my set. Each retained cube I subdivide again, retaining with probability p and removing with probability 1 − p, and so on. This is a classic construction of fractal percolation, and we see that under the binary-expansion correspondence it is exactly percolation on the tree. Therefore we can express Lyons' theorem in the following form: the probability that the random set Λ(α) intersects a fixed deterministic set A is equivalent, up to constants, to the capacity of A; actually I should write the tilde capacity of the tree corresponding to A, at exponent α. This is just a translation of Lyons' theorem to this setting: it is the probability that percolation on the tree corresponding to A survives, and survival corresponds to the intersection being nonempty.

Now, if I want to think of this tilde capacity geometrically, it corresponds to a slightly different metric. The distance between two points of the cube is not their Euclidean distance but, roughly, the size of the minimal binary cube containing both of them. That is usually comparable to the Euclidean distance, but not always: if the two points are very close to the boundary of a large dyadic cube, their Euclidean distance is small but their tree distance is large. Nevertheless, there is a theorem, proved by Pemantle and myself, that the tilde capacity is equivalent, up to constants, to the capacity of the set A. Although the two metrics are not bi-Lipschitz equivalent at all, capacity is sufficiently flexible that one can be compared to the other; the argument is really just based on Cauchy-Schwarz, and it is not hard.

Once we have this, we are in good shape, because we can already state one connection. Putting these together with Kakutani's theorem, we see that the probability that the Brownian motion B^1 intersects A is equivalent, up to constants, to the probability that the random set Λ(1) intersects A, because both of these are equivalent to the 1-capacity of A. Have we made any progress? One thing you see is what I am calling intersection equivalence: the random set B^1, the path of a Brownian motion started uniformly in the cube, is (at least inside the cube) intersection equivalent to the random fractal Λ(1). These sets look completely different, and their topological properties are very different, but their intersection properties are the same. Given a deterministic target set, or any target set independent of these random sets, asking whether it intersects B^1 is equivalent, up to constants, to asking whether it intersects Λ(1).
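Here is a minimal sketch of the fractal percolation Λ(α) just defined, again in two dimensions for ease of drawing (so four children per square rather than eight); the parameter and generation count are only illustrative:

```python
# Sketch of fractal percolation Lambda(alpha) in [0,1]^2 (the talk uses
# [0,1]^3; 2D is easier to draw).  Each dyadic square is retained
# independently with probability p = 2 ** (-alpha); we return the retained
# squares at generation n as (corner, side) pairs.

import random

def fractal_percolation(alpha, n, corner=(0.0, 0.0), side=1.0):
    p = 2.0 ** (-alpha)
    if n == 0:
        return [(corner, side)]
    out = []
    for dx in (0, 1):
        for dy in (0, 1):
            if random.random() < p:   # retain this subsquare
                sub = (corner[0] + dx * side / 2, corner[1] + dy * side / 2)
                out.extend(fractal_percolation(alpha, n - 1, sub, side / 2))
    return out

squares = fractal_percolation(alpha=1.0, n=8)
print(len(squares), "surviving squares at generation 8")
```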
So why is this useful? Because it is very easy to understand what happens when we intersect two independent copies of this random fractal. Remember our goal: take B^1 ∩ B^2 ∩ A and understand when it is nonempty with positive probability. Freeze B^2 and think of B^2 ∩ A as the target set: conditioning on B^2 and applying the fact above with A replaced by B^2 ∩ A, the probability that B^1 ∩ B^2 ∩ A is nonempty is equivalent to the probability that Λ_1(1) ∩ B^2 ∩ A is nonempty, where Λ_1(1) is one copy of the random set with parameter α = 1. (Equivalent means up to constants; I won't care about the values of the constants now.) Now use commutativity of intersection and look at it the other way: freeze Λ_1(1), view Λ_1(1) ∩ A as the target set of B^2, and apply the same principle again. This gives the probability that Λ_1(1) ∩ Λ_2(1) ∩ A is nonempty, where Λ_1(1) and Λ_2(1) are two independent copies of the random set Λ(1). Let me remind you what Λ(1) is; it is very simple: take the cube, divide it into eight, keep each subcube with probability one half and remove it with probability one half, and repeat. So we have two independent copies of this set, still intersected with A. Have we made progress? Yes, because this intersection is easy to understand: the intersection of two independent copies of Λ(1) is exactly Λ(2), since intersecting two independent copies is the same as keeping each cube with probability one quarter instead of one half. So I am left with a single set, Λ(2), intersected with A, and now I can apply Lyons' theorem, or our variant of it, with parameter α = 2 instead of α = 1, and get that this is equivalent to the 2-capacity of A. In summary,

P(B^1 ∩ B^2 ∩ A ≠ ∅) ≍ P(Λ_1(1) ∩ Λ_2(1) ∩ A ≠ ∅) = P(Λ(2) ∩ A ≠ ∅) ≍ Cap_2(A),

which is a very soft proof of the Fitzsimmons-Salisbury theorem in this case. In fact, intersection equivalence gives you some more information that was not in their theorem; their theorem doesn't give you everything obtainable from these techniques, though there are advantages to their technique too, so both are useful. The point of showing this example, now more than twenty years old, is that if you have a hard problem, well, if you can solve it, great; if not, maybe the problem has been solved in a different language. Keep your antennas up to see whether a problem is being attacked in a different area which is really equivalent to your question.
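As a sanity check on the key step (two independent copies of Λ(1) have the law of Λ(2)), here is a small simulation of my own, comparing the mean number of surviving generation-n squares; in two dimensions with four children, both means equal (4 · 1/4)^n = 1:

```python
# Sanity check, in 2D, of the identity used above: the intersection of two
# independent copies of Lambda(1) has the law of Lambda(2).  We compare the
# mean number of generation-n surviving squares, which is (4 p^2)^n for the
# intersection and (4 q)^n for a single Lambda with retention q; both are 1
# here, with p = 1/2 and q = 1/4.

import random

def survivors(p, n):
    cells = {()}                       # addresses of retained squares
    for _ in range(n):
        cells = {c + (q,) for c in cells for q in range(4)
                 if random.random() < p}
    return cells

def mean(f, trials=4000):
    return sum(f() for _ in range(trials)) / trials

n = 5
print(mean(lambda: len(survivors(0.5, n) & survivors(0.5, n))))  # ~ 1
print(mean(lambda: len(survivors(0.25, n))))                      # ~ 1
```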
Let me check the clock: it is about ten o'clock, so we have until about 10:25. I had a nice story about broadcasting on trees; I won't tell it in that much detail, but before I go on, any questions about this story? If you want to see more about it, search for "intersection equivalence"; there is a paper in Communications in Mathematical Physics from the 90s, and all of this is also covered, as I mentioned, in my book with Russ.

Okay, so the second example, which I won't give much detail on, is one that I first heard about from several computer scientists: Claire Mathieu (formerly Kenyon), who was then in France and is now back in France; Leonard Schulman; and Leonard's student Will Evans. They were computer scientists interested in the following kind of question. You have a bit, say a plus or minus one, and we have a tree, and along every edge the bit can be flipped with probability ε: if I have a spin s at one endpoint, at the other endpoint I get −s with probability ε and s with probability 1 − ε. This holds along every edge of the tree; it is the binary symmetric channel, where with probability ε you flip and with probability 1 − ε you don't. Say I start with a plus at the root; usually I get a plus, but occasionally, with probability ε per edge, there is a flip, and so on down the tree. This came from a problem in noisy computation, which I won't tell you about, but the question they were really asking is: suppose I see the individuals at generation n, with n very large. Do the spins at generation n give me information about the original spin at the root? Of course this depends on ε. You may want to think about it first on the binary tree, but we understand it on any tree. Let me just tell you the answer, since I don't have time to explain the techniques, and let me state it for the b-ary tree. If 1 − 2ε < 1/√b, then information dies out: the mutual information between the root and everything at level n goes to zero exponentially. If 1 − 2ε > 1/√b, then the information survives: the mutual information stays bounded below, which means that seeing the nth level you can never be sure what the root was, but you retain some useful information that does not die out. This is in a paper called "Broadcasting on trees and the Ising model." It turned out the case of b-ary trees had already been solved by Bleher, Ruiz, and Zagrebnov in the setting of the Ising model on trees, but we did not do just that case: for general trees, replace b by one over the critical percolation probability of the tree, and, actually using some of Russ Lyons' theorems, one can extend this to understand the process on any tree.

The interesting part of the story is what happened next. I have friends in different areas, and I talked to some of my math biology friends, and it turned out the same question had been investigated heavily in mathematical biology, as a toy model for understanding phylogenetic reconstruction: you have some mitochondrial DNA, you see how it evolves through mutations, and you want to understand, given the present population, what you can say about the mitochondrial DNA of some ancient ancestor. There it is really not binary, but this was the first toy model they were looking at, and in biology they had not even obtained the sharp result for the b-ary tree by that time, although there were PhD theses on it and so on. And again, the problem had been investigated separately in statistical physics, in the context of the Ising model. Making these connections was very useful, because it turned out later that some of the techniques of the biologists, although they were inferior for the binary case, were very useful for handling more symbols.
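To make the broadcast process concrete, here is a small simulation sketch, again my own illustration: the root spin is propagated down a b-ary tree with flip probability ε per edge, and a majority vote over level n tries to recover the root. Majority is not the optimal estimator, so this only makes the phenomenon visible, not the sharp threshold; for b = 2 the threshold 1 − 2ε = 1/√b described above sits at ε ≈ 0.146.

```python
# Sketch of broadcasting on the b-ary tree: the root spin is +1, each edge
# flips the spin independently with probability eps, and a simple observer
# (majority vote over level n) tries to recover the root.

import random

def broadcast_level(b, n, eps, spin=1):
    level = [spin]
    for _ in range(n):
        # each vertex gets b children; each edge flips independently
        level = [s * (-1 if random.random() < eps else 1)
                 for s in level for _ in range(b)]
    return level

def majority_agreement(b, n, eps, trials=1000):
    hits = 0
    for _ in range(trials):
        leaves = broadcast_level(b, n, eps)
        hits += sum(leaves) > 0   # majority vote guesses +1
    return hits / trials

for eps in (0.1, 0.25, 0.4):
    print(eps, majority_agreement(b=2, n=10, eps=eps))
```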
Elchanan Mossel wrote his thesis with me in Jerusalem on this topic and continued to develop it further; his student Allan Sly developed it more, and this story is continuing. It turns out that the stochastic block model, which is a very hot topic now, is very usefully approximated by this model, so understanding broadcasting on trees is a necessary first step to understanding the stochastic block model; that is a very hot topic which I recommend you look at, but I can't get into it. The point is that this is really probability, but it was investigated by people in different application areas, in different topics, without their being aware of each other. Just being aware of what is happening in different communities that do probability, even though they are not called probabilists, can be very useful.

Okay, so in the remaining 18 minutes or so, I want to tell you about the last example, where again there is a different connection; this will only be a brief survey because of the time. This is about cover times, a topic in probability, but one really studied more by combinatorialists and computer scientists: cover times for random walks on graphs. What I am giving is a very brief survey of work from five years ago with Jian Ding and James Lee, and some later developments. Jian actually did this when he was still at UC Berkeley; now he is in Chicago. So this is about random walks on graphs, a topic that some probabilists and many combinatorialists and computer scientists like, and the issue is how long you need to run the walk until you cover the graph. By putting conductances on the edges you can get any reversible Markov chain, but let's focus on simple random walk on the graph: all the weights are one. First, local times and hitting times, since the cover time is when we have visited everywhere. Reminders of notation: the hitting time h(u, v) is the expected time a random walk started at u takes to reach v. This is almost a metric, since it satisfies the triangle inequality, but it is not symmetric; we can symmetrize it and get the commute time h(u, v) + h(v, u), which is now a metric on the graph, with both the triangle inequality and symmetry. Now, the cover time, the object of interest here, is a very natural thing to look at, but computer scientists were interested in it because it is related to algorithms for determining connectivity: the cover time is the expected time, from the worst starting vertex, to hit all vertices of G. Just to give you an idea, here are orders of magnitude of cover times for different graphs, where n is the number of vertices: for a path it is n^2; for the complete graph it is just coupon collecting, n log n; for the two-dimensional grid (a √n by √n grid) it is n log^2 n, determined up to a constant around 1989. Determining the right constant in front took a lot of effort; that is work I did with Dembo, Rosen, and Zeitouni, but I am not going to be interested in constants right now. There are also general bounds: cover times are always at least of order n log n and at most of order n^3. It will be useful, in understanding cover times, to use the notion of effective resistance.
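Since the cover time has just been defined, here is a minimal Monte Carlo sketch of it, my illustration; the cycle is chosen because its cover time, n(n−1)/2, is known exactly, so the estimate is easy to check:

```python
# A minimal Monte Carlo estimate of the cover time of a graph.  `graph` is an
# adjacency list; we average, over trials, the time simple random walk from
# `start` needs to visit every vertex (one would then maximize over `start`).

import random

def cover_time_estimate(graph, start, trials=200):
    total = 0
    for _ in range(trials):
        v, seen, t = start, {start}, 0
        while len(seen) < len(graph):
            v = random.choice(graph[v])
            seen.add(v)
            t += 1
        total += t
    return total / trials

# Example: the cycle on n vertices; exact cover time is n(n-1)/2 = 1225 here.
n = 50
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(cover_time_estimate(cycle, 0))
```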
Many of you know effective resistance; if you haven't worked with it, one way to think about it is via the commute time I just defined. The commute time identity, actually first proved by computer scientists, says that the commute time is twice the number of edges times the effective resistance: h(u, v) + h(v, u) = 2|E| R_eff(u, v). If you want, you can think of this as a definition of effective resistance; there are other definitions as well. In particular, this identity gives one way to show that effective resistance is a metric: the triangle inequality for effective resistance is not obvious from the classical definition, but it is obvious this way.

Aldous and Fill asked in the 90s: can you estimate the cover time of a graph, up to constants, deterministically? Of course you can estimate it by running the walk and averaging, and you will get a pretty good statistical estimate; but suppose you want to know it for sure, a deterministic estimate up to a constant. Can you do this efficiently? This is very easy for hitting times, because hitting times satisfy an obvious linear system: for u different from v, I have to take one step from u, and then I average over the neighbors of u, so h(u, v) = 1 + (1/deg u) Σ_{w ~ u} h(w, v). This system is uniquely solvable and gives you the hitting times very easily, in time at most on the order of n^3. But what about cover times? You cannot write such equations directly; in fact you can write equations for cover times in the space of sets, giving a linear system in exponentially many variables, so you get an exact solution if you have exponential time. So they asked: is there a polynomial-time algorithm to estimate the cover time up to constants? Kahn, Kim, Lovász, and Vu, some of the top combinatorialists in the world, showed that you can get a (log log n)^2 approximation using what are called Matthews' bounds. Matthews, in 1988, proved a nice upper bound: the cover time is at most the maximal hitting time multiplied by the harmonic series 1 + 1/2 + ... + 1/n, so roughly by log n. (When I give a longer talk I present this proof; it takes just a few minutes, but I won't here.) There is also a lower bound: the cover time is at least the following quantity, where you maximize over subsets S of the vertex set and then minimize over u ≠ v in S the hitting time h(u, v), multiplied by the log of the size of the set. What Kahn, Kim, Lovász, and Vu showed is that this lower bound, together with the maximal hitting time (which is also a lower bound), is very close to sharp: taken together, they give the cover time up to a factor of (log log n)^2. But it is not up to constants; they gave examples showing that the (log log n)^2 is really there. So they were motivated by this Aldous-Fill problem, and they made a lot of progress, but they didn't quite solve it.
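As an aside, the linear system for hitting times mentioned above is easy to set up and solve; here is a sketch with numpy, again on the cycle, where the hitting time between vertices at distance k is k(n − k):

```python
# The linear system for hitting times described above, solved with numpy:
# h(v) = 0 at the target v, and h(u) = 1 + (1/deg u) * sum_{w ~ u} h(w)
# for u != v.

import numpy as np

def hitting_times_to(graph, v):
    n = len(graph)
    idx = [u for u in range(n) if u != v]      # unknowns: all vertices but v
    A = np.eye(n - 1)
    b = np.ones(n - 1)
    for i, u in enumerate(idx):
        for w in graph[u]:
            if w != v:
                A[i, idx.index(w)] -= 1.0 / len(graph[u])
    h = np.linalg.solve(A, b)
    result = {v: 0.0}
    for i, u in enumerate(idx):
        result[u] = h[i]
    return result

n = 10
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
h = hitting_times_to(cycle, 0)
print(h[5])  # from the antipode: k(n - k) = 25 on the cycle
```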
There was another conjecture that also motivated them, a conjecture of Winkler and Zuckerman concerning the blanket time. When you reach the cover time, you have visited every vertex in the graph, but non-uniformly: the last vertex you reached has been visited once, and by that time there will always be other vertices that have been visited many times. Even in the complete graph, where covering is just coupon collecting, you reach the last vertex once while most vertices have been visited on the order of log n times. So they asked about the blanket time, the time when you have covered the graph approximately uniformly, say with all local times within a factor of two of each other. Here the local time at a vertex x is the number of visits to x normalized by the degree; we normalize by the degree because that is the stationary distribution. We know that asymptotically the ratio of two local times must tend to one; this is just the fact that the empirical distribution tends to the stationary distribution. But here we are not interested in asymptotics, we want quantitative bounds. So Winkler and Zuckerman looked at this blanket time, the expected time until all local times are within a factor β of each other, and they made a very courageous guess: that the blanket time is within a constant factor of the cover time, a constant depending only on β and not on the graph. They made this guess based on very good intuition and about three examples: they could analyze the complete graph and the torus, and that is about it; but they had the intuition that what happens in these examples should happen completely generally. Really amazing courage, and it turned out to be right. By the way, the method of Kahn, Kim, Lovász, and Vu actually implies that their guess was right up to (log log n)^2, because that estimate works both for the cover time and for the blanket time, so the two are within (log log n)^2 of each other; but that does not imply they are within a constant.

I was very interested in cover times too, and I worked on them with Martin Barlow, Jian Ding, and Asaf Nachmias. We were really trying to understand cover times of specific graphs, Erdős-Rényi graphs near criticality, which are quite delicate, and in order to understand the cover time there, we took the approach of Kahn, Kim, Lovász, and Vu and refined it a little. We got the following general bound, which we then applied: the cover time is at most the number of edges, times r, the diameter in the resistance metric, times the square of an integral of the root-log of entropy numbers. Here N(S, d, ε), for a set S, a metric d, and an ε, is the minimal number of ε-balls needed to cover S; in our case the set is the vertex set, the metric is the effective resistance metric, and the radii we look at are r times ε. For each ε you take the square root of the log of the covering number, integrate over ε, and square. Some of the ideas in this proof were in the Kahn-Kim-Lovász-Vu paper, but we had to refine them a little, and we got this bound. We submitted that paper, and we were kind of in a rush; after we submitted, I was talking to Jian, who was my student, and I said: wait a minute, this integral looks familiar. It is very similar to Dudley's integral for Gaussian processes, going back to 1967. There are some differences, there is a square, the metric is different, but it does look similar. Dudley's integral is a way to bound the sup of a Gaussian process, so what is going on here?
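Schematically, the two bounds being compared look as follows; the first is my transcription of the bound described in words above, so the exact normalization is not guaranteed, while the second is Dudley's 1967 bound:

```latex
% The cover-time bound described above (schematic) next to Dudley's bound.
\[
  t_{\mathrm{cov}}(G) \;\lesssim\; |E| \cdot r
    \Bigl( \int_0^1 \sqrt{\log N(V,\, R_{\mathrm{eff}},\, r\varepsilon)}\; d\varepsilon \Bigr)^{2},
  \qquad
  \mathbb{E} \sup_{s \in S} X_s \;\le\; C \int_0^\infty
    \sqrt{\log N(S,\, d,\, \varepsilon)}\; d\varepsilon .
\]
```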
Many of you know Gaussian processes: (X_s) is a Gaussian process, and the metric we use on the index set is given by the standard deviation, so the distance between two points is just the L^2 distance between the corresponding Gaussian variables. Dudley's theorem from 1967 was almost sharp; it is usually a very good bound, but not exactly sharp, and finding the sharp bound took twenty years after Dudley's theorem. Starting with Fernique and culminating in Talagrand in 1987, a sharp formula for the expected sup of a Gaussian process was found. When I saw this I thought: maybe for cover times we don't need to go through the same twenty years; we can just piggyback on the work on Gaussian processes. So, is this analogy useful, can we use their theory somehow? The answer is positive. We have five more minutes, so we won't really understand it, but I'll tell you a few things.

It turns out the relevant Gaussian process is a famous one, important for various reasons: the Gaussian free field. It has been studied in both discrete and continuous settings, and I think we will hear about the continuous setting later today. In the discrete setting, given a graph, it is a centered Gaussian process (g_x) with Var(g_x − g_y) equal to the effective resistance between x and y. Given a graph G, I know how to compute effective resistances, and then I can build a Gaussian process realizing them in this way. Equivalently, if you are used to describing a Gaussian process via its covariance kernel: the covariance of g_x and g_y is the Green function G_{v0}(x, y) of the random walk killed at some fixed vertex v0. That just means I start the random walk at x and look at the expected number of visits to y before the killing at v0, normalized by the degree of y. This Green function is a positive definite symmetric function, so it is a covariance kernel, and that is the Gaussian free field. There are nice pictures of what it looks like on a two-dimensional lattice. Our theorem, which answered both the Aldous-Fill question and the blanket time conjecture of Winkler and Zuckerman, was obtained in 2010 and published in the Annals of Mathematics: the cover time is equivalent, up to constants, to the number of edges times the square of the expected sup of the Gaussian free field on the graph, and this is also equivalent to the blanket time. We do not know any direct probabilistic proof relating the cover time and the blanket time: obviously the blanket time is at least the cover time, but in the other direction we know no direct probabilistic argument, and it is only via these analytic characterizations that we can show they are equivalent. So that is the theorem, and indeed we didn't have to wait twenty years, because we could just use some of the previous work. There is a rich theory of the maximum of Gaussian processes, which I can't go into, but there is the γ_2 functional, first described by Fernique, then used by Talagrand, who completed the theory, to show that the expected sup of a Gaussian process is equivalent to γ_2. Our main theorem can be restated as: the cover time is equivalent to the number of edges times this γ_2 squared, and this then allowed us to find an algorithm to estimate the cover time.
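As a concrete illustration (mine, not from the talk): one can sample the Gaussian free field pinned at v0 by taking the covariance to be the inverse of the graph Laplacian with the row and column of v0 removed (unit conductances), and then form |E| · (E sup g)^2, the quantity the theorem compares to the cover time up to universal constants:

```python
# Sampling the discrete Gaussian free field on a graph, pinned at vertex v0,
# and forming |E| * (E sup g)^2 from the theorem above.  The covariance is
# the inverse of the Laplacian restricted to V \ {v0} (unit conductances).

import numpy as np

def gff_sup_estimate(graph, samples=5000, v0=0):
    n = len(graph)
    L = np.zeros((n, n))
    for u in range(n):
        L[u, u] = len(graph[u])
        for w in graph[u]:
            L[u, w] -= 1.0
    keep = [u for u in range(n) if u != v0]
    cov = np.linalg.inv(L[np.ix_(keep, keep)])
    g = np.random.multivariate_normal(np.zeros(n - 1), cov, size=samples)
    # g(v0) = 0 also participates in the sup, hence the maximum with 0
    return np.maximum(g.max(axis=1), 0.0).mean()

n = 30
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
m = n  # the cycle has n edges
# Compare with the cover time n(n-1)/2 = 435; the theorem promises agreement
# only up to universal constants.
print(m * gff_sup_estimate(cycle) ** 2)
```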
The story of γ_2 is very interesting, but I have to skip it. In retrospect there are many analogies between Gaussian processes and random walks that were not really noticed before. For example, there is a classical inequality for Gaussian processes called the Sudakov minoration, which says that if you have a Gaussian process and some separated points, then you can lower-bound the expected maximum; it turns out this is very close to the Matthews lower bound. I am going to skip the definition of the γ_2 functional and just say that the crucial element of the proof, which also has a strong French connection, is the Dynkin isomorphism theory, specifically a theorem due to Nathalie Eisenbaum, Kaspi, Marcus, Rosen, and Shi (so at least two Parisians there). This is an amazing theorem that relates the local times of a process to a Gaussian field, in our case to the Gaussian free field. Again, no time to tell you anything about it, but it was a crucial element in the proof, and that is a whole other story.

Let me finish, in the last minute, with one more thing. We got an estimate of the cover time up to constants, but we wondered whether the connection is actually sharper: is the cover time asymptotically equal to the number of edges times the square of the maximum of the Gaussian free field? If this is true, then Talagrand's theory is not precise enough to show it, because it only estimates the sup of a Gaussian process up to constants, so one needs a different technique. An upper bound of this sharp form did follow from the isomorphism theorem: the cover time is asymptotically at most that. But at the time we could not prove the corresponding lower bound. That was proved by Jian Ding in a number of cases, for trees and for general graphs under a uniform bound on the degrees, and then the general statement was proved by Alex Zhai, based on an idea also found earlier, independently, by Lupu. This again goes back to breaking barriers: we are talking about random walk on a discrete graph, but it turns out that to get the sharpest results it is better to look at the cable process. You take the graph, think of the edges as segments, and instead of doing random walk on the graph, you run Brownian motion on the resulting metric space. It turns out that if you apply the isomorphism theorem in this context, it is more powerful than applying it directly on the graph, and one gets the complete, exact estimate. This refinement is due to Alex Zhai; you can find his paper on the arXiv, and there is related (though not identical) work by Titus Lupu in Paris. That is a good point to stop. Thank you.

Thank you very much. Are there any questions? You don't need to talk into the microphone; you can just speak up.

Question: I have a question about the intersection equivalence from before. You mentioned the Markov property on the tree, when you look at the tree from left to right. Does this have an analogue in the Euclidean setting, when you use the dyadic tree? You mentioned this Markov property, but I didn't see whether you used it.

Answer: The point is that in the Euclidean setting the cubes have some boundary intersections, which creates superficial difficulties. But the thing is, we can completely transfer the questions to the tree, and really solve them on the tree, where we have this Markov property.
Roughly speaking, you can think of a Markov property obtained by a kind of random Peano curve in space, but making that rigorous in this format is much more painful than it needs to be. You just transfer the question you have to the tree, and there the nice separation you have in the tree yields a very clean Markov property. So you infer any consequences you want in Euclidean space by mapping to the tree, rather than working directly in Euclidean space. There is some cost to this approach: for instance, in high dimensions you lose constants that are exponential in the dimension, so this is not a good approach for high-dimensional uniform estimates. But many of the questions we care about are in two and three, in low dimensions. And this does have an analogue in two dimensions; in two dimensions you have to vary the percolation probability as you go deeper, but everything works by transferring to the tree.

Any other questions? If not, we thank the speaker again.