Thank you Nicolas, and thanks to the organizers for putting together this very nice event. I'm really happy to be here, I'm enjoying it a lot. So, the purpose of this talk is to say something about the spectrum of very large graphs, and in particular to say something about these spikes that you see here in the picture — to relate that to the geometry of the graph, and in particular to the isoperimetric profile of the graph. I'm not telling you yet which model this picture was drawn from; I'll tell you in a minute. Let me just go back to some basics. What are we doing here? I consider just a finite simple graph. As you know, it can be represented by its adjacency matrix, which completely encodes the information of the graph. This is of course a symmetric matrix, so you can look at its eigenvalues, and they capture a lot of information about your graph. There is a whole part of graph theory, spectral graph theory, which is specifically devoted to the connection between those eigenvalues and the structural properties of your graph. If you want to study those eigenvalues, especially as the size of the graph tends to infinity, it's a good idea, especially for a probabilist, to encode them into a measure, as we have seen in the previous lectures. So you take the empirical spectral distribution, this object here, which is just a probability measure on the real line. And the question is: what does this object look like for a typical graph? Of course, the word typical here implies choosing a model for your graph. So we'll do something very simple here: we take the uniform distribution over all graphs on 10,000 vertices. That's a finite set, you can just pick an element uniformly at random, and then take the histogram of the eigenvalues. What do you get?
It's this picture. It's an actual simulation that I did. And what you see is this semicircular shape. It's a very well-known phenomenon in the theory; it goes back to the late 1950s. It was discovered by Wigner, and it's called the semicircle law. It's more general than what I've just described, but let me just mention this specific application. This is the Erdős–Rényi model, so each edge is present independently with probability p. If you want the graph to be uniform, just take p to be one half, but you can do something more general than that. And as soon as this quantity here, np(1 − p), tends to infinity — so you want sufficiently many non-zero entries in your matrix — then you have convergence to this semicircular shape. This convergence is by now very well understood. What is remarkable is that it does not depend at all on the specific choice of your model: take any symmetric matrix with IID entries above the diagonal, and under a few assumptions on the distribution of a single entry, you have this convergence. So it's a kind of universality, which is exciting. And in the specific context of random graphs, let me give you another example where the semicircle law can be established: random regular graphs. So instead of being independent, the entries are extremely correlated: on each row you have the same number of 1s, and on each column you have the same number of 1s. This number is d_n, the degree of a vertex, and as long as d_n tends to infinity you have the same kind of limit — a semicircle distribution. Ok, so that's universality. Let me just mention that the spectrum of these matrices is actually understood in a much stronger sense than this weak convergence.
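The simulation just described is easy to reproduce. A minimal numpy sketch (assuming the uniform model, i.e. Erdős–Rényi with p = 1/2, at a smaller size than the 10,000 vertices on the slide): the bulk of the eigenvalues fills a semicircle of radius 2√(np(1−p)), while the single Perron eigenvalue, of order np, sits far outside it.

```python
import numpy as np

def semicircle_check(n=400, p=0.5, seed=0):
    """Sample a uniform random graph G(n, 1/2) and compare the bulk of
    its adjacency spectrum to the semicircle radius 2*sqrt(n*p*(1-p))."""
    rng = np.random.default_rng(seed)
    # symmetric 0/1 adjacency matrix with zero diagonal
    upper = rng.random((n, n)) < p
    A = np.triu(upper, 1)
    A = (A + A.T).astype(float)
    eig = np.linalg.eigvalsh(A)          # sorted ascending
    radius = 2.0 * np.sqrt(p * (1 - p) * n)
    # the largest (Perron) eigenvalue ~ n*p is an outlier; the rest of
    # the spectrum should essentially fill [-radius, radius]
    bulk = eig[:-1]
    frac_inside = np.mean(np.abs(bulk) <= 1.05 * radius)
    return eig[-1], radius, frac_inside

top, radius, frac = semicircle_check()
print(top, radius, frac)
```

Plotting a histogram of `bulk` would reproduce the semicircular shape on the slide.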
Even if you zoom inside the bulk and look at the local statistics of the eigenvalues, you cannot tell the models apart — but that's another story. That's all very nice, but if you think about it, this condition, that np(1 − p) goes to infinity, forces your graph to be dense: you must have many edges compared to the number of vertices. So the number of edges becomes very large — in fact you want the number of edges to grow even faster than the number of vertices. And not all graphs are like that, especially if you think of a real-world network. The internet graph, for example, is a huge graph, something like 10^12 vertices; but a typical single page has only 3 or 4 neighbors. It's extremely sparse — not in that regime at all. So it's a natural question: what does the spectrum of a sparse graph look like? This issue was raised by physicists some years ago, in a paper called "beyond the semicircle law". They give simulations of real-world networks, scale-free networks, where the spectrum is actually not at all semicircular in shape. So there is some need for a theory here. And you may wonder why we would care about looking at the spectrum of a complex network like Facebook or the internet. But that's actually something that is done in practice. There is, in particular, a whole book on the subject by Piet Van Mieghem which really explains why it's so important to look at the spectrum of a network in order to understand that network. And this is really important in that setting, because a complex network like the internet is something that is extremely complicated to grasp; if you look at the spectrum, you get a simple fingerprint of it which you can try to understand. So that was just a motivation. Let me show you a few pictures. When you take a sparse graph instead of a dense graph, what kind of spectrum do you see?
So here is the analog of the previous simulation: again the Erdős–Rényi model, but now I choose the parameter p so small that every vertex has only a bounded number of neighbors — three, in this case, on average. The graph looks something like this. You have probably seen this in your courses: there is this nice phase transition when the average degree goes above 1. You see this giant component appearing in the middle, and around that component you see a bunch of small floating components — actually trees, for most of them. That's what the graph looks like. Now if you take the spectrum of this object, you get the picture that I showed in the introduction. And not much is understood about this spectrum. In particular, you see those spikes: there is one at zero, one at minus one, and the others seem to follow some pattern. And if you remove them, you see this smoother curve, which you expect to be an absolutely continuous part in the limit — but this has not been proved yet. Ok, so what if I change the model? Let me take the other model I talked about, the random regular model. Now the degree is fixed, and I take it to be three. The graph looks like that: it's now connected, there is no floating component around. If you look at the spectrum, it's something very, very smooth. We understand everything about the density here: the theoretical density, the red curve, is the Kesten–McKay distribution, and you see that the approximation is quite good. So you have those two different models, and they do not yield the same spectrum at all. And what we would like to understand is how the geometry translates into the spectrum, and maybe vice versa. If you have seen the lectures on Schrödinger operators, maybe the intuition is that spikes appear when you have a lot of roughness in your graph. So you expect this to be true: you expect to see a lot of spikes when the degrees of your vertices are highly variable.
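The spike at zero in the sparse Erdős–Rényi spectrum is easy to see numerically: it is a genuine atom, i.e. a macroscopic proportion of exactly-zero eigenvalues. A small sketch (numpy only; average degree c = 3 as in the talk, size chosen modestly so the eigensolver is fast):

```python
import numpy as np

def zero_atom_fraction(n=600, c=3.0, seed=1):
    """Fraction of (numerically) zero eigenvalues of the adjacency
    matrix of a sparse Erdos-Renyi graph G(n, c/n)."""
    rng = np.random.default_rng(seed)
    upper = rng.random((n, n)) < c / n
    A = np.triu(upper, 1)
    A = (A + A.T).astype(float)
    eig = np.linalg.eigvalsh(A)
    # exact zero eigenvalues come out at machine-precision scale,
    # far below any genuine nonzero eigenvalue at this size
    return np.mean(np.abs(eig) < 1e-8)

frac = zero_atom_fraction()
print(frac)
```

The fraction stays bounded away from zero as n grows — that is the height of the spike at zero, which will reappear later in the talk as an explicit formula.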
And on the contrary, when you have a very regular graph like that, you expect something much smoother. But this picture is not at all rigorous yet. Ok. So let me just mention a third model of sparse graphs, which has no dense analog and which I haven't spoken about yet. A simple example of a sparse graph, right? Take a large tree uniformly at random on a given vertex set. And the spectrum again looks like the Erdős–Rényi spectrum: you see those spikes, and maybe something continuous on top of that. Again, it's an open question to determine whether or not there is anything other than those spikes in this model; we really do not understand much about it. So in the dense regime there is a lot of universality; in the sparse regime it's not universal at all, you see those very different patterns. That's actually good news for the engineers: if you want to understand a network through its spectrum, it had better be the case that the spectrum differs from one model to the other — otherwise you would not get much information out of it. Let me mention a few open questions. As you have seen in Charles's talk, it's now rigorous that many sparse models have a sequence of spectral distributions converging to a deterministic limit; I'll come back to that in a few minutes. Now, this limit is model dependent, and we do not understand much about it. This has been proved specifically for the three models that I have shown you, so let me just credit where each result was proved. As for the type of questions we would like to answer, especially in this talk: let me mention that even in the Erdős–Rényi case those questions were open. Think about it from the random matrix point of view for a while: the adjacency matrix of an Erdős–Rényi graph is the simplest possible random matrix you can think of. It's symmetric, with IID Bernoulli entries — so you should know everything about the limiting spectrum.
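The third model above, the uniform random tree, is also easy to simulate — here is a small sketch using the classical Prüfer-sequence bijection (a uniform random code of length n−2 decodes to a uniform random labelled tree), then looking at the spectrum. The atom at zero is clearly visible in this model too; the exact limiting mass is not asserted here, only that it is macroscopic.

```python
import heapq
import numpy as np

def random_tree_adjacency(n, rng):
    """Uniform random labelled tree on n vertices via Prufer decoding."""
    prufer = rng.integers(0, n, size=n - 2)
    degree = np.ones(n, dtype=int)
    for a in prufer:
        degree[a] += 1          # each vertex appears deg-1 times in the code
    A = np.zeros((n, n))
    leaves = [i for i in range(n) if degree[i] == 1]
    heapq.heapify(leaves)
    # repeatedly attach the smallest remaining leaf to the next code entry
    for a in prufer:
        leaf = heapq.heappop(leaves)
        A[leaf, a] = A[a, leaf] = 1.0
        degree[a] -= 1
        if degree[a] == 1:
            heapq.heappush(leaves, a)
    u, v = heapq.heappop(leaves), heapq.heappop(leaves)
    A[u, v] = A[v, u] = 1.0     # the last remaining edge
    return A

rng = np.random.default_rng(2)
A = random_tree_adjacency(500, rng)
eig = np.linalg.eigvalsh(A)
frac0 = np.mean(np.abs(eig) < 1e-8)   # mass of the spike at zero
print(frac0)
```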
But again, those questions were raised in random matrix theory. For example: what is the height of the spike at zero — what is the dimension of the kernel of your adjacency matrix when the size of the graph is large? Where are those spikes? This question was raised at a random matrix colloquium in 2010. And maybe more ambitiously, as Charles described in his lecture: any measure can be decomposed into a pure point part, an absolutely continuous part, and a singular continuous part. What can you say about this decomposition? Can you prove that each of those parts is or is not present here? And even better, can you say something about the support of those components? The goal of my talk today will be to show you how to answer some of those questions within the framework that Charles presented to you yesterday. Ok, so let me just recall in one slide what this local weak convergence is all about, in case you missed Charles's lecture. Local weak convergence is just a notion of convergence for sequences of graphs. Take deterministic graphs for the moment, a sequence of finite deterministic graphs G_n. You say that this sequence converges to some weird limiting object, this ρ here, if the local structure of your graph converges. So in what sense do I mean that?
Well, in any graph you can always look at the ball of fixed radius around your favorite vertex; that's a neighborhood in the graph. You would like to understand this as the size of the graph tends to infinity, while the radius stays fixed. Now, this would be very easy to do if you only cared about one specific vertex, the root here, but you would like to do it simultaneously for all the vertices in your graph. If you want to understand the local neighborhoods of all the vertices simultaneously, one natural thing to do — this was the beautiful idea of Benjamini and Schramm — is to take the empirical distribution of all those possible rooted neighborhoods. And you say that your sequence of graphs converges if this empirical distribution of neighborhoods converges. I'm not writing out the formal setting here, but the limit object is a probability distribution on locally finite rooted graphs — so it's the distribution of a random, potentially infinite, rooted graph; think of the Galton–Watson tree, for example. And this describes what you would see in the limit if you chose a vertex at random in your graph and looked locally around yourself. Well, here is one way to state that. Let me give you examples so that you understand better what I mean. So, there are a lot of models of sparse graphs for real-world networks — all of them are actually bad, none of them is a good model for a graph like the internet or a social network — but let me remind you of some of the most classical random graph models. The first one we have seen is the random regular graph: fix a vertex set 1 to n and pick a graph uniformly at random among all graphs with the property that every vertex has the same number d of neighbors. If you take a large random graph like that, choose a vertex and look locally around it, what you will see is actually something that does not contain any cycle. Why is that? Because those graphs are taken at random, the probability that two of your neighbors know each
other is quite small when the size of the graph tends to infinity. And you can repeat this argument further away from the root, and you will convince yourself that the typical length of a cycle in such a graph is logarithmic in n. So there are many cycles in this graph, but they are so large that if you look locally around yourself you will not see them. There are also a few short cycles in the graph — a few triangles, a few very short cycles — but they are extremely rare: if you pick a vertex uniformly at random, there is little chance that you pick one which is in a triangle. In fact, what you will see is a tree, and because your graph is d-regular, the only possibility is the d-regular rooted tree. So this is the limit, the local weak limit, of your sequence of random d-regular graphs. Ok, another interesting model is the Erdős–Rényi model, with the edge probability taken so small that the average degree remains constant; call this constant c. Here again the same phenomenon occurs: short cycles are extremely rare, cycles typically have size logarithmic in n, so if you look at a fixed ball around a typical vertex you will not see any cycle — therefore you will see a tree. But now the tree has no reason to be regular at all: the degrees vary a lot in the Erdős–Rényi model. The distribution of the degree of a single node is just a binomial with parameters n and p_n, so in this regime it converges to a Poisson, and it's not hard to see that the limit of the Erdős–Rényi model is the Galton–Watson tree with Poisson offspring distribution. Does everyone know what a Galton–Watson tree is? Ok. So you can make this a bit more complex. I mean, this model is not appropriate for real-world networks, because all the vertices have this same Poisson degree distribution, so basically they all look the same. And in real-world networks, what you would like to take into account is the fact that some of the vertices know a lot of people — you have those big hubs, in the internet graph or in
Facebook — you have certain people who are connected to a lot of other people. Of course, most vertices see only a small fraction of the graph, but some of them have extremely high degree. So you would like to allow for degree distributions with more variance than a Poisson distribution. And this can actually be done: there are models of graphs where you can decide in advance how many neighbors you would like to assign to every vertex. You decide in advance that vertex 1 will have 3 neighbors, vertex 2 will have 4 neighbors, and you specify those numbers as you wish. So you have a bunch of degrees, one for each vertex, and then you can construct a random graph which has exactly those degrees. How do you do that? Well, once you have this drawing, you just pair the half-edges uniformly at random: you connect this one uniformly, then this one here. Sometimes you create loops, sometimes you also create multiple edges, so it's a multigraph, not really a graph — but still, that's a way to generate a random graph with a prescribed degree sequence. Now I would like to do asymptotics on that, so I have to put some assumptions on how the degrees behave as n tends to infinity. The natural assumption is just to say that the empirical degree distribution converges: for each n you have a sequence of degrees, and as n tends to infinity you assume that this empirical distribution converges to some limiting degree distribution — a probability distribution on the integers — and you also assume that the first moment converges. If you have those two assumptions, you get what is called the configuration model with empirical degree distribution π. And the limit of this is the natural analog of the Galton–Watson tree in the Poisson case: it's just the Galton–Watson tree with degree distribution π. I'm cheating a bit here — it's not really the Galton–Watson tree, there is a small difference between the root and the
other vertices. But if you have never seen this unimodular Galton–Watson tree, just think of it as the usual one. Other models of sparse graphs: you have seen this uniform random tree. Pick a tree uniformly at random, choose a vertex, look in a small ball around it — what will you see? Of course you will see a tree. The limit was actually found long ago by Grimmett, who called it the infinite skeleton tree. Let me just tell you what this nice object is: you have an infinite line of vertices, and at each vertex of the line you have a pendant tree T1, T2, etc., and those Tk are just IID Poisson Galton–Watson trees with mean c equal to 1 — critical Poisson Galton–Watson trees. Ok, that's the thing that you see; the root is here. So if you pick a tree uniformly at random, sit at a vertex and look around you, you will feel like you are sitting here. That's what you will see; that's the limit. Ok, there are more sophisticated models of real-world networks; let me just mention the preferential attachment graph, which is constructed progressively. Vertices come one after the other, and each new vertex connects to vertices that are already present, in a way that favors those which already have a lot of neighbors. So if you enter the network and see someone who knows a lot of people, you have a higher tendency to attach to this guy. This mechanism was proposed to explain the emergence of very large hubs in random graphs. Ok, so this model is quite sophisticated to analyze; there are many things that we do not know about it in general, but we know that the local weak limit exists — it's a certain random tree, which I will not define, found recently. Ok, last example — oh no, not an example, just a remark that you have seen in Charles's talk. Here there are only a few examples, but in general, not every probability measure can arise as a limit in this Benjamini–Schramm sense. Keep in mind that
those probability distributions on rooted graphs satisfy a certain stationarity: the limit sort of remembers the fact that the root was chosen uniformly at random. We will not go into the details — you have seen them in Charles's talk — but really, think of this unimodularity as some sort of stationarity of your random graph: it looks the same from any vertex, that's essentially what it means. Ok. Now, if you look at this list here, one striking thing is that all the limits are trees — random trees. Even though you started with graphs, with cycles, in your random graph models, in the limit you don't see those cycles anymore: you get random trees. Those are only five models, but there are many, many models; take your favorite one, it will have a local weak limit — and if not, then change your model, it's not a good one. You have seen in Charles's talk that it's really true that there will be a limit, at least upon taking a subsequence, if you have a sparse graph — sparse in the sense that the degree sequence is uniformly integrable. So this notion of convergence is so weak that virtually any sparse graph sequence has a limit, up to extraction of a subsequence. Because of this weakness, you might think that the observables this topology captures will be trivial. In fact, this is not the case: a lot of recent work has shown that from the local weak limit of a sequence of graphs you can read off the asymptotics of many complicated parameters, and one of them is the spectrum. So let me recall the result that Charles mentioned — that's probably the most important slide in the talk, but then again, you have already seen it, right?
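To make the notion concrete before the theorem: the simplest local observable is the radius-1 neighborhood, i.e. the degree of the root. A minimal sketch (numpy only) comparing the empirical degree distribution of a sparse Erdős–Rényi graph to the Poisson root degree of its Galton–Watson local weak limit:

```python
import numpy as np
from math import exp, factorial

def degree_vs_poisson(n=2000, c=3.0, kmax=10, seed=3):
    """Truncated total-variation distance between the empirical degree
    distribution of G(n, c/n) and the limiting Poisson(c) distribution."""
    rng = np.random.default_rng(seed)
    upper = rng.random((n, n)) < c / n
    A = np.triu(upper, 1)
    A = A + A.T
    degrees = A.sum(axis=1)
    emp = np.array([np.mean(degrees == k) for k in range(kmax)])
    poi = np.array([exp(-c) * c**k / factorial(k) for k in range(kmax)])
    return 0.5 * np.sum(np.abs(emp - poi))

tv = degree_vs_poisson()
print(tv)   # small, and shrinking as n grows
```

The same idea with larger radii (distribution of the whole ball, up to rooted isomorphism) is exactly the Benjamini–Schramm empirical neighborhood distribution described above.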
If you take a sequence of graphs that converges in this very weak sense, that's enough to ensure that there will be a limit for the spectrum — the histogram of the eigenvalues of your adjacency matrix. And even more is true: the convergence occurs in the strong Kolmogorov–Smirnov sense, which implies that the mass allocated to each atom also converges. So that's something quite strong; this refinement is due to Abért and coauthors. Ok, so I did not tell you yet what this limiting object is, but actually you have seen it in a couple of lectures now. This limit is connected to the spectral theory of self-adjoint operators on infinite-dimensional Hilbert spaces. For an infinite graph, under some appropriate condition, you can still define a spectral measure — not a global spectral measure, but one for each vertex of your graph — and this limiting object is just the expectation of the spectral measure at the root of your random rooted network. One way to define this spectral measure for a fixed graph is like this: it's characterized by its Stieltjes transform. You also have a moment characterization, which is certainly simpler, but I'm mentioning this one because I'll be using it a lot afterwards. Let me just recall that, as Charles explained, in the case where the degrees are unbounded — the Poisson Galton–Watson tree, for example, does not have bounded degrees — the adjacency operator is not a bounded operator, so it's not obvious that this object is well defined. This is a delicate issue: you can build locally finite graphs, actually even locally finite trees, on which this object does not exist. But those are pathological graphs, and in fact, thanks to this stationarity property, this unimodularity, you can show that such pathological graphs will not be seen. So this theorem here really looks like a definitive answer to the question that I mentioned at the beginning: you want
to understand the spectrum of your favorite model of a large graph, and this theorem gives you everything you want. It tells you that everything you want to know about your spectrum is captured by this limiting measure, which is defined directly on the limiting object. And since every sparse random graph sequence has a limit in this sense — well, you could think that's the end of the story. The thing is that, as you have seen in Charles's talks, for some limiting objects you can do explicit computations and get exactly what this measure is; but for random graphs like the ones I've shown, there is no explicit formula. You can already see it from this characterization: to know anything about this object here, you need to know something about the resolvent of the Galton–Watson tree — you have to understand this object. And even if you do, you still only get a handle on the Stieltjes transform of your measure; you still need to invert this transform to get something about the measure itself. It's not at all a constructive formula, in the sense that it's very hard to say anything even about the atomic part or the absolutely continuous part of those objects just from this definition. So rather than a definitive answer to the questions I asked in the beginning, you should really see this theorem as a starting point for a new theory. This theorem is a motivation for building a theory of spectral analysis of unimodular random graphs: take a general unimodular random graph — what can you say about this object here? How can you relate it to the geometry of your graph? You have seen, in Charles's talk, examples of things you can say about the continuous part based on some geometric properties of the graphs. I'll try to say some other things about it. This is too difficult in general, so I will focus on trees. The official reason for focusing on trees is that for all the models that I have shown to
you, the limit is a tree — so it's already a good starting point to look at trees. The truth is that we really do not know how to say anything about the atoms, for example, in the non-tree case. Ok, so for trees there is a miracle that you have already seen twice in lectures today. If you think of a tree, you always have a recursive structure: the tree can be decomposed into a root and a bunch of subtrees, and if you remove the root, the subtrees become disjoint. In terms of the adjacency matrix of the tree, this means the following: say the first entry corresponds to the root; if you remove this row and column, what you see is a block diagonal matrix, where each block is the adjacency matrix of one of the subtrees. That's how the adjacency matrix of T looks, and of course you can subtract z times the identity and the same structure remains. Now, if you want to compute the Stieltjes transform of the spectral measure that you are interested in, you just need to invert this matrix. And there is an explicit expression for inverting a matrix of this form — it's an exercise in linear algebra called the Schur complement formula — which, in the case where the tree is finite, gives you this formula. Remember, this is the thing I'm looking at; I really want to understand it, because that's precisely what defines my spectral measure. And this object here, which is an analytic function of z, is given by a simple recursion like that. You can extend it to infinite trees, as Charles explained; it's a simple exercise to do that. And from this you immediately get a few things. For example, if you take the infinite regular tree, then all those subtrees are just the same, and they are exactly the same as the original tree, so if you plug this in, what you get is a fixed point equation for this quantity: you can solve the equation x = −1/(z + dx). From this you can solve explicitly, and that's how
you get the Kesten–McKay density. That's the only example of a tree where this is explicit. In general — for example, if you think of the Poisson Galton–Watson tree — it's of course not true anymore that those subtrees are identical, but they are still the same in distribution, and they are independent. So instead of a fixed point equation for a single number, what you get is a fixed point equation for the distribution of a random variable. And you can show that this fixed point equation has a unique solution, so you get uniqueness: if you can solve this equation, you know precisely what this object is, and if you take the expectation and then invert the Stieltjes transform, you get everything you want about the spectral measure. If you can extract information from this equation, you should be able to recover completely the spectral distribution of your tree. So you have basically reduced the problem of understanding a complex network to the problem of understanding a fixed point equation for complex-valued random variables. In principle, everything is contained in this equation; of course, it's hard to extract useful information from it. Still, there are things that we can do, and I would like to show you two things that you can extract from this equation about the spectrum of your favorite random tree. In order to do that — since this is a school — let me just show quickly how you can get the atomic mass at any given location from the Stieltjes transform. So suppose that you know the Stieltjes transform of your measure; you want to understand things about your measure, and all you know is this Stieltjes transform, which is the object I have been considering. This is a function of the parameter z, where z takes values in the complex plane minus the real line, and this transform is injective, so in principle you should be able to recover everything from it. To recover the atoms, decompose it into a real part
and an imaginary part, and multiply everything by epsilon — that's just an identity. Now send epsilon to zero inside this integral. Everything tends to zero — there is an epsilon everywhere: this first term, the real part, is less than one half; the second term, the imaginary part, is less than one and tends to zero — except at a single point, when lambda equals lambda zero. There this term simply disappears, and what remains is just one. So by the dominated convergence theorem, you converge precisely to the mass of the atom. The message here is that if you understand the fixed point equation of the previous slide near a given point of the real line, then you will understand the mass of the atom at that point — that's the key. And there is one specific location where this can be done explicitly: the case where lambda zero equals zero. In that case there is a simplification by symmetry which helps a lot, and this allows you to recover completely the mass of the atom at zero. That's joint work with Charles from quite a while ago, but I just wanted to show you an explicit formula, to convince you that you can say explicit things about your spectrum. Take your favorite degree distribution — remember that this Galton–Watson tree is the limit of the general random graph model where you choose your degrees arbitrarily, as long as they converge to this distribution π. Now take the generating function of your degree distribution; that's the parameter of your model. The mass of the atom at zero is given explicitly by this variational problem in terms of the generating series of your degree distribution. That looks like a complicated formula, but if you specialize it to the Erdős–Rényi model — remember that there the limiting degree distribution is Poisson, so π is quite simple — you get this: the dimension of the kernel of your adjacency matrix
grows linearly with n, and the constant is exactly this quantity. So this answers one of the questions that I presented in the introduction. This formula was actually conjectured by physicists, Michel Bauer and Olivier Golinelli, and the general question of the size of the kernel of sparse random matrices was raised by Costello and Vu, who did the case where the graph is dense. In the dense case they managed to determine when the matrix is invertible and when it is not — so that's much stronger than this — but they asked for a formula in the sparse case. So that's one of the things that you get out of these recursive distributional equations. Can you get more than that? What I would like to present now is a more general result, which allows you to understand the mass of an atom at any location lambda. You will not have an explicit formula — that's out of reach for now — but still you will have a lot of information on the eigenspace. You have an infinite graph — think of the Galton–Watson tree — and on this graph you want to solve the eigenfunction equation and describe the eigenspace. If you want to do this on a finite graph, there is something nice that you can do, just a simple observation. Look at the set S which is the union of all the possible supports of eigenvectors associated with the eigenvalue lambda. Your graph is something large and complicated, but suppose that the eigenvectors live only on this part — that's your set S. This just means that outside this set, every eigenfunction vanishes; that's exactly what it means. Now, if every eigenfunction vanishes outside S, then any eigenfunction must actually solve the eigenvalue equation on the restriction to the connected components of S, because it's zero everywhere else: you can just as well forget about the rest of the graph and solve the equation directly on those small pieces. So it's natural to look at this set, because it allows you to reduce your problem to a bunch of smaller problems. Now, there is one miracle that
happens for this set in the case of trees. Think of a finite tree, so now this big graph here is just a tree, or a forest. In that case the eigenvalue equation on each restricted component has a unique solution, up to proportionality of course. So if you want to describe a general eigenfunction on a tree, that is quite easy: on each one of those components there is a unique eigenfunction, f1 through fk, if you forget about the rest of the graph. Now, a general eigenvector has to vanish everywhere else, so it is just a linear combination of those unique solutions on the components, except that you have constraints: you may have nodes on the boundary of two or more components. There, the value has to be zero, because it is outside the support S, but the eigenvalue equation still has to hold, so this value here and this value there must cancel each other when you sum. So you get a linear combination subject to one constraint per vertex of the boundary of S, the set of vertices which are not in S but are neighbors of some vertex of S: at every such x, the sum over the neighbors has to be zero. And that is an exact description of your eigenspace in the case of a finite tree. This was well known; it is just a lemma in spectral graph theory that for trees you have this nice decomposition. In particular, if you want the dimension of the eigenspace, which is the multiplicity of the eigenvalue lambda, that is just the number of components minus the number of constraints, so you get this nice formula for the dimension of the eigenspace. Again, that is for finite trees, so it is an easy lemma. Let me just rewrite this number of components: because we are on a tree or a forest, the number of components is just the number of vertices minus the number of edges, so I can rewrite the formula in this simple form. Ok, now the main result that I wanted to present is the following. If you look now, instead of finite trees,
at infinite trees, then there is no reason at all, a priori, that those supports are finite; you could have eigenvectors with infinite support. But what happens, quite miraculously, is that on a unimodular random tree, almost surely the connected components of this support are finite. So somehow, because of stationarity, the eigenfunction equation splits into finite parts even on your infinite Galton-Watson tree. That was the first surprise, and from this, plus a bit of work on the recursion that I showed you, you can get the analog of this formula, but now for a unimodular random tree. The analog of the formula is this: this is just the multiplicity of the atom at lambda once you normalize by the number of vertices, and those terms are the natural infinite counterparts of the finite ones. So that is the general formula: if you want to compute the mass of an atom at a given location, it is given by this. There is a big caveat in this result, which is that if you want to compute the mass at lambda, you still have to determine this set S. It is simpler because it splits into finite components, but I did not say anything about those finite components, and it is actually quite hard to understand the structure of these finite blocks. So it is not really an explicit formula, but what I want to convince you of is that you can extract a lot of information just from this formula and some simple information on your graph, in particular the isoperimetric profile of your graph. That is the content of the next few minutes. Ok, so let me mention just one simple consequence of the previous result. Even on an infinite tree, understanding the mass of an atom at lambda boils down to understanding finite subgraphs: your eigenfunction has to be an eigenfunction of a finite tree. One consequence of this, which is surprising at first, is that the set of atoms of your measure cannot be anything: it has to be contained in the set of eigenvalues of finite trees. So even if your graphs are not trees
but are, say, Erdős–Rényi graphs, in the limit your atom must be an eigenvalue of a finite tree. So you have an algebraic constraint, and as Charles explained to you, since the characteristic polynomial of an integer matrix is a monic integer polynomial, here with real roots, you have constraints on this set. And you can actually describe this set explicitly; this is a theorem that I obtained a few years ago: this set is precisely the set of totally real algebraic integers, that is, the roots of real-rooted monic integer polynomials. One direction is obvious; the other really says that any such number can be realized as the eigenvalue of some finite tree. And because of that result, you immediately get the complete description of the atoms of many of the limits that I presented to you, in particular the Poisson Galton-Watson tree, which is the limit of the Erdős–Rényi model. Why is that? Simply because, as Charles explained, the other inclusion is easy: for the Poisson Galton-Watson tree there is a positive probability of realizing any given finite tree, so you will see eigenvalues of every finite tree in the limit, and since you have the other implication, you get the precise determination of this set of atoms. The same is actually true for any Galton-Watson tree whose degree distribution has full support; it is just the same simple argument. And as Charles explained, you do not really need the whole tree to be finite to realize an eigenvalue. All you need, thanks to the construction that he described, even if you have an infinite graph, is to be able to realize any fixed finite tree pending at the root as a subtree, with positive probability; then you can create an atom at any eigenvalue of a finite tree. Therefore you also get, say, the unimodular Galton-Watson tree conditioned on non-extinction, or the infinite skeleton tree: for all those models the pure point spectrum is completely determined.
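As a quick numerical sketch of this algebraic constraint (my own illustration, not part of the talk): the characteristic polynomial of a tree's adjacency matrix is monic with integer coefficients, and since the matrix is symmetric all its roots are real, so every tree eigenvalue is a totally real algebraic integer. For the path on 3 vertices, with eigenvalues 0 and plus or minus the square root of 2:

```python
import numpy as np

# Adjacency matrix of the path P_3 (vertices 0 - 1 - 2).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

eigs = np.linalg.eigvalsh(A)            # all real, since A is symmetric
coeffs = np.poly(eigs)                  # characteristic polynomial from its roots
int_coeffs = [round(c) for c in coeffs]

# x^3 - 2x: monic, integer coefficients, roots 0 and +-sqrt(2)
```

Here the rounding only removes floating-point noise: the exact coefficients are integers because the matrix has integer entries.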
It is precisely this algebraic set, and that is the maximum possible set of atoms that you can have in any unimodular random graph. Ok, can you say more than that? Yes, you can. We have not used the explicit formula at all here; what I just showed only uses the first part. So what can we say about this formula? First of all, by unimodularity, you can set up a specific mass transport, choosing your transport function well, so that the formula I showed rewrites as this expression here; that is easy, just a simple rewriting. The first term here is just the number of components, because you divide by the size of the components, and if you think of a finite graph, the second would be the boundary term. Ok, so this is still something that we do not understand well, because we do not know what this subgraph looks like: the component of the root shows up here and here, and you do not know what this object is, so how to compute this is not clear right now. But let us make some simple assumptions. Suppose that the graph is somehow smooth: the degrees are between a small constant delta and a large constant Delta, with the minimum degree at least 3; suppose that your tree is like that. Again, for any model of large random graphs, provided you restrict the degrees in this way, you will get a limit like that. Now if you have this, take your formula and look at the sum here. Of course you can lower bound the sum by the size of the boundary, which is the number of terms in the sum, divided by Delta. I still do not know anything about the size of the boundary, but in any tree with minimum degree at least 3, the boundary of any finite set must satisfy this inequality. That is an easy induction on the size of the set: whenever you add a new point to your set, you must add at least delta edges. Again, it is still problematic that we do not know what the component looks like, but recall that this component has to satisfy the eigenvalue equation for this lambda.
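One way to reconstruct the boundary inequality just mentioned (my own reconstruction, by edge counting rather than by the induction from the talk): for a finite vertex set S in a tree with minimum degree delta,

```latex
% S a finite vertex set in a tree with minimum degree \delta.
% The subgraph induced on S is a forest, so it has at most |S|-1 edges.
% Every edge endpoint in S is either internal (counted twice) or boundary:
\[
  |\partial_E S| \;=\; \sum_{v \in S} \deg(v) - 2\,e(S)
  \;\ge\; \delta\,|S| - 2\,(|S|-1)
  \;=\; (\delta - 2)\,|S| + 2 ,
\]
% so for \delta \ge 3 the edge boundary grows at least linearly in |S|.
```

This is why the minimum-degree-3 assumption matters: for delta equal to 2 the linear term vanishes and the bound becomes useless, which is exactly the long-path problem discussed below for Galton-Watson trees.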
And not every tree can do that. So let me bound this simply by tau-lambda, the minimum possible size of a tree with eigenvalue lambda. Then you get something which no longer depends on the set that you do not understand; it is a completely deterministic bound. Plug this bound into the formula, and you immediately get that if you want this part to be strictly positive, you have a constraint on tau-lambda, which is a sort of complexity measure of your algebraic integer. So in any unimodular random tree with degrees like that, you have only a finite list of possible eigenvalues that can appear as atoms, and those are the eigenvalues which are not too sophisticated: they should be eigenvalues of small trees, of size at most roughly Delta over delta. So this was a surprise, that this set is actually finite. Remember that if you look at the case where pi has infinite support, then you have a dense set of spikes, one at every totally real algebraic integer; but as soon as you restrict the degrees a bit, you get just a finite number of spikes in your spectrum. And actually this set is quite small. If you take this ratio to be at most 2, meaning that the ratio of maximum to minimum degree is at most 2, then the bound says that your eigenvalue should be an eigenvalue of a tree of size 1, so the only possibility is 0. The only possible spike is 0, if there is one at all, and we have a formula to compute its mass explicitly; so for every tree like that you have an explicit determination of the atoms. Of course, if instead of 2 you start to allow more variability in your degrees, say a ratio of 3, then you are restricted to two kinds of eigenvalues, the eigenvalues of trees with 1 or 2 vertices, and these are just 0 and plus or minus 1. So you get a slightly larger set, and progressively, as you increase this quantity, you get more and more eigenvalues in your list, but it is still a finite set. And note that this relies
only on a very crude bound: I use nothing about my tree except information on the minimum and maximum degree. So it seems like there is much more that you could extract from this. I have not done so yet, but I am sure there are things you can say in specific models, like the Galton-Watson model, for example. Let me just mention one extension of this, to the isoperimetric profile. Instead of using this trivial inequality with the degrees, you can replace delta minus 2 with this constant here, the isoperimetric constant of your tree, which is what it is, and then you get, by the same argument, that your set of possible atoms is limited in terms of this isoperimetric constant of your tree. That allows you to somewhat relax the condition that the minimum degree should be three. But note that it does not allow you to relax this condition for Galton-Watson trees: for a Galton-Watson tree, as soon as you allow degree-2 vertices, you will see, with positive probability, extremely long paths with only degree-2 vertices, and these will make your isoperimetric constant go to 0, so it does not help anymore for Galton-Watson trees. But still, there is a variant of this isoperimetric constant which works well to overcome this issue, and that is the anchored isoperimetric constant. It is the same kind of quantity, but you force your set to contain the root. In your infinite Galton-Watson tree with degree-2 vertices, you will have a lot of those arbitrarily long paths, but they will be quite far away from the root: if you want to reach a very long path like that, you have to go far from the root, and since you have to take all the vertices that connect this path to the root, you will collect a lot of boundary edges along the way. So there is hope that this quantity is strictly positive even in the degree-2 case, and that is actually true: it is a result of Chen and Yuval Peres from a few years ago that Galton-Watson trees do have this anchored isoperimetric
constant bounded away from 0. So you plug this into the result, and you get again that for unimodular Galton-Watson trees, even if you allow degree-2 vertices, you still have a finite possible list of atoms. That is the result by Yuval that I mentioned. Ok, so here is the final corollary that I wanted to mention. You want to understand the set of atoms in the limiting spectrum of your favorite model of graphs; this graph converges to a tree, hopefully, and in many cases to a unimodular Galton-Watson tree. Suppose that the degrees are bounded by some constant. Then you have this nice dichotomy: either you have leaves in the tree or you do not. If you do not have leaves in the tree, so the probability that the degree is 1 is zero, then the set of atoms is finite. On the other hand, if you do allow leaves in your Galton-Watson tree, then the atomic support is quite large; it is actually the largest possible support for a tree with this degree constraint: it is dense in this set. So you have this nice dichotomy between finite and huge. Ok, let me mention that this dichotomy was conjectured by Balint, Charles and Arnab a few years ago; this is the proof of it. Again, this only uses very limited information about your graph, just the isoperimetric constant; I am sure there are more things to extract from this formula if you restrict to specific models. Let me mention three specific open problems that may be interesting to you. Of course, the general question is: take a general unimodular random graph, what can you say about its atoms? An even harder question would be to look at the spectral measure itself, not this average but the random object which is the spectral measure at the root of your random graph, and say things about that; as we have seen, this is a much, much more difficult task. Can you say something about its support? That is something too ambitious, I would say, but let me give you three specific problems that are, I think, more
reasonable. First: can you characterize the degree distributions for which there is no atom at all? That is a simple question. There are criteria: as we have seen in Charles's talk, there is a result by Simon and Carter which says that if pi is sufficiently close to a Dirac mass, then you will not have any atom (the result is actually much stronger than that); and you also have the criterion that Charles explained, which is that if you can construct a line ensemble on your unimodular tree which covers every vertex, then again you have no atom at all. But it is not clear which degree distributions satisfy this property; it is an open question in general, and again, the formula that I gave might shed some light on this. Question: Just to be sure, didn't you say in the proof that in the case of a line ensemble you have some continuous spectrum? Answer: Yes, you do have some continuous spectrum, but you may also have atoms in addition to that, and the question here is whether you can determine those distributions for which there is no atom at all. Question: Right, I thought you just said that a line ensemble does not rule out atoms. Answer: There is a quantitative bound that he gave: if you can prove that there is a line ensemble that covers every vertex, so if your tree is Hamiltonian in this sense, then it would satisfy this; but which Galton-Watson trees are Hamiltonian, beyond regular trees? That is not obvious. Not clear. Ok. Second problem, a simple limit: remember this infinite skeleton tree, which is the limit of large random trees. You know that there are spikes everywhere: at every totally real algebraic integer there is a spike. Is there something else?
Completely open. Simulations suggest that there is something else, but the result of Charles about trees with at least two ends, the result that he mentioned, does not apply here, because the skeleton tree has only one end. So the answer here is not clear. And to be even more specific, take the Poisson Galton-Watson tree. When c is larger than one, you know that there is something else besides the atoms, but this something else could be singular continuous or absolutely continuous; can you say things about the absolutely continuous part? There are conjectures on that by physicists, but this is open. Ok, so I will stop here. Thanks. Thanks a lot for the beautiful lecture. The second lecture will be tomorrow. Are there questions? Question: Is the limit of the preferential attachment graph tractable enough for somebody to know the spectrum? Answer: No, but still, it is a tree, so we can apply the general criteria that we are starting to build for trees. It is a tree, but apart from that the limit is quite nasty; again, even in simple cases we cannot say much about the spectrum, so in this complicated case I guess it will be rough, and it is already the case for much nicer examples. Sure. Question: How do you see from simulations that you should guess a continuous part, under certain conditions? Answer: So, it is clear that you do not want to decide whether the measure is continuous just by looking at the histogram, but there is a nice computable thing that you can do. Suppose that your sequence converges to a limit, and that this limit has no continuous part, it is pure point. Then, up to a small fraction of the mass, say epsilon, the limit is carried by its first k atoms: you can choose k large enough so that they carry mass at least 1 minus epsilon. Then remember that this convergence also holds for the atoms, so you will see the exact same atoms in the finite graph, and the mass carried by those k eigenvalues converges to something which is at least 1 minus epsilon of the mass for n large enough. And it tells you that almost all the mass is contained
in a finite number of eigenvalues: the number of distinct eigenvalues is at most k plus epsilon times n, and since epsilon is arbitrarily small, the number of distinct eigenvalues has to be negligible with respect to the size of the graph. Now, if you take the characteristic polynomial of the adjacency matrix of g, computing the number of distinct eigenvalues is something that you can do without solving for the roots: you just look at the greatest common divisor of the characteristic polynomial and its derivative. And it looks like there is a fairly large proportion of distinct eigenvalues in this uniform model, seemingly more than 0.5 n distinct eigenvalues in the tree. So it looks like there is something else, but proving it is another matter.
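The gcd trick just described can be sketched as follows (a minimal symbolic sketch using sympy, my own illustration rather than the speaker's actual code): if P is the characteristic polynomial, then P divided by gcd(P, P') has exactly the distinct eigenvalues as roots, so the count is deg(P) minus deg(gcd(P, P')).

```python
from sympy import Matrix, Symbol, degree, gcd

x = Symbol("x")

def n_distinct_eigenvalues(adj):
    """Number of distinct eigenvalues of an integer matrix, computed
    without solving for the roots: deg(P) - deg(gcd(P, P')),
    where P is the characteristic polynomial."""
    p = Matrix(adj).charpoly(x).as_expr()
    return int(degree(p, x) - degree(gcd(p, p.diff(x)), x))

# Star K_{1,3}: eigenvalues -sqrt(3), 0 (with multiplicity 2), sqrt(3),
# so 3 distinct values.  Its char. poly is x^4 - 3x^2, gcd with the
# derivative is x, and 4 - 1 = 3.
star = [[0, 1, 1, 1],
        [1, 0, 0, 0],
        [1, 0, 0, 0],
        [1, 0, 0, 0]]
```

The same computation scales to the adjacency matrix of a large random tree, which is how one could estimate the proportion of distinct eigenvalues mentioned above.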