So, my main domain of interest is quantum chaos, which is a part of PDE theory, or at least the study of quantum models whose classical limit is chaotic. And in this framework, we have encountered a slightly strange feature of the spectra, which has been called a fractal Weyl law, and which I will try to explain. Then, in the second part of the talk, I will play a bit with toy models, and this type of toy model involves specific graphs, directed networks, and that is what I will try to explain. So, this application of these ideas from quantum chaos to directed graphs is a sort of problem that I asked myself a few years ago, and anyway, it developed quite quickly. It was developed together with two Master's students, Quentin Gendron and Justin Trias. First of all, yes, my work on the fractal Weyl law for quantum chaotic systems mostly started with the help of Maciej Zworski, from Berkeley. So there was no graph involved, no network involved at that time. The little toy model with the graphs came up with some Master's students, and nowadays I also have a postdoc, Mostafa Sabri, in Orsay, with whom I plan to work on this a little bit. Okay, so on this picture, on the left side, you can see what is called the Baker's map. It is a transformation on a square, or on a torus if you want. It's a chaotic transformation, and it's an open transformation because some of the points disappear, okay. And so this will make up a first series of toy models, and then from these toy models we will end up with toy models described in terms of directed graphs. On the right is an example, for the map on the left. So I will try to tell you about this project.
So, I don't know if this has any connection with Google matrix analysis; that's probably a question Klaus Frahm, from the talk this morning, could answer. But, well, I will mostly study this specific model and tell you what we'll try to do with it. Okay, so where does this idea of a fractal Weyl law come from? It comes from scattering problems in PDEs, in wave mechanics if you want. So you study the propagation of waves outside some obstacles: you have obstacles of various shapes, and outside of these obstacles you propagate waves. This is described by the wave equation, this equation here, with some initial conditions. And you are interested in the long-time behavior of the waves: how do they propagate, what happens at long times? If you focus on what happens at long times in some bounded region, then, since this is a linear equation, you have to look at the spectrum; this is somehow the first reflex one should have. And when you look at the spectrum of the Laplacian, on the L2 space of the complement of the obstacles, the spectrum is the real line, let's say. When I say the spectrum, I mean here that I take the square root of the eigenvalue: the spectrum corresponds to the resolvent of the Laplacian plus lambda squared, for some reason. So if you look at the spectrum this way, there is a continuous spectrum here on the real line, but in general you can take the Green's function, the resolvent of the operator, and continue it analytically across the real axis; and in this analytic continuation, this meromorphic continuation, you find poles of finite multiplicities, which are called resonances.
And these resonances, which I call lambda j, impact the long-time behavior of the wave, in this way: in some sense, there is an expansion in terms of these resonances. So here I just look at the resonances above a certain line; these resonances impact the long-time behavior, and this is why we are interested in the distribution of these complex values, these generalized eigenvalues — they can be seen as generalized eigenvalues of the Laplacian. So this is where they pop up, and for this reason we are interested in counting, or at least studying the distribution of, these resonances, especially in the high-frequency limit: you go along the real axis, you take little boxes below the real axis, and you want to count, for instance, how many resonances are in such a box, because that number is important when you try to write down such a high-frequency expansion. And so the question came up of counting these objects. In general there is no explicit expression, not even an approximate expression, for these values, but sometimes you can still get some information about the counting function. And this is where the fractal Weyl law came across. So the type of question we can ask: how many are there, or is there a gap? We are sometimes also interested in the presence of a gap, that is, a band here without any resonances. And in this high-frequency limit, it is natural to connect wave mechanics with classical mechanics, with ray dynamics: you follow the classical rays — the rays of light, if you want — which bounce back and forth between the obstacles and then escape to infinity. So we have to understand the dynamics of rays, because high frequency means that you can describe things in terms of rays.
So, for this long-time study, it is natural to consider the trapped rays, that is, the rays which remain trapped forever between these obstacles. The idea is that, in the high-frequency limit, these trapped rays will impact the distribution of the resonances. And what people came up with was to conjecture such a dependence in the counting function: if we count the resonances in such a box, their number should grow like a certain power of lambda, the frequency. But this power is a fractional power — a fractal power, if you want — and it should be directly connected with the fractal dimension of the set of trapped trajectories. Here, for instance, I plot the example of three convex obstacles. In this case, one can show that the set of trapped trajectories is a fractal set, with a certain fractal dimension, and this dimension should appear in the scaling law of the counting function. So far, in most models, people could only show an upper bound — the number of resonances is bounded above by such a power — but no lower bound. So originally we were trying to find models for which we have a better grasp on this scaling, in particular to also prove a lower bound on this number of resonances. And because this is a non-self-adjoint problem, proving spectral lower bounds is in general much harder than proving upper bounds. For this reason — these models being a bit too complicated for us — we looked at toy models, and this is where the baker's map came across. The baker's transform is a well-known transformation on the unit square: you split the unit square into three vertical slices, and you squeeze each slice.
You squeeze vertically, elongate horizontally, and then stack the slices again one above the other. So this is a well-known toy model for classically chaotic, classically hyperbolic systems. And because we wanted to mimic an open system, where particles can escape to infinity, we decided that after the transformation one third of the particles would escape to infinity: we simply forget all the particles which land in the middle slice, we erase them, we send them to infinity. So this gives what we call an open map, or open baker's map — open because part of the particles go to infinity. And this map can be very easily described, first by these equations, but also, very easily, if you use base-3 numerology. That is, you write the coordinates p and q — q is the horizontal one and p the vertical one — in base 3. So epsilon-1, epsilon-2 and so on are digits in base 3 — trits, if you want — taking the values 0, 1, 2. Let's write q by a sequence epsilon-1, epsilon-2, epsilon-3, et cetera, to infinity, and p by epsilon-prime-1, epsilon-prime-2, epsilon-prime-3: in this direction, this is the ternary decomposition of p, and here, the ternary decomposition of q. And you see that in this representation the map is very easy: it is just a shift — you push the comma to the right — and it is nonzero, you keep the point, only provided epsilon-1 is different from 1, because epsilon-1 equal to 1 corresponds to these points in the middle slice. So the rule is very simple: shift, and keep only the symbol sequences such that epsilon-1 is different from 1.
So it's a very easy description, and this way you can easily tell which points never go to infinity, which points are trapped forever in the future. This corresponds to the color code here: each color corresponds to the points which escape to infinity at time 1, time 2, time 3, and so on. The yellow ones escape at the first step; the blue ones escape at time 2; the green ones at time 3, etc. And you see that what remains, after you have removed all these points, are the points which are trapped forever in the future, which I call gamma-minus, for some reason. These points are exactly those whose ternary decomposition of q does not contain the digit 1: in ternary, q is made only of 0s and 2s. So this trapped set is a fractal set: it is the product of the standard one-third Cantor set with the vertical interval. And the dimension of this Cantor set is well known: it is log 2 over log 3 — the Hausdorff dimension, or the box dimension, whichever you want. So that's a model where we control very well what's going on at the classical level, at the level of this classical map. Now, this model has been quantized. What does it mean to quantize such a map? There are many ways to define what quantization is, so I just ask you to believe what I will say: there is an appropriate, let's say reasonable, way to quantize this model. Because we are on a phase space which is compact — the square — the quantization lives on a discrete space of dimension N, for some integer N: C^N is the quantum space. And the high-frequency limit will correspond to taking N very large.
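To make the symbolic description concrete, here is a small sketch of the open map and its trapped-set rule, using exact rational arithmetic so that the ternary digits behave exactly as in the shift picture (the function name is mine, not from the talk):

```python
from fractions import Fraction

def open_baker_step(q, p):
    """One step of the open triadic baker map on the unit square.
    Returns None when the point falls in the middle vertical slice,
    i.e. when the leading ternary digit of q equals 1 (it escapes)."""
    eps = int(3 * q)              # leading ternary digit of q
    if eps == 1:
        return None
    # horizontal stretch q -> 3q - eps, vertical squeeze p -> (p + eps)/3
    return 3 * q - eps, (p + eps) / 3

# a q whose ternary digits are only 0s and 2s is trapped forever:
digits = [0, 2, 2, 0, 2, 0, 2, 2]
q = sum(d * Fraction(1, 3**n) for n, d in enumerate(digits, 1))
point = (q, Fraction(0))
for _ in range(20):
    point = open_baker_step(*point)
    assert point is not None      # never escapes

# a q containing the ternary digit 1 escapes once that digit reaches the front:
assert open_baker_step(Fraction(1, 2), Fraction(0)) is None   # 1/2 = 0.111... base 3
```

The `Fraction` arithmetic is just a convenience: it keeps the digit-shift exact, so the escape rule "erase the point when epsilon-1 equals 1" can be checked literally.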
And in this representation, you can associate several bases of states: a basis of states localized in position, and a basis of states localized in momentum, that is, along the vertical coordinate. The two are connected with each other through the discrete Fourier transform. So there are rules for this discrete quantum mechanics, which we did not invent. And using these quantum kinematics, Balazs and Voros, almost 30 years ago, showed that this matrix is a good, somehow natural, quantization of the baker's map. So here, this is the inverse discrete Fourier transform of dimension N; here, a Fourier transform of dimension N/3; in the middle, a block of size N/3, which for the open map is set to zero; and here another Fourier transform. So this factor is block diagonal, and this one is just the inverse Fourier transform. When you plot this matrix — I'm sorry, the picture does not show very well, unfortunately; here I just plot the absolute values of its entries — you see that it is concentrated along these two tilted lines. Away from these two lines the matrix elements are smaller; not very small, but they decay away from the lines. So this matrix seems very simple: it is just a bunch of Fourier matrices multiplied with one another. Unfortunately, there is no way to compute its spectrum analytically; essentially, there is not much one can say about it. If you look at the eigenvalues, they are complex, and you can show that they live inside the unit disk, because both factors are subunitary, so the product is subunitary: the eigenvalues must have modulus smaller than one.
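As a rough sketch of this construction — assuming the standard open Balazs–Voros convention with the middle block set to zero; the exact phase conventions used in the talk may differ — the matrix can be assembled from discrete Fourier transforms:

```python
import numpy as np

def open_baker_matrix(N):
    """Sketch of the open Balazs-Voros baker quantization:
    B_N = F_N^{-1} . diag(F_M, 0, F_M), with M = N/3.
    The zero middle block implements the opening (escaping slice)."""
    assert N % 3 == 0
    M = N // 3
    F_N = np.fft.fft(np.eye(N), norm="ortho")   # unitary DFT matrix, size N
    F_M = np.fft.fft(np.eye(M), norm="ortho")   # unitary DFT matrix, size N/3
    D = np.zeros((N, N), dtype=complex)
    D[:M, :M] = F_M
    D[2 * M:, 2 * M:] = F_M                     # middle block stays zero
    return F_N.conj().T @ D

B = open_baker_matrix(9)
eigs = np.linalg.eigvals(B)
assert np.all(np.abs(eigs) <= 1 + 1e-9)   # subunitary: spectrum in the unit disk
assert np.linalg.matrix_rank(B) == 6      # rank 2N/3, from the two surviving blocks
```

The two asserts check exactly the properties quoted in the talk: the product of a unitary with a subunitary block matrix keeps all eigenvalues inside the closed unit disk, and the zero middle block forces the rank down to 2N/3.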
And the idea of this model is that we hope it behaves like the previous wave model. What corresponds to high frequency for the waves is here large values of N, the quantum dimension: the quantum parameter should be large. So we look at these matrices for larger and larger N and see how the spectrum behaves. The connection between this spectrum and the resonances of the Hamiltonian case: we should compare these values, the spectrum of the baker's matrix, with the values exponential of minus the resonances. So where before we were counting objects in some box below the real axis, now we count objects in annuli: we look at how many eigenvalues have absolute value greater than some number r. So r corresponds to what was the depth of the box before. We try to count the eigenvalues of modulus larger than r, and this is our notation for the counting function. So the question: do we have such a scaling law? N is the size of the big matrix; this matrix has rank 2N/3, so of order N. And here we expect — this would be the fractal Weyl law — that the number of reasonably large eigenvalues grows like N to the nu, where nu is this log 2 over log 3. Which means that most of the eigenvalues of the matrix sit very close to the origin; they are very, very small. That is what we see here on the plot, and this is what we are trying to show: although the matrix has rank 2N/3, of order N, the significant eigenvalues — the non-small, non-negligible ones — are much fewer.
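Written out, the scaling law being tested is the following (this is just the statement from the talk put into a formula):

```latex
% fractal Weyl law for the quantized open baker map:
% the number of eigenvalues of modulus at least r grows like a
% fractional power of the quantum dimension N
N_{B_N}(r) \;:=\; \#\,\{\lambda \in \operatorname{Spec}(B_N) : |\lambda| \ge r\}
\;\sim\; c(r)\, N^{\nu},
\qquad \nu \;=\; \frac{\log 2}{\log 3} \;\approx\; 0.63 ,
```

where $c(r)$ is the unknown profile function discussed next, and $\nu$ is the dimension of the one-third Cantor set underlying the trapped set.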
And "much fewer" means a smaller power of N as N goes to infinity. This is the content of the fractal Weyl law in the setting of this quantum map. And c(r) here would be some profile function, which we don't know a priori what to expect. So we tested this type of asymptotics. Let me show here: I plot the number of eigenvalues of modulus larger than r; here this is r, the modulus. All these plots correspond to different values of N, corresponding to these different colors. So here I plot N — or maybe N/3, I don't remember exactly — going from 81 to 5000. These are not big numbers, but we did this about 15 years ago, with Zworski. So here is the raw picture: for each N, we look at the number of eigenvalues of modulus above r. And then here we rescale the curves by this theoretical power, this N to the nu. It's not perfect, of course; there are still a lot of fluctuations, but it's more reasonable than before: we can see a sort of tendency for these curves to lie not too far from each other. The color code here is not arbitrary: there is a sort of arithmetic to these maps, because they are associated with multiplication by three. Between the colors here — 100, 300, 900, et cetera, each time you multiply by three — the curves are somehow not too far from each other. So there is a scaling going on across such values of N, but it is not perfect. I just want to emphasize that this type of test of the fractal Weyl law was also done for other types of open chaotic maps, and it led to similar results; sometimes the curves are better, sometimes worse — it depends.
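A minimal version of this numerical test — with the same caveat as before, namely that I assume the block-Fourier form with a zero middle block as the quantization, which may differ in phase conventions from the matrices actually used — rescales the counting function by N^nu:

```python
import numpy as np

def open_baker_matrix(N):
    """Block-Fourier sketch of the open quantized baker (zero middle block)."""
    M = N // 3
    F_N = np.fft.fft(np.eye(N), norm="ortho")
    F_M = np.fft.fft(np.eye(M), norm="ortho")
    D = np.zeros((N, N), dtype=complex)
    D[:M, :M] = F_M
    D[2 * M:, 2 * M:] = F_M
    return F_N.conj().T @ D

nu = np.log(2) / np.log(3)
for N in (81, 243, 729):
    mods = np.abs(np.linalg.eigvals(open_baker_matrix(N)))
    for r in (0.3, 0.5, 0.7):
        # rescaled counting function: c(r) ~ #{|lambda| > r} / N^nu;
        # the fractal Weyl law predicts these numbers stay comparable
        # across N at fixed r
        print(N, r, (mods > r).sum() / N**nu)
```

This reproduces the rescaling of the curves described above: if the fractal Weyl law holds, the printed ratio at fixed r should fluctuate around a common profile value as N increases along powers of three.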
Okay, but this was numerical, most of the time. And Dima Shepelyansky also did it — I don't remember exactly for which map; it was an opened version of the standard map, maybe, or something like that. So here the ingredients were: start from a chaotic system, quantize it as an open chaotic system, and look at the spectral distribution of the quantized map in the large-N limit — not high frequency here, but large N — trying to find some strange, fractal scaling for the number of nontrivial eigenvalues. So we had no proof, and in most models people can only get upper bounds. We were therefore trying to find a model for which it is possible to get lower bounds. And what we did: as I said, the matrix for this baker's map looked like this. We just decided to remove all the entries which are not exactly on these tilted lines — keep only the black ones and kill the other ones, even though they are not so small; they should be gray here instead of white, but on the slide we can only see white. Let's erase all these gray entries and keep only what we could call the skeleton of the matrix. The skeleton is just given by a discretization of the map q goes to 3q on this interval: you remember that the horizontal description of the baker's map was q goes to 3q, and when you look at the graph of this map there are tilted diagonals, like this. Our matrix is just a discretization of these tilted diagonals. So this is now our toy model of the toy model, if you want: replace the full quantum matrix by a skeleton matrix S_N, and keep only these values — including the phases.
So that's the toy model, and we are quite lucky with it if we restrict ourselves to specific values of N, namely powers of three: take N = 3^k, for k an arbitrary integer. For this type of numbers, the matrix looks like this, with explicit values. Of course, there is still a big chunk of zero columns in the middle, but then here you have these tilted diagonals, and you have some phases, which are the same as in the previous baker's matrix. So we have a matrix with a skeleton — some structure, let's say, not quite a topology — plus some phases. And if we look at these phases for these values of N, we are quite lucky, because at the quantum level there is also a way to use this base-3 decomposition and the fact that we have a shift. Namely, for these values of the quantum dimension, the states can be described by decomposing the big space into a tensor power: I am just writing the fact that C^N is the same as (C^3) to the tensor power k. And each of the basis states Q_j, where j goes from 0 to N minus 1, can be written — similarly to the classical case, by writing the integer j in base 3 — as a tensor product of elementary states in C^3. So it is just a tensor decomposition. And what we realized is that our matrix, for these values of N, acts very nicely on this tensor decomposition: it acts through a shift, together with a little transformation of the first digit, which is put at the end. So any basis state in this tensor space can be written this way.
And when you act on such a tensor product, you just shift the factors, and this first one is put at the end, after application of this little matrix. So this is very simple. And once you have this very nice tensor product structure, you see that the k-th power of the matrix is local in this tensor product: on each tensor factor, you apply this matrix omega-3. Thanks to this, you can directly write down the eigenvalues of the matrix: no matter how large k is, the eigenvalues of the big matrix are given in terms of the eigenvalues of this three-by-three matrix, as geometric means. So here I plot, for instance for k equal to 10 and k equal to 15, the eigenvalues of this matrix. The big matrix S_N has a large kernel — actually a generalized kernel — of dimension 3^k minus 2^k, and the nontrivial spectrum, of dimension exactly 2^k, is given by these eigenvalues here. It is restricted to an annulus between lambda-minus and lambda-plus, and lambda-minus and lambda-plus are nonzero for this matrix, as you can check. So for this very simple, very specific model, you find such a fractal Weyl law — an exact one: you know exactly how the eigenvalues are distributed, because you have explicit expressions. So it is really a nice toy model; anyway, it was the first example where we had a rigorous proof of this fractal Weyl law. On the other hand, the spectrum is very regular: it is given by this expression, so it is not a random-looking spectrum like the one we had before, and it depends very strongly on this small matrix.
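The mechanism can be checked numerically on a generic rank-two 3×3 block — the matrix below is a hypothetical stand-in carrying the omega phase, not necessarily the exact block of the talk. If the k-th power of the skeleton acts as a k-fold tensor power of a small block, then its eigenvalues are exactly the k-fold products of the eigenvalues of that block, so the nonzero spectrum has dimension 2^k and sits in a fixed annulus:

```python
import numpy as np
from itertools import product

omega = np.exp(2j * np.pi / 3)
# hypothetical 3x3 local block: rank two, with a zero row and column
# standing in for the opening (the erased middle slice)
Omega = np.array([[1, 1,     0],
                  [1, omega, 0],
                  [0, 0,     0]]) / np.sqrt(3)

k = 3
T = Omega
for _ in range(k - 1):
    T = np.kron(T, Omega)        # k-fold tensor (Kronecker) power, size 3^k

mu = np.linalg.eigvals(Omega)
# eigenvalues of a tensor power are all k-fold products of eigenvalues
prods = sorted(abs(np.prod(c)) for c in product(mu, repeat=k))
eigs = sorted(np.abs(np.linalg.eigvals(T)))
assert np.allclose(prods, eigs, atol=1e-6)
# the nonzero spectrum has exactly 2^k elements (here 8 out of 27)
assert sum(e > 1e-3 for e in eigs) == 2**k
```

The geometric-means statement from the talk is the k-th-root version of this product rule; the point of the check is that the annulus bounds lambda-minus and lambda-plus come from the two nonzero eigenvalues of the small block alone.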
So, thanks to this tensor product structure — which is very rigid — everything boils down to computing the eigenvalues of a three-by-three matrix. This is not very generic in our setting, and we wanted somehow to avoid this very strong, very algebraic dependence on the tensor product structure. So we tried to see if we could get such a fractal law — the existence of this fractal law — by looking only at the topology, in some sense the geometry, of this matrix, and not at the precise phases. Because here the phases are important: if you didn't have this regularity in the phases, you would not get the tensor product structure. So, can we get rid of these very precise phases, keeping just the topology of the skeleton? And when I say the topology of the skeleton, I just mean which entries are nonzero: I take the matrix, put ones wherever there was a nonzero entry, and view it as an adjacency matrix. From there, I want to see whether we can guess something about the spectrum of this matrix S_N by looking only at this topology. So when you look at this matrix and take powers — the same thing as before — you see that in the end, after the k-th power, there are exactly 3^k minus 2^k null columns: many null columns. Originally there were only three null columns — the ones at the center, which are killed by the quantum map — and then, if you take higher powers — higher meaning power two, say — this column will be killed and this column will be killed.
And if you look at N equal to 3^k for larger values of k, more and more null columns will appear as you take higher and higher powers of this adjacency matrix. From this you get, just from the topology of the matrix — no matter what phases you put in — that taking products creates many null columns, and this gives you a lower bound on the dimension of the generalized kernel, the generalized kernel being responsible for the algebraic multiplicity of zero. So this dimension is larger than or equal to 3^k minus 2^k, the number of null columns after taking this power. What is important is that this phenomenon is not due to the tensor product; it is really due to the topology of the matrix alone — where the nontrivial entries are. Now, this type of matrix is quite natural to view as the adjacency matrix of a directed graph, a directed network if you want: a directed network on N nodes, N vertices, with an edge from j to k if A_{kj} is equal to one — the standard way to read an adjacency matrix. So what does it mean that we can kill columns by taking powers of this adjacency matrix? Look at the graph — here this is for N equal to nine, the directed graph you get with these arrows. You see that some of the nodes have no descendant: no arrow starts from them; there are arrows arriving, but none leaving. These correspond to the columns in the middle, which are killed immediately after one step. And then you also have nodes whose entire future lies in those doomed columns, and which will be killed next. So, let's put some colors.
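This null-column count is easy to check directly on the adjacency matrix of the skeleton. The successor rule below encodes the ternary shift described earlier — node j goes to 3·(j mod N/3) + a for a = 0, 1, 2, unless the leading ternary digit of j equals 1 (the escaping slice); the function name is mine:

```python
import numpy as np

def baker_adjacency(k):
    """Adjacency matrix of the open-baker skeleton graph on N = 3**k nodes."""
    N = 3**k
    A = np.zeros((N, N), dtype=int)
    for j in range(N):
        if j // (N // 3) == 1:
            continue                        # middle slice: no outgoing edges
        for a in range(3):
            A[3 * (j % (N // 3)) + a, j] = 1
    return A

k = 3
A = baker_adjacency(k)
Ak = np.linalg.matrix_power(A, k)
null_cols = int((Ak.sum(axis=0) == 0).sum())
assert null_cols == 3**k - 2**k             # 27 - 8 = 19 null columns
```

A column survives the k-th power exactly when all k ternary digits of its index avoid the value 1, which leaves 2^k surviving columns out of 3^k, independently of any phases one might put on the entries.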
These colors correspond to the colors I had before, when I was showing the baker's map. The yellow nodes are the ones which have no image: they are killed after one application of this adjacency matrix. The blue ones are killed after a second application. And if you do a larger number of applications, you don't kill anything any more. So these four nodes — 0, 2, 6, 8 — are never killed: there will always be some entries of the corresponding columns which are nonzero. And topologically, in terms of the directed graph, these four nodes make up what is called a strongly connected component: a subgraph such that any two points can be connected — you can always find a path going from 6 to 0 and a path going from 0 to 6, a path going from 2 to 6 and back. Here it is obvious, but this is the definition of a strongly connected component: you can always join two of its elements in both directions. And these nodes correspond to the non-null columns of A_9 squared: when I take A_9 squared, the columns which remain are these ones. So there is a connection, of course, between what happens for the adjacency matrix and the topology of this directed graph. I think most of you already know this definition of a strongly connected component. So now what we can do is look at the reduced graph: in the reduced graph, we group together all the elements of the strongly connected component into one big vertex. We do not keep the arrows between them, but we keep the arrows to and from the external nodes. And what we get is a reduced graph like this.
And this reduced graph, by construction — because we grouped the strongly connected component into one big vertex — is what is called acyclic: there is no cycle in this graph. The fact that it is acyclic means that we can draw an arrow of time: we can order the vertices so that the order is compatible with the direction of the arrows. Here, for instance, I drew the graph with all arrows going down; I can shift my vertices a bit so that the arrows still go down and there is an order between the vertices: this one comes before this one, before this one, before this one. So there is a time order, if you want, between the different nodes of the reduced graph, and such an ordering is always possible for an acyclic graph. Once we have this order, let's use it to write down our matrix — first the reduced matrix, that is, the reduced adjacency matrix corresponding to this reduced graph. Choosing this order corresponds to a certain permutation of the nodes, and after applying this permutation you automatically get a lower triangular matrix: being lower triangular corresponds exactly to the fact that the order is compatible with the arrows going down. Now we can restore this big strongly connected component — restore it in any internal order we want — and we get a permutation of the initial indices such that the big adjacency matrix, after this permutation, is block triangular: each nontrivial diagonal block is a strongly connected component, and below the diagonal blocks there are some further elements.
But anything, any block on the diagonal which is non zero corresponds to a strongly connected component, okay. So you see now, after this permutation, of course permutation is a similitude, so you know that the spectrum of these matrix would be the same as the spectrum of this matrix, okay. And you can do here, I did not use at all the fact that had phases or no phases. I just use the topology that is where are non trivial, non zero entries, okay, in this permutation. So you can do it with the matrix with phases, and if you do it with matrix phases, you see that the spectrum, the non trivial spectrum of this matrix is contained only in these strongly connected components, okay. So let's look back, let's look back at what we had for the specific values of n, okay, n was equal to s to the, s to the, 3 to the k, 3 to the k. So we'll find the same result, of course, but we just, we can just look it, just look now at the strongly connected component, okay, the strongly connected component for this matrix, oh sorry, was this, was this set of points, okay, which was, and it could be written in this way, okay, we get this matrix, okay, between the nodes 0, 2, 6, 8, this matrix like this, and if you put the phase, if you get back the phases, we get this matrix here, okay. So here, this type of graph, okay, or this type of matrix here, is called the De Bruyne graph, in general, okay, on two components. So the Bruyne graphs, they constructed from nodes which can be written in some binary, or ternary decomposition. 
And the nodes are connected to one another under the following rule: a word (ε1, ε2), here over a two-letter alphabet on two digits, goes to any word (ε2, ε'), so ε1 gets killed, ε2 shifts into the first position, and the last digit ε' is arbitrary. You see that this means each node has two antecedents and two images; a graph built this way is called a De Bruijn graph. I plotted this structure for n = 9, but of course the same works for any power 3^k, and for each such power you find that the graph has a unique strongly connected component which is a De Bruijn graph, now on words of length k instead of length 2; these words index the nontrivial nodes. And the spectrum again depends only, as we saw before, on the eigenvalues of this matrix here, with entries 1, 1, 1, ω, which is the reduction, if you want, of the matrix we had before. So we find the same spectrum, naturally, but now with this interpretation in terms of strongly connected components. The obvious questions are then: is it crucial for our result that this entry is ω = exp(2iπ/3), the third root of unity, or could we put something else? What happens if we are not exactly at the values n = 3^k but at other values, since our matrices can be constructed for any n, or at least any multiple of 3? And what happens for other types of directed graphs? That is the type of question we can ask.
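The shift rule just described is easy to make concrete. Here is a small sketch of mine (the function name and alphabet choice are my own): build the De Bruijn graph on binary words of length k and verify that every node has exactly two antecedents and two images.

```python
from itertools import product

def de_bruijn(k, alphabet=(0, 1)):
    """De Bruijn graph on words of length k: (e1, ..., ek) -> (e2, ..., ek, a)
    for every letter a in the alphabet."""
    nodes = list(product(alphabet, repeat=k))
    succ = {w: [w[1:] + (a,) for a in alphabet] for w in nodes}
    return nodes, succ

nodes, succ = de_bruijn(2)
indeg = {w: 0 for w in nodes}
for w in nodes:
    for v in succ[w]:
        indeg[v] += 1

assert all(len(succ[w]) == 2 for w in nodes)  # two images per node
assert all(indeg[w] == 2 for w in nodes)      # two antecedents per node
print(len(nodes))  # 4 nodes for k = 2, matching the 4x4 reduced matrix
```

For k = 2 this reproduces the 4-node skeleton behind the reduced matrix in the talk; replacing `repeat=2` by `repeat=k` gives the graphs for n = 3^k.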
So the main example I will show is to put in some randomness, in order to get a more generic spectrum. Randomness here means keeping the geometry, the topology of the graph, the nonzero entries, but putting independent random phases in place of the specific values I had before. The entries could in principle be more general than phases, but I want them complex valued and nonzero, so I take phases, uniformly distributed on the unit circle. You can then redo the same computation as before, because the topology is the same, and you get a reduced matrix corresponding to the strongly connected component: a four-by-four matrix with independent random phases. The spectral question you ask now is: what can you say about the spectrum of this matrix? And you can do the same for n = 9, for 3^k, for all values of n. So if you want to understand the distribution of the nontrivial spectrum, you have to understand the spectral distribution of these reduced matrices. This is, as far as I know, a new random matrix problem: you impose the structure of the adjacency matrix, the structure of the graph, and you just put random phases on the edges, on the links. What can you say about it? Not much, so far. Still, there is one nice feature: these matrices, which have the De Bruijn graph structure, have their nonzero entries sitting on a sort of tilted diagonals.
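The random-phase construction can be sketched directly. This is my own minimal version (the mask below encodes the k = 2 De Bruijn topology in one assumed node ordering 00, 01, 10, 11; it is an illustration, not the speaker's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_phase_matrix(mask, rng):
    """Keep the topology (mask == 1 marks allowed entries) and replace each
    allowed entry by an independent phase uniform on the unit circle."""
    theta = rng.uniform(0.0, 1.0, size=mask.shape)
    return mask * np.exp(2j * np.pi * theta)

# 4x4 De Bruijn topology: row w is nonzero exactly in the two columns
# reachable by the shift rule.
mask = np.array([[1, 1, 0, 0],
                 [0, 0, 1, 1],
                 [1, 1, 0, 0],
                 [0, 0, 1, 1]], dtype=float)
R = random_phase_matrix(mask, rng)

assert np.allclose(np.abs(R), mask)       # entries are unit phases or zero
eig = np.linalg.eigvals(R)
assert np.max(np.abs(eig)) <= 2.0 + 1e-9  # crude row-sum bound on the radius
```

The same `mask` pattern scales to the larger De Bruijn graphs, which is how one would redo the numerics for n = 3^k.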
And when you take this matrix R times its adjoint, you find a block diagonal matrix, and that helps a lot: R R† is block diagonal with blocks of size 2 by 2. Each block is made of combinations of independent phases, so the blocks are independent of one another, and this gives you a way to analyze the statistics of the eigenvalues of R R†. The eigenvalues of R R† give you the statistics of the singular values of R, so you have access to the statistics of the singular values of this matrix, and that is already some nontrivial information. Since the blocks are independent, you can analyze each block separately, get the global statistics of the distribution of the singular values, and from the singular values you get inequalities on the eigenvalues. So far the only rigorous result we could get is in the large-M limit, for larger and larger such matrices: with high probability (that is what WHP means), you cannot have many eigenvalues close to the origin, many small eigenvalues. More precisely, the statement is that with high probability the number of eigenvalues of modulus smaller than r, for r small, is itself smaller than C r M, where M is the dimension of the full matrix; so the small eigenvalues can only make up a small fraction of the spectrum of this big matrix. This is the type of result we have: we do not have an asymptotic distribution of the spectrum of this matrix, but it is a first result, and we know that there is no kernel.
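The block structure of R R† is easy to check numerically. In this sketch of mine (same assumed 4×4 De Bruijn topology as above), rows 0 and 2 share column support {0, 1} and rows 1 and 3 share {2, 3}, so after pairing the rows, the Gram matrix R R† splits into independent 2×2 blocks, and the singular values of R can be read off block by block:

```python
import numpy as np

rng = np.random.default_rng(1)
mask = np.array([[1, 1, 0, 0],
                 [0, 0, 1, 1],
                 [1, 1, 0, 0],
                 [0, 0, 1, 1]], dtype=float)
R = mask * np.exp(2j * np.pi * rng.uniform(size=mask.shape))
G = R @ R.conj().T  # Gram matrix of the rows

# Group rows with identical column support: (0, 2) and (1, 3).
order = [0, 2, 1, 3]
Gp = G[np.ix_(order, order)]
assert np.allclose(Gp[:2, 2:], 0) and np.allclose(Gp[2:, :2], 0)

# Singular values of R = square roots of the eigenvalues of each 2x2 block.
sv_blocks = np.sort(np.concatenate([
    np.sqrt(np.linalg.eigvalsh(Gp[:2, :2])),
    np.sqrt(np.linalg.eigvalsh(Gp[2:, 2:]))]))
sv_direct = np.sort(np.linalg.svd(R, compute_uv=False))
assert np.allclose(sv_blocks, sv_direct)
```

Each block involves disjoint sets of phases, which is what makes the blocks statistically independent and the singular-value statistics tractable.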
Typically, generically, this matrix does not have a nontrivial kernel, and it does not have an accumulation of small eigenvalues close to the origin. This is sufficient to give us a lower bound for the fractal Weyl law, that is, a lower bound on how many eigenvalues are not small. We would like to know more about the density of eigenvalues. Again, the plots render very badly here, I am sorry, I do not know what happens. We would be interested in information about the empirical measure, which is just the sum of delta functions on the eigenvalues of this random matrix, and the question is: does this empirical measure converge to something, with high probability, when M gets large? This is the type of question people ask in random matrix theory. We did some numerics already some time ago, and this is what you see for one instance of such a random matrix, I think for M equal to 4,000 or something like this. You see that there seems to be a rather regular distribution of the eigenvalues; there seems to be a hole here, which is strange, and they are contained inside some disc. And here, this was supposed to be a counting function: count the number of eigenvalues of modulus smaller than r. It starts at zero here and then grows up to one; it is the rescaled counting function of eigenvalues of modulus at most r, as a function of r. So you see, there are some regularities. There are several curves here because we looked at matrices of several dimensions and, for each dimension, drew the phases randomly several times.
So it looks like there is an asymptotic smooth density, but we do not know yet what to think about this smooth density. (The plots look much better on the computer than on the screen; strange, yes.) Let us now try to do it for different values of n which are not powers of 3, so that you no longer have this tensor product structure. Then the graph is a bit more complicated: when you extract the strongly connected component from the graph, it is less regular, or at least not always very regular. Here I just give the example after 9: n = 12, which is obviously not a power of 3. You see that the matrix reduced to the strongly connected component has this structure, a bit like the one before but less regular: there are bunches of three elements, then one element, one element, then three elements. So it is a bit less regular, but it still has, let us say, the same type of shape, and what is nice is that when you take R R*, that is, when you try to compute the singular values of this guy, it is still block diagonal. You still have this block diagonal property, but now the blocks have different sizes; they are not always of size 2 as before. This is the setting my second Master student, Justin Trias, worked on: the case where we do not have this tensor product.
So what he showed is that there is still a large kernel, or generalized kernel: the nontrivial spectrum, that is, the maximal number of non-null columns after you take powers of the matrix, has size of order n^ν, with the same fractal exponent. This gives us the upper bound for the fractal Weyl law: we have a large kernel, and what sits outside the kernel, the nontrivial eigenvalues, covers a space of at most this dimension. This comes just from the topology of the graph. He also showed that there is always a large strongly connected component of about the same size, n^ν, not necessarily with the same prefactor, plus sometimes one or more small strongly connected components sitting nearby. We did not work out the details at the time, because it was the end of his internship and I went on to something else, but I am now pretty sure that one can also show, for this generalization, that there is a nontrivial spectrum of size n^ν. We would get in this way this form of fractal Weyl law as an asymptotics: the number of nontrivial eigenvalues of modulus larger than R would be asymptotic to C(R) n^ν, but without a precise knowledge of the constant, of what this C(R) is and how it depends on R.
This student also did a few numerics: he actually computed the spectrum of all the matrices from n = 300 up to, yes, 9300; he did it somewhat without my knowledge, he liked to keep the machines busy. He plotted all the counting functions, now counted the other way: the number of eigenvalues of modulus larger than R. Each color corresponds to a specific value of n, and the vertical axis is rescaled by n^ν, by this formula, that is, by the dimension of the reduced matrix, which is what you should do. He found this type of curves, and up to a little rescaling by a factor between 1 and 2 he found some general shape which could look like, let us say, a tendency towards a universal shape; I do not quite know how to interpret it. Still, it makes us confident that what we found for the specific values n = 3^k could still hold, could still be true, in this more general setting. But we would like to understand what this asymptotic shape or profile function could be: how does this distribution of eigenvalues depend on R? To investigate this, somehow the only idea I came across would be to generalize the setting and allow the geometry of the graph to change as well, to be randomized in some way. But I do not randomize the big graph, I do not take a general directed graph on n nodes; I want to randomize the small one, this strongly connected component, in some sense.
So this reduced matrix already has a dimension M of order n^ν, and since I cannot get direct access to the spectral distribution for this specific skeleton, how about considering a more general problem, which consists in looking at directed graphs on M nodes, with some rules? What we observed is that, in the reduced graphs we encountered, all the nodes had either two incoming and two outgoing edges, or two incoming and one outgoing, or two incoming and three outgoing: those were the three examples I had of reduced matrices. So the idea would be to look at the family of directed graphs, which I call G_D, where you impose a certain distribution of bi-degrees, a certain distribution of these in-degrees and out-degrees. This is encoded in a vector: for each node you decide its type, node number one of this type, node number two of this type, and so on up to node number M. You fix the distribution of bi-degrees, and then you connect the half-edges any way you want: any way of matching an outgoing half-edge to an incoming half-edge. This gives a way to construct a family of random directed graphs, one for each permutation between the half-edges, and you can view it as a random directed graph with specified degree distribution. Of course, when M becomes large there are many ways to choose the permutation between the half-edges, so there is a large set of directed graphs with the same degree distribution.
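This half-edge matching is the directed version of the configuration model, and it can be sketched in a few lines. The following is my own illustration (the degree sequence is a hypothetical example; in general the result is a multigraph, so loops and repeated edges may occur):

```python
import random

def random_digraph(bidegrees, rng):
    """bidegrees: list of (in_degree, out_degree) per node. Matches outgoing
    half-edges to incoming half-edges by a uniform random permutation and
    returns the resulting edge list (tail, head)."""
    heads = [u for u, (din, _) in enumerate(bidegrees) for _ in range(din)]
    tails = [u for u, (_, dout) in enumerate(bidegrees) for _ in range(dout)]
    assert len(heads) == len(tails), "in/out half-edge counts must match"
    rng.shuffle(heads)  # uniform matching of the half-edges
    return list(zip(tails, heads))

rng = random.Random(42)
# Every node of type (2, 2), as for the two-letter De Bruijn skeleton.
bidegrees = [(2, 2)] * 8
edges = random_digraph(bidegrees, rng)

out_deg, in_deg = [0] * 8, [0] * 8
for u, v in edges:
    out_deg[u] += 1
    in_deg[v] += 1
assert out_deg == [2] * 8 and in_deg == [2] * 8  # bi-degrees are preserved
print(len(edges))  # 16 edges
```

Mixing types such as (2, 1) and (2, 3), as in the reduced graphs from the talk, only changes the `bidegrees` vector, as long as the in- and out-half-edge counts balance.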
So now we want to use this model for the reduced graph corresponding to the strongly connected component, and on top of this random choice of reduced graph we still want to put the random phases, one on each of the edges, each of the links. This way we obtain an ensemble of random matrices on this set. This would be, let us say, a model to investigate, and there have been only very few results about this in the probability community. I would really like to understand the spectral behavior, the spectral distribution, of these matrices: random matrices with random phases sitting on random graphs, or at least on some family of random graphs. There have been some results about the topology, the structure of these random graphs. We did not impose the graph to be a single strongly connected component, but fortunately it was proved about fifteen years ago that if you put some constraints on the degrees, then with high probability this random graph will be strongly connected: there will be a single strongly connected component containing all the vertices. And if you relax this condition a bit, that is, if you allow some of the degrees to be equal to one, as we had here, then you still have a high chance that the graph is strongly connected or has a large, macroscopic strongly connected component. These are, let us say, topological facts about this family of random graphs. And as far as I know, there is no general study of the spectrum, even of the spectrum of the adjacency matrices.
So there is a conjecture which you can find in a 2012 paper by Bordenave and Chafaï: in the end remarks they list some open problems, and among these open problems they make a conjecture about the asymptotic spectral distribution of the adjacency matrices, that is, when you put ones everywhere, for this type of random graphs, in the case where the graphs are d-regular, meaning each node has exactly d outgoing edges and d incoming edges. For these d-regular digraphs the conjecture is this formula. Where does it come from? I discussed with them: it comes from free probability. The theory of free probability gives a way to guess, let us say, what the asymptotic spectral distribution of such random graphs could be. Call it an oriented Kesten-McKay measure, if you want: a directed-graph generalization of the Kesten-McKay measure, which is the asymptotic spectral distribution for random d-regular non-oriented graphs. Here d is just the number of outgoing edges and the number of incoming edges. I did the plot only recently, so unfortunately I could not put it directly on the same plot as the one by Gendron. This was the plot of the distribution for the random phase model on the De Bruijn graph, and here is the counting function for the conjectured distribution of this model with d = 2, two edges coming in and two going out, which should correspond to this one here. So here the topology is fixed, given by this De Bruijn graph, and the phases are random; here the topology is random and there are no phases.
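For the record, the conjectured oriented Kesten-McKay law, as I recall it from the Bordenave-Chafaï open problems (this reconstruction is my own and should be checked against their paper), has density f(z) = d²(d−1) / (π (d² − |z|²)²) on the disc |z| ≤ √d. Integrating radially gives the closed-form counting function N(r) = (d−1) r² / (d² − r²), which reaches 1 exactly at the spectral edge r = √d (so √2 for d = 2, matching the plot). The sketch below checks this self-consistency numerically:

```python
import math

def counting(r, d):
    """Conjectured fraction of eigenvalues of modulus <= r (my reconstruction)."""
    return (d - 1) * r * r / (d * d - r * r)

d = 2
# Radial midpoint-rule integration of 2*pi*r*f(r) reproduces counting(r).
for r_max in (0.5, 1.0, 1.2):
    steps = 20000
    h = r_max / steps
    acc = 0.0
    for i in range(steps):
        r = (i + 0.5) * h
        density = d * d * (d - 1) / (math.pi * (d * d - r * r) ** 2)
        acc += 2 * math.pi * r * density * h
    assert abs(acc - counting(r_max, d)) < 1e-4

assert abs(counting(math.sqrt(d), d) - 1.0) < 1e-12  # mass 1 at the edge sqrt(d)
print(counting(1.0, d))  # fraction inside the unit disc for d = 2
```

The support radius √d is the same √2 that appears both in the random-phase plot and in the second-eigenvalue discussion later in the talk.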
Here the topology is random, in this model, and the phases are equal to 1 everywhere. And somehow there is a similarity between the two counting functions. In particular, you do not see it on the plot, but here the spectral distribution stops at a value equal to √2: it grows until √2 and then it stops, there is no eigenvalue beyond. And if you look back at my picture, this circle here, the spectral radius is indeed about √2. So my conjecture, let us say my bet, would be the following: here we randomize the connectivity, the topology of the graph, with no phases; there we have a fixed graph but we randomize the phases; and maybe the two types of randomization give you the same asymptotic distribution, at least in the case where we always have the same connectivity. That is the type of conjecture one can make. So, I was a bit fast. I just wanted to show you a type of models, very simple models of graphs, which came out of the study of quantized chaotic systems, and some questions which came up when studying the spectral properties of quantized chaotic systems, in particular these quantized maps, this specific quantum map. And from this model we derived a second model, which can be interpreted as a model of a big directed graph with some phases.
So, somehow, the reason why we obtained this fractal Weyl law was really the very specific topology of this graph: the fact that the strongly connected component was very small compared to the full size, because it was the full size to a power ν with ν smaller than one. That is where this fractal Weyl law was coming from, and then I proposed to you a model with random phases and the question of investigating the spectral distribution of these random phase matrices. Then, in the last part, I described a random graph model where we impose the distribution of the bi-degrees, and in most of the examples I had, these degrees were bounded. So it corresponds to a sparse directed graph on top of which we put random phases. This spectral question has not been investigated rigorously so far, but I hope that the presence of these random phases should make the analysis easier than in the case of the plain adjacency matrices, because there is more averaging, and with more averaging the objects you look at are easier to study from a probabilistic point of view. I want to mention that Coste, a student of Bordenave, studied the statistics of the largest nontrivial eigenvalue of these random adjacency matrices, not the first eigenvalue: the first eigenvalue is always given by the degree, which is 2 in this case. But when you look at the second eigenvalue, you find that there are many eigenvalues of the same order, and this order is about √2.
So I expect that the second eigenvalues, the bulk of the spectrum of this random adjacency matrix, would be similar to what happens when you randomize over the phases. And to finish, I just want to say that of course we looked at examples built on a single very simple open map on the unit interval. One could do the same for other types of expanding maps. Expanding means maps whose slope is larger than one; this was important in the structure of the matrices, for the fact that these tilted diagonals are steep enough: they are not close to the diagonal, they are steeper than the diagonal. This corresponds to the fact that the initial map, q goes to 3q, has slope larger than one. All right, so thank you for your attention. Chair: Five minutes for questions, please. Question: In some cases you can deduce the distribution of the spectrum from the distribution of the singular values, in some special cases; there is a trick which they call hermitization, or the single ring theorem for instance, in the rotationally invariant case, which is not the usual case. In your model, can you get the distribution of the singular values, and when you compute it, is the result rotationally invariant? Answer: I do not expect the distribution of the singular values to be the same as the distribution of the eigenvalues; I do not think they are the same. You have inequalities between the distribution of eigenvalues and the distribution of singular values, but in general the two distributions do not match each other.
Question: I know that in the study of random complex matrices, like the Ginibre ensemble, people try to transform the spectral problem for a random complex matrix into a spectral problem for a certain self-adjoint matrix; there is a procedure called hermitization. Answer: Hermitization, yes. I did not try it for these matrices here, because it requires some estimates about the probability to have a small singular value. Question: For the 2-by-2 blocks? Answer: Yes, for the 2-by-2 blocks. What you can get from the 2-by-2 blocks is, I guess, probably a bit better than what I wrote in my statements. But the statement I was interested in was avoiding the accumulation of small eigenvalues close to 0: I wanted to show that, statistically, this was not happening. This does not give me the asymptotic distribution of eigenvalues; it just tells me that there are not too many, it gives me, let us say, a lower bound in the sense that the eigenvalues cannot accumulate statistically near the origin. So far, the way I used the singular values was through the Weyl inequalities, which connect sums of eigenvalues to sums of singular values, and this can give this type of lower bound. But to get convergence to some asymptotic distribution, one has to do something else. Question: I have one about the result by Cooper and Frieze that you used. You said that if some degrees are one, then you still have a strongly connected component. But how many of the nodes in the induced subgraph still have degree one? Because maybe, when you reduce to the strongly connected component, you lose all the nodes of degree one. Answer: But here, degree one means that I can have one edge out and one edge in.
So these nodes are not isolated, they are not, how do you say, dead ends; having degree one still means the node can be passed through. Question: But they may not be part of the strongly connected component. Answer: Yes, that is right, they could be outside. But I think in that result you could actually have a positive fraction of such degree-one nodes and still have a big strongly connected component, so I guess necessarily some of them are inside, yes. Chair: Okay, thank you. Answer: You're welcome.