I want to conclude the topics I spoke about today with a question of some interest, another question concerning properties of groups and semigroups, though of a somewhat different nature. It has to do with the distribution of closed geodesics attached to real quadratic fields. Just as a reminder: I fix a large positive discriminant D which is fundamental, so we are looking at the real quadratic field Q(√D). We can then consider quadratic forms with integer coefficients and discriminant D, and the collection of these quadratic forms is partitioned into a finite number of classes forming the class group, where equivalence of two quadratic forms means that one can be transformed into the other by an SL2(Z) transformation. To each class one can associate a fundamental geodesic — it depends only on the class — which is a closed geodesic in the unit tangent bundle of the modular surface, the one connecting α = (−B + √D)/(2A) to its conjugate. One can take α reduced, which implies that it has a periodic continued fraction. Now, Duke's theorem tells us that if we look at the normalized arc-length measures on these geodesics and average over the class group, then as D goes to infinity this converges weakly to the natural measure on the unit tangent bundle. That is Duke's theorem. What is quite amusing — an experience I had in Jerusalem — is that there is a disconnect between the people who do number theory and the people who do ergodic theory. From Duke's theorem one can in fact derive a statement about individual geodesics, at least about most of them, because what it implies — although it is really a statement about the totality of these measures — is that for most geodesics we have equidistribution.
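With the notation just introduced (D a positive fundamental discriminant, Cl(D) the class group, γ_A the closed geodesic attached to the class A, each of length 2 log ε_D), Duke's theorem as described can be written as follows — a minimal sketch, with a normalization of my own choosing:

```latex
% Class-group average of the arc-length measures on the closed geodesics:
% it converges weak-* to the normalized Liouville measure on the unit
% tangent bundle of the modular surface X = SL_2(Z) \ H.
\[
  \frac{1}{2\,h_D \log \varepsilon_D}
  \sum_{A \in \mathrm{Cl}(D)} \mu_{\gamma_A}
  \;\longrightarrow\;
  \frac{\mu_{\mathrm{Liouville}}}{\mu_{\mathrm{Liouville}}\!\left(T^{1}X\right)}
  \qquad (D \to \infty,\ D \text{ fundamental}).
\]
```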
It's quite amusing: I had this question, I asked the ergodic theorists, and they told me it is a number-theory thing; and then you ask the number theorists, and they don't know about this kind of thing. Then I realized it is the kind of thing one proves in the same way as statements with exceptional sequences: if you look at the measures associated with the individual geodesics, you also get convergence, except along a sparse exceptional sequence. That is what one proves, and in fact one can be quite precise — one can even make this quantitative in a certain sense, and I will come back to that immediately. Well, this is in fact a consequence of the preceding theorem, and because one has Duke's theorem in an explicit quantitative form, one can prove results about individual fundamental geodesics which are quite precise; I will come back to that in a moment. But let me recall that there is a conjecture that, when the class group is, say, as large as possible — if we have a class group of size at least D^(1/2−ε) — then one expects that in fact all the geodesics become equidistributed individually. If we believe this conjecture, then when we look — because that is what we are going to do here — at the bad events, namely fundamental geodesics which are not well distributed, we have to look at quadratic fields with a fundamental solution ε_D which is quite small; this will also be apparent in our constructions later. Concerning this conjecture, there is a theorem of Popa — Alexandru Popa — which tells you that the conjecture is true if the fundamental solution is smaller than a sufficiently small power of D, a result proved using L-function techniques.
But in fact it turns out that this conjecture implies a statement which is just a little weaker than that, without going into this sophisticated technology, whether the discriminant is fundamental or not. Another remark is that even under GRH one cannot prove this; one can only handle class groups which are larger than D to the power one quarter — so there is a quarter there.

Now, concerning these geodesics there is a problem — I don't know whether this problem was ever written down, but I understood that this question had been raised. So one looks at fundamental geodesics which are not equidistributed, and what one can ask, for example, is whether we have plenty of these creatures lying in a region away from the cusp — in other words, which are low-lying. What is in the literature — there is a paper of Peter Sarnak called "Reciprocal geodesics" — is a similar question; there one asks the fundamental geodesic to be reciprocal, which means that the quadratic irrationality has a continued fraction which is palindromic. In fact it is this paper on reciprocal geodesics — eventually what happens in that paper is that one is led to study Markoff triples and things like that — that raised the issue which will appear later, and it is among the questions that triggered these developments around the affine sieve. So we will return to that paper. Now, what we have here is that we proved — one paper is posted, and there is another paper in preparation — so the first paper, which is posted, tells us that the answer is affirmative: there is a set D of positive fundamental discriminants such that for D in D we have low-lying fundamental geodesics, and in fact we have many of them, as many as the class group to the power 1 − ε.
The other statement is that the set D is quite large: the number of elements of D less than T is at least T^(1/2−ε). Now, these two statements are basically best possible. The reason is the following. On the one hand, what we proved is that, no matter what D is, if we take the classes γ among the fundamental geodesics for which the geodesic stays in a fixed compact region Y, then the number of such elements is bounded by h_D, the class number, times D^(−δ), for some δ = δ(Y) > 0, as D goes to infinity. So you see two things. First of all, if h_D is smaller than a sufficiently small power of D, this set of low-lying classes is going to be empty; so that is consistent with the picture coming from Popa's theorem — one way of not being equidistributed is to stay in a compact region. And in fact this upper bound is exactly what I referred to before: it can be obtained from a quantitative version of Duke's theorem. What we used are explicit estimates which appear in a paper of Clozel and Ullmo, which really gives estimates in explicit terms — Sobolev norms and so on — with which we could proceed to prove this. So the first statement is best possible exactly for this reason: what happens is that, no matter how we choose the Y, we can find an ε so that this property holds. Now, what about the second statement? Well, for that I should go back to the Dirichlet class number formula: if one takes the sum of h_D log ε_D over D less than x, this quantity is well understood.
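For reference, the Dirichlet class number formula being invoked (D > 0 a fundamental discriminant, χ_D the attached quadratic character; the x^{3/2} asymptotic for the averaged sum is the classical result, going back to Gauss and Siegel):

```latex
% Class number formula for real quadratic fields:
\[
  h_D \log \varepsilon_D \;=\; \tfrac{1}{2}\sqrt{D}\, L(1, \chi_D),
\]
% so, summing over fundamental discriminants, since L(1, chi_D) is
% essentially constant on average,
\[
  \sum_{0 < D \le x} h_D \log \varepsilon_D \;\sim\; c\, x^{3/2}
  \qquad \text{for an explicit constant } c > 0.
\]
```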
On the other hand, it is much, much harder to understand the sum of h_D for D less than x. There are some works on that — say, Sarnak's thesis is one of the papers on it, and there is also a paper by Hooley from about the same time — where they study this problem and get results which are unconditional as long as ε_D is quite small, I guess up to x or something like that; there has been much more recent work by Fouvry, but in any case this is still for small ε_D. On the other hand, what Hooley conjectured is that if we take the sum of h_D for D less than x, this should — if I am not mistaken — behave like x (log x)² or something like that. In any case it means that if we are looking, as I said earlier, at discriminants for which the class group is of extremal size, like √D, then, if we believe this asymptotic, the number of such discriminants up to T has to be smaller than T^(1/2+ε). So our count comes close to what one can expect.
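Side by side, the two counts just discussed — the second asymptotic is Hooley's conjecture as quoted in the talk, hence the question mark — together with the deduction sketched above: if more than about x^{1/2+ε'} discriminants D ≤ x had near-maximal class number, the conjectured bound on the plain sum would be violated.

```latex
\[
  \sum_{D \le x} h_D \log \varepsilon_D \;\asymp\; x^{3/2}
  \qquad\text{but}\qquad
  \sum_{D \le x} h_D \;\overset{?}{\asymp}\; x \log^2 x ,
\]
% hence, restricting to D in [x/2, x] with h_D >= (x/2)^{1/2 - eps'}:
\[
  \#\{\,D \le x :\ h_D \ge D^{1/2-\varepsilon'}\,\}
  \;\ll\; x^{1/2+\varepsilon'} \log^2 x .
\]
```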
Now, what does this have to do with orbits — how do we produce these low-lying fundamental geodesics? Well, we look again, as I did before around the Zaremba problem, at the semigroup generated by these particular matrices, where, say, the entry a is bounded by a constant A, just to ensure that we are going to have bounded partial quotients — equivalently, that the corresponding fundamental geodesic is going to be low-lying. The quadratic irrationality α represented here has discriminant (tr γ)² − 4, where γ — I am committing a lot of notational confusion here, but anyway — γ is that 2 × 2 matrix. It turns out that the problem is to get fundamental discriminants, so basically we want this D to be square-free. So the problem is to find elements γ of that form, as many as possible, for which (tr γ)² − 4 is square-free. Basically, the paper of Sarnak on reciprocal geodesics is led to the same issue of getting square-free values among Markoff triples, which is of course much, much harder; but here we can treat the problem in a rather simple way, because to get (tr γ)² − 4 square-free, what it amounts to is to have tr γ + 2 and tr γ − 2 square-free. Now, how do we do that? Well, you have a statement here, but what is really going on is that we make sure that tr γ + 2 and tr γ − 2 don't have small prime divisors, where small means some power n^ε. So this statement follows from another statement: the number of γ in G_A which are bounded by n — in a ball of radius n — is of order n^(2δ_A), from what we discussed before. Now we require tr γ + 2 and tr γ − 2 not to have prime divisors less than n^ε, and the question of how large this ε can be taken is going to depend on the level of distribution of the sequence. Well, we still
will have a lower bound of that form, which means that the set of traces is going to be of size at least n^(2δ_A − 1); in other words, as A gets large, δ_A goes to 1, so we are going to have n^(1 − something). By the way, what you see is this: if the number of traces with the property is bigger than n^(1−η′), say, and we know moreover that tr γ + 2 and tr γ − 2 have no prime divisors less than n^ε, then if this ε is larger than that η′ we can conclude that most of these values are square-free — a square divisor p² would have p ≥ n^ε, and there are too few integers up to n divisible by such a p². So the whole problem is the following: we want a level of distribution which does not degenerate when A goes to infinity, because on the one hand I want 2δ_A to be close to 2, so I have to take A large; but, as I told you, eventually the resonance-free region is going to depend on the alphabet we are starting from. So the problem is that if you just use that, taking A larger to get 2δ_A close to 2, you may very well lose the level of distribution, and you need another device to take care of that — that is what makes the thing not completely trivial. So, as I said, this is part of the story. There is another theorem, which I didn't put down because the paper is not completely written, which is that in fact we have a similar theorem for reciprocal geodesics: again there is a set D of positive fundamental discriminants, of the same size, with the property that for D in D we have at least one reciprocal geodesic which is low-lying in the sense that it is contained in that region Y — and the Y we have to take has to go high enough. So in some sense this gives an answer to the question Sarnak was asking at the time. So basically this is more or less what I was planning to say in the previous talk but could not conclude.
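As a toy illustration of this construction — a minimal brute-force sketch in Python; the alphabet matrices (a 1; 1 0), the exhaustive enumeration, and all function names are my own choices, not from the paper. It uses the reduction just described: for odd traces t, gcd(t − 2, t + 2) = 1, so t² − 4 is square-free exactly when both factors are.

```python
from itertools import product

def matmul(X, Y):
    """Product of 2x2 integer matrices stored row-major as (a, b, c, d)."""
    return (X[0]*Y[0] + X[1]*Y[2], X[0]*Y[1] + X[1]*Y[3],
            X[2]*Y[0] + X[3]*Y[2], X[2]*Y[1] + X[3]*Y[3])

def squarefree(n):
    """Trial-division square-free test (fine for toy sizes)."""
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

def low_lying_discriminants(A, length):
    """Traces t of words of the given length in the semigroup generated by
    [[a, 1], [1, 0]], 1 <= a <= A (bounded partial quotients), keeping the
    discriminants D = t^2 - 4 that are square-free, hence fundamental.
    We restrict to odd t so that t - 2 and t + 2 are coprime."""
    gens = [(a, 1, 1, 0) for a in range(1, A + 1)]
    discs = set()
    for word in product(gens, repeat=length):
        g = word[0]
        for m in word[1:]:
            g = matmul(g, m)
        t = g[0] + g[3]
        if t % 2 == 1 and squarefree(t - 2) and squarefree(t + 2):
            discs.add(t * t - 4)
    return sorted(discs)

print(low_lying_discriminants(2, 2))  # -> [5]
```

This is exponential in the word length, so it only illustrates the criterion, not the counting argument, which requires the level-of-distribution input discussed above.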
Now, I don't have quite enough time to speak about the other topic, so what I will do is just select a few aspects of what I wanted to discuss in the next talk — and I will need your help now. So the next subject is number-theoretic problems that have to do with toral eigenfunctions, and there is a lot to say there; I want to restrict myself to two aspects of this story. One is moment inequalities for toral eigenfunctions, and the other is the problem of the Courant theorem and the number of nodal domains.

So, what is interesting about the spectral theory of the flat torus? Perhaps people working in this area — at least most people — may not consider it that exciting, but as a harmonic analyst I like the subject, because it sits at the interface of two areas. On the one hand, the eigenfunctions are given by explicit trigonometric polynomials, so you can try to do some kind of explicit analysis on them without having to go through this micro-local type of machinery where you basically don't see anything anymore — everything is completely on the table here, which doesn't mean that the problems are trivial. And then, because we are talking about special polynomials — the frequencies correspond to lattice points on spheres — there is of course also number theory in it. The reason why this is perhaps interesting is that there is a mysterious correspondence between phenomena that happen for the d-dimensional flat torus and phenomena in the arithmetic hyperbolic case. These phenomena are not stable, not robust, in the sense that it is important to keep the torus flat: if we perturb the metric a little bit, we lose these features. So on the one hand you have these features for the flat torus, and on the other hand you have phenomena which are of course not proven, but conjectural, in the arithmetic hyperbolic case, and there is some kind of similarity between them — something which is not that well explained. But in any case, let's not worry about that third aspect; what I want to discuss are certain
spectral features of the flat torus, and these features are atypical in the sense that other manifolds don't have them — say, for the sphere the situation is quite different. So let's start with the classical moment inequalities. It is well known that control of the L² norm of an eigenfunction — here λ is always the square root of the eigenvalue E — gives control of the L^p norm, and there are precise bounds for that. Basically, on one hand you have the behavior at infinity: taking p equal to infinity gives you the exponent (d−1)/2 for the L^∞ bound; and then there is an interpolation — not a straight interpolation but a piecewise one, where the critical exponent is 2(d+1)/(d−1), so for two-dimensional manifolds that is p = 6 — between 2 and that exponent, and between that exponent and infinity. These bounds are best possible, and easily seen to be sharp for the sphere. However, for the flat torus one expects better, and these conjectures are also the conjectures one would make, say, for d-dimensional arithmetic hyperbolic manifolds. There the breaking point is 2d/(d−2), and the conjecture is that the L^p norm is controlled by the L² norm when p is less than 2d/(d−2) — in particular, for d = 2 it should hold for all p; above 2d/(d−2) one has the corresponding bound, and at 2d/(d−2) itself one has a divergence which is like λ^ε. Now, this conjecture is very hard and is basically open, but there has been some progress I would like to report on. First of all, there is a result which comes close to such an inequality, although it is still significantly weaker in the sense that we are losing a factor λ^ε. For that inequality I assume d is at least 3 — d = 2 is another story, and I may say a word about that in a moment.
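In formulas, as I understand the bounds being described — the general-manifold bounds with their piecewise exponent, sharp for the sphere, versus the flat-torus expectation (λ ≈ √E throughout):

```latex
% General compact manifold (sharp for the sphere):
\[
  \|\varphi_\lambda\|_{L^p} \;\lesssim\; \lambda^{\sigma(p)} \|\varphi_\lambda\|_{L^2},
  \qquad
  \sigma(p) =
  \begin{cases}
    \dfrac{d-1}{2} - \dfrac{d}{p}, & p \ge \dfrac{2(d+1)}{d-1},\\[8pt]
    \dfrac{d-1}{4} - \dfrac{d-1}{2p}, & 2 \le p \le \dfrac{2(d+1)}{d-1}.
  \end{cases}
\]
% Flat torus (conjectural): no loss below the critical exponent 2d/(d-2),
% a lambda^eps divergence at it, and above it
\[
  \|\varphi_\lambda\|_{L^p} \;\lesssim\; \|\varphi_\lambda\|_{L^2}
  \quad \Big(p < \tfrac{2d}{d-2}\Big),
  \qquad
  \|\varphi_\lambda\|_{L^p} \;\lesssim\; \lambda^{\frac{d-2}{2} - \frac{d}{p}}
  \|\varphi_\lambda\|_{L^2}
  \quad \Big(p > \tfrac{2d}{d-2}\Big).
\]
```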
But for d at least 3 this result is quite significant, and one can obtain it for p bounded by 2(d+1)/(d−1). Note that this is an almost perfect general inequality for a class of explicit trigonometric polynomials, up to the critical p and for arbitrary d. There aren't too many examples in harmonic analysis where you can prove something in L^p without going through an interpolation argument or an even moment, and this is one of those instances. Then, regarding the second statement, what one can prove is that it is true for p sufficiently large; that depends on some earlier work on this subject and also on the first statement. So I guess the most interesting thing is the proof of that first statement, which is a consequence of a broader, general harmonic-analysis theorem which we obtained with Ciprian Demeter a few months ago, and which we call the ℓ² decoupling theorem. Now, what does this thing say? What is nice about the statement is that, for once, it is a not-quite-trivial harmonic-analysis statement which is clean, in the sense that you get exactly the right exponent and the right bounds — or almost the right bounds. So this is a very general statement; there is no arithmetic structure. What we do is take a compact smooth hypersurface S in R^d with positive definite second fundamental form — if we only have a non-degenerate second fundamental form there is a version of that too, but it requires suitable modification, and in this form it is not correct. Then we take a small δ and consider all functions whose Fourier transform lives within a δ-neighborhood of this hypersurface S. Then, as people like to do in this field, we decompose this shell into tangential plates: the thickness of a plate is δ, and its size is about √δ
in the d − 1 tangential dimensions, with δ in the remaining one. We can make a corresponding decomposition of F by taking its Fourier transform, restricting it to the plates, and taking the inverse Fourier transform; these Fourier restrictions of the function F are denoted F_τ. Then you have an inequality which is rather amazing: you can bound the L^p norm of F by the square function — the ℓ² sum of the L^p norms of the Fourier restrictions F_τ — up to a factor δ^(−ε), where ε can be taken arbitrarily small, and that is true for p up to 2(d+1)/(d−1). Actually, there is something missing here: p has to be at least 2, so it is for 2 ≤ p ≤ 2(d+1)/(d−1). Above that exponent there is an automatic consequence by interpolation — it is not completely trivial, but one can interpolate with the obvious bound — between that exponent and infinity. Now, that critical exponent is best possible. What one expects — but this is an extremely hard problem — is that in the subcritical regime you don't need the δ^(−ε); anyway, at p equal to 2(d+1)/(d−1) you will need something, it is not a clean bound there. Now, what is remarkable about this inequality is that, although it is a general Fourier-analytic inequality, its main interest really lies in diophantine applications. In some sense what you are getting are statements which are arithmetical, and for which previously either only an arithmetical proof was known, or they are simply new. So this has amazing number-theoretic implications, which is one of the reasons why, in the past, I was somehow afraid of believing in such a conjecture: it really leads to explicit statements about, say, the number of solutions of diophantine equations, which I did not know how to prove using standard divisor estimates.
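In symbols, the decoupling inequality just stated (S ⊂ R^d a compact hypersurface with positive definite second fundamental form, f with Fourier support in its δ-neighborhood, f_τ the Fourier restrictions to the plates τ of dimensions δ^{1/2} × ⋯ × δ^{1/2} × δ):

```latex
\[
  \|f\|_{L^p(\mathbb{R}^d)}
  \;\lesssim_{\varepsilon}\;
  \delta^{-\varepsilon}
  \Big( \sum_{\tau} \|f_\tau\|_{L^p(\mathbb{R}^d)}^{2} \Big)^{1/2},
  \qquad 2 \le p \le \frac{2(d+1)}{d-1},
\]
% with the critical exponent 2(d+1)/(d-1) best possible; conjecturally the
% delta^{-eps} loss is not needed in the subcritical range p < 2(d+1)/(d-1).
```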
In a way, the last result is just a manifestation of this decoupling theorem: if you take for instance d = 3, the exponent is going to be 4, and what you get is that for the three-dimensional flat torus you control the L⁴ norm by λ^ε times the L² norm — in fact the λ^ε shouldn't be there, but anyway. The earlier proof of that used some kind of easy divisor considerations; it turns out that these are really facts from analysis, which raises a very interesting question — one would really like to know how far one can go — and this is something I have been working on in recent months. I will not go into the number-theoretic side, but it is very reminiscent of Vinogradov's mean value theorem, and somehow I started wondering whether Vinogradov's mean value theorem is just a theorem in analysis. So I have a clean analysis conjecture that would imply Vinogradov; it is still open in general — Wooley has quite remarkable work on it, but it is only solved for k = 3, not for larger k. So it could be that what underlies it is a very general harmonic-analysis principle. In any case, you get a certain number of consequences of this which are rather surprising.

Now, I don't want to talk too much about the two-dimensional torus. For the two-dimensional torus things are of course different, because we have very few lattice points. It turns out that the old inequality of Zygmund, bounding the L⁴ norm by the L² norm, is still the best one knows, and in fact in the world of flat tori it is the only uniform bound on eigenfunctions we have. It is a rather trivial thing, but it is all we have, which is rather shameful — but that's how things are. So, a couple of years ago, we tried to study this problem; what one expects in 2 dimensions is that basically you can control uniformly all the moments
and you would first start by looking at the sixth moment. So the typical thing is that you start trying to count solutions of the equation p₁ + p₂ + p₃ = p₄ + p₅ + p₆, where the p_i are lattice points on a given circle. We have done some work with Enrico Bombieri on that; we got some things, but altogether it was rather disappointing — we tried basically everything we could think of, with only limited success. One thing that is rather surprising is that the combinatorial approach leads to results we don't know how to get in any other way; there are some non-trivial results using incidence geometry. On the other hand, if you go by the brute-force analytical approach, you end up with problems about elliptic curves and ranks of elliptic curves. This is the kind of problem where you would say the time is not ripe — you should wait until people understand more about ranks of elliptic curves — but if you really spend some time on it you can prove something, and eventually we convinced ourselves that we got an interesting result. So in any case it's kind of interesting. The state of the situation, eventually, is that using the incidence-geometry approach we can get a control on the sixth norm which is not what you would like — you would like to put a constant here — but which is better than what we know how to get by other means; this is basically a consequence of the Szemerédi–Trotter theorem. Now, there are stronger incidence conjectures — in particular there is the so-called Erdős unit distance conjecture, and there has been quite a bit of progress on it, but it is still open. The Erdős unit distance conjecture tells you that if you have n points, then the number of unit distances is bounded by n^(1+ε); if you assume it, you get the bound with n^ε here. Then, about having control for all p: well, we can prove it for most E.
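A toy version of this count, in plain Python (the function names are mine): for the sixth moment one counts solutions of p₁ + p₂ + p₃ = p₄ + p₅ + p₆ with all p_i on the circle x² + y² = E, which equals Σ_v r(v)², where r(v) is the number of triples summing to v.

```python
from itertools import product
from collections import Counter

def circle_points(E):
    """All lattice points (x, y) with x^2 + y^2 = E."""
    r = int(E ** 0.5) + 1
    return [(x, y) for x in range(-r, r + 1) for y in range(-r, r + 1)
            if x * x + y * y == E]

def sixth_moment(E):
    """Number of solutions of p1+p2+p3 = p4+p5+p6 with the p_i on the
    circle x^2 + y^2 = E: the sixth moment of the trigonometric
    polynomial with unit coefficients at those frequencies."""
    pts = circle_points(E)
    # r(v) = number of ordered triples of lattice points summing to v
    sums = Counter(tuple(map(sum, zip(p, q, s)))
                   for p, q, s in product(pts, repeat=3))
    return sum(c * c for c in sums.values())

# E = 25 has 12 lattice points; the diagonal alone already gives
# a lower bound of order N^3 for the count.
print(len(circle_points(25)), sixth_moment(25))
```

This brute force is cubic in the number N of lattice points, so it is only good for experimenting with small E; the whole difficulty discussed here is to beat the trivial bounds as N grows.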
Now, for most E this may not sound very exciting, because for most E the number of lattice points is only going to be about the square root of the logarithm of E; you would like to look at situations which are a little bit richer, where you have more lattice points — in particular, you would like to take for E a smooth number. What we basically wanted to do from the beginning was to do something with elliptic curves, and what we managed to do is to prove, in the realm of smooth numbers, an estimate up to N^ε — the capital N may be confusing: capital N is the number of lattice points, so it is the multiplicity of the eigenspace. So we get that kind of inequality at least for most smooth numbers. I don't think the result by itself is so exciting, but the technique behind it may be of some interest to some of you. What is the technique? The technique is the following. As I said, you go brute force: you write down the equation p₁ + p₂ + p₃ = (a, b) for a given point (a, b); what you get are, say, three equations in four unknowns, so you can eliminate two of them, and you get a curve, which is a genus one curve. If we assume it has, say, an integral point, we can take that as origin; then we get an elliptic curve, and we know from the general theory that we can control the number of integral points by an exponential of the rank. Now, if we knew that the rank is bounded, of course we would be done; but unfortunately — well, at the time people were convinced that ranks were not bounded. What happens here is that we have a certain specific family, so people might tend to believe that within a fixed family — this is a fixed pencil — the rank may be uniformly bounded; but even despite whatever recent advances, say of Bhargava and so on, we don't know that. So we were not willing to assume too much, but we still wanted to do something. What we did is the following: we made more standard conjectures, rather than start conjecturing things about the rank, and so
what we assume — we don't have to assume modularity anymore, but we assume GRH and we also assume the Birch and Swinnerton-Dyer conjecture, which is not so much of a crime. In any case, basically the only handle on the rank — one standard way to get a handle on it — is exactly through this conjecture: it tells you that the rank is the order of vanishing of the corresponding Hasse–Weil L-function, the one associated to the elliptic curve, at the point s = 1. That is an object that has been studied, and there is an explicit formula for it, due to Weil — the Weil explicit formula — which allows you to bound the analytic rank by a certain expression which one can truncate. Then you will see that the main problem is the second term here: the second term is a certain weighted sum — the capital X is a parameter here — of quantities which I call a_p(E), and these a_p(E) are basically certain character sums. Now, the problem with this is the following: if you just estimate these character sums, you get something like √p, and if you put in √p you end up with a bound of size the square root of whatever the bound on p is — the bound on p being exponential in X — and that doesn't give you anything. What has to be exploited here is a double cancellation: we need to exploit the cancellation from the summation in X and also from the summation in p, and nobody knows how to do that presently. Since we can't do that, we have to do an averaging over the family; people have done that, starting from the work of Goldfeld, and then Heath-Brown considered exponential moments, as needed if you want to control the number of lattice points, because you have the exponential of the rank. But all these arguments are for nice families, and I basically made a statement for almost all smooth numbers; so when you
look at the underlying elliptic curves, what you get is a family which is not arithmetic at all — it is a combinatorial family, a little like what I was talking about before: you have objects which are purely combinatorial. But what is nice is that when you try to use this explicit formula, you only care about the behavior mod p, for each individual p, so you reduce this family mod p. Most of the work we had to do was to show that this combinatorial family, when reduced mod p, in fact behaves like a nice family. Once you have a nice family, you can start using exponential-sum and character-sum estimates over finite fields — character-sum estimates when you vary the curve in a certain nice family of elliptic curves over F_p — and you basically always have the double cancellation as long as the j-invariant is non-constant; this is a standard thing. So there we could make something work. I think this observation — that in fact you only care about the behavior for each individual p — is kind of interesting; it may apply in other situations too, and it is something new, something that had not been done before.

Now, the second thing I want to discuss is another problem which may be of interest to some of you, which is what happens with Courant's nodal domain theorem in the case of the two-dimensional flat torus. The general theorem of Courant — this is always true — tells you that the n-th eigenstate has at most n nodal domains. But for planar domains, say for planar membranes — and this applies to the torus also — there is a better result by Pleijel, a consequence of the Faber–Krahn inequality, which gets you that number 0.691 etc. — try to remember that number. And this is not best possible: in fact the conjectured bound from above is 2/π. Recently there have been some very small improvements on that — I have some results, and there are some related results by Steinerberger.
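The constants being quoted, for the number ν(φ_n) of nodal domains of the n-th eigenfunction (here j₀ = 2.4048… is the first zero of the Bessel function J₀, and the 2/π bound is the conjectured sharp constant):

```latex
% Courant:  nu(phi_n) <= n.   Pleijel (via Faber--Krahn), planar case:
\[
  \limsup_{n \to \infty} \frac{\nu(\varphi_n)}{n}
  \;\le\; \Big(\frac{2}{j_0}\Big)^{2} = 0.6916\ldots,
  \qquad\text{conjecturally}\qquad
  \limsup_{n \to \infty} \frac{\nu(\varphi_n)}{n} \;\le\; \frac{2}{\pi}
  = 0.6366\ldots
\]
```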
We got down a little bit — it is basically a microscopic improvement — but maybe the argument is the most interesting part, because what is exploited there are certain stability properties of the Faber–Krahn inequality which underlies the Pleijel bound, and also an old result about packing densities of disks — I think this may be the first application of that result. It is a result that was obtained some 40 years ago, a result of Blind — there are several papers — which tells you that if you pack disks with radii between two bounds, say a and b, with the ratio b/a sufficiently controlled, then the density can't be more than the density you get for the ordinary packing with equal disks. There is such a result — I don't know why people cared about it at the time — but it has an application here. So these are upper bounds. For lower bounds you can't say anything intelligent: there are constructions, going back to Hans Lewy in the old days, who showed that you can have, for arbitrarily large eigenvalue — and you can also make such constructions on the torus — situations with only two nodal domains.

So now what I want to talk about is a subject that has been rather popular over recent years, which is the Bogomolny–Schmit conjecture. Bogomolny–Schmit tells you that the asymptotic distribution of nodal domains for chaotic manifolds is universal and is described by percolation theory. Now, nothing is proven there, but note that the expected number of nodal domains is about ten times smaller than what you get from Pleijel, so it is kind of interesting to try to understand that. Now, you see, this is about chaotic manifolds, and we are looking at the flat torus; the general principle is that one expects that whatever happens for eigenfunctions in the chaotic case — when you have hyperbolic manifolds — is going to be reflected in the generic behavior of eigenfunctions on a completely integrable manifold, and so in
These are the beliefs, and one should separate the different issues here. The first issue is whether there is really something like a limiting behavior for random eigenfunctions. For instance, on the sphere you can take random spherical harmonics and count their nodal domains: is there really a limit distribution when the eigenvalue goes to infinity? This was proven by Nazarov and Sodin in a very nice paper, with an equally nice follow-up, I don't know if the paper exists already or not, in some joint work of Peter Sarnak and Igor Wigman who have further developed these techniques. In any case, with this technique you really prove that there is convergence to a limit distribution, but it doesn't tell you anything about what the limit distribution really is. At this point, what is rigorously known is that this average expected number is bounded, say by 0.22 or so, which is significantly smaller than what you get from Pleijel, but still about 4 times as big as what it should be. On the other hand, the numerical studies by various people kind of predict the number, but not quite, so the tendency now is to believe that this expectation is indeed somewhere around what is prescribed by the percolation model, but not exactly that. In fact there is no real theory; there are flaws in the justification of why this number should correspond to some percolation principle, because the features are really different there.

Now what I want to talk about is something slightly different. Basically, one would like to say that in the case of chaotic dynamics, if you have a hyperbolic surface, the behavior is going to be similar. There we are running into the problem of how you can deterministically implement this random wave model, and that's not so easy; there are some potential developments there. So in some sense you could
try to justify that by extending the general belief that in this hyperbolic world eigenfunctions tend to have Gaussian behavior; we could build on that and make a stronger assumption that would possibly justify behavior like the random wave model. There is a result in this direction which is non-trivial, a conditional result, I am actually not sure, maybe not conditional, but in any case certainly not directly relevant to Bogomolny-Schmit; still, it tells you something non-trivial, namely that at least you don't have this Lewy phenomenon: the number of nodal domains goes to infinity. Maybe the most interesting part are the techniques which are involved; it's a non-trivial statement, but I wouldn't say it's directly related to the Bogomolny-Schmit problem.

What turns out is that you can prove a deterministic result for the flat torus, and there are two statements here, both number-theoretic. The first statement: for E in a set of full density, you take eigenfunctions, not random eigenfunctions, but with coefficients that you choose to be flat; let's take all the coefficients to be the same, equal to 1. So this is a deterministic example, and what happens is that in the limit you will have, for the number of nodal domains, the same asymptotic as in the random wave model. You have the same statement under certain assumptions on E; in particular, if E runs in a sequence of energies for which the number of prime factors remains bounded, you will have that same phenomenon also. So we are trying to mimic Gaussian distributions, and basically there are two number-theoretic inputs. The first is the phenomenon of equidistribution of lattice points on circles.
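Before getting to the number theory, the all-ones construction itself is easy to play with. Here is a toy sketch of my own (not code from the talk; the energy E = 65 and the grid size are arbitrary choices): build f(x, y) as the sum of cos(2*pi*(a*x + b*y)) over lattice points a^2 + b^2 = E, sample it on a grid, and count nodal domains of the sign pattern with a flood fill respecting the torus topology.

```python
import math
from collections import deque

def lattice_points(E):
    # All (a, b) in Z^2 with a^2 + b^2 = E
    r = int(math.isqrt(E))
    return [(a, b) for a in range(-r, r + 1) for b in range(-r, r + 1)
            if a * a + b * b == E]

def eigenfunction(E, N):
    # f(x, y) = sum over lattice points of cos(2 pi (a x + b y)),
    # i.e. all coefficients set to 1, sampled on an N x N grid of the torus
    pts = lattice_points(E)
    return [[sum(math.cos(2 * math.pi * (a * i + b * j) / N) for a, b in pts)
             for j in range(N)] for i in range(N)]

def count_nodal_domains(grid):
    # Flood fill over connected components of constant sign
    N = len(grid)
    seen = [[False] * N for _ in range(N)]
    domains = 0
    for i in range(N):
        for j in range(N):
            if seen[i][j]:
                continue
            domains += 1
            sign = grid[i][j] > 0
            q = deque([(i, j)])
            seen[i][j] = True
            while q:
                x, y = q.popleft()
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    u, v = (x + dx) % N, (y + dy) % N  # wrap around: torus
                    if not seen[u][v] and (grid[u][v] > 0) == sign:
                        seen[u][v] = True
                        q.append((u, v))
    return domains

grid = eigenfunction(65, 120)       # 65 = 1^2 + 8^2 = 4^2 + 7^2
print(len(lattice_points(65)))      # 16 lattice points on the circle
print(count_nodal_domains(grid))
```

The grid count is only an approximation to the true nodal count (thin domains can be missed at finite resolution), but it is the kind of experiment described below.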
So you are looking at the lattice points on a circle of radius square root of E, and you look at the angular distribution of these lattice points. You have an estimate on the discrepancy which is non-trivial; it gets you something, and it basically has to do with the distribution of Gaussian primes in sectors. Now, another input: if you want to prove something about Gaussian behavior, you will have to estimate moments sooner or later, so what is important is to control additive relations among the frequencies. We are well beyond L^p bounds here; we really want to have the right behavior. What turns out to be relevant, in the second, deterministic statement, is that in fact much more should be true: as a consequence of the subspace theorem, what one has are uniform bounds on additive relations in groups of bounded rank. If we control the number of prime factors, say we have r prime factors, then we have a group of rank r, and we can use the work of Evertse, Schlickewei and Schmidt, which tells you that the number of non-degenerate solutions of a unit equation, say 1 equals a sum of terms, is bounded explicitly in terms of the length L of the relation and the rank of the group, in a very explicit way. In fact one even expects better bounds, which would give even better results without having to assume almost anything on the number of prime factors, in fact nothing at all; but so far one doesn't have this uniformity. In particular, we would like to know that the number of solutions grows, say, sub-exponentially in the rank, but this is not known. Anyway, I don't have a unit equation here, so you have to reduce to a unit equation, and in this form you will have an extra factor, which is this N, the number of lattice points. This is an extremely strong statement, because it basically tells you that you don't have non-degenerate relations, and somehow you can put these things together and prove the distribution result, with work that is mostly soft.
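For orientation, the quantitative unit-equation bound being invoked has roughly the following shape (quoted from memory from the Evertse-Schlickewei-Schmidt paper; the exact constant should be checked against the original):

```latex
% Unit equation in a multiplicative group of bounded rank
% (shape of the Evertse--Schlickewei--Schmidt bound; constant from memory).
% Let $\Gamma \subset (\mathbb{C}^\times)^n$ be a subgroup of finite rank $r$
% and $a_1,\dots,a_n \in \mathbb{C}^\times$. Then the equation
\[
  a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = 1,
  \qquad (x_1,\dots,x_n) \in \Gamma,
\]
% has at most
\[
  \exp\!\bigl( (6n)^{3n}\,(r+1) \bigr)
\]
% non-degenerate solutions, i.e. solutions with no vanishing subsum
% $\sum_{i \in I} a_i x_i = 0$ for a nonempty proper subset $I$.
```

Note the dependence on the rank r is exponential, which is exactly the dependence the speaker says one would like to improve to sub-exponential.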
I would say that you will get the distribution; it requires an idea, of course, but you can do a deterministic implementation of this random wave model. Now the advantage of this is that, unlike in the experiments done by various people to check the random wave model, we have a construction that does not require you to generate random coefficients. It was never really clear to me how they generate random coefficients anyway, but at least we don't have to do that; we can just take all the coefficients to be 1. Since I don't know how to operate a computer, I had to rely on some help: a student of Alex Gamburd, Michael Magee, did some plots. These are plots for various values, squares or not squares, and the plots themselves don't tell you much. What is maybe better visible, and tells you more, is the level sets: the black area, I think, is where f is less than one fifth of its maximum, and the white where it is more. Anyway, I forgot what the numbers here are; I think the first one is square-free and the second one is not, and somehow there is a different behavior. But what is quite interesting to me is something else: you can count these nodal domains, and what really turns out is that the number of nodal domains is pretty much in the ballpark of Bogomolny-Schmit. You have situations where it corresponds quite well; here, for instance, we have 0.06, which is essentially the constant we want. In any case, the numbers we are getting here are well below the deterministic bound we get from Pleijel, or the conjectured bound; they are pretty much in the ballpark of Bogomolny-Schmit. And in fact, among these experiments there are situations where the lattice points are not so well
equidistributed. So what we should do is ask him at some point to also put in the discrepancies of the distribution; probably one will see that these numbers are even more in correspondence when the discrepancies get quite small. I think maybe I should stop here, before you get fed up; I can go on more, but I will probably stop here. OK, any questions or comments? Yes?

Some of the predictions you talked about were for general m squared plus n squared, and now you are limiting... Talk a little louder. In this, are you limiting to energies with a limited number of prime factors?

No, we just look at everything. Of course, because these are relatively small numbers, we don't have too many prime factors; for most of them the number of prime factors is going to be relatively low. I wouldn't worry about that, because if you assume what one expects to prove in terms of the dependence on the rank of the group, it wouldn't be an issue anyway. So this is not really so much of an issue. The question was more: you prove something theoretical, but then you may start asking when this stuff really kicks in, and apparently it already kicks in at a very low level. This is what I wanted to check, how far one has to go to see something. These things are, I would say, more reliable than what you do with random coefficients, because if you are looking at spherical harmonics with, God knows, 100 or so frequencies, I don't see how you generate 100 random coefficients. So for the random-type experiments, what does it really mean?

Back to the section of your talk about eigenfunctions of flat tori: can you give me an idea of why there should be some transition between small p and large p in these inequalities?

Oh yeah, just from the number of lattice points. You do the trivial bound, and then you see that basically whatever comes out is what you get from checking
what happens at zero: you just look at the contribution of the major arc around zero, and then you decide this is it. Oh, maybe this is not it, but still the chance that this is it is quite high; at least it's confirmed for large p.

Instead of flat tori, if we take a negatively curved manifold, can you give me an idea of what the bounds should be? Instead of flat tori, some negatively curved manifold? Negatively curved, not just arithmetic hyperbolic, any negatively curved manifold? I don't know of any experiments; so what's your question? Can we have some L^p bounds like the ones you gave? Ah, right, right. Well, this is a story by itself; of course there is nothing that precise. A very modest thing, the first thing you would like to show, is that there are better L-infinity bounds than the trivial bounds. Even that is not well understood: one may hope to have a power gain there, and in the non-arithmetic case this is not known; what is known is a gain of a logarithm or something like that, but not a power gain. A harder question is to try to get something non-trivial for the L^6 norm. This has recently been achieved in some work of Sogge and Zelditch, based on geodesic restrictions of eigenfunctions; it's a way to connect L^p norms on the whole manifold with L^2 norms of geodesic restrictions. So there are some developments there that eventually led to a non-trivial theorem, but basically almost nothing is proven, not even a power gain. In the arithmetic case, well, some of these things are of course conjectural, and the powers are not necessarily the right powers; also, this is not me.

OK, so you're using the subspace theorem; are some of the constants ineffective? Well, the statement I am using, from their paper, is more than I really need; it is based on an earlier paper by these authors which appeared in Crelle's journal, which is the
absolute subspace theorem. I don't know about the ineffectiveness of certain constants; what matters for me is that there is a dependence on the rank which is not only effective but explicit, and very precise. You also get a dependence on the length of the relation, which is not too much of an issue; what is more of an issue is the dependence on the rank. These dependences on the rank are exponential in the rank, and one in fact believes they should be sub-exponential. There may be other ineffective things, but they do not affect the bounds in terms of the dependence on the rank. I think in the subspace theorem, in these variants, the number of exceptions is always effective; it's the height which is always the issue. So if you only need to know how many exceptions, I mean how many relations, there are, it's always effective; if you want to know the size of the coefficients of the possible relations, then it is a problem.

Yeah, a much more modest theorem would be enough; you probably don't have to use this absolute subspace theorem to deal with that, because after all we're talking about integers, in some sense. What's remarkable about this theorem is that it's a theorem about real numbers; there is no algebraic number there. It is a theorem about groups, say groups of some bounded rank in the complex numbers, so there is absolutely no assumption of algebraicity, and still the proofs are based on algebraicity, where one distinguishes large height, small height, and so on, a whole process. One may imagine that a proof is possible that doesn't go through these machines. Well, there is a trivial reduction, kind of a general logical principle: you can always rephrase the problem, it is equivalent to the problem for algebraic numbers, without really specifying what the height of the number is, and then one has to start distinguishing. So you would imagine there
has to be a proof that does not use that, but I don't know the details of the argument. I should have told you that right away; this is why I had to turn around the question without getting to the point. I never read that, and I refuse to; everybody has his own paper.

When you reduce to algebraic exponential sums, what are the basic variables? Are they the initial variables, the ones coming from the smooth numbers? Yeah, right; well, what do you mean by that? I mean, what is the level of difficulty of evaluating the algebraic exponential sums? Well, I'll tell you right away; here is what we have. First of all, it turns out that all these curves sit in the same one-parameter family, so you can write a Weierstrass equation, and there is a parameter lambda there; it's one parameter which is expressed in terms of a, b, remember P1 plus P2 plus P3 equals (a, b), and E. So we have a distribution there which is rather wild, but it turns out that when you look mod p, somehow you can prove that this distribution mod p becomes basically the uniform distribution on F_p. That is what is going on, and once you have that, the thing is completely standard; there is a paper by Nick Katz in the Bulletin that studies that kind of question. We need double cancellation, right? So we have this exponential sum here in x; we can't exploit the cancellation by summing in p, but we can exploit the cancellation when r, s range in some manifold in F_p, provided we have a non-constant j-invariant. This is all you need: a non-constant j-invariant gets you the double cancellation you need. I don't know if that answers it. So it means the variables are relatively long? Yes, the initial variable is horrendous, tremendous; it only becomes good when you reduce mod p. These things are just to show that this distribution is good mod p, which is not a trivial thing, and of course it's not perfect.
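The kind of double cancellation being described can be seen in a toy example of my own (using the Legendre family y^2 = x(x-1)(x-lambda), which has non-constant j-invariant, rather than the actual family from the talk): for a single lambda the character sum giving a_lambda(p) only satisfies the square-root (Hasse) bound, but averaging over the whole lambda-family produces full cancellation.

```python
import math

p = 101  # any odd prime will do

def legendre(n, p):
    # Legendre symbol (n/p) via Euler's criterion, valued in {-1, 0, 1}
    n %= p
    if n == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

def a(lam, p):
    # a_lambda(p) = p + 1 - #E_lambda(F_p) for y^2 = x(x-1)(x-lam),
    # computed as the character sum -sum_x chi(x(x-1)(x-lam))
    return -sum(legendre(x * (x - 1) * (x - lam), p) for x in range(p))

# Single-curve (Hasse) bound: |a_lambda| <= 2 sqrt(p)
vals = [a(lam, p) for lam in range(2, p)]  # lam = 0, 1 give degenerate curves
print(max(abs(v) for v in vals) <= 2 * math.sqrt(p))  # True

# Double cancellation: summing over the whole lambda-family,
# sum_lam sum_x chi(x(x-1)(x-lam)) vanishes exactly, because for each
# fixed x != 0, 1 the inner sum of chi(x - lam) runs over all residues
total = sum(a(lam, p) for lam in range(p))
print(total)  # 0
```

Here the family average is exactly zero for algebraic reasons; in general one only gets square-root cancellation in the family, which is what "double cancellation" buys.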
Nothing is perfect, but things are good enough that you can implement the uniform distribution. But at the moment you do not exploit a potential averaging in p to get further cancellations? No, and none of these papers does that; if you go back to these other papers, they don't exploit the p-average either. Maybe what they exploit is the average over the family. Yes. So then in some cases it may be possible to use some kind of horizontal Sato-Tate? But do we know horizontal Sato-Tate for such things? It depends on what elliptic surface you have, but it could be related. OK, in some cases it could be some kind of modularity, and then you might use things around such conjectures. So there we would fix the elliptic curve and try to estimate the analytic rank of a given elliptic curve by using a double cancellation that involves also the modulus. Well, we didn't know about such results, but maybe you know that there are such things; in principle there may be. What do you expect? I don't know, it's a question. I don't know precisely the shape of the surface, but we were checking with Katz, and it didn't seem so to him. So what you would like is some kind of horizontal Sato-Tate, which is not so easy in that case. But as far as I know, these other people, Goldfeld, Brumer, Heath-Brown, use the family as a second averaging parameter, and that's much easier than using the modulus, right: you fix the modulus.