So we can start. This is the last lecture of the series, and what I plan to do today is the following. First, there was a request last time for more proofs, so I will speak more about the bounds in the proof of the interpolation formula: how to prove the moderate growth of the seminorms of the generating function F which we constructed in the previous lectures. Once we prove this moderate growth, the proof of the interpolation formula will be complete. More precisely, it proves that the composition in one order — going back and forth — is the identity operator; strictly speaking one should also prove something about the other composition, but in any case it is important to know that the series converges, and this completes the proof. In the second part of the lecture I will speak about some open questions, some conjectures, and also generalizations.

To prove the moderate growth of the functions we have constructed, the essential result we still have to prove is the propagation of moderate growth property. Let me recall the proposition from the previous lectures. Let k be an even integer, and let h1 and h2 be two functions which are continuous and have moderate growth on the upper half plane. We also suppose that there exists a function f from the upper half plane to the complex numbers which is continuous and satisfies the functional equation which is, so to say, a generalization of the functional equation we solved for the interpolation formula — for the interpolation formula we had some very concrete functions h1 and h2. In the previous lecture we showed how to deal with this functional equation: for example, if we can
solve it in a neighborhood of a fundamental domain D, then we can extend it to the whole upper half plane. Recall that D was a fundamental domain for the group Γ(2). Now we will show that if we have such a solution, and we know that it has moderate growth only on the domain D — or rather on its closure, although since our function is continuous we may as well speak of D itself — then f has moderate growth on the whole upper half plane H.

A question from the audience: if you want to reconstruct f at a point outside D, it will be given by some formula in the values of h1 and h2 at some points, and then the bound will be something like 2 to the power of the number of steps? — Yes, this is exactly what we will do. Again I will use the technique that was disliked last time; maybe it is not very geometrical and a bit too formal, with somewhat heavy notation, but this is the way I can organize the argument without losing details. And for many people the geometry of the upper half plane is quite counterintuitive, so it is easy to get lost: as we move toward the real line it seems like everything becomes very small, but in fact it becomes very big. So I will try to include more pictures and show the geometric intuition behind the long formulas.

To understand better what moderate growth on the upper half plane means, we introduce a notion which I think we already used in the previous
lectures. For an element γ of SL(2,R) we define its Frobenius norm in the usual way: on a matrix with entries a, b, c, d it is just the ℓ² norm of the vector with these coordinates, ‖γ‖_F = (a² + b² + c² + d²)^(1/2). For us it measures how big elements of SL(2,R) are: big elements move the points of our fundamental domain far away, while small elements keep them close. This norm has some useful properties. First, it is submultiplicative: ‖γδ‖_F ≤ ‖γ‖_F ‖δ‖_F. Second, negating a matrix does not change the norm, so it is well defined on PSL(2,R). The last property is a special feature of dimension two: taking the inverse of an element does not change its Frobenius norm, because all our matrices have determinant one. In the future I will often omit the subscript F, since it takes time to write.

Now we can describe the moderate growth of a function in terms of the Frobenius norm — this is really the right definition. The upper half plane is the image of the point i under the action of PSL(2,R); indeed it is the quotient of PSL(2,R) by the stabilizer of i. We say that a function f from the upper half plane to the complex numbers has moderate growth if we can find positive constants C and N such that for every g in SL(2,R) the absolute value of f(g·i) is bounded by C times ‖g‖_F^N.
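These properties of the norm are easy to verify numerically; a quick sanity check (my own illustration, not from the lecture):

```python
import numpy as np

def frobenius_norm(g):
    """Frobenius norm of a 2x2 real matrix: sqrt(a^2 + b^2 + c^2 + d^2)."""
    return float(np.sqrt(np.sum(np.asarray(g, dtype=float) ** 2)))

# Two sample elements of SL(2, R) (integer entries, determinant one).
g1 = np.array([[1, 2], [0, 1]])        # the translation T^2
g2 = np.array([[2, 1], [1, 1]])        # a hyperbolic element
assert round(float(np.linalg.det(g2))) == 1

# Submultiplicativity: ||g1 g2|| <= ||g1|| ||g2||.
assert frobenius_norm(g1 @ g2) <= frobenius_norm(g1) * frobenius_norm(g2) + 1e-12

# Invariance under negation: the norm descends to PSL(2, R).
assert frobenius_norm(-g1) == frobenius_norm(g1)

# Determinant one makes the norm invariant under inversion: the inverse of
# [[a, b], [c, d]] is [[d, -b], [-c, a]], the same entries up to sign.
assert abs(frobenius_norm(np.linalg.inv(g2)) - frobenius_norm(g2)) < 1e-9
```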
Next we will need two useful facts. The first is about the slash operator and the automorphy factor in it. If τ is in the upper half plane, g_τ is an element of PSL(2,R) with g_τ·i = τ, and γ is any element of PSL(2,Z), then we can estimate the automorphy factor as follows: it is bounded by the Frobenius norm of γ times the Frobenius norm of g_τ squared. The second fact is about fundamental domains. Any point of the upper half plane has a Γ(2)-translate in the closure of our fundamental domain D; equivalently, we can pass to the group SL(2,Z) and look at SL(2,Z)-translates. Besides D we will consider the standard fundamental domain for SL(2,Z), sometimes called the keyhole domain, which we will denote by F: it contains the point i, and its bottom corners sit at the third roots of unity. This domain has the following property: if we take any point τ in the upper half plane and consider all its SL(2,Z)-translates, then the translate lying in F corresponds to the element of PSL(2,R) with the smallest possible Frobenius norm. For our purposes, though, we will mostly be working with the other domain. With this in mind, suppose z is a point in this fundamental domain, g_z is an element of SL(2,R) such that z = g_z·i, and δ is any element, this time of SL(2,Z).
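The reduction of a point into the keyhole domain is completely algorithmic; the following sketch (my own illustration, not from the lecture) implements the standard translate-and-invert reduction and checks that it never decreases the imaginary part, one concrete face of the minimal-norm property:

```python
import numpy as np

def moebius(g, tau):
    """Action of g in SL(2, R) on the upper half plane."""
    a, b, c, d = np.asarray(g).ravel()
    return (a * tau + b) / (c * tau + d)

def reduce_to_F(tau):
    """Return (tau', g) with g in SL(2, Z), tau' = g(tau), and tau' in the
    standard keyhole domain |Re tau'| <= 1/2, |tau'| >= 1."""
    tau = complex(tau)
    g = np.eye(2, dtype=int)
    while True:
        n = int(round(tau.real))
        tau -= n                                  # apply T^{-n}
        g = np.array([[1, -n], [0, 1]]) @ g
        if abs(tau) < 1 - 1e-12:
            tau = -1 / tau                        # apply S
            g = np.array([[0, -1], [1, 0]]) @ g
        else:
            return tau, g

tau0 = 0.7 + 0.1j
tau1, g = reduce_to_F(tau0)
assert abs(tau1.real) <= 0.5 + 1e-9 and abs(tau1) >= 1 - 1e-9
assert abs(moebius(g, tau0) - tau1) < 1e-9
# Reduction only ever increases the imaginary part.
assert tau1.imag >= tau0.imag
```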
There exists an absolute constant C such that the Frobenius norm of g_z is at most C times the Frobenius norm of δ·g_z. So the norm of g_z is not necessarily smaller than the norm of this product, but it is at least bounded by an absolute constant times it.

Now, if we take an element of our group Γ(2) and write it as a product of generators, we want to control the length of this product in terms of the Frobenius norm. Consider the elements γ1 = T² and γ2 = S T² S, and let γ̄2 denote the image of γ2 in PSL(2,Z). We know that the group Γ̄(2) is generated by γ1 and γ2 and that it is free. Freeness just means that every element γ of Γ̄(2) has a unique representation as a finite reduced word γ = γ1^{e1} γ2^{f1} γ1^{e2} γ2^{f2} ⋯, where all the exponents are integers and all of them, starting from f1, are nonzero; only e1 is allowed to be zero.

Now we have the following claim, which relates the size of an element with respect to the Frobenius norm to the length of its reduced word. First, the length of the word — the sum of the absolute values of all the exponents — cannot exceed the square of the Frobenius norm of γ. Second, another useful fact: the initial subwords have Frobenius norm smaller than that of the whole word; in other words, the initial subwords have strictly increasing Frobenius norms.

A question: why wouldn't you expect the right-hand side to be a logarithm? — I believe a logarithm would not be true: a hyperbolic matrix raised to powers grows exponentially while the word length grows linearly, and we had estimates in between. Perhaps the first power already suffices and the second power is overkill, but the second power is safe, and this estimate will be sufficient for us.

Now we can start the proof of the proposition. As last time, I consider the domain D covered by translates of the domain F: M1·F with M1 = 1, then M2·F, M3·F, M4·F, M5·F, M6·F, where the M_i are elements of SL(2,Z). As we saw last time, these elements M1 through M6 give us a basis of the quotient of the group algebra R by the ideal I. So we write down the vector M consisting of these six elements, and for γ in SL(2,Z), multiplying the vector M by γ on the right gives the representation σ(γ) — a six-by-six matrix — applied to the vector M, plus a correction.
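A small numerical spot-check of the word-length claim (my own illustration, using the generators γ1 = T² and γ2 = ST²S from above):

```python
import numpy as np

T2 = np.array([[1, 2], [0, 1]])                 # gamma_1 = T^2
S = np.array([[0, -1], [1, 0]])
G2 = S @ T2 @ S                                 # gamma_2 = S T^2 S

def fnorm(g):
    """Frobenius norm of a 2x2 matrix."""
    return float(np.sqrt((g.astype(float) ** 2).sum()))

# A reduced word gamma_1^2 gamma_2^{-1} gamma_1^3 of length 2 + 1 + 3 = 6.
G2_inv = np.linalg.inv(G2).round().astype(int)
letters = [T2, T2, G2_inv, T2, T2, T2]
prefixes, w = [], np.eye(2, dtype=int)
for m in letters:
    w = w @ m
    prefixes.append(w.copy())

# Word length is bounded by the square of the Frobenius norm of the word.
word_length = 6
assert word_length <= fnorm(prefixes[-1]) ** 2

# Initial subwords have strictly increasing Frobenius norm (in this example).
norms = [fnorm(p) for p in prefixes]
assert all(norms[i] < norms[i + 1] for i in range(len(norms) - 1))
```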
The correction consists of two vectors n1(γ) and n2(γ) whose entries belong to the ideal I; since I is a free ideal, we can write the correction in this way. How do we compute these vectors? We use the chain rule — the cocycle condition they have to satisfy. If we write γ as a product of generators of SL(2,Z), then we can compute the value of the vector n at this element: the formula expands term by term along the word, and we continue until at the end the whole word is consumed. Let us mark this formula by a star; we will use it later.

One more thing we need for the argument is that these functions n1(γ) and n2(γ) cannot grow too fast: if γ has a small Frobenius norm, then these vectors, whose entries are elements of the group ring, also have to be small — in the sense that all their coefficients are bounded, and all the group elements entering the expression with nonzero coefficients have small norm. Here again we are not fighting for the best estimate; we just want something sufficient for our purposes. The claim is that there exist positive constants C and N such that for all γ in PSL(2,Z) the following holds. Write one of the coordinates of such a vector — say coordinate s of n_i(γ), an element of the group algebra — as a sum over group elements δ with coefficients c_δ. Then, first, the sum of the absolute values of the coefficients is bounded polynomially by the Frobenius norm of γ; and second, the Frobenius norm of each δ appearing with a nonzero coefficient is also bounded polynomially in the Frobenius norm of γ.
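In formulas, the decomposition and the chain rule just described can be written as follows (this is my reconstruction of the board notation, with r_1, r_2 the free generators of the ideal I):

```latex
% decomposition of the right action of gamma on the vector M:
M\,\gamma \;=\; \sigma(\gamma)\,M \;+\; \nu(\gamma),
\qquad \nu(\gamma)\in I^{6},
\qquad \nu(\gamma) \;=\; n_1(\gamma)\,r_1 \;+\; n_2(\gamma)\,r_2 .
% the "chain rule" (cocycle property):
\nu(\gamma\delta) \;=\; \sigma(\gamma)\,\nu(\delta) \;+\; \nu(\gamma)\,\delta .
% iterated over a word gamma = g_1 g_2 \cdots g_\ell in the generators:
\nu(\gamma) \;=\; \sum_{m=1}^{\ell}
   \sigma(g_1\cdots g_{m-1})\,\nu(g_m)\,g_{m+1}\cdots g_\ell . \tag{$\star$}
```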
Now, how do we prove this? The proof is not difficult. First observe that it suffices to prove the claim for γ in Γ(2) rather than in all of PSL(2,Z), simply because Γ(2) is a subgroup of index six. So consider γ in Γ(2). We combine the star formula — the chain rule for n — with the lemma formulated above about the length of words. This is also the place where the polynomial growth of σ is used: it becomes important that the matrices σ do not have very big entries. Concretely, if we write γ as a product of generators of Γ(2), then the number of terms is bounded polynomially by the Frobenius norm; the Frobenius norms of the initial subwords increase as a function of the length of the subword; the Frobenius norms of the ending subwords — taking the last letters of the word — decrease correspondingly; and all the entries of σ evaluated on the generators are bounded polynomially by the Frobenius norm as well.

Now we have everything we need to estimate the value of our function f at points of the upper half plane. If we want to compute f at some point of the upper half plane, we can express this value in terms of the values of the function inside the fundamental domain D, plus some expression depending on h1 and h2 which we pick up from the functional equation, and this expression
will depend, of course, on the element of Γ(2) — or rather of SL(2,Z) — which maps our point τ into the fundamental domain, and on its expression as a word in the generators. So we define the vector-valued function F whose components are the translates of f. All components of F have moderate growth on the fundamental domain F, because our assumption on f gives moderate growth on the bigger domain D, and the elements M1 through M6 tile D by translates of F; so the moderate growth condition holds for all components on the smaller domain. Now we can rewrite the functional equation of f: using the formula for M multiplied by γ, the functional equation is equivalent to saying that the representation σ(γ) acting on the vector F equals the function h1 acted on by the vector n1(γ) plus h2 acted on by the vector n2(γ). The first coordinate of this expression involves the value of f, which is what we actually want to estimate, and this holds for all γ in PSL(2,Z). So our goal is now to estimate the values of f at a point τ in terms of the Frobenius norm of the element g_τ which brings the point i to τ: knowing τ = g_τ·i, we want to bound |f(τ)| polynomially in the Frobenius norm of g_τ.
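Schematically, the chain of estimates about to be carried out looks like this (my notation, with τ = γτ′ and τ′ in the fundamental domain):

```latex
|f(\tau)| \;=\; \bigl|j(\gamma,\tau')\bigr|^{k}\,\bigl|(f|_k\,\gamma)(\tau')\bigr|
\;\le\; \bigl|j(\gamma,\tau')\bigr|^{k}
\Bigl( \|\sigma(\gamma)\|\,\|F(\tau')\|
      \;+\; \sum_{i=1,2}\bigl|\bigl(h_i\,\big|\,n_i(\gamma)\bigr)(\tau')\bigr| \Bigr)
\;\le\; C\,\|g_\tau\|_F^{\,N},
```

using the automorphy-factor bound |j(γ,τ′)| ≤ ‖γ‖_F ‖g_{τ′}‖_F², the polynomial bounds on σ(γ) and on the coefficients of n_i(γ), and ‖γ‖_F = ‖g_τ g_{τ′}^{-1}‖_F ≤ ‖g_τ‖_F ‖g_{τ′}‖_F ≤ C′‖g_τ‖_F, since τ′ stays in the fundamental domain.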
Now we are ready to do this. Suppose we have a point τ in the upper half plane, and we have found an element γ in PSL(2,Z) which brings τ into the fundamental domain F: say τ = γτ′ with τ′ in F. Then of course we can write f(τ) = f(γτ′), which equals the automorphy factor times the slashed function evaluated at τ′. The first easy observation is that the automorphy factor does not really change anything — we have control over it: its absolute value can be expressed through the imaginary part of τ′ divided by the imaginary part of τ. Maybe I am not writing the precise estimate here, but we certainly have control over this term.

So we estimate the second part, the slashed function at τ′. From the inequalities we have already proven, we have control over all the terms. Start with the σ(γ) term: because our representation σ has the polynomial growth property, we can bound it by a polynomial in the Frobenius norm of γ times the norm of the vector F at τ′; and because τ′ lies in the fundamental domain, that norm is bounded polynomially there. We really want to estimate everything in terms of τ, but γ is the product of g_τ and g_{τ′} inverse, and the norm of g_{τ′} is bounded by the norm of g_τ, so this whole term is bounded by some constant times, say, the Frobenius norm of g_τ to the power 3N. A similar story holds for the two h-terms. Look at one of the coordinates, with s the number of the coordinate and i either 1 or 2: we can write this entry of n_i(γ) as a linear combination of group elements, and since we have a bound on the coefficients and a bound on the Frobenius norm of each δ, we can bound these terms too — by our assumption the function h_i has moderate growth, so its value is bounded by a power of the Frobenius norm of δ·g_{τ′}. So this whole term is indeed polynomially bounded in the Frobenius norm of g_τ. We have done this for an arbitrary point τ, and what we bounded is in fact the ℓ² norm of the whole vector, which in particular bounds each of its coordinates — and the first coordinate was the one we actually cared about. So f has moderate growth on the upper half plane.

Having proven the proposition on the propagation of moderate growth, we come back to the estimates which interested us in the first place: the estimates on the seminorms of the capital F, our generating function — this was the actual goal. I will probably not give all the details; in the earlier work we did not have this interpolation formula and it was not important there, and for the proof of optimality these bounds are again not needed — they matter only for the convergence of the formula. So, to finish the first part of the lecture, I will just outline our strategy for obtaining the bounds we are interested in. What remains is to show that these functions — here we take the seminorm with respect to the variable x and view the result as a function of
τ on the upper half plane — have moderate growth. (A question: the norm is taken in the coordinate x, so is this a function of x or of τ? — When we take the seminorm, the second variable disappears, so to say; it is a function of τ only. Here α and β are multi-indices, and these are the seminorms of the Schwartz space.) It suffices to show two things. First, that these functions extend to the whole upper half plane — but this we established, so to say, in the previous lecture, and we can just apply the proposition. So the only thing still to show is that these functions have moderate growth inside the domain D — the functions without poles, that is, after multiplying by the difference of j-values, as we had before.

How do we prove these bounds? Of course we use our integral representation, and with it we have two different difficulties. The first is that the kernel K has poles in the upper half plane, which matters as τ approaches the boundary. The way to fight this is to change the contour of integration: a contour like the original one can be deformed to avoid the poles. The second problem is that the kernel K_τ also blows up as t goes to infinity: in dimension 8 it grows, so to say, linearly in t, but in dimension 24 it grows exponentially in t, which of course looks like a more serious problem. What we do here is use the transformation properties that this kernel has and change the integral representation: instead of integrating along the contour from 0 to infinity, we integrate from 0, and from 1, to some point τ0 in the upper half plane, and then from τ0 to infinity — and on each of these contours we use not the original kernel but a different version of it. What is good about this new integral representation is that at all the cusps involved our new functions decay exponentially, and this allows us to prove the polynomial bounds on the function F itself and on its derivatives with respect to the variable x. Let us make a break here.

In the last part of the lecture I would like to speak about some questions which remain open. Since we finished the previous part by proving interpolation formulas, a first question would be: what other interpolation formulas are possible? Let me write down the classical Shannon interpolation formula. It tells us that we can reconstruct band-limited functions from their values at the integers: if we know the values f(n) for integer n, and we know that the Fourier transform of f is supported in the interval [−1/2, 1/2], then we can reconstruct the function. In this sense we know something about the values of the function, and we know quite a lot about its Fourier transform — namely that it vanishes outside this interval.
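The Shannon formula can be checked numerically; here is a minimal sketch of my own (a truncated sinc series applied to a band-limited test function — none of this is from the lecture):

```python
import numpy as np

def f(x):
    # A band-limited test function: its Fourier transform is the
    # self-convolution of an indicator, supported in [-1/2, 1/2].
    return np.sinc(x / 2) ** 2          # np.sinc(t) = sin(pi t)/(pi t)

def shannon_reconstruct(x, N=500):
    """Truncated Shannon series: sum_{|n| <= N} f(n) sinc(x - n)."""
    n = np.arange(-N, N + 1)
    return float(np.sum(f(n) * np.sinc(x - n)))

# The series reproduces f away from the sample points as well.
x0 = 0.37
assert abs(shannon_reconstruct(x0) - f(x0)) < 1e-4
```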
The formula which we presented in this series of lectures reconstructs a function from the following information, and it is more symmetric between the Fourier side and the function itself. It is a second-degree interpolation formula: if we know the values of a radial Schwartz function at the points √n for even integers n, the same information for its derivative, and the same for its Fourier transform and its derivative, then we can reconstruct the function. Together with Danylo Radchenko we have also proven a simpler version — since this is a second-degree interpolation formula, it is perhaps more natural to have a first-degree one — which says that a radial Schwartz function can be reconstructed from its values and the values of its Fourier transform at √n for all non-negative integers n, without taking derivatives. The two formulas look alike: in the one we collect twice as much information at each point but use half as many points; in the other the number of points doubles while the information collected at each point decreases. There is also another family of formulas which Danylo found, so to say in between these two cases. I think at the moment he has not written it down, but he gave a talk about it at an American Institute of Mathematics workshop last autumn, which is available online. The formula is the following: we take our function f and sample it at the points obtained from the even integers shifted by two real parameters α and β.
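For comparison, here are the two formulas just described, written out (my transcription; the first-degree one is the published Radchenko–Viazovska formula for even Schwartz functions on the real line, and in the second-degree formula the sums run over the appropriate range of n for the given dimension):

```latex
% first degree: values only, nodes \sqrt{n} for all n >= 0
f(x) \;=\; \sum_{n\ge 0} a_n(x)\, f(\sqrt{n})
      \;+\; \sum_{n\ge 0} \widetilde{a}_n(x)\, \widehat{f}(\sqrt{n}),

% second degree: values and first derivatives, nodes \sqrt{2n}
f(x) \;=\; \sum_{n} \Bigl( b_n(x)\, f(\sqrt{2n}) + c_n(x)\, f'(\sqrt{2n})
      + \widetilde{b}_n(x)\, \widehat{f}(\sqrt{2n})
      + \widetilde{c}_n(x)\, \widehat{f}\,'(\sqrt{2n}) \Bigr).
```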
The parameters α and β belong to some interval which depends on the dimension — for a fixed dimension they cannot be too big. We take the same information about the Fourier transform, and considering such an interpolation problem also leads to a nice functional equation, which can be solved explicitly in terms of hypergeometric functions. Setting α and β equal to zero recovers the formula above. On the other hand, what we see from the functional equations is that if we take higher derivatives, things break down: the functional equations we get do not lead to such nice representations — either the group involved is not discrete, or the ideal we get from the functional equation has infinite codimension. Still, I think it is not impossible that for other sequences of nodes we should also get formulas of this kind. So the question is: for which sequences α_n and β_n of non-negative real numbers is a radial function determined by its values (and the values of its Fourier transform) at these points? For example, is it true that if the density of these points is big enough, then we always get some kind of interpolation result — maybe without knowing the interpolating functions explicitly, but at least a nice situation where we get an isomorphism onto a space of fast-decaying sequences, up to finite-dimensional spaces? A question from the audience: that is really another question, because going from a function to its sequence of values and back, do you get the same function — can a nonzero function give the zero sequence?
I think we do not get zero, because our interpolating basis functions are nice — this is actually what we prove in our paper, that they do form a basis. Why the data of a nonzero function does not vanish is, I think, more or less clear, because of the prescribed values — though no, it is really a very good question. Let me put it like this, forgetting about second-order interpolation: we have a basis of functions, where the n-th function is 1 at the n-th point and 0 at all the others. If we know that a sum of such functions with some coefficients converges, we get something which has exactly those values at those points. Whether the data can vanish at all the points at once — this is exactly what we prove in the paper: it cannot happen; there is a uniqueness statement. For other sequences of real nodes, this is again the question of whether we have a basis, and in our case it is deduced from the nice analytic properties of the generating function F.

About the dimension in these questions: since we are speaking about radial functions, the dimension d need not be an integer. We use the dimension only to define the Fourier transform, so we can think of real — of course not honest dimensions, but quasi-dimensions: it just means that the Fourier transform of a radial function is a certain Bessel transform with the parameter d.
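Concretely, for radial functions the d-dimensional Fourier transform is a Hankel-type transform that makes sense for any real d > 0. A sketch of my own, using the standard formula f̂(r) = 2π r^(1−d/2) ∫₀^∞ f(s) J_{d/2−1}(2πrs) s^(d/2) ds (the Gaussian e^(−πs²) should be self-dual in every quasi-dimension, which is what the check below verifies):

```python
import numpy as np
from scipy import integrate, special

def radial_fourier(f, r, d):
    """Radial Fourier transform in 'dimension' d (any real d > 0):
    a Hankel-type transform with Bessel parameter nu = d/2 - 1."""
    nu = d / 2 - 1
    integrand = lambda s: f(s) * special.jv(nu, 2 * np.pi * r * s) * s ** (d / 2)
    val, _ = integrate.quad(integrand, 0, np.inf, limit=200)
    return 2 * np.pi * r ** (-nu) * val

gaussian = lambda s: np.exp(-np.pi * s ** 2)

# e^{-pi s^2} is its own transform for integer and non-integer d alike.
for d in (2.0, 2.5, 3.7):
    assert abs(radial_fourier(gaussian, 0.8, d) - gaussian(0.8)) < 1e-5
```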
certain Bessel transform with this parameter d. And of course other questions remain open, such as: what about universal optimality in other dimensions? There is no obvious way to deform our interpolation formula to, say, an 8.001-dimensional space. This was actually one of the things we discussed at this workshop: we computed a kind of derivative of the formula in the dimension, around eight, and it turns out the derivatives are also nice, because the location of the poles for the derivative is still nice. So the derivative also has nice properties — you see nice numbers, which are either algebraic numbers or numbers expressed in terms of π and logarithms of rational numbers.

[Question:] If I understand correctly — maybe I don't — you are saying that in an 8.001-dimensional space you expect the interpolation formula to still involve the square roots of 2n and these nice numbers? [Answer:] Not quite: each of the points has to move a little bit, but how much it moves will be this δ = 0.001 multiplied by some nice numbers, at least when you are very close to eight. [Comment:] Physicists have been doing this for many decades — dimensional regularization. [Answer:] Yes, exactly.

Another question is what happens with universal optimality in other dimensions. As we have seen in one example, in dimension three we certainly know that we do not have universal optimality, at least not for this natural class of functions: we try to solve different optimization problems and we get different results. A similar situation happens in the other small dimensions we can work with. So universal optimality might fail: universally optimal configurations might not exist in dimensions which are not in our list of good dimensions, which are one, two, eight, and 24. (And dimension two is still a mystery: there is a lot of numerical evidence, but no proof yet.)

On the other hand, if we do this kind of dimensional regression, we might forget about actual physical configurations and instead talk about distributions — the universal optimality of, say, radial distributions. Here it seems that these four dimensions are not so special anymore: they sit in line with the other dimensions, and the behavior there is not much different from the behavior everywhere else. This is something we already discussed a little last time. We can consider distributions — and what I really want is not just a tempered distribution, but a distribution defined on continuous functions — of the form u = Σᵢ cᵢ δ_{ωᵢ}, a linear combination of delta functions with ωᵢ ∈ ℝ. I also want u to have a Fourier transform, and the value of this Fourier transform at zero has to be one, û(0) = 1, which replaces the condition of the configuration having density one. And here the dimension is not necessarily an integer: in this question it seems the dimension could actually be any real number. [Question:] Could ω be complex? [Answer:] I'm not sure the integral transforms would still be nicely behaved then.
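Schematically, the setup just described can be written as follows. This is a hedged reconstruction of the board notation: the precise normalization of the d-dimensional radial (Bessel) transform and the exact form of the Gaussian-energy objective are my assumptions, not verbatim from the lecture.

```latex
% A radial ``configuration'' as a nonnegative combination of delta functions:
u \;=\; \sum_{i \ge 0} c_i \,\delta_{\omega_i},
\qquad \omega_i \in \mathbb{R}_{\ge 0}, \quad c_i \ge 0,
\qquad \widehat{u}\,(0) \;=\; 1,
% where \widehat{u} is the d-dimensional radial (Bessel) transform and
% d may be any real, not necessarily integer, parameter.
% For a fixed \alpha > 0 one then asks to minimize the Gaussian energy
E_\alpha(u) \;=\; \sum_{i \ge 0} c_i \, e^{-\pi \alpha\, \omega_i^{2}}
\;\longrightarrow\; \min .
```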
So the optimization problem we have is this: we fix our parameter α, and we want this energy sum to be minimal. This is now an analogue of Gaussian energy minimization in real dimensions. For each α we have a different problem, and the numerical evidence we have suggests the following: for fixed dimension d the optimal solution u exists, and it does not depend on α. We have serious numerical evidence for this — Henry Cohn did a lot of computations in different dimensions, including non-integer dimensions and very big dimensions — and it seems to work, at least numerically.

It also seems that these optimal nodes have no good arithmetic properties. In a sense they are not periodic: they are just some real numbers. It is still possible, for example, to form a generating series with these exponents, which gives us some function on the upper half plane, but there will be no nice SL₂(ℤ) action on it. (Probably the matrix S, this involution, could still act nicely on it, but there is no periodicity — no way to describe the behavior.)

Now an open question — here we don't have that much confidence — is whether these nodes also give us an interpolation formula, a second-order interpolation formula, as well. Numerically it seems to work if the dimension is big enough; for example, the dimension has to be bigger than three, or maybe just bigger than some threshold — the interpolation formula with these nodes seems to be there.

[Question:] Since we know the situation in dimensions 8 and 24, did you try to make an expansion in the dimension? [Answer:] We tried to do it a little bit. As I said, we found the first derivative and it seems to be nice, but at this workshop there was limited interest in this problem — maybe somebody will become interested, or we will come back to this question. I think it would be nice to understand the evolution of these nodes with respect to the dimension, and also the evolution of the weights, because the nodes and weights seem to be parts of the same story.

A strange thing is that, at least numerically, all these optimal solutions always look symmetric. For universal optimality this is actually more or less clear, and it probably happens because the Fourier transform acts nicely on Gaussians: the Fourier transform of a Gaussian is again a Gaussian, so a universally optimal solution has to be equal on both sides.

We definitely know that this does not work in small dimensions — I mean the interpolation formula. For example, we know it fails for d = 1, because of the following easy fact: an even Schwartz function on the real line is not determined by its values at these nodes. [Question:] But that's because on the real line the optimal configuration is just all the integers — or the non-negative integers, since we want even functions — so the nodes are actually at the integers n and not at the square roots √n, while from the formula in dimension one you would expect √n. [Answer:] Yes, but if we set up the same problem, it will actually give us this sparse sequence, because the integers are the universally optimal configuration in dimension one — this universal optimality was proven by Cohn and Elkies… excuse me, Cohn and Kumar.

Maybe I'll give a small proof here; it is quite easy to see. The point is that we can construct many Schwartz functions on the real line such that the function itself and its Fourier transform vanish at all integers, actually even to second order. One construction I like is the following. Take g to be some even Schwartz function, and define f by

f(x) = sin²(πx) · (g(x+1) − 2g(x) + g(x−1)),

where the sin²(πx) gives us double roots at all integers, and we multiply it by the second difference of g. Now if we take the Fourier transform — it's an easy computation — the second difference turns into a factor sin²(πy), and this sin² factor in turn gives us the second difference of the Fourier transform of g:

f̂(y) = sin²(πy) · (ĝ(y+1) − 2ĝ(y) + ĝ(y−1)).

So we see that there are actually many functions which have zeros exactly here. Looking at dimension two, those nodes also seem way too sparse to produce an interpolation formula, even though in dimension two we don't have an example like this — a radial function which has zeros, for example, at all points of the A₂ lattice.
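As a sanity check, here is a small numerical verification of the one-dimensional construction above. The choice g(x) = e^{−πx²} (the self-dual Gaussian), the Fourier convention f̂(y) = ∫ f(x) e^{−2πixy} dx, and the quadrature grid are my assumptions for the illustration; the lecture only requires g to be an even Schwartz function.

```python
import numpy as np

# Even Schwartz function; the Gaussian is self-dual under
# fhat(y) = \int f(x) exp(-2*pi*i*x*y) dx, so ghat = g.
def g(x):
    return np.exp(-np.pi * x**2)

def second_difference(func, x):
    """func(x+1) - 2*func(x) + func(x-1)."""
    return func(x + 1) - 2 * func(x) + func(x - 1)

def f(x):
    # sin^2(pi x) has a double root at every integer.
    return np.sin(np.pi * x) ** 2 * second_difference(g, x)

# f vanishes to second order at the integers: f(n) = 0 and f'(n) = 0.
for n in range(-3, 4):
    h = 1e-5
    assert abs(f(n)) < 1e-12
    assert abs((f(n + h) - f(n - h)) / (2 * h)) < 1e-7

# Fourier transform by quadrature matches the closed form
# fhat(y) = sin^2(pi y) * (ghat(y+1) - 2 ghat(y) + ghat(y-1)),
# so fhat also has double zeros at every integer.
x = np.linspace(-12.0, 12.0, 240001)
dx = x[1] - x[0]
for y in (0.3, 1.7, 2.5):
    fhat = np.sum(f(x) * np.cos(2 * np.pi * x * y)) * dx  # f is even, so fhat is real
    target = np.sin(np.pi * y) ** 2 * second_difference(g, y)
    assert abs(fhat - target) < 1e-6

print("f and fhat vanish to second order at all integers")
```

Since sin²(π(y ± 1)) = sin²(πy), the closed form for f̂ has exactly the same shape as f with ĝ in place of g, which is what makes the double vanishing of both f and f̂ at the integers immediate.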
Maybe at the end I would like to speak about some other optimization problems which seem to be related to this one. One of them is a recent paper of Henry Cohn and Felipe Gonçalves. It turns out that these kinds of very explicit solutions can be found not only for the Euclidean optimization problem, or for the corresponding linear program, but also for other similar optimization problems; this is what Henry and Felipe have proven.

They consider the following question, which was posed by Bourgain, Clozel and Kahane some time ago. Consider a function f from ℝᵈ to ℝ, and for such a function define the following number, which they call r(f): it is the smallest possible radius after which f has no more sign changes, so f has the same sign for |x| ≥ r(f). [Question:] Is it a radial function? [Answer:] It might not be radial — what is important is that it is real-valued; for non-radial functions we can also give the definition, but later we will actually consider radial functions. Picture the plot of a function f which oscillates and then, past some point, no longer changes sign: r(f) marks the largest interval containing infinity on which f keeps the same sign. [Question:] Is it allowed to touch zero? [Answer:] Yes, touching is allowed — "same sign" is meant in the weak sense.

I think Bourgain, Clozel and Kahane needed these types of functions to prove some inequalities in analytic number theory, and they also discovered that a kind of uncertainty principle holds for this problem: if we consider this number r for a function f and for its Fourier transform, then the product of these two numbers cannot be too small. So they considered the following class of functions: f such that, first, f ∈ L¹ and its Fourier transform is in L¹, with f and f̂ real-valued — in particular, f has to be even; and second, f is eventually non-negative while f̂(0) ≤ 0, and likewise f̂ is eventually non-negative while f(0) ≤ 0. At first these conditions look like they contradict each other: f is eventually non-negative, but f̂(0) ≤ 0 means that the integral of f over the whole real line is non-positive — so, for example, there must be some points where f is negative.

What Bourgain, Clozel and Kahane showed is that the following number is positive for each dimension: the infimum, over all f in this class, of the geometric mean of r(f) and r(f̂). What Cohn and Gonçalves did is compute this number explicitly in dimension 12. It turns out that in dimension 12 it is possible to find a function which is an extremizer for this problem: they were able to prove that this infimum, A₊(12), is exactly √2, and they also found the function which is the extremizer. And the way they found this function is very similar to the functions we have constructed.
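To collect the definitions from this discussion in one place — a reconstruction of the board notation, with the sign conditions as resolved above:

```latex
% Last sign change of f : \mathbb{R}^d \to \mathbb{R}:
r(f) \;=\; \inf\bigl\{\, r \ge 0 \;:\; f \text{ keeps one sign on } |x| \ge r \,\bigr\}.
% Bourgain--Clozel--Kahane class: f, \widehat{f} \in L^1, real-valued (so f is even),
% f eventually nonnegative with \widehat{f}(0) \le 0, and
% \widehat{f} eventually nonnegative with f(0) \le 0.  Uncertainty principle:
\mathrm{A}_+(d) \;=\; \inf_{f} \sqrt{\, r(f)\, r\bigl(\widehat{f}\,\bigr)\,} \;>\; 0,
\qquad \text{and (Cohn--Gon\c{c}alves)} \quad \mathrm{A}_+(12) \;=\; \sqrt{2}.
```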
This extremal function has its roots at the square roots of even integers, and it can also be expressed explicitly in terms of modular forms. Again, for this problem 12 seems to be the only good point — the only dimension where we can solve the problem explicitly — because in all other dimensions the solution seems to be transcendental. The extremizer of course has many double roots, and in other dimensions the locations of those double roots are usually some real points which seem to have no particular arithmetic meaning. [Question:] If you take the expansion, do the coefficients have arithmetic meaning? [Answer:] Here I'm not sure, because I have not looked closely at this particular problem, but I would not be very surprised if it is also true in this case. I think we get nice results when the nodes are nice — then many things can be computed explicitly. [Question:] Can you ask the same question in non-integer dimension as well — are there numerics there? [Answer:] I'm not very sure, but they have a quite long paper where they do a lot of different numerical computations as well, and in principle this problem can also be posed in fractional dimensions; there is no obstruction here.

Maybe to conclude my talk, I would like to mention another result which appeared very recently and which shows that there is actually a big connection between this optimization in Euclidean space and the conformal bootstrap. I have not read the paper in detail yet, but it seems that exactly the same optimization problems as we have considered also play a role in conformal field theories. This is a recent paper by Dalimil Mazáč and Leonardo Rastelli: a few days ago they posted a paper called "Sphere Packing and Quantum Gravity", where they apply essentially the same technique as we described here to solve some problems in conformal field theory.

Thank you.