So, we continue with our series of lectures, and as I promised yesterday, today we will speak about the positivity of the magic functions and how to prove it. In the previous lectures we reduced the question of the universal optimality of the E8 lattice and the Leech lattice to the following positivity statement. We constructed this function f tilde, and we have shown that if this function is non-negative for all real non-negative values of r and for all purely imaginary tau in the upper half-plane, so if we could prove this inequality, then this would imply that the E8 lattice and the Leech lattice are universally optimal.

What will be useful for us in trying to prove this statement is that we have also shown that f tilde has the following integral representation. Here f tilde is this sin squared factor, which generates the double zeros of f tilde, times the following integral from zero to infinity of this meromorphic function K, which is defined on the product of two upper half-planes. We have seen that this integral representation converges absolutely if tau is in the domain D (and the imaginary axis belongs to D) and if r is bigger than the square root of 2n0 − 2, where n0 is a number which depends on the dimension. For dimension 8 we have n0 = 1, so this condition just means that r has to be strictly positive, which is fine for us: we can use the representation for all the purposes we want. If the dimension is 24, then n0 equals 2, which reflects the fact that the first possible length is missing in the Leech lattice: it has no vectors of length √2. So in that case the integral converges only if r is big enough. And here I made a typo: there should be a minus sign.
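Schematically, the reduction just described can be summarized as follows. This is only a sketch: the exact normalizations, and in particular the decaying exponential factor (the "missing minus sign"), are inferred from the verbal description and may differ from the published formulas.

```latex
% Positivity statement to be proved, for all r \ge 0 and \tau = it, t > 0:
%   \tilde f(r, \tau) \ge 0 .
% Integral representation (schematic; the factor e^{-\pi r^2 t} is inferred
% from the remark about the missing minus sign):
\tilde f(r, \tau) \;=\; \sin^2\!\Big(\frac{\pi r^2}{2}\Big)
  \int_0^{\infty} \widehat{K}(it, \tau)\, e^{-\pi r^2 t}\, \mathrm{d}t ,
% absolutely convergent for \tau \in D and r > \sqrt{2 n_0 - 2},
% where n_0 = 1 in dimension 8 and n_0 = 2 in dimension 24.
```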
It matters because with the minus sign here, as it correctly should be, this becomes a very fast decaying factor, which compensates for the poles of K hat. And this is just to recall the meromorphic kernels we spoke about: the kernel K hat was defined as a transformation of the kernel K we worked with a lot. K is a function of two variables, say tau and z, and we apply the slash operator with respect to the first variable.

Now we would like to give an explicit formula for these kernels K_d, and as we discussed in one of the previous lectures, it is convenient to split each kernel into two parts, a positive part and a negative part, so to say; these are defined by the action of the involution S. So here are the explicit formulas for the kernels, written in the following way. This is a column vector consisting of the functions φ₋₂, φ₀, φ₂, which we defined, I guess, in our second lecture: they are the functions which generate the ring of functions in the class P that are annihilated by the ideal I, and on the next slide I will remind you what these functions are. These are the functions which generate the modules of functions annihilated by the ideals: we had the two ideals I⁺ and I⁻ and their tilded versions, so these are for the ideal I⁺, these for I⁺ tilde, these for I⁻, and these for I⁻ tilde. For each of these functions we have a concrete description, which I will show on the next slide. In the middle we have a 3×3 matrix, which I will also give two slides from now.

The functions φ₂, φ₀, φ₋₂ come, so to say, from the world of quasimodular forms, while the four functions ψ, ψ₄, ψ₂, ψ₀ are a mixture of logarithms of lambda functions: this calligraphic L is the logarithm of the lambda function, and this one is the logarithm of the lambda function acted on by the matrix S via linear fractional transformations. The functions ψ₄, ψ₂ are usual modular forms, of weight 4 and 2 respectively, and they are modular forms for Γ(2). And here maybe I will write down what U, V, and W are: they were the fourth powers of the theta functions, I think like this. There is one linear relation among these functions, namely one of them is the sum of the two others (Jacobi's identity θ₃⁴ = θ₂⁴ + θ₄⁴), and two of them actually generate the ring of all modular forms for the group Γ(2). We also have the functions φ tilde and ψ tilde, and they come from the same world as their counterparts without the tilde: here again we have an expression in terms of quasimodular forms, and for ψ tilde we again have modular forms of level 2 and the logarithm of λ.

And now we come to the matrices. We tried to write them in a way which is reasonably compact, since we now have four different matrices: two for dimension 8 and two for dimension 24. Here we introduce some abbreviations: we take this set of functions, of weights from −2 to 13, and from them we construct diagonal matrices. The first matrix is then defined like this: a diagonal matrix times this particular matrix. Here we introduce this notation for the j-invariants so that things look a bit nicer inside the matrix. What we get is a matrix whose entries are holomorphic on the upper half-plane and possibly have poles at the cusps; in this case I think we have maybe one cusp, at infinity. And here we divide by the difference of j-invariants, j(τ) − j(z), which gives us the poles at the points where tau and z are SL₂(Z)-equivalent. We write a similar representation for the matrix corresponding to the minus sign. We tried to find some nice structure in these matrices which could give a hint about positivity, but we were not able to; maybe somebody else can do it later.

And here is what we have for dimension 24. The structure is similar: a product of three matrices, a diagonal matrix, this matrix, and again a diagonal matrix. Q: Sorry, is there supposed to not be a j(τ) − j(z) denominator? A: Oh yes, I think it just disappeared somewhere. Or maybe it got absorbed somehow... no, probably not... okay, yes: there is a −1, so it already comes in from here. So we see that in dimension 24 we have more poles, and the whole expression gets a bit more complicated, but not too much.

And now that we have these explicit expressions, the next thing we try to do is to deduce the positivity of the function we are interested in, f tilde, from properties of our kernel K. Again, just to recall the integral representation (and here again the minus sign is missing): in those cases when this integral converges absolutely, the positivity of K alone implies the positivity of f tilde. This happens in dimension 8, where it is the only thing we have to check. In dimension 24 we have the additional problem coming from the divergence of this integral, so we need to work separately with small values of r. And here is how we address this problem in dimension 24: we use, so to say, a truncation.
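Before turning to the truncation: the poles along "tau equivalent to z" mentioned above come precisely from the denominator j(τ) − j(z), because the j-invariant is SL₂(Z)-invariant and separates orbits. A quick numerical sanity check of this invariance (not part of the actual verification; mpmath's `kleinj` computes a normalization of the j-invariant, which is enough for checking invariance):

```python
from mpmath import mp, kleinj

mp.dps = 25  # working precision in decimal digits

tau = mp.mpc('0.3', '1.1')    # a generic point in the upper half-plane
z = mp.mpc('-0.2', '0.8')     # a point not SL2(Z)-equivalent to tau

# j is invariant under the generators T: tau -> tau + 1 and S: tau -> -1/tau,
# hence under all of SL2(Z); so j(tau) - j(z) vanishes exactly when z ~ tau.
j_tau = kleinj(tau)
assert abs(kleinj(tau + 1) - j_tau) < mp.mpf('1e-15')
assert abs(kleinj(-1 / tau) - j_tau) < mp.mpf('1e-15')

# for an inequivalent point the denominator stays safely away from zero
assert abs(j_tau - kleinj(z)) > mp.mpf('0.001')
```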
It means that we split our kernel K₂₄ into two parts: one of them, this E hat, is the part which, so to say, contains the pole, and the remaining part grows at most polynomially as the imaginary part of the second variable goes to infinity along the imaginary axis. Here are the explicit expressions for the term which we are going to treat separately. Now we set this parameter p, which was chosen to make our numerics work better, and we split the integral into the following two parts. It is still from zero to infinity, but with a slightly tricky definition of this part. The advantage is that the second integral can be evaluated explicitly, and this is what we get. We obtain a pole of second order, but remember that in our representation of the function f tilde we were also multiplying by sin squared; this pole is compensated by the sin squared factor.

The positivity of f tilde we then deduce from the following statements about the kernel K hat, the truncated kernel, and the remaining part. First we show that the kernels themselves are positive when the first variable is on the imaginary axis and the second variable is also on the imaginary axis. Then we show the same for the truncated kernel, and we estimate the remaining term which comes from the integration of the pole. This inequality actually splits into two simpler inequalities, which we prove separately. Now, the functions we are interested in are all defined on the product of two real half-lines; but the product of two half-lines is a non-compact set, and doing numerics on it might not be very convenient.
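Before the change of variables, the truncation mechanism just described can be imitated on a toy integrand (purely illustrative, not the actual kernel: the growing "pole part" e^{2πt} and the bounded remainder B are made up). The growing term is integrated in closed form, producing a pole at r² = 2 which the double zero of the sin² prefactor cancels, while the tame remainder is integrated numerically:

```python
from mpmath import mp, quad, exp, sin, pi, inf

mp.dps = 20

def B(t):
    # stand-in for the part of the kernel with at most polynomial growth
    return 1 / (1 + t) ** 2

def f_direct(r):
    # direct integration; converges only when pi*r**2 > 2*pi, i.e. r^2 > 2
    integrand = lambda t: (exp((2 * pi - pi * r**2) * t)
                           + B(t) * exp(-pi * r**2 * t))
    return sin(pi * r**2 / 2) ** 2 * quad(integrand, [0, inf])

def f_split(r):
    # the growing term integrates in closed form to 1/(pi*r^2 - 2*pi),
    # whose pole at r^2 = 2 is killed by the double zero of sin^2
    pole_part = 1 / (pi * r**2 - 2 * pi)
    rest = quad(lambda t: B(t) * exp(-pi * r**2 * t), [0, inf])
    return sin(pi * r**2 / 2) ** 2 * (pole_part + rest)

r = mp.mpf('1.7')   # r^2 = 2.89 > 2, so both evaluations converge and agree
assert abs(f_direct(r) - f_split(r)) < mp.mpf('1e-12')
# the split form also makes sense below the threshold, e.g. at r = 1
val = f_split(mp.mpf(1))
```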
So what we have done: instead of defining the function on a product of two half-lines, we decided to consider a function on the unit square. For this we need a change of variables, and this is the change of variables we make: we take the modular lambda function, which is a Hauptmodul for the group Γ(2), as our new parameter. The function λ maps the imaginary axis into the interval (0, 1). Now we have to express all our functions in terms of λ, given that our variable in the upper half-plane actually lies on the imaginary axis.

Which functions will we need for this? We will need polynomials; we will need the logarithm, because we had the logarithm of λ in our formulas; and to express the quasimodular forms and the modular forms of higher weight we need to introduce the elliptic integrals, since it is a classical fact that elliptic integrals invert modular functions. So here are the elliptic integrals: the complete elliptic integral of the first kind and the complete elliptic integral of the second kind. And here are the formulas we use to express all the functions in terms of λ. For the calligraphic L it is easy: it was just equal to the logarithm by definition, and the only thing to check is that the branches of the logarithm we use here are the right ones, and this is the case. The modular forms have expressions like this: they can be expressed in terms of the square of the elliptic integral of the first kind. In some of the formulas we also use the variable on the upper half-plane itself, and it too can be expressed in terms of K in this way; and here we have a formula for the quasimodular form E₂.
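The inversion of λ by elliptic integrals mentioned above can be checked numerically: for x in (0, 1), the point τ = i·K(1 − x)/K(x) lies on the imaginary axis and satisfies λ(τ) = x. A small sanity check with mpmath (its `ellipk` takes the parameter m = x; λ is computed from fourth powers of Jacobi theta functions):

```python
from mpmath import mp, mpf, mpc, ellipk, jtheta, exp, pi

mp.dps = 25

def lam(tau):
    # modular lambda function via theta functions: lambda = theta2^4 / theta3^4
    q = exp(pi * 1j * tau)   # the nome
    return (jtheta(2, 0, q) / jtheta(3, 0, q)) ** 4

# inversion of lambda by elliptic integrals: for 0 < x < 1,
# tau = i * K(1 - x) / K(x) is purely imaginary and lambda(tau) = x
for x in [mpf('0.1'), mpf('0.5'), mpf('0.9')]:
    tau = 1j * ellipk(1 - x) / ellipk(x)
    assert abs(lam(tau) - x) < mpf('1e-20')

# and lambda maps the imaginary axis into the interval (0, 1)
t = lam(mpc(0, '0.7'))
assert abs(t.imag) < mpf('1e-20') and 0 < t.real < 1
```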
And now, after all this, we can make the substitution. We compute two functions, well, not two, actually three functions, L, L_d, and L truncated, such that K(τ, z) equals L evaluated at (λ(τ), λ(z)); this identity is again valid when the real parts of tau and z vanish. All the resulting expressions can then be written in terms of these basic functions: x, the logarithm of x, the logarithm of 1 − x, the elliptic integral E, the elliptic integral K, and K(1 − x). There is one slight technicality with the truncated kernel: there we have an exponential, e to the power of an expression in x, so we get a rather unpleasant term, an exponential of an expression in elliptic integrals; but that one we estimate separately.

Now let me show you what these expressions look like, how long they are, when written in terms of these basic pieces. Here is the Mathematica code (there is a typo here: it should say L8) for the function L8; it is kind of lengthy, but maybe not too big. And here is a plot of this function. In the plot you can see that, on the one hand, just by eye it seems to be positive; on the other hand, this is maybe not so easy to see, because on these borders the function actually vanishes, and at this edge it goes to infinity. We also see that on the diagonals we have, so to say, virtual singularities: the function itself is smooth there, but because we have to divide by the difference of j-invariants, the computer does not know how to evaluate the function there, so it leaves the diagonals blank for us. On the next slide we show the growth of the function at the edges, as we have seen before. Maybe I should explain a little what this means: these are, so to say, the orders of vanishing. For example, at the edge y = 1, asymptotically our function looks like log(1 − y) times some coefficient which still depends on x but which we simply don't write in this picture. So we see that here we have vanishing, here we have vanishing, and here we actually have growth.

Now, as I told you, we could not find a way to prove the inequalities I showed you before by some simple mathematical argument, so what we do instead is computer computations; but we try to make them mathematically rigorous, in the following sense. When we compute the value of a function at some point, we can do it, so to say, exactly: if we have two numbers which we know exactly, we can use rational arithmetic and do operations with them. For example, "two times two equals four" is a mathematically rigorous statement. On the other hand, you can write your numbers in decimal representation, and this is how it looks then; this is what people usually see when making computations on a computer. However, in reality, "something point something" usually does not mean that the number is exactly the rational number given by this decimal representation. It usually means that we simply don't know what comes after the last digit, or that we don't really want to know, because it is not important for us. This works quite well for many practical purposes, but there are purposes for which it does not work that well, and I think about 50 years ago people realized that it is better to have not only a representation like this, but to actually know what happens when we know two numbers only approximately: if we multiply them, we know the result only approximately as well, and the errors accumulate.

For this we can use what is called interval arithmetic. Another way to interpret the equation here is that we have some quantity which is 2 plus some error, multiplied by something else which is also 2 plus an error. What we can do instead is say: our first number lies in this interval, our second number lies in this interval, and then their product lies in this interval. For example, if we know our numbers with this precision, then we know the product lies in some interval like this. Interval arithmetic is the attempt to do computations in this consistent way: when we know all our inputs with some precision, we conclude that the result also belongs to some interval. Sometimes we can write this interval explicitly; in other situations, when the exact evaluation is not so easy, we just give some interval which definitely contains the result, and we are rigorously sure about that. Nowadays there are quite a few packages where this is implemented; for example, such a package exists inside Mathematica, and for our computations we used Mathematica. As you have seen on the plot, our function is not an easy one. It has all those peculiarities: problematic behavior at the edges, and this virtual problem on the diagonal, where the function is of course smooth, but to compute it we cannot just divide 0 by 0; we have to do something there. Q: But you had to do it in Mathematica; couldn't there be some legal issues
because it is not in the public domain? Maybe we should use Sage. A: Yeah, maybe; we did use a licensed copy. Q: But also, we don't know what happens inside; if it makes mistakes, it's not really a proof. A: Yes, I will speak about that a bit later. One nice thing about commercial products is that they are usually made to be liked by people, so they are convenient and easy to use. But actually Henry Cohn and Abhinav Kumar did work on rewriting at least some of our procedures in Sage, and, while I think writing (or maybe working) in Sage is sometimes more difficult, what we discovered is that it actually ran much faster. One reason might be that, writing it for the second time, we were more experienced and could introduce better procedures; but still, it worked faster.

And again, the problematic points we had were the diagonals, and the point where the two diagonals intersect, which is like using l'Hôpital's rule twice; we also had to do something about the edges. Maybe the most unpleasant points are the small corners, where, so to say, three different singularities meet: two edges and also one diagonal. Those required the most work. Here is the picture of all these regions; positivity on each of them was checked by a different program.

At the end I can also tell you the running times of the programs. All the computations were done in Mathematica, and here are some technical details about the computers: these computations were run by Stephen Miller at Rutgers, who used 16 cores to run Mathematica. Here we had some small files which run pretty quickly: we had to write down our own estimates for the elliptic integrals, some symbolic computations with the kernels, and this one deals with parts of the truncation argument. And here is how it looks in dimension 8. In dimension 8 everything actually runs pretty fast; maybe it all adds up to a few hours, and this one seems to be the longest. It also turns out that in the numerical checking, the longest part was actually the generic points: the algorithms we had to run at the singularities run pretty fast, but the generic points for some reason took longer to check. For dimension 24 the running times are much longer; this part alone adds up to a few days. And for dimension 24 we also need to consider the small values of r separately; these are the running times of the programs which deal with the truncated part of the kernel. So, maybe you have some questions.

Q: Sorry, if you didn't insist on interval arithmetic and wanted a non-rigorous computation, could you do it much faster? A: I don't know; it depends on how rigorous you want to be. This plot, for example, is computed in about two seconds. But then problems happen: one thing which would still be necessary is to check that everything is fine at the singularities, because if you try to rescale and come closer to an edge, the plot will usually show something very bad; it will not look positive, it will show noise. Q: But it looked like the computations at the boundaries were much faster than the generic ones. A: Yes. Okay, it could be that something could be implemented in a more efficient way, or maybe there is some fundamental reason for it. Actually, for the boundary we had to work harder and do more mathematics, and mathematics somehow speeds up computations: we compute Taylor series and do more estimates. At the generic points we were just lazy and took the existing program, which in principle should be much faster. Q: In principle? You said the mathematically most painful part was the corners. A: Yes, but there we did all the work; actually Steven did all the work for the computer expansions, the first few terms. It shouldn't be hard to get... well, "painful", that's why I said it: if you have a Taylor method tailored for the boundary, it is sometimes difficult to extend it far from the boundary, and that is certainly the danger. Q: Did you try alternative hardware, just to be sure? A: Yes; I think Henry Cohn was also running these programs independently from Steven and getting the same results. Q: Is it possible to write programs to check the results of the computations, and then write other programs to check those computations? A: Here I don't think so. If for security you write something else to check this, it sounds more like statistical physics: we are sure that this is true with probability 99.99%. I think this can increase your confidence in the result, but mathematically it probably cannot prove it anyway, because if you don't believe the first program, then maybe the second program is also corrupt, or your second computer is also defective. And of course it would now be very nice to have a theoretical proof of this positivity, but at the moment we have no idea how to achieve that, or even whether there is any good theoretical reason for it to be positive; maybe it is just positive by accident, so to say. Maybe you have some more questions.

Q: Can one work to remove the singularity, some factorization that removes the singularity on the diagonals? A: I tried to do that, to write it in some algebraic way, to divide out by something, but maybe not for very long. You see, this is a big expression, and if you try doing some kind of elimination of variables, it gets much, much bigger. If you have ever worked with this, effectively it is a function in many algebraically independent variables, and at the end I got some really huge result which I certainly could not handle by hand. So it would again be a computer computation, only instead of this interval arithmetic it would be computing huge symbolic expressions, and it looked to me that it would also take a very long time, especially since we tried hard to write our expression in a compact form: it is small, but any kind of elimination makes it grow extremely fast. I also tried renaming the variables to see whether it is some nice function, but it did not look like a nice function, just some big, big polynomial. Q: How many variables? A: It was like 10; and the abbreviations do make the final expression a bit smaller. I tried to write it down as an expression in these, how many are there, 12 variables, but I could not see any particularly good structure in it. So I think either there should be some very good new idea for why positivity should hold, or else any other method seems to be neither more rigorous nor much more elegant than the one we are already using. Q: A suggestion: your function
looks like it doesn't have a minimum inside; the minimum is zero, where you can prove it is positive. Could one use the equation for the critical points? A: But how would this imply it? There are a few types of minima of a function, and it seems there is no critical point inside this region. Q: Then you would need to find it on the boundary. A: Yes, I think it has no critical point at all inside. And on the boundary, because of this logarithm, I don't know whether a boundary point can even count as a critical point; maybe there is no other way to try it. Q: The reason for the suggestion is that it might give some simplification: the critical-point equations would give some equations in these functions which might be easy to handle. A: Yeah, maybe; maybe we can try that. A theoretical proof would be nice. Q: On the boundary, are you differentiating your expression a few times in order to get the positivity? A: On which boundary, here? No: at the boundary, what we do is just compute what the limit is, so to say. We compute the expansion, for example with x going to zero: we find an expansion in terms of x, look at the first term (the first term in the expansion is again some explicit function), check whether this function is positive, and also bound the rest. At the boundary it is not exactly a Taylor expansion, because we also have logarithmic terms. Q: So you need to check that the function is bounded below and takes the value zero, or zero and infinity, on the boundary. A: Probably for this one would need some more time; maybe it's difficult to do it just like this. Then probably we'll make a break soon.

Q: Does the function you are showing depend on your truncation parameter? A: The function I showed was for dimension 8, so it did not depend on the truncation parameter; in dimension 8 we don't have a truncation parameter. And, okay, I just turned off the computer, but in principle it would also have been good to collect pictures of the other functions. I chose to show this one, but I also made pictures for the 24-dimensional kernel, and they looked very similar to this, so I thought it would not be so interesting to show them. We really have three different objects: the full kernel for dimension 8, the full kernel for dimension 24, and the truncated kernel for dimension 24, and they all look more or less like this. For dimension 24 we proved the inequalities for the kernel itself, without truncation, without modification, and there was another program running for the truncated part, the kernel without the first pole; the pole was handled separately, but that was much faster, because it is essentially a function of one variable: it depends on the variable z in a very simple way. Q: I have another question. I don't remember which is x and which is y, but say the boundary where it goes to infinity is x = 1 or y = 1. Is it possible that if you fix x and increase y, the derivative is positive everywhere, so the function is increasing from the bottom boundary to the top? A: We did not check this, and I'm not so sure it is actually monotonic everywhere, because it looks like it has some kind of turnaround, so to speak, going from one boundary toward zero; I'm not so sure about that. Also, with the representation we have now, if you take a couple of derivatives of this function, the expression gets more and more complicated; this was the reason why we did not fight with derivatives but tried to address the function itself.

So, maybe the second part will be a bit shorter, because I ran out of slides earlier than I expected, but that is almost always the situation with slides. Maybe now I will speak a bit more about some implications of this result: if we do believe in our computer computation, and hence in universal optimality, what kind of consequences does it have? One interesting question is the uniqueness of optimal configurations. Suppose we have some optimization problem, where the potential g is some completely monotonic function. What we have already proven is that universal optimality means: if we take any point configuration C, its energy is at least the energy of our lattice. But now the question is: if we have equality here, would this mean that the configuration C is isometric to our lattice? The answer is yes. To see why, we can remember how we proved the Cohn–Kumar bound. You remember that we had this function: there exists a function f_g such that f_g is bounded above by g, its Fourier transform is non-negative, and, the condition we need, it reproduces the energy of our lattice. So, if you remember the proof of the linear programming bound by Cohn and Kumar (okay, the density is one in our case; we of course also have to fix the density of our configuration), then from the proof of the Cohn–Kumar theorem, the linear programming bound
will tell us that for each so the only so the set of all Euclidean lenses of elements in this configuration see it has to be contained in in sight of the distances between like x and zero zero okay no alright sorry because we don't it's not the latest so we don't have so so the set of possible distances it has to coincide with the set of possible distances between elements in our lettuce and this happened because we when we proved the universal optimality of lambda D we I whispered this energy as a reason that an estimate for C and that estimated would be a sharp if and only if this condition holds so maybe so maybe here one thing so maybe maybe it's not for here an important thing is that so we would have so if we have our potential G which goes like this and this was the function f of G so now the uniqueness it will be true if what we need is that G equals the f of G only only four points are in this so to say in the set of possible lenses of two elements of our lettuce and for example we know that this will be true for if G is for example a Gaussian because in that case we have an explicit for formula for f G and and now that we have a qualities only on our square roots of even integers it's kind of step function yes you're spear backing step function maybe something like step function but which is also somehow infinity at the here here so probably step function would also work but step functions of how it does not belong to this it's not completely monotonic so it has to be so for the spearpacking the problem the function has to be something like this like here zero the specking radius here we have zero but here we have kind of infinity and then one could argue that this is a completely monotonic function and so and so as long as we have this condition satisfied then we see that we always have the also have the uniqueness and so uniqueness it says is a mouth it's argumented I seen it well first it was a paper of a con and elk is so now if we have some point 
configuration whose set of pairwise distances is the same as the set of pairwise distances of one of our two lattices, and both sets have the same density. I think we also have to assume here that C is periodic; otherwise it is difficult to speak about isometry, because for example we could take a lattice and remove one point, which does not change the energy, although it is not the same configuration anymore. So assume C is periodic, the density of C is one, and the set of pairwise distances in C is the same as the set of pairwise distances in our lattice (I think even an inclusion suffices). Then we claim that C has to be isometric to one of our lattices. How do we prove it? First, we see that this distance set has a very special arithmetic structure: it is just the set of all square roots of even integers 2n for n ≥ n_0. In particular, the squared distance between any two points of C is always an even integer. And now here is a simple algebraic claim. Fix one point, say x_0, and compute the scalar products ⟨x − x_0, y − x_0⟩ for x, y in C. These always have to be integers, simply because if we know the distances between points then we also know the scalar products between them; the formula which expresses this is the polarization formula. So we fix this point x_0 and translate our set so that x_0 coincides with 0, and we obtain a set of points all of whose pairwise scalar products are integers.
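Concretely, the polarization identity referred to here reads:

```latex
\langle x - x_0,\; y - x_0 \rangle
  \;=\; \tfrac{1}{2}\Bigl( |x - x_0|^2 \;+\; |y - x_0|^2 \;-\; |x - y|^2 \Bigr),
```

and since every squared distance in C is an even integer, the right-hand side is an integer.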
The claim now is that the lattice spanned by the set C − x_0 is an even lattice. Recall the definition: a lattice Λ is even if the squared norm |x|² is an even integer for every x in Λ; equivalently, all pairwise scalar products are integers and the diagonal of a Gram matrix consists of even numbers. (There was a short discussion here about whether evenness of the diagonal of the Gram matrix alone suffices: by itself it does not, but together with integrality of the whole Gram matrix it is equivalent to the first definition.) So let Λ′ be the lattice spanned by C − x_0. Since C − x_0 is contained inside Λ′, the density of Λ′ is at least one. On the other hand, because Λ′ is an even lattice, its density is also bounded by one. So the density of Λ′ is exactly one: C − x_0 is contained inside Λ′, they have the same density, and since we assumed C is periodic, the only way the two sets can have the same density is if they are equal. In this way we see that C is an even unimodular lattice. In dimension 8 we know that there is only one even unimodular lattice up to isomorphism, and it is the lattice E8. In dimension 24 we have 24 different lattices like this, but only one of them has the same set of pairwise distances as the Leech lattice, namely the one where the length √2 is omitted.
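As a quick computational sanity check (my own illustration, not part of the lecture), one can verify that E8 is even and unimodular directly from a generator matrix; the basis below is the usual one from the D8-plus-glue-vector construction, and `det` is a hypothetical helper doing fraction-exact Gaussian elimination:

```python
from fractions import Fraction as F

# Standard E8 basis (rows): 2e1, then e_{i+1} - e_i for i = 1..6,
# and the all-halves glue vector (1/2, ..., 1/2).
half = F(1, 2)
B = [[2, 0, 0, 0, 0, 0, 0, 0],
     [-1, 1, 0, 0, 0, 0, 0, 0],
     [0, -1, 1, 0, 0, 0, 0, 0],
     [0, 0, -1, 1, 0, 0, 0, 0],
     [0, 0, 0, -1, 1, 0, 0, 0],
     [0, 0, 0, 0, -1, 1, 0, 0],
     [0, 0, 0, 0, 0, -1, 1, 0],
     [half] * 8]
B = [[F(x) for x in row] for row in B]

# Gram matrix G[i][j] = <b_i, b_j>.
G = [[sum(a * b for a, b in zip(r1, r2)) for r2 in B] for r1 in B]

def det(m):
    """Determinant by fraction-exact Gaussian elimination."""
    m = [row[:] for row in m]
    n, d = len(m), F(1)
    for i in range(n):
        p = next(r for r in range(i, n) if m[r][i] != 0)
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            m[r] = [a - f * b for a, b in zip(m[r], m[i])]
    return d

integral = all(x.denominator == 1 for row in G for x in row)   # integral lattice
even_diag = all(G[i][i] % 2 == 0 for i in range(8))            # even lattice
print(integral, even_diag, det(G))   # True True 1  -> even unimodular
```

Integrality plus even diagonal gives evenness, and determinant 1 gives unimodularity, matching the two conditions used in the argument above.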
All the other ones are called Niemeier lattices, and all of them contain vectors of length √2. So this gives us the uniqueness of solutions of these optimization problems: we know that if some periodic configuration solves this problem, then it has to be isometric to the Leech lattice or to the E8 lattice. A question from the audience: the assumption that C is periodic seems to be used only at the end, to conclude that C equals Λ′; without it, one still gets that C sits inside an even unimodular lattice of dimension d, with the difference having density zero. Yes, I guess so. So without periodicity any such C could be, for example, the E8 lattice with some density-zero subset removed, up to translation; probably that is right, even though I would have to think about it. And maybe one more application of the theorem: we can look not only at Schwartz potentials but also, for example, at power laws, and in particular, if we restrict not to all configurations but only to lattices, this result can also be applied to the optimality of the Epstein zeta function among lattices. So, among all lattices in R^d with determinant one, our lattice attains the minimum value of the Epstein zeta function. Let me just recall the definition: it is the sum over lattice points other than zero, where we take the length of a point to the power −2s. This holds first for the parameter s where the series converges; there it is more or less straightforward to obtain from universal optimality. For the analytic continuation, we can use the functional equation and obtain the statement also for s in the range from zero to infinity, because there the
value of this factor in the functional equation is positive, so things work out fine. For negative s something interesting happens: there the sign of the factor is not constant anymore, it is either positive or negative, so in some cases our lattice will be a maximum and in other cases a minimum. Another interesting point: this zeta function of course has a pole at s = d/2, but we can still normalize the value of the function at this pole, for example by subtracting the pole, and then we get what is called the height of the torus defined by this lattice. The height is defined as a constant which depends only on d (and, being a constant, it is not so interesting for us right now) plus the corresponding limit at the pole, and the statement is that this height is smallest among all d-dimensional flat tori of volume one. (Someone remarks that it is essentially the zeta-regularized sum of the logarithms of the Laplace eigenvalues of the torus, again up to a constant depending only on the dimension.) Now, something which Henry Cohn observed is that this optimality for the Epstein zeta function also implies uniqueness here: the only optimizer will be the E8 lattice or the Leech lattice. The uniqueness follows from an integral representation of the zeta function in terms of the theta function, because we already know that our lattices are optimal for the values of the theta function: the value of the theta function is just the Gaussian energy of the lattice, or, taking the theta function minus one, the normalized Gaussian energy.
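The integral representation meant here is presumably the standard Mellin transform identity (my reconstruction, not copied from the slides): for Re s > d/2,

```latex
\pi^{-s}\,\Gamma(s)\,\zeta_{\Lambda}(s)
  \;=\; \int_{0}^{\infty} t^{\,s-1}\,\bigl(\Theta_{\Lambda}(it) - 1\bigr)\,dt,
\qquad
\Theta_{\Lambda}(it) \;=\; \sum_{x \in \Lambda} e^{-\pi t\,|x|^{2}} .
```

Since the integrand Θ_Λ(it) − 1 is, up to normalization, the Gaussian energy of Λ at each t, pointwise minimality of the theta function gives minimality, and hence uniqueness, for ζ_Λ(s) term by term in t.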
And from here, since we know that our lattices Λ_d are the unique optimizers for the Gaussian energy, we see that for the values of the Epstein zeta function we also have uniqueness. Maybe there are some more questions? One: in the first lecture you mentioned that a certain six-dimensional representation has polynomial growth and is useful; can you say something about that? It is an interesting part of the story: we used it to prove the interpolation formula, and with it all the bounds. I decided to omit that part because it was a bit technical, but maybe I can speak about it more tomorrow; it was not yet in my lectures. It is not necessary for universal optimality; it is important for the interpolation formula, where it ensures that the functions in the interpolating basis, if we fix a point and let the index grow, grow at most polynomially, and this is where we use it. But I probably have more time than I expected, so I can speak about it tomorrow. Today I wanted to speak about something else: many people ask what is so special about dimensions 8 and 24, so maybe in the remaining half an hour I will try to speculate about this. I do not have a simple answer; it is still not clear which particular properties of these numbers are important. But we observe the following two phenomena, which seem to be quite special to these two dimensions. The first is that the energy minimization can be solved by linear programming, or rather
by the concrete linear programming bound which I explained before, and this seems to be special to these dimensions, maybe also including one and two. The second property is that the solution of the linear program has a particularly nice structure, so that in this case we were able, for example, to give an explicit formula for the solution, which probably would not work in general. (To a question about configurations which are not minimizing but merely extremize the functional: I think there are some results of that kind, maybe not for all lattices, but for example for shells of lattices; there are things called perfect lattices, and I think they might have some properties like that.) So let me first speak about the first phenomenon: why do we not expect that energy minimization can be solved by this method in general, and why is something unusual happening here? For this I will introduce a new definition. Let C be a periodic configuration, for simplicity. What we would like to do is not only consider the set of all pairwise distances in C, but also compute some statistics of these pairwise distances. So we introduce the following object: the two-point correlation function, which is a radial distribution μ; we can take it, for example, in the space dual to the space of radial Schwartz functions, but we will see that it is actually not just a tempered distribution, it is essentially a measure. We define it so that applying μ to any radial function f gives us the f-energy of the configuration C. It is certainly a linear functional of the energy profile, and if we assume that f decays fast enough, then it is also well-defined. Here we also want
to include zero, which we had excluded when computing the energy, as the self-interaction of a point; so here we include zero. Now this distribution has a number of nice properties. How can one think about this correlation function? It is a sum of delta functions at those radii which occur as pairwise distances between two elements of C, and the weight measures, with some normalization, how often this particular distance occurs. We see that this distribution is obviously non-negative, and also, and this is essentially what we saw while proving the linear programming bound of Cohn and Kumar, its Fourier transform, if well-defined (I will not bother with that question right now, but one can make sense of it), also has to be non-negative. We also know that if we restrict μ to a neighborhood of zero, we get just the delta function at zero, and if we take its Fourier transform, then in a neighborhood of zero it looks like the density of our configuration times the delta function at zero. You can check that all of this works out explicitly when C is a lattice, for example. So now we see that knowing the convex hull of all such two-point correlation functions is equivalent to being able to solve all energy minimization problems. But of course the difficulty is exactly that we know very little about this convex hull apart from these two conditions; it is not easy to think of any other restrictions. So what is the linear programming method here? Denote by M (it will depend on d) the convex hull of the two-point correlation functions of all possible configurations, say with fixed density.
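To make the delta-function picture concrete, here is a small illustration of my own, not from the lecture: for the square lattice Z², the weight attached to the delta function at each distance √n is, up to normalization, just the number of lattice vectors of squared length n; `shell_counts` is a hypothetical helper name:

```python
from collections import Counter

def shell_counts(max_norm):
    """Count vectors of Z^2 by squared length up to max_norm:
    these counts are the weights of the delta functions in the
    two-point correlation of the square lattice (up to normalization)."""
    r = int(max_norm ** 0.5) + 1
    counts = Counter(x * x + y * y
                     for x in range(-r, r + 1)
                     for y in range(-r, r + 1)
                     if x * x + y * y <= max_norm)
    return dict(sorted(counts.items()))

print(shell_counts(5))   # {0: 1, 1: 4, 2: 4, 4: 4, 5: 8}
```

Note that squared length 3 gets no delta function at all, which illustrates how the support of μ records the arithmetic structure of the distance set.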
This will be some difficult set that we know very little about. But we do know that this set is contained in the intersection of two other convex sets which we understand much better: one of them is just the set of all non-negative distributions, and the other consists of those whose Fourier transform is non-negative. Let us denote the intersection of these two cones by L, for "linear programming". So what does the universal optimality property mean here? It means that our cone M has, so to say, a corner point: this corner sticks out, and it becomes the solution to many optimization problems at once. So universal optimality for some reasonable set of functions means that we have such a corner. In particular, suppose Λ is universally optimal for a cone of functions, which we denote by S; in our case S was the cone of completely monotonic functions of squared distance, or equivalently the cone spanned by all real Gaussians. Then Λ is universally optimal if and only if the set M (I will omit the index d here) is contained inside the dual cone of S translated to the correlation function of Λ, that is, inside μ_Λ + S*. Here S* is just the set of all measures ν such that the pairing between ν and f is non-negative for all functions f in the cone S. So the picture we get is this: we have the point μ_Λ, we have the cone M, and M has to lie inside μ_Λ + S*. Of course, everything happens in an infinite-dimensional space, and in infinite dimensions this kind of picture with a corner might be quite misleading as intuition. Still, the question is whether all the cones M_d do have such corners.
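In symbols, the criterion just described reads:

```latex
S^{*} \;=\; \bigl\{\, \nu \;:\; \langle \nu,\, f \rangle \ge 0
      \ \text{for all } f \in S \,\bigr\},
\qquad
\Lambda \ \text{universally optimal for } S
\;\Longleftrightarrow\;
M \;\subseteq\; \mu_{\Lambda} + S^{*} .
```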
For example, we now know that such a corner exists in dimensions 8 and 24, but in other dimensions it could be that we simply do not have corners like that, so we do not have universally optimal configurations, at least not for the natural nice cone of functions which we selected before. And what is special about dimensions 8 and 24 is that the cone L has to lie around this point μ_Λ in exactly the right way: it needs to have exactly the same corner as M, and in this case we can prove universal optimality by linear programming. What the numerical results suggest (maybe I will speak about this a bit more tomorrow) is, first, that if we go to other dimensions we usually will not have a corner like this; so indeed the situation when M has a corner, at least for our nice set of functions, is unusual. At the same time, the numerics show that the cone L seemingly always has a corner like this; L having a corner seems to be the usual situation. Is that corner usually the two-point correlation function of a lattice? No: the cone L just comes from these two positivity conditions, it is not related to any lattice, so the corner is some virtual distribution which does not correspond to anything. And actually this is the reason why linear programming, or the methods we used here, sometimes or even very often fails: it could happen that we want to optimize over the cone M, but we do not have a nice description of it, it is very difficult to access, so what we do instead is optimize over the bigger cone L with the same objective. And very often we then find a solution which will be an
optimal solution of this problem inside L, but not inside M, so it cannot correspond to any actual configuration. For example, this happens if we try to solve the sphere packing problem in dimension 3 by this kind of linear programming bound: we find, so to say, a parasitic distribution, for which the first distance from zero is too large and the weight of the first delta function away from zero is too big; this contradicts, for example, what we know about the possible number of points in a kissing configuration, so we know that a configuration with this particular correlation function cannot exist, but the linear program does not see that. (From the audience: similar questions arise for the linear programming bounds in coding theory, for binary codes with a given distance, where such phenomena have also been studied, including in extremal cases.) Yes; in coding theory, for finite codes, linear programming is in some ways easier to work with, because there one works with polynomials and everything is compact. And there are codes which correspond to the Leech lattice and the E8 lattice, they are also perfect codes, and I think their optimality is proven by linear programming. Also, the kissing number problem in dimensions 8 and 24 is likewise solved by linear programming, and solved exactly, unlike in other cases. So it seems these two lattices are certainly friends with two-point correlations. Maybe the last thing I would like to say is about another special feature of dimensions 8 and 24: numerically, it seems that in all dimensions we were able to find this corner of L, at least approximately; we cannot describe
it, but it seems to exist. However, in all dimensions except for 1, 2, 8 and 24, this distribution is usually some very transcendental and nasty thing, and we cannot find any nice description of it. So the second special feature is that the solution looks nice: the numbers appearing in it are just some real numbers, and they are nice only if d equals 1, 2, 8 or 24. Of course, one can add some bombastic remarks: the dimension of superstrings is 10, which is 8 plus 2, and for bosonic strings it is 26, which is 24 plus 2, which is surely not accidental, but I still have to discover what the connection is. As for what can happen in high dimensions: we made some experiments, and it seems that in high dimensions things look very much the same, a very similar behavior; understanding the asymptotics is not that easy, but we have not seen, for example, another case where you get only square roots of integers or some other unusual arithmetic behavior. It seems that all the higher dimensions look like each other: they are all transcendental, so to say.