So it's very nice to be here again after a short break, and thank you all for coming. Maybe I'll start by briefly recalling what happened in the first three lectures, because we had a two-week break and somebody may have forgotten some of the details. Written a bit schematically, the big goal of this series of lectures is to prove the universal optimality of the E8 and Leech lattices — the lattice we denote by Λ_d, where d is the dimension, always either 8 or 24. Just to briefly recall what universal optimality means: for each configuration of points C inside d-dimensional Euclidean space such that the density of the configuration is 1 — the same as the density of the Leech lattice and of the E8 lattice — and for each positive constant α, we require that the Gaussian energy of the configuration C is always bounded from below by the energy of our lattice, either the E8 lattice in dimension 8 or the Leech lattice in dimension 24. Here p is the Gaussian with exponent α, and this energy is the energy of mutual interaction of our points: we think of the points as repelling each other, the interaction is given by this function of r, where r is the distance between two points, so it depends only on the distance, and we sum and normalize over all pairs to compute the energy of the whole configuration.
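Written out, the statement being recalled looks as follows; this is a hedged sketch, where the Gaussian normalization p_α(r) = e^{−παr²} and the pair-averaged energy are one standard convention choice:

```latex
% Universal optimality (sketch of the statement):
% for every configuration \mathcal{C} \subset \mathbb{R}^d of density 1 and every \alpha > 0,
E_{p_\alpha}(\mathcal{C}) \;\ge\; E_{p_\alpha}(\Lambda_d),
\qquad p_\alpha(r) = e^{-\pi\alpha r^2}, \quad d \in \{8, 24\},
% where the energy is the (lower) average of the pair interactions:
E_p(\mathcal{C}) \;=\; \liminf_{R\to\infty}\;
\frac{1}{\#(\mathcal{C}\cap B_R)}
\sum_{\substack{x,y\,\in\,\mathcal{C}\cap B_R\\ x\neq y}} p\bigl(|x-y|\bigr).
```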
What we also explained in the previous lecture is that our method for proving this universal optimality is linear programming. The linear programming method is used quite a lot in this kind of geometric optimization problem, and the particular linear programming bound we use in this case was developed by Cohn and Kumar. What we will show is that linear programming implies universal optimality. Linear programming here means that we need to show the existence of a certain function: for each α we must find a special function f_α, a radial Schwartz function, such that the following conditions hold — f_α should not exceed our energy profile, so to say, at all points of Euclidean space, and its Fourier transform has to be non-negative. If we are able to construct such a function, then for configurations of density 1 the difference between the value of the Fourier transform of the auxiliary function at zero and the value of the function itself at zero gives a lower bound for the energy of any possible configuration. If we want our bound to be sharp, this difference has to be exactly equal to the energy of the optimal lattice in question. What we also observed last time is that if such an optimal function exists, then the existence of our optimal lattice Λ_d poses certain restrictions on it: the inequalities have to become sharp at the vectors which have the same length as some non-zero vector of our lattice — and, for the Fourier side, of the dual lattice, but here it is the same lattice. And now one more thing comes into play.
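The Cohn–Kumar criterion just described can be written compactly (a sketch, following the normalization convention above):

```latex
% Suppose f: \mathbb{R}^d \to \mathbb{R} is radial Schwartz with
f(x) \le p(|x|)\ \ \forall x \neq 0, \qquad \widehat{f}(y) \ge 0\ \ \forall y.
% Then every configuration \mathcal{C} of density 1 satisfies
E_p(\mathcal{C}) \;\ge\; \widehat{f}(0) - f(0),
% and the bound proves universal optimality exactly when
\widehat{f}(0) - f(0) \;=\; E_p(\Lambda_d).
% Sharpness forces the restrictions mentioned above:
f(x) = p(|x|)\ \text{ for } |x| \in \{\,|v| : v \in \Lambda_d \setminus \{0\}\,\},
\qquad
\widehat{f}(y) = 0\ \text{ for } |y| \in \{\,|v| : v \in \Lambda_d^{*} \setminus \{0\}\,\},
% with \Lambda_d^{*} = \Lambda_d, since E8 and the Leech lattice are self-dual.
```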
The vector lengths in both lattices have a very nice algebraic structure: the possible lengths are square roots of even integers, starting from some non-zero value, the length of the shortest vector. This coincidence — this good property of both lattices — is what makes the problem accessible for us and gives us hope to solve it, because we observe that there exists a Fourier interpolation formula which helps us reconstruct a radial Schwartz function exactly from these values. The setup of the interpolation formula is as follows. Again let d be our dimension, and let n₀ be the number depending on d: it is the squared length of the shortest vector in our lattice divided by 2, so the pair (d, n₀) is either (8, 1) or (24, 2). The interpolation formula then says that there exists a sequence of radial Schwartz functions on the corresponding Euclidean space such that any radial Schwartz function f can be reconstructed just from its values at the square roots of even integers, together with the same information for its derivative, for its Fourier transform, and for the derivative of the Fourier transform — so we have a formula like this, the formula we discussed in the previous lecture. How do we approach proving such a formula? We are going to reduce the proof of this Fourier interpolation formula to the solution of a certain functional equation. The functional equation relates to the formula in the following way: we consider a generating series which packages all the functions a_n and b_n of the interpolating basis, a sum starting from n₀ and going to infinity.
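In symbols, the interpolation formula being described should read roughly as follows (a hedged reconstruction; the naming of the basis functions follows the lecture's a_n and b_n):

```latex
% With n_0 = 1 for d = 8 and n_0 = 2 for d = 24, there exist radial Schwartz
% functions a_n, b_n, \tilde a_n, \tilde b_n on \mathbb{R}^d such that every
% radial Schwartz function f is reconstructed from its data at the nodes \sqrt{2n}:
f(x) \;=\; \sum_{n \ge n_0}\Bigl(
  a_n(x)\, f(\sqrt{2n}) \;+\; b_n(x)\, f'(\sqrt{2n})
  \;+\; \tilde a_n(x)\, \widehat{f}(\sqrt{2n}) \;+\; \tilde b_n(x)\, \widehat{f}{}'(\sqrt{2n})
\Bigr).
```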
We take the functions a_n(x) and multiply them by exponential coefficients in τ, and we do the same with the b_n, except that for the b_n we need, for convenience, an additional coefficient depending on n and on τ, so that we can distinguish between the a_n and the b_n. So we consider a function like this, and also another function F̃, which contains the same information about the Fourier transforms of these functions in the variable x. Now we take our interpolation formula and apply it to a complex Gaussian: if the interpolation formula is applied to the Gaussian f whose exponent is a multiple of τ times the squared norm of x, where τ is a variable in the upper half-plane, then what we get is the following functional equation. The functional equation tells us that this complex Gaussian is equal to F(x, τ) plus the following modification of F̃. Implicitly we also have two more equations for these functions: for example, we know that the function F is, so to say, linearly periodic — if we take the second difference of these functions with step one in the variable τ, we get zero, and the same is true for F̃. Now what we want to do is solve this functional equation; that will give us an explicit form for our interpolation formula. One important point: once we have the explicit interpolation formula, we can also find the function f_α, because we know all the information about f_α that enters the interpolation formula. And from the interpolation formula we will see not only that we can find f_α explicitly, but that it actually coincides with certain values of our generating function F — it is the same as our generating function F, only evaluated at a special point.
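A plausible reconstruction of the generating series and the resulting equations (hedged; the exact coefficient normalizations were on the blackboard and may differ):

```latex
% Generating series: the b_n-part carries an extra factor, linear in \tau and depending on n,
F(\tau, x) \;=\; \sum_{n \ge n_0}\bigl(a_n(x) + 2\pi i\tau\sqrt{2n}\; b_n(x)\bigr)\, e^{2\pi i n \tau},
% and \widetilde F is built the same way from the Fourier-side basis functions.
% Applying the interpolation formula to the Gaussian f(x) = e^{\pi i \tau |x|^2}
% (whose value at \sqrt{2n} is e^{2\pi i n\tau} and whose radial derivative there is
% 2\pi i\tau\sqrt{2n}\,e^{2\pi i n\tau}) gives
e^{\pi i \tau |x|^2} \;=\; F(\tau, x) \;+\; (\tau/i)^{-d/2}\, \widetilde F(-1/\tau,\, x),
% and, since each coefficient is (at most) linear in \tau times a 1-periodic factor,
F(\tau + 2, x) \;-\; 2F(\tau + 1, x) \;+\; F(\tau, x) \;=\; 0, \qquad \text{likewise for } \widetilde F.
```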
For the second parameter we have to take a purely imaginary number proportional to α, a point on the positive imaginary axis of the upper half-plane. So now, with this picture, what still remains to be done? Our main goal is to prove universal optimality — so what do we still need for that? Actually, before writing down what still has to be done, let me record one more step: our strategy for solving the functional equation. We are going to search for our function F in a very special form: we want to define F(x, τ) by the following expression. This part comes from our knowledge of certain special values of the function, and this part will give us the double zeros; and it turns out that it is convenient to assume that the multiplier here has the following form. The function K which we want to take here is assumed to be a meromorphic kernel on the product of two upper half-planes — we introduced this kernel in the previous lectures — with the following properties. First, we want K to be a meromorphic function defined on the whole product of two upper half-planes, not only along our path of integration. Next, if we look at K(τ, z) as a function of z with τ fixed, then it has only simple poles, at the points where z lies in the SL₂(Z)-orbit of τ. We also want our function to satisfy the homogeneous version of the functional equation — the slash notation was introduced in the previous lecture — and this should hold for all elements a belonging to the following ideal,
which is an ideal inside the group algebra of PSL₂(Z), generated by these two elements: (T−1)² and (ST−1)². In the previous lecture we discussed the relation between this slash-operator notation and the functional equation as it is written on the blackboard here; R is the group algebra of PSL₂(Z). Now, K has two more important properties. One of them: we know K has simple poles at τ = z, but the residues also have to be certain particular numbers, and this should be true for all elements of our group ring; here φ is a particular linear map from the group algebra modulo the ideal I. In the previous lectures we saw that this quotient is actually finite-dimensional, of dimension six, so to define this map it suffices to define it on some representatives, and we do it in the following way: we saw in previous lectures that these six elements are indeed representatives for the quotient, and we define our linear functional to be 1 on the element T and 0 on all the other representatives we have chosen. Another important set of conditions, which I will probably not repeat in detail: we also had certain growth conditions on K near the boundary — growth conditions in general near the boundary, and certain particular ones at the cusps. For example, we know that this kernel has to vanish as τ goes to zero or to infinity, and that it has a pole in z as z goes to infinity, but this pole has, so to say, bounded order. [Question: is the integral of K convergent, or do we have to regularize it somehow?] Yes, we will have to regularize it. So this is our big picture, but the integral here is, for now, well defined only in a restricted range: first, it is well defined for τ in the domain D,
which was the standard fundamental domain for Γ(2). A priori it is clear that the integral is defined away from the SL₂(Z)-images of the imaginary axis, but because we have so many residues vanishing here, it is actually well defined on this domain D. Also, because the kernel has a pole as the second variable goes to infinity (as it goes to zero, I think, it is fine), the integral is well defined only when the absolute value of our Euclidean vector x is bigger than the square root of 2n₀ − 2. So in dimension 8 we have a problem only at the point zero, but in dimension 24 we have to be careful in the ball of radius √2 around zero. [Question: do you expect that because we inserted a node where it shouldn't be in dimension 24?] Yes — it is because we know these sharpness conditions hold for all vectors in the lattice, and in the Leech lattice the first possible length, so to say, is omitted, so we will not actually have equality there. So what is our strategy to proceed? Maybe I will say a few more words about the kernel. There is a proposition: the kernel K with all these properties, and with the growth conditions properly specified as we did in the previous lecture, becomes unique — the kernel with these properties is unique — and we can write it down explicitly. There will actually be two different kernels for the two different dimensions, and we write them in the following way: here is Ramanujan's delta function taken to some power which depends on the dimension, and here we have a holomorphic function of the two variables τ and z divided by the difference of the j-invariants, which gives us our simple poles.
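The shape of the kernel described here might be written, very roughly, as follows (hedged; the exponents a, b and the exact placement of the Δ-powers depend on the dimension and on conventions I am not certain of):

```latex
K_d(\tau, z) \;=\; \frac{P_d(\tau, z)}{\Delta(\tau)^{a}\,\Delta(z)^{b}\,\bigl(j(\tau) - j(z)\bigr)},
% with P_d holomorphic in both variables; since j(\tau) - j(z) vanishes to first
% order at z = \tau (away from elliptic points), this produces the required simple poles.
```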
What do we know about the holomorphic function P? In the variable τ it satisfies the homogeneous functional equation, only now in weight not d/2 but d/2 + 12a, where the extra term comes from the power of delta. In our growth conditions we in fact require that after multiplication by suitable powers of the delta function we get a function belonging to this class 𝒫 — holomorphic and of moderate growth — and this is with respect to the variable τ. With respect to the variable z, we know that our function is annihilated by a different ideal, related to the ideal I and to the linear functional φ which we defined before; again we have a shift by 12b here for this different ideal, and P is also a holomorphic function of moderate growth in z. And we can actually compute these functions P explicitly in terms of classical modular forms. So what remains to be done? We now have two different objectives, so to say: one of them is our primary goal, to prove universal optimality, and the other goal of this course is to prove the Fourier interpolation formula. For universal optimality, what needs to be done now? First, of course, we have to show that the function F which we defined above is defined not only for x outside this ball around the origin but for all vectors: we need to extend to all x in R^d. This can actually be done easily, because for our kernel K we can write its Fourier expansion in the second variable, and the first two terms of this expansion have negative exponents — they are responsible for the pole at infinity — so we can just integrate these terms separately. The singularity which we get after this integration is again just a simple pole, and it is killed by the sin² factor which we are
multiplying by; so that is an easy task. Then we also have to work a little to show that this function is actually a radial Schwartz function in the variable x, with τ as an additional parameter. Then we have to be able to compute the Fourier transform of this function, again with respect to the Euclidean variable x: we show that the Fourier transform with respect to x is actually the function F̃ which we defined before, which is related to F by the functional equation, and that it too has a nice integral representation. This time the integral representation looks like this: it equals the sin² factor times an integral from zero to infinity, where instead of integrating the kernel K we integrate the following function. This is not exactly obvious; it comes out after some proper integral manipulation — we have to work with contour integrals, exchange the integrals with the Fourier transform, and use the fact that the Fourier transform applied to the Gaussians here looks very nice. And now we are almost done, because the only thing that remains for us to actually prove universal optimality is to check positivity. As we discussed last time, to prove positivity what we need to show is that this function is bounded by the corresponding Gaussian; but it actually suffices for us to prove the following: that F̃ is non-negative when α is a positive real number. Here the integral representation is again helpful, because it turns out we can show that a certain modification of the kernel is positive.
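The Gaussian Fourier transform fact used in this manipulation — that in R^d the Fourier transform of e^{−πt|x|²} is t^{−d/2}e^{−π|ξ|²/t}, which continues analytically to the complex Gaussians e^{πiτ|x|²} appearing above — can be sanity-checked numerically. A minimal one-dimensional sketch (the function name and grid parameters are mine, not from the lecture):

```python
import numpy as np

def gaussian_ft_1d(t, xi, half_width=30.0, samples=600001):
    """Numerically compute hat{f}(xi) = integral of e^{-pi t x^2} e^{-2 pi i x xi} dx
    by a plain Riemann sum on a uniform grid (accurate here because the
    integrand is smooth and decays extremely fast)."""
    x = np.linspace(-half_width, half_width, samples)
    dx = x[1] - x[0]
    return np.sum(np.exp(-np.pi * t * x**2 - 2j * np.pi * x * xi)) * dx

# The identity to check: hat{f}(xi) = t^{-1/2} e^{-pi xi^2 / t}
for t in (0.5, 1.0, 2.0):
    for xi in (0.0, 0.7, 1.3):
        expected = t ** -0.5 * np.exp(-np.pi * xi**2 / t)
        assert abs(gaussian_ft_1d(t, xi) - expected) < 1e-8
```

In d dimensions the same identity holds with the factor t^{−d/2}, which is where the (τ/i)^{−d/2} prefactor in the functional equation comes from.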
Let me denote this modification of the kernel function K by K̂ — this time the hat does not mean the Fourier transform, because the transformation applied to K here is different; the notation is only for our convenience. What we have is that this kernel K̂ is positive if α and t are both positive numbers: if we look at this function defined on the product of two upper half-planes and restrict ourselves to the product of the two imaginary axes, then this restriction of the meromorphic kernel is positive. This helps us prove the inequality at all points where our integral representation for the functions F and F̃ converges. We still have a problem in dimension 24 in the small ball of radius √2, because there our integral representation does not converge, so knowing something about the sign of the function under the integral tells us nothing about the sign of the result. This is something we have to handle separately: we have to prove separately that the function F̃ corresponding to dimension 24 is actually of the required sign for all vectors of length smaller than √2. For both of these inequalities, the only way we found to prove them is by computer, by numerical computations, and this is something I will speak about in the next lecture — I will give more details about how we proved these two inequalities. So what still remains is the interpolation formula. As you see, for universal optimality we did not actually need the interpolation formula itself: the interpolation formula was an inspiration for us, it showed us a way to construct these magic functions f_α, but the formula itself is not needed for proving universal optimality. However, we thought it is a nice result on its own, so — maybe I will use the space here — we
also decided to prove the interpolation formula. To prove the interpolation formula we need slightly different information about the function F: if for universal optimality what really interests us is the positivity of the functions F and F̃, then for the interpolation formula what is important is that these functions can be extended to the whole upper half-plane, and that they have nice growth properties there. So what do we still need for the interpolation formula? [Question about other dimensions.] In principle we could do it in any dimension, but maybe we are a bit lazy, so we did it only in dimensions 8 and 24. I think exactly the same method would work in other dimensions as well, with perhaps small differences: if the dimension is not divisible by 8, then maybe more modifications have to be made — probably the nodes have to be shifted by one, or so. What do we have to do for the interpolation formula? First, it is not enough for us to know our function only on the imaginary axis; we have to extend F(x, τ), as a function of τ, from the domain D to the whole upper half-plane. Then, of course, we have to show that the functions F and F̃, as we have defined them, satisfy the functional equation. And for our formula to work not only formally, but to be an analytically nice formula — to have good convergence, for example, not to have very rapid growth of the basis functions a_n and b_n at any given point — we also need to know that our functions have moderate growth. More explicitly: we take our functions of x and τ, now considered as functions of x with τ as a parameter, and we apply the seminorms of the Schwartz space to these two functions; what we get are then functions of τ alone, and what we want is that these
seminorms, taken with respect to x, have moderate growth in τ; this should be true for all multi-indices α and β. The plan for today is to concentrate on the interpolation formula and to prove the extension and the functional equation; probably we will have no time left for the growth estimates, so if time remains I will just discuss them a little. In the next lecture I will show you our numerical results and explain how we prove positivity; since I will use the projector, I could even present some more of our numerical results on proving positivity, and also some experimental results related to this problem in other dimensions — yes, I can also show you that one. Then, in the last lecture, I would like to discuss some open questions that remain: for example, given this interpolation formula, which other interpolation formulas do we have, or can we theoretically have; which other problems can be solved with this approach; and which problems seem definitely impossible to solve with methods like this. So maybe we make a small break. What I plan to do today is to show that the function F defined by the formula above, on this particular region of τ and x, can actually be extended in τ and in x to a function which satisfies the functional equation. For now we will work only with x large enough — we will not address the problem of extending our function in x, and we will concentrate on the extension in τ. First I am going to prove half of the proposition from the previous lecture, which tells us that if we want to extend F to the whole upper half-plane, it suffices to extend it only to a small neighborhood of this domain D — or rather of its closure; if the extension
satisfies the functional equation, then it can be extended further to the whole upper half-plane. The proposition is the following. Let k be an even integer, suppose we have two holomorphic functions h₁ and h₂, and let O be an open neighborhood of the closure of D. Suppose we have a holomorphic function f on O which satisfies the following transformation law — very similar to the law we had before, except that on the right-hand side, instead of the particular functions prescribed by our problem, we allow arbitrary functions h₁ and h₂ — and we want this to be satisfied whenever both sides are defined, because our function f is not defined on the whole upper half-plane but only on this neighborhood O. Suppose these conditions are satisfied; then we claim that f can be extended holomorphically to the whole upper half-plane, and this extension satisfies the functional equation. For the proof we make one important observation: we can choose representatives of the quotient of the group algebra by the ideal I to be the same as representatives of the quotient of the group SL₂(Z) by its subgroup Γ(2). How does this work? Let us consider our fundamental domain: this is the domain D, consisting of all points of the upper half-plane with real part between −1 and 1, with two semicircles excluded — the semicircles of radius one half around +1/2 and −1/2. Now we can divide this domain D into six subdomains, each of which is a fundamental domain for the action of PSL₂(Z) on the upper half-plane. We take this domain here and call it F.
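The two domains, written out as described (a standard reconstruction):

```latex
\mathcal{D} \;=\; \bigl\{\tau \in \mathbb{H} :\ |\mathrm{Re}\,\tau| \le 1,\ \
  |\tau - \tfrac12| \ge \tfrac12,\ \ |\tau + \tfrac12| \ge \tfrac12 \bigr\}
  \quad\text{(fundamental domain for } \Gamma(2)\text{)},
\qquad
\mathcal{F} \;=\; \bigl\{\tau \in \mathbb{H} :\ 0 \le \mathrm{Re}\,\tau \le 1,\ \
  |\tau| \ge 1,\ \ |\tau - 1| \ge 1 \bigr\}
  \quad\text{(fundamental domain for } \mathrm{PSL}_2(\mathbb{Z})\text{)}.
```

As a consistency check: Γ(2) has index 6 in PSL₂(Z), matching the six translates of the closure of F that tile the closure of D.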
F consists of the points whose real part is between zero and one, excluding the two big circles: we require that the absolute value of τ is bigger than one and the absolute value of τ − 1 is bigger than one. It is a fundamental domain for the action of SL₂(Z) on the upper half-plane. If this here is F, then this is the translate of F by the matrix T⁻¹ — just to remind you again, T is this matrix and S is this matrix — this part is the image of F under the action of S, here it is ST⁻¹, here TS, and here STS. Now what we claim is that these elements — 1, T⁻¹, S, ST⁻¹, TS, STS — form a basis of the quotient of the group algebra by the ideal I. [Question: the functions h₁, h₂ could be arbitrary?] Yes, in principle they can be arbitrary; there is no relation between them. So let us consider the column vector with these entries and call it m; it is an element of R⁶, and we call its entries m₁, m₂, m₃, and so on up to m₆. We see that the closure of D is the union of these translates of the closure of F. As we discussed in the previous lecture, there exists a representation, which we denote by σ — and this σ is slightly different from the σ we defined last time, because in the previous lecture we used a different set of representatives for this quotient, but I don't want to introduce a new letter, so I will also call it σ, and it will be the only representation I use in today's lecture. We also have the following maps. Remember that last time I discussed that the ideal I is a free ideal, freely generated by the two elements (T−1)² and (ST−1)²; therefore we have the following two maps, which we write
as maps n_i from PSL₂(Z) into R⁶, for i equal to 1 or 2, defined in the following way: if we take our vector m and multiply it by an element γ of PSL₂(Z) on the right, then what we get is σ(γ) times m, plus (T−1)² times n₁(γ), plus (ST−1)² times n₂(γ), where γ is an element of PSL₂(Z). This works because the representation σ is defined in such a way that the difference m·γ − σ(γ)·m always belongs to I⁶, that is, each component lies in the ideal I, and each element of the ideal I can be uniquely represented as a sum of two summands: one of them (T−1)² times some element of the ring, the other (ST−1)² times some element of the ring. So now we have functions like this, and these functions n₁ and n₂ satisfy a cocycle relation: if we compute the function on a product of two group elements, we get the following, and this is true for both i equal to 1 and 2. So what do we do next? Now we can shrink our neighborhood O: if here is our domain, somewhere above it we have our neighborhood O, and we can make O smaller so that it intersects as few of its Γ(2)-translates as possible. Because D is also a fundamental domain for Γ(2), at the end we see that the only translates which turn out to be necessary are these: what we can arrange is that the only Γ(2)-translates of O intersecting O are T²O, T⁻²O, ST²SO and ST⁻²SO.
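The cocycle relation just mentioned can be derived directly from the defining decomposition, by expanding m·(γδ) in two steps and using the uniqueness of the decomposition of elements of I:

```latex
m\,\gamma\delta \;=\; \sigma(\gamma)\,(m\,\delta)
  \;+\; (T-1)^2\, n_1(\gamma)\,\delta \;+\; (ST-1)^2\, n_2(\gamma)\,\delta,
% and expanding m\,\delta once more and comparing the two decompositions gives
n_i(\gamma\delta) \;=\; n_i(\gamma)\,\delta \;+\; \sigma(\gamma)\, n_i(\delta),
\qquad i = 1, 2, \quad \gamma, \delta \in \mathrm{PSL}_2(\mathbb{Z}).
```

(Here one uses that σ(γ), having scalar entries, commutes with left multiplication by (T−1)² and (ST−1)² on the components.)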
[Question about the choice of O.] No, no — O is a neighborhood of the closure of D, and it is not important for us which neighborhood of D we consider; for example, we can make it smaller, as long as it remains an open neighborhood, and we make it so small that O and its Γ(2)-translates intersect, so to say, only as far as necessary: we don't want O to intersect some translate which is far, far away, we want only those intersections which are unavoidable. Now we also want an open neighborhood of the closure of F: there exists an open neighborhood O_F of the closure of F such that the union of the translates of O_F by the elements m_j — the coordinates of the vector m — lies inside the neighborhood O. We can arrange this by taking the open neighborhood small enough, and we do it so that, in particular, if we take our f and slash it with the element m_j, it is well defined on O_F for all j from 1 to 6. We also want to make O_F small enough that different PSL₂(Z)-translates of O_F have no unnecessary intersections: we assume that the intersection of O_F with a γ-translate of itself is non-empty only if the γ-translate of the closure of F and the closure of F share a boundary point. Namely, the full list of elements ω of PSL₂(Z) for which this happens will be S, T, T⁻¹, ST⁻¹, TS and TST⁻¹. Okay, probably this should also be illustrated. Now what we want to do is make, so to say, a vector-valued version of our function f, and we do it in the following way: we define the vector-valued version of f,
which is the column vector consisting of all the translates of f by the entries of the vector m — that is exactly what this notation means. Now it is a bit more convenient for us, instead of extending f and making sure it satisfies the functional equation, to extend this vector-valued version: it is no longer defined on the domain D, but on the smaller domain O_F, and I want to extend this vector-valued function from O_F to the whole upper half-plane. For this we need a substitute for the functional equation — we have to translate the functional equation into the vector-valued language — and the translation is the following. For each τ in O_F and each small ω in the set Ω such that the image of τ under the action of ω is still in O_F — denote this element of the upper half-plane by τ′ — we have that the automorphy factor to the power −k times the vector f⃗ evaluated at ωτ, or in slash notation f⃗ |_k ω, equals the matrix σ(ω) times the vector f⃗, plus the function h₁ slashed with the vector n₁(ω), plus the same with h₂ and n₂(ω). Let us denote this formula by (sun), for example. This (sun) formula is actually equivalent to the formula we denoted by (star) on that blackboard: it is equivalent to f satisfying the functional equation for all τ in the union of the six translates of the domain O_F. Now what we want to do is extend our vector-valued function f⃗, not only to the union of these neighborhoods but to the whole upper half-plane, and we will do it, so to say, by imitating this equation.
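A hedged transcription of the (sun) formula in slash notation:

```latex
% for \tau \in \mathcal{O}_{\mathcal{F}} and \omega \in \Omega with \omega\tau \in \mathcal{O}_{\mathcal{F}}:
\bigl(\vec{f}\,\big|_k\,\omega\bigr)(\tau)
\;=\; \sigma(\omega)\,\vec{f}(\tau)
\;+\; \bigl(h_1\big|_k\, n_1(\omega)\bigr)(\tau)
\;+\; \bigl(h_2\big|_k\, n_2(\omega)\bigr)(\tau).
```

The extension to the whole upper half-plane then imitates exactly this identity, with ω ∈ Ω replaced by an arbitrary γ ∈ PSL₂(Z).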
So now what do we do: we choose — sorry — so, capital Ω is this set of (six) elements ω such that the intersection of our fundamental domain F of PSL(2,Z) with its ω-translate is non-empty. And now we want to take an arbitrary element of the upper half-plane: let w be any element of the upper half-plane. Then, because the domain of the vector-valued f contains the closure of the fundamental domain F, we know that there surely exist an element γ of the group PSL(2,Z) and some element τ in this domain such that w is the γ-translate of τ, w = γτ. We now define the value of the vector-valued function f at the point w in the following way — here, for simplicity, we multiply by the automorphy factor on this side — and we set it to be the following value: it is just the analogue of the formula we had above, but with ω, which was an element of the subset Ω, replaced by an arbitrary element γ. And now what we have to do is show that the function defined in this way is, first, actually well defined, and then that it is also holomorphic.

(From the audience:) That is a too complicated way to prove this. You see, if you consider reflections — anti-holomorphic maps of this border — you have a group freely generated by four reflections; if you consider the images under these four reflections, then it will be easy. For example, you can translate by two: using the first equation, it means you generate — consider the group generated by two reflections, one in one vertical line, another in another vertical line; and then there is another group, completely analogous to it, around zero. It is a classical picture, like for modular curves — a short description: you get free groups sitting in the fundamental group of the surface.
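The existence of such a pair (γ, τ) is the standard reduction to the fundamental domain. As a concrete aside (my own sketch, not from the lecture): for the standard fundamental domain |Re τ| ≤ 1/2, |τ| ≥ 1 of SL(2,Z), the pair can be computed by repeatedly translating and inverting:

```python
def reduce_to_fundamental_domain(w, max_iter=10000):
    """Find gamma = [[a, b], [c, d]] in SL(2, Z) and tau in the standard
    fundamental domain (|Re tau| <= 1/2, |tau| >= 1) with w = gamma.tau."""
    a, b, c, d = 1, 0, 0, 1          # gamma starts as the identity
    tau = complex(w)
    for _ in range(max_iter):
        n = round(tau.real)          # translate tau by -n ...
        tau -= n
        a, b = a, a * n + b          # ... so gamma absorbs T^n on the right
        c, d = c, c * n + d
        if abs(tau) >= 1 - 1e-12:    # already in the fundamental domain
            break
        tau = -1 / tau               # invert tau ...
        a, b = -b, a                 # ... so gamma absorbs S^{-1} on the right
        c, d = -d, c
    return (a, b, c, d), tau
```

For w in the open upper half-plane the loop terminates, since every inversion step strictly increases the imaginary part of τ.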
And here it is something similar: you get a free group — well, not exactly; four involutions, or reflections. — And what about the number six: would this mean...? — Yes, I think the number six here is maybe not so important, because, namely, it looks like from the first functional equation you can immediately extend by the shift by two; the shift by two is like a free group on one generator, which sits with index two inside the group generated by the two reflections. — Yes, I think in a sense that is what we are effectively doing; you also have some reality conditions, so maybe at the end of the day it would just be the reflection principle, since your functions are real-valued on this boundary. Yes — but here there are also these h₁ and h₂, which in this setting can be anything, so maybe we have to think about it. I am sure there might be some easier way to do this — maybe it already follows from some more general known results. Okay, so maybe I will still finish the long proof — sorry for that. So what remains for us to do, before we find some better way of thinking about this, is to show that the function defined by this equation is, first, actually well defined — because of course the γ we have chosen here is not unique, so we could still have an ambiguity, for example because of that set Ω — and then that it is also holomorphic. So suppose that w has two presentations — suppose it has two possible presentations, with two different elements τ and τ′ in O_F and γ and γ′ in PSL(2,Z). Then we know, by our definition of the set Ω, that τ′ has to be ωτ for some ω in our set Ω, and from this we also see that γ has to be γ′ω. And so now we see that we could have, so to say, two different definitions of the value of the vector-valued function f at the point w: one of them uses
γ and τ, and the other uses γ′ and τ′; and it will actually follow from the cocycle condition that these two definitions coincide. So let us write it out. Here we use the fact that σ is a representation, and here we use that n₁ and n₂ satisfy the cocycle condition; from this we see that it is the same as — and here we use not the definition, but the condition which we know holds for all elements ω of the set Ω — so we see that this is now equal to the following (and the same for h₂). And now we see that some of the terms cancel: namely, this term here cancels with this term here, and this term here cancels with that term there — okay, maybe not all of them, only the first parts: this one stays, but that one cancels, and the same here, this one cancels and this one survives. And from here we get exactly the presentation we hoped for. But of course an easier argument is probably also possible: this would be just the same as — and here this part equals that part, so it would be the same as — and now, up to the automorphy factor, which is actually not a problem because it also satisfies the chain condition, we see that this is exactly the presentation we would get for the vector-valued f if we had started with the other presentation of w as a γ-translate of a point in this domain. So this tells us that the function is well defined; we also want that it is a holomorphic, well-defined function satisfying this condition. And if the vector-valued f is well defined, it means that each of its coordinates — in particular our initial function f — is also well defined and holomorphic.
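The "chain condition" of the automorphy factor invoked here is the cocycle identity j(γ₁γ₂, τ) = j(γ₁, γ₂τ) · j(γ₂, τ), where j(γ, τ) = cτ + d for γ = [[a, b], [c, d]]. A quick numerical sanity check of this identity (my own illustration, not part of the lecture):

```python
def mobius(g, tau):
    """Action of g = ((a, b), (c, d)) on the upper half-plane."""
    (a, b), (c, d) = g
    return (a * tau + b) / (c * tau + d)

def j(g, tau):
    """Automorphy factor j(g, tau) = c*tau + d."""
    (_a, _b), (c, d) = g
    return c * tau + d

def mul(g1, g2):
    """Product of two 2x2 matrices stored as nested tuples."""
    (a1, b1), (c1, d1) = g1
    (a2, b2), (c2, d2) = g2
    return ((a1 * a2 + b1 * c2, a1 * b2 + b1 * d2),
            (c1 * a2 + d1 * c2, c1 * b2 + d1 * d2))

S = ((0, -1), (1, 0))   # tau -> -1/tau
T = ((1, 1), (0, 1))    # tau -> tau + 1
```

For example, for γ₁ = ST and γ₂ = TST one checks j(γ₁γ₂, τ) = j(γ₁, γ₂τ) j(γ₂, τ) to machine precision; this is what makes the ambiguity in the automorphy factor harmless.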
It is holomorphic, of course, because it is holomorphic on all the translates of O_F, and when we glue them together we do not get any discontinuities. So it is also well defined and holomorphic, and we know that the condition (★) holds for the extension of the vector-valued version of f; hence the functional equation has to be true for the function f itself, because it is a holomorphic identity: we know that it holds on the open domain O, and so it also has to hold on the whole upper half-plane. Now, what I would like to do in the remaining part is to show that our function F (capital), which we defined by a certain integral representation, can indeed be extended to an open neighborhood of this domain D, and that this extension satisfies the functional equation; by combining these two results we obtain the analytic continuation, and also the functional equation, for the function F. So what we want to show is the following: the function F extends to a holomorphic function on an open subset of the upper half-plane which contains the closure of the domain D, and this holomorphic extension satisfies the transformation rules — here we want to get zero, so this has to be 4e^{…} — it satisfies these functional equations. Of the two equations I will probably show only the upper one. The proof here is also not difficult: we just use the fact that we know the residues of our kernel k, and with them we can make sure that the functional equation holds. We do it in the following way. First — here again, for simplicity, we assume that the norm |x| is bigger than our critical value — we take our τ in the upper half-plane, and from the upper half-plane we exclude all the images of the imaginary axis under SL(2,Z). And now, once we have excluded these images of the imaginary axis, we can
define a function like this — we call it F♯ — which has exactly the same integral representation as our function F, with x changed into r, where r is the norm of our vector (our functions are radial, so we can regard them as functions of one real variable). And now we see that this F♯ is a piecewise holomorphic function: it is holomorphic everywhere except on these images of the imaginary axis, and on those images it has jumps. The jumps, of course, are controlled by the residues of k and this function here. And now, what we know is that for every α in SL(2,Z), the residue — here we regard τ as a fixed number and z as the variable, and we let z approach the α-translate of τ — will look like this, and this will help us to understand the jumps of this function. Another property of F♯ which I forgot to mention: even though it is not holomorphic anymore, what we gained in return is that F♯ satisfies the homogeneous functional equation, because our kernel k satisfies the homogeneous functional equation. So now what we would like to do is extend the function F holomorphically outside of this domain. One way to do it: suppose we have a point τ which sits here, just on the boundary, and we would like to extend our function F, for example, to this domain here, which we can call U; U will be the set of all points w such that the real part of w is bigger than one and the distance between the fixed τ and w is smaller than ε. And now, to extend it, we first have to look at which of the translates of τ lie on the images of the imaginary axis.
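The way such jumps are controlled by residues is the classical Sokhotski–Plemelj mechanism: a Cauchy-type integral jumps by 2πi times the residue of its kernel when the argument crosses the contour of integration. A toy numerical illustration, with the simple kernel e^{-t}/(t − z) standing in for the actual kernel k (my own example, only to show the mechanism):

```python
import cmath

def simpson(g, a, b, n):
    """Composite Simpson rule for a complex-valued g on [a, b]; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

def cauchy_integral(z):
    """F(z) = int_0^40 exp(-t) / (t - z) dt, for z off the positive real axis.

    The integration grid is refined near Re(z), where the integrand peaks."""
    x = z.real
    g = lambda t: cmath.exp(-t) / (t - z)
    return (simpson(g, 0.0, x - 0.1, 4000)
            + simpson(g, x - 0.1, x + 0.1, 40000)
            + simpson(g, x + 0.1, 40.0, 4000))

# Jump across the contour at x: F(x + i*eps) - F(x - i*eps) -> 2*pi*i * e^{-x},
# i.e. 2*pi*i times the residue of the kernel at the crossing point.
x, eps = 1.0, 1e-4
jump = cauchy_integral(complex(x, eps)) - cauchy_integral(complex(x, -eps))
expected = 2j * cmath.pi * cmath.exp(-x)
```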
Here it will be τ − 1, and here τ − 2, and we will also have some translates here and here. But if we look carefully at our formula here and recall how we defined our linear functional φ — remember that we are integrating along the path from 0 to i∞ — then the only one of these four points which will cause problems for us is this point here; the other points will not contribute to the singularities, because the residue of k there is zero. So what we have to do is change our contour of integration: we go around like this, along some new contour, which we can call γ. And maybe let this point be not just τ but τ₀. So now, for τ in this region U, we can define F(τ, r) like this: instead of integrating just from 0 to i∞, we take this new path. And now what is important is to see what the difference is between our extension of F at this point and the function F♯, which is also well defined on this set. We will see that the difference is exactly the integral around this point τ₀ − 1; since we know the residue, we can compute it explicitly, and this is the answer. We also know that F(τ, r) actually equals F♯(τ, r) when τ belongs to the fundamental domain. And now if we take, for example, the points τ, τ − 1 and τ − 2 here — since we know that this functional equation holds, from here we can compute what this number is. I think here I made one small mistake: this is not zero. What is true is that what satisfies the functional equation is not F♯, but rather F♯
minus the exponential. This would be the same as — this is actually the same exponential — and we apply the same functional equation, so from here we can actually compute what it is. And now, if we make no more mistakes, or make the correct number of mistakes, we will obtain that here we have zero; and by similar considerations we can also prove the second equation. Yes — okay, sorry. So that is probably all for today, and tomorrow we will continue with some interesting numerics.
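A footnote on the contour-change step used above (my own toy illustration, not part of the lecture): deforming a contour of integration across a simple pole changes the value of the integral by exactly 2πi times the residue at that pole, which is what makes the difference between the extension of F and F♯ explicitly computable. Again a simple kernel e^{-t}/(t − t₀) stands in for k:

```python
import cmath, math

def simpson(g, a, b, n):
    """Composite Simpson rule for a complex-valued g on [a, b]; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

def line(f, za, zb, n=2000):
    """Integral of f along the straight segment from za to zb."""
    return simpson(lambda s: f(za + s * (zb - za)) * (zb - za), 0.0, 1.0, n)

def arc(f, center, r, th0, th1, n=2000):
    """Integral of f along the circular arc center + r*e^{i*theta}."""
    return simpson(lambda th: f(center + r * cmath.exp(1j * th))
                   * 1j * r * cmath.exp(1j * th), th0, th1, n)

def detoured(f, t0, R, r, above):
    """Integrate f from 0 to R along the real axis, detouring around the
    pole at t0 by a semicircle of radius r, passing above or below it."""
    th0, th1 = (math.pi, 0.0) if above else (math.pi, 2 * math.pi)
    return line(f, 0.0, t0 - r) + arc(f, t0, r, th0, th1) + line(f, t0 + r, R)

t0 = 1.0
f = lambda t: cmath.exp(-t) / (t - t0)
# The two deformations differ by a full counterclockwise loop around t0,
# i.e. by 2*pi*i times the residue e^{-t0} of f there.
difference = (detoured(f, t0, 40.0, 0.3, above=False)
              - detoured(f, t0, 40.0, 0.3, above=True))
expected = 2j * cmath.pi * cmath.exp(-t0)
```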