to thank all the organizers for the invitation, and thank you for the nice introduction. One thing: if you post a question in the chat and I happen to miss it, please feel free to speak up; I would be happy to address any questions. Today I would like to talk about joint work with Frank Calegari and Vesselin Dimitrov on applications of arithmetic holonomicity theorems. I would first like to discuss the first application, which some of you may already have heard one of us speak about: the unbounded denominator conjecture. Then I will use that as an opportunity to motivate the so-called arithmetic holonomicity theorem, and finally I will give another application, to p-adic zeta values. So let me start with the unbounded denominator conjecture. The setting: we consider finite index subgroups of SL_2(Z). There is a special type, the principal congruence subgroups Γ(N), consisting of the elements congruent to the identity matrix mod N. This lets us see two types of subgroups: the congruence ones, namely those that can be defined using congruence conditions, and the others, which we call noncongruence subgroups. The question is: can we distinguish these two types of subgroups other than by this definition? This goes back to the work of Atkin and Swinnerton-Dyer, who proposed the conjecture at a 1968 conference: they suggested that one could study the Fourier coefficients of the modular forms whose level is the finite index subgroup Γ, and use them to distinguish whether Γ is congruence or noncongruence. To set up some notation, I would like to recall a classical example going back to the work of Klein and Fricke. We start from the lambda function, namely the hauptmodul of the moduli of elliptic curves with full level two structure, which can be written down concretely by this q-expansion.
The exact formula does not really matter for us; what is important is that λ/16 has a q-expansion. I forgot to write that here q = e^{πiτ}. The important thing is that λ/16, as a q-expansion, has integer coefficients, no constant term, and leading coefficient one for q. So this is the important thing for us about λ/16. We can then study finite index subgroups of Γ(2) coming from coverings of the plane minus two points. One easy covering comes from the n-th power map, and this gives a finite index subgroup Γ_n whose hauptmodul is nothing but the n-th root of λ/16. Klein and Fricke showed that this group Γ_n is congruence if and only if n divides 8. One can actually do a concrete computation in this special case to see which of these subgroups are congruence or not. But for the purpose of today's talk, the observation I would like to make is this: another thing we can read off from the formula above is that if we take the 8th root, it still has integer coefficients; but if we take, for instance, the third root, then we can compute the Fourier coefficients and see that denominators show up, and the denominators that show up are powers of 3. So if we just collect the facts and make observations, we see that whether a finite index subgroup (in this particular case) is congruence or not is related to whether the Fourier coefficients of the hauptmodul have denominators or not. More generally, we do not need to stick to the weight zero case: we can talk about modular forms of weight k and level Γ.
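As a quick sanity check of this observation, here is a short computation in plain Python (a sketch: it uses the standard product formula λ(τ)/16 = q·∏_{n≥1}((1+q^{2n})/(1+q^{2n-1}))^8 with q = e^{πiτ}; the truncation order PREC and all helper names are my own choices). The coefficients of λ/16 come out integral, while its formal cube root picks up denominators with growing powers of 3.

```python
from fractions import Fraction

PREC = 8  # truncate all power series at q^PREC

def mul(a, b):
    """Multiply two truncated power series (lists of coefficients)."""
    c = [Fraction(0)] * PREC
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < PREC:
                    c[i + j] += ai * bj
    return c

def inv(a):
    """Multiplicative inverse of a power series with a[0] != 0."""
    b = [Fraction(0)] * PREC
    b[0] = 1 / a[0]
    for n in range(1, PREC):
        b[n] = -b[0] * sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def powm(a, e):
    r = [Fraction(0)] * PREC
    r[0] = Fraction(1)
    for _ in range(e):
        r = mul(r, a)
    return r

# u = (lambda/16)/q = prod_{n>=1} ((1+q^{2n})/(1+q^{2n-1}))^8, q = e^{pi*i*tau}
u = [Fraction(0)] * PREC
u[0] = Fraction(1)
for n in range(1, PREC):  # factors with exponents >= PREC contribute nothing
    num = [Fraction(0)] * PREC; num[0] = Fraction(1)
    if 2 * n < PREC:
        num[2 * n] = Fraction(1)       # 1 + q^{2n}
    den = [Fraction(0)] * PREC; den[0] = Fraction(1)
    if 2 * n - 1 < PREC:
        den[2 * n - 1] = Fraction(1)   # 1 + q^{2n-1}
    u = mul(u, powm(mul(num, inv(den)), 8))

# lambda/16 = q*u: integer coefficients, no constant term, leading coefficient 1
assert all(c.denominator == 1 for c in u)
assert u[:4] == [Fraction(1), Fraction(-8), Fraction(44), Fraction(-192)]

# formal cube root g, so (lambda/16)^{1/3} = q^{1/3} * g(q)
g = [Fraction(0)] * PREC
g[0] = Fraction(1)
for n in range(1, PREC):
    # (g^3)_n = 3*g_n + (terms in g_0..g_{n-1} only), so solve for g_n
    g[n] = (u[n] - powm(g, 3)[n]) / 3

assert g[1] == Fraction(-8, 3) and g[2] == Fraction(68, 9)  # powers of 3 grow
```

So λ/16 = q - 8q² + 44q³ - 192q⁴ + …, while the cube root has coefficients -8/3, 68/9, …, with the 3-adic denominator growing, exactly the behaviour described in the talk.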
For today's talk, we allow ourselves modular forms which are meromorphic at the cusps. When we talk about q-expansions, we could of course take the q-expansion at every cusp, but for simplicity, and to be compatible with the literature, we will just take the q-expansion at the cusp i∞. Also, as we will see from the proof, since we are merely meromorphic at the cusps, the q-expansion will start from some n_0; but the case n_0 = 0, that is, assuming the form is holomorphic at this one cusp, already gives the essential case of the conjecture, and the other cases can easily be reduced to it. Usually, when we treat modular forms as holomorphic functions, we allow the Fourier coefficients to be complex numbers. But in fact we may restrict to modular forms with Fourier coefficients in Q̄: all these modular curves can be defined over Q̄, and the entire space of modular forms is spanned by modular forms defined over Q̄. So let me collect the reduction steps: for today's talk we consider modular forms, meromorphic at the cusps but, for simplicity, holomorphic at the cusp i∞, with Fourier coefficients in Q̄. This already captures all the modular forms with respect to a given finite index subgroup Γ. Now we are at the point where we can formulate the conjecture. It says the following: given a modular form with q-expansion at i∞ with Fourier coefficients in Q̄, the following two things are equivalent.
One is that it essentially does not have denominators, that is, it has bounded denominators, meaning we can use a single integer to clear all the denominators of the Fourier coefficients. Let me actually flip a couple of slides back to the example of the third root of λ: although I did not write out all the coefficients, from the first few it seems that, at least in this example, the power of 3 in the denominator gets larger and larger. So the first condition rules out denominators that grow without bound, and we study such modular forms. The second condition says that the modular form is congruence, where the level of the modular form means the largest subgroup Γ such that f satisfies the transformation law of a modular form for Γ. The conjecture, now a theorem, says that these two conditions are equivalent. Let me first remark on why a congruence modular form has bounded denominators. In the congruence situation we have classical Hecke theory, which decomposes all holomorphic modular forms into sums of Hecke eigenforms, and we know that for normalized Hecke eigenforms all the Fourier coefficients are algebraic integers. Hence for congruence modular forms we first reduce to the holomorphic case and decompose into Hecke eigenforms, and we get forms satisfying condition one. That direction is relatively classical, and it also motivates the conjecture: in the same paper, Atkin and Swinnerton-Dyer did some computations and predicted that the denominators should be unbounded whenever the modular form is not congruence. This is what is proved in the recent work of Frank Calegari, Vesselin Dimitrov, and myself.
Okay, so to motivate the title of the talk, the arithmetic holonomicity theorem, I would like to give a sketch of the idea of the proof and hint at why such a theorem is relevant to us. It is standard that one can reduce the proof of the theorem to the weight zero case. We then fix an N, the least common multiple of all the cusp widths of Γ. In the weight zero case we look at modular functions, namely rational functions with restricted poles, because we only allow poles at infinity. So basically these are rational functions on the modular curve defined by the level of Γ, and we get a finite dimensional vector space if we view it as a vector space over the j-line, or, for simplicity, as we will do, over the λ-line. (This is fine: as long as we replace our group by its intersection with Γ(2), nothing essential changes.) So this is one thing we know, at least in the congruence situation. More precisely, the number N here, the least common multiple of the cusp widths, is called the Wohlfahrt level. Wohlfahrt proved that if we have a congruence modular form (condition one says bounded denominators, condition two says congruence) whose Wohlfahrt level, the least common multiple of the cusp widths, is N, then the group Γ must contain Γ(N), and the modular form must have level containing Γ(N). That is why in this case we can control the dimension just by the index of the group Γ(N) inside SL_2(Z).
Sorry, my typo here: I switched the order in the index. In this case we can compute that the index is asymptotically of order N³. So in the congruence case we know everything about the dimension of the vector space. Now for part one: we have a weight zero modular form whose Fourier coefficients are algebraic integers, and we want to give an upper bound on the dimension of the entire vector space. This is where the algebraicity theorem, or the arithmetic holonomicity theorem, comes in. In the next couple of slides I will explain why such a theorem gives a dimension bound, in terms of quantities we can compute explicitly. Then, with some input from Nevanlinna theory, we are able to get an upper bound which is not too far from the first one. There are two parts I would like to explain. Of course, we expect a dimension of at least order N³, because we already see from classical Hecke theory that all the congruence forms are contained in our space. And then what we get is a dimension bound which is not the desired N³, but not too far from it, namely N³ log N. For the final contradiction, you can read our paper if you are interested, and I will explain later, but let me be brief here: the contradiction comes from the fact that the log N does not really matter. If we had one counterexample, we could replace τ by τ/p to construct new counterexamples. By a counterexample we mean a modular form with algebraic integer Fourier coefficients whose level is not a congruence subgroup.
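The index just mentioned can be made concrete with the standard formula [SL_2(Z) : Γ(N)] = |SL_2(Z/N)| = N³·∏_{p|N}(1 - p^{-2}), which in particular grows on the order of N³ (a minimal sketch; the helper names are mine):

```python
from fractions import Fraction

def primes_dividing(N):
    """Distinct prime divisors of N by trial division."""
    ps, n, p = [], N, 2
    while p * p <= n:
        if n % p == 0:
            ps.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        ps.append(n)
    return ps

def index_gamma(N):
    """[SL_2(Z) : Gamma(N)] = |SL_2(Z/N)| = N^3 * prod_{p | N} (1 - p^-2)."""
    idx = Fraction(N) ** 3
    for p in primes_dividing(N):
        idx *= Fraction(p * p - 1, p * p)
    return int(idx)

# The product over p | N lies between 6/pi^2 and 1, so the index is of order N^3.
print([index_gamma(N) for N in (1, 2, 3, 4, 5, 6)])  # [1, 6, 24, 48, 120, 144]
```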
Using this trick, we can produce so many counterexamples that they violate the upper bound N³ log N as N goes to infinity, and that is how we get a contradiction. So in short: the Wohlfahrt part of the computation is classical from his work; the extra input from a hypothetical noncongruence modular form means we do not need a very sharp upper bound, we have some room to breathe, namely N³ times any power of log N would suffice for the argument to work; so we just need a good enough upper bound, and that is where we use the so-called arithmetic holonomicity theorem. Okay, before I move on to the arithmetic holonomicity theorem, I would like to discuss a very special case of it first. Let me go back to the classical situation; I hope this does not seem to come out of nowhere, since we are studying modular forms whose Fourier coefficients are algebraic integers. So suppose we have a power series with integer coefficients. The easiest case in which we can say something is when the convergence radius is strictly greater than one: then, from calculus, or via the Cauchy integral formula, we know that the coefficients a_n must tend to zero as n goes to infinity; since the only integer of absolute value strictly smaller than one is zero, the series is a polynomial. And of course, from the arithmetic point of view, the archimedean place is nothing special compared to all the other, p-adic, places. So we can formulate an adelic version, where we also consider convergence radii at all the other primes.
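The baby case just described can be written out in a few lines:

```latex
\noindent\textbf{Baby case.} Let $f(x)=\sum_{n\ge 0}a_n x^n$ with $a_n\in\mathbb{Z}$,
convergent on $|x|<R$ for some $R>1$. Pick $1<r<R$ and set $M=\sup_{|x|=r}|f(x)|$.
By the Cauchy integral formula,
\[
  a_n=\frac{1}{2\pi i}\oint_{|x|=r}\frac{f(x)}{x^{n+1}}\,dx,
  \qquad\text{so}\qquad
  |a_n|\le \frac{M}{r^{\,n}}\xrightarrow[\;n\to\infty\;]{}0 .
\]
Hence $|a_n|<1$ for all large $n$; the only integer of absolute value less than $1$
is $0$, so $a_n=0$ eventually and $f$ is a polynomial.
```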
But here we put in the restriction that we only allow finitely many of these radii to possibly be strictly smaller than one. More generally, in the Borel-Dwork situation, one talks about the radius of meromorphy: the function need not be holomorphic on the disk, meromorphic is good enough, and the conclusion is that the series is a rational function. So these are the classical results where people study power series and conclude rationality. In our case we will not really be able to use this type of rationality criterion, because if you try to think about what convergence radius we could actually establish for our modular forms, it is not large enough. So we go a little further, which brings us back to the work of André on his algebraicity criterion. The difference between the previous criterion and this one: we are still talking about power series with integer coefficients, but what if the convergence radius is too small? Then we can look instead at maps from the unit disk to C, where C has coordinate x. So instead of looking only at disks around zero and trying to pick the largest possible disk, we allow ourselves to study maps φ from a disk to C, and here the derivative of φ at the origin replaces the convergence radius of the baby case. Of course, if you pick the best possible map, this number is at least the convergence radius we talked about before. Similarly, there is an adelic version; and if we require the maps to be injective, we get a rationality criterion going back to the Pólya-Bertrandias criterion. What is more important for us is that the proof is effective. Here is the point: André did not just prove that the function f is algebraic.
He also proved that one can use the function φ, or, if one uses more places, all of these uniformizations by maps from the unit disk, to give an explicit upper bound on the algebraic degree of the function. That is the important part for us, and I would like to use it for modular forms; we will see a version of this criterion later. Let me just summarize what is important: we replace the convergence radius by these maps from a disk to the coordinate. Now let's look at the example of modular forms, and for simplicity let us study those whose cusp widths divide 2, so that we can write everything as power series in q with integer coefficients; the general case can also be reduced to this one. Recall what is important for us: λ/16 is q plus higher powers, with integer coefficients, which means that if we have a power series in q with integer coefficients, we can rewrite it as a power series in x with integer coefficients, where x = λ/16. Okay, so now we have a power series in x with integer coefficients. A priori we know it is a rational function on a curve admitting a finite ramified covering of the λ-line, so it is algebraic over Q(λ). But even if we can find a nice φ, which I will explain how to do, the first, qualitative statement of the theorem is not enough. What we need is an upper bound on the algebraic degree; equivalently, we need to bound the degree of the covering. So we have a modular form f with integer coefficients living on some modular curve, and we want to bound the degree of this modular curve over P¹ using the effective upper bound on the algebraic degree coming from the theorem. Unfortunately, the theorem as stated in André's book does not give us a good enough bound.
So that is why we make the following refinements. I will expand on the notation later; the N here is just so that we can handle cusp widths which are not necessarily 2, for which we need to do something extra. Let me first display the theorem, and let me emphasize the part involving φ: we have a map φ satisfying two conditions. One is that the Fourier coefficients of f are integers; the other is that the pullback of f by φ is a holomorphic function. Feel free to think about the case N = 1, which gives the main idea of the theorem: we have the integrality of the coefficients, and when we pull back by the map φ, we get something holomorphic. Then in the end we get a dimension bound expressed essentially in terms of the function φ. We want the derivative of φ to be as large as possible, because this is what replaces the convergence radius; and we also want the function φ not to grow too quickly. The reason is that if you want to prove something similar to the baby case, you would use the Cauchy integral formula to get an upper bound on the absolute value of a quantity which we know is an integer, hence at least one in absolute value if not zero. So when we use this integral formula from complex analysis, we need to control the size of the function φ that we use to pull back f. Okay, now back to a bit of technical detail: in general, when we are not in the case of cusp widths dividing 2, we set t = q^{1/N}.
Also, instead of working with λ/16, we work with its N-th root, so the u that we use is the familiar one seen before, with the N-th root taken. Where φ comes from is the following: essentially, we want our map to avoid the singularities, that is, to avoid the cusps, because the cusps are where we do not know how to expand our modular forms. Complex-analytically speaking, if these are singularities, we will not be able to expand our power series over them. So we want to avoid the cusps, but we also want the derivative to be as large as possible, to get a small dimension bound. Given that we know which points we want to avoid, the natural thing to do is to take the universal covering: the universal covering is the map with the largest possible derivative. But in order to control the sup of |φ|, we shrink it a little, so that we are not integrating over the boundary: we pick a small closed disk inside the open disk. And that is all. Okay, so let me put it this way: what still needs explanation is why condition one holds for all our modular forms of Wohlfahrt level N with integer coefficients, although the integer coefficient part is relatively obvious here; and the choice of φ is made in the hope that the resulting dimension bound is not too big. More concretely: the integrality gives condition one of the theorem, via the Wohlfahrt level. Now, there is one cusp that I intentionally did not mention, namely the cusp 0: going back to the previous slide, we ruled out all the other cusps, but we did not rule out the cusp 0.
And we said we take the universal covering. So why is the cusp 0 fine for us? Because our assumption on the Wohlfahrt level guarantees that the local monodromy there is finite: these weight zero modular forms are rational functions on the big modular curve, and locally around the cusp they look like, at worst, an N-th root. So when we compose the λ-map with the N-th root, this already resolves the finite local monodromy, and we can extend our function over the cusp 0. That is why 0 is different from the other cusps: taking the N-th root serves exactly to resolve the local singularity so that we can extend across it. Okay, then if one follows the previous outline and does the explicit computation, which I am not going to detail, one really does get the N³ log N bound, with some inputs also from Nevanlinna theory. I would like to mention that in recent work communicated to us by Bost and Charles, they give two alternative proofs of our theorem, with the constant e here replaced by 2. For this application it does not matter which constant we have, because in the end all we want is a bound O(N³ log N). But let me put it this way: for this application we do not care about the constant, but hopefully in future applications one will see that such improvements matter. That is why I would like to present a somewhat simplified version of what we can prove as the arithmetic holonomicity theorem, which will also give us the other application, to p-adic zeta values. So let me first explain this theorem; it is similar in spirit to the previous one.
Let me first point out what was in the previous theorem. We said we could replace e by 2; we have this integral (just take the λ = 0 case), and we also have the derivative here. Okay, so two or three other things appear here: one is λ, another is the r_p, and the last one is σ. The r_p part you can feel free to ignore: it is just there because we want the easiest possible analytic version, where in condition three we require the p-adic convergence radius to be at least r_p. That is just an easy analytic version; more generally, instead of convergence radii we could also use André-type maps at the finite places, but for simplicity let us stick with p-adic convergence radii. The λ part will show up in the last couple of slides; it gives us a continuous way to see a whole family of different arithmetic holonomicity theorems. The important part is σ. Look at condition one: previously, in the application to the unbounded denominator conjecture, we had integer coefficients, or Z[1/m] coefficients to make it the adelic version, which is completely fine. But sometimes we have denominators of the form lcm(1, ..., n) raised to some integer power; the square bracket here denotes the least common multiple of 1 up to n. This matters in the later application, and there are two parts to the story. A typical example I can give here: think about the function f(x) = log(1 - x). For this function, you will see that it does not have integer coefficients: in the Taylor expansion you have 1/n, so n shows up in the denominator.
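The lcm-type denominator condition can be seen directly on this example (a small check; σ = 1 already suffices here, and the helper name is mine):

```python
from fractions import Fraction
from math import gcd

def lcm_upto(n):
    """lcm(1, 2, ..., n), the [1, ..., n] appearing in the denominator condition."""
    L = 1
    for k in range(1, n + 1):
        L = L * k // gcd(L, k)
    return L

# Taylor coefficients of log(1 - x) = -sum_{n >= 1} x^n / n
N = 25
coeffs = [Fraction(-1, n) for n in range(1, N + 1)]

# The denominators are unbounded (no single integer clears them all),
# but lcm(1..n), even to the first power (sigma = 1), clears the first n of them.
assert any(c.denominator > 1 for c in coeffs)
for n in range(1, N + 1):
    assert (lcm_upto(n) * coeffs[n - 1]).denominator == 1
```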
The logarithm is a commonly studied object in diophantine geometry, so we would like to take it into account, and that is where the σ comes in. Let me also say a little about the name of the theorem. The theorem still considers all functions with some control on the denominators of their power series; conditions two and three can be thought of as prescribing a certain convergence radius at each place, and here we only impose conditions for p dividing N, because at the other places the radius is trivially bounded below by one. Then we give an upper bound on the dimension of the vector space spanned by all these functions. Where does the holonomicity come from? Unlike the situation without denominators, where a function f with integer coefficients raised to any power still has integer coefficients and converges on the same disk, so all the conditions are preserved, once we impose this denominator condition we can no longer just raise to powers: for f², I can no longer say that it has denominators of the type lcm(1, ..., n)^σ. So it is no longer an algebraicity theorem; and indeed, we know that log(1 - x) is not an algebraic function. But the operation one can perform without increasing the denominators, while keeping the same convergence properties, is taking derivatives, and that is where holonomicity enters. Instead of using f, f², and higher powers of f to span a vector space of large dimension and talking about the algebraic degree, here we use f, the derivative of f, and the iterated derivatives of f. These span a vector space of functions which all trivially satisfy the conditions whenever f does.
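The point that differentiation is the allowed operation can be checked on the same example: log(1 - x) is not algebraic, but it is holonomic, satisfying the first-order linear equation (1 - x)f'(x) = -1 (an elementary sketch, verified on truncated Taylor series; the variable names are mine):

```python
from fractions import Fraction

N = 30
# f(x) = log(1 - x) = -sum_{n >= 1} x^n / n; list index = exponent
f = [Fraction(0)] + [Fraction(-1, n) for n in range(1, N + 1)]

# Differentiation does not worsen the lcm-type denominators; here it even
# clears them: every coefficient of f'(x) = -1/(1 - x) is -1.
fprime = [Fraction(n + 1) * f[n + 1] for n in range(N)]

# Holonomicity witness: (1 - x) * f'(x) = -1 as power series.
lhs = [fprime[n] - (fprime[n - 1] if n >= 1 else 0) for n in range(N)]
assert lhs[0] == -1 and all(c == 0 for c in lhs[1:])
```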
The conclusion is then that the differential module generated by f and its iterated derivatives is a finite dimensional vector space. In other words, f satisfies a differential equation of finite order, and we give an upper bound on the order of this differential equation. So that is the difference between the cases with and without denominators: we go beyond the algebraicity theorem and arrive at the so-called arithmetic holonomicity theorem. Here I am going to give an example of a different corollary one can obtain by choosing a different λ. But first I would like to explain where the denominator condition comes from. Think back to the baby case of the Borel-Dwork criterion for power series that we discussed before. In the situation where only finitely many primes show up in the denominators, the proof comes from the product formula. Unfortunately, when we have lcm-type denominators instead of Z[1/m] coefficients, we cannot just directly exchange the summation, because the exchange fails when there are infinitely many places to take care of. That is why there is the so-called τ-invariant, introduced in André's book, which essentially measures the discrepancy when you exchange the limit with the product formula, and with which one can prove the baby case for lcm-type denominators. The generalization of the previous statement is as follows; let me not flip back the slides but state it roughly: previously we said that if the sum of the log r_v is strictly positive, then in the finitely-many-denominators case the series is a polynomial.
Now, if infinitely many primes show up in the denominators, the generalization we should make is: if the sum of the logs of the radii is strictly greater than the τ-invariant, then the conclusion is again that the series is a polynomial. Let me just flip back: this is related to the denominators because the first two terms are the sum of the generalizations of the convergence radii, and then there is the minus σ; in this case one can compute that σ is exactly the τ-invariant in André's definition. So we definitely want this quantity to be positive in order to have a meaningful dimension bound. Back to our case, let me restate the setting we use: we have our functions, the τ-invariant is at most σ, we have the p-adic convergence radii, and for simplicity at the archimedean place we have a generalization of the radius; the conclusion is then the stated dimension bound. Compare with the previous case with the integral bound: previously we had the integral of the log of the sup of the function, but unfortunately with a coefficient 2 in front; in the present case, if the supremum is not too big, the coefficient we can put here is 1. So the point is that by choosing the λ in the previous theorem wisely, you get essentially a whole family of dimension bounds, and depending on what type of problem you are trying to solve, different λ will give better bounds in different cases.
Okay, good, so let me summarize: the input is, for each place, some generalized convergence radius, depending on how good the function is at that place, and out of all these uniformizations we can cook up a dimension bound. I would also like to mention that in André's book on G-functions, given that he had already introduced the τ-invariant, he also had a version of the dimension bound; it is just not as good as the one we present here. Okay, so now I would like to give the second application of today's theorems, to p-adic zeta values. Let me quickly go through the classical situation: we have the Riemann zeta function, we consider the zeta values at odd integers, and conjecturally we expect these to be algebraically independent over Q. There are previous results for ζ(2) and ζ(3), but we cannot say anything about the classical zeta value ζ(5). In the p-adic situation, however, life is a little easier. The p-adic zeta value can be defined as a limit: remove the Euler factor at p and take the limit of the zeta values as the arguments converge p-adically. The limit exists thanks to the Kummer congruences for the Bernoulli numbers. Alternatively, one can think of these as constant terms of p-adic Eisenstein series, since p-adic Eisenstein series can be obtained by interpolating classical Eisenstein series. In his 2005 work, Frank Calegari proved that the 2-adic and 3-adic zeta values ζ_2(3) and ζ_3(3) are irrational, using overconvergent Eisenstein series; and using the dimension bound that I showed earlier, we were able to prove that the 2-adic zeta value at 5 is not rational. So the definition gives us a number in Q_2, and what we can do is prove that this number in Q_2
is genuinely an element of Q_2 and not a rational number. Okay, so I would like to recall a little bit from Frank's work: what the Eisenstein series is, and how he set things up to use overconvergence. As I said, these p-adic zeta values can be realized as the constant term of a p-adic modular form of weight −2k in this case, and we can pair it with a classical modular form to make it into something of weight zero. Also, X_0(2) is a rational curve, so it has a hauptmodul; thus the product of a weight 2k and a weight −2k modular form, which has weight zero, can be written as a power series in the hauptmodul of this modular curve, and we would like to study this power series and apply our arithmetic holonomicity theorem. And (sorry, there is a typo here) since we are going to talk about the p-adic zeta value at 5, we are going to take k = 2. For k = 2, this is a weight −4 modular form and this is a weight 4 modular form (the 4 is the 2k here), and one can run a monodromy computation to see that these functions are indeed linearly independent: the smallest degree of a differential operator that such a function satisfies is 5. So these functions are linearly independent, and we at least have a lower bound on the dimension of the space we are going to study, if we assume for contradiction that the p-adic zeta value is rational. So what about the radii? Since it is a weight −2k p-adic modular form, the denominators involve n^(2k+1); the classical form has essentially Z-coefficients, and the hauptmodul also has Z-coefficients. So when we put things together (and taking a derivative does not change convergence radii, so we can just think about f), the p-adic convergence radius is at least 1 for all primes. And here is where the extra overconvergence comes in: in this case we know more about the 2-adic convergence radius, and we can take the 2-adic radius to be 2^12, corresponding to the region given by the disc together with the supersingular locus. Then, since the denominators come from n^(2k+1), the σ in this case is actually 2k + 1. As for φ: we want the sum of the logarithms of the radii to be strictly greater than 5, and if we just take the logarithm of this radius, we get 12 log 2, which is indeed strictly greater than 5. This means we want our archimedean radius to be not too small either, and one natural large choice is the q-coordinate: the q-coordinate gives us something whose derivative is 1, but unfortunately, if we just take a very large disc there, we will not get something for which the supremum of φ is small. The formula (let me just flip back a little bit) says that we already have the 12 log 2, which is very big, and our σ equals 5, so we want to find a region, not necessarily a disc, inside the q-coordinate disc such that this term is not too small, so that this stays positive, while this other term is not too large. Since this is a very concrete hauptmodul, one can actually study its growth behavior, and (let me not give the details of the precise choice, as there are multiple choices one can make) one could actually just choose the region
to avoid those places where the function φ gets too large, and then from our theorem we get a dimension bound of at most 6. So let me just summarize: we use the p-adic overconvergence to get a larger radius, we use the q-expansion (part of the q-coordinate disc) and our holonomicity theorem to get a dimension bound of at most 6. On the other hand, if we assume for contradiction that this p-adic zeta value is a rational number, we do have six linearly independent functions, and that gives us the contradiction. So what goes wrong? The conclusion is that we just should not have had the function f there: if the value were rational, f would have the kind of coefficients that place it in the vector space of functions with the coefficient and convergence-radius properties we studied, and it simply cannot live there. So the conclusion is that this p-adic zeta value is irrational. Okay, since I only have three minutes left, I will just display the slides and give a little hint of how we actually prove the arithmetic holonomicity theorem. This is inspired by previous work of Bost, and also by reading the work of Bost and Charles; we try to revisit the slope method. Earlier work of Bost already tried to manipulate things to get, I will put it this way, a uniform proof of arithmetic holonomicity theorems. In our paper we proved the theorems using Siegel's lemma, and here, using the slope method, we can get, at least so far, a uniform bound that covers the different cases. The upshot is that we construct a Z-module, built essentially from the function f together with some polynomials; this is how we construct the auxiliary polynomials with high vanishing order. And now I am finally at the point to explain λ. This Z-module is the thing that we are going to study, and in order to apply the slope method we need to make it into a Hermitian module; the λ here is just the choice of norm that we put on it. Given a different choice of norm, and hence a different Hermitian module, we get different estimates in the slope inequality, which involves the arithmetic degree of the Hermitian module and also the heights of the maps given by taking the n-th coefficient, with respect to the filtration by vanishing order. So once one sets up the slope method, one estimates all these terms, the arithmetic degree and also the height bounds, and putting things together we get our holonomicity bound, which depends on λ; then we just choose a suitable λ for each question in order to get a good upper bound. So let me just stop here. Thank you very much for your attention.
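For reference, the slope inequality being invoked has, schematically, the following Bost-style shape (my own notation and a sketch only; the precise formulation in the work of Bost and in the paper differs in details):

```latex
% Sketch of a Bost-style slope inequality (notation mine).
% E is the Z-module built from f and the auxiliary polynomials, made
% Hermitian by a choice of norm lambda, and filtered by vanishing order
% at the origin:
E = E_0 \supset E_1 \supset E_2 \supset \cdots,
% with evaluation maps \phi_n : E_n \to \bar{F}_n given by taking the
% n-th coefficient. Then, schematically,
\widehat{\deg}\,\bar{E} \;\le\; \sum_{n \ge 0}
  \operatorname{rk}\!\bigl(E_n / E_{n+1}\bigr)
  \Bigl( \widehat{\mu}_{\max}\bigl(\bar{F}_n\bigr) + h(\phi_n) \Bigr),
% where h(\phi_n) is the height (operator norm) of \phi_n. Changing the
% norm lambda changes both the arithmetic degree on the left and the
% heights on the right, which is the family of estimates described in
% the talk.
```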