...please, Igor. Okay, thank you very much for inviting me to give this talk, and even more thanks for arranging the whole seminar program altogether; it keeps him up into the early hours every Thursday. But yeah, I enjoyed it, thank you very much. So today, indeed, I will be talking about Weyl sums of all sizes, which is joint work with Changhao Chen and Bryce Kerr. We did it while all three of us were here at UNSW, and now we have all left for different locations.

Okay, so let me start with basic definitions. Assume that we are given a vector u of real numbers which belongs to the unit cube of dimension d. In fact, it is more convenient to think about this cube as a torus, so we reduce everything modulo 1; T_d is this d-dimensional unit torus. Then our main object of study is the following Weyl sum, which we denote S_d(u; N). It depends on three parameters: d is the degree of the polynomial in the exponent, the u_i are the coefficients, and N is the length of summation. Here e(x) is the exponential function of x with the coefficient 2 pi i, so each term of the sum is a complex number of absolute value 1. As the name suggests, these sums are named after Hermann Weyl, who introduced and investigated them and, more significantly, foresaw their great value for mathematics, more than 100 years ago. And just in case you forget the definition: instead of putting the title and my name in the footer of each slide, I decided it would be more useful to put the definition there, so I hope you can see it; this definition will always be with you. There is another set in the footer which will appear a little bit later; we will talk about it then.

Okay, so when Hermann Weyl introduced these sums, he almost immediately found two very important applications, and actually that was his motivation for introducing them. In 1916 he proved that these sums can be used to establish uniform distribution results for the fractional parts of values of real polynomials. And actually, even now, more than 100 years later, we do not really know how to deal with this question otherwise: how else to show that the fractional parts of polynomials are uniformly distributed. There is only one exception, the dynamical approach of Furstenberg, which however still loses to the original approach: it gives quantitatively weaker results. A few years after this he found another exciting application: he proved the first so-called subconvexity bound for the Riemann zeta function, which is the first non-trivial result towards the still-open Lindelöf conjecture. And of course, since that time lots of other applications have been found, and here is a certainly very incomplete list: people obtained new bounds for the zero-free region of the zeta function, solved problems like Waring's problem and many other additive problems, found bounds for short character sums and for L-functions with highly composite moduli, investigated low-lying zeros of L-functions, and so on and so forth. This is a very incomplete list, as I said, and there are actually very surprising applications which at first glance have nothing to do with number theory; maybe later I will describe some of them. For example, last year Erdogan and Shakan found links between bounds of some very special Weyl sums and partial differential equations: a very unexpected link. Okay, now let me first explain what we know about Weyl sums, and then we will talk about things which we do not know.
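For reference, here is the footer definition reconstructed in symbols (the normalization is as I understand it from the talk):

\[
S_d(u; N) = \sum_{n=1}^{N} e\bigl(u_1 n + u_2 n^2 + \cdots + u_d n^d\bigr),
\qquad u = (u_1, \ldots, u_d) \in \mathsf{T}_d = (\mathbb{R}/\mathbb{Z})^d,
\]

where \( e(x) = e^{2\pi i x} \); in particular, trivially \( |S_d(u;N)| \le N \).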
Our knowledge can be roughly classified into two classes, average values and individual bounds, and it starts with average values. For average values, of course, you can trivially compute the average of the squares: it is nothing but a Parseval identity, a completely trivial result. But if you make one step forward and decide to look at other moments, the question becomes highly non-trivial; for example, there is no easy way to say anything meaningful about the first moment of Weyl sums. And finding estimates of this type, the integral over all possible vectors u of coefficients of polynomials over the unit cube T_d, for s = 2, 3 and so on, is usually known under one collective name, the Vinogradov mean value theorem, though it is better to use the plural, because there are several modifications; well, it depends on your point of view whether you want to treat it as one theorem or as a series of theorems. I will abbreviate it VMVT, which also solves the problem with the extra "s": it gets compressed into the T. So why do we call it the Vinogradov mean value theorem? Because Vinogradov in 1935 obtained the first non-trivial bounds on this quantity J_{d,s}(N), and actually with the right saving; we will talk a little later about what the right saving is and how one can estimate these moments. So the saving was right, but he achieved this right saving for values of s slightly higher than one would expect, and we now know that the expectation was correct: he went a little too far up with the values of s for which he had his bound. He also linked these average bounds to individual bounds, and within his method obtained the first non-trivial individual results which, for large values of d, were much stronger than the results of Hermann Weyl and other researchers. So he opened this direction, and after the work of many other people, most significantly Vinogradov himself, Korobov, Karatsuba, Ford, Wooley, and I am sure many others, 80 years and several dozen papers later we have the following, the optimal Vinogradov mean value theorem. What we know now is the following estimate: for any s starting from 2, this integral, the 2s-th moment of Weyl sums on average, is bounded by a sum of two terms, N^s and N^{2s - d(d+1)/2}. I ignore the o(1)'s in the exponents; I will never mention them, even though of course they are present in essentially all the estimates I write today. This result, which is optimal, and I will explain in a second why I call it optimal, is due to Trevor Wooley for d = 3; it appeared around 2016. At about the same time, Bourgain, Demeter and Guth established it for d greater or equal 4, and actually their method did not seem to work for d = 3, as far as I understand. A few years later Wooley made another step and extended this estimate to more general exponential sums, with linear combinations not just of the powers n, n^2, ..., n^d but of d arbitrary polynomials. This will be important for us later, so we will come back to it. So now let us discuss why one believes this bound is optimal; it will also help to understand how one can deal with these sums and how they behave. I want to consider a generalization of my integral J_{d,s}: I introduce another parameter, a vector lambda.
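To fix notation, here is my reconstruction of the quantities on the slides: the mean value, the optimal bound just stated, and the twisted version about to be introduced. The second moment is the trivial Parseval case, \( \int_{\mathsf{T}_d} |S_d(u;N)|^2\, du = N \), while in general

\[
J_{d,s}(N) = \int_{\mathsf{T}_d} |S_d(u;N)|^{2s}\, du
\;\ll\; N^{s+o(1)} + N^{2s - d(d+1)/2 + o(1)},
\]

\[
J_{d,s}(\lambda; N) = \int_{\mathsf{T}_d} |S_d(u;N)|^{2s}\, e(\lambda \cdot u)\, du,
\qquad \lambda = (\lambda_1, \ldots, \lambda_d) \in \mathbb{Z}^d .
\]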
So it is the same moment as before, the 2s-th moment of Weyl sums, twisted by an exponential function of the scalar product of lambda and u; there is a linear function of u in the exponent now. Okay. So what do I want to notice? I want to notice the following identity. If we take the sum, raise it to the 2s-th power, and open up the brackets, I get linear combinations of the power sums, because instead of writing the absolute value to the power 2s, I write it as S times the conjugate of S, s times each. Conjugation just changes the sign in the exponent, and I do it s times, so I have 2s factors, and they appear in this shape. And of course this linear function of lambda gets combined into the same exponent, so we have this expression. It looks very cluttered, but actually it has a very simple structure, which we will see on the next slide. What I want to do next is to change the order of summation and integration: the integration moves inside, and the product goes outside, so for each j from 1 to d I have a single integral over u_j. And now it is time to recall the orthogonality relation: when you integrate e(w u) between 0 and 1 for an integer w, there are two possible answers, namely 1 when w = 0, and 0 otherwise. So these integrals become characteristic functions of the exponent being 0 or not. If you put everything together, you see that this integral, this twisted moment, is nothing but the number of solutions to a system of equations: you have d equations, and in the j-th equation you have s j-th powers of the n's on the left-hand side and another s on the right-hand side, plus the shift lambda_j.

Okay, so it is the number of solutions to this system of equations. Now let us come back to the previous slide and look at it. I claim that it is obvious that when lambda is the zero vector, which corresponds to our initial definition, this integral is at least as large as all the other integrals. Because here is what happens: I take this integral, take absolute values, bring them inside, and use the triangle inequality; this kills the exponential, and the value can only go up. So applying absolute values, I can only increase the value, hence lambda = 0 gives the largest possible value. So we have this. And of course, when lambda = 0, when it is missing, I obviously have N^s solutions: I take n_1 equal to n_{s+1} and so on, and I do not even want to worry about permutations; I do not care about constants. So these diagonal solutions already give me N^s, which means N^s must be present in any upper bound, because the value is at least N^s. And it was the first term in the optimal Vinogradov mean value theorem. So we are done with N^s; let us look at the second term, which was more involved.

Here I want to combine three very simple statements. The first we just used: when lambda = 0, the value of this function is at least as large as all the other values; greater or equal, we do not know whether it is strictly larger. Second, if you sum over all lambdas, if you count solutions for all possible lambdas, which means you have no restrictions at all (whatever comes out, you call it lambda_j), then the sum over all lambdas without any restriction is the total number of 2s-tuples of variables, which is N^{2s}. And third, for this count to be non-zero, lambda_j must be bounded, because lambda_j is given by this expression: a sum of 2s terms, some positive and some negative, each of size at most N^j in absolute value, so |lambda_j| is at most 2s N^j.
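In symbols, again reconstructing the slides, the orthogonality relation and the resulting counting interpretation are:

\[
\int_0^1 e(w u)\, du =
\begin{cases} 1, & w = 0,\\ 0, & w \in \mathbb{Z},\ w \ne 0, \end{cases}
\]

\[
J_{d,s}(\lambda; N) = \#\Bigl\{ 1 \le n_1, \ldots, n_{2s} \le N :\
\sum_{i=1}^{s} n_i^{\,j} - \sum_{i=s+1}^{2s} n_i^{\,j} = \lambda_j, \ j = 1, \ldots, d \Bigr\}
\]

(up to the sign convention for \(\lambda\)).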
And now we are done, because we have at most that many possible values of lambda on which this function is supported, and on each of these values it is at most the value at lambda = 0. So the quantity J_{d,s}(N), without any lambda at all, times the number of possibilities for lambda, is at least the sum over all lambdas; and we know the answer for that sum, it is N^{2s}. If you put these things together, you see that this second term must also be present in any upper bound. This means that the upper bound I gave you a few slides ago, which contains both terms, is optimal.

Okay. So that was about average bounds; what is known about individual bounds? Here I wish I could tell you that at least one of the results is optimal. I can't: our knowledge is very, very sparse here. So what do we actually know? If you use Vinogradov's method plus the optimal form of the Vinogradov mean value theorem, then you have the following result. Assume you have this vector u of coefficients, and also assume that for some nu between 2 and d (the linear term is no good for us; you have to know something about a non-linear term of your polynomial; remember, u is the vector of coefficients of the polynomial, as in the definition in the footer) the coefficient u_nu is approximated by a rational number a/q with this precision. Of course, there are infinitely many approximations of this type for any irrational u_nu. Well, then for this value of N, this value of q, and this value of nu, you have the following bound. The sum does not exceed N, which is the trivial upper bound, because you have N roots of unity. And then you have some saving, because, as you see, all these exponents are negative, so they give us no trouble. There is one more term here which can potentially be a troublemaker, because q comes in a positive power; but we will talk about this in a few moments. So here you have q^{-1}, which has to be raised to this power, which of course reduces your saving, and N^{-1}, again raised to this power; and the only restriction is that q must not be very large compared to N^nu. Right now we do not really have any other plausible approach to better bounds, so at some point we will need some new ideas. And let me just tell you that the formulation looks very involved, but it is very easy to see that any bound of this type must depend on the Diophantine properties of the coefficients u_1, ..., u_d. For example, if all of them are integers, then of course you have no cancellation: with integers in the exponent all those exponential terms equal one, all vectors are aligned in the same direction, and you just get N. So you have to know something about your u_i's, how far they are from integers, which means you have to have some information of this type. Okay, so this is essentially everything I can say about individual bounds.
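For reference, the Weyl-Vinogradov-type bound being described has, as far as I can reconstruct it, the following shape. The precise exponent \(\rho(d)\) depends on which form of the VMVT is used; that one may take \(\rho(d) = 1/(d(d-1))\) with the optimal VMVT is my assumption, not read from the slide:

\[
\Bigl| u_\nu - \frac{a}{q} \Bigr| \le \frac{1}{q^2}, \quad \gcd(a,q) = 1, \quad q \le N^{\nu}
\ \Longrightarrow\
|S_d(u;N)| \le N^{1+o(1)} \left( \frac{1}{q} + \frac{1}{N} + \frac{q}{N^{\nu}} \right)^{\rho(d)} .
\]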
So in summary, what do we have? We have almost complete knowledge of the average values of Weyl sums. Yes, Philip, you wanted to ask a question? "Yeah, I had a question regarding the slide above. The condition here is not symmetric in nu, or in the ordering of the coefficients?" Yes, sure. You would like to know this for the highest coefficient; sometimes you don't. Of course you gain more when nu is large, so if you have this information for the leading coefficient, it is better. Sometimes you don't have it. So this is why the bound improves with your knowledge. And yes, it is not symmetric with respect to q and N; but this is what the method gives.

Okay, so: you know essentially everything you would like to know about average values, and you know something, but very little, about the individual behavior of Weyl sums. So what I am going to do next is move to the part with some new results, and outline some recent results which tell us something about the distribution of Weyl sums. Namely, if you look at Weyl sums of prescribed size, as my title suggests: will they be small, large, or close to the average value? After this we consider a kind of intermediate scenario, which interpolates between these two cases: we consider questions which combine average and individual results. And then I will show how these two different directions come together and allow us to say something interesting about Weyl sums. So this is the plan.

So let us talk about the distribution of values of Weyl sums. I recall our main tool, the mean value theorem: this moment does not exceed N^s, coming from the diagonal solutions, plus this term, which gives you the right saving; and it is the right saving. There are immediate implications of this. It tells us that when s runs between 1 and d(d+1)/2, exactly this expression, the first term dominates, and instead of the trivial bound N^{2s}, which comes for free without doing anything, you have N^s. This means that on average the sum is about N^{1/2}, which is natural to expect: you have N vectors and you treat them as random vectors, so you expect square-root cancellation; it is not surprising that you get N^{1/2}. Now let us take the largest possible value of s before the second term takes over, which is this last value here. It tells us that if you ask for the measure of the set of coefficients u for which the corresponding sum is bigger than N^{1/2 + epsilon}, so you add a small fixed positive epsilon to the exponent, then you have this bound: it holds outside a set of very small measure, more or less N^{-d^2 epsilon}. So you know that the measure of the set of large sums is small, for each concrete value of N.

Okay. Now you can actually do something more interesting: you can interchange the roles of N and u, and this is not immediately obvious. What is true, if you apply the Menshov-Rademacher theorem, as formulated here, is that for almost all vectors of coefficients u, and then for all N, you still have the same bound. So you have one small universal set of bad values of u to discard, and then for all sums of all lengths you have the square-root bound. It is not a very difficult proof, but it takes some effort.
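Written out, my reconstruction of the almost-everywhere statement just described is:

\[
\text{for almost all } u \in \mathsf{T}_d \ \text{(w.r.t. Lebesgue measure):}
\qquad |S_d(u;N)| \le N^{1/2 + o(1)} \ \text{ for all } N \ge 1 .
\]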
Okay. So now we are ready to ask our questions. What about typical values? You know that on average, all these moments suggest, the Weyl sums should be about N^{1/2}. But typical values are not necessarily average values. One of the questions, for example, is here: is it true that for almost all u, for almost all coefficients, for infinitely many, or maybe even for all, N, these sums are at least as large as N^{1/2}? The fact that the average values are of this order of magnitude does not allow us to make this claim. Now, what about extreme values? For example, what can we say about the set of u, the set of coefficients, for which, for infinitely many N, the sums are very small, less than N^{delta}? Is it a massive set? Does it exist at all? Maybe it is true for all vectors u; we do not know, at least not yet. The same thing you can ask about large values: what can we say about the set of u in the d-dimensional torus such that for infinitely many N these sums are occasionally very large? All these questions are wide open, and right now we are not even able to rule out such ridiculous statements as this one: that for almost all u in the d-dimensional torus, for all N, the sums are of size N^{o(1)}. This should be very, very wrong; nevertheless, I have no idea how to prove that it is wrong. So as far as I know it is still a possibility, even if I would probably bet my annual salary against it.

Okay, so let us try to make some quantitative statements. Now I want to introduce a set E, which will be with us for essentially all of my talk; it is also in the footer. The set E is a set of exceptional sums, defined as the set of u, the vectors defining our polynomial in the exponent, for which the Weyl sums corresponding to this u are at least N^{alpha} for infinitely many N. So these are the polynomials for which the sums are large. And I define theta(d) as the infimum of all alphas for which this set is of measure zero. So when alpha is bigger than this number, then for almost all u the sums are less than N^{alpha}: the exceptional set is small if alpha is bigger than theta(d). What I said before about the Menshov-Rademacher theorem allows us to claim that theta(d) is at most one half, because for almost all u we have the square-root bound with the little o; but since it is an infimum, that makes no difference. And it is certainly very natural to make the following conjecture, which Changhao Chen and I put officially in our paper, although I am sure many people would guess the same: that this is just an equality instead of an inequality, that theta(d) equals one half, and everything switches when you pass the threshold one half; almost all sums should be at least N^{1/2 - o(1)} and at most N^{1/2 + o(1)}. But as I just mentioned on the previous slide, right now we are not even able to show that theta(d) is positive. The ridiculous statement on the last slide would mean theta(d) = 0, which of course is not likely to be true. And we know nothing about this except for the case d = 2, where we have some extra tools and extra possibilities; there we know that the conjecture is actually true: for d = 2, for quadratic sums, we know that theta(2) = 1/2. Okay, so we know very little about the set, and we cannot say much about theta(d), but we can say something if you measure the size of these sets not via the Lebesgue measure, which is a tool too powerful for us to handle, but via Hausdorff dimension.
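In symbols (the notation \(\mathcal{E}_{\alpha,d}\) is mine, reconstructing the footer), the exceptional set and the threshold exponent are:

\[
\mathcal{E}_{\alpha,d} = \bigl\{ u \in \mathsf{T}_d : |S_d(u;N)| \ge N^{\alpha} \ \text{for infinitely many } N \bigr\},
\qquad
\theta(d) = \inf\bigl\{ \alpha : \lambda(\mathcal{E}_{\alpha,d}) = 0 \bigr\},
\]

where \(\lambda\) is the Lebesgue measure on \(\mathsf{T}_d\). The discussion above gives \(0 \le \theta(d) \le 1/2\), with the conjecture \(\theta(d) = 1/2\), known only for \(d = 2\).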
Hausdorff dimension is a more refined tool, and it allows us to make some non-trivial statements. We will be interested in the following regimes. Small Weyl sums, when alpha tends to zero: so we are interested in sets of u for which the sums are bigger than N^{alpha} for very small alpha, which should happen all the time, of course, but we do not know how to prove it. We will be interested in very large sums, which should form a very small set. And we are also interested in the regime when alpha tends to one half, which should be the typical case; but I put a question mark, because we still do not know this. We assume it is the typical behavior, but let us call it typical anyway. So these are three interesting regimes of alpha. As I mentioned, we will be using Hausdorff dimension, and here is the definition, which I am sure all of you know: it is the infimum of s such that for any epsilon there is a cover of our set A which satisfies this condition, namely that the sum of the diameters of the sets U_i, raised to the power s, is less than epsilon, where the diameter is just the largest distance between two elements of U_i.

So what do we know about the sets E_{alpha,d}? Well, we actually know something. We cannot show that they are very massive with respect to Lebesgue measure, but for any alpha we know that they are reasonably rich, meaning that if you take the set E_{alpha,d}, intersect it with any small cube inside the d-dimensional torus, and look at the intersection, this intersection is of positive Hausdorff dimension. So there are no holes in this set: it is everywhere dense, but more than just dense; there is enough mass in this set, many elements everywhere in the d-dimensional unit torus. More specifically, we have the following lower bound. I want to define a quantity kappa_d, defined formally by this expression. The formula itself is probably not so interesting; what is probably important to know is that kappa_d is about 3/(4d): when 3 divides d you have the exact value kappa_d = 3/(4d), and in the limit d kappa_d tends to 3/4. (One has to minimize over one parameter and maximize over nu to obtain this.) And with this definition we have the following lower bounds. For any small cube in the d-dimensional torus: when d = 2, the dimension of the intersection of our set E_{alpha,2} with this cube is at least the smaller of 3/2 and 3(1 - alpha); and we have a similar result, with the slightly weaker coefficient kappa_d, for d greater or equal 3. So everywhere you have some positive value on the right-hand side, which is probably the most interesting part here. In fact, when alpha is less than one half, the second term does not matter any more and the bound simplifies: it becomes 3/2 for d = 2, and about 3/(4d) for d greater or equal 3. And of course, for alpha less than one half we expect this set to be of positive Lebesgue measure, probably of full measure; we do not know how to prove it. So our conjecture, which says that theta(d) is one half, says exactly that one half is the threshold between small and large values of the sums: if the conjecture is true and theta(d) is one half, then this set should be of full measure for alpha below one half. And in fact we believe something even stronger holds: all these small cubes should give an intersection of full dimension d. But as I said, it is just a conjecture, and we do not really know how to approach it.
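For reference, the definition of Hausdorff dimension, and the d = 2 lower bound as I reconstruct it from the slide:

\[
\dim A = \inf\Bigl\{ s > 0 : \forall \varepsilon > 0\ \exists\ \text{a cover } A \subseteq \bigcup_i U_i \ \text{with}\ \sum_i (\operatorname{diam} U_i)^s < \varepsilon \Bigr\},
\]

\[
\dim\bigl( \mathcal{E}_{\alpha,2} \cap Q \bigr) \ \ge\ \min\bigl\{ 3/2,\ 3(1-\alpha) \bigr\}
\quad \text{for every cube } Q \subseteq \mathsf{T}_2,
\]

with an analogous bound for \(d \ge 3\) in which the constants are governed by \(\kappa_d \approx 3/(4d)\).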
So far, then, we only have this lower bound. Well, that was a result which holds for any alpha. If you consider a shorter range, then recently, together with Bryce Kerr, we obtained a stronger result. In the range where alpha is strictly between zero and one half, we almost have something which resembles what we want to prove: the Hausdorff dimension is at least d - 1/(2d). The term 1/(2d) should not really be there; but, well, so far this is the only thing we can prove. Unfortunately, this does not work for the critical value one half. We would really like to have it with alpha allowed to take the value one half; we can't, the inequality is strict. But we can do it for a potentially slightly larger set. For this we introduce yet another parameter. My definition of the set E_{alpha,d} says that the sum should be at least N^{alpha}; here we introduce a constant c and lower the threshold a little, to c N^{alpha}. And in this case, if you allow this constant c, which you may need to choose less than one, then we have the same inequality as here, even with alpha equal to one half. So this is alpha, which is now one half, and the price to pay is the extra constant: c = 1 would give us what we really wanted. We can only show that such a c exists, so potentially it could be less than one, and so the result is a little weaker than what we want.

Okay, now upper bounds. Well, as I said, we certainly expect this set to be of dimension d in the range of alpha up to one half, so we do not expect any upper bounds in that range. We only want to talk about alpha strictly greater than one half, namely about sets of coefficients of polynomials for which the Weyl sums are occasionally very large, larger than the average value. So here we have the following estimate, which looks very convoluted; there is no point in reading it. It contains some fancy function of d and alpha. But what is interesting here is that if you forget about minimizing over k and just take a concrete value of k, say d - 1, then this upper bound tells you that this function, which is an upper bound on the dimension, is d minus something: strictly less than d. So even though we expect the set E_{alpha,d} to be of Hausdorff dimension d for alpha between zero and one half, inclusive of both ends, this changes instantly when we pass one half: the dimension is certainly less than d, and this is what we can prove. So it is a non-trivial bound. And now, of course, you can try to push further and get better bounds, and indeed we obtained several other results. We have another bound, of a very strange shape, but I present it because it will bring me to the second topic which I am going to discuss. Namely, we have two integer parameters, k and d, and the only restriction is that k is between 1 and d - 1; it must be strictly less than d. Now we start with a bound, rather than with alpha, and we adjust alpha to the bound: we claim that the dimension of the set E_{alpha,d} is at most k, provided that alpha exceeds a certain expression. And again, we are interested in non-trivial bounds on the dimension, so d - 1 is what we are after; we do not want to take k = d, and it is not allowed anyway. The largest value of k which still makes sense here is d - 1.
Substituting k = d - 1, you see that alpha must exceed one half plus a function with a linear function of d in the numerator and a quadratic function of d in the denominator, so it decreases like 1/d, more or less, when d is large. So as soon as you exceed one half by a little bit, the dimension already drops below d - 1: almost instantly past the threshold, you get a drop in the dimension by one, at least. And this bound is obtained as a byproduct of some results on our next topic, projections of Weyl sums. But before we leave, just a few more words about the set E_{alpha,d}: what is the truth here, what do we expect? Well, the upper and lower bounds are very far apart; they do not come close to each other. However, surprisingly enough, there is one regime where we are not so much off, where we essentially know, if not the truth, then at least the right order of magnitude. It is the regime when alpha tends to one: when alpha is of the shape 1 - delta, with delta tending to zero. In this case we know that the dimension is squeezed between two linear functions of delta: as delta decreases, the dimension behaves like a linear function of delta. More precisely, we have these estimates; switching back to alpha, with delta = 1 - alpha, the ratio of the dimension to 1 - alpha is at least 3 and at most d^2 + 2d. So we know something at least in this regime; but otherwise we have no plausible conjectures, we have no idea what one could expect for this function, and I would very much welcome input if anyone has anything to say.

Okay, now let us move to the next topic: projections of Weyl sums. Again, recall that we have essentially complete knowledge of the average values of Weyl sums, and we know something, but overall very little, about individual values. So the question is whether one can interpolate. Now, let me recall what we know for individual values, something which I presented at the beginning of my talk. We know that if some coefficient of a non-linear term is approximated by a rational a/q up to 1/q^2, the standard Dirichlet approximation, then we are in business: we have this bound. So if you know something about at least one coefficient, we can ignore the rest. If you control, say, u_d, we do not care what happens to the other coefficients; they could even be integers, and it is still a very, very interesting case, and we still have a non-trivial bound. So essentially without doing anything we have the following result: for almost all u_d, and for all the other components of our vector u, we have this bound. This is because for almost all u_d we can choose q of roughly this size, which is a quasi-optimal choice here: the two terms q^{-1} and q N^{-d} become balanced, and the third term, for this value of q, gives us no trouble; it is dominated by the first two, which are equal in this case. So for almost all u_d you can always find approximations with a denominator of a size you control very well. So in this case, for almost all u_d, and you do not care any more what happens to the other u_i's, for all of them you have this bound, with a saving of about 1/(2(d-1)) in the exponent of N.
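In symbols, with the exponent reconstructed from the balancing just described (taking q of size about \(N^{d/2}\) together with \(\rho(d) = 1/(d(d-1))\); the exact value is my assumption), the statement reads:

\[
\text{for almost all } u_d \in [0,1):
\qquad |S_d(u;N)| \le N^{1 - \frac{1}{2(d-1)} + o(1)}
\quad \text{for all } (u_1, \ldots, u_{d-1}) \in \mathsf{T}_{d-1} \text{ and all } N .
\]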
And now you can ask what happens if you want results of this type for families of coefficients. So what do you want to say? You want to say that for almost all components of u on k prescribed positions, say the first, the second, and the tenth, and for all components on the remaining d - k positions, for all lengths N of our sum, you have a statement of this type, where XXX is whatever bound you can prove; the bound may depend on the positions you fix, on k, and maybe on something else. So this is a prototype of the statements I want to prove. This scenario was considered by several people: for example by Flaminio and Forni in 2014; then Trevor Wooley revisited this topic and used a very different approach to obtain results of this type; and finally Changhao Chen and I also worked on this last year, and we obtained several results in this direction, which I will now present.

But I want to reformulate it in a slightly more convenient form. Instead of fixing coordinates on arbitrary prescribed positions, I will always fix them on the first k positions; for this I need to permute the functions, and once I permute the functions, I may as well generalize to a different situation. So I assume that I have d polynomials which are linearly independent together with constants, meaning that no non-trivial linear combination of them is constant; it is not enough to ask that it be non-zero, they need to avoid constant functions. And given a vector u, I define this function, which I call T, not S; S is reserved for the classical case, while T means that we have linear combinations of some polynomials. And of course, when phi_i(t) = t^i, this is just the classical Weyl sum. Now I take my vector u and decompose it into two parts: a part x of dimension k, and a part y of dimension d - k. So my vector now has two components, and instead of writing T(u; N) I will write T(x, y; N), separating the two components of u.

Okay. And now, following this work of Flaminio and Forni and of Trevor Wooley, we are interested in bounds of the following shape: they should hold for almost all x and all y. So on the x components a small exceptional set is allowed to be discarded, but we need the bound for all y. Equivalently, we are interested in bounds on this quantity; it should certainly carry an absolute value, my apologies: we should talk about the largest absolute value of the sums, maximized over all y. And we want those bounds to be true for almost all values of x. So it is a supremum with respect to y, and "almost all" with respect to x. The interpretation of this is the following: we take our sum T(u; N) and project, along the y components; we take the largest sum in each projection, and we want a bound which holds for almost all x coordinates. This is what we call a projection: we basically take the supremum over all values of y, which is the same as projecting the sum. Okay. To formulate the results, we need to introduce another important parameter, which I call sigma, with the two parameters k and phi, of course, where phi is the given vector of functions: it is the sum of the degrees of the polynomials in the y part, the polynomials over which we maximize. And in this setting Wooley proved the following estimate. Assume that k is between 1 and d - 1, which means you at least have something in the y component, because k is the dimension of the vector x; and assume that our system of functions has a non-vanishing Wronskian, a necessary condition which is easy to see. Then for almost all x you have this bound.
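In symbols, with the sums and the shape of the bound reconstructed from the talk (the exact exponent gamma is on the slide and not reproduced here), the setting is:

\[
T_{\varphi}(x, y; N) = \sum_{n=1}^{N} e\bigl( u_1 \varphi_1(n) + \cdots + u_d \varphi_d(n) \bigr),
\qquad u = (x, y) \in \mathsf{T}_k \times \mathsf{T}_{d-k},
\]

\[
\text{for almost all } x \in \mathsf{T}_k :
\qquad \sup_{y \in \mathsf{T}_{d-k}} |T_{\varphi}(x, y; N)| \le N^{1/2 + \gamma + o(1)} \ \text{ for all } N .
\]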
Namely, the largest value of the sums, the supremum over y of |T_phi(x, y; N)|, does not exceed N to the one half (of course one half must be here) plus some loss over what one observes for pure Weyl sums. And this gamma, which I call gamma_W in honor of Wooley, has this form. Last year Changhao and I managed to do a little better: we reduced this exponent gamma, which, as you can see, gives a larger saving. Okay. So the bounds are non-trivial if this extra exponent gamma is less than one half, because we already have one half; this means that sigma, the sum of the degrees in the y part, should be at most a certain quantity. And please note that in the classical case, when you just permute the functions t, t^2, ..., t^d, this is always satisfied. In the more general case, well, it is a matter of luck: it depends on what your functions look like. If they contain polynomials of huge degree, then this condition is violated and you get nothing. So in the general case we do not always have non-trivial results; the problem still makes sense, and it would be nice to find a method to address it. We also have some more specialized bounds, but I will probably skip this; it is not so important.

But I want to come back to the link I mentioned: the link between projections of sums, which we have just discussed, and bounds on the dimension of the set E_{alpha,d}, the set which we defined here. To establish this link, we need to consider a new, slightly more general scenario, where we consider arbitrary projections. Before, we just discarded the y coordinates; now we want to do something else. So we consider the collection of all k-dimensional linear subspaces of R^d, and pi_V denotes the orthogonal projection onto the linear subspace V. So we project not necessarily onto the x coordinates, but onto other subspaces. And, very much as we did with E_{alpha,d}, we consider the analogous set, defined in exactly the same way except that instead of the classical Weyl sum I take my more general sums with linear combinations of other polynomials: the set of u for which the sums are large. The question we want to ask is: given this system of polynomials phi, for which alpha is the Lebesgue measure of the projection of this set zero, for all possible linear subspaces? Well, what can we say from our previous result? From our previous result, with that exponent, we know that if alpha is greater than this exponent, then the measure of the projection pi_{d,k} is zero, where pi_{d,k} is the specific projection which just discards the last d - k components. Now we are after more general projections, not just pi_{d,k} but projections onto arbitrary linear subspaces. And we have a series of results, slightly weaker bounds, which means the corresponding gammas are larger; but our method does give results for these arbitrary projections. And this allows us to address this issue: to link projections of Weyl sums to upper bounds on the Hausdorff dimension of exceptional sets. It is based on a classical result of Marstrand and Mattila, known as the Marstrand-Mattila projection theorem. I do not want to give any formal definitions; informally, it says the following. Assume that you have a set A, and think about this set A as a set of our vectors u. And assume that almost all k-dimensional projections of A are massive, where massive means of full measure. Then the Hausdorff dimension of the set must be large: at least as large as the dimension of the subspaces you control.
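The form of the projection theorem being used, stated here in its standard formulation (which is what I believe the slide refers to), is:

\[
\text{if } A \subseteq \mathbb{R}^d \text{ is Borel with } \dim A > k, \text{ then for almost all } V \in G(d,k)
\text{ the projection } \pi_V(A) \text{ has positive } k\text{-dimensional Lebesgue measure,}
\]

where \(G(d,k)\) is the Grassmannian of k-dimensional subspaces. Contrapositively: if one can show that \(\pi_V(\mathcal{E})\) has measure zero for every (or a positive-measure family of) k-dimensional subspaces V, then \(\dim \mathcal{E} \le k\); this is exactly how projection bounds for Weyl sums turn into dimension bounds.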
The larger the subspaces you control, the larger the bound you get. So it is exactly the link which allows us to use results on projections of Weyl sums to obtain upper bounds on the dimensions of these sets. Okay.

Well, now I want to talk about something else: the links between this direction and partial differential equations. I will not say much about the partial differential equations themselves, but at least I will explain what sums appear in this area. I think it was discovered by Erdogan and Shakan last year, who noticed that the previous question, with a very special system of polynomials, appears in the investigation of certain partial differential equations. The system is a polynomial phi, which could be anything, most commonly t^d but not necessarily, together with the linear function t; so we have only two functions in the system. So what do you want? You want to answer a question of the previous type: namely, you want to know the smallest possible value of theta such that for any polynomial phi of degree m, and any coefficient tau which appears here, for almost all x, these sums are less than N^{theta}. So you have d = 2, only two functions, and k = 1: you maximize over one coefficient and take almost all values of the other coefficient, and you want to know the best bound you can get here. So it is exactly the previous scenario. And when Changhao and I looked at this, we thought: well, it is a done deal, we just apply our result. Unfortunately, it did not work. As I said, our previous results give non-trivial bounds, but not always; it is never guaranteed, it depends on luck, on the structure of the exponent. Here we did not get lucky. So we got nothing from the result itself, but the method works; basically, we had to revise our method in this particular scenario, and then we obtained some results. And, as I said, this function is related to partial differential equations: there are two particular types of equations where this question appears, and this was discovered by Erdogan and Shakan in their very nice paper. Of course these sums look very strange, and nobody would probably look at them, but their interest is justified by these links. So what do we know now? We know the following. In their paper they proved that you can take theta (theta_m is the best exponent, the best value of theta) to be the smaller of two quantities: one comes from the classical Weyl and van der Corput methods, and the other from the Vinogradov mean value theorem via Bourgain-Demeter-Guth. And using our method, we improved this a little. We have a rather lengthy result, which gives specific exponents for small values of m, and then a semi-explicit formula for larger values of m. But the upshot is: there is a correction of size about the square root of m, which we ignore, and the o(1)'s, which we also ignore; and then in the denominator we have 2 s_m, where s_m is more or less m(m-1)/2, whereas they have m(m-1). So it is twice larger or smaller, depending on the point of view, asymptotically for large values of m. So this is what our methods give.
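For orientation, here is my reconstruction of the special sums in question; the normalization (the linear term carrying the almost-all variable x) is an assumption on my part. The quantity \(\theta_m\) is the smallest \(\theta\) such that for every polynomial \(\varphi\) of degree m and every real \(\tau\),

\[
\Bigl| \sum_{n=1}^{N} e\bigl( \tau \varphi(n) + x n \bigr) \Bigr| \le N^{\theta + o(1)}
\quad \text{for almost all } x \text{ and all } N .
\]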
And in the remaining few minutes, I want to outline very briefly the ideas behind the proofs, how we obtained these results. None of these ideas is new, but we put them in a slightly different context.

Well, the first idea, which was used by many people starting from Vinogradov, is the continuity of Weyl sums. It says the following: you have two vectors of coefficients, u and v, and you assume that they are close to each other. Because you work with continuous functions, of course the sums will be close to each other as well, and you can make a quantitative statement of this type. Okay. This statement works as stated and has been used; but we noticed that it can be used not only at the first step. We introduced something which can be used recursively: we invented something which we call a self-improving argument. You use this principle at the first step, and you have a result: you know that for almost all values of u your sums are small. And please remember, they are small for all values of N; this is important, and this is why interchanging the order between N and u was so important in our results. So now, for almost all values of u, the sums are small. The next time you estimate the difference between S(u) and S(v), you use partial summation, and in the partial summation you obtain sums of the type for which you already have a non-trivial result. So you improve your previous estimate, and you use this recursively until you come to the fixed point of the recursion, which gives you a bound. It does not give an optimal result, but it improves on what one can get at the first step. So that was one of the things which allowed us to get new results.

And the second idea is also a very old idea, which also goes back to Vinogradov; in fact it appears even before his mean value theorem. It now has the generic name of the completion method. So what do we want to say? We want to say that for almost all values of u, for almost all coefficients, the sums are small for all values of N. Again, it is important for us to have it in this order: we first throw away a bad set of vectors u, and then we want these bounds for all values of N. This means that for each N we have to control the size of an exceptional set, and we want it to be small. So if mu(N) is the measure of the exceptional set on which the sums are not small, we need to show that the series of the mu(N) converges: then, by the Borel-Cantelli lemma, almost all u lie in only finitely many exceptional sets, and for almost all u we have the upper bound. This approach will give you something, but it is not so good, because you have to deal with each N individually: if you want to deal with all lengths up to some capital M, you have to control M different sets. It is too much. This approach does not completely fail; it just gives a weaker result. So instead we used the completion method: rather than handling each of these sums separately, the completion method allows us to control all the sums with N in a dyadic range by just one sum. The completed sum certainly looks ugly and scary, but in fact it is harmless: all the fudge which you have to add gives you no trouble. So what is the fudge? First of all, we have a summation of 1/|h| over h between -N and N; well, we all know how to handle the harmonic series. Then in the exponent we have the same linear combination plus one extra linear term, and linear terms usually change nothing in our argument. So these completed sums are not so difficult to estimate: more or less of the same quality as the initial Weyl sums.
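A standard form of the completion inequality, controlling all partial sums by shifted complete sums (this exact normalization is my reconstruction, and absolute constants are suppressed), is:

\[
\max_{1 \le M \le N} \Bigl| \sum_{n=1}^{M} e\bigl( f(n) \bigr) \Bigr|
\;\ll\; \sum_{|h| \le N} \min\Bigl( 1, \frac{1}{|h|} \Bigr)
\Bigl| \sum_{n=1}^{N} e\Bigl( f(n) + \frac{h n}{N} \Bigr) \Bigr| ,
\]

so a single family of complete sums controls every length \(M \le N\); with \(f(n) = u_1 n + \cdots + u_d n^d\), the extra term \(hn/N\) merely shifts the linear coefficient.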
So instead of dealing with every N separately, if you want a statement for all lengths up to capital M, you only have to control about log M sets of u: those for which the sums supported, say, on powers of two are small. This is much easier, and it also allows us to get better results. And I think my time is up, so it is time to stop. I would really appreciate questions, or, even more, would appreciate answers. "Thank you so much, Igor, for the wonderful talk. Please unmute your microphones so we can properly thank Igor."