OK, so once more I've written the Riemann hypothesis so that we can see it when we are going to use it. So I'm continuing with another type of application. I've chosen something which is really a case study of a type of sums that turns out to occur quite frequently in applications and which goes beyond the simple Riemann hypothesis: we need to learn more about trace functions in order to study certain types of sums. These are sums of products (not sum-products, but sums of products). This is not contained in the survey we wrote for the Pisa volume; you can find it in a kind of semi-survey paper called "A study in sums of products", which is on my web page, and I think it's on arXiv. So what is the situation? We want to study sums of the following type. In analytic number theory we sometimes, or let me say often, need to bound sums of the following type, or to understand them; sometimes we want more than just bounds. It's a sum over a finite field F_p, and instead of having just one K_1(x) times the conjugate of K_2(x), we have finitely many functions, m of them, and I'm going to put in one more, an (m+1)-st factor, which I will call the conjugate of L(x): so the sum is over x in F_p of K_1(x)...K_m(x) times the conjugate of L(x). The reason I'm writing it this way is that typically L plays a different role than K_1 up to K_m. Here m is at least 3; or let me say m is at least 1, although m equal to 1 or 2 can often be handled with just the Riemann hypothesis. The K_i and L are trace functions, and in applications we assume that we've already managed to reduce to geometrically irreducible factors. We also assume that we control the conductors: the complexity of all factors is small. Typically p is the main variable; everything else may depend on p, but the complexities will be bounded uniformly, independently of p. So let me give examples from the literature.
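To fix ideas, here is a small numerical sketch of the kind of sum in question. This is my own illustration, not part of the lecture: the K_i are taken to be shifted normalized Kloosterman sums, L is an additive character, and the prime p, the shifts, and the frequency h are arbitrary choices.

```python
import cmath
import math

def kloosterman(a, p):
    # Normalized Kloosterman sum Kl_2(a; p) = p^(-1/2) * sum_{x=1}^{p-1} e((x + a/x)/p),
    # where 1/x is the inverse mod p (computed as x^(p-2) mod p).
    s = sum(cmath.exp(2j * math.pi * ((x + a * pow(x, p - 2, p)) % p) / p)
            for x in range(1, p))
    return s / math.sqrt(p)

def sum_of_products(p, shifts, h):
    # S = sum_{x in F_p} K_1(x) ... K_m(x) * conj(L(x)),
    # with K_i(x) = Kl_2(x + a_i; p) and L(x) = e(h x / p).
    kl = [kloosterman(a, p) for a in range(p)]  # precompute all values
    total = 0
    for x in range(p):
        prod = 1
        for a in shifts:
            prod *= kl[(x + a) % p]
        total += prod * cmath.exp(-2j * math.pi * h * x / p)
    return total

p = 101
S = sum_of_products(p, [1, 2, 3], 1)
print(abs(S) / math.sqrt(p))  # square-root cancellation: this ratio stays bounded
```

The code can also be used to check numerically two facts used below: Kl_2 is real-valued, and the Weil bound gives |Kl_2(a; p)| at most 2.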
So m equals 2 and L equal to 1, the constant function 1, is essentially the situation of the Riemann hypothesis, the correlation sums C(K), and yesterday I gave examples. There are countless examples in the literature where we have sums like this which are exactly tailored to an application of the Riemann hypothesis. So I'll concentrate on examples where m is at least 3, and make a small list. A kind of old one is in the proof of the Burgess bound. It's a bit hidden, because it's applied in the case where the K_i's are multiplicative characters of rational functions, so you can glue them together into a single object. But the way they are formed is really by taking powers of chi(x + a_i) and multiplying, so you get a product of things where K_i(x) is a multiplicative character of x plus some shift a_i, and L(x) is 1, typically. This case can also be done without knowing more than the Riemann hypothesis, because a tensor product of one-dimensional objects remains geometrically irreducible; what follows will extend this to more complicated situations. So that's maybe one of the oldest ones. Case two, a relatively old one, is in a paper of Fouvry, Michel, Rivat and Sarkozy, which has to do with pseudo-randomness of some kind. Here they have m arbitrary, if I remember right, and the K_i's are either Kloosterman sums or symmetric powers of Kloosterman sums, I don't remember. [To the audience] Do you do arbitrary Kloosterman sums or just Kl_2? [Answer] Just Kl_2. OK. So it's something like symmetric powers of Kl_2(a_i n); I didn't define the symmetric powers, but if the power is 1, this is just Kl_2(a_i n). So: multiplicative shifts of Kloosterman sums. And L(x) is then an additive character. This is used to control the same sums over shorter intervals, with the Polya-Vinogradov method, so it's important to be able to have such an L in the statement. I'll give a long list of examples just so that it's clear that this comes up quite often.
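For the Burgess-type case, the sum is small enough to experiment with directly. A minimal sketch (my own, not from the lecture), with chi the Legendre symbol and arbitrary distinct shifts; here the Weil bound for multiplicative character sums of polynomial arguments gives |S| at most (m - 1) sqrt(p) when the a_i are distinct, since the polynomial (x + a_1)...(x + a_m) is then squarefree.

```python
import math

def legendre(x, p):
    # Legendre symbol (x/p): the quadratic character of F_p, with chi(0) = 0.
    if x % p == 0:
        return 0
    return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

def burgess_shape_sum(p, shifts):
    # sum over x in F_p of chi(x + a_1) ... chi(x + a_m),
    # the shape of sum behind the Burgess bound.
    total = 0
    for x in range(p):
        prod = 1
        for a in shifts:
            prod *= legendre(x + a, p)
        total += prod
    return total

p, shifts = 101, [0, 1, 2]
S = burgess_shape_sum(p, shifts)
# Weil bound: |S| <= (m - 1) * sqrt(p) for distinct shifts
print(S, (len(shifts) - 1) * math.sqrt(p))
```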
So in a paper of Fouvry and Iwaniec on the divisor function in arithmetic progressions, there's a case with m equal to 4, and the K_i's are Kloosterman sums at f_i(x), where the f_i are rational functions which I won't write down. What else? There's recent work of Fouvry, Ganguly, myself and Michel, and an extension by myself and Guillaume Ricotta, where again m is arbitrary and the K_i(x) are Kloosterman sums Kl_r(a_i x), where r is 2 in the first paper and arbitrary in the second (which doesn't make too much difference, actually), and L is 1. So there's no extra factor, except, again, that one can expand: in the application of this, which I may mention later, if you insert, let's say, an additive character, you are able to get a refined version in small intervals. So this is quite recent. Again quite recent: Alastair Irving, studying the divisor function in arithmetic progressions, but with well-factorable moduli, needed something like this, with K_i(x) shifted Kloosterman sums again, and L(x), again, an additive character. Last example: in a recent work of Blomer and Milicevic, they have the case where, if I remember right, m is 4, and it's again related to some Kloosterman sums. [To the audience] Is that right, Djordjo? [Answer] Yes, but the issue is exactly there, the irreducibility. Right, so this will come up. OK. So this is to show that this comes up in many applications; it is certainly not an exhaustive list. What I want to present is how one handles such sums in many contexts, and this will be a well-motivated way to introduce one of the most important invariants of trace functions and of their underlying algebraic objects, the representations: the monodromy group.
So I'll start with a case where we can actually just deal with the Riemann hypothesis and the basic formalism that we already know: what we already know suffices to get cancellation in these sums. Maybe one example. (This is a bit older, but the paper of Bombieri and Bourgain which I mentioned once or twice is also of this kind, with relatively complicated summands which I won't write down.) So here is the example. Suppose you are interested in the sum in Irving's situation, for instance, where the a_i's are distinct, and I'm going to take L equal to 1. So you have something like the sum over x of Kl_2(x + a_1) up to Kl_2(x + a_m), and without having to do anything really, we can say that there is square-root cancellation: the sum is O(sqrt(p)), and what is important here is that the implied constant depends only on m. In the applications, p is the main variable; p will go to infinity. The proof of this is not hard and has to do with the ramification behavior; and then the point will be that sometimes that's not the situation. Note that, while this is quite easy from the Riemann hypothesis, if you expand each Kloosterman sum, this becomes an additive character sum in m + 1 variables; by orthogonality you'll probably kill one or two of them, so it is an additive character sum in about m variables, and if you translate the bound O(sqrt(p)), this is square-root cancellation in character sums with an arbitrarily large number of variables. If you wrote this simply as a character sum, it would seem hopeless; and if you shift the character sum a little bit, randomly, so that it doesn't factor again in this nice way, you probably cannot do anything at the current time towards square-root cancellation. So, the proof. It is not difficult: you just isolate all the factors except one of them. I'll write k1(x) for the product of the first m - 1 factors, and k2(x) for the conjugate of the last one; Kloosterman sums are real-valued, so I don't even have to write the conjugate, I just do it for emphasis.
And I'm going to apply the Riemann hypothesis. Kl_2 is geometrically irreducible, and the shift doesn't change that, so k2 is geometrically irreducible. k1, on the other hand, need not be: it is a product of m - 1 geometrically irreducible objects, but they have rank larger than 1, and in general this product has no reason to be irreducible. (It turns out, as we'll see later, that in this case it is; but at this point we don't know it.) Even if we don't know it, we can at least factor it: decompose it into a sum of geometrically irreducible pieces. So k1 is a sum of boundedly many geometrically irreducible summands, where the bound depends only on m. (Technically, the summands might not be exactly geometrically irreducible trace functions, but this is a completely simple issue to deal with.) Let's call them M_j. We can then apply the Riemann hypothesis to every sum of M_j times the conjugate of k2. So I claim that the sum over x of M_j(x) times the conjugate of k2(x) is O(sqrt(p)), uniformly, independently of j and p. Why is that? We want to apply the Riemann hypothesis, and we have the required assumptions by this decomposition. The conductor of k2, we know, is bounded by a constant independent of p. And the decomposition of the product is also such that every summand has conductor bounded in terms of the conductor of the product, which is itself controlled in terms of the conductors of the factors, which are uniformly bounded. So c(M_j) and c(k2) are bounded by constants. (Of course, here I'm also not taking care of the fact that the product might not be exactly a trace function; you might have to take away the behavior at finitely many points and handle these separately, but that's not important.) So the only issue is: could M_j be geometrically isomorphic to k2? That's the only thing that would prevent the Riemann hypothesis from giving this estimate. And it's not possible, because k2 is ramified at -a_m; this is because Kl_2 is ramified at 0.
The formalism tells you that, under this change of variable, you need x + a_m to be 0, or x to be infinity, for the ramification of Kl_2 to appear. In fact, Kl_2 is ramified at 0 and at infinity, but infinity does not help us, because everything here is ramified at infinity. So k2 is ramified at -a_m. But I claim that M_j cannot be ramified at -a_m. Indeed, M_j is a component of the product which I called k1, and each factor Kl_2(x + a_i) appearing in it is unramified at -a_m; that's because the a_i's are distinct. And it's part of the formalism (easy to believe intuitively, and not difficult to prove once you have the right definitions) that if you have something which is unramified at a point, then any summand will also be unramified at this point. So M_j is not ramified at -a_m. On the other hand, if they were geometrically isomorphic, then M_j would have to be ramified at -a_m. Contradiction. So this is very simple, and extremely powerful. But sometimes it's not enough. Among the examples that I gave, which I think are all still here: for the Burgess bound you don't really need this, but since the factors are shifts, the same argument works if the a_i's are distinct; and similarly for some of those I erased. But there were examples where this would not work, for instance for Kl_r(a_i x), which is the situation in Fouvry-Michel-Rivat-Sarkozy, and in FGKM and KR, where we really have this situation. In this case, everything is always ramified at 0 and infinity, independently of i, so you cannot distinguish these multiplicative shifts just by saying that they are not ramified at the same places: they have the same ramification. (Something just a little bit more complicated works for the Bombieri-Bourgain sum, actually.) Moreover, even in the case that we began with, sometimes this is not enough, because sometimes the a_i's are not distinct. And when the a_i's are not distinct, you cannot always expect cancellation.
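The estimate just proved can be checked numerically. A sketch (mine, not from the lecture), with arbitrary small primes and distinct shifts: the normalized quantity S / sqrt(p) stays of bounded size as p grows.

```python
import cmath
import math

def kl2(a, p):
    # Normalized Kloosterman sum Kl_2(a; p); real-valued, so return the real part.
    s = sum(cmath.exp(2j * math.pi * ((x + a * pow(x, p - 2, p)) % p) / p)
            for x in range(1, p))
    return (s / math.sqrt(p)).real

def shifted_product_sum(p, shifts):
    # sum over x in F_p of Kl_2(x + a_1) ... Kl_2(x + a_m), distinct shifts a_i.
    kl = [kl2(a, p) for a in range(p)]
    return sum(math.prod(kl[(x + a) % p] for a in shifts) for x in range(p))

for p in (101, 211):
    S = shifted_product_sum(p, [0, 1, 5])
    print(p, S / math.sqrt(p))  # stays bounded: square-root cancellation
```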
It could be that the a_i's come in pairs which are equal, and then you have a sum of squares. In that case, you might want to know the leading term: you get something like alpha times p, and we want to know what alpha is. So if some a_i are the same (in the special case, say, where m is even and a_1 = a_2, ..., a_{m-1} = a_m, so we have m/2 equal pairs), then there's no cancellation to expect, but you might want to know the leading term, because sometimes that's what you need in applications. And it turns out that trying to understand this leading term leads you naturally to the notion of monodromy group, which is a very important invariant attached to trace functions, or rather to the underlying representations. The answer will be that, at least in some cases, and in particular when you have Kloosterman sums, the leading term (in the cases where there is a leading term) is purely combinatorial, which is rather surprising and extremely useful in applications: it is independent of p or of any fancy number theory; it has to do only with some combinatorics. And this has very nice consequences. So this is related to what is known as the monodromy group, the geometric monodromy group, of the underlying representation, and I want to define this relatively quickly. Definition. We start with a representation rho: whatever that was, it was a homomorphism from a certain group pi_1, which depends on p, to the general linear group of some finite-dimensional vector space. Inside pi_1 you have a normal subgroup, the geometric fundamental group pi_1^geom, and when you have a group and a homomorphism, you can take the image; the monodromy group of rho, G_rho^geom, will be defined from the image of pi_1^geom. Now pi_1 itself you should think of as being rather complicated and difficult to understand in a really concrete way; it has many quotients which are quite delicate to understand.
But if we have rho, we have a way of seeing things inside something a bit simpler: we take the image. This is an invariant of rho, which is quite nice. However, it was realized at some point, first by Grothendieck, that this is still too complicated, because it inherits from pi_1 a number of topological and algebraic complexities, which are more or less annihilated, or at least made much more combinatorial and manageable, when you take the Zariski closure. So what does that mean? Intuitively, the geometric monodromy group is a subgroup of GL(V), which you can think of as GL_n(C) where n is the rank, given by algebraic equations, as Philippe described in his lectures: algebraic equations in terms of the coordinates of the matrices and of the determinant. And it is the smallest such group that cannot be distinguished from the image using just polynomial relations. That's a way of understanding the Zariski closure: "cannot be distinguished". Typically, it will not be the same group at all; but if you're only allowed to use algebraic means (algebraic equations, things like linear forms, and so on), you cannot distinguish this group from the actual image of the geometric pi_1 by polynomial equations. By itself, this would probably not be very helpful. However, we know something about it: there is an abstract theorem of Deligne whose consequence is that this group, which sounds just as complicated as the whole collection of trace functions (because there's one for every trace function), can in fact only vary in a discrete, essentially countable set, because of the rigidity of certain types of linear algebraic groups. In particular, it can be described by combinatorial data, and this has the effect of making a number of invariants of trace functions, or of sums attached to them, purely combinatorial.
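In symbols, the definition just described reads as follows (standard notation, as in Katz's books; the curve and base point are suppressed):

```latex
% Geometric monodromy group: Zariski closure of the image of the
% geometric fundamental group under the representation rho.
\[
  \rho \colon \pi_1 \longrightarrow \mathrm{GL}(V), \qquad
  \pi_1^{\mathrm{geom}} \trianglelefteq \pi_1,
\]
\[
  G_\rho^{\mathrm{geom}}
  \;=\;
  \overline{\rho\bigl(\pi_1^{\mathrm{geom}}\bigr)}^{\,\mathrm{Zariski}}
  \;\subseteq\; \mathrm{GL}(V)
  \;\cong\; \mathrm{GL}_n(\mathbf{C}), \qquad n = \operatorname{rank} \rho .
\]
```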
So here is the very deep fact that Deligne proved on the way to the Riemann hypothesis in the strong form that I mentioned. It says the following: G_rho^geom is, up to a finite group (up to a finite quotient: it is built out of a finite group, which is just the group of connected components), a semisimple linear algebraic group. And here is what you should take from this, even if you don't really know what a semisimple linear algebraic group is. In practice, especially in concrete cases, this means that the group is a product of basic building blocks, which are the simple linear algebraic groups: finitely many groups among the following. It could be a finite group; at first this seems like the nice situation, because finite groups are things we can understand, but in fact this is typically the bad situation, because finite groups are quite tricky objects, as I think the lectures of Harald have suggested. And then the other building blocks are big groups: SL(V); if V has even dimension, a symplectic group, for some non-degenerate alternating bilinear form on V; an orthogonal group on V, with respect to some non-degenerate symmetric form, or a special orthogonal group; and there's a bunch of others, a few exceptional groups, these things called G_2, F_4, E_6, E_7, E_8. But essentially, once you've taken away the finite groups, you have a finite product (up to maybe finite intersections: the product might not be exactly a direct product) of groups which are very well understood, and most of the time they are classical groups which we know from linear algebra. Now, for any given such nice linear algebraic group, like SL(V), there are literally uncountably many subgroups of which it is the Zariski closure. So passing to the Zariski closure is really a highly simplifying step.
And it is an important feature of the theory that this simplification is actually not harmful for many applications. There are some deeper applications where you cannot quite afford to take the Zariski closure, or at least, if you do, you then need to invoke deep results of strong approximation type in order to actually prove something; applications to sieve, in particular. But I don't want to talk about that. [Audience] Is the semisimplicity proven by using a weak form of the Riemann hypothesis? [EK] I don't remember exactly how it's done. I think Katz has a different proof in his "Four lectures on Weil II". Or is it the one you refer to? [Audience] No, I'm thinking of Weil II. [EK] Weil II, yeah. So this is one of the important steps on the way to proving the Riemann hypothesis. And again, concretely, what can we say about this group? If you're given just the trace function, purely abstractly, it could nevertheless be a complicated thing to find this group, and then you might want to just try to use the abstract structure theory of semisimple groups to say something. But as you have seen in the applications I gave, for sums of products we usually don't have random trace functions: the K_i's are usually taken from a list of trace functions that we already know, the ones that come up all the time in analytic number theory. So it could be that someone has already computed the relevant monodromy group, and then you can use those results. As it turns out, this is the case: Katz, and a few other people including Gabber, have computed this G_rho^geom for many rho's (for many trace functions, if you think in terms of trace functions), including all those which have come up in analytic number theory up to now, as far as I know. He once said that he doesn't write papers when he doesn't succeed.
So it might be that sometimes people asked him and he didn't succeed. Anyway, he has a number of books and articles, and usually, if you come up with a reasonable trace function for analytic purposes, there's really a good chance that the statement computing this group is found in the books of Katz. For example, the first big success in this direction was in the late 1980s, for Kloosterman sums: for Kl_r we have a complete determination, and in fact even of the Zariski closure of the image of the bigger pi_1, completely. You get either SL_r when r is odd, or Sp_2r when r is even. These are nice, because in some sense they are the nicest linear algebraic groups for many purposes, in particular for Kl_2. [Audience] Well, all groups are a pain, as I'm sure you know. [EK] Well, they are not; I mean, finite orthogonal groups are really annoying, but Sp is really nice, because typically when you get Sp, the arithmetic monodromy group is also the same; and SL is the one closest to just pure linear algebra. But you can disagree. [Audience] Sp_2r or Sp_r? [EK] Sp_r, of course, thank you. These have rank r: the dimension of the vector space is r, and the group is either SL_r or Sp_r. Something which is immediate from the definition and the formalism is that this is invariant under change of variable by fractional linear transformations: this changes the representation, but will not change the image, up to isomorphism, which means that with Kl_r(x) we also get the shifted ones, the Kl_r(a_i x), and so on. What other examples? No, I'm not going to give any others; I haven't written any down really concretely. So that's one concrete approach. Nevertheless, it may be that you come up with a function that Katz has never seen yet, and you want to try to guess yourself what this thing looks like.
As you can see from the applications, what is good is when the monodromy group is "simple and complicated", like SL_r or Sp_r: simple, meaning that it is a product of just one single simple algebraic group; and complicated, in the sense that this algebraic group is not a finite group, for instance. And in that case, there's something quite wonderful which can help you guess whether you're in this type of situation, the so-called Larsen alternative. It's a theorem of Larsen, essentially, which has been generalized and studied by Katz a lot. It's a concrete way to determine whether you're in one of these situations, up to some possible finite ambiguity, by means which are similar to the diophantine criterion for irreducibility. So let rho be the representation associated to a trace function. We need a small assumption, which is not actually that difficult to either check or achieve by some twist, but I don't want to go too much into it: we need to assume that if we take the Zariski closure of the image of the whole group pi_1 associated to rho, we do not get a bigger group. Usually, the worst that happens is that this Zariski closure is this group times a finite central subgroup, and then by some kind of twist by the determinant you reduce to a twist of rho which satisfies this assumption. This is true for Kloosterman sums; it can be somewhat delicate to prove, but it is often a situation to which you can reduce. In that case, you have a way of saying a lot about the group with concrete means, using a special case of Deligne's equidistribution theorem. I'm not going into Deligne's equidistribution theorem here, but this is the key fact that is behind what follows, together with the Riemann hypothesis.
So then, let M_4(rho), the so-called fourth moment, be the integral over the subgroup K of |tr(x)|^4 with respect to Haar measure, where K, inside the geometric monodromy group seen as a complex group, is a maximal compact subgroup. For instance, if you know that G^geom is SL_r, then K would be SU_r. So you define this; mu is the probability Haar measure. This is a numerical invariant of the group: it really only depends on K, which really only depends on the group. Then we have the following facts. First come purely algebraic facts; then I will translate them concretely for trace functions. Suppose M_4 is equal to 2. (It's a number; you have to compute it. Suppose you can compute it, and you get 2.) Then there are two cases: either the group is finite, or it contains SL(V), where I see this group as a subgroup of GL(V). So it could be finite, or it could contain SL(V); it could even be a finite group times SL(V), that can happen. But at least then it's big. That's quite miraculous, because this is a statement about essentially all possible compact subgroups of a large linear group: just by computing some invariant of the Haar measure, if you get a 2, then you know that the group is actually very, very restricted. It is possible that it's finite; there are a few examples where this happens, and then you have to do something else in order to exclude this possibility. Second case that I'm going to write (there are other cases; I'll just write two). Suppose the dimension is even, and suppose you know a priori that there is a symplectic symmetry. This can come just from the construction: the way one constructs trace functions or these representations, one might know that they have a symplectic symmetry; that's something that can come from the data.
So that means you know, for some reason, that G^geom is inside the symplectic group Sp(V) for some alternating form on V. Now you compute M_4 again, and suppose in this case you find 3. Then either the group is finite, or it contains Sp(V). So this is really, really nice. But what does it mean in terms of the trace function? Because, as I stated it, this seems to require already knowing the group, or at least the K, and knowing the Haar measure and so on. So here is the second part: suppose rho_p is a sequence of such objects, with conductor bounded uniformly in terms of p, and with trace functions K_p. Then? [Audience] But Emmanuel, are those absolute values around the trace, |tr(x)|^4? [EK] Let me check. [Audience] We should get worried, because the group has a central character, which would make it vanish. [EK] Yeah, you're right: it's the absolute value to the fourth power, because otherwise this would often be 0 anyway. Modulus of the trace, fourth power. OK, let me not try to give a precise statement here; I'll just give a slightly imprecise one, which is enough to guess what happens in many cases. If the complexity of rho is small (and this can be quantified), then the point is that M_4 can be approximated, and again this approximation can be quantified with the Riemann hypothesis, by the average of the fourth power of the trace function: (1/p) times the sum over x of |K_p(x)|^4. So the fourth moment of the trace function of rho will typically be very close to an integer, and this integer will be essentially M_4(rho). So this gives you something if you're given a trace function that you can compute numerically; this could happen if it's an exponential sum, for instance. Say you are trying to guess: is the monodromy SL_3? Somehow you know the rank is 3, or maybe you don't know the rank, but you feel that this might be the case. You compute this average for suitably large p.
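For SU_2, the maximal compact subgroup in the rank-2 SL case, the moment M_4 can be computed by hand or numerically via the Weyl integration formula: the trace of a Haar-random matrix is 2 cos(theta), with density (2/pi) sin^2(theta) on [0, pi]. This little computation (my own sketch, not from the lecture) recovers M_4 = 2, and also the sixth moment 5 that will appear in a moment.

```python
import math

def moment_su2(k, steps=200_000):
    # k-th moment of |tr(g)| over SU(2), via the Weyl integration formula:
    # trace = 2*cos(theta), with density (2/pi)*sin(theta)^2 on [0, pi].
    total = 0.0
    h = math.pi / steps
    for i in range(steps):
        t = (i + 0.5) * h  # midpoint rule
        total += (abs(2 * math.cos(t)) ** k) * (2 / math.pi) * math.sin(t) ** 2 * h
    return total

print(moment_su2(4))  # ~2: the Larsen-alternative value for the SL/SU case
print(moment_su2(6))  # ~5
```

These are the Catalan numbers: the 2k-th moments of the trace on SU(2) are 1, 2, 5, 14, and so on.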
If you get 2 plus something very small, that really suggests that M_4 is 2, and hence that the geometric monodromy group is either finite or contains SL(V). And then there are tricks of Katz to try to distinguish between the two; usually, in this case, you can say more. There's a very famous example of this in the historical literature: look at Kloosterman's paper, where he introduced Kloosterman sums. As you know, he proved that Kloosterman sums are bounded by p^{3/4}, by computing the fourth moment, essentially. [Audience] He proved p^{1/4}, in your normalization. [EK] Yes, you're right, thank you. With my normalization, where the right bound is 2, he proved p^{1/4}, by showing that the fourth moment converges to 2 as p goes to infinity. (Actually, there's an explicit polynomial expression, where the leading term is 2 times p when you do the computation this way; 2 times p^2 if you write the unnormalized Kloosterman sums like he was doing.) So, in other words, Kloosterman was implicitly proving that for classical Kloosterman sums, where the rank is 2, the monodromy group is either finite or contains SL_2. The right answer is that it contains SL_2, and I think this could also be proven this way, by computing the sixth moment: you replace the four by six, there's a corresponding statement, and one can, I think, exclude the finite case this way. (That's not how Katz actually computes these monodromy groups; there are other techniques for this purpose.) And this type of computation has been done since: Daniel Wursch, in a master's thesis at ETH a few years ago, computed the fourth moment for Kl_3, so he was able basically to reprove that the monodromy of Kloosterman sums in two variables is either finite or contains SL_3, by this type of method; but it was a much more delicate computation. OK, so now let's go back to sums of products. Only five minutes left?
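Kloosterman's computation is easy to redo empirically (again my own sketch): the average of |Kl_2(a; p)|^4 over a in F_p^* approaches 2 as p grows, matching M_4 for SU_2.

```python
import cmath
import math

def kl2(a, p):
    # Normalized classical Kloosterman sum Kl_2(a; p); real-valued.
    s = sum(cmath.exp(2j * math.pi * ((x + a * pow(x, p - 2, p)) % p) / p)
            for x in range(1, p))
    return (s / math.sqrt(p)).real

def fourth_moment(p):
    # (1/p) * sum over a in F_p^* of |Kl_2(a; p)|^4, which should approach M_4 = 2.
    return sum(abs(kl2(a, p)) ** 4 for a in range(1, p)) / p

for p in (101, 211, 401):
    print(p, fourth_moment(p))  # approaches 2 as p grows
```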
[Audience] No, 15, because we started a bit late. [EK] OK, good. So why is this monodromy group relevant for sums of products? One way of phrasing it (there are other ways) is that if you know the geometric monodromy group as a concrete linear algebraic group, you might be able to say that your rho is geometrically irreducible, even in cases where it's not obvious. There's an easy fact, which is a special case of this property that the monodromy group and the representation cannot be distinguished by purely algebraic means: rho is geometrically irreducible (which is something we want) if and only if the monodromy group g, acting on V as a subgroup of GL(V), acts without invariant subspaces other than zero and V itself; in other words, V is irreducible as a representation of g. So if you know this group, then you might be able to conclude, because linear algebraic groups have been studied for a long time, and their irreducible representations are known: typically, if you're given a semisimple linear algebraic group as a subgroup of some GL(V), you're able to say whether it acts irreducibly or not. So let's go back now to our situation. We have finitely many of these representations, and we want to study the product of the trace functions. The product of the trace functions of the K_i's is, up to finitely many corrections, the trace function of the following representation, which we can write down precisely. It's obtained as a composite: you start with the original representations and take the direct sum of the rho_i's; this will give you something in GL(V_1) x ... x GL(V_m), and implicitly it associates to x the vector (K_1(x), K_2(x), and so on). Now you want to take the product, and the product means you compose with the external tensor product of the linear transformations acting on the different factors.
So this goes into GL(V_1 tensor V_2 tensor ... tensor V_m). And here you have to think of tuples (g_1, ..., g_m); so actually, here I should say product, sorry, it's clear if I write it this way: it's a product of linear groups, and if you take a tuple, you send it to the tensor product of these things. So if you know enough about the rho_i's, you might be able to check that this composite is irreducible. And here we use another fact which is surprising the first time one learns it (it was surprising for me): complicated non-abelian groups, especially simple non-abelian groups, like those that Harald discussed in the finite case (and there are analogies with this semisimple case), tend to be very independent of each other. So, complicated simple linear algebraic groups tend to be independent of each other, in the following sense. This is a fact which I will state in a slightly imprecise way, for lack of time, and in a special case; but there's a general fact along these lines which is known, called by Katz the Goursat-Kolchin-Ribet criterion. What it does, in situations like the one above, is tell you what the geometric monodromy group of the direct sum is, given some facts (which could be non-obvious) on the rho_i's themselves. So again, I'll do just the special linear case, but there are analogues for the other groups. So assume that rho_1 up to rho_m all have geometric monodromy group equal to the special linear group SL(V_i). The V_i's could be distinct, so the dimensions could be different. We want to say that the monodromy group of the direct sum is then the product of the SL(V_i)'s. Now this cannot be true if, let's say, rho_1 is equal to rho_2, because then the image is the diagonal, the elements (rho_1(g), rho_1(g)), since they are the same. So assume there is no isomorphism of representations: no geometric isomorphism of rho_i with rho_j for i different from j.
So here you need to assume a tiny bit more, which I don't want to go into in detail: you might also have to check that ρi is not isomorphic to ρj tensored with a character. But let's not dwell on this; usually that's what you need to check. Then, and this is the fact, well, if the dimensions of the Vi's are all distinct, then of course these conditions hold automatically. Then you can compute the geometric monodromy group of the direct sum, and it is as big as it can be: it must be a subgroup of the product of the monodromy groups of the factors, and in fact it is everything. This is a very surprising fact when one is used to abelian groups: if you take a product of abelian groups, there are usually many subgroups which surject onto all the factors. Basically, the group-theoretic content of this is that a subgroup of this product which surjects onto each factor, and which is not the graph of an isomorphism between factors, has to be everything. That's the group-theoretic content, and it's simply false for abelian groups: there are many vector subspaces, lines, say, in a vector space of dimension two, which surject onto both axes; in fact, that's the generic case. And this doesn't occur for complicated non-abelian groups. So I still have 10 minutes, yeah? Okay. So, remark: other cases exist; this is not restricted to SL(V). On the other hand, it is not true all the time. Sometimes, for instance, for orthogonal groups in either an odd or an even number of variables, I don't remember which, this is false. So that's one way in which orthogonal groups are bad. Actually, I was just saying that one way in which orthogonal groups are bad is that they don't always satisfy the Goursat-Kolchin-Ribet statement. OK, so now let's go back to the applications. Before I do, let me state a corollary. So how do we use this corollary?
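The abelian failure is easy to see concretely. Here is a quick enumeration, an illustrative sketch of my own rather than something from the talk, of the cyclic subgroups of (Z/5Z)² that surject onto both coordinate factors: there are several of them, none equal to the whole group, exactly the "generic lines" mentioned above.

```python
# Goursat's lemma says: for two non-abelian simple groups, a subgroup of
# the product surjecting onto both factors is either the whole product or
# the graph of an isomorphism.  For abelian groups this fails badly: in
# (Z/p)^2 every line of nonzero slope surjects onto both axes.
p = 5

def subgroup(u, v):
    """The cyclic subgroup of (Z/p)^2 generated by (u, v), as a frozenset.

    Since (Z/p)^2 is not cyclic, every such subgroup is proper."""
    return frozenset(((k * u) % p, (k * v) % p) for k in range(p))

proper_surjective = set()
for u in range(p):
    for v in range(p):
        h = subgroup(u, v)
        if (len({a for a, _ in h}) == p      # surjects onto the first factor
                and len({b for _, b in h}) == p):  # and onto the second
            proper_surjective.add(h)

# The lines {(x, a*x)} for a = 1, ..., p-1: that is p - 1 = 4 subgroups.
print(len(proper_surjective))  # → 4
```

Each of these four lines is a proper subgroup surjecting onto both factors, which is exactly what cannot happen for products of simple non-abelian groups.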
So in the situation of the Goursat-Kolchin-Ribet criterion, in the way I stated it, the tensor product is geometrically irreducible, because one can check that the product of the special linear groups acts irreducibly on the tensor product of the spaces. And that's not all. So, application: let's consider now trace functions which are Kloosterman sums Kl_r(a_i x; p), without assumptions on the a_i. Why? I mean, if there's an invariant subspace, then the external tensor product would be... Right, no, you're right; the point is the trace function of the product, this object. It follows by a sketchy argument: we have this map here, we know what the image is, it's the product of the SL factors; each Vi is irreducible, and external tensor products of irreducible representations remain irreducible, which is an easy fact for representations here, with no extra assumptions. OK, so in that case, as I said, it's not obvious, because the ramification is the same for all the factors, but one shows that the Goursat-Kolchin-Ribet criterion applies to the distinct a_i's; maybe some of them coincide, and then I group them together. At least, that's the even case. In the odd case, one has to take into account possible dualities, which I don't want to discuss. Actually, I think I should have taken them into account, yeah, sorry: here I also need to show that ρi cannot be the dual of ρj. This would not be needed in the symplectic case. Concretely, this means in practice that Ki is not proportional to Kj, and Ki is not proportional to the conjugate of Kj. So let me do the even case in any case, so as not to have to distinguish between a_i and -a_i. So suppose r is even, so 2, 4, 6, and so on; then we are in the symplectic case, which I didn't state, but it works, and you can apply the criterion to the distinct a_i's. So if the a_i's are all distinct, let's say to begin with, then the trace function which is the product of these is geometrically irreducible.
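This irreducibility, combined with the Riemann hypothesis recalled earlier, predicts square-root cancellation for the corresponding complete sums of products. Here is a small numerical illustration, purely my own sketch: the prime p = 997, the shifts (1, 2, 3), and the generous tolerance 20√p are illustrative choices, not from the talk, and Kl_2 below is the classical Kloosterman sum normalized by √p.

```python
import math

def kloosterman_table(p):
    """Return K[a] = Kl_2(a; p) / sqrt(p) for a = 1..p-1 (K[0] unused).

    Kl_2(a; p) = sum_{y=1}^{p-1} exp(2*pi*i*(y + a*y^{-1})/p); it is real,
    and |Kl_2(a; p)| <= 2*sqrt(p) by the Weil bound, so |K[a]| <= 2."""
    inv = [0] * p
    for y in range(1, p):
        inv[y] = pow(y, p - 2, p)  # y^{-1} mod p via Fermat's little theorem
    w = 2.0 * math.pi / p
    table = [0.0] * p
    for a in range(1, p):
        table[a] = sum(math.cos(w * ((y + a * inv[y]) % p))
                       for y in range(1, p)) / math.sqrt(p)
    return table

p = 997                 # an illustrative prime
a1, a2, a3 = 1, 2, 3    # distinct nonzero shifts
K = kloosterman_table(p)

# Sum of products over x in F_p^*: square-root cancellation predicts size
# O(sqrt(p)) ~ 32, versus the trivial bound 8*(p - 1) ~ 8000.
S = sum(K[a1 * x % p] * K[a2 * x % p] * K[a3 * x % p] for x in range(1, p))
print(abs(S), math.sqrt(p))
```

Running this, |S| comes out on the order of √p, far below the trivial bound, as the irreducibility argument predicts.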
One knows its rank: it's r^m. And therefore you can apply the Riemann hypothesis to compute many sums of products with these trace functions, including when the other factor L is complicated. For instance, if L is geometrically irreducible but not of dimension exactly r^m, then without having to do anything more, you get square-root cancellation. So you get something like the sum over x in F_p of Kl_r(a_1 x) ··· Kl_r(a_m x) times the conjugate of L(x) being at most a constant times the square root of p; this is again in the case of distinct a_i's. The implied constant will depend on the conductor of L and on the number of factors m, but that's all, for L geometrically irreducible with, for instance, dimension distinct from r^m. For instance, if L is just an additive character, of dimension 1, then this condition is automatic. So just to finish, and I'm finishing extremely quickly: if some a_i's are not distinct, then one still gets cancellation, let's say for L = 1; I don't have time to explain why, but it's not difficult with the same formalism, except in the obvious situation. In the paper, we explain more cases. The exception is when m is even and the a_i's come in m/2 equal pairs. In that case, the leading term can be computed from the data of the monodromy groups: it is the multiplicity of the trivial representation in the tensor product of the standard representations on the Vi's, raised to powers given by the multiplicities of the distinct a_i's. In particular, it's easy to deduce that if the m/2 pairs correspond to m/2 distinct a_i's, each repeated exactly twice, then this multiplicity is 1. This was the crucial point in the paper with Fouvry, Ganguly, and Michel, and then in similar results in the paper with Ricotta, to show Gaussian distribution for divisor functions in arithmetic progressions, on average over the residue classes. OK, so I'll stop here. Thank you. You have a question? Yes, very good.
Does this theory have some intersection with the Adolphson-Sperber theory of exponential sums, or would you say it's another world? So, the way I'm presenting it, it doesn't really have an intersection, in the sense that what they typically try to do is work with higher-dimensional character sums, and they want to find good criteria that ensure square-root cancellation. Here I'm not really doing, we're not really doing, anything multidimensional. We're trying to avoid multidimensional sums as much as possible, based on the principle that the one-variable theory can be presented as a black box, and because it's a one-variable sum, it gives square-root cancellation. If you have two-variable sums, then the only thing you can get, if not for free then at least easily, is a gain of a single factor of the square root of p, instead of full square-root cancellation. So, of course, there are situations where this will not be enough, but I think it really goes very, very far, especially in this type of application in analytic number theory, where the sums that come up often come up in this form. In the circle method, that would not be the case: when you use the circle method to count points, you naturally get Fourier transforms of characteristic functions of subvarieties, and these are genuinely higher-dimensional objects. I have the feeling that for those multidimensional sums there is a great deal of theory but very, very few applications, compared with what you say you've been able to do. So, why is it clear that M4 is an integer? Well, if you write it as an integral against Haar measure, then it's the integral of a character of a compact group, so it is the multiplicity of the trivial representation in the corresponding representation. It's not obvious if you just write it out: if you use the explicit formula for Haar measure on SU(n) and write out what the integral is, it looks like a complicated trigonometric integral that could be anything. In fact, it's an integer, and that's actually a good check for computations.
So if you compute this M4 numerically by sampling, and you get something like 2.35 while the prime p is relatively large, that usually means there's a mistake in the program.
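A quick way to run this check in the simplest case: for Kl_2-type sums, the relevant compact group is SU(2), and the moments of the trace against its Haar (Sato-Tate) measure are the Catalan numbers, hence integers. The following minimal numerical sketch (the quadrature scheme and step count are my choices, not from the talk) confirms that the fourth moment is the integer 2, not 2.35.

```python
import math

def sato_tate_moment(k, n=100_000):
    """Approximate E[(2 cos t)^(2k)] for the Sato-Tate measure
    (2/pi) sin^2(t) dt on [0, pi], using a midpoint rule with n steps.

    For SU(2) this moment is the Catalan number C_k = binom(2k, k)/(k+1),
    i.e. 1, 2, 5, ... for k = 1, 2, 3, ...: always an integer."""
    h = math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h  # midpoint of the i-th subinterval
        total += (2.0 * math.cos(t)) ** (2 * k) * (2.0 / math.pi) * math.sin(t) ** 2
    return total * h

# The fourth moment (k = 2) should be the Catalan number C_2 = 2.
print(round(sato_tate_moment(2), 3))  # → 2.0
```

If the sampled value drifts away from the nearest integer as the precision increases, that is the sign of a bug mentioned above.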