So, as I said yesterday, today we will start with something of historical importance, the sum–product theorem. But instead of proving it and then using it to prove a statement about growth in groups, I will show you how it is closely related — in fact, essentially equivalent — to growth in the affine group, a very nice small group (I won't say "simple", because it is not a simple group), and then I will show you how to prove growth in that group. So: the sum–product theorem and its relation to solvable groups. It was historically important, it played a role in SL_2 and so on, but we will see that its natural home is really solvable groups, not so much SL_2. What is this sum–product theorem? Over finite fields it is due to several people; let us put down the names of Bourgain, Katz, Tao, and Konyagin, who were involved. It says that for any subset A of, say, F_p (or F_p*) that is not too large — it has fewer than p^(1-epsilon) elements — either A·A, meaning the set of all products of two elements of A, is much larger than A, or A+A is much larger than A. What do I mean by much larger? Larger by an exponent: of size at least |A|^(1+delta), where delta depends only on epsilon. The fact that over a finite field you have such strong growth is — I'm not sure whether to still call it surprising, since by now we are familiar with it — but it is a very strong statement. Compare with what you get in, say, Freiman's theorem, which we will not talk about: when people study, as they still do, which sets grow under addition alone, we are nowhere close to being able to make growth statements of this sort, growth by a power. But if, in a field, you allow an either/or statement — either you have growth by addition or by multiplication — then you do get growth by a power.
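To see the phenomenon numerically, here is a quick experiment — an illustration only, not part of the lecture; the prime p, the random seed, and the size of A are arbitrary choices. For a typical small subset A of F_p, at least one of A+A, A·A comes out noticeably larger than A.

```python
# Toy experiment: the sum-product phenomenon in F_p.
# p, the seed and |A| are arbitrary illustrative choices.
import random

p = 101
random.seed(0)
A = random.sample(range(1, p), 10)   # a small subset of F_p*

sum_set = {(a + b) % p for a in A for b in A}    # A + A
prod_set = {(a * b) % p for a in A for b in A}   # A * A

print(len(A), len(sum_set), len(prod_set))
```

For a random A of this size, max(|A+A|, |A·A|) far exceeds |A|; the theorem says this cannot be avoided for any not-too-large A.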
This sort of statement will be familiar to some of you already, because it had long been known for R. So, first note: it was already known over R — for that matter over C — and over R this is Erdős–Szemerédi, from a while ago. (The accents in Szemerédi always look arbitrarily placed to me; I forget where they go.) But let me emphasize: you might say, finite fields are smaller, so F_p should be easier. No — in fact F_p is quite a lot harder than R, because F_p has no useful topology. All proofs that I know of, or that were known over R, use the topology in some sense, sometimes very elegantly; F_p is harder because, of course, the discrete topology is available, but it is not useful. Also, the proof over F_p turns out to generalize to essentially all fields. You have to put in conditions to make sure that A is not, say, a subfield: F_p has no nontrivial subfields, but if instead of F_p you take F_q, you have to require that A not be very close to a subfield, in some sense. Very well. So, rather than prove the sum–product theorem, I will state the main intermediate result towards it, and then I will show you how it follows from a statement about growth in groups that we will prove without using the sum–product theorem. Call it a pre-sum-product theorem, for lack of a better name; it can be found in Bourgain, or in Glibichuk–Konyagin, and, by the way, it is just as good for all intents and purposes. It is in fact a little more general, even: take two subsets, X of F_p and Y of F_p*. Then |4YX + 2Y^2·X| ≥ (1/2)·min(|X||Y|, p), where by 4A I mean A + A + A + A, four times. Of course |X||Y| could be bigger than p, and a set of more than p elements in F_p would be absurd — hence the minimum with p. One mild assumption: assume X is closed under subtraction; that is just to make the statement clean.
All right, so, what is the affine group, and how is it connected to all of this? First, let me tell you what, in my view, is the modern perspective on this. You should see this not as a statement about a field, or about one group, but as a statement about one group acting on another group. Which group acts on which one here? You have two groups, the multiplicative group and the additive group, and the multiplicative group acts on the additive group, not the other way around — that is basically distributivity. We will also see how you can put all of this back into the framework of just one object, if you wish, though I really think the action is the most natural way to see it. So, let G be the affine group over F_p. How is that defined? It is the group of transformations x ↦ ax + b of the line, so you can write its elements as matrices ( a b ; 0 1 ), with a in F_p* and b in F_p. It is a very nice example — I am always forced to look for a substitute for "simple" when talking about a non-simple group — a very nice, small example of a solvable group. Let me say a bit about its subgroups. There is U, the maximal unipotent subgroup. Many of you are familiar with these terms; if not, you just need to know that these are general terms for which these are the paradigmatic examples — other solvable, nilpotent and unipotent groups are essentially more complicated versions of the same, and the basic ideas are already here. The maximal unipotent subgroup U consists of the upper triangular matrices with 1s on the diagonal, ( 1 b ; 0 1 ). And T, for torus: here a maximal torus is the group of diagonal matrices ( a 0 ; 0 1 ). This is not the maximal torus, it is a maximal torus — but any maximal torus is conjugate to this one, so any maximal torus can be written in this way after a change of basis.
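Throughout, it may help to have a concrete model at hand. Here is a minimal sketch — my own encoding, not from the lecture — of the affine group over F_p as pairs (a, b) standing for the matrix ( a b ; 0 1 ), together with the subgroups U and T:

```python
# The affine group over F_p, encoded as pairs (a, b) ~ [[a, b], [0, 1]].
p = 7

def mul(g, h):
    """Matrix product: (a1, b1) * (a2, b2) = (a1*a2, a1*b2 + b1) mod p."""
    (a1, b1), (a2, b2) = g, h
    return ((a1 * a2) % p, (a1 * b2 + b1) % p)

def inv(g):
    """Inverse: (a, b)^{-1} = (a^{-1}, -a^{-1} b) mod p."""
    a, b = g
    ai = pow(a, -1, p)                  # modular inverse (Python 3.8+)
    return (ai, (-ai * b) % p)

U = {(1, b) for b in range(p)}          # maximal unipotent subgroup
T = {(a, 0) for a in range(1, p)}       # the diagonal maximal torus
e = (1, 0)                              # identity
```

The group axioms are easy to spot-check: `mul(g, inv(g)) == e` for every g, and U and T are closed under multiplication.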
We already talked last time about centralizers; let me remind you of the definition. The centralizer C(g) of an element g is simply the set of all h that commute with it. And it is a simple fact, quickly checked, that for g in G with g not in U — in general the right condition is for g to be regular semisimple, meaning it has distinct eigenvalues, but for g here this suffices — the centralizer C(g) is a maximal torus: not necessarily the diagonal one, but conjugate to it. All right. So, we will look at the two actions I promised. There is the action of U on itself, which is just the group operation; it corresponds to addition — you see the addition clearly in the top-right entry. And there is the action of T on U. How does T act on U? By conjugation; this corresponds to multiplication, as we will see. These are the actions we will study. Let me remind you quickly of what we saw last time, namely orbit–stabilizer, here in its version for sets. Brief reminder: we have a group G acting on a set X, an element x of X, and subsets A and B of G. The stabilizer of x, I remind you, is just the set Stab(x) of all elements of the group that fix x. Then, first — call it (*1) — the intersection with the stabilizer is large: |A^(-1)A ∩ Stab(x)| ≥ |A| / |A·x|, where A·x is the orbit of x under A. And second — call it (*2) — the intersection with the stabilizer is also not too large: |B ∩ Stab(x)| ≤ |AB| / |A·x|. It is a generalized orbit–stabilizer, because we are doing it not for subgroups, as is traditional, but for sets; and, as we saw yesterday, it is really quite easy to prove. You get many consequences out of it quite easily. Here is a consequence that you could prove with your bare hands, essentially by pigeonhole, but that you can also get immediately from orbit–stabilizer — the simplest kind of consequence. Let H be a subgroup of G, and let A be a subset of G.
Then I claim — Lemma 1 — that |A^(-1)A ∩ H| ≥ |A| / r, where r is the number of cosets of H intersecting A. How do you prove this? It is just an application of orbit–stabilizer. Define X to be G/H; G acts on G/H by left multiplication. Take x to be H itself, seen as an element of G/H. Then |A^(-1)A ∩ H| ≥ |A| / |A·x|, where A·x is understood as the orbit in G/H — and |A·x| is nothing other than r itself. All right. Now let us work with our specific groups. Lemma 2: let G be the affine group over F_p, U the maximal unipotent subgroup, and pi: G → G/U the quotient map. Let A be a subset of G with A = A^(-1), and assume A is not contained in U — so there is at least one matrix in A whose diagonal entry is not 1. Let x be in A but not in U, and let T = C(x). Then |A^2 ∩ U| ≥ |A| / |pi(A)|, and |A^2 ∩ T| ≥ |A|·|pi(A)| / |A^5|. So: A itself might have no unipotent elements and no elements in your favorite torus; however, its square A·A will have many unipotent elements — and, perhaps even more surprisingly, it will also have many elements in a torus. (I hope my handwriting is better on the board than in my notes. Yes, definitely.) For which torus? For any torus of the form C(x), for any x in A which is not in U; that x not be in U is all we need. All right. So, how does the proof go? Let A_U be A^(-1)A ∩ U, which is A^2 ∩ U because A = A^(-1). By Lemma 1, |A_U| ≥ |A| / |pi(A)| — the cosets of U in G correspond exactly to the values of pi. That part is very simple.
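Lemma 1 can be sanity-checked numerically; here is a sketch, with an arbitrary random A and with H = U, whose cosets in G are indexed by the diagonal entry:

```python
# Check |A^{-1}A ∩ H| >= |A| / r on a random subset of the affine group
# mod p, with H = U and r = number of cosets of U meeting A.
import random

p = 11
def mul(g, h):
    (a1, b1), (a2, b2) = g, h
    return ((a1 * a2) % p, (a1 * b2 + b1) % p)
def inv(g):
    a, b = g
    ai = pow(a, -1, p)
    return (ai, (-ai * b) % p)

random.seed(1)
G = [(a, b) for a in range(1, p) for b in range(p)]
A = random.sample(G, 15)

H = {(1, b) for b in range(p)}            # H = U
r = len({g[0] for g in A})                # cosets of U <-> diagonal entry
AinvA = {mul(inv(g), h) for g in A for h in A}
assert len(AinvA & H) >= len(A) / r       # Lemma 1, always true
```

Since the lemma is a theorem, the final assertion holds for any choice of A, not just this seed.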
So, consider the action of G on itself by conjugation — the most interesting action, as I was saying yesterday. By the first half of orbit–stabilizer, (*1), we have |A^(-1)A ∩ Stab(x)| ≥ |A| / |A·x|. But what is the stabilizer of x? The stabilizer in the action by conjugation is just the centralizer, which is what we defined T to be. And the orbit A·x here is {a x a^(-1) : a in A} — so it is really part of the conjugacy class of x. (And now I realize that the chalk is hiding in a box. Very well.) All right, so define A_T to be A^(-1)A ∩ Stab(x) = A^2 ∩ T. As for A_U: it lives in the unipotent subgroup by definition, and we already know it is as big as we needed — that was easy, it was just Lemma 1. But now we have to work just a tiny bit harder for the rest. As I was saying, the orbit A·x has the same cardinality as (A·x)·x^(-1) = {a x a^(-1) x^(-1) : a in A}: we are not changing the cardinality when we multiply by a fixed group element. And this set is certainly contained in A^4, because each element is a product of four elements of A and their inverses (A = A^(-1) and x is in A); and it is also contained in U, because it consists of brackets — anything of the form y x y^(-1) x^(-1) is contained in U. So |A·x| ≤ |A^4 ∩ U|. But now we apply the second, even easier half of orbit–stabilizer, (*2), to the action of G on G/U by left multiplication. There U is the stabilizer of the identity coset, the point x̄ = U in G/U; applying (*2) with A^4 and A instead of B and A, we get |A^4 ∩ U| ≤ |A·A^4| / |A·x̄| = |A^5| / |A·x̄|. And A·x̄ is nothing other than pi(A), x̄ being U seen as an element of G/U.
So, the orbit A·x̄ — how many distinct elements does it have? Just |pi(A)|; it essentially is pi(A). Very well. Putting things together, we can conclude |A_T| ≥ |A| / |A·x| ≥ |A| / |A^4 ∩ U| ≥ |A|·|pi(A)| / |A^5|, which is what we wanted. In case this is all making your head spin a little, just remember: U consists of the matrices with just 1s on the diagonal, and when we play with brackets we are basically using the fact that the quotient is F_p*, which is commutative — that is why the bracket of two elements, x y x^(-1) y^(-1), is of that form, a matrix with 1s on the diagonal. And pi just consists of looking up the diagonal entry: pi of ( r b ; 0 1 ) is r — and if you had something else in place of b, pi of the matrix would still be r — while pi of a unipotent matrix is 1. Just making sure. And when I said before that Lemma 1 could also be proven by pigeonhole — well, it is essentially the same idea, because pigeonhole is hiding inside orbit–stabilizer. Think of it as follows: there has to be a coset of H that is the most popular one, the one that contains the most elements of A; it has to contain at least |A|/r elements of A. Now take a fixed element of that coset, and multiply the inverses of the other elements of A in that coset by it: you get elements of A^(-1)A ∩ H, and it is clear that there are at least |A|/r such distinct things. But that is essentially redoing the proof, which is sort of trivial. All right. And what have we been doing here? Starting from an A, we were trying to isolate as many unipotent elements as we could — and that is about as many as we can; we are not guaranteed more than that. The elements of A fall into different cosets, but it turns out you can guarantee that the most interesting coset, the one where the diagonal entry is 1, is no less popular than the others — at the price of passing from A to A^2.
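The two facts just used — that pi only sees the diagonal entry, and that every bracket lands in U — can be verified exhaustively for a small p; a throwaway check, not part of the lecture:

```python
# Every bracket g h g^{-1} h^{-1} in the affine group mod p is unipotent,
# because pi is a homomorphism into the abelian group F_p*.
p = 5
def mul(g, h):
    (a1, b1), (a2, b2) = g, h
    return ((a1 * a2) % p, (a1 * b2 + b1) % p)
def inv(g):
    a, b = g
    ai = pow(a, -1, p)
    return (ai, (-ai * b) % p)
def pi(g):
    return g[0]                          # pi just looks up the diagonal entry

G = [(a, b) for a in range(1, p) for b in range(p)]
for g in G:
    for h in G:
        bracket = mul(mul(g, h), mul(inv(g), inv(h)))
        assert pi(bracket) == 1          # i.e. the bracket lies in U
```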
In this sort of statement we assume nothing about how slowly or rapidly A grows; but the more slowly A grows, the stronger the statement is. If |A^5| is not much larger than |A|, then you have lots and lots of elements in the intersection of A^2 with the maximal torus. Very well. And now we get to the meat. We already know how, starting from a set A in G, to isolate two large subsets: one made out of unipotent matrices in A^2, and one made out of diagonal matrices, also in A^2. Now we will show how, starting from two such sets — one of diagonal matrices, one of unipotent matrices — we can get a lot of growth in the unipotent group; in fact, almost as much as we could hope for. Proposition: let G be the affine group over F_p, U the maximal unipotent subgroup as usual, T a maximal torus — your favorite one; it does not have to be the diagonal one, since you can always write it that way. Take two sets like the ones we have just produced: A_U contained in U and A_T contained in T. We make the usual soft assumptions: A_U = A_U^(-1), and the same for A_T (if that were not the case, we would just add the inverses to the sets); the identity is in A_T and in A_U; and A_U is not just the identity. Then |A_T^2 A_U^6| ≥ (1/2)·min(|A_U|·|A_T|, p). This corresponds almost exactly to the sum–product setting, and in fact it is almost child's play to deduce the pre-sum-product theorem from it: if somebody gives you a set Y in F_p* and a set X in F_p, you just construct the diagonal matrices having the elements of Y as their entries, and the unipotent matrices having the elements of X as their entries, and you apply this, and you get that. I could even make the statement more precise if I had more space and more chalk, so that you would get exactly the form of the pre-sum-product statement.
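The dictionary between the two settings can be made concrete; a small sketch (the sets X and Y are arbitrary examples): conjugating a unipotent matrix by a torus element multiplies its entry, and multiplying two unipotent matrices adds their entries.

```python
# Sum-product dictionary: Y in F_p* -> diagonal matrices (in T),
# X in F_p -> unipotent matrices (in U); conjugation encodes products y*x,
# multiplication inside U encodes sums x1 + x2.
p = 17
X = [1, 3, 4]                        # subset of F_p   -> A_U = {(1, x)}
Y = [2, 5]                           # subset of F_p*  -> A_T = {(y, 0)}

def conj(t, u):                      # t u t^{-1} in the (a, b) encoding
    return (1, (t[0] * u[1]) % p)

def mulU(u1, u2):                    # product of two elements of U
    return (1, (u1[1] + u2[1]) % p)

A_U = [(1, x) for x in X]
A_T = [(y, 0) for y in Y]

conj_entries = {conj(t, u)[1] for t in A_T for u in A_U}
assert conj_entries == {(y * x) % p for y in Y for x in X}      # products
sum_entries = {mulU(u1, u2)[1] for u1 in A_U for u2 in A_U}
assert sum_entries == {(x1 + x2) % p for x1 in X for x2 in X}   # sums
```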
It would just save a few A_T's and A_U's, but there is no need to do that. And, as usual, we need the p in the minimum just in case |A_U|·|A_T| gets bigger than p, in which case the conclusion would otherwise be absurd. Very well. So, how do you prove this? We are not going to use the sum–product theorem here — on the contrary, this implies, and is in fact essentially equivalent to, the pre-sum-product theorem; it is just a rephrasing. All right, so how does the proof go? Call a a pivot — LaTeX is nicer here, because there it is clear that a is a variable and not the indefinite article — call a in U a pivot if the function phi_a from A_U × A_T to U given by (u, t) ↦ u · a^t, meaning u · (t a t^(-1)), is injective. We are going to have several cases. Why on earth are we calling this a pivot? Well, it's like Archimedes: give me a pivot and I will move the world. (a) If there is a pivot a already in A_U, then you are done automatically, by the definition of an injective map — over finite sets, a map such that the cardinality of the image equals the cardinality of the domain. So the image phi_a(A_U × A_T) has exactly |A_U|·|A_T| elements. And phi_a(A_U × A_T) is contained in A_T^2 A_U^6 — by this notation I mean, just as in the statement, products of (up to six) elements of A_U, each conjugated by an element of A_T^2: it is A_T, or rather A_T^2, a subset of T, acting on A_U by conjugation. This all lives in U, by the way, just to be clear. So |A_T^2 A_U^6| ≥ |phi_a(A_U × A_T)| = |A_U|·|A_T|, and you are done. Ta-da. Very easy. But what if there is no pivot in A_U? Or what if there is no pivot in U altogether? Let us do that case first.
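In coordinates the pivot map is very concrete; a sketch (the sets and the element a are arbitrary choices of mine): with u = (1, b), t = (r, 0), a = (1, c), one computes phi_a(u, t) = (1, b + r·c), so a is a pivot for A_U × A_T exactly when the value b + r·c determines the pair (b, r).

```python
# The pivot map phi_a(u, t) = u * (t a t^{-1}) in coordinates mod p.
p = 13

def phi(a, u, t):
    (_, b), (r, _), (_, c) = u, t, a
    return (1, (b + r * c) % p)          # u * t a t^{-1} = (1, b + r*c)

A_U = [(1, b) for b in (0, 1, 2)]
A_T = [(r, 0) for r in (1, 2)]
a = (1, 5)
image = {phi(a, u, t) for u in A_U for t in A_T}
is_pivot = len(image) == len(A_U) * len(A_T)
print(is_pivot)                          # prints True for these sets
```

Here the six values b + 5r mod 13 are 5, 6, 7, 10, 11, 12 — all distinct — so this particular a is a pivot.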
(b) I would not say this is the hardest case — I will tell you in a second why. Suppose there is no pivot in U; in other words, injectivity always fails. Say that (u_1, t_1) and (u_2, t_2) collide for a in U if injectivity fails there, that is, u_1 · a^(t_1) = u_2 · a^(t_2). Because there is no pivot, for every a in U there exist two distinct pairs (u_1, t_1), (u_2, t_2) colliding for it. But you also have — and this is just a tiny, tiny bit subtler, about as subtle as a linear equation in one variable, which is exactly what it is — that any two distinct pairs can collide for at most one a. Exercise: if you work it out, it is a linear equation in one variable, which has at most one solution provided it is not degenerate — or no solution at all if it degenerates, which happens exactly when u_1 = u_2 and t_1 ≠ t_2, or vice versa. All right. Let K(a) be the number of collisions in A_U × A_T — in A_U × A_T, mind, not in U × T — for a. Then consider the total number of collisions, summed over a in U. You have the diagonal terms, namely the ones given by non-distinct pairs: each of the |A_U|·|A_T| identical pairs collides for every a, and there are only that many of them. And then you have the contribution of the distinct pairs: at most one a each. Right? So the sum of K(a) over a in U is at most |A_U|·|A_T|·p + (|A_U|·|A_T|)^2. Now, right away, you see a reason why I was saying that this is not the hardest case: for every a in U, K(a) is at least |A_U|·|A_T|, because in K(a) we are counting the trivial collisions, and there are that many of them.
By assumption there are no pivots, so for every a there is also at least one non-trivial collision: K(a) ≥ |A_U|·|A_T| + 1. From this and the upper bound on the total, p·(|A_U|·|A_T| + 1) ≤ |A_U|·|A_T|·p + (|A_U|·|A_T|)^2, which implies |A_U|·|A_T| ≥ sqrt(p) = sqrt(|U|). Pretty large. And in general, in this sort of problem, the problems for large subsets are easier than the problems for small subsets — so it will not be hard from here. This is a cheap prophecy, because it will be fulfilled on the next blackboard. What do I mean by "it will not be hard"? Well, there are cases in analytic number theory where the large case is exactly the case you cannot do at all, and the hard case is the impossible case; but here it is an easy case. In these sorts of problems it often yields once you attack it with a bit of pigeonhole and Cauchy–Schwarz, and that is exactly what we are going to do. So, already, because of the size of things implied by the absence of pivots, you are starting to smell victory. Choose the a that is least collided-with, that is, the a such that K(a) is minimal. Then, by definition — or by pigeonhole, whatever you want — K(a) is no more than the average, because it is the smallest: K(a) ≤ |A_U|·|A_T| + (|A_U|·|A_T|)^2 / p — the trivial contributions, plus no more than the average share of the rest. By Cauchy–Schwarz, to keep everybody happy, the image satisfies |phi_a(A_U × A_T)| ≥ (|A_U|·|A_T|)^2 / K(a). And a calculation gives that |phi_a(A_U × A_T)| ≥ (1/(|A_U|·|A_T|) + 1/p)^(-1), which is one half the harmonic mean of p and |A_U|·|A_T|, which is at least (1/2)·min(|A_U|·|A_T|, p). But we are not quite done. Not quite. Why? We would be done if a were in A_U — but it might not be; in fact, it probably will not be. What happens if it is not? You could say that if a is not in A_U, it is sort of unreachable. But it is not unreachable — precisely because it is not a pivot. What do I mean?
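The Cauchy–Schwarz step is worth seeing numerically; a sketch with arbitrary small sets of my own choosing: K(a) counts ordered pairs of pairs (the diagonal included) that phi_a sends to the same point, and the image size is always at least (|A_U|·|A_T|)^2 / K(a).

```python
# Collision count K(a) and the Cauchy-Schwarz lower bound on the image size.
from itertools import product

p = 13
def phi(a, u, t):
    (_, b), (r, _), (_, c) = u, t, a
    return (1, (b + r * c) % p)

A_U = [(1, b) for b in range(4)]
A_T = [(r, 0) for r in range(1, 4)]
a = (1, 1)
pairs = list(product(A_U, A_T))
# K = sum over image points of multiplicity^2 (ordered pairs of pairs)
K = sum(1 for q1 in pairs for q2 in pairs if phi(a, *q1) == phi(a, *q2))
image = {phi(a, u, t) for u, t in pairs}
# Cauchy-Schwarz: (sum of multiplicities)^2 <= |image| * (sum of squares)
assert len(image) >= len(pairs) ** 2 / K
```

The assertion is an instance of Cauchy–Schwarz, so it holds for any choice of the sets and of a.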
So, because a is not a pivot — just like everything else in U, in this case — you have two distinct pairs (u_1, t_1), (u_2, t_2) that collide for a. Then t_1 must be distinct from t_2, because of what I said before: if t_1 were equal to t_2, the pairs would have no chance of colliding. And so the following map psi = psi_(t_1, t_2) from U to U will be injective: psi(v) = (t_1 v t_1^(-1)) · (t_2 v t_2^(-1))^(-1). You can see this quite simply; the simplest way is just a matrix calculation: you multiply the upper-right entry of v by r_1 on the one hand and by r_2 on the other, and take the quotient — the entry gets multiplied by r_1 − r_2, and, wonder of wonders, that is not zero. (In general, if you do this in a big matrix group, you have the root system to play with, but it is similarly easy.) Okay, so it is injective. What are we going to do with it? We really like to apply injective maps, because they do not change the size of things. And now we are going to use the fact that T is abelian. I like to call this step unfolding; you will see what happens. By definition, psi(phi_a(u, t)) = t_1 (u · t a t^(-1)) t_1^(-1) · (t_2 (u · t a t^(-1)) t_2^(-1))^(-1) — I am just writing down the definition of phi_a here, and there the definition of psi: you apply t_1, you apply t_2, you take the quotient. Now, T is abelian, so I can switch the order of t with t_1 and with t_2. And look: a was something unreachable, right? But what appears in the middle is (t_1 a t_1^(-1)) · (t_2 a t_2^(-1))^(-1), and that is not so unreachable — because when we say that the pairs collide for a, we mean u_1 t_1 a t_1^(-1) = u_2 t_2 a t_2^(-1); so (t_1 a t_1^(-1))(t_2 a t_2^(-1))^(-1) is going to be our old friend u_1^(-1) u_2 — t_1, t_2, u_1, u_2 are all old friends. The stranger we were trying to reach is expressed in terms of them.
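In coordinates the unfolding map is equally concrete; a sketch: on u = (1, b), psi_(t1,t2) multiplies b by r1 − r2, so it is injective on U precisely when r1 ≠ r2.

```python
# psi_{t1,t2}(u) = (t1 u t1^{-1}) (t2 u t2^{-1})^{-1} in coordinates mod p:
# it sends (1, b) to (1, (r1 - r2) * b).
p = 11

def psi(r1, r2, u):
    return (1, ((r1 - r2) * u[1]) % p)

r1, r2 = 3, 7
U = [(1, b) for b in range(p)]
image = {psi(r1, r2, u) for u in U}
assert len(image) == len(U)      # injective, since r1 - r2 != 0 in F_p
```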
So, the very fact that a is not a pivot — that a is, so to speak, only as good as everything else — allows us to express things in terms of what we know, the t_i's and the u_i's. Rather: we can apply to a a map we know, and get something we know. Just by applying this unfolding map and using commutativity, the map migrates to the inside and makes a very nice: psi(phi_a(u, t)) = (t_1 u t_1^(-1)) · t (u_1^(-1) u_2) t^(-1) · (t_2 u^(-1) t_2^(-1)). This is good, because now all of these factors are things that are known — they are in A_U, in A_T, and so on — so psi(phi_a(A_U × A_T)) is contained in A_T^2 A_U^6. Very well. And, as I was saying, we really love injective maps because they do not change the size of sets: since psi_(t_1, t_2) is injective, |A_T^2 A_U^6| ≥ |psi(phi_a(A_U × A_T))| = |phi_a(A_U × A_T)| — and that is precisely the set whose size we managed to bound from below. So, even though a was not a pivot, we managed to bound this from below by (1/2)·min(|A_U|·|A_T|, p). Very well. This took a little while, but I would still say it is the easy case; it is just that we introduced several concepts along the way, one of them being this technique of unfolding. It is a very simple idea. All right. (c) There exist pivots in U, but none in A_U. That is the third case: there are some pivots, but, bad luck, none of them are in A_U. What do you do then? In particular, there exist both pivots and non-pivots in U. And — this is one of the advantages of these very simple groups — since A_U is not just the identity, A_U generates U.
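The generation fact just used is simply the statement that U is cyclic of prime order; a one-line check, with g an arbitrary non-identity element of U:

```python
# Any non-identity element of U generates all of U, since U is isomorphic
# to Z/pZ, which has no proper non-trivial subgroups.
p = 13
g = (1, 5)                           # any (1, b) with b != 0 mod p
powers = set()
cur = (1, 0)
for _ in range(p):
    powers.add(cur)
    cur = (1, (cur[1] + g[1]) % p)   # multiplying by g inside U adds entries
assert len(powers) == p              # the powers of g exhaust U
```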
So, there exists a non-pivot a in U and a g in A_U such that g·a is a pivot. This is an idea that is true in general: if you have a subset that is neither empty nor the entire group, and you have a set of generators for the group, then you can find an element of the subset and a generator that takes you out of the subset. It is very obvious, but it is what allows you to use induction — ordinary induction over the integers is the special case where the set of generators of Z is just {1}. So, what happens then? Because g·a is a pivot, phi_(g a) is injective on A_U × A_T. So what now? We really like the element g·a: it is a pivot, and it is going to allow us to move things and get growth. But we do not know what g·a is — it is not in A_U, and a is not in A_U either. But remember, a is a non-pivot, and we already know that from the mere fact that something is not a pivot, we can get our hands on it, in some sense, just by unfolding: because a is not a pivot, it satisfies an equation involving only things we know — that one over there. And this is precisely what we will do. Because a is not a pivot, two distinct pairs (u_1, t_1), (u_2, t_2) collide at a. Now we unfold as before, applying psi_(t_1, t_2) to phi_(g a)(u, t). This, we know, is going to be large, because phi_(g a) is injective — g·a is a pivot. And, almost as before, psi(phi_(g a)(u, t)) = t_1 (u · t g a t^(-1)) t_1^(-1) · (t_2 (u · t g a t^(-1)) t_2^(-1))^(-1); we use the commutativity of T, the collision turns (t_1 a t_1^(-1))(t_2 a t_2^(-1))^(-1) into our friend u_1^(-1) u_2, and we end up, same story as before, with a product of elements of A_U — u, g, u_1, u_2, g, u — each conjugated by elements of A_T. And just as before, since psi is injective, we get a lower bound on |A_T^2 A_U^6|. Nothing magical about 2 and 6, by the way; we just care that they are constants.
But writing 2 and 6 takes less chalk than writing c_1 and c_2, and it is nice that they are small. So, as we just showed, |A_T^2 A_U^6| is at least |psi(phi_(g a)(A_U × A_T))| — all of those factors being things in A_T and A_U — and psi is injective, so it does not change matters, and g·a is a pivot, so this is |A_U|·|A_T|. And that is the end of the proof. Even though it took less time, (c) is really the important step: in general you apply this step in an induction, essentially, and you can see the whole thing as an inductive process — (b) corresponds to the base case of the induction, and (c) is the real inductive step. It is just that you do not need a prefixed ordering for induction to work; you just need a set of generators. Very well. And as a corollary we obtain — well, this gives you right away the pre-sum-product theorem; but in order to get growth in the affine group, we just have to apply it together with the lemma we proved before, which guarantees us a large set A_U and a large set A_T. Corollary: let G be the affine group over F_p, U as before, pi as before, A a subset of G containing its inverses and the identity, and A not contained in any maximal torus — if it were contained in a maximal torus, it would be a problem about growth in abelian groups, not about growth in the affine group. Then either |A^57| ≥ (1/2)·|pi(A)|^(1/2)·|A| — 57 is not optimal, but, hey, it is concrete, and it takes as much chalk as writing c_7; and this is again qualitatively optimal, since the most you can hope for in a couple of steps is a power of |pi(A)| times |A| — or |A^57| ≥ (1/2)·|pi(A)|·p, and then all of U is contained in A^112. (Thank you for mentioning numbers like 2^(2^something) the other day, which makes these numbers seem really, really small. Very good.)
Yes, but the numbers he mentioned are the ones you get from this sort of thing after you apply this sort of argument many times, together with many other arguments. So: we let T be C(g), where g is just some element of A not in U. Since A is not contained in any torus, there exists an h in A not in C(g), so h g h^(-1) g^(-1) ≠ e; and yet, since this is a bracket — this is what is always meant by the bracket, h g h^(-1) g^(-1) — it is in U, because all brackets are in U. Hence A^4 ∩ U contains a non-identity element. We apply the lemma to obtain our A_U and A_T — inside A^4 ∩ U and A^2 ∩ T — and then we apply the proposition, and we are done. All right. So this is really qualitatively best, as I said, at least once |pi(A)| is of size at least |A|^epsilon. What would happen if pi(A) were very small — if it consisted just of the identity? Then, again, you would not really have a problem about the affine group; you would just have a problem about the unipotent group, or, what is the same, about the abelian group F_p. Now, growth in abelian groups — you would say: an abelian group is a very nice, constrained sort of group (I am not allowed to say "simple"); surely growth in it will be easier to deal with than in the slightly more complicated affine group? In fact, no. The additive case is the one that was studied first, at least in additive combinatorics, and it is still being studied; but less is known about it than is known about the affine group, say. We still do not know when we can guarantee growth by a power in an abelian group. People are getting closer, but we are still not there. And this is pretty much the situation in general, qualitatively speaking. So this is typical. Morally — people always say "morally" before doing something slightly immoral, namely making a very overarching generalization —
solvable groups are like the affine group; nilpotent groups — and I am concluding here — are like U, which is F_p. And abelian groups: this is the classical case, and this is hard. We do not know when growth by a power happens; in many cases it does not. All right. So sometimes the nastier your group, the easier it is to get growth. And that makes sense, sort of, a posteriori: if a group is not abelian, it means that gh is not usually equal to hg, so things do grow more rapidly. All right. I think I will leave things here for now. Next week we will see a more complicated group — not simple, but almost simple — SL_2(F_p), and I hope to give you the main ideas there. And at the end we will take a brief look at a completely orthogonal sort of animal, namely the symmetric group. [Question from the audience — repeated into the microphone:] What are the best known deltas in the sum–product theorem? Good question; I think people have computed this. Well, here we were doing the pre-sum-product theorem, and I think I gave the strongest results in that form, where you have several sums and products. For the sum–product theorem itself, right now I believe delta is somewhere around 1/10 for |A| no bigger than sqrt(p); and for |A| bigger than sqrt(p), I think a bounded combination of sums and products — something like A·A + A·A — is known to cover everything, or close to it.