So that's where we stood. Several of the methods, though not the results themselves, will reappear today. But we will look at a more difficult problem: that of simple linear groups — linear groups being matrix groups, or groups of Lie type, as you prefer. We will use SL_2 as an example, but almost everything I say today generalizes, both the proofs and the statements. I say almost, because SL_2 and the like are not quite simple, but they are almost simple: they have quotients, by subgroups of bounded order, that are simple. So we will do things in full for SL_2, but that will just serve to make some simplifying assumptions, that's all. So what are the main ideas? Well, a basic technique — just the first one we will see, and one that will save us a lot of work — is escape from subvarieties. This is a weak form, as Pyber and Szabó have pointed out, of something we will prove using in part escape, namely dimensional estimates. More about them later. What will these dimensional estimates lead to? Well, one of the centerpieces of all proofs ever since SL_2 is the idea that you find tori — in fact basically all tori — containing many elements of your set, or, if not of your set itself, of your set multiplied by itself a bounded number of times. Here A^{O(1)} will just mean A^k, where k is a bounded integer. And what are these tori used for? Once you have tori with many elements of A in them — a torus here is just the set of diagonal matrices, or some conjugate thereof; that's what it is in SL_n — once you have many diagonal elements, what do you do? You use them for pivoting, in an induction, much like what we did yesterday. Not yesterday — whatever it was; was it Friday? At any rate, no time for a joke involving that. And this pivoting and induction lead to growth.
Well, actually, there was the case of a Spanish poet in the 15th century who was released from the Inquisition's dungeons after several years, and that was seen as a case of great forgiveness — but it might have been that they were just absent-minded. So, at any rate: escape from subvarieties. What do we use escape for? It is used to show that there are many elements — meaning at least a constant times the number of elements of A — of A, or of A multiplied by itself a bounded number of times, that are generic in some given sense. What do I mean by generic? In practice it means: lying outside a proper subvariety, a subvariety of positive codimension. Can I be more concrete? Say that we want an element all of whose eigenvalues are distinct. That is a generic condition, in the sense that not only are most elements — the great majority of elements — of that kind, but the only way not to have that property is to satisfy an equation, that is, to lie in a variety of positive codimension. So let me write down the example; this is what generic means. For example, a generic g in SL_n is regular semisimple, meaning it has n distinct eigenvalues. To be very, very concrete: a matrix (a b; c d) in SL_2 has two distinct eigenvalues if and only if it does not satisfy a very simple equation, namely a + d = ±2. So you are generic if you do not satisfy an equation. Escape from subvarieties is something that is used to escape from the special cases that you don't want to bother about. All right. And how do you do that? Well, here is the escape lemma. Let G act on a vector space V over a field k. Let W be a proper subvariety of V. Since a vector space is irreducible, this means all components of W are of positive codimension; that is, the dimension of each component is lower than the dimension of the entire space.
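As a concreteness check of "generic = regular semisimple" in SL_2, here is a brute-force count over a small finite field. The choice p = 5 is an assumption for the example only; the point is that the non-generic elements — those with trace ±2, i.e. a repeated eigenvalue — form a codimension-one slice, about 2/p of the group.

```python
# Count the non-generic (non-regular-semisimple) elements of SL_2(F_p), p = 5.
# An element fails to have two distinct eigenvalues exactly when its
# characteristic polynomial has a double root, i.e. trace^2 = 4, trace = +-2.
from itertools import product

p = 5
SL2 = [(a, b, c, d) for a, b, c, d in product(range(p), repeat=4)
       if (a * d - b * c) % p == 1]
assert len(SL2) == p**3 - p          # |SL_2(F_p)| = p^3 - p = 120 for p = 5

# Non-generic: (a + d)^2 == 4 (mod p), a repeated eigenvalue.
non_generic = [m for m in SL2 if ((m[0] + m[3]) ** 2 - 4) % p == 0]
assert len(non_generic) == 2 * p**2  # a codimension-one worth of elements

print(len(non_generic) / len(SL2))   # roughly 2/p of the group
```

The count 2p² against p³ − p makes the "positive codimension" statement quantitative: the bad set loses one factor of p, exactly one dimension's worth.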
Otherwise, W would be all of V. So let A be a subset of G, with the usual simplifying assumptions. Let x be an element of V such that the orbit ⟨A⟩x — that is, what happens to x when you multiply it by all the elements of A that you might want to multiply it by — is not contained in W. So there is some finite sequence of elements of A that, when multiplied by each other and then by x, gives you something outside W. There exists a way to escape from W; that is all you know. And the conclusion is that for at least c|A| elements g — a positive proportion of elements, in practice — of A^{k}, we have gx ∉ W: we have escaped from the subvariety. And the implied constants c and k here depend only on the degree and dimension of W — if W is made out of several chunks, I mean the degrees and dimensions of all of them, and their total number. All right, so, a sketch of a proof. First of all, let me sketch the proof for a case that may look very special; but, as we will see later, the idea generalizes easily once you put in just one extra ingredient. Let W be a linear subspace, or a union thereof — no, let it just be a linear subspace, to make the ideas as clear as possible. We are going to do a proof by induction. So suppose we have already proved things for all W′ of lower dimension. As for the base case of the induction, I will not bother with dimension 0; that you can do yourselves. Very well. What happens if gW = W for all g in A? Well, that is an easy case — in fact it will not happen, because then either gx ∉ W for all g in A, and in that case you are done, evidently: you were not even stuck in the subspace, so you do not even need to escape from it; I mean, the conclusion is true for all g in A. Or else gx ∈ W for some g in A.
And then, well, ⟨A⟩g — the group generated by A, times g — is just ⟨A⟩, the group generated by A. So ⟨A⟩x = ⟨A⟩(gx). And gx is in W, and we are assuming that W is stable under the action of the elements of A, hence under the action of the group generated by A. So the whole orbit would have to be contained in W. But this goes against the assumption. The assumption is that there is some way to escape from W — however long, however rare; and then we will show that it is short and common. So this is a contradiction to the assumptions, and we can assume that gW ≠ W for some g in A. What do we have then? We let W′ be the intersection W ∩ gW. Then the dimension of W′ is smaller than the dimension of W. So by the inductive hypothesis — just by our assumption here — this lower-dimensional thing W′ is something from which we can escape in bounded time and in many ways: hx ∉ W′ for many elements h of A^{k}. But this — the fact that hx ∉ W′ — implies, by the definition of W′, that either hx ∉ W or hx ∉ gW. In the first case we have what we want directly, and in the second case we just multiply by g⁻¹ and get g⁻¹hx ∉ W. So one of the two elements h, g⁻¹h takes x outside W. But then you are done, because for every element h that escaped from W′, we get an element — either h or g⁻¹h — that escapes from W. Of course, we could have some repetitions, but that just halves the count. And so we are done. All right, so that is about it when W is a linear space. But how did we ever use linearity? Well, we just used the fact that the intersection of a linear space with another linear space is a linear space of lower dimension, provided they are not the same. But in fact, that is a pretty weak consequence of being a linear space. You can do something similar for W general.
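Here is a toy illustration of the lemma's conclusion (not of its proof): G = SL_2(F_p) acting on V = F_p², with W a line and x a point of W. The prime p = 7 and the generating set A below are assumptions chosen for the example. The orbit ⟨A⟩x is not contained in W, and already a positive proportion of g in A itself satisfies gx ∉ W.

```python
# Toy escape: G = SL_2(F_7) acting on F_7^2, W = the line {(0, t)}, x = (0, 1).
p = 7

def apply(m, v):
    a, b, c, d = m
    return ((a * v[0] + b * v[1]) % p, (c * v[0] + d * v[1]) % p)

S = (0, p - 1, 1, 0)                    # standard generators of SL_2, mod p
Sinv = (0, 1, p - 1, 0)
T = (1, 1, 0, 1)
Tinv = (1, p - 1, 0, 1)
A = [(1, 0, 0, 1), S, Sinv, T, Tinv]    # identity plus generators and inverses

x = (0, 1)                              # x lies in W
in_W = lambda v: v[0] == 0              # membership in the line W

escaped = [g for g in A if not in_W(apply(g, x))]
# A positive proportion of A (here everything but the identity) escapes.
assert len(escaped) >= len(A) // 2
print(len(escaped), "of", len(A))       # 4 of 5
```

In the lemma, of course, all that is assumed is that *some* bounded word in A escapes; the content is that escape then happens for a positive proportion of elements of A^{k₁}.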
We still have, say for W irreducible, that the dimension of every component of W ∩ gW is smaller than the dimension of W, and that the number and degrees of the components are bounded in terms of the degree and dimension of W. This is just Bézout, right? The first statement here is just common sense: you have an irreducible variety; any proper subvariety of it is going to be of lower dimension. And the second statement is just a generalization of the fact that, say, in the plane, a curve of degree d and a curve of degree d′ are going to intersect in at most d·d′ points. In general, what you have is that if you have two varieties of degrees d and d′, then the degree of the intersection — for some suitable definition of degree — will be at most d·d′. All right. And if W is not irreducible, it is a slightly more complicated statement, but it is essentially the same: in the induction, either the dimension goes down, or the number of components of maximal dimension goes down. So just get the picture right: this is a variety; now I displace it; I have another variety; and the intersection is of lower dimension. And that is enough to make the induction work, simply because dimensions are integers. So the induction ends. All right. So much for escape. Very well. What do I mean by dimensional estimates? Dimensional estimates are, in fact, stronger. They give you an upper bound on how many elements can live in a subvariety — any kind of subvariety. In fact, they give you upper bounds, and in some special cases we will be able to get lower bounds, which are particularly precious, on the quantity |A ∩ W(k)|, where, as usual, A ⊂ G(k). Here G is an algebraic group, k is a field, A is assumed to generate G(k) — this can be softened a little — and W is a subvariety of G. Very well. So what are our aims? We have aims of two kinds, upper and lower bounds.
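In symbols, the two facts being invoked are the following — the first for W irreducible and gW ≠ W, the second one standard form of the Bézout inequality, where the degree of a reducible variety means the sum of the degrees of its components:

```latex
\[
\dim\bigl(W\cap gW\bigr)\;<\;\dim W,
\qquad
\deg\bigl(X\cap Y\bigr)\;\le\;\deg X\cdot\deg Y .
\]
```

The curve example in the text is the special case of the second inequality in which X and Y are plane curves of degrees d and d′ meeting in finitely many points.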
So we would like to get this, and we will get it: that the intersection of A with W(k) is no bigger than you would expect it to be. Well, what would you expect? If something is fairly regular and of a certain size — say, one-dimensional inside three dimensions — you expect its size to be about the cube root of the total size. So it is the same here: you expect |A ∩ W(k)| to be at most about |A^k|^{dim W / dim G}, where k is some constant. What do you know about A? Well, you assume nothing about A except that it generates. But the slower A grows, the stronger the statement will be, because you have A^k on one side and A on the other. And of course the other aim — and this will be attained only partially, for some W — is a bound of about the same shape in the other direction: |A^k ∩ W(k)| ≥ |A|^{dim W/dim G}, roughly. The first bound will be called Proposition 1, because it will be an attained aim. So what are the conditions? The ones I have put here: this is achieved for G simple, or almost simple, as an algebraic group. (I had written dim V on the board; there is no V — it should be dim G. My apologies, and thank you. Please correct me in the future; that was a test — an unintentional one, but still.) As for the history of this statement, it developed independently in two ways. When I was working on these things at first, I was doing it for special W, as needed: you gave me a W, and I would prove a result of this sort for a given G. Then I had to do more and more, and develop more techniques, but there was always some ad hoc work involved — sometimes a bit painful. Larsen and Pink had proven this sort of statement in full generality, for G general, but for a very specific A, namely A a subgroup. They were not in this business; they were in the business of giving a simpler proof of part of the classification.
So what happened here is that Pyber and Szabó managed to strengthen what I was doing. And in the case of Breuillard, Green and Tao, they were using both, basically; it is a combination of the ideas in both. So both of these teams have general statements. Very well. All right. So again, I will start working with a special case, and then I will show you that we do not use this apparently very special condition all that much, and we can soften it. Just to make the ideas clear, we will do the case of W one-dimensional. It is enough to construct a map φ from W(k) × ⋯ × W(k) — how many times? d times, where d = dim G — to G(k). And what kind of map? An injective map. Such that what? Well, it is not going to be an arbitrary injective map: we want the map to send things that are both in W(k) and in A into A^k, for some constant k. Why do we want this? Because then — well, injective maps; again, this is a kindergarten idea, but it is very powerful: injective maps have the property that the image is of the same size as the domain. So |W(k) ∩ A|^d is going to be equal to the size of the image, by injectivity. And the other kindergarten idea is that if something is contained in something else, it has no more elements than that thing: the image has size at most |A^k|. So we just take d-th roots, and we get that |W(k) ∩ A| is at most |A^k|^{1/d}, which is exactly what we want, because the exponent is the dimension of W, which is 1, divided by d. All right, how do we construct such a map? (Let us leave that over there. May I erase your list? That sounds heartless; I will do it at the end.) Very well. First of all, we do not really, really need an injective map. It is enough if it is a map that is finite-to-one, or rather bounded-to-one, in the sense that every element has a pre-image with a bounded number of elements. And we do not even really need that.
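The two "kindergarten" steps, written out: injectivity of φ on (W(k) ∩ A)^d, together with the inclusion of its image in A^k, give

```latex
\[
|W(k)\cap A|^{d}\;=\;\bigl|\varphi\bigl((W(k)\cap A)^{d}\bigr)\bigr|\;\le\;|A^{k}|
\quad\Longrightarrow\quad
|W(k)\cap A|\;\le\;|A^{k}|^{1/d}\;=\;|A^{k}|^{\dim W/\dim G},
\]
```

since here dim W = 1 and d = dim G.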
We can soften it, because we can allow some exceptions and the conclusion will still be true. So we can relax injectivity: it is enough if, for every g in G(k), the pre-image of g has bounded size — or even just that the intersection of the pre-image with everything outside an exceptional set W₀ has bounded size. Here W₀ is a proper subvariety of W^d — in SL_2, d would be 3, but never mind that. Since W^d is irreducible, proper means of positive codimension. It is particularly easy to see that this relaxation is enough in our case, when W is one-dimensional: any proper subvariety of W^d is then going to be contained in a finite union of things of the form a point times everything, everything — that is, the bad set is contained in ⋃_s {s} × W(k)^{d−1} for finitely many points s. Very well. So how are we going to define φ? We define φ — call this (*), for reference — to send (w₀, w₁, …, w_{d−1}) to w₀ · g₁w₁g₁⁻¹ · ⋯ · g_{d−1}w_{d−1}g_{d−1}⁻¹. Very well. But what are these gᵢ? Well, we will have to choose them; we will choose them so that this map is indeed almost injective. How do we do that? What you will see now is that we will use genericity again and again. The wonderful thing about varieties living in an irreducible space — and you can always reduce to that case — is that as soon as you have shown that there is a point lying outside the variety, you have shown that the variety is not the whole space. And if the ambient thing is irreducible, the variety must be of lower dimension; so it is going to be an exceptional case. All right. Assume for simplicity — without loss of generality, really — that the identity is contained in W. This helps us simply because it will be easier to work with the tangent space. So let the tangent space to W at e be denoted by gothic w, which I will write in turn as curly W, because I do not know how to draw a gothic w. How do you draw a gothic w?
No Goths in the audience? At any rate, the tangent spaces to these other pieces g₁Wg₁⁻¹, g₂Wg₂⁻¹, et cetera, are just the conjugates g₁𝔴g₁⁻¹ and so on of this tangent space 𝔴. So first of all, I will set a slightly more modest goal before going for injectivity: I just want these one-dimensional linear spaces, these lines in the tangent space, to be linearly independent — to point in different directions. First: can I find a g₁ such that g₁𝔴g₁⁻¹ is different from 𝔴? Well, the question must be, first of all: is there such a thing, or can g𝔴g⁻¹ = 𝔴 hold for all g in G? No, it cannot, because by assumption G is a simple or almost simple group, so its Lie algebra is simple; that means it has no proper nonzero ideals, and a line stable under conjugation would be exactly such an ideal. So that is not how things start: some such g₁ exists. All right. Can g₂𝔴g₂⁻¹ be contained in the span of the first two spaces for every g₂? No, because if that were the case, then again this span would be stable under conjugation — a proper ideal — and I told you the algebra was simple, and so on. So there are g₁ up to g_{d−1} such that the first conjugate is linearly independent of 𝔴, the next one is not in the span of the previous ones, and so on. So the sum is of the right dimension: it is of full dimension, the entire space. Very well. Of course, these gᵢ merely exist; they are somewhere in G. But what does this mean? Saying that these lines are linearly independent is just the same as saying that a certain equation is not fulfilled — a determinant is not 0. In other words, the good tuples (g₁, …, g_{d−1}) are the ones that live outside a subvariety of G × ⋯ × G. And we have just shown that this subvariety is not all of G × ⋯ × G, because tuples giving you linear independence do exist, and even over K.
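A quick way to see what is being asserted: since each factor of the map (*) passes through e, the differential of φ at (e, …, e) is just the sum of conjugated tangent vectors. In symbols, with 𝔴 the tangent line to W at e and Ad(g) denoting conjugation on the Lie algebra:

```latex
\[
d\varphi_{(e,\dots,e)}(v_0,v_1,\dots,v_{d-1})
\;=\;
v_0+\operatorname{Ad}(g_1)\,v_1+\cdots+\operatorname{Ad}(g_{d-1})\,v_{d-1},
\]
\[
\text{non-singular}
\iff
\mathfrak{w}+\operatorname{Ad}(g_1)\,\mathfrak{w}+\cdots+\operatorname{Ad}(g_{d-1})\,\mathfrak{w}
\;=\;\mathfrak{g},
\]
```

which is precisely the linear-independence condition just obtained, and which fails only where a determinant vanishes.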
So it is a proper subvariety — and not just a proper subvariety, but one whose points over K do not contain all points of G(K). And so you can escape from it: there are g₁ up to g_{d−1} in A^{O(1)} such that (**), the linear-independence condition, is true. All right, very well. So we have that. And now what do we have? There is going to be genericity yet again. What does it mean that (**) holds? It means that this map φ is non-singular at the origin; that is, its derivative there has non-zero determinant. So for φ as in (*), we have that dφ at the origin — at (e, e, …, e) — is non-singular. But again, being non-singular is a condition of the sort "not satisfying an equation". So again — what can I erase now? — what are we going to be able to conclude? Because the map has non-singular derivative at at least one point, namely the origin, where does it have non-singular derivative? At almost every point, in the sense that, by definition, the points of singular derivative form a subvariety, and we have just shown that the identity is not in that subvariety. It is a subvariety living in W × ⋯ × W, something irreducible, and so it must be of positive codimension. So: φ is non-singular on W^d outside a subvariety, W₀ ⊊ W^d. This is tautological, because W₀ is just defined by the vanishing of a determinant — but it is a proper subvariety; and moreover W₀(K) is not all of W^d(K). And this is exactly what we wanted; this is exactly what we were looking for. Outside that very wicked variety, the determinant is always going to be non-zero. And if you have a map between two spaces of the same dimension, and the map is non-singular outside its forbidden set, then the pre-image of every point is going to have dimension zero.
And not just dimension zero, but bounded degree, because all of these things have degree bounded in terms of one another; and a variety of dimension zero and bounded degree is just a bounded number of points. All right — so this works when the dimension of W is 1. What about dimension of W greater than 1? Well — surprise, surprise — we are going to do an induction on the dimension. Let me just give you an example, but it already contains the main idea. Example: since this is an example, I can assume that dim G = 3; why not? So take Y of dimension 2 and G of dimension 3, like SL_2. Then φ is going to go from, say, Y × Y to G; I can define it to take (y₀, y₁) to y₀ · g₁y₁g₁⁻¹, for some g₁. All right, then what do we have? We do not get, by the sort of argument above, that the pre-image of a point is just a bounded set of points — that would go against dimension: Y × Y is four-dimensional, G is three-dimensional. But we can show that φ⁻¹(g) is one-dimensional — the right dimension — for g generic. It is basically the same kind of geometric argument, using simplicity, that we have just seen. So what does it mean that φ(y₀, y₁) = g? It means that y₀ · g₁y₁g₁⁻¹ = g, with g given. So (y₀, y₁) lies on a variety W_g of codimension — well, what is the codimension? It is the number of independent equations you satisfy; g is specified, so the number of equations implicit here is the number of dimensions of G. So this thing is of codimension 3. But since Y × Y is of dimension 4, this means dimension 1 — just 4 − 3 = 1 — in Y × Y. Something one-dimensional; at any rate, of lower dimension than 2. But we are lucky that it is exactly 1. So what we can do now is bound.
So W_g here plays the role that W played before; I have almost just copied the situation — except that we have gone down from dimension 2 to dimension 1, and the essential thing is that we have gone down in dimension. So we can bound the intersection with W_g as above. Let me work outside the special set, the proper subvariety — there is always some exceptional set. So what is the image of φ on (A ∩ Y(k))², for (y₀, y₁) not exceptional? It is not going to have |A ∩ Y(k)|² elements — that would be the case if φ were injective — because every point in the image has a pre-image of dimension 1; so you have division by the maximum, over non-special g₀, of |A^{O(1)} ∩ W_{g₀}(k)|. And we can bound that from above, as we were saying: by the argument before, it is at most |A^{O(1)}|^{1/3} — the exponent being, in this case, the dimension of W_{g₀} divided by the dimension of G. So: on the one hand, φ((A ∩ Y(k))²) is contained in A^{O(1)}, as usual; on the other hand, it has at least |A ∩ Y(k)|² divided by max over g₀ of |A^{O(1)} ∩ W_{g₀}(k)| elements, and we have shown precisely that this maximum is at most |A^{O(1)}|^{1/3}. (So in fact there was no need to draw a thick line; it is just that.) Very well. And so, just dividing, we get exactly what we wanted. Hence |A ∩ Y(k)| is at most |A^{O(1)}|^{2/3} — and 2/3 was just the dimension of Y over the dimension of G. This is precisely what we wanted to prove. So I have given you the simplest case of the induction, but it is paradigmatic; it always goes more or less in this way. The one-dimensional case we did in full, and I showed you how to go down from 2 dimensions to 1.
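Putting the pieces of this two-dimensional example together, the chain of inequalities reads:

```latex
\[
\frac{|A\cap Y(k)|^{2}}{\max_{g_0}\,|A^{O(1)}\cap W_{g_0}(k)|}
\;\le\;
\bigl|\varphi\bigl((A\cap Y(k))^{2}\bigr)\bigr|
\;\le\;
|A^{O(1)}|,
\qquad
\max_{g_0}\,|A^{O(1)}\cap W_{g_0}(k)|\;\le\;|A^{O(1)}|^{1/3},
\]
\[
\text{so}\quad
|A\cap Y(k)|^{2}\;\le\;|A^{O(1)}|^{1+\frac13}
\quad\Longrightarrow\quad
|A\cap Y(k)|\;\le\;|A^{O(1)}|^{2/3}\;=\;|A^{O(1)}|^{\dim Y/\dim G}.
\]
```

(The exceptional set is being swept under the rug here, exactly as in the sketch above.)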
And that is how you do it in general: you lower the dimension, and — again, the crucial fact — dimensions are integers, so the induction ends. Very well. So that gives you a result of what I call type 1: namely, an upper bound. It tells you that the intersection of the set A with a variety does not have more elements than it deserves. Now, can we give any sort of bound in the happy direction — a lower bound, showing that there are elements of a special form? Well, for some special forms, yes, thanks to the orbit–stabilizer theorem. For what kind of varieties? For the centralizer of a given g. The centralizer C(g), remember, consists of all the elements that commute with g; that is an equation, so it is a variety. For any g in A, you are going to have that |A⁻¹A ∩ C(g)(k)| is at least |A|^{dim C(g)/dim G}, times a factor that is very small when A grows slowly, namely a power of |A|/|A^{O(1)}|. I do not think you even need this factor, but it does not matter. All right, so how do you prove this? Well, I said orbit–stabilizer. (How much more time have I got? Excellent, very well.) By orbit–stabilizer, in the version for sets that we did last time, we have that |A⁻¹A ∩ C(g)(k)| is at least |A| divided by |A³ ∩ Cl(g)(k)|. Here Cl(g) is the conjugacy class of g; this is almost a variety — it is basically a variety; put a bar over it for Zariski closure — of dimension dim G − dim C(g). And what you do now, in order to get a lower bound, is apply an upper bound to the denominator: it is at most |A^k|^{(dim G − dim C(g))/dim G}. But we already know what the dimension of C(g) is. So we just divide |A| by this, and we get that the intersection is at least |A^k|^{dim C(g)/dim G} times |A|/|A^k|.
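Written out, the chain is — with the bar denoting the Zariski closure of the conjugacy class, of dimension dim G − dim C(g):

```latex
\[
|A^{-1}A\cap C(g)(k)|
\;\ge\;
\frac{|A|}{\,|A^{3}\cap\overline{\mathrm{Cl}(g)}(k)|\,}
\;\ge\;
\frac{|A|}{\,|A^{k}|^{(\dim G-\dim C(g))/\dim G}\,}
\;=\;
|A^{k}|^{\dim C(g)/\dim G}\cdot\frac{|A|}{|A^{k}|}.
\]
```

The last factor |A|/|A^k| is exactly the factor that is harmless when A barely grows: under the assumption |A^k| ≤ |A|^{1+δ}, it costs only |A|^{−O(δ)}.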
So the extra factor was not even needed in this case, but it would not have hurt. It is that simple; it is just an application of orbit–stabilizer. All right, but what kind of animal is C(g), the centralizer? I spoke before of regular semisimple elements, which in SL_n are just the elements that have n distinct eigenvalues, and I said that this was a generic condition — which it is. So for g regular semisimple — which in SL_n, as I said, is just a fancy name for having distinct eigenvalues, that is, for being diagonalizable with distinct eigenvalues — what do we have? Well, if you have a diagonal matrix with distinct eigenvalues, the elements that commute with it are just the diagonal matrices. If we are being fancy, we call that a torus — a maximal torus. So C(g) is a maximal torus. This shows, in particular, that for any torus that arises as the centralizer of an element of A, you are going to have many elements in the torus. And this is really the centerpiece of the proof, and it always has been. So let me call this statement, from now on, the rich-tori statement; tori are what we care about the most. This was already central back in the day. But back then I proved a slightly weaker statement, unfortunately. (Yes, thank you — such a torus may have a finite number of connected components; that is fine.) Back then — let me emphasize this, just so as to avoid misunderstanding — there was a technical weakness: I proved only that for most tori T, for most numerically speaking, you have this sort of thing. That turned out to make things more complicated and silly, but that is the way it was. All right. So — finally, with just about enough time — the theorem, which we will prove for G equal to SL_2; but the same ideas work for all G almost simple, and it is a modern proof. K a field, A a subset of G(K), generating G(K) — as I said, that can be softened. We also make a simplifying assumption or two that are not really necessary.
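As a concrete sanity check of "the centralizer of a regular semisimple element is a maximal torus", here is a brute-force computation in SL_2(F_p) for a small p. The choices p = 5 and the element diag(2, 3) are assumptions made for the example, not part of the lecture.

```python
# The centralizer of a regular semisimple element of SL_2(F_p) is the
# diagonal torus {diag(a, a^{-1})}, of size p - 1. Checked by brute force.
from itertools import product

p = 5

def mul(m, n):
    a, b, c, d = m
    e, f, g, h = n
    return ((a * e + b * g) % p, (a * f + b * h) % p,
            (c * e + d * g) % p, (c * f + d * h) % p)

# All of SL_2(F_p).
SL2 = [(a, b, c, d) for a, b, c, d in product(range(p), repeat=4)
       if (a * d - b * c) % p == 1]
assert len(SL2) == p**3 - p

g = (2, 0, 0, 3)   # diag(2, 3): det = 6 = 1 mod 5, eigenvalues 2 != 3
centralizer = [h for h in SL2 if mul(h, g) == mul(g, h)]

# Exactly the diagonal torus: p - 1 elements, all diagonal.
assert len(centralizer) == p - 1
assert all(b == 0 and c == 0 for (a, b, c, d) in centralizer)
print(len(centralizer))  # 4
```

By contrast, running the same loop with a non-semisimple element — say the unipotent (1 1; 0 1) — gives a centralizer of size 2p, not p − 1: the centralizer jumps exactly when eigenvalues collide, which is why regular semisimplicity is the condition that matters.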
Then either |A^{O(1)}| is bigger than |A|^{1+δ}, or A^{O(1)} is almost everything. In fact, this can be strengthened. I will not have time to prove the strengthening, but I will just tell you: once you have almost the entire group — and I mentioned this the first time — multiplying A a couple more times gives you the entire group. In fact, Nikolov and Pyber have managed to make this A·A·A = G. All right, very well. So how does the proof go? All parallels with the proof that I gave you last time — the long proof at the end — are completely intentional. The main ideas are the same, and you can actually put the proof in the same framework; that is what I am going to do. So let us assume that the first conclusion does not hold; that is, assume that A barely grows, and we will derive either a contradiction or the second conclusion. So assume that |A^C| is less than |A|^{1+δ}, for C large and δ small, and let us show that the second conclusion holds — A^{O(1)} is almost everything — or reach a contradiction. By the escape lemma, there exists some regular semisimple element in A^{O(1)} — it is a generic condition. So its centralizer T is a maximal torus, meaning it just consists of the diagonal matrices with respect to some basis. Now, call ξ in G a pivot — just like last time — if the map φ_ξ, defined by φ_ξ(a, t) = a · ξtξ⁻¹, induces an injective function from A × T(k), modulo ±1, to G(k) modulo ±1. The "modulo ±1" is there in order to avoid the sort of trouble that Emmanuel was referring to. So we call ξ a pivot if the induced function is injective. (This T was really the torus attached to our g, but you get my meaning.) All right. So there are three cases. The first case is the case when we get really lucky, like last time in case (a), and we do have a pivot ξ in A. By the rich-tori theorem — or lemma — there are many elements in the torus:
at least |A|^{dim T/dim G − O(δ)} of them, the O(δ) coming from that ratio |A|/|A^{O(1)}|. And dim T/dim G is, of course, just 1/3 in our case — just for concreteness, but that is enough. Hence, since ξ is a pivot, so the map is injective modulo signs, we have that |φ_ξ(A × (A⁻¹A ∩ T(k)))| is at least — the 1/4 is coming from the ±1's — (1/4)·|A|·|A|^{1/3 − O(δ)}. So the image is almost at least as large as the domain: a constant times |A|^{4/3 − O(δ)}. At the same time, the image is contained in A^5. So |A^5| is really big: bigger than a constant times |A|^{4/3 − O(δ)}, and this contradicts our assumption, for δ small enough. So case (a) was easy. Now for case (b). Case (b) is the pessimistic case, which turns out not to be the hardest case, really: namely, the case when there are no pivots whatsoever in the entire universe. So, because there are no pivots, every ξ in G fails to be a pivot; that is, for every ξ there is a collision: two elements collide under φ_ξ. The map fails to be injective: φ_ξ(a₁, t₁) = ±φ_ξ(a₂, t₂) for some distinct pairs. For every ξ, you have a failure of injectivity. But this is the same as saying that a₂⁻¹a₁ = ±ξtξ⁻¹, with t = t₂t₁⁻¹. So, just as before, the failure of injectivity helps us get hold of things, so to speak: for every ξ in G, there is some t in T — and t is not trivial — such that ξtξ⁻¹ is in A⁻¹A. Now, this is about the only place where I will really use the simplifying assumption that G is SL_2: since t is a non-central element of the torus, ξtξ⁻¹ is regular semisimple. Of course, in the case of SL_3, say, you could have two eigenvalues that are the same and a third one that isn't.
And you would need to deal with that as a special case; things get a little bit more complicated. So ξtξ⁻¹ is regular semisimple, and so its centralizer is a torus itself — namely ξTξ⁻¹, a torus. So by the rich-tori lemma — because ξtξ⁻¹ is indeed in A⁻¹A, say; we managed to get hold of it using its non-pivotness — there are many elements of the torus ξTξ⁻¹ in A⁴. So, well, there are many distinct tori ξTξ⁻¹ — basically |G|/|T| of them, as ξ varies over all of G — and they intersect only at ±identity. So you have a lot of tori, all of them rich, intersecting only at the identity: a lot and a lot of things all over the place, and they are all in A⁴, right? This gives: number of tori, times the number of elements we have in each torus minus 2 — we delete ±identity. Just to be very concrete, this is at least about p² · (|A|^{1/3 − O(δ)} − 2), because the size of G is roughly p³ and the size of T is roughly p, so the number of tori is roughly p². We are not using anything in particular about that; I am just being concrete. And so, because this is bigger than that — you have a p² here — this gives you a contradiction to the assumption that A barely grows, unless A is already almost as large as G(k) in the Zariski sense, which is the second conclusion. Very well. And we are left with the last case, the inductive step, case (c): when there are both pivots and non-pivots. And because A generates G, there must then be a non-pivot sitting right next to a pivot — but that is for the last talk. As in Batman? Very well: serials. All right. So next time — please don't forget everything I said today — we will do the last case, in the last talk. Enjoy. Thank you.