Finally, we've reached our final lecture. Last time I was close to the end of the proof of the main result for SL_2. As I was saying, it says that any set in SL_2(Z/pZ) grows, as long as it generates the group and as long as it is not already almost as large as the entire group. So we were in the middle of this. Let A generate the group, as I said. For simplicity, we assume that A is its own inverse — that is, that it contains the inverses of all its elements — and that it contains the identity. The conclusion is that either A grows by a power, |A^3| ≥ |A|^{1+δ}, or — in a bounded number of steps; in fact not even that: right away, to put it in the simplest way — you get the entire group. One can strengthen both of these statements: one can strengthen the first alternative, and the second can in fact be replaced by A^3 being already the entire group. But we will prove this marginally weaker version, which is more than strong enough for all practical purposes, and you can deduce the strengthenings from the version I am giving you. At any rate, we were close to the end of the proof, but the case we left for today was the most important one: when you have both pivots and non-pivots in G. It is a typical induction step. So let me remind you briefly what a pivot is. We call ψ a pivot if the map φ_ψ from A × T(K) to G given by (a, t) ↦ a ψ t ψ^{-1} is injective (here T is the maximal torus in question and K the field). So the induction case is what happens when there exist both pivots and non-pivots in G. As I said last time, in order to carry out the induction you don't really need an ordering; you just need generation, because A generates G(K).
There is going to be some non-pivot ψ in G and some a in A such that ψ is not a pivot but aψ is a pivot. In general, whenever you have a set that is non-empty and not the whole group, and A generates the group, you are going to have some element of the set that is taken out of the set by the action of some element of A. Very well. Since ψ is not a pivot, as we saw last time, there are many elements of ψ T(K) ψ^{-1} in A^4. But at the same time aψ is a pivot, so the map φ_{aψ} is injective. So, as usual, consider φ_{aψ}, restricted to pairs with the first coordinate in A and the second in the torus part A^4 ∩ ψ T(K) ψ^{-1}. On this set it is injective, so its image has size at least (1/4)·|A|·|A^4 ∩ ψ T(K) ψ^{-1}|, where the 4 just comes from the bounded multiplicity here. And that is at least a constant times |A|^{4/3 − O(δ)}, simply by the non-pivot estimate. Now, writing out the definition of φ_{aψ}, the image is contained in A^7, because all of these elements are products of boundedly many elements of A — that is where the 7 comes from. So we have |A^7| ≥ c·|A|^{4/3 − O(δ)}. So we have growth — a contradiction to the assumption that there was no growth — and we are done. All right.
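The dichotomy just proved can at least be watched in action by brute force for a tiny prime. This is my own illustration, not the machinery of the proof: take the standard unipotent generators of SL_2(Z/pZ), form the symmetric set A containing the identity, and compare |A^3| with |A|.

```python
from itertools import product

p = 7  # small prime; |SL_2(Z/pZ)| = p(p^2 - 1) = 336

def mul(m1, m2):
    # 2x2 matrix product mod p; matrices stored as tuples (a, b, c, d)
    a, b, c, d = m1
    e, f, g, h = m2
    return ((a*e + b*g) % p, (a*f + b*h) % p,
            (c*e + d*g) % p, (c*f + d*h) % p)

I = (1, 0, 0, 1)
T = (1, 1, 0, 1)             # upper unipotent generator
S = (1, 0, 1, 1)             # lower unipotent generator
Tinv = (1, p - 1, 0, 1)
Sinv = (1, 0, p - 1, 1)

A = {I, T, Tinv, S, Sinv}    # symmetric set containing the identity
A3 = {mul(mul(x, y), z) for x, y, z in product(A, A, A)}

order = p * (p * p - 1)
print(len(A), len(A3), order)  # |A^3| already much larger than |A|
```

Iterating the cube a few more times would exhaust all 336 elements, which is the diameter statement in miniature.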
So much for SL_2. The ideas I have presented give a contemporary proof, if you wish — an updated, streamlined proof — and they generalize more or less easily to higher-rank groups: the same ideas apply and give you growth in SL_n, in SO_n, and so on. Now, as I said last time, there is still a sticking point: a dependence on n that really is there in results of this type, but that shouldn't be there in the consequences. Those problems are tied up with the fact that no useful diameter bounds were known for symmetric groups until rather recently. But now they are known, so we have some hope of advancing. So let me devote the rest of today's talk to the symmetric group. This has by now been proven three or four times, so the proof has become shorter; I cannot yet give you a full proof of the Sym(n) result in one hour or fifty minutes, but I will try to give you several of the key ideas. (Giving a second proof is a difficult exercise for the audience.) All right, so what was the previously known bound? Sym(n) is just the group of all permutations of n elements — a group with n! elements. What was known — for G = Sym(n), or really for any subgroup thereof, and any A that generates G — was a result of Babai and Seress from the late 80s: the diameter is bounded by, basically, e^{√(n log n)}. That is, of course, very, very far from being a power of n. What we showed is not a power of n either, but something closer to it: what's called a quasi-polynomial bound, for G = Sym(n), or — as we can also show — for any transitive subgroup thereof. The old bound is actually tight for some non-transitive subgroups. This is from 2014 — it's not 2019 yet; it only looks like 2019. I don't have the power of prophecy.
So, for any transitive subgroup — let me explain. The old result is actually tight for some non-transitive subgroups: if you take a 2-cycle, a 3-cycle, a 5-cycle, and so on, all disjoint, you get a cyclic group of huge diameter. For a transitive subgroup you can actually deduce the bound from the result for Sym(n) itself — but with a caveat: using the classification of finite simple groups. Whether you consider that moral or immoral is your choice, but I have to tell you. The reduction is a result of Babai and Seress, 1992, using classification. The classification here, by the way, is also what tells us that basically the two cases to study are the matrix groups and Sym(n); there is basically nothing much else. All right: so for G = Sym(n), as I said, and any A ⊆ G generating G, we prove that the diameter is at most e^{O((log n)^4 log log n)}. So, as you can see, there is quite a difference, and this is the kind of bound that gets called quasi-polynomial. Now again, I have to warn you: yes, the proof does use classification at one step — not just to obtain the corollary for transitive groups, but the proof itself. I don't like that. It would be very nice to have a proof without classification, and it would probably give a 3 instead of the 4. [Question: what do you get without classification at all?] Nothing directly, in the sense that most of the proof does not use it, but there is one step that does, and if I knew how to do that step without classification, I would. But could you still get something weaker without it? Well, I suppose one could — I imagine so, but I haven't tried, and it would be depressing, I think, because everything would work out except for one detail. So I think it is better to try to replace that step so that it works without classification, because it is really a trick — a way to handle a special case. So, some notation — just to confuse some people and not to confuse others.
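Before the notation, here is a quick feel for the gap between the two diameter bounds just stated. This is my own numerical aside, with all the implied O() constants set to 1 purely for illustration — the actual theorems do not specify them, so only the shapes of the exponents are meaningful.

```python
import math

# Exponents of the two bounds (constants set to 1, for shape only):
#   Babai-Seress 1988:    diam <= exp( sqrt(n log n) (1 + o(1)) )
#   Helfgott-Seress 2014: diam <= exp( O((log n)^4 log log n) )

def exponent_old(n):
    return math.sqrt(n * math.log(n))

def exponent_new(n):
    return math.log(n) ** 4 * math.log(math.log(n))

for n in (10**3, 10**8, 10**14):
    print(n, exponent_old(n), exponent_new(n))
```

With constants equal to 1 the quasi-polynomial exponent only overtakes the old one for quite large n; the point is the asymptotic shape — polylogarithmic versus a power of n in the exponent — not the crossover point.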
The nomenclature for actions is different in the symmetric-group world. People write x^g instead of gx, and x^A or x^H for the orbit; they also write A_x for the stabilizer of x in A. It is actually more intuitive; people in symmetric groups always like to blame everyone else for the other convention. Now, it is also important that there are two kinds of stabilizers. The setwise stabilizer: that is when you stabilize the set — you take it to itself, but you might permute its elements. And the pointwise stabilizer. Just so as to wrap one's head around this: in the case of the action of a group on itself by conjugation, the setwise stabilizer of a subgroup is its normalizer, and the pointwise stabilizer is its centralizer. Very well. So let me talk a bit about random walks, which are exactly what they sound like: you walk on a graph, taking a random decision at each step. But first, let me tell you what kind of graphs we will be working with for the most part. We define the Schreier graph — which is what many people would prefer to call a multigraph, in that repeated edges are allowed. (A Schreier graph is not a graph that screams.) Rather: for a group acting on a set X with generating set A, the set of vertices is just X, and the set of edges is {{x, x^g} : x ∈ X, g ∈ A}. A Cayley graph is a special case of a Schreier graph: it is the Schreier graph for the action of a group on itself. So a Schreier graph possesses several of the nice properties of a Cayley graph: it is regular, connected if the action is transitive, and symmetric if, as is usually the case, A = A^{-1} — our usual convention. Very well. [Question: is there a conjecture for what the upper bound should be?] n^{O(1)}, and we don't really know what the O(1) should be. Some people believe it is 2 + ε; yes, n^{2+ε} would be the most optimistic conjecture.
Already n^{log n} would be very nice — that would be e^{(log n)^2}. Yes, a 3 in the exponent would be nice, but not so nice; 2 would be nicer. So, a little lemma. What do we know about random walks? Well, a priori we know almost nothing. But already, just because a graph is connected — for any connected regular graph — you get a very, very weak expansion-type result, in the sense that λ_1, the largest eigenvalue not corresponding to constant functions, is a bit smaller than the trivial eigenvalue; moreover, you can give an explicit, non-constant lower bound on the gap. And that turns out to be useful. This is folklore, and that tiny gap leads to an upper bound on the mixing time. (Regular means every vertex has the same degree.) So: for a connected, regular, symmetric graph on N vertices, there is a k — polynomial in N — such that a random walk of at least k steps ends at any given x with probability almost uniform, in a very strong L∞ sense. So, after that many steps: you are dropped somewhere in Paris, knowing nothing about anything, and you start walking at random; after enough time, you will be anywhere in Paris with about the same probability, without ever consulting the map of Paris — even at the stage of proving the bound. Just let me say that if N were very large, this bound would be horrible; but it is very useful when N is not so large. All right. So what is the idea of the proof — how do you prove that there is a non-zero gap with an explicit, if non-constant, bound? It is more comfortable here to work with the adjacency matrix, which is almost the same thing as the Laplacian, only without the subtraction.
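The mixing statement can be watched numerically. A minimal sketch, with made-up parameters of my own choosing: the Schreier graph of Sym(n) acting on {0, …, n−1}, with a lazy symmetric generating set, and the exact distribution of the walk pushed forward step by step.

```python
n = 8
# Generators of Sym(n) as point maps: the identity (laziness), a
# transposition, an n-cycle and its inverse; the set is symmetric.
ident = list(range(n))
swap  = [1, 0] + list(range(2, n))
cyc   = [(i + 1) % n for i in range(n)]
cyinv = [(i - 1) % n for i in range(n)]
gens  = [ident, swap, cyc, cyinv]

# Exact distribution of the walk on the Schreier graph, started at 0:
# one step sends the mass at x equally to x^g for each generator g.
dist = [1.0] + [0.0] * (n - 1)
for _ in range(600):
    new = [0.0] * n
    for x, px in enumerate(dist):
        for g in gens:
            new[g[x]] += px / len(gens)
    dist = new

deviation = max(abs(p - 1.0 / n) for p in dist)
print(deviation)  # tiny: the walk is essentially uniform on the points
```

The graph has only N = n vertices, which is exactly the regime where the folklore bound is useful: the walk equidistributes long before the walk length becomes comparable to |Sym(n)|.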
So: for f non-constant and real-valued, any r between the min and the max of f, and S the set of vertices v for which f(v) > r, you have that the sum of (Āf)(v) over all vertices v in S — where Ā is the adjacency operator normalized by the degree — is a little smaller, really a little smaller, than the sum of just plain old f(v). This is very easy to prove: the values over this set of privileged points get slightly contaminated by at least one neighbour from the outside, because the graph is connected. And then the only thing we have to do is apply this to a non-constant eigenfunction. You might say: eigenfunctions can be complex-valued. But because the graph is symmetric, the eigenvalues are real and the eigenvectors can be taken real-valued, since you can take combinations of eigenvectors with the same eigenvalue; so you work only with real-valued eigenfunctions. You apply the inequality, and you find that the adjacency operator makes any non-constant eigenfunction slightly smaller in its sum over that set; that means the eigenvalue must be a little smaller than the trivial one. So that shows the gap — a small but, let me emphasize, non-trivial eigenvalue gap. And this leads, as we know, to a bound on the mixing time. As I said, this is weak; but fortunately what you have for the symmetric group is that, even though its representations carry no special geometric structure, you do have non-trivial representations that are special in a sense — representations that are much, much smaller than the group. So we apply this sort of thing to a Schreier graph; it is useful when N, which is just the number of elements of X, is much smaller than |G|. The example, in our case, is G acting on the elements 1 up to n, or on k-tuples of distinct elements thereof.
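The conclusion of this folklore lemma — every non-trivial adjacency eigenvalue strictly below the degree — can be checked by power iteration, with no linear-algebra library. A toy sketch of mine on the smallest interesting example, the 5-cycle (degree d = 2, largest non-trivial eigenvalue 2·cos(2π/5) ≈ 0.618):

```python
import math

# A connected d-regular symmetric graph: the 5-cycle, d = 2.
n, d = 5, 2

def adj(f):
    # adjacency operator: sum of f over the two neighbours
    return [f[(i - 1) % n] + f[(i + 1) % n] for i in range(n)]

# Power iteration on (A + dI), restricted to the space orthogonal to
# constants; the shift makes the relevant spectrum nonnegative.
f = [1.0, -0.3, 0.2, -0.5, 0.0]
for _ in range(500):
    mean = sum(f) / n
    f = [x - mean for x in f]                    # kill the constant part
    f = [a + d * b for a, b in zip(adj(f), f)]   # apply A + dI
    norm = math.sqrt(sum(x * x for x in f))
    f = [x / norm for x in f]

g = [a + d * b for a, b in zip(adj(f), f)]
lam = sum(a * b for a, b in zip(g, f)) - d       # Rayleigh quotient, unshifted
print(lam)  # largest non-trivial adjacency eigenvalue, strictly below d
```

The estimate comes out strictly below the degree, which is exactly the gap the contamination argument guarantees for any connected regular symmetric graph.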
So let A ⊆ Sym(n), with A = A^{-1} and ⟨A⟩ 3-transitive — by 3-transitive I mean that it acts transitively on 3-tuples of distinct elements. This is an example of how to use all this; it is a proposition in the style of Babai–Beals–Seress. You take a g in ⟨A⟩ whose support is neither everything nor empty — in fact, of size at most n/3; that can also be softened. Oh, the support of g — I should have explained: supp(g) is just the set of elements moved by g. So this tells you that you can do something interesting when the support of g is not all of {1, …, n} and is at most n/3. Then (and we will also fix some ε) there is a g' in A^ℓ, where ℓ is not too large — n^7 (log n)/ε or so — such that |supp(g')| is at most 3(1 + ε)·|supp(g)|²/n. By the way, the 3-transitivity here is there because we want it to be there; it is useful, and in fact I will sketch the proof briefly, skipping only the detail of where exactly it gets used. [Excuse me — is there a prime missing somewhere? Thank you: that should be g'.] So as soon as |supp(g)| is, say, n/6, this tells you that |supp(g')| will be about n/12. Sketch of proof. The basic idea is this. Call S the support of g, for simplicity. For σ in Sym(n), define h to be the conjugate of g by σ: h = σgσ^{-1}. The support of h is just S displaced by σ — that is, σ(S). And, as a simple exercise, the support of the commutator [g, h] is a subset of (S ∩ σ(S)) ∪ g(S ∩ σ(S)) ∪ h(S ∩ σ(S)). So the size of the support of [g, h] is at most three times |S ∩ σ(S)|. And now, what's the idea?
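Before answering: the support inclusion just stated is a pleasant exercise, and easy to sanity-check. My own toy check, with the commutator convention [g, h] = g∘h∘g⁻¹∘h⁻¹ as maps and a g of my own choosing:

```python
import random

random.seed(0)
n = 30

def compose(p, q):            # (p∘q)(i) = p(q(i))
    return [p[q[i]] for i in range(n)]

def inverse(p):
    r = [0] * n
    for i, j in enumerate(p):
        r[j] = i
    return r

def support(p):
    return {i for i in range(n) if p[i] != i}

# g: a permutation with smallish support (a few disjoint cycles).
g = list(range(n))
g[0], g[1], g[2] = g[1], g[2], g[0]                    # 3-cycle on {0,1,2}
g[5], g[6] = g[6], g[5]                                # transposition {5,6}
g[10], g[11], g[12], g[13] = g[11], g[12], g[13], g[10]  # 4-cycle
S = support(g)

ok = True
for _ in range(200):
    sigma = list(range(n))
    random.shuffle(sigma)
    h = compose(compose(sigma, g), inverse(sigma))     # h = sigma g sigma^-1
    comm = compose(compose(g, h), compose(inverse(g), inverse(h)))
    T = S & {sigma[i] for i in S}                      # S ∩ sigma(S)
    ok = ok and len(support(comm)) <= 3 * len(T)
print(ok)  # the bound |supp([g,h])| <= 3 |S ∩ sigma(S)| holds every time
```

In particular, when the supports S and σ(S) are disjoint, g and h commute and the commutator is trivial — the bound degenerates gracefully to 0.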
If you could pick σ completely at random, then how large would the intersection S ∩ σ(S) be on average? All right: you, the audience, are S. I now take a random permutation σ and end up with σ(S). You are half of the room: what is |S ∩ σ(S)| likely to be? (People seem to think this is too personal a question.) Well, you are half of the room, so S ∩ σ(S) should have size about n/4. Right? So for σ truly random — uniformly picked in Sym(n) — the expected size of S ∩ σ(S) is (|S|/n)·|S|; your proportion is one half because you are half of the room. Now, can we see the proof of that statement in detail? Yes, and I will have to give it to you in detail in a few lines, because we don't actually have σ random. We will have to see that the argument is based on just a very soft property of σ — one that gets simulated very well by a random walk, because of the mixing lemma. All we need is that σ take any element to any other one with probability just about the same for every pair. We don't really have that, but we have a substitute that is actually quite good, because the expected value in question is a sum over all pairs x, x' of the probability that x is taken to x'. So what is that probability for σ the outcome of a random walk — not of infinite length, but of reasonable length? (The length and the degree will both be bounded by powers of n.) By the mixing lemma, the probability is about 1/N = 1/n for each pair, so the expected size of S ∩ σ(S) is at least about (|S|/n)·|S|. Very well — that's almost it. Of course, this gives you the result assuming only transitivity and without the refinement coming from 3-transitivity; I will not go into those trickier details.
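The heuristic with the audience can be run as a genuine Monte Carlo experiment. A sketch of mine, with parameters invented for illustration: S is half of a room of n = 100 people, and we average |S ∩ σ(S)| over truly uniform σ.

```python
import random

random.seed(1)
n = 100
S = set(range(50))                 # "you are half of the room"
expected = len(S) * len(S) / n     # |S|^2 / n = 25

trials = 4000
total = 0
for _ in range(trials):
    sigma = list(range(n))
    random.shuffle(sigma)          # truly uniform sigma in Sym(n)
    total += len(S & {sigma[x] for x in S})
avg = total / trials
print(avg, expected)               # empirical mean hugs |S|^2 / n
```

The point of the lecture's argument is that the same computation goes through when σ is merely the outcome of a long enough random walk, because all that is used is that each pair probability is close to 1/n.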
So let me just tell you an immediate corollary of this, which was a very useful result that was around before us, and which we do use to treat the special case. But what you should really take away is not the result itself, or this specific corollary, but the idea that some probabilistic proofs get mimicked very well by random walks. We never really wanted a random σ; we just wanted to show that there exists a σ for which this is true. Now, the probabilistic method, ever since Erdős and company, consists in saying: in order to prove that a good σ exists, we pick a random σ and show that it is good with positive probability. What we have here is the random-walk version of the probabilistic method: you are not allowed to take σ uniformly at random, so instead σ is the outcome of a random walk — and in some circumstances that is just as good. This is the moral of the proof. So, the corollary: if A ⊆ Sym(n) is as in the proposition — and always remember to state the conclusion — then the diameter is bounded by n^{8 log n}. Very well. (Babai once got upset when I pointed out that he had put 7 in his paper and it is really an 8. It's the same thing.) Proof: apply the proposition to g; then apply it again and again, to g', to g'', bracketing away. Because each time the support shrinks by much more than a constant factor — it decreases more than exponentially — you are done in no time. All right. So this is useful, and it is also an example of the probabilistic method — let me, as every year, state the moral explicitly — just with random walks. All right, let me tell you another story now. It has to do, again, with something that has been a leitmotif of this mini-course: how to adapt results valid for subgroups, known for subgroups, so that they are valid for sets — general sets.
Sometimes, in these translations, the subgroup H gets replaced by the set A, and sometimes by A^k for some moderate k. Let me call this section "large orbits and stabilizer chains". But the setup is really the splitting lemma. I will state the splitting lemma in its old variant — there is also a new one — give you the proof of the old variant, and then leave the translation of the proof to you as an exercise, which you can now do, following a bit the random-walk model from before. So this is Babai's splitting lemma — again due to Laci Babai. Babai was in fact working here in the early 80s, back when it wasn't even clear whether the people who were doing the classification believed the classification was done. So he was trying, with some success, to give much shorter proofs of some consequences of classification, and in the process he showed the following. Let H be a 2-transitive subgroup of Sym(n) — meaning a subgroup that acts transitively on pairs of distinct elements — and say you have some subset Σ of the points, and assume that Σ splits a lot of pairs. What do I mean by splitting a lot of pairs? Assume that for at least a positive proportion ρ of the pairs — just pairs (α, β) of distinct elements — Σ separates α and β. And what do I mean by "separates"? That there is no g in the pointwise stabilizer of Σ in H with α^g = β. That is what it means to say that Σ separates α and β.
This is trivially true if Σ contains α or β, but it can also happen otherwise: for example, it could happen that the pointwise stabilizer is trivial, in which case Σ separates everything; but Σ could also separate just some pairs. Very well. And what is the conclusion? It is always good to have a conclusion. There exists an S ⊆ H, not too large — just of logarithmic size — such that the pointwise stabilizer with respect to Σ^S is trivial, where Σ^S is of course the union of Σ^g over all g in S. All right, so how do you prove that? If Σ splits a positive proportion of all pairs, then a slightly enlarged version of it splits everything. Let us point out first that if g' takes α to β, then h g' h^{-1} takes α^{h^{-1}} to β^{h^{-1}}. A tautology — a tautology, like all of math. But what do we do now? Now we consider taking random things. For h taken uniformly at random in H, h^{-1} takes the pair (α, β) to any pair (α', β') of distinct elements with equal probability. Why? Simply because H is transitive on pairs and all cosets of the stabilizer of a pair have the same size — that's the basic reason, though you could also say it is intuitively clear. So: what is the probability that h^{-1} takes (α, β) to some (α', β') such that Σ separates α' and β'? We know that Σ separates at least a proportion ρ of all pairs, so the probability is at least ρ. Very well. But at the same time, if this happens — if h^{-1} takes the pair to a separated pair — the consequence is that no g' in the pointwise stabilizer of Σ^h takes α to β; that is, Σ^h separates α and β. Why?
Simply because of what I said before. All right, so now let us choose a lot of random elements independently: let S be a set of r random elements of H. By the above, for each given h in S, the probability that Σ^h does not separate α and β is at most 1 − ρ; so the probability that this fails for every h in S is at most (1 − ρ)^r. This looks like a very strong requirement, so no wonder the probability is small. But this is exactly what would hold if there were some g in the pointwise stabilizer of Σ^S with α^g = β: such a g would be good for all these purposes at once — it would defeat every h in S. So the probability that this happens for some pair is, by a union bound over fewer than n² pairs, at most n²(1 − ρ)^r. But suppose it happens for no pair. Well, what do you call an element that takes no element to any other element? The identity. So if your enlarged set separates everything, its pointwise stabilizer consists of the identity alone. So all you have to do is take r large enough that n²(1 − ρ)^r < 1 — and of course r = O((log n)/ρ) suffices. In that case, with positive probability, the pointwise stabilizer of Σ^S is trivial. But if something happens with positive probability, it happens — again, the probabilistic method. So there exists an S of size r, of course, such that this is true, which is all we want. So that's it: a proof, due to Babai, using the probabilistic idea. And now your exercise is just to translate this so as to give the same result for sets. Then I will tell you briefly how this result gets used in the overall scheme of the proof. Let me give you the statement.
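The only computation in the proof is the choice of r, and it is worth doing once with numbers. A sketch with illustrative values of my own (ρ and n are made up; the lemma only needs ρ bounded below):

```python
import math

# We need the union bound n^2 (1 - rho)^r < 1, i.e.
#   r > 2 log n / (-log(1 - rho)),
# which is O(log n) once rho is bounded below.
n, rho = 1000, 0.1
r = math.ceil(math.log(n * n) / (-math.log(1.0 - rho))) + 1
failure_bound = n * n * (1.0 - rho) ** r
print(r, failure_bound)  # bound < 1: a good S of size r exists
```

Since the failure probability is strictly below 1, a set S of this logarithmic size with trivial pointwise stabilizer of Σ^S exists — which is the whole content of the probabilistic step.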
So: let A ⊆ Sym(n); assume, as usual, that A = A^{-1}; assume A generates something 2-transitive; let Σ be some subset of the points — it is not a subgroup now — and assume that for a positive proportion of pairs (α, β), some element of A^{k₁}, where k₁ is something like n⁹(log n)⁶, separates α and β. Then there exists an S ⊆ A^{k₂}, with k₂ of a similar size and S of logarithmic size, such that the pointwise stabilizer of Σ^S (with respect to the relevant power of A) is trivial. So yes: just translate the earlier proof using a random walk. Very well. All right, so how do you use this splitting lemma within the proof of growth — of small diameter — in Sym(n)? Because, you see, a result of the type |A³| ≥ |A|^{1+δ} with δ constant is actually not true over Sym(n). The recent idea, which goes back in part to Pyber — the concept of building stabilizer chains — comes from the theory of algorithms over symmetric groups; the expression itself goes back to Sims, in fact, long before. All right, so what can you get out of a conclusion like that one — the pointwise stabilizer being trivial? Just by pigeonhole: the map restricting elements to Σ^S is then injective, so |A| is bounded by n^{|Σ^S|}, and since |S| = O(log n), that implies |Σ| ≥ c·log|A|/(log n)². So the hypothesis can hold only once Σ is already fairly large, not before. The only way your Σ will split a positive proportion of all pairs — this is what the splitting lemma is telling you — is if Σ is pretty large. So when Σ is not yet very large — when |Σ| is smaller than, let us call this bound L, c·log|A|/(log n)² — then fewer than a positive proportion of the pairs are split by A^{k₁}. And we are going to use that: for every Σ that is not very large, you are not separating a positive proportion of all pairs.
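The pigeonhole computation just sketched can be written out; here is a loose version, with constants left implicit, of how triviality of the pointwise stabilizer forces Σ to be large:

```latex
% If the pointwise stabilizer of \Sigma^S (in the relevant power of A)
% is trivial, then the restriction map
%   g \mapsto (\alpha^g)_{\alpha \in \Sigma^S}
% is injective on A: two elements of A agreeing on \Sigma^S differ by
% an element of the pointwise stabilizer, which is trivial.  Hence
\[
|A| \;\le\; n^{|\Sigma^S|} \;\le\; n^{|\Sigma|\,|S|},
\qquad |S| = O(\log n),
\]
\[
\text{so}\quad
|\Sigma| \;\ge\; \frac{\log|A|}{|S|\,\log n}
\;\gg\; \frac{\log|A|}{(\log n)^2}.
\]
```

Read contrapositively, this is exactly the statement used next: any Σ smaller than L = c·log|A|/(log n)² fails to split a positive proportion of pairs.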
And in particular you have large orbits — because if you don't have large orbits, if every orbit has size at most n/2, then at least half of all pairs are separated: two points in different orbits are certainly separated. Okay, so what do we do then? First of all, there is some point α₁ with a long orbit; but even the pointwise stabilizer of α₁ — it is just one point — is going to have some long orbit, in which you find α₂; then you consider the pointwise stabilizer of α₁ and α₂, still of size much less than the bound, and so on and so on, up to α_L — and then you stop. Very well: there you stop, and this is what is called a stabilizer chain. So it means that, using the splitting lemma, you show: for really any set A that generates something 2-transitive and is not tiny, you are going to be able to find a point with a large orbit; then, in the pointwise stabilizer of that point, you are going to find some other element — which you can take to be the second one — with a large orbit; and then, in the pointwise stabilizer of those two, you are again going to find some large orbit; and so on and so on. That is extremely useful. So let me show you, very quickly, some things you can do with that. An exercise: this implies that the appropriate power of A intersects a great many cosets of the pointwise stabilizer contained in the setwise stabilizer of Σ = {α₁, …, α_L} — just choose a set of coset representatives. And what are these cosets? What do the cosets of the pointwise stabilizer in the setwise stabilizer look like? Well, you can just restrict their elements to Σ: they act on Σ, and they give you that many different elements of Sym(Σ).
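Stabilizer chains are exactly what Sims's algorithm computes, and the first link of the chain fits in a few lines of code. A minimal pure-Python sketch of mine — a toy, not the machinery of the proof: the orbit of a point, a transversal, and Schreier generators for the point stabilizer, illustrated on Sym(4).

```python
n = 4

def compose(p, q):            # (p∘q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

def inverse(p):
    r = [0] * n
    for i, j in enumerate(p):
        r[j] = i
    return tuple(r)

ident = tuple(range(n))
gens = [(1, 0, 2, 3), (1, 2, 3, 0)]   # a transposition and a 4-cycle

# Orbit of 0, with a transversal: rep[x] is a word in gens taking 0 to x.
rep = {0: ident}
queue = [0]
while queue:
    x = queue.pop()
    for g in gens:
        y = g[x]
        if y not in rep:
            rep[y] = compose(g, rep[x])
            queue.append(y)

# Schreier's lemma: these elements generate the stabilizer of 0.
schreier = {compose(inverse(rep[g[x]]), compose(g, rep[x]))
            for x in rep for g in gens}

# Close up under multiplication to list the whole stabilizer subgroup.
stab = set(schreier) | {ident}
frontier = list(stab)
while frontier:
    p = frontier.pop()
    for s in schreier:
        q = compose(s, p)
        if q not in stab:
            stab.add(q)
            frontier.append(q)

print(len(rep), len(stab))  # orbit-stabilizer: 4 * 6 = 24 = |Sym(4)|
```

Iterating this inside the stabilizer yields the full chain; the lecture's point is that one can build an analogous chain from a generating *set*, with large orbits guaranteed by the splitting lemma rather than by group theory.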
So, in this way, identifying Σ with {1, 2, …, L}, what you end up having is a "prefix": among the elements that fix {1, …, L} as a set, you have almost all permutations of {1, …, L}. That is already the beginning of the light in this proof. And now that you have this nice prefix, what can you do with it? Well, Sym(Σ) — and so, in particular, the setwise stabilizer — acts on the pointwise stabilizer by conjugation. Here Σ is at first just {α₁} — I say "of course", but I am going to be slightly tricky: I am not going to go up to α_L; I am going to go up to α_{L−1}. All right, so the prefix acts on the pointwise stabilizer by conjugation, and we know that the pointwise stabilizer has a long orbit. Why? Because I was slightly tricky and did not put α_L into Σ. A long orbit: namely, that of α_L. And we can assume, as it turns out, that it acts like Sym or Alt on it; in fact, you need much less than that. But — warning — this is the step that uses the classification. In fact, what we need is not really for it to act like Sym or Alt on the orbit; we just need to be able to assume that it acts like a 2-transitive group on it — or even something a little stronger, namely (and this is not obvious) that we can find an S of constant size acting on this orbit, acting 2-transitively on the long orbit or on almost the entire long orbit. If you find a way to show this — and it is entirely possible that it can be done without classification — excellent: publish it. [Question: on the line above, do you want "acts like Sym(L)"?] Yes — it acts like Sym of the orbit, Sym(m), whatever its size is; Sym or Alt, yes, very well. All right, so now, perhaps by being clever, you find a set of generators of at least a 2-transitive group on that orbit.
So you have, on the one hand, in the pointwise stabilizer, just a couple of elements — a set S — that generate something with a long orbit, acting 2-transitively on it; and on the other hand, you have this long prefix acting on the pointwise stabilizer by conjugation. So what do you get? Surprise: the orbit–stabilizer theorem comes to the rescue, because of the tension in between. So what is the action? Let the prefix act on S — on the entire group, really, but look at S — by conjugation. By orbit–stabilizer, either there exists a non-identity element g of the prefix commuting with every s in S — but if it commutes with every element of S, it commutes with every element of the group generated by S, and ⟨S⟩ is 2-transitive. And if you commute with every element of something 2-transitive on a set, what must you be on that set? In Sym: if there are always enough elements to take any point to any other, and you commute with them all, then you are not making a distinction between any two points — and if that happens for all pairs, you must be the identity. So g fixes every x in the long orbit. But the long orbit is long, so g has small support — and then you use the Babai–Beals–Seress result I told you about. Or — the other alternative, of course — you have a long orbit for the conjugation action: applying this to k-tuples, for k the size of S, there is going to be — just by pigeonhole, the one with the largest orbit — some s in S whose orbit under conjugation by the prefix is large.
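The group-theoretic fact doing the work here — the centralizer of a 2-transitive group is trivial — can be verified by brute force on a toy case of my choosing: in Sym(4), which is 2-transitive on 4 points, only the identity commutes with a generating pair.

```python
from itertools import permutations

n = 4

def compose(p, q):            # (p∘q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

gens = [(1, 0, 2, 3), (1, 2, 3, 0)]  # generate Sym(4), 2-transitive on 4 points

# Brute force: which elements of Sym(4) commute with every generator
# (and hence with everything the generators generate)?
central = [p for p in permutations(range(n))
           if all(compose(p, g) == compose(g, p) for g in gens)]
print(central)  # only the identity survives
```

This is exactly the dichotomy in the argument: a prefix element commuting with all of S would centralize the 2-transitive group ⟨S⟩, hence fix the whole long orbit and have small support.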
So every tuple is taken to a different k-tuple, and some element of the tuple really moves around. But where does this orbit live? It is contained in A^k for some moderate k — n^{O(log n)}, whatever it is — intersected with the pointwise stabilizer of Σ. So you have constructed a lot and a lot of elements in the pointwise stabilizer — the orbit has size basically L!, or ρ^L times L!. And now, what can you conclude? You cannot conclude right away that you have growth, because maybe you already had a lot of elements in the pointwise stabilizer. But now, with all these elements in the pointwise stabilizer — where, moreover, you have a long orbit on which things generate something like Sym or Alt — you can just recur. You can now work inside that pointwise stabilizer; there you will find a long prefix, and there you will continue working and working. It gets messy here, but this is the basic idea. So instead of working by induction with a set A that grows, you work with a stabilizer chain that grows: then you have α₁', …, α_L', and so on. In fact, you have to repeat this step about log n times, so the chain does not simply get shorter and shorter — but it does work out. So this was a very brief sketch of the main argument. What I really want you to take home with you is this action of the setwise stabilizer on the pointwise stabilizer: once you get this action, and things really move around, you are building up a lot of elements in the pointwise stabilizer. That's what makes you win. So I think that's it — and I think I have already stated the main problems. Just to repeat one that was already mentioned: people actually believe that the diameter should be genuinely polynomial in n. It might even be n^{2+ε}.
And some people believe that when you pick a random pair of generators of Sym(n) or Alt(n), that should in fact give you — or might give you — an expander graph, with probability arbitrarily close to 1. So that's it. [Question: how complicated is the use of the classification — is it a direct consequence?] It's fairly direct. I don't even invoke it directly: there is a paper by Babai and Seress that shows that the diameter of a group with respect to any set of generators is basically bounded by the diameter of its largest symmetric or alternating composition factor. So that is the only way in which we use classification. [Question: is anybody working on trying to do this for SL_n as you were describing — combining the two things?] If somebody is trying, they are not telling me. But they might be trying, yes. And if someone in the audience were to ask whether they should — well, that is not a mathematical question. For the recording: please work on the problem. It's very interesting.