Very well. So what do we mean by growth? You have a group G. Let us say you have sets A, B ⊆ G. We define A·B to be the set of all products of an element of A and an element of B, and by A^k I mean the product of k copies of A, that is, the set of all products of k elements of A. Just to fix notation, A^{-1} is the set of inverses, and by |A| I mean the number of elements of A — assuming A is a finite set. So you have a finite set A living in G, and the question is: you multiply A by itself and get A·A; you multiply it by itself k times and get A^k. How is that set growing? Is A·A much larger than A? Is A^k much larger than A? This is a question that can be seen from different perspectives, coming from different fields. On the one hand, you have what has come to be called additive combinatorics. An entire subarea, perhaps the core subarea, of additive combinatorics is about this; but, as the name "additive" indicates, it has traditionally focused on the case of G abelian, or even G the integers. Then you also have — and you are in the right place for this, perhaps — geometric group theory. "Geometric" gives away part of the tools and the viewpoint, but let us say that, from the point of view of the problems it handles, it centers on the case of G infinite and k going to infinity. There is a third thing that one could say studies this problem, and it almost seems like a joke: subgroup classification. Why do I say that classifying subgroups — describing which subgroups a group has — is a problem of this sort? Well, if you assume that A contains the identity, then A^k = A (or even just |A^k| = |A|) if and only if A is a subgroup of G.
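As a toy illustration of these definitions (a sketch of my own, not from the lecture), here is the set product A·B and the "a set containing the identity that does not grow is a subgroup" criterion, checked in the small group Sym(3), with permutations represented as tuples:

```python
from itertools import permutations

# Permutations of {0,1,2} as tuples; (p o q)(i) = p[q[i]].
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def set_product(A, B):
    """A.B = {ab : a in A, b in B}."""
    return {compose(a, b) for a in A for b in B}

G = set(permutations(range(3)))          # Sym(3), 6 elements
e = (0, 1, 2)                            # identity

# A subgroup containing e does not grow at all: H.H = H.
t = (1, 0, 2)                            # a transposition
H = {e, t}
assert set_product(H, H) == H

# A set containing e that is not a subgroup must grow strictly.
c = (1, 2, 0)                            # a 3-cycle
A = {e, t, c}
A2 = set_product(A, A)
print(len(A), len(A2))                   # 3, then strictly more
```

Here A·A is already all of Sym(3): a set of 3 elements jumped to 6 after one multiplication.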
So classifying the sets that don't grow at all is the same thing as classifying subgroups. You might think I'm being slightly pedantic, but in fact it's very useful — very important — to put this in the same framework, because it turns out that some of the techniques developed in subgroup classification are robust and can be extended not just to sets that don't grow at all, but to sets that grow slowly. Our setting will be this: we will be working with non-abelian groups in general, but not with, say, a single fixed infinite group; rather, with an infinite family of groups that may themselves be finite. And we will be aiming at uniform results. Some statements about growth are trivial for a fixed finite group, but not when you have an entire family, because there you want uniformity, and uniformity is difficult. In geometric group theory, you often take a set A and look at A^k as k goes to infinity. Of course, if the group G is finite, things get very boring at some point, in that A^k eventually becomes equal to the group generated by A — by which I mean the set of all products of elements of A and their inverses, of whatever length — say, equal to G if A generates G; and then |A^k| just stays constant thereafter, it doesn't grow anymore. So a basic question would be: what is the least k for which this happens, the least k with A^k = G? This makes the asymptotic situation as k goes to infinity highly boring, but the question remains. So, the definition is that the diameter of Γ(G, A) — I'm going to put a slightly mysterious Γ here — is the least k for which this happens. Why this name, diameter, and why the funny Γ? The Γ is the first letter of "graph", if you write "graph" in Greek letters.
So the Cayley graph — the undirected Cayley graph of G with respect to A — is defined to be the graph with vertex set V = G: just draw a dot for every element of G, and connect the dots corresponding to two elements of G if their ratio is in the set of generators A. This is a regular graph. What do I mean by a regular graph? A graph where every vertex has the same degree; here the degree is |A|. What do I mean by degree? Well, some people like to call it valency, because they like to remember the bit of chemistry we all learn in high school: it is just the number of edges coming off a given vertex. So every vertex has the same degree. That's nice. So that we don't have to make a mess out of Cayley graphs, directed ones and undirected ones, we will assume A = A^{-1} from now on. That's actually a very soft assumption, as we will see; we can just add the inverses to the set. And it's easy to see that the diameter in the sense of the definition above is the same as the diameter of the graph in the traditional sense. What is the traditional definition? You define the distance between two points to be the length of the shortest path between them, where the length of a path is just the number of edges in it. And now you become a pessimist: you take the two points whose distance is maximal, and that distance you define to be the diameter — just as for a circle, where you take two antipodal points and their distance is the diameter of the circle. Very well. This graphical — I wouldn't say quite geometrical yet — way of seeing things is quite fruitful. In fact, some questions sharper than diameter bounds are formulated most clearly and most intuitively in terms of the graph.
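The graph diameter and the group-theoretic diameter coincide because Cayley graphs are vertex-transitive, so the eccentricity of the identity is already the diameter. A small sketch (my own example; the choice of Sym(4) and of generators is illustrative, not from the lecture) computing the diameter of a Cayley graph by breadth-first search from the identity:

```python
from collections import deque
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def invert(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def diameter(G, A):
    """Diameter of the undirected Cayley graph Gamma(G, A), assuming A = A^{-1}.
    By vertex-transitivity, BFS from the identity suffices."""
    n = len(next(iter(G)))
    start = tuple(range(n))                 # identity permutation
    dist = {start: 0}
    queue = deque([start])
    while queue:
        g = queue.popleft()
        for a in A:
            h = compose(g, a)
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
    assert len(dist) == len(G)              # A generates G
    return max(dist.values())

G = set(permutations(range(4)))             # Sym(4)
s = (1, 0, 2, 3)                            # a transposition (an involution)
c = (1, 2, 3, 0)                            # a 4-cycle
A = {s, invert(s), c, invert(c)}            # symmetric generating set
print(diameter(G, A))
```

The same BFS also certifies that A generates G: the ball eventually covers every vertex.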
There are other notions — there's that of expander graphs — and these are all related. For a graph Γ that is regular, finite and undirected, we can define the discrete Laplacian, acting on functions from the set of vertices to C. How does this linear operator act? It's like a toy version of the Laplacian in the continuous case: the value of Δf at x is the value of f at x minus the average of f over the neighbors of x. You could define the Laplacian in general, but the point is that, because the graph is undirected, Δ is a symmetric operator. Symmetric operators are very nice, in part because they have a real spectrum; and it's not hard to see — in fact it's quite easy — that here the spectrum is real and non-negative. The spectrum is just the set of eigenvalues, and it will look like this: there is the eigenvalue λ0 = 0, corresponding to which eigenfunctions? Give me a function on the graph that is an eigenfunction of the Laplacian with eigenvalue zero. [A constant.] Very well: the function taking the value 1 everywhere, say, or any other constant value. (The zero vector is not an eigenvector — not in my definition of eigenvector.) That gives you the eigenvalue zero. And then you have the other eigenvalues, 0 ≤ λ1 ≤ λ2 ≤ … We say that Γ is an ε-expander if λ1 ≥ ε. Most often you will just hear people talk of "an expander graph", plain; that doesn't make sense in isolation. What makes sense is to talk of an ε-expander or, rather — and this you will find very often — of an expander family of graphs. So this is a family; it gets interesting if you impose that the graphs be of constant degree, otherwise the question is a bit easy. And you say that it is an expander family if they are all ε-expanders for one and the same ε > 0.
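A concrete sketch of this definition (my own illustration, assuming numpy is available): build the Laplacian Δf(x) = f(x) − (average of f over neighbors) for the Cayley graphs of the cyclic groups Z/nZ with generators ±1, and watch λ1. The cycles are connected, so each one is an ε-expander for some ε — but the ε decays with n, so they do not form an expander family:

```python
import numpy as np

def cayley_laplacian_gap(n, gens):
    """lambda_1 of the normalized Laplacian L = I - A/d for the Cayley graph
    of Z/nZ with the symmetric generating set `gens`."""
    d = len(gens)
    A = np.zeros((n, n))
    for x in range(n):
        for s in gens:
            A[x, (x + s) % n] += 1
    L = np.eye(n) - A / d
    eig = np.sort(np.linalg.eigvalsh(L))   # symmetric => real spectrum
    assert abs(eig[0]) < 1e-9              # lambda_0 = 0: constant functions
    return eig[1]

# For the cycle, lambda_1 = 1 - cos(2*pi/n) -> 0: NOT an expander family.
for n in [8, 32, 128]:
    print(n, cayley_laplacian_gap(n, [1, -1]))
```

The point of the constant-degree condition shows up here: the cycles have constant degree 2 but lose their gap, and the whole game is to find constant-degree families that keep a uniform ε.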
Of course, this would be trivial if the family were finite; the interesting thing is an infinite family. Trivial because, for λ1 to be bigger than zero, it's enough for the graph not to be stupid — it's enough for the graph to be connected, and it will be connected if A generates G. So, once you have that, any single graph is an ε-expander for some ε > 0. It's just that the ε has to be constant for the entire family; that's what's special, that's what I mean by uniformity. Very well. And it is particularly nice if we manage to construct an expander family made out of Cayley graphs, for many reasons — one of them being that we really care about groups here. All right. Another related notion is that of mixing time. If one had to put these notions in a hierarchy: for a graph to be an expander is stronger than just having small diameter, and having small mixing time is an intermediate notion. So what is the mixing time? It is the least k such that a random walk of length k on the Cayley graph ends at every vertex v with roughly equal probability — an approximately uniform distribution. What do you mean by "approximately"? Well, you have to impose a norm on distributions; it can be an L1 norm, an L2 norm, an L∞ norm, your choice. So, strictly speaking, one should speak of, say, the L∞ mixing time or the L2 mixing time; but there are all sorts of useful inequalities between them, often going both ways. All right. So, as I was saying, the hierarchy is as follows. Being an ε-expander — I'm not even trying to chase the constants — implies that the mixing time is O_ε(log |G|). And, for any sensible norm, if you are close to uniform, then in particular you end up at every place with positive probability; so this in turn implies the diameter bound. The little ε in O_ε means that the implied constant depends on ε. OK. So, some results.
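The notion of mixing time can be made concrete as follows (my own sketch; the lazy walk, the L-infinity norm and the tolerance 0.01 are illustrative choices): iterate the distribution of a random walk and record the first step at which it is uniformly close to the uniform distribution. On the cycle Z/nZ — which, as above, has a shrinking spectral gap — the mixing time grows quickly with n:

```python
import numpy as np

def mixing_time(P, eps=0.01):
    """Least k such that the walk distribution mu_k, started at a point,
    satisfies ||mu_k - uniform||_oo <= eps/n.  P is the transition matrix."""
    n = P.shape[0]
    mu = np.zeros(n)
    mu[0] = 1.0
    for k in range(1, 10000):
        mu = mu @ P
        if np.max(np.abs(mu - 1.0 / n)) <= eps / n:
            return k
    return None

def lazy_cycle(n):
    # Lazy walk on Z/nZ: stay put with prob 1/2, else step +1 or -1.
    P = np.zeros((n, n))
    for x in range(n):
        P[x, x] = 0.5
        P[x, (x + 1) % n] += 0.25
        P[x, (x - 1) % n] += 0.25
    return P

for n in [8, 16, 32]:
    print(n, mixing_time(lazy_cycle(n)))   # grows roughly like n^2
```

For an expander family the analogous quantity would stay O(log of the group size); here it visibly does not.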
So I'm going to put up the Key Proposition — not because I'm propping myself up, or because I like keys; it's simply the name I gave the result in my paper back in the day, in the paleolithic, and I called the main theorem something else, which I will call a corollary here. Somehow the name has stuck. If you want to prove diameter bounds, or several other things, really the key proposition is the following. You have G = SL2(Z/pZ) — that is, 2-by-2 matrices with determinant 1 over Z/pZ — and a subset A of G, with A generating G. Then, the way I stated it at the time, more or less: either A is not too large, say |A| ≤ |G|^{8/9}, in which case |A^3| ≥ |A|^{1+δ}, where δ > 0 is an absolute constant; or A is large, in which case A^k = G already for k an absolute constant. In fact, as Nikolov and Pyber showed, based on something that Gowers was also doing at the time, k = 3 is valid; you can just take k = 3. About the exponent: I think it was really 5/6 in the original — I'm pretty sure it's 5/6, but these numbers don't matter, so I will put 8/9; let's check that later, and get something on the board at any rate. Note, by the way, that making the exponent larger makes the statement stronger: you could make it 1 − ε, and the statement would still be true, with δ depending on ε; since 8/9 is an absolute constant, δ is then an absolute constant. What I wrote with 5/6 was also true. All right. So what I called the main theorem back in the day is really a simple corollary: it's a diameter bound, namely that the diameter is at most a constant power of log |G|. Both of these constants are independent of p.
Oh — they are absolute constants. This is what I showed for G = SL2. The proof is just an exercise; in fact, an exercise we can do together right now. We just apply the key proposition to A, then to A·A·A instead of A, then to (A·A·A)·(A·A·A)·(A·A·A), and so on. The set keeps growing, each time by a power 1 + δ. And when does it stop? It stops when A^l, if you prefer, gets bigger than |G|^{8/9}. And then, after k more steps, where k = 3, you're done. So in fact this O(1) in the exponent is basically 1/δ. All right. What now? In a special case that people care a lot about, you can get a stronger bound. Before 2005 — I think it was Lubotzky who had the following favorite example — the situation was this. You have G_p = SL2(Z/pZ), and here A is a fixed set of matrices of integers, reduced modulo p. We knew that this was an expander family. This was essentially a consequence of a rather heavy and beautiful result in number theory — a spectral gap on the upper half-plane. But, quite frustratingly, what people did not know was what to do with an arbitrary A. Essentially nothing was known — no good diameter bound; really, no diameter bound, period. Now, already for any A whatsoever, the corollary gives a polylogarithmic bound, a power of log; but here one can do a bit nicer, in fact. This case is especially nice because — look at this — these are matrices defined over Z, and in this particular case it's even easier, because they generate a free group. So what does it mean that they generate a free group? It means that, as matrices in SL2(Z), if you start multiplying them, you don't get any non-trivial relations: if you start multiplying these matrices in any given way, the only way you're going to come back to the origin is if you literally retrace your steps.
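The bookkeeping in this exercise can be sketched numerically (my own illustration; the value δ = 1/100 and the size of p are hypothetical stand-ins, since the proof does not give friendly constants). Each application of the key proposition raises log |A| by the factor 1 + δ and triples the word length, so the number of applications is about log log |G| / log(1 + δ), and the final word length 3^(steps) is polylogarithmic in |G|:

```python
import math

def steps_to_cover(log_G, delta, log_A0=math.log(2)):
    """Count applications of |A^3| >= |A|^(1+delta) needed for |A^l| to
    exceed |G|^(8/9), starting from |A| >= 2.  Each application replaces
    A by A^3, so the word length l gets multiplied by 3 each time."""
    steps, log_A, length = 0, log_A0, 1
    while log_A < (8 / 9) * log_G:
        log_A *= 1 + delta        # the key proposition
        length *= 3               # A -> A^3
        steps += 1
    return steps, length

# |G| ~ p^3 for SL_2(Z/pZ); take p ~ 10^9 and a hypothetical delta = 1/100.
steps, length = steps_to_cover(3 * math.log(1e9), delta=0.01)
print(steps, length)   # steps ~ log log|G| / log(1+delta); length = 3**steps
```

Note log(length) = steps · log 3 ≈ (log 3 / log(1+δ)) · log log |G|, i.e. the diameter bound (log |G|)^{O(1)} with the O(1) of order 1/δ, as claimed.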
Right? There's no other way; there are no loops out there. But what does that mean modulo p? It means that modulo p there are also no loops, no repetitions — unless you have gone so far into the game that you start to get two matrices that are congruent to each other modulo p without being equal. And the only way that happens is if you have two matrices at least one of whose entries is bigger than p/2 in absolute value — because two integers can be congruent modulo p without being equal only if one of them is bigger than p/2 in absolute value. Very well. So here A sits in SL2(Z), and you have another corollary — an easy one, a corollary to the key proposition: the diameter is logarithmic. How do you prove that? Well, it's an exercise to which I have already given you the solution — namely, as was just said, only then does at least one entry get big. This means that for the first roughly log p steps, if you just multiply things together, you get the fastest kind of growth possible, for free. You have to stop only after a constant times log p steps, by which point your set is already of size — so, growth for free, literally meaning because of freeness, because the thing is a free group — up to |G|^ε, say; and then you apply the key proposition O(1) times. That's enough. In fact, you can soften this assumption a lot, and a lot, and a lot. You don't need A to generate a free group; it's enough for the group generated by A to contain a free group, and so on and so on. In fact — and I think these ideas are actually from the 80s: Weisfeiler, Nori, and some other people; I don't want to leave anybody out, it's an interesting list of names, but I think these are some of the main ones; of course, they were not working on quite these problems yet, but they were working on this sort of thing — the hypothesis can be softened to: A is Zariski-dense.
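The "growth for free" phenomenon can be checked directly (my own sketch, using the classical Sanov pair of matrices as a stand-in for the lecture's unspecified example — Sanov's theorem says these two generate a free subgroup of SL2(Z)). Distinct reduced words give distinct matrices, so the ball of radius L in the Cayley graph has the maximal size a 4-regular tree allows, namely 1 + 4(3^L − 1)/2:

```python
# Sanov's theorem: a = [[1,2],[0,1]] and b = [[1,0],[2,1]] generate a free
# subgroup of SL_2(Z).  We enumerate reduced words (no immediate
# backtracking) and verify that all resulting matrices are distinct.

def mat_mul(M, N):
    return ((M[0][0]*N[0][0] + M[0][1]*N[1][0], M[0][0]*N[0][1] + M[0][1]*N[1][1]),
            (M[1][0]*N[0][0] + M[1][1]*N[1][0], M[1][0]*N[0][1] + M[1][1]*N[1][1]))

I = ((1, 0), (0, 1))
a = ((1, 2), (0, 1)); a_inv = ((1, -2), (0, 1))
b = ((1, 0), (2, 1)); b_inv = ((1, 0), (-2, 1))
gens = [(a, 0), (a_inv, 1), (b, 2), (b_inv, 3)]
inverse_of = {0: 1, 1: 0, 2: 3, 3: 2}

ball = {I}
frontier = [(I, None)]        # (matrix, index of last generator used)
for _ in range(6):            # reduced words of length up to 6
    new_frontier = []
    for M, last in frontier:
        for g, i in gens:
            if last is not None and i == inverse_of[last]:
                continue      # forbid g * g^{-1}: the word stays reduced
            new_frontier.append((mat_mul(M, g), i))
    frontier = new_frontier
    ball.update(M for M, _ in frontier)

print(len(ball))              # 1 + 4*(3**6 - 1)//2 = 1457: tree-like growth
```

Reducing these matrices modulo p changes nothing until some entry exceeds p/2 in absolute value — which is exactly the "first log p steps are free" argument.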
Zariski-dense basically just means not living inside some sort of stupid algebraic set — non-stupid meaning non-trivial, a proper subvariety. Very well. So that's an easy consequence. Now let me give you a hard consequence. In this special case — well, not just this one, but in general — you have a fixed A in SL2(Z), not of some silly special form like upper-triangular matrices, say; Zariski-dense means precisely "not of a silly form". Then, if you take the reductions modulo p, one is actually able to show more than just logarithmic diameter — something even stronger. It's not a corollary; this is a bona fide theorem, due to Bourgain and Gamburd. So: let A ⊆ SL2(Z) with ⟨A⟩ Zariski-dense in SL2 — in SL2 over R, I should really say — and let G_p = SL2(Z/pZ). This was the right condition, because they showed that if it holds, then A mod p generates SL2(Z/pZ); so it's clearly necessary. But in fact it's also the right condition because you can find something free inside, and so on — in SL2 this is not much of a problem. You need p bigger than a constant, so that you are sure A mod p generates. And that's enough: the Cayley graphs of G_p with respect to A mod p, for p bigger than a constant, form an expander family. So let us give the briefest sketch of the proof possible, in part because it will introduce some other important concepts. We had our little notion of growth. First of all, it turns out that one can think of growth differently, in a way that is, if you wish, more closely connected to mixing times. Mixing times we defined in terms of probability distributions. A probability distribution is just a measure with non-negative values and total measure one. So you have a measure μ on the group. Now, you don't multiply measures — multiplication would be a different thing; what corresponds to multiplying sets is to convolve measures.
And when you convolve a measure with itself, you want some sense of what happens to it. The L1 norm is always one, so you look at the L2 norm — how flat the measure is — which corresponds, very roughly, to how big its support is. And indeed you can get an implication in this direction (the implication in the other direction would be trivial): for μ a probability measure satisfying some conditions, when you take the convolution of the measure with itself, it becomes flatter — the L2 norm, always at most that of μ, becomes genuinely smaller; the measure becomes flatter, and in particular more equidistributed. So how do Bourgain and Gamburd deduce this? The key step here — in fact, from this perspective, you could say it is the meaning of Balog–Szemerédi–Gowers, in a sense. There is this very nice, very important tool in additive combinatorics, first proven by Balog and Szemerédi, and then, with much stronger bounds, by Gowers. The proof by Gowers also generalizes to non-abelian groups: this was shown to hold in the non-commutative case by Tao, though I think Vu had already observed that it is essentially a graph-theoretical statement. I will not go into the details of what it says, but it is really the key in this sort of passage. Passing in the other direction — from the statement on measures to the statement on sets — would be very easy: you just consider the special case of μ equal to the uniform distribution on A, zero outside, and you get the result on sets, just using Cauchy–Schwarz. This direction is much subtler, and you have to use Balog–Szemerédi–Gowers. And then, how do you go from here to expansion? Because this just gives you some sort of weak mixing time — a pretty weak mixing time. How do you pass to something stronger even than honest-to-goodness mixing, namely expansion? Well, you have to use two things.
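The flattening of a measure under convolution can be watched directly (a toy sketch of mine, on Sym(3) rather than SL2): convolve the uniform measure on a generating set with itself repeatedly, and the L2 norm decreases toward that of the uniform distribution on the whole group:

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def convolve(mu, nu):
    """(mu * nu)(x) = sum over gh = x of mu(g) nu(h)."""
    out = {}
    for g, mg in mu.items():
        for h, nh in nu.items():
            x = compose(g, h)
            out[x] = out.get(x, 0.0) + mg * nh
    return out

def l2_norm(mu):
    return sum(v * v for v in mu.values()) ** 0.5

# Uniform measure on a generating set A = {e, t, c} of Sym(3).
e, t, c = (0, 1, 2), (1, 0, 2), (1, 2, 0)
mu = {e: 1/3, t: 1/3, c: 1/3}
for k in range(4):
    print(k, round(l2_norm(mu), 4))   # L2 norm decreases: mu flattens
    mu = convolve(mu, mu)
```

The limiting value is (1/6)^{1/2} ≈ 0.408, the L2 norm of the uniform distribution on the 6 elements of Sym(3) — the convolution powers equidistribute.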
One of them is the growth for free that you get in the first ~log p steps. And then there is another ingredient: having no spectral gap — meaning λ1 very small — would have its ill effects amplified by the fact that λ1 has high multiplicity, and that would contradict the flattening statement. I've heard that this is a very common trick in physics, but in this context it was introduced by Sarnak and Xue in the 80s — well, and as I am reminded, it had been used before that too. But physicists, I think, believe it belongs to the ancient Babylonians, such as Wigner and the like. All right, very well. [Question: for this, have you computed the ε for some families — say, for Lubotzky's example?] Oh, it's going to be horrid, but you can do it explicitly — something like 2 to the minus 2 to the 36, I'm told. It's bigger than zero! I will not write that down, but the important thing is that it can be written down. I never wrote it down myself; it has been written down, and improved. I mean, if one had done no optimizations at all, it would have been worse — though not by so much. OK. Well, so, all right. What are some generalizations of the key proposition? And then we'll look at the consequences. The first generalizations kept the Lie type constant. So it started with SL2(F_p), and the generalization to SL2(F_q) was done by Dinai, who was a student at the time; Varjú also has a very beautiful, very neat way of doing it. Also — it may not be completely obvious that it's a generalization of this kind — Bourgain and Gamburd showed that the statement on measures is true on SU(2). This is a good opportunity to explain several things. First of all, this is a group of complex matrices.
It says SU, not SL, so it may not be apparent that it is, deep down, the same group — but the Lie algebra is the same. At the same time, there are difficulties, because now the entries are complex numbers. In what sense? If what you want is just the key proposition over an infinite field rather than a finite one, the proof actually gets easier: the way to generalize the key proposition, in the sense of just proving the same statement over the complex numbers, is to delete F_p, write C, and then delete large chunks of the paper — it goes through. However, if what you then want to prove is an expansion statement, or something related to it, such a form of the key proposition is not enough, because what you really need is a result on measures, and that passage does not work just like that over the complex numbers. You need a stronger statement here — a statement that keeps track of the distances between the elements of the set. It's a very tricky thing; this is their Inventiones paper. Now, going up in rank — when you pass to other Lie types — what do you have? There is my SL3(F_p) paper, and that took me a lot of work. In fact, I had already to make many things more abstract, and half of the paper is completely general. I managed to really finish the job for SL3, and my hope was that the other half of the paper would work for SLn, or for other groups of classical type. Well — a little offshoot — this was a bit of a blind alley. Pyber and Szabó, whose names will appear in a moment, generously say that Nick Gill and I proved one half of SLn. We proved what we believed to be the hard half, but we couldn't finish the job: even though things worked out for Sp_{2n} — the symplectic group — they did not work for SO_n. We were stuck. In fact, the broadest generalization — this is where things got unstuck — is to all finite simple and almost simple groups of Lie type.
These are two simultaneous results, by Breuillard–Green–Tao and by Pyber–Szabó. Not to get into technicalities, but just to assure the group theorists in the audience: Pyber–Szabó at least really did all finite groups of Lie type, even the really, really exotic ones, not just Chevalley groups. All right. Here I should put some other names on the board, because I mentioned earlier subgroup classification. The classification theorem — I think many of you will have heard about it — is a theorem whose proof, if it were written in one place, which it hasn't really been, would be about the length of the Encyclopaedia Britannica. (If you wrote down all the details, it might be more like Wikipedia — but no, I know, there are nasty jokes in that direction.) At any rate, early on, after the proof was announced, people tried to replace parts of it by more elementary proofs, and it turns out that many of those proofs are quite robust. One example is that of Larsen–Pink. Now, Pyber and Szabó were mostly working from my SL3 paper, without my help, producing some very intelligent and elegant generalizations of parts of it — showing that things worked out better than I could make them work out. Breuillard–Green–Tao also give credit to Larsen–Pink, which they were looking at. I will later talk about what the role of Larsen–Pink exactly was. It was floating around as a preprint for a long time, but it was elucidated in part by Hrushovski — by Hrushovski–Wagner, I think. We will later see the exact role, but it turned out to be quite useful, because there were statements where I could prove a lot of things about sets of a special type — how many elements of A^k can live in a subvariety — and Larsen and Pink had statements like that. Theirs were weaker in the sense that they were only for subgroups, not also for sets that grow slowly, but stronger in the sense that they were more general.
And in fact you can take, in some sense, the span of these two kinds of results: you can show results that are valid for sets that grow slowly and valid in general — in the sense of holding for all subvarieties, not just varieties of a special type — and you don't need to do ad hoc work. Very well. This is for the key proposition; and it gives the corollary, the small-diameter corollary, immediately. This sort of thing can also be used to give expansion — and that is highly non-trivial: it's Varjú again, and Salehi Golsefidy; I should put brackets to make sure that Salehi Golsefidy stays one person. At any rate, what are the open problems remaining? Just for linear algebraic groups — matrix groups — you have several things. First of all, as a corollary, you have for SLn, SOn, et cetera, with A an arbitrary generating set of G, that the diameter of Γ(G, A) is polylogarithmic: at most (log |G|)^{O(1)}. But with a tiny proviso — well, not so tiny: the constants here don't depend on p, they don't depend on the ground field, but they do depend on the rank; they depend on n. Without the dependence on n, this would be the full strength of Babai's conjecture, at least for matrix groups; Babai made this conjecture in the 80s. Very well. So the first open problem is to prove this uniformly. And there has to be a change of strategy, because the key proposition is actually not true uniformly in the rank: the δ in the statement — no longer on the board, unfortunately — the δ in |A^3| ≥ |A|^{1+δ}, does depend on n. If you force it to be fixed, the proposition is no longer true; there are counterexamples. But the consequence ought still to be uniform in n — at least one still believes so — and we will soon see that there is recent evidence for that. Another uniformity problem is open even for SL2.
Namely — I will put it as a question, because not everybody necessarily agrees that the answer is yes. Take SL2(Z/pZ), any p, and any generating set A. Is the family of all Cayley graphs of SL2(Z/pZ), over all p and all generating sets A, an expander family? Is there a universal ε? That we don't know. You see, the results of diameter type are uniform in A — completely uniform. But the results of expander type, of Bourgain–Gamburd type, are not uniform in A — in the A sitting above, the one being reduced modulo p; they depend on that A. Whereas we would really like a completely uniform expander result. It's not completely clear that it's true, though again most people believe it, for some measure of "most people". All right. I said there is some further evidence. I would state it as a third open problem, except that it's not completely open anymore. Namely: why did I say that Babai's conjecture means this for matrix groups? Babai's conjecture is really about all finite simple non-abelian groups. Non-abelian — it's obvious why that is needed: an abelian group can have very large diameter. Say, if you take the group Z/2014Z with the generating set {1, −1}, its diameter is as large as 1007. Simple — which you may think of as the very opposite of solvable — means that it has no proper non-trivial normal subgroups; and finite, that's also needed for these formulations. But what are the groups that are finite, simple and non-abelian? Ah, that's what the classification theorem is about. It tells you that these groups come in three kinds. There are the matrix groups — the finite matrix groups are the finite groups of Lie type. Then there are the monsters: well, there's one Monster with a big M, but then there's a finite number of sporadic groups. Since there's a finite number of them, we can forget about them.
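The abelian lower bound just mentioned is easy to verify by brute force (my own check): with the symmetric generating set {1, −1}, the Cayley graph of Z/nZ is the n-cycle, whose diameter is ⌊n/2⌋ — linear in |G|, nowhere near polylogarithmic:

```python
from collections import deque

def cyclic_diameter(n, gens):
    """Diameter of the Cayley graph of Z/nZ with symmetric generating set gens."""
    dist = {0: 0}
    q = deque([0])
    while q:
        x = q.popleft()
        for s in gens:
            y = (x + s) % n
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return max(dist.values())

# Z/2014Z with generators {1, -1}: diameter 2014/2 = 1007.  This is why
# Babai's conjecture must exclude abelian groups.
print(cyclic_diameter(2014, [1, -1]))   # 1007
```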
We are proving an asymptotic statement, so it's as if they did not exist; they get sent to the Jardin des Plantes. And then there's Alt(n). Alt(n) is almost the group of all permutations of n elements: it is the subgroup of Sym(n) of index 2. Sym(n), which has n! elements, is of course the group consisting of the ways of permuting n elements, and Alt(n) consists of the even permutations. What are even permutations? Well, you have all seen the game of 15, and you know that it cannot be solved in its altered phrasing: if you switch 14 and 15, there is no sequence of slides that flips just 14 and 15. That's because there are even and odd permutations: all the permutations you can realize by sliding turn out to be even, so they cannot produce a single switch. All right. So that's the group Alt(n). Alt(n) is simple, and nothing was known for Alt(n) — really, only very, very weak bounds were known. Alt(n) also shows that the constant in the exponent cannot be universally 1. So, what is known now about Alt(n), and why do we care? Well, a lot of people care about Alt(n) for its own sake. But if you care only about matrix groups, there is also a highly sophisticated reason to care about Alt(n) and Sym(n). If you're really much more sophisticated than I am, you start working over the field with one element. It's very sophisticated, because the field with one element does not exist. But objects over the field with one element — which is called F_un, some sort of cross English-French pun — can make sense, and SLn over F_un turns out to be just Sym(n), under most definitions of these terms. So the fact that the problem was open here — really, the best bound was something like e^{√n} — meant that there was little hope of doing SLn uniformly, because you could not do it even over the field with one element.
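The parity of a permutation, and hence Alt(n) and the 15-puzzle obstruction, can be made concrete in a few lines (my own sketch; parity is computed by counting inversions):

```python
from itertools import permutations

def parity(p):
    """+1 for even permutations, -1 for odd: parity of the inversion count."""
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return 1 if inv % 2 == 0 else -1

n = 4
sym = list(permutations(range(n)))
alt = [p for p in sym if parity(p) == 1]
assert len(sym) == 24 and len(alt) == 12   # Alt(n) has index 2 in Sym(n)

# The 14-15 puzzle obstruction in miniature: a single transposition is odd,
# so it cannot be reached by the (even) permutations realizable by sliding.
swap = (1, 0, 2, 3)
print(parity(swap))    # -1
```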
All right. Fortunately, something is now known. Very well: take G = Sym(n) or Alt(n), and take any subset A of G generating G. Then the diameter — well, we haven't quite got (log |G|)^{O(1)}; we have got what may be called a quasi-polylogarithmic bound: e^{O((log n)^{4+ε})}, that is, e^{O((log log |G|)^{4+ε})}. Very well. It's still open: we still don't know how to prove a genuinely polylogarithmic bound, (log |G|)^{O(1)}. But this already gives us some hope. [Question: the constants?] Yes, absolute constants, of course — otherwise the statement would be meaningless. The only thing varying here is the rank, which is n. The symmetric group is a creature of pure rank: the field remains the field with one element, which does not exist. So it really isolates the problem of the rank. All right. Let me give you some of the main ideas — I will go into the technical details, and many more ideas, in the next few days — but here is the main philosophy. First of all, I really think that the right way to see these things is to look at actions. So, what is an action? An action of a group G on a set X, or an object X in general, is a homomorphism from G to the group of automorphisms of X — which, if X is a plain set, is simply Sym(X), the group of permutations of X. So if somebody gives you a group — or for that matter two groups, but say just one group — you can look right away at several actions; you have several actions to study. You have the action of G on itself by left or right multiplication. That turns out to be sort of boring because, for example, the orbits are the entire group and the stabilizers are trivial. (The stabilizer of an element is the subgroup of all elements that don't move it; here every element of G except the identity moves everything in G.) So, very boring. What turns out to be much more interesting is the action of G on itself by conjugation.
And it turns out that multiplication is not so boring either, if you consider the action on the cosets of a subgroup H, and so on. All right. Then it turns out that, depending on the kind of group you're working with, it is also worthwhile to look at other kinds of actions, different for different kinds of groups. It is this difference that gives their distinct flavors to linear algebraic groups and to symmetric groups, say. But it's all in the framework of actions, and I hope that a really unified treatment will come out of this. So, if you have a linear algebraic group, a matrix group, the defining feature is that G acts on affine space A^n, and not just on some unstructured set of elements, but in a geometric sense, by linear operations. This will give part of the proof a geometric flavor: not in the sense of geometric group theory, but in the sense that there will be morphisms of varieties and so on. In the case of Sym(n), well, Sym(n) is by definition the group of permutations of a set of n elements. So, no structure there: G acts on {1, ..., n}, which is just an unstructured set. And of course it also acts on pairs of distinct elements, on triples, etc. You could say that these are really stupid actions, because there is no structure on the set being acted on. However, just the fact that these are really small sets compared to the size of the group turns out to be very useful. So, if what characterizes a matrix group is that it acts on an affine space geometrically, what characterizes the symmetric group or the alternating group is that they act non-trivially on really small sets. All right, very well. And let me just say that Terence Tao, who is not in the audience right now, introduced the term approximate groups, which is quite nice, except that I think it becomes nicer if you put a 'sub' in it: approximate subgroups.
So, it's basically equivalent, roughly equivalent as we will see tomorrow, to the concept of a set A that grows slowly. And even though this was not clear at first, it turns out that this name is really evocative, because it suggests that classification results, such as Larsen-Pink, can be relevant. But more generally, I think the right point of view is the following. What the name evokes is that statements about subgroups can be generalized to statements about approximate subgroups. In fact, as we will see, it is better to ask oneself a broader question: when can statements about subgroups be generalized to statements about sets, without assuming from the start that they grow slowly? Would such a generalization be absurd or meaningless? No, because what typically happens is this: you have some lemma about a subgroup H, and very often it can be transmuted into a lemma about an arbitrary (or almost arbitrary) subset A, where some instances of H stay H, some instances of H become A, and other instances of H become A^k, say, for some k. And then the statement magically still works, and the proof still goes through. Well, the first statement of that kind... do I have five minutes? All right, very well. I will give you something that is simple, paradigmatic, and also extremely useful: the orbit-stabilizer theorem. You have G acting on X. As I said, the stabilizer Stab(x) of an element x is simply the subgroup of G consisting of all elements that fix x, whereas Ax is the orbit of x under A: the set of all elements to which x gets sent by a set A, where A is a subset of G, not necessarily a subgroup. All right. So the orbit-stabilizer theorem: it has a grandiose name, even though it is something most of you have already taught to undergraduates. It is one of the first things they see, so it gets called a theorem.
Not a lemma. It says that, for H a subgroup of G, |H ∩ Stab(x)| = |H| / |Hx|, where Hx is the orbit of x under H. And the proof is just by counting. Very well, but the interesting thing is that this can be generalized to arbitrary sets. Is such an equality going to be true for an arbitrary set A? No, but you can split the equality into two inequalities, the one that says 'greater than or equal to' and the one that says 'less than or equal to', and then soften each of them slightly, in slightly different ways. So: for A a subset of G, |A^(-1)A ∩ Stab(x)| ≥ |A| / |Ax|; one copy of H has become A^(-1)A, let me say. And in the other direction, even more generally: for A and B subsets of G, |BA| ≥ |A ∩ Stab(x)| · |Bx|. Well, I have stated it in the way that makes clear what the proof is; it is just counting again. You could also divide and write, trivially, |BA| / |Bx| ≥ |A ∩ Stab(x)|, which makes it clear that this is just the other half of the statement. So the proof is, again: pigeonhole, then count. And it goes through. Maybe I even have time for one interesting application. As I said, the action by left multiplication is very boring, but the conjugation action is very interesting. So, in the action by conjugation, what is the stabilizer of an element g? (Not you. Oh, sorry, I should never treat a student of mine this way. Actually, he is not my student; he is my older doctoral brother.) The elements that fix g are the ones that commute with g: something fixes g means that the conjugate of g by h, namely h g h^(-1), is g itself, and that happens exactly when g and h commute. So the stabilizer is the centralizer C(g). And what is the orbit of an element g under the action by conjugation? Yes, the conjugacy class.
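Before specializing to conjugation, both softened inequalities can be sanity-checked by brute force. Here is a sketch in Python (conventions mine: permutations of {0, ..., n-1} as tuples acting by p[x]; the subsets A and B are arbitrary small examples), verifying |A^(-1)A ∩ Stab(x)| ≥ |A|/|Ax| and |BA| ≥ |A ∩ Stab(x)| · |Bx| for every point x:

```python
def compose(p, q):
    """Composition p∘q: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def check_orbit_stabilizer_for_sets(A, B, n):
    """Check both halves of the softened orbit-stabilizer statement
    for subsets A, B of Sym(n) acting on {0, ..., n-1}."""
    AinvA = {compose(inverse(a), a2) for a in A for a2 in A}
    BA = {compose(b, a) for b in B for a in A}
    for x in range(n):
        Ax = {a[x] for a in A}                        # orbit of x under A
        Bx = {b[x] for b in B}
        stab_AinvA = [s for s in AinvA if s[x] == x]  # A^{-1}A ∩ Stab(x)
        stab_A = [a for a in A if a[x] == x]          # A ∩ Stab(x)
        if len(stab_AinvA) * len(Ax) < len(A):
            return False
        if len(BA) < len(stab_A) * len(Bx):
            return False
    return True

# Arbitrary small subsets of Sym(4), neither one a subgroup:
A = [(1, 0, 2, 3), (0, 2, 1, 3), (1, 2, 0, 3), (0, 1, 3, 2)]
B = [(0, 1, 2, 3), (3, 1, 2, 0)]
```

Since both inequalities are theorems, the check passes for any nonempty choices of A and B; the point of the code is only to make the quantifiers concrete.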
Thus, for g in G and A = A^(-1) (let us assume A is symmetric, for simplicity), we get |A² ∩ C(g)| ≥ |A| / |{a g a^(-1) : a in A}|. And this turns out to be extremely useful, because giving upper bounds is often not so difficult (you can do it by geometric means in the case of SL_n, as we will see later), whereas giving lower bounds is much less evident. (Question from the audience about the denominator.) No, the denominator is the orbit of g under conjugation by A; you could also replace it by the full conjugacy class of g, and in particular you can state it that way, but it is correct as stated. At any rate, this means that if you manage to give some sort of upper bound for the denominator, then you get a lower bound for the left-hand side. So you get lots and lots of elements of A·A that commute with g. But only elements of a very special form commute with a given element g: if g is, say, a matrix with distinct eigenvalues, only matrices that are diagonal in the same basis commute with it. So this assures you that there are many elements of a particular form. That is the interpretation for matrix groups, but the statement holds in general, over many groups. So this gives you an example of a statement that is true over groups; you soften it, after changing its form a little, so that it becomes true over sets; and then you can apply it and get something very useful. All right. So tomorrow we will take a step back and look at some of the roots of the subject in additive combinatorics, and at how one of the statements that historically played an important role, the sum-product theorem, can be seen really as a consequence of, or as equivalent to, growth in the affine group, a very nice little group where we will prove growth. Then we will go into a sketch of how things work over SL_2, and at the end I will try to give you some idea of how things work over the symmetric group.
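The specialization to conjugation can likewise be verified by brute force. A sketch in Python (illustrative conventions: a permutation is a tuple p with p[i] the image of i; the function name is mine) checking that for symmetric A in Sym(n) and every g in Sym(n), |A·A ∩ C(g)| ≥ |A| / |{a g a^(-1) : a in A}|:

```python
from itertools import permutations

def compose(p, q):
    """Composition p∘q: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def centralizer_bound_holds(A):
    """Orbit-stabilizer for sets, specialized to the conjugation action:
    with A symmetrized, check |A·A ∩ C(g)| >= |A| / |{a g a^{-1} : a in A}|
    for every g in the ambient symmetric group."""
    A = set(A) | {inverse(a) for a in A}     # force A = A^{-1}
    AA = {compose(a, a2) for a in A for a2 in A}
    n = len(next(iter(A)))
    for g in permutations(range(n)):
        conj_orbit = {compose(compose(a, g), inverse(a)) for a in A}
        commuting = [h for h in AA if compose(h, g) == compose(g, h)]
        if len(commuting) * len(conj_orbit) < len(A):
            return False
    return True
```

Again, the inequality is a theorem, so the check succeeds for any nonempty A; an upper bound on the conjugation orbit immediately becomes a lower bound on the number of elements of A·A commuting with g, exactly as in the lecture.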
Any questions or comments? (Question: are there generating sets of symmetric groups for which the diameter is essentially as large as the bounds you stated?) Well, the following is an interesting example, because it involves the first set of generators that springs to mind: if you're looking at Sym(n), take just the transposition (1 2) and the n-cycle (1 2 ... n). And this example works in both ways. On the one hand, you can prove right away that the diameter here is on the order of (log |G|)², so it is much stronger than e^((log log |G|)^(4+epsilon)). On the other hand, it is also worse than log |G|, so it shows right away that log |G| cannot be a bound that holds universally: not for the symmetric group, and actually not for SL_n either. But in fact, nobody has constructed an example where the diameter is as large as (log |G|)³. It might be that this quadratic example is the worst possible; we don't know.
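For tiny n one can compute this example's diameter exactly by breadth-first search over the Cayley graph. A sketch in Python (conventions mine; the generating set is the transposition, the n-cycle, and their inverses, so that the word metric is symmetric):

```python
from collections import deque

def compose(p, q):
    """Composition p∘q: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def diameter_sym(n):
    """Diameter of the Cayley graph of Sym(n) with generators the
    transposition (0 1), the n-cycle (0 1 ... n-1), and their inverses."""
    transposition = tuple([1, 0] + list(range(2, n)))
    n_cycle = tuple(list(range(1, n)) + [0])
    gens = {transposition, n_cycle, inverse(n_cycle)}
    start = tuple(range(n))
    dist = {start: 0}
    queue = deque([start])
    while queue:
        p = queue.popleft()
        for s in gens:
            q = compose(s, p)
            if q not in dist:
                dist[q] = dist[p] + 1
                queue.append(q)
    return max(dist.values())
```

Since |Sym(n)| = n! the brute force is only feasible for very small n, but it is enough to watch the diameter grow with n, consistent with the quadratic behavior discussed above.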