Thank you for the opportunity to speak here. OK, now I managed to advance my slides. Good. So I will begin with the conjecture of Odlyzko and Poonen. This asks for the following. Let P(x) be a polynomial with integer coefficients, where the leading coefficient and the constant coefficient are fixed to be 1, and the rest of the coefficients are random, independent, and take the values 0 and 1 with equal probability. So this is the model that we are working with. And the problem is to show that such a random polynomial is irreducible in Z[x] with high probability, approaching 1 as the degree of the polynomial goes to infinity. So this is just one instance of the expectation that if you just write down a polynomial randomly, then you expect it to be irreducible. So here is a result towards this conjecture. We need to assume the extended Riemann hypothesis; I will tell you in a minute precisely what I mean by this. So consider a polynomial as in the above conjecture. Then the claim is that with very high probability, so the probability that it, yeah, so this minus sign on the slide should not be there, so the probability of failure is exponentially small, with d to the one half plus epsilon in the exponent. So with very high probability, the polynomial can be factored as q times r, where q is irreducible, and r is a polynomial of not too big degree, at most the square root of d, which is a product of cyclotomic polynomials. So it says that the polynomial is, in a sense, irreducible with very high probability. And you can have two objections as to why this statement doesn't prove the conjecture. One is that the conclusion is not that the polynomial is irreducible. In fact, this is not a big problem: very easily from this statement you can deduce that the probability that the polynomial is irreducible is asymptotic to 1 minus the square root of 2/(pi d).
So some specific constant over the square root of d, plus an error term of order 1/d. The point here is that if you just look at the possibility that your polynomial might vanish at minus 1, then this happens with roughly this probability, this constant over the square root of d, and this is what contributes this term. And this is not too hard to work out. So this is one instance of a cyclotomic factor that can come up. And for other low degree cyclotomic factors, with some work it is probably possible to work out precisely the probability that they arise. But if you want a statement with such precision, without the minus sign here in the exponent, then you need to allow for these cyclotomic factors. OK, so that was one objection. The other objection is that we assumed something, and this something is the extended Riemann hypothesis. It is the Riemann hypothesis for the Dedekind zeta functions zeta_K of all number fields K obtained by adjoining to the rationals a root of one of the polynomials that appear in the conjecture. So this is the extended Riemann hypothesis, and it is not known. So this result is conditional, and for this reason it doesn't resolve the conjecture. If you just want to prove the conjecture, then it is possible to weaken the reliance on the Riemann hypothesis quite significantly. If you are not interested in this error rate, then it is enough to know that there are no zeros of these Dedekind zeta functions very close to 1. And how close is that? Within (log d)^(3 + epsilon)/d. And what do we know towards this? Well, what we know is that if you draw a somewhat smaller disk around 1, of radius 1/(d log d), so the log d is not in the numerator with some power but instead in the denominator, then we know that there is at most one zero.
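As an editorial aside (not from the talk): the probability that the polynomial vanishes at minus 1 really is close to the square root of 2/(pi d), and this is easy to check by simulation. This is a hedged sketch of that check; the degree 400 and the trial count are arbitrary choices.

```python
import math
import random

def vanishes_at_minus_one(d, rng):
    """Sample a 0/1 polynomial of degree d with leading and constant
    coefficient fixed to 1, and test whether it vanishes at x = -1."""
    # P(-1) = (-1)^d + sum_{i=1}^{d-1} a_i * (-1)^i + 1
    value = (-1) ** d + 1
    for i in range(1, d):
        if rng.random() < 0.5:        # a_i = 1 with probability 1/2
            value += (-1) ** i
    return value == 0

rng = random.Random(0)
d, trials = 400, 20000
hits = sum(vanishes_at_minus_one(d, rng) for _ in range(trials))
empirical = hits / trials
predicted = math.sqrt(2 / (math.pi * d))   # the constant from the talk
```

For d = 400 the prediction is about 0.04, and the Monte Carlo estimate lands within statistical error of it, consistent with the local central limit theorem heuristic behind this constant.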
So there could be an exceptional zero of the Dedekind zeta function which, in principle, could be very, very close to 1, but there are no more. In fact, there is a version of this result which allows for an exceptional zero quite close to 1. So the exceptional zero is perhaps not the bigger issue. The bigger issue is really that the zero-free region I would like to have has the log d in the numerator. And this bound that we have is quite old now, and it doesn't look like anyone has an idea how to improve on it. So the required region is still close to what we know, but it is probably very difficult to get it unconditionally. A final remark about this statement: of course, you don't have to consider this particular distribution taking the values 0 and 1. Instead you could take an arbitrary distribution on the integers, and in fact you can even vary it with the degree as the degree grows, subject to some mild conditions that I don't want to state now. The point is that the method is quite general, and I will tell you about that in a minute. OK, so there are also unconditional results on this problem. So let P again be a random polynomial as in the Odlyzko–Poonen conjecture: the coefficients are 0 or 1, independent, each with probability 1/2. There is a result of Konyagin which gives a lower bound on the probability that this polynomial is irreducible, of the form constant over log d. We would like to have something going to 1, and instead we have something going to 0, but at least it goes to 0 very slowly. And very recently, this has been improved a lot by Bary-Soroker, Koukoulopoulos, and Kozma; I believe one of them gave a talk in this seminar about a year ago. They proved that, in fact, the probability can be bounded below by an absolute constant. But we still don't have that it goes to 1.
Some further unconditional results are available if you change the distribution in question. So now the coefficients, other than the leading coefficient and the constant coefficient, are still independent, but they take values between 1 and N, where N is some fixed number, which should be thought of as a parameter. There is a result of Bary-Soroker and Kozma which proves the analogue of the conjecture in this setting if N is an integer divisible by at least four distinct primes; the smallest example is 210. And then in the paper that I have already mentioned, this is improved to any integer which is at least 35. But the probability they get is not as good as what is available under the Riemann hypothesis: their error is polynomial in d (the n on the slide should be d; sorry about that). And of course this method will also work for more general distributions, but they need a somewhat more spread out distribution than something supported on just two points. If the distribution is sufficiently spread out, so that it has a nice Fourier transform, then it works. OK, another comment is about the Galois group in question. If you know that the polynomial is irreducible, you might also be interested in what its Galois group is. And if you have ever tried to find examples for a problem sheet in a Galois theory course, then you will know that finding polynomials whose Galois group is the symmetric group is very easy, because most of the things you would write down have the symmetric group as Galois group. But that is something we can't quite prove. What can be done is that in all of these results, the statement that the polynomial is irreducible can be exchanged for the statement that the Galois group contains the alternating group.
And it is difficult to distinguish between the alternating group and the symmetric group. This is actually a very frustrating problem. When I was first working on this, I made some computer experiments and tested all the polynomials that appear in the Odlyzko–Poonen conjecture up to degree 31; that's a little more than a billion polynomials. And none of them has Galois group exactly equal to the alternating group. So there are some whose Galois group is contained in the alternating group, but none for which it is exactly the alternating group. And still, we are unable to prove even that for most of them the Galois group is not the alternating group. Peter, we have a question in the chat by Jacob Strypoop. Maybe Jacob, you want to unmute and ask away. I don't know how good my microphone is. I hear you fine. OK. The constant term changed from 1 to a_0 between the statement of the OP conjecture and this unconditional result. Is the a_0 significant? Say again? The constant term in the polynomial on the previous slide. Yes, so if you allow 0 to be the constant term, say the constant term is also 0 or 1 with probability 1/2 each, then the constant term will be 0 with probability 1/2. And if the constant term is 0, then the polynomial is not irreducible, because you can factor out x. But in the original conjecture it was always 1, right? But now it's allowed to vary? I might be missing something. What do you mean? Wasn't the constant coefficient random? Oh, you mean in this one here, right? So the way I formulated it is that the values are between 1 and N, so 0 is excluded. Basically, the only thing I was careful to exclude is that the constant term can't be 0. They have a more general version of the result, which would perhaps allow an arbitrary interval, but in that case they modify the model to exclude 0 from the possible values of a_0.
So that's the only thing that you need to be careful about. OK, thank you. Thank you for the question. We have a further question by Jordan Ellenberg. Jordan, maybe. Yes, please. Oh yeah, so I was trying to do this in my head and I couldn't. The discriminant of a polynomial of degree d with 0,1 coefficients, I expect, is pretty big. And one could heuristically say, OK, an integer of size n is a square about 1 over root n of the time, and whether the discriminant is a square is what distinguishes whether you are in A_d or S_d. So does one just expect, on naive heuristic grounds, that because the discriminants are so large, the discriminant should essentially never be a square? Yes, so the discriminant is expected to be roughly of size d to the d, with some constants. In principle it could be smaller for some polynomials, but somehow that shouldn't happen. And then the probability that such a thing hits a square is just extremely low. So it is not surprising. Agreed. Still, it will occasionally be a square. Most of the time this happens, you are in a situation where the Galois group is imprimitive. So it could happen that your polynomial is a polynomial in x squared or x cubed or something like that, and in that case your Galois group will be imprimitive, and sometimes then the discriminant is a square. Those are basically the situations that I found. But it is still unproven. You see, the problem is that the discriminant is a complicated polynomial in several variables, and you are plugging in only 0 and 1 for the variables. So handling this with analytic number theory methods is really frightening. But anyhow, it's a problem that I really like, and so far I was not able to solve it. Other questions? OK, so please stop me if there is something. So let me just remind you of the theorem that we proved with Emmanuel. We assume the extended Riemann hypothesis.
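A brief editorial aside on the discriminant discussion above, before the theorem is restated. The discriminant of an integer polynomial can be computed as disc(f) = (-1)^(d(d-1)/2) Res(f, f')/lc(f), with the resultant taken as the determinant of the Sylvester matrix; this is a hedged, small-degree sketch of that computation (exact rational arithmetic, no optimization).

```python
from fractions import Fraction

def resultant(f, g):
    """Resultant of two polynomials given as coefficient lists,
    highest degree first, via the determinant of the Sylvester matrix."""
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    M = [[Fraction(0)] * size for _ in range(size)]
    for i in range(n):               # n shifted copies of f
        for j, c in enumerate(f):
            M[i][i + j] = Fraction(c)
    for i in range(m):               # m shifted copies of g
        for j, c in enumerate(g):
            M[n + i][i + j] = Fraction(c)
    # Gaussian elimination; the determinant is the product of the pivots.
    det = Fraction(1)
    for col in range(size):
        piv = next((r for r in range(col, size) if M[r][col] != 0), None)
        if piv is None:
            return 0
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            det = -det
        det *= M[col][col]
        for r in range(col + 1, size):
            factor = M[r][col] / M[col][col]
            for c in range(col, size):
                M[r][c] -= factor * M[col][c]
    return int(det)

def discriminant(f):
    """disc(f) = (-1)^(d(d-1)/2) * Res(f, f') / lc(f)."""
    d = len(f) - 1
    fp = [c * (d - i) for i, c in enumerate(f[:-1])]   # derivative, high first
    sign = -1 if (d * (d - 1) // 2) % 2 else 1
    return sign * resultant(f, fp) // f[0]
```

Sanity checks against classical formulas: disc(x^2 + x + 1) = -3, disc(x^2 - 2) = 8, and disc(x^3 + px + q) = -4p^3 - 27q^2, so disc(x^3 + x + 1) = -31. One can then test 0/1 polynomials for square discriminants with `math.isqrt`; the point of the question above is that squares should be rare.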
And we take a random polynomial as in the Odlyzko–Poonen conjecture. So the coefficients that you see here are independent, 0 or 1 with probability one half each. And then the claim is that, with very high probability, the polynomial can be factored as an irreducible polynomial times a factor of small degree which contains only cyclotomic factors. So I want to tell you very briefly the main scheme of the proof. The main reason I want to tell you this is that it's quite a general method, and later in the talk I will tell you about some newer developments where this method was used. So the proof is based on the following idea, which is a consequence of the Chebotarev density theorem, or in this case actually a special case of it called the prime ideal theorem, which says the following. Let's take a fixed polynomial with integer coefficients, and I want to determine whether it is irreducible or not. Here is one way of doing this. If the polynomial is irreducible, then it has, on average, one root in the finite field F_p, if you average over the primes p. So this is a fact. And what you can do with this is take a general polynomial now, solve it in finite fields for various primes, and for each prime count how many roots you have. The average number of roots tells you how many distinct irreducible factors your polynomial has in Z[x], because each such factor contributes one on average. This simple fact is the main idea behind the proof. And here is how it is used. I take a random polynomial as in the conjecture, and here is the quantity I'm interested in: the expected number of distinct irreducible factors of my polynomial.
Then, using the observation on the previous slide, I can write this as the expectation over the random polynomial of the expectation, for a random prime chosen in some interval, say [N, 2N], of the number of roots of the polynomial in F_p. So if I just look at what's inside the outer expectation, that is the number of distinct irreducible factors of P, and then I average over the polynomial. And of course there is an error term, which I will tell you about in a minute. OK, now comes the big idea, which is always the same: you have two expectations, so you exchange them. So you first average over the prime, and then over the polynomial. And if you fix the prime, you can take this a step further: you take all elements of the finite field, and you sum up the probability that a particular element is a root of the polynomial. That is, by definition, the expected number of roots. OK, so I have this expression. So basically, the hope is that I will be able to say something about the probability that the random polynomial vanishes at a given element a of a given F_p, and that when I plug that into this formula, I get that the average number of distinct irreducible factors of P is roughly 1. And then I will be happy. Now, of course, one thing I'm worried about is this error term, and this is where the zeros of the Dedekind zeta functions come in. If you want to prove the fact on the previous slide, that an irreducible polynomial has one root on average in F_p, the proof is very similar to the usual proof of the prime number theorem; the difference is that you need to plug in the Dedekind zeta function of some number field instead of the Riemann zeta function.
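As a quick editorial illustration of the root-counting fact behind this scheme (not part of the proof): a polynomial with two distinct irreducible factors over Q should have two roots in F_p on average over primes. A brute-force count for f(x) = (x^2 + 1)(x^2 - 2) = x^4 - x^2 - 2, over primes in an arbitrary small range:

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

def roots_mod_p(coeffs, p):
    """Count roots in F_p of the polynomial (coefficients, highest first)."""
    count = 0
    for x in range(p):
        value = 0
        for c in coeffs:              # Horner evaluation mod p
            value = (value * x + c) % p
        if value == 0:
            count += 1
    return count

# f(x) = (x^2 + 1)(x^2 - 2): two distinct irreducible factors over Q,
# so the prime-averaged root count should be close to 2.
f = [1, 0, -1, 0, -2]
ps = [p for p in primes_up_to(3000) if p > 100]
average = sum(roots_mod_p(f, p) for p in ps) / len(ps)
```

Here x^2 + 1 has two roots mod p exactly when p is 1 mod 4 and x^2 - 2 when 2 is a quadratic residue, each about half the time, so the average hovers near 2, as the Chebotarev heuristic predicts.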
And then the error depends on N, so on how far up you go with the primes, and it depends on where the zeros of the Dedekind zeta function are. If you assume the Riemann hypothesis, then this error is small as soon as you take primes larger than some polynomial of the degree, maybe d squared; I don't exactly remember. But the point is that you only need to go up to primes of size some power of the degree, and this is something we will care about very much later. So the goal is to show that the probability that a given element of a finite field is a root of the polynomial is roughly 1/p. And in fact, what we are going to show is that if you fix a prime p and an element a of the finite field, and you evaluate your random polynomial at a, then you get a roughly uniform distribution on F_p. So not only does it take the value 0 with probability roughly 1/p; in fact, it takes each value with probability roughly 1/p. OK. So it turns out that you can think about this as a kind of random walk on the finite field F_p. This is the random walk you need to consider. The states are x_0, x_1, et cetera. You start the walk from 1, so x_0 = 1, and this is how you propagate it: at each step you multiply the previous state by a, and then you add something random, either 0 or 1, so x_{j+1} = a x_j + b_j, with independent steps. So this is a nice Markov chain on F_p. And it turns out that if you formally make the substitution b_j = a_{d-1-j}, then the distribution of P(a) is the d-th step of this random walk, except that in the last step you are forced to move by plus 1, and plus 0 is not allowed. This is because the leading coefficient and the constant term were fixed to be 1. OK, so basically we got a nice Markov chain on F_p.
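The chain just described is small enough to study exactly by dynamic programming on the probability vector. This is an editorial sketch (p = 101 and the multipliers are arbitrary choices): the multiplier a = 2 has large multiplicative order mod 101 and mixes quickly, while a = 1, the bad case discussed next, stays far from uniform after the same number of steps.

```python
def step_distribution(mu, a, p):
    """One step of the chain x -> a*x + b, with b uniform on {0, 1}."""
    new = [0.0] * p
    for x, mass in enumerate(mu):
        if mass:
            new[(a * x) % p] += mass / 2
            new[(a * x + 1) % p] += mass / 2
    return new

def tv_to_uniform(mu, p):
    """Total variation distance between mu and the uniform distribution."""
    return 0.5 * sum(abs(m - 1.0 / p) for m in mu)

def tv_after(a, p, steps):
    mu = [0.0] * p
    mu[1] = 1.0                       # the walk starts at x_0 = 1
    for _ in range(steps):
        mu = step_distribution(mu, a, p)
    return tv_to_uniform(mu, p)

p = 101
fast = tv_after(2, p, 300)   # ord(2 mod 101) = 100: rapid mixing
slow = tv_after(1, p, 300)   # a = 1: plain +0/+1 walk, still far from mixed
```

For a = 1 the walk is 1 plus a Binomial(300, 1/2) reduced mod 101, whose spread is only a few dozen residues, so a visible chunk of F_p carries essentially no mass.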
What Markov chains do is that the distribution of the j-th step converges to the uniform distribution on the finite field, and this is precisely what we want to show. And now the question is how fast it converges to the uniform distribution. Remember that we have to take somewhat large primes compared to the degree so that the error term in the Chebotarev density theorem is sufficiently small. But this is fighting against that, because here, for a given prime, the degree has to be sufficiently large for you to get this equidistribution. So here I need to introduce a definition from the theory of Markov chains: the mixing time, or delta-mixing time, of the chain. Delta is a parameter between 0 and 1, and the quantity is denoted d(delta). It is the smallest j, the smallest time, such that the distribution of the random walk after j steps, which I denote mu_j, is at distance at most delta from the uniform distribution. Here I am using the total variation distance, but for the purposes of this talk it doesn't really matter which norm you use. OK. So what is known about this? Here is a result which is based on some work of Konyagin. It emerged in the collaboration of these four people; they didn't publish it, but the details are now available in a paper of Emmanuel Breuillard and myself. So here is what you can say. First of all, you need to make some assumption on a, this element of the finite field. Obviously you don't want it to be 0, because then the random walk is not very interesting: every time you multiply by 0, you get back to 0, so the walk just moves between 0 and 1, and it certainly does not equidistribute. You also don't want a to be equal to 1, because then you just have a random walk on F_p where at each step you either stay where you are or move to the right by 1.
And then you need to take at least p steps just to have any chance of going around the circle, and in fact, to get equidistribution, you need to take about p squared steps, and that's just way too much. Similar problems occur if the multiplicative order of a is small. So you need to assume that the multiplicative order of a is not too small: it has to be at least (log p)^(1 + epsilon). This is actually not too bad. It excludes some elements of the finite field from the consideration, and you have to deal with them separately, but there are very few of these guys, and they don't cause a big problem. So if the multiplicative order of a is not too small, then the mixing time is actually quite good. What you should have in mind is that for delta you take some negative power of p; then the estimate you get for the mixing time is (log p)^(2 + epsilon). So what does this say about the relationship between the degree d and the size of the primes p over which you are averaging? This constraint tells you that the degree should be at least (log p)^(2 + epsilon). And the other constraint, from Chebotarev, was that the prime has to be at least maybe the degree to the 10th power, something like that. These two constraints are clearly very much compatible with each other, and then basically we are in business. So you plug this 1/p into the formula, you get precisely 1, and, as I explained, the error terms are controlled. Then the proof just works, and if you work out everything, you get the statement that I stated at the beginning. OK, so far this was kind of old news. Now I want to tell you about some more recent developments, which are about other families of random polynomials. So what other families of random polynomials are interesting?
I have to talk first about the classical situation, even though in this setting the method I'm talking about hasn't been applied, and probably other methods do better. Nevertheless, let me just tell you what it is. The classical setup is that you fix the degree d and take a large integer H, and you take a random polynomial with independent coefficients uniformly distributed between minus H and H. So this is dual, in a sense, to the Odlyzko–Poonen situation: there you fix the height of the polynomial and let the degree go to infinity; here you fix the degree and let the height go to infinity. This was originally studied by van der Waerden, who proved that the probability that the polynomial is irreducible is 1 minus O(1/H). This has been made more precise by Chela, who worked out the precise value of the constant: the probability that the polynomial is irreducible is 1 minus some constant over H, and the value of this constant is explicitly determined. But what it is is not very interesting for the purposes of this talk, and it would take half a slide to write down, so I didn't. And then there was a big conjecture of van der Waerden, which asked: can you replace "irreducible" here by "the Galois group is the symmetric group" and still have the same probability? This had been open for quite a while, and very recently Manjul Bhargava proved it. He actually talked about this in this seminar series not too long ago, so I just wanted to mention it; I will not say more about it now. So here is another setting people are interested in, and that's characteristic polynomials of random matrices. Here are the details of the model.
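Before moving to matrices, a small editorial check of the fixed-degree, growing-height picture. For a monic cubic, reducibility over Q is equivalent to having an integer root dividing the constant term, so the reducible proportion can be estimated directly; the monic restriction and degree 3 are simplifying assumptions of this sketch, not the model of van der Waerden's theorem.

```python
import random

def has_integer_root(a, b, c):
    """A monic cubic x^3 + a x^2 + b x + c is reducible over Q iff it has
    a linear factor, i.e. an integer root, which must divide c."""
    if c == 0:
        return True                   # x = 0 is a root
    for r in range(1, abs(c) + 1):
        if abs(c) % r == 0:
            for s in (r, -r):
                if s ** 3 + a * s ** 2 + b * s + c == 0:
                    return True
    return False

def reducible_fraction(h, trials, rng):
    """Monte Carlo estimate of the reducible proportion at height h."""
    hits = 0
    for _ in range(trials):
        a, b, c = (rng.randint(-h, h) for _ in range(3))
        if has_integer_root(a, b, c):
            hits += 1
    return hits / trials

rng = random.Random(1)
frac_small = reducible_fraction(5, 4000, rng)
frac_large = reducible_fraction(50, 4000, rng)
```

The reducible proportion visibly shrinks as the height grows, in line with the 1 minus constant over H behaviour described above.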
So you take a random matrix M with independent, identically distributed entries with respect to some fixed distribution on the integers. If you like, it can be 0 and 1 with probability one half each, or something else. And now you take the polynomial to be the characteristic polynomial of this matrix, and you are interested in various properties of this polynomial. This has a massive literature, and in particular one question that was asked is: is it true that this polynomial is irreducible in Z[x] with high probability? Recently this was proved, conditionally on the same hypothesis as above, by Sean Eberhard. He proved that under the extended Riemann hypothesis, the probability that the characteristic polynomial is irreducible is 1 minus an exponentially small error in the size of the matrix. (Again, this n is supposed to be the same as the d; apologies for that.) And the constants that you have here depend only on the common distribution of the entries. He also obtained unconditional results, which are based on the method of Bary-Soroker and Kozma. There he needs to assume something about the common distribution of the entries: that there is an integer N such that the distribution of the entries modulo N is uniform, and N is divisible by at least four distinct primes. So it is somewhat more restricted, and also the error term is weaker: in that case, he just proves that the probability converges to 1. On the other hand, that's an unconditional result. So the proof of this result is based on the same scheme that I told you about. And if you remember, what this scheme of proof really cared about, regarding the random polynomial, is the probability that a given element of a given finite field is a root of the polynomial. If that can be estimated well, then you can basically run the same scheme and prove that the polynomial is irreducible.
So the main input in this proof is this result, which says that if you take a prime p and a fixed element lambda in Z/pZ, then the probability that M minus lambda times the identity is nonsingular is equal to the expression you see here. What you should have in mind is that the prime p is large, so the first factor, 1 minus 1/p, is the most relevant one: what you have here is roughly 1 minus 1/p. Basically this tells you that the probability that lambda is a root of the characteristic polynomial is roughly 1/p, and this is precisely the input we need for that scheme of proof to work. What is important here is that the constant depends only on the common distribution of the entries of M, and not on lambda or the prime. Actually, that's the main point: it doesn't depend on lambda. In fact, Sean proved a much more general statement. The crucial assumption is that the entries of the matrix are independent, but they don't really have to be of this special form; the distributions can be rather arbitrary distributions on F_p, modulo some very mild non-degeneracy conditions. One thing you certainly don't want is that a particular entry equals some fixed element of the finite field with probability 99%. Those are the sort of things you need to exclude, but it's actually quite general. And then, very recently, there was another paper, by Ferber, Jain, Sah, and Sawhney, who considered a variation of this problem where the matrix is symmetric. So you take a random symmetric matrix with independent plus-minus 1 entries: the entries on and above the diagonal are independent, and below the diagonal they are whatever they have to be to make the matrix symmetric. And again, this is conditional on the extended Riemann hypothesis.
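A hedged numerical check of this kind of statement (an editorial sketch, not Eberhard's argument): for entries uniform mod p, the matrix M minus lambda times the identity is again uniform, so the singularity probability is exactly 1 minus the product over k of (1 - p^(-k)), which is roughly 1/p for large p. A Monte Carlo estimate should match; p = 7, n = 12, and lambda = 3 are arbitrary small choices.

```python
import random

def is_singular_mod_p(rows, p):
    """Gaussian elimination over F_p; returns True iff det = 0 mod p."""
    rows = [r[:] for r in rows]
    n = len(rows)
    for col in range(n):
        piv = next((r for r in range(col, n) if rows[r][col] % p), None)
        if piv is None:
            return True
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], -1, p)     # modular inverse (Python 3.8+)
        rows[col] = [(x * inv) % p for x in rows[col]]
        for r in range(col + 1, n):
            f = rows[r][col]
            if f:
                rows[r] = [(x - f * y) % p for x, y in zip(rows[r], rows[col])]
    return False

rng = random.Random(2)
p, n, lam, trials = 7, 12, 3, 3000
hits = 0
for _ in range(trials):
    m = [[rng.randrange(p) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        m[i][i] = (m[i][i] - lam) % p        # shift by -lambda * identity
    if is_singular_mod_p(m, p):
        hits += 1
empirical = hits / trials
predicted = 1.0
for k in range(1, n + 1):
    predicted *= 1 - p ** (-k)
predicted = 1 - predicted                    # P(singular), uniform entries
```

The universality results discussed in the talk say that for large n the same answer should emerge from non-uniform entry distributions as well, which is the genuinely hard part.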
And they conclude that the probability that the characteristic polynomial is irreducible is equal to 1 minus some error, where the error is a little worse than before. The main point of this work is that you need a version of this result where the entries of the matrix are not independent: you have a symmetric matrix, so only the entries on and above the diagonal are independent, and the entries below the diagonal are forced to make the matrix symmetric. And that was a quite non-trivial step. So I stated the theorem in the form that appears in the paper, with plus-minus 1 entries, but they do say in the paper that they can handle more general distributions, and they leave the details to the reader; so you can work it out yourself if you want. Yeah, and I guess there is probably a superfluous minus sign here as well; sorry about that. Excuse me, I have a question. So the first theorem is actually unconditional, but for the second one you need the extended Riemann hypothesis? So, the theorems about irreducibility, from either paper, are both conditional on the extended Riemann hypothesis. The theorems about the random matrices themselves, these are unconditional. How about the last one? The last one is about the same statement, but for symmetric matrices; they need the extended Riemann hypothesis. The extended Riemann hypothesis only comes in through the scheme of proof that I explained: to show that the characteristic polynomial is irreducible with high probability, for that you need to assume the Riemann hypothesis. The various statements about the distributions of the characteristic polynomials modulo p are unconditional. I see, thank you. OK, thank you for the question. OK, so in the last 10 minutes or so, I want to tell you about another setting that we consider now. And here we move from polynomials in one variable to polynomials in several variables.
So let me describe the setting. We take a connected, simply connected, semisimple linear algebraic group G defined over the rationals; you can just think of SL_2 if you want. Actually, I also prefer to think of SL_2. And I denote by the same symbol G the complex points of this algebraic group. OK, then I also need a finitely presented group Gamma_w. It is generated by k generators, s_1 up to s_k, and these generators are subject to relations w_1, ..., w_r. These are words in the letters s_1 up to s_k, where negative powers are also allowed, and the condition is that when you evaluate these words on the generators, you must get the identity element of the group. Now, if you have this finitely presented group Gamma_w and you have your algebraic group G, then you can talk about the representation variety of Gamma_w, denoted Hom(Gamma_w, G), and it is defined as follows. You take the k-tuples in your group G which satisfy the defining relations of your finitely presented group. OK, so what is this? If you want to write down a homomorphism from this finitely presented group to G, then what you need to do is decide where you map the generators: for s_1 up to s_k, you need to choose elements g_1 up to g_k in the group G, which will be the images of your generators. Now you need to check that these elements g_1 up to g_k satisfy the relations of the group, because if they don't satisfy the relations, then this will not be a homomorphism from Gamma_w to G, and on the other hand, if these conditions are satisfied, then you do get a homomorphism. So these k-tuples are precisely the homomorphisms from Gamma_w to G. And if you think about it this way, then this is an algebraic variety, a subvariety of G^k, because it is defined by these equations, and these are polynomial equations. OK.
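As an editorial toy model of these word equations, one can replace G by the finite group SL_2(F_5) and literally enumerate the tuples satisfying a single relation. For the commutator relator w = s1 s2 s1^{-1} s2^{-1}, the number of solutions has a classical closed form to check against: the number of commuting pairs in a finite group equals the order of the group times its number of conjugacy classes. The theorem below concerns random relators, which this sketch does not simulate.

```python
from itertools import product

P = 5

def mat_mul(A, B):
    """Product of 2x2 matrices over F_5, stored as tuples of tuples."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a * e + b * g) % P, (a * f + b * h) % P), \
           ((c * e + d * g) % P, (c * f + d * h) % P)

def mat_inv(A):
    """For det = 1 the inverse is the adjugate."""
    (a, b), (c, d) = A
    return ((d % P, -b % P), (-c % P, a % P))

# Enumerate SL2(F_5): 2x2 matrices with determinant 1.
G = []
for a, b, c, d in product(range(P), repeat=4):
    if (a * d - b * c) % P == 1:
        G.append(((a, b), (c, d)))

identity = ((1, 0), (0, 1))

# F_5-points of Hom(Gamma_w, G) for the single relator
# w = s1 s2 s1^-1 s2^-1: pairs (g, h) with g h g^-1 h^-1 = identity.
count = 0
for g in G:
    for h in G:
        w = mat_mul(mat_mul(g, h), mat_mul(mat_inv(g), mat_inv(h)))
        if w == identity:
            count += 1

# Independent check: commuting pairs = |G| * (number of conjugacy classes).
classes = set()
for g in G:
    cls = frozenset(mat_mul(mat_mul(x, g), mat_inv(x)) for x in G)
    classes.add(cls)
```

Here |SL_2(F_5)| = 120 and the group has 9 conjugacy classes, so the enumeration finds 1080 solutions; counting points of these varieties over finite fields is exactly the kind of data the Chebotarev-style scheme consumes.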
So we got some nice subvariety of G^k, but we don't care about all of it. Inside it, there is something called the degenerate locus, denoted V_deg. This is the collection of tuples g_1 up to g_k that do not generate a Zariski dense subgroup of G, so tuples contained in some proper algebraic subgroup of G. For instance, in the SL_2 case, what you should think about is that maybe all of these guys are upper triangular matrices. That part of the variety is not so nice, and you want to exclude it. So the object that I really care about is the representation variety minus the degenerate locus. This is some other variety, call it Z_w, and I'm interested in its properties. This is actually something studied a lot in group theory; it is useful for various purposes. And what I am going to study is a random group. So what is a random group? I take w_1 up to w_r randomly. I fix some numbers L, k, and r: k is the number of generators, r is the number of relators, and the relators are chosen randomly and independently. There are two possible distributions that you can consider: one is a uniformly chosen element among the reduced words of length L; the other option is that you take the simple symmetric random walk on the free group and choose your relators according to its distribution. The proofs work for either model. And then, this is what we can prove. It is joint work with Oren Becker and Emmanuel Breuillard, and I say work in progress: everything that I am going to state in this theorem is actually written up, but we are still working on making the result nicer, so we haven't yet released the paper. So again, you need to assume the extended Riemann hypothesis.
So this is the Riemann hypothesis for some Dedekind zeta functions that can come up. So if you want, you can assume it for all Dedekind zeta functions. And then the statement is that there are some constants, epsilon and c, such that for all L, the following statements hold with very high probability. So the probability of failure is exponential in L to the epsilon. And this time I managed to write this down without the minus. OK, so here are the things that we can say. So the first is that if the number of relators is at least as large as the number of generators, then this variety Z_w will be empty. If the number of relators is strictly smaller than the number of generators, then we can tell you what the dimension of this variety will be: it will be (k minus r) times the dimension of the group. OK, so before I give you the rest of the statements, let me tell you why this is what you would expect. So the ambient space for this variety is G^k. So that has dimension k times the dimension of G. And then you have r equations of this form. But in fact, each of these relator equations amounts to as many equations as the dimension of the group. So for each relator you lose dim G dimensions, and in total you lose r times dim G. So this is very much what is expected. So perhaps there is just one thing which might be a little bit surprising, and that's the case when r is equal to k. So then the natural guess would be that the dimension of this variety is 0. That would mean that you maybe have some finite number of isolated points. But this turns out not to be the case. And the reason for this is that this representation variety is invariant under conjugation by elements of G. So if you conjugate each coordinate by an element of G, then it will preserve the variety. So if the variety is not empty, then it has to be of dimension at least dim G, contradicting the naive count. So this is the reason why it's empty when r is equal to k.
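The heuristic dimension count just sketched, in one line (each relator imposes, generically, dim G polynomial conditions on the ambient space G^k):

```latex
\dim Z_w
  = \underbrace{k \dim G}_{\text{ambient } G^k}
  \;-\; \underbrace{r \dim G}_{\dim G \text{ conditions per relator}}
  = (k-r)\dim G
  \qquad (r < k).
```

For example, for G = SL2 one has dim G = 3, so k = 2 generators and r = 1 relator should give a variety of dimension 3; and for r = k the count gives 0, which is incompatible with the conjugation invariance unless the variety is empty.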
So if r is equal to k minus 1, then on top of determining the dimension of this variety, we can also show that it is irreducible over the rationals. And if r is even smaller, so it is at most k minus 2, then we can also show that the variety will be absolutely irreducible. OK, so I am roughly out of time, so I'm not going to say much about the proof; I wouldn't have said too much about it anyway. So the final comment I want to make is about this error term. So one aspect of this theorem that I'm personally not so satisfied with is the exponent. I would really like to have here an exponential error rate, so I would want to have this statement with epsilon equal to 1. And so that basically comes from a statement about random walks, about mixing times of random walks. And so the point is that there are some results available in special cases due to Helfgott. So he did the SL2 and the SL3 cases, and maybe a little bit more. And then in full generality by Breuillard, Green and Tao, and by Pyber and Szabó, who proved that if you take the simple symmetric random walk on the F_p points of these semisimple groups, then the mixing time will be at most polylogarithmic in p. And instead of this polylogarithmic, we would like to have logarithmic, to get the exponential rate. And so that's available in the SL2 case, but with a caveat. And the caveat is, so this is a result of Bourgain and Gamburd. So what they showed is that if you don't consider all primes, but you allow for a small exceptional set of primes, then outside of this exceptional set of primes, you will have the logarithmic mixing time. And so now basically the point is that the method I described to you to show irreducibility, which I also adjusted to work in the setting of this theorem, does not require these mixing time results for all primes. So it is fine to exclude a small set of primes, and it will still work.
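The two mixing-time bounds contrasted here can be summarized as follows (writing G(F_p) for the F_p points; the second bound is the SL2 result, valid outside a small exceptional set of primes):

```latex
t_{\mathrm{mix}}\bigl(G(\mathbb{F}_p)\bigr) \le (\log p)^{O(1)}
\quad\text{in general, versus}\quad
t_{\mathrm{mix}}\bigl(\mathrm{SL}_2(\mathbb{F}_p)\bigr) = O(\log p),
```

and it is the O(log p) bound, once extended beyond SL2, that would upgrade the error rate from exponential in L^epsilon to exponential in L.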
So it is actually enough for us to plug in this result. But the problem is that currently this is only known for SL2, and my co-authors are working on extending it to the general case. And if that is done, then we will have the exponential rate here, with which I will be much happier. So I stop here, and I thank you for your attention.