So I want to talk about two-torsion in class groups of number fields. Everything I'm going to say today is joint work with Dante Bonolis. Okay, so I'll start with an introduction. What's the setting? Let's say we have a number field K. I'm going to let d_K be its discriminant, Cl_K its class group, and for any prime number p, I'm going to let Cl_K[p] be the p-torsion subgroup of Cl_K. All right, and here is a classical problem. Given two integers, n at least two, and p a prime number, people are interested in bounding the size of the subgroup Cl_K[p], the p-torsion subgroup of Cl_K, for instance in terms of the discriminant of K, but it could be something else depending on K, as K runs over number fields of degree n. So the degree n is fixed, the prime p is fixed, and we view these two integers as being fixed. It means that when we have bounds for the p-torsion subgroup of the class group of K, there will be constants which depend on these two integers, and we will not care about this dependence, okay? All right, so here is a classical theorem, due to Landau. For any n at least two and any epsilon positive, for any number field K of fixed degree n, the class group of K is bounded above by roughly the square root of the absolute value of the discriminant: |Cl_K| is at most |d_K|^(1/2 + epsilon), up to a constant hidden in this notation. That constant may depend on n and on epsilon, but as I already said, we don't care about its dependence on either n or epsilon, okay? So that's a very classical result. And here is the conjecture. It says that if now, instead of looking at the full class group of K, you look at the p-torsion subgroup for a fixed prime number p, then the cardinality of this group should be much, much smaller. In particular, it should grow more slowly than any power of the absolute value of the discriminant, okay?
So again, the constant here may depend on n, on p, on epsilon, but we don't care about these dependencies. Okay, so this conjecture was first raised as a question rather than a conjecture by Brumer and Silverman in '96. And then it appeared in the literature in different places in the nineties, for instance in the work of Duke and of other people, and then it became classical, but for some time there were absolutely no results close to it. So it became a folklore problem which is really hard, okay? Still, there is of course one case for which the conjecture is known. By Gauss's genus theory, it's not hard to prove that in the case (n, p) = (2, 2), so when you're looking at the two-torsion of quadratic fields, you can bound the two-torsion subgroup by roughly the divisor function of the discriminant. In particular, it's smaller than any power of the discriminant. Okay, so that's the only case where the conjecture is known. And indeed, even proving that there is some saving for some fixed pair (n, p), that there is some delta_{n,p} positive such that for any number field K of degree n the cardinality of the p-torsion subgroup is at most a power of the discriminant which is less than one-half, already this is very hard, okay? But still, if we are ready to assume some strong conjectures, then things can be done. This is what Ellenberg and Venkatesh did in 2007. For any n at least two and any prime number p, they proved that if you assume GRH, then you get some saving. In this case, delta_{n,p} is one over 2p(n-1), okay? So why do they need GRH? Because they noticed that if you have many small primes which split completely in K, then the quotient of the class group by its p-th powers has to be somewhat big, which means that the p-torsion subgroup has to be smaller than the trivial bound. Okay, so that's what they do.
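Since (n, p) = (2, 2) is the one fully known case, here is a small numerical sketch of it (my own illustration, not code from the talk; the function names are mine). I use the narrow-class-group form of genus theory, where the 2-torsion has order 2^(t-1) with t the number of distinct primes dividing the discriminant, which is at most the divisor function of the discriminant:

```python
# Numerical illustration of the one known case (n, p) = (2, 2):
# by Gauss genus theory, for a quadratic field of discriminant D the
# 2-torsion of the narrow class group has order 2^(t-1), where t is the
# number of distinct primes dividing D.  Since 2^(t-1) <= d(|D|) (the
# divisor function), the 2-torsion is smaller than any power of |D|.

def prime_factors(m):
    """Distinct prime factors of |m| > 1, by trial division."""
    m = abs(m)
    factors = set()
    d = 2
    while d * d <= m:
        while m % d == 0:
            factors.add(d)
            m //= d
        d += 1
    if m > 1:
        factors.add(m)
    return factors

def divisor_count(m):
    """Number of positive divisors of |m|."""
    m = abs(m)
    return sum(1 for d in range(1, m + 1) if m % d == 0)

def two_torsion_order(D):
    """|Cl^+_K[2]| = 2^(t-1) for a quadratic field of discriminant D."""
    t = len(prime_factors(D))
    return 2 ** (t - 1)

# e.g. D = -420 = -(2^2 * 3 * 5 * 7) has t = 4 ramified primes
for D in (-420, -4, 40, -1155):
    assert two_torsion_order(D) <= divisor_count(D)
```

So the two-torsion is controlled by the number of ramified primes, which is why the divisor-function bound, and hence the conjecture, holds in this case.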
And to prove, in general, the existence of such primes, many small primes which split completely in K, they need an effective version of the Chebotarev density theorem. That's why they need GRH. So apart from these conditional results, most results which are available are statistical in some sense, by which I mean people have been looking at, say, the average of the two-torsion of cubic fields with bounded discriminant, or at higher moments, and they deduce results like: for some pair (n, p), most number fields K with bounded discriminant satisfy some bound for which there is a saving. And sometimes the saving is as good as what we obtain with GRH, sometimes it's better, sometimes it's worse, it depends. But of course, even proving statistical results is hard, because for number fields of large degree we don't even know how many number fields have discriminant less than X, right? We don't know how to estimate this quantity as soon as n is at least six, I believe, okay? So even statistical results are hard to obtain. I had neither the courage nor the time to display all the results that have been proved over the years, so I just displayed many names; I hope people won't be too mad at me for doing this. Of course, it started with the work of Davenport and Heilbronn in the seventies. And then there were works by Bhargava at the beginning of the 2000s, and then many other people: Heath-Brown and Pierce; Ellenberg, Pierce, and Matchett Wood; Frei and Widmer; Pierce, Turnage-Butterbaugh, and Wood; and more recently, Briglino. If people are interested in having a complete, or fairly complete, list of works in the area, I recommend going to the article by Pierce, Turnage-Butterbaugh, and Wood; it is like a survey of everything which has been done in the area. Go to section seven of the paper, okay? So I'll now move on to unconditional pointwise results.
Okay, so that's what I want to focus on today. The first breakthrough in the area was due to Pierce in 2005 and 2006. She has two papers: in the first one the saving was a bit less than the one I displayed there, and in the second paper she obtained that saving. She was looking at the three-torsion in class groups of quadratic fields, so n equals two, p equals three, and she proved that we can get a saving like what I displayed, so one over 56 in the exponent. To do this, she looked at some Diophantine equation and she counted the integral solutions of this Diophantine equation, which is what we are going to do later today, okay? This was improved, first by Helfgott and Venkatesh, who counted the solutions of the same Diophantine equation but differently, using the results of their joint paper about integral points on curves. And finally, this was improved to one over six by Ellenberg and Venkatesh, not using a Diophantine equation but using the strategy I told you about before, this time without assuming GRH, because this is a specific case for which it can be done: you can prove that there are primes which do what they need. Okay, next, also using the same method, Ellenberg and Venkatesh looked at the three-torsion of cubic fields and obtained the same saving, one over six. And finally, they could also handle quartic fields, again with p equal to three, but this time they didn't get the complete saving: at least for many quartic fields K, the saving delta that I displayed there is something like one over 168, I think, but in general it's probably a bit smaller than this. Still, there is some delta positive such that for any quartic field K we have this bound. Okay, so if we sum up, we know a non-trivial bound towards the conjecture in four cases.
So (n, p) = (2, 2), the two-torsion of quadratic fields, and then p is always three and we know it for n equal to two, three, and four: three-torsion of quadratic fields, three-torsion of cubic fields, and three-torsion of quartic fields. So there are four pairs (n, p) for which there is some non-trivial step towards the conjecture. Okay, so this was the state of the art until the work of Bhargava, Shankar, Taniguchi, Thorne, Tsimerman, and Zhao. So 2020 is the year of publication, but it was proved probably two or three years before, and for the first time they could handle infinitely many cases. From now on, p is going to be two, always, okay? But n can be just anything: it's fixed, but it can be as large as we want, and they obtain the saving one over 2n in this case. Also, when n is equal to three or four, they have a refinement of their approach which gives them a much better saving: in this case the saving is something like 0.2215, so much better than, say, one over eight when n is equal to four, okay? So today I'm going to focus on this case: p is going to be equal to two all the time, but n is going to be at least five, because their result is just too good when n is equal to three or four, and we just don't know how to get close to what they achieved. Okay, all right, so it's probably a good time to stop and ask if there are questions from people in the audience. Okay, if not, I'll just continue. So I would like to go through their proof. I'm going to refer to their result as BSTTTZ; I hope they won't mind, but I mean, there are six authors. Okay, so I'm going to need a lot of notation. Again, K is going to be a number field and O_K its ring of integers. Then I'm going to let sigma_1 up to sigma_r be the real embeddings of K, and sigma_{r+1} up to sigma_{r+s} are going to be the complex embeddings of K. All right, so then bold sigma is going to send K into R^n.
Okay, so people call it the Minkowski embedding of K. So that's this map: the first r coordinates are given by the sigma_i's, and the next 2s coordinates are given by the real and imaginary parts of sigma_{r+1} up to sigma_{r+s}. So we have r + 2s coordinates, and that's equal to n, okay? So this map sends K into R^n. And if we look at the image of the ring of integers of K, we get a full-rank lattice in R^n. Sometimes people just denote this lattice by O_K still; I find that somewhat confusing in some situations, so I prefer to call it L_K. Okay, so next I need to introduce the successive minima of L_K, and some notation for them; I guess it's pretty standard to call them lambda_i(L_K). So what is lambda_1? Lambda_1 is the norm of the shortest non-zero vector in L_K. Then lambda_2 is the norm of the shortest vector in L_K which is not on the line generated by some vector realizing lambda_1. And lambda_3 is the norm of the shortest vector which is in L_K but not in the plane generated by vectors realizing lambda_1 and lambda_2, and we continue like that. So we end up with n numbers. Yeah, I should make this clear right away: lambda_1 up to lambda_n are numbers. Maybe I said vectors; I meant the norms of those vectors. So they are positive numbers, and this equation is just one way to define them; they're exactly what I said. In this notation, I'm going to call B_n the Euclidean ball, but I'm not sure I will use it again; I will use the Euclidean norm quite often, though. Next: in O_K there is 1, so when you send 1 into R^n, you get some vector whose length depends only on n, not on K. It means that lambda_1(L_K) is bounded above and below by constants depending only on n, okay?
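Here is a small numerical sketch of the Minkowski embedding and the successive minima (my own illustration, not from the talk), for the easiest example K = Q(sqrt(2)), O_K = Z[sqrt(2)], where sigma(a + b sqrt(2)) = (a + b sqrt(2), a - b sqrt(2)) in R^2:

```python
import math

# Minkowski embedding of K = Q(sqrt(2)): both embeddings are real,
# so sigma(a + b*sqrt(2)) = (a + b*sqrt(2), a - b*sqrt(2)) in R^2,
# and L_K is the image of Z[sqrt(2)].

SQRT2 = math.sqrt(2)

def embed(a, b):
    """Minkowski embedding of a + b*sqrt(2)."""
    return (a + b * SQRT2, a - b * SQRT2)

def norm(v):
    return math.hypot(*v)

# Brute-force the successive minima over a box of coefficients.
vectors = sorted(
    (norm(embed(a, b)), (a, b))
    for a in range(-20, 21) for b in range(-20, 21)
    if (a, b) != (0, 0)
)

lam1, v1 = vectors[0]
# lambda_2: norm of the shortest vector not on the line spanned by v1
lam2 = next(nm for nm, (a, b) in vectors if a * v1[1] != b * v1[0])

disc = 8  # discriminant of Q(sqrt(2))
# lambda_1 is attained by +-sigma(1) = +-(1, 1), so lambda_1 = sqrt(2):
# bounded in terms of n only.  And lambda_1 * lambda_2 is comparable to
# sqrt(disc) = det(L_K), as Minkowski's second theorem predicts.
print(lam1, lam2, math.sqrt(disc))
```

The design choice here is pure brute force over a coefficient box, which is fine for rank two; for the real lattices in the talk one would of course use a reduction algorithm instead.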
And another important thing to note: if you just apply the definition of the determinant of this lattice, you get that it's almost the square root of the absolute value of the discriminant of K, except that there is a factor: we need to divide by 2^s, where s is the number of complex embeddings of K. But the determinant of L_K is essentially the square root of the discriminant of K, okay? So if you look at Minkowski's second theorem, it says that the product of all the successive minima is bounded by some constant depending on n times the determinant of the lattice. In our case lambda_1 is comparable to one, so it just goes away, and the determinant of the lattice is essentially the square root of the discriminant, so that factor 2^s goes away as well. So we have two inequalities there: the left inequality is just Hadamard's inequality, and the right inequality is really Minkowski's second theorem. Okay, so we'll start now to go through the proof. Here is the first key lemma that they prove. They prove that the cardinality of the two-torsion of the class group of K is bounded by the number of pairs (y, beta), where y is going to be some integer and beta is going to be in the ring of integers of K, such that we have two conditions: the Euclidean norm of the embedding of beta has to be bounded by d_K^(1/n), and the norm of beta, the norm N_{K/Q}(beta), has to be equal to y squared. So the right-hand side is really the number of beta whose norms are squares and which are small, which have small Euclidean norms, okay? So how do they prove this? I'm not going to go through the proof today, but I'll just say a few words.
So if you have an ideal class whose square is trivial, it means that there is some ideal I such that I squared is principal, say equal to alpha O_K. But then, whenever you multiply alpha by the square of a non-zero element of K, you remain in the same ideal class. And what they do is choose this element: we have alpha, and we multiply it by kappa squared for some element kappa in K, and they choose this kappa so that beta = alpha kappa squared is small, smaller than what I wrote, okay? To do that, they use Minkowski's first theorem: they build some convex symmetric body and they say, well, if the volume of this body is large, which it is, then there must be some point in some lattice, which is going to be kappa. And so that's how they can locate beta, I mean, that's how they can make sure that the Euclidean norm of sigma(beta) is small, okay? And of course the norm of beta is a square, because beta O_K is going to be J squared for some ideal J, okay? So that's how the proof goes. It's fairly simple, but it's very clever. How are we going to use this? Let's take a basis omega_1, omega_2, ..., omega_n of O_K such that omega_1 is one, such that we have this growth condition on the norms of sigma(omega_2), ..., sigma(omega_n), and such that we have this third condition, okay? Such a basis exists; some people call these bases reduced, other people call them minimal or almost orthogonal. They're all the same: they exist, and they satisfy these kinds of properties.
So if you look at a sum L_1 omega_1 + ... + L_n omega_n, then usually, for a general basis, maybe that vector is going to be very small even when the L_i are large, and you won't have the property displayed there; but when you select a minimal basis, this cancellation doesn't happen, okay? So those are the bases for which you can do that. Then we get the following. So what have I done there? I have written beta as L_1 omega_1 + L_2 omega_2 + ... + L_n omega_n, because omega_1, ..., omega_n is a basis of O_K, and I have replaced beta by this in the norm; that's fine. And then I had that the norm of sigma(beta) is at most the absolute value of the discriminant to the one over n, but now, by using the second property, this becomes: for any i in {1, ..., n}, |L_i| is at most d_K^(1/n) divided by the norm of sigma(omega_i). But by construction, these norms of sigma(omega_i) have to be at least lambda_i(L_K) for every i. So that's the bound we get for the L_i's, okay? Maybe I should say that BSTTTZ, in their proof, don't need all this notation, but it's going to be useful for what we want to do afterwards; that's why I'm introducing it now. Maybe that's just our interpretation of their proof: they don't need the successive minima and all of this machinery, okay? All right, so next: for any beta in O_K, what is the norm of beta? It's the product of the embeddings, right? So it's bounded by the Euclidean norm of sigma(beta) to the power n; I guess that's just, how do you call it in English, the AM-GM, the arithmetic-geometric mean inequality. So we get this inequality for y: if I go back, I had y squared equals the norm of beta. So if the norm is at most the Euclidean norm of sigma(beta) to the n, then I get that y squared is at most d_K, so y is at most the square root of d_K, okay? Then they use the Bombieri-Pila bound for this counting function.
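Written out (my paraphrase, with tau_1, ..., tau_n denoting all n embeddings of K into C, so that each complex coordinate of the Minkowski embedding appears twice), the chain of inequalities just described is:

```latex
y^2 = |N_{K/\mathbf{Q}}(\beta)|
    = \prod_{i=1}^{n} |\tau_i(\beta)|
 \le \Bigl( \frac{1}{n} \sum_{i=1}^{n} |\tau_i(\beta)|^2 \Bigr)^{\!n/2}
 \le \Bigl( \frac{2}{n}\,\|\boldsymbol{\sigma}(\beta)\|^2 \Bigr)^{\!n/2}
 \ll_n \|\boldsymbol{\sigma}(\beta)\|^{n}
 \ll_n d_K,
```

so y is at most a constant times d_K^(1/2). The middle step is the AM-GM inequality, and the factor 2 accounts for each complex embedding contributing its coordinate pair twice to the sum.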
So we are counting the number of (y, L_1) such that y squared equals the norm of L_1 omega_1 + ... + L_n omega_n, and y and L_1 are in some box, okay? So it's typically the kind of situation where you want to use the Bombieri-Pila bound. I have hidden a few things there, because sometimes it can happen that for some choices of L_2, ..., L_n, the polynomial N_K(L_1 omega_1 + ...), viewed as a polynomial in L_1, is going to be a square, okay? If you're in this situation, then the curve you are looking at is not going to be irreducible, so you're going to be in trouble; but there are very few choices of L_2, ..., L_n for which this can happen, so in the end it's fine, okay? But I have oversimplified a bit there. So what do we get for the bound? The polynomial N_K(...), as a polynomial in L_1, has degree n, and y is bounded by d_K^(1/2), so we get d_K^(1/(2n)) in the exponent on the right-hand side: this is the Bombieri-Pila bound, okay? Key Lemma 2. It says that our lattices L_K cannot be just any lattices, okay? In particular, the largest successive minimum of L_K has to be smaller than d_K^(1/n). Of course, in general, if you take an arbitrary lattice with, I don't know, n minus one very small independent vectors and the last one very large, then it can happen that lambda_n is about the same size as the determinant of the lattice. But for lattices which come from number fields, this cannot happen. Okay, so why? Because when we send O_K into R^n we get a lattice, since O_K is a Z-module, good; but once we're in R^n, we have forgotten the fact that O_K is actually a ring, right? In O_K you can multiply elements and you still remain in O_K. By using this, they prove that lambda_n has to be as small as this.
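As a toy illustration of why the Bombieri-Pila bound is so strong (my own example, not from the talk): an irreducible plane curve of degree d has only O(B^(1/d + epsilon)) integral points in a box of side B, which is tiny compared with the B points a line can carry. For the degree-3 curve y^2 = x^3 + 1 this is easy to see numerically:

```python
from math import isqrt

# Count integral points on the degree-3 curve y^2 = x^3 + 1 in the box
# |x|, |y| <= B, and compare with the Bombieri-Pila prediction B^(1/3).

def integral_points(B):
    """All (x, y) with y^2 = x^3 + 1 and |x|, |y| <= B."""
    pts = []
    for x in range(-B, B + 1):
        m = x ** 3 + 1
        if m < 0:
            continue
        y = isqrt(m)
        if y * y == m and y <= B:
            pts.append((x, y))
            if y != 0:
                pts.append((x, -y))
    return pts

pts = integral_points(10 ** 4)
print(sorted(pts))
# Only (-1, 0), (0, +-1), (2, +-3) show up: a bounded number of points,
# far below the box size and consistent with the B^(1/3 + eps) bound.
```

Of course this particular curve happens to have finitely many integral points outright; the point of Bombieri-Pila is that the B^(1/d + epsilon) count holds uniformly for every irreducible curve of degree d, which is exactly the uniformity the proof needs.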
And it's also a consequence of Minkowski's second theorem. So Key Lemma 1 was a consequence of Minkowski's first theorem, and Key Lemma 2 is a consequence of Minkowski's second theorem. Okay, good. So what does it tell us? So far we have counted the number of (y, L_1) satisfying our Diophantine equation, but now we need to sum over all the remaining L_i's. We have L_2, L_3, ..., L_n in some box, and what the lemma says is that for each coordinate, the size of the box is at least one. So there is no coordinate for which L_i has to be exactly zero and nothing else, okay? If that were the case, we would be in trouble, because we want to bound the number of possible L_i by d_K^(1/n) divided by lambda_i, but if this ratio were very small, very close to zero, then the number of possible L_i would still be one, which would be much larger than the ratio. But since we have Key Lemma 2, we have this bound, okay? Which is good, because by the Hadamard-type inequality that I displayed two slides ago, this is bounded above by the discriminant to the power one-half minus one over n, okay? Yes, and there is something that I should have said: when you use the Bombieri-Pila bound, the constant depends neither on L_2, L_3, ..., L_n nor on omega_1, omega_2, ..., omega_n. So in the constant that you get from the Bombieri-Pila bound, the only dependence on K is in this power of the discriminant, nothing else, okay? So now you can just multiply the two bounds that we got, the bound for the number of (y, L_1) and the bound for the number of (L_2, ..., L_n), and you get one-half minus one over n plus one over 2n, so that's one-half minus one over 2n, which is what they prove, okay? So that's the proof. Any questions, maybe about their proof? So let me just continue. So I'll continue with, there is a question.
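Assembling the two counts as just described (my paraphrase of the bookkeeping, using lambda_1 comparable to 1 and the Minkowski lower bound that the product of all the successive minima dominates d_K^(1/2)):

```latex
\#\{(L_2,\dots,L_n)\}
 \ll \prod_{i=2}^{n} \frac{d_K^{1/n}}{\lambda_i(L_K)}
 =   \frac{d_K^{(n-1)/n}}{\prod_{i=2}^{n}\lambda_i(L_K)}
 \ll_n \frac{d_K^{(n-1)/n}}{d_K^{1/2}}
 =   d_K^{\frac12-\frac1n},
\qquad
|\mathrm{Cl}_K[2]|
 \ll_{n,\varepsilon} d_K^{\frac{1}{2n}+\varepsilon}\cdot d_K^{\frac12-\frac1n}
 =   d_K^{\frac12-\frac{1}{2n}+\varepsilon}.
```

The first factor in the final product is the Bombieri-Pila count of (y, L_1), and the second is the count of the remaining coordinates.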
So, does this work for higher torsion? Well, no. Why? Because, you see, let me just go back. Yes, there: if you drop the condition that the norm of beta has to be a square, if you just forget about it, then the number of beta in this set is essentially the square root of the discriminant, okay? So it means that even there, you already just recover Landau's bound with epsilon equal to zero, okay? However, if you look at the three-torsion, say, then in this set of beta you would drop the condition that the norm of beta has to be a cube this time, since I'm looking at p equals three now. If I drop this condition, then the number of all possible betas is going to be, I think, just d_K, and not the square root of d_K. So it means that to get some saving over d_K to the one-half, you would have to do something huge, because you have approximately d_K possible betas which satisfy some Diophantine equation, and by using this Diophantine equation you would want to save more than a half in the exponent, okay? That's just impossible. Here we got a saving of one over 2n, okay? So hoping for a half in the saving is just completely crazy. So no, it doesn't work for higher primes, all right? So now we'll just continue. You're welcome. Yes, so I'll talk about Salberger's work now. Okay, so what did he do? I learned about his work through Tim Browning, because last year I met Tim at some point in the summer in Zurich, and I told him that I had the idea of maybe improving upon what BSTTTZ did by not looking at integral points on a curve, but at integral points on surfaces, using some more general versions of the determinant method, the Bombieri-Pila approach. And Tim stopped me and told me: Per Salberger did exactly what you just told me. So for some time I paused, and I didn't know exactly what Salberger's results were. Now I do.
So I'm going to explain exactly, I'm going to state his result. Okay, for this I need notation. Again, K is a number field, and we're going to let mu_K be defined by this: mu_K is log of lambda_2(L_K) divided by log of the discriminant, okay? That's new notation. And if you remember Minkowski's second theorem, we had that lambda_2 times lambda_3 times ... times lambda_n is bounded above by the square root of the discriminant, and lambda_2 is the smallest of all the elements in this product; there are n minus one of them. So it means that lambda_2 to the n minus one is at most the square root of the discriminant, so the exponent of lambda_2 in d_K, which is mu_K, has to be at most one over two times n minus one, okay? All right, we are going to need to keep that in mind all the time. Okay, so here is Salberger's result. I wasn't so sure how to state it, but yeah, that's what it is. So he gets this exponent when mu_K is less than one over 2n, okay? And that improves upon the BSTTTZ result in the whole range mu_K between zero and one over 2n, okay? There are two ways to view this improvement. The optimistic way is to say: well, when n is large, one over 2n is almost one over two times n minus one, so that's not so bad. And the pessimistic way is to say: well, for generic number fields, actually all the successive minima should be of the same size, right? So for most number fields, lambda_2 should already be very close to the exponent one over two times n minus one, and in that case this result is not going to apply. Okay, so I'll give a very rough sketch of Salberger's proof. What does he do? Instead of looking at integral points on affine curves, he looks at integral points on affine surfaces, and instead of using the Bombieri-Pila method, he uses the global determinant method, okay?
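In symbols (my transcription of the definition and of the Minkowski-type constraint just stated):

```latex
\mu_K := \frac{\log \lambda_2(L_K)}{\log d_K},
\qquad
\lambda_2(L_K)^{\,n-1} \le \prod_{i=2}^{n} \lambda_i(L_K) \ll_n d_K^{1/2}
\;\Longrightarrow\;
\mu_K \le \frac{1}{2(n-1)} + o(1),
```

with the o(1) as d_K grows, coming from the constant in Minkowski's second theorem.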
So I'm not giving any details, and again I'm hiding a lot of stuff there, because there are some surfaces for which the bound that I displayed will not be true, or will not be provable; in that case one needs to prove that there are few surfaces for which this happens, okay? But I mean, the outcome is really this bound, okay? And then you need to sum over all the remaining variables, and when you do that, you get the exponent that I displayed in the result, okay? And again, here, in the first bound there, it's crucial that in the constant nothing depends on L_3, ..., L_n or on omega_1, omega_2, ..., omega_n, okay? All right, so new bounds. Here is the theorem: when n is at least five, we have this bound, okay? So, yes, not only does it improve upon BSTTTZ, but it also improves upon Salberger's result in the whole range mu_K in (0, 1/(2n)), okay? I drew some plots, in the case n equals five, where the range of mu_K is zero to one over ten. I'm only displaying the range in which our result is better than both results; in the remaining range, from one over ten to one over eight, the BSTTTZ result is still the best, okay? In these plots, the pink curve corresponds to our result, the green curve corresponds to Salberger's result, and the horizontal black line, at 0.4, is just one-half minus one over ten, so that's BSTTTZ, okay? All right, so how can we use this result? Here is a corollary. Let's say you fix F, a number field which is not Q; then for any number field K of degree n containing F, you get this new saving. So now you get two over 3n as the saving, which is much better than the one over 2n from BSTTTZ, but in this case, of course, the constant may depend on F, okay? That's the idea.
Let's say you're looking only at sextic fields which contain Q(sqrt(2)); then you're going to get the saving one over nine instead of the one over twelve coming from BSTTTZ, and the constant is going to depend on sqrt(2), but that's fine, you don't care about that, okay? All right, so why is that? Because then lambda_2 is going to be bounded in terms of F only, okay? Another corollary: for any prime number q, the two-torsion of the class group of the field Q(q^(1/n)) is going to be bounded by this. So here you get a saving which is two over 3n, much better than the one over 2n we had before, minus something, but something which looks like one over n squared; so when n is large, this is going to be small, okay? That's another example where you can apply our results. So why is that? Well, in K there is q^(1/n), so if you send it into R^n using sigma and compute lambda_2, you get this bound; but the discriminant, if you compute it, is essentially q^(n-1), so it means that lambda_2 is really the discriminant to the one over n(n-1), okay? So that's why we get this, all right. So I'd like to say a few words about our proof, but time is flying. We use the same strategy as Salberger, but instead of using the determinant method, we use the square sieve, okay? For this, we consider the same counting function as we had in Salberger's proof, and then we smooth it, because it's just easier. So what does the square sieve do? It's a tool to detect squares: it bounds sums like the sum in the last line, the sum of w(y^2), when w is a smooth function like this one, okay? When you apply the square sieve, you find that our counting function is bounded by two terms, and the first one is what it is, okay? In this bound, curly P is a set of primes and straight P is its cardinality, okay?
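Here is a quick numerical sanity check of the Q(q^(1/n)) example (my own; the formulas assume x^n - q is Eisenstein at q, so the discriminant is n^n q^(n-1) up to sign and up to an index factor):

```python
import math

# Sanity check of the example: for K = Q(q^{1/n}) with q prime,
# lambda_2 is about q^{1/n} (the length of sigma(q^{1/n})), while the
# discriminant is about n^n * q^{n-1}.  So
#   mu_K = log(lambda_2) / log(d_K)  ->  1 / (n*(n-1))   as q -> infinity,
# far below the generic ceiling 1/(2*(n-1)).

def mu_approx(q, n):
    log_lambda2 = math.log(q) / n                        # lambda_2 ~ q^{1/n}
    log_disc = (n - 1) * math.log(q) + n * math.log(n)   # d_K ~ n^n q^{n-1}
    return log_lambda2 / log_disc

n = 5
target = 1 / (n * (n - 1))   # = 1/20, while 1/(2*(n-1)) = 1/8
for q in (10 ** 6 + 3, 10 ** 9 + 7):   # both happen to be prime
    print(q, mu_approx(q, n), target)
```

So these pure fields sit deep inside the range mu_K < 1/(2n) where the improved exponent applies, which is why the corollary gives a genuinely better saving for them.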
But the second term, you need to bound it, and for this, I mean, it depends on exactly what it is, but this is where the work actually is, right? Okay, and if you look at this one, I didn't say it, but of course (m/pq) is the Jacobi symbol, okay? So if you look at this sum, and if you just use the definition of our smooth counting function w there, this is exactly what you get, okay? You get a smooth function times the Jacobi symbol, where what's upstairs in the Jacobi symbol is the norm of L_1 omega_1 + L_2 omega_2 + ..., okay? And p and q are fixed in this calculation, okay? Then you apply the Poisson summation formula, and you get something like this, where psi hat, of course, is the Fourier transform of psi, and T is a complicated exponential sum, okay? In this exponential sum, you have a Jacobi symbol with the norm of alpha_1 omega_1 + alpha_2 omega_2 + ..., where the alpha_i run over residues modulo pq, and then you have some additive exponential, okay? So then the game is to bound the exponential sum, and for this you need to find an appropriate set of primes P for which you can bound it, okay? So, I mean, I don't really have time to say anything about that, but we can do it, and we can prove that there is full square-root cancellation in this exponential sum. So this bound is as good as we can expect, okay? And the outcome is as follows: we get that the first term in the question was just this, and for the second term we get P squared, okay? So then we choose the best possible P, which is this.
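To give a concrete feel for the ingredients (my own illustration, not the actual sieve inequality from the proof): the Jacobi symbol can be computed by the standard binary algorithm, and averaging (m/p) over a set of odd primes detects squares, which is the mechanism the square sieve makes quantitative:

```python
# The square-detecting mechanism behind the square sieve: for a perfect
# square m, (m/p) = +1 for every odd prime p not dividing m, while for a
# non-square m the values (m/p) oscillate, so their average over a set
# of primes is small.

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, by the standard algorithm."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:          # pull out factors of 2: (2/n)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

PRIMES = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]

def sieve_score(m):
    """Average of (m/p) over the sieving primes not dividing m:
    exactly 1 for a square coprime to them, small on average otherwise."""
    rel = [p for p in PRIMES if m % p != 0]
    return sum(jacobi(m % p, p) for p in rel) / len(rel)
```

In the actual proof this indicator is applied to m = N_K(L_1 omega_1 + ...), which is what puts the norm form inside the Jacobi symbol; the choice of a good set of primes P is exactly what makes the resulting exponential sums tractable.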
So we get this bound for the counting function, and then, as in Salberger's proof, we sum over all the remaining variables. And there, of course, I didn't say it, but when you bound the exponential sum, remember we had N_K(L_1 omega_1 + ...) in the Jacobi symbol; in the constant there, there is no dependence on omega_1, omega_2, ..., or on L_3, L_4, ..., okay? So we can just multiply these two bounds, and we get our theorem, okay? All right, so that was a bit fast, but I didn't really have time, so sorry for that. So, further expectations. What's going to be in this slide and the next one are not theorems yet, but we expect the statements in these two slides to become theorems, okay? So here is another not-yet-theorem statement. We expect this to be true, okay? You get a different exponent for d_K and a different exponent for lambda_2. And I'm claiming that this improves on all the other results in the range from one over 5n to one over two times n minus one. So it means that it improves upon everything, even for the maximum possible value of mu_K, okay? So the situation is like that now: pink is what we had before, black is still BSTTTZ, and the blue curve there is the new not-yet-theorem statement, okay? Again, what you see on the right of these three curves is that we win even for the maximum possible value; this time I've plotted the three exponents in the range zero to one over eight. Before, I only considered the range zero to one over ten, because between one over ten and one over eight we couldn't beat BSTTTZ, but now we can beat what they had even there, okay? So in particular, we should be able to prove the following. That's a new bound: in this statement there is no lambda_2 anymore; I just replaced lambda_2 by its maximum possible value given by Minkowski's second theorem, and we get that.
So remember, BSTTTZ had a saving which was one over 2n, and there, if you look at the first term, we get minus 15 over 28n, and that's minus one over 2n minus one over 28n, so it's always smaller than what they have, okay? For n large, it's clear that it's going to be better, because the second term is in one over n squared, so it's going to be small; but even when n is equal to five, we save something, something like one over five hundred, okay? Okay, so do I have maybe one minute to tell you? I'm running out of time, so if you want to know how the proof of this result should go, feel free to ask at the end. Yeah, I guess that's it. Thanks for your attention.