I would like to thank all the organizers, Alina, Mike, and Filip, for the kind invitation to speak here. Today I'll talk about smooth polynomials and smooth numbers. First, the plan for the talk: I'll start by defining the notions in the title and some related ones, and give some motivation for them. I'll then discuss some old and new results in the discrete settings of polynomials and permutations, then describe the new results in the integer setting, and then focus on one approach to proving some of these new results, which I'll call the integration approach. Only if time permits, I'll also discuss a second approach.

So I'll start by defining what a y-smooth number is; such numbers are also sometimes called y-friable. A number is y-smooth if its prime factors are at most y in size. We are interested in their count, denoted Ψ(x, y): the number of integers up to x that have no prime factor greater than y. This is a function that gets smaller as y gets smaller. Let us see some special cases. If y = x, we are counting all the integers up to x, so we get the floor function ⌊x⌋; this is because a number n is automatically n-smooth. If y = 1, we only count the number 1, because any number greater than 1 has a prime factor. If y = 2, we are counting the powers of 2, so we get roughly log x in base 2. One important feature of smooth numbers is the following: their indicator function is multiplicative, in fact completely multiplicative. This of course comes in handy; for instance, the Dirichlet series of smooth numbers is very simple.

Next, I'm going to define an analogue of this definition in the so-called function-field setting. Fix your favorite finite field F_q. I'll say that a polynomial is m-smooth if the degrees of its irreducible factors do not exceed m; I'll use "irreducible" and "prime" interchangeably here. Again we are interested in their count, Ψ_q(n, m): the number of monic polynomials of degree n that have no prime factor of degree larger than m. If you wish, you can also replace "degree exactly n" by "degree up to n", and use results about Ψ_q(n, m) to study degree up to n. Again, let us see some special cases. If I take m to be n, I'm counting all monic polynomials of degree n, so I get q^n. If m = 1, I'm counting products of linear polynomials, and the number of those is given by a binomial coefficient. Let me briefly explain why. We have q monic linear polynomials, of the shape T plus a constant, and we are selecting n of them to create a 1-smooth polynomial, with repetitions allowed and with no regard to order. So the answer is the multiset count, the binomial coefficient C(q + n − 1, n).

An important point I want to make, and which will be repeated several times in the talk, is that one can and should compare results in the integer setting to results in the polynomial setting. But in doing so, one should calibrate the parameters correctly. Let me explain. I claim that x should be compared to q^n, while y should be compared to q^m. Why is this so? Because in the count of smooth numbers I ask the norms of my primes to be bounded by y, and in the count of smooth polynomials I ask the norms of my primes to be at most q^m, where the norm of a polynomial f is the usual norm in the polynomial setting, q^{deg f}. Hence it makes sense that q^m should correspond to y, and similarly that q^n should correspond to x.
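To make the definitions concrete, here is a minimal brute-force sketch in Python — an illustration added alongside the transcript, not code from the talk — checking the special cases of Ψ(x, y) just mentioned, together with the multiset count of 1-smooth polynomials.

```python
from math import comb, floor, log

def largest_prime_factor(n: int) -> int:
    """Largest prime factor of n, by trial division (returns 1 for n = 1)."""
    p, largest = 2, 1
    while p * p <= n:
        while n % p == 0:
            largest, n = p, n // p
        p += 1
    return max(largest, n) if n > 1 else largest

def Psi(x: int, y: int) -> int:
    """Psi(x, y): integers 1 <= n <= x with no prime factor exceeding y."""
    return sum(1 for n in range(1, x + 1) if largest_prime_factor(n) <= y)

x = 1000
assert Psi(x, x) == floor(x)               # every n is n-smooth
assert Psi(x, 1) == 1                      # only n = 1 survives
assert Psi(x, 2) == 1 + floor(log(x, 2))   # powers of two: 1, 2, 4, ...
print(Psi(x, 10))                          # 10-smooth numbers up to 1000

# 1-smooth monic polynomials of degree n over F_q: multisets of n monic
# linear polynomials, i.e. C(q + n - 1, n).
q, n = 5, 3
print(comb(q + n - 1, n))
```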
And now I'll go on to the third and last definition, that of smooth permutations. A permutation is m-smooth if all its cycles have size at most m, and we are again interested in their count: Ψ_S(n, m) counts permutations on n letters that have no cycle longer than m, and the S stands for "symmetric". Let's see some special cases. If m = n, I count the entire symmetric group, so I get n!. If m = 1, I count only the identity element, because all cycles have size one. If m = 2, I'm counting permutations whose cycles have sizes one and two; these are exactly the involutions, the permutations that give the identity if you apply them twice. It's been known for a long time how their count grows — roughly like (n/e)^{n/2} e^{√n}. Let me also mention an old motivation that has recently been popularized: there is an old riddle called the 100 prisoners problem, whose solution involves counting permutations that are n/2-smooth, and we'll see later that the density of those is asymptotic to 1 − log 2.

Any questions about any of these three definitions? Okay. If not, I'll discuss some motivation from cryptography for studying these subjects. The RSA algorithm is an asymmetric encryption algorithm; asymmetric means that there is a public key and a separate private key. It relies on the difficulty of factoring numbers. It came as a nice surprise in '81 when Pomerance found an algorithm, which he calls the quadratic sieve, that factors numbers in sub-exponential complexity. The number of operations has a peculiar shape: it's exp of about √(log N log log N). I should say that it's a randomized algorithm, so it can potentially fail, but then you run it again. The reason I mention smooth numbers is that they are used in the algorithm, both in its steps and in its complexity analysis. I do not have time to explain the algorithm, but I'll state its complexity. Basically, for any parameter t you get an algorithm that factors a number N in complexity roughly t² · N/Ψ(N, t) — here N/Ψ(N, t) is one over the density of smooth numbers — plus t³. That's the complexity. If you take t to be 1, or take it to be N, this is exponential complexity; not good. The point is to find the t that minimizes it, and you see that by definition this requires understanding the function Ψ(N, t). It turns out that the optimal t has this shape, exp of about √(log N log log N), and for this choice of t the two terms in the complexity are balanced. So the complexity will be the t³ term, which is also of this order.

This was only about smooth numbers and factoring numbers. However, there are other cryptographic algorithms, over finite fields, which rely for instance on the difficulty of the so-called discrete logarithm problem. Again there are algorithms that break this in sub-exponential time, but this time they rely on smooth polynomials instead of smooth numbers. So understanding these densities is relevant in cryptography. At this point I would like to ask: is there any question about the motivation? In any case, this talk will not be about cryptography, but I wanted to let you know about this motivation. Now I'll proceed to discuss the new results for the discrete settings.
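The permutation counts are small enough to check exhaustively for small n. A quick sketch — again mine, not the talk's — verifying the special cases above; note how slowly the density of n/2-smooth permutations approaches 1 − log 2.

```python
from itertools import permutations
from math import log

def max_cycle_len(perm) -> int:
    """Length of the longest cycle of a permutation given as a tuple."""
    n, seen, longest = len(perm), [False] * len(perm), 0
    for start in range(n):
        if not seen[start]:
            length, j = 0, start
            while not seen[j]:
                seen[j], j, length = True, perm[j], length + 1
            longest = max(longest, length)
    return longest

def Psi_S(n: int, m: int) -> int:
    """Permutations of n letters with no cycle longer than m (brute force)."""
    return sum(1 for p in permutations(range(n)) if max_cycle_len(p) <= m)

n = 8
assert Psi_S(n, n) == 40320     # the whole symmetric group, 8!
assert Psi_S(n, 1) == 1         # only the identity
print(Psi_S(n, 2))              # involutions: 764 for n = 8
print(Psi_S(n, n // 2) / Psi_S(n, n), 1 - log(2))  # density vs 1 - log 2
```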
Almost all the results are described in terms of the Dickman function, the Dickman ρ function, which I'll define now. It's a function on the non-negative reals. It starts as a constant function: on [0, 1] it equals one. Afterwards, it's defined via a delay differential equation, u ρ′(u) = −ρ(u − 1), so the derivative at time u is a function of the value at time u − 1. If you don't like derivatives, there is an equivalent integral equation, ρ(u) = (1/u) ∫_{u−1}^{u} ρ(t) dt, where the value at time u is a sort of average of past values. From the integral definition it's clear that the function is always positive, because it starts positive; and from the first definition, with derivatives, we see that the derivative is always non-positive. So it's a positive, decreasing function. The integral equation implies the inequality ρ(u) ≤ ρ(u − 1)/u. This should remind you of something: of the reciprocal of the gamma function, because 1/Γ(u + 1) satisfies this relation with equality. It's not hard to show that, indeed, the Dickman function is dominated by the reciprocal of gamma, and by a very basic version of Stirling's approximation this decays like u^{−u}. It turns out this is also the decay of the Dickman function itself. You don't have to remember these formulas; you just have to remember that the Dickman function decays rapidly.

So let me describe some results about permutations. Recall that Ψ_S(n, m) counts m-smooth permutations, and it will be useful to introduce the following notation: u will be the ratio n/m. The first result is due to Goncharov, in '44. He proved that the density of m-smooth permutations is asymptotic to the Dickman function at u, as long as u is bounded — in other words, as long as m is at least linear in n. It's natural to ask in what range this asymptotic holds. Somehow there was no work on this specific problem for large m for many years; people focused on small m, for instance the involution problem. The next result I'm interested in is due to Manstavičius and Petuchovas, in 2016. They proved that the Dickman-function asymptotic remains true as long as m grows to infinity faster than √(n log n); their proof involves the saddle point method. And they proved that this range is optimal, meaning that for smaller m the asymptotic breaks. Last year, Ford gave a completely elementary argument, involving recursions, showing that the density is always sandwiched between two values of the Dickman function — and this in the entire range of parameters, 1 ≤ m ≤ n. I want to point out that, because of the rapid decay of the Dickman function, this does not give an asymptotic result in general. It gives an asymptotic only when the two endpoints are themselves asymptotic, and this turns out to hold exactly when m grows to infinity faster than √(n log n). So it recovers the first part of the Manstavičius–Petuchovas result.

I want to briefly discuss Ford's proof and an improvement to it. Let me write the density of smooth permutations as p(n, m) = Ψ_S(n, m)/n!. One can prove a recursion for this quantity: p(n, m) is expressed in terms of the past values p(n − i, m), very much like the recursion satisfied by the Dickman function. How does one prove this recursion? You think of the permutations as acting on the elements 1 to n, and you partition according to the size i of the cycle that contains the element 1; doing this partitioning gives exactly p(n, m) = (1/n) Σ_{i=1}^{min(m, n)} p(n − i, m).
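Since everything that follows is phrased in terms of ρ and this recursion, here is a small numerical sketch — my illustration, not the talk's — tabulating ρ from the delay equation, checking ρ(2) = 1 − log 2 and the domination by 1/Γ(u + 1), and running the recursion for p(n, m) just described.

```python
from functools import lru_cache
from math import gamma, log

def dickman(u_max: float, h: float = 1e-3):
    """Tabulate rho on a grid of spacing h from the delay equation
    u * rho'(u) = -rho(u - 1), with rho = 1 on [0, 1] (trapezoid steps)."""
    K = round(1 / h)                        # grid steps per unit of delay
    rho = [1.0] * (K + 1)                   # rho = 1 on [0, 1]
    for i in range(K + 1, round(u_max / h) + 1):
        s_prev = -rho[i - 1 - K] / ((i - 1) * h)
        s_here = -rho[i - K] / (i * h)      # rho(u - 1) is already tabulated
        rho.append(rho[-1] + h * (s_prev + s_here) / 2)
    return rho, h

rho, h = dickman(8)
print(rho[round(2 / h)], 1 - log(2))        # rho(2) = 1 - log 2 = 0.30685...
print(rho[round(5 / h)], 1 / gamma(5 + 1))  # rho(u) <= 1/Gamma(u+1): rapid decay

@lru_cache(maxsize=None)
def p(n: int, m: int) -> float:
    """Density of m-smooth permutations, via the recursion just described:
    p(n, m) = (1/n) * sum_{i=1}^{min(n, m)} p(n - i, m)."""
    if n == 0:
        return 1.0
    return sum(p(n - i, m) for i in range(1, min(n, m) + 1)) / n

for n in (20, 100, 500):                    # Goncharov: p(n, n/2) -> rho(2)
    for k in range(n + 1):                  # warm the cache, keep recursion shallow
        p(k, n // 2)
    print(n, p(n, n // 2))
```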
Now, if you replace the sum in this identity with a similar sum in which p is replaced by the Dickman function, you get a Riemann sum for ρ. And because the Dickman function decays rapidly, that Riemann sum actually overestimates the integral defining ρ. That's an observation used by Ford to prove his lower bound. In my work I show that one can exactly quantify how much the Riemann sum overestimates the integral, to get a physical-space proof that there is a transition at √(n log n). That is, one can extend Ford's ideas to also recover the optimality of the range. I should also say that when m is below this range, Manstavičius and Petuchovas have very satisfactory formulas for the density coming from the saddle point method, not involving the Dickman function. So I would say that permutations are easy — or relatively easy compared to integers — because there is no arithmetic involved, and we can answer basically any question there.

So let me turn to polynomials. Recall that Ψ_q(n, m) counts m-smooth monic polynomials of degree n, and again I'll use the same notation u = n/m. The first result, from '87, is due to Car. She proved that the density of smooth polynomials equals the Dickman function plus a certain error, uniformly in the entire range of parameters. A few years later, Warlimont, in '91, improved the error. However, one should remember that, due to the decay of the ρ function, once m is below n/log n the main term is already absorbed by the error; so even for moderately large m this gives only an upper bound. The third result I want to mention is due to Manstavičius, in '92. He proved that the Dickman ρ asymptotic holds in the range where m grows to infinity faster than √(n log n). This does not follow from the previous two results. And this range should remind you of the exact same range in the permutation problem — it matches the range we saw there. So it's natural to ask whether these two results are related.

This is addressed in the following theorem, the first theorem I want to tell you about today. In this theorem I compare the density of smooth polynomials to the density of smooth permutations — a density divided by a density. The theorem says that the ratio of densities is very close to one; there is an exponential saving in q. Here u is again n/m. This holds in a very wide range: m can even be logarithmic in n. I'll discuss soon what happens for smaller m. Before I go on to interpretations, I want to say that the proof does not go by studying the numerator and the denominator separately; rather, it studies the ratio directly. The idea is to write both densities in terms of an integral, and when we subtract the densities we subtract the integrals and study the difference directly — very roughly speaking.

I want to tell you two takeaways from this theorem. The first takeaway is that previous results on Ψ_q, on smooth polynomials, follow for free from those for permutations. So it's not a coincidence that the ranges I showed you match each other: if you know something about Ψ_S, you can convert it to something about Ψ_q, as long as m is not too small. The second takeaway is that working with Ψ_S leads to better savings and better ranges.
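Theorem 1 is easy to probe numerically for a small q. In the sketch below — mine, and it does not reproduce the theorem's exact error term — Ψ_q(n, m) is computed from the Euler product over irreducibles via the standard Euler-transform recurrence, and compared with p(n, m) from the sketch above.

```python
def mobius(n: int) -> int:
    """Moebius function by trial division."""
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0                    # squarefull: mu = 0
            res = -res
        p += 1
    return -res if n > 1 else res

def irreducibles(q: int, n: int) -> int:
    """Monic irreducibles of degree n over F_q: (1/n) sum_{d|n} mu(d) q^{n/d}."""
    return sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

def poly_smooth_density(q: int, n: int, m: int) -> float:
    """Psi_q(n, m) / q^n from the Euler product prod_{d<=m} (1 - z^d)^{-I_d},
    via the Euler-transform recurrence j*a_j = sum_{k<=j} c_k a_{j-k},
    where c_k = sum_{d | k, d <= m} d * I_d (normalized by q^j for floats)."""
    c = [0] * (n + 1)
    for d in range(1, m + 1):
        I_d = irreducibles(q, d)
        for k in range(d, n + 1, d):
            c[k] += d * I_d
    b = [1.0] + [0.0] * n                   # b_j = Psi_q(j, m) / q^j
    for j in range(1, n + 1):
        b[j] = sum(c[k] / q ** k * b[j - k] for k in range(1, j + 1)) / j
    return b[n]

q, n, m = 3, 30, 10
print(poly_smooth_density(q, n, m), p(n, m))  # close, as the theorem predicts
```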
So Ψ_S is simply a better main term than the Dickman function, at least in this problem. To demonstrate the strength of this theorem, I want to show you that it actually incorporates within it the prime polynomial theorem. Let me explain. Take m almost as large as possible: m = n − 1. A polynomial of degree n is (n − 1)-smooth exactly when it is not irreducible, so the numerator in this case is one minus the density of primes. The denominator counts (n − 1)-smooth permutations; these are the permutations that are not n-cycles, so the denominator is 1 − 1/n — that's not hard to show. If you plug these parameters into the theorem, you recover the prime polynomial theorem with square-root cancellation, with the optimal error. Of course, the prime polynomial theorem also plays a role in the proof. And I should say that the error here can be shown to be sharp.

Now I'll discuss very briefly what happens for smaller m. Instead of stating a very technical result, I'll take m to be (2 − ε) log_q n, for some ε between zero and one. I can show that the ratio of densities is then not asymptotic to one: it blows up, it tends to infinity. In fact, I can show that the ratio is asymptotic to one if and only if m − 2 log_q n tends to infinity. So the range of the last result is essentially sharp.

Now I want to mention two questions that I think are natural to ask. First, is there an analogue of this main term in the integer setting — a main term that leads to error terms as good as the exponential saving we had here? And second, is there an analogue in the integers of the range m ≥ 2 log_q n? We have just seen that at this threshold something happens, at least in the polynomial setting. It turns out that the answers to both questions are positive. There is an analogue of the main term Ψ_S: it's known as de Bruijn's approximation Λ(x, y), and we'll discuss it later. And remember I told you how to compare results on integers to results on polynomials: under this correspondence, the range m ≥ 2 log_q n corresponds to log y ≥ 2 log log x — in other words, to y ≥ (log x)². So we should expect something interesting to happen in the integers when y is close to (log x)², and in fact this was a barrier in the study of smooth integers in several works. That's what I hope to convince you of in the next slides.

At this point I want to ask: are there questions about these results in the polynomial setting, or about the questions I just raised? Can I ask one question? Sure. The result on the previous slide — is it uniform in q? Yes, it's uniform in q. In fact, in the paper itself there is even a ceiling function and some nice improvement. Yes, it's uniform in q. Thank you.
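Before moving on to the integers, here is a quick numerical check of the m = n − 1 example above — my sketch, reusing irreducibles() from the previous one.

```python
# Density of (n-1)-smooth polynomials vs (n-1)-smooth permutations:
q, n = 7, 6
smooth = 1 - irreducibles(q, n) / q ** n   # polynomials: 1 - density of primes
perm = 1 - 1 / n                           # permutations: all but the n-cycles
print(smooth / perm - 1)                   # small: square-root cancellation, O(q^{-n/2})
```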
Okay, so now I'm going to proceed and discuss integers. The first result — one of the first results on smooth numbers even predates those on smooth polynomials and smooth permutations — is from 1930. Dickman proved that the number of y-smooth numbers up to x is asymptotic to x times the Dickman function at u, where u is now not n/m but its integer analogue: u = log x / log y. He proved this relation as long as u is bounded, or in other words as long as y is at least a fixed power of x. And there were several works that extended the range of validity of Dickman's result.

Before I discuss them, I want to say that something interesting happens near y ≈ log x; there is a phase transition there. There is a nice observation of Granville that this asymptotic relation cannot hold for very small y: if you use precise asymptotics for the Dickman function, the right-hand side x ρ(u) becomes smaller than one once y is below some multiple of log x, while the number of smooth numbers is always at least one. So there is a point where this asymptotic relation must break. The first significant result about this range is due to de Bruijn, in '51. He used Buchstab's identity, which I'll mention later in the talk, to derive this asymptotic relation — call it (star) — in a qualitatively better range: y can be as small as exp(√(log x)). Then, thirty years later, Hildebrand extended the range qualitatively even further, replacing a power of log by a power of log log, by using a completely different identity: (star) holds for y ≥ exp((log log x)^{5/3+ε}). The exponent 5/3 comes from our best zero-free region for the Riemann zeta function, and if you improve the zero-free region tomorrow, Hildebrand's method will give you a wider range in which this asymptotic formula holds. Hildebrand also had some results relating smooth numbers to the Riemann hypothesis. He proved that under RH, the asymptotic relation (star) holds for y as small as (log x)^{2+ε}. He also had a remarkable converse: he showed that if you know that (star) holds in this range, you recover RH. So there is a strong relationship with RH here. I should point out that when y is a power of log x, the saving he gets does not go to zero — it is O(1) — and in that range he actually proves order-of-magnitude results under RH.

So, are there any questions about these results before I raise some questions of my own? Okay. Now I'll state some questions, some of which have appeared in the literature. One question is very naive: what is the behavior of Ψ(x, y) when y is less than (log x)^{2+ε}? The second question was asked explicitly by Hildebrand and reiterated by several others; it is also mentioned in papers of Granville. The question is whether one can show, unconditionally or otherwise, that the number of smooth numbers is not of the size of x times the Dickman function when y is below (log x)^{2−ε}. A third question, of a somewhat different flavor, is about a one-sided inequality. Pomerance asked: is it true that Ψ(x, y) is always at least x times the Dickman function — is the density of smooth numbers always at least the Dickman function? A similar question was also asked independently. You may ask why one would expect something like that to hold. I'll give the exact motivation later, but there is at least hope, because for very small y we know that the right-hand side is smaller than one; so the very small y's are already taken care of, and you might hope the inequality holds throughout.

I've made some progress on these questions and solved some of them, some conditionally and some unconditionally, using two different approaches — although both of them involve saddle points. I give them different names. The approaches relate the number of smooth numbers to primes and to zeros of the zeta function, which is not too surprising given the work of Hildebrand that already established some such relationships. However, it turns out that the answers to the questions are quite tricky to state, and I'll try to convince you of this by giving you one small taste.
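Here are Dickman's asymptotic and Granville's observation at a small scale — a sketch reusing Psi() and dickman() from earlier; at these tiny heights the agreement for moderate y is only up to order of magnitude.

```python
from math import log
# Reuses Psi() from the first sketch and dickman() from the rho sketch.
x = 10 ** 5
for y in (30, 5):
    u = log(x) / log(y)
    rho, h = dickman(u + 1)
    print(y, Psi(x, y), x * rho[round(u / h)])  # brute force; takes a moment
# For y = 30 the two numbers have the same order of magnitude. For y = 5,
# a small multiple of log x, the model x*rho(u) has dropped below 1 while
# Psi(x, y) is in the hundreds -- Granville's observation in action.
```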
Here is Theorem 3. It assumes RH, to make matters nicer, and here I'm taking y to be a very specific value: essentially (log x)², but not exactly. For this choice, the number of smooth numbers is asymptotic to x times the Dickman function, but there are three different correction factors. There is a value ζ(1/2) that comes in for one reason, there is a √2 that comes in for a second reason, and there is a certain exponential factor. What's written there? ψ(y) is the usual Chebyshev function, the summatory function of the von Mangoldt function, and it's asymptotic to y — that's the prime number theorem. So the quantity here in the exponent is the error term in the prime number theorem. Now, RH turns out not to be strong enough to tell us that this factor is negligible: RH only tells us that the quantity inside the exponent is O(log y). However, there is a conjecture of Montgomery, based on a probabilistic model for the zeros of zeta, that it is o(1) — in fact, that it decays to zero at a certain rate. So RH is not enough to let us ignore it, but conjecturally it can be ignored. And we see that if, for instance, we are interested in knowing when Ψ is of the order of magnitude of x ρ(u), the answer has to be stated in terms of this error term. If I don't assume RH, or if I choose a different value of y, I get a different formula, and quite a more complicated one; that's why I chose to present this one. The point of this theorem is to convince you that the answers genuinely have to be slightly complicated.

Okay. Now I want to mention an unconditional result, Theorem 4. Unconditionally, given any positive ε, I can construct sequences x_n and y_n, where y_n grows like (log x_n)^{2−ε+o(1)}, and along these special sequences the ratio of Ψ to x times the Dickman function explodes. This answers the question of Hildebrand: the Dickman function is not the right model for the smooth numbers for y below (log x)^{2−ε}. And this shape should remind you of something: when I told you about the permutation result for m = (2 − ε) log_q n, we had a very similar statement, with this quantity replaced by n. It's not a coincidence — the proofs are very similar. The reason this is unconditional, and does not need RH, is that the proof involves exhibiting large values of the error term in the prime number theorem, and exhibiting large values can be done unconditionally; in fact, if RH fails, we even get better results. That's why it's unconditional.

Next, I'll mention a result about the inequality that Pomerance conjectured: Theorem 5. Here I assume RH. As long as y is not too close to x — a necessary condition already mentioned by Pomerance — and as long as we are not too close to the critical point (log x)², the inequality Ψ(x, y) ≥ x ρ(u) holds. If you want it to hold near the critical point as well, then this is true provided the error term in the prime number theorem is even better than what RH gives: ψ(y) − y = o(√y log y) implies the inequality in the remaining region. Sergei is asking about Theorem 4: yes, ε has to be between zero and one. Thank you. For ε greater than one, in a sense, that is an easier regime. And I want to say that if RH is false, one can construct infinitely many counterexamples to this inequality.
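To get a feel for the exponential factor in Theorem 3, here is a sketch computing the Chebyshev function and a normalized error term in the prime number theorem. The exact normalization that enters the theorem's exponent is my guess from the talk, so treat the second printed number as illustrative only.

```python
from math import log, sqrt

def chebyshev_psi(y: int) -> float:
    """Chebyshev psi(y): sum of log p over all prime powers p^k <= y."""
    sieve = [True] * (y + 1)
    total = 0.0
    for p in range(2, y + 1):
        if sieve[p]:
            for mult in range(p * p, y + 1, p):
                sieve[mult] = False
            pk = p
            while pk <= y:
                total += log(p)
                pk *= p
    return total

y = 10 ** 6
err = chebyshev_psi(y) - y
print(err, err / (sqrt(y) * log(y)))  # normalized PNT error term;
                                      # under RH it is at most O(log y)
```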
About this very last sentence of Theorem 5, I want to say that Hildebrand had a work in this direction. He showed that if RH fails, then this ratio is either very big or very small when y is (log x)^{c+o(1)}, for some large c related to the counterexample to RH. But he could not say that the ratio is necessarily large — only that it is either large or small. The proof of Theorem 5 proceeds very differently from his, but he was headed in the right direction, and one can extend his method to prove this very last line of the theorem. Any questions about these results? Okay.

Before I go on to describe more technical details, I want to give some sort of philosophy for what's going on. Given some nice function f(x), recall its Mellin transform and the inverse Mellin transform. The Mellin transform M_f(s) is an integral of f against a power of x, and then I can write f as a complex integral of the Mellin transform against x^s, where s varies over a vertical line. There is a special point, which I'll denote c_f: the positive real number that minimizes the integrand, that is, the point for which M_f(c) x^c is the smallest possible. This is what's known as a saddle point. Now I'm going to give names to two functions: I'm going to call the counting function of y-smooth numbers f_1(x), and x times the Dickman function, normalized this way, f_2(x).

Let me see if there is any question. There is a question by Zach — okay, it's a somewhat long question, so I'll try to address it at the end. Thank you for the question.

The philosophy is that you should think of the ratio of Ψ(x, y) to x times the Dickman function as modeled very well by the ratio of the two Mellin transforms, evaluated at the saddle point coming from the Dickman side. If you believe this sort of heuristic, then you can reduce many questions about smooth numbers to questions about the ratio M_{f_1}/M_{f_2}, which is very concrete and very explicit: M_{f_1} is basically the generating function of smooth numbers — the partial zeta function we'll see in a moment — and M_{f_2} is more or less the Laplace transform of the Dickman function. My point is that this ratio is very concrete. Now, this relation as I stated it here holds under RH for y as small as (log x)^{3/2}, breaking the (log x)² barrier. So working with it leads to better ranges, better error terms, et cetera.

Now, what can I do unconditionally? Unconditionally, I can sandwich the ratio between two values of this ratio of Mellin transforms: here the ratio is evaluated at the saddle point coming from f_2, and there it's evaluated at the saddle point coming from f_1 — but it is still sandwiched between two values of the same function. So if, for instance, you're interested in showing that the ratio can get very big for y = (log x)^{2−ε+o(1)}, you just need to show that this ratio of Mellin transforms can get very big. It reduces many questions to concrete ones — that's the point. On the next slide I'll try to make this completely formal.

Okay. Here we'll meet some objects that I have so far only mentioned by name. First, I'm going to define ζ(s, y), the generating function of smooth numbers — if you wish, the Dirichlet series supported on smooth numbers. It has a very nice Euler product: it's a finite Euler product over the primes up to y, and it converges for all real s greater than zero because it's a finite product.
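The partial zeta function and its saddle point are easy to play with numerically. A sketch (mine): the finite Euler product ζ(s, y), and the real c minimizing ζ(c, y)x^c, found by crude grid search. Rankin's trick — not mentioned in the talk, but standard — says that the minimum is an upper bound for Ψ(x, y).

```python
def primes_up_to(y: int):
    """Sieve of Eratosthenes."""
    sieve = [True] * (y + 1)
    for p in range(2, int(y ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(2, y + 1) if sieve[p]]

def zeta_partial(s: float, y: int) -> float:
    """zeta(s, y) = prod_{p <= y} 1/(1 - p^{-s}): the Dirichlet series of
    y-smooth numbers; a finite Euler product, fine for any real s > 0."""
    val = 1.0
    for p in primes_up_to(y):
        val /= 1 - p ** (-s)
    return val

def saddle(x: int, y: int, lo=0.05, hi=1.5, steps=1500):
    """The real c minimizing zeta(c, y) * x^c, by grid search."""
    best, c = min((zeta_partial(c, y) * x ** c, c)
                  for c in (lo + (hi - lo) * k / steps for k in range(steps + 1)))
    return c, best

x, y = 10 ** 5, 30
c, bound = saddle(x, y)
print(c, bound)   # Rankin: Psi(x, y) <= zeta(c, y) * x^c for every c > 0
```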
Now, de Bruijn suggested a way to model ζ(s, y) by something that has essentially no arithmetic in it. What was his suggestion? For simplicity, assume we work with real s greater than one, and write the partial zeta function as ζ(s) times a product over the primes greater than y. Because products are annoying, he wrote this product as the exponential of its logarithm, obtaining a sum of logarithms, and then Taylor-expanded the logarithm, obtaining a double sum over primes p > y and k ≥ 1 of p^{−ks}/k. He then suggested throwing away two pieces of arithmetic information. First, throw away the contribution of proper prime powers — the terms with k ≥ 2. After you throw these away, you are left with a sum of p^{−s} over all primes greater than y. And we know the density of primes: it's 1/log t, by the prime number theorem. So he suggested approximating the sum by a continuous integral of t^{−s} against the density of primes, ∫_y^∞ t^{−s} dt/log t. This new function — let's denote it F(s, y) — serves as a model for ζ(s, y) that has no arithmetic.

By Perron, we know that Ψ(x, y) is a contour integral of the finite Euler product ζ(s, y) against a power of x. In analogy, de Bruijn said: let's replace ζ(s, y) here by F(s, y), and call the resulting integral Λ(x, y). That's de Bruijn's approximation to the number of y-smooth numbers, and it's a much better approximation than x times the Dickman function. This is what I claim is the analogue of the density of smooth permutations: if you carry out the same process of throwing away arithmetic information, starting this time from the generating function of smooth polynomials instead of smooth numbers, you actually end up with the generating function of smooth permutations. So the analogy here is perfect.

Now, de Bruijn studied Λ sort of for its own sake, but that will become beneficial soon. He proved that Λ satisfies a continuous Buchstab identity, relating values of Λ(x, y) to values of Λ(x, z) and of Λ(x/t, t) for t between z and y. I'll just remind you that the usual Buchstab identity says that Ψ(x, y) is Ψ(x, z) plus a sum over primes p between z and y of Ψ(x/p, p). So this is indeed a continuous Buchstab identity — one that involves no primes. He then showed that Λ has an asymptotic expansion. The first term in the expansion is the Dickman part, x ρ(u), and the next term involves the Euler–Mascheroni constant γ, and it is non-negative. That's the original motivation for Pomerance to ask whether Ψ(x, y) is always at least x ρ(u) — that's the origin of the question.

Now, how did de Bruijn prove that the number of smooth numbers grows like x ρ(u)? He proceeded in two steps. He used the Buchstab identities, both the continuous one and the discrete one, to show that Ψ(x, y) is asymptotic to Λ(x, y) in some wide region, with an excellent error term — a saving greater than any power of log. Then, in this range, he used the asymptotic expansion to say that Λ is asymptotic to x ρ(u), this time with a rather weak error. By transitivity, we are done. Then, in '88, Saias showed that one can extend the range of the first step — call it (star) — to the same range we saw in Hildebrand's result. So it recovers Hildebrand's result; however, it's even stronger, because in (star) I'm hiding a very good error term. And it doesn't involve identities at all, only generating functions.
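The discrete Buchstab identity just mentioned is exact, and easy to verify by brute force — a small sketch reusing Psi() and primes_up_to() from the earlier ones.

```python
# Buchstab: Psi(x, y) = Psi(x, z) + sum over primes z < p <= y of Psi(x/p, p),
# by partitioning y-smooth numbers on their largest prime factor p.
x, y, z = 3000, 60, 10
lhs = Psi(x, y)
rhs = Psi(x, z) + sum(Psi(x // p, p) for p in primes_up_to(y) if p > z)
print(lhs, rhs)   # the two sides agree exactly
```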
Okay. Now I'm going to explain the approach of Saias, and then I'll explain a way to upgrade it. The approach of Saias is the following. We have integral representations for Ψ and for Λ; let's subtract them. So Ψ − Λ is an integral of ζ(s, y) − F(s, y) against a power of x. Next, he chooses a very specific c: the c that minimizes F(s, y) x^s over real s — the saddle point. I remind you that F is a very concrete object, essentially the Laplace transform of the Dickman function; an extremely concrete object with no arithmetic. It turns out that for this saddle point, when Re s = c, the integral of F(s, y) x^s behaves like the integral of a positive function — moreover, it actually looks like a Gaussian, though we won't need that. So for this choice of c, we are integrating what is morally a positive function.

Now let's go back to the difference and write the integrand as (ζ/F − 1) times F x^s. Saias showed that in de Bruijn's range, ζ(c, y)/F(c, y) is asymptotic to one, so this first factor can be written as o(1) in a suitable range. Now, in general, if you have an o(1) inside an integral, you cannot pull it outside, because there are oscillations you won't capture. But if the integrand is a positive function, you can pull out an o(1). So he basically showed that the right-hand side here is o of the integral of F(s, y) x^s — and that integral is the definition of Λ. So we are done: Ψ − Λ = o(Λ). That is the proof — what I call the integration approach, although it does involve a certain saddle point. He also stated, without proof, that under RH one can extend the range to y at least (log x)^{2+ε}, which recovers Hildebrand's RH result. However, (log x)² is a real barrier here, even under RH, because it turns out that this ratio ζ(c, y)/F(c, y) is not asymptotic to one for smaller y. So this method has a real barrier: it cannot be pushed beyond the range that Hildebrand stated.

Now I'm going to modify this proof to break the (log x)² barrier — and I'll finish in a few minutes. First, I give a name to this ratio of generating functions: G is the ratio ζ(s, y)/F(s, y). Now, instead of subtracting from Ψ the main term Λ, as Saias did, I subtract a different main term: Λ times G evaluated at the saddle point, the same c as before. So again I have a difference of integrands inside the integral, and I rewrite it slightly, in a beneficial way: I write ζ(s, y) as F(s, y) G(s, y). Now there is a common factor F, and I can rewrite the whole integrand as a difference of the G function evaluated at two points, times F: (G(s, y) − G(c, y)) F(s, y) x^s. Now it's clear that this method has a chance of going beyond the previous approach, because just by continuity, if s is very close to c you see cancellation in this difference. Of course, continuity is not enough, so one has to do some computation and understand, roughly speaking, in what range this difference is o(G(c, y)), where I think of s as c + it. But once I solve this calculus problem, I get the following result. Under RH, Ψ divided by Λ is asymptotic to the ratio of generating functions G, evaluated at the saddle point c — essentially — and this holds when y is at least (log x)^{3/2}. Now, I didn't give you much information about this saddle point, but there is a very old approximation to it: it's roughly 1 − log log x / log y. And a different behavior emerges for smaller y, by a different method. So this is the limit of the method, but it's also the limit of the result.
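To see that G really is as concrete as claimed, here is a rough numerical sketch. The analytic continuation F(s, y) = ζ(s) exp(−E₁((s − 1) log y)) is my own rendering of the construction described above — substituting t = e^v turns ∫_y^∞ t^{−s} dt/log t into E₁((s − 1) log y) — not a formula quoted from the talk; mpmath supplies ζ and E₁.

```python
import mpmath as mp

def F(s, y):
    """de Bruijn's arithmetic-free model, continued below s = 1 via
    F(s, y) = zeta(s) * exp(-E1((s - 1) * log y)) -- see the lead-in."""
    return mp.zeta(s) * mp.exp(-mp.e1((s - 1) * mp.log(y)))

def G(s, y):
    """G = zeta(s, y) / F(s, y); log G is a sum over zeros of zeta
    plus a sum over proper prime powers."""
    val = mp.mpf(1)
    for p in primes_up_to(y):           # sieve from the earlier sketch
        val /= 1 - mp.mpf(p) ** (-s)
    return (val / F(s, y)).real         # tiny imaginary part is branch noise

print(G(0.7, 1000))                      # a concrete, modest number
```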
So the takeaway is that working with a modified main term leads to a better result. I'll just give two more slides. Let's summarize what we've seen. We've seen that under RH, Ψ is asymptotic to Λ times G for y at least (log x)^{3/2}. But it's natural to ask: how do Λ and G behave? How does this help us? It turns out that Λ is always of the order of magnitude of x times the Dickman function, and if y is greater than any power of log x, the two are also asymptotic. However, if y is a power of log x, there is a correction term, found by La Bretèche and Tenenbaum: they show that there is a continuous function K(t), starting out equal to one, such that Λ is asymptotic to x ρ(u) times this correction factor. If y is greater than any power of log x we do not see it, but when y is a power of log x we do.

Okay, that leaves us with G. What can we say about G? Do you remember how we constructed F? We threw away some pieces of information from zeta: we threw away the proper prime powers, and we threw away the error term in the prime number theorem. Because G is exactly the ratio of these two functions, it captures precisely this information. It's not hard to show that log G splits into two pieces: one is a sum over the zeros of zeta, and the other is a sum over proper prime powers. For instance, if s is real, the prime-power sum is positive; and into the sum over zeros one can throw anything you know about the zeros of the zeta function — you can show it exhibits large values, you can show it exhibits small values, you can bound it using zero-free regions, et cetera.

I want to finish on this slide, where I compare with what I started the talk with — the polynomials and the permutations. Look at m-smooth polynomials and their generating function ζ_q: it's a finite Euler product, just as in the integer case. Look also at m-smooth permutations and their exponential generating function ζ_S; this is some entire, exponential-type function. What I actually prove for polynomials and permutations is that the ratio of densities is asymptotic to the ratio of generating functions, ζ_q over ζ_S, at a special saddle point — let me call it c′, because it's not the c from the integer world — in the range where m is at least (3/2) log_q n. And this is a perfect match to what I prove in the integers, where Ψ divided by the approximation Λ — which is morally x times the Dickman function, at least in order of magnitude — is asymptotic to the ratio of the partial zeta function ζ(s, y) to F, the Mellin transform of Λ, at the saddle point. So there is a perfect analogy here, I would say. And I think I will end here. I would like to thank you very much for listening and for the questions, and that's it.