Thanks a lot, Mike. Thank you, Alina, Philip, and Mike for the invitation. It is a really great honor. So this talk is based on joint work with Oleksiy Klurman and Joni Teräväinen. And like many other analytic number theory talks, I'm going to begin by motivating things by looking at problems about the primes that we still don't know how to solve. A lot of those problems relate in some form or other to the nature of the sequence of primes; in particular, we'd like to know what kinds of patterns exist among them. For example, can we find infinitely many twin primes? Can we find infinitely many primes given by values of irreducible polynomials, like x^2 + 1? A lot of these questions explore the extent to which the multiplicative properties, for example primality or the number of prime factors, of additively coupled integers, say n and n + h (so n, n + 2 or n, n - 1 in these two examples), must depend on one another, or whether they can be independent. And at least a qualitative version of a very classical conjecture of Hardy and Littlewood states that if k >= 1 and we have an admissible tuple of distinct non-negative integers h_1, ..., h_k, which just means that the integers h_i don't cover all the residue classes modulo any prime, so there is no fixed prime factor obstruction, then there should be infinitely many n giving simultaneous prime values n + h_1, n + h_2, ..., n + h_k. Now, we don't know how to solve this type of problem at the moment, and part of the reason is that the prime number theorem tells us, among other things, that the primes are sparse, which makes it difficult to study their patterns. So what we'd like to do instead is to look at some sort of avatar of the primes, supported on a non-sparse set, that we can analyze in some way and that gives us some fruitful information about the primes. A classical instance of this is the Liouville function. This is the completely multiplicative function given by (-1) raised to the number of prime factors of the integer n, counted with multiplicity. And it's been known since Landau, at the beginning of the 20th century, that the prime number theorem is equivalent to the averaged partial sums of the Liouville function tending to zero as x goes to infinity, and that the Riemann hypothesis, which is equivalent to a statement about the size of the error term in the prime number theorem, is equivalent to a statement about the cancellation in the partial sums of the Liouville function. So the Liouville function has some intimate connection with the primes, and so naturally, it's reasonable to think that if we want to study simultaneous prime values amongst tuples of integers, it might make sense to look at tuples of values of the Liouville function. In the remainder of the talk, I won't be able to say anything that tells us anything new about the primes, so for now I'm just going to focus on the problem of analyzing these values of the Liouville function. Okay, so for the remainder of the talk, I'm going to refer to a k-tuple of pluses and minuses as a sign pattern of length k. And Chowla's conjecture is a statement about precisely the values assumed by these k-tuples of values of the Liouville function.
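To make these objects concrete, here is a minimal Python sketch, not from the talk, of the Liouville function and its normalized partial sums; it uses sympy's factorint, and the cutoff 10^5 is purely illustrative.

```python
from sympy import factorint

def liouville(n: int) -> int:
    """(-1)^Omega(n): -1 raised to the number of prime factors of n, with multiplicity."""
    return (-1) ** sum(factorint(n).values())

x = 10**5
# The prime number theorem is equivalent to this normalized sum tending to 0.
print(sum(liouville(n) for n in range(1, x + 1)) / x)
```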
So for each k >= 1, we expect that the values λ(n + h_1), ..., λ(n + h_k), where the h_i are distinct non-negative integer shifts, assume each of the length-k sign patterns infinitely many times. And moreover, we expect that they equidistribute among these sign patterns: there are 2^k sign patterns of length k, and we expect each of them to be produced with frequency x / 2^k among the integers n <= x. Now, this is known for k = 1, because the prime number theorem, which is a statement about partial sums of the Liouville function, tells us exactly that λ takes the value +1 and the value -1 with equal probability one half among integers up to x. But for k >= 2, we don't really know very much about this problem. At any rate, we can interpret what Chowla's conjecture says. First of all, I mentioned that these tuples should equidistribute amongst the sign patterns of length k. The upshot of that is that the values λ(n + h_1), ..., λ(n + h_k) should be statistically independent as n varies. In other words, the value taken by λ at n + h_1 should not influence or be influenced by the values taken by λ at the other numbers n + h_2, ..., n + h_k, at least for most n. Okay, now there's another way to phrase Chowla's conjecture, which is related to what are called correlations. Here, by a correlation of arithmetic functions f_1, ..., f_k, I just mean an average of products of the form f_1(n + h_1) ⋯ f_k(n + h_k), and sometimes I'll call these autocorrelations if all of the functions are the same. Well, for the correlation averages of the Liouville function, you can take the sum and decompose it according to the different length-k sign patterns the Liouville function assumes, and thereby express the correlation average in terms of the frequencies of these sign patterns. And conversely, you can express the frequency of sign patterns using correlations. That's because there is a very convenient identity for the indicator function of the Liouville function taking a given value +1 or -1. If we apply that identity for each of the signs that we want the Liouville function to assume along the tuple, then we get a sum of these sorts of products, which we can expand into a term independent of n plus a sum of correlation averages. So if we expect each of the frequencies to be about x / 2^k, it would be enough to show that all these correlations are small. And so Chowla's conjecture in the correlation formulation is the statement that for each k and each set of distinct shifts h_1, ..., h_k, these k-point autocorrelations vanish, in the sense that the averages tend to zero as x goes to infinity. Okay, now we can interpret these two versions of Chowla's conjecture as follows: λ(m) depends on the prime factorization of m, and therefore manifests some information about the multiplicative structure of n + h_1, n + h_2, ..., n + h_k, so the conjecture suggests that these multiplicative structures, these factorizations, should be independent at some level as n varies. And so we might expect that any other function which interacts in a predictable way with products should also manifest this sort of independence.
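As an illustration of that identity and expansion (my own sketch, with illustrative shifts and pattern): the indicator of λ(m) = ε is (1 + ελ(m))/2, so a pattern count is 2^{-k} times a main term plus correlation sums.

```python
import math
from sympy import factorint

def liouville(n):
    return (-1) ** sum(factorint(n).values())

x, shifts, pattern = 10**5, (0, 1, 2), (1, 1, -1)
lam = [0] + [liouville(n) for n in range(1, x + 3)]  # lam[m] = lambda(m), 1-indexed
# direct count of n <= x realizing the pattern
direct = sum(all(lam[n + h] == e for h, e in zip(shifts, pattern))
             for n in range(1, x + 1))
# the same count via the identity 1_{lambda(m)=e} = (1 + e*lambda(m))/2
via_identity = sum(math.prod(1 + e * lam[n + h] for h, e in zip(shifts, pattern))
                   for n in range(1, x + 1)) / 2 ** len(pattern)
assert direct == via_identity
print(direct / x)  # Chowla predicts this is near 1/2^k = 0.125
```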
And so, for example, a multiplicative function, a function that splits products whenever the factors are coprime, should also have this sort of independence property for its various values. But of course this depends on the function, and there are many exceptions. By the way, for most of this talk I'm going to focus on real-valued functions, although on this slide I'll refer to some complex-valued functions. So for example, if f is a real Dirichlet character modulo q, then if I shift n by q, the function is left invariant, because the function is q-periodic. More generally, if I take complex-valued functions taking values in the closed unit disk, then we have other exceptions. Of course, we have complex Dirichlet characters as well, but then also Archimedean characters: these are the functions n ↦ n^{it} for t a real number, and these vary slowly with n. If n is sufficiently large, (n+1)^{it} and n^{it} are very close together, so the value f(n) is very similar to the value f(n+1); these are not independent values. And more generally, if we take a product of these characters, or even a function that we make slightly different, but not too different, from a product of a Dirichlet character and an Archimedean character, then we'll still see some coupling between these values. So this independence between values is not seen by these types of multiplicative functions. There is a generalization of Chowla's conjecture, due to Elliott, which speaks about functions that precisely don't have these sorts of obstructions, of looking like twisted characters. And the way this conjecture can be formulated is by using the pretentious distance of Granville and Soundararajan, which is given as follows. If we're given two parameters y and x bigger than two, and we have arithmetic functions f and g taking values in the closed unit disk, we take the (squared) distance between f and g over the primes between y and x to be the sum over those primes of (1 - Re f(p) conj(g(p)))/p. When we want to consider all the primes up to x, we'll just write that as D(f, g; x). And if we take x to infinity, this is non-negative and increasing in x, so the limit exists and we can define D(f, g; ∞); it could be infinite, it could be finite. Now, if we replace the numerator in this definition of the distance by the maximum it could take, which is two, then Mertens' theorem tells us that this pretentious distance squared is bounded above by 2 log log x plus a constant. And all of the summands are non-negative, because this real part is always at most one. So we get this chain of inequalities, and of course the upper bound can be attained. We're interested more in the case where that is far from the truth, which suggests that f and g behave very similarly. So we'll say that f is pretentious if the distance between f and one of these twisted characters, a Dirichlet character times an Archimedean character, is bounded, so it doesn't increase with x; and we'll say it's non-pretentious otherwise. An example of a non-pretentious function is the Liouville function. Now, as I said, for most of the talk I'll focus on real-valued functions, in which case non-pretentiousness is just the same as saying that the distance between f and any real Dirichlet character tends to infinity.
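Here is a small sketch of this (squared) distance, again assuming sympy and a purely illustrative cutoff; for λ against the constant function 1, every prime contributes 2/p, so the sum grows like 2 log log x.

```python
from sympy import factorint, primerange

def liouville(n):
    return (-1) ** sum(factorint(n).values())

def dist_sq(f, g, x):
    """Squared pretentious distance D(f,g;x)^2 for real-valued f, g."""
    return sum((1 - f(p) * g(p)) / p for p in primerange(2, x + 1))

one = lambda n: 1
print(dist_sq(liouville, one, 10**5))  # grows like 2 log log x: non-pretentious
print(dist_sq(one, one, 10**5))        # identically 0
```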
And Elliott's conjecture, for real-valued functions at least, can be stated as follows: if f is a non-pretentious real-valued multiplicative function, then all of its averaged autocorrelations tend to zero. In particular, taking f to be the Liouville function recovers Chowla's conjecture. Now, this is still open for all k >= 2. If k = 1, this is a generalization of the prime number theorem, due in this real-valued case to Wirsing. Okay, so I want to talk about two problems related to sign patterns, both having to do with Chowla's conjecture in some form or other. But in order to motivate them, let me recall the knowledge we currently have about sign patterns and how many are attained. Some of this is stated a bit more weakly than what is actually proven, but I just want to give it for motivation's sake. If we look at k between 1 and 4 and ask how many sign patterns are attained by the tuple of k consecutive values of the Liouville function, then it's known in all of these cases that all of the sign patterns are attained infinitely often, and in fact they're attained with some notion of density, with positive upper density, certainly. If k = 5, then recent work of Tao and Teräväinen has shown that at least 24 of these sign patterns are attained, but as far as I know, not all of them are known to be attained. And this gets worse as k gets large. Currently, the best result I'm aware of is from a very deep paper of Matomäki, Radziwiłł, Tao, Teräväinen, and Ziegler, where they show that super-polynomially many sign patterns are attained for each k. But as far as I know, we don't know, for all large enough k, whether a positive proportion of the sign patterns occurs. Now, if we assume some unproven hypotheses, then Chowla's conjecture is known to hold. For example, if we assume that Siegel zeros for the Dirichlet L-functions of real characters exist, then Chen showed that all 2^k sign patterns do occur. But currently, we don't know how to prove Chowla's conjecture unconditionally. So this motivates a problem that was stated by Duda-Hur in 2018 at an AIM meeting, which is the following. If rather than looking at the Liouville function we look at a Liouville-like function which is non-pretentious, can we show that not only do all the sign patterns occur, but that we get equidistribution among the sign patterns? So in particular, can we construct a set of primes P such that the sum of the reciprocals of the primes in P is infinite (this is what leads to non-pretentiousness, as I'll show later), and such that the Liouville function of P, that is, the function (-1) raised to the number of prime factors of n coming from P, counted with multiplicity, is such that all 2^k sign patterns of length k are assumed by the tuples (λ_P(n + h_1), ..., λ_P(n + h_k)), and with the same frequency for each of them? The Liouville function is precisely the case where P is the set of all primes; here we're allowing P to be a proper subset of the primes, and asking whether we can still have a function that looks like the Liouville function and does what Chowla's conjecture suggests it ought to. Okay, so I'm just reminding you what the definition is here. We're going to focus on P that have relative density 0.
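A minimal sketch of this λ_P (the toy set P below does not satisfy the density hypotheses of the problem; it's only to show the definition):

```python
from sympy import factorint

def lambda_P(n, P):
    """Completely multiplicative: -1 at the primes of P, +1 at all other primes."""
    return (-1) ** sum(e for p, e in factorint(n).items() if p in P)

P = {2, 3, 5}  # toy only: the actual problem needs P of relative density 0
               # among the primes, yet with divergent sum of reciprocals
print([lambda_P(n, P) for n in range(1, 13)])
# [1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, -1]
```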
Relative density 0 means that the number of primes in P up to x is a proportion 0 of all the primes up to x, as x goes to infinity. With Klurman and Teräväinen, we showed that for such a set P that has relative density 0, but still has a sum of reciprocals that's infinite (there are such examples), we do get the expected equidistribution among the sign patterns of length k. Now, how does this relate back to Elliott's conjecture? This is a multiplicative function which takes real values, but it's not the Liouville function. Well, it's not difficult to show, as I'll do a bit later in the talk, that λ_P is non-pretentious whenever P has relative density 0 and satisfies the condition that the sum of reciprocals is unbounded. So these λ_P that we treat in this theorem are in fact non-pretentious, and this gives us an instance of Elliott's conjecture holding. And as far as we're aware, this is the first deterministic example of a real-valued function that satisfies Elliott's conjecture. I say deterministic because it was proven by Klurman, Shkredov, and Xu that so-called random multiplicative functions do satisfy Elliott's conjecture with high probability. Okay. Now, I just mentioned an example of a function that does hit all of the sign patterns with the right frequency, but we could ask, on the flip side: maybe the Liouville function doesn't hit all the sign patterns, is that possible? Or must any ±1-valued completely multiplicative function produce every sign pattern of length k? And the answer is no. Here's a very simple example. Take the Legendre symbol mod 3; it takes the value 0 at the prime 3, so if you want to correct it to a ±1-valued completely multiplicative function, you choose the value of the corresponding modified function at 3 to be +1 or -1. That leads to two different functions, χ̃_3^+ and χ̃_3^-. If you look at either of these examples, and you look at the numbers 3m+1, 3m+2, 3m+3 and evaluate these functions, well, χ̃_3 retains some amount of periodicity from the Legendre symbol: if the number is not divisible by 3, it is genuinely equal to the Legendre symbol of that number. So χ̃_3^±(3m+1) is just the Legendre symbol of 1, which is +1. Similarly, the value at 3m+2 is just the Legendre symbol of 2, which is -1. And for the last one, which is divisible by 3, we don't necessarily know; we get a plus or a minus. But the upshot is that in any three consecutive integers we get at least one minus and at least one plus. So we can never get three pluses in a row, and we can never get three minuses: those sign patterns of length three are omitted by the consecutive values of these two functions. And you can ask more generally, as the Lehmers did in the 60s, for a classification of all completely multiplicative functions that omit the pattern of k consecutive pluses, for each k. So for each k you get a different problem: classify the set of f such that you never get k consecutive pluses in a row. That's the problem.
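Here is a quick computational check of this, with the modification χ̃_3^± implemented completely multiplicatively (the cutoff 10^4 is illustrative):

```python
from sympy import factorint

def chi3_tilde(n, sign_at_3):
    """Legendre symbol mod 3, extended completely multiplicatively, with the
    value at the prime 3 replaced by sign_at_3 (+1 or -1)."""
    val = 1
    for p, e in factorint(n).items():
        val *= (sign_at_3 if p == 3 else (1 if p % 3 == 1 else -1)) ** e
    return val

for s in (+1, -1):
    vals = [chi3_tilde(n, s) for n in range(1, 10**4)]
    for target in (+1, -1):
        assert not any(vals[i] == vals[i + 1] == vals[i + 2] == target
                       for i in range(len(vals) - 2))
print("neither +++ nor --- occurs among consecutive values")
```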
And I think the Lehmers' motivation for this was that if you study this problem, you can tell which completely multiplicative functions, such as Legendre symbols modulo primes, give rise to a certain number of pluses in a row, and that sheds light on the number of consecutive quadratic residues you can get in a row. At any rate, Schur proved that in the case k = 3, the two examples I've given you here, χ̃_3^+ and χ̃_3^-, are the only possibilities for completely multiplicative functions that omit the sign pattern +++. Of course, you can ask this question for other sign patterns, but as I'll mention in just a moment, the sign patterns of all pluses are a bit special in a sense, so we'll just focus on those. Now, Schur's argument is a bit involved and not very systematic; there's lots of casework. So preferably we'd like to approach these types of problems in a different way. Hudson considered the problem with k = 4, did a lot of computational work on it, and claimed that there are precisely 13 functions aside from Schur's examples. (Schur's examples never produce +++, so certainly four pluses in a row can't occur for them.) Besides those, Hudson said there are 13 functions that omit four pluses in a row, and they break down roughly into a union of two sets, S and T; this is a very non-standard way of stating it, but the point is that one set, S, consists of modified Legendre symbols: for the primes 5, 7, 11, 13, and 53, you do the same thing I described with χ̃_3, changing the value at the prime to +1 or -1, and you get two different functions for each prime that way. And interestingly enough, the first four primes here are just the first four primes after three; 53 is quite different, it is what it is, and if anybody has some insight as to why it should occur, I'd be happy to hear it. The second set, T, is related in that it contains two of these modifications of the primitive character mod 4, and then one more function, which is basically the completely multiplicative function that takes the value +1 at all primes other than 2 and the value -1 at the prime 2. Okay. So, I mentioned that ++++ is special; also, the cases k = 3 and 4 are special. What do I mean by special? Well, in these two results, Schur's result and Hudson's conjecture, we have a classification with a finite number of examples. But as soon as you take k >= 5 for the sign pattern of k pluses in a row, you get infinitely many functions; currently my PhD student at Durham is looking at the problem of classifying these examples, assuming Elliott's conjecture. So that's one thing. The other thing is that the pattern of four pluses in a row is special in the sense that if we replace it by, say, a pattern like ++--, or in fact any pattern of length four that contains two minuses, it's very easy to construct infinitely many examples.
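Before that example, here is a quick numerical check, consistent with Hudson's list as just described, that the modified Legendre symbols in S omit four consecutive pluses (sympy's jacobi_symbol computes the Legendre symbol here; the cutoff is illustrative):

```python
from sympy import factorint, jacobi_symbol

def chi_tilde(n, q, sign_at_q):
    """Legendre symbol mod q, extended completely multiplicatively, with the
    value at the prime q replaced by sign_at_q."""
    val = 1
    for p, e in factorint(n).items():
        val *= (sign_at_q if p == q else jacobi_symbol(p, q)) ** e
    return val

for q in (5, 7, 11, 13, 53):
    for s in (+1, -1):
        vals = [chi_tilde(n, q, s) for n in range(1, 10**4)]
        assert not any(all(v == 1 for v in vals[i:i + 4])
                       for i in range(len(vals) - 3))
print("no ++++ up to 10^4 for any of these ten modified characters")
```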
So for example, if you have a length-four sign pattern with two minuses in it, consider the function f_{p_0} which takes the value -1 at exactly one prime p_0 > 3 (and +1 at all other primes). In any window of length four, at most one integer is divisible by p_0, so f_{p_0} can produce at most one -1 there; you can never get two minuses in the same length-four segment. So certainly you get infinitely many examples omitting certain sign patterns of length four. So ++++ is special; ---- presumably also, but I haven't thought about that too much. So, with Oleksiy and Joni, we proved that Hudson's classification is in fact correct, and we proved it in a somewhat stronger form: not only do functions outside of Schur's examples and the examples in S ∪ T produce ++++ at least once, they must actually produce it on a set of positive upper density. So a large number of these occurrences do happen. Okay, so in the remainder of the talk, I'm going to focus on these two problems: the problem of constructing a Liouville-like function that equidistributes among sign patterns of all lengths, and then Hudson's conjecture; hopefully I'll be able to shed some light on both. Okay. In order to talk about correlations and sign patterns, which are related problems, it's worth noting, and what I'll discuss in a moment is in some sense extending this work, that there is a complementary result to Elliott's conjecture, due to Klurman, which treats the case where f is pretentious. Elliott's conjecture suggests that the autocorrelations of a non-pretentious function vanish; what happens if f is pretentious, so it behaves like a Dirichlet character at most primes? What Klurman proved in 2016 is that if you take a real-valued multiplicative function which is pretentious, so the distance of f from a real character χ is bounded, then you can actually compute all of its autocorrelations asymptotically. And those correlations, which we can think of as global averages, factor into a product of local contributions: we have an Euler product, which represents the behavior of the values of f and χ at primes that don't divide the conductor of χ, and then a factor which represents the behavior of f at the primes that do divide the conductor. (Again, there's a misprint on the slide: that term should not depend on χ(p^ν), because that's just zero, so ignore that.) But anyway, the constant C_{f,χ} depends only on χ and the values of f at primes dividing the conductor. So we have this local-to-global type result. And the key observation, which I'll delve into a bit more on the next slide, is that if f is pretentious to a character χ, meaning f(p) looks like χ(p) very often, then, assuming say that f takes values ±1, the function F = fχ is basically determined by small primes: at large primes, fχ will look roughly like 1, so the values at those larger primes will not really influence its value. And so the correlations between the values f(n + h_1), ..., f(n + h_k), which are certainly related to the correlations of F, can be analyzed. Okay, so let me be a bit precise about that. How does this work?
Well, take one of these correlation sums (I haven't normalized it), and for simplicity assume f is completely multiplicative, and pick some parameter y between 2 and x, the length of the sum. Then I can split each of the numbers n + h_1, ..., n + h_k into the product of its prime factors that are less than y and the product of its prime factors that are bigger than y, which gives me a factorization of each of n + h_1, n + h_2, ..., n + h_k that I can then split using multiplicativity. That produces a double sum: a sum over divisors d_1, ..., d_k of n + h_1, ..., n + h_k all of whose prime factors are less than y, and then the contribution from the remaining primes. So the inner sum is over those n for which n + h_i is divisible by each d_i, and moreover all the primes dividing each quotient are larger than y. Now, of course, if f varies significantly at the large primes, this is a difficult thing to estimate, mainly because we don't really know how to deal with large primes in a systematic way. (And 'large' means something specific, which I'll say in a second.) But here's a very simple toy case: if f(p) = 1 for all primes p bigger than y, then all these summands are just 1, and the inner sum is precisely a cardinality: the number of n satisfying some congruence conditions, as well as a sieve condition, namely that all the prime factors of these quotients are bigger than y. And this is exactly what the tools of sieve theory allow us to deal with. For example, if we take y less than x^ε, and we make sure these conditions are not too strong, so the d_j are not too large, then we can use the fundamental lemma of sieve theory to get an asymptotic formula for this. In particular, we can estimate the correlations very well under this condition. In some sense, this underlies a result of Daboussi and Sárközy, which I sharpened a few years ago, and which says the following, as a specific instance of this: if we take the Liouville function and truncate it, meaning we take a function λ_y which equals λ at all primes p less than some parameter y, and we make λ_y equal to 1 at all primes bigger than y, with y less than x^ε, then we can actually show cancellation in the correlation sums of λ_y(n) λ_y(n + h). The point being that if we go through this scheme of separating the contributions of the small primes from the contribution of the large primes, the large primes don't really hurt us. Okay, so you can run the same type of argument not just if f is equal to 1 on the nose at all primes bigger than y, but also, first of all, if f is equal to a Dirichlet character there, because then you can split the sum over n into residue classes and rerun this type of argument with an extra arithmetic progression constraint on n, basically. And moreover, you don't need f(p) to equal χ(p) on the nose: you can allow f to vary slightly away from χ(p) for p bigger than y, and still run similar arguments. And this is what underlies Klurman's theorem.
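A small sketch of this truncated function λ_y and one of its shifted correlation averages (the parameters are illustrative, and the printed value is only expected, not proven, to be small at this finite cutoff):

```python
from sympy import factorint

def liouville_truncated(n, y):
    """Completely multiplicative: -1 at primes p <= y, +1 at primes p > y."""
    return (-1) ** sum(e for p, e in factorint(n).items() if p <= y)

x, y, h = 10**5, 50, 1
corr = sum(liouville_truncated(n, y) * liouville_truncated(n + h, y)
           for n in range(1, x + 1))
print(corr / x)  # expected to be small when y <= x^epsilon
```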
So in Klurman's paper, what he's effectively doing is taking this threshold y to be something like log x; if f is pretentious, then because the sum over the primes converges, the tail gets small, and because he can control the tail, he's able to control this sum effectively. And the insight of what we did in our paper, as one of the inputs to the main theorems I'm talking about here, is that by using a better sieving technique, we can deal with not just pretentious functions, but also a certain class of non-pretentious functions: precisely those where f(p) looks like one of these characters in some narrow window of primes, namely between x^ε and x. So this improves the window from log x up to x to the narrow window from x^ε up to x. And the analogue of Klurman's theorem is the following statement: if χ is a real Dirichlet character, and f is a real-valued multiplicative function taking values in {-1, +1} which is non-pretentious, so the distance between f and χ eventually gets large, but f is also close to χ on the primes between x^ε and x, then we can show that the correlation averages of f cancel. Okay, and we use this to solve the question from the AIM meeting. How does that go? Suppose P has relative density zero in the primes and the sum of the reciprocals of the primes in P is infinite. First of all, we can show that λ_P, the associated Liouville-like function which is -1 exactly at the primes in P, is close to 1 at most of the primes between x^ε and x. How does that work? Well, we know, because of the relative density condition, that eventually the number of primes in P up to y is small compared to π(y). Now look at the distance quantity, which is just a sum over primes of terms of the form (1 - λ_P(p))/p; of course λ_P(p) = +1 unless p lies in the set P, so the squared distance over the window is exactly twice the sum of 1/p over the primes of P in that window. We can then use partial summation to relate that to the counting function of primes in P, and because that count is small compared to π(y), we get a bound on the distance: the relevant integral is of size log(1/ε), and ε³ log(1/ε) is small compared to ε². So the distance between λ_P and 1 over this window is small. And in order to apply our theorem (just to go back: here I've been thinking of χ as the trivial character), we also want to show that λ_P is far away from 1 globally; in fact, I'll show that λ_P is actually non-pretentious, because I mentioned that earlier. How do you see that? Well, take any real character χ. We can split the distance into two parts, one part relating χ to 1, and another part relating λ_P · 1 to λ_P · χ. Roughly speaking, you can rewrite it as two contributions: a contribution from how close χ can be to 1, and a contribution from the primes in P. If χ is a non-principal character, the first sum is large, basically by the prime number theorem in arithmetic progressions in the form of the Siegel-Walfisz theorem, and the second term is o(log log x) because P is sparse compared to the primes. So that deals with the case of non-principal characters.
And if we compare λ_P to 1 itself, the first sum is identically zero, but the second sum is exactly twice the sum of 1/p over p in P, which by assumption tends to infinity. So this distance also goes to infinity, and therefore the distance from λ_P to χ is always large for any fixed χ; that means λ_P is non-pretentious. Moreover, if we take x large enough, the distance between λ_P and 1 is large, while, as we've shown, the distance between λ_P and 1 over the primes between x^ε and x is small. So, based on the result I mentioned before, all the correlations of λ_P are small, and, as I mentioned, that tells us something about sign patterns: using the fact that all of the correlations vanish, we then establish equidistribution among sign patterns. Okay, so that's how we solve the problem, using this new idea about how to deal with a certain subcollection of non-pretentious functions. Okay, so in the remaining time, I want to talk about the second problem I mentioned, Hudson's conjecture. Hudson's conjecture is an example of what I like to call a rigidity problem for multiplicative functions. What are these? In general, they are statements like the following: we want to determine all multiplicative functions, whether they take bounded real values or unbounded complex values, that satisfy some sort of stringent local property. I'll give you some examples to give you a sense of that, but what I mean by a local property is that there's some relationship between f(n) and its neighboring values f(n+1), f(n-1); some sort of conspiracy between these various neighboring values. And there are two possibilities: either you determine all such functions, because there are some, or you show that there are no functions satisfying the local property. So here are some classical examples. Erdős proved in the 40s that any multiplicative function which is everywhere non-decreasing is of the form n^k. (Here I should have said that f is integer-valued; it could take real values, in which case k doesn't actually have to be a non-negative integer, my apologies, but you can modify the statement so it makes sense in all cases: if f is real-valued and has this property, then it looks like n^α.) Sárközy proved that if f is completely multiplicative and satisfies a linear recurrence relation, then it has to look like a power multiplied by a Dirichlet character. (And again, I think here I want the exponent α, not just k, my apologies.) More recently, some of these problems have come under consideration because of the relationship with correlations, which I'll talk about. If f is a ±1-valued multiplicative function, then the partial sums of f are bounded if and only if f is periodic and its sum over a period is zero. This is related to the Erdős discrepancy problem that Tao resolved: Tao showed that if f is completely multiplicative, then this bounded-sums condition never occurs, and Klurman looked at the multiplicative case, proving a classification of those multiplicative, but not completely multiplicative, functions that satisfy it.
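A toy instance consistent with the classification just stated (my own choice of example): f(n) = (-1)^{n+1} is multiplicative, though not completely multiplicative, ±1-valued, periodic mod 2 with zero sum over a period, and its partial sums stay bounded.

```python
def f(n):
    """f(n) = (-1)^(n+1): multiplicative (but f(4) != f(2)^2, so not completely
    multiplicative), periodic mod 2, with zero sum over a period."""
    return 1 if n % 2 == 1 else -1

partial, worst = 0, 0
for n in range(1, 10**6 + 1):
    partial += f(n)
    worst = max(worst, abs(partial))
print(worst)  # stays bounded (in fact always 0 or 1)
```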
And then, with Klurman, a few years ago, we proved that no completely multiplicative function taking values on the unit circle can have uniformly bounded gaps between its neighboring values. So that's an example of a stringent local condition that never occurs for any function in this class. Okay, so I mentioned that correlations come into the picture. The reason, the rough scheme of how this works, is that we want to analyze these local conditions about neighboring values in two steps. The first step is to rule out non-pretentious functions, and the reason we don't expect them to occur is that a stringent local property implies that f is not very random, while non-pretentious functions are, in some sense, random. Once we've ruled out non-pretentious functions, we're in the pretentious regime, where Klurman's theorem is at our disposal and we can classify the pretentious examples using his theorem. So, in order to talk about rigidity problems, I need to tell you something about non-pretentious functions and their correlations. As I mentioned, this is a difficult area; there aren't too many results, but there are some very deep breakthroughs that do give useful information. These typically have to do either with weighted versions of correlations, or with correlations along a sequence of scales; I'll be clear about what that means in a moment. By a weighted version, I mean a logarithmically averaged version: here I'm going to use this notation to mean that I sum the arithmetic function A(n) with the weight 1/n over all integers n up to x. If A(n) is bounded by one, then this sum has absolute value at most the sum of 1/n over n up to x, which is of size log x; that's why we normalize by log x. Then Tao proved a result in the direction of Elliott's conjecture, at least for two-point correlations, which says, again for real-valued functions, that if f is real-valued, multiplicative, and non-pretentious, then all of the binary, or two-point, logarithmically averaged correlations vanish as x goes to infinity. Now, this is not quite the same as the two-point version of Elliott's conjecture, but Tao and Teräväinen showed that from this you can conclude that the ordinary correlation averages tend to zero at most scales: there is a small exceptional set of scales 𝒴 such that, along all scales outside 𝒴, the correlation averages tend to zero. And here 'small' means that the logarithmic measure of 𝒴 intersected with the interval from 1 up to x is o(log x). So for most scales, these averages are small. Maybe I won't talk too much about this, but the idea that is exploited, and that uses this logarithmic averaging, is that you can dilate these correlations by primes in suitable intervals and get some sort of invariance. The upshot is that you can start with a logarithmic correlation in one variable and get some sort of average in two variables, and that average in two variables can then be estimated using some recent advances related to multiplicative functions, specifically variants of the Matomäki-Radziwiłł theorem. I won't say too much more about that; the point is that this logarithmic weight is crucial here.
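A sketch contrasting the ordinary and logarithmically averaged two-point autocorrelations of λ (a finite, illustrative cutoff, so this only illustrates the definitions, not the theorem):

```python
import math
from sympy import factorint

def liouville(n):
    return (-1) ** sum(factorint(n).values())

x, h = 10**5, 1
lam = [0] + [liouville(n) for n in range(1, x + h + 1)]  # 1-indexed
ordinary = sum(lam[n] * lam[n + h] for n in range(1, x + 1)) / x
logarithmic = sum(lam[n] * lam[n + h] / n for n in range(1, x + 1)) / math.log(x)
print(ordinary, logarithmic)  # Tao's theorem gives o(1) for the logarithmic average
```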
Now, in the same vein, one could ask about k-point correlations where k >= 3. In this case, there has also been some spectacular progress, but the hypotheses required to apply it are stronger. What do I mean by that? These results relate to what are called strongly non-pretentious functions. These are, again, real-valued functions g such that for each fixed real character χ, the distance between g and χ is close to as large as it can be: it's bounded from below by a constant times log log y. (If you recall, this squared distance is at most 2 log log y, so it's of that size.) Well, then Tao and Teräväinen proved two results, in separate papers, but related: if you are given k real-valued multiplicative functions such that the product f_1 ⋯ f_k is strongly non-pretentious (not just the functions individually, but their product), then the logarithmically averaged k-point correlation of f_1, ..., f_k vanishes as x goes to infinity. And moreover, by a similar sort of approach as for the two-point correlation, there is a dense sequence of scales along which the usual correlation averages tend to zero. (Here the notion of density is slightly different; I won't go into it. But the point is that at most scales, these correlations are small, again if the product of the functions satisfies the stronger non-pretentiousness condition.) Now, in spite of the stronger conditions, this is still useful. For example, if you take f_1, ..., f_k all equal to the Liouville function, then for k odd, λ^k is just the same as λ, because ±1 values are invariant under odd powers, and it's not too difficult to show that λ is strongly non-pretentious, again using the properties of Dirichlet L-functions. So, for example, they were able to show some results about sign patterns: namely, they obtained logarithmic equidistribution among the sign patterns of length three. And again, the reason is the relationship between counting sign patterns and correlations: they could basically show that all of the relevant correlations of the Liouville function are small, either with this theorem, or with Tao's theorem, or with the prime number theorem. Now, the drawback of this result is that if we want to use it in a rigidity problem, to show using correlation data that non-pretentious functions can't satisfy the condition, it's not going to work well enough on its own, because it only deals with a certain subcollection of non-pretentious functions. So we'd like to plug that hole by being able to treat the other non-pretentious functions as well. And we do that by noticing the following about what had been treated in previous theorems. (I'm looking at ±1-valued multiplicative functions now.) Klurman's result says that if f is close to a real character, then we can actually compute the correlations. On the extreme opposite side of the spectrum, for k odd, if f is very far away from every Dirichlet character, then we can show that all the correlations vanish. But there's an intermediate range of non-pretentious functions, for which there are characters χ where the distance from f to χ is smaller than any constant multiple of log log x, once x is large enough.
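For illustration, a logarithmically averaged three-point autocorrelation of λ, which these results predict should tend to zero (again a finite, illustrative cutoff):

```python
import math
from sympy import factorint

def liouville(n):
    return (-1) ** sum(factorint(n).values())

x = 10**5
lam = [0] + [liouville(n) for n in range(1, x + 3)]
log_avg = sum(lam[n] * lam[n + 1] * lam[n + 2] / n
              for n in range(1, x + 1)) / math.log(x)
print(log_avg)  # expected o(1): lambda^3 = lambda is strongly non-pretentious
```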
We'll call such functions moderately non-pretentious: they're not strongly non-pretentious, but they are non-pretentious. Okay, and the interesting point about these moderately non-pretentious functions, which links them back to the theorem I mentioned previously, about f being close to a character in the range from x^ε to x, is that at many scales, a moderately non-pretentious function will look like a character in exactly that way. Let's look at what the statement means: if we take y large enough, then the sum over p up to y of (1 - f(p)χ(p))/p is small compared to log log y, and we can decompose that sum, roughly speaking, into segments of the form (x^ε, x] for various x, where these x look basically like y^{ε^j} for various j. And it's not difficult to show, using, say, Markov's inequality, that for many of these j the segment sum is smaller than ε, which means there are many scales where we have the required closeness between f and a character. And because f is non-pretentious, we know that f is far away from that character when we look over all primes. So this puts us in a position to apply our correlation theorem and deduce that those correlations are small, at least at many different scales, the generic scales I'm referring to. Okay, so based on that, what we are able to conclude is that if f is a ±1-valued moderately non-pretentious function (these are the examples we'll consider in the sequel), then, and you don't really need to read this, but the point is that in some sense there is a dense sequence of scales along which these correlation averages tend to zero. Okay, now what does this have to do with Hudson's conjecture? Recall that in Hudson's conjecture, we have a rigidity problem: we want to classify all the ±1-valued completely multiplicative functions that don't take +, +, +, + as consecutive values. We can relate that to correlations in the usual way, which I've mentioned already a couple of times: take the product over j of (1 + f(n + j)), each factor of which is equal to 2 or 0 according to whether f(n + j) is +1 or -1. If we never get four pluses in a row, this product is always zero, because at least one of the values f(n + j) equals -1. So the left-hand side, summed over n, is equal to zero, and we can expand it into correlations: we get a main term and then a sum of correlation terms. So if we know how to deal with the correlations of non-pretentious functions, we can rule out the possibility that a non-pretentious function does this. And that is exactly what we want to do in this case, because Hudson's classification, his list of examples, consists precisely of characters modified at one prime, so they really are pretentious examples; we want to show that the only possible examples are indeed pretentious. And so, to rule out the non-pretentious ones, we'll get a contradiction by showing that each of these correlation averages either vanishes, meaning is o(x), or is sufficiently small; I'll show you what that means in just a moment. And we can get rid of a couple of these cases right away, as I'll explain after the sketch below.
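The expansion just described can be checked numerically; here it is for f = λ, where the identity "sum of the product = x plus fifteen correlation sums" holds exactly:

```python
import math
from itertools import combinations
from sympy import factorint

def liouville(n):
    return (-1) ** sum(factorint(n).values())

x = 10**4
lam = [0] + [liouville(n) for n in range(1, x + 5)]
# prod_{j=1..4}(1 + f(n+j)) is 16 exactly when f(n+1) = ... = f(n+4) = +1, else 0
lhs = sum(math.prod(1 + lam[n + j] for j in range(1, 5)) for n in range(1, x + 1))
# main term x plus the correlation sums over nonempty subsets of the shifts
rhs = x + sum(sum(math.prod(lam[n + j] for j in S) for n in range(1, x + 1))
              for r in range(1, 5) for S in combinations((1, 2, 3, 4), r))
assert lhs == rhs
print(lhs / 16 / x)  # the density of n with four consecutive +1 values
```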
If k = 1, the correlation is just an average of f(n + j), and we know by Wirsing's theorem, which I mentioned earlier, that these are small if f is non-pretentious. And if k = 2, then we know, at almost all scales, by the work of Tao and Teräväinen, that all of these averages are small as well. Now, of course, there is some issue here: 'almost all scales' might depend on which of the correlations we consider. It turns out it doesn't; it depends only on the functions involved, at least in this instance. There's a way to keep track of the sequences of scales and make sure that all the subsequences in question are consistent. Okay, so what's left over, using these two results, is to deal with the four-point correlation and all of the three-point correlations, at least at most scales; these are the scales that Tao and Teräväinen's result deals with. (I misprinted this; it should be a two.) Okay. For the four-point correlation, we can't actually deal with it directly; we can't show that it vanishes, but what we can do is show that it's smaller than one. And the way we do this is by using the fact that if we take the correlation f(n+1) ⋯ f(n+4) and shift n to n + 1, the product retains three of the same factors: f(n+2), f(n+3), and f(n+4). So we can take our original correlation sum and sum it twice; what that leads to, roughly, is the sum of the products f(n+1) ⋯ f(n+4) + f(n+2) ⋯ f(n+5). We can factor out the three common factors and use the triangle inequality, and what we're left with is a sum whose summands are equal to 2 or 0, depending on whether f(n+1) is equal to f(n+5) or not. And again, we can track this condition about equality using correlations; in particular, the event f(n+1) = f(n+5) is supposed to have density one half. So that deals with that too: we get a main term and a correlation term, which we know vanishes along the relevant subsequence of scales. So we get an upper bound of one half for the normalized four-point correlation. Which means that the only thing left to do, in order to rule out non-pretentious functions, is the k = 3 case. Now, if we input what we know, that the four-point correlation average is at most a half, then what we get as a lower bound for the three-point correlation contribution is at least a half times x, roughly. Okay, so that leads to the three-point case. As I mentioned before, if f, and therefore f cubed, which is the same function, is strongly non-pretentious, then Tao and Teräväinen proved that along subsequences of scales these are small. That leaves open the possibility that f is non-pretentious but not strongly non-pretentious, but those are exactly the moderately non-pretentious examples that our theorems are about. And so, if we apply our results, then all of these three-point correlations are also small along a subsequence.
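A numerical rendering of this shifting argument, here for f = λ: since |a + b| = 1 + ab for a, b in {±1}, the four-point sum is bounded by (x + the two-point sum)/2 up to boundary terms.

```python
from sympy import factorint

def liouville(n):
    return (-1) ** sum(factorint(n).values())

x = 10**4
lam = [0] + [liouville(n) for n in range(1, x + 6)]
four_pt = sum(lam[n + 1] * lam[n + 2] * lam[n + 3] * lam[n + 4] for n in range(x))
two_pt = sum(lam[n + 1] * lam[n + 5] for n in range(x))
# shifting shares three factors, leaving |f(n+1) + f(n+5)| = 1 + f(n+1)f(n+5)
print(abs(four_pt), "<=", (x + two_pt) / 2 + 1)
```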
And as I mentioned, it turns out that we can choose a common subsequence of scales that makes all of these different vanishing conditions hold along the same subsequence, which leads to the contradiction for non-pretentious functions: we have the lower bound for the three-point correlation contribution, but also an upper bound which (oops, this should be little-o, that's right) is little-o of x along that subsequence. Okay, I don't think I have much time left, but if you're interested, I can talk during the question period about the pretentious case. Thank you very much for listening.