Do you want to start? I don't know, maybe Zach wants to introduce this thing, or I can do it. Oh, you can do it. Yeah, absolutely. You can do it. You can introduce our joint venture. This is a very nice thing, and I want to start by saying that for me it was a very nice surprise when Emanuel contacted me about doing this together. I find this initiative of a Math Associates seminar from ICTP very interesting, bringing all this community together regularly. I think it will be very nice. So thank you for thinking of us on this occasion; I'm very glad. And what makes me even happier is to have today, besides the catchy title, Certainty vs. Uncertainty, my friend Emanuel Carneiro, a friend of the Brazilian math community who, even a few hours' flight away, continues to impact what happens here. So I'm very happy, and it's a joy for me to introduce Emanuel Carneiro from ICTP, who is talking about Certainty vs. Uncertainty. Please. Thank you, Edgar. Can you all hear me fine and see the slides? Yes. Okay, I have some people on my screen here. I hope you don't all close the video, because it's nice to see some faces; it's a bit solitary on this side of the screen. Feel free to interrupt me at any time and have a conversation. The point here is just to have a little bit of fun this afternoon. Thank you, Edgar, for the kind words. It's very nice to be in touch with all the Brazilian community that is joining us today. Thank you very much for your initiative with this amazing webinar in analysis and PDE in Brazil that started at the very beginning of this crisis. It's very nice to have energetic people like you running the show, and, as you know, you're also one of our associates here.
So we're launching, as of a few weeks ago, our Math Associates seminar, which is supposed to happen once a week as well, to gather our community at ICTP of associates, visitors, and friends around the world. So I welcome everyone this afternoon. Most of you I know and have met, and the ones I haven't met yet, I'm sure will have the opportunity to be here at ICTP at some point in the future, as an associate, as a visitor, or participating in a conference. It's a pleasure to have you all here, and it's good to be in touch with everyone. This is the first time I've come back to ICTP after three months, so I feel very happy today here in the office. The topic today is Fourier uncertainty, hence the title, Certainty versus Uncertainty. I'm going to talk a little bit about this new joint work with Oscar Quesada-Herrera, who is at IMPA and is from Costa Rica. So I guess all the Latin American countries will be very well represented here today. Okay, are we ready to start? Good. I will divide my presentation into three parts. First, I'll do a brief introduction on Fourier uncertainty and recall some of the main facts. Then I'll move to the particular instance that I want to discuss and the previous works. And in the third part of the trilogy, I want to talk a little bit about our point of view on the subject. By the way, again, feel free to jump into the conversation at any time. So my first slide is a little photo of the man who is going to be the main actor this afternoon, Joseph Fourier. This is a memory from my first trip to Paris. The names on the Eiffel Tower are a very nice way the French pay tribute to their scientists: there are 72 names on the Eiffel Tower, and Fourier is one of them, on the west face.
So let's introduce the topic and see a bit of its relevance. A very basic example: I guess it starts with Fourier series. Can you see the little hand moving here when I do this? Yes, we can. Very good. So if you take a periodic function f, say on the torus [-1/2, 1/2], you can write it as a Fourier series. You take any integer k and compute what I call f hat of k, which is just the integral over the torus of f against the exponential e^{-2 pi i k x}; my exponentials will have this normalization throughout. It is then a known fact that, under some conditions, you can write your function f as the sum of these sines and cosines with these Fourier coefficients. And, though not everyone remembers it, this is present in our everyday life much more than we think. Every time we talk on a cell phone we're sending a signal, and somebody is truncating the Fourier expansion of that signal to some number of frequencies: I give you maybe 30, maybe the first 50 coefficients, and on the other side you reconstruct the message. The reason we can do this is that it makes no difference to our ear whether we send just 30 frequencies or the next thousand as well, so it's much more cost-effective to send a little and reconstruct the message on the other side. The same thing happens with TVs. A few years ago, I remember when I was a grad student, there was this boom of LCD TVs, and the definition went from 720 to 1080, and then full HD, ultra HD. After a few years that stopped, essentially because the definition reached the threshold of what our eyes can see: it makes no sense to subdivide the screen into more pixels.
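The truncation game in the cell-phone analogy can be sketched in a few lines (a toy illustration I am adding, with a made-up test signal, not anything from the slides): compute the first few Fourier coefficients of a periodic function on [-1/2, 1/2) and reconstruct from them alone.

```python
import numpy as np

# Sample a periodic function on the torus [-1/2, 1/2).
N = 1024
x = np.arange(N) / N - 0.5
f = np.abs(x)  # a simple continuous periodic test signal (hypothetical choice)

# Fourier coefficients f_hat(k) = integral f(x) e^{-2 pi i k x} dx,
# approximated by a Riemann sum on the grid.
K = 50  # keep only frequencies |k| <= 50, like a lossy transmission
ks = np.arange(-K, K + 1)
fhat = np.array([np.sum(f * np.exp(-2j * np.pi * k * x)) / N for k in ks])

# Reconstruct from the truncated series: sum_k f_hat(k) e^{2 pi i k x}.
f_rec = np.real(sum(c * np.exp(2j * np.pi * k * x) for k, c in zip(ks, fhat)))

max_err = np.max(np.abs(f - f_rec))
print(max_err)  # small: a few coefficients already capture the signal
```

The error is dominated by the discarded tail of the series, which is exactly why sending only the first coefficients is good enough for the ear or the eye.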
So every day we're playing this game of being given partial information on the Fourier transform side and partial information on the function side, and with this partial information on both sides we try to optimize whatever our problem requires; we do the best we can. This is essentially what the Fourier uncertainty phenomenon tells us. So here is my proper definition. Let's take an integrable function f, and define, for the rest of this talk, the Fourier transform f hat at a point xi, where xi is in R^d, my Euclidean space, as the integral of f against the exponential e^{-2 pi i x . xi}, where x . xi is the inner product. Starting with an integrable function, f hat is a well-defined and continuous object, and Plancherel's theorem says that this is an isometry on L^2, so you can extend it to an operator on L^2. These words that you may have seen in the literature so many times, Fourier uncertainty, appear on many different occasions, and to me they essentially say that one cannot have unrestricted control of a function and its Fourier transform simultaneously. Now, this may take many shapes and you may find this phenomenon in very different makeups, but essentially this is what it means. So let me give you a very first and basic example. It's kind of trivial: if I give you the whole function, say we're in dimension one and f is the Gaussian e^{-pi x^2}, and I ask you, can f hat of zero be a number c of your choice? Then you would say no, not unless you're very lucky, because I gave you the function completely; if I give you f completely, I'm already giving you f hat. So unless you happen to choose c equal to one out of luck, you're stuck. There is no room to work on the other side.
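This trivial example can be checked numerically (a sketch I am adding, using the talk's normalization of the Fourier transform): e^{-pi x^2} is a fixed point of the transform, so f hat of zero is forced to be one.

```python
import numpy as np

def fourier_transform(f, xis, L=15.0, n=100001):
    """Numerically evaluate f_hat(xi) = integral f(x) e^{-2 pi i x xi} dx."""
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    return np.array([np.sum(f(x) * np.exp(-2j * np.pi * x * xi)) * dx for xi in xis])

gauss = lambda x: np.exp(-np.pi * x**2)  # the self-dual Gaussian

xis = np.array([0.0, 0.5, 1.0])
ghat = fourier_transform(gauss, xis)
# f_hat(xi) = e^{-pi xi^2}; in particular f_hat(0) = 1, not a number of our choice.
err = np.max(np.abs(ghat - gauss(xis)))
print(err)  # ~ 0
```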
On the other hand, if I give you the function f and say that it's a Gaussian, but only outside the unit interval, and I ask, can f hat of zero be a number c of your choice, then you have room to make that work; the answer is yes. So if I give you less information on one side, you can ask for more information on the other side. This is basically what it means. Of course, the topic is very broad and well studied in mathematics and physics; it has had its own interpretations in physics since the early 1900s. A quick check on Amazon and you find books entitled simply The Uncertainty Principle in Harmonic Analysis; this first book has 600 pages, for example, and there are people whose entire area of research is uncertainty principles in Fourier analysis, and so on and so forth. So this is a big topic that is present in our daily activities. Let me give you a few of my favorite examples of Fourier uncertainty principles. This is perhaps the first one, by Heisenberg around 1927, which says that the L^2 norm of a function on the left-hand side is bounded by a constant depending on the dimension times these moments here: you have f times x and f hat times xi. If you look at the right-hand side, you see that the mass of f and f hat cannot be too concentrated near the origin; otherwise the integrals on the right-hand side would be small and could not beat the left-hand side. So this is the first manifestation that you cannot concentrate too much of the mass of f and f hat around a point. Then Hardy, around 1933, had this other version of the uncertainty principle: if a function f decays like a Gaussian e^{-pi a x^2}, and f hat also decays like a Gaussian e^{-pi b xi^2}, and the product ab is bigger than one, then your function has to be zero.
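Heisenberg's inequality can be checked on its equality case (a sketch I am adding; with the talk's 2 pi normalization of the Fourier transform, the sharp one-dimensional constant is 4 pi, attained by Gaussians):

```python
import numpy as np

x = np.linspace(-20, 20, 100001)
dx = x[1] - x[0]

def l2(g):  # L^2 norm via a Riemann sum
    return np.sqrt(np.sum(np.abs(g)**2) * dx)

f = np.exp(-np.pi * x**2)   # Gaussian, the equality case
fhat = f.copy()             # e^{-pi x^2} is its own Fourier transform

lhs = l2(f)**2                      # ||f||_2^2
rhs = l2(x * f) * l2(x * fhat)      # ||x f||_2 ||xi f_hat||_2
ratio = lhs / rhs
print(ratio)  # ~ 4*pi = 12.566..., the sharp constant in dimension one
```

For any other admissible f the ratio comes out strictly below 4 pi, which is the statement that mass cannot concentrate near the origin on both sides.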
Okay, so f and f hat cannot both have too strong a Gaussian decay. In particular, you already see from this result that f and f hat cannot both have compact support; at least leave this talk knowing that. Another one that comes up in our study this afternoon is by Amrein and Berthier from 1977: if you have two sets E and F in R^d of finite measure, then the L^2 mass of f is controlled by the mass of f in the complement of E plus the mass of f hat in the complement of F, times a constant that depends only on the sets and the dimension. In particular, this result also implies that f and f hat cannot both be compactly supported. Let's go to more recent examples. Well, this first one is actually old, and it is actually more than an uncertainty principle; it is what's called an interpolation formula. These are formulas that allow you to somehow reconstruct your function from some particular set of data. Here, if a function is in L^2 and its Fourier transform is supported in the interval [-1/2, 1/2], then you can recover f from just its values at the integers: f of x equals the sum of f of n times these sinc functions. So if I give you the values of f at the integers, you recover your function completely; in particular, if the values at the integers are zero, then your function has to be zero. There is a recent one by Danilo Radchenko, who was a postdoc here at ICTP, and Maryna Viazovska, where they can recover a Schwartz function: if you have an even Schwartz function, you can recover f from its samples, and the samples of f hat, at the square roots of n, for n a natural number, where these a_n of x are interesting basis functions.
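The Shannon interpolation formula above is easy to test numerically (a sketch I am adding; the test function below is a hypothetical choice whose spectrum sits inside [-1/2, 1/2], and the integer sum is truncated in practice):

```python
import numpy as np

# Shannon sampling: if f is in L^2 and f_hat is supported in [-1/2, 1/2],
# then f(x) = sum_n f(n) * sinc(x - n), where np.sinc(t) = sin(pi t)/(pi t).
f = lambda x: np.sinc(0.4 * x) * np.cos(0.1 * np.pi * x)  # band-limited test function

n = np.arange(-2000, 2001)       # integer samples (truncated tail)
samples = f(n)

xs = np.array([0.3, 1.7, -2.5])  # points off the integer grid
recon = np.array([np.sum(samples * np.sinc(x - n)) for x in xs])
err = np.max(np.abs(recon - f(xs)))
print(err)  # only a small truncation error remains
```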
There is a recent one which I like very much, by João Pedro Ramos and Mateus Sousa from last year, which says that if you take a number alpha between zero and 1 - sqrt(2)/2, and you give me a Schwartz function such that f and f hat both vanish at the points plus or minus n^alpha, for n a natural number, then the function has to be zero. So you see, the idea here is that if I ask too much of f and f hat, then you probably end up with just the zero function. Okay, so much for my introduction. Any questions so far? Are we good? Let me move into this part called Sign Fourier Uncertainty. In 2010, there was a nice paper by Bourgain, Clozel, and Kahane, where they put together a certain uncertainty phenomenon for the Fourier transform and gave applications to a problem in algebraic number theory: in this particular paper they were interested in giving bounds for discriminants of number fields. And they somehow connected this to a problem in Fourier analysis, which is essentially the following. The original uncertainty principle of Heisenberg tells you that you cannot concentrate the mass of a function and its Fourier transform around a point. Here, Bourgain, Clozel, and Kahane want to investigate how much you can concentrate the negative mass of a function: suppose your function has a positive part and a negative part; they want to understand how concentrated the negative part can be. The setup is the following. Take a continuous function f from R^d to R; I'm going to say that this function is eventually non-negative if f of x is greater than or equal to zero for sufficiently large x. And I'm going to define r of f, which you can read as the radius of f, as the infimum of the radii r such that f is non-negative from that radius on. So it's kind of the last sign change of my function f.
For example, here I just took a degree-10 polynomial times a Gaussian. This point here in black, at about two point something (the function is even), is the last sign change of my function, so the r of this function would be this point. So they investigated the following problem. Consider the following family of functions: you are in dimension d, and you take a function in L^1(R^d), not identically zero, which is continuous, even, and real-valued, and whose Fourier transform is also integrable. So you have the function and its Fourier transform both integrable; they are both continuous, even, and real-valued. Now, I'm going to assume, on the third line here, that f and f hat are both eventually non-negative: at infinity, they become non-negative. And the competing condition is that at the origin they are non-positive: f of zero, which is just the integral of f hat, is non-positive, and f hat of zero, which is just the integral of f, is non-positive. So you see the tension going on here: f is eventually non-negative, but its integral over the whole space is non-positive; f hat is eventually non-negative, but its integral is non-positive. There's a compensation here, a competition. And they define the following quantity. I'm going to use the calligraphic A to represent the class of functions and the blackboard bold A to represent the constant I want to investigate: I define A_{+1}(d) to be the infimum, over all functions in this class, of the product r(f) times r(f hat), the product of the last sign changes of the two functions. Why take the product? Well, it turns out that this is the right quantity to investigate because it is invariant under dilations.
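The last sign change r(f) is easy to compute numerically (a sketch I am adding; the degree-10 polynomial below is a stand-in of my own, not the one on the slide):

```python
import numpy as np

def last_sign_change(f, r_max=20.0, n=400001):
    """Numerically estimate r(f): the infimum of r such that f >= 0 on |x| >= r."""
    x = np.linspace(0, r_max, n)   # f is even, so scan x >= 0
    neg = np.where(f(x) < 0)[0]
    return 0.0 if len(neg) == 0 else x[neg[-1]]

# Stand-in example: an even degree-10 polynomial times a Gaussian,
# eventually non-negative since the leading coefficient is positive.
f = lambda x: (x**10 - 8 * x**6 + 3 * x**2 - 1) * np.exp(-x**2)

r = last_sign_change(f)
print(r)  # the radius r(f), here around 1.66
```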
If you want to take your function f and rescale it to push the last sign change very far out, then the reverse operation happens to the Fourier transform, and this product remains invariant. So dilation here will not help you, and the product of the radii at which each of these functions becomes non-negative is the natural quantity to investigate. And what they prove is that this infimum is greater than or equal to a certain positive constant. You should see this as an uncertainty principle: the infimum cannot be zero; you cannot make the negative mass very, very concentrated at the origin in this sense. They actually give these nice bounds: this quantity, depending on the dimension d, is bounded above and below by constants times the square root of d, where the constant on one side is essentially one over two pi e and on the other essentially one over two pi. But the message is that it is something away from zero. Okay, here are some examples. I just take a function f which is a sum of two Gaussians minus another Gaussian. You can see here where this function becomes non-negative and where the other function becomes non-negative, and what I want is just the product of these two radii. And you can check that the competing conditions are verified: the integrals of both of these functions are non-positive. So let's now think a little about this problem. The first idea that Bourgain, Clozel, and Kahane had is that you can reduce the problem to eigenfunctions: you don't have to look at the whole pair f and f hat; you can do some interesting symmetrizations and talk only about eigenfunctions. Recall that we started with this family: f and f hat integrable and continuous, f not identically zero, both eventually non-negative, and with both integrals non-positive.
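The dilation invariance of the product r(f) r(f hat) can be checked with closed forms (a sketch I am adding; the sum of Gaussians below is a hypothetical example of my own, using the fact that e^{-pi a x^2} has Fourier transform a^{-1/2} e^{-pi xi^2 / a}):

```python
import numpy as np

# A hypothetical function (not the one on the slide) and its exact transform:
f = lambda x: (np.exp(-np.pi * x**2 / 2) + np.exp(-2 * np.pi * x**2)
               - 2 * np.exp(-np.pi * x**2))
fhat = lambda t: (np.sqrt(2) * np.exp(-2 * np.pi * t**2)
                  + np.exp(-np.pi * t**2 / 2) / np.sqrt(2)
                  - 2 * np.exp(-np.pi * t**2))

def radius(g, r_max=10.0, n=200001):
    x = np.linspace(0, r_max, n)        # even functions: scan x >= 0
    neg = np.where(g(x) < -1e-15)[0]
    return 0.0 if len(neg) == 0 else x[neg[-1]]

delta = 3.0
f_d = lambda x: f(delta * x)               # rescaled function
fhat_d = lambda t: fhat(t / delta) / delta  # its Fourier transform

p1 = radius(f) * radius(fhat)
p2 = radius(f_d) * radius(fhat_d)
print(p1, p2)  # the two products agree: dilation leaves r(f) * r(f_hat) invariant
```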
So what you do is play this rescaling game: if you set f_delta(x) = f(delta x), then the Fourier transform becomes delta^{-1} f hat(xi/delta). Playing this game, you can align the radius of f with the radius of f hat, so by rescaling you may assume the radius of f equals the radius of f hat. Then you can just sum the two functions. So consider this function w, which is just the sum f plus f hat, and let's take a look at why w has a radius less than or equal to that of the original f. The reason is that past this radius r, each function is non-negative, so their sum is non-negative, and the last sign change of w is certainly less than or equal to that of f. One interesting and crucial observation is that this function w is not zero. The reason: if w were zero, you would have f equal to minus f hat, so f would be eventually non-negative and eventually non-positive at the same time, hence eventually zero; the same for f hat. But eventually zero means compact support, so f would have compact support and f hat would have compact support, and then your original function would have to be zero, which is a contradiction, since you started with a non-zero function. So the function w that you construct like this is non-zero; moreover, since f is even, w hat equals f hat plus f, which is w again, so w is an eigenfunction with eigenvalue one, a function in the class that does a better job. So if you start with any function in the class, you can always reduce to an eigenfunction that does a better job; that's the moral. There's even another symmetrization: if you want, you can average over the group of rotations of the Euclidean space to make your function radial. We are not necessarily going to use this right now, but it's there if you want to take a look.
So sometimes I will consider this eigenfunction problem, which I'm going to label the same way, calligraphic A_{+1}(d) with a star. This is just the set of functions f, continuous, integrable, and real-valued, such that f hat is equal to f, an eigenfunction with eigenvalue one. Now the two conditions become the same: the condition is just that f is eventually non-negative and its integral is non-positive, and the thing you want to minimize is the infimum of the radius of such a function. Here's an example, a sum of two Gaussians minus another Gaussian: this is an eigenfunction with eigenvalue one, with radius somewhere between 0.6 and 0.8. Now let's take a look at the proof of this result of Bourgain, Clozel, and Kahane. The proof is actually quite simple. Take a function that is an eigenfunction and set r to be its last sign change, the radius of f. I'm going to denote by f_+ the positive part of the function and by f_- the negative part, so the integral of f is just the integral of the positive part minus that of the negative part, and we are assuming this is non-positive, so the negative part somehow wins: its integral is bigger. Then here is the chain of inequalities that needs to be done. The first inequality is the Hausdorff-Young inequality: the L-infinity norm of the Fourier transform, which is the same as that of f in this case, loses to the L^1 norm of f. The L^1 norm of f is just the sum of the integrals of the positive part and the negative part, but this loses to two times the negative part because of the competing condition. And the negative part, as we set up the problem, lives in the ball of radius r, so by a basic application of Hölder's inequality it is at most two times the L-infinity norm of f times the volume of B_r.
And you end up with the conclusion that the volume of the ball B_r has to be at least 1/2, after you divide by the L-infinity norm of f. This is how you get the lower bound for r: by Stirling's approximation, r cannot be very small. There was further work on this in 2017 by Felipe Gonçalves, Diogo Oliveira e Silva, and Stefan Steinerberger, who produced refined estimates in all dimensions and also studied and proved the existence of extremizers: for these principles, they proved that in each dimension there is a function that does the best job possible. Okay, let me move now to a related, or dual, sign uncertainty principle that was introduced by Henry Cohn and Felipe Gonçalves in 2019. Here the idea is to allow another eigenvalue, minus one. Everything we did so far was for s equal to plus one; now I'm going to allow s to be minus one. I will start with the following family, which I'm going to baptize A_s(d), where s can be plus one or minus one, your favorite sign. These are the functions which are integrable and continuous, even and real-valued, such that f hat is also integrable, and now the last condition is that f and s times f hat are eventually non-negative. When s is one, it's the original problem; when s is minus one, you want f to be eventually non-negative and f hat to be eventually non-positive, if you want to think of it that way. And these are the competing conditions: since f is eventually non-negative, you require its integral to be less than or equal to zero, and since s f hat is eventually non-negative, you require its integral to be less than or equal to zero. So you have perfect competition here. And you define, the same way, this constant A_s(d) as the infimum of the square root of the product of the radii of the last sign changes. You can play the same trick as before and reduce this problem to eigenfunctions: you may assume that f hat is equal to s times f.
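The end of the Bourgain-Clozel-Kahane argument can be made quantitative (a sketch I am adding): solving vol(B_r) = 1/2 for r in dimension d shows the lower bound grows like sqrt(d), with r(d)/sqrt(d) tending to 1/sqrt(2 pi e) by Stirling.

```python
import math

def r_lower_bound(d):
    """Smallest r with vol(B_r) = 1/2 in R^d: pi^{d/2} r^d / Gamma(d/2 + 1) = 1/2."""
    log_r = (math.lgamma(d / 2 + 1) - math.log(2)) / d - 0.5 * math.log(math.pi)
    return math.exp(log_r)

# The bound grows like sqrt(d): r(d)/sqrt(d) -> 1/sqrt(2*pi*e) ~ 0.242,
# matching the order-sqrt(d) estimates quoted in the talk.
for d in (1, 12, 100, 10000):
    print(d, r_lower_bound(d), r_lower_bound(d) / math.sqrt(d))
```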
So in the case of minus one, you can think about minus-one eigenfunctions. And Gonçalves and Cohn actually proved, for the minus one case, the same sort of bounds that Bourgain, Clozel, and Kahane had obtained for the plus one: it's bounded above and below by the square root of d times universal constants. Here's an example where f hat is equal to minus f, so you really have an eigenfunction here that does the job; you see that the last sign change is actually a little bigger than one. My next example has two functions; this is not even an eigenfunction. You have f as sin^2(pi x)/(x^2 - 1) and f hat as its Fourier transform. It's a band-limited function: the Fourier transform is compactly supported. And you see that the last sign change here is one, and here it's one, too, so the product is one. This is actually an extremal example: in dimension one, you cannot do better than one, as we will see in a minute. So let me talk a little bit about sharp constants. You see, I showed you a problem, we defined a class, and I showed you the proof; the proof is not particularly complicated. But the most beautiful part of these problems is that one can find sharp constants and extremizers in very, very few cases. You can do numerical computations to try to estimate what your constant is, but in very few cases have people actually found the exact values of the sharp constants. For those of you who are new to this: these problems of finding the sharp form of a functional inequality in analysis are very fundamental, beautiful, and highly non-trivial problems, historically speaking. To me, they are almost among the royalty of problems in analysis that one can think about. In this particular case, these constants were found on just four occasions. This paper of Cohn and Gonçalves is a paper in Inventiones in 2019; it's a very nice paper.
They actually found the constant for the original problem of Bourgain, Clozel, and Kahane in dimension 12: they proved that A_{+1} in dimension 12 is the square root of two. And they also recognized that the new dual problem they proposed, with the eigenvalue minus one, was connected to the study done on the sphere packing problem; we'll see in a bit that there is a Fourier analysis problem there too. As a corollary of the things done for sphere packing, namely the paper by Cohn and Elkies in 2003, the paper by Viazovska for dimension 8 in 2017, and the paper by Cohn, Kumar, Miller, Radchenko, and Viazovska in dimension 24, you get the sharp constant in dimensions 1, 8, and 24 for the minus-one eigenvalue problem. So these are four examples of some of the best papers in analysis and number theory over the last decades: three papers in the Annals and a paper in Inventiones, for example. These constants are very hard to find. It is conjectured that in dimension two, the value of this constant in the minus-one eigenvalue case should be the number (4/3)^{1/4}. And the actual original problem of Bourgain, Clozel, and Kahane on the real line turns out to be very difficult; by now we have a conjecture for the value of the sharp constant. This is a very nice conjecture in a recent paper by Felipe Gonçalves, Diogo Oliveira e Silva, and João Pedro Ramos from 2020, where they also extend these plus and minus one sign uncertainty principles to a general operator framework, with very interesting applications to spherical harmonics, Hankel transforms, and others. Let me give you just an idea, or just browse through it, for those of you who like a little bit of number theory, so you can appreciate the difficulty behind finding these sharp constants.
So I'm going to take this example from the paper of Henry Cohn and Felipe Gonçalves showing that the constant for the eigenvalue plus one in dimension 12 is the square root of two. When you want to show that this constant is the square root of two, you need a tool to prove a lower bound, a tool to prove extremality, and then a tool to construct a function. The extremality in this case comes from a very nice Poisson summation formula. You start with z in the upper half-plane and this Eisenstein series that we call E_6; it's just this series here. The nice thing about this series is that it has a Fourier expansion, one minus a sum with non-negative coefficients c_k, which turn out to be multiples of sigma_5, the sum of the fifth powers of the divisors. This turns out to be a nice modular form, and this is the functional equation that it satisfies. And once you have a functional equation like this, you have a Poisson summation formula: you can work out what it means for a Gaussian, and then extend to the whole class of radial Schwartz functions. So this leads to a very funny Poisson summation formula for radial Schwartz functions in R^12. So you see, for every radial Schwartz function in R^12, this formula in blue holds. It's a bit strange and different from other Poisson summation formulas that you might know, in the sense that you have negative coefficients here. But this is exactly what we need, because if f is equal to f hat, you move this term to the left and this term to the right, and then you have f and f hat at zero on the left, a quantity that is supposed to be non-positive, and on the right the sum of the values of f at the square roots of 2k. So if you are past this square root of two, which is the magic number, every term will be non-negative. And here's the competition: you have a non-positive number equal to a non-negative number, so essentially both sides must be zero.
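The ingredients here are concrete enough to play with (a toy check I am adding, assuming the standard expansion E_6 = 1 - 504 sum sigma_5(k) q^k with q = e^{2 pi i z}): the functional equation E_6(-1/z) = z^6 E_6(z) forces E_6(i) = -E_6(i), so E_6 must vanish at z = i, and the truncated q-expansion confirms it.

```python
import cmath

def sigma5(k):
    """Sum of the fifth powers of the divisors of k."""
    return sum(d**5 for d in range(1, k + 1) if k % d == 0)

def E6(z, terms=60):
    """Truncated q-expansion of the weight-6 Eisenstein series."""
    q = cmath.exp(2j * cmath.pi * z)
    return 1 - 504 * sum(sigma5(k) * q**k for k in range(1, terms + 1))

# E6(-1/z) = z^6 E6(z) at z = i gives E6(i) = i^6 E6(i) = -E6(i), so E6(i) = 0.
val = abs(E6(1j))
print(val)  # ~ 0, up to truncation and rounding
```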
There is a little bit of technical work here, but this is essentially how you prove that this number has to be greater than or equal to the square root of two. So you found the right lower bound, and from this Poisson summation formula you actually get a hint on how to construct the extremal function: it should vanish at these points square root of 2k, so there are a lot of prescribed zeros. And this whole machinery developed by Viazovska for the sphere packing problem, and by her collaborators, tells us how to construct certain functions with prescribed data for f and f hat at interpolation nodes like square root of n or square root of 2n. Just to give you an idea, I'm going to tell you what the extremal function is in this case. Take z again in the upper half-plane. You have the three classical theta series here: theta_00 is just this one, theta_01 is this one with the alternating factor, and theta_10 is this one. And you have this 24th power of the Dedekind eta function, which we call the discriminant function, Delta(z). So consider these four functions, and then let psi be theta_00 to the fourth times theta_10 to the fourth times theta_01 to the 12th, divided by Delta, and put your f to be this function. Magically, this function will be your desired function: it has the zeros where it has to, it is non-negative where it has to be, and non-positive where it has to be. It looks magical, it looks like witchcraft, and it is a little bit, but there is reasoning behind it. Once you work with modular forms in this case, and put the eigenvalue conditions that need to be satisfied into play, what you find is that a certain space of modular forms of a certain weight is finitely generated, in this case by five elements.
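These three theta series are tightly constrained objects; for instance (a side check I am adding, not part of the talk's argument), they satisfy Jacobi's classical identity theta_00^4 = theta_01^4 + theta_10^4, which one can verify numerically on the imaginary axis z = i t:

```python
import math

def theta00(t, N=30):  # z = i t, nome q = e^{-pi t}
    q = math.exp(-math.pi * t)
    return 1 + 2 * sum(q**(n * n) for n in range(1, N))

def theta01(t, N=30):
    q = math.exp(-math.pi * t)
    return 1 + 2 * sum((-1)**n * q**(n * n) for n in range(1, N))

def theta10(t, N=30):
    q = math.exp(-math.pi * t)
    return 2 * sum(q**((n + 0.5)**2) for n in range(0, N))

t = 1.3  # an arbitrary point on the imaginary axis
lhs = theta00(t)**4
rhs = theta01(t)**4 + theta10(t)**4
print(lhs, rhs)  # Jacobi's identity: theta00^4 = theta01^4 + theta10^4
```

Relations like this one are what make the five-dimensional space of candidate modular forms so rigid.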
And when you put in the conditions of the eigenfunction that you need, and some decay conditions at the cusps, you essentially solve a linear system and find that this would be the only function in this five-dimensional vector space that could do the job. Then you plug it in and prove that it actually does the job. Okay? So when you see it for the first time, it's a bit magical; when you start to read and understand, there is a lot of insight in what goes on here. But bottom line, these have been very hard problems, and the sphere packing problems are even harder ones. So here is a little bit of the story on sphere packings: the problem of packing spheres in space to achieve the highest possible density. It is essentially obvious in dimension one, and it has been solved in dimensions two and three. In the plane, the optimum is when the circles align their centers in a hexagonal arrangement, and in space it's the picture you see when you go to buy oranges in the supermarket. The density in the plane is about 90% of the space, and in space it's about 74%; this is the density I'm calling delta_3. And the connection to these sign uncertainty problems comes from a problem proposed by Cohn and Elkies, and independently by Dmitriy Gorbachev, in the early 2000s. Let's call it the linear programming extremal problem for sphere packing, because they found this way to give bounds for sphere packing via a problem in Fourier analysis. So their problem is this one here. Consider a continuous function G, not identically zero, such that G and G hat are even, real-valued, and integrable, with G(0) equal to G hat(0), and this value positive. Now it's a bit different from our original problem: G hat here is non-negative everywhere, so G is a positive definite function, and G is non-positive outside of a radius R.
So whenever you can construct a function G with these properties for some radius R, the density of sphere packings in dimension D is bounded by this number. The idea, of course, to get better and better upper bounds is to find this R as small as possible. And this is exactly what they did. They realized that they could construct functions like this by taking, you know, Hermite bases or Laguerre bases of polynomials times Gaussians. You can put in the first 100 coefficients and try to optimize. And in this paper by Cohn and Elkies they were getting functions that gave a very, very good upper bound, very close to the density of the best known packings in dimensions eight and 24. In dimension eight, you have the E8 root lattice; in dimension 24, you have the lattice called the Leech lattice. So they were getting very close. And they conjectured that in dimensions eight and 24 one could find a function G matching exactly the densities of these lattices. And this is what Viazovska did in dimension eight, and Cohn, Kumar, Miller, Radchenko, and Viazovska did in dimension 24. This problem is even a bit more complicated than the sign uncertainty one, in the sense that whenever you solve this one, note that if you define F to be G hat minus G, then this becomes a function in our class A minus one of D: it's a minus-one eigenfunction and it is eventually non-negative. Okay. So the solutions come exactly from these two papers. Okay, I hope I have not been overly technical so far. So in the last 10, 15 minutes here, I want to talk a little bit about the different point of view from which we are looking at this, which I'm calling generalized sign Fourier uncertainty principles. Okay. So we have discussed the problems related to eigenvalues one and minus one.
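To make the kind of function entering this criterion concrete, here is a small numerical sanity check in dimension one. The function g below is my own illustrative choice (not one from the talk or the papers); its Fourier transform can be computed in closed form, and the code verifies both the sign conditions and the closed form numerically:

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoid rule (avoiding version-specific numpy helpers)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# Toy d = 1 candidate for the Cohn-Elkies-type criterion (my own example):
#   g(x) = (1 - x^2) * exp(-pi x^2)
# With the normalization ghat(t) = \int g(x) e^{-2 pi i x t} dx one gets
#   ghat(t) = (1 - 1/(2 pi) + t^2) * exp(-pi t^2) >= 0 everywhere,
# while g(x) <= 0 for |x| >= 1, so the admissible radius is r = 1.

def g(x):
    return (1.0 - x**2) * np.exp(-np.pi * x**2)

def ghat_closed(t):
    return (1.0 - 1.0 / (2 * np.pi) + t**2) * np.exp(-np.pi * t**2)

def ghat_numeric(t, L=10.0, n=100001):
    # g is even, so the transform reduces to a cosine integral
    x = np.linspace(-L, L, n)
    return trapezoid(g(x) * np.cos(2 * np.pi * x * t), x)

# sign conditions of the criterion
assert np.all(ghat_closed(np.linspace(0.0, 5.0, 501)) > 0)
assert np.all(g(np.linspace(1.0, 10.0, 500)) <= 0)
# numeric transform matches the closed form
assert abs(ghat_numeric(0.7) - ghat_closed(0.7)) < 1e-6
print("all checks passed")
```

This toy g gives only a trivial bound in dimension one, of course; the point is just to see the shape of the conditions (positive transform, eventual negativity of g) in a computable example.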
So when we started to think about this, Oscar and I, what I'm going to talk about in these last minutes is based on this joint work with Oscar Quesada-Herrera. We started discussing this last year, maybe in August or September, and things got a little bit slow because now there are kids in the picture. But our original idea was: how can we find suitable problems for the other eigenvalues? There should somehow be sign uncertainty for the other eigenvalues, and it should somehow be related to odd functions instead of even functions. And then our goal moved, or evolved, to a situation where we wanted to investigate what happens when the signs of f and f hat resonate at infinity with a given generic function p. So I'm going to give you a generic function p, and I want the signs of f and f hat at infinity to be the sign of that function p. And I'm going to put in a competing weighted integral condition so that you have a nice problem. Everything that happened before will now just be the case p equal to one. And I am going to make an adjustment to the definition of eventually non-negative functions: I'm going to allow functions that are merely measurable. You take a measurable function g, and I am not going to identify functions that are equal almost everywhere anymore. I'm going to say that a measurable function is eventually non-negative if it is non-negative for all sufficiently large x, at every point, not just almost every point. Okay. And I'm going to define the radius of g as we defined before: the infimum of the radii r such that g is greater than or equal to zero from that radius on, at every point, not almost every point. Okay. So now let me tell you a little bit about this function p, and the very minimum that we can ask of this function p.
So p is going to be a measurable function which is locally integrable, and it's going to be even or odd. I want to take even functions to talk about some eigenvalues and odd functions to talk about the other eigenvalues. So this little tau here will be zero or one according to whether the function is even or odd. These conditions p1 and p2 are the minimal ones, and I will assume them throughout the rest of the slides. Let me assume two other conditions for the moment, right? So I'm going to assume this condition p3, that p is annihilating in the following sense: if you have an integrable function f which is a continuous eigenfunction of the Fourier transform, and p times f is eventually zero, then the function is zero. Okay. You have to digest this definition a bit, but it happens, for example, if the set where p is different from zero is dense. If the set where p is different from zero is dense, then p times f eventually zero essentially forces f to be eventually zero, so f would have compact support, and you cannot have a nonzero eigenfunction with compact support. And the condition p4 is that p is homogeneous. So suppose p is homogeneous; there is a degree of homogeneity that I'm calling gamma, and gamma has to be bigger than minus d because the function has to be locally integrable, in particular integrable at the origin. Okay. So let's start with these four conditions on p. With these four conditions, I can pose the problem. The problem is this. Consider the following family of functions. Now there is a dependence on this sign function p; p will be at the same time the weight and the sign. So I'm talking about functions which are L1, continuous, real-valued, and even or odd according to p being even or odd. Now I'm going to ask that f hat, pf, and pf hat are all integrable. I'm going to ask these integrability conditions.
And I want pf to be eventually non-negative, and this pf hat times s times minus i to the tau to be eventually non-negative. So these functions are eventually non-negative, and the competing condition is, of course, that their integrals are non-positive. And you can play the same game: define this minimal constant A(s; p, d) to be the infimum, over this class, of the radius of this guy times the radius of this guy. So it's a little bit to digest. In principle it's not clear that this class is non-empty; it is already a nontrivial problem to verify that the class is non-empty. But let's forget that for a minute and try to move on. So the first thing that I want to do is play the same trick as before to reduce to an eigenfunction problem. You can do scalings. When you do scalings, since your function p is homogeneous, the sign of the function doesn't change with scaling, and the integrability doesn't change. So this uses the homogeneity condition: you can align the radii, and assume without loss of generality that the radius of pf and the radius of this other guy are the same. And then you sum the functions as you did before. You consider w to be the sum, and this will be non-zero because of that annihilating condition p3. And therefore you will be producing a function that does a better job: the radius of p times this function is less than or equal to the radii of these two guys. So when you use these things, you reduce your problem to an eigenfunction problem. I'm going to put a star here to denote the eigenfunction problem. And this is it now. You have a sign s, s is one or minus one. You have a function p, which is both the weight and the sign. And you have a dimension d. I want functions which are integrable, continuous, real-valued, such that f hat is given by this. So you see here all four possible eigenvalues appearing: f hat is s times i to the tau times f.
I want pf to be integrable as well. And I want pf to be eventually non-negative, but its integral to be non-positive. And the question is to find the minimum radius such that that phenomenon can happen. The previous problem was just the case when p is identically equal to one. So when you arrive at the eigenfunction problem, this eigenfunction problem makes sense even if you don't consider those conditions p3 and p4. You may forget those conditions and pose it in full generality: the function p is just locally integrable, and even or odd. And let me make an observation here about two functions that are different but equal almost everywhere. Take, for example, a function p1 which is one everywhere, and take a function p2 which is one for all x, let's say in dimension one, except on a sequence of points going to infinity, where you change the value to minus one. Now any function f that belongs to this class for this function p2 will have to have the product f times p2 eventually non-negative, so at these points f will actually have to be zero. So this function will have to have zeros at these points where you changed the sign. So even if two functions are equal almost everywhere, they do not generate the same problem. Even if your function p is zero almost everywhere, you might be talking about a non-trivial problem here. So this is vastly uncharted territory. So this is the problem. Let me give you some examples of what we did. Let me recall the problem once more. You have an eigenvalue problem. You have a function p of your choice, just locally integrable and even or odd, a dimension d, and a sign s. You want a function f which is integrable, continuous, and which is an eigenfunction in the appropriate sense. You want pf to be integrable, pf eventually non-negative, and its integral non-positive.
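As I understand the recap above, the eigenfunction problem can be summarized as follows. This is my own transcription of the spoken statement; the precise class is defined in the Carneiro–Quesada-Herrera paper:

```latex
\text{Data: a sign } s\in\{+1,-1\},\ \text{a weight } p\in L^{1}_{\mathrm{loc}}(\mathbb{R}^{d})
\ \text{even or odd } (\tau\in\{0,1\}),\ \text{a dimension } d.\\[2pt]
\text{Class: } f\in L^{1}(\mathbb{R}^{d})\cap C(\mathbb{R}^{d})\ \text{real-valued},\quad
\widehat{f}=s\, i^{\tau} f,\quad pf\in L^{1}(\mathbb{R}^{d}),\\[2pt]
pf\ \text{eventually non-negative (pointwise)},\qquad
\int_{\mathbb{R}^{d}} p\,f\le 0.\\[2pt]
\text{Problem: minimize the radius } r(pf),\ \text{the last sign change of } pf.
```

The case p identically one recovers the classical sign uncertainty problem discussed earlier in the talk.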
And you define, you want to find, the minimum radius. For example, you can take this function of d variables, p of x1, x2, ..., xd, to be just x1. You can just integrate against the first coordinate x1. That's an allowed function. And we will show, for instance, that the solution of this problem with the plus one sign in dimension 22 is 2. Here's another example. Take a function p of four variables given by this crazy thing here; it's a polynomial of degree 4 in its variables. This is also a function that you can plug in here. And we will show, for instance, that the solution of this problem with the sign s equal to plus one in dimension 4 is square root of 2. Okay. I am a little bit past time, but I think I got your attention a little bit with this. So I'm going to allow myself, if Edgar allows me, to go five more minutes to show a bit of what we did here. Can I, Edgar? Are you still there? Okay, I'm assuming you are. Well, I will answer randomly: yes, you can. Okay. Good. So our paper on this topic has two parts. In the first part we follow the classical path to obtain the sign uncertainty principle. The classical path meaning that we try to put the original methods of Bourgain, Clozel, and Kahane into a very minimal and generic framework to make things work. Okay. So this includes two sorts of results. First, there are results that guarantee that the class of functions that I'm talking about is non-empty. Okay. There is another type of result that guarantees that my function p is admissible, admissible in the sense that I have some sort of inequality that replaces the Hausdorff-Young inequality: an inequality to leverage this L1 norm of pf. These two ingredients, when combined, will yield the sign uncertainty principle. So this is an example of a non-empty class. Assume that your function p, and remember, everything that I talk about from now on, I'm always assuming that p is locally integrable and that it's even or odd.
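The weight p(x) = x1 pairs naturally with odd eigenfunctions of the Fourier transform. Here is a quick numerical sanity check, my own and not from the talk, of the standard fact behind this pairing: with the normalization used above for the transform, the odd Gaussian x times e^{-pi x^2} is an eigenfunction with eigenvalue minus i:

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoid rule (avoiding version-specific numpy helpers)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# Standard fact: with fhat(t) = \int f(x) e^{-2 pi i x t} dx, the odd
# Gaussian f(x) = x * exp(-pi x^2) satisfies
#   fhat(t) = -i * t * exp(-pi t^2),
# i.e. it is a Fourier eigenfunction with eigenvalue -i.

def f(x):
    return x * np.exp(-np.pi * x**2)

def fhat_over_minus_i(t, L=10.0, n=100001):
    # f is real and odd, so fhat(t) = -i * \int f(x) sin(2 pi x t) dx;
    # this returns the real quantity \int f(x) sin(2 pi x t) dx
    x = np.linspace(-L, L, n)
    return trapezoid(f(x) * np.sin(2 * np.pi * x * t), x)

for t in (0.5, 1.0, 2.0):
    assert abs(fhat_over_minus_i(t) - t * np.exp(-np.pi * t**2)) < 1e-6
print("eigenfunction check passed")
```

This is why, in the generalized problem, odd weights p address the eigenvalues plus and minus i, while even weights address plus and minus one.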
These are the conditions p1 and p2 from before. Now assume that you can integrate your function p against any Gaussian: when you multiply by a Gaussian, the result is integrable. Now assume that p is a harmonic, homogeneous polynomial of degree L times a function q which is eventually non-negative. So you can take any homogeneous harmonic polynomial of degree L, and any function q of your choice which is just eventually non-negative. Then the class is non-empty. So this gives a lot of examples where the class is non-empty, in particular the examples that I gave you. Now, admissibility is what comes in here as a replacement for the Hausdorff-Young inequality that one had before. And it is essentially a condition like this, maybe I can highlight it here: I will say that my function p is admissible if, for any eigenfunction of the Fourier transform, the L1 norm of pf controls some Lq norm of f. There are some results like this in the literature, where you have the L2 norm of pf controlling the L2 norm of f with p a quadratic polynomial, but these are kind of rare; such inequalities are a bit tricky to find. So I'm going to call this the admissibility condition. And another one of our results is a sufficient condition for admissibility. If your function p has one of its sub-level sets with finite Lebesgue measure, that is, suppose there is a lambda such that the set of points where p of x is less than or equal to lambda has finite Lebesgue measure, then this function p is admissible in this sense. But this is a sufficient condition for admissibility, not a necessary one. There are functions which don't have sub-level sets of finite measure but still satisfy an inequality like this. And one example is that function p of x equal to x1, just the first variable. When you put here p of x equal to x1, this inequality holds with q equal to 2.
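In display form, my paraphrase of the two ingredients just described reads as follows; the exact hypotheses (in particular the Gaussian integrability and the eigenfunction class) are in the paper:

```latex
\textbf{Non-emptiness: } p=H\cdot q,\ \ H\ \text{harmonic and homogeneous of degree } \ell,\ \
q\ \text{eventually non-negative},\\
p\cdot(\text{Gaussian})\in L^{1}(\mathbb{R}^{d})
\ \Longrightarrow\ \text{the class for } p\ \text{is non-empty}.\\[4pt]
\textbf{Admissibility: } \exists\, q\in(0,\infty],\ C>0:\quad
\|f\|_{L^{q}(\mathbb{R}^{d})}\le C\,\|pf\|_{L^{1}(\mathbb{R}^{d})}
\quad\text{for all eigenfunctions } f.\\[4pt]
\textbf{Sufficient condition: } \bigl|\{x\in\mathbb{R}^{d}: p(x)\le\lambda\}\bigr|<\infty
\ \text{for some } \lambda>0\ \Longrightarrow\ p\ \text{admissible}.
```

Combining a non-empty class with an admissible weight is what yields the sign uncertainty principle in the first theorem below.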
So morally speaking, the first theorem is just: if you have a class that is non-empty, with a function p that is admissible, then the sign uncertainty principle holds. There is a constant c star such that this is bigger than c star; c star depends just on the function p, on the dimension d, and on the exponent of admissibility q. And this is, as I told you, pushing the previous techniques to a limit, or near a limit. But even with this you can generate a lot of examples, you know. There are a lot of examples where the class is non-empty, and anything that has bounded sub-level sets, or sub-level sets of finite measure, is also admissible, so you are in good shape. The final result, and this is probably where I'm going to stop, is actually different; this is the second part of our paper. It's a different mechanism to obtain the sign uncertainty, a different method from the one conceived by Bourgain, Clozel, and Kahane in the beginning. And we call it dimension shifts. So there is a way to relate the uncertainty principle in two different dimensions, with two different functions. And this is very powerful in some cases. I'm essentially going to stop my talk here, reading this statement with you. The statement is the following. Take L an integer greater than or equal to 0, and tau of L, which is 0 or 1, is just the congruence class of L mod 2. Start with a function P on R to the D plus 2L that is radial; write P as P0 of the radius. Now consider a function P tilde on RD which is of this form: P tilde is the radial part P0 times a harmonic polynomial H of degree L and a function Q which is even, non-negative, and homogeneous of degree 0. So the first part of the theorem reads the following: if the original class for P is non-empty in dimension D plus 2L, then a certain class, with another sign here, for P tilde in dimension D is also non-empty, and you have this inequality here.
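Schematically, the dimension shift statement just read can be sketched as below. This is my own reconstruction from the spoken description; in particular, the talk only says the shifted problem carries "another sign," which I denote s' here, and the precise relation between s and s' should be taken from the paper:

```latex
P\ \text{radial on } \mathbb{R}^{d+2\ell},\quad P(x)=P_{0}(|x|);\qquad
\widetilde{P}(x)=P_{0}(|x|)\,H(x)\,Q(x)\ \text{on } \mathbb{R}^{d},\\[2pt]
H\ \text{harmonic, homogeneous of degree } \ell,\qquad
Q\ \text{even, non-negative, homogeneous of degree } 0,\\[2pt]
\text{class for } P\ \text{non-empty in } \mathbb{R}^{d+2\ell}
\ \Longrightarrow\
\text{class for } \widetilde{P}\ \text{(with sign } s'\text{) non-empty in } \mathbb{R}^{d},\\[2pt]
\mathbb{A}(s;\,P,\,d+2\ell)\ \ge\ \mathbb{A}(s';\,\widetilde{P},\,d).
```

So a lower bound for the shifted problem in dimension d transfers to a lower bound in dimension d plus 2 times ell, and under the extra conditions stated next, the inequality becomes an equality.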
The thing that you started with is greater than or equal to this thing. In particular, if this thing here is bigger than 0, if you can prove this, then you're in good shape for that one. So this is particularly useful when you can use this construction to fall back into the previous situation. And this is what's most surprising, these last two lines: in some cases you can come back, you can do reverse engineering in this formula. If P has a bounded sub-level set, bounded meaning that it is contained in a ball, not just that it has finite measure, if Q is equal to 1, and if H is this polynomial x1 x2 ... xl, or is in the orbit of this polynomial under the action of the orthogonal group, then the equality here above holds. You can come back. So there are two corollaries of this result that I want to mention to finish my talk. First, suppose that you start with a function P in a high dimension, let's say dimension 100, and the function P has a singularity. So this is useful when P has a singularity. Let's take P to be the radial function |x| to the minus 20. If you have a function with a singularity |x| to the minus 20 in dimension 100, it's very hard, and not even possible, to find those admissibility inequalities, because you have a decreasing weight with a negative power. So what you do here is you take this construction: you take this H to be just a harmonic polynomial in two variables of degree l, and you take this Q to be this function here, the modulus |x| to the l, of the same degree, times the sign of H over H. When you multiply, your H will cancel, and you'll just have |x| to the l to add to the singularity of P. So if you start with a singularity of order gamma, you get gamma plus l. You lower the dimension, that's the price you pay, but you get a better singularity.
In particular, if the l that you're allowed to take is at least 20, so if you are in dimension 100 and you start with a singularity of order minus 20, you just take l to be 20, and then 20 plus minus 20 gives zero. You are in a position to use the theorem from before, but now in dimension 60. So you're good: you find an uncertainty principle. This is how you shift the dimensions to get the uncertainty. And of course, the equality case, when the polynomial is harmonic and in the orbit of this guy, is a very surprising result, because then you can connect those four cases where we had the solutions, the solutions coming from sphere packing and the new solution coming from the paper of Cohn and Gonçalves, and those four solutions generate 14 other sharp constants. And these are all the sharp constants that we know for this problem. So here R is just a rotation, an element of the orthogonal group; you can apply it to this polynomial x1 x2 ... xl, and you can take dimension 8 minus 2l. So the constant A with this sign, with this polynomial, in this dimension, is going to be square root of 2. This comes from one of the results before. So you have here three cases plus five plus nine: you have 17 cases, but three of them are the dimensions eight and 24 of sphere packing and the dimension 12 of Cohn and Gonçalves. And then you lower the dimensions and you get some polynomials here. So the crazy example that I showed you a few slides ago, of a big polynomial in dimension four, was just this case here, where you take l equal to four. If you have l equal to four, you are in dimension 12 minus 8, which is 4, and you have the polynomial x1 x2 x3 x4; I just applied a random rotation to that guy to turn it into that very big polynomial that I showed before, and the answer is square root of 2.
Moreover, we have a mechanism to transfer the extremizing functions that we had before into new extremizing functions in these cases. There's a way to write one explicitly from the other. Okay, I guess this is time for me to stop here. I hope this was not overly technical; I know it was a little bit, but my talk is a bit of an uncertainty principle itself: I have too many things to tell you and too many details to tell you, and I can only do one or the other nicely. Thank you very much for your attention, everybody. Okay, so thanks a lot, Emanuel. I think it was a great talk, really. So if anyone has any questions, I invite you all to please raise your hand or write in the chat, and I will call on you and open your mics in order. So, let's see. If anyone has a question, just write, please. Let me just say one thing, not really a question: Emanuel, any time you want to ask me for four more minutes, or 40 more minutes, for you to talk about mathematics, don't even ask. Thank you. Thank you very much for this very nice talk. Thank you.