This is the third lecture, and the aim of the last two lectures is to give you an outline of a proof of quantitative unique continuation, a kind of three-sphere theorem or three-ball theorem, in the situation when the small set is not a ball but a measurable set. It's not a new proof of the three-sphere theorem; we will use heavily that we already know the three-sphere theorem in its classical form, and we'll use the monotonicity of the doubling index, but now we want to replace the small ball by a measurable set. I will start with the Remez inequality. Some of you already heard it this morning. It's not necessary for the proof, but it gives you a nice flavor of what we're going to talk about. For now I'm talking about polynomials on the real line. I have an interval I and E, a measurable subset of this interval. Then we claim that for any polynomial of degree n, you can estimate the norm of the polynomial on the interval if you know the norm on the set E. I'll write a simplified version here that gives you a nice estimate, but to prove it in this form is not so easy. The truth is that Chebyshev polynomials are extremal here. So you have to look at the Chebyshev polynomial on, say, [-1, 1], see how fast it grows away from this interval, and it will give you the right estimate. If you are following the problem sessions, you will be guided through the proof of this estimate in one of the problems. It's a very elementary thing to prove. So this is the Remez inequality for polynomials of degree n. Let me show now how to use it to do propagation of smallness, or quantitative unique continuation, for harmonic functions. The key point is that if you have a harmonic function, you know how to approximate it by polynomials; it's real analytic.
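To make the Chebyshev extremality concrete, here is a small numerical check (a sketch, not part of the lecture; the degree n = 8 and the gap s = 0.5 are arbitrary choices, and the quantity T_n((2+s)/(2-s)) is the classical sharp constant in Remez's inequality when the exceptional set has measure s):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Extremal example for the Remez inequality: on I = [-1, 1], remove a piece
# of measure s so that E = [-1, 1 - s].  The rescaled Chebyshev polynomial
# p(x) = T_n((2x + s)/(2 - s)) satisfies |p| <= 1 on E, and its maximum over
# all of I equals T_n((2 + s)/(2 - s)), the worst possible growth at degree n.
n, s = 8, 0.5
Tn = Chebyshev.basis(n)

def p(x):
    return Tn((2 * x + s) / (2 - s))

xs = np.linspace(-1.0, 1.0 - s, 2001)
assert np.max(np.abs(p(xs))) <= 1 + 1e-9   # p is bounded by 1 on the set E

growth = abs(p(1.0))                 # maximum of p over the whole interval I
bound = Tn((2 + s) / (2 - s))        # the Chebyshev/Remez bound
print(growth, bound)                 # the two numbers coincide for this p
```

With these parameters the argument of T_8 is 5/3 and arccosh(5/3) = ln 3, so the bound comes out to (3^8 + 3^-8)/2, roughly 3280.5: already for a modest degree the polynomial can be huge on the interval while staying below 1 on most of it.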
So what I'll write down is that there is an absolute constant and a function q on (0, r0] such that for any harmonic function in the unit ball, if you approximate this function by a polynomial on a small ball of radius r, you can choose a polynomial p_n of degree n such that the maximum of h minus p_n over rB is bounded by a constant times q(r) to the power n times the maximum of the function over the unit ball. You can just write down the Taylor expansion, estimate the derivatives, take p_n to be the Taylor polynomial, and see what you'll get. Another way to say it: if you have a harmonic function in the unit ball, bounded by some constant, you know that you can extend it to a holomorphic function. If my ball was the unit ball in R^d, I can extend h to some domain in C^d: if this was my ball in R^d, I have a nice domain in C^d where I can extend my function with control. And then the classical Bernstein-Walsh theorem tells you that this means you can approximate by polynomials with this kind of estimate. But for harmonic functions, just use the series over spherical harmonics and the usual estimates of the derivatives. If we know that, plus the Remez inequality, we can do quantitative unique continuation from sets of positive measure for harmonic functions. All I need about q is that q(r) goes to 0 when r goes to 0, and I choose my r0 so that q(r0) is less than 1. It's a decaying function, thank you very much. So this is a nice small parameter that goes to 0 when r goes to 0. Now suppose I have a harmonic function and I control it on some set of positive measure. I want to extend this estimate from the set of positive measure to a small ball. I'll assume that my set E is a subset of r0*B, or let me take some r here. Then I claim that there is an estimate of this form, like in the three-sphere theorem that we have seen many times now. Gamma is a parameter in (0, 1) depending on r and the measure of E. So gamma and C depend on r and on the measure of the set, but not on the geometry of the set.
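If it helps, the two board estimates can plausibly be written as follows (my reconstruction from the spoken description; C, q, and gamma are as in the text):

```latex
\max_{rB} |h - p_n| \;\le\; C\, q(r)^n \max_{B} |h|,
\qquad
\max_{rB} |h| \;\le\; C \Bigl(\max_{E} |h|\Bigr)^{\gamma} \Bigl(\max_{B} |h|\Bigr)^{1-\gamma},
\quad \gamma = \gamma(r, |E|) \in (0,1).
```

The first is the polynomial approximation estimate, the second the three-sphere-type propagation of smallness from the set E of positive measure.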
It's a simple exercise to combine those two to get this one. The only thing I should mention is that you can easily generalize the Remez inequality to higher dimensions. You just take slices: look at your set, look at the polynomial. If you have a subset in higher dimension, there are many slices where you know that the measure of the intersection is large enough; you can extend the estimate from the slice to the interval and then do the opposite thing. So there is a simple way to generalize the Remez inequality. You'll get a new constant here, and d will appear in this exponent because you have to iterate it several times. Combining those two, you get this kind of estimate. Before I go further, I want to rewrite this in a nicer way. So I have the maximum over the ball, the maximum over a smaller ball, and the maximum over the set E. I want to rewrite the last estimate so it looks more like the original Remez inequality, and the claim is that this estimate follows from the following one: the maximum over rB of the function h is bounded by the maximum over the set, and here we'll have a constant, and the constant over the measure. I'm trying to write something that looks like the Remez inequality, and what should I put now here instead of the degree of the polynomial? It's exactly the frequency, or doubling index. Roughly speaking, the ratio that I have here you can rewrite in terms of the doubling index, the ratio between those two. If you do it carefully, you see that this estimate implies that one. And this is what we call the Remez inequality for harmonic functions. Sorry? Yes, E is contained in rB and h is harmonic in the unit ball; we think of r as small. So it's another demonstration of the fact that the frequency plays the role of the degree of the polynomial. The frequency, or doubling index, of a harmonic function tells you what kind of polynomial it looks like. And the claim is that we have the same without real analyticity.
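A plausible way to write this Remez inequality for harmonic functions (reconstructed from the spoken description; N denotes the doubling index of h in the unit ball):

```latex
\max_{rB} |h| \;\le\; C \,\max_{E} |h| \left( \frac{C\,|rB|}{|E|} \right)^{C N},
\qquad E \subset rB, \quad h \text{ harmonic in } B.
```

Compare with the polynomial Remez inequality: the doubling index N has taken over the role of the degree n.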
In this outline of how to prove this kind of result, I was relying on the fact that I can approximate by polynomials in a very good way. There is no such good approximation for elliptic equations; there is an approximation, but not as nice as here. But I claim that the Remez inequality still holds. So suppose you have a solution of an elliptic equation; as before, my A is uniformly elliptic, symmetric, no other terms than this one. Suppose that this holds in the unit ball, and E is a subset of a small ball. Then we claim the same holds: the maximum over rB of u is bounded by the maximum over the set of positive measure times this exponential factor. And this implies the three-ball theorem. This is joint work with Sasha Logunov; it's on the arXiv, it appeared this year, probably. So let me rewrite it once again. There are lots of reformulations, but all of them are just the same thing. Instead of proving this inequality, I prefer to think about that one, the Remez form. And instead of thinking how to extend the estimate from a set of positive measure, I want to find the measure of the set where the function is small. So the reformulation is the following. Still we have a solution of this equation; say we normalize it so that the maximum is equal to 1 in the unit ball. And I'll consider the set of points where the value is small, and I'm saying that this set is not very large: I have an estimate of the measure of the set where the function is small. As a goes to infinity, this is a smaller and smaller neighborhood of the zero set, and I'm saying that I control this measure; it decays with some absolute constant beta over N in the exponent. The constants C and beta depend on the ellipticity constants of the operator. Yes? (Question from the audience: the maximum over 2B, but is h only defined on B?) Right, you're right. Let me write it: the maximum over B/2. Well, we're always in a small ball, probably, here.
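Under the normalization that the maximum of u over the unit ball is 1, the sublevel-set reformulation is presumably of the following shape (my reconstruction from the Remez form above; N is the doubling index, and C and beta depend on the ellipticity constants):

```latex
\bigl| \{\, x \in \tfrac{1}{2}B \;:\; |u(x)| < e^{-a} \,\} \bigr|
\;\le\; C\, e^{-\beta a / N},
\qquad a \to \infty.
```

So the measure of the set where u is exponentially small decays as a grows, at a rate controlled by the doubling index.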
We'll also have to state our estimate with half of the ball: this will be half of the ball here, and this is the unit ball. Yes? Before I give you the ideas of the proof of that one, let me go back to the lecture on Tuesday and tell you something about the connection between the doubling index and the control of zero sets. We'll not use it directly, but we will borrow some ideas from what was known before. So let's pause the proof for a while and go to the doubling index and the length of the nodal set. There is a lemma that I'm going to use; I want to formulate it. Suppose that we have a solution of our elliptic equation and we control the doubling index. Then there is control of the zero set, both from above and from below. Assume that I take some ball of radius r. Then the zero set is bounded below by a function of N times r to the d minus 1, and above by another function of N times r to the d minus 1. This is a relatively old result. The estimate from below is simple if you allow f(N) to go to zero as N grows; there is Yau's conjecture, and Logunov's theorem tells you that you have uniform control here, but the result was known before that you can control this, if you control the frequency in some way, provided you allow this function to go to zero when N is large. There is also control from above. Thank you: the function should be zero at the center, so that we're not talking about positive solutions. The initial estimate from above was due to Hardt and Simon. It's a very nice paper where solutions of elliptic equations are approximated by polynomials on a very small scale, and you get some kind of estimate, but their function here grows more like N to the power N. Still, it tells you that if you control the doubling index, you at least somehow control the zero set: you have some estimates on it. If A is real analytic, you can do more than that, and you have very good control from above: under the same assumptions, in the real analytic case this is bounded by N times r to the d minus 1.
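The two-sided bound in the lemma can presumably be written as follows (a reconstruction; Z_h is the zero set of h, H^{d-1} is the (d-1)-dimensional Hausdorff measure, and the lower bound needs h to vanish at the center of the ball):

```latex
f(N)\, r^{d-1}
\;\le\;
\mathcal{H}^{d-1}\bigl( Z_h \cap B(x, r) \bigr)
\;\le\;
F(N)\, r^{d-1},
\qquad h(x) = 0,
```

where the old results have f(N) going to zero as N grows and F(N) growing roughly like N^N, while in the real analytic case the upper bound improves to C N r^{d-1}.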
It's due to Donnelly and Fefferman, and they were using it to prove Yau's conjecture in the case when the metric is real analytic. To prove the estimate from below in the real analytic case, they used the following nice thing. Now suppose that we are on a compact manifold. We already know that the doubling index for each ball inside is bounded by the square root of the eigenvalue. So we have a compact manifold and we have an eigenfunction. Suppose you cover your manifold by balls and compute the doubling index for each of them; cover it so that when you double the balls, you still control the multiplicity of the covering. If you sit and think for a while, you will see that there is no way that all balls have very big doubling index. If you have a lot of doubling happening around each point, use the fact that the L-infinity and L2 norms are comparable, and think about L2 norms. If you double each ball, your L2 norm grows a lot; but you already cover everything with the initial balls, and when you double everything, you cover it only finitely many times. So there is no way all balls have large doubling index. But what they noticed for a real analytic metric, so I will add real analytic here, is the following. Cover the manifold by balls at the scale of one over the square root of lambda; approximately, we know that in each ball like that there is a zero set, from the first lecture. And we assume, and it's a fancy way to write down that when I double the balls I have control of the multiplicity of the covering, that the sum of the characteristic functions is bounded by a constant. And then among these balls there are many with small doubling index: at least a 1/C portion of the balls. You can convince yourself that there is at least one ball where the doubling index is small: take the ball where the maximum is attained. But what they claim is that if the metric is real analytic, then you have many balls: there is a portion of the balls where the doubling index is bounded.
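A minimal version of the counting argument, stated with L2 doubling (a sketch under my own conventions: the balls B_i cover the manifold M, K bounds the multiplicity of the doubled covering, and N is a would-be common lower bound for all the doubling indices):

```latex
\int_M h^2
\;\le\; \sum_i \int_{B_i} h^2
\;\le\; 4^{-N} \sum_i \int_{2B_i} h^2
\;\le\; 4^{-N} K \int_M h^2
\quad\Longrightarrow\quad
4^{N} \le K.
```

So if every ball doubled the L2 norm by at least 2^N, then N would be forced to be at most log_4 K, a dimensional constant: not all balls can have large doubling index.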
If you combine this with the estimate for the nodal set, you get the lower bound in Yau's conjecture for real analytic metrics. That's how it was done originally, but don't learn from it. The idea is that you have to hunt for balls with small doubling index. You know that globally the doubling index is bounded by the square root of lambda, or in the case of solutions of elliptic PDEs by the global doubling index, and you want to find balls where the doubling is less than this maximal one. So now we go back to quantitative unique continuation. The first technical thing that I want to do is to replace balls by cubes. It's much easier to divide everything into cubes than into balls and then think about how they cover everything. So I'll define the doubling index for a cube Q and a function h as the log of the maximum of the function over a large copy of the cube divided by the maximum over Q. Here I take the cube and a large copy of it; I don't want to just double it, for technical reasons I want to multiply it by a constant K. To tell you the truth, what we want is that if you take the ball circumscribed around the small cube and the ball inscribed in the large cube, the ratio between those is like two; then you have to multiply the cube by a large constant depending on the dimension. So this is the doubling index for a cube. The first observation, which is true for any function at all, is the following. Suppose you have a function in a cube, divide this cube into many small cubes, and suppose that for each small cube the doubling index is large; then on the big scale the doubling index of the whole cube is very large. Let me try to formulate it carefully. So we have a bounded function h on a cube Q, and Q is partitioned.
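The cube doubling index just described can plausibly be written as (a reconstruction of the board definition; K = K(d) is the fixed dilation constant from the text):

```latex
N_h(Q) \;:=\; \log \frac{\max_{KQ} |h|}{\max_{Q} |h|},
```

where K(d) is chosen so that the ball circumscribed around Q and the ball inscribed in KQ have radii in ratio about two, which lets one pass between cube doubling and ball doubling.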