less than e2 minus beta A_M. There is nothing fancy there, very simple algebra. You can take one over K here, but you don't need it; you can forget about that. So, once again, where does this inequality come from? I divided the cube into small ones: I have K minus one cubes like that, and one cube like this one. But I have also renormalized the measure: the measure in each cube here was multiplied by the coefficient one over K, and that is where these coefficients come from. But, as I said, you don't need to think about it.

So what is surprising? It looks to me that if you do the induction in an honest, naive way, you get a very bad estimate in its dependence on the doubling index. But if you use this very simple observation, that there is one cube where the doubling index is less than one half of the initial one, and use induction, you are able to recover the precise estimate; there is no hope for a better estimate than the one we have here.

So this was the proof. It is outlined in the notes, and here it was even simpler than in the notes. Let me comment a little bit on what goes next.

What we did here is propagate smallness from a set of positive measure instead of from an open set in the ball. We said: we have some wild set of positive measure, we know its measure, so let us propagate the smallness of the solution from this set. But really the zero sets are of dimension D − 1. So if we have an estimate on a set of larger dimension, that is, if we know that our solution is small on some set which does not have positive measure but in a sense has dimension larger than D − 1, we should be able to do a similar thing. This is well known for real-analytic functions: you can propagate smallness from subsets of dimension larger than D − 1, and it is also true here.

So let me define the Hausdorff content of a set. It is almost the Hausdorff measure, but it is a better object for our purposes: it is the infimum of the sums of the radii to the power k when you take your set E and cover it by balls of radii r_j.
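Written out (my transcription of the lecturer's verbal definition), the k-dimensional Hausdorff content of a set is:

```latex
% k-dimensional Hausdorff content of a set E \subset \mathbb{R}^D:
% infimum over all countable coverings of E by balls B(x_j, r_j).
\mathcal{C}^{k}_{\mathcal{H}}(E)
  \;=\;
  \inf\Big\{ \sum_j r_j^{\,k}
     \;:\; E \subset \bigcup_j B(x_j, r_j) \Big\}.
```

Note that, unlike the k-dimensional Hausdorff measure, this quantity is finite for every bounded set, and it is positive exactly when the Hausdorff measure is positive.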
Here k is the dimension. The advantage of the Hausdorff content, compared to the Hausdorff measure, is that the Hausdorff measure is almost always zero or plus infinity: if you fix a set and run over all different k (k is not necessarily an integer, it is a positive real), the Hausdorff measure is either plus infinity or zero, while the Hausdorff content is an honest number, bounded for bounded sets.

So instead of propagating smallness from sets of positive measure, what we can do is propagate from sets of dimension larger than D − 1. If δ is positive and E is a set, as before, with its (D − 1 + δ)-dimensional Hausdorff content fixed, then there is an estimate very similar to what we had before, just the same thing: u is a solution of an elliptic equation, as always with controlled coefficients, and then there are constants such that the same inequality is true. Here δ positive is a natural threshold: if you think about dimension D − 1, a set of that dimension could be a zero set of your function, and then there is no way to get an inequality like that; the zero sets are of codimension one. But if you control your function on a set of dimension larger than D − 1, you can extend smallness in this way.

The proof is similar, but it is based on a very nice lemma on the distribution of the doubling index, due to Sasha Logunov, published this year. If you divide a cube into many cubes, as we did yesterday (we iterated this idea: given one cube, do it several times, and you find many cubes with a smaller doubling index), then the truth is that the number of cubes where the doubling index is larger than one half of the initial one is controlled: there is a constant c and a constant B_0 such that if you divide Q into B^D small cubes, with B at least B_0, the number of cubes with large doubling index is bounded by B^{D − 1 − c}. We did it yesterday without this c.
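In symbols, the propagation estimate and the lemma just described read roughly as follows; this is my hedged reconstruction from the spoken description, not the precise statement with all hypotheses:

```latex
% Propagation of smallness from a set of positive (D-1+\delta)-content:
% u solves div(A \nabla u) = 0 in \Omega with controlled (uniformly
% elliptic, Lipschitz) coefficients, K \Subset \Omega compact, and
% E \subset \Omega with \mathcal{C}^{D-1+\delta}_{\mathcal{H}}(E) \ge m > 0.
\sup_{K} |u|
  \;\le\; C \Big( \sup_{E} |u| \Big)^{\gamma}
            \Big( \sup_{\Omega} |u| \Big)^{1-\gamma},
\qquad \gamma \in (0,1),
% with C and \gamma depending on \delta, m, the coefficients, K, \Omega.

% Logunov's lemma on the distribution of the doubling index:
% divide Q into B^D equal subcubes q; N(\cdot) is the doubling index,
% assumed large enough for the initial cube Q.  Then, for B \ge B_0,
\#\Big\{ q \;:\; N(q) \ge \tfrac12\, N(Q) \Big\}
  \;\le\; C\, B^{\,D-1-c}.
```

The point of the exponent D − 1 − c, strictly below D − 1, is exactly what the next part of the lecture uses.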
That part is simple. The truth is you can go one dimension further, and this is what allowed him to prove the estimates for the nodal sets, which have dimension D − 1. In our proof everything was simple because we were estimating something of dimension D; if you want to estimate the nodal set, you need finer tools, and this is where they come from: from knowing that the number of these bad cubes behaves like a power smaller than D − 1.

There are two questions that I want to formulate in connection with this result. Let us forget about these small sets now and go back to positive measure. I can say now that I have my equation in Omega, E is a subset, K is a compact subset of Omega; then, iterating the estimate, you can easily get the same inequality with some constants C and gamma. The question that we don't know how to answer is: what is the dependence of these constants on the distance to the boundary? I want to think about the distance from E to the boundary of the domain. When you go closer and closer, your constants start to blow up and you lose control; to have some honest estimates on these constants, something that would give you a quantitative way to go to the boundary, would be extremely nice. One of the reasons for that is a very old and well-known open problem about harmonic functions, just harmonic functions.

Assume that you have a harmonic function h in the unit ball of R^D, or of R^3, and I will assume that this function is smooth up to the boundary; how smooth, it is up to you to choose, say C^2 on the closed ball. There is a set on the boundary, denote it by F, of positive measure; I should say positive Hausdorff measure of dimension two, since the dimension of the boundary is two in the case of R^3. Suppose that h is zero on F together with its normal derivative, or together with its gradient. Does this imply that the function is zero? In dimension two we have complex analysis, which helps a lot and tells you that this is the case without any smoothness
assumptions: if you have a harmonic function that vanishes on a positive-measure part of the unit circle together with its derivative, then the function is zero. Here, in higher dimensions, it is known that you need some smoothness assumption: C^{1+alpha} is not enough, but say C^2; and if you don't like C^2, take C^∞, a very smooth harmonic function up to the boundary. Can you conclude that it is zero if you know that it has zero Cauchy data on a set of positive measure? If you had very good control on how our constants depend on the distance between the set and the boundary in this inequality, there would be hope to prove this one.

And the second question: what about the gradients of solutions of elliptic PDEs? We know that you can propagate smallness for the gradients in the same way as you do it for the solutions themselves, say in this way; but this is simple. For the gradient, you would expect the zero set to be of dimension D − 2, so the threshold should be much further down, and the question is: can you do it from sets of dimension larger than D − 2? So the question is: if δ is positive, can you extend smallness of the gradient if you assume that the dimension is larger than D − 2? We don't know the answer. What we do know is that there are some situations of codimension larger than one where you can still do it: the answer is yes for some δ, some δ between zero and one, but we cannot move this bound all the way down. So I think it is a good point to stop, with these two nice questions that you can think about tonight, and thank you very much for your attention.

[Questions from the audience]

This one is not known; what we know is that there is some δ, so we can go a little bit below D − 1, but not all the way to D − 2, which is what should be expected. Sorry... Yes, yes, yes. Yes. No, here the smoothness... it looks like the smoothness is important; probably one can produce an example. For C^∞?
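To summarize the two open questions in symbols (my hedged reconstruction of what was formulated above):

```latex
% Question 1 (Cauchy data on the boundary): h harmonic in the unit
% ball B_1 \subset \mathbb{R}^3, smooth up to the boundary, say
% h \in C^2(\overline{B_1}); F \subset \partial B_1 with surface
% measure \mathcal{H}^2(F) > 0.  Does
h\big|_{F} = 0
\quad\text{and}\quad
\partial_\nu h\big|_{F} = 0
\;\overset{?}{\Longrightarrow}\;
h \equiv 0 \ \text{in } B_1\,?

% Question 2 (gradients; \mathcal{C}^{k}_{\mathcal{H}} is the
% k-dimensional Hausdorff content): for every \delta > 0, does
\mathcal{C}^{\,D-2+\delta}_{\mathcal{H}}(E) > 0
\;\overset{?}{\Longrightarrow}\;
\sup_{K} |\nabla u|
  \le C \Big( \sup_{E} |\nabla u| \Big)^{\gamma}
        \Big( \sup_{\Omega} |\nabla u| \Big)^{1-\gamma}\,?
% Known: yes for sets of dimension larger than D - 1 - \delta_0,
% for some small \delta_0 > 0, but not all the way down to D - 2.
```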
I don't know, but the feeling is that if there is some smoothness, there is a hope to get this one. Another question is whether positive measure is the right characterization, or whether you can do something nicer: think about F and try to find some nice capacity that corresponds to this property. Any other questions? Thank you once again.