It's about time to get started. This is the last day, and we'll see how far we get; not nearly as far as I had originally thought we might, which turned out to have been wildly over-optimistic. But we'll do what we can. The next thing I want to do is at least sketch the proof of the famous theorem of Coifman, McIntosh, and Meyer, which says that for A Lipschitz, the operator C_A, that is, the Cauchy integral along the graph of the Lipschitz function A, is bounded on L^2. In particular, we have the following estimate. This is on L^2(R), because remember, this is the version written in parametric form, using the coordinates induced by the Lipschitz graph: the operator norm is at most a uniform constant times (1 plus the Lipschitz constant) to, what power do I get? The fifth power. That power is not optimal, but it's plenty good enough for almost all purposes. Recall that, as a consequence of the exercise you started yesterday, we already know this is true provided the Lipschitz constant is sufficiently small. Coifman, McIntosh, and Meyer removed the smallness condition. We're going to deduce this, or at least sketch the proof, using the T(b) theorem for square functions that we proved yesterday, the theorem of Semmes. There are two steps. Step one is a resolution of this operator as a sort of Calderón reproducing formula; in fact, it really is a Calderón-type reproducing formula, except that instead of reproducing the identity operator, it reproduces the Cauchy integral operator. So step one is to prove the following resolution of the Cauchy integral operator. It's going to be an integral, at least formally; I'll explain in a moment how one makes it rigorous. Where theta_t is what?
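For the record, the statement in question can be written out as follows (a sketch in my transcription; the normalization of the Cauchy integral and the exact power of the Lipschitz constant are as stated in the lecture, with the power explicitly not optimal):

```latex
% Coifman--McIntosh--Meyer: for A : \mathbb{R} \to \mathbb{R} Lipschitz, the Cauchy
% integral written in graph coordinates (normalizing constant omitted),
%   C_A f(x) = \mathrm{p.v.} \int_{\mathbb{R}} \frac{f(y)}{x - y + i\,(A(x) - A(y))}\, dy,
% is bounded on L^2, with
\| C_A f \|_{L^2(\mathbb{R})}
  \;\le\; C \,\bigl(1 + \|A'\|_{L^\infty}\bigr)^{5}\, \| f \|_{L^2(\mathbb{R})}.
```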
Theta_t f(x), as usual (we're in one dimension here), is an integral against some function psi_t(x, y), where in this case psi_t(x, y) is 1/(2 pi i) times t over (x - y + i(A(x) - A(y) + t)) squared. There's no integration in this expression itself; I'm just giving you the kernel. (Yes, thank you; for clarity, let me make that a square bracket: the whole business in the denominator is squared.) Now notice that psi_t is in the Littlewood–Paley class: because A is Lipschitz, it's easy to check that psi_t satisfies the standard Littlewood–Paley size and smoothness conditions in the one-dimensional setting. In a moment we'll see that it does nice things to a particular accretive function b, and therefore the square function associated with theta_t is going to be bounded. We'll come back to that, but let me briefly comment on how we make sense of this resolution. Remember what the operator C_A is; that's the operator whose L^2 bound we're trying to obtain. What you actually do is resolve a slightly altered version of it, call it C_A^epsilon, and the only difference is that there's an epsilon added in the denominator. Notice that this keeps us away from the singularity: along the diagonal, where x equals y, you've still got an epsilon in the denominator, which keeps things from degenerating. So it's actually C_A^epsilon that one resolves this way; technically you integrate from epsilon to infinity, but the bounds you get in the end are independent of epsilon, and then you let epsilon go to zero. Okay, so notice that if we have this kind of variant of the Calderón reproducing formula, we can dualize.
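Written out, the kernel and the (formal) truncated resolution being described read as follows. The placement of the multiplication by b = 1 + iA' is how I read the dualized pairing that comes up in the next step, and c is a harmless constant I won't try to pin down; the notes have the exact form:

```latex
\psi_t(x,y) \;=\; \frac{1}{2\pi i}\,
  \frac{t}{\bigl(x - y + i\,(A(x) - A(y) + t)\bigr)^{2}},
\qquad
\Theta_t f(x) \;=\; \int_{\mathbb{R}} \psi_t(x,y)\, f(y)\, dy,
```

```latex
% With b = 1 + iA', M_b f = b f, and c a harmless constant:
C_A^{\varepsilon} \;=\; c \int_{\varepsilon}^{\infty}
  \Theta_t \, M_b \, \Theta_t \,\frac{dt}{t},
\qquad \varepsilon > 0 .
```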
So to prove that C_A, or really C_A^epsilon, is bounded on L^2, we let g and h be in L^2 of the line, and it's enough to control the pairing of C_A^epsilon g with h. What we want, of course, is that this is at most a constant times the appropriate power of the Lipschitz norm of A, times the L^2 norm of g, times the L^2 norm of h. In turn, using the resolution of the Cauchy integral operator and then dualizing (again, I'll proceed formally and pretend we can apply Fubini's theorem; in fact, if you really integrate from epsilon to infinity, that's legal), you flip the outermost theta_t over onto h, and what you get is the integral of (1 + iA')(x) theta_t g(x) times theta_t^tr h(x), dx dt/t. The pairing here is without complex conjugation, so this is the transpose, not the adjoint. Then you use Cauchy–Schwarz, which produces a factor of 1 plus the Lipschitz constant, and two square functions that we have to estimate: the integral of |theta_t g(x)|^2 dx dt/t to the one half, times the same thing for h but with theta_t^tr. By the way, I should mention that the proof I'm showing you is not the original proof of Coifman, McIntosh, and Meyer; theirs was a hands-on proof that preceded the advent of T(b) theorems. This is Semmes's proof; it's why he proved his T(b) theorem for square functions. Another thing I should mention: how does one actually get the resolution formula? I'm going to omit that proof today to save time; you can find it in the notes. It's a complex-analysis proof: it's just a matter of using Cauchy's formula in a sufficiently clever way, and this is what comes out.
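Schematically, the chain of estimates just described is the following (with the implicit constant hedged, and theta_t^tr denoting the transpose, i.e. the pairing taken without conjugation):

```latex
\bigl|\langle C_A^{\varepsilon} g, h\rangle\bigr|
\;\lesssim\;
\int_{\varepsilon}^{\infty}\!\!\int_{\mathbb{R}}
  \bigl| \bigl(1 + iA'(x)\bigr)\,\Theta_t g(x)\;\Theta_t^{tr} h(x) \bigr|
  \,dx\,\frac{dt}{t}
\;\le\;
\bigl(1 + \|A'\|_{\infty}\bigr)
\left( \iint |\Theta_t g|^2 \,\frac{dx\,dt}{t} \right)^{1/2}
\left( \iint |\Theta_t^{tr} h|^2 \,\frac{dx\,dt}{t} \right)^{1/2}.
```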
It's purely complex analysis, plus writing things in parametric form. Not an obvious computation, but if you've had a course in complex analysis you can easily read it; it's elementary, even if clever. Step two, then, is to prove the following square function bound, which I'll call (star), and similarly for theta_t^tr, which, if you think about it for a moment, you realize is actually minus theta_{-t} when you take the transpose. (I need a square on that side, thanks.) Once you have that, you're done. But how do we get it? Well, we've already observed that the kernel of this operator is a Littlewood–Paley kernel. So to prove (star), by Theorem 4.4, which is the T(b) theorem for square functions, it's enough to show that there exists an accretive b such that, well, in fact we're going to do even better than having theta_t b give rise to a Carleson measure: we're actually going to have theta_t b equal to zero. So it's a very special case of the T(b) theorem. And what's the b that we use? We take b = 1 + iA'. Of course that's accretive: A is Lipschitz, so b is bounded, and the real part is 1, which is certainly bounded away from zero. (Question from the audience: yes, you're absolutely right, thank you; that R^n was written out of habit. Everything here is in dimension one.) All right, so why is theta_t b zero? Let's just see what it is. Let g(z) be the constant function 1 on the complex plane; certainly analytic. Then what is theta_t applied to 1 + iA', at a point x? Pulling the t out, it's t/(2 pi i) times the integral of 1 over (x - y + i(A(x) - A(y) + t)) quantity squared
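The square function bound (star) and the transpose identity just mentioned can be written out as follows (the power alpha of the Lipschitz constant is the one tracked in the notes; I won't pin it down here):

```latex
(\star)\qquad
\iint_{\mathbb{R}^{2}_{+}} |\Theta_t f(x)|^{2}\, \frac{dx\,dt}{t}
\;\le\; C\,\bigl(1 + \|A'\|_{\infty}\bigr)^{\alpha}\, \|f\|_{L^2(\mathbb{R})}^{2},
```

```latex
% Transpose identity: one checks directly from the kernel that
\psi_t(y,x) \;=\; \frac{1}{2\pi i}\,
  \frac{t}{\bigl(y - x + i\,(A(y) - A(x) + t)\bigr)^{2}}
\;=\; -\,\psi_{-t}(x,y),
\qquad\text{i.e.}\qquad
\Theta_t^{tr} \;=\; -\,\Theta_{-t},
```

since squaring the denominator makes the overall sign flip of (x - y) and (A(x) - A(y) - t) harmless.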
times (1 + iA'(y)) dy; I ran myself out of room on the board there. Now, if we take z to be the point x + i(A(x) + t), and parametrize w = y + iA(y), what you see is that this is just t/(2 pi i) times the integral of 1/(z - w)^2 dw. The first integral is over R, and this one is over the Lipschitz graph Gamma, the graph of the function A. But by Cauchy's formula, this is t times g'(z), and g' is zero. Now, you may worry slightly that this is an unbounded curve; how do we apply Cauchy's formula? The point is that you can approximate by ever-expanding bounded closed curves and pass to the limit, because you have sufficiently rapid decay at infinity. And that's the proof. Maybe one more comment is in order: this square function estimate actually goes back to Carlos Kenig's thesis, but he proved it before the advent of T(b) theorems, by hand, and it was a serious piece of work. This just shows the power of the T(b) methods: it comes out just like that once you have T(b) theorems. Oh, and one last comment: I've glossed over the precise dependence on the Lipschitz constant. How do you get it? You go back and think about how the constants arise in the T(b) theorem; it's just a matter of keeping track of them as they arise in the application. That's how you get the precise dependence. Okay, so the last topic that we're going to discuss in the course is something called local T(b) theorems. These exist for both square functions and singular integrals; we're just going to talk about the square function version.
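Since the vanishing of theta_t(1 + iA') is just Cauchy's theorem in disguise, it's easy to check numerically. The sketch below uses a sample Lipschitz function A(y) = 0.3 sin(y), my own illustrative choice, not one from the lecture, and a plain midpoint rule; truncating the integral to [-L, L] introduces an error on the order of t/L, so we expect a number near zero rather than exact zero.

```python
import math

def A(y):
    # A sample Lipschitz function with Lipschitz constant 0.3 (illustrative choice).
    return 0.3 * math.sin(y)

def Aprime(y):
    return 0.3 * math.cos(y)

def theta_t_b(x, t, L=2000.0, n=200_000):
    """Midpoint-rule approximation of
         Theta_t(1 + iA')(x)
           = (t / (2 pi i)) * int_R (1 + i A'(y)) / (x - y + i(A(x) - A(y) + t))^2 dy,
       truncated to y in [-L, L].  Cauchy's theorem says the full integral is 0."""
    h = 2.0 * L / n
    total = 0.0 + 0.0j
    for k in range(n):
        y = -L + (k + 0.5) * h           # midpoint of the k-th subinterval
        denom = (x - y) + 1j * (A(x) - A(y) + t)
        total += (1.0 + 1j * Aprime(y)) / (denom * denom) * h
    return t / (2.0 * math.pi * 1j) * total

# Should be small: only the O(t/L) truncation error survives.
print(abs(theta_t_b(0.7, 1.0)))
```

The same vanishing holds for any x and any t > 0, which is exactly why theta_t b = 0 identically for b = 1 + iA'.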
It's going to turn out that what we do here is a sort of toy version of the key step in the solution of the Kato problem. My original plan was to talk about that, but we won't get there; if you're interested, it's in Section 6 of the notes. Still, this already gives you an indication of some of the essential ideas that underlie that proof. So what do I mean by a local T(b) theorem? Remember that the T(b) theorem, say the one of Semmes, entailed finding one particular accretive function b on which your square function operator behaved nicely: a single testing function. The idea of a local T(b) theorem is that instead you find a family of testing functions, typically indexed by the dyadic cubes. For each cube there's a different testing function, and instead of requiring globally good behavior of your operator on one accretive function b, you only need to know that, localized to a particular cube Q, your operator acts nicely on the testing function associated to that cube. The good part is that this gives you a lot more flexibility: in principle, it's much easier to find a testing function that works nicely locally than to find a single one that works well globally. The drawback is that, of course, you have to pay a price somewhere, so the proof becomes a bit more delicate; but it's not too bad, as you'll see. We just need a couple of additional ingredients. The first thing we want to do is prove a certain lemma (what's my numerology here? Lemma 5.1), which is a sort of John–Nirenberg lemma for Carleson measures. And what do I mean by a John–Nirenberg lemma, before we get into the proof?
If you recall the classical John–Nirenberg lemma about BMO, the point is this: frequently in harmonic analysis, if you have some kind of local estimate that holds on every cube in a scale-invariant way, then it can somehow self-improve. With BMO, of course, the self-improvement is that the BMO estimate upgrades to an L^p, or even an exponential, estimate locally. We're going to see a similar self-improvement phenomenon here. So the situation is this. Let mu be a non-negative measure on the half-space R^{n+1}_+, and suppose there exist two positive constants, C_1 and eta, such that for every dyadic cube Q in R^n the following holds: there exists a family F, or to be specific F_Q, of dyadic cubes Q_j, non-overlapping dyadic sub-cubes of Q, such that, first of all, the set E_Q, defined as the complement in Q of the union of the Q_j's, has measure at least an eta portion of the measure of Q. So the Q_j's fail to cover some ample, eta-sized, portion of Q. And second, our measure mu behaves like a Carleson measure, perhaps not on all of the Carleson box associated to Q, but on a set E_Q^*, which I'll explain in a second: E_Q^* is defined to be the Carleson box R_Q associated to Q, minus the union of the Carleson boxes associated to the Q_j's in the family F_Q. (Remember, R_Q is the Carleson box Q cross (0, l(Q)], the (n+1)-dimensional box associated to Q.) I should have said: such that for all Q, we have mu(E_Q^*) at most C_1 times the measure of Q. The conclusion, then, is that mu is actually a Carleson measure.
Then, in fact, mu is a Carleson measure with, first of all, dyadic Carleson norm (C for Carleson) at most C_1 over eta; the dyadic Carleson norm is, by definition, the sup over all dyadic cubes Q of mu(R_Q) divided by the measure of Q. And then the full Carleson norm of mu, which is the same thing but with the sup taken over all cubes Q, is at most a purely dimensional constant times C_1 over eta. The fact that the second bound follows from the first is a very easy exercise: for Carleson measures, it's enough to test on dyadic cubes, so if you have a dyadic Carleson measure, then it's a Carleson measure. That's easy to check. Maybe I should give these objects names. So let's prove this. By the way, again, what's the self-improvement? Well, a priori you're not given that mu is a Carleson measure, only that there's this ample sub-region of R_Q where mu behaves well; and since this happens in a scale-invariant way for all cubes, it self-improves to the fact that mu actually is a Carleson measure. Maybe I should start with a picture of what's going on. Here's the cube Q down here; here's the Carleson box R_Q; and we have some cubes Q_j here, whose associated Carleson boxes we remove. There's still an ample portion of contact down here that's not covered up by the Q_j's, and E_Q^* is this part up above, a sort of dyadic sawtooth domain. All right, so how do we prove this? The proof is quite easy; you just do the natural thing. As we said, it's enough to work with Q dyadic, and of course you're trying to control the size of mu(R_Q). So you split it into two pieces.
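Collecting the hypothesis and conclusion, the lemma as I've transcribed it from the board reads:

```latex
% Lemma 5.1 (John--Nirenberg for Carleson measures).
% Suppose \mu \ge 0 on \mathbb{R}^{n+1}_+ and there exist constants C_1, \eta > 0
% such that for every dyadic cube Q there is a family F_Q = \{Q_j\} of
% non-overlapping dyadic sub-cubes of Q with
|E_Q| \;=\; \Bigl| Q \setminus \bigcup_j Q_j \Bigr| \;\ge\; \eta\,|Q|,
\qquad
\mu(E_Q^{*}) \;\le\; C_1\,|Q|,
```

```latex
\text{where}\quad
E_Q^{*} \;=\; R_Q \setminus \bigcup_j R_{Q_j},
\qquad R_Q \;=\; Q \times (0, \ell(Q)].
% Then
\|\mu\|_{\mathcal{C},\,\mathrm{dyadic}}
\;=\; \sup_{Q\ \mathrm{dyadic}} \frac{\mu(R_Q)}{|Q|}
\;\le\; \frac{C_1}{\eta},
\qquad
\|\mu\|_{\mathcal{C}} \;\le\; C_n\, \frac{C_1}{\eta}.
```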
There's the piece in E_Q^*, that is, the sawtooth region, and there's the part that's covered up by the R_{Q_j}'s. Now, by hypothesis, the first part is at most C_1 times the measure of Q. And now we use the fact that the hypothesis holds in a scale-invariant way for every cube Q, so it holds again in the Q_j's, and we iterate. So for each Q_j we have the measure of its associated sawtooth, mu(E_{Q_j}^*), plus a sum over the family F_{Q_j} associated to Q_j, which we index by i (let me put j up top and i below): mu(R_{Q_j^i}), where the cubes indexed by i form the family associated to the cube Q_j. Now let's see what kind of estimate we have so far. By the hypothesis applied to each Q_j, the sawtooth pieces contribute at most the sum over j of C_1 times the measure of Q_j, plus the remaining bit, the sum over j and i of mu(R_{Q_j^i}). But how big is that sum over j? Notice that the complement of the union of the Q_j's has measure at least eta times the measure of Q, and the Q_j's are non-overlapping, so the sum of the measures of the Q_j's is at most (1 - eta) times the measure of Q. So, so far, we have at most C_1 times the measure of Q, times 1 from the first step plus (1 - eta) from the second, plus the remaining bit. And now you iterate. When you iterate, you get C_1 times the measure of Q times a convergent geometric series, and then you're done; just add it up, and that's the bound you get. Okay, so now let's state the local T(b) theorem. What's my numerology here? Theorem 5.5. As usual, we're going to let theta...
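The iteration just performed, written out: splitting, applying the hypothesis at each generation, and using that the non-overlapping Q_j's satisfy the packing bound sum_j |Q_j| <= (1 - eta)|Q|, one gets

```latex
\mu(R_Q)
\;\le\; \mu(E_Q^{*}) + \sum_j \mu(R_{Q_j})
\;\le\; C_1 |Q| + \sum_j \Bigl( \mu(E_{Q_j}^{*}) + \sum_i \mu(R_{Q_j^{\,i}}) \Bigr)
\;\le\; C_1 |Q| \bigl( 1 + (1-\eta) \bigr) + \sum_{j,i} \mu(R_{Q_j^{\,i}})
\;\le\; \cdots
\;\le\; C_1 |Q| \sum_{k=0}^{\infty} (1-\eta)^{k}
\;=\; \frac{C_1}{\eta}\,|Q|.
```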
theta_t f be defined by an integral over R^n of some kernel: theta_t f(x) = integral of psi_t(x, y) f(y) dy, and psi_t