This is going to be left to you as an exercise. A couple of standard results that we're going to need. I don't know why this is a theorem in the notes, it's really just a lemma, but it says the following. Suppose b is a BMO function. Does everybody know what BMO is? It's in the notes, OK? By the way, some of you may have to download the app to get the notes; I'm a broken record about this by now. So b is in BMO, and theta_t is going to be defined, as usual, by integrating psi_t(x,y) against f(y) dy. And we're going to assume that theta_t(1) = 0, which means, of course, that integrating the kernel in the y variable gives 0 for all x and t. And I should have said: psi_t satisfies the Littlewood-Paley bounds, OK? Consequently, in particular from the previous theorem, Theorem 2.10, we know that the square function associated to this operator is bounded on L2, OK? And then the conclusion is that the measure on the upper half-space, d mu(x,t), defined to be |theta_t b(x)|^2 dx dt/t, is what's called a Carleson measure, OK? For short, from now on, I'll just abbreviate those as CM; CM always means Carleson measure. And what does that mean? It means that the sup over all cubes Q contained in R^n (you could use balls just as well) of the expression 1 over the Lebesgue measure of Q, times the mu measure of R_Q (I'll tell you what R_Q is in a second), is finite. Let's give this expression a name: the Carleson norm of mu. And here, R_Q is just the Carleson box associated to Q. Meaning: we're in the upper half-space, here's a cube Q on the boundary, and we extend it up into the upper half-space to make an (n+1)-dimensional cube. That's R_Q, OK? So in some sense, a Carleson measure is an (n+1)-dimensional measure that scales like an n-dimensional measure, OK?
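Recording the definitions just described in displayed form (a sketch of the statement; ℓ(Q) denotes the side length of the cube Q, following the lecture's notation):

```latex
% theta_t f(x) = \int \psi_t(x,y) f(y) dy, with theta_t 1 = 0; the lemma asserts:
\[
  d\mu(x,t) \;=\; |\theta_t b(x)|^2 \,\frac{dx\,dt}{t}
  \quad\text{is a Carleson measure, i.e.}\quad
  \|\mu\|_{\mathcal C} \;:=\; \sup_{Q \subset \mathbb{R}^n}
  \frac{1}{|Q|}\,\mu(R_Q) \;<\; \infty,
\]
\[
  \text{where}\quad R_Q \;:=\; Q \times \bigl(0,\ell(Q)\bigr)
  \quad\text{is the Carleson box over } Q.
\]
```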
So in other words, for this particular measure d mu, this is saying in this particular case that the sup over all cubes Q of 1 over the measure of Q, times the integral from 0 to the side length of Q, integrated over Q, of |theta_t b(x)|^2 dx dt/t, is less than or equal to some constant, depending on the dimension and the Littlewood-Paley bounds for psi_t, times (of course, by scaling it has to depend on this) the BMO norm of b squared, OK? The proof of this is an exercise, OK? And now there's one other thing that I'm going to mention without proof and just give you a reference. Theorem, how am I calling it? 2.25 in the notes. This is one of the fundamental facts about Carleson measures, the so-called Carleson embedding theorem, or embedding lemma, I should say; again, I don't know why I call it a theorem. It says the following. Let mu be a Carleson measure, and let F be a continuous function in the upper half-space. That could be relaxed a little bit, but this is good enough for us, OK? Then the following is true: for all p between 0 and infinity, the integral over the half-space of |F(x,t)|^p against this Carleson measure is controlled by a purely dimensional constant times the Carleson norm of mu times, let me say it this way, the integral of the non-tangential maximal function of F to the p-th power. And let me remind you that the non-tangential maximal function at a point x is the sup of |F(y,t)| over (y,t) in the cone with vertex at x, OK? And I'm going to leave out the proof of this. You can find a proof in many places; for example, see the book of Stein entitled Harmonic Analysis, Chapter II, Section 2, OK? The proof is an exercise in the use of the Whitney decomposition lemma. Yes? Question: is this Carleson embedding an if and only if?
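For the record, the embedding lemma just stated can be displayed as follows (here Γ(x) is the standard cone of aperture 1; the exact aperture is not important, and the constant may also depend on p):

```latex
% Carleson embedding (Theorem 2.25 in the notes, as stated):
\[
  \iint_{\mathbb{R}^{n+1}_+} |F(x,t)|^p \, d\mu(x,t)
  \;\le\; C_{n,p}\, \|\mu\|_{\mathcal C}
  \int_{\mathbb{R}^n} \bigl(N_* F(x)\bigr)^p \, dx,
  \qquad 0 < p < \infty,
\]
\[
  \text{where}\quad
  N_* F(x) \;:=\; \sup_{(y,t)\,\in\,\Gamma(x)} |F(y,t)|,
  \qquad
  \Gamma(x) \;:=\; \{(y,t) \in \mathbb{R}^{n+1}_+ : |x-y| < t\}.
\]
```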
Are you asking if the conclusion here requires that mu be a Carleson measure? Yeah, I guess it does. I think so. I've never really thought about that, but probably it does; I've only ever needed it in this direction. OK. All right, so now we're going to move into what's starting to be the heart of the matter, not quite yet, but getting there. What we're going to talk about here are the so-called T(1) theorems, OK? So what does this mean? What's the idea here? Well, let's back up for a minute. Remember, what I said the course was ultimately about is L2 boundedness criteria for singular integral operators and square functions, OK? And as we said, in the convolution case, this is just a matter of using the Fourier transform and Plancherel's theorem. All right. What about the nonconvolution case, which is what we're really interested in here? In the nonconvolution case, the big idea (I've run out of room; the big idea will have to go over here) is that one seeks criteria that reduce the question of L2 boundedness to the question of behavior on some selected set of very specific testing functions, OK? So in the T(1) theorem, well, you can probably already guess where we're going with this: the testing function is just the constant function 1. So somehow, the entire question of L2 boundedness, of either singular integrals or square functions, is going to be boiled down to what the operator does to the constant function 1. Now, we've already seen a very special case of that for square functions, Theorem 2.10: if theta_t(1) was 0, then we were done; we got the square function bound, right? That was just out of the quasi-orthogonality argument that we were discussing a moment ago, OK? All right. But of course, one might expect that kind of complete cancellation, where theta_t(1) is 0, or maybe your singular integral operator kills constants.
Maybe that's stronger than what you really need to get L2 boundedness, and in fact, it is, all right? So we'll talk about this as we go on through today's lecture, probably extending into the next one as well. But let me just make a historical comment. The first theorem of this type, which ultimately we will prove, is due to David and Journé, and it was for singular integrals, Calderón-Zygmund operators, OK? The proof that we're going to present, though, is not their original proof. The proof that we're going to present is an alternative proof of Christ and Journé, which goes via a related T(1) theorem for square functions, which I'll go ahead and state now. This is Theorem 3.1, the T(1) theorem for square functions, and it's due to Christ and Journé, all right? They proved it on the way to giving an alternative proof of the David-Journé T(1) theorem for singular integrals. This is one of the themes of this mini-course: a lot of times, to control a Calderón-Zygmund operator, a singular integral, the best approach is to control some related square function, OK? That's not an entirely new point of view; people have known it for a while, but it's worth emphasizing, OK? So this T(1) theorem says the following. Let theta_t be given, as usual, by integration against psi_t (I've really run myself out of room here, let me do it like this), the usual deal, where psi_t is a Littlewood-Paley family, right? It satisfies the Littlewood-Paley size and smoothness conditions, OK? Suppose also that the measure d mu(x,t), defined to be |theta_t 1(x)|^2 dx dt/t, is a Carleson measure, OK? We've already said what that means. And then the conclusion is that the square function associated with theta_t is bounded: we get the square function bound for the theta_t's, OK?
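In displayed form, the hypothesis and conclusion of Theorem 3.1 as just stated read:

```latex
% Theorem 3.1 (Christ--Journé), T(1) theorem for square functions:
\[
  d\mu(x,t) \;=\; |\theta_t 1(x)|^2\,\frac{dx\,dt}{t}
  \ \text{ is a Carleson measure}
  \;\Longrightarrow\;
  \iint_{\mathbb{R}^{n+1}_+} |\theta_t f(x)|^2\,\frac{dx\,dt}{t}
  \;\le\; C\,\|f\|_{L^2(\mathbb{R}^n)}^2 .
\]
```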
And here the C, of course, is going to depend on the dimension, on the Littlewood-Paley constants, and of course on the Carleson norm of this Carleson measure, OK? So let's see how this is proved. This is actually going to be an easy consequence of things that we've just done, plus the one unstated, or unproved, thing that I stated for you, which is the Carleson embedding lemma, OK? I should say that this theorem, stated explicitly in this form, is due to Christ and Journé, but some of the ideas in the proof are implicit in work of Coifman and Meyer, who had previously given an alternative proof of the David-Journé theorem, OK? So here we're going to use this idea of Coifman and Meyer. We're going to write theta_t f(x) as [theta_t minus (theta_t 1)(x) times P_t] applied to f, evaluated at x (whoops, I put the parenthesis in the wrong place), plus, since we have to add this back in, (theta_t 1)(x) times P_t f(x). I'll tell you what P_t is in a second. And we're going to give this a name: the thing in the square brackets I'm going to call R_t, for remainder term. And the other term just is what it is, OK? All right, so what is P_t? P_t is going to be a nice approximate identity: P_t f(x) is defined to be phi_t * f(x), where phi_t(x), as usual, is t^{-n} phi(x/t). Phi is going to be, let's say, radial (although you really don't need that), real, C_0^infinity, with support, let's say, in the unit ball, and with integral 1, OK? So we'll refer to this from now on as a standard mollifier, or sometimes we'll call it just a nice approximate identity, because that's how they're often called, OK? So with these ingredients in place, these terms are going to be easy to handle, OK?
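The Coifman-Meyer splitting just described, written out:

```latex
% Split theta_t into a remainder R_t (which will kill constants) plus a
% term driven by theta_t 1:
\[
  \theta_t f(x)
  \;=\; \underbrace{\bigl[\theta_t - (\theta_t 1)(x)\,P_t\bigr] f(x)}_{=:\,R_t f(x)}
  \;+\; (\theta_t 1)(x)\,P_t f(x),
\]
\[
  \text{where}\quad
  P_t f \;=\; \varphi_t * f,
  \qquad
  \varphi_t(x) \;=\; t^{-n}\,\varphi(x/t),
  \qquad
  \int_{\mathbb{R}^n} \varphi \;=\; 1 .
\]
```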
Obviously, what we have to do is handle the contribution of each of those terms, the R_t term and the (theta_t 1) P_t f term, separately, showing that each of them satisfies the square function bound, OK? All right, but for this guy, the integral of |theta_t 1(x) P_t f(x)|^2 dx dt/t, look: |theta_t 1(x)|^2 dx dt/t is, by hypothesis, a Carleson measure, OK? So then we just use the Carleson embedding lemma, and this becomes something controlled by the Carleson norm of mu (where mu is that mu, right, the one built out of theta_t 1) times the L2 norm squared of the non-tangential maximal function of P_t f. But if you recall one of the very early lemmas that we stated yesterday (I don't even remember the numerology: Lemma 2.3), Lemma 2.3 tells us that this quantity is bounded in L2. The non-tangential maximal function of a standard mollifier applied to an L2 function is bounded in L2; this was Lemma 2.3, OK? So that term is fine. What about the R_t term? The R_t term does something special because of the way it's been built, OK? So first of all, we notice that P_t(1) = 1, right? P_t is an averaging operator; it preserves constants. Why? Because the integral of phi is equal to 1, OK? So therefore, what is R_t(1)? R_t(1) is going to be theta_t(1) minus theta_t(1) times P_t(1), right? You just set f to be 1. P_t(1) is 1, and so this is 0, OK? So R_t kills constants. Also, the kernel of R_t, let's call it psi-tilde_t(x,y), is equal to what? Well, we get the kernel of theta_t, which is psi_t(x,y); we know that's a Littlewood-Paley kernel, that's given to us. Then we subtract theta_t 1(x) times the kernel of P_t, which is phi_t(x - y), OK? Well, in particular, the Littlewood-Paley size condition tells us that theta_t(1) is uniformly bounded, OK? And phi_t is as nice as you could like.
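The two observations about R_t just made, displayed:

```latex
% R_t kills constants:
\[
  R_t 1 \;=\; \theta_t 1 - (\theta_t 1)\,(P_t 1)
        \;=\; \theta_t 1 - \theta_t 1 \;=\; 0 ,
\]
% and its kernel is again of Littlewood--Paley type, since
% \|theta_t 1\|_\infty is uniformly bounded by the size condition:
\[
  \tilde\psi_t(x,y) \;=\; \psi_t(x,y) \,-\, (\theta_t 1)(x)\,\varphi_t(x-y).
\]
```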
So this kernel, all told, is also a Littlewood-Paley kernel. So we have an operator with a Littlewood-Paley kernel which kills constants, and so the square function bound for R_t is immediate from the theorem we proved, or sketched the proof of, at the end of the hour last time, Theorem 2.10, OK? So by Theorem 2.10 we're done with that term, and then we're done, OK? All right, so that was easy. All right, now the next goal is to use this to prove the T(1) theorem for singular integrals. This will be a little more work. First, we have to develop a little bit of background, OK? So the first thing that we need to do is introduce the notion of the so-called weak boundedness property, which from now on we will abbreviate as WBP. And it says the following. So remember the way we defined a generalized Calderón-Zygmund operator: it's a mapping from test functions to distributions that's associated to a standard Calderón-Zygmund kernel in a certain way, OK? So let T be a Calderón-Zygmund operator, a generalized singular integral. And for any, let's say, f and g in C_0^infinity with support in an arbitrary ball B(x_0, r), OK, the weak boundedness property says: take the pairing of Tf with g, the distributional pairing, and take the absolute value of that. This is going to be less than or equal to some uniform constant times this radius r to the n-th power times the following factors, OK? It'll be the L-infinity norm of f plus r times the L-infinity norm of the gradient of f, times the same thing for g: the L-infinity norm of g plus r times the L-infinity norm of grad g, OK? All right, now, as I first wrote it, with a sup over x_0 in R^n and over radii r bigger than 0, that was wrong. [From the audience: the sup shouldn't be there; the estimate should hold for f and g supported in the same ball.] Ah, yes, of course, f and g in C_0^infinity of B(x_0, r), yes. Right, thanks.
I don't want to say it that way; you're right, thank you. WBP says the following: for all x_0 in R^n (thank you), for all r bigger than 0, and for all f, g in C_0^infinity of the ball, we have this estimate, where C is uniform: uniform with respect to f, g, x_0, and r, OK? Thanks, yeah, you're right. That was worse than unclear, it was just not right. OK, so this may seem a little mysterious, but what's going on here? For example, suppose that T is bounded on L2, OK? Then this becomes trivial. Choose f and g to be C_0^infinity with support in the ball of radius r centered at x_0, OK? If T is bounded on L2, then the pairing is bounded by the operator norm of T times the L2 norm of f times the L2 norm of g, which is less than or equal to, well, the L2 norm is bounded by the L-infinity norm times the square root of the measure of the support. So that'll be the L-infinity norm of f times r^{n/2}, and the same for g. And of course, this expression is dominated by the right-hand side of the WBP estimate, so you get weak boundedness for free, OK? So that's one reason it's called weak boundedness: real boundedness, L2 boundedness, would give you this estimate. You don't have quite that, but you have this, OK? At least to start with. Now, you may think that that's assuming a lot. This is going to be one of the hypotheses of the T(1) theorem, but it's actually not assuming so much, because we have the following, OK? So here we're going to let k(x,y) be a Calderón-Zygmund kernel, and we're going to further suppose that it's anti-symmetric, meaning that k(x,y) = -k*(x,y). And here, at least for now, when I write k*, I mean the transpose, not the adjoint, right? So I'm not taking a complex conjugate, it's just the transpose. So in other words, this is -k(y,x), OK? You flip the variables and you put a minus sign in front, OK?
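Displayed, the WBP estimate and the chain of inequalities showing that L2 boundedness implies it (the support of f and g lies in B(x_0, r), so the L2 norms are dominated by L-infinity norms times r^{n/2}, up to a dimensional constant):

```latex
% WBP: for all x_0 \in \mathbb{R}^n, r > 0, and f, g \in C_0^\infty(B(x_0,r)):
\[
  |\langle Tf, g\rangle|
  \;\le\; C\, r^n
  \bigl(\|f\|_\infty + r\,\|\nabla f\|_\infty\bigr)
  \bigl(\|g\|_\infty + r\,\|\nabla g\|_\infty\bigr).
\]
% If T is already bounded on L^2, this is immediate:
\[
  |\langle Tf, g\rangle|
  \;\le\; \|T\|_{L^2 \to L^2}\,\|f\|_2\,\|g\|_2
  \;\le\; C_n\,\|T\|_{L^2 \to L^2}\,
  \bigl(\|f\|_\infty\, r^{n/2}\bigr)\bigl(\|g\|_\infty\, r^{n/2}\bigr).
\]
```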
So then the principal value operator T associated to k exists in the sense of distributions and satisfies WBP, OK? What I mean is that for all f and g in C_0^infinity of R^n, we can define the pairing of Tf with g as the limit, as epsilon goes to 0, of T_epsilon f paired with g. And I'll tell you what T_epsilon is in a second: T_epsilon f(x) is, by definition, the integral, truncated away from the diagonal at scale epsilon, of k(x,y) f(y) dy, OK? And of course, this is an absolutely convergent integral for f in C_0^infinity, OK? So let's see the proof of this. Because for these truncated singular integrals we have absolute convergence, we can actually write the pairing as an explicit integral, all right? So this will be the double integral, integrating over the region where |x - y| is bigger than epsilon, of k(x,y) f(y) g(x).
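Written out, the truncated operators and the principal-value pairing just defined take the form (this simply records the definitions stated above):

```latex
\[
  T_\varepsilon f(x) \;=\; \int_{|x-y| > \varepsilon} k(x,y)\, f(y)\, dy,
  \qquad
  \langle Tf, g\rangle \;:=\; \lim_{\varepsilon \to 0}\,
  \langle T_\varepsilon f, g\rangle
  \;=\; \lim_{\varepsilon \to 0}
  \iint_{|x-y| > \varepsilon} k(x,y)\, f(y)\, g(x)\, dy\, dx .
\]
```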