A little worried yesterday about the rapid pace. We're going to slow down just a little bit, because we're starting to get into the heart of the matter of what I wanted to talk about. But even before doing that, since we were running out of time at the end yesterday, I think it's worthwhile to wrap up the proof of Theorem 2.10, the square function bounds for the theta_t's. It's worth revisiting just a little, because there's an important idea embedded in this theorem and its proof. So remember, we were proving a square function bound for the theta_t's, where these theta_t's are given by integration against a kernel satisfying the Littlewood-Paley conditions. What I want to emphasize is that the only thing needed to make this work was the quasi-orthogonality. Which, let me remind you, said that the operator norm of theta_t Q_s was, up to some constant, less than or equal to the minimum of s over t and t over s, raised to some positive power beta. And if you'll forgive me, let me write it the way it actually arose in the proof: we got the square of the operator norm. Yes? Well, because true orthogonality would mean that these guys give you zero when you compose them, whenever s is different from t. We don't have that, but we do have decay away from the diagonal. It's not purely diagonalized, but it decays away from the diagonal. All right, and why was this important, if you recall how the proof went? It meant, equivalently, for this same beta — and I'm writing something obvious here — that the operator norm of theta_t Q_s was controlled by the maximum of s over t and t over s to the power minus beta over two, so that its square was controlled by the minimum of s over t and t over s to the power beta.
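To fix the notation, the quasi-orthogonality estimate just described, in the two equivalent forms used in the proof, can be written as:

```latex
% Quasi-orthogonality: decay of the composition away from the diagonal s = t.
% True orthogonality would be \Theta_t Q_s = 0 for s \neq t; here we only
% have power decay, for some fixed exponent \beta > 0.
\|\Theta_t Q_s\|_{L^2 \to L^2}^{2}
  \le C \,\min\!\left(\frac{s}{t},\frac{t}{s}\right)^{\beta},
\qquad\text{equivalently}\qquad
\|\Theta_t Q_s\|_{L^2 \to L^2}
  \le C \,\max\!\left(\frac{s}{t},\frac{t}{s}\right)^{-\beta/2}.
```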
And if you recall how the proof went, this is all that we needed to make the integral converge — the one I had written way down in the far right-hand corner — and that finished things up. Yes, yes I do, thank you. Of course I do, certainly, yep, thanks. Okay. Now, I glossed over one thing here. We need the quasi-orthogonality for a family Q_s that satisfies Propositions 2.4 and 2.6. Proposition 2.4 was the Calderón reproducing formula, which let us resolve the identity in terms of an integral of the Q_s's. And 2.6 was the fact that the Q_s's themselves give rise to a square function bound analogous to that one. Those were the only ingredients needed. And in fact, for the theorem we stated, Theorem 2.10, the quasi-orthogonality came from the following facts. Maybe let me say this at a philosophical level, and state it, for example, in the case s less than or equal to t; in the other case you just reverse the roles of theta_t and Q_s, of course. What the quasi-orthogonality uses is, morally speaking, smoothness of the kernel of theta_t, which we called psi_t(x, y), and cancellation for zeta_s(y - z), which was the kernel of Q_s — and, of course, some adequate decay for both. Now, in our situation the zeta_s actually has compact support, so the decay part was trivial. The cancellation for the zeta_s's was that they integrate to zero. In the statement of the theorem, the psi_t's satisfied a pointwise smoothness condition: it was the Littlewood-Paley smoothness condition, which entailed local Hölder continuity in the y variable. But you can get by with less than that.
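For reference, the two ingredients just mentioned can be recorded schematically; the precise normalizations are those of Propositions 2.4 and 2.6, which are not repeated in this excerpt, so the constants and the exact shape of the resolution of the identity below should be taken as indicative only:

```latex
% (2.4) Calderon reproducing formula: resolution of the identity in terms
% of the Q_s's (schematic; convergence in a suitable sense, normalization
% as in Proposition 2.4):
I = c \int_{0}^{\infty} Q_s^{2}\,\frac{ds}{s}.
% (2.6) The Q_s's themselves satisfy a square function bound:
\int_{0}^{\infty} \|Q_s f\|_{L^2}^{2}\,\frac{ds}{s} \le C\,\|f\|_{L^2}^{2}.
```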
Part of the moral of the story is that this quasi-orthogonality estimate doesn't really require as much as what we stated in the hypotheses of Theorem 2.10. So, since we went over this in a hurry last time — actually, I didn't talk about this at all last time — let me just briefly indicate, say in the case s less than or equal to t, how one gets quasi-orthogonality under the circumstances of Theorem 2.10. It is enough to show, for s less than or equal to t, the following pointwise estimate: theta_t Q_s applied to some function h is less than or equal to, up to some constant, s over t to the power beta, times an iterated maximal function of h, M squared h. Here M squared just means the composition of the Hardy-Littlewood maximal operator with itself: you take the maximal function of the maximal function. Of course, the maximal function is bounded on L2. So if you have this pointwise bound, you take the L2 norm, and you end up with this decay factor times the L2 norm of h, and that's all we need. So where does this come from? Well, let's see. If we write it out using the kernels, theta_t Q_s h(x) is the integral over R^n of psi_t(x, y) times Q_s h(y), where Q_s h(y) is the integral of zeta_s(y - z) h(z) dz. And now here is where we use the cancellation: because the integral of this guy in the y variable is zero, we can subtract from psi_t(x, y) anything that's independent of y. We're just subtracting a constant, and this doesn't see constants. So this is the same as the double integral of psi_t(x, y) minus psi_t(x, z), times zeta_s(y - z) h(z) dz. I won't take it any further; I'll leave it to you to check the rest of the details. In fact, you're going to do an exercise which goes even a little further.
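The pointwise estimate to be shown, and the cancellation step just described, read as follows in display form:

```latex
% Pointwise estimate to be shown, for s \le t; here M^2 = M \circ M is the
% iterated Hardy-Littlewood maximal operator:
|\Theta_t Q_s h(x)| \le C \left(\frac{s}{t}\right)^{\beta} M^{2}h(x).
% Writing out the kernels and using the cancellation
% \int \zeta_s(y-z)\,dy = 0 to subtract the y-independent term
% \psi_t(x,z):
\Theta_t Q_s h(x)
 = \iint \psi_t(x,y)\,\zeta_s(y-z)\,h(z)\,dz\,dy
 = \iint \bigl[\psi_t(x,y)-\psi_t(x,z)\bigr]\,\zeta_s(y-z)\,h(z)\,dz\,dy.
```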
But all you do at this point is bring absolute values inside. For the difference of the psi_t's, you use the Littlewood-Paley smoothness condition. For the zeta_s, you just use that it is less than or equal to, up to a constant, one over s times the indicator of the set where |y - z| is less than s. So this inner integral is controlled by an average, which in turn is controlled by the Hardy-Littlewood maximal function of h at y. The smoothness here — since |y - z| is less than s — gives you an s to the alpha, and then you divide and multiply by t to the alpha and it just works. The thing that you end up with in the remaining integral is also controlled, because of the very first proposition we stated, by the maximal function of this stuff. And it's easy from that point.
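The chain of estimates just sketched can be summarized as follows. One caveat: the 1/s normalization below matches the bound quoted in the lecture, which is the natural one-dimensional form; on R^n the corresponding normalization would be s^{-n}, and the exponent alpha is the Hölder exponent from the Littlewood-Paley smoothness condition:

```latex
% Bringing absolute values inside:
|\Theta_t Q_s h(x)|
 \le \iint |\psi_t(x,y)-\psi_t(x,z)|\,|\zeta_s(y-z)|\,|h(z)|\,dz\,dy.
% With |\zeta_s(y-z)| \le C\,s^{-1}\mathbf{1}_{\{|y-z|<s\}}, the inner
% z-integral is an average of |h| over \{|y-z|<s\}, hence \le C\,Mh(y).
% The Littlewood-Paley smoothness gives
% |\psi_t(x,y)-\psi_t(x,z)| \lesssim (|y-z|/t)^{\alpha} times a normalized
% bump; since |y-z| < s, this yields the factor (s/t)^{\alpha}. The
% remaining y-integral is controlled, by the very first proposition, by the
% maximal function of Mh, so that
|\Theta_t Q_s h(x)| \le C \left(\frac{s}{t}\right)^{\alpha} M(Mh)(x).
```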