OK, let me do it then. Good enough for me. So it's not really going to be any kind of complete proof, but let me write down the theorem again. So X is a Kähler manifold, and L → X is a holomorphic line bundle, and it comes with a metric e^{-φ}. The Kähler manifold has a metric with Kähler form ω. And then you assume that i∂∂̄φ plus the Ricci curvature of ω is greater than or equal to some θ, which is greater than or equal to 0. Then for every ∂̄-closed, L-valued (0,1)-form α whose weighted L² norm over X is finite, there exists u, a section of L, such that ∂̄u = α, and the weighted L² norm of this section is bounded by the weighted norm of α measured against θ. Yeah, so θ is a (1,1)-form, and you think of it as a metric and compute the norm of α with respect to it. If θ is degenerate, the norm is an algebraic expression and you have to make sure it's finite; think of θ as strictly positive for this purpose. So α is a (0,1)-form. If you write α = α_{ī} dz̄^i and θ = θ_{ij̄} dz^i ∧ dz̄^j, and raised indices mean the inverse, then |α|²_θ = θ^{jī} α_{ī} \overline{α_{j̄}}. Right, but how does this inverse exist if θ is degenerate? It doesn't, but the formula for an inverse is a formal formula: you can write it down, and for the result to be finite, α has to vanish in the degenerate directions. So just think of θ as strictly positive. It's a good point; I just didn't want to worry about it too much. OK, so as I said, this is just going to be a quick sketch. The ∂̄ operator maps sections of L to (0,1)-forms with values in L, (0,1)-forms with values in L to (0,2)-forms with values in L, and so on. So you can compute an adjoint for this operator; it maps a (0,1)-form to a section.
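Assembled from the pieces above, the statement on the board presumably reads as follows (my reconstruction in LaTeX, same conventions as in the lecture):

```latex
% Hörmander's L^2 estimate, as sketched in the lecture.
% Hypothesis:
i\partial\bar\partial\varphi + \mathrm{Ric}(\omega) \;\ge\; \theta \;\ge\; 0 .
% Conclusion: for every L-valued (0,1)-form \alpha with
\bar\partial \alpha = 0, \qquad
\int_X |\alpha|^2_\theta \, e^{-\varphi}\, dV_\omega < \infty,
% there is a section u of L with
\bar\partial u = \alpha, \qquad
\int_X |u|^2 \, e^{-\varphi}\, dV_\omega
  \;\le\; \int_X |\alpha|^2_\theta \, e^{-\varphi}\, dV_\omega,
% where, writing \alpha = \alpha_{\bar i}\, d\bar z^i and
% \theta = \theta_{i \bar j}\, dz^i \wedge d\bar z^j,
|\alpha|^2_\theta \;=\; \theta^{j \bar i}\, \alpha_{\bar i}\, \overline{\alpha_{\bar j}} .
```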
And the way you do that is you take a smooth section with compact support, and you define the adjoint — the formal adjoint — by the formula (∂̄*β, s) = (β, ∂̄s). One side uses the inner product for sections, the other the inner product for (0,1)-forms. And then you form the Laplacian ∂̄∂̄* + ∂̄*∂̄ with respect to the data defining the L² norms here, φ and ω. Yeah, yes. So you make the Laplacian associated to the ∂̄ operator. What you're going to try to do is solve the equation □v = α, where v is another (0,1)-form. So you want to somehow argue that this Laplacian is invertible, and there's a little trick to estimate it. You can think of α in two ways. First, as a (0,1)-form with values in the holomorphic line bundle L — that's where our ∂̄ operator comes from. But you can also think of it as a section of T*^{0,1}X ⊗ L. Then you may want to apply a ∂̄ operator to this, but you can't, because T*^{0,1}X is not a holomorphic vector bundle. So you want to turn it into a holomorphic vector bundle, which you can do by using the metric: in the metric ω, any (0,1)-form turns into a (1,0) vector field, so you get sections of T^{1,0}X ⊗ L, and that bundle has a ∂̄ operator on its sections. If you conjugate that operator, you get an operator called ∇̄. And so you can try to compute a Laplacian for this operator. But now ∇̄ maps sections of this vector bundle — which are not seen as differential forms — into (0,1)-forms tensored with the bundle. So they're like (0,1)-forms squared; they're not (0,2)-forms. But anyway, you can also compute an adjoint for ∇̄ with the appropriate inner product for such sections. But remember that now the input is a section, so when you form the Laplacian you don't have the ∂̄∂̄* term. So you have another Laplacian, ∇̄*∇̄.
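In formulas, the formal adjoint and the two Laplacians being compared are (my transcription, with ∇̄ for the conjugated operator as in the lecture):

```latex
(\bar\partial^* \beta,\ s)_{L^2} \;=\; (\beta,\ \bar\partial s)_{L^2}
\quad \text{for all smooth, compactly supported sections } s,
```
```latex
\Box \;=\; \bar\partial\,\bar\partial^* + \bar\partial^*\,\bar\partial
\qquad \text{versus} \qquad
\bar\nabla^*\,\bar\nabla
\quad \text{(no } \bar\nabla\,\bar\nabla^* \text{ term, since the input is a section, not a form).}
```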
And the magic is that formally, these two Laplacians differ by a 0th-order operator. This is an identity I think first written down by Kodaira, but it comes from a technique worked out by Bochner, done more carefully here. The amazing thing is that the 0th-order term is, in this case, exactly i∂∂̄φ plus the Ricci curvature of ω. OK, so now, any Laplacian is always non-negative. So if you apply ∂̄∂̄* + ∂̄*∂̄ to some form β and take the inner product with β, that equals ‖∂̄*β‖² + ‖∂̄β‖². These are norms on different spaces, but they all come from the same data. And what you get from the identity is that this is also equal to ‖∇̄β‖² plus a term coming from the action of the curvature on these forms. You have to appropriately raise indices to turn the curvature into an operator acting on forms, but I'll just write it as L — it's a multiplier. So you can now forget about the ‖∇̄β‖² term, since it's non-negative. And if you know the curvature term is non-negative, then you get an estimate; if you have a lower bound in terms of θ, you get that kind of estimate. At this point, if you want to proceed rigorously, you have to invoke a little precise functional analysis, but morally it's already clear what's going on: the operator L gives a lower bound for the smallest eigenvalue of the Laplacian. So if it's positive, you can invert the Laplacian. That's morally what's going on, even though there are all sorts of regularity issues; we'll just assume those work out in the end. And so that means that given any (0,1)-form, you can solve the equation: Laplacian of v equals that (0,1)-form. But that's still not Hörmander's theorem. Actually, the way you do this is you use this norm to define a Hilbert space.
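The Bochner–Kodaira identity being invoked, in the notation above, is (my reconstruction of the board):

```latex
(\Box \beta,\ \beta)
\;=\; \|\bar\partial^* \beta\|^2 + \|\bar\partial \beta\|^2
\;=\; \|\bar\nabla \beta\|^2
   + \int_X \langle L_\Theta\, \beta,\ \beta \rangle\, e^{-\varphi}\, dV_\omega,
\qquad
\Theta \;=\; i\partial\bar\partial\varphi + \mathrm{Ric}(\omega),
```

so discarding the non-negative term ‖∇̄β‖² gives (□β, β) ≥ (L_Θ β, β), the lower bound on the spectrum of the Laplacian that the lecture refers to.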
This inequality tells you that the quadratic form is positive, and then you just use the Riesz representation theorem. It tells you that the solution v will have a norm — this particular norm — under control: the inequality says that the inclusion of this Hilbert space in L² is bounded, and when you invert, the norm of the solution is bounded by the norm of α in L². Just say that you get a solution v such that ∂̄∂̄*v + ∂̄*∂̄v = α, and ‖∂̄*v‖² + ‖∂̄v‖² is less than or equal to that integral — it should be with θ. So I didn't really do it a lot of justice, but this is really linear algebra of some sort. OK. But this still doesn't give us the theorem we're looking for — except that if α happens to be ∂̄-closed, then you can apply ∂̄ to the whole equation. The first term goes away because ∂̄² = 0, so you find that ∂̄∂̄*∂̄v = 0. And so in particular the inner product with ∂̄v is also 0, but that's just ‖∂̄*∂̄v‖², so that's 0. So that term is gone — or, if you prefer, you can write it as (∂̄*∂̄v, v), which is also 0; that's ‖∂̄v‖², and so ∂̄v = 0. So what you've now got is that you're solving ∂̄∂̄*v = α, and if you let u = ∂̄*v, the estimate is right there. So that's basically the existence part, modulo all questions of regularity. OK — maybe you regret asking me to prove Hörmander's theorem now, but I hope not. OK. So now I want to move on to the statement of Berndtsson's theorem. I have to set things up a little bit. The first thing I need is a family of metrics for my holomorphic line bundle L → X. I'm going to usually write this family as e^{-φ_t}, where t is in some domain Ω in ℂ^m.
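The chain of implications in the existence argument just sketched can be summarized as (my transcription):

```latex
\Box v = \alpha,\quad \bar\partial\alpha = 0
\;\Longrightarrow\; \bar\partial\,\bar\partial^*\,\bar\partial v = 0
\;\Longrightarrow\; \|\bar\partial^*\bar\partial v\|^2
  = (\bar\partial\,\bar\partial^*\,\bar\partial v,\ \bar\partial v) = 0
\;\Longrightarrow\; \|\bar\partial v\|^2
  = (\bar\partial^*\bar\partial v,\ v) = 0,
```

so u := ∂̄*v satisfies ∂̄u = ∂̄∂̄*v = □v = α, and the bound ‖u‖² = ‖∂̄*v‖² ≤ ∫_X |α|²_θ e^{-φ} dV_ω is the estimate already obtained for v.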
So the rigorous way that I need to define this, which will be slightly useful in a moment, is you look at X × Ω. Then you have a projection map p to X, and X carries the line bundle L, so you can pull back this line bundle, and what you're asking for is a metric on this pullback. But every fiber here, of the form X × {t}, is the same as the original X, and the line bundle is naturally isomorphic to L. So when I write e^{-φ_t}, I mean I identify the metric I've got up there with its restriction to a fiber. I need that because I want to define the L² norm I'm about to define in such a way that the base doesn't depend on t: we're integrating over a space that doesn't depend on t. So here is the Hilbert space. The notation — is this too low to see? Risk it. It's OK. So anyway: given one of these metrics, H_{φ_t} is the set of all sections f which are holomorphic and have finite L² norm ∫_X |f|² e^{-φ_t} dV_ω, where ω is some fixed Kähler metric on X. OK. So in this way, we can define a fibration over the domain Ω so that the fiber over t is this H_{φ_t}. I'm sometimes going to write that with a subscript t to indicate that it's a fiber. Now, part of Berndtsson's theorem involves making statements about this family as if it were a vector bundle. But it's not necessarily a vector bundle, and even when it is, because it's typically infinite-dimensional, it's a little hard to say what that might mean. It's not something I want to wade into so much, so I'll make some simplifying assumptions; it still remains a very useful theorem. The first thing you do is put X as a relatively compact domain inside some larger manifold Y. Then I need some technical hypotheses that I don't want to go too deeply into. They are not 100% necessary, but they make everything easier.
So I'm going to ask for Y to be a Stein manifold. Most of you probably know what that is, but if you don't: it's a closed complex submanifold of some complex Euclidean space. And the domain X shouldn't be just any domain; it should be so-called pseudoconvex. If you don't know what that means, it means you have an exhaustion function: a smooth, strictly plurisubharmonic function on X whose sublevel sets — all the points below any given level — are relatively compact in X. That's what pseudoconvexity means. In particular, X is then also a Stein manifold. But the advantage is that now I can have my data defined over Y, and the L I'm using is just the restriction to X of the ambiently defined bundle. I can also take my Kähler metric from the ambient manifold. In other words, all this data extends to a larger space. And then the holomorphic sections are nice and smooth up to the boundary, so if f is in the Hilbert space for any given t, the same f is in the Hilbert space for all other t. Somehow what that means is that if you think of each H_{φ_t} as a subspace of the holomorphic sections of L, then as a vector space it's independent of t. So that's one thing. But also the norms that you get here are equivalent norms in the analysis sense, so these are all quasi-isometric: you have a family of Hilbert spaces that are quasi-isometric. That's the situation we want to be in. OK, so now the next thing I need is to define sections of this vector bundle. So what is a smooth section? It's going to be a smooth section of the pullback line bundle on X × Ω — let's give this section a name, say s — whose restriction s_t to X × {t}, thought of naturally as living on X, is in H_{φ_t}. So you automatically get smooth things, but they're actually holomorphic and L² on each fiber.
Now the reason you do that is because you can now define a ∂̄ operator for this family: you just take ∂̄ of s and restrict it to fibers. You get (0,1)-forms in the horizontal direction, and because the sections are holomorphic on each fiber, there is no vertical ∂̄ component. OK, so we're sort of turning this into a holomorphic vector bundle at the same time. So what's a holomorphic section? Maybe let me introduce notation: a smooth section will be written s ∈ Γ(Ω, H_φ). Yeah, it's the same; the fibers are what they are by definition. A holomorphic section is an s ∈ Γ(Ω, H_φ) that is actually a holomorphic section of this bundle — in other words, ∂̄s = 0 — and the notation will be s ∈ Γ_O(Ω, H_φ). OK. So if you remember, the criterion for Griffiths positivity involved the dual bundle, so now I need to give you a definition of a holomorphic section of the dual bundle. Well, first of all, the dual bundle is the bundle of Hilbert spaces that fiberwise are the dual spaces. I just remind you that the norm of an element ξ of the dual space over t is the sup over f in H_{φ_t} of the pairing of f with ξ, divided by the norm of f with respect to t. And now if you have a section of this dual bundle, you can ask about its regularity — under what conditions, for example, is it holomorphic? Well, take any f in, say, H_{φ_0}; that f is automatically in all the other fibers by our hypothesis. Then you can look at the function sending t ∈ Ω to the complex number ξ_t(f), and this should have the regularity you're asking about. So for example, if you want a holomorphic section, this should be a holomorphic function; if you want smooth, this should be smooth.
If you want measurable, this should be measurable. So is it just that it's holomorphic on each X × {t} — a section of the pullback of L over X × Ω? No, it's more than that: it also varies holomorphically in t. Yeah, so it's both: if it's holomorphic, it's a holomorphic section of the whole pullback bundle. The restriction to each fiber is always a holomorphic section of L — that's the definition of the vector bundle — and in addition you want holomorphicity in Ω. Yeah, and also L² in the fiber. But a lot of the time we're doing this shrinking of domains, with all the data living on the ambient space, so the L² condition is effectively automatic: everything is smooth beyond the boundary, and the closure is a compact set. I want to give some examples of sections of the dual — one not holomorphic, and one holomorphic. (You're just asking your questions too quickly.) So here's an example of a smooth section. Take some section s which is a holomorphic section of the original vector bundle H_φ, not the dualized bundle. And now define ξ_t acting on some f to be, by definition, the inner product of f with s_t — the inner product associated to the Hilbert space H_{φ_t}. That clearly gives you a linear functional on every fiber, but it's almost never holomorphic, even if the section s you started with is holomorphic. You might think it's not holomorphic because f sits in the complex-conjugate slot of the inner product, but that's not it — you could even try to put a complex conjugate on s (well, that's not quite right, since the conjugate is in the wrong Hilbert space; you can put it in L² or something). The problem is really that the weight φ_t doesn't vary holomorphically. So this is an example of a non-holomorphic section. The second example is almost trivial.
So I was going to give an example of something in H; let me just say it in words. If you take an f in H_{φ_0}, then it lives in H_{φ_t} for all t, and the constant section t ↦ f is a holomorphic section because it doesn't actually depend on t. This is because this bundle is actually a trivial vector bundle — not only trivial, it's handed to you trivialized; you don't have to trivialize it. So if you've got one vector, you can just translate it constantly and you get a holomorphic section. That's not a very interesting example. Maybe a more interesting example is this one. Let x be a point in our manifold X, and define ξ̃^x by letting ξ̃^x act on a section f of H_φ as the evaluation of f at the point x. So: point evaluation. OK, so for each x, that actually gives you something that is not a complex number, but almost: the trouble is that f(x) lies in L_x, the fiber of L over the point x. If you knew what 1 was in L_x, then you would know what this complex number is. But it's effectively a complex number. To get a complex number out of it, you need to pick some e in the fiber of the dual bundle; you can pick it in whatever way — I don't really care too much. Then if you take f(x) ⊗ e — if you redefine ξ̃^x, removing the tilde, by tensoring with e — then you really do get a complex number. And it's almost point evaluation: it's point evaluation given a vector in the fiber of the dual bundle. OK, so I'm going to kind of ignore this point and just pretend that I do know how L_x is isomorphic to ℂ. So this is clearly something that takes a vector in H and assigns to it a complex number. But that, in itself, doesn't make it a section of H*_φ, because of the notion of dual: in infinite dimensions, duality is not just linear functionals, it's continuous linear functionals.
So you have to know that this functional is continuous. And it is, because of a fact which is given as an exercise in the notes that I gave out, called Bergman's inequality — this is a general version of it: |f(x)|² e^{-φ_t(x)} ≤ C ‖f‖²_t, where the constant C may depend on x. And the way we've set things up, the constant doesn't depend on t, because all the data extends to a larger domain — but in general it could depend on t as well. So this tells you that you have a bounded linear functional. This is actually a very well-known object. Let's drop the tilde: ξ^x(f) = f(x). Let me say a little bit about this object because we'll meet it again soon. By the Riesz representation theorem, for each such x in X you can find a section, let's call it k^x_t, in H_{φ_t}, such that this value — in this case an element of L_x — is achieved by taking the L² inner product with respect to this section: (f, k^x_t)_t = f(x). So this is not exactly correct — the values are not — well, I guess it is exactly correct now. Yeah, sorry. So k^x_t is a section of L. Question? The boundedness — Bergman's inequality — exactly means that this is an element of the dual: it says that the norm of the output number is at most a constant times the norm of the vector in the Hilbert space, and that's what boundedness, continuity of the linear functional, means. Well, yes, you want to show it's in H*; what does it mean? It's a bounded linear functional — otherwise the representation theorem doesn't hold. And then the output is f(x). Well, let me do this for each t — so it's f_t, if you want, in H_{φ_t}. But I'm evaluating at the particular x I've chosen. Yeah, it still depends on t. I just want to check that it's in the dual space of each fiber, so I drop the dependence on t.
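In the simplest model case — the unweighted Bergman space A²(𝔻) of the unit disk, with trivial bundle and φ = 0 — Bergman's inequality can be checked numerically, with the sharp constant C(x) = 1/(π(1 − |x|²)²). This concrete model and constant are my illustration, not part of the lecture's general setting:

```python
import math

def disk_norm_sq(coeffs):
    """Squared A^2(D) norm of f(z) = sum_k coeffs[k] z^k on the unit disk,
    using orthogonality of monomials: ||z^k||^2 = pi / (k + 1)."""
    return sum(abs(c) ** 2 * math.pi / (k + 1) for k, c in enumerate(coeffs))

def evaluate(coeffs, x):
    """Evaluate the polynomial f at the point x."""
    return sum(c * x ** k for k, c in enumerate(coeffs))

def sharp_constant(x):
    """Sharp constant in Bergman's inequality at x: the Bergman kernel on
    the diagonal, K(x, x) = 1 / (pi * (1 - |x|^2)^2)."""
    return 1.0 / (math.pi * (1 - abs(x) ** 2) ** 2)

# Bergman's inequality |f(x)|^2 <= C(x) * ||f||^2 for a sample polynomial:
coeffs = [1.0, -0.5, 2.0, 0.3]
x = 0.4
assert abs(evaluate(coeffs, x)) ** 2 <= sharp_constant(x) * disk_norm_sq(coeffs)
```

The inequality here is just Cauchy–Schwarz against the reproducing kernel, which is why the sharp constant is the kernel on the diagonal.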
So I should really write it this way, and now I need to prove that it's bounded. No — the inequality just says that for each t it's in the dual space. But because we've shrunk things, it is actually bounded uniformly in t; it doesn't matter here, though. Yeah, so this is the point I was trying to address before: because f(x) is an element of L_x, you can't literally talk about it being a complex number. So you have two choices: you can choose a vector in the dual fiber, or you can put a metric on this line — and e^{-φ} is a metric on that line — and say it's bounded with respect to that structure. OK. So this is point evaluation, and that's the most important example of a holomorphic section. It's like a delta function; in some sense it's all the examples — all of them are some average of this one. Right, so if you check what it means for this to be a holomorphic section: if I plug in some fixed f, the result should depend holomorphically on t. But ξ^x(f) = f(x) is actually independent of t, so it's holomorphic — it's a slightly confusing point. On the other hand, if you check the dependence of k^x_t itself, if you look at the formula for the inner product, you'll see the weight φ_t in it. Yeah. In this case the complex conjugation also enters, but that's the less important part. OK, so then there's this: k^x_t depends holomorphically on a variable that's not x. But how does it depend on x? Well, what we do is define k_t(x, y) — maybe bad notation — by taking this k^x_t, plugging in y, and taking the complex conjugate. And this is a well-known object in complex analysis: it's called the Bergman kernel. So one thing you can check — it's not difficult, but it's worth doing — is that this k_t is actually a holomorphic section of something. I have some non-standard notation here; let me explain it.
So this X† is the complex manifold X with the conjugate complex structure. From the conjugation, it's clear that k^x_t(y) depends anti-holomorphically on y; after the complex conjugation, it depends holomorphically on ȳ. It's also a section of a line bundle: the fiber over (x, y) is L_x tensored with the complex conjugate of L_y — L at the point y with the conjugate structure. OK, so that's one thing you can check. The other thing is that this is a reproducing kernel: from the formula, when you integrate f against k_t(x, ·), you get the value of f at the point x. So it's basically the identity map written down — well, point evaluation. And if you choose an orthonormal basis: if (g_i) is an orthonormal basis for H_{φ_t}, then k_t(x, y) = Σ_i g_i(x) ⊗ conj(g_i(y)). That's really the usual formula you see in a complex analysis class for the Bergman kernel. Yeah, that's right — this is the Bergman kernel for each t, via the Riesz representation theorem and an appropriate conjugation. That's right. Maybe there's one last thing that's worth knowing, and I'm going to put it almost invisibly because I'm not going to use it. You can actually compute the kernel on the diagonal as the solution of an extremal problem: it is the sup, over all g with ‖g‖_t = 1, of |g(x)|² e^{-φ_t(x)}. You can do that via the Riesz representation theorem, or from the orthonormal basis formula by first choosing an orthonormal basis for the sections that vanish at x — that's a closed hyperplane, because the evaluation is bounded — so there's only one other direction, and the extremal g is the one in that direction. That's a kind of explicit way of finding the actual section. OK, now I can finally state Berndtsson's theorem. No, I'm going to put it over here. So we can call this Theorem 1, which is its number in Berndtsson's Annals paper.
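The orthonormal-basis formula is easy to verify numerically in the classical unweighted case of the unit disk (φ = 0), where g_n(z) = √((n+1)/π) z^n is an orthonormal basis and the kernel has the closed form K(x, y) = 1/(π(1 − x ȳ)²). This specific example is my illustration, not from the lecture:

```python
import math

def bergman_kernel_partial(x, y, n_terms=200):
    """Partial sum of sum_n g_n(x) * conj(g_n(y)) for the unweighted Bergman
    space A^2(D), with orthonormal basis g_n(z) = sqrt((n + 1) / pi) * z^n."""
    w = x * y.conjugate()
    return sum((n + 1) / math.pi * w ** n for n in range(n_terms))

def bergman_kernel_closed(x, y):
    """Closed form of the disk Bergman kernel: 1 / (pi * (1 - x * conj(y))^2)."""
    return 1.0 / (math.pi * (1 - x * y.conjugate()) ** 2)

# The orthonormal-basis series reproduces the closed form inside the disk:
x, y = 0.3 + 0.1j, 0.2 - 0.4j
assert abs(bergman_kernel_partial(x, y) - bergman_kernel_closed(x, y)) < 1e-9
```

The partial sums converge geometrically on compact subsets of the disk, mirroring the statement that the series represents the (bounded) point-evaluation functional.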
So let X be a relatively compact pseudoconvex domain in a Stein Kähler manifold, and let L → X be a holomorphic line bundle with a family of smooth metrics e^{-φ_t}. Now the curvature hypothesis — I shouldn't say it loosely; let me just write it down. Here φ depends both on the X variable and on the Ω variable, and this ∂̄ operator is on the space X × Ω. Then you have the projection to X, and you pull back the Ricci curvature from X to X × Ω — in other words, that term doesn't depend on the Ω variable. And if this quantity is greater than or equal to 0, respectively greater than 0, then the vector bundle H_φ → Ω, together with its Hermitian metric given by the Hilbert space norms ‖·‖²_t over each t — which I just erased up there — has non-negative, respectively positive, Nakano curvature: curvature that is non-negative in the sense of Nakano. So that's finally the statement, and there's a one-dimensional version. Sorry? This thing? This one? This is a vector bundle, so a metric means you have a Hermitian inner product on each fiber — Hilbert spaces. Remember, when we had a vector bundle E over Y with a Hermitian metric, we could talk about the Chern curvature of that Hermitian metric; the h here is this family of inner products. A section s? OK — to each fiber over t, there's this norm ‖·‖_t. I think there's no confusion: it's a metric on a vector bundle, which means fiberwise a Hermitian inner product. But that's right. So the claim is that the bundle is positive as soon as φ is, roughly, plurisubharmonic in Ω and i∂∂̄_X φ dominates minus the Ricci curvature. A lot of people don't look at sections of L; they look at sections of L tensored with the canonical bundle, and that eliminates the Ricci curvature term.
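For reference, the statement just made, written compactly (my transcription of the board):

```latex
% Theorem 1 (Berndtsson). X \Subset Y pseudoconvex, Y Stein Kähler,
% L \to X holomorphic with smooth metrics e^{-\varphi_t}, t \in \Omega \subset \mathbb{C}^m,
% and p : X \times \Omega \to X the projection. Then
i\partial\bar\partial \varphi + p^*\,\mathrm{Ric}(\omega) \;\ge\; 0
\ \ (\text{resp. } > 0) \ \text{on } X \times \Omega
\;\Longrightarrow\;
\big(H_\varphi \to \Omega,\ \{\|\cdot\|_t\}\big)
\ \text{is Nakano semipositive (resp. Nakano positive).}
```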
And then the statement looks more concise, but I think it's ultimately a bit more confusing. So here is the special case of a one-dimensional base. So e^{-φ} is the metric for the pullback of L to X × Ω — this is what I should have written there. Now I'm going to change Ω: I just take a one-dimensional base, which I'll think of as the unit disk. Assume this metric satisfies the same condition: i∂∂̄φ plus the pullback of Ric(ω) is greater than or equal to 0, with a corresponding statement for strict positivity. Then for any holomorphic section ξ of the dual bundle, not identically 0 let's say, the function t ↦ log ‖ξ_t‖² is subharmonic. So this is the one-dimensional-base version of the theorem. It generalizes to bases of higher dimension, but the statement would then be Griffiths positivity, not Nakano positivity. I'll have something more to say about that at the end, depending on how far I can get. So I want to get back to this Bergman kernel and say a little bit more about it, and then we'll end there. Next time we'll prove these two theorems in reverse order. So let me say something about the Bergman projection. We started out defining these Hilbert spaces of holomorphic sections, but the holomorphicity was kind of self-imposed; you could avoid it. For example, you could define L²_{φ_t} as the set of L² sections, not necessarily holomorphic. You could start with the smooth sections and just take the Hilbert space closure — that's the abstract, completion-style definition — and then, with measure theory, you prove that the elements are measurable and square-integrable, and so on. And inside you have the subspace we defined before, H_{φ_t}. And Bergman's inequality actually tells you that it's a closed subspace. But why does it tell you that?
Well, remember Bergman's inequality — without writing the weights down, well, I'll write the weights down: |f(x)|² e^{-φ_t(x)} ≤ C ‖f‖²_t when f is holomorphic. OK, so now suppose you have a sequence f_j converging to f in L². Well, you can look at f_j − f, and that will converge to 0 — but maybe I'll just say it more quickly: the inequality says that the local L^∞ norm is controlled by the L² norm. So if you have a sequence converging in L², then it actually converges locally uniformly, and then Montel's theorem tells you that the limit is holomorphic. So that's what it means that this is a closed subspace. Once you have a closed subspace of a Hilbert space, that's equivalent to having a bounded orthogonal projection, and that bounded orthogonal projection is exactly given by this Bergman kernel — you can see it from the orthonormal basis formula, for instance, but probably from many other places. Right, so now we have this orthogonal projection P_t from L²_{φ_t} onto H_{φ_t}, and now I'm going to combine that with Hörmander's theorem over there. (Yeah — did I not use the same notation before? Sometimes I wrote one symbol and sometimes another; these are the Hilbert spaces I'm talking about.) OK, right. So now — it seems like an aside, but this is the main reason I introduced Hörmander's theorem — suppose we seek to solve ∂̄u = α with good estimates. Well, Hörmander's theorem is the thing that gives us good estimates, but which solution has the best estimate? Well, if you've got two sections whose ∂̄ is the same α, then the difference is holomorphic. So if you want the one with the minimal norm, you need the one orthogonal to the holomorphic subspace. Another way of saying that: if you take u and subtract away its projection, then u − P_t u still solves ∂̄w = ∂̄u, and it is the solution of minimal norm.
So in particular, Hörmander's theorem will apply to it, and then you get that the integral over X of |u − P_t u|² e^{-φ_t} is less than or equal to the integral over X of |∂̄u|²_θ e^{-φ_t}, if you have a θ as in Hörmander's theorem. So this is a fact you don't need to think about directly as a statement about solving the ∂̄ equation — it's just an estimate, and it's the estimate we need. And we will use it as follows: morally speaking — I won't use this language — we're going to compute the curvature of H_φ, and its curvature will be given from the curvature of the ambient L² bundle by adding the second fundamental form. We're going to want to estimate that second fundamental form, and this inequality will be the key result we use to estimate it. So that's, morally, how the proof is going to go, but practically I won't need to use those words. So OK, that's good for today, I think.
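For reference, the key estimate to carry into next time, in display form (my transcription, with θ as in Hörmander's theorem):

```latex
\int_X |u - P_t u|^2 \, e^{-\varphi_t}\, dV_\omega
\;\le\;
\int_X |\bar\partial u|^2_\theta \, e^{-\varphi_t}\, dV_\omega ,
```

where P_t : L²_{φ_t} → H_{φ_t} is the Bergman projection, so the left side is the squared distance from u to the holomorphic subspace.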