So, it's a bit strange to give a lecture in honor of myself, so we thought we would give a sort of anti-lecture and discuss failed projects. Which I think is after all not such a bad idea, because it's something we may talk about when we discuss among ourselves, but it's something we don't really highlight so much. Maybe we should, because after all the ideas involved may be interesting even if they don't work right now; I don't say that's the case with this one. I had announced, as many people noted, three failed projects. I decided that was a bit overambitious, so I will stick to one failure here. And it's about a problem I actually don't know so very much about, but it's a central problem in convex geometry. Okay, so let me describe what it is. We have a convex body K in R^n, and I normalize it so that it has volume 1. And then I look at the covariance matrix of that. Basically, as written there, M_ij is the integral over K of the product of the coordinates ξ_i times ξ_j, minus the product of the expected values of the coordinates. I might as well arrange things from the start so that the barycenter of the convex body is 0; then the last terms disappear and we just have the integral of ξ_i times ξ_j. And the question is to estimate this. So the question I'm going to talk about is: does there exist a universal constant, not depending on anything, such that L_K, which is by definition the determinant of this covariance matrix raised to the power 1 over twice the dimension (that's just a normalization), is less than this constant? That's the question. The interest in this comes from apparently many different sides of analysis, and since I don't know very much about it anyway, I don't really want to spend time on that.
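In symbols, the quantities just described are as follows (a reconstruction from the spoken description, using the standard isotropic-constant normalization):

```latex
% K \subset \mathbb{R}^n convex, |K| = 1, barycenter at the origin.
M_{ij} \;=\; \int_K \xi_i\,\xi_j\,d\xi
   \;-\; \int_K \xi_i\,d\xi \int_K \xi_j\,d\xi ,
\qquad
L_K \;=\; \big(\det M\big)^{1/2n} .
% Question: is there a universal constant C with L_K \le C
% for all dimensions n and all such K?
```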
But it is related to the following problem: if you have a convex body whose volume is normalized to be 1, can you find a hyperplane slice of it whose volume is bounded from above, or bounded from below, by a universal constant? There are some reductions, which I don't know so much about, and it turns out to be equivalent to this problem of bounding the determinant of the covariance matrix. Okay, so I decided to solve this problem, without much hope really, because it's been open for a long time, but one can always try. And I approached it by looking at this function φ(x), which is the logarithmic Laplace transform of the uniform measure on K. So I take the integral over K, with respect to Lebesgue measure in this case, of e^{x·ξ}, and then I take the logarithm, and that gives a convex function. That's fairly well known: I take exponential functions, average them with respect to a positive measure, and the logarithm of the result is convex. And then the matrix M that we had before is the Hessian of this function at the origin, so its determinant is the Monge–Ampère density of the function at the origin. So what we want to do is estimate the Monge–Ampère of such a function at the origin. And we can notice that the growth of the function is very easy to control: in x·ξ, x is the variable, and if you take the supremum over ξ in K you get the supporting function of K, so φ will necessarily be smaller than the supporting function.
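As a sanity check on the statement that the Hessian of the logarithmic Laplace transform at the origin is the covariance matrix, here is a small numerical sketch (my own illustration, not part of the talk). To keep everything exactly computable, I use a discrete probability measure on a few points of R^2 instead of the uniform measure on K:

```python
import numpy as np

# A discrete probability measure on R^2: points xi_k with weights w_k
# (hypothetical data chosen only for illustration).
pts = np.array([[0.5, 0.1], [-0.3, 0.4], [0.1, -0.6], [-0.2, 0.2]])
w = np.array([0.1, 0.2, 0.3, 0.4])

def phi(x):
    # Logarithmic Laplace transform: log sum_k w_k exp(<x, xi_k>).
    return np.log(np.sum(w * np.exp(pts @ x)))

def hessian_at_zero(h=1e-4):
    # Central finite differences for the 2x2 Hessian of phi at the origin.
    H = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            e_i, e_j = np.eye(2)[i], np.eye(2)[j]
            H[i, j] = (phi(h*e_i + h*e_j) - phi(h*e_i - h*e_j)
                       - phi(-h*e_i + h*e_j) + phi(-h*e_i - h*e_j)) / (4*h*h)
    return H

# Covariance matrix of the measure: E[xi xi^T] - (E xi)(E xi)^T.
mean = w @ pts
cov = (pts * w[:, None]).T @ pts - np.outer(mean, mean)

# The Hessian of the log-Laplace transform at 0 is the covariance matrix.
assert np.allclose(hessian_at_zero(), cov, atol=1e-6)
```

The same identity is what makes det M equal to the Monge–Ampère density of φ at the origin.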
And there is a general fact: if you have a convex function on R^n that does not grow faster than the supporting function, then the Monge–Ampère mass of the function, that is, the total integral of its Monge–Ampère measure, has to be less than the Monge–Ampère mass of the supporting function. And the Monge–Ampère mass of the supporting function happens to be the volume of K. So we certainly have a bound for the integral of the Monge–Ampère measure; we want to bound it at one particular point instead. That's the observation on the next page. For the supporting function, the Monge–Ampère measure is really concentrated at zero, right? Yes. Is that also part of your motivation? No, not really, I think; maybe it should be. Because what you have here is like a smoothed-out version of it. I have a smoothed-out version of the supporting function, right. And if you want to estimate the Monge–Ampère of the supporting function at a point, that's a very bad object, because the Monge–Ampère is a measure. But morally it's concentrated. Yes, yes. Okay. So anyway, in order to go from an estimate of the integral of the Monge–Ampère measure to a pointwise estimate, it would be very good if this function φ, or say the logarithm of the Monge–Ampère of φ, had some sort of convexity property; then I could estimate its value at a point by some sort of average around it. So I wondered if there is such a convexity property, and this is what I found: take the logarithm of the Monge–Ampère measure as a function, not as a measure, so I look at the density of the Monge–Ampère measure, and add n+1 times φ. I make it more convex by adding this, since φ was already convex, and then the resulting quantity is convex. That's the theorem.
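Written out, the two facts just stated read roughly as follows (my reconstruction; the mass bound holds because the gradient of φ maps R^n into K, so the Monge–Ampère mass is at most the volume of the gradient image):

```latex
% Mass bound for the Monge--Ampere measure of \varphi:
\int_{\mathbb{R}^n} \det D^2\varphi \, dx
  \;=\; \big|\nabla\varphi(\mathbb{R}^n)\big|
  \;\le\; |K| \;=\; 1 .

% Theorem (the convexity property):
u(x) \;:=\; \log\det D^2\varphi(x) \;+\; (n+1)\,\varphi(x)
\quad \text{is convex on } \mathbb{R}^n .
```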
So I will say a few words later about the proof, but let's first see what that gives for the problem. Well, you can couple it with the following proposition. Define, more generally, the function u_b by replacing the n+1 in u = log MA(φ) + (n+1)φ by a constant b. The proposition says that if this function u_b is convex for a certain constant b, then I get an estimate of L_K, the quantity that I want to estimate, by a constant times the square root of b over a. Here b is just the b there, so b = n+1 in the previous theorem, and a is a number which happens to be the constant in the Mahler conjecture. Let me write that down, because I don't have it on the slides. The Mahler volume is the volume of K times the volume of the polar body K°, and this is, by Bourgain–Milman, greater than c^n over n! for some universal constant c. Let us give a name, a, to the best constant such that the Mahler volume is at least a^n over n!. What's known about a is that it is greater than π; the conjecture is that it is 4. That a is greater than π is a theorem of Kuperberg, and that it is at least some positive constant is the Bourgain–Milman theorem, from the 1980s. So how do you prove the proposition? Well, we use that if you have any probability measure with barycenter 0, and a convex function, then the value of the function at the barycenter is smaller than its average. But I'm sorry, didn't you say that the Mahler conjecture as you stated it is for symmetric bodies? Oh, okay, right. You're quite right. In the non-symmetric case the constants are slightly different. But I will actually focus on the symmetric case.
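To fix notation for the constant a just introduced (symmetric case; the a^n/n! normalization is my reading of the spoken description):

```latex
% Mahler volume and the constant a:
m(K) \;=\; |K|\cdot|K^\circ|,
\qquad
K^\circ \;=\; \{\,x : \langle x,\xi\rangle \le 1 \ \ \forall\, \xi\in K \,\}.

% a = the best constant such that, for all symmetric convex K \subset \mathbb{R}^n,
m(K) \;\ge\; \frac{a^n}{n!} .

% Known: a \ge \pi (Kuperberg); a \ge c > 0 (Bourgain--Milman).
% Conjectured (Mahler): a = 4, attained by the cube.
```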
Apparently, for the slicing problem, it doesn't make much difference. So, for intellectual economy, I will focus on the symmetric case; but you're perfectly right. So then, to prove the proposition, we have this observation that I can estimate a convex function at the barycenter by its average with respect to a probability measure, and I need to choose the probability measure. You can play with a lot of different choices here, and here is one of them, which works for the case b = n+1. So that is just the uniform distribution? Yes, it is. But still I can hope to get a better b; the proposition only asks for some b. Exactly, but I only know it for b = n+1. That's exactly the thing: one would hope that one can find a better b, and then one would get a better theorem. If, for instance, you could show that b = 1 works, then this slicing problem would be solved. That's not the case here, because I only have b = n+1. But still. So if we choose, as our probability measure, the uniform distribution on the polar body, then we have the first inequality there by Jensen's inequality: you can pull the logarithm outside of everything, and then in x·ξ, if x lies in the polar body and ξ lies in K, the exponent is at most 1, so the whole thing is bounded by a constant. And on the other hand, if you integrate the logarithm of the Monge–Ampère density with respect to this measure, you can estimate it, by Jensen again, using the bound on the total integral of the Monge–Ampère measure, and then you get the logarithm of 1 over the product of the volume of K with the volume of the polar body. Except that you don't really get that, because you actually get the quotient: the volume of K divided by the volume of the polar body. But since I have normalized things so that the volume of K is equal to 1, it makes no difference.
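Here is my hedged reconstruction of the chain of inequalities for b = n+1, with the uniform probability measure on K° as the averaging measure:

```latex
% Jensen for the convex u_{n+1}, noting \varphi(0)=\log|K|=0:
2n\log L_K \;=\; \log\det M \;=\; u_{n+1}(0)
  \;\le\; \int_{K^\circ} u_{n+1}\, \frac{dx}{|K^\circ|} .

% First term: Jensen again, plus \int_{\mathbb{R}^n}\det D^2\varphi\,dx \le |K| = 1:
\int_{K^\circ} \log\det D^2\varphi\,\frac{dx}{|K^\circ|}
  \;\le\; \log\frac{1}{|K^\circ|}
  \;=\; \log\frac{1}{|K|\,|K^\circ|}
  \;\le\; \log\frac{n!}{a^n} .

% Second term: \varphi(x) \le \log\!\int_K e\,d\xi = 1 for x \in K^\circ, so
(n+1)\int_{K^\circ} \varphi\,\frac{dx}{|K^\circ|} \;\le\; n+1 .

% Together, using (n!)^{1/2n} \le C\sqrt{n}:
L_K \;\le\; \Big(\frac{n!}{a^n}\Big)^{1/2n} e^{(n+1)/2n}
  \;\le\; C\,\sqrt{\frac{n}{a}} .
```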
So I had this, and now the Bourgain–Milman theorem enters. When you did the very first integral on this page, you were integrating over the polar of K? Oh, φ is defined on the whole of R^n, so I can integrate it over the polar body. So I get the integral of φ over K°, and the integral of the logarithm of the Monge–Ampère density, and the latter is smaller than the logarithm of one over the Mahler volume. And then I just use the observation from before: I put those two things together and get the chain of inequalities I have there. So I don't get exactly what I stated, because I have an extra factor of two or so here, but it doesn't matter much; it just changes the constants. If a is really 4, the square root of 4 is 2, so it could cancel against that, but never mind. But the dimensional constants that are known for this estimate, like I guess on the next slide... I assume now that this function, log MA(φ) plus b times φ, is convex; say, if it is affine, then you get equality. Yes. But then your hope: you don't have a b independent of n; do you still have hope for that? Let me come back to that in a moment. I don't know if I have any hope, because I really have no intuition about it. The failure is that so far this just gives the square root of n here, while I wanted something independent of n; let's see what would still work. So you get this inequality, L_K bounded by a constant times the square root of n, and this is known in this business; people who have worked on this problem call it the easy bound. So it's nothing to be famous for. Though it doesn't actually seem to be so easy:
I mean, it uses some facts, but standard facts of convex geometry; it follows from John's theorem. From John's theorem? I think I have seen references with a proof that use it. But by now the best constant is a power of log n; I think that's a fairly recent theorem of Klartag and Lehec. The story is that Bourgain had n to the power one fourth multiplied by a logarithm; then Klartag removed the logarithm, getting n to the power one fourth; and then I think the great breakthrough was by Chen, who got down to something like: any small power of n would do. And then Klartag and Lehec got log n to the power four, or something like that. That's a lot of work. Is this Chen a probabilist? I don't know; there is more than one Chen in the world. Anyway, the theorem is sharp, in a sense that I will explain now. But first I will sketch the proof of the theorem, because it has something to do with complex analysis. So I consider the Paley–Wiener space of functions associated to the compact set K: these are all the entire functions on C^n that are Laplace transforms of functions in L² of K. I define the norm of such an entire function to be the L² norm of the corresponding function f on K, which is just any L² function on K. That's my definition of the norm, and this is the space that occurs in the classical Paley–Wiener theorem. Now, if you look at the complex version of the exponential, e to the φ, extended to complex arguments, evaluated at twice the real part of z, that happens to be the Bergman kernel for this space. That's fairly easy to see, because in order to get the Bergman kernel you want to estimate how big a function h in the space can be at a particular point, and you just use the Cauchy–Schwarz inequality and the definition of the norm. And you will see quite quickly that this is actually the Bergman kernel for the Paley–Wiener space.
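Concretely, if F(z) = ∫_K f(ξ) e^{⟨z,ξ⟩} dξ with norm ‖F‖ := ‖f‖_{L²(K)}, the Cauchy–Schwarz computation sketched above identifies the reproducing kernel (my reconstruction of the formula on the slide):

```latex
% Reproducing property: for w \in \mathbb{C}^n,
% F(w) = \int_K f(\xi)\,e^{\langle w,\xi\rangle}\,d\xi
%      = \big\langle f,\ e^{\langle \bar w,\,\cdot\,\rangle}\big\rangle_{L^2(K)},
% so the Bergman kernel of the Paley--Wiener space is
B(z,w) \;=\; \int_K e^{\langle z+\bar w,\;\xi\rangle}\,d\xi
       \;=\; e^{\varphi(z+\bar w)},
\qquad
B(z,z) \;=\; e^{\varphi(2\,\mathrm{Re}\,z)} .
```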
And now, the Bergman metric: you take dd^c of φ, viewed on C^n, and to get the volume form of the Bergman metric you take the Monge–Ampère of φ. And I was interested in the logarithm of the Monge–Ampère density, so now I look at minus dd^c of that (I would like dd^c to be positive): minus dd^c of the logarithm of the volume form is the Ricci curvature of the Bergman metric. And now there is a wonderful theorem of Kobayashi which says that for virtually any Bergman metric on an n-dimensional space, the Ricci curvature is bounded by n+1 times the metric. This is basically because you can use the Bergman space to embed your manifold, in this case C^n, into projective space, and you know the curvature of projective space; your manifold is then like a submanifold of projective space, so it has smaller curvature, and from that you get that the Ricci curvature is bounded by n+1. So in concrete terms, that means that minus dd^c of log MA(φ) is smaller than n+1 times dd^c of φ. That follows from Kobayashi's theorem.
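In formulas (reconstruction): since the Bergman-kernel potential here depends only on Re z, complex positivity translates into real convexity:

```latex
% Bergman metric and its Ricci curvature:
\omega \;=\; dd^c\varphi,
\qquad
\operatorname{Ric}(\omega) \;=\; -\,dd^c \log\mathrm{MA}(\varphi) .

% Kobayashi: \operatorname{Ric}(\omega) \le (n+1)\,\omega, i.e.
-\,dd^c \log\mathrm{MA}(\varphi) \;\le\; (n+1)\, dd^c\varphi
\;\Longleftrightarrow\;
dd^c\Big(\log\mathrm{MA}(\varphi) + (n+1)\,\varphi\Big) \;\ge\; 0 ,
% which, for a function of Re z only, is exactly the convexity of u.
```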
What if, in the estimate, you take your log-Laplace transform and rescale it so that the limit is the support function? You multiply the body by some k and then divide the function by k, so that in the limit the rescaled log-Laplace transform converges to the support function. I'm just curious what your estimates give you in that limit. Yes, I would have to think about it before I say something, but it's a natural question. Now, about the argument: we obtained that minus dd^c of log MA(φ) is smaller than (n+1) dd^c φ, and if you move it over to the other side you just get that log MA(φ) plus (n+1)φ is plurisubharmonic, hence convex. So whatever you can say about that argument, it is a genuinely complex-analytic proof of a convexity statement. Can you get something better? The problem is that the argument works not only when I define φ by integrating with respect to Lebesgue measure. To recall what φ was (I hate people who go back in their slides, but now I'm going to do it myself): the definition of φ is the logarithmic Laplace transform, with dλ there. If I replace dλ by some other probability measure on K, where I prefer to assume that the support is exactly equal to K, then I could run exactly the same argument and get the same theorem, that log MA(φ) plus (n+1)φ is convex, by precisely the same argument. And in that sense it turns out that the theorem is sharp, because I can find a probability measure on K for which it is attained. The example that shows this: you take K to be a simplex, and you take the measure to be equal point masses at the vertices. Then it turns out that φ is basically the function that defines the Fubini–Study metric on projective space, and you will have exactly what I talked about before: equality, with exactly the constant
n+1. So the question is: can you do better by using some other measure? Since Kobayashi's theorem is so general, it's hard to see how you could feed in more information about a particular measure. So you can ask for something more explicit, and maybe from that you can see what happens if you take another measure. I don't have so much time, but I don't have so many slides either, so: there is actually another proof of the theorem, which gives an explicit formula, and I like it because it looks a little bit like one of Robert's determinant exercises. I will have to do it very quickly, but let's try anyway. So you define φ in the same way, but with an arbitrary probability measure μ. Writing out the Hessian at x, I have a formula for it there; it looks pretty much like before, but now I have to look at it not only at the origin, so I really need the tilted measures: μ_x is μ weighted by the exponential of x·ξ, normalized. The Hessian of φ looks like (μ_x)_{ij}, the second moments, that's the first term, minus (μ_x)_i times (μ_x)_j, where these are the first moments. So I want to compute the determinant of that matrix, and I do that by some exercises. First I enlarge the matrix to an (n+1) by (n+1) matrix: I have my 1 there in the corner, and the first moments along the added row and column. And then, by some elementary row reduction, the determinant of the enlarged matrix equals the determinant of M as I had before. I leave that as an exercise; I give you the hint that you use row reduction, and then I think you should be able to do it. Now I have n functions, the coordinate functions ξ_i, and I add one more function ξ_0, which in this case is just the constant 1. So now I have a vector-valued function Ξ = (ξ_0, ξ_1, ..., ξ_n) with values in (n+1)-space, and the enlarged matrix
M̃ is just the matrix of scalar products of the ξ_i with the ξ_j in L² of μ_x, so its determinant is a Gram determinant. And a Gram determinant can be written like this: I take n+1 independent copies of the variable; each copy gives me a vector Ξ in (n+1)-space; I put these n+1 vectors together into a matrix and take the determinant squared, and average. You can look at it more closely, but I get some sort of determinant, and then I actually get an explicit formula for the function u: u(x) equals the logarithm of an integral over the (n+1)-fold product of K of this determinant squared, times the exponential of x times the sum of the copies, against the (n+1)-fold product measure. (Doesn't it look a little bit like... I just said it in German: an (n+1)-fold product.) Okay, so just two remarks then. Imagine that the determinant factor were not there. What would I have? Well, first, the formula shows that u is convex, because I just integrate exponentials of linear functions against a positive measure, and the logarithm of that is always convex. This proves the theorem. But we also want to understand what the formula says. About the scalar product in the exponent: ξ^j is the j-th copy of the variable, so it's a little complicated because there are so many variables. But you can think of it this way: you have the probability measure μ, and it defines n+1 identically distributed, independent, vector-valued variables ξ^0, ..., ξ^n; you take the sum of those, and you look at the generating function of the sum. So it looks like the kind of thing you deal with when you look at the central limit theorem, except that now you multiply by the determinant inside. One would like to say something intelligent about this using probability, if you know something more about μ. Let me just finish by describing one case where you can actually make the n+1 smaller; in fact you can get it down as small as you want.
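The two determinant exercises can be checked numerically. With Ξ = (1, ξ), the claims are det E_μ[Ξ Ξ^T] = det M (row reduction) and the Gram identity det E[Ξ Ξ^T] = (1/(n+1)!) E[det(Ξ^0, ..., Ξ^n)²] over independent copies, which is what makes u(x) = log ∫ det(...)² e^{x·(ξ^0+...+ξ^n)} dμ^{n+1} minus log (n+1)! manifestly convex. A small check of both identities on a discrete measure (my own illustration, data hypothetical):

```python
import numpy as np
from itertools import product
from math import factorial

# Discrete probability measure on R^2 (n = 2): points xi_k with weights w_k.
pts = np.array([[0.4, -0.2], [-0.5, 0.3], [0.2, 0.6], [-0.1, -0.7]])
w = np.array([0.1, 0.2, 0.3, 0.4])
n = 2

# Covariance matrix M of the measure.
mean = w @ pts
cov = (pts * w[:, None]).T @ pts - np.outer(mean, mean)

# Augmented vectors Xi = (1, xi) and the enlarged (n+1)x(n+1)
# moment matrix Mtilde = E[Xi Xi^T].
Xi = np.hstack([np.ones((len(pts), 1)), pts])
Mtilde = (Xi * w[:, None]).T @ Xi

# Exercise 1 (row reduction): det Mtilde = det M.
assert np.isclose(np.linalg.det(Mtilde), np.linalg.det(cov))

# Exercise 2 (Gram identity): det E[Xi Xi^T]
#   = 1/(n+1)! * E[ det(Xi^0, ..., Xi^n)^2 ]  over n+1 independent copies.
s = 0.0
for idx in product(range(len(pts)), repeat=n + 1):
    V = Xi[list(idx)]                      # rows Xi^0, ..., Xi^n
    s += np.prod(w[list(idx)]) * np.linalg.det(V) ** 2
assert np.isclose(np.linalg.det(Mtilde), s / factorial(n + 1))
```

At a general point x one replaces the weights by the tilted ones, proportional to w_k e^{x·ξ_k}; the identities are unchanged.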
So forget about what I wrote; just look at point 2 here. The function φ is the logarithmic Laplace transform of a measure μ on K, so it is a convex function. Not all convex functions are logarithmic Laplace transforms, but all logarithmic Laplace transforms are convex functions. I call them LLTs. They behave a little bit like polynomials, I think, in a way: you can sum two LLTs and you get a new LLT, because that corresponds to convolution of the measures. So I can add them, and therefore I can also multiply them by integers. But I cannot take one half of one; that would be like taking the square root of a polynomial, which you can do, but then you don't get a polynomial, you get something else. But now pretend that my measure were such that it is divisible, so that φ is actually equal to (n+1)ψ, where ψ is an LLT; say that my measure is the (n+1)-fold convolution of some other measure. So I can think of the measure as being infinitely divisible, or at least divisible by n+1. Well, then you are very happy, because then you look at log MA(φ) plus φ, without any coefficient. I plug in φ = (n+1)ψ; out of the Monge–Ampère I get a power of n+1, but that is just a constant, so it doesn't matter so much, and then I have log MA(ψ) plus (n+1)ψ, and this is convex by the theorem, so the whole thing is convex. So it means that the analogue of this slicing estimate works perfectly well; you get the estimate that you want, in case the measure you have is divisible by n+1.
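The divisibility computation, written out (my reconstruction):

```latex
% If \mu = \nu * \cdots * \nu  ((n+1)-fold convolution), then
% \varphi = (n+1)\psi with \psi the LLT of \nu, and
\log \mathrm{MA}\big((n+1)\psi\big) \;+\; (n+1)\psi
  \;=\; n\log(n+1) \;+\; \Big(\log \mathrm{MA}(\psi) + (n+1)\psi\Big),
% which is convex by the theorem applied to \psi.
% That is, b = 1 works for such measures \mu.
```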