this p to the negative three-halves will get canceled, and you get a bound which is an absolute constant. And then each of these integrals I_j is at most an absolute constant C'. You take the product of powers, and the exponents a_j² sum to one. So the proof is finished as soon as you have the bound on the characteristic function.

How do we obtain such a bound? The upper line is very easy: it is just Parseval's identity. First of all, the measure of the set of t such that |φ_{X_j}(t)| ≥ s is, by Markov (or Chebyshev), at most ‖φ_{X_j}‖²_{L²} / s². So we have to estimate the L² norm of the characteristic function, and for this we have Parseval: up to the factor 2π appearing in Parseval's identity, this is the squared L² norm of the density, and the squared L² norm is bounded by the product of the L∞ norm and the L¹ norm. The L¹ norm of any density is one, the L∞ norm is the maximal density, and we assumed the maximal density is one. So we get 2π/s², which is what was claimed. This part is a complete triviality.

The proof of the main part is also very simple, but it required an idea, and this idea belongs to Halász, who turned it into a powerful method for proving small ball probability estimates. In the case of a continuous density it is absolutely elementary, so let me show you this idea. We will use a calculus inequality: for any real x and any natural l,

|sin(l x)| ≤ l |sin(x)|.

Now let's look at the characteristic function φ_{X_j}(t) = E exp(i X_j t), and take its squared absolute value. One way to do that is to take an independent copy of X_j, say X_j′, and multiply by exp(−i X_j′ t).
So X_j′ is an independent copy of X_j, and the product is the expectation of exp(i (X_j − X_j′) t). Let me call X_j − X_j′ =: X̃. Then X̃ is a symmetric random variable, which is great, because I can pass from complex to real calculations:

|φ_{X_j}(t)|² = E cos(X̃ t) = 1 − 2 E sin²(X̃ t / 2),

and let me call the right-hand side 1 − ψ(t). Then, by a change of variable, proving that the measure of {t : |φ_{X_j}(t)| ≥ s} is at most C √(1 − s²) is the same as proving that the measure of {t : ψ(t) ≤ y²} is at most C y — just take y² = 1 − s².

So this is what we have to prove, and we actually already know it for reasonably large y: if y is at least some fixed y₀, say y₀ = 3/4, then the right-hand side C y is a constant, and instead of the estimate we do not yet know, we can use the trivial upper-line bound. So the claim holds for a fixed y₀, and now I have to make y smaller than y₀.

Let's see how to do it. Let l be a natural number, and estimate the measure of {t : ψ(t) ≤ (y₀ / l)²}. The key step: move l² to the left-hand side to get l² ψ(t); since ψ(t) is an expectation of sin², the calculus inequality gives l² sin²(X̃ t / 2) ≥ sin²(X̃ l t / 2), so l² ψ(t) ≥ ψ(l t). Hence this measure is bounded by the measure of {t : ψ(l t) ≤ y₀²}.
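The symmetrization identity |φ(t)|² = E cos(X̃ t) = 1 − 2 E sin²(X̃ t / 2) can be checked numerically. A minimal sketch, assuming X_j uniform on [0, 1] (any density bounded by one would do; the frequency t and grid size are illustrative choices), with midpoint quadrature in place of expectations:

```python
import numpy as np

# For X uniform on [0, 1] and X' an independent copy, Xtilde = X - X' is
# symmetric, and  |phi(t)|^2 = E cos(Xtilde t) = 1 - 2 E sin^2(Xtilde t / 2).
t = 3.7                               # an arbitrary frequency
N = 1200
x = (np.arange(N) + 0.5) / N          # midpoint quadrature grid on [0, 1]

phi = np.mean(np.exp(1j * x * t))     # ~ E exp(i X t)

X, Xp = np.meshgrid(x, x)             # grid for the pair (X, X')
cos_term = np.mean(np.cos((X - Xp) * t))            # ~ E cos(Xtilde t)
sin_term = np.mean(np.sin((X - Xp) * t / 2) ** 2)   # ~ E sin^2(Xtilde t / 2)

# All three numbers agree up to quadrature error; the exact value for the
# uniform distribution is 2(1 - cos t) / t^2.
print(abs(phi) ** 2, cos_term, 1 - 2 * sin_term)
```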
But Lebesgue measure scales, so this is (1/l) times the measure of {t : ψ(t) ≤ y₀²}, which is at most C y₀ / l. So I have proved the claim at all points of the form y₀ / l, and the general case follows by easy interpolation. We are completely done with the one-dimensional case.

What about the multi-dimensional case? Again it splits into two parts. Consider the multi-dimensional case, rank P = d. The projection P can be written as

P = Σ_{j=1}^{n} a_j² u_j u_jᵀ,  where a_j = ‖P e_j‖ and u_j = P e_j / ‖P e_j‖.

I assume that the norms of the projections of all coordinate vectors are non-zero; otherwise we simply drop that coordinate vector from the decomposition. Then there are two cases: either there exists a j such that a_j ≥ 1/2, or a_j ≤ 1/2 for every j. Let me start with the formally hard case, case two.

In case two we do precisely the same as in the one-dimensional case. We use the Fourier transform, i.e. characteristic functions, and write the characteristic function of P X at t ∈ R^d as the product over j of E exp(i a_j ⟨t, u_j⟩ X_j). So everything goes as in the one-dimensional case, but we are missing the next step: Hölder's inequality will not work here, because the a_j² now sum not to one but to d. Fortunately, there is a kind of Hölder inequality which allows us to carry out precisely the same proof; to find it, you have to dig into the same source in geometry, and this is the Brascamp–Lieb inequality.
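The decomposition P = Σ_j a_j² u_j u_jᵀ, with a_j = ‖P e_j‖, is easy to verify numerically. A sketch with a random rank-3 orthogonal projection of R^6 (the dimensions are illustrative choices); it also confirms that the a_j² sum to d = trace(P):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q = Q[:, :d]                    # orthonormal basis of a random d-dim subspace
P = Q @ Q.T                     # orthogonal projection of rank d

a = np.linalg.norm(P, axis=0)   # a_j = |P e_j|  (P e_j is the j-th column of P)
U = P / a                       # u_j = P e_j / |P e_j|  (a_j > 0 generically)
P_rebuilt = (U * a ** 2) @ U.T  # sum_j a_j^2 u_j u_j^T

print(np.allclose(P_rebuilt, P))   # the decomposition reproduces P
print(np.isclose(a @ a, d))        # sum_j a_j^2 = trace(P) = d
```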
So suppose we have this decomposition. I will give the formulation of Keith Ball, who used Brascamp–Lieb in many problems in geometry: for any nonnegative functions f_j,

∫_{R^d} Π_{j=1}^{n} f_j(⟨t, u_j⟩)^{a_j²} dt ≤ Π_{j=1}^{n} ( ∫_R f_j(x) dx )^{a_j²}.

This is an amazing inequality. Just a few trivial examples. First, if P is the identity, then the u_j form an orthonormal basis, all a_j are one, and what you see here is Fubini's theorem. Second, if all the f_j are Gaussian densities, then plugging the inner products into the product of Gaussian densities, with the a_j² inside, you get an identity, and again the Brascamp–Lieb inequality turns into an equality.

The original proof of Brascamp–Lieb was rather long, but it is quite elementary in that it uses only one deep theorem, namely the Brunn–Minkowski inequality of convex geometry. Later Franck Barthe gave a very short and nice proof of Brascamp–Lieb based on measure transportation, and even later, just a few years ago, Barthe and Cordero-Erausquin, with coauthors, found a proof using Brownian motion.

So, equipped with this Brascamp–Lieb, Hölder-type inequality, everything carries over literally as in the one-dimensional case: you get the same product of one-dimensional integrals, and the estimates for the one-dimensional integrals are ready. This deals with the non-trivial case.

What remains is the formally trivial case, which turns out to be quite non-trivial in the multi-dimensional setting. The reason is that if you condition on all the variables except X_j, you get a bound which is only linear in ε. So let me reformulate the problem to make clear where the difficulty is.
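The Gaussian equality case of Ball's formulation can be checked numerically. A sketch in d = 2, using the tight frame of three unit vectors at angles 0°, 120°, 240° with a_j² = 2/3 — an illustrative choice satisfying Σ_j a_j² u_j u_jᵀ = I — and the Gaussian density f(x) = exp(−π x²), which integrates to one:

```python
import numpy as np

angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
U = np.stack([np.cos(angles), np.sin(angles)])   # unit vectors u_j as columns
a2 = np.full(3, 2.0 / 3.0)                       # a_j^2, summing to d = 2
assert np.allclose((U * a2) @ U.T, np.eye(2))    # sum_j a_j^2 u_j u_j^T = I_2

f = lambda x: np.exp(-np.pi * x ** 2)            # Gaussian density, integral 1

# Midpoint-rule quadrature of the left-hand side over [-L, L]^2.
N, L = 800, 5.0
g = -L + (np.arange(N) + 0.5) * (2 * L / N)
T1, T2 = np.meshgrid(g, g)
inner = T1[..., None] * U[0] + T2[..., None] * U[1]   # <t, u_j> for each j
lhs = np.sum(np.prod(f(inner) ** a2, axis=-1)) * (2 * L / N) ** 2

# Right-hand side: prod_j (int f_j)^{a_j^2} = 1, so equality means lhs ~ 1.
print(lhs)
```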
Actually, in the multi-dimensional case, instead of proving the bound for the density, we can reformulate it in terms of the Lévy concentration function. If Y is a random vector in R^d and ε > 0 is a parameter, define

L(Y, ε) = sup_{y ∈ R^d} P(‖Y − y‖ ≤ ε),

the supremum over points y of the probability that Y is within ε of y. In this language, the bound ‖f_Y‖_∞ ≤ M^d is equivalent to saying that for every ε > 0, L(Y, ε √d) ≤ (C M ε)^d, where the √d is important. To get one direction you just integrate the density over the ball; in the other direction you use the Lebesgue differentiation theorem.

So what happens if we fix all coordinates but one and consider Y = P X? We get a bound on the Lévy concentration function which is only linear in ε, while we need a bound of order ε^d. So we have to do something more involved, and in case one the proof proceeds by induction. I will not be able to carry out this induction here, but let me at least show how the induction step should look and say a few words about it. Assume that for any projection Q of rank d − 1 and any z ∈ Q R^n,

P(‖Q X − z‖ ≤ ε √(d − 1)) ≤ (ε M)^{d−1}

(not C ε to the first power: we scale the radius of the ball by √(d − 1), since the ball is (d − 1)-dimensional). Then we have to prove that for any y ∈ P R^n,

P(‖P X − y‖₂ ≤ ε √d) ≤ (ε M)^d.

This shows why the induction is delicate: we have to keep M the same. The step itself is an elementary calculation, but it has to be done accurately: you condition on some of the variables and then estimate the conditional probability carefully. It is all elementary, but you have to keep track of everything, including the fact that on one side you have √(d − 1) and on the other √d; this also plays a role. I presented the detailed calculation in the lecture notes. Right now I have about three seconds left, so I will not do it, and I will stop here.
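As a small addendum, the Lévy concentration function from the reformulation above is easy to estimate empirically. A sketch for a one-dimensional uniform Y, where L(Y, ε) = min(2ε, 1) exactly, matching the density bound times the length of the interval; the sample size and the grid of candidate centers are illustrative choices:

```python
import numpy as np

def levy_concentration(samples, eps, centers):
    """Empirical L(Y, eps): sup over the given centers y of P(|Y - y| <= eps)."""
    s = np.sort(samples)
    hi = np.searchsorted(s, centers + eps, side="right")
    lo = np.searchsorted(s, centers - eps, side="left")
    return ((hi - lo) / len(s)).max()       # fraction of samples in [y-eps, y+eps]

rng = np.random.default_rng(1)
Y = rng.uniform(0.0, 1.0, size=200_000)     # density bounded by 1
centers = np.linspace(-0.2, 1.2, 281)       # candidate centers y

for eps in (0.05, 0.1, 0.25):
    print(eps, levy_concentration(Y, eps, centers))  # close to min(2*eps, 1)
```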