Okay. Welcome everybody to this mathematics seminar this morning. It's a pleasure for me to introduce Tony Carbery from the University of Edinburgh. This is his first visit to SCP. A pleasure to have you here, Tony. You'll speak to us today about geometric and multilinear problems in analysis. Thank you very much, Emanuel. I've always wanted to come here; I've been familiar with the name of the city for a very long time, and this is the first time I've actually come, and I'm really loving it so far. So thank you, Emanuel, for letting me be here. It's nice to meet all of you as well. So I want to give quite a general talk about multilinear techniques in analysis, especially as they relate to geometric problems, but there's not going to be any big theorem. I really want to talk about some of the ideas and some of the connections that arise, so don't look out for a big theorem, because there isn't one. Okay. Why is it not moving the page? I don't know how to make this work. Let's try again. Okay. All right. So, as I said, I want to talk about some problems which have a geometric flavour and prove amenable to multilinear approaches. My own interest in these problems arose quite specifically in harmonic analysis, the area that I work in. But one of the nice features of this is that understanding these problems involves other areas of analysis: some aspects of multilinear analysis, certain aspects of functional analysis, both finite-dimensional and infinite-dimensional, and also discrete analysis. And of course the influence of these techniques has been, and is still being, felt in geometric measure theory, number theory, and so on. So the paradigm is that multilinear inequalities often underlie interesting geometrical phenomena. We may have some geometrical phenomenon and not recognise that there's a multilinear structure there, or that multilinear techniques can be useful, but often they are.
We're used to trying to deal with problems by using linear analysis, but sometimes multilinear analysis can bring new insights where linear techniques cannot. So let me just set out a few of these problems. The first problem is the Kakeya problem, which asks: if you have a set in Euclidean space which contains a unit line segment in every direction, must it have full dimension? And I don't really care whether we use Hausdorff or Minkowski or any other sensible notion of dimension. Then there's the classical isoperimetric problem: if you take a set E in Euclidean space whose boundary has a given surface area, what's the largest that its volume can be? And then third — the first two are probably more familiar to you, certainly the second problem is familiar to everyone; the third is probably a little less familiar — given a finite set of straight lines in R^n, how big can the set of joints be? A joint for a family of lines is a point in Euclidean space where n of these lines meet and the directions of these n lines span the whole space. So a corner which lives inside a plane doesn't count; but in three dimensions, if three of the lines meet at a point and their direction vectors are linearly independent, then that's a joint. We'd like to know how many joints you can have, given a finite set of lines. These are the problems I want to focus on. A surprising insight — certainly it was surprising for me — is that this last problem, the joints problem, is really intrinsically multilinear in nature. That's certainly not apparent from the statement of the problem, but in fact it really is a multilinear problem; there's no avoiding it.
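As a concrete illustration (my own, not from the talk — the helper names are hypothetical), here is the standard grid example for the joints problem: 3k² axis-parallel lines through a k×k×k grid of points determine k³ joints, which has the exponent of the sharp bound O(L^{3/2}) for the number of joints of L lines in R³, proved by Guth and Katz. The code counts joints by brute force.

```python
import itertools
import numpy as np

def on_line(p, q, d):
    # p lies on the line {q + t d} iff (p - q) x d = 0
    return np.allclose(np.cross(np.array(p, float) - np.array(q, float),
                                np.array(d, float)), 0.0)

def count_joints(lines, candidates):
    # a joint: a point where at least three incident lines
    # have directions spanning R^3
    joints = 0
    for p in candidates:
        dirs = [d for (q, d) in lines if on_line(p, q, d)]
        if len(dirs) >= 3 and np.linalg.matrix_rank(np.array(dirs, float)) == 3:
            joints += 1
    return joints

def grid_lines(k):
    # 3k^2 axis-parallel lines through the k x k x k grid
    ex, ey, ez = (1, 0, 0), (0, 1, 0), (0, 0, 1)
    lines = []
    for a, b in itertools.product(range(k), repeat=2):
        lines += [((0, a, b), ex), ((a, 0, b), ey), ((a, b, 0), ez)]
    return lines

k = 3
lines = grid_lines(k)
joints = count_joints(lines, itertools.product(range(k), repeat=3))
# 3k^2 = 27 lines make k^3 = 27 joints here, consistent with O(L^{3/2})
```

Each grid point lies on one line from each of the three families, and the three directions are the coordinate axes, so every grid point is a joint.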
Okay, so some of the techniques and ideas that we'll maybe meet on the way: things from analysis — and not the most sophisticated stuff, things we all should have learned as graduate students — Brouwer's fixed point theorem, the Borsuk–Ulam theorem, variants of minimax theorems, the Ky Fan minimax theorem for example; functional analysis, as I said, both infinite-dimensional and finite-dimensional; and geometric analysis. I don't want to go into any detail here, but the example I have in mind in particular is this: if you have a convex set in Euclidean space, and it's in some sort of standard position — let's say; I don't want to be too precise about what that means — and some hypersurface bisects this convex set, that is, cuts it into two regions of equal volume, then the area with which it does so must be large. Of course, if you have a long sausage and chop it down the middle lengthwise, that's not true; so you have to put the set into a standard position first, but once you've put it in standard position, you can expect any bisecting hypersurface to have large area. If the set looks like a ball and you bisect it with a hyperplane, then obviously the disc where the hyperplane intersects it has large area. In more generality, these are things called Cheeger cuts, which arise in the spectral theory of manifolds, but I don't think I want to go into that. And there's also discrete analysis that can be connected to this geometry — the joints problem that I mentioned a minute ago, you can see that it's somehow connected with this geometry. Okay, so let's talk about a few of these problems. The Kakeya problem: as we said, a Kakeya set is a set which contains a unit line segment in all possible directions. So the question is whether a Kakeya set is necessarily big, and how big it must be — must it have full dimension n?
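As a quick aside on the minimax theorems just mentioned (my own finite toy illustration, not from the talk): on finite sets, sup over one variable of the inf over the other is always at most the inf of the sup, and the classical "matching pennies" matrix shows the inequality can be strict without a convexity assumption — which is exactly what theorems of von Neumann/Ky Fan type restore.

```python
import numpy as np

def sup_inf(phi):
    # sup over a (rows) of inf over b (columns) of phi(a, b)
    return phi.min(axis=1).max()

def inf_sup(phi):
    # inf over b (columns) of sup over a (rows) of phi(a, b)
    return phi.max(axis=0).min()

# sup inf <= inf sup holds for every function on finite sets
rng = np.random.default_rng(1)
for _ in range(200):
    phi = rng.normal(size=(6, 7))
    assert sup_inf(phi) <= inf_sup(phi) + 1e-12

# strict inequality without convexity: "matching pennies"
pennies = np.array([[1.0, -1.0], [-1.0, 1.0]])
gap = inf_sup(pennies) - sup_inf(pennies)   # positive: no pure saddle point
```

For the matching-pennies matrix every row minimum is −1 and every column maximum is +1, so the gap is 2.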
So this is known to be true in two dimensions by the work of Roy Davies, and also, at almost the same time, by Charlie Fefferman and Antonio Córdoba; but as soon as the dimension is bigger than two, nobody knows — it's wide open. If Kakeya sets are to fail to fill out a whole n-dimensional ball, then the lines must overlap substantially; but you can't have all these lines simply piled up on top of each other either, because you have a line in every direction. So the overlap of these lines must be severely controlled if this is going to happen. And we want to quantify this in a discrete and finitistic way, so that we can do the analysis more sensibly. So we fix a small parameter delta, we replace unit line segments by a finite family of tubes of cross-sectional width delta — they're just slight fattenings of a line segment — and we try to understand this overlap function: the thing which, at each point x, counts how many of these tubes x is in. The intuition is that for a Kakeya set to be forced to have large dimension, this overlap function must be under some size control; otherwise we haven't got a hope. Okay, so we assume the tubes have small cross-sectional width delta, and basically you can assume the tubes have delta-separated directions, because if any two tubes are within delta of each other angle-wise, then one is contained in the double of the other, so there's really nothing going on there. So we're going to assume that the tubes have delta-separated directions. And, as I said, what we want is a quantitative upper bound on this function — some control over the overlap.
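Here is a toy two-dimensional computation (my own sketch, not from the talk) of the overlap function for a "bush" of roughly 1/N-separated tubes all passing through one point — a worst case for overlap. It illustrates the point made next: the L¹ size of the overlap function only sees the total measure of the tubes, while the L^∞ size only sees the single worst point.

```python
import numpy as np

# N tubes of width delta = 1/N through the centre of the unit square,
# with ~1/N-separated angles (a "bush": worst-case pointwise overlap).
N = 50
delta = 1.0 / N
grid = 400
xs = np.linspace(-0.5, 0.5, grid)
X, Y = np.meshgrid(xs, xs)

overlap = np.zeros_like(X)
for k in range(N):
    theta = np.pi * k / N
    # distance from (X, Y) to the line through the origin at angle theta
    dist = np.abs(-np.sin(theta) * X + np.cos(theta) * Y)
    overlap += (dist <= delta / 2)

cell = (1.0 / grid) ** 2
l1 = overlap.sum() * cell   # ~ 1: just the total area of the N tubes
lmax = overlap.max()        # ~ N: enormous overlap at the centre
```

Each tube has area about delta = 1/N, so the L¹ norm is about 1 regardless of how the tubes are placed, while the sup is N at the common point — neither endpoint detects the geometry.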
So the way we will formulate this is to bound this overlap function in an L^p space, and the first thing to think about is which p. When p is one, there's no useful information: the L¹ norm just adds up the measures of the tubes, which tells you nothing at all about overlap. And when p is infinity, this measures the maximal overlap — the maximum number of tubes a point can be in — and again that doesn't really tell you enough. So you've got to choose a p which is in between one and infinity to get valuable information. Okay, so we'll just change notation and scale everything up: we work now with tubes of length N and unit cross-section, whose directions are separated by one over N — I'll just scale things up because I prefer working at big scales rather than small scales. Okay, so the correct exponent, it turns out, for this problem is neither one nor infinity but the number n over n minus one in between. And so here we have a modified version of the overlap function. I've now got coefficients a_T attached to the tubes — before, all the a_T were one, but now I allow general coefficients — and I've got this modified overlap function in L^{n/(n-1)}, and I need to get control of that by the little ell^{n/(n-1)} norm of these coefficients. If all the coefficients are zero or one, this is just what we had before. And the correct control to aim for is N log N — I don't want to go into it, but there's a simple example which shows this is the best one can hope for. And if you could prove this was true, anything else of interest that you could say at other exponents would follow trivially from this by interpolation and the like. Okay, so this is the thing that one would like to understand.
And proving inequality (1) is what's now called the Kakeya maximal problem. It's called the Kakeya maximal problem because it's the linear dual of a problem about a maximal function. I'm not going to go into this maximal function formulation in any detail, but for those of you who are familiar with the Hardy–Littlewood maximal function and the strong maximal function and so on, just to make a point of contact with that: instead of taking maximal averages over balls of various radii containing a point, or over axis-parallel rectangles, we take a maximal average where the variable this time is a direction on the sphere rather than a point. So for a direction omega on the unit sphere, and f a function defined on R^n, we take the maximal average of f over rectangles of cross-sectional area one and length N, over all such rectangles which are parallel to that direction. So it has a spirit similar to the Hardy–Littlewood maximal function and the strong maximal function, but there's a difference. And the conjecture, which is dual to (1), is that this maximal function is bounded on L^n with some sort of logarithmic growth in N in the operator norm. This is why it's called the Kakeya maximal conjecture: it's the dual of a classical-style maximal statement. Okay, so let's now go to the isoperimetric problem. The isoperimetric problem asks us to show that we can control the volume of a set by the appropriate power of the surface area of its boundary, and of course we'd like to identify the best constant and the extremisers, etc. This is a millennia-old problem going back to the ancient Greeks, and satisfactory solutions, in any number of degrees of rigour and generality — especially here in Italy — have been known for hundreds of years.
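To record in symbols the maximal function described a moment ago (my reconstruction of the standard formulation, in the scaled-up normalisation with tubes of unit cross-section and length N):

```latex
f^{*}_{N}(\omega) \;=\; \sup_{R \,\parallel\, \omega}\; \frac{1}{|R|} \int_{R} |f(x)|\, dx ,
\qquad \omega \in S^{n-1},
\qquad\text{conjecture:}\qquad
\| f^{*}_{N} \|_{L^{n}(S^{n-1})} \;\le\; C\, (\log N)^{\alpha}\, \| f \|_{L^{n}(\mathbb{R}^{n})} .
```

Here R ranges over all rectangles in R^n of unit cross-section and length N whose long axis is parallel to omega; the exponent alpha in the logarithmic growth is left unspecified, as in the talk.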
And there are, of course, untold variants of this problem; I mention just the most emblematic, simple one. Okay, there's a very satisfactory approach to this problem via the linear Sobolev inequality, which takes a C-infinity function of compact support and bounds its L^{n/(n-1)} norm by the L¹ norm of its gradient, with a sharp constant. But I don't want to take this point of view today. First of all it's linear; secondly, it's also non-positive — there's cancellation involved, since there's a gradient being integrated. So I don't want to take that point of view today; I want to regard it as a little too subtle, and instead stay crude. So what about a multilinear approach to the isoperimetric problem? Well, there's a very well-known approach, closely related to the Gagliardo–Nirenberg form of the Sobolev inequality, which is intrinsically multilinear. Let's go through this slowly. It begins with a tautological observation: if you take a set E, then it's contained in the Cartesian product of its projection onto a hyperplane with the direction perpendicular to that hyperplane. If you want to write that as a mathematical formula: the characteristic function of the set at a point x is dominated by the characteristic function of the projection of the set, evaluated at x with the variable perpendicular to the hyperplane of projection deleted. Okay, so that being true, it also holds for all the coordinate permutations of it. We've been thinking about projecting onto one particular coordinate hyperplane, but we can do it with any coordinate hyperplane; so we take that inequality for all n possible coordinate hyperplanes, multiply them together, and take an appropriate power — not quite a geometric mean, but an appropriate power — which costs nothing, because all the values here are zero or one.
Okay, so that's the simple geometric observation — we have this inequality here, which just repeats what I said on the previous slide. I'm going to call X_j the characteristic function of the projection of E onto the j-th coordinate hyperplane. Now the Loomis–Whitney inequality, a general inequality, tells us that no matter what L¹ functions f_j we have on R^{n-1}, if we evaluate them on the projections onto the coordinate hyperplanes, take the same power that I've been advertising, and integrate over R^n, then we have control of this quantity by the appropriate product of powers of the L¹ norms of the f_j. So that's the Loomis–Whitney inequality, and there's an immediate consequence of applying it in this setting: the volume of E is of course at most the integral of the product of the X_j composed with the projections pi_j — that's the geometric observation we had — and if we work out the L¹ norm of the characteristic function of the projection of E onto the j-th coordinate hyperplane, it's certainly no worse than the surface area of the boundary of E. And so we get a version of the isoperimetric inequality out of this, albeit with the wrong constant. I'm not really upset about the constant; I want to think more about the methods and the techniques. Okay, so that's a little bit about the isoperimetric inequality, and the point is that the multilinear Loomis–Whitney inequality helps us qualitatively explain the isoperimetric inequality, even if it doesn't give us the sharp constant. That's the moral of this. Okay, so let's unpack the Loomis–Whitney inequality — there it is again.
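A quick numerical sanity check (my own illustration, with hypothetical helper names): the discrete form of the Loomis–Whitney inequality in R³ says that a finite set E in Z³ satisfies |E| ≤ (|π₁E| · |π₂E| · |π₃E|)^{1/2} — exactly the 1/(n−1) = 1/2 power just discussed — with equality for boxes.

```python
import itertools
import math
import random

def lw_bound_holds(E):
    # discrete Loomis-Whitney in Z^3: |E| <= (|p1 E| |p2 E| |p3 E|)^(1/2)
    p1 = {(y, z) for (x, y, z) in E}   # project out x
    p2 = {(x, z) for (x, y, z) in E}   # project out y
    p3 = {(x, y) for (x, y, z) in E}   # project out z
    return len(E) <= math.sqrt(len(p1) * len(p2) * len(p3)) + 1e-9

# random finite subsets of a 5 x 5 x 5 grid all satisfy the bound
random.seed(0)
ok = all(
    lw_bound_holds({tuple(random.randrange(5) for _ in range(3))
                    for _ in range(random.randrange(1, 60))})
    for _ in range(200)
)

# equality for a box: |E| = 60, projections have sizes 20, 15, 12
box = set(itertools.product(range(3), range(4), range(5)))
```

For the box, sqrt(20 · 15 · 12) = sqrt(3600) = 60 = |E|, so boxes saturate the inequality, matching the Cartesian-product intuition above.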
Let's take f_j — the inequality applies to all L¹ functions living on R^{n-1} — and let f_j be a step function: a weighted sum of characteristic functions of unit balls in R^{n-1}, with a nonnegative weight attached to each unit ball, making f_j certainly an L¹ function. And we can work out what the inequality says. Well, it's easy to work out the L¹ norm of f_j; and what is f_j of pi_j(x)? That's simply the same weighted sum, but now of characteristic functions of tubes: the tube in R^n which lives above the ball B, whose projection onto R^{n-1} is that ball. So what do we get if we rewrite the Loomis–Whitney inequality in this case? It's the integral of a product of weighted sums of characteristic functions of tubes, where the tubes T in the j-th factor of the product are all exactly parallel to the j-th coordinate axis, and on the right the L¹ norms become sums of the weights. So this is a particular case of the Loomis–Whitney inequality, and it has a more geometric flavour: it talks about geometric objects — tubes, tubes with unit cross-sections, because these balls are unit balls — and about how the overlaps of these different families of tubes, when you multiply them together, are under control. So this is a special case of the Loomis–Whitney inequality, but in fact it's an equivalent form of it, basically just by dilation invariance. If you want to prove the Loomis–Whitney inequality for general f's, you can assume the f's have compact support, and you can assume the f's are step functions, because step functions are dense in L¹; and for a step function built from balls of some small common size, I can scale everything up — the inequality is scale-invariant — so I can assume the size
is one. So they're actually equivalent statements, and this is the formulation I want to focus on, because it's similar in spirit to the formulation of the Kakeya problem — Kakeya meets Loomis–Whitney, if you like. The Loomis–Whitney inequality, just to remind you, is this inequality here, where the curly T_j are the families of all tubes of unit cross-section which are parallel to e_j; we take this product, raise these weighted sums to the power one over n minus one, and dominate by the product of powers of the little ell¹ norms of the coefficients. And the Kakeya maximal problem: it has tubes of length N — in the Loomis–Whitney case my tubes are infinitely long in both directions, in the Kakeya case they have length N — and there's a condition on the directions: they must be N^{-1}-separated. But it's a similar sort of thing: we're looking at a weighted sum of characteristic functions of tubes, raised to a power, and we're trying to control that by an appropriate norm of these coefficients. So I want to draw out the parallel between these two formulations; that's more what I want to do than anything else. And what we will do is make a common approach to these problems. We will retain the common form of the left-hand side of these two inequalities — or at least the spirit of the common form of these formulations — but we will relax the conditions. In the Loomis–Whitney case we have tubes exactly parallel to e_j, and in the Kakeya case we have tubes with a separation condition on the directions; we'll do a hybrid, where we consider tubes which are roughly parallel to e_j but not necessarily exactly parallel. So we'll have some of the flavour of both of these.
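To set the two formulations side by side (my reconstruction of the two left-hand sides being compared, with coefficients a_T ≥ 0):

```latex
\text{Loomis--Whitney:}\quad
\int_{\mathbb{R}^n} \prod_{j=1}^{n} \Big( \sum_{T \in \mathcal{T}_j} a_T\, \chi_T \Big)^{\frac{1}{n-1}}
\;\le\; C \prod_{j=1}^{n} \Big( \sum_{T \in \mathcal{T}_j} a_T \Big)^{\frac{1}{n-1}},
\qquad \mathcal{T}_j = \{\text{doubly infinite unit tubes} \parallel e_j\};
\\[1em]
\text{Kakeya:}\quad
\Big\| \sum_{T} a_T\, \chi_T \Big\|_{L^{\frac{n}{n-1}}(\mathbb{R}^n)}
\;\lesssim\; \big\| (a_T) \big\|_{\ell^{\frac{n}{n-1}}},
\qquad T \text{ of length } N,\ N^{-1}\text{-separated directions}.
```

In the Kakeya line the implicit constant carries the N log N-type growth discussed earlier; the structural parallel is that both left-hand sides measure the overlap of weighted families of tubes.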
Okay, so we begin by writing down the same thing as the Loomis–Whitney inequality — if the tubes T_j in the j-th family are exactly parallel to e_j, this is precisely the Loomis–Whitney inequality — but we're going to relax that, and we're going to ask whether this might remain true if the directions of the tubes in the j-th family are allowed to be within, say, 10 degrees — some small amount — of the j-th standard basis vector. That might seem a fairly mild relaxation, and you might think the proof is going to be similar to the proof of Loomis–Whitney, but in fact it's actually quite a lot harder. It would be more than enough, for example, to handle a substantial portion of the Kakeya problem; it wouldn't solve the Kakeya problem, but if you think about it a little bit, it would handle a substantial, transversal portion of it. Because in the Kakeya problem we have a bunch of tubes in different directions: if we let T_j be the tubes from the Kakeya problem with directions close to e_j, then the contribution from these families of tubes is under control. The transversal contribution to the left-hand side in Kakeya is essentially this: we have something like this raised to the power n over n minus one; the exponent n we write out n times, so we have n factors each raised to the power one over n minus one, and in each of those factors we keep only the tubes coming from the j-th direction. So we get something exactly like this; and if we have an inequality like (2), then using Hölder's inequality on it, we get something which is actually quite a bit better than what we need for the linear Kakeya problem — on the right-hand side we even have a little ell¹ bound here, and there's not even a log. So it's not just a kind of
intellectual exercise to consider things like this; it would actually make a substantial contribution to the Kakeya problem, and it's not as obvious as it might seem. Many other possibilities come to mind, and I don't think I want to go into them very much, but one thing I do want to say is that there's nothing particularly special about the standard orthonormal basis: we can take any linearly independent set of unit vectors and something similar will be true. The Loomis–Whitney inequality then becomes this: if you project onto the hyperplane perpendicular to a unit vector omega_j, rather than perpendicular to the j-th standard basis vector, you have to pay a determinant factor. Here omega_1 wedge ... wedge omega_n is just the volume of the parallelepiped whose sides are omega_1, omega_2, omega_3 and so on; when the omega_j form the standard basis that number is just one, and as long as the vectors are genuinely transversal it is not small. Okay, so that's the general-directions version of the Loomis–Whitney inequality, and there's a corresponding version of the Kakeya inequality, the multilinear Kakeya inequality. If we're going to consider a more general version of the multilinear Kakeya problem, we will have to take account of this determinant factor — you can't avoid thinking about it to some extent, but it simply enters as a normalising factor. Okay, what about multilinear Kakeya then? Here is a result which Jon Bennett, Terry Tao and I proved quite a long time ago now, which applies to the situation where our tubes are within 10 degrees of the standard basis vectors. And we proved what we wanted to prove, except that what we really want here is the power q to be one over n minus one, and we weren't able to prove that; we were able to prove it only with q bigger than one over n minus one. You might say, well, that breaks the scaling invariance of the problem — and of course it does; in fact
that's why we're dealing with tubes: because we couldn't prove the scale-invariant endpoint inequality, we had to break the scaling invariance so as to have something which we could do — and that's what we could do. Okay, so that's all well and good. The more precise version was proved a few years later by Larry Guth, and then in a more sophisticated form by Bourgain and Guth; and a little while after that, Stefán Valdimarsson and I gave an account of it which I think is rather more comprehensible to normal people — or at least to analysts, let's say — than perhaps Guth's proof was. So anyway, that's the more precise version, and notice that it now has the correct exponent — this has got the correct exponent — and it's actually got the correct determinant factor too, which is what one wants: remember, on the previous slide we had this factor here, so I put that onto the other side of the inequality, and there indeed it is on the other side. Here e(T) is the unit vector in the direction of the tube T. Okay, so a few words about the proofs. The proof of the plain vanilla theorem with Bennett and Tao we did by a heat-flow technique, and the idea was that somehow we take the positions of the tubes, which are somewhat arbitrary, and we flow their positions: we start off at time zero, and by the time we've got to time one, each tube has moved parallel to where it was, but at time one they all pass through the origin. And the virtue of this — and this is where the hard work was — is that if you look at the left-hand side of the multilinear inequality we were trying to prove, of course there will now be a time parameter in it, but that left-hand side is basically increasing — not exactly increasing, but it's
essentially monotone increasing under this flow. So if you can control matters when t is one, then you can control matters when t is zero; and controlling matters when t is one is trivial, because the tubes all pass through one point, and a very simple calculation reveals that the inequality is true there. So the depth is in the monotonicity under this heat flow. The full theorem was proved by a totally different method — this is Guth's argument — involving duality, convex geometry, algebraic topology, isoperimetric considerations, algebraic geometry, and the polynomial method. The later proof, the one involving us, somehow streamlined it: it took out the algebraic topology — which was cohomology classes and cup products and the Lusternik–Schnirelmann vanishing lemma, none of which I could understand — and replaced it by a simple, off-the-shelf application of the Borsuk–Ulam theorem, which is something that at least an analyst can readily get their head around. So okay, that's just a couple of remarks; let's look at some aspects of the latter approach, our take on Guth's argument. So that's the multilinear Kakeya theorem we're trying to prove, and here we just take one step up in abstraction, because it's going to be helpful to us. We observe that this inequality we're trying to prove, which looks quite complicated, is really a special case of a more general setting where we have a positive multilinear operator. So in this case I've got a positive multilinear operator acting on some coefficient sequences a_1 up to a_n in this way, and giving me a function on R^n — that's what it is to be a positive multilinear operator here — and the integral kernel of this multilinear operator is precisely chi_{T_1}(x) up to chi_{T_n}(x); there are actually a lot of
those subscripts there — together with the wedge factor e(T_1) wedge ... wedge e(T_n). So it's just a way of thinking about the inequality; thinking about it functional-analytically is helpful: it's an inequality for a positive multilinear operator of this form. So just earlier this year, I proved with my former student Michael Tang that there's a very nice duality theorem for multilinear operators of this sort — so now we're doing a bit of functional analysis for a while. Suppose we have a bound of this sort, as in (1): a multilinear bound on a positive multilinear operator with a kernel of this kind — this is the same shape as what we've been talking about. Then the statement of the duality theorem — or the hard half of the duality theorem — is that if this holds, then for every g in the dual space of L^q there exist functions g_1(x, y_1) and so on up to g_d(x, y_d). This is d-linear now, so there's no relation between the degree of multilinearity and the dimension of the space; the underlying space is now just an abstract measure space, not R^n any more. So we can somehow take G(x) to the power d and multiply it by the kernel of the multilinear operator, and that can be pointwise dominated by the product of these functions g_1(x, y_1), ..., g_d(x, y_d). Getting that by itself is easy — just let the little g_j be very big — so we need some upper-bound control on the g_j, and this is it: if you integrate the g_j with respect to x over the space X, then uniformly in y they are controlled by the L^{q'} norm of G. So that is quite a nice theorem. The converse — that if you have this control and this domination, then you have the multilinear inequality — is an easy application of Hölder's inequality; so that's the easy half of what I'm saying
now; the hard half is that if you have the multilinear bound then you get the factorisation. Okay, but the eagle-eyed amongst you will notice that this doesn't actually quite make sense yet, because I haven't told you what the hypothesis marked beta is. The theorem holds provided we have an auxiliary condition — this is where this beta comes in — and you know, I don't want to go on about this auxiliary condition, because basically I don't really understand it. I mean, it works, and there's a load of special cases in which it's true and you can apply it, but I don't really understand it, and I'd quite like to get to the bottom of it. So the auxiliary condition is that the kernel can be dominated pointwise by some other functions satisfying a certain multilinear inequality among themselves. I don't want to obsess about it, because I don't understand it, but in some cases it's extremely easy to check: for example, in the case of the plain vanilla multilinear Kakeya here, where we're only looking at tubes whose directions are within 10 degrees of the j-th standard basis vector, it's completely trivial to check this auxiliary condition — so it shouldn't put you off, although, as I say, I don't claim to fully understand it. And there's certainly no hint that this is a necessary condition; but without some extra condition, what I was kind of pretending to prove before definitely isn't true — you do need something — and exactly what the correct condition is, we don't yet know. Anyway, so this is the easy half, which I mentioned already, and the auxiliary structural condition that we can't get away without; but if we have it, then the two statements are equivalent. We call it a duality theorem because when the degree of multilinearity d is one, it just reduces to something which we all learned in the first year of graduate school: if you have a
bounded linear map T going from L¹ to L^q, then that's equivalent to the adjoint T* being a bounded linear map from L^{q'} to L^infinity. It just reduces to that, so it's an instance of a duality theorem in a multilinear sense. Okay, right, so what's the main step in the duality theorem? The main step is to use some version of the Ky Fan minimax principle. The minimax principles tell you: if you have some function of two variables, the sup of the inf is always less than or equal to the inf of the sup — that's always a good exercise for first-year undergraduate students — but you can reverse the order under certain conditions: under certain continuity, convexity and compactness assumptions on A, B and phi, you can reverse the inequality. There's a whole industry of study of minimax theorems, and again it's very strong here in Italy. The point I want to make is that it's the existence of this minimum — the fact that this is not just an infimum but a minimum — which at the end of the day furnishes the d-tuple (g_1, ..., g_d) in our theorem. So it really matters that this inf is a min and not merely an inf; but that's a feature, and when you've got compactness you expect an inf to be a min anyway. So okay, the proof of this minimax theorem is non-constructive; it's morally in the same ballpark as the Brouwer fixed point theorem and the Hahn–Banach theorem, so you don't know how to do this constructively. And at the end I've got a question which relates to a very specific case of the application of this duality theorem, which I hope one or more of you might be interested in — something tantalising for later. Okay, so back to multilinear Kakeya. Guth's argument for multilinear Kakeya basically involved constructing the sort of factorisation I've told you about. Now, the abstract theorem doesn't tell you how to construct the factorisation, but Guth managed to do it by
hand. Of course you don't get sharp constants and things, but still he managed to do it. His construction relies very much on topology and convexity in Euclidean space, and it's not really clear how you proceed in other contexts. So what we have to do, just unpacking the abstract duality theorem in our context, is show the following: for every G in L^n of R^n (it's important that the n of the L^n space is the same as the n of R^n; it's a reflection of the dimensionality of R^n) which is constant on the lattice of unit cubes, you've got to find functions g_j(Q, T) such that if you take G(Q) raised to the power n times the wedge-product factor, for a cube Q and tubes T_1 to T_n, then this is dominated by the product of these functions g_j; and such that, for any fixed tube T, if you sum the quantities g_j(Q, T) over all cubes Q meeting that tube T, the sum is dominated by the L^n norm of G. The reason we proved the duality theorem was really that we didn't understand why Guth's argument worked; to me it just seemed mad that you could even think about doing this. But with the benefit of hindsight, with the duality theorem, you can see that it kind of had to work. It still doesn't tell you how to do it, though. Okay, so what's the heart of Guth's construction? He builds these functions g_j by first thinking about a quantity one might call directional surface area. You take a reasonable hypersurface Z in R^n and a unit vector e, and you look at the amount of the surface area of the hypersurface Z which is due to the direction e; if you like, it's the component of the surface area in direction e, something like that.
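The quantity just described can be written down explicitly; the following formalisation is my own notation, not necessarily the talk's:

```latex
\operatorname{surf}_e(Z) \;=\; \int_Z \lvert e \cdot n(z) \rvert \, d\sigma(z)
```

where $n(z)$ is a unit normal to $Z$ at $z$ and $d\sigma$ is surface measure on $Z$. Note that summing this over $e$ ranging through an orthonormal basis recovers the usual surface area up to dimensional constants, since $1 \le \sum_j |e_j \cdot n| \le \sqrt{n}$.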
And so the main step in Guth's argument was this: if you fix a function capital G, then there's a polynomial hypersurface Z of controlled degree, degree controlled by the L^n norm of G, such that for every unit cube Q and each n-tuple of tubes meeting Q, we have the inequality that G(Q)^n times the wedge-product factor is less than or equal to the product of certain directional surface areas. So this is exactly the sort of thing we're looking for: these directional surface areas are the candidates for the little g_j(Q, T)'s that I was talking about before. So this is the main step, and indeed, if we take the functions little g_j to be these directional surface area functions, then this gives the first inequality we want, and the second inequality we want, which is the supremum over tubes T of the sum over cubes Q meeting T of the g_j(Q, T)'s, follows from Bézout's theorem. Because if we sum up these surface area quantities of Z in Q over all cubes Q meeting a tube T, we're looking at the surface area of Z inside the tube T. So you've got a tube T, you've got some hypersurface passing through it some number of times, and if you take any line parallel to the tube, as you go along that line you'll pass through the hypersurface a certain number of times, and that number of times is controlled by the degree of the hypersurface. So Bézout's theorem gets you the rest of what you need. So how does one prove this visibility theorem of Guth's?
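The degree-counting step, that a line meets a degree-d hypersurface in at most d points, can be sanity-checked numerically; the polynomial and line below are my own illustrative choices, not from the talk:

```python
import numpy as np

# Restrict the degree-2 polynomial p(x, y) = x^2 + y^2 - 1 (the unit circle)
# to a parametrised line t -> point + t * direction. The restriction is a
# univariate polynomial in t of degree at most deg p, so it has at most
# deg p real roots: the line crosses the hypersurface at most deg p times.
def restrict_to_line(poly_fn, point, direction, deg):
    # Sample the restriction at deg+1 points and recover it exactly by fitting.
    ts = np.linspace(-2.0, 2.0, deg + 1)
    xs = point[0] + ts * direction[0]
    ys = point[1] + ts * direction[1]
    return np.polynomial.polynomial.polyfit(ts, poly_fn(xs, ys), deg)

p = lambda x, y: x**2 + y**2 - 1.0
coeffs = restrict_to_line(p, (0.0, 0.5), (1.0, 0.0), 2)  # t^2 - 0.75
roots = np.polynomial.polynomial.polyroots(coeffs)
real_roots = [r.real for r in roots if abs(r.imag) < 1e-9]
assert len(real_roots) <= 2  # never more crossings than the degree
```

Here the horizontal line at height 0.5 crosses the circle twice, at t = ±sqrt(3)/2, which is indeed at most the degree.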
Well, that's hard. We'd like to have a taste of how it goes, but let's move on to a much easier task, just to illustrate the idea. The easier task is to find a polynomial hypersurface of degree at most the same bound, such that for every cube Q and each n-tuple of tubes meeting Q transversely, we have this. So why is this easier? Well, because if we're only considering n-tuples of tubes meeting Q transversely, that means these wedge-product quantities are all essentially one, and if we then take n-th roots of both sides, this says that G(Q) is less than or equal to the geometric mean of these surface areas. What we're going to prove instead, and this is the easier thing, is that G(Q) is less than the arithmetic mean of these surface areas. The geometric mean is always less than or equal to the arithmetic mean, so we're making life much easier for ourselves by aiming only for the arithmetic mean. And that's nice, because the arithmetic mean of these directional surface areas is basically just the standard surface area in that cube, so this is a much less technical thing to prove, but it will still give the main idea. So: given a function capital G in L^n of R^n, let's assume it's finitely supported and constant on unit cubes, and we break up each unit cube Q in its support into G(Q)^n congruent sub-cubes, so that the total number of little cubes we've got is the sum over Q of G(Q)^n. And let's identify the space P_d of polynomials of degree at most d with R^D, where capital D is n plus d choose d, which is about d^n up to a constant depending on the dimension, and I don't care at all about constants depending on the dimension here. So how many coefficients do you need to specify a polynomial of degree at most
d? You need n plus d choose d of them, which is basically d^n, by elementary combinatorics. Okay, so we now consider this map phi, which takes the unit sphere in this space of polynomials into a very large Euclidean space whose dimension is the number of little cubes we've got. What it does is this: for each polynomial P in the unit sphere of this class of polynomials, and for each little cube in our family, you take the amount of space in the little cube where P is positive and you subtract off the amount of space where P is negative; doing this for every little cube gives you a bunch of numbers. And you notice that phi of minus P is equal to minus phi of P: if you replace P by minus P, the set where P is negative becomes the set where P is positive and vice versa, so you just switch the sign. It's also a continuous function; that takes a little bit of work, but it's continuous. And this is where the Borsuk-Ulam theorem from elementary nonlinear analysis comes in: if you have a continuous map from a sphere into a Euclidean space which is odd, or antipodal, whatever the word is, it must take the value zero somewhere, provided the dimension of the sphere is at least as big as the dimension of the target space. So the Borsuk-Ulam theorem tells you that, provided the degree d is at least comparable to the L^n norm of G (so that D, which is about d^n, beats the number of little cubes, which is the n-th power of the L^n norm of G), there's a polynomial of degree at most d whose zero set bisects each little cube, because phi of something being zero means that the amount of mass on the two parts is equal. And then, because these little cubes are nice cubes, an isoperimetric-type inequality gives that the surface area of the place where the cut takes place is large; by scaling, the correct largeness is the (n minus 1)-st power of the diameter of the little cube, and that's the same as G(Q) to the power minus (n minus 1),
because that's how many cubes we put in: we packed G(Q)^n small cubes inside the unit cube. So now we just sum over the different small cubes, and we get that the surface area of Z inside the unit cube Q is at least G(Q)^n times G(Q)^{-(n-1)}, which is G(Q), as required. So that's a kind of toy version of the easier result, which catches the spirit of the argument. Okay, so, final ten minutes or so (am I supposed to stop at any particular time?), let's go to the discrete setting and talk about the joints problem. Much of what we've talked about at least makes sense in principle in more general settings, and I think it's quite a challenge to understand what it might mean in some settings more general than Euclidean space. So let's for now focus on a discrete setting where the role of Euclidean space is replaced by that of the vector space F^n over an arbitrary field F. Some things change in an obvious way: Lebesgue measure becomes counting measure, and tubes become lines, so in some sense we're going back to Kakeya again, back to lines, because we don't have any scales. And because we're not assuming the field is the reals, there's no metric structure and there are no angles, so two lines either meet at a point or they don't; we don't care about whether they're nearly parallel or not. And the determinant quantity we had, e(l_1) wedge ... wedge e(l_n), because everything is discrete, is either zero or one: it's one if the directions span and zero otherwise. So in some sense a lot of things begin to simplify. What's not so good is that we no longer have topology, like the Borsuk-Ulam theorem, to help us; the previous argument leaned very heavily on the Borsuk-Ulam theorem, and we just don't have that over finite fields. So how much of what we've done can we recover, and with what level of understanding? That's what I'll discuss
for the rest of the talk. So the Kakeya set and maximal problems have satisfactory resolutions using the polynomial method, due to Dvir and to Ellenberg, Oberlin and Tao respectively, so I don't want to focus on those; I want to think more about the analogue of multilinear Kakeya. The analogue of multilinear Kakeya in this setting is the setup of joints and multijoints. As I said at the very beginning, you've got a set of lines in F^n, and a joint is a point where n of these lines meet and where the directions of the lines are linearly independent. A multijoint is a similar thing, but this time we have n families of lines L_1 to L_n, and a multijoint is again a point where n of the lines meet, but with exactly one line from each of the families, and with the directions linearly independent. The crude versions of the joints and multijoints problems consist in asking to bound the number of joints, or the number of multijoints, just in terms of the size of the set L or, in the multijoint setting, the sizes of the families. But as we've seen, it's been beneficial to think about the overlap function in more general settings, and I'll come to that in a minute. The joints problem in its crude form was solved by Larry Guth and Nets Katz in three dimensions in the Euclidean setting; then Quilodrán and, in the same week, Kaplan, Sharir and Shustin proved it in the n-dimensional case in the setting of R^n; and then several other people figured out that you can basically adapt the proof to arbitrary fields. So that's the vanilla joints theorem, without any multiplicities. This didn't resolve the more subtle multijoints problem, where we expect to bound the number of multijoints by the product of the sizes of the families raised to the power one over n minus one.
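To get a feel for the numerology in the joints theorem, here is the standard grid example sketched in Python (the construction is classical; the code is just my illustration): the axis-parallel lines through a k by k by k grid give 3k^2 lines whose joints are exactly the k^3 grid points, matching the exponent n/(n-1) = 3/2 in three dimensions.

```python
def grid_joints(k):
    # Lines parallel to the coordinate axes through the points of a
    # k x k x k grid: k^2 lines per axis direction, so 3k^2 lines in all.
    lines = 3 * k * k
    # Every grid point is a joint: three concurrent lines with linearly
    # independent (in fact orthogonal) directions meet there.
    joints = k ** 3
    return lines, joints

lines, joints = grid_joints(10)
assert lines == 300 and joints == 1000
# joints = (lines / 3) ** (3/2), so the bound |J| <~ |L|^{3/2} is sharp
# up to constants.
assert abs(joints - (lines / 3) ** 1.5) < 1e-9
```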
Then there are the variants where we look at multiplicities, which, as I've said, has proved to be very beneficial. Looking at multiplicities, we want to take the overlap function: this is the analogue of the overlap function that we had before in the linear case, and this is the analogue of the overlap function that we had in the multilinear case. And it was eventually proved by Ruixiang Zhang that for the joints problem we can indeed count the joints with multiplicities and get the answer you expect. In the earlier theorems we were just estimating the count with the overlap effectively equal to one, so this is a sharper result, having something much bigger on the left-hand side. And in the multijoints problem, the similar thing holds. The very interesting thing, to me, was that these problems are actually equivalent to each other. This one is explicitly multilinear; in this one you can't see any of the multilinear structure, but it is there, because they're equivalent problems. So the multijoints problem is indeed the correct discrete analogue of multilinear Kakeya. This has been solved, but something that I'm not satisfied with is that Zhang's argument, and another one by Tidor, Yu and Zhao, both establish these results directly. So if we go back to what I said earlier about the equivalence between the Kakeya maximal function and the L^p bound on the overlap function: basically, the arguments of Zhang and of Tidor, Yu and Zhao work on the side of the maximal function. They don't work on the dual side that I've been working with throughout today.
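For reference, the bounds under discussion, as I understand them, have the shape:

```latex
|J(\mathcal{L})| \;\lesssim_n\; |\mathcal{L}|^{\frac{n}{n-1}},
\qquad
|J(\mathcal{L}_1,\dots,\mathcal{L}_n)| \;\lesssim_n\;
\bigl(|\mathcal{L}_1|\cdots|\mathcal{L}_n|\bigr)^{\frac{1}{n-1}}
```

and in the multiplicity versions the plain count on the left is replaced by a sum over joints of a suitable power of the local overlap function, which is strictly stronger since each summand is at least one.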
So it became an interesting question whether we could give a proof on the dual side, and indeed we proved, well, Michael Tang proved in one way, and Michael and I proved entirely separately in another way, this result. Let's not worry about the details of it; the point is that this is the formal analogue of Guth's theorem in the discrete setting. So we managed to prove it even though we don't have any of those tools: we don't have anything like the Borsuk-Ulam theorem, so we had to prove things by very different methods. The proof by Michael Tang was done by a very careful understanding of the arguments of Tidor, Yu and Zhao, and the proof that Michael and I did together was basically to take the result of Zhang and apply the duality theorem to it. So that was a kind of cheat, if you like. But at least formally, the analogue of Guth's result is true. We don't really understand the geometry that's going on behind it, though. You know, what is the analogue of directional surface area, or of visibility, which I didn't mention? Do they have some meaning in this setting? That's still a little bit unclear. So there are some questions. Can we realise this last theorem in terms of geometric concepts similar to the ones Guth used? Is there a direct proof on the dual side, rather than one passing through the theorems on the maximal function side? We don't know the answers to these. And yes, the auxiliary structural hypothesis is an issue here too. Anyway, final slide: this is a question to tempt you, and probably I should have phrased this slide differently. There's a very beautiful inequality due to Beckner, which is the sharp Young convolution inequality.
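For reference, Beckner's sharp Young inequality reads:

```latex
\|f * g\|_{L^r(\mathbb{R}^d)} \;\le\; (A_p A_q A_{r'})^{d}\, \|f\|_{L^p(\mathbb{R}^d)} \|g\|_{L^q(\mathbb{R}^d)},
\qquad \tfrac{1}{p} + \tfrac{1}{q} = 1 + \tfrac{1}{r},
\qquad A_p = \Bigl(\tfrac{p^{1/p}}{p'^{1/p'}}\Bigr)^{1/2}
```

with $1/p + 1/p' = 1$, and with extremisers given by suitably scaled Gaussians.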
And the sharp Young convolution inequality can be presented in a more general setting, in the context of what are called Brascamp-Lieb inequalities. In particular, there are classes of what are called geometric Brascamp-Lieb inequalities. And there's a geometric Brascamp-Lieb inequality, equivalent to one of the special cases of the sharp Young inequality, which says the following. Suppose you've got three unit vectors omega_1, omega_2, omega_3 in R^2 at angles of 120 degrees, that is, 2 pi over 3, to one another. Take functions f_j on the real line and compose each f_j with the projection onto the line perpendicular to omega_j. Then we get an inequality like the Loomis-Whitney inequality, except that we have three factors in a two-dimensional situation and the powers are different; but morally it's like the Loomis-Whitney inequality. The sharp constant in this inequality is one, and it's obtained by testing on Gaussians. So the duality theorem that we stated tells us the following: for every function G in L^2 of R^2 with norm one, there exist three functions on R^2, little g_1, little g_2, little g_3, such that capital G is dominated by the geometric mean of these little g's, and such that, for each of the three j's and for every line L parallel to omega_j, if we integrate g_j over L, we get something which is less than or equal to one. So this is a true fact, and the challenge is to prove it constructively: can you understand how to build the g_j satisfying this explicitly in terms of G?
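Since the claim that Gaussians give the sharp constant one is checkable numerically, here is a rough quadrature sketch in Python; the exponents 2/3 and the identification of the projections are my reading of the inequality above, so treat the setup as an assumption rather than a verbatim transcription of the slide:

```python
import numpy as np

# Three unit vectors at 120-degree angles in R^2.
omegas = [np.array([np.cos(a), np.sin(a)]) for a in (0.0, 2*np.pi/3, 4*np.pi/3)]

# f_j = standard Gaussian density on R (integral 1), so the right side is 1.
g = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)

# Left side: integrate over R^2 the product of g(x . omega_j)^(2/3).
# (The fibres of projection onto the line perpendicular to omega_j are the
# lines parallel to omega_j; rotating the whole 120-degree configuration by
# 90 degrees shows that using omega_j itself gives the same integral.)
h = 0.02
xs = np.arange(-8.0, 8.0, h)
X, Y = np.meshgrid(xs, xs)
integrand = np.ones_like(X)
for w in omegas:
    integrand = integrand * g(X * w[0] + Y * w[1]) ** (2.0 / 3.0)
lhs = integrand.sum() * h * h
assert abs(lhs - 1.0) < 1e-3  # equality case: Gaussians are extremisers
```

Analytically, the sum of (x . omega_j)^2 over the three directions is (3/2)|x|^2, so the integrand collapses to the standard two-dimensional Gaussian density and the integral is exactly one.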
But the point is that, at the moment, it's an application of the abstract duality theorem, so it uses all this minimax machinery, the Ky Fan minimax lemmas, and we just don't have a handle on how to construct these functions; we only know they exist. And given that I know people here in the audience like questions about sharp constants and extremisers, the sort of thing Durbeck has been interested in, I leave that with you as a challenge. So thank you very much. Are there any questions or comments from the audience, or on Zoom? If you're on Zoom and want to ask anything, type it in, or unmute yourself. A very quick question: F is an arbitrary field, so do you consider finite fields as well? Yes, yes, absolutely. Some problems make sense only in certain settings: the Kakeya problem makes sense only in finite fields, but multilinear Kakeya makes sense in arbitrary fields. So, if I have it correctly, on your last slide you're affirming that there is a sharp inequality realised by Gaussians, the one in blue; that's a fact, you check it with the standard Gaussian and it's just true. And the red one is also a fact, which comes from your theorem, and you want to unlock the constructive recipe. Yeah, at least to some extent. And this would be the first non-trivial case to think about; I mean, you chose three vectors, and there would be nothing interesting with two. No, because with two directions it's just Loomis-Whitney, and I can write down the factorisation statements explicitly and easily there. And then I also assume that you would have a similar statement with four or five vectors. Oh yeah, yeah, this is just the basic case.
Of course, because we know that Gaussians are really important in the inequality itself, obviously Gaussians are going to be really important in the factorisation too, but I don't see how to construct the little g's in terms of the big G in the end. Yeah, I mean, it ought not to be impossible. I'll be honest with you: even in the case of Loomis-Whitney, where doing the analogue of this is trivial, it still took me six months to realise why it's trivial. Are there other questions or comments from the audience here or on Zoom? On Zoom, okay. Thank you for the great talk. Okay, so there are no further questions. Let's thank Tony again for the very nice talk.