So this is the basic notions seminar, and what I will do today is tell you some basic notions of projective geometry and algebraic geometry. I will not assume that you know anything about algebraic geometry, but I would like to motivate these basic notions by connecting them to an applied problem: the geometry of tensors.

Let me start by defining the tensors I am interested in. I fix k complex vector spaces V_1, ..., V_k, and I consider their tensor product, which is again a complex vector space W. This vector space is generated by the vectors we call indecomposable: these are just tensor products v_1 ⊗ ... ⊗ v_k of vectors in the V_i's. In general, an element of this vector space is a sum of indecomposable tensors, and the minimal number of summands in such a decomposition is called the rank of the tensor. So we have this vector space, and every vector has a rank: the minimal number of indecomposable, or primitive, tensors that you need in order to write it as a sum. So these are our basic definitions; let me keep them here.
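In formulas, the definitions just given read as follows (my own transcription of the blackboard notation):

\[
W \;=\; V_1 \otimes \cdots \otimes V_k, \qquad
T \;=\; \sum_{i=1}^{r} v_1^{(i)} \otimes \cdots \otimes v_k^{(i)}, \quad v_j^{(i)} \in V_j,
\]

and the rank of T is the minimal r for which such a decomposition exists; the indecomposable tensors are exactly those of rank one.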
The people who study decomposition of tensors are interested in several questions; let me mention just a few problems that are important in applications. Given a tensor, how can you compute its rank, that is, the minimum number of indecomposable tensors needed to write it in this form? More generally, what is the generic value of the rank: if you pick a random element of this vector space, what will its rank be? Other important questions are: when is the decomposition unique, and, given a vector, how can one actually compute a minimal decomposition of this sort? These problems are very important in applications: tensor decomposition shows up in computability and complexity of algorithms, algebraic statistics, phylogenetics, signal processing, and many other areas; I have just listed a few. What I want to do today is motivate the definitions of some very classical objects in algebraic geometry using the problem of tensor decomposition. I will concentrate on the problem of the generic rank of a tensor: we want to know the generic rank for certain vector spaces of tensors, and I will explain how this matters for the complexity of algorithms. That is the goal. I will start with a specific problem about complexity of algorithms, then we will see how ranks come into it, and finally we will introduce the projective varieties associated to this problem.

OK, so here is a very simple problem: matrix multiplication. Suppose you have two matrices, with entries a, b, c, d and e, f, g, h, and you want to compute their product. We learned the usual algorithm in high school, and if you look at it, you will notice that to compute the product of two general 2 × 2 matrices we have to perform eight multiplications. Of course, when you increase the size of the matrices, the number of multiplications needed increases, and it turns out to be computationally very costly to multiply matrices this way. So you can ask: is there another algorithm to multiply, say, 2 × 2 matrices using fewer multiplications, even at the cost of more additions? Additions are cheap computationally, but multiplications are expensive. As it turns out, there are other algorithms. There is Strassen's algorithm, which I have put on the slide; this is why I could not improvise it on the blackboard, so you have to trust me that the rules work. We want to compute the product of these two matrices, and Strassen's algorithm tells you how to do it by performing only seven multiplications; once you have performed the seven multiplications, you only need some additions and subtractions to obtain the product. So what I want you to keep in mind is that there is a way to multiply 2 × 2 matrices using only seven scalar multiplications. Actually, Strassen arrived at this algorithm while trying to prove that you need eight multiplications to multiply two such matrices; in trying to prove that, he ran into this algorithm instead. What I want to explain today is why maybe you should already have guessed that there could be an algorithm with only seven multiplications.

For that, I will now introduce a tensor, and we will see how to interpret this problem from the point of view of tensor decomposition. I consider the space V of 2 × 2 matrices with complex entries; this is a complex vector space of dimension four. My goal is to understand algorithms for computing products, so I consider the linear map V ⊗ V → V that sends a pair of matrices (A, B) to the product AB. We can view this operation as a tensor T in the tensor product V* ⊗ V* ⊗ V; this is one way to interpret the multiplication of two matrices as a tensor. Let me make this very explicit. An indecomposable element of this vector space is given by two linear functionals α, β ∈ V* and a matrix C ∈ V, and it defines the following operation: applied to (A, B), it computes α(A) and β(B), which are complex numbers, multiplies them together, and multiplies the result by the matrix C. So this is, very explicitly, how an indecomposable tensor α ⊗ β ⊗ C defines an operation on pairs of matrices. Now, the multiplication tensor T can be written, for some r, as a sum of r indecomposable tensors of this form, and the question is: what is the rank of T? Roughly speaking, the rank of the multiplication tensor T is exactly the minimal number of scalar multiplications that one needs in order to compute the product.
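Since the slide with Strassen's rules is not reproduced in the transcript, here is the standard version of the algorithm as a short Python sketch. Each product m_i has the form α_i(A) · β_i(B) for linear functionals α_i, β_i, so the seven m_i's correspond exactly to seven indecomposable summands α_i ⊗ β_i ⊗ C_i of the multiplication tensor T, exhibiting rank at most 7.

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using only 7 scalar multiplications (Strassen)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    # the seven scalar multiplications: each factor is a linear
    # functional in the entries of A (resp. of B)
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # from here on, only additions and subtractions
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

assert strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```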
So we have passed from the problem of multiplying matrices to a problem of tensor decomposition, and what I want to explain next is: what is the generic value of the rank of a tensor in this particular vector space? Using algebraic geometry, we will see that this generic value is precisely 7. This could tell you a priori that maybe there is an algorithm that requires only 7 multiplications to multiply 2 × 2 matrices.

Now I want to introduce algebraic varieties in connection with this problem. Let me start with the very general definition of projective space. The projective space of dimension n is the set of one-dimensional linear subspaces of C^{n+1}: we consider the vector space C^{n+1} minus the origin, and we identify all vectors that lie on the same line through the origin. This is how we define projective space, and any nonzero point of C^{n+1} gives a point of the projective space, which we denote by its projective coordinates. Notice that these coordinates are not uniquely defined: if a point is given by certain coordinates, I can multiply everything by a nonzero scalar λ and it still corresponds to the same point.

Inside projective space we consider projective varieties; these are algebraic varieties, subsets of the projective space defined by zeros of polynomials. There is one subtlety. In C^n you can define an algebraic variety as the zero locus of arbitrary polynomials, but in projective space the zero locus of an arbitrary polynomial does not make sense: a polynomial might vanish at one set of coordinates for a point, yet after multiplying everything by λ, which gives the same point, it might no longer vanish, and in general this does happen. What we do instead is take homogeneous polynomials in the polynomial ring. The good thing about a homogeneous polynomial is that if it vanishes on some vector, it still vanishes after multiplying that vector by any λ, so the condition becomes well defined. If I fix homogeneous polynomials and consider all points of projective space at which they vanish, this gives me what we call a projective variety.

I defined projective space as the quotient of C^{n+1} minus the origin by this equivalence relation, but in general one can do this more abstractly for any vector space: given a vector space W, the projective space P(W) is the space of lines through the origin, the one-dimensional linear subspaces; that is, W minus the origin, modulo the equivalence relation identifying vectors that are multiples of one another. So these are projective spaces and projective varieties.
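In symbols (again my own write-up of the definitions just given):

\[
\mathbb{P}^n \;=\; \bigl(\mathbb{C}^{n+1}\setminus\{0\}\bigr)/\sim, \qquad
(x_0,\dots,x_n)\sim(\lambda x_0,\dots,\lambda x_n)\ \text{for } \lambda\in\mathbb{C}^{\ast},
\]

and for homogeneous polynomials $f_1,\dots,f_m \in \mathbb{C}[x_0,\dots,x_n]$,

\[
V(f_1,\dots,f_m) \;=\; \{\,[x_0:\cdots:x_n]\in\mathbb{P}^n \;:\; f_i(x_0,\dots,x_n)=0\ \text{for all } i\,\}.
\]

Homogeneity of degree $e$ means $f(\lambda x)=\lambda^e f(x)$, which is exactly what makes the vanishing condition independent of the chosen coordinates.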
Now let's go back to our problem of computing the complexity of matrix multiplication. Let me remind you: the multiplication of two 2 × 2 matrices can be interpreted as a tensor in V* ⊗ V* ⊗ V, and in this case V and V* are both isomorphic to C^4. So this tensor product has dimension 4 × 4 × 4 = 64, and the corresponding projective space has dimension 63, always one less. The tensor corresponding to matrix multiplication is now a point of the projective space P^63.

We want to compute the rank of this tensor, so the first thing I will do is define a projective variety inside this projective space corresponding to the tensors of rank one, meaning the indecomposable tensors. This is very easy to do. Consider the map P^3 × P^3 × P^3 → P^63 that takes a triple of points, corresponding to three vectors, to their tensor product. The important thing, which I am not explaining here, is that the image of this map, called the Segre embedding, turns out to be a projective variety; in fact one knows exactly the equations, the homogeneous polynomials, that define this projective variety inside P^63. So P^63 should be thought of as the space of all tensors, and inside it we have a smaller variety X, of dimension 3 + 3 + 3 = 9, corresponding to the indecomposable tensors.

Now what about the tensors of rank two? How can we describe them from this point of view? A tensor of rank two can be written as the sum of two indecomposable tensors; that is the point of view of tensor decomposition. From the point of view of algebraic geometry, this says that T lies on the line in projective space joining two points of X. Going further, a tensor of rank r lies on the (r − 1)-plane spanned by r points of X.

Given this, let me define something in general. Here there is a hypothesis, that X is non-degenerate; it just means that X is not contained in any proper linear subspace, and it is not so important here. Let X be a projective variety of dimension n in P^N. We define the d-th secant variety of X as follows: for d general points of X, take the linear subspace they span, which is a P^{d−1} when the points are general; take the union of all these P^{d−1}'s, and then take the closure. This is what we call the d-th secant variety of X. For d = 2, it is just the closure of the union of all lines secant to X, and so on.

We will want to understand the dimension of this variety, and it will become clear why. There is a very natural candidate, called the expected dimension, which is easy to compute. What are we doing? We are choosing d points of X, and X has dimension n, so each choice of a point gives an n-dimensional family of choices; with d of them, that is dn parameters. For each choice of the d points we take a P^{d−1}, which adds d − 1 more parameters. So the expected dimension is dn + d − 1. It is only the expected dimension: in general it need not coincide with the actual dimension of the variety, and we are going to see an example where it does not, but at least it is easy to compute. And of course this count can exceed the ambient dimension, so the right formula for the expected dimension is the minimum of this number and the dimension of the ambient space.
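Writing this out (my notation):

\[
\sigma_d(X) \;=\; \overline{\bigcup_{p_1,\dots,p_d \in X} \langle p_1,\dots,p_d\rangle} \;\subset\; \mathbb{P}^N,
\qquad
\operatorname{expdim}\sigma_d(X) \;=\; \min\{\,dn + d - 1,\; N\,\},
\]

where $\langle p_1,\dots,p_d\rangle$ denotes the linear span of the points and $n = \dim X$.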
Now let's go back to our problem. We had the tensor T corresponding to the multiplication of two 2 × 2 matrices, and we considered the Segre embedding, which realizes the product of three projective spaces as the variety of rank-one tensors in the big projective space. So this is our X, and the dimension of X is 3 + 3 + 3 = 9. Now we consider the secant varieties of this X. The expected dimension of the d-th secant variety is given by the minimum above, and in this specific case it is known that the expected dimension is in fact the actual dimension. Computing with this formula, the dimension of the second secant variety is 19, the third is 29, and so on, until you fill out the whole space with the seventh secant variety. This is what I said before: if you pick a generic point of P^63, meaning a generic tensor in this tensor product, its rank is precisely seven, because the seventh secant variety is the one that fills out the whole space. This is why one maybe should have expected that there could be an algorithm for multiplying 2 × 2 matrices using only seven multiplications. So this is how I wanted to motivate these notions, and the conclusion is that the generic rank is seven.
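A quick check of the numbers just quoted (a sketch; this only evaluates the expected-dimension formula, and the fact that these secant varieties actually have the expected dimension is the known input the talk relies on):

```python
# X = Segre embedding of P^3 x P^3 x P^3 in P^63: dim X = 9, ambient N = 63
n, N = 9, 63
for d in range(1, 8):
    print(d, min(d * n + d - 1, N))
# 1 9, 2 19, 3 29, 4 39, 5 49, 6 59, 7 63 -> the 7th secant fills P^63,
# so a generic tensor in C^4 (x) C^4 (x) C^4 has rank 7
```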
Now I would like to discuss another very interesting example. We start with the vector space V = C^3 and take the tensor product V ⊗ V, which can be interpreted as the complex vector space of 3 × 3 matrices. But I will not be interested in all possible tensors in V ⊗ V: depending on the problem one is working on, one may be interested only in the symmetric matrices. So I consider the second symmetric power of V, the symmetric tensors; in terms of matrices, these are just the 3 × 3 symmetric matrices. This is the space I am interested in, and again I want to compute the generic rank of a symmetric tensor, and again we are going to interpret this from the point of view of algebraic geometry.

So again I need a projective variety parametrizing the rank-one tensors. Take a vector v in C^3, viewed as a column vector. Then v · v^T is a symmetric 3 × 3 matrix of rank one, and in this case the rank of the matrix is precisely its rank as a tensor. So I consider the map taking a vector to this symmetric matrix. The image of this map, called the Veronese map, is also a very classical object in projective geometry, and it is a projective variety in P^5: the symmetric 3 × 3 matrices form a vector space of dimension six, so the corresponding projective space is a P^5. The source is just a P^2, since V has dimension three. So the Veronese map goes from P^2 to P^5, taking the point corresponding to a vector v to the point corresponding to the matrix v v^T. Again this is a projective variety; it is actually easy to write down the homogeneous polynomials that define it in P^5, and it has dimension two, because it is just an embedding of P^2 inside P^5.

Now let's consider the secant varieties of this X. If you compute the expected dimension of the second secant variety of X, it is already five, so one would expect that the secant variety of this surface, called the Veronese surface, is already the whole projective space P^5. But it turns out that this is not true. Let's see why. An element of the second secant variety of X can be written as a sum of two rank-one tensors, that is, as a sum of two symmetric matrices of rank one. But if you add two symmetric matrices of rank one, you get something of rank at most two. So being in the second secant variety of X is equivalent to having rank at most two, and having rank at most two is exactly saying that the determinant of the matrix vanishes. The determinant is a homogeneous polynomial in the coordinates of the point corresponding to the matrix, so the second secant variety is what we call a hypersurface in projective space: a projective variety cut out by a single equation, namely the determinant. And when you cut out by one equation, the dimension drops by one. So the actual dimension is 4, smaller than the expected dimension 5. This does happen sometimes, that the expected dimension is not the actual dimension, and in general it is a very difficult problem to compute the actual dimension of these secant varieties.
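A quick numerical illustration of the rank argument above (my own sketch, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(3)
w = rng.standard_normal(3)

# sum of two rank-one symmetric 3x3 matrices: u u^T + w w^T
M = np.outer(u, u) + np.outer(w, w)

print(np.linalg.matrix_rank(M))  # 2: the sum has rank at most two
print(np.linalg.det(M))          # ~0 (up to rounding): M lies on the
                                 # determinant hypersurface, i.e. on sigma_2
```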
But since this is a basic notions seminar, I do not want only to use this to motivate the definition of projective varieties; I also want to explain why our dimension count went wrong in this case. Why did we compute that the expected dimension is 5 when it is actually only 4? For that, I have to tell you a little more about the Veronese embedding defined above. We have our P^2, which the Veronese map sends to a certain variety X inside P^5. I think there is only one geometric fact you must understand in order to see why the dimension count fails here: if you take a line in P^2, what is its image under the Veronese map? Taking a line means taking a vector and varying its coefficients linearly; when I compute the corresponding point v v^T in P^5, the entries are of degree 2 in those coefficients. So a line is always sent to a conic: the Veronese map takes lines to conics. And this is enough to understand why the count was wrong.

Let's see. Suppose I pick a general point p in the second secant variety of X. What does this mean? It means that p lies on a secant line to X; this is the definition of the secant variety. So we have two points of X determining the line through p; in P^2 they correspond to two points, and I can consider the line joining them. If I take the image of this line, by what we have just seen it is mapped to a certain conic through our two points of X. But now look at the picture: a conic determines a plane. This conic is contained in, and determines, a certain plane, which of course contains the secant line through p, so everything is happening inside this plane. Now notice that any other line through p contained in this plane will again meet the conic in two points: I am working over the complex numbers, and a line in a plane always meets a conic in two points. So we see why we got the wrong dimension: in the dimension estimate, this point should have been counted infinitely many times, because there is a one-parameter family of secant lines passing through it. That is what causes the dimension to drop by one with respect to the expected dimension. What I am trying to illustrate is that the expected dimension being bigger than the actual dimension occurs for geometric reasons; in this case the geometric reason is very easy to describe, and there are many open questions in general about when the expected dimension is not the actual dimension and what geometry lies behind it.

OK, so let me now state a very general problem, generalizing the examples I have described; it is going to be a mix of the two. I fix complex vector spaces V_1, ..., V_k, of dimensions n_1, ..., n_k; for simplicity, let me order them by increasing dimension. For each of them, I consider the space of symmetric tensors of order d_i in V_i, the subspace of the d_i-fold tensor power consisting of symmetric tensors, and I take the tensor product of all these symmetric powers. So this is a mix of the two types of tensors that I described.
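In formulas, the mixed setting just described is (my notation):

\[
\mathbb{P}(V_1)\times\cdots\times\mathbb{P}(V_k)
\;\hookrightarrow\;
\mathbb{P}\bigl(\operatorname{Sym}^{d_1}V_1\otimes\cdots\otimes\operatorname{Sym}^{d_k}V_k\bigr),
\qquad
([v_1],\dots,[v_k]) \;\mapsto\; [\,v_1^{d_1}\otimes\cdots\otimes v_k^{d_k}\,],
\]

whose image is the variety of rank-one tensors in this space; the pure Segre case is $d_1=\cdots=d_k=1$, and the pure Veronese case is $k=1$.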
There is another important type of tensor, the anti-symmetric tensors, and they also correspond to very important projective varieties: these are what we call the Grassmannian varieties; you may have heard of Grassmannians. They appear exactly as the projective varieties corresponding to indecomposable anti-symmetric tensors. But for this talk I want to concentrate on this mixed product, symmetric in the different factors.

So I consider this problem. In this projective space, I can consider the indecomposable tensors, in other words the rank-one tensors, and these are exactly parametrized by the product of projective spaces: I take points corresponding to vectors v_i, take their symmetric powers v_i^{d_i}, and take the tensor product, obtaining a rank-one tensor in this space. Again the image is a projective variety; in this mixed situation it is called the Segre-Veronese variety, and the map is the Segre-Veronese embedding. The problem, which in general is open, is: when is the expected dimension of the secant varieties equal to the actual dimension? This is the problem I have been investigating, and in this generality it is very much open; very little is known.

Let me tell you what is known, just repeating the notation: I fix vector spaces of dimensions n_i, ordered by dimension; I consider the space of tensors that are symmetric in each factor; I consider the Segre-Veronese variety in the corresponding projective space, the variety of rank-one tensors; and I want to understand when the expected dimension equals the actual dimension of the secant varieties of the Segre-Veronese. If I can solve this problem, I am telling you the generic rank, the rank of a generic vector, in any tensor space of this kind. So the problem of computing the rank of a generic tensor of a certain type is rephrased as computing the dimension of a certain projective variety.

So, what is known. The first case is k = 1, meaning there is just one factor: I am looking at the symmetric tensors of some order on a single vector space. In this case I already showed you an example where the expected dimension is not the actual dimension, and here the problem is completely solved, in a very important paper of Alexander and Hirschowitz from 1995. They have a list: in degree 2, the secant varieties are always what we call defective, where we say the variety is defective when some secant variety does not have the expected dimension; and apart from the degree 2 cases, there are exactly four exceptional cases, which they list, and that's it: in all other cases the expected dimension equals the actual dimension. So the problem is completely solved when you have one factor.
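For the record, the Alexander-Hirschowitz list is usually stated as follows (a standard statement, recalled here; the talk only alludes to it): the $h$-th secant variety of the Veronese variety of degree $d$ on $\mathbb{P}^n$ is defective exactly when

\[
d = 2,\ 2 \le h \le n, \qquad \text{or} \qquad (d, n, h) \in \{(3,4,7),\ (4,2,5),\ (4,3,9),\ (4,4,14)\}.
\]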
Now, if you have more factors, the known results cover some very special cases with few factors: for instance, only two factors, one of them of dimension 2, and a few other special cases. The first general result, with no bound on the number of factors, was given in 2003: Catalisano, Geramita, and Gimigliano showed that for tensor products of this type the expected dimension equals the actual dimension of the s-th secant variety whenever s ≤ n_1, where n_1 is the smallest of the dimensions. This was the only general result for an arbitrary number of factors known until recently.

Let me now state a theorem I proved recently with Alex Massarenti and Rick Rischter. We improved this bound, showing that the expected dimension equals the actual dimension of the secant variety in a wider range: the earlier bound was linear in n_1 and did not take into account the degrees of the symmetric powers, while ours takes the degrees into account through an exponent on n_1. So far, taken in full generality, this is the best result for this problem that I know of; of course, if you restrict to special cases of certain types, there are better results. I do not want to explain the proof, but let me just say, connecting back to the earlier example, that the proof is very geometric: we study the osculating spaces of these projective varieties, compute them explicitly, and use them to understand the geometry of the secant varieties. So this is all I had to say. Thank you.
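To make "osculating space" concrete, here is a small computation (my own illustration, using the Veronese surface from the earlier example rather than anything from the paper): at a point of the Veronese surface X in P^5, the tangent space is spanned by the point together with the first partial derivatives of a parametrization, and the second osculating space is obtained by adding the second partials as well.

```python
import sympy as sp

s, t = sp.symbols('s t')
# affine parametrization of the Veronese surface X in P^5:
# all degree-2 monomials in (1, s, t)
phi = sp.Matrix([1, s, t, s**2, s*t, t**2])

point = {s: 1, t: 2}  # any point of the surface works here

# columns spanning the affine cone over the tangent space at the point
tangent = [phi, phi.diff(s), phi.diff(t)]
# adding all second partials gives the second osculating space
osculating = tangent + [phi.diff(s, 2), phi.diff(s).diff(t), phi.diff(t, 2)]

T = sp.Matrix.hstack(*[m.subs(point) for m in tangent])
O2 = sp.Matrix.hstack(*[m.subs(point) for m in osculating])
print(T.rank(), O2.rank())  # 3 6: projective tangent plane is a P^2, while
                            # the second osculating space is all of P^5
```

The fact that the second osculating space already fills P^5 gives some feeling for how much more room osculating spaces provide, compared to tangent spaces, in degeneration arguments.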
[Question from the audience, inaudible.] It is mysterious, right? Yes, let me say where it comes from. I do not expect this bound to be sharp or anything; it is just the way our proof went. Let me try to give you an idea of the best method known before we introduced this approach via osculating spaces. We have this variety X whose secant varieties we want to understand, and there was a method for computing these dimensions: you take general points of X, look at their tangent spaces, look at the linear space spanned by all these tangent spaces, and project from it; then you try to understand the fibers of the projection. That is how people usually approached this problem. The trouble is that you need to consider many tangent spaces, and things easily get out of control: with too many tangent spaces, you cannot control what is going on. So instead of looking at tangent spaces, we look at the bigger osculating spaces. The osculating space has the property that if you degenerate several points to a single point, the limit of the span of their tangent spaces is still contained in the osculating space at that point. Then we apply the same ideas: say that we can degenerate exactly n_1 tangent spaces so that their span is still contained in a suitable osculating space; then we take n_1 points in general position, take their osculating spaces, degenerate again, and we keep going, always in groups of n_1. That is why the log_2 appears there: it is, in a sense, telling us how many times we can actually perform this degeneration.

[Follow-up question, inaudible.] Maybe it is not; it appears technically in the degeneration process. [Another question, inaudible.] That is a good question, and I do not know. I have not looked at them from that point of view, but maybe they could tell you something; I really do not know.