Okay, welcome to the morning session. The first talk today will be about non-negative rank and monotone algebraic branching programs, and Sébastien will give it. Hello. So yes, hello everyone, I'm happy to be here. I will present some work I've done with Hervé Fournier, Guillaume Malod, and Maud Szusterman, who are in Paris. Me, in fact, I'm not in Paris; I'm at the Université Savoie Mont Blanc, still in France, but in a really smaller city. So yeah, I will talk about the relation between algebraic branching programs and rank measures in the non-commutative world, and particularly in the non-negative case, when we don't allow negations. So the plan will be an introduction to what a branching program is, and to earlier work by Nisan, and then the question: can we get a characterization also in the monotone case? So, arithmetic complexity, just to say two words. In arithmetic complexity we are usually interested in problems which can be solved using only additions and multiplications. So if you want to compute some polynomials: I put here some polynomials which are quite usual, familiar to people doing arithmetic complexity, but in fact there are a lot of other polynomials which are very interesting. In arithmetic complexity, in general, we want to look at arithmetic models. We want the basic operations of our computation to be addition, subtraction, and multiplication, and not operations on bits. So in this talk, we will consider one such model, called an ABP, an algebraic branching program. Branching programs already exist in the Boolean world. An ABP is a directed acyclic graph, which is, I think, easier to understand on the figure. It means that it's a graph, there is a source, there is a sink, and it's directed and acyclic, so you can only go from the source to the sink.
And along each edge there is a linear form, which is written on the edge. The value computed at a gate will be the sum over all the paths from the source to this gate, and the value of a path is the product of the linear forms along it. So if you want, at the beginning we compute 1. We look at the first linear form, say x1 + x2, so we compute x1 + x2; here we would compute 2x2, here minus x3. At this gate we compute this plus this, times the next linear form, and so on. Like that we compute all the intermediate polynomials, and we have our polynomial at the end. Something we will use in the following, which is really important: we will consider branching programs which are layered. It means that we can partition the vertices into layers, and every edge goes from one layer to the next one; there are no other edges. In particular, if the program is layered, then each time you go from one layer to the next you multiply by a linear form, so it computes homogeneous polynomials. And in this talk we will always assume that the ring in which we are doing the computation is non-commutative. For our algebraic branching program, this really means that we take the product in order: this factor on the left, times the next one, and so on. Okay. So in fact, non-commutative ABPs are very well understood, because Nisan, already in the 90s, found a very good characterization. This characterization is based on the rank of certain matrices, the matrices M_i(P) of a polynomial P, called the Nisan matrices. The definition is simply this: we take a matrix whose rows are indexed by all the monomials of degree i and whose columns are indexed by all the monomials of degree d - i. And now, what do we put in a cell?
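To make the model concrete, here is a minimal sketch (my own illustration, not from the talk) of an evaluator for layered non-commutative ABPs. A non-commutative polynomial is encoded as a dict from words (tuples of variable names, order matters) to coefficients, and each edge carries a linear form given as a dict from variable names to coefficients; the layer encoding is my own choice.

```python
# Sketch (not from the paper): evaluate a layered non-commutative ABP.

def multiply_by_form(poly, form):
    """Right-multiply a non-commutative polynomial by a linear form,
    appending one variable to each word (left-to-right order preserved)."""
    out = {}
    for word, c in poly.items():
        for var, a in form.items():
            w = word + (var,)
            out[w] = out.get(w, 0) + c * a
    return out

def add_polys(p, q):
    """Coefficient-wise sum of two non-commutative polynomials."""
    out = dict(p)
    for w, c in q.items():
        out[w] = out.get(w, 0) + c
    return out

def evaluate_abp(layers):
    """layers[t] maps (i, j) to the linear form on the edge from node i of
    layer t to node j of layer t + 1; node 0 of layer 0 is the source and
    node 0 of the last layer is the sink."""
    values = {0: {(): 1}}  # the source computes the constant 1
    for edges in layers:
        nxt = {}
        for (i, j), form in edges.items():
            contrib = multiply_by_form(values.get(i, {}), form)
            nxt[j] = add_polys(nxt.get(j, {}), contrib)
        values = nxt
    return values[0]
```

For instance, a two-layer chain with edge labels x1 + x2 and then x1 - x3 computes the homogeneous degree-2 polynomial (x1 + x2)(x1 - x3), with the words kept in left-to-right order as in the talk.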
In the cell indexed by a row monomial u of degree i and a column monomial v of degree d - i, we look at the monomial u·v in our polynomial, take its coefficient, and put that in the matrix. And in fact, Nisan proved (people noticed afterwards that it was already known, in a different notation, to Fliess in the 70s) that the rank of M_i(P) gives exactly the smallest possible width at layer i of any ABP computing this polynomial. For any ABP, the width at this layer is larger than or equal to the rank, and equality can be achieved. And you can also notice that we can do that simultaneously for all layers. The first point only talks about one particular layer, but if we want to compute the smallest ABP, so we want to minimize the sum over all the layers, we can achieve exactly the sum of all the ranks: we can get the minimal width at every layer at the same time. But as I told you, I will be interested in the monotone setting. What we call monotone in this model, in general, means that we don't allow subtraction, so in fact we don't allow negations. We start with only non-negative coefficients, and we allow addition and multiplication but not subtraction. And we would like to see what a monotone ABP would be; the easy definition is that all the coefficients which appear in the linear forms are non-negative. Nisan's result was a characterization of the size of ABPs by ranks, and this is a good thing, because for the rank there is a very natural non-negative analogue: the non-negative rank of a matrix.
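The Nisan matrix is easy to build explicitly. The following sketch (my own illustration, with words again encoded as tuples of variable names) constructs M_i(P) for a homogeneous polynomial of degree d and lets NumPy compute its rank.

```python
# Sketch: build the Nisan matrix M_i(P) of a homogeneous non-commutative
# polynomial P of degree d, given as a dict from words to coefficients.
import itertools

import numpy as np

def nisan_matrix(poly, i, variables, d):
    """Rows are indexed by the words of length i, columns by the words of
    length d - i; the (u, v) entry is the coefficient of the word u.v in P."""
    rows = list(itertools.product(variables, repeat=i))
    cols = list(itertools.product(variables, repeat=d - i))
    M = np.zeros((len(rows), len(cols)))
    for r, u in enumerate(rows):
        for c, v in enumerate(cols):
            M[r, c] = poly.get(u + v, 0)
    return M
```

For example, for the polynomial x·x + y·y the matrix M_1 is the 2 x 2 identity, so by the characterization any ABP for it needs width 2 in the middle layer.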
Just as the rank of a matrix is the minimal number of rank-one matrices whose sum is the matrix, we can look at rank-one non-negative matrices: a rank-one matrix is a column vector times a row vector, and it is non-negative when all the coefficients of the column and all the coefficients of the row are non-negative. And now the non-negative rank of M is just the minimal number R such that we can write M as a sum of R rank-one non-negative matrices. And in fact, already in the same paper, Nisan proved that the characterization layer by layer still works completely for the non-negative rank. It means that the non-negative rank at one layer corresponds exactly to the minimal width of a monotone ABP at this layer. But he did not prove that we can achieve this simultaneously for all the layers. So we can say that the minimal size of a monotone ABP computing P has to be at least the sum of the non-negative ranks, because each layer has to be at least the corresponding rank; but it leaves open whether we can get equality or not. Summarizing: we have a characterization for the minimal size of a branching program, which is exactly the sum of the ranks, and we just have a lower bound for monotone ABPs. And so the question is, can we get equality? We will prove that no, it is not possible: there are cases where the minimal monotone ABP is strictly larger than the sum of the non-negative ranks. More precisely, we will show that there exists a polynomial of degree 3m in O(n) variables such that the sum of the non-negative ranks is about 7mn while the size of the minimal monotone ABP computing it is about 8mn. So we gain an additive term of the order of the number of variables times the degree.
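A non-negative factorization M = W H with W and H entrywise non-negative and inner dimension r is the same thing as writing M as a sum of r rank-one non-negative matrices (the outer products of the columns of W with the rows of H). Verifying a candidate factorization is easy, as this sketch (my own illustration) shows; finding the smallest such r, i.e. computing the non-negative rank, is NP-hard in general, a known fact not discussed in the talk.

```python
# Sketch: check a claimed non-negative factorization M = W @ H, i.e. a
# decomposition of M into W.shape[1] rank-one non-negative matrices.
import numpy as np

def is_nonneg_factorization(M, W, H, tol=1e-9):
    """True iff W and H are entrywise non-negative and W @ H equals M."""
    return bool(W.min() >= -tol and H.min() >= -tol
                and np.allclose(W @ H, M, atol=1e-7))
```

A trivial upper bound: any non-negative matrix M with r columns factors as M times the identity, so its non-negative rank is at most the number of columns.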
To see that, maybe the first thing is to really look at how Nisan's proof works. So what was the argument? The first part, about one layer, is quite immediate, because if the rank of M_i equals R, it means we can write M_i as R columns times R rows; and if I expand each column with the corresponding row, it is really a sum of R rank-one matrices, which is just the definition. Okay, now recall how M_i was defined: its entries are the coefficients of the words u·v. So the polynomial is really the sum, over all u of degree i and v of degree d - i, of M_i(P)[u, v] times u·v. So if I use that and replace M_i(P)[u, v] by the factorization, it becomes a sum over the R terms of C_l[u] times L_l[v]. The part depending only on u and the part depending only on v factorize. And now, immediately, it means we have a sum of R products, and in each product the first factor is a polynomial depending only on the degree-i part, so we can compute it: all the sums of C_l[u]·u here, and all the sums of L_l[v]·v there, and the outer sum has just R terms. And here I put only the one implication, but you can notice that all the implications can be reversed: if you have such a decomposition, you can go back and read off a rank-R factorization, and that means the rank is at most R. So we really have a characterization. Okay, what happens in the monotone case? For this part, exactly the same works, there is no problem. We start with a non-negative rank decomposition, so we just know in addition that the coefficients of the column vectors and the row vectors are non-negative. That means the coefficients C_l[u] and L_l[v] are non-negative, which is exactly what is needed for the two factors appearing in the decomposition to be monotone.
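The chain of equalities in this argument, reconstructed from the spoken derivation (with $C_\ell$ and $L_\ell$ denoting the $\ell$-th column and row factors), reads:

```latex
\begin{align*}
M_i(P) &= \sum_{\ell=1}^{R} C_\ell \, L_\ell^{T},\\
P &= \sum_{\deg u = i}\;\sum_{\deg v = d-i} M_i(P)[u,v]\; u\,v
   = \sum_{u,v}\,\sum_{\ell=1}^{R} C_\ell[u]\, L_\ell[v]\; u\,v
   = \sum_{\ell=1}^{R} \Bigl(\sum_{u} C_\ell[u]\, u\Bigr)\Bigl(\sum_{v} L_\ell[v]\, v\Bigr).
\end{align*}
```

Each of the $R$ products pairs a degree-$i$ polynomial with a degree-$(d-i)$ one, which is exactly width $R$ at layer $i$; and if all the $C_\ell[u]$ and $L_\ell[v]$ are non-negative, both factors are monotone.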
So the first part carries over directly to the monotone case. Okay, but now the second part, where we want all the layers to have minimal size simultaneously. So we start from our matrix M_i and look at one column. One thing to notice: I will always identify a column of M_i with a polynomial in the left variables, since each entry of the column is a coefficient times a monomial u of degree i. A second thing to notice: I said each matrix M_i determines the polynomial, but how do we go from M_i to M_{i+1}? In fact it's immediate, because the coefficient of a word u·y·v read in M_i, at row u and column y·v, is the same as the coefficient read in M_{i+1}, at row u·y and column v. So M_{i+1} contains exactly the same entries as M_i: to go from M_i to M_{i+1}, you just cut each column along its first variable and move that variable to the row side. It is just a reordering of the blocks. Okay, now what do we know? We know that if the rank of the matrix is R, we can extract R columns which generate all the columns; that is the classical property of the rank. And so to construct the ABP, the idea is just to put, at each layer, nodes which compute these R column polynomials. And we can compute them from what we had before: after the reordering, each column of M_{i+1} is a sum, over the next variable y, of the column y·v of M_i multiplied by y on the right, and we have all those columns, because the chosen columns at layer i generate everything. Why does this not work in the monotone case? The proof seems quite immediate, so where would the problem be? In fact, there is a big problem in the monotone case.
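Over a field, a generating subfamily of columns of size rank(M) always exists and can be found greedily; here is a small sketch (my illustration, on an arbitrary rank-2 example). It is exactly this extraction step that has no monotone analogue in general.

```python
# Sketch: greedily extract a column basis, i.e. rank(M) many columns whose
# (field) span contains every column of M.  This is the step of Nisan's
# construction that fails under non-negative combinations.
import numpy as np

def generating_columns(M):
    """Return indices of columns forming a basis of the column span."""
    picked, rank = [], 0
    for j in range(M.shape[1]):
        r = np.linalg.matrix_rank(M[:, picked + [j]])
        if r > rank:       # column j adds a new direction: keep it
            picked.append(j)
            rank = r
    return picked
```

On the example below, the third column is the sum of the first two, so the greedy scan keeps exactly two columns, matching the rank.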
The problem is the start. I claimed that in general, if you have a family of vectors, you can achieve the rank by extracting a subfamily of exactly this size. And that is not true at all in the monotone setting. For example, take R^3 and the hyperplane cutting the three axes, and look at four points A, B, C, D placed roughly in a square position on this plane. What does it mean to be generated by a non-negative linear combination? It means, essentially, to be in the cone of the other points. And here we can notice, for these four points: if we take A, it's not in the cone of B, C, D; if we take B, it's not in the cone of A, C, D; and so on. So with A, B, C, D, if we remove one point, we cannot generate all of them. But the monotone rank of A, B, C, D is three, because there is a generating family of size three: you just have to take these three points S1, S2, S3, and they really generate all four points A, B, C, D. One can also look at what happens already at rank two. Rank one is trivial: there is nothing to do. But at rank two, if a family can be generated non-negatively by two vectors, then the two extremal rays of the family itself already generate all of it. So at rank two we can always extract a subfamily of size two which non-negatively generates the family. Afterwards I could write the formula, but I think it's easier to reconstruct it by looking at the picture as I showed before. Okay, but now we want to prove that in general this is not true. In fact, we'll prove that already at rank three we can get counterexamples. What will our counterexample be?
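The talk does not give coordinates, so the numbers below are one concrete choice of such a configuration (my assumption): four vertices of a parallelogram lying in the plane x + y + z = 1 inside the positive octant. Since every generator has coordinate sum 1, cone membership here coincides with convex-hull membership, and for linearly independent generators it reduces to solving a linear system and checking signs.

```python
# Sketch: none of the four "square" points is in the cone of the other
# three, yet the simplex vertices S1, S2, S3 = e1, e2, e3 generate all of
# them non-negatively.  The coordinates are illustrative assumptions.
import numpy as np

def in_cone(point, generators):
    """Cone membership for full-column-rank generators: the representation
    p = G @ lam is unique if it exists, so least squares decides it.
    (In general one would use an LP feasibility test instead.)"""
    G = np.column_stack(generators)
    lam, *_ = np.linalg.lstsq(G, point, rcond=None)
    return bool(np.allclose(G @ lam, point, atol=1e-7) and (lam >= -1e-9).all())

A = np.array([0.4, 0.4, 0.2])
B = np.array([0.4, 0.2, 0.4])
C = np.array([0.2, 0.4, 0.4])
D = np.array([0.2, 0.6, 0.2])  # D = A + C - B: a parallelogram A, B, C, D
S = list(np.eye(3))            # the three corners of the simplex
```

Removing any one of the four points loses it, while the three simplex corners trivially generate every non-negative vector.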
In fact, we'll build a polynomial H which depends on 10 variables. It will be set-multilinear, which means that every monomial first uses one variable from x1, x2, x3, x4, then one variable y1 or y2, and then one variable from z1, z2, z3, z4. So it will be of degree three in 10 variables. And I want to show that the ranks of the Nisan matrices M_i of H are small: one for i = 0 and i = 3, and three for the intermediate i = 1 and i = 2, so the sum of the ranks is 1 + 3 + 3 + 1 = 8. But the size of the minimal non-commutative monotone ABP which computes this polynomial will not be eight; it will be nine. So we already gain one in this setting, with degree three and 10 variables; afterwards we will seek to amplify this phenomenon. Okay. To build H, let's look at these four vectors. They are quite famous, in fact; they appear quite regularly, because they are probably the simplest family whose monotone rank is four while the rank is three. How can that happen? I didn't put a picture here, and it's always a bit harder to visualize in R^4, but you cut R^4 by a plane given by two conditions: x1 - x2 + x3 - x4 = 0 and x1 + x2 + x3 + x4 = 2. Two conditions in R^4 give us a plane. And these four points are exactly the intersections of this plane with the coordinate hyperplanes: they are the corners you get when you intersect the plane with the non-negative orthant. So, like S1, S2, S3 before, they are the extremal points of the region.
But the rank of this family is at most three, because you can easily get the fourth vector: take these two, add them, and subtract that one, and you get the last one, since A + B = C + D. So the rank is at most three. Okay. So let's look at the matrices. I didn't put a drawing, but maybe it helps to keep in mind that A, B, C, D are really the four intersection points in that plane. So now I look at the matrix M1, with four rows and eight columns: two choices of y times four choices of z. The columns are A, B, (A+C)/2, (B+C)/2, C, (C+D)/2, and so on. What is the idea of this matrix? To generate it, I need to generate A, I need to generate B, I need to generate (A+C)/2, I need to generate (B+C)/2, I need to generate C, and I need to generate (C+D)/2, which is exactly the same thing as (A+B)/2. All of these lie in the cone of A, B, C, so the monotone rank will be three; a small monotone rank is what we want. And look at this matrix, which is M2. There is no choice here: as I said, once you fix M_i, M_{i+1} is fixed; you just reorder the blocks, you put the blocks there. Okay, so what do we know? We know that A, B, C non-negatively generate all the columns of M1: all the points are in the cone generated by A, B, C. The same happens for M2 with AC, BC, CD: we can easily check that they generate everything. So the non-negative rank of M1 is three, and the non-negative rank of M2 is three. And now we want to show that we cannot compute H in size eight. Okay, let's assume we can compute it in size eight. Then, even for a non-monotone ABP, the widths have to be of the form one, three, three, one; we cannot do anything else.
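The slides presumably showed the four vectors; a standard family satisfying the two plane equations above is A = (1,1,0,0), B = (0,0,1,1), C = (1,0,0,1), D = (0,1,1,0), which is an assumption on my part, consistent with A + B = C + D. The sketch below checks that the family has rank three while no vector lies in the cone of the other three, so no subfamily of size three generates it non-negatively; the stronger claim that its monotone rank is four I take from the speaker.

```python
# Sketch: the classic family with rank 3 but monotone rank 4 (the exact
# vectors are my assumption; they satisfy x1 - x2 + x3 - x4 = 0 and
# x1 + x2 + x3 + x4 = 2, and A + B = C + D).
import numpy as np

def in_cone(point, generators):
    """Cone membership when the generators have full column rank: the
    representation p = G @ lam is then unique if it exists at all."""
    G = np.column_stack(generators)
    lam, *_ = np.linalg.lstsq(G, point, rcond=None)
    return bool(np.allclose(G @ lam, point, atol=1e-7) and (lam >= -1e-9).all())

A = np.array([1., 1., 0., 0.])
B = np.array([0., 0., 1., 1.])
C = np.array([1., 0., 0., 1.])
D = np.array([0., 1., 1., 0.])
```

Each vector has a zero where the unique linear combination of the other three is forced to be negative, which is why every size-3 subfamily fails.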
So the ABP computes some polynomials: here three polynomials of degree one, and here three polynomials of degree two. And the idea is to show that P1, P2, P3, these three degree-one polynomials, have to generate all the columns of M1; and in the same way, the degree-two polynomials have to generate all the columns of M2. And what happens is that P1 has to be A, P2 has to be B, and P3 has to be C: there is really no other choice, as we will see. And similarly at the next layer: the first polynomial has to be AC, the second BC, and the third CD. The idea is just simple: we really need to generate all these points, and we are already at the intersection with the non-negative quadrant. So really, to generate A, we need to place the point A itself; to generate B, we need the point B; to generate C, we need the point C. We can take nothing else. So the first layer is immediate. Now the next layer is more subtle, because what happens there? Formally, you can look at this vector space and look at where the zeros are. We have to generate all these columns; we know we start from AC and BC, which we already have; and we can show that C has to be computed from the things at the top, as with A. This fixes the relations used to compute these columns, and as soon as the relations are fixed, we get the same relations below, and so we get the D which is below. So here, we gain one. Now the question is: how can we gain more? In fact, just by amplifying the phenomenon: instead of H, we now take H to the power m, and we sum copies of it over different sets of variables.
One thing which is good in the monotone setting, when we sum over disjoint sets of variables, is the following claim: in the ABP, at any gate, we know in which summand we are. Because at a gate you can look at the variables used to compute up to this gate, and variables from different summands cannot both appear: otherwise there would be a path which uses the variable set of one summand on the left and of another summand on the right, and since there are no cancellations, we would not be able to get rid of the resulting monomials, which we don't want in the final polynomial. So it's not possible. So already, if you have a sum over disjoint sets of variables, the ABP essentially splits into separate parts for the different terms of the sum. And now, as we are non-commutative, for H to the m the computation has to follow the order: the first occurrence of H, then the second occurrence of H, and so on. So we gain one for each of the m factors, and this for each of the n terms of the sum; this way we get something in m times n. And now, to finish: it was easier to present the proof in this form in the time available, but in fact we prove it for a version which is a bit harder. In the non-commutative world there is a model which is already really interesting, where we just don't allow cancellations: we require that every monomial which appears somewhere in the computation is a monomial of the target polynomial at the end, but we now allow negations, that is, negative numbers. This is called weakly-monotone. And even in this setting we can get the best of both worlds, in the sense that we know there is a polynomial for which any weakly-monotone ABP is large: even if we give the ABP more power, it stays large.
And here, for the lower bound, we still use, in fact, the same polynomial, with the monotone rank and the stronger definition of the rank. Question: so you separate the sum of the non-negative ranks from the monotone size; is it possible that it is still a constant-factor approximation of the weakly-monotone size? Answer: yeah, I thought about it recently, and in fact I don't know; it is a good question. Maybe there is a simple answer; in fact, we didn't think about that when we did this work. Question: and there is an exponential separation between non-negative rank and rank. Answer: yeah, I wouldn't expect that here; I expect that we can approximate in some cases, but in fact it's not even clear how to do it. Question: with the same ideas, can we upper bound the separation between the monotone rank and the rank? Answer: I don't know, in fact. I cannot say anything non-trivial, but probably there is something. Honestly, we didn't think about that while doing the work, and at some point recently people asked me and I didn't even know where the obstacle is, or at which point it already fails completely. We were really focused on the characterization question, on the fact that it is not a characterization, and we didn't look at exactly what approximation error you would get. Question: this non-negative rank is often used to capture things like extension complexity and so on; are there suitable analogues of that here? What does the sum of non-negative ranks capture instead, if not monotone ABP complexity? Answer: ah, the sum of non-negative ranks... For me, I don't know a direct relation between, for example, extension complexity and this kind of complexity. The main point, in general, is that the proofs used to get lower bounds are quite similar.
In fact, you look at some matrices, and often the idea, to lower bound the non-negative rank, is to say that the matrix cannot be covered by a few big rectangles: there are a lot of zeros, so the non-negative rank cannot be too small. And this is the same kind of argument as in communication complexity and in extension complexity. So the relation, in general, is mostly that the tools for the proofs are quite similar, but beyond the proofs I don't know exactly how to relate the two things. Thank you.