Is it on? So we are going to start with our basic notions seminar. It is a great pleasure and a great honor for me to introduce Alicia Dickenstein. She is from the University of Buenos Aires, and she is going to talk about a multivariate Descartes rule, which is still far away. Thank you very much. Thank you. Okay, this is meant to be elementary; I hope I can do that. So I will remind you of Descartes' rule of signs, and then I will try to present two multivariate generalizations. This is a rule for polynomials in one variable, to count the number of positive roots. And then I will try to summarize and show you how wide open the complete generalization is, even if it is very easy to state. So Descartes, in 1637 — this is long ago — wrote an appendix called La Géométrie to his book, which is mainly philosophy, and the book is available: you go to Gutenberg.com or something like this, you can download the book and see for yourself. And the rule is very, very simple. It says the following: suppose you have a polynomial with real coefficients. Then the number of complex roots is equal to the degree — I am not saying it on this slide, but we count with multiplicity — so if c_r is non-zero, the number of complex roots is r. But the number of positive real roots, which I will denote with this symbol, is bounded by the number V(f) of sign variations in the ordered sequence of coefficients. This means: for each of these numbers, I take the sign. For instance, in this case c_0 has some sign, this 3 is positive, minus 90 is negative, 2 is positive and 1 is positive. So this would be my list of signs. I order the monomials, write down the list of signs, and count each time there is a sign change. Here, from plus to plus, I don't count; here is one variation, here another variation; and here it depends on the sign of c_0.
So there could be two or at most three sign changes. This means that this polynomial, independently of the degree — this could be any number — and independently of how many zero coefficients you have, cannot have more than two or three positive roots. It is so simple that it is amazing that we cannot generalize it. Well, maybe because it is too simple. And there is something else that is easy to prove: these two numbers, the true number of positive roots and this upper bound, have the same parity. So for instance, if c_0 is minus one, I get three sign variations. This means my polynomial has either three or one positive roots; I cannot decide between them with this, but it is quite accurate. And in general it is a sharp bound, because if one picks a polynomial with all roots real, then it is an equality. In fact, for his proof in 1637, there was no field of complex numbers — complex numbers were not there; they were just imaginary things that, at most, did not exist. So for him all the roots were real, and in that case it is an equality. One important case in which you have all roots real is the characteristic polynomial of a symmetric matrix: you do this trivial count and you get exactly how many positive eigenvalues you have, and how many negative. But if you want to count the number of negative roots, what do you do? You could count the positive and zero roots and subtract — but no, you cannot subtract, because you don't know how many real roots there are. So how do you count the negative ones? You take f(−x) and count its positive roots; the coefficients of the odd exponents change sign. And one consequence of this, which is important, is that the number of positive real roots is independent of the degree; it depends on how many non-zero terms you have. If I don't have many non-zero terms, I cannot have many positive roots.
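Since the rule is purely combinatorial, it is easy to make concrete. Here is a minimal Python sketch; the function name and the convention of ordering coefficients from the constant term up are my own choices for illustration, not something from the talk:

```python
def sign_variations(coeffs):
    """Count sign changes in a coefficient sequence, skipping zero entries."""
    signs = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

# The polynomial from the slide, coefficients ordered c0, 3, -90, 2, 1:
print(sign_variations([-1, 3, -90, 2, 1]))  # c0 negative: 3 variations
print(sign_variations([5, 3, -90, 2, 1]))   # c0 positive: 2 variations
```

Descartes' rule then says the polynomial has at most 3 (respectively 2) positive roots, and the true count has the same parity as the bound.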
For instance, take x^n − 1: there are n complex solutions, all the roots of unity, but the coefficients are just 1 and −1. The rule says I cannot have more than one positive root, and indeed the only positive root is 1. Okay. He, of course, did not prove the result in general. But we can prove it very easily — you can do it on your way home in the bus, or, unless you are driving, on your way home. The main point is Rolle's theorem. What does Rolle's theorem tell you? It tells you that between two consecutive roots of f there has to be a root of the derivative. So I have two roots here, and a point with horizontal tangent in between; two roots, another one: f has three real roots and f′ has two. And if you count with multiplicity, this is also the case. Of course, we can just displace the graph like this — we add a constant, so we don't change the derivative. In all these three cases the number of roots of the derivative is two, but here I have only one positive root, here I will have only one negative root and no positive one, and here I have three. And then the main thing about Rolle's theorem is that the number of positive roots of f′ is at least the number of positive roots of f minus 1, which says that you can bound the number of positive roots of f knowing how many positive roots your derivative has. So it is very easy to do a proof by induction — induction on the degree: you differentiate, you assume the statement holds for f′, and you go up using this inequality. Okay, so I spoiled the proof, because that is essentially all of it; you have to deal a little bit with what happens at the boundary, but it takes one minute. Okay. So how could one phrase a multivariate rule? One way of doing this is the following: we pick n polynomials in n variables, because we want to have a finite number of solutions.
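The x^n − 1 example can be checked numerically — a quick illustration of the fewnomial phenomenon using numpy, not part of the talk itself:

```python
import numpy as np

# x^7 - 1: numpy.roots takes coefficients from highest to lowest degree
n = 7
roots = np.roots([1] + [0] * (n - 1) + [-1])
positive = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
print(len(positive))  # one sign variation in (1, -1), hence one positive root
```

There are n = 7 complex roots, but the two-term coefficient sequence allows only one sign variation, so only one positive real root, namely 1.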
To count, we need a finite number of solutions. In one variable, one polynomial is enough — you can have a polynomial with a single positive root — and in general you expect n polynomials in n variables. I am going to call r + 1, as before, the number of monomials, and I consider a system of such polynomials. The input of this system is two matrices. One is the matrix of exponents, which I am going to call A: the j-th exponent vector is the j-th column of my matrix. So I assemble the exponents as the column vectors of this matrix, and the coefficients in the other matrix. I have two matrices of the same size; both have n rows and r + 1 columns. In principle A has non-negative integer entries, but really, if you are going to look for solutions in the positive orthant, you can allow real exponents. So we have these two matrices of the same size, and the question is the following. I will assume that the rank of A is n, otherwise you will probably have an infinite number of solutions; I also have to assume that C has maximal rank, otherwise you have fewer than n polynomials, but this is minor. Given these two matrices, how does one find a simple rule, in the spirit of Descartes' rule, that gives a sharp upper bound for the number of positive solutions — solutions x all of whose coordinates are positive — of this system? This would be the goal. Now, the thing I remarked — that Descartes' rule says the number of positive roots depends on the number of monomials and not on the degree — there was a conjecture that this holds in any number of variables, and this was indeed proved by Khovanskii in the 80s. It was really a breakthrough; but he only takes into account the exponents, and he gives a bound in terms of the number of monomials which is really non-sharp.
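Concretely, the pair of matrices (A, C) determines the system as follows — a small numpy sketch, where the function name and the toy matrices are my own illustration:

```python
import numpy as np

def evaluate_system(A, C, x):
    """f_i(x) = sum_j C[i, j] * x^(A[:, j]), where the j-th column of A
    is the exponent vector of the j-th monomial."""
    x = np.asarray(x, dtype=float)
    monomials = np.prod(x[:, None] ** A, axis=0)  # the values x^(a_j)
    return C @ monomials

# n = 2 variables, r + 1 = 3 monomials: x, y, x^2 y
A = np.array([[1, 0, 2],
              [0, 1, 1]])
C = np.array([[1.0, -1.0,  0.0],
              [0.0,  2.0, -1.0]])
print(evaluate_system(A, C, [1.0, 1.0]))  # [0. 1.]
```

Both matrices have n rows and r + 1 columns, exactly as in the talk; the positive-solution question is about the zero set of this map on the open positive orthant.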
It's incredibly big, but it is fantastic that such a bound exists at all. Then in '96 — already 20 years ago — Ilia Itenberg and Marie-Françoise Roy proposed a conjecture for what the multivariate Descartes rule should be. In fact the conjecture was disproven quickly, but they gave a lower bound for the upper bound: they constructed systems with that many solutions, so the upper bound should be at least this number. The conjecture was that it is the upper bound, but this was not the case. As I said, I will present two partial generalizations, and the full generalization is really open, so you can work on this. The first generalization is joint work with all these people; we met at Dagstuhl, which is like an Oberwolfach for computer science. We were all working on applications to the biochemical reaction networks I talked about the other day, and we realized that in many applications of mathematics — even in economics or wherever — people are trying to understand how many positive roots there are, and that many different theorems were being used in the sciences. So we asked ourselves why, and we gave some theorems that somehow abstract all of this. What we can answer is to give conditions that imply that there is at most one positive root. And then the other generalization is something I did with Frédéric Bihan; he is at the University of Savoie, in France. What we did is, if you wish, the easiest non-trivial case. The preceding case is this: if you have n + 1 affinely independent monomials in n variables, they are like the vertices of a simplex, and this is essentially a linear system, so it is obvious. The first interesting case is n + 2 monomials in n variables; this is called a circuit, with a tiny detail that I will define. And I will try to show you that even this simple case is so complicated that we understand why it is difficult to generalize. The input — I call it generalized because I
assume that the matrix of exponents is as before — again I have two matrices of the same size, though here the size is going to be n × r and not r + 1; you will see why in a second — but I also have r positive parameters. Just let me go back one second. One sees — I had seen Descartes' rule of signs many times, and it seems to say something about your polynomial. It does not: it says something about the whole family of polynomials with the same signs of the coefficients as your polynomial. You don't care about the actual values of the coefficients; what you care about is their signs. This means that if you take your polynomial and multiply all the coefficients by positive numbers, the answer is the same. So in fact you are talking about this family, and this is what we are doing: we scale by fixed positive constants, keeping the signs of the coefficients — more than that, keeping the coefficients up to positive scaling. So now, a trivial manipulation: if I want to write this equal to y, I have the polynomial f_{κ,i} equal to y_i for each i; I can move everything to one side and set it equal to 0, and now I have a new monomial, the constant term, so now I have r + 1 of them. But deciding that the number of positive solutions of this system is at most 1 means that, given y, there is at most one x such that f_κ(x) = y. So it is the same thing as saying that the map f_κ is injective. Okay, so in the statements I am going to show you we will talk for a while about injectivity, but injectivity of f_κ is the same as saying that this system of equations has at most one positive solution. Okay. We are going to call σ the componentwise sign of a vector; for instance, σ(1, 2, 0, −2) is the vector (+, +, 0, −) shown here. And then the theorem says the following — this is a simplified version, but it is enough for all purposes. The following statements are equivalent: the map f_κ is
injective — and here the red part is the main point: not injective for a fixed choice of coefficients, but injective for any choice of the positive constants — if and only if this condition holds. Let me try to parse for you what is said here. First, this is implied by the fact that the rank of the matrix is n: that implies that the kernel of the transpose is 0, so that part is a statement about the rank. But the condition says the following. Consider the image of A transpose — A is a matrix, and for both A and C, in this whole story, we are assuming that r is at least n, otherwise having so many monomials doesn't make sense; in fact r is even bigger. So I have this matrix, which is n × r, and the transpose is r × n. The kernel of C and the image of this transpose map — its column space — both lie in R^r. So we have subspaces: a subspace lying in the plane could be like this, or like this, or any of these two. Each such subspace hits, or meets, some orthants and not others. What is an orthant? A choice of sign for each coordinate — so with my definition an orthant is indexed by a sign vector in {+, −, 0}^r; here n is 2: both coordinates positive, one positive and one negative, whatever. So you have a subspace; it will hit certain orthants and not others — your subspace always goes through zero. The matrix C is the matrix of coefficients — sorry, I didn't say it: our equations are Σ_j c_{ij} x^{a_j}, where C is the matrix of coefficients, i goes from 1 to n, the number of equations, j runs over the monomials, and the a_j are the columns of A. So what the condition says is: the map is injective for all κ if and only if these two subspaces don't
intersect the same orthant, except at the origin: they have to live in different orthants. (When the intersection is zero, that is just the rank condition I mentioned.) This condition depends, as I said, on the orthants that they meet. There is a case in which we can certify that there is at most one positive root for every κ: A equals C. It could happen that A equals C, and if this happens and the matrix has rank n, the condition is satisfied. As I said, we need to look at this condition: we have A, and here we have A transpose. Could some vector in the image of the transpose be in the kernel of the matrix? If we take a vector v which is in the kernel but also in the row span of A, then v has to be orthogonal to itself, so unless the vector is zero this cannot happen. So if A = C the condition is certainly satisfied, and if we pick as the matrix of coefficients the same matrix of exponents, there can be at most one positive root. This is in fact the content of a famous theorem in statistics, called Birch's theorem. Okay, now let me be more explicit about what is in there — where did we take these conditions from? In fact there is a third condition, which says that the Jacobian is invertible — sorry, which says that these two conditions are equivalent to the fact that the Jacobian determinant is non-zero, and in fact this is the way the proof goes. But here the quantifiers are crucial: it says that f_κ is injective for all κ if and only if the Jacobian determinant is non-zero for every κ and every x. In the orthant condition there are no κ's, no x's, nothing. So let me try to show you what the statement says. Over the complex numbers, locally, a holomorphic map is invertible if and only if the derivative is
non-zero. But over the real numbers this is not true: just take x cubed — a perfectly injective function, and yet the derivative vanishes at one point. So vanishing of the derivative does not imply that the function is not injective. But what the theorem says is the following — and this is where the κ's play a role. The map might be injective, but if the Jacobian vanishes at some x, injectivity cannot hold for all κ. This means there is a choice of positive numbers such that, if I multiply the coefficients of my polynomials by these positive numbers, the system will have more than one positive root. It is a statement about the whole family, not a bijection for each member. But what is nice about this statement is that it depends only on the combinatorics, or the linear algebra, of the exponents and the coefficients. So from this we can deduce what I call the first part of a multivariate Descartes rule. Now we put one polynomial under the other, as before, and I can decide the following: given the matrix of exponents, the matrix of coefficients, and a y in R^n, I have this condition — but I want a way of checking it. A way of doing that is the following. You compute all the maximal minors of this matrix and of that one. If it happens that the corresponding products of minors all have the same sign, or all the opposite sign, and at least one is non-zero — so there is a choice of columns where both minors are non-zero — and if you look at any other pair, the products always have the same sign, then, if you check this, it is equivalent to the condition that I had. So: if I have two matrices, and each product of corresponding maximal determinants is either zero or has the same sign — meaning, among all non-zero such products, if one is positive, they are positive all the
time — and moreover at least one is non-zero — then, for any choice of the positive constants, there is at most one positive solution. Now, there are some signs here, the signs of the minors. [Question from the audience.] If the Jacobian is non-zero the map is locally injective; if it is zero, in the real case you cannot deduce anything pointwise. What you can deduce is: if it is zero, then for some coefficients with the same signs the map won't be injective. The inverse function theorem? No — there is no inverse function theorem here; a non-zero Jacobian only gives local injectivity. There is the Jacobian conjecture, but it has not been proved in the complex case, and the real analogue is false. Over the complex numbers, the Jacobian conjecture says that if you have a polynomial map whose Jacobian is a non-zero constant, then the map is invertible and the inverse is also a polynomial. The polynomial hypothesis is essential: take the function e^z, the exponential, whose derivative is never zero; it is locally invertible everywhere, but not globally. So it is a strong assertion about polynomial maps. Now, the question is: okay, there can be at most one positive solution — when is there exactly one? Before looking at my slides, look at me for a second. Let me write the whole vector f. It says the following: I take the matrix C, I multiply by the vector (x^{a_1}, …, x^{a_r}), and I get the vector y. This means that the vector y is this number times the first column of my matrix, plus this number times the second column, all the way: this vector is a linear combination of the columns of my matrix, and the coefficients are the monomial values. If there is a positive solution, these are positive numbers. So if I call C_1, …, C_r the column vectors, this means that if y is in the image, it has to be a positive combination of the columns. Positive combinations form what is called the cone generated by the columns, so a necessary condition for y to be in the image is that y lies in this cone — that it is
a positive combination of the columns; the coefficients μ are exactly those monomial values. In fact, we need to add a couple of hypotheses, and then the converse is true. The hypothesis is — I say "similarly oriented matrices" to be quick — that the corresponding minors of A and C always have the same sign; in the previous statement it was a little more general, one could be zero and the other not. But here: if all the corresponding minors of A and C have the same sign, and the column vectors of A lie in an open half space, then as soon as y could be in the image, it is. This statement uses degree theory; we could prove it in a quick way because a previous paper had already used degree theory — but this part is not so basic. Okay, now let me tell you what we did for circuits. So now we have n + 2 monomials; my r is n + 1. I have n polynomials with n + 2 monomials, and the indices go from 0 to n + 1, for some historical reasons. Then we have the exponent matrix, with columns indexed from 0 to n + 1 — it has n rows and n + 2 columns — and the coefficient matrix of the same size, with real entries, and we assume that both have full rank n. So the question is — we are going to write n_A(C), where A is the exponents and C the coefficients — once we fix A and C, to understand the number of positive solutions of this system in terms of the sign variation of some sequence that we have to determine. We don't know it for the moment, but this is the problem. So instead of considering just A, we are going to enhance A: we consider A-bar, which means that we add a row of ones. Why do we want a row of ones? Let me stop for a second.
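The minor condition described a moment ago is directly checkable. Here is a small sketch, assuming as in the talk that A and C are n × r with r ≥ n; the function names and the example matrices are mine:

```python
import numpy as np
from itertools import combinations

def minor_products(A, C):
    """det(A_I) * det(C_I) over all n-element column subsets I."""
    n, r = A.shape
    return [np.linalg.det(A[:, I]) * np.linalg.det(C[:, I])
            for I in combinations(range(r), n)]

def at_most_one_positive_root(A, C, tol=1e-9):
    """True if all non-zero products share one sign and at least one is non-zero."""
    nonzero = [p for p in minor_products(A, C) if abs(p) > tol]
    return bool(nonzero) and (all(p > 0 for p in nonzero)
                              or all(p < 0 for p in nonzero))

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])
print(at_most_one_positive_root(A, A))  # True: the A = C (Birch) case
```

With A = C every product is a squared minor, so the condition holds automatically, matching the Birch's theorem remark above.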
Before we wrote the statement that I am going to tell you in a while, I decided to stop and think what this condition should depend on. If I have a system like this, with polynomials like this, does it really depend on the actual values of the c_{ij}? The answer is no: it only depends on the row span of the matrix C, on the linear space the rows span, because if instead of my polynomials I pick generic linear combinations of them, I get the same result — say, instead of writing f_1 and f_2 I write f_1 and f_1 + f_2. So it is really something that should depend on the vector space generated by the rows of my matrix C. As such, a way of picking a subspace is picking a point in a Grassmannian, and the way of embedding the subspaces of a given dimension — here the subspaces of dimension n in R^r — is by the minors of the matrix. So it should depend on the minors of C, and not on the particular coefficients. That was the first observation. And what happens with A? With A, it should depend on the affine structure, not only the linear one. Linearly: a linear change in the exponents would just be a monomial change of variables. But also, since we are now looking at the equations set equal to 0, and I am looking for x all of whose coordinates are non-zero and positive: if I multiply an equation by any monomial whatsoever, I get the same positive solutions, because a monomial does not vanish there. So I could add a constant vector to all the exponents — the condition should be invariant under translations. If you have three points, it doesn't matter whether they are linearly independent; what matters is whether they lie on the same line. What matters is the affine geometry of your exponents, not their linear geometry. And adding these ones — sorry, I deleted it, but adding the row of ones to my matrix A — this matrix
has the property that the kernel of this matrix consists of the affine relations among the columns. Affine relations are linear relations whose coefficients add up to 0, and these are the things that matter. So the answer should depend on the affine structure of A, that is, on the minors of this matrix A-bar. Whatever the answer is, it has to depend on the minors of C and on the minors of A-bar — this was the first thing that we thought, though we didn't yet know how. So we take this matrix, and now we have an (n + 1) × (n + 2) matrix instead of n × (n + 2), and it is full rank, so the kernel has dimension 1. There is a generic way to pick an element in the kernel of a matrix of corank 1: you skip each one of the columns in turn, you get a square matrix, you compute its determinant, and you put it in with an alternating sign. This gives an element of the kernel, so there is a generic way of solving such a system. So what it says here is that this matrix has a kernel of dimension 1, and the generator of the kernel is, up to sign, the vector obtained from the determinants where you skip the first column, then the second, and so on.
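This Cramer-style recipe for the kernel generator is easy to code — a sketch with my own indexing convention, λ_j = (−1)^j times the maximal minor omitting column j:

```python
import numpy as np

def kernel_generator(M):
    """For an m x (m+1) matrix M of full rank m, the vector with entries
    (-1)**j * det(M with column j deleted) spans the 1-dimensional kernel."""
    m, cols = M.shape
    assert cols == m + 1
    return np.array([(-1) ** j * np.linalg.det(np.delete(M, j, axis=1))
                     for j in range(cols)])

M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
lam = kernel_generator(M)
print(np.allclose(M @ lam, 0))  # True: lam really lies in ker(M)
```

Applied to A-bar, this produces the vector λ = (λ_0, …, λ_{n+1}) of signed minors that the rest of the talk works with.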
And you compute all these minors: this is the generator of the kernel. Okay. So we are going to assume that this is a circuit. Circuit means that all the λ_j are non-zero, because otherwise some of the points would lie on a common hyperplane — we want every proper subset of the points to be affinely independent, and this is exactly the condition that all λ_j are non-zero. If you have four points — so n + 2 = 4, four points in the plane — there are essentially two different configurations: either they are the vertices of their convex hull, or one point is inside the convex hull of the others. These are the exponents, and we are going to see that the number of positive roots here is more restricted than there: the relative positions of the exponent points will bound the number of positive roots. And if I start moving this point up towards here, when this point gets here one of the minors of A-bar will have to be zero; if I move it past, a minor that was positive becomes negative, or the other way around. So you cross from this configuration to that configuration, and at some moment one minor becomes zero: the minors of this matrix tell you the relative positions of the points in space. As before: if there is a positive solution to this equation, with zero on the right-hand side, then zero has to lie in the positive cone spanned by the columns of my matrix, because such a solution gives the coefficients of the linear combination. So this is a necessary condition for the existence of at least one positive root, and we will assume it. But what is interesting is that we will see what explains why we could work out the case of n + 2 monomials. The matrices I have are of size n × (n + 2) and the rank is n. What is the dimension of the kernel of C? Two. So I have here a basis of the kernel; this is a matrix of size n plus
2 — that is, (n + 2) × 2 — and its two columns form a basis of the kernel. This is what is called a Gale dual matrix: C times it is zero. But now, instead of looking at the columns, I look at the row vectors. How many row vectors do I have? n + 2. These row vectors, which I am going to call p_0 up to p_{n+1}, are called the Gale dual configuration of the column vectors of my matrix. I can look at the columns of my matrix, or, dually, at the rows of this matrix whose columns form a basis of the kernel. Okay, what happens if I do this? I am going to call p this configuration, and what is easy to prove — this is well known — is that the condition that zero lies in the positive cone of the columns is equivalent to the fact that these row vectors lie in an open half plane. This is the case, for instance, here: all of them lie in this open half plane. They could be linearly dependent, but they cannot be on both sides: if they are on both sides, then zero is not in the positive cone and there is no positive real root. If there is a positive real root, then they lie on rays like this — and in this picture there are four different rays: p_0 and p_1 are here, p_2 there, and p_3 to p_6 here, so there are four different rays containing them. This number, the number of rays, I am going to call k: here there are eight vectors, numbered from zero to seven, and k is the number of rays containing the p_i. And we will see that the theorem says there cannot be more than k − 1 positive roots: if there are four rays, the maximum number is three. So the theorem is a little bit complicated; let me try to parse it for you. We assume that the rank of this matrix A-bar is n + 1, as I was saying, the rank of C is n, and zero lies in the positive cone of the columns, so the necessary condition is satisfied. We call k the number of different rays containing the p_i. And then we are going to number all the points, to
number the rays — we number the rays, say counterclockwise: first, second, third, fourth; we need to order them this way — but we also number the points compatibly: the points on a later ray get larger numbers than the points on the earlier rays. So I number the points according to this ordering of the rays. We do this, and then here is the theorem; let me try to read it for you. It says: the number of positive roots of the system of n polynomials with exponents in A and coefficient matrix C is at most the minimum of two numbers. One is the sign variation of a sequence that I am going to tell you in a second; this sequence has k elements, so the sign variation is at most k − 1. The other number that you see there is the normalized integer volume of the convex hull — it is like the Bézout bound: it is the BKK bound, the bound for the number of complex solutions of the system. I am not going to put my finger on this now, but: you take the convex hull and, when A spans Z^n, you take n! times the Euclidean volume, and this bounds the number of complex solutions. A particular case of this is the classical Bézout bound: if you have polynomials of degrees d_1, …, d_n, the generic number of solutions is d_1 d_2 ⋯ d_n; when you have sparse polynomials, with few monomials, the BKK bound is the relevant one. But let's forget about this; it is another bound, which matters when it is smaller than the first. So the bound is the sign variation of a sequence of k numbers, where k is the number of rays. What I wrote here is what the lambdas are. I was calling λ_0, …, λ_{n+1} the signed minors from before — the generator of the kernel — and what this says is that λ-bar_{α_i} is the sum of the λ_j over all j such that p_j lies on the i-th ray. So these are linear combinations of those elements — which linear combination?
These linear combinations are determined by the vanishing minors of C. I didn't say it, but when two p's lie on the same ray you can see it on C: some minors of C vanish — there is a perfect translation between minors of C and minors of this Gale dual matrix with the p's. So these are linear forms in the λ's that come from the dependencies in the matrix of coefficients, from its zero minors. And moreover, the order matters: you can have this sequence or that sequence — this one has sign variation 1, that one has sign variation 2 — so it depends on the order in which you put them, and the order you have to use is the same order in which you take the rays in Gale dual space. C is not visible in the statement directly: which elements you add in the kernel of this matrix, in which order and in which sums, is prescribed by the linear dependencies in the matrix C. So you see it already. In general this k is going to be n + 2, so the maximum possible sign variation is n + 1 — but the other bound also plays a role; you cannot have more than either of them. We will see that in the plane the maximum number is 3, but if you take, for instance, the unit square, its volume number is equal to 2, so in that case the first bound is more than the second. In general, if your polytope is sufficiently big, it is the sign variation that decides. Okay, I am not going to tell you how we prove this — it is not very complicated, but there are many, many details. The reason we could prove it is this: we pass to the Gale dual side, where we have two variables, we de-homogenize, and we land in one variable. And in one variable we use a kind of version of Descartes' rule: the rule holds not only for monomials — a polynomial is something you write as a linear combination of monomials, but you can write a linear combination of some other functions, and if these functions satisfy certain properties, you can also read the
Descartes rule from there. This is in a book by Pólya and Szegő. So essentially we go back to the case of one variable, but there are lots and lots of tiny details, and they are quite complicated. Also, we immediately pass to the Gale dual and dehomogenize, and we had the proof; still, it took me several months to write it down, because I saw the proof and understood it locally, but I did not really understand it, so I could not write it. In fact, in the way the paper is written we go back to the minors of C: we use the Gale dual in the proof but not in the statement. The statement is about C, because with C you do not make any choices. Without the Gale dual you cannot prove it, but it is the way of understanding what is happening, and until I really understood it I could not write it well. OK, so for n = 1 we recover the known rule; I won't do it, but you can believe me. As I said, the number is at most k, which is at most n + 1. And then one question is: is this a sharp bound? When is the bound attained?
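Since the one-variable case was just recovered, here is a minimal sketch of the classical sign-variation count, in Python; the function name `sign_variations` is my own choice, not from the talk. It counts sign changes in the ordered coefficient sequence, skipping zeros, which by Descartes' rule bounds the number of positive real roots (with the same parity).

```python
def sign_variations(coeffs):
    """Count sign changes in a coefficient sequence, ignoring zero entries."""
    signs = [1 if c > 0 else -1 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# f(x) = x^3 - 4x^2 + x + 6 = (x + 1)(x - 2)(x - 3),
# coefficients ordered by increasing degree: 6, 1, -4, 1.
print(sign_variations([6, 1, -4, 1]))        # -> 2, and f has exactly 2 positive roots

# A sparse example: x^100 - 1 has one sign change, hence at most
# (here exactly) one positive root, independently of the degree.
print(sign_variations([-1] + [0] * 99 + [1]))  # -> 1
```

This also illustrates the remark from the introduction: the bound depends only on the number and signs of the non-zero terms, not on the degree.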
So I need to introduce one further definition, but this is easy: it is called the signature of a circuit. For instance, if my points are like this, it means that a positive combination of these two points is a positive combination of the other two. Say the points are labelled 0, 1, 2, 3; then if you look at the vector in the kernel, the entries at 0 and 2 have one sign and the entries at 1 and 3 have the opposite sign: two pluses and two minuses, or two minuses and two pluses. While if my configuration is like this one, with one point inside, then that point is a positive combination of the other three, and the kernel vector has signs plus, minus, minus, minus, or the other way around. So this is the difference: here one plus and three minuses, there two pluses and two minuses, and this is what distinguishes whether the point is inside or outside. The signature is the pair (number of pluses, number of minuses), with the smaller number first: in this case it is (2, 2), in that case it is (1, 3). You look at the signs of the determinants of A-bar, which tell you the relative positions, and you count. And then the easy consequence is that if the signature is (a, b), with a the smaller one, then if a is not equal to b the number of positive solutions is at most 2a, and if a = b it is at most 2a − 1. Which means that in this case I can have up to three, and in that case there cannot be more than two, for any n: in any dimension, if my points are affinely independent vertices and I pick the last point inside the convex hull, I cannot have more than two positive real roots. OK, so what about the optimality of the bounds? In fact we prove, not that the bound can be attained for every A, but that given n and a signature there is a way of choosing a configuration so that the bound is attained. It uses a rather complicated construction called Viro's patchworking, which comes from tropical geometry. This is again kind of standard, but
it's tricky; it is an ad hoc construction. Then another corollary is that if the number of positive solutions is the maximum possible, which is n + 1, then all the maximal minors of C have to be non-zero, because a zero minor means two of the p's lie on the same line. So if the number is maximal, no maximal minor of C can be zero, and moreover the signature has to be more or less half pluses and half minuses: unless your circuit has this condition and your matrix has that condition, you cannot get n + 1 positive roots. And what about the parity? You remember I told you that for Descartes' rule in one variable it is very easy to see that the sign variation has the same parity as the number of positive roots. Here it is trickier. It is true that the parity of this count is the same, unless all the points of A lie in two parallel hyperplanes, or C is not uniform, meaning some minor is zero; in those cases it might not hold. So it is very tricky even to decide the parity. Let me show you the missing pieces needed to generalize the theorem to many variables. As I tried to explain, in one variable the proof goes via Rolle's theorem, the result that bounds the number of positive roots of f in terms of those of f'. And in the first result that I told you, we did not need to order the minors because we took all minors; there was a Jacobian inside the proof (I did not show you why, but there was a Jacobian). In general, in many variables, there is no way of relating them: even if I have f = 0 and g = 0, two beautiful curves intersecting in finitely many points, there is no natural way of relating the zeros of the Jacobian, counted with multiplicity if they are isolated, to the count; there is no known way to bound or decide the number of intersections knowing something about the vanishing
of the Jacobian. There is no such result. For instance, there is a result in degree theory which uses the Jacobian, but it is an alternating sum, so essentially the only thing you can get is the parity, not the true number. There are some results that use this, but it is very tricky, so we do not know how to relate the vanishing of the Jacobian (vanishing of the Jacobian here means that the two gradient vectors are parallel) to the count. There are some partial results: one is by Khovanskii, and then Bihan and Sottile have another paper. It says: take these two points of intersection; then look at one of the curves, and at the intersection of that curve with the locus where the Jacobian equals zero. Between those two points there has to be a point where the Jacobian vanishes. And if you have n equations, you skip one and replace it by the Jacobian. But if you start iterating, you get two Jacobians, and even in the plane it is complicated, because the bound is not sharp and there is another term which involves the number of unbounded components of the curve, and how do you count those and how do you use them? It is not possible to recurse on this; those are the best results. Also, we have no idea how to order the minors, because the only condition we have is that zero is in the positive cone, so the vectors have to lie on a hyperplane. But if you are in dimension three, how do you order vectors in a hyperplane? In the plane I go around, just one angle; but already in dimension three I have no idea how to order the minors, and to have a sign variation you need to order your numbers in some way. Really, I think these are difficult questions. Maybe the answer is in homology, I do not know; I have no idea what is involved here, but certainly it is an idea that I do not have for the moment. So, just as a summary: I think the case that I showed you already shows
that it is a complicated problem, and new ideas are needed even to conjecture what the formula should be, not to prove it: we have no idea how to write what the conjectural form of such a rule could be, and it is completely open. And just to end, I finish with some polynomial beauties that were drawn with Surfer, which is a fantastic piece of software that you can download from that address. Thank you.