Okay, welcome back. The next talk is by Makrand Sinha. Makrand is a postdoc at CWI. Before that, he was a graduate student at the University of Washington, and even before that a student at ETH Zurich and IIT Kanpur. He has done a lot of amazing research in areas such as communication complexity, quantum communication complexity, extension complexity, and so on. He has the unique distinction of being a workshop organizer as well as a speaker. And today he'll teach us about extension complexity.

So, this is going to be a tutorial, my first ever tutorial, so please ask lots of questions, because I'm not sure I did a good job. Okay, so now I'll switch gears and we'll talk about something a little bit different, extension complexity, and in the next talk you will see how it's related to lifting. So, let me start. The story starts back in the late 70s, when Khachiyan came up with a polynomial-time algorithm for linear programming. This was a big breakthrough in those days; you should read the newspaper articles from back then, because they're very interesting. So, in a linear program you want to optimize a linear function of n variables subject to m linear constraints given by inequalities. Geometrically, this corresponds to having a polytope in n dimensions with m facets, and you want to find the point in this polytope (or, in general, polyhedron) that gives the best possible value of the linear function. Khachiyan's and later Karmarkar's results showed that if you have such a linear program, the time to solve it is polynomial in the number of bits you need to describe the system. And even though it's polynomial, the more inequalities you have, in general, the more time your LP solver will take.
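To make the geometric picture concrete, here is a tiny sketch in Python (the polytope and objective are made-up toy numbers, not from the talk): for a 2-dimensional LP, the optimum is attained at a vertex of the polytope, so we can find it by enumerating pairwise intersections of facets and keeping the feasible ones.

```python
from fractions import Fraction as F
from itertools import combinations

# A toy 2-D linear program: maximize <c, x> subject to A x <= b.
A = [(F(-1), F(0)), (F(0), F(-1)), (F(1), F(1))]   # x >= 0, y >= 0, x + y <= 1
b = [F(0), F(0), F(1)]
c = (F(2), F(3))                                    # objective to maximize

def solve_2x2(a1, b1, a2, b2):
    """Intersection point of two facet hyperplanes, or None if parallel."""
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if det == 0:
        return None
    x = (b1 * a2[1] - b2 * a1[1]) / det
    y = (a1[0] * b2 - a2[0] * b1) / det
    return (x, y)

# Candidate vertices: pairwise facet intersections that satisfy all constraints.
vertices = []
for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
    p = solve_2x2(a1, b1, a2, b2)
    if p and all(ai[0] * p[0] + ai[1] * p[1] <= bi for ai, bi in zip(A, b)):
        vertices.append(p)

best = max(sum(ci * pi for ci, pi in zip(c, p)) for p in vertices)
print(best)  # prints 3, attained at the vertex (0, 1)
```

Of course this brute force is exponential in general (there can be exponentially many vertices); the point of Khachiyan's and Karmarkar's algorithms is that they avoid this and run in polynomial time.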
So, here I should mention that sometimes it's possible to solve linear programs with exponentially many inequalities in polynomial time, but this requires more ingenuity than just feeding them to an LP solver. In general, though, the time to solve a linear program depends on how many inequalities you have, because that's the description. So, we can encode lots of problems as linear programs. Here's one famous problem, the traveling salesman problem: you're given a graph, and you want to find the shortest tour in the graph, given some non-negative weights on the edges. There is a very natural linear program that encodes it: you just optimize the linear function over the TSP polytope. You have a variable for each edge, which is either 0 or 1, and the TSP polytope is the convex hull of all possible solutions, i.e., of all tours in the complete graph on n vertices. And you can solve the TSP over any graph, because you can always encode your graph in the objective function: if you don't have an edge, you just put the weight 0, and if there is an edge, you use its non-negative weight. The polytope only depends on the number of vertices; it's independent of the graph. So, this is a very natural, but special, kind of linear program. In general, if you try to feed it to an LP solver, you will run into a problem, because, firstly, we don't even know what all the facets of this polytope are, but we do know that there are exponentially many of them. So, how can we reduce the size of this linear program? Back to the TSP polytope: how can we write down a small linear program that solves this problem? One very natural idea is what we call extended formulations. You have your polytope, which lives in the space of the original variables x.
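As a sanity check on the definition, one can enumerate the vertices of the TSP polytope for a small n in Python (a toy sketch; the choice n = 5 is mine, not from the talk): each Hamiltonian cycle of K_5 gives a 0/1 characteristic vector over the 10 edges, and these vectors are the vertices of the polytope.

```python
from itertools import permutations, combinations

# Vertices of the TSP polytope for n = 5: characteristic vectors, over the
# edges of K_5, of Hamiltonian cycles.  Note the polytope lives in a space
# whose dimension depends only on n, not on any particular input graph.
n = 5
edges = list(combinations(range(n), 2))

tours = set()
for perm in permutations(range(1, n)):          # fix vertex 0 to kill rotations
    cycle = (0,) + perm
    tour_edges = frozenset(tuple(sorted((cycle[i], cycle[(i + 1) % n])))
                           for i in range(n))
    tours.add(tour_edges)                        # frozenset kills direction too

vertices = [tuple(1 if e in te else 0 for e in edges) for te in tours]
print(len(set(vertices)))  # prints 12, i.e. (n-1)!/2 distinct 0/1 vertices
```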
You just add some extra variables y to help you describe the linear system more easily. Geometrically, this lifts the polytope up to a higher-dimensional polytope, and what you want is that the shadow of the higher-dimensional polytope in the original space is the original polytope. Then, if you want to solve the problem over the original space, you can just optimize over this higher-dimensional lift. The hope is that the extra variables give you some extra freedom and allow you to reduce the number of inequalities, and indeed there are examples where this freedom reduces the number of inequalities from exponential to polynomial. So, this is what is called an extended formulation: a higher-dimensional lift of a polytope which projects to the polytope under some linear map. And the extension complexity of a polytope P is the minimum number of facets in any extended formulation, that is, in any higher-dimensional lift, of P. It's the minimum number of inequalities you need to write down, allowing some extra variables, to describe the polytope.

So, let's come back to the traveling salesman problem. In the late 80s, people were really excited after Khachiyan's and Karmarkar's breakthroughs, and they were very optimistic back then; they thought you could solve TSP in polynomial time. Swart claimed that TSP had a polynomial-size linear program that solves it, which would imply P equals NP. I guess we are not so optimistic now as to even try to prove this; it was different times. But Swart's proof was incorrect, it had a bug. There were several iterations, and they all had bugs. So the legend, which was before my time, is that Yannakakis got frustrated and proved his famous result: any symmetric extended formulation for TSP must have size 2^Ω(n).
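A standard small example of how extra variables shrink a formulation (my choice of illustration, not one from the talk) is the cross-polytope {x : Σ_i |x_i| ≤ 1}: it has 2^n facets, one per sign pattern, but the lift {(x, y) : -y_i ≤ x_i ≤ y_i, Σ_i y_i ≤ 1} uses only 2n + 1 inequalities and projects onto it. A quick exact check in dimension 3:

```python
from itertools import product
from fractions import Fraction as F

# The cross-polytope {x : sum_i |x_i| <= 1} has 2^n facet inequalities
# <s, x> <= 1, one per sign pattern s in {-1, 1}^n.  Its lift
#   { (x, y) : -y_i <= x_i <= y_i,  sum_i y_i <= 1 }
# has only 2n + 1 inequalities; the witness y_i = |x_i| is always optimal.
n = 3

def in_cross_polytope(x):
    return all(sum(s_i * x_i for s_i, x_i in zip(s, x)) <= 1
               for s in product([-1, 1], repeat=n))

def in_projection_of_lift(x):
    y = [abs(xi) for xi in x]      # the 2n bounds -y_i <= x_i <= y_i hold by choice
    return sum(y) <= 1

# Membership in the projection of the lift agrees with membership in the
# original polytope on every point of a small exact grid.
grid = [F(k, 2) for k in range(-3, 4)]
for x in product(grid, repeat=n):
    assert in_cross_polytope(x) == in_projection_of_lift(x)
print("agree on all", len(grid) ** n, "grid points")
```

So here 2^n facets become 2n + 1 inequalities in one extra dimension per variable, which is exactly the kind of saving the talk is describing.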
So, this killed Swart's claim completely, because his formulation was symmetric. I will not define what symmetric means precisely, but roughly it means that if you permute the vertices of your graph, then your polytope does not really change. So, this was back in the late 80s, and it was an open problem for a long time whether the symmetry restriction is necessary, that is, whether there is still a polynomial-size linear program or extended formulation you can write down for TSP. And it is known that for some polytopes, allowing non-symmetric extended formulations helps quite a lot: you can go from exponential size down to polynomial size just by allowing non-symmetric extended formulations. So, this was open for a long time, and it was solved six or seven years ago by Fiorini, Massar, Pokutta, Tiwary, and de Wolf, using connections to communication complexity. They proved that the extension complexity of the TSP polytope, meaning the size of any extended formulation, must be 2^Ω(√n). This was a big breakthrough, and we are going to see the proof later today.

Now let's consider a slightly different problem, maximum matching, because this is also an important part of the story. Again you are given a graph with non-negative weights, and you want to find the heaviest matching in this graph. As we all know, there is a polynomial-time algorithm for it, but there is also a very natural linear program you can write down for this problem: you optimize the linear function over the matching polytope, where, similarly to the TSP polytope, the matching polytope is the convex hull of all possible solutions, i.e., of all matchings in the complete graph. And it is also known that this polytope has exponentially many facets, which we will see later.
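Since the vertices of the matching polytope are exactly the matchings, optimizing the linear function over the polytope is the same as picking the best matching; a brute-force Python sketch on K_4 (the weights are made up for illustration):

```python
from itertools import combinations

# Max-weight matching in K_4 by brute force over all matchings, i.e. over
# the vertices of the matching polytope.
n = 4
edges = list(combinations(range(n), 2))
w = {(0, 1): 5, (0, 2): 1, (0, 3): 4, (1, 2): 4, (1, 3): 2, (2, 3): 5}

def is_matching(es):
    """A set of edges is a matching iff no vertex is used twice."""
    used = [v for e in es for v in e]
    return len(used) == len(set(used))

matchings = [es for k in range(n // 2 + 1)
             for es in combinations(edges, k) if is_matching(es)]
best = max(sum(w[e] for e in es) for es in matchings)
print(best)  # prints 10, from the matching {(0, 1), (2, 3)}
```

The LP over the matching polytope returns the same value, since a linear function is maximized at a vertex; the whole question of the talk is how few inequalities suffice to describe that polytope.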
So, this is a polynomial-time solvable problem, but the LP you write down is of exponential size. You can in fact solve this LP in polynomial time using a separation oracle, but this requires some more ingenuity, as I said before. So, what is the shortest LP you can write down to solve this problem? This was also answered by Yannakakis back in the late 80s: he showed that any symmetric extended formulation, even for the matching polytope, for which the LP is solvable in polynomial time, must also have size 2^Ω(n). And the long-standing open question was whether symmetry really helps here; in fact, the examples I mentioned, where going to non-symmetric extended formulations helps a lot, are versions of the matching polytope. So it was a long-standing open problem to remove the symmetry condition, and again, in a breakthrough, Rothvoss showed in 2014 that any extended formulation of the matching polytope must have size 2^Ω(n), and by a reduction this also proved the tight bound for the TSP polytope: its extension complexity must be 2^Ω(n) as well. [In response to questions:] Yes, it's the number of inequalities you need. You can use coefficients as large as you want, but the formulation will still be of exponential size. Yes, you can always assume this.
Okay, so these are the two results we are going to talk about today. The outline of the rest of the talk: first I will talk about how one proves lower bounds for extension complexity, which will require going back to the old proof of Yannakakis and relating it to communication complexity; then I will talk about how the lower bound for the traveling salesman problem goes, and here I should be able to give a complete proof, because it's not so complicated; the lower bound for matching I will only briefly sketch; and at the end I will close with some other directions and open problems.

So, let's start with the lower bound technique. Here is a very general lower bound recipe, already given by Yannakakis. You want to prove extension complexity lower bounds, and this is related to something called the slack matrix and its non-negative rank, which in turn is related to proving lower bounds in communication complexity, as we will see later. In fact, all of this connection was known; the big breakthrough was realizing how to prove non-negative rank lower bounds using communication complexity, which is what led to these lower bounds.

So, let's look at the first connection: what's the slack matrix? You're given a polytope, which can be described in two ways. The first is Ax ≤ b, a facet description of the polytope, describing its inequalities. The second is the convex hull description, describing the vertices of the polytope. Now take a vertex and take an inequality. The slack matrix is a matrix whose rows are indexed by the facets of the polytope and whose columns are indexed by the vertices, so in general it's a very large matrix, because there can be exponentially many facets and exponentially many vertices. The (i, j) entry is just the distance of vertex x_j from inequality i, which is b_i − ⟨a_i, x_j⟩: the slack of the corresponding vertex from the corresponding inequality.

The first thing to notice is that this matrix is non-negative, because every vertex of the polytope satisfies every inequality here; that's how the polytope is defined. So the slack is always non-negative, and this is a non-negative matrix. What we're interested in is the non-negative rank of this matrix: the minimum r such that we can factorize S as UV, where U is a long and skinny matrix, one of whose dimensions is r, and where U and V are required to be entrywise non-negative. Another way to say this: it's the minimum r such that you can decompose S as a sum of r non-negative rank-one matrices. So this is just the usual definition of rank with an added non-negativity condition. In particular, if you take the i-th row of U and the j-th column of V, the inner product of those r-dimensional vectors is the slack matrix entry S_ij.

So how do we go from extension complexity to slack matrices? This is the connection given by Yannakakis: the extension complexity of a polytope P is equal to the non-negative rank of its slack matrix. I'm hiding a technical condition here, namely that P has to have dimension at least one, otherwise there is a small additive term missing; for now I will ignore this. Since we're interested in lower bounds, let me prove the direction saying that the non-negative rank gives a lower bound on extension complexity; you can try to prove the other direction yourself, it's not that hard.

So, let's look at the polytope P and an extended formulation Q for P, given by Ex + Fy ≤ g. By assumption this has r inequalities, and we want to find a non-negative factorization of the slack matrix of rank r. If you have a point x_j in the polytope, there is some point that projects to it in the high-dimensional lift; call it (x_j, y_j), and you can pick any such point. And if we look at an inequality ⟨a_i, x⟩ ≤ b_i of the polytope at the bottom, I claim it is also valid over the higher-dimensional polytope: you can choose any y here, since the coefficient of y is zero, it will never matter, so the inequality is also valid over the higher-dimensional object. (For this proof I chose the facets of the bottom polytope, but you can take any inequality that is valid, i.e., satisfied by all points of P.)

Now, one very basic fact in linear programming is that if you have an inequality that is valid over a polytope, you can always write it as a non-negative combination of the defining inequalities of the polytope. One way to think of it: the defining inequalities give you the minimal possible description of the polytope, and an extra valid inequality is redundant, so it can be derived from the other ones by taking non-negative combinations. Our factorization uses exactly this: u_i will be the vector of coefficients of this non-negative combination. Since Q has r inequalities, this is an r-dimensional vector, and it is non-negative. The property it satisfies is written here: taking this non-negative combination of the inequalities of Q gives you back the bottom inequality, the green inequality ⟨a_i, x⟩ ≤ b_i. So this will be our u_i. And what is v_j? It will be the slack of the point (x_j, y_j) at the top: you look at the slack of (x_j, y_j) with respect to all r inequalities of Q, so this slack vector has length r, and since these inequalities are valid over the lift, i.e., Ex_j + Fy_j ≤ g, the slack is non-negative. So v_j is again an r-dimensional non-negative vector.

All right, so now everything is set up correctly, and if you take the inner product of these two vectors, the first term gives you b_i, the second term gives you ⟨a_i, x_j⟩, and the last term is zero, so this is exactly the slack matrix entry. So the u_i and v_j give you a non-negative factorization of your slack matrix. Any questions? [Audience question.] Yes, it's for the original polytope: this is a non-negative rank factorization of the slack matrix of P, which you can then choose to be anything; we're just showing that such a factorization exists, but we're using the extended formulation to find it. And again, as I mentioned before, here I took the inequalities in the slack matrix to be the facet-defining inequalities of P, but you can choose any inequalities that are valid over P and it will still give you a lower bound. This will be important later on.

So, to remind you, we have now reduced proving lower bounds on extension complexity to proving lower bounds on the non-negative rank. And how do we prove lower bounds on the non-negative rank? This was also an old observation of Yannakakis: the non-negative rank is lower bounded by the rectangle covering number of the slack matrix. Let me define what that means, though probably a lot of people here know. You have your slack matrix, which can be written as a product of two non-negative matrices. A rectangle (I'm not sure what this audience is like, but probably everyone here knows) is a subset of rows times a subset of columns. So let me erase all the numbers in the matrix and just replace all the strictly positive entries by a plus sign. Now take any column of U: its positive entries, together with the positive entries of the corresponding row of V, give you a rectangle which covers some of the positive entries of the matrix, because the product of any two positive entries is always positive and there are no cancellations here. The second column gives you a different rectangle, and if you work it out, with r columns you get r rectangles that together cover all the positive entries of your matrix. So if you can prove a lower bound on how many rectangles you need to cover all the positive entries of your matrix, that gives you a lower bound on the non-negative rank. Another way of saying this, as people from communication complexity know: the rectangle covering number is just the non-deterministic communication complexity, if this were a Boolean matrix, which it essentially is if you replace the pluses with ones.

Right, so this is one way to prove lower bounds on the non-negative rank, and it has been known for a long time. But here we have totally ignored what the entries of the matrix are; we only care whether they're positive or not. That's a lot of information lost, and sometimes it's not enough; sometimes we need to use that information. So there is another way of proving lower bounds, the hyperplane separation bound, which is the analogue of discrepancy from communication complexity. Now you put weights W on the entries of your matrix. The numerator here is just the total weight of the slack matrix, where the inner product ⟨W, S⟩ simply sums the weighted entries over the whole matrix. If you can prove that the weight ⟨W, R⟩ of any rectangle R is small, at most α say, then this bound says that the non-negative rank is lower bounded by the total weight divided by the maximum entry of S times the maximum contribution α of each rectangle; essentially, this counts how many weighted rectangles you need to cover the matrix. Is the statement clear? Okay, so let's go over the proof, because it's pretty simple, so
let S be your matrix: it can be decomposed as a sum of r non-negative matrices R_1, ..., R_r, each of rank one. Now look at the inner product ⟨W, S⟩ = Σ_i ⟨W, R_i⟩, and pull out the maximum entry of each R_i. By our assumption, ⟨W, R⟩ ≤ α for every 0/1 rectangle R, whose entries are one inside the rectangle and zero outside; here each entry of R_i divided by its maximum is at most one, and you can check that the worst case of this inner product occurs when the entries are exactly one or zero, so ⟨W, R_i / max(R_i)⟩ is also bounded by α. So we get ⟨W, S⟩ ≤ Σ_i max(R_i) · α. And since S is a sum of non-negative matrices, there are no cancellations, so each entry of each R_i is bounded by the maximum entry of S. This gives ⟨W, S⟩ ≤ r · α · max(S), and if you rearrange, you get the statement. The only difference from the communication setting is that we now have real entries, so we have to care about what the entries of the matrices are.

So, these are two general ways of proving lower bounds, and we're going to use both of them today. If there are no questions, let's move on to the lower bound for TSP. This is the TSP polytope again, just to remind you, and we're going to prove that the extension complexity of this polytope is at least 2^Ω(√n). We are not going to prove a lower bound for the TSP polytope directly; we're going to use another polytope, the correlation polytope. Any questions there? Okay. So, the correlation polytope: one can prove that it is a face of the TSP polytope, and if a face of the TSP polytope has high extension complexity, then the whole polytope itself must have high extension complexity. We are going to show that the correlation polytope has extension complexity 2^Ω(√n), which gives this lower bound. The correlation polytope, to define it, is the convex hull of all √n × √n rank-one Boolean matrices: you take a 0/1 vector b of length √n, form the corresponding rank-one matrix bb^T, and take the convex hull of all such Boolean matrices.

All right, so what does the slack matrix of this polytope look like? I should mention that the facets of this polytope are, I believe, not completely known, so this is where I will use the freedom we had before to choose other inequalities that are valid over the polytope. The vertices of this polytope are bb^T, where b is a 0/1 vector. The inequalities we choose, which was I guess the big contribution of the Fiorini et al. paper, because they found valid inequalities that really allow you to analyze this slack matrix, are the following: the inequalities are indexed by a, where a is again a 0/1 vector of length √n, and the inequality is ⟨2 diag(a) − aa^T, x⟩ ≤ 1, meaning you put a on the diagonal of the matrix and subtract aa^T, and x is a point of the polytope, i.e., a convex combination of such rank-one matrices. I have not yet shown that this inequality is valid, but we will see it in a second. So the slack matrix entry for a vertex bb^T and the inequality indexed by a is, just by definition, 1 − ⟨2 diag(a) − aa^T, bb^T⟩, because you plug in bb^T there instead of x, and if you work it out this equals 1 − 2⟨a, b⟩ + ⟨a, b⟩², which, as we all know, is (1 − ⟨a, b⟩)². Since this is a square, the slack is always non-negative, which also shows that this inequality is always valid over the polytope. [Audience correction: yes, this here should be xx^T, and then I'm plugging in bb^T there.]

Now, to really analyze this matrix, it is helpful to view these 0/1 vectors as indicator vectors of sets over a universe of size √n. If you view it that way, the inner product of a and b is just the intersection size of the corresponding sets A and B, so the entry depends only on |A ∩ B|. I'm only going to care about two types of entries in this matrix. If the intersection size is zero, meaning the sets are disjoint, the entry is (1 − 0)² = 1. And if the sets intersect in exactly one element, the slack matrix entry is (1 − 1)² = 0. [Audience question.] Right, that's what I said: in the proof we just saw, you can choose any valid inequalities and they will still give you a lower bound; the equality with extension complexity is no longer true, but you still get a lower bound, and it makes proving lower bounds easier because you can choose anything you want. The rest of the entries we are not going to care about, so I will cover them with stars, but we know that they are positive.

As we all know from communication complexity, this is the unique disjointness matrix, and it has long been known, following from the work of Razborov as observed by de Wolf, that the rectangle covering number of this matrix is 2^Ω(√n). So this already gives you a non-negative rank lower bound on this matrix. At least, this much is clear. But in case you don't know this lower bound, I'm going to show you a complete proof, which is pretty simple; this proof is due to Kaibel and Weltge. It does not give a randomized communication lower bound, only a lower bound on the non-deterministic communication complexity of unique disjointness. I'm going to prove that the rectangle covering number of this matrix is at least 1.5^√n: you need this many rectangles to cover all the one-entries of this matrix.

So, how many one-entries are there? [Audience question.] Yes, the rectangles may cover the stars, but they cannot cover the zeros. The number of one-entries is the number of pairs of disjoint sets over the universe of size √n: for each element of the universe you have three choices, because the element cannot be in both sets, so the number of one-entries of this matrix is 3^√n. The main claim will be that the maximum size of any set of one-entries that can be covered by a single rectangle avoiding the zero-entries is 2^√n. Then you need at least 3^√n divided by 2^√n rectangles to cover them all, and this proves the bound on the rectangle covering number.

So now I'll just focus on proving this claim. Fix any set of one-entries that can be covered by a single rectangle; this is the blue part here, and I call this set T. (Sorry, I think my picture is wrong again: I changed those entries from ones to zeros, so these zeros shouldn't be inside; maybe a better picture next time.) So imagine the entries of T can be covered by a single rectangle that never hits a zero. For this proof it's actually better for us to view rows and columns as indicator vectors, because then the notation is simpler. Now I'll divide this matrix into four quadrants, indexing rows and columns by strings of √n − 1 bits together with a last bit: last bit zero or one for the rows, and likewise for the columns. And for a one-entry, the sets a and b are disjoint, so in particular their first √n − 1 bits are also disjoint.

So let me define two sets T_R and T_C, which are going to be subsets of T; R stands for row and C stands for column, as you will see in a second. T_R will be all the entries of T in the quadrant where the row's last bit is one, and in addition all the entries of T in the top quadrant whose counterpart, obtained by flipping the row's last bit, is not in the bottom quadrant. And we define T_C analogously, along the columns. Is the definition clear? All right. Now, this part you have to take on faith, but you can check it in your head if you zone out for a second: by induction, T_R and T_C, viewed over the first √n − 1 bits, each form a set of one-entries that can be covered by a single rectangle over a universe of size √n − 1, and so by induction each has size at most 2^(√n − 1); the base case is the empty set, which is easy to check. And now I claim that T_R and T_C together completely cover the set T, the blue elements here, and so T must have size at most twice 2^(√n − 1), which gives the upper bound of 2^√n. So all that is left to prove is that they really cover T, and then we will be done.

Okay, so let's ignore all the irrelevant entries of this matrix and focus only on the relevant ones. Suppose some entry of T is not covered by T_R and T_C. By definition it must lie in the top quadrant, because we have covered everything else already; and if I look at the corresponding entry at the bottom, with the row's last bit flipped, that entry must be in our set T, because that is how T_R was defined: if an entry is not put into T_R, then its flipped counterpart is in T. And similarly, going along the columns, the entry with the column's last bit flipped is also in our set. So now the key point: consider the entry where both last bits are flipped to one. Since T is covered by a single rectangle, and we have its row and its column in the rectangle, this entry lies in the rectangle too, so it is supposed to be non-zero. But a and b were disjoint, and by flipping both last bits we have made the sets intersect in exactly the last element and nowhere else, since the rest of the subsets are disjoint. So the sets uniquely intersect here, this slack matrix entry is actually zero, and yet it must be in our rectangle. This is a contradiction, and we are done. So that proves that the rectangle covering number of this matrix is at least 1.5^√n, and this gives our lower bound.

Okay, so this was the first lower bound. Let's move on to the matching polytope. Again, the definition is the convex hull of all matchings in the complete graph, and the theorem we're going to prove is that the extension complexity of this polytope is 2^Ω(n). It turns out the matching polytope is also a face of a suitable TSP polytope, and that gives you the tight lower bound of 2^Ω(n) for TSP as well. For this, let's first go back to what the facets of the matching polytope look like, which probably most people here know; they were already
proven by Edmonds back in the late mid-60s so the polytope has three defining qualities you have the non-negativity constraints that all the variables must be positive non-negative you have the matching constraint that each edge each vertex can have at most one edge incident to it and the bottom constraints are the odd cut constraints that for any odd set of vertices you can pair at most q minus one over two of them in psi and one edge always has to go outside the cut so again this has two to the n the two to the n constraints of the last type roughly so this has exponentially many facets and again let me mention that if you are considering just a bipartite matching polytope which is the convex hull of all matching is in the complete bipartite graph then you don't need outside constraints and then in fact the polytope has a small description so what's the slack matrix for this polytope look like so again the columns will be the vertices of the matching polytope instead of picking all possible vertices I will only pick some some of them so this will be a sub matrix I'll only pick the perfect matchings okay and complete graph and the rows are the facets of the matching polytope and here instead of picking all facets I will just look at the facets corresponding to odd cuts so the u m entry of this matrix is given by so you have to believe me what you can check this easily that the u m entry of this matrix is given by the edges of the matching m then cross this cut minus one up to some scaling okay so an example if your cut is the the green thing here maybe it's two and you have this matching of this perfect matching then here three edges of this matching are crossing this cut because these two edges don't cross so this entry is three minus one over two which is one okay just an example okay and we're going to prove that the non-active rank of this slack matrix is essentially two to the n so let's see so the first tool we have is rectangle covering so let's just try 
But unfortunately, it doesn't work, because the rectangle covering number of this matrix is only about n^4, so it cannot give an exponential lower bound. Let's see that. Note that an entry of the slack matrix is zero exactly when exactly one edge of M crosses U, which means that this vertex is tight for this constraint; otherwise the entry is positive, and then at least three edges of M cross (for a perfect matching and an odd set, the number of crossing edges is always odd). These are all the zero and non-zero entries.

So now I'll construct the rectangles. I pick any two edges e1 and e2, and I look at the rectangle given by all cuts where these two edges cross, that is, where both go outside the cut (I never pick a cut where these edges are inside), and, on the matching side, all matchings that extend this partial matching of two edges. This defines a single rectangle: I cover all these cuts and matchings by the single rectangle, and every entry in it is positive. And if I go over all possible rectangles, there are about n^4 of them, then essentially you can see that they cover all the positive entries of this matrix.

The next question is whether this covering actually gives you a nonnegative rank decomposition: if you just sum up these rectangles, do they actually add up to S or not? Well, if you have an entry with k crossing edges, the entry itself is (k - 1)/2, but it will be covered by k choose 2 rectangles; for example, an entry with five crossing edges has value 2 but is covered by 5 choose 2 = 10 rectangles. (You cannot really see it in the picture, but yes, k choose 2.) So the weight of the right-hand side will be much, much larger; the weights will not match up. And the question is whether this is unavoidable. To formalize this we will need the hyperplane separation bound, because rectangle covering was not enough. Just to remind you, I will put weights on the entries of this matrix such that the total weight of the matrix is large, but the weight of each rectangle is small.
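The double counting in the last paragraph is easy to check numerically. This small sketch just compares the C(k, 2) cover count against the slack value (k - 1)/2:

```python
from math import comb

# A rectangle is indexed by an (unordered) pair of edges {e1, e2}; an entry
# (U, M) with k crossing edges lies in the rectangle of every pair of its
# crossing edges, i.e. in exactly C(k, 2) rectangles, while its slack
# value is only (k - 1) / 2.
for k in (3, 5, 7, 9):
    cover_count = comb(k, 2)
    slack_value = (k - 1) // 2
    # The ratio cover_count / slack_value grows linearly in k, so no fixed
    # per-rectangle weight can make the sum of rectangles equal the slack
    # matrix: the cover is not a nonnegative factorization.
    print(f"k={k}: covered {cover_count} times, slack {slack_value}")
```

For k = 3 the ratio is 3, for k = 5 it is already 5, and so on, which is exactly the mismatch the hyperplane separation bound will exploit.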
Okay, so let's see what the weights will be. Let me define Q_k to be the set of all pairs (U, M), that is, all entries of this matrix, where exactly k edges are crossing; the entry there is (k - 1)/2. And let's define the following weights. If there is one crossing edge, which means the entry in the slack matrix is zero, then I put a weight of minus infinity (minus infinity here just means some very large negative number). Basically this is meant to ensure that a rectangle gets penalized very badly for covering such an entry: we don't want any rectangle to cover those. For the entries with value 1, which correspond to three crossing matching edges, I put a total weight of 1 + epsilon, spread uniformly over all the Q_3 entries. For the entries with value 2, I put a total weight of minus one half, again spread uniformly over all the Q_5 entries. All the other entries get weight zero.

So what is the weight of the whole matrix? The only contribution comes from the blue and yellow entries, the Q_3 and Q_5 entries, because for every other entry either the weight is zero or the entry is zero. The total contribution is: I spread a weight of 1 + epsilon here, the entry is 1, and the |Q_3| factors cancel out, so this gives 1 + epsilon; for the second term, the entry is 2 and the weight is minus one half, so this gives minus 1. So the total weight of this matrix is epsilon, and we will choose epsilon to be some small constant, say 0.01. And we're going to prove that the weight of any rectangle is in fact at most 2^(-cn). Then the hyperplane separation bound gives us the lower bound: the maximum entry of this matrix is at most the number of edges, which is polynomial, so you still get an exponential lower bound.
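In symbols (this is my normalization; the constants on the slides may be scaled slightly differently), the weights and the total-weight computation read:

```latex
W_{U,M} \;=\;
\begin{cases}
-\infty, & (U,M)\in Q_1 \ \ (\text{slack } 0),\\[4pt]
\frac{1+\varepsilon}{|Q_3|}, & (U,M)\in Q_3 \ \ (\text{slack } 1),\\[4pt]
-\frac{1}{2\,|Q_5|}, & (U,M)\in Q_5 \ \ (\text{slack } 2),\\[4pt]
0, & \text{otherwise},
\end{cases}
\qquad
\langle W, S\rangle
\;=\; \frac{1+\varepsilon}{|Q_3|}\cdot|Q_3|\cdot 1
\;-\; \frac{1}{2\,|Q_5|}\cdot|Q_5|\cdot 2
\;=\; \varepsilon .
```

Then, assuming the hyperplane separation bound in the form stated earlier in the talk (up to constant factors),

```latex
\operatorname{rk}_+(S)
\;\ge\; \frac{\langle W, S\rangle}{\|S\|_\infty \cdot \max_{R\ \text{rectangle}} \langle W, R\rangle}
\;\ge\; \frac{\varepsilon}{\operatorname{poly}(n)\cdot 2^{-cn}}
\;=\; 2^{\Omega(n)} .
```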
Okay, so let's briefly sketch how this follows. Again, Q_k was the set of entries with k crossing edges, and let me define p_k to be the uniform measure on Q_k, so the uniform measure on all entries of the slack matrix with k crossing edges. The main lemma that Rothvoss proved is that if your rectangle R really avoids all the zero entries, then it must look like the following: it must have many more blue entries, the Q_5 entries, than green entries, the Q_3 entries. This is like a corruption bound sort of statement, if you're familiar with communication complexity. It says the rectangle must be very unbalanced: normalizing so that the measure of the blue entries is one, the measure of the green entries can be at most roughly 0.4, up to a small error of 2^(-cn). In other words, p_3(R) <= 0.4 p_5(R) + 2^(-cn).

Now let's look at how this implies our statement. The weight of our rectangle is (1 + epsilon) p_3(R), because I spread the weight 1 + epsilon uniformly over all the entries of Q_3, and the contribution of the Q_5 entries is minus one half times p_5(R). By the lemma, the first term is at most (1 + epsilon) times (0.4 p_5(R) + 2^(-cn)). I guess the key point here is that if you choose epsilon small enough, the first coefficient is close to 0.4 and the second is minus one half, so the coefficient of p_5(R) is negative, and the whole weight is bounded by roughly 2^(-cn). And this gives the lower bound. The crucial difference from communication complexity is that here we're really allowing the weights on our matrix entries to be negative, and that really helps us. All right, so the proof of this lemma actually has some very nice ideas, but I think it is too much for this venue, so I'm not going to talk about it.
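Written out, and assuming the lemma's constants as quoted above (the 0.4 = 4/10 and the 2^(-cn) error term), the bound on a zero-avoiding rectangle R is:

```latex
\langle W, R\rangle
\;=\; (1+\varepsilon)\,p_3(R) \;-\; \tfrac12\,p_5(R)
\;\le\; (1+\varepsilon)\!\left(\tfrac{4}{10}\,p_5(R) + 2^{-cn}\right) \;-\; \tfrac12\,p_5(R)
\;=\; \left(\tfrac{4\varepsilon}{10} - \tfrac{1}{10}\right) p_5(R) \;+\; (1+\varepsilon)\,2^{-cn} .
```

With epsilon = 0.01 the coefficient of p_5(R) is 0.004 - 0.1 < 0, so the whole expression is at most (1 + epsilon) 2^(-cn) <= 2 * 2^(-cn), which is the claimed exponentially small rectangle weight.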
Okay, but I hope you got the idea. So this proves the lower bound for the matching polytope. [Question from the audience.] Yeah, so it just shows you that these extended formulations are not all that powerful. In some cases we thought they were very powerful, but there are problems which you can solve via a linear program with separation oracles in polynomial time, and they still don't have a small extended formulation. So it's sort of a weak model of computation, but again, these bounds are totally unconditional, they don't rely on any assumptions, so that's, I guess, the cool part.

Okay, so I'll briefly mention what else is out there. There has been lots of work; here is a list of the papers I know of (there are more which I don't know, so I might have missed some). There have been many lower bounds on the extension complexity of explicit polytopes, because you can do reductions between polytopes, or you can relate it to communication complexity and use other techniques to prove lower bounds, like the Göös, Jain, and Watson paper. There are also lower bounds on how much you can do with extended formulations when it comes to approximation, and a lot of work there. And I guess the next talk will be about extension complexity and CSPs, where you actually use lifting theorems to prove these lower bounds in some cases. There is also the notion of positive semidefinite extension complexity, which I'm not going to define here, but it is the analog of all this for SDPs, and there too we now have some exponential lower bounds for some problems, though not for matching. And I guess there are also connections to circuits and proof complexity. As was pretty much mentioned already, there is a connection between a certain communication game and extended
formulations; that's how the Göös, Jain, and Watson paper proved their lower bound. (Actually, the final lower bound doesn't go through that connection directly, but that's how the connection was noticed.) There were also some results which actually prove upper bounds for extended formulations using circuits. And there is a very recent result which essentially shows that if you prove strong enough lower bounds on nonnegative rank, or extension complexity, you actually do get monotone circuit lower bounds, or even circuit lower bounds, and some span program lower bounds, I guess. All right, so that was a brief summary of what's out there in the world.

Okay, so let me mention a couple of open problems. The first concerns the knapsack problem: the knapsack polytope, or more precisely the max-knapsack polytope, which is defined as the convex hull of all 0/1 vectors satisfying a single knapsack inequality, so you have items with given weights and a budget b, and you want to pack as much as possible. It is known, by a result of Göös, Jain, and Watson, that the exact extension complexity is 2^(n / log n). And we know that you can get a (1 + epsilon)-approximation for this with an LP of size n^(O(1/epsilon)). Can we prove any lower bound for this, or is it tight, or could it even be polynomial? After all, knapsack does have an FPTAS as an algorithm. So n^(O(1/epsilon)) is PTAS-type behavior: it's polynomial in n for each fixed epsilon, but not polynomial in both n and 1/epsilon. Is there an FPTAS-type LP, of size poly(n, 1/epsilon), or is there a lower bound? This is wide open; nothing is known about approximation here. That's open problem one.

The second problem is the matching polytope. I'm not going to define what a positive semidefinite extended formulation is, but: can you prove a lower bound, or is there an SDP of polynomial size that represents the matching polytope?

Okay, so that's the end, thank you. [Questions.] Question: So here, again, the motivation for doing the extension is to reduce
the number of inequalities, right, and then still be able to reconstruct the solution by doing the projection. [Speaker:] Sorry, what was the last thing you said? [Audience:] After you introduce the new variables, you still want to get back the solution. [Speaker:] Yeah. [Audience:] So is there any increase in generality from asking the projection to not just be a simple projection down to the first x variables, but letting you use the values of the new variables y in a non-trivial way? [Speaker:] I mean, I just defined it this way, but you can take any map there, as long as it's linear. [Audience:] So linear suffices? [Speaker:] Yeah, because you can always apply a linear transformation, so assume this is always the setting. Non-linear, I don't know. [Another question.] Yeah, so that's the right answer: for a random polytope we know that's the right answer, it's known to be 2^(n/2) or something, and I guess the Göös, Jain, and Watson paper comes closest, it goes up to 2^(n / log n), but yeah, there is some gap there. Right, thank you.