The session chair introduced the next speaker, Kavitha, a colleague at TIFR. Before TIFR she was at IISc, and before that she was a post-doc at MPI. Kavitha is somebody who, you know, I keep telling that she works in game theory; she believes it's more graph theory. But by now she's been on a lot of game theory committees and given talks there, and in stable matching, you know, 70% of the results she'll mention are due to her. In fact, what she's going to talk about are shortcomings of stable matchings and why popular matchings should be more popular.

Thanks to Siddharth and Umang for this invitation. And indeed, I don't really do algorithmic game theory, but popular matchings is something that I hope fits in with the theme of today's workshop. We'll be taking a tour of the landscape of popular matchings: the positive results, some hardness results, and some results on popular mixed matchings.

So the input here is a stable marriage instance. That's a bipartite graph where every vertex has a strict ranking of its neighbors. So A1 regards B1 as its top neighbor and B2 as its second-choice neighbor, there's no edge between A1 and B3, and so on and so forth. The two sides of the bipartite graph are usually called men and women, and this is traditionally called a marriage instance. What one seeks here is a stable matching. In this example, the bold blue edges represent a stable matching in this instance. What is a stable matching? A matching is stable if it has no blocking edge. When does an edge block a matching? If the endpoints of that edge strictly prefer each other to their respective assignments in the given matching. We assume strict preferences throughout the talk.
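The blocking-edge test just described is purely local and can be sketched in a few lines. This is an illustration under my own naming; the small instance below (men a and s, women b and t, where b prefers s to a and s prefers b to t) is made up to mimic the flavor of the slide's example, not a transcription of it.

```python
# A sketch of the blocking-edge test: an edge (m, w) blocks matching M when
# both m and w prefer each other to their partners in M.
# The instance names below are made up for illustration.

def is_stable(M, men_prefs, women_prefs):
    """M maps man -> woman. Stable iff no edge is a blocking edge;
    being unmatched is worse than any neighbor."""
    husband = {w: m for m, w in M.items()}

    def rank(prefs, u, v):
        # rank of partner v in u's list; unmatched (None) ranks last
        return prefs[u].index(v) if v is not None else len(prefs[u])

    for m, plist in men_prefs.items():
        for w in plist:
            m_prefers_w = rank(men_prefs, m, w) < rank(men_prefs, m, M.get(m))
            w_prefers_m = rank(women_prefs, w, m) < rank(women_prefs, w, husband.get(w))
            if m_prefers_w and w_prefers_m:
                return False          # (m, w) is a blocking edge
    return True

men = {'a': ['b'], 's': ['b', 't']}
women = {'b': ['s', 'a'], 't': ['s']}
```

On this toy instance the size-2 matching pairing a with b and s with t is blocked by the edge sb, while the single edge sb is stable.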
So if you look at the small example, the matching consisting of the black edges is not stable, because the blue edge SB blocks it: both S and B prefer each other to their respective partners in the black matching. However, there is a stable matching here, the singleton edge SB. I'm sure everyone is familiar with the classical result of Gale and Shapley from 1962: stable matchings always exist in a marriage instance with strict preferences, and they can be efficiently computed. In fact, the algorithm is as simple and clean as it gets; it can be described in five words: men propose and women dispose.

If we go back to the instance from the previous slide and run the Gale-Shapley algorithm on it, both men propose to their top-choice neighbor, that's B, and B herself prefers S to A. So she accepts S's proposal and rejects A. A has no one else to propose to, and that's the end of the algorithm; we get the singleton edge SB. Let us also run it on a slightly larger example with three men and three women. Initially all men propose to their top-choice neighbors. Then B prefers S to A, so she accepts S's proposal and rejects A. Now A has someone else to propose to, that's V. V has temporarily accepted U's proposal, but she prefers A to U, so when A proposes to V, she rejects U and accepts A. We end up with the blue matching here. I just wanted to run the algorithm on a couple of examples, because we'll be using it later on as well.

This really simple and clean algorithm has seen extensive real-world applications. One of the main applications of stable matchings has been in matching students to schools, colleges, and universities all over the world, including India.
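The five-word algorithm can be sketched directly. This is a hedged illustration: the 3-by-3 instance below (men a, s, u; women b, v, w) is made up to mimic the trace just described, since the slide itself is not reproduced here.

```python
# A minimal sketch of "men propose and women dispose" (Gale-Shapley).
# The instance at the bottom is made up for illustration.

def gale_shapley(men_prefs, women_prefs):
    """Return a stable matching as a dict man -> woman.
    men_prefs / women_prefs: dict vertex -> list of neighbors, best first."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    nxt = {m: 0 for m in men_prefs}   # index of the next woman to propose to
    holds = {}                        # woman -> man she currently holds
    free = list(men_prefs)
    while free:
        m = free.pop()
        if nxt[m] == len(men_prefs[m]):
            continue                  # m exhausted his list; stays unmatched
        w = men_prefs[m][nxt[m]]
        nxt[m] += 1
        cur = holds.get(w)
        if cur is None:
            holds[w] = m              # w tentatively accepts her first offer
        elif rank[w][m] < rank[w][cur]:
            holds[w] = m              # w trades up and disposes of cur
            free.append(cur)
        else:
            free.append(m)            # w disposes of m's proposal
    return {m: w for w, m in holds.items()}

men = {'a': ['b', 'v'], 's': ['b'], 'u': ['v', 'w']}
women = {'b': ['s', 'a'], 'v': ['a', 'u'], 'w': ['u']}
```

On this toy instance the run mirrors the trace above: b rejects a in favor of s, a then displaces u at v, and u ends up with w.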
So in fact, it's used to admit students to IITs and NITs. This is a paper written by several IIT faculty together with Yash Kanoria, a former student here who's now at Columbia, on using JEE scores to efficiently match students to IITs and NITs. It's been used to match students to colleges; Claire Mathieu had a keynote talk on this at last year's ESA, and it's used in New York. So this has really been one of the primary applications of stable matchings in the real world. In a similar flavor, it is used to match medical residents to hospitals in the US and Canada.

In all these applications, we want a large matching, a matching of large size. Of course we want to match as many students as possible to colleges, schools, and universities, and as many medical residents as possible to hospitals. And on that parameter, the size of the matching, stable matchings are actually not so good. For instance, there is a very interesting result: there could be an exponential number of stable matchings in a given instance, but all of them have the same size. More interestingly, all of them in fact leave the same set of vertices matched, and of course the same set unmatched. This is commonly called the rural hospitals theorem. Moreover, the size of a stable matching could be as low as half the size of a maximum cardinality matching. For example, in this instance a maximum size matching is the black matching of size 2, while a stable matching has size 1. So the size of the matching suffers due to the constraint that you cannot have any blocking edge.

So one could say: well, if the size of the matching is important, why don't you just ignore preferences and impose a maximum size matching in the given instance? But that's not good from a social happiness point of view. After all, vertices do have preferences, and to say, for the sake of overall size, that I am just completely ignoring them...
It's certainly not a desirable solution. So I just wanted to motivate this: we have two extremes. At one end of the spectrum we have stable matchings, which are absolutely sensitive to vertex preferences, but whose size may suffer. At the other end we have a maximum size matching, which is as good as it gets as far as size is concerned, but which is totally ignorant of the preferences the vertices have expressed. So the question one could ask is: is there a notion of optimality sandwiched between size and stability?

Let's go back to the ranking a vertex has over its neighbors; it naturally extends to a ranking over all possible matchings. This is similar to what Nisarg was talking about at the start of his talk; it's just ranked voting, in a way. Every vertex has a ranking of matchings, which is simply the ranking of its partners in the respective matchings. It does not care about what partners other vertices have in M or N; it just looks at its own partner in M and its own partner in N. If its partner in M, that's M(u), is ranked better than its partner in N, then u says "I like M more than N"; if it's the other way around, it prefers N to M; and if it gets the same partner in both, it's indifferent between M and N. So every vertex has a ranking over all possible matchings in the given instance, and we can compare a pair of matchings by actually holding a head-to-head election between them. What happens in this election? Vertices are voters; let's count the number of votes won by each matching. The matching that gets fewer votes is the loser of this election. I hope this is clear, because it is really going to be central to the rest of the talk: how we compare a pair of matchings. So let's run it on an example. We have two matchings here, the red matching and the blue matching.
These three vertices, A3, B1, B2, prefer the red matching: A3 gets its second-choice partner in the red matching and its third-choice partner in the blue matching, and similarly B1 and B2 get better partners in the red matching. So they are happier with the red matching when compared to the blue one. On the other hand, A1 prefers the blue matching to the red matching, while A2 and B3 are matched to each other in both matchings, so they are absolutely indifferent between the two. In this head-to-head election the red matching wins three votes and the blue matching wins one vote, so the blue matching is the loser. A matching that loses an election is not a good matching, let's say.

This is very closely connected to the notion of a Condorcet winner, again mentioned in Nisarg's talk. This notion was introduced by the Marquis de Condorcet more than 200 years ago, in discussions of democracy and of when a candidate should be said to win an election. The notion Condorcet proposed was: hold head-to-head elections between every pair of candidates, and a candidate who wins the election against every other candidate is the winner. We are not really demanding that our matching win every election; we are happy as long as it does not lose an election. That's a weak Condorcet winner. Let me illustrate this on a simple example with three candidates A, B, C. 30% of the population says A is our top candidate, B is our second choice, and C is the worst; another 30% says B top, A second, C third; and the remaining 40% says C top, A second, B third. If we hold head-to-head elections, it's easy to see that A defeats B, and between A and C, A defeats C. So A defeats each of the other two candidates in their head-to-head elections, and A is the Condorcet winner. Unfortunately, even weak Condorcet winners need not always exist.
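The three-candidate example above can be checked mechanically; the profile encoding below is my own.

```python
# Pairwise elections for the example above: 30% rank A>B>C,
# 30% rank B>A>C, 40% rank C>A>B.

profile = [(['A', 'B', 'C'], 30), (['B', 'A', 'C'], 30), (['C', 'A', 'B'], 40)]

def margin(x, y):
    """Percentage of voters preferring x to y minus those preferring y to x."""
    return sum(w if r.index(x) < r.index(y) else -w for r, w in profile)

# A defeats B (by 40) and A defeats C (by 20), so A is the Condorcet winner.
```

The same two-line `margin` check works for any ranked profile, including the cyclic-shift profile on the next slide.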
Take this simple example where one-third of the population says A top, B second, C third; a cyclic shift gives B top, C second, A third; and another shift gives C top, A second, B third. Here it's easy to see that A defeats B, B defeats C, and C defeats A. So there is no weak Condorcet winner, forget a Condorcet winner.

The question is: if we take two different stable matchings, can we say that neither of them wins against the other? Indeed, exactly; each of them will be a weak Condorcet winner. We'll see it in one slide.

So coming back to our setting, our candidates are matchings, and we compare two matchings using this delta function: Δ(M, N) is the difference between the number of votes the first matching gets and the number the second matching gets. A matching M is a weak Condorcet winner if Δ(M, N) is non-negative for all matchings N. So, exactly: do weak Condorcet winners always exist in our setting? In the general voting setting, as we saw on the previous slide, the answer we already know is no. In our setting, fortunately, the answer is yes, because, as was just observed, every stable matching is actually a weak Condorcet winner. This is an observation made many, many years ago by Gärdenfors. To prove it: when we hold an election between a stable matching and any rival matching, consider a vertex u that prefers the rival to the stable matching. To begin with, u has to be matched in the rival matching, because being left unmatched is the worst choice for every vertex. So u certainly has a partner in the rival matching, and that partner has to prefer the stable matching to the rival; otherwise the edge between u and its rival partner would block the stable matching, and we know a stable matching has no blocking edge.
So the number of votes for the rival matching is at most the number of votes for the stable matching; that is, every stable matching is indeed a weak Condorcet winner. And instead of the mouthful "weak Condorcet winner", let's use a more compact name: popular matching. A popular matching is nothing but a weak Condorcet winner. In fact, one can show a stronger statement: every stable matching is a minimum size popular matching. Going back to the starting example, the black matching is a popular matching; though it is not a stable matching, it is popular, and being a perfect matching, it is a maximum size popular matching. Indeed, as we saw, we were not happy with the size aspect of stable matchings, so when we generalize to the world of popular matchings, what we really want is a maximum size popular matching. So the question we ask is: we know finding a minimum size popular matching is easy, just run the Gale-Shapley algorithm; what is the complexity of finding a maximum size popular matching? We'll spend some time on this.
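To make the election between matchings concrete, here is a small sketch of the Δ computation. The instance is my own reconstruction of the earlier half-size example (edge names are made up): edges a-b, s-b, s-t, where b prefers s to a and s prefers b to t, so the lone stable matching sb has size 1 while the matching with ab and st has size 2.

```python
# Sketch of the head-to-head election between two matchings M and N.
# rank[u][v] = u's rank for neighbor v (0 is best); being unmatched is
# worse than any neighbor. Matchings are symmetric dicts; a missing key
# means the vertex is unmatched. The instance is made up for illustration.

INF = float('inf')

def vote(u, x, y, rank):
    """u's vote: +1 if u prefers partner x to partner y, -1 for the
    reverse, 0 if indifferent. x or y may be None (unmatched)."""
    rx = rank[u][x] if x is not None else INF
    ry = rank[u][y] if y is not None else INF
    return (rx < ry) - (rx > ry)

def delta(M, N, rank):
    """Votes for M minus votes for N, over all vertices as voters."""
    return sum(vote(u, M.get(u), N.get(u), rank) for u in rank)

rank = {'a': {'b': 0}, 'b': {'s': 0, 'a': 1},
        's': {'b': 0, 't': 1}, 't': {'s': 0}}
stable = {'s': 'b', 'b': 's'}                       # the lone stable matching
maximum = {'a': 'b', 'b': 'a', 's': 't', 't': 's'}  # the size-2 matching
```

The election ties 2 to 2: a and t prefer the larger matching, b and s the stable one. So neither matching loses, and both are popular on this instance.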
Yeah, so a first attempt to find a maximum size popular matching may be to imitate the maximum size matching algorithm. Since a minimum size popular matching, that is, a stable matching, is easy to find, let's start with any stable matching and imitate the max size matching algorithm: while the current matching is not a max size popular matching (we do not really know how to test this, but let's keep it as a black box and assume we can), find a suitable augmenting path with respect to the current matching and augment along it. Would such an approach work? Before we worry about how to do this, what "suitable" means, and so on, the invariant should be that the current matching is always popular. Unfortunately, this method would not work. Let's not go into all the numbers here, but believe me, the pair of horizontal edges is a stable matching here, and so is the pair of vertical edges; both are popular matchings of size 2. However, there is no popular matching of size 3 here, while the four red edges form a popular matching of size 4. So there can be gaps in the sizes of popular matchings: we have one of size 2 and one of size 4, but none of size 3. Such an augmenting-path-based algorithm does not look promising; the problem is really quite different in flavor from maximum size matching with an additional restriction that we can somehow manage. We need a fundamentally new or different approach.

So why don't we adapt the really nice and simple algorithm we saw in the beginning, the Gale-Shapley algorithm? One roadblock is that stability is a local property: edge by edge, we check whether there is a blocking edge. Popularity is a global property: some structure in one corner of the graph could affect popularity in another corner. So a concept like
stability works in a local manner; how do we use an algorithm for it to ensure a global property? There are also much more basic questions: given a matching, how do we test that it does not lose an election to any rival? And, as in the previous attempted algorithm, how do we check that no larger matching is popular? These are all questions we need to answer, but let's take inspiration from Gale-Shapley and see if we can somehow make it work.

How about we build a new bipartite graph whose vertex set is exactly the same as in the given graph, the same set of men and women on either side, but with a different edge set: for every edge in our original input instance, we have two edges, visualized as the edge directed from A to B and the edge directed from B to A. There are really no directions; it is just a visualization. So if this was our starting input instance, let's look at its bidirected version; all I am saying is, replace every edge with edges in both directions. What's also important is the preference order: if a vertex had a certain preference order, its preference order in the bidirected graph is all outgoing edges in the same order of preference, followed by all incoming edges in the same order of preference. Is this okay? Going back to this example, look at the vertex B in the bidirected graph. S was its top choice and A its second choice, so the outgoing edge to S is its top choice and the outgoing edge to A its second choice; then come S and A again, but the incoming edge from S is its third choice and the incoming edge from A is its fourth choice, and similarly for everyone else. This bidirected graph really looks like a duplicated copy of the starting graph, but as far as stable matchings are concerned, stable matchings here can look drastically different from stable matchings in the starting graph. For instance, neither the edge from S to B nor the edge from B to S alone is a stable matching in this bidirected graph. A matching here
is exactly the same as a matching in the starting graph: every vertex can have at most one matching edge incident to it. The singleton set consisting of one of these directed edges alone is not stable, and similarly the singleton set consisting of the other edge alone is not stable, though in the original graph this single edge was the stable matching. So what are the stable matchings here? When we run the algorithm on this instance, we will get the pair of horizontal edges as a stable matching, and indeed this projects to the matching that we wanted to find.

So let's run the Gale-Shapley algorithm on this bidirected graph. As before, the men propose along their favorite edges, and B prefers S's proposal to A's proposal, so B accepts S's proposal and rejects A. Now A proposes along its next favorite edge, that is, the incoming edge into A, and now B prefers A's proposal to S's proposal: for B this edge has rank 2 while S's edge has rank 3. So B accepts A's proposal and rejects S's; S has someone else to propose to, who happily accepts S's proposal, and that's the end of the algorithm. We are left with these two edges. Ignore the superscripts: that is the matching we wanted to find, a max size popular matching in the starting graph. Drop all the directions and superscripts, and we are back in the original graph with a matching. In this small toy example the method certainly gave us what we wanted; would it always work?

So let's go through the algorithm. Construct the bidirected graph: every vertex prefers outgoing edges to incoming edges, and among outgoing (respectively incoming) edges it keeps its original preference order. Run the Gale-Shapley algorithm on this bidirected graph to compute a matching there, and then project that matching back to the original graph by dropping the superscripts. We claim the projected matching is popular and that any larger matching is unpopular; that is enough to show it is a max size popular matching.
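One convenient way to implement Gale-Shapley on the bidirected graph is an equivalent "two-level" reformulation (my phrasing, not the slide's): each man proposes down his list along outgoing edges (level 0); if everyone rejects him, he is promoted and proposes down the list once more (level 1, playing the role of the incoming edges), and a woman prefers any level-1 proposer to any level-0 proposer, breaking ties within a level by her own ranking. A sketch under that assumption, on the made-up path instance used earlier (a-b, s-b, s-t):

```python
# Two-level Gale-Shapley, a reformulation of running the classical
# algorithm on the bidirected graph. Instance names are made up.

def max_size_popular(men_prefs, women_prefs):
    """Return a max size popular (dominant) matching as dict man -> woman."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    level = {m: 0 for m in men_prefs}   # 0 = outgoing edges, 1 = incoming
    nxt = {m: 0 for m in men_prefs}
    holds = {}                          # woman -> man she currently holds
    free = list(men_prefs)
    while free:
        m = free.pop()
        if nxt[m] == len(men_prefs[m]):
            if level[m] == 1:
                continue                # rejected at both levels: unmatched
            level[m], nxt[m] = 1, 0     # promote m and restart his list
        w = men_prefs[m][nxt[m]]
        nxt[m] += 1
        cur = holds.get(w)
        # w prefers a higher level; within a level, her own ranking
        if cur is None or (level[m], -rank[w][m]) > (level[cur], -rank[w][cur]):
            if cur is not None:
                free.append(cur)
            holds[w] = m
        else:
            free.append(m)
    return {m: w for w, m in holds.items()}

men = {'a': ['b'], 's': ['b', 't']}
women = {'b': ['s', 'a'], 't': ['s']}
```

Plain Gale-Shapley on this instance outputs the stable matching sb of size 1; here a is promoted after b rejects him, he then displaces s at b, and s settles for t, giving the popular matching of size 2.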
Till now we were totally in a combinatorial world. Interestingly, though popularity is a global property, we'll use LP duality to show that every popular matching has a locally checkable certificate of popularity. This certificate is actually given by the superscripts, to which we paid no heed during the algorithm; the superscripts are what will certify the popularity of our matching. Is that okay?

So we'll formulate a linear program on G with a suitable edge weight function. This edge weight function depends on the matching M our algorithm has computed: for any edge (a, b), the weight of the edge is the sum of the votes of its endpoints for each other versus their respective assignments in M. The vote of a for b is +1 if a prefers b to its partner M(a), it is -1 if a prefers M(a) to b, and it is 0 if b = M(a); similarly for b's vote for a. So it's easy to check that the weight of any edge is either -2 (both votes are -1), +2 (both are +1), or 0 (one vote is +1 and the other -1, or both are 0 because the edge (a, b) is in M). An edge weight of +2 says exactly that the edge blocks M.

In fact, it will be convenient to work with perfect matchings, so let's augment G with self-loops, where a vertex matched along its self-loop is regarded as left unmatched. It really makes things easy for us to assume we are dealing with perfect matchings: any matching can be made perfect by including self-loops on unmatched vertices. We also need to extend our weight function to self-loops, and that is just the vote a vertex gives to itself versus its partner (which could be itself). If M left u unmatched, then M(u) = u, in which case the
weight of the self-loop is 0; otherwise it is -1, because M(u) is a genuine neighbor, and we are asking how u likes being matched to itself compared to a genuine neighbor. A vertex is always unhappy to be matched to itself, since that means it is actually left unmatched.

The way the weight function has been defined, the weight of M itself under this edge weight function is 0. And here is a really useful observation: for any perfect matching N, the weight of N under this edge weight function is exactly the delta function Δ(N, M), the difference between the votes N gets and the votes M gets in their head-to-head election. So M being popular is the same as: for any perfect matching N, the weight of N is at most 0, that is, Δ(N, M) ≤ 0, so no rival matching defeats M in their head-to-head election. Hence a max weight perfect matching under this edge weight function wt_M having weight 0 is the same as saying M is a popular matching; we can say at most 0, and it is precisely 0 because M itself has weight 0 under this edge weight function. And this is a familiar LP, the linear program for max weight perfect matching: under this edge weight function, find the perfect matching of largest weight. So M is popular if and only if the optimal value of this LP is 0.

More than this primal LP, the dual LP will really be useful to us, and it has a very simple formulation. We have a variable α_u for every vertex u and an edge covering constraint for every edge: the weight of the edge should be at most the sum of the α-values of its endpoints, and similarly the weight of each self-loop should be at most α_u. We want to minimize the sum of all the α-values. And please recall that the α's need not be non-negative; making the matching perfect was a very useful thing for us, since the α's can be negative as well. So M is popular is the same as exhibiting a dual feasible α where the sum of all the α-values is 0, and this is
another proof of the popularity of a stable matching: every stable matching has a very simple dual certificate, the all-zeros vector. That is because an edge weight is positive, i.e., +2, only if the edge blocks the matching, and a stable matching has no blocking edge; so with respect to a stable matching, the weights of all edges are non-positive.

What about the matching M computed by our algorithm, the one via the bidirected graph? Does it have a simple dual certificate? Yes. For each vertex, take the superscript used in the matching M′ in the bidirected graph: if u⁺ was matched in M′, set its α-value to +1; if u⁻ was matched in M′, set its α-value to -1; and if u was left unmatched, set its α-value to 0. For every edge included in the matching M′, the sum of the α-values of its endpoints is 0, and because unmatched vertices have α-value 0, the sum of all α-values is 0: pair up each matched vertex with its partner, each such pair contributes 0, and the unmatched vertices contribute 0 each.

It is easy to check that all the dual feasibility constraints are obeyed by these α-values. For any self-loop, its weight is at most the α-value of that vertex; and the main dual feasibility constraint, the one per edge, I will not really go into, but it follows in a rather simple manner from the stability of M′ in G′. That is all our algorithm was doing: finding a stable matching in the bidirected graph, and that is what allows us to show that our setting of α is dual feasible. This is a certificate of popularity: checking this constraint edge by edge ensures the global property, and the α-values summing to 0 ensures popularity. We also wanted to show that no larger matching is popular, and any larger matching has to use an edge incident to a vertex that M leaves unmatched.
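The weight function and the locally checkable certificate both fit in a short brute-force sketch. Everything below uses my own naming on the made-up path instance from before (a-b, s-b, s-t); the α-values shown are the ones I read off from a run of the bidirected-graph algorithm on this instance, an assumption for illustration.

```python
# Brute-force companions to the LP discussion: wt encodes the edge weight
# function wt_M (a == b encodes a self-loop); is_popular checks
# wt_M(N) <= 0 for every matching N; certifies checks the dual certificate.

from itertools import combinations

INF = float('inf')

def wt(a, b, M, rank):
    """M maps every vertex to its partner, or to itself if unmatched."""
    def vote(u, x):                       # u's vote: x versus partner M[u]
        rx = rank[u][x] if x != u else INF
        ry = rank[u][M[u]] if M[u] != u else INF
        return (rx < ry) - (rx > ry)
    if a == b:
        return 0 if M[a] == a else -1     # self-loop weight
    return vote(a, b) + vote(b, a)

def is_popular(M, edges, rank):
    """M is popular iff wt_M(N) = Delta(N, M) <= 0 for every matching N."""
    V = list(rank)
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            ends = [u for e in sub for u in e]
            if len(ends) != len(set(ends)):
                continue                  # two edges share a vertex
            w = sum(wt(a, b, M, rank) for a, b in sub)
            w += sum(wt(u, u, M, rank) for u in V if u not in ends)
            if w > 0:
                return False              # this rival N defeats M
    return True

def certifies(M, alpha, edges, rank):
    """Local certificate: alphas sum to 0 and every covering constraint holds."""
    if sum(alpha.values()) != 0:
        return False
    if any(wt(a, b, M, rank) > alpha[a] + alpha[b] for a, b in edges):
        return False
    return all(wt(u, u, M, rank) <= alpha[u] for u in alpha)

rank = {'a': {'b': 0}, 'b': {'s': 0, 'a': 1},
        's': {'b': 0, 't': 1}, 't': {'s': 0}}
edges = [('a', 'b'), ('s', 'b'), ('s', 't')]
dominant = {'a': 'b', 'b': 'a', 's': 't', 't': 's'}
alpha = {'a': -1, 'b': 1, 's': 1, 't': -1}   # read off from the superscripts
```

On this instance the size-2 matching is popular, and the ±1 vector certifies it locally: the blocking edge sb has weight +2, covered by α_s + α_b = 2. The all-zeros certificate, which works for stable matchings, fails here for exactly that edge.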
For any such unmatched vertex, the α-value is 0 while all of its neighbors have α-value +1. So this edge is slack: its weight is 0, but the sum of the α-values of its endpoints is +1. Complementary slackness certifies that any matching containing a slack edge cannot be a primal optimal solution, which means its weight under wt_M is less than 0, that is, it is less popular than M. So M itself defeats every larger matching, and any matching larger than M is unpopular. That is the proof of correctness of our algorithm.

The question was whether I actually solved the LP. No, I just came up with the assignment: I ran the Gale-Shapley algorithm, which gave me the superscripts u⁺, v⁻ on the edges of M′, and I used M′ to define the α's; I did not really solve anything. So this is a linear time algorithm, exactly like Gale-Shapley but on twice as many edges; it is as simple as Gale-Shapley, and the certificate appears only in the proof of correctness. And yes, just like the stable matching case, the solution here is integral. So that was the good part of the talk; my title was the good, the bad, and the mixed.

I was complaining that a stable matching could have size as low as half that of a maximum size matching. For a max size popular matching we have a better lower bound: it is easy to argue that the size of a max size popular matching is at least two-thirds that of a maximum cardinality matching. Basically, there is no length-3 augmenting path with respect to the matching M our algorithm computed. Look at a length-3 augmenting path: if a-b-s-t is an augmenting path with respect to our matching M, then, just as in the first example, where I said neither one directed edge nor the other is a stable matching in the bidirected graph, the same argument shows that one of
these would be a blocking edge with respect to the matching M′ in the bidirected graph. One just has to argue it out; it is a very simple observation. So we do indeed get a better guarantee on the size of M when compared to a stable matching.

So the matching M computed by our algorithm has a nice property: it is more popular than any larger matching. We saw that explicitly; complementary slackness showed it to us. Let us give a name to such popular matchings: a popular matching that is more popular than any larger matching is called dominant. And dominant matchings have a nice characterization: they are precisely the stable matchings in the bidirected graph. This characterization of dominant matchings helps us solve another natural algorithmic question on popular matchings. Suppose we have a favorite edge e: is there a popular matching in our input instance that contains this edge e? I am not interested in a minimum size or a maximum size popular matching; all I want is some popular matching, but one that contains my favorite edge. This is joint work with Ágnes Cseh. In fact, the analogous problem is well studied in stable matching theory: given an edge, is there a stable matching that contains it? A simple variant of the Gale-Shapley algorithm solves it. So first check if there is a stable matching with the edge e; if so, the job is done. Otherwise, check if there is a dominant matching with the edge e; but dominant matchings are nothing but stable matchings in our bidirected graph, so we again use the stable matching machinery to solve this problem in G′, and if we find such a stable matching in G′, we indeed have a dominant matching in G with the edge e. If either of these two steps gives us a matching, our problem is solved. And if both steps give the answer no, then interestingly there cannot be any popular matching in G with the edge e: if there is some popular matching in G with the edge e, there either
has to be a stable matching with the edge e or a dominant matching with the edge e. That is the proof of correctness of this algorithm, and it follows from a decomposition of every popular matching into two essential parts, a stable part and a dominant part.

So the next natural question: what if you have two edges? Is there a popular matching with both these favorite edges, and is this problem easy to solve? There need not be a stable matching with both e1 and e2, and there need not be a dominant matching with both either; and this problem is actually NP-hard. So the bad part has already started. This is joint work with Faenza, Powers, and Zhang from Columbia University. In contrast, given any set of k edges, deciding whether there is a stable matching that contains all these edges, or finding one containing as many of them as possible, are easy problems in the stable matching world. Whereas for popular matchings, for k = 1 we knew how to solve it, but for k = 2 it is already NP-hard.

Just to show the landscape of popular matchings again: finding a max size popular matching is easy, we saw an algorithm; finding a min size popular matching is easy, a stable matching is one. However, is there any popular matching in between? The slice at one end is easy to solve, and the slice at the other end is easy to solve, but for the large set anywhere in between, even deciding whether it is empty or non-empty is NP-hard. So that's it: we have just two slices of tractability in this entire set, even though popularity, being a weak Condorcet winner, is such a natural notion. Stable matchings are a nice subclass, the min size popular matchings, which we understand well, and the same machinery allows us to understand dominant matchings too; both these classes are always non-empty and tractable. However, deciding if there is any popular matching that falls
neither in the bottom slice nor in the top slice is NP-hard.

Yeah, so let's talk about a generic optimal popular matching problem: suppose there is a cost function on the edges and we want to find a min cost popular matching. That is NP-hard; in fact, all these hardness results imply the hardness of min cost popular matching. So min size popular matching is easy, min cost is hard. What is the reduction from? We show a reduction from SAT, actually from 3-SAT. In contrast, finding a min cost stable matching is very easy: many polynomial time algorithms are known for it, and many of them basically find an extreme point of the stable matching polytope. The stable matching polytope has a nice linear size description, given by Vande Vate in 1989, who assumed a complete bipartite graph; a very elegant formulation for general graphs was given by Rothblum in 1992.

So while the stable matching polytope has an efficient linear size description, in joint work we showed a near-exponential lower bound on the extension complexity of the popular matching polytope, something other speakers have also alluded to. We used a recent result by Göös et al. on the extension complexity of the independent set polytope, which has such a lower bound, combined with our hardness reductions from SAT; together these allowed us to show that the popular matching polytope is hard to describe, while the stable matching polytope had a very elegant, simple description.

In order to get back to some tractable results: we relaxed stability to popularity so as to find more optimal solutions, but we learned our lesson; beyond just two optimality criteria, we couldn't really do anything more with the notion of popularity. Could we relax it further, to approximate popularity, for the sake of tractability? So what is a popular matching? No matching defeats M in an
Let's call a matching M quasi-popular if a matching can defeat it, but the extent of the defeat is bounded: no matching wins more than twice as many votes as M in their head-to-head election. How about taking this as our notion of approximate popularity? That is our definition of quasi-popularity. So how easy is the generic optimal quasi-popular matching problem? Unfortunately, we fall down again: it is NP-hard to find a min-cost quasi-popular matching. Though we relaxed popularity to quasi-popularity, the hardness remained invariant. As far as popular matchings are concerned, if I say I want a popular matching but I am ready to relax the cost to any multiplicative factor times opt, that is hard to find; and even in the world of quasi-popular matchings, finding an optimal quasi-popular matching is NP-hard. At this point Ruta asked: instead of saying no matching wins twice as many votes, what if I make it three times as many votes, or k times as many votes, would that be tractable? I really don't know. But the question we ask here is: given that finding a popular matching whose cost is within any multiplicative factor of the optimum is hard, how about a bicriteria approximation question: can we efficiently find a quasi-popular matching whose cost is no more than that of an optimal, that is min-cost, popular matching? So opt is the cost of a min-cost popular matching; in the bigger world of approximately popular matchings, finding a least-cost one is hard, but can we find one whose cost is at most opt? Here the answer was yes, so finally we had a positive answer. Going back to polytope formulations: both the popular matching polytope and the quasi-popular matching polytope have near-exponential extension complexity.
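The factor-two condition in the quasi-popularity definition above can also be checked by brute force. Again, the small instance and its names below are illustrative assumptions of this sketch, not data from the talk; a popular matching passes the check trivially, and the sketch also exhibits a matching that is defeated by more than the allowed factor.

```python
from itertools import combinations

# Illustrative instance: strict preference lists, best neighbor first.
pref = {
    "a1": ["b1", "b2"], "a2": ["b1", "b2"],
    "b1": ["a2", "a1"], "b2": ["a1", "a2"],
}
edges = [("a1", "b1"), ("a1", "b2"), ("a2", "b1"), ("a2", "b2")]

def rank(v, u):
    # Rank of partner u in v's list; being unmatched ranks below everyone.
    return pref[v].index(u) if u is not None else len(pref[v])

def votes_for(m_new, m_old):
    """Number of vertices strictly preferring their partner in m_new."""
    return sum(rank(v, m_new.get(v)) < rank(v, m_old.get(v)) for v in pref)

def matchings():
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            vs = [u for e in sub for u in e]
            if len(vs) == len(set(vs)):
                yield {u: w for a, b in sub for u, w in ((a, b), (b, a))}

def is_quasi_popular(m):
    # No rival may win more than twice as many votes as m wins against it.
    return all(votes_for(m2, m) <= 2 * votes_for(m, m2) for m2 in matchings())
```

Here the stable matching {a1-b2, a2-b1} is quasi-popular, while {a1-b1, a2-b2} loses 3 votes to 1 against it, exceeding the factor of two.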
Both bounds are exponential in m/log m, where m is the number of edges. However, we are able to show an integral polytope, sandwiched between these two polytopes, that has a linear-size extended formulation. For this we need to actually put our hands into fractional matchings: the formulation of this extension uses some notions on popular fractional matchings and the popular fractional matching polytope. A fractional matching is just a point in the matching polytope, and a popular fractional matching is one that does not lose to any integral matching. We defined Δ for two integral matchings, but it easily extends to comparing an integral matching with a fractional matching. The popular fractional matching polytope asks for a point x in the matching polytope that does not lose to any integral matching M, and since Δ(M, x) is a linear function of x, these are all linear constraints; however, there are exponentially many of them, one per matching in the input graph. Can this polytope be described compactly? Here is Rothblum's elegant formulation that I referred to earlier. Rothblum did not really give it in this language of wt_x: we saw wt_M for a matching M earlier, and that naturally generalizes, in a linear manner, to wt_x for any fractional matching x. "The weight of every edge is non-positive" is essentially Rothblum's stable matching polytope formulation. A generalization of the stable matching polytope is this: instead of insisting that every edge has weight at most zero, relax it and say that the weight of every edge may be at most the sum of the α-values of its endpoints, but also insist that the sum of all α-values equals zero. This is a natural generalization of the stable matching polytope: when we take all α-values to be zero, we get precisely the stable matching polytope.
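The constraints described in words above can be written out explicitly. The notation below is a reconstruction from the talk's verbal description, so take the exact indexing as an assumption rather than the speaker's own formulas. For a fractional matching $x$ and an edge $(a,b)$, the fractional vote of $a$ for $b$ and the edge weight are

```latex
\mathrm{vote}_a(b, x) \;=\; \sum_{b' \in N(a)} x_{ab'}\,\mathrm{cmp}_a(b, b')
  \;+\; \Bigl(1 - \sum_{b' \in N(a)} x_{ab'}\Bigr),
\qquad
\mathrm{cmp}_a(b, b') \;=\;
\begin{cases}
  +1 & \text{if } b \succ_a b',\\
  \;\;0 & \text{if } b' = b,\\
  -1 & \text{if } b' \succ_a b,
\end{cases}

\qquad
\mathrm{wt}_x(a,b) \;=\; \mathrm{vote}_a(b, x) + \mathrm{vote}_b(a, x),

% Rothblum's stable matching polytope (in this weight language):
x \in P_{\mathrm{matching}}, \qquad \mathrm{wt}_x(e) \le 0 \quad \forall\, e \in E.

% Generalization (extended formulation of the popular fractional matching polytope):
\exists\, \alpha \in \mathbb{R}^{A \cup B}:\quad
\mathrm{wt}_x(a,b) \;\le\; \alpha_a + \alpha_b \quad \forall\, (a,b) \in E,
\qquad \sum_{v \in A \cup B} \alpha_v \;=\; 0.
```

The last term of $\mathrm{vote}_a(b,x)$ says that $a$ prefers any neighbor to its unmatched probability mass, and setting $\alpha \equiv 0$ recovers Rothblum's constraints, as the talk notes.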
But we give the freedom that some edges can have positive weight; as long as all α-values sum to zero, we are happy with it. And this is precisely a compact extended formulation of the popular fractional matching polytope. Its original ideas were in an older paper with Julián Mestre and Meghana Nasre, though we did it in a different setting; in this setting I did it in 2016. So this is the extension of the popular fractional matching polytope I was talking about. This extension need not be integral, as we will see in an example on the next slide; however, one of its faces is integral, and that face is an extension of the integral polytope sandwiched between the popular matching polytope and the quasi-popular matching polytope that I mentioned a few minutes ago. Optimizing over this face of x, we get a quasi-popular matching of cost at most opt. Coming back to the extension of our popular matching polytope: we said it need not be integral, so let's look at this example, a complete bipartite graph where all the a-vertices have the same preference order, with b1 their favorite neighbor and b2 their second choice, and for all the b-vertices a1 is the top choice, a2 the second choice, and a0 the worst choice. There is only one stable matching here, the blue matching, and that is the lone popular matching too. The red matching is not popular: it is defeated by the matching that pairs a0 with b2 and a1 with b1. However, the fractional matching that puts weight one-half on this four-cycle is a popular fractional matching, and it is not a convex combination of popular matchings. So it is a popular fractional matching that is an extreme point of the popular fractional matching polytope, and it is not integral. A convenient way to think about a fractional matching is as a convex combination of integral matchings, and that is what a mixed matching is.
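The claim that this half-weight fractional matching is popular can be verified directly. The sketch below encodes the slide's instance as described in the talk; the way Δ is extended to compare an integral matching against a fractional one (weighted votes, with leftover mass counting as unmatched) is my reconstruction of the extension the talk alludes to.

```python
from itertools import combinations

# The slide's instance: complete bipartite graph on {a0,a1,a2} x {b1,b2};
# every a-vertex ranks b1 > b2, every b-vertex ranks a1 > a2 > a0.
pref = {
    "a0": ["b1", "b2"], "a1": ["b1", "b2"], "a2": ["b1", "b2"],
    "b1": ["a1", "a2", "a0"], "b2": ["a1", "a2", "a0"],
}
edges = [(a, b) for a in ("a0", "a1", "a2") for b in ("b1", "b2")]

# The fractional matching with weight 1/2 on the four-cycle a1-b1-a2-b2.
x = {("a1", "b1"): 0.5, ("a1", "b2"): 0.5,
     ("a2", "b1"): 0.5, ("a2", "b2"): 0.5}

def cmp_pref(v, u, w):
    """+1 if v prefers u to w, -1 the other way, 0 on a tie (None = unmatched, worst)."""
    ru = pref[v].index(u) if u is not None else len(pref[v])
    rw = pref[v].index(w) if w is not None else len(pref[v])
    return (rw > ru) - (rw < ru)

def frac_partners(v):
    """(partner, weight) pairs of v under x, plus its unmatched leftover mass."""
    pairs = [(b if a == v else a, w) for (a, b), w in x.items() if v in (a, b)]
    return pairs + [(None, 1.0 - sum(w for _, w in pairs))]

def delta(m):
    """Expected votes preferring the integral matching m over x."""
    return sum(w * cmp_pref(v, m.get(v), p)
               for v in pref for p, w in frac_partners(v))

def matchings():
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            vs = [u for e in sub for u in e]
            if len(vs) == len(set(vs)):
                yield {u: w for a, b in sub for u, w in ((a, b), (b, a))}

def x_is_popular():
    # x is a popular fractional matching iff no integral matching defeats it.
    return all(delta(m) <= 1e-9 for m in matchings())
```

Running the check confirms no integral matching defeats x; the blue stable matching only ties against it.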
A mixed matching is a probability distribution over matchings: a mixed matching Π is, say, M0 with probability p0, M1 with probability p1, and so on. This is basically a lottery over matchings, and mixed matchings are quite well studied in certain areas of economics. Analogous to elections between matchings, you can also hold elections between mixed matchings, and everything is as we expect it to be: when we compare two mixed matchings, it is just the expected difference of votes when we sample one matching from Π with its associated probabilities and one from Π′ with its respective probabilities. A mixed matching is popular if it never loses to an integral matching, which means it never loses to another mixed matching either. Finding a min-cost popular matching was hard; however, finding a min-cost popular mixed matching is easy, because a mixed matching is equivalent to a fractional matching, and we can optimize over the popular fractional matching polytope efficiently: though there are exponentially many constraints, it has an efficient separation oracle, so the ellipsoid algorithm finds the best popular mixed matching. So it is good: when we generalize from popular matchings to popular mixed matchings, we again get tractability results. But the optimal solution we get is a set of matchings with associated probabilities, and frankly speaking we have compromised: we wanted one matching and we have ended up with a rather bulky set, and to implement this lottery we need access to random bits. This is the price we pay for the sake of tractability. So there is a drawback of generalizing: though it is tractable, the solution has become, not hard, but more complex to describe and more difficult to implement. But recall that we did have an efficient and much nicer compact extended formulation of the popular fractional
matching polytope. So this was the description, and the linear program that gave rise to this formulation actually had a very interesting property called self-duality: this is the LP, and this LP is identical to its dual; write down the dual of the primal LP and you get exactly the same LP. This self-duality allowed us to show that the extension polytope is actually half-integral. We saw an example some time ago of a half-integral extreme point, and that is really as bad as it gets: it is a half-integral polytope. So there is always an optimal popular mixed matching with a rather simple form: it has support on just two matchings, with equal probability on both. This is joint work with Chien-Chung Huang, and to implement this lottery all we need is one random bit. So given that finding a best popular matching is hard, it is very nice to know that by generalizing to popular mixed matchings we get tractability, and such nice structure on the optimal popular mixed matching. I would like to end here, and since my whole talk was on max-size and min-cost popular matchings, I thought of ending with this quote of Euler that
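The half-integrality result means an optimal popular mixed matching can be written as (M1 + M2)/2, so implementing the lottery takes a single fair coin flip. A minimal sketch, where the two support matchings are hypothetical placeholders rather than the output of any actual optimization:

```python
import random

def sample_half_integral(m1, m2, bit=None):
    """Return one of the two support matchings with probability 1/2 each.
    A single random bit suffices, matching the structure result from the talk."""
    if bit is None:
        bit = random.getrandbits(1)  # the one random bit we need
    return m1 if bit == 0 else m2

# Hypothetical support matchings of an optimal popular mixed matching.
M1 = {"a1": "b1", "a2": "b2"}
M2 = {"a1": "b2", "a2": "b1"}
outcome = sample_half_integral(M1, M2)
assert outcome in (M1, M2)
```

The design point is exactly the one made in the talk: compared with an arbitrary lottery over many matchings, the half-integral optimum is cheap both to describe (two matchings) and to implement (one bit).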