Okay, good. So welcome everyone. It's really a pleasure to have a full house today for a talk by Nima Anari. So before I do the introduction, let me go around the table and say hi to everyone and see if I managed to put you in full screen. So our first group is a group led by Andre from MPI. And you can see that these guys know how to do TCS+ events. So just two minutes ago, the thing was full of pizza, but now it seems like it's starting to run out. So we have a lot of participants from Europe today, maybe because Europeans are good at algorithms, or maybe because the time change is slightly more helpful than usual. So that was a group from MPI. Then we have Benjamin from Madison. Then there's a group with Budhima from EPFL. So more Europeans, welcome guys. Then Dixia from University of Toronto, welcome. Then there is Irfan from Indiana University in Bloomington. Hi. Then we have Guillerm from University of Lisbon. Then there's Junish just around the corner from Caltech. Jiang Yang is joining us from Virginia Commonwealth University. Li Boer is joining us from RCQI. I'm actually not sure what RCQI is, I'm sorry. Okay. And finally, we have our speaker, Nima Anari, joining us from Stanford. So welcome, everyone. I should say that those not present here but part of the team are doing a lot of work in the background. So that's Clément Canonne, Anindya De who is here, Gautam Kamath, Ilya Razenshteyn, and Oded Regev. So thanks guys for the TCS+ team. I'll also say before we start that the next talk two weeks from now will be given by Artur Czumaj from Warwick, and Artur will tell us about round compression for parallel matching algorithms. So onto today's talk now. So it's a great pleasure to have Nima Anari give the talk. So Nima got his PhD a couple years ago with Satish Rao from Berkeley.
And then he spent a little time as a postdoctoral fellow at the Simons Institute in Berkeley. And since, I think, one year he's been at Stanford as a postdoctoral fellow. So Nima has done a lot of work on algorithms, including algorithms for approximating the permanent and many others. But today he's going to tell us about joint work with Vijay Vazirani showing that planar graph perfect matching is in NC. So welcome Nima. Thanks. So hi everyone. This is joint work with Vijay Vazirani. Let me just jump into the problem. So I hope that most people here know what a perfect matching is. If you're given a graph, you want to pick a set of edges such that every vertex is adjacent to exactly one of the edges you picked. So there are basically three main computational questions you can ask about perfect matchings. I've written all of them here. This talk is about the middle one. So let's go over them. The decision problem is: given a graph, does it have a perfect matching? The search problem, which we're discussing today, is: if the graph has a perfect matching, just output one of them. And there is also a counting problem, which is: how many perfect matchings are there? So the type of algorithms we are considering in this talk are in this computational model called NC. This is a theoretical model for parallel algorithms. You can roughly think of it as: you have polynomially many processors. So the number of processors you have is very large. Not a very realistic setting. But it can be as large as a polynomial in the input size. But then, by getting this many processors, you want to be super fast. So the running time of your algorithm has to be polylogarithmic in the input size. And then all of these processors have access to shared memory. Again, an unrealistic assumption in the real world. But this is a theoretical model. Whether or not you have access to random bits actually makes a lot of difference in parallel algorithms, so I have to make this distinction.
So RNC is where you do have access to random bits, and NC is where you don't. And this talk is about deterministic NC. So no randomization is allowed. Okay. So if this is such an unrealistic model of computation, why do we even care about it? It all goes, sorry, is everybody hearing me fine? Well, yeah, we can hear you. But sometimes it gets caught a little bit. And I was wondering if that could be because you're on a wireless network. The connection is a little jumpy. Yeah. I mean, if you have a cable, you can plug in right away. You can do it. Otherwise, I think it's fine. Yeah, I can't do anything about it. Sorry. Okay. So if it's really a problem, I'll let you know. If it's a problem, then in the worst case, we can always switch off the video. Okay. That's true. Yeah. But let's try it this way. And I'll ask you if you need to stop the VLC maybe. Sure. Okay. Sorry. So, yeah, so it all goes back to this central problem in computational complexity called polynomial identity testing. So what is this problem? You're given a multivariate polynomial in m variables. You can be given such a polynomial in many different ways: you can assume that you're given a circuit, a formula, or even an oracle that evaluates this polynomial at arbitrary points for you. And your task is to determine whether this polynomial is identically zero or not. So there is a very simple randomized algorithm, and Schwartz and Zippel were the first ones to prove that this simple randomized algorithm works. And the way this randomized algorithm works is you just plug in random values for the variables. If the polynomial is identically zero, then the evaluation is always zero. And Schwartz and Zippel showed that if you're taking your random inputs from a large enough set, then if your polynomial is not identically zero, the evaluation is nonzero with high probability. Yeah. The problem is still there. So maybe can you just close the VLC window?
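The randomized evaluation test Schwartz and Zippel analyzed can be sketched in a few lines of Python. This is a hedged sketch, not from the talk: the black-box interface, the degree bound, and the trial count are all illustrative choices.

```python
import random

def is_identically_zero(poly, num_vars, degree, trials=20):
    """Schwartz-Zippel style randomized identity test.

    `poly` is a black-box function of `num_vars` integer arguments.
    If the polynomial has total degree <= `degree` and is not identically
    zero, a uniformly random point from a set of size S is a root with
    probability at most degree / S, so a nonzero evaluation is a witness.
    """
    # Sample from a set much larger than the degree, so each trial's
    # false-zero probability (degree / set_size) is tiny.
    set_size = 100 * max(degree, 1)
    for _ in range(trials):
        point = [random.randrange(set_size) for _ in range(num_vars)]
        if poly(*point) != 0:
            return False  # certified: not identically zero
    return True  # identically zero, with high probability

# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero; xy - 1 is not.
zero_poly = lambda x, y: (x + y) ** 2 - (x * x + 2 * x * y + y * y)
nonzero_poly = lambda x, y: x * y - 1
```

Note the one-sided error: a nonzero evaluation is a proof of nonzero-ness, while the all-zeros answer is only correct with high probability.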
Sometimes this is what uses most of the bandwidth and it'll be easier. Okay. All right. Is it better now? Well, you have to speak for a while, but it looks okay now. Yeah, sure. Okay. All right. So polynomial identity testing has this very simple randomized algorithm. And one of the central questions has been whether we can design deterministic algorithms for it. There are a lot of works on this problem. It's quite likely a very hard problem, because if you can provide polynomial-time deterministic algorithms for this problem, then you can prove non-trivial circuit lower bounds. Okay. So because polynomial identity testing is likely very difficult in general, people have considered several special cases of it. I've listed three of them here. The first one is bipartite perfect matching. The way it reduces to polynomial identity testing is the following. Suppose that you have a bipartite graph and you consider its bipartite adjacency matrix, given by the A_ij's. So A_ij is one whenever there is an edge between i and j, and zero whenever there is none. And then you multiply these by symbolic variables X_ij. So if you compute the determinant of this symbolic matrix, it becomes a polynomial in the variables X_ij. And if you just look at the formula for the determinant, this polynomial is identically nonzero exactly when there is a perfect matching. Any term you get in the determinant has distinct variables X_ij, so there is no cancellation in the computation of the determinant. Is the sound fine? Yes, so far so good. So that was bipartite perfect matching. You can generalize this idea to perfect matching in non-bipartite graphs. There you have to work with a slightly more general matrix called the Tutte matrix. But again, the same reduction holds. The determinant of the symbolic matrix is identically nonzero if and only if the graph has a perfect matching. I'll come back to what this Tutte matrix is later in the talk.
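Combining the reduction just described with random substitution gives Lovász's randomized matching test: plug random values into the symbolic matrix and check whether the determinant vanishes. A minimal sketch, assuming exact rational arithmetic to avoid floating-point issues (the value range is an illustrative choice):

```python
import random
from fractions import Fraction

def det(mat):
    """Determinant by Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in mat]
    n = len(m)
    result = Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            result = -result
        result *= m[col][col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= factor * m[col][c]
    return result

def bipartite_has_pm(adj):
    """adj[i][j] == 1 iff left vertex i is joined to right vertex j.
    Substitute random values for the symbolic X_ij; the determinant is
    nonzero iff a perfect matching exists, with high probability over
    the random substitution (one-sided error)."""
    n = len(adj)
    mat = [[adj[i][j] * random.randrange(1, 100 * n) for j in range(n)]
           for i in range(n)]
    return det(mat) != 0
```

In NC one would evaluate this determinant with a parallel determinant algorithm; here it is ordinary elimination, just to make the reduction concrete.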
And then there is this third problem, which is a slight generalization of the previous two, called exact matching. So you're given a graph, even a bipartite graph, and you've colored its edges red and blue. And you're asked whether there is a perfect matching that uses exactly k red edges. So all of these problems can be reduced to polynomial identity testing. And because of this reduction, there are randomized NC algorithms for all three problems. Okay. Now, for bipartite perfect matching and perfect matching in general graphs, we already know polynomial-time algorithms that don't use any randomization. And these algorithms don't use this reduction at all. So it's for a completely different reason that these two problems are in P. So if we really want to de-randomize polynomial identity testing in these three cases, we have to be working at the right level of complexity. So asking whether there are deterministic polynomial-time algorithms for perfect matching is not the right question. We have to go down one level and ask whether there are NC algorithms for perfect matching and bipartite perfect matching. If you're working with exact matching, then it's fine to ask whether there is any polynomial-time algorithm, because we don't know any such algorithm for this problem. Finally, you can also define a search version of polynomial identity testing. The search version would be: if your polynomial is not identically zero, just give me a monomial whose coefficient is nonzero. In the three problems I mentioned, these reductions basically also preserve the search version. So if you can answer that for polynomial identity testing, you can answer it for bipartite perfect matching, perfect matching, and exact matching. There is a very general technique called the isolation lemma that basically also solves the search version of all of these problems in randomized NC. So this is the main reason we are interested in this question.
So here is a brief history of parallel algorithms for perfect matching. Lovász was the first one to show that there is a randomized NC algorithm for the decision problem, and then later Karp, Upfal, and Wigderson showed that even the search version is in RNC. And then a year later, using this very general technique called the isolation lemma, Mulmuley, Vazirani, and Vazirani gave a different proof of why search is in RNC. And then over the last couple of years, there has been renewed interest in these problems because of this amazing discovery by Fenner, Gurjar, and Thierauf that if you slightly relax your definition of NC and allow not just polynomially many processors but quasi-polynomially many, then search and decision for bipartite graphs can both be solved in quasi-NC. And then this idea was later extended to general non-bipartite graphs by Svensson and Tarnawski. Okay, by the way, if you have any questions, just feel free to interrupt me. Okay, so this talk is about the special case of planar graphs. So why do we consider planar graphs at all? In general, the decision, search, and counting problems for perfect matching are perceived to be in increasing order of difficulty. The reason for this is that decision can be reduced to both search and counting, so search and counting must be at least as difficult. And for general graphs, counting is known to be #P-hard, whereas the previous two are in RNC or even in polynomial time. But for planar graphs, it's been known for a long time that counting can be done in NC. So this is a result of Kasteleyn, and then Vazirani generalized this to a slightly more general class of graphs called K_{3,3}-free graphs. Okay. But the search version was still a question mark. So we knew polynomial-time algorithms for it, but no deterministic parallel algorithms. Okay.
I should mention that for the even more special case of bipartite planar graphs, it was known that the search version is in NC. The first proof of this fact appeared in a paper by Miller and Naor. And then later, Mahajan and Varadarajan gave a very different proof. And our algorithm is actually inspired by this work of Mahajan and Varadarajan, so I'm going to go over it in detail. Okay. So this is formally the result that we obtained: there is an NC algorithm for finding a perfect matching in planar graphs. Okay. I should mention that since our result, Eppstein and Vazirani have generalized this to a more general class of graphs called one-crossing-minor-free graphs. So this includes planar graphs as well as some other graphs. And then Sankowski has shown an alternative algorithm for the same problem. And he has also shown that the related problem of maximum cardinality matching in bipartite planar graphs can be solved in NC. In general, there is a reduction from maximum cardinality matching to perfect matching, but the reduction destroys planarity. So this is why the second result is non-trivial. All right. So how do you count perfect matchings in a planar graph? Let's consider a bipartite planar graph for simplicity. You have its bipartite adjacency matrix. So the rows correspond to the vertices on the top and the columns to the vertices on the bottom. There's a one whenever there is an edge. And the permanent of this matrix is exactly the quantity that you want to compute: it's exactly the number of perfect matchings. Right? Now, the permanent is in general #P-hard to compute. So it's not clear why planar graphs make it easy. But there is a quantity similar to the permanent that we can always efficiently compute, and that's the determinant.
So there is a superficial similarity between the determinant and the permanent, in that you're summing over the same terms in both. But in the determinant, the terms appear with either a plus sign or a minus sign. So there could be cancellations. Right? So Pólya was the first person to ask this question: when you are given a zero-one matrix, can you maybe change some of the ones to minus ones so that the determinant becomes the permanent of the original matrix? In other words, you want to change some of these ones to minus ones so that all of the signs here become plus, so that there are no cancellations. And it turns out that for bipartite planar graphs, you can always do this. That's the reason planar graphs are special. Now, if you're working with non-bipartite graphs, you have to work with a slightly more general matrix called the Tutte matrix. This is a skew-symmetric matrix, meaning that its transpose is its negative. So here, the rows and the columns are both indexed by all of the vertices in the graph. And whenever there is an edge, you put a plus one on one side and a minus one on the other side. So you have a choice here, whether to put the minus one on the top and the plus one on the bottom, or vice versa. Okay? So what we know in general about the Tutte matrix is that its determinant is very related to the perfect matchings. It's a sum over perfect matchings of some particular sign, where the sign is determined by the way you chose the signs in the matrix, and then the whole thing is squared. Okay? So again, if you can manage to find a signing so that there is no cancellation in the sum, you can compute the number of perfect matchings by computing the determinant and then taking the square root. And this is what Kasteleyn showed: in planar graphs, you can always choose the signs in the Tutte matrix so that there are no cancellations. And Csanky had shown that you can compute the determinant in NC using parallel algorithms.
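Kasteleyn's recipe can be seen end to end on a tiny instance. The following sketch uses the 4-cycle with a hand-picked signing (the orientation below is my choice for illustration; it satisfies Kasteleyn's condition that each inner face has an odd number of clockwise edges), and recovers the matching count as the square root of the determinant:

```python
import itertools
import math

def det(mat):
    """Determinant by permutation expansion; fine for tiny matrices."""
    n = len(mat)
    total = 0
    for perm in itertools.permutations(range(n)):
        sign = (-1) ** sum(perm[i] > perm[j]
                           for i in range(n) for j in range(i + 1, n))
        prod = 1
        for i in range(n):
            prod *= mat[i][perm[i]]
        total += sign * prod
    return total

# Skew-symmetric signed matrix for the 4-cycle 0-1-2-3-0, with the
# orientation 0->1, 1->2, 2->3, 0->3: a +1 in entry (u, v) and a -1 in
# entry (v, u) for each oriented edge (u, v).
oriented_edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
A = [[0] * 4 for _ in range(4)]
for u, v in oriented_edges:
    A[u][v], A[v][u] = 1, -1

# det(A) is the square of the signed sum over perfect matchings; with a
# no-cancellation signing, sqrt(det) is the number of perfect matchings.
num_pms = math.isqrt(det(A))  # the 4-cycle has exactly 2
```

In the actual NC algorithm the determinant would of course be computed by a parallel method rather than by brute-force expansion.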
And so this gives you a way to count the number of perfect matchings in NC. Nima? Yeah. So if you're trying to compute the permanent of a matrix and the underlying zero/non-zero pattern is actually, let's say, a bipartite planar graph, is it easy to then compute the permanent? Yes. Yes, exactly the same signing gives you the same. Okay. Yeah. So it's not necessary for your matrix to consist of plus ones and minus ones. You can have arbitrary coefficients; as long as the non-zero pattern forms a planar graph, you can do this. All right? So we can count in planar graphs, but how does that help search? Okay? For this, I'm going to go over the algorithm of Mahajan and Varadarajan, who used counting to solve search in bipartite planar graphs. So roughly, the sketch of their algorithm is the following. You find a point in the perfect matching polytope. So what is the perfect matching polytope? It's the polytope whose vertices are the indicator vectors of all perfect matchings. Okay? So this is a pictorial depiction. You find this point, which could be in the interior of the whole polytope; the vertices of this polytope are basically all of the perfect matchings. And then from this point that you found, you move to a lower-dimensional face, and then from that lower-dimensional face, you move to an even lower-dimensional face, and so on, until you reach a vertex. Okay? So where does counting come into the picture? Counting is used exactly in the first step of this algorithm, to get the initial point in the perfect matching polytope. So I want to compute the average of all of the vertices of my polytope, which is by definition in the convex hull, and I can use this trick: think of an edge e, count the number of perfect matchings that contain that edge e, and divide that by the total number of perfect matchings. So this gives you some number between zero and one.
And if you consider all of these numbers for all of the edges, this is exactly the average of all of the indicator vectors of all perfect matchings. Okay? So you can use counting to find that starting point. And then the rest of the algorithm of Mahajan and Varadarajan doesn't use counting anymore. So the polytope that I mentioned, whose vertices are all the perfect matchings, has a very simple description in the bipartite case. Nima? Yeah. One more question. So you use this counting algorithm to find an initial point in the interior of the polytope. Right. When you go to a lower-dimensional polytope, do you again invoke the thing, or is it just one call? No, no, no. Just one call at the beginning of the algorithm. Okay. Yeah. So Nima, just, yeah. Sorry. Go ahead. Following up on that: doesn't this step-by-step process take too many rounds? How do you do this? Right, right, right. Yeah, I'll go over that in the next couple of slides. Yes. So yeah, so this is the matching polytope. These are basically the inequalities and equalities describing it. For each edge, you have a variable describing whether that edge is in the perfect matching or not. The first, equality, type of constraint is just telling you that each vertex must be adjacent to exactly one edge. And the second type of constraint is just that the x's should be non-negative. Okay. This is known to be the exact description of the perfect matching polytope in the bipartite case. Right. And how do you move inside the polytope described by these equalities? You can move using a cycle, basically. Consider this highlighted cycle here. If I add epsilon to the even edges and remove epsilon from the odd edges, then I'm still satisfying the first type of constraints. Okay. Because at every vertex, my change is just plus epsilon and minus epsilon, so they cancel each other. Right.
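The counting trick for the starting point can be brute-forced on a small instance. This is a sketch only: the talk does the counting via Kasteleyn determinants in NC, whereas here the perfect matchings are simply enumerated.

```python
from fractions import Fraction

def perfect_matchings(n, edges):
    """Enumerate all perfect matchings of a graph on vertices 0..n-1
    (brute force; always branch on the smallest uncovered vertex so
    each matching is generated exactly once)."""
    def rec(remaining, chosen):
        if not remaining:
            yield frozenset(chosen)
            return
        v = min(remaining)
        for (a, b) in edges:
            if v in (a, b):
                other = b if a == v else a
                if other != v and other in remaining:
                    yield from rec(remaining - {a, b}, chosen + [(a, b)])
    yield from rec(frozenset(range(n)), [])

def average_matching_point(n, edges):
    """x_e = (# perfect matchings containing e) / (# perfect matchings):
    the average of all indicator vectors, hence a point in the perfect
    matching polytope."""
    pms = list(perfect_matchings(n, edges))
    return {e: Fraction(sum(e in pm for pm in pms), len(pms))
            for e in edges}
```

On the 4-cycle, for instance, each edge lies in exactly one of the two perfect matchings, so every coordinate of the starting point is 1/2.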
So as long as I can do this and keep my x's non-negative, I'm moving inside the matching polytope. Okay. So you can basically increase epsilon on this cycle until one of the edges becomes zero. Right. Once one of the edges becomes zero, you have basically made one of these constraints tight, and you're at a lower-dimensional face. Okay. So this way you can reduce the dimension of the face that you're at by one. Right. But if you do this naively, it can take polynomially many steps. So the way you parallelize this is you find many, many disjoint cycles. Let's say in this picture, we have three cycles, and they are all edge-disjoint. And then on each one of them, you can do this operation in parallel. Okay. So in planar graphs, you can generally find Omega(n) edge-disjoint alternating cycles, as long as some minor conditions are satisfied for the planar graph. I'll get to those conditions at the end. But that's the rough idea. You can always find roughly linearly many edge-disjoint alternating cycles. And then you can do this operation of removing an edge for each one of them. And this basically reduces the dimension of the face that you're at by at least a constant factor. Okay. Because you made Omega(n) of these constraints tight. All right. So you rotate these in parallel, and then Omega(n) edges disappear. And then you have to clean up the remaining graph and repeat. So this cleanup is related to what I said about the conditions you need on the planar graph. In general, not all graphs have this many edge-disjoint cycles. Think of just a single large cycle, right? It only has one cycle. So the condition you need here is roughly that the graph doesn't have any vertices of degree two. Okay. So this cleanup basically gets rid of the vertices of degree two, and it's very easy to deal with those vertices. Okay. So once you clean up, you again find linearly many alternating cycles.
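A single cycle rotation, as just described, is easy to sketch in code. This is a minimal sketch for the bipartite case (it assumes the cycle is given as an alternating list of edges, and it ignores any constraints other than nonnegativity, exactly as in the bipartite polytope):

```python
from fractions import Fraction

def rotate_even_cycle(x, cycle_edges):
    """Add eps to the even-indexed edges of an alternating even cycle
    and subtract eps from the odd-indexed ones.  Every vertex sees one
    +eps and one -eps, so the degree equalities are preserved; eps is
    pushed as far as possible, so some edge hits 0 and we land on a
    lower-dimensional face."""
    eps = min(x[e] for e in cycle_edges[1::2])  # largest feasible step
    for i, e in enumerate(cycle_edges):
        x[e] += eps if i % 2 == 0 else -eps
    return x
```

On the 4-cycle with all coordinates 1/2, a single rotation already produces an integral matching; the parallel algorithm performs this on Omega(n) edge-disjoint cycles at once.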
And you can repeat this until you basically get to a perfect matching. Okay. So what's the challenge in non-bipartite graphs? The challenge is that the polytope is much more complicated. Edmonds was the first person to give the description of the perfect matching polytope in non-bipartite graphs. So we have the two constraints we had before, but these are not enough. You have to also add a third type of constraint, which says that for any odd set of vertices (here in this example, I have three vertices), any perfect matching must have an outgoing edge from the set. Okay. This is easy to see, because all of the internal edges cover an even number of vertices. So there must be at least one vertex not covered by the internal edges, which must be covered by an external edge, right? And Edmonds showed that if you add this third type of constraint, then you have exactly the description of the perfect matching polytope. Notice that the third type of constraint is much more complicated than the previous two, because there are exponentially many of them. Okay. All right. So how does that make our lives harder? Previously we were finding an even cycle, and I can still try to add epsilon to and remove epsilon from its edges. But now a constraint of the third type might be the one that blocks me, right? So here is an example. If you consider this graph and you put one third on all of the edges, this is a point in the perfect matching polytope. But if you try to add epsilon to the even edges of this cycle and remove epsilon from the odd edges, then the constraint for this cut becomes violated for any epsilon bigger than zero. Okay. That's because before you did this operation, the value of this cut was one third plus one third plus one third, which is one. And now you're removing two epsilon from this value. So the cut would go below one, right?
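Edmonds' description can be checked by brute force on small graphs. The two example points below are stand-ins chosen by me, not the slide's figure: x = 1/3 on K4 lies in the polytope (with every size-3 odd-set constraint tight), while the classic "two triangles joined by a bridge" point satisfies the bipartite-style constraints yet violates an odd-set constraint.

```python
import itertools
from fractions import Fraction

def in_pm_polytope(n, edges, x):
    """Brute-force check of Edmonds' description: degree equalities,
    nonnegativity, and x(delta(S)) >= 1 for every odd set S with
    3 <= |S| < n (singletons are covered by the degree equalities)."""
    if any(sum(x[e] for e in edges if v in e) != 1 for v in range(n)):
        return False
    if any(x[e] < 0 for e in edges):
        return False
    for size in range(3, n, 2):
        for S in itertools.combinations(range(n), size):
            S = set(S)
            if sum(x[e] for e in edges if (e[0] in S) != (e[1] in S)) < 1:
                return False
    return True

# 1/3 on every edge of K4: degrees are 3 * 1/3 = 1, and every size-3
# odd set has cut value exactly 1, so it is feasible (and tight).
k4_edges = list(itertools.combinations(range(4), 2))
x_k4 = {e: Fraction(1, 3) for e in k4_edges}

# Two triangles joined by the bridge (2, 3), with 1/2 on triangle edges
# and 0 on the bridge: degrees are fine, but the odd set {0, 1, 2} has
# cut value 0, violating the odd-set constraint.
tri_edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
x_tri = {e: (Fraction(0) if e == (2, 3) else Fraction(1, 2))
         for e in tri_edges}
```

The second point is exactly the kind of spurious fractional solution that the exponentially many odd-set constraints are there to cut off.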
So the third type of constraint can still block us and it might not be possible to remove an edge from such an even cycle, right? But here is the saving grace. So if you have a set, an odd set of vertices that's blocking you, meaning that the cut value of it is already one, then there is a well-known operation that basically preserves perfect matchings and reduces the size of the graph. So this operation is basically you shrink all of the vertices inside the set S into a single vertex S. So you contract your graph. And the point is that if you had a point in the perfect matching polytope before the shrinking, the new point induced by the previous point is inside the polytope of the new graph, right? The reason for this is that because this set S was tight, now the degree constraint for this new vertex S is satisfied, okay? And there is also a second fact, which is that if you find a perfect matching in this shrunk graph, you can always extend it to a perfect matching in the original graph, okay? So it seems that if I'm blocked by the third type of constraint, I can still make progress, right? So for each cycle, you basically go until you are stuck by one of the constraints, you've either removed an edge or you are able to shrink an odd set, and that reduces the size of the graph, okay? Now the challenge is that we want parallel algorithms. So we don't want to just remove one edge or shrink just a small set, but we want to remove either many edges or basically shrink a lot of the graph. So here are the three main ingredients that our algorithm uses. So the first ingredient is about moving inside the polytope. So before in the bipartite case, we were doing this by moving along each cycle in parallel. I'll show you that in the non-bipartite case, you can't do that naively anymore. You have to be more careful and more clever. Then there is this question of once you're basically blocked by an odd set, how do you find it, right? 
Because there are potentially exponentially many odd sets and you can't just enumerate over all of them. And then once you find these odd sets, potentially many, one for each of the cycles that you're working with, how do you shrink them? You've found an odd set for each cycle, but they can be crossing each other arbitrarily. So it's no longer clear that you can shrink them at the same time. You have to somehow make them disjoint. So I'll go over these ingredients one by one. Let's go over the first one. Remember that in the bipartite case, we were able to find Omega(n) edge-disjoint cycles as long as the graph doesn't have vertices of degree two. That's still true for non-bipartite graphs, right? But the cycles can be odd. So in this graph, we have these two cycles, and they have odd length. You can no longer do the previous operation of adding and removing epsilon, because if you just do that, then the degree constraints for these two vertices wouldn't be satisfied, right? So how do you fix this? You can join the odd cycles that you find by some path. Morally, you should think of two joined cycles as a single cycle with repeated edges. So I traverse one triangle first, then go over the joining edge, then traverse the other triangle, and then go back over the joining edge, okay? So this is morally an even cycle, just with repeated edges. And if you do the accounting of the plus epsilons and minus epsilons, you'll see that you have to remove twice the epsilon from this joining edge. So you can still move within the perfect matching polytope using these even cycles, right? But here's the big problem. If you have disjoint even cycles in the bipartite case, you can move along them in parallel, and nothing goes wrong. So what do I mean by that? Here is a picture: the perfect matching polytope with a central point, and we've found two disjoint cycles.
Each one of them gives me a movement. So the two yellow arrows show you the directions of the movements. If I increase the size of my arrows until I hit a face for each one of my cycles, the combination of the moves still takes me to a point that's inside the polytope. This is because the inequality constraints defining the polytope in the bipartite case are very simple: they're just coordinate non-negativity. So yes, moves can be done in parallel when your graph is bipartite. But in non-bipartite graphs, things are more complicated. The picture is more like this. You can think of this additional face as one of the odd-set constraints. So now, if I have two moves, their sum no longer takes me to a point inside the polytope; it can just jump outside of the polytope. Here is how that situation happens in the graph. Think of an odd set S and two disjoint cycles that intersect it. Now if I find the maximum slack for this set with respect to the first move, epsilon one, and the maximum slack with respect to the second move, epsilon two, then the maximum slack for the sum of these two moves is roughly the maximum of the two slacks, epsilon one and epsilon two, not the sum of epsilon one and epsilon two. So if I do my operations on both of these edge-disjoint cycles in full, the constraint for this set can become violated. So how do we fix this? The way we fix this is by invoking the counting algorithm again. So here is the idea. Suppose that I'm at this point in my perfect matching polytope and I have two moves, denoted by the yellow arrows. Suppose that I found a hyperplane which separates these arrows from the rest of the space, so all of the arrows live on one side of the hyperplane. This hyperplane is defined by a weight vector w. So it's equivalent to saying that there is a vector w such that the dot product of w with all of my moves is negative.
Now if I find the minimizer of w dot x for this weight vector w, this would be, maybe, this point denoted in black. This point has already exhausted the moves parallel to the yellow arrows. In other words, starting from this new black point, I can't move along any of my cycles anymore. Because if I could, I would be able to reduce w dot x further. So that's the whole idea. You find a vector w such that the dot product of w with all of your moves is negative, and then you find the minimizer of w dot x. Here is what that translates to. You can think of w as a weight vector on the edges. And the condition that the dot product of w with each of my moves is negative is saying that for each of the even cycles that I found, the alternating sum of the weights should be negative. Now if I have a bunch of edge-disjoint cycles, it's very easy to construct such a weight vector. You make the weights of all of the edges zero, except for one edge in each cycle. For each cycle, you pick one of the edges that appears with a negative sign in the alternating sum, you set the weight of that edge to one, and for all of the other edges you set the weight to zero. This makes sure that the dot product is negative. But then how do you find the minimizer of this w dot x? That's the main question. It seems like you're solving an optimization problem, and we couldn't even find a vertex of this polytope to begin with. The point is that this minimizer doesn't have to be a vertex. You can find a point inside the minimizing face. So here is the lemma. If you put weights on the edges of the graph, as long as the weights are bounded by some polynomial in n, then a fractional minimizer of w dot x can be found in NC. And the way you find it is by counting the number of minimum-weight perfect matchings. So it's similar to before, except that everywhere you replace perfect matchings by minimum-weight perfect matchings. Okay. So how do you count minimum-weight perfect matchings?
It's just a simple generalization of the previous idea. Previously we had this Tutte matrix, which had plus one, minus one, and zero entries. Now you multiply those entries by powers of a symbolic variable t: for every edge, I raise t to the weight of that edge, and I multiply that into the plus-one or minus-one entry of my Tutte matrix. Now if you have a perfect matching, the term that appears in the determinant of this matrix for that perfect matching is just t raised to the weight of that perfect matching. So if I can evaluate this determinant as a polynomial in t, I can extract the lowest-order term, and that gives me the number of perfect matchings of the lowest possible weight. Okay. There are many ways to do this. One very simple way of computing this as a polynomial in t is to just evaluate it at different points and then use polynomial interpolation. So you just plug in different values for t (the determinant you already know how to compute in NC), and then finally you do polynomial interpolation to find the coefficients. Okay. So this shows you a way of finding the minimizer of w dot x: not necessarily a vertex minimizer, but some point in the minimizing face. But that's enough for moving inside the polytope. So now let me go to the second ingredient, which is finding the tight odd sets. Okay. The idea was that if you had a cycle and you moved along it until you exhausted it, so you can no longer move along it, then it must be blocked either by an edge constraint or by an odd-set constraint. It's easy to determine whether it's blocked by an edge: you just check whether x is zero for any of the edges. But if that's not true, then it must be blocked by an odd set. How do you find that odd set? Okay. So you want to find the set S such that the value of the cut for that set S is exactly one.
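The t-substitution from a moment ago can be sketched on a tiny weighted example. The 4-cycle, its hand-picked no-cancellation signing, and the edge weights below are all my illustrative choices; and instead of evaluation plus interpolation, this sketch just expands the small determinant symbolically.

```python
import itertools
import math

def poly_mul(p, q):
    """Multiply polynomials in t represented as dicts degree -> coeff."""
    r = {}
    for d1, c1 in p.items():
        for d2, c2 in q.items():
            r[d1 + d2] = r.get(d1 + d2, 0) + c1 * c2
    return {d: c for d, c in r.items() if c != 0}

def poly_det(mat):
    """Determinant of a matrix of polynomials in t, by permutation
    expansion; fine for tiny matrices."""
    n = len(mat)
    total = {}
    for perm in itertools.permutations(range(n)):
        sign = (-1) ** sum(perm[i] > perm[j]
                           for i in range(n) for j in range(i + 1, n))
        prod = {0: sign}
        for i in range(n):
            prod = poly_mul(prod, mat[i][perm[i]])
        for deg, c in prod.items():
            total[deg] = total.get(deg, 0) + c
    return {d: c for d, c in total.items() if c != 0}

# Weighted 4-cycle with the signing 0->1, 1->2, 2->3, 0->3:
# entry (u, v) is +t^w and entry (v, u) is -t^w.  Weights are made up.
weighted_edges = {(0, 1): 1, (1, 2): 2, (2, 3): 1, (0, 3): 2}
A = [[{} for _ in range(4)] for _ in range(4)]
for (u, v), w in weighted_edges.items():
    A[u][v] = {w: 1}
    A[v][u] = {w: -1}

d = poly_det(A)              # a polynomial in t with no cancellation
low = min(d)                 # lowest degree = 2 * (min matching weight)
min_weight = low // 2
num_min_weight_pms = math.isqrt(d[low])
```

Since the signing kills all cancellation, the determinant is the square of the sum of t^{w(M)} over perfect matchings M, so its lowest-degree coefficient is the squared count of minimum-weight matchings.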
So the idea is that if you move along the cycle slightly, by a very tiny amount epsilon, then you have violated exactly that constraint. And if you choose your epsilon small enough, you've violated that constraint alone and no other constraint. So you move by a tiny amount epsilon along that even cycle, and then you find the violated constraint that has taken you outside of the polytope by solving the minimum odd cut problem. In this problem, you're given a weighted graph — here the weights are given by the x_e's — and you want to find, among all odd sets S, the one that minimizes the cut value. Previously, before moving outside of the polytope, this minimum was at least one. Now that you've violated the constraint, the minimum has gone below one, so it's enough to find the odd set that minimizes it. Finding a minimum odd cut is an old problem that was solved by Padberg and Rao. Their idea was that you can use this thing called a Gomory–Hu tree. So let me describe what a Gomory–Hu tree is. Suppose you're given an undirected graph, potentially weighted. There is a tree on the same set of vertices — whose edges need not have anything to do with the edges of the graph — such that all of the minimum s–t cuts in the graph are given by cuts in the tree. Here's an example: you have this graph, and you can find the tree on it with the highlighted edges. So let me parse that property for you. Given a tree, there are exactly n − 1 cuts defined by the tree: to get a tree cut, you remove an edge of the tree, and that decomposes the vertices into two subtrees. So any tree defines these n − 1 cuts.
And the property that the Gomory–Hu tree satisfies is that these n − 1 cuts exhaust the minimum s–t cuts for all pairs of vertices s, t in the graph. What Padberg and Rao showed was that a Gomory–Hu tree not only exhausts all of the minimum s–t cuts, it also exhausts the minimum odd cuts. So as long as you are able to construct this Gomory–Hu tree, you can try all of the n − 1 cuts defined by it, and you're guaranteed to find a minimum odd cut among them. Now, how do you construct a Gomory–Hu tree? There is a standard polynomial-time algorithm for doing that; it just uses algorithms for finding minimum s–t cuts, or maximum s–t flows. What we showed was that if you can do that in NC — which for planar graphs we can, due to a result of Johnson — then you can also construct the Gomory–Hu tree in NC. We basically took the same polynomial-time construction and turned it into an NC algorithm. So that's how you find an odd set: for each of your cycles, you can find a tight odd set that intersects it. Now the question is, if you have many different cycles and each of them gives you some odd set, these odd sets can be crossing each other. How do you make them disjoint so that you can shrink them all at the same time? As I said, tight odd sets can cross each other arbitrarily, and this genuinely creates a problem: you can't shrink them simultaneously. But there is a standard way of uncrossing, which is that if you have two sets S and T that cross each other — remember, these are odd sets — then depending on whether or not their intersection is odd, you can uncross them into either S − T and T, or S ∪ T, and both results are guaranteed to again be tight. The reason for the tightness is basically the submodularity of the cut function.
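The Padberg–Rao enumeration step is easy to sketch in code, assuming the Gomory–Hu tree has already been built (the tree below is a made-up example, and "odd" here simply means a side of odd size):

```python
# Illustrative sketch: given a Gomory-Hu tree, try each of the n-1
# tree cuts and keep the cheapest one whose side has odd cardinality.

def min_odd_cut(n, tree_edges):
    """tree_edges: list of (u, v, capacity) forming a spanning tree on
    vertices 0..n-1. Returns (capacity, odd side) of the cheapest tree
    cut with an odd side, or None if no tree cut has an odd side."""
    adj = {v: [] for v in range(n)}
    for u, v, c in tree_edges:
        adj[u].append((v, c))
        adj[v].append((u, c))
    best = None
    for u, v, c in tree_edges:
        # component of u after deleting edge (u, v): simple DFS
        side, stack = {u}, [u]
        while stack:
            x = stack.pop()
            for y, _ in adj[x]:
                if y not in side and {x, y} != {u, v}:
                    side.add(y)
                    stack.append(y)
        if len(side) % 2 == 1 and (best is None or c < best[0]):
            best = (c, side)
    return best

# 6 vertices, path-shaped tree with hypothetical capacities
tree = [(0, 1, 3), (1, 2, 1), (2, 3, 4), (3, 4, 2), (4, 5, 5)]
cap, side = min_odd_cut(6, tree)
```

On this example the odd-sided tree cuts have capacities 3, 4, and 5, so the minimum odd cut found is the capacity-3 cut isolating vertex 0.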
So we have this inequality: the cut value of S plus the cut value of T is at least both the cut value of S ∩ T plus the cut value of S ∪ T, and the cut value of S − T plus the cut value of T − S. Now if S ∩ T is odd, say, then S ∪ T is also odd, by a standard parity argument. So these are two odd cuts, and the sum of their cut values is at most the sum of the cut values of S and T, which are both one, because these are tight cuts. But neither can be lower than one, so both must be equal to one. The same standard argument applies in the other case: when S ∩ T is even, you consider S − T and T − S, and both of them are tight odd cuts. Notice that in these pictures I haven't drawn S ∩ T in one case, or T − S in the other. The reason is that they are subsets of the larger sets: T − S is a subset of T, and S ∩ T is a subset of S ∪ T. At the end of the day I want to shrink sets, so sets contained inside other sets automatically get shrunk along with them — I'm only tracking the top-level sets. So what I just said boils down to this: if you have two sets that cross each other, you can either replace both of them by S ∪ T, or remove their intersection from one of the two sets, arbitrarily from either of the two. And whether you are in case one or case two is determined purely by parity — whether S ∩ T is odd or even — and you're guaranteed that one of the two cases applies. All right. So you can uncross two given sets, but if you do this arbitrarily, it might take a long time — it's not even clear that it takes polynomial time, let alone polylogarithmic time. So how do we parallelize this procedure?
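The parity rule for a single uncrossing step can be written down in a few lines — a minimal sketch, with sets of vertices standing in for the odd sets (tightness of the results is what the submodularity argument above guarantees; the code only handles the set arithmetic):

```python
# Illustrative sketch of one uncrossing step for two crossing odd
# sets S and T (both of odd cardinality).

def uncross(S, T):
    """Return the replacement family for two crossing odd sets."""
    I = S & T
    if len(I) % 2 == 1:
        # S ∩ T odd  =>  S ∪ T is odd; keep the union (the smaller
        # tight odd set S ∩ T lies inside it and shrinks with it)
        return [S | T]
    else:
        # S ∩ T even  =>  S − T and T − S are both odd; remove the
        # intersection from one of the two sets, say from S
        return [S - T, T]

S, T = {1, 2, 3, 4, 5}, {4, 5, 6, 7, 8}     # |S ∩ T| = 2, even
assert uncross(S, T) == [{1, 2, 3}, {4, 5, 6, 7, 8}]

S2, T2 = {1, 2, 3}, {3, 4, 5}               # |S2 ∩ T2| = 1, odd
assert uncross(S2, T2) == [{1, 2, 3, 4, 5}]
```

The parity check follows the argument above: with |S| and |T| both odd, |S ∪ T| = |S| + |T| − |S ∩ T| is odd exactly when |S ∩ T| is odd, and |S − T|, |T − S| are odd exactly when |S ∩ T| is even.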
We use the standard trick of designing a divide-and-conquer algorithm. So what's the goal here? You're given some odd sets S_1 through S_m that cross each other arbitrarily, and the goal is to uncross them into a disjoint family of tight odd sets that spans the original ones. So you divide this list into two roughly equal-sized halves and uncross them recursively to get P_1 through P_k and Q_1 through Q_l. Now the only type of intersection that remains is between a P_i and a Q_j — each half is already uncrossed within itself — so the last step is to remove these P_i–Q_j intersections. This merge procedure is where the meat of the argument is. You can distill all of the information about the P_i's and Q_j's into this thing called the intersection parity graph. On one side you have the P_i's; on the other side you have the Q_j's; and you put an edge between a P_i and a Q_j when they have an odd intersection, and a non-edge when they have an even intersection. The point is that the P_i's and Q_j's have no three-wise intersections, because if you pick any three sets, two of them are on the same side — either the P side or the Q side — so their intersection is empty. And you only care about the parities of the intersections — the parities of the regions in the Venn diagram of these sets — and this graph captures all of those parities. So you can distill all the information you need into this bipartite graph formed by the P_i's and Q_j's. Now, given such a graph, how do you uncross it? Here's the lucky situation we can hope for. Remember that one of the two operations we could do for uncrossing was taking the union.
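Building the intersection parity graph is straightforward; here is a small sketch with made-up families, assuming each half is already a disjoint family:

```python
# Illustrative sketch: the intersection parity graph of two disjoint
# families P and Q — an edge (i, j) exactly when |P_i ∩ Q_j| is odd.

def parity_graph(P, Q):
    return {(i, j)
            for i, Pi in enumerate(P)
            for j, Qj in enumerate(Q)
            if len(Pi & Qj) % 2 == 1}

# hypothetical uncrossed families from the two recursive calls
P = [{1, 2, 3}, {4, 5, 6, 7, 8}]
Q = [{2, 3, 4}, {5, 6, 7}]
E = parity_graph(P, Q)
# P_0 ∩ Q_0 = {2,3} (even), P_0 ∩ Q_1 = {} (even),
# P_1 ∩ Q_0 = {4} (odd), P_1 ∩ Q_1 = {5,6,7} (odd)
assert E == {(1, 0), (1, 1)}
```

Since sets within a side are disjoint, no vertex lies in three of the sets, so these pairwise parities really do capture all the parity information the merge step needs.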
So in the previous graph here, I can uncross by merging P_1 with Q_1, then merging the result with P_2, then merging that with Q_2, and all along I'm just taking unions. So the lucky situation would be if I could do this for my whole graph and take the union of all of the odd sets. There are two things that can prevent you from doing this. One is that if my graph is disconnected, then I can't do it: I can never take the union of sets from two different connected components, because there is never an edge between a set coming from one component and a set coming from another. So connectivity is needed. There is one other thing that is needed, which is that if I want this union to be a tight odd set, it had better be odd. The parity of this union is the same as the parity of the number of edges plus the number of vertices of my graph — a standard parity argument. So this is also needed for the whole union to be a tight odd set. And what we showed was that these are the only things needed: if you have a graph that's connected and whose number of edges plus number of vertices is odd, then you can take the union of all the sets, and the union is a tight odd set. You prove this by induction: you find two subgraphs that satisfy the same properties, merge each subgraph into one set, and then take the union of the two. Now, if the graph is connected but the second condition is not satisfied — in other words, the whole union is even — then there's a slightly trickier thing you can do, which is to find two subgraphs for each of which the number of edges plus the number of vertices is odd, and you get two sets as the output of this procedure.
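The parity fact quoted above — that the parity of the union equals the parity of the number of vertices plus edges of the parity graph — can be checked on a small example. The sketch below assumes all sets are odd and each side is a disjoint family, so inclusion–exclusion stops at pairwise intersections:

```python
# Illustrative check: |union of all sets| mod 2 equals
# (#sets + #parity-graph edges) mod 2, when every set is odd and
# only cross-side pairs can intersect.

def union_parity(sets):
    u = set()
    for s in sets:
        u |= s
    return len(u) % 2

# hypothetical odd sets; within a side, sets are pairwise disjoint
P = [{1, 2, 3}, {6, 7, 8}]
Q = [{3, 4, 5}]
sets = P + Q

# parity graph edges: (i, j) iff |P_i ∩ Q_j| is odd
edges = [(i, j) for i, Pi in enumerate(P) for j, Qj in enumerate(Q)
         if len(Pi & Qj) % 2 == 1]
n_vertices = len(sets)

# |∪| = Σ|s| − Σ|P_i ∩ Q_j|  (no triple intersections), and mod 2
# each odd set contributes 1 and each odd intersection contributes 1
assert union_parity(sets) == (n_vertices + len(edges)) % 2
```

Here the union is {1, …, 8} (even), and indeed the parity graph has 3 vertices and 1 edge, an even total.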
So for every connected component of my graph, I can merge it into either a single set or two sets. Now, the point is that once you do this, the rest of the problem is easy, because the parity graph is now empty: I no longer have any odd intersections between my sets. And then you can just order your sets and remove the intersection of sets one through i from the (i+1)-st set; removing intersections is now a trivial operation. So these two lemmas get you to a situation where you only have to remove intersections. That's how you uncross. Now, having seen the three main ingredients, you might be tempted to try the following algorithm — this is a recap of the ingredients. You find many edge-disjoint even cycles, then you move along all of them at the same time by finding a weight vector, and then from each cycle you've either removed an edge or you've found a tight odd set crossing it. If you removed many edges, you just recurse on the remaining graph, as in the bipartite case. If you didn't remove many edges, you found many odd sets: you uncross them, and then you shrink the uncrossed sets. So from the graph on top, I get to this graph, which has only four vertices. Now in the shrunk graph I find a perfect matching, and then I extend this perfect matching into each of the shrunk pieces to get a perfect matching in the whole graph. What's guaranteed is that this shrinking procedure reduces the size of my graph by a constant factor, so the recursion depth here is logarithmic. But here's the problem with this procedure: even though the recursion depth is logarithmic, the running time is not polylogarithmic. The reason is that there is an inherent sequentiality here: I have to find the perfect matching in the shrunk graph before I can extend it into the shrunk pieces.
So I can't go ahead and extend the matching into each of the shrunk pieces in parallel, because I don't know which external edge each piece is using. So this algorithm is not polylogarithmic, but there is a fix to it. Suppose that when I uncrossed all of my odd sets, I didn't find many odd sets, but only one giant odd set — what I call a balanced tight odd set: an odd set whose inside and outside are both linear in the number of vertices. Now, if you have this balanced tight odd set, you can decompose your problem into two independent subproblems and recurse on them. But how do you find such a balanced tight odd set? You use the same machinery as before: you find many small odd sets and reduce the size of the graph, multiple times. So from the graph on the left, I go to the middle one. In the middle one, I find two odd sets and shrink them again. And if I do this many times, I get down to a constant-size graph — say a graph with 100 nodes. Now, one of the nodes of this 100-node graph must contain at least 1% of the original vertices; I've highlighted that as the ACDEF node in the last graph. If you unshrink everything we shrank, this node gives you an odd set in the original graph, and this odd set is balanced: it has a linear number of vertices inside and outside. Now you can decompose your problem into two pieces by selecting a single edge crossing this odd set and extending that edge into each of the two pieces, whose sizes have been reduced by a constant factor. So that's basically the whole algorithm. A couple of notes — I think I'm running out of time. Do I have a couple more minutes, or should I just go to the end of this slide? I think if you are at the conclusion slides, you should have time to go over them.
So the same ideas can be extended to find a minimum-weight perfect matching — not just a perfect matching, but a minimum-weight one — as long as the weights are polynomially bounded. Now, there are exactly three places where I used the planarity of the graph. One was to count perfect matchings, one was to find linearly many edge-disjoint cycles, and one was to find maximum s–t flows. In all three places, you can do the same operations for bounded-genus graphs, so the whole algorithm works for bounded-genus graphs as well. And then, as I mentioned at the beginning, it has recently also been extended to the class of one-crossing-minor-free graphs; the graphs in this class are not necessarily of bounded genus. Okay. And here are a bunch of open questions. The main open question is: is planarity needed? That is, can the matching problem in non-planar graphs be solved in NC? Here, even the decision problem is not solved, so that's the main question. Then there's this strange problem — remember the red–blue graph — the red–blue matching problem called exact matching; we don't even know a deterministic polynomial-time algorithm for it. So if you're not very interested in parallel algorithms, you can think about this problem. As I said, maximum cardinality matching in general graphs can be reduced to perfect matching, but the reduction doesn't preserve planarity. In a surprising result, Sankowski has recently shown that for bipartite planar graphs, you can solve the maximum cardinality matching problem by reducing it to non-bipartite planar perfect matching. But the question is still open for maximum cardinality matching in non-bipartite planar graphs. And the final question, which I think might be slightly easier than the rest, is exact matching in planar graphs. Because you can count matchings in planar graphs, you can solve the decision version of exact matching, or even the counting version, in planar graphs.
But can you solve the search problem for exact matching in planar graphs in NC? And that's the end of the talk. Thanks. Okay, thanks, Nima. If there are questions, speak up or type something in the chat. So the reason you put a single cup for the last question is...? That's my evaluation of how much coffee you need to solve those problems. So even the hard ones are not that bad, right? You have to scale it by something. Questions? I mean, do you have anything more to say about the first question? You said some things about how planarity is used — do you think there's no way to build on what you have been doing? So, okay, let me say a few things about the first question. The two recent developments on this problem have shown that matching in non-planar graphs can be solved in quasi-NC. The way they work is by derandomizing the so-called isolation lemma: you want to find random weights that isolate a unique perfect matching, and that's basically the only way we know how to attack these problems — there is no known alternative reduction. What the two papers have shown is that you can find quasi-polynomially many weight functions, one of which is guaranteed to isolate a perfect matching. I think that's the main hurdle: the only way we know how to attack these problems is the isolation lemma, and it's not likely that there are polynomially many weight functions that isolate a perfect matching in a graph. The weight functions they construct are oblivious to the graph — they don't even look at the graph — and there are just quasi-polynomially many of them, so you can try all of them. I think if you want to be oblivious, it might not even be possible to isolate a perfect matching with polynomially many weight functions. But there are no lower bounds of that form.
Are there lower bounds on that? I'm not sure exactly. I think if you're talking not about perfect matchings, but about a slightly more general class of polytopes, then there are such lower bounds; I'm not really sure whether the same holds for perfect matchings or not. Someone has a question — maybe you can read it, Nima? Or I'll read it for everyone: can you mention which of these open questions are still open if you allow randomization? I see — none of them; none of them are open when you allow randomization. Do we have other questions? Okay, if not, maybe I'll take us offline. So thanks, thanks again, Nima, for giving the talk. A couple of weeks from now we'll have Artur Czumaj from Warwick telling us about round compression. Okay, so thanks everyone for joining, and I'll see you in a couple of weeks. Thanks.