So I'm going to talk about the unique games conjecture and the breakthroughs we have seen around it. The conjecture itself is 10 years old, so anything meaningful that can be said about it now will probably not pass the test of time, but it has generated a significant amount of interest, and a lot of things have developed around it. Through this talk, I'm going to give you a flavor of what has happened in the last 10 years, and the second half of the talk will focus mostly on algorithms for this conjecture.

So let me start by motivating where this came from. The starting point is trying to understand the approximability of NP-hard problems. We start with a 3SAT formula, which is a bunch of clauses over variables x1 to xn. The goal is to find a 0-1 assignment to these variables which satisfies as many clauses as possible. So if this formula is phi, then I will denote by opt(phi) the maximum, over all assignments, of the fraction of clauses satisfied by that assignment. This is a number between 0 and 1. Henceforth, any time I talk about an optimization problem, the optimum will denote some number between 0 and 1, 0 being bad and 1 being good. Our goal is to decide whether opt(phi) is equal to 1 or less than 1, and this we know is NP-hard: this is the classic reduction due to Cook and Levin.

OK, so if you want to prove hardness of approximation of these problems, the starting point is what is called the PCP theorem. In fact, the PCP theorem can be stated equivalently as a hardness of approximation result for satisfiability. The theorem says that there exists some constant C, strictly bigger than 0 and independent of the input size, such that it is NP-hard to distinguish between the following two kinds of 3SAT instances: in one case the formula is perfectly satisfiable, and in the other case every assignment violates at least a C fraction of the clauses. How many people have seen this formulation of the PCP theorem? So it's not very surprising.

So this is the starting point. This theorem lies at the core of all hardness of approximation results. But the theorem, when it was first proved, could only yield a small number of hardness of approximation results. So what made it so powerful? I'll explain another way to look at this theorem, which gave it significant power, and that is what is called the label cover view. So let me reduce the task of checking whether an assignment satisfies the 3SAT formula to a combinatorial problem. Imagine you have a bipartite graph: you're given some formula phi, and I'm going to construct a bipartite graph which has two sides. On the left-hand side there is a vertex corresponding to each variable in the 3SAT formula, and on the right-hand side there is a vertex corresponding to each clause. You put an edge between a variable and a clause if that variable is contained in that clause. So this is the bipartite graph that comes out of it. What is your goal? In 3SAT, we want to find an assignment to the variables which satisfies all the clauses. So for that, we have to come up with a notion of which assignments here are satisfying: what are the assignments to the left-hand side, and what are the assignments to the right-hand side, which are satisfying? On the left-hand side, you are expected to assign a value which is either 0 or 1.
So this corresponds to the standard thing you were supposed to do: assign the variables 0/1 values. On the right-hand side, I will ask you to assign triplets, 3 bits per clause, which are the intended bits for the variables that appear in that clause. Of the eight 3-bit configurations, seven satisfy the clause (all except one), so you are supposed to assign one of these seven configurations to the clause vertex, and on the left-hand side you are supposed to assign one of the two values to the variable vertex. And you get a payoff of 1 for an edge if the labeling you assign to this vertex is consistent with the labeling you assign to that vertex.

So let me give you an example. Suppose this clause Cj was x1 or x2-bar or x3, and let's say the variable we look at is x1. Then you get a payoff of 1 on that edge if you assign the label 0 to x1 and the x1 position in the label of Cj is 0 as well, or if you assign 1 here and that position is 1. So there has to be this consistency. In particular, note that once you fix the label of Cj, then if you want to satisfy the constraint corresponding to that edge, the label of the variable becomes fixed. So in this way, it becomes a constraint satisfaction problem over the bipartite graph, where the constraints are derived naturally from the 3SAT formula that is given to you. OK?

So what does the PCP theorem tell you about this problem? The goal in this problem is to find a labeling which maximizes the fraction of satisfied edges, and I want to prove some hardness of approximation for this problem. Let me show you first that if the formula is satisfiable, which means there is an assignment which satisfies all the clauses, then there is a labeling with opt equal to 1. That is easy: just take a 0-1 assignment to the variables which satisfies all the clauses; now, for each clause, extract the three bits that this assignment gives to its variables and assign them as the label of the clause vertex. Everything is satisfied. So if your original formula was satisfiable, then the label cover instance you have produced is also fully satisfiable. This is the completeness of the reduction.

On the other hand, if the formula is not satisfiable, and in fact the no case of the PCP theorem gives you a formula where every assignment violates at least a C fraction of the clauses, then what can we say about the opt of this label cover instance? Can it still be 1? How many edges can you satisfy, given that there is no good assignment for the formula? And the answer is at most 1 minus C/3. Here is a way to see that. Fix a labeling for this label cover instance; the labels of the variable vertices give an assignment to the 3SAT formula. You know that this assignment violates at least a C fraction of the clauses. Now each of these violated clauses has three edges going out of it, and since the clause is not satisfied, any label that you assign to the clause vertex must be inconsistent on at least one of those edges. So for every unsatisfied clause, you lose at least one edge, and that is where the C/3 comes from. So you get this gap.
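To make the reduction concrete, here is a minimal sketch, in Python, of how one might build and evaluate this label cover instance from a 3SAT formula; the representation (clauses as signed literal triples, labels as bit tuples) is my own illustrative choice, not from any particular paper.

```python
from itertools import product

# A clause is a triple of signed literals, e.g. (1, -2, 3) means  x1 OR not-x2 OR x3.
def satisfying_triples(clause):
    """The 7 of the 8 bit patterns for the clause's three variables that satisfy it."""
    out = []
    for bits in product((0, 1), repeat=3):
        if any((bit == 1) == (lit > 0) for bit, lit in zip(bits, clause)):
            out.append(bits)
    return out

def label_cover_value(clauses, var_labels, clause_labels):
    """Fraction of (variable, clause) edges whose labels are consistent.

    var_labels: dict variable -> 0/1 label of the variable vertex.
    clause_labels: dict clause index -> bit triple labeling the clause vertex.
    """
    satisfied = total = 0
    for j, clause in enumerate(clauses):
        for pos, lit in enumerate(clause):
            v = abs(lit)
            total += 1
            # The edge is satisfied if the clause label's bit at this position
            # agrees with the value given to the variable vertex.
            if clause_labels[j][pos] == var_labels[v]:
                satisfied += 1
    return satisfied / total

# Tiny example: (x1 OR not-x2 OR x3) AND (not-x1 OR x2 OR x3), with x1=x2=1, x3=0.
clauses = [(1, -2, 3), (-1, 2, 3)]
var_labels = {1: 1, 2: 1, 3: 0}
# Completeness step: read the clause labels off the satisfying assignment.
clause_labels = {j: tuple(var_labels[abs(l)] for l in c) for j, c in enumerate(clauses)}
assert all(clause_labels[j] in satisfying_triples(c) for j, c in enumerate(clauses))
print(label_cover_value(clauses, var_labels, clause_labels))  # 1.0 for this example
```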
So notice that the PCP theorem says that C is some constant strictly bigger than 0, like 0.0001, and hence you get that this opt is some number strictly less than 1. Now why are we doing all this? The reason is the following: this hardness we can now amplify.

Before I go on, let me just formally write down the label cover problem. In this problem you are given a bipartite graph with a left side, a right side, and some edges. There is a label set for the left-hand side and a label set for the right-hand side, and for every edge there is some constraint. Your goal is to assign a label from 1 to L to every vertex in V and a label from 1 to R to every vertex in W, and maximize the fraction of constraints satisfied. What we have just shown is that there exists some constant C' such that it is NP-hard to distinguish between label cover instances where the opt is 1 and where it is strictly less than 1, actually at most 1 minus C'.

The next thing to do is to amplify this hardness. The PCP theorem was used to prove a lot of the initial hardness of approximation results, but they were nowhere close to optimal. The next step in the picture was the parallel repetition theorem of Raz, which, after a long series of works, eventually gave the version that allowed one to obtain very strong inapproximability results. So one way to amplify hardness is to take powers of this graph. That's one view of it. If you want to go back to your satisfiability view, it corresponds to the following: instead of putting one clause per vertex here, you create a vertex for every k-tuple of clauses, and a vertex for every k-tuple of variables, and then it is immediate what the constraints are. That is the tensored graph. And Raz proved that if you take the k-fold power, the soundness drops essentially exponentially in k, which means that the label cover problem remains NP-hard to distinguish between the yes case, where the opt is 1, and the no case, and by picking k large enough you can make the soundness smaller than any constant delta. Of course, the size of the instance that you are producing grows with k; the size is n to the k, so you can only hope to take constantly many repetitions if you are interested in proving NP-hardness results.

So this theorem really drove the state of the art in hardness of approximation, and it culminated in a work of Hastad establishing an optimal inapproximability result for 3SAT. For 3SAT there is a very simple algorithm, which is to just pick a uniformly random assignment to the variables, and it follows that this assignment satisfies, in expectation, a 7/8 fraction of the clauses. Hastad proved that this algorithm is essentially optimal: he showed that for every epsilon, 3SAT is hard to approximate to a factor better than 7/8 plus epsilon. And of course there were a lot of new techniques and new ideas involved; Hastad's result itself built on the long codes of Bellare, Goldreich, and Sudan, and so on. But really, we still needed a source of NP-hardness, and there, label cover combined with Raz's parallel repetition theorem sufficed.
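As an aside, the easy 7/8 bound mentioned above is just a one-line computation, assuming each clause has exactly three literals on three distinct variables:

$$\mathbb{E}_{\text{random }x}\big[\text{fraction of clauses satisfied}\big] \;=\; \mathbb{E}_{\text{clause}}\Big[\Pr_x[\text{clause satisfied}]\Big] \;=\; 1 - \tfrac{1}{8} \;=\; \tfrac{7}{8},$$

since a clause on three distinct variables is violated by exactly one of the $2^3 = 8$ assignments to those variables. Hastad's theorem says that beating this by any constant is NP-hard.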
Around the same time as Hastad proved his result, people started attacking many other important problems, like max cut. But even though Hastad's result was tight for 3SAT, the results for max cut were far from tight. There were other problems as well which escaped all these approaches, like vertex cover, coloring, sparsest cut. None of these problems fell to any of these attacks, and it was felt that some new source of hardness might be required.

So one thing to note is that 3SAT is a constraint satisfaction problem. By that I mean that you have variables, you have constraints, and each constraint involves a certain number of variables; for example, in 3SAT every constraint involves three variables. If you look at a problem like max cut, this is also a constraint satisfaction problem, but the number of variables per constraint is 2. And somehow, if you dig deep into the Fourier analysis in Hastad's proofs, to get the tight results you really needed the CSP to have arity at least 3. So something was missing for 2-CSPs, that is, CSPs where every constraint is determined by two variables.

So Khot, in 2002, came up with a conjecture, now known as the Unique Games Conjecture, which postulated a new source of computational hardness. Let me try to explain what this is. The Unique Games Conjecture is a hardness assumption about a certain problem called the unique label cover problem. The unique label cover problem is exactly like the label cover problem except for two main differences. The first is that the label sets are the same: the size of the label set for the left-hand vertices is the same as the size of the label set for the right-hand vertices. Notice that this was not the case above: there, the left-hand side had a label set of size 2 and the right-hand side of size 7, because there are 7 possible satisfying assignments, and when you take the tensor power, the label sets grow as 7 to the k on the right and 2 to the k on the left. So that is one change. The other, which is the most crucial, is that all the constraints, which could be quite complex in the label cover case, are bijections. What does that mean? If you look at an edge in your label cover instance, between two vertices v and w, the set of satisfying label assignments to v and w is some subset of L cross L: these are the pairs of labels which satisfy that edge. The extra assumption in the unique label cover problem is that this set is actually a bijection: for every label on the left there is exactly one satisfying label on the right, and vice versa. That is why this problem is referred to as unique label cover, or now, more popularly, as unique games.

Now, why would you want to make these simplifications, and what kind of hardness can you hope to get? Can you hope to get a hardness result like the one above for this problem, distinguishing instances where the optimum is 1 from instances where the optimum is less than some delta? The first thing to observe is that, because these constraints are bijections, if I tell you that the unique games instance has optimum value 1, you can actually find a perfect labeling in polynomial time. So this case cannot be hard; it's easy. It is really easy to check that if you are given a unique games instance with optimum value 1, one can find a perfect labeling, and this just follows by propagation; here is a sketch, and the next paragraph spells it out in words.
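A minimal sketch of that propagation, assuming the constraint graph is connected and each edge's bijection is stored as a permutation list (these representational choices are mine, just for illustration):

```python
def solve_satisfiable_unique_game(n, k, edges):
    """Find a labeling satisfying every constraint, assuming one exists.

    n: number of vertices, k: number of labels.
    edges: dict mapping (u, v) -> pi, where pi is a list of length k and the
           constraint is "label(v) == pi[label(u)]".  Graph assumed connected.
    """
    # Adjacency list carrying, for each neighbor, the map from my label to its label.
    adj = {u: [] for u in range(n)}
    for (u, v), pi in edges.items():
        inv = [0] * k
        for i, j in enumerate(pi):
            inv[j] = i
        adj[u].append((v, pi))   # label(v) = pi[label(u)]
        adj[v].append((u, inv))  # label(u) = inv[label(v)]

    for start_label in range(k):           # at most k guesses for vertex 0
        labels = {0: start_label}
        stack, ok = [0], True
        while stack and ok:
            u = stack.pop()
            for v, mapping in adj[u]:
                forced = mapping[labels[u]]
                if v not in labels:
                    labels[v] = forced
                    stack.append(v)
                elif labels[v] != forced:   # contradiction: wrong guess at vertex 0
                    ok = False
                    break
        if ok and len(labels) == n:
            return labels
    return None  # the instance was not fully satisfiable
```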
So you guess a label for one vertex and you propagate it, because you know that everything can be satisfied. If you run into any contradiction, you just go to the next label for the starting vertex, and since there are only k choices, this algorithm terminates. Such a strategy won't work when the constraints are not bijections.

So with this modification, Khot essentially made the conjecture which is the analog of the label cover hardness, and let me just write it down. The conjecture says that for every epsilon and delta, there exists some k, a function of epsilon and delta, such that it is NP-hard to distinguish unique games instances with label set of size k where the optimum is at least 1 minus epsilon from instances where the optimum is at most delta. Of course, if you want this soundness to be less than delta, then you have to pick k large enough; that is a natural thing, because if you just make a random assignment, you already satisfy a 1/k fraction of the constraints.

So this conjecture was made in 2002. Already in the initial paper Khot proved that this conjecture has some implications which escaped, or were not susceptible to, the Hastad or label cover type techniques. So there were some nice observations. But really what made it take off were two results which came out around 2004. The first was a result due to Khot, Kindler, Mossel, and O'Donnell, who proved that, assuming the unique games conjecture, MaxCut is hard to approximate beyond the known algorithm. Let me put it another way: as I said, there was an algorithm floating around for MaxCut, the Goemans-Williamson algorithm, and assuming the unique games conjecture one cannot beat the Goemans-Williamson algorithm for MaxCut. More precisely, for every delta, MaxCut is hard to approximate to a factor better than 0.878 plus delta. The results previous to this, which were based on NP-hardness, were something like 16/17. So this was the first theorem, and it was very surprising. At the time it also relied on a certain isoperimetric-type conjecture, called Majority Is Stablest, which subsequently got proved; the motivation for that conjecture, which now has applications in social choice theory and beyond, came from this paper. Around the same time, it was also proved by Khot and Regev that, again assuming the unique games conjecture, vertex cover is inapproximable to a factor better than 2 minus epsilon for any constant epsilon. And this matched the rather simple 2-approximation algorithm for vertex cover: pick a maximal matching and take all its endpoints, and this you can show is within a factor of 2.

So these two theorems really excited the community. I think this was the height of the excitement generated by the unique games conjecture, that two big open problems got reduced to one conjecture. And then the floodgates opened. It was a breakthrough in the literal sense: people started collecting problems which until that time didn't have any strong inapproximability results and started proving hardness of approximation results for them. Problem 1, problem 2, problem 3: pretty much any problem that you could think of, its hardness started to be based on this unique games conjecture.
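Just to have the conjecture on record in the form I stated informally above: for every $\varepsilon, \delta > 0$ there exists $k = k(\varepsilon, \delta)$ such that, given a unique games instance $U$ with label set of size $k$, it is NP-hard to distinguish between

$$\mathrm{opt}(U) \;\ge\; 1 - \varepsilon \qquad \text{and} \qquad \mathrm{opt}(U) \;\le\; \delta.$$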
So then someone quietly asked a question: what if it is false? Because before we get carried away, we have this huge number of reductions which assume the hardness of this problem. So what if it is wrong? What if the conjecture is not true? What do you get? And the answer was quite scary: nothing. Because if the conjecture is false, which means that there is some algorithm which can distinguish between these two kinds of unique games instances, I can't really use that algorithm to solve MaxCut, I can't really use that algorithm to solve vertex cover, and I was really not interested in unique games before 2004. So what do I get out of it?

But you know, skeptics are everywhere. And this line of work, as I said, led to a lot of results which no longer depend on unique games. For example, one was Majority Is Stablest, which is now used in social choice theory. Another one was about finding certain tilings of R^n, a very nice result, and this was also inspired by the unique games conjecture. And another was a disproof of something called the Goemans-Linial conjecture. All three of these results were inspired by the unique games conjecture and eventually turned out to have nothing to do with its truth: they are results in pure mathematics which came out of studying this conjecture. I'm not going to say too much about this direction, which is the hardness direction, but I must say that it is a beautiful direction to pursue. One of the landmark results here is due to Raghavendra, who essentially combined a lot of inapproximability results into one; if you want to read one paper in this area, I think that is the one you should go and read.

But on the other hand, the validity of the conjecture, and what would happen if it is false, were still open, and this question was slowly gaining momentum. So in the next half an hour I will give you an overview of what has happened in the algorithmic domain for the unique games conjecture. How close are we to disproving it? Surely we are not anywhere close to disproving this conjecture, and surely we are not anywhere close to proving it either. But people got excited recently by some attempts to disprove it. So let me build up towards that and tell you what has happened in the last month or so as well.

So Khot in his paper already had an algorithm for unique games, which was based on a semi-definite programming technique, which is quite natural: write down a mathematical relaxation for this problem and then try to solve it. How many people don't know what a semi-definite program is? OK, I apologize, I'm not going to be able to explain that properly, but it doesn't really matter: I'll show you some mathematical programming formulation, and then just believe me that you can solve it in polynomial time. But let me first try to encode this problem exactly as a mathematical program, without worrying for now about how to solve it. Remember that in unique games you are given a graph, and let's just assume it is non-bipartite for the moment. You are given a label set, and you are given a bijection for every edge: for every edge, pi is a bijection from the label set {1, ..., k} to itself, which means that its inverse is well-defined as well. What I do is create a variable for every vertex and every possible label for it: I will create a variable v_i, with the intention that it is 1 if you assign the i-th label to vertex v, and 0 otherwise. So this is supposed to take a value which is either 0 or 1.
Now I want to start writing down constraints and the objective function in terms of these variables. The objective function is the following. I want to maximize the fraction of edges satisfied, and how do I capture whether an edge is satisfied? I look at an edge, let's say (v, w), and let's say pi is the permutation corresponding to that edge. I will look at the product of v_i and w_{pi(i)}, summed over i, and average this over the edges. This quadratic formula, I claim, if you restrict the variables to 0-1 values, captures the objective of the unique games problem: you get a contribution of 1 exactly when the i-th label is assigned to vertex v and the corresponding label under the permutation pi is assigned to the vertex w. Of course, I need some restrictions saying that exactly one of the v_i's is 1. One way to write part of this is that v_i times v_j equals 0 whenever i is not equal to j, for every v; this ensures that no two of them can be 1. But I also need at least one of them to be 1, so I write down the constraint that the sum over i of v_i squared equals 1, which you can verify is also consistent with 0-1 solutions. So these constraints, if you restrict the variables to be 0-1, give an exact integer program, and the objective function then measures the fraction of constraints you have satisfied. So the value of this program is exactly equal to the opt.

But the problem is that we can't really solve this; it has these quadratic equations over 0-1 variables. Now, the standard thing to do in mathematical programming is to relax this to what is called a semi-definite program, by replacing these variables by vectors. Yeah, you're right, once you're in the 0-1 world you don't have to do that, but I'm going to make a relaxation here, because I really can't impose the 0-1 condition. I'm going to allow these to be vectors in some high-dimensional space: rather than being 0 or 1, which I cannot enforce, the variables become vectors and the products become inner products. And now everything is just a linear program over positive semi-definite matrices. Positive semi-definite matrices have a separating hyperplane, which is just given by the minimum eigenvalue: if you can solve the minimum eigenvalue problem, you can separate over the PSD cone. So you can solve this relaxation in polynomial time.

So you can solve it, and so what? As I said, this semi-definite program essentially appeared in an earlier paper of Feige and Lovász, but Khot showed the following rather simple thing: you solve for these vectors, you do some kind of randomized rounding, you do some analysis, and you get some result. The result was that if you are given a unique games instance with opt at least 1 minus epsilon, then you can find a labeling which satisfies something like a 1 minus f(epsilon, k) fraction of the constraints, where f involves small powers of epsilon, something like epsilon to the 1/3 or 1/5, and a dependence on k; I am not being precise here and I don't remember the exact exponents. So how would you use it? Suppose you are given the promise that opt is at least 1 minus epsilon. You solve the SDP. If the SDP value itself is less than 1 minus epsilon, you say no, because then you know it's a no instance. Otherwise, you recover a solution of value at least this much.
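For concreteness, here is a minimal sketch of this SDP relaxation written as a Gram-matrix program in cvxpy (my own illustrative encoding: bijections as permutation lists, one coordinate per vertex-label pair; the rounding step is omitted, and cvxpy with an SDP-capable solver such as the bundled SCS is assumed):

```python
import cvxpy as cp

def unique_games_sdp(n, k, edges):
    """SDP relaxation of unique games; returns the SDP value, an upper bound on opt.

    edges: dict mapping (u, v) -> pi (list of length k), constraint label(v) = pi[label(u)].
    """
    N = n * k
    idx = lambda v, i: v * k + i           # coordinate of the vector v_i in the Gram matrix
    X = cp.Variable((N, N), PSD=True)      # X[a, b] = <vector_a, vector_b>

    constraints = []
    for v in range(n):
        # <v_i, v_j> = 0 for i != j, and sum_i |v_i|^2 = 1.
        constraints += [X[idx(v, i), idx(v, j)] == 0
                        for i in range(k) for j in range(k) if i != j]
        constraints.append(sum(X[idx(v, i), idx(v, i)] for i in range(k)) == 1)

    # Objective: average over edges of sum_i <u_i, w_{pi(i)}>.
    terms = [X[idx(u, i), idx(w, pi[i])] for (u, w), pi in edges.items() for i in range(k)]
    prob = cp.Problem(cp.Maximize(sum(terms) / len(edges)), constraints)
    prob.solve()
    return prob.value

# Tiny example: a triangle with two identity constraints and one "twisted" edge (k = 2).
edges = {(0, 1): [0, 1], (1, 2): [0, 1], (2, 0): [1, 0]}
print(unique_games_sdp(3, 2, edges))   # the integral optimum here is 2/3; the SDP value is at least that
```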
So does this come close to disproving the unique games conjecture? The answer is no. Because, if you remember, you wanted to distinguish between 1 minus epsilon and delta, whereas this is just distinguishing between 1 minus epsilon and something like 1 minus f(epsilon, k). And because k depends on epsilon, this can never give you that kind of result. So this technology was optimized, and then there came a theorem due to Charikar, Makarychev, and Makarychev, who showed, using a slightly more sophisticated rounding technique, that this SDP actually lets you distinguish 1 minus epsilon from 1 minus on the order of the square root of epsilon log k. So this is a significantly better bound than the previous one. (For unique games, yes: the SDP value is 1 if and only if the opt is 1, and that was already observed by Feige and Lovász in their paper from the early '90s; for the general label cover problem that is not the case.)

And then, for some time, people were stuck here, and it was asked whether this can be improved, because in the mid-2000s SDPs were our most powerful algorithmic tools. And then, I think it happened somewhat simultaneously, but in 2005, in a paper with Khot, among other things, we constructed a gap example for this SDP: a family of unique games instances with the property that their SDP value is at least 1 minus epsilon, but the opt value is less than 1 minus on the order of the square root of epsilon log k. This was, at that time, considered the first serious positive evidence that something like the unique games conjecture might even be true, because now we had created a barrier: if you want to disprove the conjecture, you have to cross this SDP barrier, so you have to come up with ways to solve these instances first. These are explicit instances, you know them, and the SDP cannot do better than this on them. And as long as you are restricted to guarantees where the function depends on k, you really have no hope; you really want, ideally, an algorithm where this function depends only on epsilon. Any time it depends on k, all it is saying is that in the unique games conjecture the label set size k has to be large enough. So any algorithm which will disprove the conjecture has to have this structure: given a unique games instance with value at least 1 minus epsilon, it can find something of value 1 minus f(epsilon). So this example created this SDP barrier, and in fact, in some subsequent results it was shown that even under stronger Sherali-Adams hierarchies the example stands; people looked at stronger and stronger relaxations of these SDPs and showed that the example still stands. I'm not going to go into that.

So the first serious threat to unique games came in 2008. In 2008, in a paper with Arora, Khot, Kolla, Steurer, and Tulsiani, it was shown that a 1 minus epsilon versus 1 minus f(epsilon) algorithm is possible if you assume something about the structure of the constraint graph. More precisely, if the constraint graph of the unique games instance U is an expander, then the unique games conjecture is false on such instances. What exactly do I mean by an expander? Let me explain. Notice that there is no assumption on the constraints; it is just the structure of the graph which is sufficient. A graph is an expander if the second eigenvalue of its Laplacian is large. So let me write down a quantity associated to a graph, call it lambda_2: minimize, over all non-constant assignments x_1 through x_n of real numbers to the vertices, the ratio of the expected value of (x_u minus x_v) squared over a random edge (u, v) to the expected value of (x_u minus x_v) squared over a uniformly random pair of vertices u, v.
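Here is a minimal numerical sketch of that quantity, under the simplifying assumption that the graph is d-regular, in which case the minimum of this ratio is just the second-smallest eigenvalue of the normalized Laplacian (the code and the regularity assumption are mine, for illustration):

```python
import numpy as np

def lambda_2(n, edge_list, d):
    """Second-smallest eigenvalue of the normalized Laplacian of a d-regular graph.

    For a d-regular graph this equals
        min over non-constant x of  E_edge[(x_u - x_v)^2] / E_pair[(x_u - x_v)^2],
    the expander quantity from the talk.
    """
    A = np.zeros((n, n))
    for u, v in edge_list:
        A[u, v] = A[v, u] = 1.0
    L_norm = np.eye(n) - A / d            # normalized Laplacian I - A/d
    eigs = np.linalg.eigvalsh(L_norm)     # sorted ascending; eigs[0] is ~0 (constant vector)
    return eigs[1]

# Example: the 4-cycle (2-regular); its normalized Laplacian eigenvalues are 0, 1, 1, 2.
print(lambda_2(4, [(0, 1), (1, 2), (2, 3), (3, 0)], d=2))  # prints 1.0
```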
So this quantity lambda_2 measures how good an expander the graph is. And the result is that if this quantity is large, then the unique games conjecture is false. How large? Lambda_2 is at least 0, and normalized properly it can be any number between 0 and 2. But really the interesting range is the following: if you are given a unique games instance with opt equal to 1 minus epsilon, the interesting barrier is epsilon, and lambda_2 of the graph can lie on either side of that barrier. Now the theorem says that there exists a polynomial time algorithm that recovers a labeling of value roughly 1 minus epsilon over lambda_2. So if lambda_2 is significantly bigger than epsilon, then this is a very good labeling, and you can distinguish between the yes and the no instances of unique games.

For the proof of this result, there were really two proofs; only one of them made it into the paper, and that turned out to be the wrong one to remember. There was another way, which was eventually distilled by Kolla: Kolla gave another proof of this theorem, in 2010 I believe, and it was more spectral in nature. The proof that appeared in the original paper is more SDP based, and hers is more spectral. And in fact, she observed the following very remarkable thing. What are expanders? If I fix epsilon, the epsilon corresponding to the completeness of the unique games instance, then an expander can be thought of as a graph none of whose eigenvalues falls in the bad interval just above 0, up to around epsilon. Let's just say that is an expander, because such graphs are good for us: for such graphs, if lambda_2 is, say, 100 times epsilon, then you can recover roughly a 0.99 fraction of the constraints. Such graphs are good. So then she asked: what if there are a small number of eigenvalues of G that do fall in this bad interval? Can you say something meaningful? In the case of an expander nothing falls in there, but what if, say, two eigenvalues fall in there? And she gave a rather remarkable brute-force algorithm which shows that if G is such that there are R such bad eigenvalues, R eigenvalues between 0 and roughly epsilon, then there exists an algorithm that runs in time about n to the R. And this is very nice, because not only does it prove the expander result in a different way, it also implies that if the graph is only somewhat like an expander, say it has 100 eigenvalues which are very close to 0, you can still solve the unique games instance. In particular, she was able to show, using this, that the KV gap instances I mentioned have a quasi-polynomial time algorithm, and quasi-polynomial is not considered hard by any means. Quasi-polynomial because it is quite easy to look at the KV instances and show that the number of small eigenvalues is polylogarithmic in n, the number of vertices.
And hence, you can solve any instance whose base graph is the KV graph in time n to the polylog n. Yeah, that sounds easy, right?

OK, so this observation was used, in a non-trivial way, in this paper of Arora, Barak, and Steurer, which gives an algorithm for unique games that runs in time roughly 2 to the n to the epsilon. Let me explain what this is. So Kolla showed that if the graph has a small number of bad eigenvalues, and if this number is r, then one can find a good labeling in time about n to the r. And why is this, as Sanjeev asked, why should one expect something like this? The reason is the following, and now I'm probably answering mostly to him, who is also an expert in many of these things. You can construct a label-extended graph for a unique games instance, where you replace every vertex by k vertices, one for each possible label. Now think of a 0-1 labeling of this graph, which corresponds to a very long vector of size nk: there is a block of coordinates for vertex v1, a block for vertex v2, and so on up to vn, and each coordinate is supposed to be 0 or 1 according to whether the vertex got that label or not. So ideally you are looking for a vector which, within each block, is 1 at exactly one coordinate and 0 everywhere else. This is the thing you are looking for. You can show that if the graph has a small number of these bad eigenvalues, then the vector corresponding to the labeling that is promised to you, the 1 minus epsilon labeling, has very good correlation with a certain subspace: look at the subspace spanned by the eigenvectors corresponding to the bad eigenvalues; then this vector makes a very small angle with that subspace. And now what I'm going to do is just search over this whole subspace: I construct a grid on the subspace, and all I need is a decoding algorithm, so that once I come close to this good vector I can recover a labeling from it. So that is the main idea. It's very nice.
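Here is a minimal sketch of the label-extended graph just described, together with the count of bad eigenvalues (the "epsilon rank" that appears below); the concrete choices, such as using the normalized Laplacian of a d-regular constraint graph, are my own illustrative assumptions:

```python
import numpy as np

def label_extended_graph(n, k, edges):
    """Adjacency matrix of the label-extended graph: a vertex (v, i) for each
    vertex v and label i, and an edge (u, i) -- (v, pi[i]) for each constraint edge."""
    M = np.zeros((n * k, n * k))
    for (u, v), pi in edges.items():
        for i in range(k):
            a, b = u * k + i, v * k + pi[i]
            M[a, b] = M[b, a] = 1.0
    return M

def eps_rank(n, k, edges, d, eps):
    """Number of 'bad' eigenvalues: eigenvalues of the normalized Laplacian of the
    label-extended graph lying below eps.  Assumes the constraint graph is d-regular,
    so the label-extended graph is d-regular as well."""
    A = label_extended_graph(n, k, edges)
    L_norm = np.eye(n * k) - A / d
    return int(np.sum(np.linalg.eigvalsh(L_norm) < eps))

# Same toy triangle instance as before (k = 2, one twisted edge, 2-regular).
edges = {(0, 1): [0, 1], (1, 2): [0, 1], (2, 0): [1, 0]}
print(eps_rank(3, 2, edges, d=2, eps=0.1))  # 1 here: only the trivial 0 eigenvalue is below eps
```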
OK, so then Arora, Barak, and Steurer showed a decomposition result which is completely graph theoretic. It says that any graph can be decomposed into a small number of pieces, each of which has a small number of bad eigenvalues, and moreover the fraction of edges thrown away in this process is also very small. You're given this graph, and you know from Kolla's result that if a graph has at most r bad eigenvalues, then there is an n to the r time algorithm. So what ABS do is decompose the graph into subgraphs, each of which has at most n to the beta bad eigenvalues, so r is n to the beta, while the fraction of edges that go across the pieces is something like order epsilon log(1 over epsilon), where epsilon is the unique games epsilon. This is a very nice decomposition theorem; it relies on doing random walks, and it was not known before. All the previous decompositions lost a fraction of edges that depended on n, and any time you lose a fraction of edges depending on n, you cannot afford it, because you will simply never satisfy those edges. Whereas by throwing away only this many edges you are still fine, because eventually you only wanted to satisfy some f(epsilon) fraction of the constraints.

So using this decomposition, they could prove the following: given a unique games instance with opt at least 1 minus epsilon, for every beta there exists a decomposition of G into sets S1, S2, ..., St such that what is called the epsilon rank of each Si, which means the number of eigenvalues below epsilon, is at most n to the beta, and the fraction of edges across the pieces is something like order epsilon. And using this they could recover a labeling of value something like 1 minus epsilon over beta; the parameters I'm stating are not optimal. So you could pick beta to be any small enough constant, and by Kolla's argument this would give an algorithm which runs in time roughly 2 to the n to the beta.

So what does this mean? Let's reflect on it a little. It means that there is a sub-exponential time algorithm for unique games, and this puts it in a somewhat dubious category of problems: the other problems which have sub-exponential time algorithms are things like factoring, for example, or graph isomorphism, whereas SAT is conjectured to not have anything sub-exponential; that's the exponential time hypothesis. So this result, although it cast some doubt on the validity of the unique games conjecture, it is not clear, at least to me, whether it will lead to a disproof. But it has really given us more algorithmic tools to play around with, and irrespective of the validity of the unique games conjecture, all the spectral graph theory which has come out of this is very nice.

So let me now, in five minutes, conclude with what is next and what is to be done. There are two new results which I like in this area; I'll just tell you about one. As I mentioned, one witness to the hardness of unique games were these integrality gaps. The integrality gap examples showed that it is hard to distinguish between 1 minus epsilon and 1 minus something like the square root of epsilon log k even for a large number of rounds of the Sherali-Adams SDP hierarchy: something like polylog n rounds of the hierarchy are still unable to do anything better than this. Now this barrier has fallen, in the sense that there are new results due to Barak, Brandão, Harrow, Kelner, Steurer, and Zhou showing that eight rounds of the Lasserre hierarchy kill all the known bad examples. As I said, I'm skipping one result: in between, there was another result, by six authors, which constructed a new set of bad examples, and that also goes away. So this result, I think, shows us many things. One is the power of the Lasserre hierarchy. In my opinion, this hierarchy has been around for a long time, but this is really the first powerful result that has come out of it: something where the basic SDP and the Sherali-Adams SDPs were fooled, this hierarchy detects in eight rounds. For those who don't know much about these hierarchies, I can tell you about them later.

So what about the unique games conjecture? I think all of this is very nice. As you can see, it has resulted in a huge amount of excitement, with techniques from different areas combining: discrete geometry, Fourier analysis, spectral graph theory, SDPs, and who knows what else in the future. But I can say this: I don't think we are anywhere close to proving or disproving the unique games conjecture.
But I think one big problem that remains right now is to find better candidate hard instances for unique games. And I'll conclude with that. Thank you.