Welcome, everyone. Sorry about the tiny technical trouble we had. So welcome to today's TCS+, given by Dor Minzer. Before I begin, let me thank the fellow organizers: Gautam Kamath, who is here helping with the operations today, Clément Canonne, and Ilya Razenshteyn. And before we begin, I should mention that in two weeks we'll have Michael Kearns. So let's try to quickly go around the table; it seems like everyone is here, we have a full house today. I'll take us around. First off, we have Amit, who's with the group at BU. We have Andre, who's bringing us a contingent from MPI. We have Irfan, who's at Indiana University. We have Fang Yi at Michigan. Govind, who's representing the gang at MIT. We have Huck, who's with a group at Northwestern. We have Zhen Cheng; I'm not sure of the affiliation, I'm sorry. We have Josh Grochow with a group at CU Boulder. We have Samson at Purdue. We have Sefer; again, I'm not sure of the affiliation. We have Shravas at NYU. We have Thomas Vidick, who's with a group at Caltech. We have Vedat with a group at Waterloo. And I'm sorry, I can't read this one's name. And that's all the groups. Back to Oded. That last group looks like CMU.

So welcome everyone again. Today's speaker is Dor Minzer. Dor is a graduate student at Tel Aviv University, supervised by Muli Safra. Dor has won several prizes and scholarships, among them the Wolf Foundation scholarship and the Clore scholarship. Dor is interested in theoretical computer science. Early in his studies, together with Muli and Subhash, he made a big breakthrough on the open question of monotonicity testing, but that wasn't enough for Dor: recently he started working on the Unique Games Conjecture, and that's why we're here today, for another very big breakthrough, proving the 2-to-2 games conjecture. So welcome, Dor.

Thank you, Oded, and thank you for the invitation. I'm going to talk about 2-to-2 games in relation to expansion on the Grassmann graph. This talk is based on several joint works with Irit Dinur, Subhash Khot, Guy Kindler, and Muli Safra. Before we get to anything in the title, let us begin with a motivating problem, the clique problem. Given a graph, a clique is a set of vertices such that any two of them are connected by an edge, and the problem we're interested in is the following promise problem: we are given a graph, we are promised that there exists a clique containing 49% of the vertices, and we wish to efficiently find as large a clique as possible. Okay, so can we find a clique of size 25% of the vertices? Okay, maybe a little less, maybe 1 over 16. I mean, even this, still too hard. What about anything non-trivial, anything of non-negligible size? We don't know how to do that either. So this is where we are stuck, and in fact we suspect that this problem is hard, but we don't know how to prove it under standard assumptions. So instead, what we do is prove it under a stronger assumption, namely the Unique Games Conjecture, which we define next.

Okay, so what is a unique game? Here is the definition. A unique game consists of a set of variables x_1, ..., x_n and a set of equations, each of the form x_i - x_j = b modulo some prime number q, which is large. One example of a unique game is the Max-Cut problem: for each vertex we have a variable, and the equations are simply x_u - x_v = 1 (mod 2) for every edge uv.
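A minimal Python sketch (not part of the talk; all instance data below is made up) of the definition just given, with Max-Cut encoded as a unique game over q = 2:

```python
def ug_value(q, equations, assignment):
    """Fraction of equations x_i - x_j = b (mod q) satisfied by `assignment`."""
    sat = sum(1 for (i, j, b) in equations
              if (assignment[i] - assignment[j]) % q == b)
    return sat / len(equations)

# Max-Cut as a unique game over q = 2: one equation x_u - x_v = 1 per edge uv.
edges = [(0, 1), (1, 2), (2, 0)]                   # a triangle
equations = [(u, v, 1) for (u, v) in edges]
print(ug_value(2, equations, {0: 0, 1: 1, 2: 0}))  # 2/3: no cut of a triangle is perfect
```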
What do we want to do? We want to find good assignments for unique games, meaning we want to find an assignment of values from {0, 1, ..., q - 1} to the variables that satisfies as many equations as possible. Okay, so this is the definition of unique games. What do we know about this problem? We don't know much, but we conjecture a lot about it. This is what is known as the Unique Games Conjecture of Subhash Khot. The conjecture states that for every epsilon greater than 0, if I give you a unique games instance and ask you to decide whether it has value at least 1 minus epsilon, meaning you can satisfy at least a 1 minus epsilon fraction of the equations, or value at most epsilon, then this task is NP-hard. Perhaps the reason this problem is so interesting is that it has a lot of implications, especially in the field of hardness of approximation. For example, you can prove that the clique problem we started with is NP-hard assuming UGC, and in fact you can get much more; here is a quite partial list, but there are many more results you can prove using it.

Okay, so we can prove a lot assuming it, but we don't have much evidence for it, and we don't know much about how to prove it. If you think the conjecture is true, you might try to prove it, and indeed people tried, and there are some partial results in that direction. Recently there was also a candidate construction by Khot and Moshkovitz, but it is still far from establishing UGC: it is only a candidate construction, so nothing is proven, and even if it were, it would only be a first step. And if you believe the conjecture is false, you might try to refute it with algorithms, and indeed people did. There are a lot of SDP-based algorithms, but perhaps the most well-known algorithm for this problem is the subexponential-time algorithm of Arora, Barak, and Steurer. Once you see this algorithm, you start wondering: maybe UGC is false, maybe it's just a matter of time before somebody finds a better algorithm or improves the analysis, thereby refuting the Unique Games Conjecture. But this is not known, and as I'll try to explain later, this is probably not the case. In reality, all this algorithm really gives is a certain lower bound on the blow-up that a reduction to unique games must incur; it does not refute anything.

So strictly speaking, this talk is not about unique games but about something related, 2-to-2 games. What is a 2-to-2 game? A 2-to-2 game is very similar to a unique game, but instead of equations of the form x_i - x_j = b, we allow x_i - x_j to take one of two values, b or b'. For this problem, again, we don't know much, so we conjecture: in the same paper introducing the Unique Games Conjecture, Khot also conjectured that this problem is hard, namely that for every epsilon greater than zero, given an instance, you cannot tell whether it is fully satisfiable or at most epsilon satisfiable. Just a small comment, it's probably obvious, but can b and b' differ from one equation to another? Yes. One point to note about 2-to-2 is that here we are in the perfect-completeness regime, value equal to 1 rather than 1 minus epsilon as in the Unique Games Conjecture. For unique games there is an obvious obstacle to that: deciding whether a unique game is fully satisfiable is easy, so such a statement would simply be false, whereas here you can hope to get it. Other than that, the Unique Games Conjecture is stronger. Okay.
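The same sketch adapted to the 2-to-2 relaxation just described, where each constraint accepts two values for the difference (again, the instance data is illustrative only):

```python
def two_to_two_value(q, constraints, assignment):
    """constraints: (i, j, {b, b'}); fraction with x_i - x_j (mod q) in {b, b'}."""
    sat = sum(1 for (i, j, bs) in constraints
              if (assignment[i] - assignment[j]) % q in bs)
    return sat / len(constraints)

# b and b' may differ from one constraint to the next, as noted above.
constraints = [(0, 1, {1, 3}), (1, 2, {0, 2})]
print(two_to_two_value(5, constraints, {0: 4, 1: 1, 2: 1}))  # 1.0: fully satisfied
```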
If this talk had happened about two months ago, I would have stated the following as our main result: there is a certain combinatorial hypothesis such that, assuming it, it is NP-hard to distinguish 2-to-2 games with value at least 1 minus epsilon from those with value at most epsilon. But this talk is happening now, not two months ago, so I can say something stronger. The conditional theorem already has some implications for unique games, and we'll talk about them later; but the theorem I'll state today is the following: I can remove the hypothesis and simply say that it is NP-hard to distinguish between 1 minus epsilon and epsilon. For that, you need to work a little more. Okay. I will not describe the reduction today, but I hope to give some idea of what goes into it, and probably the main tool in the reduction is the following object, called the Grassmann graph. Given V, a linear space over F_2, let k be its dimension and let l be a much smaller integer. The vertices of the Grassmann graph are all the l-dimensional subspaces of V, and two subspaces are connected by an edge if their intersection has dimension l minus 1, meaning they are almost the same, but not the same. Okay. Once you see this graph, you ask yourself: why should we study it, and in particular, why should we study expansion on it? I hope to convince you that these two questions are interesting.

But before I do that, let me give you a general recipe for PCP reductions. In a PCP reduction you have a starting point, which is really quite generic: you take your basic PCP theorem, and usually the soundness is too high and you want to make it smaller, so you apply parallel repetition, sometimes in its smooth form. Then you need to design something called an inner PCP. What the inner PCP looks like depends on what you're trying to prove hardness for, but usually it amounts to taking the long code and designing some test on it, called a dictatorship test, tailored to the problem at hand; don't worry if you don't know what that is. The reason I'm telling you this is that we're doing something slightly different: instead of the long code, we use a different code, because our graph has linear structure, so we need a code that behaves nicely with linear structure. This code is very nice, and the test is something we call the Grassmann test, which we describe next. Accordingly, because everything is linear here, we also need the starting point to be linear, so the starting point is the Max 3-LIN problem, shown to be hard by Håstad.

So let me describe the Grassmann test; just to recall, these are the vertices and edges of the graph. Before the test, I need to discuss the Grassmann code. The Grassmann code consists of a set of codewords, one for each linear function f from V to F_2. And what is the codeword? It is an assignment of linear functions to the vertices of the Grassmann graph: namely, to each subspace L we assign the restriction of f to L. So you take a linear function, restrict it to each of the subspaces, and you get a codeword of the Grassmann code. So this is the code.
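A brute-force Python construction (illustrative, not from the talk) of the Grassmann graph just defined, over F_2 with vectors encoded as bitmasks; this is only feasible for tiny parameters:

```python
from itertools import combinations

def span(vectors):
    """All F_2-linear combinations of `vectors` (ints used as bitmasks)."""
    s = {0}
    for v in vectors:
        s |= {u ^ v for u in s}
    return frozenset(s)

def grassmann(k, l):
    """Vertices and edges of the Grassmann graph Gr(k, l) over F_2."""
    points = range(1, 2 ** k)
    verts = list({span(c) for c in combinations(points, l)
                  if len(span(c)) == 2 ** l})      # keep only independent l-tuples
    edges = [(L, M) for L, M in combinations(verts, 2)
             if len(L & M) == 2 ** (l - 1)]        # dim(L intersect M) = l - 1
    return verts, edges

verts, edges = grassmann(4, 2)
print(len(verts), len(edges))  # 35 315: the Gaussian binomial [4 choose 2]_2 is 35
```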
And what are the edges? The edges correspond to local consistency checks: the edge (L, L') tests whether the assignment to L and the assignment to L' agree on the intersection of the two subspaces. So this is the picture. Okay, let us make a few observations about this test. First, if we have a legal codeword of the Grassmann code, the test passes with probability one: these are restrictions of the same function, so of course they agree on the intersection. The question is what happens if the test merely passes with noticeable probability, which we discuss next. But before that, let us note that this is a 2-to-2 test. What do I mean by that? I mean that the possible assignments to L and L' align in pairs according to their restriction to L intersect L': any linear function on L intersect L' has exactly two linear extensions to L and two linear extensions to L', because each side adds only one more dimension. These are the agreements, and this is the 2-to-2 structure. Now, the following is, I think, the interesting question about this test: suppose we have some word, not necessarily a codeword, that passes the test with probability epsilon, where epsilon is non-negligible; does this word have to be somehow related to a codeword of the Grassmann code, meaning to some global linear function f? This question is rather vague, because what does it mean to correspond to a global function? In fact, the question has many possible meanings, and for the simplistic meanings the answer is no, it is false. But you can consider more refined versions, which turn out to be both useful for the 2-to-2 application and also true, which is very good. We'll discuss that.

Before that, let me tell you a little about what 2-to-2 hardness implies. It has some applications, not as many as the Unique Games Conjecture, but still. So what do we have? Of course, distinguishing 2-to-2 games with value 1 minus epsilon from value epsilon is hard, and if you think about it for a little bit, you immediately get hardness for unique games between half minus epsilon and epsilon. Then you get hardness results for independent set, which was actually the problem we studied at the start of this line of work; we think the most interesting aspect there is that the soundness is vanishing while the completeness is a fixed constant. It also improves the hardness for vertex cover, for coloring almost-4-colorable graphs, and for the Max-Cut-Gain problem; I won't read through all of these. And there is the intermediate CSP conjecture of Boaz Barak: assuming ETH, we showed this conjecture is true, meaning there are alpha smaller than beta such that some CSP, namely unique games between two constants, 0.49 and 0.01, can be solved in time 2 to the n to the beta but not in time 2 to the n to the alpha; this is again assuming ETH. We didn't actually look for too many implications, because we think the 2-to-2 theorem itself is the interesting thing, but as far as we know there could be many more. So let me recap the talk so far: we've seen that there is a reduction from 3-LIN to 2-to-2 games, and we established that what we need to study is the Grassmann test.
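Continuing the sketch above (reusing grassmann() from the previous block), here is the Grassmann code and its edge test as just described; the global linear function below is an arbitrary illustrative choice:

```python
# Reuses grassmann() from the previous sketch.
def dot(a, y):                       # the linear function f_a(y) = <a, y> over F_2
    return bin(a & y).count("1") % 2

def restriction(a, L):               # the codeword entry at L: f_a restricted to L
    return {y: dot(a, y) for y in L}

def edge_test(word, L, M):           # the local consistency check on edge (L, M)
    return all(word[L][y] == word[M][y] for y in L & M)

verts, edges = grassmann(4, 2)
a = 0b0110                           # an arbitrary global linear function
codeword = {L: restriction(a, L) for L in verts}
print(all(edge_test(codeword, L, M) for L, M in edges))  # True: codewords always pass
# The 2-to-2 structure: a linear function on L intersect M has exactly two linear
# extensions to L and two to M, since each side adds only one dimension.
```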
And the question was: suppose we have some word that passes the test with non-negligible probability; does this word correspond to some global function, in some non-obvious manner? Up next we'll discuss how to study this test via expansion properties; if there are any questions, this is a good time to ask, because we're switching topics.

So let's discuss graph expansion, and let me remind you what expansion is. Let G be a regular undirected graph. Given a set of vertices S, the expansion of S is the number of edges that go between S and its complement, divided by the total number of edges that touch S. Just to remind you, a graph is usually called an expander if the expansion of any set containing at most half the vertices is bounded away from zero, meaning at least some constant epsilon; but for us, good expansion will mean expansion close to one. We want the expansion to be almost optimal. So what can we say about expansion in the Grassmann graph? Here is a fact which is not too hard to prove: if you have a set of vertices of fractional size delta, then it contains at least roughly a delta squared fraction of the edges of the graph. Playing around with this fact, you get that the expansion of the set is at most about one minus delta, so there is a limit to how close to one we can get. But that's true for any graph, right, not just the Grassmann graph? If you have a set of fractional size delta, you expect a delta fraction of the edges at each of its vertices to land inside the set, just because the set has fractional size delta. Yeah, that heuristic is for a random set, but the bound is true for any set. Right.

So the question is: are there sets for which this is not tight, meaning sets whose expansion is much smaller than one minus delta? The answer is yes, and we are going to see two examples. These examples are very important and very instructive about what to expect from this graph. The first is what we call a zoom-in example: fix some non-zero vector x and consider the set S_x of subspaces that contain x. I claim this set has expansion roughly half. Why? Fix any subspace L containing x and take a random neighbor L' of it. The intersection of L and L' is a hyperplane of L containing half of its points, so it contains x with probability roughly half; and whenever x is in the intersection, x is in L'. So L' is in S_x with probability roughly half, which means the expansion is roughly half. Are there more examples? Yes, here is a dual example, which we call a zoom-out example. Instead of a point, take a hyperplane: let W be a subspace of co-dimension one, and consider the set of subspaces that are contained in W. By a similar argument, which I won't spell out, this set also has expansion roughly half; this is because W contains half of the points. So now the question is: are there any other non-expanding sets? We've seen two examples, and of course there are more: you can take one of these examples and perturb it a little, and you get a new example. But that is cheating; it's not really a different example. So we want to ask whether there are any inherently different examples, and for that we need to define what this means.
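Again continuing the toy construction (reusing grassmann()), one can check the zoom-in example numerically. Here expansion is computed edge-endpoint-wise, counting an internal edge at both of its endpoints, which is the reading under which the claimed value is roughly one half; at l = 3 it comes out to exactly 4/7, and it tends to 1/2 as l grows:

```python
def expansion(S, verts, edges):
    degree = 2 * len(edges) / len(verts)    # the Grassmann graph is regular
    leaving = sum(1 for L, M in edges if (L in S) != (M in S))
    return leaving / (degree * len(S))

verts, edges = grassmann(6, 3)              # brute force; takes a few seconds
x = 0b000001
S_x = {L for L in verts if x in L}          # the zoom-in set
print(len(S_x) / len(verts))                # ~0.11: a smallish set
print(expansion(S_x, verts, edges))         # 4/7 ~ 0.571, tending to 1/2 as l grows
```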
Before we do that, let us make two observations. These examples do not come out of nowhere: each of them is an induced subgraph of the Grassmann graph which is by itself a Grassmann graph of lower dimension. In fact they are very special, because if you restrict attention to the corresponding induced subgraph, then suddenly the set becomes all of the vertices. So these are very non-random sets, and we are going to try to capture this notion of being close to such examples. Maybe you want to say that those two sets, especially the first one, are actually tiny, tiny sets, and they still don't expand; that's the worrisome thing, right? Yeah, that's an important point, thank you.

So we need to capture what it means to be close to these examples, and here is an important definition. We say a set S is (r, epsilon)-pseudo-random if any combination of at most r zoom-ins and zoom-outs, meaning passing to these induced subgraphs, leaves the density of the set at most epsilon. Okay, this definition is a little technical, so let us consider two examples. First, what happens if we pick a set of vertices at random? Random vertices don't care which induced subgraph we look at; they look the same everywhere, so no zoom increases their density by much, meaning they are highly pseudo-random. In sharp contrast, the two examples we've seen are very much not pseudo-random: as I said, they have o(1) density, but once you zoom in, respectively zoom out, a single time, they become everything, density one, a very large increase. And now the statement we'd like to study is the following: suppose S has low expansion, where low means bounded away from one; then S must not be pseudo-random, meaning there is some combination of zoom-ins and zoom-outs that increases its density significantly. Contrapositively: if a set is pseudo-random, then it must have near-perfect expansion. This is how we'll think about the statement from now on.

Okay, this is a statement about expansion and pseudo-randomness of a set; why should we care about it? Why study it at all? I didn't show you the examples demonstrating that the basic local-to-global question about the Grassmann test fails in its most natural formulations, but the key component in those examples is precisely sets with small expansion. So we thought: to study the test, one must first understand the non-expanding sets, and hopefully this would shed light on the more general question about the test. It turns out we were sort of right, in the following sense. Here is a theorem of Barak, Kothari, and Steurer: the statement above actually implies that the Grassmann test works. So if you prove this statement, you get for free that the more refined version of the local-to-global principle actually holds, and if you wish to resolve the 2-to-2 conjecture, all you need to do is prove this statement. Maybe just organize the high level again, be more specific; there are lots of moving parts, and I think none of them is trivial, so maybe give us a picture again of what's actually going on. Yeah: we've seen the Grassmann test, and we said that to analyze the PCP, the 2-to-2 construction, you need to study this local-to-global test. Okay.
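Before going on, a small sketch of the zoom-in densities behind the pseudo-randomness definition above (reusing grassmann(); the random comparison set is purely illustrative):

```python
import random   # reuses grassmann() from the earlier sketch

def zoom_in_density(S, verts, x):
    """Density of S among the vertices of the zoom-in subgraph {L : x in L}."""
    sub = [L for L in verts if x in L]
    return sum(L in S for L in sub) / len(sub)

verts, _ = grassmann(4, 2)
x = 0b0001
S_x = {L for L in verts if x in L}               # the zoom-in example
R = set(random.sample(list(verts), len(S_x)))    # a random set of the same size
print(zoom_in_density(S_x, verts, x))            # 1.0: the set becomes everything
print(zoom_in_density(R, verts, x))              # near |R|/|verts| = 0.2 on average
# Zoom-outs are analogous: restrict to subspaces contained in a hyperplane W.
```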
And then it turns out that to study this test, all you need to do is prove this statement: you need to understand which sets have far-from-perfect expansion. The reduction between this statement and the soundness of the test was done by Barak, Kothari, and Steurer. Does that clarify the high level? Yeah, so this gives you the linearity-test part, but then you still have to work to get it into the whole PCP. Yes, you still need to work on the outer PCP and so on, but I won't say anything about that; the plan is to focus on the local-to-global test. And it's no longer only a statement, because we have the theorem now. Okay.

So what do we know about this statement? Before I tell you, recall the definition of pseudo-randomness and the statement itself. Here is something you can prove; this is joint with Dinur, Khot, Kindler, and Safra. If we have a set of vertices that is (1, epsilon)-pseudo-random, then its expansion must be much larger than the half we've seen before: in fact, it must be almost three over four. Again, (1, epsilon)-pseudo-random means that no single zoom-in or zoom-out brings the density above epsilon. But we want higher expansion, close to one; can we say more? In the same paper we also showed that if the set is more pseudo-random, meaning (2, epsilon)-pseudo-random, then the expansion must be at least seven over eight minus a little bit. Sorry to keep interrupting, but those statements are tight, right, the number being seven-eighths? Yes, it is tight; I hope we'll see why. If you are after partial results, this is already enough to get new hardness results for 2-to-2 games and unique games, but to resolve the whole thing you need to get all the way to one, and this is what we did recently. Together with Khot and Safra, we showed the following: for every eta larger than zero there exist r and epsilon such that if S is sufficiently pseudo-random, meaning (r, epsilon)-pseudo-random, then its expansion is at least one minus eta. This confirms the statement we started with. Again, the statement is really about small sets, but let's forget about that for now.

So let's recap again. We've seen a theorem stating that a pseudo-random set has near-perfect expansion, and we've seen that this theorem implies NP-hardness for 2-to-2 games, and also NP-hardness for unique games between half minus epsilon and epsilon. Now I'd like to point out that this places some barriers on algorithms. What do I mean? If you want to refute the Unique Games Conjecture, you really must exploit its near-perfect completeness. Roughly speaking, this is because most, if not all, of the algorithmic techniques we know perform equally well whether you give them unique games with value half or with value one minus epsilon. So unless you believe something is wrong here, you really need to use the near-perfect completeness. In the rest of the talk I'd like to tell you a little about how we prove these expansion results; before that, if there are any questions, this is a good time. So, what goes into the proof of the expansion results? Fourier analysis, and lots of it, in two forms: Fourier analysis on the Grassmann graph and Fourier analysis on the Boolean hypercube. So what is Fourier analysis on the Boolean hypercube?
Yeah, so this part is quite simple. All it says is that a real-valued function on {-1, 1}^m can be expressed as a linear combination of monomials. This decomposition has many useful properties and has been used in many, many applications. Unfortunately, for the Grassmann graph we don't have anything that explicit, and we need to use something less explicit, a level decomposition. To discuss the level decomposition, I need to define the notion of level functions. What is a level-i function? Let's go slowly. Suppose we have a real-valued function f on the vertices of the graph. We say it is a level-i function if there exists some function g_i, defined on i-dimensional subspaces, such that the value f assigns to L is the sum of the values g_i assigns to the i-dimensional subspaces of L. This is technical, so let's take a few examples. What are the level-zero functions? Well, there is only one zero-dimensional subspace, the zero space, so g_0 is a single value, and the level-zero functions are the constant functions. Great, so let's do something a little less trivial: what are the level-one functions? Here we have two examples. For the first, let x be some non-zero vector and define f(L) to be one if and only if L contains x, and zero otherwise. I claim this is a level-one function. Why? Take g_1(y) to be one if y equals x and zero otherwise. If L contains x, exactly one of the summands is one, so we have equality; and if L does not contain x, all the summands are zero, so we get zero, equality again. Okay. As happens many times in this area, once you have one type of example, you also have a dual example. Instead of a vector, take a hyperplane W, and let f assign the value one to L if and only if L is contained in W. I claim this is also a level-one function. Now define g_1(y) to be 2 to the minus l if y lies inside W, and minus 2 to the minus l otherwise. Let's check the equality: if L is contained in W, all the summands are 2 to the minus l, and there are 2 to the l of them, so we get one; if L is not contained in W, half the vectors of L are in W and half are not, so we have an equal number of pluses and minuses and we get zero. Again equality. Question? Is there some explanation of these levels in terms of something like the representation theory of GL_n? Yeah, there is, but somehow for me it's difficult to think in those terms, so I prefer this view. These spaces are also closely tied to the eigenspaces of the graph, as we'll see in a moment. Okay, thank you.

So now that we have this notion, let us return to the level decomposition and say what it is. It is a fact, not too hard to prove, that any real-valued function on the graph can be written as a sum f_0 + f_1 + ... + f_l, where f_i is a level-i function which is, in addition, orthogonal to all the previous levels. This is quite easy to prove, but by itself it doesn't give much information about the decomposition. You can prove, and it is not too hard, that these functions have further properties; in particular, here is the connection we discussed a moment ago: these functions are in fact eigenvectors of the Grassmann graph, with eigenvalue roughly 2 to the minus i.
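The two level-one examples can be verified mechanically (reusing grassmann() and dot() from the sketches above). Following the talk's count of 2^l summands, the sums below run over all points of L, including zero, where the chosen g vanishes or the constants work out:

```python
# Reuses grassmann() and dot() from the sketches above.
verts, _ = grassmann(4, 2)
x, w, l = 0b0001, 0b1000, 2

# Example 1: f(L) = 1 iff x in L, realized by g1(y) = 1 iff y == x.
g1 = lambda y: 1 if y == x else 0
assert all(sum(g1(y) for y in L) == (1 if x in L else 0) for L in verts)

# Example 2: f(L) = 1 iff L lies inside the hyperplane W = {y : <w, y> = 0},
# realized by g(y) = +2^-l on W and -2^-l off W.
g2 = lambda y: 2 ** -l if dot(w, y) == 0 else -(2 ** -l)
in_W = lambda L: all(dot(w, y) == 0 for y in L)
assert all(abs(sum(g2(y) for y in L) - (1 if in_W(L) else 0)) < 1e-9 for L in verts)
print("both examples are level-one functions")
```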
So now that we have this, we can give an overview of the proof. Suppose we have a set S whose expansion is bounded away from three over four, say at most 3/4 minus eta. We want to apply our machinery, so define f to be the indicator of S, meaning f(L) is 1 if L is in S and 0 otherwise. If you write out the expansion, which amounts to some inner products and the like, you get that the expansion equals 1 minus one half times the mass of f on level 1, minus one quarter of the mass on level 2, minus one eighth of the mass on level 3, and so on; in other words, expansion(S) is roughly 1 minus the sum over i of 2 to the minus i times W_i, where W_i is the relative mass of f on level i (up to a level-zero term, which is negligible for small sets). The coefficients one half, one quarter and so on are simply the eigenvalues of the corresponding levels. Now you look at this and ask: what if the weight on level 1 were 0? Then the expansion would be at least 1 minus a quarter of the remaining mass, so at least 1 minus a quarter, which is more than 3/4 minus eta. So the weight on level 1 cannot be 0, and in fact it has to be non-negligible. The question then becomes: what can you say about functions with non-negligible weight on the first level? Can you say that again, Dor? I missed something. Yeah: we have a set S whose expansion is at most 3/4 minus eta, and we write the expansion of S using the level decomposition, as 1 minus half the mass on level 1, minus a quarter of the mass on level 2, and so on. Looking at the first level, we ask: can its mass be 0? No, it cannot, because then the expansion would be at least 1 minus a quarter, which is more than 3/4. Because the other weights sum to at most 1. Yeah. So the weight on level 1 must be non-negligible, and the question becomes what we can say about such functions.

Okay, let us consider a simpler case, a toy case, where instead of merely non-negligible weight, all of the weight is on level 1. So here is the question: suppose we have a Boolean function f on the Grassmann graph which is purely level 1, meaning it is the sum of a constant function, delta, the density of S, plus a level-one function which is orthogonal to the level-zero part. What can we say about such a function? Let's think for a moment and notice that something strange is going on. f is Boolean-valued, taking only the values 0 and 1, and delta is a constant, so the level-one part can take only two values, minus delta and 1 minus delta. On the other hand, its expectation is 0, because it is orthogonal to constant functions. So this level-one sum always deviates from its expectation; it is never near its expectation. Something is happening here that we should explain. To see it, let us make an abstraction: instead of looking at the level-one function directly, consider random variables z_1, ..., z_{2^l}, where z_1, z_2 and so on are the values of g_1 on the first point of a random subspace L, on the second point, and so on. What we know is that this sum always deviates from its mean, and we ask how that can happen. This is sort of an anti-concentration-of-measure question: we have a bunch of random variables with zero mean, we know their sum always deviates from its mean, and we ask how this can be. We have a lot of theorems that say when a sum is close to its average: if you have enough independence and boundedness, the sum is close to its mean.
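A toy, self-contained illustration (not from the talk) of the dichotomy being set up here: independent, bounded, mean-zero summands concentrate, so a sum that always deviates from its mean must hide either a heavy-tailed variable or strong correlations. The second family below is the extreme correlated case:

```python
import random

n, trials = 1024, 2000

def avg_abs_mean(draw):
    """Average of |z_1 + ... + z_n| / n over repeated draws."""
    return sum(abs(sum(draw())) / n for _ in range(trials)) / trials

independent = lambda: [random.choice((-1, 1)) for _ in range(n)]
correlated  = lambda: [random.choice((-1, 1))] * n   # one coin, copied n times

print(avg_abs_mean(independent))  # ~1/sqrt(n), about 0.025: concentrates near 0
print(avg_abs_mean(correlated))   # exactly 1.0: the sum always deviates
```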
So if that doesn't happen, it means that either something is really unbounded or there is dependency: either there is a variable with a large moment, say a large fourth moment, or there must be strong correlations between the random variables z_1, ..., z_{2^l}. Formally, this is done by considering high moments of the sum, and you conclude that in the first case there is some zoom-in that increases the density a lot, while in the second case there is some zoom-out that increases the density a lot. This is really where most of the work goes; in fact, you need stronger structural information than the decomposition I've stated, but it can be obtained. Okay, this was the first level, the toy case; what about higher levels? The case of the second level is already more challenging, because there are many more correlations now, many different kinds of them, and you need to control them all and to know what to do when each of them is large. It is possible to do by hand, but you get a lot of cases; and when you try to extend it to higher levels, there are already too many cases to handle, so you must do something more systematic. The question is how, and this is precisely what we did in the most recent paper: we developed a more systematic way to analyze these correlations, which are four-wise correlations. I should also mention that we actually work with a closely related Cayley graph; it is not the Grassmann graph, but it is closely related to it. And there was also joint work, with Dana Moshkovitz as well, on the Johnson graph, which is related but simpler than the Grassmann graph. So this is all I really wanted to say. Thank you.

Okay, thank you. Any questions? We have plenty of time. I was wondering if you could say something more about the local-to-global condition: what is the statement, or can you give some intuition for what's happening? Yeah. The first guess would be: if you have some word that passes the test with non-negligible probability, maybe it is close to some codeword of the Grassmann code itself. And this is false: you can take a bunch of these non-expanding examples, put a different global function on each one of them, and glue them together. The assignments are consistent within the cells, and most edges stay inside the cells, so you have consistency without any global structure. This is why the most obvious candidate statement fails. So what does the true statement say? It says the following: if you have some word that passes the test with non-negligible probability, then there are a small set of vectors Q and a subspace W of small co-dimension such that, if you restrict to subspaces that contain Q and are contained in W, then you do have global structure. This sounds a little complicated, but it actually turns out to be true. Maybe you should also say that it is not obvious why this would be enough: usually in PCPs you expect a global structure, but somehow this was enough for you; there is really no global structure, potentially many linear functions all mixed together, but the fact that you can isolate one is somehow enough.
Yeah, in the outer PCP; I don't really want to discuss the outer PCP, but if you have this sort of structural result, then between the two sides of the outer PCP you need to somehow correlate these Q's and W's, and this is not obvious, but nonetheless it can be done. Are there any conjectures as to how to take this to the next level, namely to unique games? That's a good question; I don't know of any concrete path, but it's possible. Is the issue of perfect completeness something you think can be fixed? The issue with perfect completeness is that we don't have it at any point in the reduction, simply because the starting point is a linear problem, so you cannot hope to have it. We should call Johan Håstad and complain, right? It's his fault, not ours. But you could imagine starting from another problem, except that it breaks all your linear structure. Yeah, it would certainly be nice to have perfect completeness, but we don't know how to achieve it.

Is there a question? Oded? Yeah, there's a question from me. This Grassmann graph has a natural two-to-two structure: you take an l-dimensional subspace, change one vector, and get another one. But I guess you also get a natural two-to-one structure if you take the l-dimensional and (l-1)-dimensional subspaces together, with restrictions as the constraints? Yeah, I should have said that. The earlier results I stated for two-to-two actually hold for two-to-one, with precisely the structure you described: instead of two l-dimensional subspaces, you take an l-dimensional and an (l-1)-dimensional one, with an edge when one contains the other. So you get the two-to-one conjecture, with imperfect completeness? Yeah. Okay, thank you.

Next question; you may have to unmute yourself. Yeah, thanks for the talk, it was very interesting. I'm drawing an analogy to the vertex cover result of Dinur and Safra: they were also picking subsets, but just subsets of CSP constraints in some sense, whereas here you are very specific, picking subsets of a 3-LIN instance, and in fact not just subsets; it's the subspace structure that is crucial to your analysis. Am I getting that right? So we take a bunch of equations, we look at all the variables appearing in these equations, these span a large space, and we build the Grassmann graph on that space; that is roughly the reduction. So it has some similarity to the Dinur-Safra approach, but it also has some limits; the soundness there cannot be close to zero. Is the next questioner there? We can hear you. Oh, actually we cannot hear you, even though you unmuted; strange. Is there time for a few more comments or questions? Yes, go ahead.

Okay, so a question was asked earlier about understanding the Fourier analysis of the Grassmann graph in terms of representations of GL_n(q), and I just wanted to say that in fact there is such an understanding. We might be getting into too much algebra here, but suffice it to say that you can identify the subspaces of the vector space over F_q with cosets of the parabolic subgroup corresponding to GL_{n-k}(q) and GL_k(q). Once you make that identification, the representations are actually quite beautiful objects, and they are parameterized by Young diagrams.
So, if you know how to think in terms of Young tableaux: the eigenspaces of the Grassmann graph are parameterized by diagrams with two rows, where the second row has one cell, two cells, three cells, and so on, up to k cells. I also wanted to say that the eigenspaces of this Grassmann graph are studied a lot in finite geometry, and in the area of extremal combinatorics called Erdős-Ko-Rado combinatorics, so you might consider both of those areas as good resources for studying the algebraic nature of the Grassmann graph. And now I have a question: have you looked at the isoperimetric properties of the Grassmann graph to say things about expanding sets? There are things that are known; it could just be that what you need is stronger than what is known. Sorry if that went too fast. My question is: are you aware of any isoperimetric results for the Grassmann graph that you could use to say something about expansion? The simple answer is that I don't know of any isoperimetric results about this kind of expansion; otherwise I'd probably have stated them. Part of the difficulty is that we didn't really understand the structure of the decomposition there, so we did what we could, and at the end of the day we had to switch to the Cayley graph, which also makes life easier. I hope this is a satisfactory answer. So I guess it's possible that one could find a more elegant approach directly using Grassmann graphs. These things easily get very messy. Maybe one more remark here: Dor, I think you need more than just isoperimetry; you really have to characterize the non-expanding sets, and that is a different property. And you do have a nice combinatorial characterization of those sets; it's not so much a functional statement, more a combinatorial one, so perhaps one could look at that too. Yeah. Thank you.

Any more questions? Hi, can you hear me? Yes, go ahead. So you get imperfect completeness for the 2-to-2 result, I think, but do you get something about how well Lasserre does? Can it not distinguish between one and epsilon? For Lasserre you do get perfect completeness, because the 3-LIN instances are perfect for Lasserre, so it transfers. And that's for many levels of Lasserre, right? Yes. Okay, great. Any more questions? Ryan? Well, maybe this is a little boring and premature, but did you work out the dependence of epsilon on the alphabet size? Yes, but I don't remember it now; it's not too terrible, but I can't state the soundness off the top of my head. Okay, just curious. So, we've got no more questions; you're welcome to stay and chat offline, and we'll take ourselves offline in a minute. Just a reminder that in two weeks we have Michael Kearns. Hope to see you then; thanks everyone for joining. Thank you.