Okay, so today we're going to talk about the Unique Games Conjecture. This is really a fascinating question. Perhaps, in theoretical computer science, it is one of the most important questions that doesn't seem hopelessly unsolvable: it feels like a question where we learn something more every year, and eventually we might actually resolve it.

The setting is constraint satisfaction. A CSP instance consists of variables x_1, ..., x_n over some alphabet Σ; traditionally Σ = {0,1}, but it can be any finite alphabet. We write x_i for the value the assignment gives to the i-th variable; the constraints P_1, ..., P_m each look at a few variables, and the value of an assignment is the average, val(x) = (1/m) Σ_j P_j(x), the fraction of satisfied constraints.

For example, let's start with the case of 3-SAT. For 3-SAT, deciding whether the value is 1 or less than 1 is NP-hard, and it is believed you cannot do much better than brute force: under the exponential time hypothesis it requires 2^{Ω(n)} time.

On the other end there are problems we can solve in polynomial, even linear, time. For example the XOR problem, linear equations over GF(2), which we can solve with Gaussian elimination; graph 2-coloring; 2-SAT; and, as we'll define shortly, unique games with perfect completeness. The way I like to group these guys (this is not a standard name, it's just what I call them) is as "propagation" problems: the algorithm that solves them basically guesses one value and then propagates this guess. Say, for 2-coloring: how do you decide whether a graph is bipartite? You put one vertex on one side, and then you propagate this guess along the edges; if the graph is bipartite, you never reach a contradiction.

There are ways to generalize this notion of what it means to be solvable by propagation or by Gaussian elimination (we'll see the notion of polymorphisms in a moment), and the dichotomy conjecture says that basically every CSP is either NP-hard, in fact conjecturally hard in exponential time, or in P; and moreover, the problems in P are solved by propagation-type or Gaussian-elimination-type algorithms, typically in time with a small exponent. This is, up to some modifications, the version of the conjecture from the nineties, and I think the people working on it have essentially no doubt that it's true: it has been proven for binary alphabets, it has been proven for ternary alphabets, and there is a program in place to prove it in general, with slow and steady progress.
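As a concrete illustration of the propagation idea just mentioned, here is a minimal sketch for 2-coloring; the function name and graph encoding are mine, not from the lecture.

```python
# A minimal sketch of the "propagation" idea on a linear-time CSP: 2-coloring
# (bipartiteness). The first vertex's color is forced up to symmetry, and
# every other value is propagated by BFS along the edges.
from collections import deque

def two_color(n, edges):
    """Try to 2-color the graph; return a coloring or None if not bipartite."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [None] * n
    for start in range(n):            # handle each connected component
        if color[start] is not None:
            continue
        color[start] = 0              # "guess" one value, then propagate
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]   # the constraint forces the value
                    queue.append(v)
                elif color[v] == color[u]:    # propagated values clash
                    return None
        # if no clash occurs, the component is consistently 2-colored
    return color

print(two_color(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 4-cycle: a coloring
print(two_color(3, [(0, 1), (1, 2), (2, 0)]))          # odd cycle: None
```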
(Question from the audience: for a given predicate, do we know how to decide which case it falls into?)

So they have a kind of test: this is the notion of polymorphisms. A polymorphism is basically an operation with the following property: if you have some number of satisfying assignments, you can combine them with this operation, coordinate-wise, and come up with a different assignment that is also satisfying. For XOR constraints, for example, you can take three satisfying assignments and just XOR them together, and you preserve satisfaction; a polymorphism is a generalization of that. And interestingly, they know how to prove that if a CSP doesn't have a nontrivial polymorphism, then it's going to be hard. The part that's still open is to prove that Gaussian-elimination-type algorithms solve all the remaining cases.

The people that were in my crypto class will remember me complaining about Gaussian elimination. It's a really annoying algorithm: it's hard to understand, and one could almost wish it didn't exist, because it doesn't have a lot of uses for us; it's mostly annoying because it doesn't allow us to prove hardness of things we hope are hard. For example, in cryptography we want to base cryptosystems on the hardness of solving noisy linear equations, which is conjectured to be impossible to do efficiently. The reason Gaussian elimination doesn't break this is the noise: especially in the discrete setting (in the continuous setting noise is not so bad, you can do least-squares minimization), noise completely defeats Gaussian elimination. One formal way to talk about all this is through the complexity of approximation.

So in approximation, you are only told that the instance has value 1 − ε, and you want to find an assignment whose value is also close to 1; say the "1 − f(ε) versus 1 − ε" approximation problem. The goal is: if the instance has a very satisfying assignment, find a reasonably satisfying one; we don't care exactly about the dependency.

For 3-SAT, basically the best thing we can do is get 7/8 + o(1), even when the value is 1 − o(1). And this we can do trivially: a random assignment satisfies a 7/8 fraction of the constraints in expectation. Beating this is NP-hard; that's Håstad's theorem, proved via the PCP theorem and label cover. So even though it's easy to satisfy 7/8 of the constraints of a satisfiable instance, from the approximation point of view the best thing we know is this trivial 7/8 + o(1).

But for the propagation problems, you can do better. Maybe it can also be done with some combinatorial algorithm, but the best way we know is semidefinite programming, basically degree-2 sum of squares: in all these cases, given an instance of value 1 − ε, you can find an assignment of value 1 − f(ε), where f(ε) goes to 0 with ε. So if you actually want to understand approximation better, you want to understand what this function f(ε) is, and the Unique Games Conjecture is really about pinning down this dependence.
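Going back to the polymorphism observation for a second, here is a quick sanity check that the coordinate-wise XOR of three solutions of a parity system is again a solution; the instance is randomly generated with a planted solution, purely for illustration.

```python
# Sanity check of the 3-ary XOR polymorphism: for any system of parity
# constraints Ax = b (mod 2), the coordinate-wise XOR of three solutions
# is again a solution, since A(x+y+z) = b+b+b = b (mod 2).
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, m = 6, 4
A = rng.integers(0, 2, size=(m, n))   # each row is one parity constraint
x0 = rng.integers(0, 2, size=n)
b = A @ x0 % 2                        # plant a solution, so the system is satisfiable

def satisfies(x):
    return np.array_equal(A @ x % 2, b)

# brute-force the solution set (at least 2^(n-m) = 4 solutions exist)
sols = [np.array(x) for x in product([0, 1], repeat=n) if satisfies(np.array(x))]
x, y, z = sols[:3]
w = (x + y + z) % 2                   # apply the polymorphism coordinate-wise
print(satisfies(w))                   # True
```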
For example, take max cut (the precise factor depends on the regime, but think of ε as a constant). What we know is that given a graph with a cut of value 1 − ε, we can efficiently find a cut of value 1 − O(√ε), in fact something like 1 − (2/π)√ε, and the UGC tells us that this is the optimum. And it's even more subtle than that: if the UGC is true, it doesn't just tell us where things jump from polynomial to exponential, it would help explain the whole shape of this approximation picture. Basically, my sense is that we still don't completely understand the right way of looking at these things, and to me, even more interesting than resolving the UGC would be to find the way of looking at them from which the answer is obvious. You would still have to prove it, but somehow I feel that if we found the right way to understand the whole picture, maybe a bigger picture and not just this piece, then it would be obvious to us what the value of this function f should be, and everything would just fall out.

But anyway, let's state the conjecture. A unique game over alphabet Σ is a CSP where the predicates are of the form x_j = π(x_i), for π some permutation of Σ. Concretely: you have some graph G on n vertices, you label its edges with these permutations, you assign to the vertices labels x in Σ^n, and the value of an assignment is the fraction of edges (i, j) for which x_j = π_ij(x_i). The "1 − ε versus δ" unique games problem is to distinguish instances of value at least 1 − ε from instances of value at most δ. The Unique Games Conjecture says that for every ε > 0 there exists an alphabet Σ such that the 1 − ε versus ε unique games problem over Σ is NP-hard.

And like I said before, the order of quantifiers is crucial: if Σ is fixed, then there is an algorithm that, given an instance of value sufficiently close to 1, satisfies a fraction of constraints arbitrarily close to 100 percent; and with completeness exactly 1 the problem is in P by propagation. So unlike label cover, where you have hardness with completeness 1, or 3-XOR, where you have hardness with completeness close to 1 over a fixed alphabet, unique games can only become hard when the alphabet grows with ε.

The reason some people are excited about the conjecture is the result of Raghavendra that I mentioned: if the unique games conjecture is true, then the basic semidefinite program, degree-2 sum of squares with some rounding on top of it, gives the optimal approximation for every CSP. So it basically tells us that if the UGC is true, then we can find out exactly the shape of this function f; and in fact, more generally, for every regime of parameters we can find exactly the best we can do in polynomial time, and beyond that there is a jump. So in this sense the UGC is a very good friend of sum of squares, because it tells us that sum of squares is optimal. Whether sum of squares is a friend of the UGC is the main thing we'll see: so far sum of squares seems to be its worst enemy, since we are trying to use it to break the conjecture, but maybe it will stop just short of that.

Okay, so this is the Unique Games Conjecture. Let me say what we plan to do. We want to define the small set expansion problem, which is more or less equivalent to unique games but somehow easier to work with; then show the sub-exponential algorithm for small set expansion, and maybe sketch how it generalizes to unique games; and then I want to discuss how this algorithm and parts of its analysis translate back and forth between the two problems, and, if time permits, talk about some approaches people have to the UGC and what sum of squares has to do with these approaches. So this is the plan. If there are no questions, let me continue.
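To make the definition concrete, here is a tiny, hypothetical unique games instance and a brute-force evaluation of its value; the encoding of permutations as tuples is mine.

```python
# A toy unique games instance: every constraint is a permutation on one edge,
# and the value of an assignment is the fraction of edges (i, j) with
# x_j = pi_ij(x_i). Tiny, so we can brute-force the optimum.
from itertools import product

SIGMA = range(3)                                  # alphabet {0, 1, 2}
# edges as (i, j, pi), where pi[a] is the label j must get when i is labeled a
edges = [
    (0, 1, (1, 2, 0)),    # x_1 = x_0 + 1 (mod 3)
    (1, 2, (2, 0, 1)),    # x_2 = x_1 + 2 (mod 3)
    (2, 0, (0, 1, 2)),    # x_0 = x_2
]

def value(x):
    return sum(x[j] == pi[x[i]] for i, j, pi in edges) / len(edges)

best = max(product(SIGMA, repeat=3), key=value)   # brute force over 27 assignments
print(best, value(best))                          # e.g. (0, 1, 0) with value 1.0
```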
So the unique games problem: we can also look at it as a problem on graphs. We define a notion called the label-extended graph. If we have a unique games instance, let me call it G, a graph whose edges are labeled with permutations of Σ, then we can define a graph Ĝ on n·|Σ| vertices. Basically, what we do is replace every vertex by a cloud whose size is the alphabet size, and if i and j were neighbors with a particular permutation π on their edge, then between the two clouds we put the matching corresponding to this permutation: (i, a) is matched to (j, π(a)). If the original graph was d-regular, and you can always think of the original graph that way, then Ĝ is d-regular as well.

Now we have a natural mapping between assignments and sets: an assignment x in Σ^n naturally maps to the subset S_x = {(i, x_i)}, one vertex in every cloud. Every violated constraint contributes edges leaving this set: if the constraint on edge (i, j) is violated, then the matching edge at (i, x_i) goes out of S_x, and the matching edge at (j, x_j) goes out as well, so two times the number of violated constraints is the number of edges leaving S_x. In particular, if the number of violated constraints was ε·m, where m is the total number of constraints, then ε is exactly the expansion of S_x, the fraction of its incident edges that leave it. And since we use one vertex in every cloud, the size of S_x is n, which is the number of vertices of Ĝ divided by the alphabet size: the set is small.

So in some sense, we map every assignment that satisfies most of the constraints to a small set that doesn't expand. Still, "the assignment is very good" doesn't immediately translate into a clean statement about graphs, clouds, and so on, which is annoying, so we define the small set expansion problem, which is basically the same question for general graphs: given a graph and parameters δ and ε, distinguish between the case that there is a set S of measure δ with expansion at most ε, and the case that every set of measure roughly δ (allowing a little bit of slack in δ) has expansion at least 1 − ε. The small set expansion hypothesis is very similar in form to the unique games conjecture: for every ε there exists a δ such that this problem is NP-hard.

Let me say how, morally, the hypothesis relates to the UGC; many people are actually not sure about this part, but it seems right. One direction, the claim that the unique games problem is easier than the small set expansion problem, seems like it should be obvious: the reduction would more or less be this mapping of a unique game into its label-extended graph. Unfortunately, this is also open. It's a very natural, very simple reduction, but it's not sound. The reason it's not sound is that the label-extended graph of the original unique games instance could have had non-expanding small sets that don't come from assignments. Basically, we showed how completeness works: we map an assignment that satisfies most of the constraints into a set that doesn't expand. But for soundness, we would also have to take a set that doesn't expand and map it into an assignment that satisfies most of the constraints. If the set intersects every cloud in at most one vertex, everything kind of falls out; but if it's a set with a very non-uniform intersection with the clouds, say some clouds are completely contained in it, then we are in trouble. In particular, if the original unique games instance was composed of a number of disconnected parts, then this reduction would be trivially unsound, and we haven't found a way to fix it.
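Here is the label-extended construction and the assignment-to-set mapping in code, on the same toy instance as before; a sketch under my own conventions, with expansion measured as (edges leaving S) divided by (edge endpoints in S).

```python
# Label-extended graph: each vertex becomes a cloud of |Sigma| vertices, and
# an edge (i, j, pi) becomes the matching {(i, a) -- (j, pi[a])}. An assignment
# x maps to S_x = {(i, x_i)}, whose expansion equals the fraction of violated
# constraints (for a regular instance).
def label_extended(n, edges, k):
    """Edge list of the label-extended graph on n*k vertices (i, a)."""
    ext = []
    for i, j, pi in edges:
        for a in range(k):
            ext.append(((i, a), (j, pi[a])))   # matching edge between the clouds
    return ext

def expansion(ext_edges, S):
    leaving = sum((u in S) != (v in S) for u, v in ext_edges)
    incident = sum((u in S) + (v in S) for u, v in ext_edges)
    return leaving / incident

edges = [(0, 1, (1, 2, 0)), (1, 2, (2, 0, 1)), (2, 0, (0, 1, 2))]
ext = label_extended(3, edges, 3)
for x in [(0, 1, 0), (0, 1, 1)]:       # satisfying vs. two-violations assignment
    S = {(i, x[i]) for i in range(3)}
    print(x, expansion(ext, S))        # 0.0, then 2/3 = fraction violated
```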
So this direction is open. The other direction, where there is no natural reduction, is, maybe surprisingly, actually a theorem: that's the result of Raghavendra and Steurer, that the small set expansion hypothesis implies the unique games conjecture. It's a clever reduction that goes through some kind of noise transformation, and I won't cover that part. But to me it says that small set expansion is, in some sense, what should have been the right way to formulate the unique games conjecture: it seems related to unique games the way it should be, and it also connects to problems like sparsest cut and balanced separator, which we don't know how to connect to unique games directly.

So we get the following kind of picture; think of it as a hardness hierarchy. Small set expansion is, in this picture, the easiest problem in the unique games family: it reduces to unique games, and we can also use it to derive hardness for, say, sparsest cut, which we don't know how to derive from unique games, so it's a useful starting point. Above it sits the basic conjecture, unique games itself. Unique games then implies hardness for the propagation-style CSPs: max cut, max 2-SAT, and those guys. And then you also have the hardest problems, like label cover, and the CSPs that completely don't need propagation.

There is a natural way to think about what it means not to need propagation: from guessing one variable, you cannot infer another variable. One way to say that a predicate doesn't allow propagation is to say that the predicate supports a pairwise independent distribution: a distribution over satisfying assignments such that, given the value of one variable, every other variable participating in the constraint still looks uniform, so there is nothing to propagate.

Among these pairwise independent predicates there are two kinds: the ones whose support is a subspace and the ones whose support is not. For example, 3-XOR: the solutions of an equation a + b + c = 0 (or = 1) form an affine subspace of dimension 2 inside dimension 3, and the uniform distribution on it is pairwise independent. And then there are the guys that are not subspaces. What's interesting is that we have a reduction from label cover to the subspace guys, but right now we don't have a reduction from label cover to the guys that are not subspaces. So at the moment, this is the part of the picture where NP-hardness is missing. I certainly believe that this should also be NP-hard, and in fact we do have sum of squares lower bounds for it, just as we have the classical sum of squares lower bounds for the subspace case. The label cover reduction for the subspace case is a relatively recent reduction by Chan; hopefully, someone will also show the analogue for the non-subspace case.

This is the current situation on the hardness side. On the easy side, we have a sub-exponential time algorithm up here, for small set expansion and unique games, and I believe this should extend further down. And of course, since hardness here flows downward from unique games, the fact that unique games itself can be solved in sub-exponential time means the problems below could plausibly have sub-exponential algorithms too; nothing rules it out. So you can already see some open problems of the week, such as: give a sub-exponential time algorithm for this one, this one, or this one.
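Back to the pairwise independence condition for a moment: here is a quick check that the uniform distribution over solutions of a + b + c = 0 (mod 2) has every pairwise marginal uniform, which is exactly why nothing propagates.

```python
# The 3-XOR example: the uniform distribution over solutions of
# a + b + c = 0 (mod 2) is pairwise independent, so seeing one variable
# tells you nothing about any other variable in the constraint.
from itertools import product
from collections import Counter

support = [x for x in product([0, 1], repeat=3) if sum(x) % 2 == 0]
print(support)                   # 4 points: a 2-dim subspace of F_2^3

for i, j in [(0, 1), (0, 2), (1, 2)]:
    marg = Counter((x[i], x[j]) for x in support)
    print((i, j), marg)          # each of the 4 pairs appears once: uniform
```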
For example: give a 2^{Õ(√n)}-time algorithm for max cut, which I think was posed as an open problem of the week some time ago; if you're interested in this one, come talk to us. It's reasonable to believe that there should be a sub-exponential time, SOS-based algorithm there, and I think it's also reasonable to believe that there should be an NP-hardness reduction for the non-subspace predicates, and both of these things would go a long way toward clarifying this picture. It's also interesting to understand what the obstacles are, and whether there is something genuinely interesting to deal with. And of course, it could also be that unique games is in P; basically, some believe that this dividing line is not really NP-hard versus sub-exponential, but NP-hard versus P. (Maybe I should be more careful about what I mean by sub-exponential; I'll come back to that.)

Now, just looking at what we currently have in terms of SOS lower bounds: below this line, we actually do have lower bounds. One of them, 3-XOR, was an NP-hardness result before it was an SOS lower bound; the other one, I think, was an SOS lower bound before it was an NP-hardness result, and it inspired it. For the non-subspace predicates we only have the SOS lower bound; that part of the line is captured by SOS, so maybe we just don't know yet what the truth actually is there.

So what is this TSA, and what are the other examples? These are predicates that contain a pairwise independent distribution, but one that is not supported on a linear subspace. The TSA, "tri-sum-and", is a very simple such predicate, x1 ⊕ x2 ⊕ x3 ⊕ (x4 ∧ x5), and there are more complicated things along those lines: predicates where there is a pairwise independent distribution supported on them, but the support is not a subspace. In fact, if you look at random predicates: take a random predicate of a certain density; for most choices of the density, it will land here, pairwise independent but not a subspace. So one of the nice predictions of the unique games conjecture is that a random predicate, unless it's super sparse, in which case it can be easy, has to be hard to approximate, and we still don't know how to prove that unconditionally.

And all of this is about approximating in a particular regime, so here you have to be a little bit careful, because there is a certain regime where max cut becomes hard: say 1 − ε versus 1 − O(√ε); this is the regime where we don't have an algorithm. One of the tricky things is that for all of these problems, if you ask for a better approximation than the known algorithms give, then it becomes harder: you hit hardness, and it becomes exponentially hard. In fact, even for the unique games problem itself: I stated the regime of 1 − ε versus ε, but you will find other regimes where the problem is actually exponentially hard, and the algorithm I'm about to describe breaks down there.
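Deciding whether a given predicate supports a pairwise independent distribution is just a linear program: look for a distribution on the satisfying assignments whose pairwise marginals are all uniform. Here is a hedged sketch in scipy; I'm assuming the TSA predicate mentioned above is tri-sum-and, x1 ⊕ x2 ⊕ x3 ⊕ (x4 ∧ x5).

```python
# LP feasibility check: does a predicate support a pairwise independent
# distribution? Variables: a probability mu(a) for each satisfying assignment;
# constraints: mu sums to 1 and every pairwise marginal equals 1/4.
import numpy as np
from itertools import product, combinations
from scipy.optimize import linprog

def tsa(x):  # tri-sum-and: x1 xor x2 xor x3 xor (x4 and x5)
    return (x[0] ^ x[1] ^ x[2] ^ (x[3] & x[4])) == 0

sat = [x for x in product([0, 1], repeat=5) if tsa(x)]

A_eq, b_eq = [[1.0] * len(sat)], [1.0]           # total probability 1
for i, j in combinations(range(5), 2):
    for bi, bj in product([0, 1], repeat=2):     # each pair marginal = 1/4
        A_eq.append([float(x[i] == bi and x[j] == bj) for x in sat])
        b_eq.append(0.25)

res = linprog(c=np.zeros(len(sat)), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * len(sat))
print(res.success)                               # True: TSA supports one
```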
So let me state the sub-exponential time algorithm. It involves a parameter β that trades off both of these, running time against approximation quality; you can think of it as a degree. Roughly: for every β, there is an algorithm running in time exp(n^{O(β)}) that, given a graph containing a set of measure δ with expansion at most ε, finds a set of measure roughly δ with small expansion, and the guarantee improves as β grows relative to ε. So if you choose β to be ε, you get a 2^{n^{O(ε)}} algorithm for small set expansion, and if you choose β to be ε^{1/3}, this gives you a 2^{n^{O(ε^{1/3})}} algorithm for unique games. So β is the knob you can turn if you want more time versus more quality, but for unique games you need to choose β to be at least ε^{1/3}, and for small set expansion you need β to be at least ε.

(Question about where β enters.) Yes, that's inside the algorithm: basically β fights against ε there, so you want β somewhat larger than ε, and we'll see where the ε^{1/3} comes from.

[Inaudible discussion.]

(Question: why is it called a game?) If you like, there is an interpretation of unique games in terms of two-prover games, but it's a different interpretation and not so useful here. So don't think of it as a game; I just think of it as a constraint satisfaction problem.

[Inaudible discussion.]

The main tool we use is what I'll call a local Cheeger inequality. So, Cheeger's inequality is the following. (We might have to pass to the lazy walk here, replacing G by (G + I)/2, but let me not worry about that.) Let's identify G with its normalized adjacency matrix, so think of G as the random walk matrix of a regular graph. If there is a vector x with x^T G x ≥ (1 − ε)·‖x‖₂², then we can find a set S of measure at most 1/2 such that the fraction of S's incident edges that cross to its complement is at most O(√ε). But this is the usual Cheeger inequality, and it doesn't give us a small set. The local Cheeger inequality does the following: if, in addition, we have the property that x is analytically sparse, where analytically sparse means that ‖x‖₁ ≤ √(δn)·‖x‖₂, then we can find such a set S of measure at most O(δ). And how do you prove that? Let me just sketch it.
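Before the sketch, here is what the level-set rounding inside Cheeger's inequality looks like operationally; a minimal sweep-cut sketch, with my own conventions (0/1 adjacency matrix of a d-regular graph, expansion = edges leaving S over d·|S|).

```python
# Sweep cut: sort vertices by the entries of x (squared), and take the best
# prefix as the set S. This is the step the local version revisits: if x is
# supported on few coordinates, every level set is automatically small.
import numpy as np

def sweep_cut(adj, x):
    """adj: symmetric 0/1 adjacency matrix of a d-regular graph."""
    n = len(x)
    order = np.argsort(-x**2)               # largest entries first
    d = adj[0].sum()
    best_S, best_phi = None, np.inf
    in_S = np.zeros(n, dtype=bool)
    cut = 0.0
    for t, v in enumerate(order[:n // 2]):  # keep measure at most 1/2
        cut += d - 2 * adj[v, in_S].sum()   # edges leaving S after adding v
        in_S[v] = True
        phi = cut / (d * (t + 1))           # expansion of the current prefix
        if phi < best_phi:
            best_phi, best_S = phi, in_S.copy()
    return best_S, best_phi

# demo: two disjoint 4-cliques (3-regular); x concentrated on one clique
A = np.zeros((8, 8)); A[:4, :4] = 1; A[4:, 4:] = 1; np.fill_diagonal(A, 0)
x = np.array([1., 1, 1, 1, 0, 0, 0, 0])
print(sweep_cut(A, x))                      # recovers one clique, expansion 0.0
```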
The local Cheeger inequality, I think we've proven, in some variant, several times. And yes, it holds for all x: given an x like that, you can actually find a set S like that algorithmically. So suppose there exists an x like that. First, let me do a sanity check, for my sake as much as yours, because I always get confused about these things: let's check that we have the right scaling. As a technical remark, if x were exactly of the form 1_S, the indicator of a set S of size δn, then ‖x‖₁ would be δn and ‖x‖₂ would be √(δn); so ‖x‖₁ = √(δn)·‖x‖₂ holds with equality, and now you know you've got the inequality in the right direction.

Generally, the way it works is that when you make a vector sparser, you benefit the higher norms more than the lower norms: the spikier entries, when you take them to a higher power, dominate. So whenever you see a condition like this, a larger norm being large compared to a smaller norm, it is a sparseness condition; making the vector more spiky makes it easier to satisfy. I just want you to see that this condition is a sparseness condition.

(Remark from the audience, roughly: the quadratic form condition itself already constrains x; it cannot be completely spread out either, so the two conditions interact.) Right.

So here is the sketch; there are two steps. Step 1 is the case when the support of x has size at most δn, when x is literally sparse. That part basically requires going through the proof of Cheeger's inequality and doing it again. The proof works by taking a level set: you look at the entries of x, most of which are zero here; you choose some threshold, and the level set, the coordinates above the threshold, will be the set S. The point is that every level set is contained in the support of x. So if the vector was actually sparse, you go carefully through the proof of Cheeger's inequality and see that what Cheeger gives you is a level set, hence a small set.

Step 2 moves from analytically sparse to actually sparse: given an x satisfying property number 1, the quadratic form condition (by the way, I might need to lose some constant in the ε here), and property number 2, analytic sparsity, there exists an x' with ‖x' − x‖₂ ≤ 0.01·‖x‖₂, or some small constant, such that x' itself is actually sparse, with support of size O(δn). This is a kind of counting argument; there are calculations. Basically, the condition says that a vector with ‖x‖₁ this much smaller than ‖x‖₂ might not be sparse, but it looks like something sparse: it has some small coordinates and some large coordinates.
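The scaling sanity check from a moment ago, done numerically; the parameters are arbitrary.

```python
# For the flat indicator of a set of size delta*n, the analytic-sparsity
# condition ||x||_1 <= sqrt(delta*n) * ||x||_2 holds with equality; spikier
# vectors satisfy it with slack.
import numpy as np

n, delta = 1000, 0.05
x = np.zeros(n)
x[: int(delta * n)] = 1.0                   # x = 1_S with |S| = delta*n

l1, l2 = np.abs(x).sum(), np.sqrt((x**2).sum())
print(l1, np.sqrt(delta * n) * l2)          # 50.0 and 50.0: tight for flat x

y = np.zeros(n); y[0] = l2                  # same l2 norm, all mass on one spike
print(np.abs(y).sum() <= np.sqrt(delta * n) * l2)   # True, with lots of slack
```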
And all the small coordinates, if you remove them, you don't lose a lot of the ℓ₂ norm: only about δn coordinates actually contribute to the ℓ₂ norm. So you can make the vector sparse while losing very little of its ℓ₂ mass, and if you don't lose a lot of the ℓ₂ mass, then the vector will still approximately satisfy the quadratic form condition. So the truncated vector is actually sparse and still satisfies property 1. This is just a calculation, and actually the second step is easier than the first step.

This move is something you see again and again: this type of norm-ratio property is a very good proxy for being actually sparse. We also saw this in this class with the planted sparse vector problem. The thing is that the ℓ₁-versus-ℓ₂ version is sometimes convenient for the computation and sometimes not so easy to work with; sometimes you want to use the 2-versus-4 norm ratio instead, or some other norms. But this is the general thing we keep doing when we want to analyze an algorithm that has to come up with something sparse: we use these kinds of soft proxies for sparseness.

So, given all this, what we want to show is the following. The sub-exponential algorithm will follow from a statement with a somewhat bombastic name: the structure theorem. The structure theorem is the following thing.
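To make step 2 concrete, here is the truncation argument as a small numerical sketch; the vector and parameters are made up. Coordinates with |x_i| ≤ τ contribute at most τ·‖x‖₁ to ‖x‖₂², so taking τ = 0.01·‖x‖₂²/‖x‖₁ loses at most one percent of the ℓ₂ mass and leaves at most ‖x‖₁/τ nonzero coordinates, which is O(δn) under the analytic sparsity condition.

```python
# Step 2 in code: zeroing the small coordinates of an analytically sparse
# vector costs little l2 mass and makes it actually sparse.
import numpy as np

rng = np.random.default_rng(1)
n, delta = 10000, 0.01
x = np.zeros(n)
x[: int(delta * n)] = rng.normal(size=int(delta * n))  # a (truly) sparse part
x += 1e-4 * rng.normal(size=n)                         # plus tiny noise everywhere

l1, sq2 = np.abs(x).sum(), (x**2).sum()
tau = 0.01 * sq2 / l1
x_sparse = np.where(np.abs(x) > tau, x, 0.0)           # zero the small entries

# entries below tau contribute at most tau * l1 = 0.01 * sq2 in total
print(((x - x_sparse)**2).sum() / sq2)    # <= 0.01: little l2 mass lost
print((x_sparse != 0).sum(), l1 / tau)    # support size vs. the l1/tau bound
```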