So today's talk is by Sai Sandeep, on conditional hardness of coloring 3-colorable graphs with a constant number of colors. Hi, this talk will be about d-to-1 hardness of coloring 3-colorable graphs, and this is joint work with Venkat Guruswami from CMU. The talk is going to be relatively short, so feel free to interrupt with any questions. We are interested in the graph 3-coloring problem: given a graph, you want to know whether it is 3-colorable or not. It is one of the classical NP-hard problems, and we are interested in approximation algorithms for it. That is, given a graph that is promised to be 3-colorable, can we color it with four colors, or with a hundred colors, in polynomial time? There has been some really nice algorithmic work on this problem, starting with Wigderson in 1983, who showed that you can color a graph that is promised to be 3-colorable with O(sqrt(n)) colors. There have been improvements since then, primarily using SDP-rounding-based techniques, and by now we can get it down to slightly fewer than n^0.2 colors. But this talk is going to be about hardness. So what do we know on the hardness side? For hardness, it is easiest to think in terms of the decision version: given a graph, you want to distinguish between two cases. In the yes case, it can be colored with three colors; in the no case, it cannot be colored even with c colors, for some c greater than three. For c equal to three, this is just the NP-hardness of graph 3-coloring. Unfortunately, even though there has been significant progress on the algorithmic side, the hardness side hasn't seen nearly as much progress.
It is only very recently, in a breakthrough, that even the small case c equal to five was resolved, and that is still the best known. So we haven't yet ruled out coloring a graph that is promised to be 3-colorable with six colors. To get around this barrier, people have tried to look at conditional results: you assume some hardness conjecture, and then you prove that graph coloring is hard as well. Many of these conditional results are based on the Unique Games Conjecture. However, for our problem the Unique Games Conjecture doesn't work, because we need perfect completeness: in the yes case, we need the coloring to be fully valid, not a coloring that violates some small fraction of the edges. But similar to the Unique Games Conjecture, there are other conjectures that do have perfect completeness, and we will be focusing on those. One of them is the d-to-1 conjecture, where d could be any constant. I'll explain shortly what all these conjectures mean, but assuming the 2-to-1 conjecture, it was already proved in a nice paper of Dinur, Mossel, and Regev that 4 versus C coloring is NP-hard for every C, and even 3 versus C was proved hard assuming a "fish-shaped" variant of the Unique Games Conjecture. Before going forward, let me explain what all these conjectures mean. They are all variants of the label cover problem. In the label cover problem, you have a bipartite graph G with vertex sets L and R and an edge set, and there are constraints on the edges. You could think of it as a binary CSP, but with a special form of constraints: each constraint is a projection constraint.
That is, once you fix the value of a vertex on the left, the value of an adjacent vertex on the right is uniquely determined if you want to satisfy that edge. This has been a classical problem in the field of hardness of approximation, and many hardness results follow from it. We know very good hardness for it: by the PCP theorem together with parallel repetition, it is NP-hard to distinguish between the case where all the constraints can be satisfied and the case where at most an epsilon fraction of the constraints can be satisfied, for any constant epsilon. And note that we do get perfect completeness here: by perfect, I mean that in the completeness case all the constraints are satisfied, as opposed to saying that at most some delta fraction of constraints are violated. All the conjectures I mentioned previously are variants of this label cover problem: they all say that label cover with a certain structured form of constraints is also hard. One such variant is the so-called d-to-1 conjecture, introduced by Khot in the same paper that introduced the Unique Games Conjecture. In the d-to-1 conjecture, the label set on the left side has size d times that of the right side. As in label cover, for every label on the left there is a unique label on the right that satisfies the constraint; but now the reverse direction is also controlled: for every label on the right, there are exactly d labels on the left that satisfy the constraint. If you take d equal to one, that is just Unique Games; the advantage of larger d is that you can also have perfect completeness. So with increasing d you get different forms of the conjecture, and the strongest member of the family is the 2-to-1 conjecture.
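To make the d-to-1 structure concrete, here is a small illustrative sketch (my own, not from the talk): the canonical d-to-1 projection on left label set {0, ..., dL-1} and right label set {0, ..., L-1}, where every right label has exactly d preimages.

```python
# Illustrative sketch (my own, not from the talk): the canonical d-to-1
# projection pi(a) = a // d, which sends exactly d left labels to each
# right label.
from collections import Counter

def make_d_to_1(d, L):
    """The canonical d-to-1 projection pi: [d*L] -> [L]."""
    return {a: a // d for a in range(d * L)}

def is_d_to_1(pi, d, L):
    """Check that every right label has exactly d preimages under pi."""
    counts = Counter(pi.values())
    return all(counts.get(b, 0) == d for b in range(L))

def satisfies(pi, left_label, right_label):
    """An edge's constraint: fixing the left label determines the right."""
    return pi[left_label] == right_label

pi = make_d_to_1(3, 4)   # a 3-to-1 constraint with L = 4
assert is_d_to_1(pi, 3, 4)
```

Setting d = 1 recovers a bijection, i.e. a unique-games constraint, matching the remark above that d equal to one is just Unique Games.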
We prove that the results previously known under the 2-to-1 conjecture, namely the hardness of 4 versus C coloring, and even 3 versus C coloring, can be obtained assuming the d-to-1 conjecture for any constant d, not just d equal to two. The hope is that it is perhaps easier to prove the d-to-1 conjecture for some large constant d, and then you would still get 3 versus C coloring hardness for this long-standing problem. As I mentioned earlier, the d equal to two case was proved by Dinur, Mossel, and Regev, but only for 4 versus C; we prove it for every d. At this point I'll also remark that there has been some very nice recent work that proved the 2-to-1 conjecture, but with imperfect completeness: in a long line of work, Khot, Minzer, Safra, and coauthors showed 2-to-1 hardness with imperfect completeness. But they do not get perfect completeness, and perfect completeness is crucial for us. Their techniques seem to be inherently based on linear, Grassmann-style long codes, so they don't really seem to extend to the perfect completeness case. Okay, so these are our main results, and now we'll discuss how we prove them. We do it in two steps. In the first step, we prove that for any constant d, the d-to-1 conjecture implies that 2d versus C coloring is NP-hard. That is, in the yes case the graph is guaranteed to be 2d-colorable rather than 3-colorable, and in the no case it cannot be colored with C colors, for any constant C.
We combine this with a very nice recent work of Krokhin, Opršal, Wrochna, and Živný, where they use properties of the chromatic number of the arc graph to show that it essentially suffices to prove hardness of any constant versus C: if you can show k versus C hardness for some fixed constant k and every constant C, then 3 versus C hardness for every C follows. So here we show that d-to-1 implies 2d versus C hardness, which then implies, by their result, that even 3 versus C coloring is hard as well. In the rest of the talk, I'll just focus on this step one. So, to put everything together: we have the d-to-1 conjecture, as explained, and we want to prove that it implies 3 versus C hardness. The updated goal is to show that the d-to-1 conjecture implies that 2d versus C coloring is NP-hard, for all constants d and C. If you just plug in d equal to two, this is the same statement that Dinur, Mossel, and Regev proved, and our approach is the same as theirs, which is to use a label cover-long code based reduction. The key difference from their work is that their reduction needs a symmetric Markov chain with certain properties; we generalize that object to any d, which then gives the same hardness for any d. Before going further, let us recall what a symmetric Markov chain is: you have a state space, and the probability of going from state i to state j is the same as the probability of going from j to i. So you can just think of it as an n-by-n symmetric matrix with non-negative entries in which every row sums to one; in other words, a symmetric doubly stochastic matrix. What properties do we need from it for the reduction to work? First, we need that the supports are disjoint.
That is, if there is a nonzero transition probability from (u_1, ..., u_d) to (v_1, ..., v_d), we need the sets {u_1, ..., u_d} and {v_1, ..., v_d} to be disjoint. The second property we need is that the spectral radius of this symmetric Markov chain is less than one; this means that, apart from the trivial one, there is no eigenvalue with absolute value equal to one. So how do we construct such a symmetric Markov chain? In the Dinur-Mossel-Regev work, where they studied the problem for d equal to two, they constructed this matrix manually: they just wrote down some states and transition probabilities, and it worked out. But we want to do it for general d, so how do we do it? The technique we use is matrix scaling. What is matrix scaling? You start with some arbitrary non-negative matrix, in our case the adjacency matrix of a certain graph, and in every step you scale either a row or a column of the matrix. The goal is to end up with a doubly stochastic matrix using only these row and column operations. This leads to the question: when does there exist such a sequence of operations that gives a doubly stochastic matrix? This has been a pretty well-studied topic, and we know conditions on the initial matrix under which the process is guaranteed to converge to a doubly stochastic matrix. To recall, we need certain properties of the Markov chain, and we are trying to construct it from the adjacency matrix of a graph. So under what conditions does the scaling terminate? One such condition is that the original matrix A has total support. If you translate that condition to adjacency matrices of graphs, it becomes the condition that every edge of the graph is part of a cycle cover.
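A minimal sketch of the scaling step (my own toy example, not from the paper): alternately normalize rows and columns. Sinkhorn's theorem says that if the non-negative starting matrix has total support, which for an adjacency matrix means every edge lies in a cycle cover, the iteration converges to a doubly stochastic matrix.

```python
# Alternating row/column normalization (Sinkhorn iteration).  Converges to
# a doubly stochastic matrix whenever the starting matrix has total support.

def sinkhorn(A, iters=2000):
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(iters):
        for i in range(n):                       # normalize each row
            s = sum(A[i])
            A[i] = [x / s for x in A[i]]
        for j in range(n):                       # normalize each column
            s = sum(A[i][j] for i in range(n))
            for i in range(n):
                A[i][j] /= s
    return A

# A weighted 4-cycle: its support is C4, where every edge lies in one of
# the two perfect matchings, hence on a positive diagonal (total support).
A = [[0, 2, 0, 1],
     [2, 0, 3, 0],
     [0, 3, 0, 1],
     [1, 0, 1, 0]]
B = sinkhorn(A)
```

In the reduction this would be run on the adjacency matrix of the graph on [2d]^d; the argument only needs that the doubly stochastic limit exists, not an explicit formula for it.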
A cycle cover of a graph is a union of vertex-disjoint cycles that together cover all the vertices of the graph. So now, instead of requiring properties of the symmetric Markov chain directly, we have reduced the problem to requiring some properties of a graph with vertex set [2d]^d. What are the properties we need from this graph? First, every edge must be part of a cycle cover; this ensures that when we apply matrix scaling, we end up with a doubly stochastic matrix. We also need the other properties of the symmetric Markov chain: to recall, one, the disjoint-support property, and two, spectral radius less than one, which translates to the graph being connected and not bipartite. To summarize, we need these three properties from a graph on the vertex set [q]^d, where q is 2d. Now we show how to construct such a graph. We start with the empty graph and add two types of edges. The first type: we add an edge between u = (u_1, ..., u_d) and v = (v_1, ..., v_d) if they have disjoint supports and the same order type, meaning that for every i and j, u_i = u_j if and only if v_i = v_j, u_i < u_j if and only if v_i < v_j, and similarly for the other comparisons. In some sense, we are partitioning the vertex set into equivalence classes and then adding edges between vertices in the same class that have disjoint supports. The second type of edges is the same as above, but for the case when u_1 = ... = u_d, that is, when all the coordinates are equal; these edges essentially ensure that the graph is connected and not bipartite. So we already have two of the properties, connectedness and non-bipartiteness, and since we only ever add edges between tuples with disjoint supports, we have the disjointness property as well.
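Here is a hedged reconstruction of this construction for d = 2 (so q = 4); the exact edge rules in the paper may differ in detail, but the code verifies the required properties of the graph it builds: disjoint supports on every edge, connectivity, and non-bipartiteness.

```python
# My reading of the construction for d = 2, q = 2d = 4: type-1 edges join
# tuples with the same order type and disjoint supports; type-2 edges
# involve the constant tuples.  This is a sketch, not the paper's exact graph.
from itertools import product, combinations

d, q = 2, 4
V = list(product(range(q), repeat=d))

def order_type(u):
    """Pattern of equalities and comparisons among the coordinates."""
    return tuple((u[i] < u[j], u[i] == u[j])
                 for i in range(d) for j in range(d))

def disjoint(u, v):
    return not set(u) & set(v)

E = set()
for u, v in combinations(V, 2):
    has_constant = len(set(u)) == 1 or len(set(v)) == 1
    if disjoint(u, v) and (order_type(u) == order_type(v) or has_constant):
        E.add((u, v))

adj = {u: set() for u in V}
for u, v in E:
    adj[u].add(v)
    adj[v].add(u)

# Attempt a 2-coloring by traversal: it reaches every vertex iff the graph
# is connected, and the coloring is proper iff the graph is bipartite.
color, stack = {V[0]: 0}, [V[0]]
while stack:
    u = stack.pop()
    for w in adj[u]:
        if w not in color:
            color[w] = 1 - color[u]
            stack.append(w)

connected = len(color) == len(V)
bipartite = connected and all(color[u] != color[v] for u, v in E)
```

The constant tuples (0,0), (1,1), (2,2), (3,3) are pairwise adjacent, giving odd cycles, and every other tuple is adjacent to some constant tuple, giving connectivity, exactly the two properties the type-2 edges are there for.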
So the only property left to verify is that every edge of this graph is part of a cycle cover. That is not difficult to see, essentially because the type-1 edges form regular graphs: once you fix an order-type class, the edges within that class form a regular, Kneser-like graph, since we add edges between all pairs of vertices in the class with disjoint supports. And every edge in a regular graph is part of a cycle cover: you can take two copies of the graph and form the bipartite double cover, which is biregular, so it has a perfect matching; even more, every edge is part of some perfect matching, and a perfect matching of the double cover corresponds to a cycle cover of the original graph. So we have shown the existence of this graph with all these properties, which then implies the existence of the symmetric Markov chain as well. Finally, we describe the reduction: how do we use this symmetric Markov chain in the reduction from the d-to-1 conjecture to graph coloring? First, we actually reduce the d-to-1 conjecture to another variant called the d-to-d conjecture; it can be proved that the d-to-1 conjecture implies the d-to-d conjecture. In the d-to-d conjecture, the constraints are no longer projections, but they have a very special structure. You could think of the same alphabet on both sides, and for every edge there are two permutations, pi_1 for the vertex on the left and pi_2 for the vertex on the right, such that a pair of labels (x, y) satisfies the edge if and only if (pi_1^{-1}(x), pi_2^{-1}(y)) belongs to a fixed canonical d-to-d relation.
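Stepping back, the regularity argument just sketched can be checked computationally (my own encoding, not the paper's): a cycle cover containing a given edge corresponds to a permutation s with u adjacent to s(u) for every u and with the forced edge included, which is exactly a perfect matching of the bipartite double cover extending a forced pair; we complete the forced pair with Kuhn's augmenting-path algorithm.

```python
# For each edge (u0, v0) of a small regular graph, force the pair u0 -> v0
# and try to complete it to a perfect matching of the bipartite double
# cover, i.e. a cycle cover through that edge.

def try_matching(adj, n, forced):
    """Perfect matching of the double cover extending the forced pairs."""
    match = {v: u for u, v in forced.items()}   # right vertex -> left vertex
    def augment(u, seen):
        for v in adj[u]:
            if v in seen or v in forced.values():
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False
    for u in range(n):
        if u in forced:
            continue
        if not augment(u, set()):
            return None
    return match

# K4 is 3-regular; every one of its directed edges should extend.
K4 = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
every_edge_covered = all(try_matching(K4, 4, {u: v}) is not None
                         for u in K4 for v in K4[u])
```

On a path, which has no cycle cover at all, the matching fails, consistent with the total-support discussion above.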
So essentially, you can think of it as: there is this fixed d-to-d structure, and then you can permute the labels on the left and on the right. This is the d-to-d conjecture; it is simply easier to work with, and it is not hard to derive the d-to-d conjecture from the d-to-1 conjecture. Okay, so let us also recall the traditional label cover-long code setup. You start with a label cover instance: a graph with a constraint for every edge. You want to output a new graph G' based on this instance such that, if the label cover instance is satisfiable, G' can be colored with 2d colors; and if no assignment satisfies even an epsilon fraction of the constraints, then G' cannot be colored with C colors. In fact, we prove the latter from the stronger statement that G' has no independent set of fractional size epsilon'. How do we go about this? We replace each vertex of the label cover with a cloud F_v of (2d)^{dL} nodes; this is the long code part. What edges do we add? For every edge (u, v) of the label cover, with projection constraint pi represented by the permutations pi_1 and pi_2, and using the symmetric Markov chain M that we have constructed, we add edges between tuples x = (x_1, ..., x_{dL}) and y = (y_1, ..., y_{dL}). The constraint may look weird at first sight, but roughly: you look at every pair matched by the d-to-d relation, and for that pair, the corresponding coordinates of x and y should lie in the support of the Markov chain M.
The idea is that, in the completeness case, when you decode every cloud to a dictator function, this should give a valid 2d-coloring, which works out because the Markov chain has the property that there are transitions between two tuples only if their supports are disjoint. We also have the additional properties of M, which are useful in this context. So how does the analysis go? As I mentioned earlier, completeness is fairly straightforward: suppose there is a labeling of the vertices of the label cover that satisfies all the constraints, and decode each cloud with the dictator function corresponding to that labeling. This is a valid coloring precisely because of the constraints we added: if you decode to dictator functions and two adjacent nodes were assigned the same color, the corresponding supports would intersect, but edges only go between tuples in the support of M, whose supports are disjoint. So in the graph G' there are no edges between vertices assigned the same color, and G' is 2d-colorable. The soundness analysis is, as usual, the harder case. How does it work? You know that there is a large independent set in G', and using that you want to show that there is an assignment to the label cover that satisfies a large fraction of the constraints. As is common, we use an invariance-principle-based argument to decode each cloud to a small set of labels that together satisfy the constraints. Before that, let me state the invariance principle we need. Let M be a symmetric Markov chain with spectral radius less than one, and let f and g be two functions, each with reasonably high expected value, such that the inner product of f and Mg vanishes, as happens for the two sides of an independent set. Then there is a coordinate where both functions have non-negligible low-degree influence.
Low-degree influence is a variant of the usual influence where we only look at Fourier coefficients of degree at most some fixed bound. This is very convenient for us, because we can decode to these low-degree coordinates, and the invariance principle directly gives the consistency of the decoding. Okay, to explain this decoding mechanism for anyone who has not seen this type of reduction: the way things work is that in the soundness case, you assume the graph coloring instance has some structure, here a large independent set. Using that, we identify a small set of important coordinates, candidate labels, for each label cover vertex, and then sample a label from this set for each vertex. We then show that this randomly sampled labeling satisfies a constant fraction of the edges in expectation, which proves that there is a labeling of the label cover satisfying a constant fraction of the constraints; that contradicts the soundness of the label cover instance, which is exactly what we want. Here, the functions f and g are indicators of the independent set restricted to clouds. They have high expected value because the independent set overall has fractional size epsilon, so at least an epsilon/2 fraction of the clouds carry at least an epsilon/2 fraction of independent-set mass, and we can restrict attention to those. The decoded set is the set of coordinates that have high low-degree influence, where the degree bound k comes from the invariance principle itself. The consistency of the decoding follows exactly from that result: for every edge, the decoded label sets intersect in a way consistent with the constraint. The final part we need to prove is that these decoded sets are not too large.
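The low-degree influence just described can be computed directly from the Fourier expansion. Here is an illustrative Boolean-cube sketch (the functions in the talk live over a larger alphabet, but the formula Inf_i^{<=k}(f) = sum of fhat(S)^2 over sets S with i in S and |S| <= k is the same idea).

```python
# Low-degree influence via brute-force Fourier expansion over {-1,1}^n.
import math
from itertools import product, combinations

def fourier(f, n):
    """All Fourier coefficients of f: {-1,1}^n -> R, indexed by frozenset."""
    points = list(product((-1, 1), repeat=n))
    return {
        frozenset(S): sum(f(x) * math.prod(x[i] for i in S)
                          for x in points) / len(points)
        for size in range(n + 1)
        for S in combinations(range(n), size)
    }

def low_degree_influence(f, n, i, k):
    """Fourier weight of f on sets that contain i and have size at most k."""
    return sum(v * v for S, v in fourier(f, n).items()
               if i in S and len(S) <= k)

dictator = lambda x: x[0]    # the first-coordinate dictator function
```

A dictator has degree-1 influence 1 on its special coordinate and 0 elsewhere, which is exactly why decoding to coordinates of high low-degree influence recovers dictator-like behavior.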
That is actually fairly straightforward: the sum of the degree-at-most-k influences is at most k, and k is itself a function of epsilon, so the number of decoded coordinates is just a function of epsilon. And that concludes the proof: you have a consistent decoding to some f(epsilon) coordinates, so you get a labeling that satisfies some f(epsilon) fraction of the constraints, and that is pretty much the whole result. To conclude, we showed that the d-to-1 conjecture implies 3 versus C hardness for every d. There are several interesting open questions. The most important, I think, is what function of d the soundness epsilon needs to be: can you characterize how small epsilon has to be, as a function of d, for this result to work? That would be very interesting. The other question: recently there have been some interesting "baby PCP"-type results, and whether such a baby d-to-1 statement implies 3 versus C hardness is also very interesting. In our proof of step one, we only use a particular constant fraction of the vertices of the label cover, and that is the step that breaks down if you only have baby d-to-1. Whether there is some other way to show that baby d-to-1 implies 3 versus C coloring hardness is also very interesting. For people who haven't seen this baby d-to-1 notion, you can ignore this remark. And that's it, thanks. Yeah, I was quicker than I expected. [Host] You were quite quick, so we have a lot of time for questions. [Question] I wanted to ask exactly about the thing you discussed last, the baby version. I know that you use these fractions, that they are important. How plausible is it that this step can be avoided in your proof? It looks necessary to me, but does it seem possible at all?
[Sandeep] Yeah, even for us it seems somehow inherent that we need this. But I don't know how you would prove that you need it; it just seems that way. [Question] Sure, and I'm not just asking about your feeling. [Sandeep] Yeah, we did spend some time on it, and it seems like you do need the full power of it. [Question] Okay, one more question. Can you please go back to the reduction from label cover to the coloring problem? When I read the paper this wasn't clear to me: the constraint graph actually seemed bipartite. You start with a bipartite graph; isn't this graph bipartite? [Sandeep] Oh, since we are only dealing with d-to-d, the original constraint graph need not be bipartite. [Question] Right, so you essentially need non-bipartite inputs. That's a good answer; it just confused me a bit. [Sandeep] Yeah, that's a very good point. I think it actually explains why we have to go from d-to-1 to d-to-d: in d-to-1 you are forced to be bipartite, because there are two types of labels, while in d-to-d there is no such requirement. [Question] Okay, but if you restrict d-to-d to bipartite instances, are you still getting hardness? [Sandeep] I think the reduction shouldn't work in that case. [Question] Now I'm confused again. Isn't it that d-to-d, if you restrict it to bipartite inputs, will just be easy for small enough epsilon? The Unique Games Conjecture does not make sense if you require the constraint graph to be bipartite, right? Or am I wrong? [Venkat, remotely, initially muted] I was saying, you could also put these constraints just inside a long code. [Host] We did not hear you, Venkat; you might be muted. Can you repeat what you said? [Venkat] Hello, can you hear me now? Yeah.
[Venkat] I was saying you could also put these disjointness constraints just inside the long code table. [Sandeep] But are we using that somewhere? [Venkat] You must be, in some sense. In the usual way you convert Unique Games to d-to-d, you sample two neighbors, with replacement, so there is a chance you will sample the same long code twice, and you will have d-to-d constraints between the two samples. So that is likely what is happening under the hood. [Question] So this is not what the reduction actually does, right? [Sandeep] No, what we use is that the d-to-d starting point is non-bipartite; that is the difference. But the question of whether d-to-d on bipartite instances has an algorithm is bothering me. [Venkat] It probably doesn't. But I think there is some implicit structure being used, because you could also have a d-to-d instance, forget bipartiteness, with an independent set of an epsilon fraction of the vertices, and that by itself would be a problem, right? So somehow you need a density-type fact: any epsilon fraction of the vertices must span some epsilon-squared fraction of the constraints. And you get that by sampling: you take a Unique Games, or d-to-1, instance and you sample two neighbors, and then you don't have any of these issues. Because somewhere in the soundness you will have to say that on the epsilon fraction of clouds where you have good mass, there are enough edges, so that you actually get a good assignment for the d-to-1 instance. And I think that step will break down if the instance is bipartite, right? So the bipartite version will not work; in fact, you will need something even stronger, a property of the constraint graph:
it should be dense, in the sense that if you take an epsilon fraction of the variables, there should be some epsilon-cubed fraction of constraints within them, which in particular rules out bipartite graphs, or any graph with a large independent set. And I think that is tacitly used in the soundness. [Question] Right. So d-to-d on bipartite instances is actually easy, is that what you're saying? [Venkat] I'm not sure I'm saying that, but the reduction will not work on them. [Question] I understand what you said, but then either d-to-d on bipartite instances is easy, or this reduction on bipartite instances won't work, because even d-to-1 on bipartite instances is presumably not easy, right? [Sandeep] I think in this reduction we are implicitly assuming that every epsilon fraction of the vertices has enough edges inside it. You need some assumptions on the d-to-d instance, and the reduction from d-to-1 to d-to-d actually produces instances satisfying those assumptions, so we don't need to worry about it. And that is actually also the place where the baby version breaks down, because there you are missing some of the edges. [Host] Maybe we can move to another question. [Question] A question about those conjectures: is there any conjecture that is weaker than all of these d-to-1 conjectures, or is this as weak as you can go? [Sandeep] The baby d-to-1 is one such conjecture, but among the popular ones, I don't think there is anything else that is weaker. [Host] Okay, more questions? There are no more questions, so let's thank the speaker again.