Thank you for the introduction. This talk is on the power of the combined basic LP and affine relaxation for promise CSPs, and it is joint work with Venkatesan Guruswami at CMU, Marcin Wrochna at Oxford, and Standa Živný, also at Oxford. Just to give a high-level picture of what's going on: constraint satisfaction problems are a very vast field, and there have been a number of variants of CSPs that have been of interest for the last couple of decades. These include valued constraint satisfaction problems, counting constraint satisfaction problems, quantified constraint satisfaction problems, or the topic of our talk today, promise constraint satisfaction problems. Nowadays people even combine a couple of these, like promise valued constraint satisfaction problems, and so on. So there are all these different rich types, and I'm also omitting quite a few, such as random constraint satisfaction problems and others. But I'd really like to emphasize promise constraint satisfaction problems today because, as you'll see, they have a rather fundamental importance to this field. The overall agenda of today's talk is to understand a simple, principled algorithm for a broad class of promise CSPs. In particular, we'll first have an overview of what promise CSPs are, as well as the most important tool for understanding promise CSPs, which is polymorphisms. Then we're going to jump into a couple of algorithms for solving promise CSPs: the basic linear programming relaxation, as well as the affine relaxation. We'll give an analysis for a particular class of promise CSPs, we'll characterize the power of the combined algorithm, and then I'll have some concluding remarks, which should help connect the results of this talk back to the broader theory of CSPs. Also, feel free to jump in at any time if you have a question or want me to clarify anything.
All right, so let's get started on the first part, explaining what promise CSPs are. Let's just start with a quick refresher. When we talk about a CSP, we have a set of variables, which we'll call X, typically numbered x1 to xm. We also have a finite domain A to which we'll assign the variables, which is often {0, 1}, although this talk applies to any finite domain. And we have some relations, which are subsets of A^k for suitable arities k. A simple example of a relation would be the 1-in-3-SAT relation, where you have a 3-tuple and you allow exactly one of the three variables to be 1. This collection is known as a relational structure, and we'll use bold A to denote it. From this, there are two major styles of notation that people use to discuss CSPs; I'll give both for completeness, using the simple example of 1-in-3-SAT. One way is to write down your CSP as a formula. So for instance, we have R(x1, x2, x3), which just means that the assignment to (x1, x2, x3) has to be an element of the relation R, and so forth; the CSP is a conjunction of all of these conditions. The other major way is homomorphism style, where you instead create another relational structure, bold X, whose relations consist of the tuples of variables appearing in constraints, and you ask if there's a homomorphism from this X to your original structure A. These two are completely equivalent; it's a matter of the situation which one's more convenient. So that's all for ordinary CSPs. What's a promise CSP? A promise CSP consists of two structures, A and B, which give basically two versions of each relation, a strong relation and a weak relation. And the important property is that there's a homomorphism from A to B.
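The homomorphism view described above is easy to make concrete. Here is a minimal sketch (the function and variable names are my own, not from the talk): a template relation is a set of allowed tuples, an instance is a list of variable scopes, and a homomorphism is an assignment sending every scope into the relation.

```python
from itertools import product

# The 1-in-3 relation: exactly one of the three coordinates is 1.
ONE_IN_THREE = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}

def has_homomorphism(variables, constraints, domain, relation):
    """Brute-force search for a homomorphism h: X -> A."""
    for values in product(domain, repeat=len(variables)):
        h = dict(zip(variables, values))
        if all(tuple(h[v] for v in scope) in relation for scope in constraints):
            return True
    return False

# Two 1-in-3 constraints: R(x1, x2, x3) and R(x1, x3, x4).
print(has_homomorphism(
    ["x1", "x2", "x3", "x4"],
    [("x1", "x2", "x3"), ("x1", "x3", "x4")],
    (0, 1), ONE_IN_THREE))  # -> True (e.g. x2 = x4 = 1, rest 0)
```

This brute force is of course exponential; the point of the talk is precisely which promise templates admit polynomial-time algorithms instead.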
If you think of the domains of A and B as the same, a simple case would be that this homomorphism is just the inclusion of R^A into R^B. But there are a variety of other homomorphisms, which I'll get to in some examples. One way of looking at this is as a decision problem: given the two structures A and B as your template, and your formula, or rather your variable structure X, you want to know if there's a homomorphism from X to A, or if there's no homomorphism from X to B. Note that because we said there's a homomorphism from A to B, if there's a homomorphism from X to A, then by composition there must also be a homomorphism from X to B. Thus this is a proper promise problem: there's some middle ground which we're not accounting for. Another commonly studied variant of promise CSPs is called search promise CSPs, where I promise you that there's a homomorphism from X to A, and I want you to find an explicit example of a homomorphism from X to B, which is guaranteed to exist. Certainly if you solve the search PCSP problem, you'll also have an algorithm for the decision PCSP problem, but in general, solving the decision PCSP problem doesn't solve the search PCSP problem. For this talk, we're going to focus on the decision PCSP problem. So in some sense this won't be the final word on some of these algorithms, and extending them to the search setting is an interesting open question. I should also make an important remark that ordinary CSPs are captured by setting A equal to B; in this case the homomorphism is just the identity map, and then distinguishing whether there's a homomorphism from X to A or no homomorphism from X to B is literally just asking whether there exists a solution to a given CSP formula. Any questions so far about the definitions of promise CSPs? I'll also go through a number of examples now in case something's not clear.
Let me start with the following example, which is the classic approximate graph coloring problem. For instance: assume there exists a 3-coloring of a graph X; can we find a 5-coloring in polynomial time? In this case, the way you would set it up is that the domain for 3-coloring is {1, 2, 3}, the domain for 5-coloring is {1, ..., 5}, the relations on each domain are the pairs (i, j) with i not equal to j, and our homomorphism sigma is just the identity map, because A is a substructure of B; in particular, any 3-coloring is also a 5-coloring. Although this problem is very simple to state, it is very nontrivial to analyze, and it's actually only because of the development of promise CSPs that we now know, as of last year, that it is NP-hard. Another example, which is actually the one that kickstarted the field of promise CSPs, is (A, B)-SAT, also called (2 + epsilon)-SAT, due to Austrin, Guruswami, and Håstad. Here you have some B-SAT formula, a SAT formula in which every clause has B literals, and I promise there's an assignment in which at least A of the literals in every clause are satisfied. So you have extra structure over and above a satisfying assignment to each clause, and the goal is just to find an ordinary satisfying assignment. The way you write this one down: obviously the domain is Boolean, and the primary relation pair is the set of B-tuples of literals which sum to at least A versus the set of B-tuples which sum to at least one. In particular, we also have a second relation, which says that one variable is the negation of the other.
This is more of a technicality so that you can allow for negations of variables; it lets me say that a is not equal to b, and notice this relation doesn't change under the relaxation, so even in the relaxed version, variables which are specified to be negations are still negations. You could also omit that relation entirely and instead have a separate version of the first relation R1 for every possible combination of negations you could have in a clause. And again, the homomorphism sigma is just the identity. This problem has a polynomial-time algorithm if and only if the ratio A/B is at least one half. This was in some sense the first result in the theory of promise CSPs, modulo the previous work that had come before on approximate graph coloring. Okay. Another example, which is sort of a hypergraph generalization of approximate graph coloring, is called 1-in-3-SAT versus Not-All-Equal-3-SAT. In this case, your strong relation is 1-in-3-SAT, so each clause has to have exactly one variable set to true, but in the relaxed version each clause can have either one or two variables set to true. What's interesting is that both 1-in-3-SAT and NAE-3-SAT are NP-hard by themselves, but when you allow for this promise structure, there are actually many polynomial-time algorithms that solve the problem. And in some sense, one goal of this talk is to give another algorithm for this problem. So those are a few examples of promise CSPs. In order to analyze the structure of promise CSPs, a key object to analyze is something called a polymorphism. A polymorphism is essentially a way of combining solutions to get more solutions. More formally, imagine you have some function which takes an L-tuple of elements of the domain of A and outputs an element of the domain of B. And the key property is that if I have, say, some relation R — oh, sorry, that's a typo on the slide; that should be R^B.
If you have some satisfying assignments to R^A, then when you combine them columnwise, you get a satisfying assignment to R^B. So in particular, it gives you a high-dimensional symmetry of your problem, or it tells you a way to combine solutions. Let me just mention a trivial way you can do this: you just output one of the original rows. If I take a single one of the rows and output it, then to make it satisfy R^B, you apply the homomorphism sigma. But since this essentially only looks at one of the rows, it is known as a projection, or a dictator polymorphism. These are sometimes called, in some sense, the trivial polymorphisms of the problem. We'll give some examples soon of nontrivial polymorphisms. Another way to think about this is that you can take a suitable power of your structure A, and then you're looking at homomorphisms from A^L to B. The set of all these polymorphisms together is called Pol(A, B), and it has the structure of what's known as a minion, which we'll discuss in more detail later in the talk. Are there any questions about the definition of a polymorphism? All right, so let me give a couple of examples. In some sense, one of my favorite examples is 2-SAT. Imagine you have the following 2-SAT formula, and let's say I have three solutions, as follows. If you take the majority vote of the solutions coordinatewise, you'll actually get another solution to your problem. And you don't just have to take the majority vote of three solutions: you can take the majority vote of any odd number of solutions, and you'll get another solution. The contrasting example to 2-SAT is 3-SAT, where the only polymorphisms are essentially unary operators, of the form f(x) = x_i. These are exactly the projections, or dictators, I mentioned.
And in some sense, the reason why 3-SAT is NP-hard whereas 2-SAT is easy is that 3-SAT only has these essentially unary operators, whereas 2-SAT has these rich polymorphisms. Just to give a couple more examples: let's say you had some linear equations, say Ax = b. If you have three solutions to Ax = b, then x1 - x2 + x3 is also a solution to your linear system. In particular, the function which takes three inputs a, b, c and outputs a - b + c is a polymorphism of this problem. And in fact, you could take any odd number of solutions, x1 - x2 + x3 - x4 + x5 and so on, and these would also be polymorphisms. For the other promise problem I alluded to earlier, 1-in-3-SAT versus NAE-3-SAT, there's a very special polymorphism known as alternating threshold, which is sort of the Boolean variant of these linear-equation operators: you take the alternating sum x1 - x2 + x3 - ... and then threshold on whether you're at least one or at most zero. This is actually a polymorphism of this problem. It's also an example of what's known as a block-symmetric polymorphism: if I permute the inputs at the odd indices, the output doesn't change, and if I permute the inputs at the even indices, the output also doesn't change. So we can partition the input coordinates into two blocks such that permuting within each block doesn't change the output. This is what's known as a block-symmetric polymorphism, and these will be important later on. The big picture of polymorphisms is what's known as a Galois connection.
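The alternating-threshold polymorphism above is easy to check exhaustively for a fixed arity. Here is a small sketch (my own code, not from the talk) verifying that for arity L = 5, combining any five satisfying 1-in-3 assignments columnwise lands in the Not-All-Equal relation:

```python
from itertools import product

ONE_IN_THREE = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
# NAE: every 3-tuple except all-zeros and all-ones.
NAE = set(product((0, 1), repeat=3)) - {(0, 0, 0), (1, 1, 1)}

def alt_threshold(xs):
    # AT(x1, ..., xL) = 1 iff x1 - x2 + x3 - ... >= 1  (L odd)
    s = sum(x if i % 2 == 0 else -x for i, x in enumerate(xs))
    return 1 if s >= 1 else 0

# Polymorphism check for L = 5: take any 5 rows satisfying 1-in-3,
# apply AT to each of the 3 columns, and verify the result is in NAE.
L = 5
for rows in product(ONE_IN_THREE, repeat=L):
    combined = tuple(alt_threshold(col) for col in zip(*rows))
    assert combined in NAE
print("AT_5 is a polymorphism of 1-in-3-SAT vs NAE-3-SAT")
```

The conceptual reason it works: the signed column sums add up to the alternating sum of the row sums, which is exactly 1 for odd L, so the three outputs can never be all ones (that would need signed sum at least 3) nor all zeros (that would need signed sum at most 0).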
There are stronger versions of this statement, but all we need for this talk is that if Pol(A, B) is a subset of Pol(C, D), then in some sense the promise CSP over (C, D) is easier than the promise CSP over (A, B); in particular, there's a relevant log-space reduction. There are more general things you can say, for instance when there's what's known as a minion homomorphism, or other more complicated structures, but even just the subset relation gives you the idea that having a richer family of polymorphisms makes you much more likely to have a polynomial-time algorithm. The high-level picture I want to give is this: if your polymorphisms are very symmetric, like majority or alternating threshold, where there are many symmetries and no coordinate determines the outcome, then your problem is tractable. And if they're more skewed, in that the output only depends on one coordinate or a small number of coordinates, or there's some way of looking at the polymorphisms so that they only depend on a small number of coordinates, then it's NP-hard. This is definitely oversimplifying the picture, but it's a good rule of thumb for deciding whether you should have a polynomial-time algorithm or be NP-hard. If you were at Libor Barto's talk last month on promise CSPs, he gave a rich criterion for deciding whether your promise CSP is NP-hard. In some sense, we're going to do the reverse in this talk: we're going to give a criterion such that if your promise CSP satisfies it, then you have a polynomial-time algorithm. Before I jump into the main results, let me just mention some prior work. I already mentioned the (2 + epsilon)-SAT work of Austrin, Guruswami, and Håstad.
There's also some work on getting a dichotomy for symmetric Boolean relations. What that means is that you have a Boolean domain, and every relation has the property that whether a tuple is in the relation does not change when you permute its coordinates. There, a dichotomy was obtained under the assumption that you allow for negations, that is, you have the disequality relation in your family. And then just last year, that was extended to a full dichotomy. There are all sorts of related results: for instance, if you have polymorphisms which are known as threshold-periodic, which captures things like majority, then you actually have a polynomial-time algorithm. And there's also some recent work which generalizes (2 + epsilon)-SAT to larger domains. So there are quite a few algorithmic works in this space. A number of these give both algorithms and hardness, and what I want to say is that of these papers, we actually supersede the algorithmic results of essentially all of them. So again, at a high level, we're going to try to give a principled algorithm covering a broad class of promise CSPs. I've just given a refresher on what promise CSPs and their polymorphisms are, and now we're going to jump into some actual algorithms. Are there any questions at this point about promise CSPs in general? OK. Just to recap, a promise CSP has two structures, A and B, with strong relations and weak relations, and we want to distinguish being able to satisfy a formula using the strong structure versus having no solution even for the weak structure. So think 3-colorable versus not even 5-colorable. Going forward, I'm not going to write out these relaxations in full generality; instead, I'll just write them out for 1-in-3-SAT versus NAE-3-SAT.
So for this problem you have the 1-in-3 relation and the Not-All-Equal relation on three variables. OK, let's get into our first relaxation, which is the basic linear programming (BLP) relaxation. The high-level idea is that we're going to assign a probability distribution to each variable and to each clause. In particular, since we're assuming a Boolean domain, we give each variable a probability distribution on {0, 1}: basically, we'll have two LP variables, p_i(0) and p_i(1), nonnegative and satisfying p_i(0) + p_i(1) = 1. And for each clause, we'll have a probability distribution on its satisfying assignments: nonnegative weights on (1, 0, 0), (0, 1, 0), and (0, 0, 1), summing to one. Notice we're only going to encode the clauses of the strong structure A; B doesn't show up in the relaxation at all. We're just going to prove that if we can solve the relaxation, then we've actually solved the problem with respect to B. And then between the clause distributions and the variable distributions, we have marginal consistency checks. So for instance, the probability that x1 equals 0 is equal to the probability that the clause is assigned (0, 1, 0) or (0, 0, 1), and the probability that x1 equals 1 is the probability the clause is assigned (1, 0, 0). We'll just have a list of these checks; obviously, if you have a complicated formula, you'll have a lot more. I'm omitting quite a few clauses, but every equation of this type that you can write will be in your relaxation. This is what's known as the basic linear programming relaxation, and by using a suitable algorithm, such as the ellipsoid algorithm, you can find a solution over the rationals in polynomial time. So now let me jump to affine equations. I'm writing this up in a very specific way: we're basically going to do the exact same thing.
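To make the BLP constraints concrete, here is a small sketch (my own code and names, not from the talk) that checks feasibility of a candidate BLP solution for a single 1-in-3 clause: variable distributions, a clause distribution, and the marginal consistency equations.

```python
from fractions import Fraction as F

ONE_IN_THREE = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

def blp_feasible(p, q, scope):
    """p: variable -> distribution on {0, 1}; q: distribution on the
    satisfying assignments of one 1-in-3 clause over `scope`."""
    # each p[v] and q must be a genuine probability distribution
    if any(sum(p[v].values()) != 1 or min(p[v].values()) < 0 for v in scope):
        return False
    if sum(q.values()) != 1 or min(q.values()) < 0:
        return False
    # marginal consistency: the clause's marginal at position j must
    # equal the variable's probability of being 1
    return all(
        sum(q[t] for t in ONE_IN_THREE if t[j] == 1) == p[v][1]
        for j, v in enumerate(scope))

# Uniform solution: each variable is 1 with probability 1/3, and the
# clause distribution is uniform over the three satisfying assignments.
p = {v: {0: F(2, 3), 1: F(1, 3)} for v in ("x1", "x2", "x3")}
q = {t: F(1, 3) for t in ONE_IN_THREE}
print(blp_feasible(p, q, ("x1", "x2", "x3")))  # -> True
```

In the actual algorithm these constraints are fed to an LP solver (e.g. via the ellipsoid method) rather than checked against a guess; the sketch only shows what the feasible region looks like.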
Except that every time we said probability distribution, we're now going to assign integers. We no longer require the variables to be nonnegative, so these can definitely be negative, but otherwise the constraints are exactly the same. So we'll have some integer variables q_i for the variables, which sum to one — for instance, 3 and -2 or something like that — and for each clause, integer variables which sum to one. This represents your satisfying assignment as a sort of affine combination, and it captures things like linear equations, which the basic linear program by itself cannot capture. Again, we'll have a sort of affine "distribution" for each variable and an affine "distribution" for each clause, and then we'll have the same consistency checks, which tell you that the marginals you get for 0 and 1 agree. It looks a little funny for affine equations, but if you keep the intuition from probabilities and then say, oh, instead of probabilities I'm going to have integers, then it roughly makes sense. And by an algorithm due to Kannan and Bachem, you can actually find an integer solution in polynomial time. Okay. So the talk is about the combined BLP plus affine algorithm; all right, let's give it. We're going to run the basic LP algorithm: write down this linear program, check if there's a rational solution, and reject if there's no solution. Then we're going to run the affine algorithm: write down this affine relaxation, see if there's a solution over the integers, and reject if there's no solution. If we pass both of those checks, we accept. Notice we're not outputting a solution; in fact, we're not even touching the structure B at all, we're only working with the structure A.
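The affine relaxation has the same shape as the BLP sketch above, except the "distributions" are integer-valued and may go negative; only the normalization and the marginal consistency checks survive. A toy check, with my own example numbers:

```python
ONE_IN_THREE = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

def affine_feasible(q_var, q_clause, scope):
    """Integer analogue of BLP feasibility: weights sum to 1, and the
    clause's integer marginals match the variables' integer values."""
    if any(sum(q_var[v].values()) != 1 for v in scope):
        return False
    if sum(q_clause.values()) != 1:
        return False
    return all(
        sum(q_clause[t] for t in ONE_IN_THREE if t[j] == 1) == q_var[v][1]
        for j, v in enumerate(scope))

# Integer weights are allowed to be negative, e.g. 3 - 2 + 0 = 1.
q_clause = {(1, 0, 0): 3, (0, 1, 0): -2, (0, 0, 1): 0}
q_var = {"x1": {0: -2, 1: 3}, "x2": {0: 3, 1: -2}, "x3": {0: 1, 1: 0}}
print(affine_feasible(q_var, q_clause, ("x1", "x2", "x3")))  # -> True
```

Feasibility of such an integer linear system can be decided in polynomial time (this is where the Kannan–Bachem style machinery comes in); again the sketch only illustrates the feasible region.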
But in some sense that's fine: we're just solving the decision problem. I should say that what I've just described has a subtle error. This is not correct; this is not what we write in the paper. But I want to motivate the issue we had to get around, which in some sense is also one of the primary things that distinguishes the algorithm we're going to give from the previous literature, because both of these component algorithms already appear in a couple of prior papers. We'll motivate it by trying to prove that this version works and seeing that we fail. Okay, so those are the two algorithms, and now we're going to jump into the analysis. Before that, are there any questions about what I mean by the basic linear programming relaxation or the affine relaxation? Now, in our analysis, we're going to bring in polymorphisms. Just to recap, polymorphisms are ways of combining solutions to get another solution, and this is going to be pretty important to how the analysis works. What we're going to do is construct a bunch of partial solutions to each clause over A, and use a polymorphism to combine them into partial solutions to the clauses over B. What we'll show is that the assignments we get for all the clauses over B are consistent with each other, so together they form a solution for the whole instance. I actually haven't stated the main theorem yet — or this isn't quite the main theorem, but it's part of the main result — which is: the BLP plus affine algorithm correctly solves PCSP(A, B) if Pol(A, B) has infinitely many symmetric polymorphisms. A polymorphism is symmetric if for all permutations pi of the inputs, the output doesn't change, as this equation down here shows; there are also a couple of notations for talking about applying a permutation.
The theorem also applies in the block-symmetric case, which I mentioned earlier with alternating threshold, where you partition the variables into blocks and within each block you're symmetric. But for simplicity, I'm just going to talk about symmetric polymorphisms for most of the talk. All right, so let's do a first attempt at the proof, where we're only going to use the basic linear program. Assume Pol(A, B) has infinitely many symmetric polymorphisms. We're going to solve our linear program, and we're going to pick some L sufficiently large such that there's a symmetric polymorphism of arity L. I should also say I'm slightly oversimplifying: I'm using 1-in-3-SAT as my example with a symmetric operator; the proof for block-symmetric is very similar, just with a bit more notation, so I'm simplifying it on the slide. Let's take one of our clauses, say R^A(x1, x2, x3), and let's take some satisfying assignments. Say L is five, so we have five assignments, maybe (1, 0, 0), (1, 0, 0), ..., (0, 0, 1). We're going to choose them so that each satisfying assignment appears in proportion to its LP weight: basically, if the LP gave weight w_{(1,0,0)} to the assignment (1, 0, 0), I include that assignment with that much weight, scaled up by L, and likewise for (0, 1, 0), (0, 0, 1), and so on. The idea is that if I combine these rows with the polymorphism, then because the polymorphism is symmetric, its output only depends on the proportion of zeros and ones in each column. But exactly because of how we set up the basic LP, the proportion of zeros and ones in each column is in some sense the same for every clause containing that variable, because we had these marginal consistency constraints saying that the probability distributions over the variables agree with the probability distributions over the clauses, if you look at them in the right way.
So in particular, even though we computed an output for a single clause, we'd get a consistent output for every clause. That's what you'd hope would work, but there's a big problem here: by doing this, I'm assuming that L times each of these w's is an integer. In particular, L has to be a multiple of the least common denominator of all the rational numbers in your LP solution. And this is too much to ask for, because, for instance, it could easily happen that — say you're looking at 2-SAT, where the symmetric polymorphisms are majorities of all odd arities — then L has to be an odd number, but it might very well be that your w's have even denominators. So this is no good; this actually will not work in this situation. Let me show you a very explicit example. Take the 2-SAT formula consisting of all four versions of a clause on two variables. Obviously, you can't satisfy all of these at once, because each of them excludes one assignment, so nothing is left. But there is a BLP solution: basically, you put a 50-50 distribution on each variable, and the clause weights are one-halves. As I was mentioning, because these denominators are even, we can't get L times these weights to be integers for odd L. One thing you could hope to do is round the solution, but because the instance has no solution, there's no way the rounding could possibly make everything consistent. But another reason I want to bring up this counterexample is that it actually has an even bigger problem: the BLP plus affine algorithm as I stated it doesn't actually reject this formula. I can also give you an affine solution to this instance: basically, I put the integer 0 on one assignment and the integer 1 on another, and then you can put suitable integer weights on each of the clause distributions. So this is actually pretty bad.
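This counterexample is small enough to verify directly. The sketch below (my own encoding of the clauses, as I understand the slide) checks both claims: the four-clause 2-SAT instance has no satisfying assignment, yet the half-half BLP solution is marginally consistent.

```python
from fractions import Fraction as F
from itertools import product

# All four 2-SAT clauses on x1, x2, each listed with its satisfying
# assignments (value of x1, value of x2).
clauses = {
    "x1 or x2":         [(1, 0), (0, 1), (1, 1)],
    "x1 or not x2":     [(1, 1), (0, 0), (1, 0)],
    "not x1 or x2":     [(0, 0), (1, 1), (0, 1)],
    "not x1 or not x2": [(1, 0), (0, 1), (0, 0)],
}

# No total assignment satisfies every clause.
assert not any(all(a in sats for sats in clauses.values())
               for a in product((0, 1), repeat=2))

# BLP solution: each variable is 50/50, and each clause puts weight 1/2
# on the two satisfying assignments whose marginals are 1/2 each.
half = F(1, 2)
blp = {
    "x1 or x2":         {(1, 0): half, (0, 1): half, (1, 1): 0},
    "x1 or not x2":     {(1, 1): half, (0, 0): half, (1, 0): 0},
    "not x1 or x2":     {(0, 0): half, (1, 1): half, (0, 1): 0},
    "not x1 or not x2": {(1, 0): half, (0, 1): half, (0, 0): 0},
}
for w in blp.values():  # marginal consistency with the 50/50 variables
    assert sum(q for s, q in w.items() if s[0] == 1) == half
    assert sum(q for s, q in w.items() if s[1] == 1) == half
print("unsatisfiable, but BLP-feasible")
```

Note also that each clause distribution gives weight 0 to one satisfying assignment; that is exactly the support information the refinement step, described next, exploits.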
Because what this is saying is that even for a problem as simple as 2-SAT, for which there have been many algorithms over the years, this naive BLP plus affine algorithm can't even solve that. The main observation is: if you look at this BLP solution for, say, the first clause, I put half weight on (1, 0) and (0, 1), but I put no weight at all on the assignment (1, 1). Thus, we can actually throw (1, 1) out of this clause, and we instead get the clause x1 ≠ x2. In general, this is known as refinement: I solve the basic linear program, I throw out the assignments which are assigned weight 0, and I get an in some sense simpler problem. Now, if you look at these disequalities, it turns out that if you run the affine algorithm on this refinement, it will actually find a contradiction and reject. That is basically our general algorithm, on the next slide: we run the basic LP algorithm for A and reject if there's no solution. Then we do our refinement: we throw out the assignments that are forced to have weight 0. You have to be a little bit careful here; there are some subtle details about which things you throw out, because just having weight 0 in some solution isn't enough — it has to have weight 0 in every solution. See the full paper for the details on that. And then once you have this refinement, you run the affine algorithm on it and reject if there's no solution. I want to say that this refinement idea is probably the key distinguishing factor of the algorithm presented in this paper versus some of these older papers. So now let's do a, quote, correct analysis. What we're going to do is essentially the same thing as before.
But, if you look down here at the table, we're going to pick an integer m which is a multiple of all the denominators, so that m times w is a proper integer for all the w's. We still pick some L such that a symmetric polymorphism of arity L exists, but this L might not be a multiple of m; it could be a large prime or something. So we're going to let n be L minus m; this is sort of our deficit. And we're going to use the affine solution r to do a correction: we take the multiplicities m times w plus n times r. If you do the math, because the w's sum to one and the r's sum to one, the whole thing adds up to L, so you actually have an integer number of rows. The only thing you have to check is that each of these m·w + n·r values is nonnegative. And the reason that holds is: if w is positive, then we have some slack, so if we pick n sufficiently small compared to m, we have a little bit of margin so that whether r is positive or negative, the value is still nonnegative. And the key point is that if w is zero, then r must also be zero, because we threw that assignment out of our relaxation — in our refinement, we threw out anything with w equal to zero — so r is also zero there, and in particular the value is again nonnegative. So this all works. And because of our consistency equations, when we apply the polymorphism to each column, we actually get a consistent assignment to everything. This is the heart of the argument, and it's in some sense probably the trickiest part of the talk, so if there are any questions about this, please ask. Okay, just to say it one more time: before, L wasn't necessarily a multiple of the denominators, so L times w was non-integral, and you had to do some weird rounding.
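The correction step above can be sketched numerically. The names m, n, w, r follow the slide, but the concrete choice of m and the toy data are my own, under the stated assumptions (refinement already done, L larger than the common denominator):

```python
from fractions import Fraction as F
from math import lcm

def correct_counts(w, r, L):
    """Multiplicities m*w_i + n*r_i with n = L - m, as in the analysis."""
    assert sum(w) == 1 and sum(r) == 1
    # refinement guarantee: wherever the BLP weight is 0, so is r
    assert all(ri == 0 for wi, ri in zip(w, r) if wi == 0)
    m0 = lcm(*(wi.denominator for wi in w))
    # take m a large multiple of the denominators; assumes L > m0,
    # leaving a small positive deficit n = L - m
    m = m0 * ((L - 1) // m0)
    n = L - m
    counts = [int(m * wi + n * ri) for wi, ri in zip(w, r)]
    assert sum(counts) == L          # rows fill arity L exactly
    assert all(c >= 0 for c in counts)  # n small enough vs m
    return counts

# Toy data: even denominators in w, but L = 7 is odd (not a multiple
# of 2), so the naive L*w rounding would fail; the correction works.
w = (F(1, 2), F(1, 2), F(0))
r = (2, -1, 0)
print(correct_counts(w, r, 7))  # -> [5, 2, 0]
```

Here m = 6 and n = 1, so the counts are 6·(1/2) + 2 = 5, 6·(1/2) - 1 = 2, and 0, which sum to L = 7; the zero-weight row stays at zero exactly because of the refinement guarantee.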
Now we're doing the rounding properly, by scaling the w's in a way that actually gives integers, and then doing a correction term based on these affine solutions. Okay, so that's the analysis of the algorithm. But now what we want to know is: how powerful is this algorithm? Did we actually characterize all the promise CSPs this algorithm solves, or are there other promise CSPs that we're missing? It turns out there's the following theorem: BLP plus affine correctly solves a promise CSP if and only if the polymorphisms include infinitely many block-symmetric polymorphisms with two blocks. There's an interesting corollary of this: we were able to show that the algorithm works for any promise CSP with block-symmetric polymorphisms with any number of blocks. I only showed the one-block (symmetric) case on the slides, but it also works for two, three, and so on. And it turns out that any such problem actually has block-symmetric polymorphisms with two blocks — a rather nontrivial fact about minions which I don't think was known before. The heart of this theorem is an algebraic characterization of when the algorithm works: there's a minion, which we'll call the BLP plus affine minion and define over the next couple of slides, and there's this notion of a minion homomorphism, such that the algorithm correctly solves PCSP(A, B) if and only if there is a minion homomorphism from this object to Pol(A, B). What's really cool is that this is a succinct algebraic characterization of when the BLP plus affine algorithm works. I mentioned minions before, but let me now define them formally.
Let's say we have some polymorphism f of arity L. A minor of f, given by a map π from {1, ..., L} to {1, ..., L′}, is the L′-ary function g(x_1, ..., x_{L′}) = f(x_{π(1)}, ..., x_{π(L)}); that is, plugging inputs from the L′ variables into f via π. (Sorry, on the slide the map is written the wrong way around; it should go from the L coordinates to the L′ coordinates, my bad.) The way I think about this is that it's a way to contract the coordinates. So for instance, if I have a majority on, say, 99 variables, and I break the variables into consecutive groups of three and set each group to be equal, then I get a minor on 33 variables. That's the main idea this is trying to capture. Then a minion is a collection of functions which is closed under taking minors, and a minion homomorphism is a map between minions which preserves arity and commutes with minors. So basically, each function in the class has an arity, the number of input coordinates, and the minion homomorphism needs to preserve arity, but it also needs to commute with these minors: if I take a minor and then apply the homomorphism, I get the same thing as applying the homomorphism first and then taking the corresponding minor. An important fact is that poly(A, B) is always a minion. And in fact, if there is a minion homomorphism from one poly(A, B) to another poly(C, D), then you also have a corresponding polynomial-time reduction. So in our case, the minion we care about is what's known as the BLP plus affine minion. There are a few ways to define this (you can actually define it without using functions at all), but I'll define it with functions.
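Here is a small toy sketch of this coordinate-contraction picture; the specific functions and the map π are my own illustrative choices, not from the paper:

```python
def majority(*bits):
    """Boolean majority; assumes an odd number of 0/1 arguments."""
    return int(sum(bits) > len(bits) // 2)

def minor(f, arity, pi):
    """Given an `arity`-ary function f and a map pi: {0,...,arity-1} -> {0,...,L'-1},
    return the minor g(x_0,...,x_{L'-1}) = f(x_{pi(0)}, ..., x_{pi(arity-1)})."""
    def g(*xs):
        return f(*(xs[pi[i]] for i in range(arity)))
    return g

# contract 9 coordinates into 3 groups of 3 equal coordinates: pi(i) = i // 3
pi = {i: i // 3 for i in range(9)}
g = minor(majority, 9, pi)

# the resulting 3-ary minor again behaves like a majority
assert g(1, 1, 0) == 1 and g(0, 1, 0) == 0
```

This is the 99-to-33 example in miniature: each group of three identified coordinates acts like a single coordinate of the minor.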
Its L-ary elements are functions from Q^L to Q^2 (sorry, there's a slight typo here; the codomain should just be Q squared) of the form f(x_1, ..., x_L) = (w_1 x_1 + ... + w_L x_L, r_1 x_1 + ... + r_L x_L). The other typo is that the w's should be non-negative rationals and the r's should be integers. And you have the property that if you plug all ones into f, you get (1, 1); that is, the sum of the w_i is equal to one and the sum of the r_i is equal to one. And then there's the following important property, which corresponds to the refinement we talked about before: if some w_i is equal to zero, we must also have that r_i is equal to zero. So yes, sorry about that: the w_i are non-negative rationals and the r_i are integers. The important fact here is that BLP plus affine correctly solves the promise CSP if and only if there is one of these minion homomorphisms from the BLP plus affine minion to the polymorphisms of A and B. So in particular, there's essentially a copy of this minion inside the polymorphisms of A and B. And from this, you almost immediately get the corollary that BLP plus affine solves PCSP(A, B) if and only if poly(A, B) has infinitely many block symmetric polymorphisms with two blocks. The reason you get this corollary is that one thing that always works is to take L to be odd, w_i to be 1/L, and the r_i to be alternating plus one and minus one. This corresponds to the alternating threshold that we showed before. And it's going to be block symmetric, because if I permute the even coordinates or the odd coordinates, the output is preserved. And when you have a minion homomorphism, that means there's sort of a copy of this function inside poly(A, B): there's a function in poly(A, B) with the same arity L which has this exact invariance under those permutations of coordinates.
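As a quick sanity check of these conditions (my own encoding: the pair (w, r) stands for the function x ↦ (Σ w_i x_i, Σ r_i x_i)), the alternating pattern from the corollary satisfies all of the minion's conditions for odd L:

```python
from fractions import Fraction

def alternating_pair(L):
    """For odd L: w_i = 1/L and r_i alternating +1, -1, ..., +1."""
    assert L % 2 == 1
    return [Fraction(1, L)] * L, [1 if i % 2 == 0 else -1 for i in range(L)]

def in_blp_affine_minion(w, r):
    """The defining conditions: w_i non-negative rationals summing to 1,
    r_i integers summing to 1, and r_i = 0 wherever w_i = 0."""
    return (all(wi >= 0 for wi in w)
            and all(isinstance(ri, int) for ri in r)
            and sum(w) == 1 and sum(r) == 1
            and all(ri == 0 for wi, ri in zip(w, r) if wi == 0))

w, r = alternating_pair(5)
assert in_blp_affine_minion(w, r)   # r = [1, -1, 1, -1, 1] sums to 1
```

Oddness of L is what makes the alternating ±1 sequence sum to one, matching the all-ones condition.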
Because of time, I can't say much about the proof, but there are a number of familiar ideas. For instance, if this minion homomorphism does exist, then you have these infinitely many block symmetric polymorphisms, so the algorithm we presented earlier is enough. And the big idea for the other direction is that if the algorithm works, then there must be some structure inside the polymorphisms that behaves well with both the basic linear program and the affine relaxation, and from that structure, using a sort of infinite tree type argument, you can actually extract and build the minion homomorphism explicitly. Okay, so just to give a recap so far: we've discussed what promise CSPs are, we've given the algorithms, we gave an analysis, and we discussed the power of these algorithms. Now let's give a couple of concluding remarks. One nice result is that you sort of get an all-in-one algorithm for Schaefer's theorem. Schaefer's theorem says that any tractable Boolean CSP has one of the following polymorphisms; there's a list of six of them: the constant zero, the constant one, OR of two bits, AND of two bits, majority of three bits, or XOR of three bits. Now, a key property of CSPs is that because the input domain and the output domain of a polymorphism are the same, you can actually bootstrap polymorphisms to get more polymorphisms. For instance, if you know constant zero is a polymorphism, you get constant zero of all arities as polymorphisms; if you have OR of two bits, you actually have OR of all arities; and if you have majority of three bits, this one's a bit more non-trivial, but you can get majority of all odd arities.
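To illustrate that bootstrapping in code (a toy sketch of the composition trick, with my own helper names): since a polymorphism's inputs and outputs live in the same domain, compositions of polymorphisms are again polymorphisms.

```python
def or2(x, y):
    return x | y

def xor3(x, y, z):
    return x ^ y ^ z

def or_any(bits):
    """OR of any arity from 2-bit OR by repeated composition."""
    acc = bits[0]
    for b in bits[1:]:
        acc = or2(acc, b)
    return acc

def xor_odd(bits):
    """XOR of any odd arity from 3-bit XOR, folding in two bits at a time:
    XOR(x_1,...,x_{k+2}) = xor3(XOR(x_1,...,x_k), x_{k+1}, x_{k+2})."""
    assert len(bits) % 2 == 1
    acc = bits[0]
    for i in range(1, len(bits), 2):
        acc = xor3(acc, bits[i], bits[i + 1])
    return acc

assert or_any([0, 0, 1, 0]) == 1
assert xor_odd([1, 1, 0, 1, 0]) == 1   # parity of five bits
```

Note that the XOR construction only ever produces odd arities, which is why the bootstrapped family is "all odd arities" rather than all arities.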
Likewise, if you have XOR of three bits, you can get XOR of all odd arities. So in particular, we can solve all of these cases with a single algorithm, because all of these are symmetric families of polymorphisms. I think that's all I have to say about that. Another thing you might wonder is: okay, great, you can recover Schaefer's dichotomy; can you do the full CSP dichotomy? And the answer is no, due to an example that Jakub kindly shared. Imagine you have a domain of size five, zero to four, and the following binary relation: (0,1), (1,0), (2,3), (3,4), (4,2). So here it is as a graph; basically it's a directed graph homomorphism problem: given a directed graph, is there a homomorphism to this one? Another way to think of it: does every connected component of the input directed graph have either a two-coloring or, basically, a homomorphism to a directed three-cycle? And it turns out that this problem (which is not just a promise CSP, it's an actual CSP) doesn't have symmetric or block symmetric polymorphisms of any arity. That said, it does have cyclic polymorphisms of every sufficiently large prime arity, and in particular it can be solved in polynomial time by a Sherali-Adams lift of the basic linear program. So this leads into what I think about the future of promise CSPs. I see two ways you can go here. One is that there are a number of existing algorithms for CSPs whose power for promise CSPs we don't really know. For instance, there are the bounded width algorithms, which were fully characterized for CSPs by Barto and Kozik. There are also the Sherali-Adams and Lasserre hierarchies, and recent promise CSP papers show the power of adding affine relations into these hierarchies.
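As a brute-force illustration of this digraph-homomorphism view (the template is the relation from the slide; the input graphs are my own small examples):

```python
from itertools import product

# a 2-cycle on {0, 1} together with a directed 3-cycle on {2, 3, 4}
TEMPLATE = {(0, 1), (1, 0), (2, 3), (3, 4), (4, 2)}

def maps_to_template(n, edges):
    """Brute-force: does the digraph on vertices 0..n-1 admit a
    homomorphism into TEMPLATE?"""
    return any(all((h[u], h[v]) in TEMPLATE for u, v in edges)
               for h in product(range(5), repeat=n))

# a directed 3-cycle maps onto the 3-cycle component
assert maps_to_template(3, [(0, 1), (1, 2), (2, 0)])
# a directed 5-cycle fits in neither component: it's odd (no 2-coloring)
# and 5 is not divisible by 3 (no wrap around the 3-cycle)
assert not maps_to_template(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
```

The brute force is only for illustration, of course; the point in the talk is that this CSP is solvable in polynomial time despite lacking (block) symmetric polymorphisms.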
So what happens when you add those? What's the power of those? There are also the general Bulatov and Zhuk algorithms for CSPs, and it would be nice to understand what connections these have to promise CSPs. Can we generalize them to classes of promise CSPs? Do they work in some capacity for promise CSPs? These would be interesting to know. We'd also like to think about the polymorphisms themselves; there are other very interesting classes. For instance, instead of fully symmetric polymorphisms, where you use all permutations, imagine you only have cyclic permutations, like the example on the previous slide. Solving that case would be amazing; you'd already supersede Bulatov and Zhuk. There's also transitive symmetric, and I think there are other interesting classes you can come up with. And then there's the even bigger picture, which is closing the gaps between algorithms and hardness. Much is still open on the hardness side, as Libor's talk showed, and it would be cool if we could eventually meet in the middle at some point in the future, although that still looks quite a ways off in general. But perhaps something like Boolean promise CSPs is a feasible target. All right, and that's it. Special thanks to Libor, Andrei, and Jakub for valuable comments and help throughout. Okay, thank you, Josh. Thanks for a nice talk. Any questions for Josh, please? Right, okay. Maybe I can ask a couple of questions first. So in your theorem, you want block symmetric polymorphisms with two blocks, but I guess you want to exclude the case when one of the blocks is very small and the other is... Oh yes, sorry, I forgot to mention that. You can always have one of the blocks be very small; for instance, a dictator technically has two blocks, one of which is just a single coordinate. Yes, I meant to say both blocks are arbitrarily large. All right.
Or you have two blocks which are almost the same size, right? Yeah, actually from the homomorphism you can get that the two blocks are off by one, so they're literally as close as possible to each other. Okay, thanks. So my other question is this: you use the BLP solution to refine the affine relaxation. Can you do the refinement the other way? Oh, that's a good question. I'm a little bit worried in that case; I feel like the answer is no. If you did the refinement the other way, you'd be throwing out things whose affine value is equal to zero (I presume that's what you'd mean by refinement: if it's set equal to zero, throw it out). That would be fine, but then the problem is you could still have negative values in your affine solution. Imagine your affine solution gave some variable a negative value, but when you solved the basic linear program you got zero for the w. Then you would have m times zero plus n times a negative number, and that's negative, so the rounding wouldn't work in that case. There might be a way to play around with things, but I feel like once you start playing around, you're essentially going to come back to this situation. Also, given the minion homomorphism characterization, I feel like this order of the refinement is the right way to think about it, because it's what the characterization is equivalent to. Okay, thanks. Any more questions for Josh, please? Hello, can you hear me? Oh, hi, Libor. Hi, yeah, sorry for joining late; I'm somehow in a car on vacation. So just a question. You have these two blocks, or many blocks, which are totally symmetric, and then as an open problem you suggest infinitely many cyclic polymorphisms, which is kind of optimistic; at least it includes CSPs. Do you see anything in the middle?
Somehow, like, what else could be tractable and maybe easier to tackle now? Yeah, actually, while making the talk I was thinking about this, and I was kind of struggling. Maybe this is too easy, but something that comes off the top of my head: the symmetric group has even permutations and odd permutations, so maybe you only require invariance under the even permutations. I actually don't know off the top of my head what happens there; that might actually already be block symmetric, although no, actually, you'd still be missing the odd permutations. So even something like that. Although there's a subtlety: because the symmetric group isn't, I guess what's the word, solvable, once you throw in, say, the cyclic permutations plus one other permutation, it can quickly generate the whole symmetric group. So you have to be a little bit careful when deciding which identities to throw in. But I do think there probably are some interesting intermediate cases, or even just solving it in the Boolean case: for Boolean symmetric, or Boolean cyclic, Schaefer's dichotomy theorem is known, so there's no CSP barrier to overcome. That might be the correct problem to think about. Right, thanks. Any more questions for Josh, please? I have a question. Yeah, go ahead, Petrina. Hi. Did you think about combining the BLP with other kinds of relaxations over the integers? Because one of the things is: in the analysis, in the end you want probability distributions on the possible assignments in the structure A, right? And the affine part is an integer, which can be negative. And the reason it works is because you use the contribution of the affine part only to fix up the numbers that you obtain from the BLP.
And does it make sense to consider a relaxation where the variables take values over the positive integers? Maybe trying to do some kind of refinement that makes it work. Yeah, so there's actually a slightly different approach, which I brushed over in this talk, but there's a previous paper of mine and Venkat's which does this: instead of doing a relaxation over the rationals or over the integers, you do a relaxation over what's called Z[√2]. So you go back to the basic LP and add the additional constraint that instead of being rational numbers, the values are actually in the ring Z[√2]. And this also works. This was proposed in a different paper, and it was only with this paper that we realized that that approach also handles all symmetric or block symmetric polymorphisms. So there's a way to do that to get things to work, and in some sense you're getting integer-like things where you're allowed to specify non-negativity. Sadly, it isn't quite as simple: you still have to do some work to get integer values to plug into the table here. But it is another viable approach. I'm also fairly sure that just solving linear equations over the integers is ultimately not going to be the most general thing you can do, but I'm not aware at this time of what the correct algebraic relaxation is. I guess that's what I have to say. Okay, can I also ask what your guess is about this: when you think about the combination of Sherali-Adams and the affine relaxation, what is your intuition? Do you have some general picture?
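To give a flavor of what "integer-like things" in Z[√2] means, here is a tiny sketch of arithmetic in that ring (just the ring operations as one might encode them, not the actual relaxation from that paper):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ZSqrt2:
    """An element a + b*sqrt(2) of the ring Z[sqrt(2)]."""
    a: int  # integer part
    b: int  # coefficient of sqrt(2)

    def __add__(self, other):
        return ZSqrt2(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*sqrt(2)) * (c + d*sqrt(2)) = (ac + 2bd) + (ad + bc)*sqrt(2)
        return ZSqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)

    def is_nonnegative(self):
        # the sign of a + b*sqrt(2) is decidable exactly, no floating point:
        # compare a^2 against 2*b^2 when a and b have opposite signs
        if self.a >= 0 and self.b >= 0:
            return True
        if self.a < 0 and self.b < 0:
            return False
        if self.a >= 0:   # b < 0: need a >= |b|*sqrt(2)
            return self.a * self.a >= 2 * self.b * self.b
        return 2 * self.b * self.b >= self.a * self.a   # a < 0, b >= 0

x, y = ZSqrt2(1, 1), ZSqrt2(1, -1)
assert x * y == ZSqrt2(-1, 0)       # (1 + sqrt(2)) * (1 - sqrt(2)) = -1
assert not y.is_nonnegative()       # 1 - sqrt(2) < 0
```

The point of the exact sign test is that non-negativity constraints over this ring can be checked purely with integer arithmetic, which is the sense in which the ring gives "integer-like things with non-negativity".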
I mean, my question is: do you combine Sherali-Adams with the same hierarchy level of the affine relaxation, or do you think it would also work to take the Sherali-Adams solution and refine the affine relaxation with respect to that solution? Sorry, is the question whether Sherali-Adams needs to be combined with the same hierarchy level of the affine relaxation? Oh, I see. Well, I feel like for the refinement to be fully used, you would want the affine relaxation to be at the same hierarchy level, or else you're throwing away information you got from the refinement. Or actually, is that true? Yeah, because you'd also have distributions. The way I'm thinking of Sherali-Adams is that you throw in an additional clause, basically the trivial clause, for every subset of, say, k variables, and then you essentially solve the basic LP on that, then refine it and do the affine relaxation. So the same level is what I had in mind. It is an interesting open question how many levels you need. I think for CSPs it's known that, at least for the basic linear program, beyond some small number of levels, maybe two or three, you don't get any more power out of it; but I'm not aware of anything like that for promise CSPs. Thank you. Okay, thanks, Katerina. Any more questions? Well, if there are no more questions for Josh, then I guess we can thank him somehow virtually. Thanks for letting me speak, yeah. Thanks, Josh, and you will be notified about the next meeting of our seminar.
I'll add that you can find the paper on arXiv if you're interested in reading it. I'll also make sure that these slides, with some of the typos corrected, are put up, either on the YouTube page or my website; wherever it is, I can post them.