So, hello everyone. Welcome to the next seminar. It is my great pleasure to introduce today's speaker, Caterina Viola. She will be talking about the combined basic linear programming and affine integer programming relaxations for PVCSPs, I believe, on infinite domains. Thank you. Thank you, Jakub. So, this is joint work with Standa Živný. I start with the definition of the constraint satisfaction problem; probably every one of you knows what a constraint satisfaction problem is. We start from a relational structure, which is a pair A = (D, σ), where D is a finite set that we will call the domain, and σ is a set of relations over D. Given a relational structure A = (D, σ), an instance I of the constraint satisfaction problem for A, or CSP(A), consists of a finite set of variables V and a formula ψ, which is given as the conjunction of finitely many relations from σ applied to some of the variables from V. An algorithm solving the instance I of CSP(A) decides whether there exists an assignment α for the variables from V with values in D that satisfies all the conjuncts, that is, an assignment which makes ψ true in the relational structure A. So, in this talk, I will talk about three extensions of the notion of CSP, which are infinite-domain constraint satisfaction problems, or infinite CSPs; promise constraint satisfaction problems, or PCSPs; and valued constraint satisfaction problems, or VCSPs. In infinite-domain constraint satisfaction problems, the domain of the relational structure is allowed to be an infinite set. In promise constraint satisfaction problems, the task is to find an approximately good solution to an instance of a typically hard problem, when a good solution is guaranteed to exist.
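As an aside (not part of the talk), the finite-domain CSP decision problem just defined can be sketched in a few lines of Python by brute force over all assignments; the encoding of relations as sets of tuples and the 2-colouring example are my own.

```python
from itertools import product

def solve_csp(domain, variables, constraints):
    """Decide a finite-domain CSP instance by exhaustive search.

    `constraints` is a list of (relation, scope) pairs, where `relation`
    is a set of tuples over `domain` and `scope` is a tuple of variables.
    Returns a satisfying assignment (a dict) or None.
    """
    for values in product(domain, repeat=len(variables)):
        alpha = dict(zip(variables, values))
        if all(tuple(alpha[v] for v in scope) in rel
               for rel, scope in constraints):
            return alpha
    return None

# Example: 2-colouring, i.e. CSP of the template ({0, 1}; !=).
neq = {(0, 1), (1, 0)}
path = [(neq, ("x", "y")), (neq, ("y", "z"))]   # satisfiable
triangle = path + [(neq, ("x", "z"))]           # unsatisfiable
```

Of course this search takes exponential time in general, which is exactly why the talk turns to polynomial-time relaxations later on.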
And to model this kind of approximation problem, we have that each constraint is formalized by two relations, a strict one and a relaxed one. Finally, valued constraint satisfaction problems capture optimization problems, and here, to model preferences, the constraints are expressed by cost functions rather than relations. So, now I will motivate these extensions of the notion of CSP with some examples, and I will give you the formal definitions. We start with an example of an infinite-domain CSP, which is the acyclicity problem. We are given a directed graph, and the task is to find an ordering L of the vertices such that L(u) < L(v) for each directed edge (u, v). As you can see, for a particular graph, that is, for a particular instance of the acyclicity problem, the size of the domain is bounded by the size of the vertex set of the graph. However, we want to have an algorithm which works for every instance of the acyclicity problem, and this is why we can formalize this problem as a CSP only by allowing the domain to be an infinite set. So, infinite-domain constraint satisfaction problems are defined exactly as classical constraint satisfaction problems, and the only difference is that the relational structure is allowed to have a domain which is a possibly infinite set, so a domain of arbitrary cardinality. And we have that the class of infinite CSPs strictly contains the class of classical CSPs. The next example is three-versus-five colouring, in which we are given a graph which is promised to be three-colourable, and the task is to find a five-colouring. This is an example of a PCSP, or promise constraint satisfaction problem. So, in promise constraint satisfaction problems, we start from a promise template rather than from a relational structure.
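For the acyclicity example, a concrete decision procedure (my own sketch, standard Kahn's algorithm) finds such an ordering L exactly when the digraph has no directed cycle:

```python
def has_acyclic_order(vertices, edges):
    """Return an ordering L with L[u] < L[v] for every directed edge (u, v),
    or None if the digraph has a directed cycle (Kahn's algorithm)."""
    indegree = {v: 0 for v in vertices}
    for _, v in edges:
        indegree[v] += 1
    queue = [v for v in vertices if indegree[v] == 0]
    order = {}
    while queue:
        u = queue.pop()
        order[u] = len(order)          # next free position in the ordering
        for a, b in edges:
            if a == u:
                indegree[b] -= 1
                if indegree[b] == 0:
                    queue.append(b)
    return order if len(order) == len(vertices) else None

dag = [(1, 2), (2, 3), (1, 3)]        # acyclic: an ordering exists
loop = [(1, 2), (2, 1)]               # directed cycle: no ordering
```

Note that the ordering produced uses only values 0, …, |V| − 1, which is the observation behind the remark that any single instance only needs a bounded domain, while the template itself needs an infinite one.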
A promise template is a pair (A, B) of relational structures over the same signature σ, with domains A and B respectively, and such that the relational structure A is homomorphic to the relational structure B. An instance I of the promise constraint satisfaction problem for the promise template (A, B), or PCSP(A, B), consists of a finite set of variables V and a formula ψ, given again as a conjunction of finitely many relations from σ applied to some of the variables from V. So far the definition is the same as the definition of the input of a classical constraint satisfaction problem. The difference is in the output of an algorithm solving it. In fact, the output of an algorithm solving PCSP(A, B) is yes if there exists an assignment for the variables from V with values in the domain A such that ψ is true in the relational structure A, and the output is no if for all assignments β for the variables with values in the domain B we have that ψ is false in the relational structure B. So, we can encode classical CSPs as PCSPs by taking the promise template with the same relational structure twice; so CSP(A) is the same as PCSP(A, A). Okay, our last example is min vertex cover, in which we are given a graph and the task is to find a minimum-size set of vertices W such that each edge has at least one endpoint in W. This is an example of a VCSP, and as I told you, VCSPs capture optimization problems, and therefore we need to express preferences. So relations are not enough, we need cost functions, and therefore we don't start from a relational structure but from a valued structure. A valued structure is a pair Γ = (D, τ), where D is a finite set called the domain and τ is a set of cost functions over D, that is, functions φ from D^n to Q ∪ {+∞}, for some arity n.
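Going back to the PCSP semantics defined a moment ago, a toy brute-force decider (my own encoding, illustrated on 2-versus-3 colouring rather than the 3-versus-5 case from the talk, to keep the search small) makes the yes/no/promise-violated trichotomy explicit:

```python
from itertools import product

def satisfiable(domain, variables, constraints):
    """Brute-force check: is some assignment consistent with all
    (relation, scope) constraints?"""
    return any(
        all(tuple(alpha[v] for v in scope) in rel for rel, scope in constraints)
        for vals in product(domain, repeat=len(variables))
        for alpha in [dict(zip(variables, vals))]
    )

def pcsp_answer(strict, relaxed, variables, scopes):
    """Semantics of PCSP((A, B)) for a single binary constraint:
    `strict` and `relaxed` are (domain, relation) pairs.  Returns "yes"
    if satisfiable in A, "no" if unsatisfiable in B, and "either" when
    the promise is violated (any answer is then acceptable)."""
    dom_a, rel_a = strict
    dom_b, rel_b = relaxed
    if satisfiable(dom_a, variables, [(rel_a, sc) for sc in scopes]):
        return "yes"
    if not satisfiable(dom_b, variables, [(rel_b, sc) for sc in scopes]):
        return "no"
    return "either"

# 2-versus-3 colouring: strict template ({0,1}; !=), relaxed ({0,1,2}; !=).
neq = lambda dom: {(a, b) for a in dom for b in dom if a != b}
A = ([0, 1], neq([0, 1]))
B = ([0, 1, 2], neq([0, 1, 2]))
```

A triangle is 3- but not 2-colourable, so it falls outside the promise and either answer is acceptable for it.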
I should mention that having φ(x) = +∞ for a tuple x means that the cost function is not defined on x, that is, the tuple is not in the underlying relation. Given a valued structure Γ = (D, τ), an instance I of the valued constraint satisfaction problem for Γ, or VCSP(Γ), consists of a finite set of variables V, an objective function Φ, which is given this time as the sum of finitely many cost functions from τ applied to some of the variables from V, and finally we are given a threshold u, which is a rational number. An algorithm solving VCSP(Γ) decides whether there exists an assignment α for the variables from V with values in the domain D such that the cost of the objective function, which is defined as the value of Φ(α) in Γ, is at most the threshold u. So, all CSPs can also be encoded as VCSPs by replacing relations with cost functions with values in {0, +∞}: if we have a relation, we can encode it as a cost function φ by setting φ(t) = 0 when the tuple t is in the relation, and φ(t) = +∞ otherwise. Okay, so VCSPs, infinite CSPs, and PCSPs are three different extensions of the notion of CSP, and as I showed you, each of these extensions modifies a different part of the definition. Therefore we can combine these three extensions, and this is what we did in our work with Standa: we consider the class of infinite-domain promise valued constraint satisfaction problems, or PVCSPs for short. In infinite-domain promise valued constraint satisfaction problems, we start from a promise valued template over arbitrary domains, that is, a pair (Δ, Γ) of valued structures over the same set of cost functions τ, with arbitrary domains D and C respectively, and such that Δ is fractionally homomorphic to Γ. I will tell you what fractionally homomorphic means in a moment.
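To make the VCSP semantics above concrete (again my own toy code, not from the talk), here is the min vertex cover example as a threshold VCSP, using exactly the 0/+∞ encoding of relations just described for the "each edge is covered" part:

```python
from itertools import product

INF = float("inf")  # stands in for the +infinity cost

def cost(terms, alpha):
    """Cost of assignment `alpha` for an objective Phi given as a sum of
    (cost_function, scope) terms."""
    return sum(phi(*(alpha[v] for v in scope)) for phi, scope in terms)

def vcsp_decide(domain, variables, terms, u):
    """Decide whether some assignment has cost at most the threshold u."""
    return any(
        cost(terms, dict(zip(variables, vals))) <= u
        for vals in product(domain, repeat=len(variables))
    )

# Min-Vertex-Cover on a triangle: pay 1 per chosen vertex, and an edge
# with both endpoints unchosen is forbidden (cost +infinity), which is
# the 0/+infinity encoding of the "covered" relation.
cover_edge = lambda x, y: INF if x == 0 and y == 0 else 0
weight = lambda x: x
edges = [("a", "b"), ("b", "c"), ("a", "c")]
terms = [(cover_edge, e) for e in edges] + [(weight, (v,)) for v in "abc"]
```

The minimum cover of a triangle has size 2, so the threshold question answers yes at u = 2 and no at u = 1.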
An instance I of the promise valued constraint satisfaction problem for a promise valued template (Δ, Γ), or PVCSP(Δ, Γ), consists of a finite set of variables V, an objective function Φ, given as in the VCSP setting as a sum of finitely many cost functions from τ applied to some of the variables from V, and a threshold u. And as in the promise case, the output of an algorithm solving this problem is yes if there exists an assignment for the variables with values in the domain D such that Φ has cost at most u in the valued structure Δ, and no if for all assignments for the variables with values in C we have that the cost of Φ is not bounded by u in the valued structure Γ. Okay, so now I can tell you what a fractional homomorphism is. Assume that we have two valued structures Δ and Γ, with domains D and C respectively, and over the same set of cost functions τ. A fractional homomorphism from Δ to Γ is a probability measure χ on the set of maps from D to C, with non-empty support, such that for every cost function φ from τ and every tuple a from D^ar(φ), we have that the expectation of φ(h(a)) in Γ, where h is distributed according to χ, is at most the value of φ(a) in Δ. If there exists a fractional homomorphism from Δ to Γ, we say that Δ is fractionally homomorphic to Γ. So, in a promise valued template (Δ, Γ), the assumption that Δ is fractionally homomorphic to Γ guarantees that if there exists an assignment which makes Φ at most u in Δ, then there exists an assignment which makes Φ at most u in the valued structure Γ.
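Spelled out in symbols (my notation, reconstructed from the spoken definition), the defining condition of a fractional homomorphism χ from Δ to Γ is:

```latex
\[
\mathbb{E}_{h \sim \chi}\!\left[\varphi^{\Gamma}\bigl(h(a_1),\dots,h(a_n)\bigr)\right]
\;\le\; \varphi^{\Delta}(a_1,\dots,a_n)
\qquad \text{for all } \varphi \in \tau \text{ of arity } n
\text{ and all } (a_1,\dots,a_n) \in D^{n}.
\]
```

Summing this inequality over the terms of an objective Φ shows that the expected cost in Γ of a random image of a Δ-assignment is at most its cost in Δ, so some map in the support achieves cost at most u; this is exactly the promise guarantee just stated.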
So the notion of fractional homomorphism can be generalized to the notion of fractional polymorphism, just as for homomorphisms and polymorphisms. Given two valued structures Δ and Γ, with domains D and C and the same set of cost functions τ, a k-ary fractional polymorphism ω from Δ to Γ is a probability measure on the set of maps from D^k to C, with non-empty support, such that for every cost function φ from τ and every choice of k tuples a^1, …, a^k from D^ar(φ), we have that the expectation of φ(g(a^1, …, a^k)) in Γ, where g is distributed according to ω, is at most the arithmetic average of φ(a^1), …, φ(a^k) in Δ. We can also define fractional polymorphisms for valued structures rather than for promise valued templates: a fractional polymorphism of a valued structure Γ is defined as a fractional polymorphism of the promise valued template (Γ, Γ). Okay, let's go back to our PVCSPs. Our setting is the following: we have a promise valued template (Δ, Γ), and we assume that the domain D of Δ is a finite set, while the domain C of Γ is an infinite one.
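For reference, the k-ary fractional polymorphism condition above reads, in symbols (again my notation):

```latex
\[
\mathbb{E}_{g \sim \omega}\!\left[\varphi^{\Gamma}\bigl(g(a^{1},\dots,a^{k})\bigr)\right]
\;\le\; \frac{1}{k}\sum_{i=1}^{k} \varphi^{\Delta}(a^{i})
\qquad \text{for all } \varphi \in \tau \text{ and all }
a^{1},\dots,a^{k} \in D^{\operatorname{ar}(\varphi)},
\]
```

where g is applied coordinatewise to the k argument tuples. Taking k = 1 recovers the fractional homomorphism condition.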
Consider an instance I of PVCSP(Δ, Γ) with objective function Φ given as a finite sum of cost functions φ_j applied to tuples of variables x_j, and now consider the following integer program with variables z_{j,t} and w_{x,a}. So this is an integer program, and as you can see here, the variables z_{j,t} and w_{x,a} are required to take values in the set {0, 1}. Having w_{x,a} = 1 corresponds to the fact that the variable x is assigned the value a from the domain D, and similarly, having z_{j,t} = 1 corresponds to the fact that the tuple of variables x_j is assigned the tuple of values t from D. Then we have that the second row of this program says that every variable has to be assigned exactly one element from the domain; the first row expresses the marginality relations that are supposed to hold between the z's and the w's; and finally we require that z_{j,t} is zero whenever t is not in the domain of φ_j, meaning that φ_j(t) = +∞. Okay, so it is easy to observe that if the optimal value of this integer program is at most u, then Φ has cost at most u in Δ, and the answer to the instance I of PVCSP(Δ, Γ) is yes. We then ask whether the integer program for I and Δ not being bounded by u implies that Φ is not bounded by u in Γ. But before answering this question, we have another problem: solving an integer program is NP-hard in general, and therefore we consider two relaxations of the program that I showed you on the previous slide, which are known to be solvable in polynomial time. The two relaxations we consider are the basic linear programming, or BLP, relaxation and the affine integer programming, or AIP, relaxation. They are defined in exactly the same way as the integer program that I showed you before, but in the BLP relaxation the variables, which we call λ and μ, are required to
take values in the set of non-negative rationals, and in the AIP relaxation we require that the variables, which we call q and r, take values in the set of integers. So, our algorithm is a combination of the BLP and the AIP relaxations. We combine these two relaxations using a refinement step. The refinement is defined in this way: we first solve the BLP relaxation and pick a feasible solution (λ*, μ*) to this problem, and now the refinement of the AIP relaxation with respect to this special solution to the BLP is the AIP relaxation to which we add two more rows: we require that q_{j,t} is zero whenever λ*_{j,t} is zero, and that the corresponding r_{x,a} is zero whenever μ*_{x,a} is zero. Okay, so our algorithm works in this way. We start from an instance I = (V, Φ, u) of PVCSP(Δ, Γ), and we first ask whether the optimal value of BLP(I, Δ) is bounded by u. If the answer is no, then the output of our algorithm is no. Otherwise, we first find a special feasible solution (λ*, μ*) to BLP(I, Δ), and then we compute the refinement of AIP(I, Δ) with respect to this special solution. We ask whether the refined AIP(I, Δ) is bounded by, that is, at most, u, and if the answer is no, then the answer of the algorithm is no; otherwise the answer of the algorithm is yes. How do we find this special feasible solution to the BLP? We know that if the optimal value of BLP(I, Δ) is at most u, then we can find a solution (λ*, μ*) having objective value at most u and such that it is either a relative interior point of the feasibility polytope of BLP(I, Δ) or a relative interior point of the optimal polytope of BLP(I, Δ). So, we want to know when this algorithm correctly solves a PVCSP, and we answer this question in terms of algebraic
properties of the promise valued template. In particular, we have the following theorem. Let (Δ, Γ) be a promise valued template such that Δ has a finite domain, while Γ can have a possibly infinite domain. We assume that for every natural number l there exists a block-symmetric fractional polymorphism of (Δ, Γ) of arity 2l + 1 having two symmetric blocks of size l + 1 and l respectively. Then the BLP + AIP algorithm correctly solves PVCSP(Δ, Γ) in polynomial time. This theorem is an extension of a result by Brakensiek, Guruswami, Wrochna, and Živný for PCSPs, which was presented in this seminar some weeks ago, and what we did is to lift the analysis to both the valued and the infinite-domain case. However, even if the algorithm is, from the feasibility point of view, the same, the analysis of the algorithm required more attention; in particular, the refinement step in the valued case needed some additional care. In the analysis of the algorithm we also used the notion of a multiset structure, for which we took inspiration from two papers: one by Kolmogorov, Thapper, and Živný from 2015 for VCSPs, and the other by Bodirsky, Macpherson, and Thapper from 2013 for infinite-domain CSPs. Okay, now I will tell you what a block-symmetric fractional polymorphism is. We know that an n-ary map is symmetric if it is invariant under permuting its arguments. An n-ary map g is said to be block symmetric if there exists a partition of the coordinates of g into blocks such that g is permutation-invariant within each coordinate block. Examples of symmetric operations are max and the arithmetic average, while examples of block-symmetric operations are the alternating sum and some kinds of moving averages, like the one that you see in the second row. A fractional polymorphism is said to be block symmetric if its support only contains block-symmetric operations; in the same way, a
fractional polymorphism is said to be symmetric if its support only contains symmetric operations. Let me mention that if the promise valued template has symmetric fractional polymorphisms of all arities, then the PVCSP is correctly solved in polynomial time by an application of the BLP relaxation alone. So now I show you an application of this result for promise VCSPs to infinite-domain VCSPs without the promise assumption, but before that I have to give you the notion of a sampling algorithm. Given a valued structure Γ with a finite set of cost functions and domain C, a sampling algorithm for Γ takes as input a positive integer d, and it outputs a finite-domain valued structure Δ_d, with a finite domain D and the same set of cost functions, that is fractionally homomorphic to Γ and such that for every sum Φ of cost functions from τ with at most d variables and for every threshold u, we have that there exists an assignment for the variables with values in the domain C such that the cost in the valued structure Γ is at most u if and only if there exists an assignment for the variables with values in the domain D such that Φ has cost at most u in the valued structure Δ_d. We also say that the sampling algorithm is efficient if it runs in polynomial time. It is an easy observation that for every natural number d, the pair (Δ_d, Γ) is in particular a promise valued template. Because of this, we can state the following corollary. Consider Γ to be an infinite-domain valued structure with finitely many cost functions that admits an efficient sampling algorithm, and assume that Γ has a block-symmetric fractional polymorphism of arity 2l + 1 with two symmetric blocks of size l + 1 and l respectively, for all natural numbers l. Then VCSP(Γ) is solvable in polynomial time. The idea of the proof is that for every natural number d
and for every d-sample Δ_d of Γ, we can build a block-symmetric fractional polymorphism of (Δ_d, Γ) of arity 2l + 1 with two symmetric blocks of size l + 1 and l respectively, for all l. We can build this block-symmetric fractional polymorphism starting from the fractional polymorphisms and the fractional homomorphism between the two structures, and then we use the main theorem that I showed you on the previous slide. We know that there are concrete examples of valued structures admitting an efficient sampling algorithm, namely the PLH valued structures, that is, valued structures whose cost functions are first-order definable over Q using the order, the constant 1, and the scalar multiplications by rational numbers. And using this corollary, we get a new tractability result for a class of PLH valued structures having a special form of convexity. But this corollary can be somehow extended back to the promise setting, in this way. We take a promise valued template (Γ_1, Γ_2) with finitely many cost functions, and this time both valued structures in the promise valued template can have infinite domains. Assume that Γ_1 admits an efficient sampling algorithm and that the promise valued template (Γ_1, Γ_2) has a block-symmetric fractional polymorphism of arity 2l + 1 with two symmetric blocks of sizes l + 1 and l respectively, for all natural numbers l. Then PVCSP(Γ_1, Γ_2) is solvable in polynomial time. I find this theorem nice because we can assume that both valued structures have infinite domains. Okay, so this was everything I wanted to tell you about our work. There are now some open questions that I would like to answer, and to discuss with you. My first question, the first thing
that I want to know, is how to combine higher levels of the Sherali–Adams hierarchy for LP and AIP, and to find the applicability of this combination to PVCSPs. But nothing is known so far, to the best of my knowledge, even for classical PCSPs. Yeah, I think this is something that captured my attention about this problem. Another thing is to investigate the applicability conditions for the AIP relaxation alone. The applicability conditions for the BLP and for the combination of the BLP and the AIP extended the sufficient conditions for the application of these relaxations in the classical finite-domain PCSP setting, so I expected to have the same also for the AIP relaxation, meaning that the sufficient condition for applicability would be the existence of alternating fractional polymorphisms. However, it seems that the approach that we used for the BLP and for the combination of the BLP and the AIP is not enough for the AIP relaxation alone, though I also have to admit that we didn't spend so much time on it. I think it is interesting to know, even if such a result would be subsumed by the combination of the two. The next question would be: what other results known for CSPs, VCSPs, and PCSPs can be extended to the combined setting? And finally, I would like to know about new applications of infinite-domain PVCSPs. Okay, so this was everything; I thank you for your attention. I don't know whether I was too fast, as usual, but now I cannot do much about it. We actually have quite some time left, so we can go back into some details if people think it was too fast. Okay, any questions right now? Yes, hello, can you please go back to the statement of the main theorem? Yes, I'm sorry, my computer goes crazy with the warm weather. Yeah, so in the statement, for every
odd arity you have two blocks, of size l and l + 1. I know that in the original PCSP result you may have just block-symmetric polymorphisms where the size of the minimal block grows to infinity, and then it follows that you have such block-symmetric polymorphisms with two blocks for every odd arity. So I assume the same is true here, or is there any issue? Okay, so you can prove the sufficient condition also if you have other kinds of blocks, that is, if you have another number of blocks and of different arities; I mean, the proof can be generalized, and this one also can be generalized, yes. But we didn't prove that if you have any block-symmetric fractional polymorphisms, then you have one with these two symmetric blocks. It works on the feasibility side, but we didn't investigate this question. What I can tell you is that here you have a little trouble, because you want every operation in the support to have the same blocks. Okay, so this might be a problem. But for the tractability result it's enough to have the weaker assumptions? Yes, but it's not clear that that condition implies this condition which is written here. Yes, okay. And one more question: so we have these relaxations, for example the linear one, but the domain now is infinite; isn't that somehow an issue? No, because, as you can see, you only apply the relaxation to one of the two valued structures; that is the reason why I started by having the first valued structure in the template with a finite domain. And actually this is an idea from the paper that I cited, by Bodirsky, Macpherson, and Thapper: you essentially use the fractional homomorphism, and you don't care about applying the algorithm to the infinite-domain structure, you only care about the finite-domain one, and then you use the fractional
homomorphism. Okay, that is the sampling step, right? So you actually first do a sampling step before starting this? Yeah, I mean, here, as in the classical setting, if you assume that the first structure has a finite domain, then you don't care about the other. This actually also holds in the PCSP case, and it is like a straightforward consequence of the result for the combination of the BLP and the AIP relaxation for classical PCSPs; at least the sufficient condition can be extended to promise templates in which the second relational structure has an infinite domain. Thanks. Other questions? I have one question: so linear programming is in P and affine integer programming is in P; are there other relaxations that would be relevant for finding algorithms other than BLP and AIP, in this setting? So, this is the same question that I asked Josh like two weeks ago, when he gave his talk. I mean, in principle there are other relaxations that can be used instead of the AIP; more can be found in this paper by Joshua Brakensiek, from 2019 I think, but I don't know how to use them. What I can tell you is that there is a problem with affine programming, because in affine programming the solutions that you get are not fractional, they are integers, and they can also be negative integers. So my intuition is that we could not use another relaxation in the way we did, because we started from a fractional solution, the solution given by the BLP, because in the end we want to have something which is fractional in the valued world. Maybe this is also the case in the crisp setting, but in the valued case this is really needed. So I think that you can combine some other kinds of relaxations, but you need something that
produces in the end a solution which is a fractional solution; in this case, this was achieved by this refinement step. So I still don't know what other relaxations are good candidates, but this is an interesting question. And yeah, a trivial answer to your question would be: yes, try to consider higher levels of the Sherali–Adams hierarchy, so you consider, say, level two or three for the BLP and for the AIP and try to combine them, but I don't know so far. Thank you. Maybe just one more comment: if I'm not mistaken, you can actually run a single relaxation for both of these at once, right? If you adjoin Z with a square root of two or something like that, then it would actually achieve the same thing as these two combined. Am I right? Is there anyone who understands this? Can you tell me again, please? If you replace Q and Z with a single domain, like Z adjoined with the square root of two, then this is still solvable and somehow achieves both at the same time. But this is something I just vaguely remember from the paper. I had the impression that it is actually the other way around, that this Z with the square root of two is solved by exactly this, or at least that is what I thought they somehow claim in the paper. It is equivalent? Yeah, but okay, I was hoping that somebody could confirm this. But I definitely remember being told that they first discovered the other relaxation that you are talking about before discovering this one, which does not seem much, but yeah. What I can tell you is that in the valued case we have more problems, because we also want to optimize; so my intuition is that you have to run one after the other, but I don't know. I have one more question: from what you said in this last answer, it somehow feels that the affine integer programming relaxation does not translate well to VCSPs;
is that true? I mean, for characterizing it, that you can't really efficiently produce a solution from it? Yeah, so, on its own, as I told you, we didn't spend so much time on it, but the solution that you get is something which is integer and can be negative on some coordinates, so I don't know how to translate it in terms of fractional polymorphisms. So that is one of the obstacles there, yeah. Thanks. Are there more questions? I have a very general question: do you have any examples of an actual computational problem that can be modelled in this infinite-domain PVCSP framework and not in the more specific settings, or is that part of your open problems? What do you mean? I mean an example of a problem that you can formalize only in the extended setting. No, okay, no. Well, a pair consisting of an infinite-domain structure having a sampling algorithm and its sample is an example of a promise VCSP over infinite domains, because what you get from the sampling algorithm gives you a promise valued template with arbitrary domains; but besides these I don't have others. And actually, when I started to look at the BLP and the AIP relaxations, I wanted to solve classical infinite-domain VCSPs, and so this was actually my motivation to start looking at this; I only observed later that the promise setting would help me in looking at infinite-domain VCSPs. Thank you. If I may, I actually know one problem that is at least in the finite part, so that is a PVCSP, that is quite natural: the promise is that you are given a graph that is, say, three-colourable, and you want to decide whether there is a big independent set, so an independent set of density like one fourth. This can be quite naturally formulated as a PVCSP, but it still has a finite domain. So I assume something like that could also work with infinite domains, but I don't have anything at hand, I'm sorry. Any more questions? Okay, I
guess if that's all, let's thank Caterina again.