Okay, so the condition is that there is a homomorphism from A to A_W, and both structures are finite. The W stands for "weak". The way I look at it, we have two versions of everything: a strong version of the domain, D, and a weak version, D_W, and a strong version and a weak version of each relation.

When the template is fixed, this pair of relational structures defines the PCSP. There are two versions of the problem. In the decision version we are given another structure X of the same type; we should answer yes if there exists a homomorphism from X to A, and we should answer no when there is no homomorphism from X to A_W. (This B on the slide is not B but A_W.) The condition that there exists a homomorphism from A to A_W just means that yes and no are disjoint. There are inputs in between, and for those we don't care what the answer is.

What I prefer to think about is the search version, which is not known to be equivalent but seems to be quite tightly related. On input we are given a structure, and we are promised that there exists a homomorphism to the strong structure A; our task is to find some homomorphism to the weak structure A_W.

Here is a picture of a typical input. We have some variables, z1, z2, z4 and so on, and some relations between them. In the search version we are promised that we can evaluate the variables in the strong domains so that the strong version of each relation is satisfied, and what we want to do is find an evaluation in the weak domains so that the weak versions of the relations are satisfied. This should be clear, because the whole time I will be talking about PCSPs.
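As a toy sketch of the decision version just described (a brute-force checker over structures with a single binary relation; all names here are mine, and this is exponential, for intuition only):

```python
from itertools import product

def has_hom(X_dom, X_rel, A_dom, A_rel):
    """Is there a homomorphism from (X_dom, X_rel) to (A_dom, A_rel)?
    Structures carry one binary relation; brute force over all maps."""
    for values in product(A_dom, repeat=len(X_dom)):
        h = dict(zip(X_dom, values))
        if all((h[u], h[v]) in A_rel for (u, v) in X_rel):
            return True
    return False

def pcsp_decide(X_dom, X_rel, A, AW):
    """Decision version of PCSP(A, AW): 'yes' / 'no' / 'between'."""
    if has_hom(X_dom, X_rel, *A):
        return "yes"
    if not has_hom(X_dom, X_rel, *AW):
        return "no"
    return "between"  # the promise says this never occurs on valid inputs

# Strong template K2, weak template K3 (inequality relations):
K2 = ({0, 1}, {(0, 1), (1, 0)})
K3 = ({0, 1, 2}, {(a, b) for a in range(3) for b in range(3) if a != b})
edge = (["x", "y"], [("x", "y")])
tri  = (["x", "y", "z"], [("x", "y"), ("y", "z"), ("z", "x")])
print(pcsp_decide(*edge, K2, K3))  # yes
print(pcsp_decide(*tri, K2, K3))   # between: 3- but not 2-colorable
```

The "between" branch is exactly the promise gap: such inputs exist, but the problem does not require any particular answer on them.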
Just shout out loud if this is not clear. If this is okay, then the main question for us is: what is the computational complexity, depending on the template? It is sometimes solvable in polynomial time, and it is certainly always in NP. The conjecture might be that it is always either in P or NP-complete; we don't know yet.

One well-known example of such a PCSP is approximate coloring. Take the strong structure to be the three-element domain with the inequality relation, that is, the complete graph on three vertices, and the weak structure to be the seven-element domain, again with the inequality relation. In the search version we are given a 3-colorable graph and we want to find a 7-coloring; in the decision version we want to decide between graphs which are 3-colorable and graphs which are not even 7-colorable.

Two remarks here. First, note that the normal fixed-template CSP of a structure A is the same as the PCSP where the strong and the weak versions coincide. Second, there is an obvious multi-sorted version, which I will use: we can have several strong domains and several weak domains, and relations can go across domains, so there can be a relation which is a subset of, say, domain one times domain two, with an appropriate weak version. The problem is the same, just an obvious generalization.

A smaller remark: a general PCSP is equivalent to a version where all the relations, the strong ones and the weak ones, are actually graphs of functions. This is because a binary relation R, a subset of D times D, is for these purposes essentially the same as the pair of mappings given by the first and the second projection from R to D. So we can replace each relation in our template by two graphs of mappings, and the complexity doesn't change. Now consider such PCSPs, or the full version of this where we have all graphs of functions.
This is usually called label cover. All right, so that is PCSP. Now let me tell you the story; this is basically the abstract of the talk. (What I draw doesn't erase, unfortunately, which is a bit inconvenient. Can I erase everything somehow? Clear, maybe.)

One nice thing about CSPs is that we have a sufficient condition which enables us to reduce the CSP of one structure to the CSP of another structure; this "less than or equal" means there is a polynomial-time reduction. This sufficient condition also gives us an NP-hardness criterion, and by the famous results of Bulatov and of Zhuk we now know that this criterion is good enough for NP-hardness for every CSP, in the following very strong sense: if CSP(A) is not in P, then the sufficient condition applies and we get a reduction from CSP(B) to CSP(A) for every B. So we get a reduction from any CSP to our fixed CSP.

For PCSPs the situation is a bit different. There are some nice things: essentially the same sufficient condition works, and the reduction is in fact trivial — I will explain on a later slide what I mean by a trivial reduction. A trivial reduction just doesn't change the input. Of course this sounds like rubbish here, because we need to change the input if we are moving between two relational structures; but if you look at the PCSP in the right way, then the reduction really is trivial in this very sense of not changing the input.

The bad news is that this criterion is not good enough for NP-hardness: there are NP-hard PCSPs for which the criterion doesn't apply. So we need something better, and there are better conditions; a very general one, in its latest version, is due to Brandts, Wrochna, and Živný. The bad news is that it does not have the same shape as in the CSP case.
It does not follow from a general sufficient condition for a reduction between two PCSPs, like in the CSP case, and it also uses complicated results. So what we do in this work is give a better sufficient condition for the existence of a reduction. Similarly to the CSP case, the reduction is not quite trivial, but it is a very simple, obvious reduction, and it implies the NP-hardness criterion of Brandts, Wrochna, and Živný, with the difference that it doesn't use any complicated result; the proof is quite simple, and I hope to be able to show you a substantial part of it.

Okay, so first the old sufficient condition for the existence of a reduction. The theorem says that one PCSP reduces to another whenever there exists a minion homomorphism from the polymorphism minion of the target PCSP to the polymorphism minion of the source PCSP. Let me explain these terms. By a polymorphism of arity X, where X is some finite set, I mean a homomorphism from the X-th power of A to the weak version A_W — so a mapping that sends tuples in each relation of A^X to tuples in the weak version of that relation. The minion of all polymorphisms, denoted M, is simply the set of all polymorphisms from powers of A to A_W.

Why do I call it a minion? A minion is simply a set of operations from some domain to some other domain which is closed under taking minors. By a minor I mean merging and permuting variables and introducing dummy variables. I hope this example makes clear what I mean. Consider some 7-ary function f and a mapping π from a seven-element set to a three-element set given by this rule: 1 is mapped to 1, 2 is mapped to 1, 3 is mapped to 3, and so on. The minor of f determined by π is the ternary function f^π, which is defined like this.
If you want to know the value on some triple (d1, d2, d3), you just apply f to the tuple (d1, d1, d3, d2, d3, d1, d3); which elements go where is dictated by the mapping π.

For the existence of a minion homomorphism we actually don't care about the concrete polymorphisms; we only care about what they do under the minor-taking operation. So the structure we actually need is the one on the right: for every X we have some set M_X of X-ary polymorphisms, and for each mapping π from X to Y we have a mapping from M_X to M_Y, from X-ary polymorphisms to Y-ary polymorphisms, given simply by this minor operation. I don't want to say it out loud, because people often turn off their attention when categories are mentioned, but this is just a functor.

So this is a minion, and a minion homomorphism is then a natural transformation. Concretely, a minion homomorphism is a mapping which sends functions in one minion to functions in the other minion and preserves arities. So for every X we have a function sending X-ary polymorphisms to X-ary polymorphisms, and it should preserve minors in the obvious sense: if you take a function and its minor, and then shift the whole picture by this ξ, the minion homomorphism, it still works.
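The minor operation from the running example can be sketched in a few lines (a toy sketch; everything is 0-indexed, and the identity-on-tuples f is just there to make the reshuffling visible):

```python
def minor(f, pi, arity_y):
    """Minor of an n-ary function f under pi : [n] -> [m]:
    f^pi(d_1, ..., d_m) = f(d_{pi(1)}, ..., d_{pi(n)}).
    pi is given as a length-n tuple of 0-indexed coordinates."""
    def f_pi(*ds):
        assert len(ds) == arity_y
        return f(*(ds[i] for i in pi))
    return f_pi

# The example from the talk: a 7-ary f and pi = (1,1,3,2,3,1,3),
# written 0-indexed as (0,0,2,1,2,0,2), so that
# f^pi(d1, d2, d3) = f(d1, d1, d3, d2, d3, d1, d3).
f = lambda *xs: xs  # identity on tuples, to see where arguments land
pi = (0, 0, 2, 1, 2, 0, 2)
f_pi = minor(f, pi, 3)
print(f_pi('d1', 'd2', 'd3'))  # ('d1', 'd1', 'd3', 'd2', 'd3', 'd1', 'd3')
```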
The results will still be minors. So now, I hope, everything in the theorem is explained: if you have a minion homomorphism from here to here, then we get a reduction.

From here it also follows that we have a criterion for NP-hardness. Let me denote by I a trivial minion, by which I mean a minion which is, in a sense, contained in every other minion. I can describe it as the minion whose members all come from a single non-constant function from A to A_W, which we are guaranteed to have (with some assumptions I am silently making). An observation from this theorem is that if we have a homomorphism from the minion of this template A to the trivial minion, then this PCSP is NP-hard, in the very strong sense that the theorem gives us a reduction from any PCSP to this given PCSP — just because I has a homomorphism to any minion M.

It is also helpful to rephrase this; so far it has been abstract, but now it should start making sense for everyone. The condition that there exists a homomorphism from M to the trivial minion can be phrased as follows. We need a map μ sending each X-ary polymorphism to one of its coordinates — think of it as selecting, for every polymorphism, one important coordinate — and it needs to behave nicely with minors. Here is the abstract statement of what that precisely means, but let's just look at our running example of a minor: the condition simply means that if we select the fifth coordinate of f as the important one, then we need to select the third coordinate of f^π as the important one, since π maps 5 to 3. So for every function we need an important coordinate, and the important coordinates behave nicely with minors.
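That compatibility condition — the important coordinate of the minor f^π is the π-image of the important coordinate of f — is mechanical to check. A small sketch with the running example's π (0-indexed; the names are mine):

```python
def compatible(mu_f, mu_f_pi, pi):
    """The selection mu behaves nicely with the minor determined by pi
    iff the coordinate chosen for f^pi is the pi-image of the coordinate
    chosen for f (all coordinates 0-indexed)."""
    return pi[mu_f] == mu_f_pi

pi = (0, 0, 2, 1, 2, 0, 2)    # the pi of the running example, 0-indexed
print(compatible(4, 2, pi))   # True: 5th coordinate of f -> 3rd of f^pi
print(compatible(4, 0, pi))   # False: the 1st coordinate is not allowed
```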
So this is some kind of NP-hardness criterion — the old one, which is not sufficient for all PCSPs, though for CSPs it is in fact sufficient for every NP-hard case. If there are no questions, let me move to the next slide.

Now I want to explain why this reduction is actually trivial. Here is an observation which is essentially already contained in some papers, but which was not phrased this way, so let me phrase it this way: any PCSP is equivalent to a PCSP where the strong relations are all relations — every relation is in the template — and then there are some relaxations of them. Let me try to say it again. Usually in a PCSP we have relaxations of some relations; here we have all relations, each with a relaxation. This is also interesting for CSPs, and it tells us that PCSP is in a sense the more natural problem.

Okay, so how do we construct this relaxation of every relation? It was actually constructed on the previous slide. If you want to relax the graph of a function — a certain binary relation between D and E — the relaxation is simply the mapping from the previous slide: the relaxed domains are sets of polymorphisms, and the relaxed relation is the minor-taking function. So for every graph of a function we have its relaxation, and we can also relax other relations, if we wish, by the same principle. And what is essentially proved in this paper of Bulín, Krokhin, and Opršal is that these two PCSPs are equivalent.

Maybe one small remark: this structure actually contains infinitely many relations, so we always need to consider just the finite fragment that we need.
This is just a technical remark. But now, if you work instead with this better version of the PCSP, the reduction from the previous slide is actually trivial. Here it is written: if we have a minion homomorphism from the polymorphisms here to the polymorphisms here, we get a reduction from this PCSP to the better version of the other PCSP, and the reduction is trivial — we just don't change the input.

You can look at it like this: the right-hand side PCSP is an enemy, and we are fighting for the big guy. We want to solve some PCSP problem, so we are given an instance, and we get to ask the enemy a question and do something with the answer, to create a solution in the weaker structure. What we do here is just ask the same question: here is my instance, I want to solve it, please solve the same instance. And you also see, looking at it this way, that we are being too nice to the enemy: we could ask a much more complicated question, which could help us better solve the original PCSP. This is the basic idea of the obvious reduction.

Still good? Okay, so now here are some NP-hardness criteria for PCSPs. Here I repeat the criterion we already derived.
If we have a family of functions assigning an important coordinate to every polymorphism so that it behaves nicely with minors, then our PCSP is NP-complete. Here is the minor again, and at the bottom I will illustrate all the conditions on this example. If we select the second coordinate to be important for f, we need the first coordinate — because of this d1 here — to be important for the minor. By the way, this simplest criterion is already good enough, for example, for proving that 4-coloring a 3-colorable graph is NP-hard.

A slightly better criterion is this: we don't need to select one important coordinate, but, say, at most seven important coordinates. What we now require is not that the selection behaves really nicely with minors, but only weakly nicely. If we take a function, take its important coordinates, and apply the identification of variables, we get some subset of coordinates; if we take the minor first and then take its important coordinates, we get another subset. The condition is that these two sets have to have something to do with each other: they need to intersect non-emptily.

Let's look at the bottom example again. If we select, say, this two-element subset of coordinates as important — the third and the fourth — we don't require the second and third coordinates to be important for the minor; we just need that 2 or 3 appears there. So the important coordinates of the minor can be, say, the first and second — that's fine, because there is this common 2. This criterion is already good enough for all NP-hard PCSPs over symmetric Boolean templates.
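The weaker, intersection-style condition just stated — the π-image of the important coordinates of f must meet the important coordinates of f^π — is equally easy to check mechanically (a sketch, 0-indexed, reusing the example's π; names mine):

```python
def weakly_compatible(S_f, S_f_pi, pi):
    """Weak compatibility with the minor determined by pi: the pi-image
    of the important coordinates of f intersects those of f^pi."""
    return bool({pi[i] for i in S_f} & set(S_f_pi))

pi = (0, 0, 2, 1, 2, 0, 2)
# Important coordinates of f: the 3rd and 4th, i.e. {2, 3} 0-indexed.
# Their pi-image is {2, 1}; choosing {0, 1} for the minor is fine,
# because of the common coordinate 1 (the "common 2" of the talk).
print(weakly_compatible({2, 3}, {0, 1}, pi))  # True
print(weakly_compatible({0}, {2}, pi))        # False: {0} misses {2}
```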
This is proved here; here is a special case of it. It is also enough in the case where the left-hand side structure is a cycle of odd length and the right-hand side structure is K3. Of course there is the question of how to select the important coordinates, and the methods in use are of many kinds — combinatorial, analytical, and in this last case topological — and so far they don't have anything in common. It would be great to have some general explanation, but we don't have one.

The last criterion is even better, even stronger. Some version of it appeared in this paper already, but these authors finally managed to formulate it in the best way so far. The setting is the same: we need to select, say, at most seven important coordinates for every polymorphism, but we require something even weaker than before. We only require that for any chain of minors — this diagram is commutative, so the minor map from X1 to X3 is the composition of the maps from X1 to X2 and from X2 to X3 — some of the squares are weakly commutative. So some of the minors in this picture work at least a bit nicely. It's a natural generalization of the previous condition.

And it is useful. For example, there is a result of Dinur, Regev, and Smyth that coloring a 2-colorable 3-uniform hypergraph with any fixed number of colors is NP-hard, and this criterion is almost enough for their proof. I am cheating a bit here: they actually don't have this seven as a fixed number, but something which grows slowly with X — I think it was some logarithm of X. I was told by Andrei that Mark Braverman can prove that we can actually do it with this criterion alone, without cheating.
I'm not quite sure about that; I don't know if he is here to confirm. — Yeah, I think it's probable; I didn't do a careful count, but I think it's probable for a constant. — Okay, so this criterion is actually enough, and it's also enough for certain symmetric non-Boolean PCSPs; that's the paper where the condition was actually formulated.

All right. What I didn't like is that this criterion comes from a general result where we don't have an identity here but a general right-hand side, whereas for the other criteria we didn't have anything like that. And this is what our theorem says: it works in general. So this is the main result.

By the way, the homomorphism from the last slide I would call a (7,5)-homomorphism: seven is the number of important coordinates I am allowed to select, and five is the length of the chain. This is not notation I would suggest; just follow the thought. So when there is this weaker homomorphism between the polymorphism minions, then we have a reduction — and now by an obvious reduction. This is something we didn't have: we didn't have the criterion, but we also didn't have the fact that the reduction is obvious.

The notion of a (7,5)-homomorphism I am not going to explain here; it's the same as on the last slide. Let me just explain what I mean by an obvious reduction: the reduction which would come to mind as the first one better than trivial. So here is the reduction. We have some original instance, something like this — some variables and some relations between them. And we have this advantage that we can ask the enemy whatever we want. In the trivial reduction we ask the same question; here we are more demanding.
We have a first layer with just the original variables, and we can put the original constraints there if we wish. But then we also have a second layer, where we ask for values of pairs of variables. So this z12 should be a value for z1 and z2 together; the domain of z12 is like the domain of z1 times the domain of z2. But I can also require, already in the domain, that only admissible values are allowed, so the domain of this variable will not be the whole D squared but only R in this picture. We ask for values of all pairs of variables, then also for values of all triples, and so on, up to some constant — which may be huge, but constant. Then we include the obvious constraints: for example, if I know a value for the variable z123, which is encoded here, I of course know the values for z12, z1, and z2; there is this projection mapping from this variable to this variable.

So this is my reduction. Well, I find it obvious; it is just the first more-demanding version of the trivial one. In our proof we actually only need five layers — carefully chosen layers — and only these projection constraints, not even, say, the original constraints.

All right, so this is the main theorem; now let's move on. I will talk only about the intermediate condition, the middle one, where to every polymorphism we assign at most seven coordinates so that the assignment behaves somewhat nicely with minors; I will not talk about the Brandts–Wrochna–Živný condition, which is the most complicated. So here is a proof of why this middle condition — the BBKO one on the slide — gives us NP-hardness. First of all, what do I mean by label cover?
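As an aside, the layered construction just described — one new variable per bounded-size subset of original variables, with projection constraints between nested subsets — can be sketched as a skeleton (domains and the concrete template relations are omitted; the names are mine):

```python
from itertools import combinations

def layered_instance(variables, k):
    """Skeleton of the 'obvious' reduction: a new variable for each
    subset of at most k original variables, plus a projection constraint
    from every subset onto each of its proper subsets."""
    new_vars = [frozenset(S) for r in range(1, k + 1)
                for S in combinations(variables, r)]
    projections = [(S, T) for S in new_vars for T in new_vars
                   if T < S]  # T a proper subset of S
    return new_vars, projections

vs, ps = layered_instance(['z1', 'z2', 'z3'], 2)
print(len(vs))  # 6 new variables: three singletons and three pairs
print(len(ps))  # 6 projections: each pair onto its two singletons
```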
By label cover I just mean the CSP over graphs of functions; we met it on the first slide already. Now, this problem, gap label cover 1/49 — what is it? On input we are given a satisfiable label cover instance; there is a picture of a label cover instance on the right, with some variables and projection constraints. So the input is such a satisfiable instance, and our task is to find an assignment that satisfies at least a 1/49 fraction of the constraints.

This is an NP-hard problem, but the reason is quite complicated, because it is based on two deep theorems. One is the PCP theorem, which basically proves that there is some constant less than one for which this is NP-hard, and the other is the parallel repetition theorem — I think this version would be enough — which tells us that we can push the constant down arbitrarily. One thing which is not so nice here is that this is actually not a PCSP, so we are outside the theory; it is a PVCSP, a promise valued CSP.

There is another version, which I call here gap label cover 7. Again we are given a satisfiable label cover instance, but this time we are not looking for an assignment but for a 7-assignment, and we want to weakly satisfy all the constraints. A 7-assignment means we assign to the variables not domain elements but subsets of domain elements of size at most seven. And we don't want to satisfy the constraints exactly, but weakly: here we have some subset; if I move it through the constraint I get some subset, and this needs to intersect the subset on the other side. There must be some intersection.

Now, here is the proof of NP-hardness from this NP-hardness criterion: a sequence of two reductions, one from gap label cover 1/49 to gap label cover 7, and one from gap label cover 7 to this PCSP. What are the reductions? Well, both are trivial.
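Weak satisfaction of a projection constraint by a 7-assignment, and the random-rounding bound behind the 1/49 in the soundness argument, can both be sketched directly (the toy constraint here is hypothetical; names mine):

```python
from itertools import product
from fractions import Fraction

def weakly_satisfied(S, constraints):
    """S assigns to each variable a set of at most 7 values; a projection
    constraint (x, y, pi) is weakly satisfied if pi(S[x]) meets S[y]."""
    return all({pi[a] for a in S[x]} & S[y] for x, y, pi in constraints)

def satisfaction_prob(Sx, Sy, pi):
    """Exact probability that independent uniform picks a in Sx, b in Sy
    give pi(a) = b; at least 1/49 whenever the constraint is weakly
    satisfied and both sets have size at most 7."""
    hits = sum(1 for a, b in product(Sx, Sy) if pi[a] == b)
    return Fraction(hits, len(Sx) * len(Sy))

# Hypothetical toy constraint: pi collapses {0,1,2,3} onto {0,1}.
pi = {0: 0, 1: 0, 2: 1, 3: 1}
S = {'x': {1, 2}, 'y': {1}}
print(weakly_satisfied(S, [('x', 'y', pi)]))  # True: pi({1,2}) meets {1}
print(satisfaction_prob(S['x'], S['y'], pi))  # 1/2, well above 1/49
```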
To see that these reductions work: for the first one, it is simply a probabilistic argument. If we select, uniformly at random, one element from each subset, this gives us the soundness — no-instances are mapped to no-instances — and the completeness is obvious. For the second one, the soundness again follows trivially from the condition we have on the minion: we have these mappings assigning to every polymorphism a set of at most seven coordinates, and we use this mapping to find the 7-assignment.

If you look at it this way, there is an obvious question: this complicated result — the NP-hardness of gap label cover 1/49 — is not actually needed, right? What we need here is only NP-hardness of gap label cover 7. So I was wondering whether this could have an easy proof, and this is indeed the case; this is what I call the baby PCP theorem.

Here it is. The baby PCP theorem states that for any CSP — it can even be a PCSP, it doesn't really matter — the obvious reduction is a correct reduction to gap label cover 7. In particular, if I choose some NP-complete CSP, this gives me NP-completeness of gap label cover 7, with the added information that the reduction is really the obvious one. And I want to show you a proof of this, more or less, because it's quite simple.

So let me try. Say for simplicity that the domain is three-element — it's not important here — and the template consists of all binary relations, for example. Now we have some instance of this CSP that looks like this, and we create a new instance on the same principle as before, but using just two layers, A and B, for some A and B. Well, not carefully chosen —
They just need to be large enough: A needs to be large enough, and B needs to be large enough compared to A. So now we have a variable for each A-element subset of the original variables, and a variable for each B-element subset, and the obvious projection constraints. And I claim this is a correct reduction.

What we need, to prove that the reduction is correct, is this fact: if we are given some 7-assignment for the new instance, we can use it to find a solution to the original instance. This is the decoding step. Now, a 7-assignment for the new instance gives us a very clean combinatorial structure, which I was trying to describe here. What is the structure? For each set of variables of size A — these smaller sets will be denoted by capital U — we have some set of evaluations, opinions, if you like, on the values of these variables, and we have at most seven of them (this "less than" should really be "at most"). And we assume that they are formed by partial solutions. Let me say it again: for any set of original variables of size A, we are given at most seven possible evaluations. And we have the same for sets of variables of size B, which will be denoted by V: again, at most seven-element sets of partial solutions.

Now, the fact that this new instance is weakly satisfied gives us some weak consistency, which is this condition written formally — but let's look at the picture. For this A-element set of variables U we are given some seven, or at most seven, assignments.
One assignment is that z1 maps to 1, z2 maps to 2, and so on; the second assignment is this one — each row is one assignment. Now we have some set V of variables of size B that contains U, and we have the seven evaluations for this set as well. And they need to be weakly consistent, meaning at least one option here is consistent with one option there. So we have 1 2 2 1 1 3 1 2 here, and we have the same there. This is the very simple and clean combinatorial structure which we get from a 7-assignment for the new instance.

What we want to derive from it is a solution to the old instance. But we actually don't want to work with the old instance at all; what we actually find is a mapping f from variables to the domain such that for each pair of variables there exists a V containing these two variables such that this mapping is among the seven options for that set. You don't need to pay too much attention to this, but because the S_V's are formed by partial solutions, we are guaranteed that this f is actually a solution to the old instance, simply because the relations are just binary. This will not play a role in what follows; I just wanted to say that we can ignore the instance altogether and work only with this clean combinatorial setting.

Okay, so now I want to show you the proof, and it's going to be technical — but only so that you see it's really not complicated at all. So here is almost a proof.
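The weak consistency condition between the opinions on a small set U and on a superset V is easy to state in code (a toy sketch with hypothetical opinions; names mine):

```python
def weakly_consistent(U, S_U, S_V):
    """S_U and S_V are lists (at most 7 each) of partial assignments,
    given as dicts; weak consistency means some assignment in S_V,
    restricted to the smaller set U, agrees with some member of S_U."""
    restrict = lambda g: {v: g[v] for v in U}
    return any(restrict(g) == f for g in S_V for f in S_U)

# Hypothetical opinions on U = {z1, z2} and on a superset V = {z1, z2, z3}:
S_U = [{'z1': 1, 'z2': 2}, {'z1': 3, 'z2': 3}]
S_V = [{'z1': 1, 'z2': 2, 'z3': 3}, {'z1': 2, 'z2': 2, 'z3': 1}]
print(weakly_consistent(['z1', 'z2'], S_U, S_V))  # True: first rows agree
```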
Here is our combinatorial situation: for each set of size A we have some seven guesses. But now let's consider the general case where we have q possible evaluations — not seven but q — and for every big set of size B we have some r options for the evaluations. We know that this object is weakly consistent, and what we want to get is a solution. In fact, what we prove is that for each q and r, each sufficiently big A, and each sufficiently big B, the claim is true.

The strategy is a kind of induction, with slightly different arguments for two cases. One case is that we already know the claim when the smaller sets are allowed to have q − 1 guesses — q − 1 evaluations — and the bigger sets are allowed r, and we go one step further with q. This is what I am actually going to show; there are two sub-cases, and I am showing one of them — the other one is even easier.

So this is the proof for that situation. We assume we have the claim for q − 1 and we try to prove it for q. Say some A′ and B′ work for the situation where we know the claim is correct: if we have guesses for sets of size A′ and B′, with q − 1 and r options respectively, we know this guarantees a solution. Now we choose sufficiently big A and B and some auxiliary C — I am not specifying what "sufficiently big" means; we just work with sufficiently big sizes. And here is the assumption we make, coming from the fact that we are considering only one case out of two. What we have is drawn on this picture.
Here is the formal formula. Let's say we know that there exists a set X of variables of size A′ — A′ being the size from the induction hypothesis, the size of the small sets — which we fix, such that the following holds: for each evaluation d of X, and for each set of variables Y containing X, of size C, there exists a V of size B — V depending on Y — such that among the r options we have for V, there is none which is consistent with this d. I know this is complicated, there are two quantifiers, but: for each d and each Y there exists some big set such that no guesses are consistent with this d. It's just an assumption we have to remember. I am trying to explain all the details, but really the details are on the slide, and the rest is to be checked. So this is our assumption, and we fix this special set X.

What do we do next? So far we worked with the S_V's, with this part; now we will work with the other part, with the opinions of the small sets. What we want is to find a d — the X is already fixed, this is the picture — such that for any W of size A′ there exists some U containing both, such that if we look at the q opinions, the seven opinions, for the small set U, there is some tuple in there consistent with d. So for any W you can find something bigger which is somewhat consistent.

This is possible whenever A is big enough compared to A′. Here is the proof; let me go quickly — I guess I am losing everyone now, but anyway. Here is our X, and say some W doesn't work for some specific d, and some other W doesn't work for some other d, and so on. If you cannot find a good d, then the union of all these sets actually gives us a contradiction.
Okay, so we need this union to be of smaller size than a. These things are very simple if you think about them for five minutes, but I don't have five minutes. Now we are ready to actually define this kind of situation, but for the primed sizes, where we already know a solution is guaranteed. How do we do it? For the small ones, for the sets of size a', we just take the corresponding U, like in this picture, we take away all the tuples which have d here, and just restrict the result to this W. So we take away the d's, and these remaining tuples are our new set S_W. What we achieve here, because of this condition, is that the number of elements of S' is less than q, so we can use the induction hypothesis. And for the big sets, here is the construction. So M is the set of size b' for which we want to define S'_M. For each W of the small size we consider the corresponding U, like before, take the union of all of these, and use this union as the Y in the assumption. This assumption guarantees us that some V with the nice property exists, the nice property being that there is no d in here, and this is what you use. Now if you follow the instructions, or return to them later (following now was impossible, I'm sure), you will see that we did it so that this thing is really weakly consistent, simply because we took away the tuples which have d, and here we have no d; that's the reason. And that's it.
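The union bound behind "a is big enough compared to a'" can be sketched like this, under my assumption that there are at most q relevant candidates d, each with one failing witness W_d of size a':

```latex
\Bigl|\, X \cup \bigcup_{d} W_d \,\Bigr| \ \le\ a' + q\,a' ,
```

so once a exceeds (q + 1)a', a set of size a contains X together with all the witnesses, and not all of its at most q opinions can fail, which (roughly) produces the good d.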
So this is, I mean, the most complicated part of the proof, and it's already very simple, just dealing with sets; the only complication is these two quantifiers. A comment here: this seems genuinely different from Dinur's proof of the PCP theorem. I don't know the other proofs of the PCP theorem yet, unfortunately, but it's not just a simplification of Dinur's proof, because in her proof all the ways things are selected were always just very simple counting: you count how many times a function agrees with another function (this was for the alphabet reduction step), or you just take a simple majority vote (for another step). And this doesn't seem to be like that at all. It doesn't seem to be a counting argument at all, just because of these quantifiers; some of this construction doesn't seem to be phraseable in an analytical, counting way. So, well, yeah, this seems to be different. And also it's simple.

All right, so let me show you one (it disappeared here, yeah), one, like, stupid application. What is it good for? Well, for a good feeling, so far, mostly. But there is one nice application of the old theorem, the basic theorem that existence of homomorphisms gives us a reduction. It's also a kind of useless theorem, but in at least one instance it helped morally, which is a reduction from this hypergraph coloring result to a coloring result. This is five-coloring of three-colorable graphs, and for this reduction, yeah, it's quite simple to show there is a homomorphism.
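Since the talk leans on the fact that homomorphisms between templates give reductions (for instance, a graph is c-colorable exactly when it maps homomorphically to the clique K_c), here is a small brute-force homomorphism test. The graphs below are toy examples of mine, not the structures from the talk:

```python
from itertools import product

# Brute-force test for a graph homomorphism G -> H: a map of vertices
# sending every edge of G to an edge of H. hom(G, K_c) exists iff G is
# c-colorable, which is why coloring problems are PCSPs over cliques.

def has_homomorphism(g_vertices, g_edges, h_vertices, h_edges):
    h_edge_set = {frozenset(e) for e in h_edges}
    for image in product(h_vertices, repeat=len(g_vertices)):
        f = dict(zip(g_vertices, image))
        # A loop (f[u] == f[v]) gives a singleton frozenset, never an edge.
        if all(frozenset((f[u], f[v])) in h_edge_set for u, v in g_edges):
            return True
    return False

def clique(n):
    vs = list(range(n))
    return vs, [(i, j) for i in vs for j in vs if i < j]

def cycle(n):
    vs = list(range(n))
    return vs, [(i, (i + 1) % n) for i in vs]
```

For example, the 5-cycle maps to K3 (it is 3-colorable) but not to K2, and any map certifying 3-colorability composes with the inclusion K3 into K7, which is the shape of the "reduction for free" argument.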
So we get a reduction for free then. Just by composing, we also get that the criterion of Brandts, Wrochna and Živný is satisfied, especially after this result. But the way it was found was really by using this general theorem, not just the NP-hardness criterion. So this is one nice application of the previous, abstract business.

There is one kind of application of this new abstract nonsense, that these weaker homomorphisms give you reductions. So here's a simple fact which was not known, we think. If you take the PCSP over some minion and you add, say, all 7-ary functions, it doesn't get easier. If you add polymorphisms, then in general you may get easier (you never get harder), but here we actually don't get easier, just because there is an obvious minion homomorphism from this minion. And somehow the moral to remember is that the complexity indeed doesn't depend on low-arity polymorphisms; you can just ignore them altogether. Moreover, if you add to a minion something which is NP-complete by the BWŽ criterion, then the complexity doesn't change. This is a bit funny, because this reduction here comes from the fact that some 6-ary polymorphism is absent here; but this tells us that if you add it, it doesn't get harder anyway. So one way to look at it: you can remove some junk from your minion without making the problem harder. This is a possible future application, but I don't have any concrete complexity result I can show you; I didn't look for it. So this is more or less it. I have one more slide, a summary.

Yep, question?

So how is this... the last thing you said cannot be true for CSPs, right? So something here is PCSP-specific.

What cannot be true for CSPs? Oh, I see. So if you just add... okay, never mind.

Yeah, I guess I was just thinking: we have a CSP.
Obviously, we could have a low-arity polymorphism that'll make it easy, but the thing is, you will make it a minion and you won't close it under composition, right?

Right. Yeah, you won't make it a clone.

Yeah. If you want to add something to a clone and keep it a clone, you have a way harder job than for minions. So it would be hard to find an application for CSPs. Well, now it's pointless, because you know that even the simplest criterion is good enough, right? But yeah, there is nothing specific to PCSPs here; it holds for CSPs as well, but the claims kind of follow from old facts.

Thanks.

So let me just make a quick summary of what I was trying to tell you, and some questions. Summary: one piece of information I wanted to deliver is basically just a viewpoint on known results, that every PCSP is actually equivalent to a full PCSP, where you can ask whatever questions you wish: you can create whatever instance you wish and not care about the language. The second piece is that there is this weaker notion of homomorphism which is still good enough to give us reductions; moreover, the reductions are obvious. And the third one is this baby PCP theorem, that the obvious reduction from any CSP proves NP-hardness of this version of gap label cover. I was told by Venkat that a very related problem is called MinRep label cover, but it's not exactly this, so let's call it temporarily like this.

Now a few questions. One question I already talked about a bit: it's known that the criterion (the second, even the third criterion, the best one) remains true if we have a slightly super-constant thing here. So we don't need to have seven here.
It's enough, for example, to have a logarithm, a squared logarithm, a logarithm to some power; well, it needs to be smaller than some fixed polynomial. We don't yet have such a version of this discrete, of this baby PCP. And here I just recall that originally this super-constant version was required for hardness of hypergraph coloring (not anymore, because of Marcin Wrochna), but yeah, it might be useful. The question is whether such a version is actually true. It's not obvious, because I want to use so many obvious reductions, for example. Or whether you can still find some simpler proof than the known proofs of the PCP theorem. And in particular: does the obvious analog of the parallel repetition theorem work in this setting, in this baby PCP setting? It's not clear.

Now the second question: is there a baby analog of the d-to-1 conjecture? There is a theorem that if the d-to-1 conjecture is true, then k-coloring of 3-colorable graphs is always NP-hard; this result, written here, is by Guruswami and Sandeep. And again, the reduction here is trivial if you look at this PCSP as I told you. Anyway, we don't have a baby version of this. For that it would, for example, be sufficient to prove our combinatorial claim when the parameters a and b are close to each other, say b equal to a minus one. This seems like a doable problem, if it's true; an obvious target. We didn't try it, but it would be nice to have. Unfortunately, this will not give us hardness of this PCSP (they really use the d-to-1), but anyway, the reduction is still trivial, so it would be nice to understand.

And the third sort of question is that there are more NP-completeness or hardness results which somehow do not follow from this general criterion I presented.
One of them is Huang's result, which is this one (okay, I'm not going to read it); it's about coloring of graphs with some parameters. Well, I have no idea what's going on there. It would be too nice to have some generic, theoretical explanation, some kind of homomorphism which would be good enough to prove this. Another thing is this d-to-1 hardness I talked about: another general criterion to get d-to-1 hardness of a PCSP would be nice to have.

And this last one is maybe the one most torturing me. It's a reduction which was used by Wrochna and Živný, for example, to improve this result of Huang to this result, and it doesn't need to be a lot anymore. One piece, the crucial piece, is a reduction from the PCSP over K6 and K(38 choose 19) to the PCSP over K4 and K38. I didn't prove it formally, but I don't think it's captured by their criterion. Yet the obvious reduction still works. So you would need some better homomorphism to capture even this, some better capture of when the obvious reduction works; and obvious without any change, this is the same obvious reduction, although they phrase it completely differently, of course. For one, I actually have some notion of homomorphism, which is ugly, but I don't have any unification with what we have. So this is the way I need to go somehow: to have a uniform way to prove hardness of PCSPs, and then prove, like Dima or Andrei, that the rest is easy.

All right, so I think that's it.
Thanks for listening.

Okay, thanks, Libor. So let's applaud. Now, questions for Libor, please.

Question: so, Libor, I assume you also get the layered version, by the same reduction, just because you had many layers?

Yes, yes. Well, the argument is actually again the same kind of thing.

Yeah, it's simple, like before?

Yes. Yeah, also about the layered one: you told me you still consider it kind of a hack, but now it doesn't seem to me like a hack anymore, right?

So you may decide to somehow work with only two layers, and you may be smarter and work with more. I mean, it's not the end of the story, obviously, as the last reduction I mentioned shows us, but it's no longer a hack.

Any more questions, please?

Question: I was wondering, for this question about a super-constant instead of 7, do I understand correctly that it comes down to this dependency of your constants a, b on q, r?

Yes, yes.

And do you have some estimates on what you get in your proof?

Well, no, we didn't try to optimize it, even now. The proof is kind of asymmetric; it's simple, but it's kind of asymmetric, and we didn't make any effort to optimize it. But with our approach it's certainly worse, so we would need something better.

So, Libor, can you say how your a depends on the seven? Suppose you want gap label cover with seven: how big a domain do you need?

Well, yeah, I cannot tell you. It's huge, actually, in our proof as I presented it. It's not Ramsey-number huge, but it's exponentially huge.

Yeah, by the way, can I have a question? Marcin... would Marcin Wrochna be so kind as to somehow explain to me sometime how we prove this constant for hypergraph coloring?

Ah, sure, sure. Okay, I'll try to write it down soon. But in the end it's a simple modification of the existing proof.
That's just one trick about, um, those...

Any more questions, please? Okay, then I can ask a question perhaps. So, Libor, which one of these babies do you most want to grow up, and what do you want it to be?

Well, yeah, of course I want it to not be a baby, right? So this was my kind of original motivation: prove the baby PCP, then just make a probabilistic version, and that's the PCP, and then move on to, you know, 2-to-1 and unique games. So this is some of the motivation behind it.

So, Libor, what's the chance the paper is ready? Is the paper available for distribution?

Uh, it's almost ready, in preparation. So we haven't started.

Okay, yeah. One last chance to ask a question.

A technical one. Are you going to post the video anywhere?

Sorry?

The video. Are you going to post this presentation anywhere? It's recorded, as I can see.

Um, yeah, we will probably post it on the website of the seminar if people are interested. So we are recording, and if I manage to somehow get it to my computer... yeah, I will still have to look into the technical details, but we are recording because I've got a few requests from people who could not attend today.

I think even just posting the slides would be great.

Yeah, I will do it; I will post them on my website at least. Thanks.

All right. So, well, if there are no questions, then yeah, let's thank Libor again. Thank you, Libor, and everyone, please send us your suggestions for future talks.