Great, so hello everyone, thanks for coming. Today's talk is by Alex Kazda, who is currently a visiting professor in Boulder, Colorado, and he will talk about minion homomorphisms and how they give reductions for promise valued constraint satisfaction problems. Thank you very much, Jakub, and good morning everybody. So I'm not content with the usual CSP; I want to talk about promise valued CSP, so let's begin by explaining what that is. For my talk, it will look like this: I've got a cost function that's a sum of parts, and each part consists of something I call a valued relation — not just me, it's standard terminology — which is like a prefabricated cost function applied to some variables. Those v_{1,1}, v_{1,2} and so on are just symbols that stand for variables. And then I can multiply each of these R's by a little weight, which is a positive rational number — all numbers are rational, because we cannot represent all real numbers, so I'll just stick to rational ones — and I take the sum of them. Now, the thing is, this is a promise problem, so each of these R's can actually stand for one of two things: for each symbol R I've got two functions, one going from a power of some set A and one from a power of some set B to the rational numbers, where I also allow the infinite value, which basically means that some tuple of inputs is not feasible. And I've got yes instances and no instances — it's a promise problem, so sometimes an instance is neither yes nor no. If you've seen any talks about promise CSP, this should be a familiar thing.
So the yes instances are such that there exists some σ that assigns values from A to the variables, so that when I evaluate the cost function in A, the value is at most Q — actually, I think I may have forgotten to tell you about Q as part of the input; that's a threshold value, which you can just imagine to be zero, it doesn't really affect things too much. And I've got no instances, where whatever value assignment I choose — whatever values I give to the variables in B — the cost will never be at most Q. For this to be reasonable I need A and B to be some specific structures: I need that there exists some homomorphism from A to B, so that I cannot have an instance that's both a yes instance and a no instance — it should be that whenever there is a cheap solution in A, there is a cheap solution in B, in a kind of automatic fashion. I'm not putting this on the slide because it would be too many definitions, but A and B need to be somewhat reasonable to make this problem work. One way to make this reasonable is to choose A and B to be exactly the same thing, and then it's valued CSP in the decision version. Maybe your favorite view of valued CSP is the search version, where you actually want to find a satisfying assignment; I'm going to do the decision version because then it's easier to reduce yes/no problems, but the search version is not really very far away. Now PVCSP(A, B) will be the situation where I fix the domains and the cost functions. Each domain together with its cost functions I will sometimes call a valued relational structure — that's A and B — and it's a similar situation to CSP of A or PCSP of (A, B): I want to know how difficult this is to solve. And everything will be finite for this talk; although I'm sure something can be generalized, I will stick to just the finite stuff.
Okay, so in the study of CSP you've probably heard about the algebraic approach, or know a lot about it, where it means studying operations that are compatible with the relations on the structure. So what are weighted operations? I'm going to use my version of them, which, however, is not that different from what other people have been doing; I'm definitely not the first one to talk about weighted operations. A weighted operation for me will be a weighted sum like this: I've got some weights on the inputs, which I do by giving weights to the projections — each of those weights is some non-negative number, one for each projection. Then the interesting stuff begins here on the right-hand side of the arrow, which is just a delimiter that tells me, okay, this is where we go. And here weights are given to operations — the g and h and so on are honest operations that go from a power of A to B. The conditions are that the weights are non-negative, because it would be strange if they were not, and that the sum of the weights on the inputs and on the outputs is the same; you can think of this as somehow loosely transforming the inputs into the outputs according to some recipe. It's a little bit strange — this is probably the most general view of weighted operations. So here's an example which is actually not very strange and which you've probably heard of: submodularity. In my view of the world, submodularity is a weighted operation from the second power of {0,1} to {0,1} where the weights everywhere are just one: it takes the first projection and the second projection, and it outputs a formal sum of bitwise AND and bitwise OR, where everything has the same weight, let's say one. Now, as I said, I'm not the first one to do this; the first paper I know about that uses this for CSP is from 2013, by David A. Cohen, Martin C. Cooper, Páidí Creed, Peter Jeavons and Stanislav Živný.
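The submodularity example just described can be written out in a few lines of code. This is my own illustration, not from the talk's slides: the two projections each get weight one on the input side, bitwise AND and OR each get weight one on the output side, and the defining sanity condition is that all weights are non-negative and the two weight sums agree.

```python
# Submodularity as a weighted operation on {0,1}^2 -> {0,1}
# (illustrative sketch; the pairing of weights with operations is an
# assumption about how to encode the talk's definition).

def proj1(x, y): return x
def proj2(x, y): return y
def bit_and(x, y): return x & y
def bit_or(x, y): return x | y

input_side = [(1, proj1), (1, proj2)]      # weights on the projections
output_side = [(1, bit_and), (1, bit_or)]  # weights on honest operations

# the defining condition: non-negative weights, equal weight totals
assert all(w >= 0 for w, _ in input_side + output_side)
assert sum(w for w, _ in input_side) == sum(w for w, _ in output_side)
print("weight sums balance at", sum(w for w, _ in input_side))
```

The "transforming inputs into outputs" intuition is visible here: weight 1 + 1 on the projections is redistributed as 1 + 1 onto AND and OR.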
I don't think this is exactly the way they would write it in that paper, but it's basically the same thing, and they already use these as polymorphisms. So what are polymorphisms? Polymorphisms should be operations that respect relations — here is a reminder of what operations are — and weighted polymorphisms should work on these weighted relational structures: a universe and some weighted relations. The compatibility condition should be something like this: for each n, the n-ary weighted polymorphisms are those weighted operations that map tuples in a given relation to something that costs at most as much as the inputs. So weighted polymorphisms don't increase the cost. On the left-hand side of the inequality symbol you can see the costs of the projections, and on the right-hand side the costs of the outputs — basically I'm taking the formal sum and applying it to the inputs and the outputs. And again, if you've looked into VCSP you might have seen this, maybe in a slightly different form; it's not anything very new. For submodularity things become quite nice — this is probably how you would see submodularity in a CS paper. A relation is submodular if, when I take any two tuples in the relation, some c_1 and c_2, then the cost of the bitwise AND of the two tuples plus the cost of the bitwise OR of the two tuples is at most the sum of the costs of c_1 and c_2. Now, sneakily I'm changing A to B — you can see it when I go from the left-hand side to the right-hand side — but that's not really a big difference conceptually. Of course it means that I cannot compose, for example, but otherwise not much is going on. And the set of all weighted polymorphisms, as n goes from 1 to infinity, is the weighted polymorphism minion. So now, what's this talk about? It's basically about me trying to be like the cool kids.
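The submodularity condition is easy to check by brute force on a small domain. Here is a hedged sketch of my own (not from the slides): a cost function f on {0,1}^n, given as a table with `float('inf')` standing for the infinite (infeasible) value, is submodular when f(s AND t) + f(s OR t) ≤ f(s) + f(t) for all tuples s, t.

```python
from itertools import product

def is_submodular(f, n):
    """Brute-force submodularity check for a cost table f on {0,1}^n."""
    for s in product((0, 1), repeat=n):
        for t in product((0, 1), repeat=n):
            meet = tuple(a & b for a, b in zip(s, t))  # bitwise AND
            join = tuple(a | b for a, b in zip(s, t))  # bitwise OR
            if f[meet] + f[join] > f[s] + f[t]:
                return False
    return True

# a cut-like cost function on {0,1}^2: submodular
f = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(is_submodular(f, 2))  # True

# its flip penalizes agreement instead: not submodular
g = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}
print(is_submodular(g, 2))  # False
```

The failing pair for g is s = (0,1), t = (1,0): the meet and join cost 1 + 1 while s and t cost 0 + 0.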
So Barto, Bulín, Krokhin and Opršal wrote a paper about how homomorphisms of polymorphism minions give gadget reductions: when you've got minion homomorphisms of CSP polymorphism clones — sorry, minions, not clones — you get some very natural reductions between PCSPs. And I would like to know if we can do this for promise valued CSP, where instead of polymorphisms we've got weighted polymorphisms. This talk is about how we can do it, but first I need to answer the question: what's even a reasonable way to define these homomorphisms? I've got these weighted polymorphism minions, but what are the right homomorphisms? So again, just a reminder of what I'm talking about here, what the operations are: those will be weighted polymorphisms, operations like this. I'm going to denote by wPol⁺ the support set, in the sense that it contains the n-ary operations f for which I can find some weighted polymorphism where f has a positive weight. Here f is a normal operation, an everyday operation; sometimes these operations have a positive weight in some weighted polymorphism, and if that happens, I toss them into the support set. The support set is actually a minion, which means it's closed under taking minors. If I take all of these wPol⁺'s for all arities, I will need to choose A and B so that it's non-empty — not for every A and B, but for reasonable A's and B's: this will be non-empty as long as I've got some weighted operations at all. And I want it to be closed under taking minors, and that holds in general: if some f belongs to the support set, then its minors belong to the support set. A minor is what happens when I take an operation and start messing around with the variables: there is some mapping σ that goes from {1, 2, ..., n} to {1, 2, ..., k}, and it assigns new indices to the x's like this.
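Minor-taking can be made concrete in code. This is a minimal sketch under the assumption that it matches the talk's definition: given an n-ary operation f and a map σ from {1, ..., n} to {1, ..., k}, the minor is the k-ary operation f_σ(x_1, ..., x_k) = f(x_{σ(1)}, ..., x_{σ(n)}).

```python
def minor(f, n, sigma, k):
    """sigma: dict mapping argument positions 1..n of f to positions 1..k.
    Returns the k-ary minor of the n-ary operation f."""
    def f_sigma(*xs):
        assert len(xs) == k
        return f(*(xs[sigma[i] - 1] for i in range(1, n + 1)))
    return f_sigma

# example: identifying both variables of binary AND gives the unary identity
def bit_and(x, y): return x & y

diag = minor(bit_and, 2, {1: 1, 2: 1}, 1)
print(diag(0), diag(1))  # 0 1
```

Permuting or duplicating variables this way is exactly the "messing around with variables" above; no new values ever appear, only re-wired inputs.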
I'm not going to be talking about minors too much in this talk, but they'll make an appearance. So the support minion is closed under taking minors — that's an exercise that basically follows from the weighted polymorphisms themselves being closed under taking minors; again, it's not very difficult or deep. So, homomorphisms. I take a minion homomorphism on these support sets, from the weighted polymorphisms of (A, B) to the weighted polymorphisms of (C, D). What's a homomorphism here? Well, I want it to commute with minor-taking: when I take a minor, it shouldn't matter whether I take it inside the homomorphism, or do the homomorphism first and then take the minor. And a weighted minion homomorphism is such a mapping on the support set that respects the weighted operations: whenever this big thing is a weighted polymorphism from A to B, then when I take φ and apply it on the right-hand side — you can see this red thing here, that's the only change in the big formula — I want the new formula to lie in the weighted polymorphisms from C to D. If I do this and it always happens like this, then I say that φ gives me a weighted minion homomorphism. Now, I'm actually lying to you a little bit, in the sense that I've simplified things: to get the full power of the theorem, we can make it more general — instead of taking one φ, we can take a probability distribution over several φ's, so some linear combinations in here. But I think that's too much to try to explain, because then I would have one more sum in here and everything would be even worse when it comes to the complexity of this formula. So for this talk, let's just say that it works like this, but it could be made a little bit more powerful. Okay, so I want to do reductions, and I'll do reductions through an intermediate problem. This is again inspired by the methods from the Barto–Bulín–Krokhin–Opršal paper.
And my intermediate problem is something called the promise valued minor condition problem, with the lovely acronym PVMC. It's got parameters N, A and B. N is just some number, and I'm not going to talk too much about it, but basically for the reduction to work, N needs to be some big but constant number for the given structures. A and B are valued relational structures, and the problem looks like this. It's a little bit complicated — it's got lots of inputs, but they're all pieces that are needed. I've got again a threshold Q, the same kind of thing as for CSP. Then I've got a finite set of minor conditions. What are those? I'm not going to define them formally; I'll give you an example on the next slide, but they are identities of the form 'some operation equals some other operation', where I'm also allowed to take minors of these operations — so really it's 'a minor of one operation equals a minor of another operation'. These operations have operation symbols f_1 to f_n; those are just symbols, so I'm basically solving a functional equation here. The operation symbols will need to be realized by projections or by operations from the support set — that's coming in a moment. But it's also a valued problem, so there are valued maps: for each f_i I'll have a pair of valued maps, α_i and β_i, that give me the costs of choosing each operation to be something in particular. Because it's a promise problem, I've got two of them for each operation symbol: one for the yes instances and one for the no instances. Each f_i has some arity, and the big N has only one role here: it's an upper bound on the arities that can show up in this problem. So I cannot have arbitrary arities — I can only choose arities at most N — which keeps this problem from exploding in some crazy way.
Finally, there is probably the most technical part: each pair of costs α_i and β_i needs to be compatible with the weighted polymorphisms. If your situation is like mine — all of this stuff is kind of overwhelming at first — then maybe this is the condition that can be ignored the most on a first pass; it's not that important at first exactly what these properties are, but it's basically the 'relations being compatible with operations' kind of situation. So what are the possibilities here? Well, I can have yes and no instances. A yes instance is an instance where I can solve the minor condition in projections, and I can do so cheaply. Cheaply means my cost is at most Q, and the cost for the yes instances is given by the α's: each α_i tells me how much a given projection costs. I've got a map ξ which assigns to each f_i some projection, the ξ(i)-th projection; I need to satisfy the system of identities Σ, and I need to do so in such a way that the costs are at most Q. That's the yes instances. The no instances are such that whenever I try to do this in the support minion instead of in projections, I will never manage to both satisfy Σ and keep it cheap, that is, at most Q — but notice that here I'm using the β_i's; I've switched from the α_i's to the other costs. The β_i's assign rational costs to each member of the support minion of the weighted polymorphisms. So I've got yes instances and no instances. It's not obvious, but it turns out that if I've got this compatibility going on, then I cannot have a yes and a no instance at the same time. Compatibility is like this: imagine α and β are relations, and then compatibility is basically that. Compatibility keeps the α's and β's honest and somehow not crazy.
So, I will say that a pair of costs α and β is compatible with the weighted polymorphisms of (A, B) if, whenever I've got an n-ary weighted polymorphism ω — α and β both have some arity, which I'm kind of hiding from you — then the costs of the projections, weighted by the weights from ω, are at most the costs of the outputs, again weighted by the output weights of ω. I mean, it's a lot of letters, but I think this is a pretty natural choice for compatibility; it follows the pattern from relations, for example. Question: where are these defined — so the α's are maps from projections, right? Yeah, from projections to rational numbers; I don't allow infinity in here, so it's just rational numbers. And the β's go from the n-ary part of the support minion to, again, rational numbers. Each α_i goes from just the projections with n inputs, so they both have the matching arity. Question: so they are specific to your function symbols and have the same arity as those symbols? Yeah — there is one α_i and one β_i for each function symbol, each function symbol in the input has some arity, and the arities match. So that's how it is. And actually my input is kind of big, because what I need is that, as part of the input, I'm getting these α_i's and β_i's as tables. For the projections that's not so bad, but the table for β_i will be kind of big. Since I've got this big N as my upper bound on the arities, this will not be a problem for the asymptotic complexity, but β_i is a pretty big object, actually.
So this is probably not a problem you want to solve in your everyday life, but it will be useful as an intermediate step in the reductions. Question: allow me one more question, just to make sure I understand — the small n in the definition of compatibility has nothing to do with the other small n's? Ouch — that's a great question, catching something that's wrong. Excuse me, these are different small n's. There is one small n which is always the same, up until the compatibility slide, where I was not very careful: this last small n in the compatibility part should probably be something like m; I just didn't notice I had already used this letter, so that's my fault. For compatibility I'm just thinking about one pair, one α_i and β_i at a time, so there is no interaction with anything else when I check compatibility. Okay, so here's an example — hopefully this will make it a little bit more understandable. There are some relational structures on {0,1}, and let's say I've got an instance where f_1 and f_2 are both binary, and Σ is this system of identities. So here's an example of a minor condition which, if you look at it, says that f_1 is symmetric, and by the same token f_2 needs to be symmetric. That's the system — I just picked it at random to have something reasonably small. Now the costs will be as follows: α_1 gives cost one to any projection, and β_1 calculates the values of the operation on the inputs (0,1) and (1,0) and sums them up in, let's say, the integers. So it's not modulo 2 here; the sum is taken the normal way. That's one way to assign the costs — of course there are many others, but let's say this one.
And for simplicity, let's just say that α_2 and β_2 are constant zeros, so that they get out of the way. So, is this ever a yes instance? Turns out no, because in order to be a yes instance I would need to satisfy the system Σ in projections, and there is no projection that is symmetric. That's a very easy exercise: no projection satisfies f(x, y) = f(y, x), so whatever projection you pick, this will not work. So it's not a yes instance, which means algorithmically we are always safe saying no to this; but that doesn't answer the question whether this is an official no instance. There the answer is: it depends. And with the idea of explaining this problem a little bit better, I'm going to look at the no-instance situation. To not be a no instance, there needs to exist some cheap solution in the weighted polymorphism minion: there need to exist f_1 and f_2 that are both symmetric, living in the binary part of the support of the weighted polymorphism minion, and their cost measured by β — which, remember, was defined by summing up the values at (0,1) and (1,0) — is at most Q, and Q was one. So the sum is at most one. Basically what we want is a commutative f_1 in the weighted polymorphism minion where (0,1) costs at most one — why just (0,1)? Because f_1 is commutative, so this is kind of simplifying itself. And because we are going from {0,1} to {0,1}, that means f_1 needs to return the value zero on (0,1) and (1,0). This happens if and only if the binary part of the support minion contains either bitwise AND, NOR, XNOR, or constant zero — those are the binary Boolean operations that return zero on those two inputs. One can draw the table, and there are four of them.
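The count of four can be verified by brute force. This is my own check of the claim, not from the slides: enumerate all binary Boolean operations with f(0,1) = f(1,0) = 0 (given that constraint, commutativity is automatic), which is exactly the condition of having total cost zero under the β that sums f(0,1) + f(1,0).

```python
from itertools import product

# enumerate all 16 binary Boolean operations as truth tables and keep
# the ones vanishing on the inputs (0,1) and (1,0)
ops = []
for table in product((0, 1), repeat=4):  # values at 00, 01, 10, 11
    f = dict(zip([(0, 0), (0, 1), (1, 0), (1, 1)], table))
    if f[(0, 1)] == f[(1, 0)] == 0:
        ops.append(f)

print(len(ops))  # 4: bitwise AND, NOR, XNOR, and constant zero
for f in ops:
    print([f[(0, 0)], f[(0, 1)], f[(1, 0)], f[(1, 1)]])
```

Only f(0,0) and f(1,1) remain free, giving 2 × 2 = 4 operations, matching the four named in the talk.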
And that's which operations those are. Now, to save time, I didn't go into the details of this, but A and B also restrict the possible β's: the β I picked here is not always possible as part of the input, only when A and B have some specific properties. I'm kind of hand-waving that away because I want to finish the talk in reasonable time, but there is a little bit more to the example. So now, what's my approach? I want to do reductions from one PVCSP to another. I assume I've got a weighted polymorphism minion homomorphism, some φ, and I want to imitate the Barto–Bulín–Krokhin–Opršal paper by going through an intermediate problem: reducing PVCSP first to this intermediate problem, then getting a pretty straightforward reduction between the intermediate problems, and then going back. So I've got three arrows in total here. I assume I've got a homomorphism from (A, B) to (C, D); the reductions go the other way, which is kind of normal in this situation. I start in PVCSP(C, D) and I want to reduce it to the promise valued minor condition problem for (C, D), where I pick N to be something big — basically something like the size of the universe to the power of the arity of the largest relation. So N is big, but it's a constant, some big constant, and I want this reduction. Then I've got the red arrow in the middle, which is the easiest reduction of them all, between the two promise valued minor condition problems; I set these problems up so that this reduction is very easy, basically. And then I need to reduce back to PVCSP — but notice that now I've switched (C, D) for (A, B): I want the promise valued minor condition problem to reduce to PVCSP(A, B). I will talk about all of these arrows, but actually only the red arrow will be done in reasonable detail. So how does the reduction in the red arrow go?
So basically I'm doing almost nothing: I just need to do one composition with φ in there, and that's it. So it will be pretty easy. What's not so easy is the equivalence of PVCSP and PVMC for reasonably big N, and I'll talk a little bit about that one too. So first the red arrow, the part where things are kind of beautiful and natural. I've got this minion homomorphism for valued weighted minions. How does the reduction work? I'm going to keep almost everything. Somebody gives me an instance of PVMC for (C, D), which is this big thing with Σ, Q, α's and β's. I do nothing with Σ, Q and the α_i's — I copy and paste them into my new problem. Then I change the β's: I'll have β'_i — oh, I'm missing an index here, this is meant to be β_i — which is β_i composed with φ. A quick check that this makes sense: my goal is to get an instance of PVMC for (A, B), so if I now get some operation from the support for A to B, how do I give it the β' cost? Well, I first map it with φ into an operation in the support minion of the weighted polymorphism minion from C to D, and then I apply β_i, and that works. So at least it's reasonably well defined. I'm going to sweep compatibility under the rug, but it turns out that these α_i's and β'_i's will be compatible with the weighted polymorphisms — I just don't want to get into too many inequalities; if you want, it's not so hard to verify in a straightforward way. So how is it with the instances? To get the reduction, I need yes instances to go to yes instances and no instances to go to no instances.
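The "one composition" can be sketched in a couple of lines. All names here are hypothetical stand-ins, not the talk's notation: the reduction keeps Σ, Q and the α_i's and replaces each cost β_i on the (C, D)-side support minion by β'_i = β_i ∘ φ, which is a cost on the (A, B)-side support minion.

```python
def translate_instance(instance, phi):
    """Keep sigma, q, alphas; compose each beta with the homomorphism phi."""
    sigma, q, alphas, betas = instance
    betas_prime = [lambda f, b=b: b(phi(f)) for b in betas]  # b=b: bind now
    return (sigma, q, alphas, betas_prime)

# toy check: with phi the identity map, the translated costs are unchanged;
# here operations are stand-in strings and the cost is just their length
ident = lambda f: f
_, _, _, bp = translate_instance(("sigma", 0, [None], [len]), ident)
print(bp[0]("xyz"))  # 3
```

The `b=b` default argument pins each β_i at list-comprehension time, avoiding Python's late-binding closure pitfall.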
Well, yes instances will go to yes instances in a kind of boring way, because yes instances are all about projections, and I didn't do anything to the projections, to Σ, or to the costs of projections. So yes instances go to yes instances in a straightforward way. How is it with no instances? Suppose I've got a no instance of PVMC for (C, D); does it go to a no instance of PVMC for (A, B)? Yes — let's assume for a contradiction that this does not happen. So suppose I somehow produced an instance of PVMC for (A, B) where I can find a cheap solution in the support set of the weighted polymorphism minion from A to B: suppose I can find some ψ that assigns to the f's something from the support minion, satisfying Σ, with the β' costs at most Q. Then I look at what β' is: β' was obtained by this composition, so this means the costs are β_i applied to φ applied to whatever each f_i gets assigned, and you sum this up and get at most Q. Now — okay, I see I've got a typo on my slide; there should be a φ in here which is missing, which is, I guess, a little bit confusing of me. Let me do a hot fix and add the right solution here: I'm about to write φ in here. Okay, it's compiling now, and the φ you are imagining here will come in a moment. And I claim that this is a witness that the instance I was given originally is actually not a no instance: because φ is a minion homomorphism, it maps something that satisfies Σ to something that satisfies Σ — here is where I'm using that φ is a minion homomorphism — and the costs are basically written here: this tells me that the costs in the β's of the composed solution are at most Q.
So this shows that the original instance was actually not a no instance. Okay. Maybe this is a good time to ask if there are any questions, or if people need me to clarify something, because this was the nice part of the reductions. Doesn't seem to be the case. Okay, so you can see an echo of the Barto–Bulín–Krokhin–Opršal paper in this: it's like I slapped a lot of cost functions onto their paper and just arranged things so that things work. Okay, now I've got two more reductions to do, and I'm going to do them — well, one by example and one by hand-waving, because time is short. The one by example is the reduction from PVCSP to this promise valued minor condition problem; again, the ideas are from the Barto–Bulín–Krokhin–Opršal paper. My example will be on some pair (C, D) where the universe of C is just {0,1}, to keep things reasonably small, and let's say that this is the input instance. Actually, most of these numbers are kind of useless — we will not use them for much of anything — but let's say this is the input instance. What I need to do is enumerate the support sets of these relations. Actually, two slides back I said that the N here was supposed to be as big as the size of A to the power of the largest arity — that was wrong, it's C, the size of the universe of C to the largest arity of a relation, so a different thing — but basically N will be the size of the support set of the largest relation here. So let's say, for the sake of the example, that R_C is less than infinity on exactly these pairs, and otherwise R_C is infinity — infinity on (0,0) — so I would say the support set of R_C is these three pairs. Something is in the support set if its cost is less than infinity; it's like a feasible tuple, a feasible part of a solution. And let's say, for the sake of the example, that S is less than infinity at 0 and infinity at 1.
How am I going to create the minor condition problem? I'm going to create this set Σ of identities. I take an operation symbol for each variable — I've got two variables in my input, so there are f_x and f_y — and a symbol for each summand in this cost function, so I've got f_R here and f_S here. Those will be my operations, and then I create identities which are always of the form 'variable operation equals relation operation', and they encode the support sets; that's basically what's going on here. For example — let's begin maybe with the last line — the last line says that y needs to be something in S. The arities of the variable operations correspond to how many possible values those variables can take: each one can be 0 or 1, so they are binary. And for f_S, I know that f_S can only be 0 in a finite-cost situation, so I put f_y(x_0, x_1) = f_S(x_0). This then generalizes so that f_R is ternary, corresponding to each of the three feasible pairs. So I've got f_R — you see, I'm repeating this pattern here — f_R at (x_1, x_1), (x_1, x_0), (x_0, x_1): that's exactly the pairs here, except I'm reading them as lines; those are the minors of f_R that I've got here, and then I've got f_x and f_y in here, because here I've got x and y. So if you've heard any talks about the PCSP paper, this should be familiar, because this is exactly their construction, exactly the same trick. What this enforces is that if I'm solving this in projections, I will need to be feasible, in the sense that I'll need to have finite cost. Maybe the cost will be over 42, but at least it will not be infinity.
And you can see this by asking: how do you solve this in projections? Well, f_S has to be x_0, so f_y needs to be the first projection, because that's the only way to satisfy the last identity in projections. This propagates upwards: f_y is x_0 in the second line. What does that mean? Then f_R needs to be x_0 when I take this minor, so it really needs to be the projection to the second coordinate. And that finally propagates to the first line: f_x needs to be the second projection. So we see that there is actually only one feasible solution to what I set up here. I don't know what the cost will be, but at least it will be finite. It sends y to zero because of the constraint S, and then it sends x to the only other thing that matches with zero, which is one. So what I did here is like consistency checking in a really small case, but it hopefully demonstrates how things interplay. This is what enforces finite costs; and then, to make sure the costs are actually as in the CSP, what I'm going to do is change the α's and β's — I'm going to set them up so that the costs work, in the gray part of the slide. I just realized I've got a bit of a typo: α_i should correspond to the cost of the i-th tuple, which you get by looking at where you send x and y — you can send them to one of those three things, so I've got three possibilities here, and to each I associate a cost. For example, if I'm taking the second projection — that means I'm taking the second of these three choices — then α_R of the second projection should be the cost in C of (1,0), except I've got a typo there: it should be two times that; the two is inherited from the instance.
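The propagation argument above can be replayed by brute force. This sketch uses my own reading of the example's identities (an assumption about the slide, with the support pairs read coordinate-wise): f_x(x_0, x_1) = f_R(x_1, x_1, x_0), f_y(x_0, x_1) = f_R(x_1, x_0, x_1), and f_y(x_0, x_1) = f_S(x_0). We search every assignment of projections to the four symbols over domain {0,1}.

```python
from itertools import product

def proj(i, n):
    """The i-th of the n-ary projections (0-indexed)."""
    return lambda *xs: xs[i]

solutions = []
for ix, iy, ir, isv in product(range(2), range(2), range(3), range(1)):
    fx, fy = proj(ix, 2), proj(iy, 2)      # binary symbols for x and y
    fr, fs = proj(ir, 3), proj(isv, 1)     # ternary f_R, unary f_S
    ok = all(fx(a, b) == fr(b, b, a) and
             fy(a, b) == fr(b, a, b) and
             fy(a, b) == fs(a)
             for a, b in product((0, 1), repeat=2))
    if ok:
        solutions.append((ix, iy, ir, isv))

print(solutions)  # exactly one: fx = 2nd, fy = 1st projection
```

The unique solution reproduces the talk's propagation: f_S forces f_y to be the first projection, which forces f_R to the second coordinate, which forces f_x to the second projection.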
So that's where the numbers will show up, and α_R(1,1) will be the cost of the pair (1, 1), and so on. The betas are a bit more exciting: they give me the costs when I map everything to B, so when I try to solve this instance in D. It will be the cost in D of the homomorphism that sends these things somewhere; this is h applied to each line, the notation is a shorthand for that. Similarly I'll do it for α_S and β_S; that's where this funky number 57/3 would appear. For the variables I'm just going to have zero costs, so actually I don't need alphas and betas there at all. It's an exercise to show that this is compatible with weighted polymorphisms, because I'm defining the alphas and betas via the relations; it's just some straightforward verification. So, what do we have? Well, a cheap solution of the PVCSP in C will correspond to a cheap solution in projections; that's basically the previous slide together with these costs. On the other hand, if there is no cheap solution of the PVCSP in D, then there is no cheap solution of the promise minor condition problem. The argument is basically like two slides back: I go by contradiction, assuming there is a cheap solution of the PVMC problem, but then I have some way to make these betas be of low cost. I get these betas to actually map things into B, basically by using the values of these operations, and so I find that there actually was a cheap solution of the PVCSP. OK, I'm running a bit low on time, but I'm also running out of slides, so that's going to work.
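To show how the costs are meant to combine, here is a small hand-rolled evaluator for the running instance φ(x, y) = 2·R(x, y) + S(y). The concrete rational costs below are made up for illustration; only the feasible tuples match the example:

```python
from fractions import Fraction

INF = float("inf")

# Hypothetical cost functions on the domain {0, 1}: R is binary with
# feasible pairs (1,1), (1,0), (0,1); S is unary, feasible only on 0.
R = {(1, 1): Fraction(3), (1, 0): Fraction(1), (0, 1): Fraction(5)}
S = {(0,): Fraction(0)}

def cost(rel, args):
    return rel.get(args, INF)  # missing tuples are infeasible (cost +inf)

def total(assignment):
    x, y = assignment
    return 2 * cost(R, (x, y)) + cost(S, (y,))  # phi(x,y) = 2 R(x,y) + S(y)

print(total((1, 0)))  # 2: the assignment found by the projection argument
print(total((0, 0)))  # inf: R has no (0, 0) entry
```

The weights α and β on the slide play the same role: they attach such rational costs, scaled by the instance's coefficients (like the factor 2), to the possible projections.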
I need to sweep a lot of things under the rug for the final reduction: I need to reduce PVMC to PVCSP, so I need to somehow find a PVCSP instance that encodes PVMC. This will work for any n; previously we had to choose some big n, and here n can be anything you want, actually it's easier for small n. The input consists of a bunch of α_i's and β_i's together with some system of identities, Q, and so on. Here we need to show that we can simulate each α_i and β_i using a PVCSP instance; somehow we need a pair of relations for each α_i and β_i. What happens here is that I use Farkas' lemma to show that this is actually possible. But there are lots of fiddly things here; one fiddly thing is that the relations can take infinite values, so I need to be careful about those. Without infinite values this would basically just be Farkas' lemma and everything would work out; with infinite values of relations, that is, infeasible tuples, I need to be a bit careful. But it is possible to do the simulation, and it works just like we expect: again we want that PVMC being solvable in projections gives me a cheap solution of the PVCSP in A, and that if there is no cheap solution in B, then there is no cheap solution in the weighted polymorphisms. Here I didn't really tell you much, I just told you what to believe, but it can be done. And that's it; thank you for your attention. OK, so we've got about eight-ish minutes for questions. Any questions? Can I have a question, Alex? So, in BBKO, when they do the reduction from the minor condition to PCSP, this is the old-fashioned indicator problem construction, right? Do I remember it correctly? Yeah, yeah, OK. So, which one did you take? One of them is just like the indicator. Right, that's what I think.
So, did you then try to adapt the construction we have in, whatever it's called, the paper you mentioned at the beginning of the talk, where I think we call it the weighted indicator problem or something, sort of the natural adaptation of the indicator construction to the weighted setting? Did you try something similar here, since you mentioned Farkas' lemma? Yeah, I'm definitely getting inspired by that paper, so maybe I should actually go back to the 2013 paper and look at whether I can just take everything from you. The thing about this paper is that it has always felt in development; I've changed things around several times, hoping to get something as nice as possible. Mostly I was reading your later paper, I think it was with Thapper, about the Galois connection, and trying to make it work from there; but maybe I should actually go back to the roots and look at the 2013 paper. I think that paper is not written very well, but Libor had an MSc student a couple of years ago, and his master's dissertation has a nice exposition of these things; he got things nicely explained, so maybe that's a good source. I don't remember his name; Libor might. He explained it in a nice way. Oh, but I think, you know, this presentation of Alex's is the top one; I wouldn't go back. You mean without all the details? Well, unfortunately, I couldn't just make this into a paper. The concepts now seem just right. I hope so. So, yeah, unfortunately, the written proof is kind of always epsilon away from being finished.
Well, I was definitely looking at the previous papers and taking inspiration from them. I don't know how it felt to you when you were writing this 2013 paper, say, but to me it feels like this is much more complicated than the indicator problem. If you allow only finite values, then it's basically just Farkas' lemma and life is good; but when you've got infinite values, whenever I try to do this properly, it's always very annoying and fiddly. I agree, and as Libor said, I think this is the right way, a nice way. I was just asking how it relates to that old construction; I sort of assume it's very similar, just doing some tricks on top of it, but I guess I'll have to wait for the paper. So yeah, it's definitely basically the indicator problem; there is no new way to do the reduction, no brand-new approach. I'm trying to do it on my own, for good or bad, but the idea is, yeah, you could call it the indicator problem. I've got bonus slides where I try to explain it on an example, not sure if that answers your question. I would use Farkas' lemma to get something like this: because alpha and beta are compatible (they are compatible with something that's compatible with the relations), Farkas' lemma gives me that there are some relations and some tuples that witness that, so I will have something like this. And then I use that to cook up the CSP instance: the constants I get from Farkas' lemma, and the variable names I get from these values here. So it's probably something that you're familiar with seeing here. The only thing is, even on this bonus slide, I've swept under the rug the part where I need a constraint that ensures feasibility, so that I stay within this wPol, within this support minion.
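Since Farkas' lemma only ever enters through its certificates, here is a minimal sketch of what such a witness looks like; the matrix and the vector y below are toy data of my own, not anything from the bonus slide:

```python
from fractions import Fraction as F

# One common form of Farkas' lemma: exactly one of the following holds.
#   (i)  A x = b has a solution with x >= 0;
#   (ii) there is y with y^T A >= 0 (componentwise) and y^T b < 0.
# A vector y as in (ii) is a certificate refuting feasibility of (i).
def refutes(A, b, y):
    yA = [sum(y[i] * A[i][j] for i in range(len(A))) for j in range(len(A[0]))]
    yb = sum(y[i] * b[i] for i in range(len(b)))
    return all(c >= 0 for c in yA) and yb < 0

A = [[F(1), F(1)],       # x1 + x2 = 1
     [F(1), F(1)]]       # x1 + x2 = 2, contradicting the line above
b = [F(1), F(2)]
y = [F(1), F(-1)]        # y^T A = (0, 0) >= 0 and y^T b = -1 < 0
print(refutes(A, b, y))  # True: no nonnegative solution exists
```

In the reduction, the role of such certificates is to hand over concrete relations and tuples (the constants and variable names mentioned above) from which the PVCSP instance is cooked up; sticking to rationals keeps everything exactly representable, matching the talk's restriction to rational costs.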
And that's the most annoying part. The support minion is not the same thing as the polymorphisms from A to B, for one thing, which I'm sure Standa knows, but I'm saying this for the general audience. Thanks, Alex, that helps a lot. Thanks. Yeah, so the short answer is: it's roughly the indicator problem. Any other questions? I might actually have a question. When you define these weighted minion homomorphisms, you actually have two minions, right? You have the abstract minion of the weighted polymorphisms, and then there is the support minion; is that the function minion? Yes. Here, for simplicity, I kind of said that one slide works for everything. So my question is: you define the weighted minion homomorphism as a minion homomorphism on the support minions that somehow respects the weights, right? Yes, in the talk that's what I'm doing. As I said, in the full version I can also take a distribution over the function minion homomorphisms and make that into a weighted homomorphism, but I think that just overcomplicates things when explaining. So my question is: since the weighted polymorphisms are actually an abstract minion, how is this different from a minion homomorphism between the two abstract minions? I'm not sure which abstract minions you mean; you mean that instead of the function minion homomorphism, like in this first point, I should be taking an abstract minion homomorphism? No. If you consider the abstract minion of weighted polymorphisms, its elements are weighted polymorphisms, closed under taking minors in some way, whatever taking a minor means. Oh yeah, I can define taking a minor for weighted polymorphisms; that's not a problem.
Yeah, so that means it is an abstract minion, right? So I was asking what the difference is. Well, one reason why this is not obviously the same thing is that there might be (I actually don't know if there are, but there may be) some strange minion homomorphisms that don't respect things like taking linear combinations. Actually, this is an interesting question: maybe a minion homomorphism that also respects taking conic combinations, so sums and multiplication by non-negative numbers, would actually be enough. But I think a plain abstract minion homomorphism may result in something strange, something that doesn't respect the weights. I'm definitely not suggesting that it would work; I'm just asking what the difference is. Maybe your weighted minion homomorphisms are exactly the abstract minion homomorphisms that have some additional structure, like respecting the weights, as you say. Yeah, that's actually a great question, and it gives me an idea: maybe I should be defining this as a minion homomorphism plus weights. Maybe that would actually be better, so let me get back to you about that at some point. But I think that if you just take an abstract minion homomorphism, then there will probably be (I don't have one, but I expect it) some counterexample, something horrible that is not of the form in this last line, which tells you how to translate one minion homomorphism to another. I expect that if you assume only compatibility with minor-taking, you will get something that's not of this format, something that's kind of crazy. OK, thanks. OK, maybe we have time for another question.
Last question, if there is one. If there is none, then thanks. Thanks, Alex, once more, and thank you all for coming. Thank you all for your attention, and have a nice day or evening.