So hi everyone. Today's talk is by Marcin Wrochna on topology and adjunction in promise constraint satisfaction problems. Okay, thanks a lot and thanks everyone for coming. So first I'm going to introduce promise CSPs, in case you don't know them. The simplest example is promise graph coloring. For example, you're given a 3-colorable graph G and you want to find at least a 100-coloring; or in the decision version, you want to distinguish 3-colorable graphs from those that are not even 100-colorable. So you have a strict notion of satisfaction, 3-colorability, and some weakened notion of satisfaction, 100-colorability, and you want to distinguish between them. In this case, the best we know for polynomial-time algorithms is that we can distinguish, in polynomial time, 3-colorable graphs from those that are not colorable with roughly n^(1/5) colors. And that's really the best polynomial-time algorithm we know. As for hardness, the best we know is 3 versus 5: if you look at 3-colorable graphs, then the loosest relaxation we still know to be NP-hard is 3 versus 5 coloring. So there's a huge, huge gap between those two results. And the conjecture for now is that at least for constants it should be NP-hard. Probably it's hard for a much wider regime, but at least we'd like to prove it for constants: for any two constants, c versus c' coloring is NP-hard. We know this is true assuming a variant of the UGC, but well, we don't know about the UGC, and we don't know about variants of it. So we really don't know much about these kinds of problems. I'm going to talk mostly about graph homomorphisms, but of course it all generalizes to homomorphisms of general structures. Graph homomorphisms are functions from vertices to vertices which map edges to edges, so you can look at them as a kind of coloring. If you have a graph G and the graph C5, then a homomorphism from G to C5 can be thought of as a coloring as usual.
So the colors are the vertices of C5, and the edges of C5 give some kind of constraint. Or, the way I'm looking at it, it's a kind of embedding of G inside C5. This view turns out to be quite different and quite useful, at least for graph homomorphisms. I write G → H if there is any homomorphism, and I also say that G is H-colorable, because, as we know for cliques, it's the same: G is K_k-colorable if and only if it's colorable with k colors. Then PCSP(G, H), for general structures or for graphs G and H, is the following problem: given a G-colorable graph, can we at least find an H-coloring? It's always assumed that G has a homomorphism to H, so G-colorability is the stronger condition and H-colorability the weaker one, and we don't care about the cases in between; we just want to distinguish G-colorable graphs from those that are not even H-colorable. So that's the problem. And the conjecture here is that all the non-trivial cases are NP-hard. The trivial cases are when G or H is bipartite, or when one of them has a loop; all the other cases we conjecture to be NP-hard. The reason I like this conjecture is that on one hand it's quite general, and on the other hand it simplifies immediately. In this conjecture, without loss of generality, you can assume that G is a large odd cycle and H is a large clique, of any size. That's simply because any graph always has a homomorphism to some sufficiently large clique, and if it's not bipartite, then it always has a homomorphism from some odd cycle. So if you want to prove this conjecture, it suffices to focus on odd cycles and cliques, very simple graphs. But on the other hand, it's still a very general conjecture. On one hand, it generalizes the classical coloring conjecture, the clique versus clique case. And on the other hand, it generalizes the Hell–Nešetřil theorem, which is the G = H case.
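Since everything in the talk is phrased in terms of homomorphism existence, it may help to see the basic notion in code. Here is a minimal brute-force sketch (my own illustration, not from the talk; all names are mine) that checks whether a homomorphism G → H exists:

```python
from itertools import product

def cycle(n):
    """The cycle C_n as (vertex list, set of frozenset edges)."""
    return list(range(n)), {frozenset({i, (i + 1) % n}) for i in range(n)}

def clique(n):
    """The clique K_n."""
    return list(range(n)), {frozenset({i, j}) for i in range(n) for j in range(i)}

def has_hom(G, H):
    """Is there a graph homomorphism G -> H?  Tries every map from
    V(G) to V(H) and checks that every edge lands on an edge."""
    (VG, EG), (VH, EH) = G, H
    for img in product(VH, repeat=len(VG)):
        f = dict(zip(VG, img))
        if all(frozenset({f[u] for u in e}) in EH for e in EG):
            return True
    return False
```

For example, `has_hom(cycle(5), clique(3))` holds (C5 is 3-colorable), while `has_hom(clique(3), cycle(5))` does not (C5 is triangle-free); a longer odd cycle always maps to a shorter one, never the other way.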
So we understand quite well why that holds for CSPs with undirected graph templates, but still, even the shortest proof is, to some extent, a case analysis on the structure of the graph. Whereas if we prove this more general statement, the conjecture above, then there really isn't any case analysis you can do on odd cycles and cliques. So you would have to prove it using quite different methods, it seems. That's one reason for studying this specific conjecture; and beyond it there are promise CSPs in general, approximation in general, and the questions associated with them. And this is the main result I want to show. The first half of the talk is the following: we prove, essentially, the left half of this conjecture. We proved that PCSP(G, K3) is NP-hard for all graphs G for which the problem makes sense, that is, all non-bipartite graphs G which admit a homomorphism to K3. So if you look at the homomorphism order, every graph here is comparable to these, and what we're proving is the left half: if you pick any two graphs in here, then the PCSP problem you get is NP-hard. So let me say a few words about the algebraic approach, because that's the approach we take to prove hardness results here. We study polymorphisms, just as for classical CSPs; we study polymorphisms, and the set of all polymorphisms has some structure. For every integer n, we have the set of n-ary polymorphisms, and in the case of promise CSPs these are defined simply as homomorphisms from G × G × ⋯ × G, the n-th power of G, into H. So those are the polymorphisms; we have a bunch of such polymorphisms. And then really the only thing we can do with them is take minors.
So for every function π from a finite set [n] to a finite set [k], we can turn n-ary polymorphisms into k-ary ones, just by the usual definition: if you have a polymorphism f of arity n and this function π, then you can define the minor of f through π with this very simple definition. So we say that f^π is a minor of f, and really it's just the composition of f with the homomorphism from G^k into G^n that's induced by π. So there's not much to work with, but that's the structure we have for promise CSPs. We cannot compose those polymorphisms with each other; we can only take minors of them. But that's enough to study minor conditions. Minor conditions are simply systems of equations of the form f^π = g: the minor of f through π is equal to g. In general, we have a bunch of such equations. So there's a digraph with some symbols f, g, h, for example, and some arrows π, τ, π′. The symbols have specified arities, some integers, and the arrows have specified functions between the corresponding sets. Maybe a better way to depict this is as a diagram where the nodes are finite sets and the arrows are functions between those finite sets; if you really want, you can label these nodes with the symbols f, g, h. And we say this minor condition is satisfiable in Pol(G, H) if we can map those symbols to appropriate polymorphisms: if you can find polymorphisms, f of arity n and so on, in Pol(G, H) such that all of the equations described by those arrows are satisfied. So that's what it means for a minor condition to be satisfied in this structure of polymorphisms. On the other hand, the same thing, the same diagram that we have here, is an instance of label cover, and we can ask whether it's satisfiable as a label cover instance.
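As a concrete illustration of taking minors (my own sketch, not the talk's notation): given an n-ary function f and a map π from [n] to [k], the minor f^π is the k-ary function obtained by feeding each input of f through π.

```python
def minor(f, n, pi):
    """Given an n-ary function f and pi: [n] -> [k] (a list of length n
    with entries in range(k)), return the k-ary minor
    f_pi(x_0, ..., x_{k-1}) = f(x_{pi(0)}, ..., x_{pi(n-1)})."""
    def f_pi(*xs):
        return f(*(xs[pi[i]] for i in range(n)))
    return f_pi

# A 3-ary example: majority on {0, 1}.
def maj(x, y, z):
    return 1 if x + y + z >= 2 else 0

# Identify the first two inputs: pi sends coordinates 0, 1 to 0 and 2 to 1.
g = minor(maj, 3, [0, 0, 1])
# g(x, y) = maj(x, x, y), which is just the projection onto x.
```

Identifying inputs is all we can do; there is no composition of polymorphisms here, exactly as the talk stresses.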
It's satisfiable as a label cover instance if you can pick an element from each of those finite sets, from each of those domains if you will: you pick an element a_f from the domain of f, the arity set of f, such that if you look at any arrow, say the arrow π from f to g, then π(a_f) has to be equal to a_g. The thing you pick for g has to be the image of the thing you pick for f, as defined by the diagram. So these minor conditions, you can think of them as equations about polymorphisms, or you can think of them as instances of label cover. And the idea is that in many cases polymorphisms behave very similarly to just choosing one coordinate out of n, and because of that they're similar to label cover, and because of that they're NP-hard. In particular, if something is satisfiable as a label cover instance, then it's satisfiable in polymorphisms. Whatever the structures G and H, you always have the projection polymorphisms, which are defined by projecting G^n onto one of the n coordinates and then composing with some arbitrary homomorphism from G to H; the choice of this homomorphism doesn't matter. So if something is satisfiable as a label cover instance, it's satisfiable in any polymorphism minion, and in this sense it's trivial. And it turns out that these minor conditions are the only thing we need to study. The theorem here is that the problem PCSP(G, H) is log-space equivalent to distinguishing minor conditions that are satisfiable as label cover instances on one hand, from conditions that are not even satisfiable in polymorphisms on the other. So satisfiability as label cover is the very strict notion of satisfaction, and satisfiability in polymorphisms is a much weaker, much looser notion. So again a promise problem, but now we don't have the structures G and H anymore; we just look more abstractly at polymorphisms and the equations they satisfy. And as I said, the idea is that in hard cases these polymorphisms should be similar to projections.
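The label cover view of a minor condition can be checked by brute force. A small sketch (again my own encoding): pick one element per domain and check every arrow.

```python
from itertools import product

def lc_satisfiable(domains, arrows):
    """domains: dict mapping each symbol to its arity set (a list);
    arrows: list of triples (f, g, pi) where pi is a dict sending
    elements of domains[f] to elements of domains[g].
    Satisfiable iff we can pick a[s] in domains[s] for every symbol s
    such that pi[a[f]] == a[g] for every arrow (f, g, pi)."""
    syms = list(domains)
    for choice in product(*(domains[s] for s in syms)):
        a = dict(zip(syms, choice))
        if all(pi[a[f]] == a[g] for f, g, pi in arrows):
            return True
    return False
```

A single arrow is always satisfiable; but two parallel arrows from f to g whose maps disagree everywhere, say the identity and the swap on {0, 1}, are not, since a_g cannot equal both images of a_f.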
So it's similar to just choosing one of the n coordinates. And then the problem you get here, it's just label cover. It's nothing else than label cover, exactly label cover, and that's why these problems are NP-hard. And this problem here, distinguishing between conditions satisfiable here and not even satisfiable there, we call it PMC, for promise minor condition, and it is parametrized by the minion of polymorphisms. So let's apply this abstract theory to proving that the PCSP is NP-hard in the case of odd cycles versus C3 (and C3 is the same as K3). So we look at polymorphisms, that is, graph homomorphisms from C_k^n into C3, and we want to prove that somehow they behave like projections. Choosing such a polymorphism should be essentially the same as choosing an element of [n], or just a few elements of [n], for example. And the main plan, the main idea here, is that in this case, when k is fixed and we look at polymorphisms of very, very large arity n, much, much larger than k, then these polymorphisms, I claim, look like a junta, a function that depends on only a few inputs, plus some noise. And the way we smooth out this noise is by looking at the polymorphism as a continuous function: we look at it as a continuous function, up to continuous transformations. It turns out this will forget all the noise, and all we're left with is a function that indeed depends on only a few inputs. And because of that, it's again similar to projections, similar to a gap label cover instance, and because of that, the problem will be NP-hard. Okay, so we want to look at a polymorphism, or any graph homomorphism, as a continuous function. How do we do this? Well, the crucial construction here is the box complex. The box complex is something that assigns, to every graph, a topological space: to a graph G, it assigns a topological space Box(G).
For example, for a cycle, an odd cycle, the box complex will be a circle. Or if you start with a clique on k vertices, the box complex is a sphere of dimension k − 2. And if you have a graph homomorphism from G to H, then this corresponds to a continuous map Box(f) from Box(G) to Box(H). The exact construction of Box(G) is not that important; here you have two examples. The idea is roughly that you duplicate every vertex, and then for any set of white vertices and black vertices which are fully adjacent, you add a face between them. That's the rough idea. For example, for K3, you get a complex which looks like this, and for topologists it's just equivalent to a circle. But really, the only important thing about this construction is that it behaves well with respect to products. So, for example, if you look at a product of a few odd cycles, like in our case, what you get is a torus, the product of the circles. In general, from a product of n odd cycles you get the n-dimensional torus. And finally, I have to add, this is slightly more technical, but in order to say that some topological maps are not trivial, it's useful to look not just at topological spaces, but at topological spaces equipped with an involution. So we have a function, minus, which swaps some vertices; here it swaps white and black vertices. And this gives some additional structure on the topological space. So the box complex is not just a topological space, it's a topological space with an involution. And from a homomorphism, we don't just get a continuous map, we get a continuous map which respects the involution: f(−x) is the same as −f(x). Okay, so that's the box complex. So what does it mean for our case? We look at homomorphisms from C_k^n to C3. So we have this graph homomorphism, and if you apply the box complex to it, you get a continuous map from an n-torus to a circle, as we said.
So what we have is a map from an n-torus to a circle, and how do we analyze it? Well, first we can look at maps from just a circle to a circle, the simplest, one-dimensional case. Maps from a circle to a circle, up to continuous transformations, are characterized by the winding number, or the degree, of the map. That's just the number where you count how many times the input circle winds around the output circle. Then in general, if you have a map from a torus to a circle, you can look at various circles inside the torus. So you have the meridional and the longitudinal cycle on the two-dimensional torus. And in general, if you look at an n-dimensional torus, then for every coordinate i in [n] you have the degree of that coordinate, which is defined by looking at the corresponding circle in the n-torus and at what the function f does on that circle. Formally, we look at the map x ↦ f(0, …, 0, x, 0, …, 0), where x runs along the i-th input circle and 0 is just some constant point of the circle. So we look at these different circles on the torus and we evaluate these degrees, and this gives us, for every coordinate i in [n], the degree of that coordinate. And it turns out this degree behaves very nicely with respect to minors. That's the important thing: because the box complex behaves well with respect to products, and because these are all very natural algebraic constructions that respect products, the degrees behave well with respect to minors as well. So if we have this function f with many, many inputs, we can look at its minors of arity 2. Those are functions of the form (x, y) ↦ f(x, …, x, y, …, y), for example. And then we can measure the degree, say, in the first coordinate, the x input. And it turns out, because those definitions agree with products, that this is just the sum of the degrees of those coordinates which are x here.
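The winding number has a simple discrete analogue for homomorphisms from C_n to C_3: each edge of the input cycle moves one step forward or backward around the triangle, and the signed steps sum to 3 times the degree. A small sketch (my own discretization, not the talk's formal definition):

```python
def degree(f):
    """Winding number of a homomorphism from C_n to C_3, given as a
    list f with f[i] in {0, 1, 2} and consecutive entries (cyclically)
    adjacent in C_3.  Each step is +1 or -1 around the triangle; the
    total is divisible by 3 and the quotient is the winding number."""
    n = len(f)
    total = 0
    for i in range(n):
        step = (f[(i + 1) % n] - f[i]) % 3
        assert step in (1, 2), "not a homomorphism to C_3"
        total += 1 if step == 1 else -1   # step 2 means -1 mod 3
    assert total % 3 == 0
    return total // 3
```

For example, `degree([0, 1, 2])` is 1, a homomorphism from C_9 going around three times has degree 3, and for odd n the degree is always odd, which is the discrete shadow of the parity fact the proof uses.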
Okay, so the degree of a coordinate behaves very well when you identify some of the coordinates: it just sums the degrees together. On the other hand, if you look at all possible minors of arity 2, it turns out there are only finitely many possible. There are only so many you can have, because all those minors, in the end, come from graph homomorphisms. If you look at arity 2, there's only a bounded number of graph homomorphisms like this you can have, at most 3^(k²) many, and that's independent of n. So even if you start with an f of very, very large arity, there are only finitely many binary minors you can get, and because of that, there are only finitely many degrees you can see. Okay, so this means that if you look at those sums, there's only a bounded number of such sums you can get here. And that means that only a bounded number of degrees can be non-zero. If you had many non-zero ones, then you could switch them on and off here and get too many different sums; since you have only a bounded number of sums, there's only a bounded number of degrees that can be non-zero. So there are only a few coordinates i in [n] whose degree is non-zero, when n is much larger than k. On the other hand, we should also check that some degree is non-zero, otherwise everything here would be trivial and nothing could follow from it. And this is where we use the equivariance. If you look at maps from a circle to a circle which are equivariant, so f(−x) = −f(x), it's a basic fact of topology that those maps have odd degree; they have odd winding number. And this means that if I add up all the degrees together, what I get is an odd number. And in particular, that implies that some of the degrees have to be non-zero. And that's really the whole proof.
So this is how it goes: we take a homomorphism, a graph homomorphism from C_k^n to C3, and we look at it as a continuous function. We look at the basic algebraic invariants that characterize such continuous functions up to continuous transformations: those degrees. And it turns out that if you look at what constraints those degrees satisfy, there are only a few coordinates with non-zero degree, so only a few essential coordinates, in a way. In this way, choosing a polymorphism like this is equivalent to choosing just a few elements of the set [n]. And because of this, the variant of label cover we get, this PMC problem for this polymorphism minion, turns out to be equivalent to, or at least as hard as, a version of gap label cover. And this concludes the proof that PCSP(G, K3) is NP-hard for all G, well, all G that have a homomorphism to K3. Okay, so that's the idea of the topological proof that the left half of this conjecture is true. For the second part of the talk, I want to talk about adjunction, and we will see a connection to the previous part in a moment. But first, two definitions: thin functors and thin adjunctions. A thin graph functor is just a construction, a function from graphs to graphs, such that whenever you have a homomorphism from G to H, this implies a homomorphism from Λ(G) to Λ(H). And "thin" here just means that we only care about the existence of homomorphisms. We don't care about anything about compositions or respecting products or anything like this. We just ask that if there exists a homomorphism here, then there has to be a homomorphism there as well. Examples of such constructions are the one I denote by Λ_k, just subdividing edges, replacing every edge with a path of k edges (here I have an example), and another example is Γ_k, which puts an edge between the endpoints of every k-walk.
So if you look at any two vertices, if they are connected by a walk of length k, you add an edge on top of them. So that's like the k-th walk power of G, and I denote it by Γ_k. Thin adjoints are defined as follows. Two such constructions, Λ and Γ, are thin adjoints if we have the following property: there exists a homomorphism Λ(G) → H if and only if there exists a homomorphism G → Γ(H). So there's this way to swap Λ on the left with Γ on the right. And for example, the two constructions here, the subdivision Λ_k and the k-th walk power Γ_k, satisfy this, so they are thin adjoints. More generally, this subdivision is an example of gadget replacement: we replaced every edge with a gadget, here a path. And Γ_k is an example of a pp-power: we define edges by a primitive positive formula. This is one of the most fruitful ways to get thin adjunctions: with every gadget replacement, you have an adjoint pp-power. So Λ_3 and Γ_3 are one example. But in general, the reason thin adjoints are interesting to us is that this is essentially the same as saying that Λ is a reduction from this PCSP to that PCSP. And this is very easy to see; it's just trivial from the definitions. Namely, if you have an instance I and you are promised that it admits a homomorphism to G, then because Λ is a functor, Λ(I) has a homomorphism to Λ(G), so it satisfies the corresponding promise. And on the other hand, if the reduced instance Λ(I) admits a homomorphism to H, then by the definition of adjunction, I has a homomorphism to Γ(H). So indeed, this is a correct reduction from the one PCSP to the other. And this is really the same thing: it's stated as a property of graphs, but it's the same as Λ being a reduction between those PCSPs.
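The Λ_k ⊣ Γ_k adjunction can be checked by brute force on small graphs. Here is a sketch under my own encodings (not from the talk): subdivide each edge into a path of k edges, take the k-th walk power, and compare homomorphism existence on both sides.

```python
from itertools import product

def cycle(n):
    return list(range(n)), {frozenset({i, (i + 1) % n}) for i in range(n)}

def has_hom(G, H):
    """Brute force: homomorphism G -> H?  A frozenset {u} is a loop."""
    (VG, EG), (VH, EH) = G, H
    for img in product(VH, repeat=len(VG)):
        f = dict(zip(VG, img))
        if all(frozenset({f[u] for u in e}) in EH for e in EG):
            return True
    return False

def subdivide(G, k):
    """Lambda_k: replace every edge by a path of k edges."""
    VG, EG = G
    V, E = list(VG), set()
    for e in EG:
        u, v = sorted(e)
        path = [u] + [(u, v, i) for i in range(1, k)] + [v]
        V.extend(path[1:-1])
        E.update(frozenset(p) for p in zip(path, path[1:]))
    return V, E

def walk_power(H, k):
    """Gamma_k: edge uv whenever H has a walk of length k from u to v."""
    VH, EH = H
    adj = {v: set() for v in VH}
    for e in EH:
        u, v = (tuple(e) * 2)[:2]   # a loop {u} yields (u, u)
        adj[u].add(v)
        adj[v].add(u)
    E = set()
    for v in VH:
        cur = {v}
        for _ in range(k):
            cur = set().union(*(adj[w] for w in cur)) if cur else set()
        E.update(frozenset({v, w}) for w in cur)
    return list(VH), E
```

On small cases the adjunction is visible directly: Λ_3(C_3) is the 9-cycle, which maps to C_3 just as C_3 maps to Γ_3(C_3), while with C_4 on the right both sides fail (an odd cycle has no homomorphism to the bipartite C_4, and Γ_3(C_4) is just C_4 again).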
So that's why we care about adjunctions. More precisely, the Γ(H) here is like the strongest graph for which we can show that this reduction holds. And in fact, even if you don't know what Γ(H) is, if you just have some Λ defined and wonder how good a reduction it is, you can always take Γ(H) to be the infinite disjoint union of all structures I such that Λ(I) admits a homomorphism to H. So it's kind of trivial; it's just a name for the structure, and this opaque description as an infinite graph is not very helpful. But the interesting thing is that there are many examples of adjoint functors that people have studied for other reasons, which are useful for this purpose and which have very nice descriptions. For example, there are the constructions here, the gadget replacement and the pp-power, specifically the edge subdivision and the k-th walk power. Those are very simple examples of adjunctions, and because of that, we immediately get some examples of reductions between PCSP problems. And it turns out that when we studied a few examples of such adjunctions, known from the literature on graph homomorphisms, quite a few of the reductions we observed turned out to be surprisingly powerful. So let's look first at the example of Γ_k. The k-th walk power Γ_k, as I said, has this left adjoint, the subdivision Λ_k. But it's known in graph theory and the study of graph homomorphisms that it also has a right adjoint, Ω_k. So Γ_k has a left adjoint Λ_k, but it also has a right adjoint Ω_k. The description is quite technical, but the intuition is that Ω_k(G) behaves like a topological subdivision at the level of the box complex; it blows up your graph into a larger and larger balloon.
Or, if you have any surface, and you think of the graph as triangulating or quadrangulating that surface, then Ω_k makes the triangulation denser and denser as k increases. So formally, we have the following statement: the box complex of Ω_k(G) is equivalent, up to continuous transformations, to the box complex of G. And for example, this can be used to show that the chromatic number of Ω_k applied to a clique K_n is at least n, just by the Borsuk–Ulam theorem: this graph, K_n, as I mentioned before, looks like a sphere, and Ω_k just blows up the sphere and makes a denser and denser triangulation of the very same sphere. As a topological space, we get the same topological space; it's still a sphere of dimension n − 2. And because of that, it cannot have a homomorphism into a smaller clique. That's by the Borsuk–Ulam theorem, because you cannot have an equivariant map from a higher-dimensional sphere to a lower-dimensional sphere. So you can use this to show that the chromatic number of this graph is at least n, and this can be used to prove, for example, counterexamples to Hedetniemi's conjecture; the smallest counterexamples we know follow from this construction. So that's one of the many reasons people study this construction. It's actually much, much stronger, in that any continuous map from the box complex of G to the box complex of H, any such continuous map between those topological spaces, as twisted as you want, can be turned into a graph homomorphism, not from G to H, but from Ω_k(G) to H. So, like I said, you get a denser and denser triangulation, and this eventually allows you to represent this continuous map as an actual graph homomorphism. And we use this property to show that, in a sense, only topology matters.
So these are maybe abstract properties, topological properties, but as you see they have implications for purely graph-theoretic questions, like Hedetniemi's conjecture, and they also have very nice implications for studying PCSP problems. Namely, we get the following theorem. Say H is a graph such that PCSP(G, H) is NP-hard for every suitable graph G. So PCSP(G, H) is always NP-hard, right? That's the property we showed for H = K3. And say we have a graph H′ which has the same topology, that is, whose box complex has the same topology. Then I claim we can show that H′ again has this property: PCSP(G, H′) is NP-hard for all G. So this means that the property we proved for K3 only depends on the topology of the box complex of K3; we really only relied on the fact that the box complex of K3 is a circle. And indeed, you can see from the proof for the case of K3 that we only really relied on its topology being a circle, but this is actually true in general: whatever graph you are able to prove this for, you can replace it with any graph of the same topology and get the same result. So this strongly suggests that topology should be the right way to prove these kinds of results. As for the proof, I don't want to go into the details, but just to show you: it's a very simple application of the definitions and the properties of the functor Ω_k that I mentioned. This is the whole proof. So it's a very simple proof from some basic algebraic principles, plus this very strong property of Ω_k, this kind of magical functor with these very nice properties. So that's one example, but another very interesting example is the arc-digraph construction. So again, it's a different construction. It's similar to the line graph, but it's defined for digraphs.
So for a digraph D, we construct a digraph δ(D) as follows. Its vertices are simply the arcs of D, and the arcs of δ(D) are the pairs (u, v), (v, w) of arcs of D: pairs of arcs in the original graph whose endpoints match. So it's like the line graph for undirected graphs, but defined for directed graphs. And the striking property of this construction is that it decreases the chromatic number in a controlled way. By the chromatic number of a digraph I just mean the chromatic number of the underlying undirected graph. Or, in other words, formally the property is the following: for any undirected graph G, look at it as a directed graph with arcs in both directions, and apply δ. Then δ(G) admits a homomorphism to K_n if and only if G is colorable with (n choose ⌊n/2⌋) colors. So in other words, the chromatic number of δ(G) is determined by the chromatic number of G; namely, it's the smallest n such that (n choose ⌊n/2⌋) is at least the chromatic number of G. So, in other words, this means that the chromatic number of δ(G) is roughly logarithmic in the chromatic number of G. And it's exactly known: you can actually specify a function here, exactly some function of the chromatic number of G. But the main point is that it decreases the chromatic number of the graph in a controlled way. On the other hand, you see this is very similar to an adjunction, and in fact, it is an adjunction. So in fact, this construction δ has a right adjoint, and the right adjoint in this case, when applied to a clique K_n, turns out to be homomorphically equivalent to a very nice graph, namely another, larger clique, of size (n choose ⌊n/2⌋). And by this property of adjunction, by the very simple argument I showed before, this means that we have a reduction from PCSPs on those larger cliques to PCSPs on those smaller cliques.
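The arc-digraph construction is easy to state in code. A minimal sketch (my own encoding; the chromatic-number computation is naive brute force, only feasible on tiny examples):

```python
from itertools import product

def arc_digraph(D):
    """delta(D): vertices are the arcs of D, with an arc from
    (u, v) to (v, w) whenever the head of one is the tail of the other."""
    V, A = D
    return list(A), {(a, b) for a in A for b in A if a[1] == b[0]}

def chromatic_number(D):
    """Chromatic number of the underlying undirected graph, brute force."""
    V, A = D
    edges = {frozenset(a) for a in A if a[0] != a[1]}
    for k in range(1, len(V) + 1):
        for col in product(range(k), repeat=len(V)):
            c = dict(zip(V, col))
            if all(c[u] != c[v] for u, v in map(tuple, edges)):
                return k
    return 0

def sym(n):
    """K_n as a symmetric digraph: arcs in both directions."""
    return list(range(n)), {(i, j) for i in range(n) for j in range(n) if i != j}
```

For K_3 this gives χ(δ(K_3)) = 3, matching the smallest n with (n choose ⌊n/2⌋) ≥ 3; the decrease the talk describes only kicks in for larger cliques (for instance the stated formula predicts χ(δ(K_m)) = 5 for m = 7, …, 10), which is beyond what this brute-force check can reach.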
Okay, so just for free, just by applying this construction and this property known from the study of graph homomorphisms, we get this very interesting reduction: if you know that the PCSP problem for large chromatic numbers is NP-hard, then this implies that the problem for smaller chromatic numbers is NP-hard as well. And this allows us to show two interesting things. One thing it allows us to do is improve the state of the art for the NP-hardness of PCSP(K_n, K_k) problems. Namely, the state of the art before was Huang's theorem, that this is NP-hard if n is large enough and k is something roughly exponential, like 2 to the n^(1/3). That was the largest gap for which we knew this problem is NP-hard, something roughly exponential. And by applying this reduction over and over, the exact proof is pretty technical, but the idea is just to apply it as much as possible, what we get is the following: PCSP(K_n, K_k) is NP-hard for any k that's at most (n choose ⌊n/2⌋) − 1 and any n that's at least 4. So that's interesting for two reasons. One is that this now applies even to very small n, even to n = 4. And secondly, this function is stronger than Huang's bound: it's roughly 2^n over the square root of n, so a larger gap. So just with this very simple construction, using Huang's theorem as a black box, we were able to improve it to something more. And another corollary of this reduction is the following. Say we want to consider the property of a graph G that PCSP(G, H) is NP-hard for all suitable graphs H. This is similar to the property we proved for K3, but the other way around: now G is the fixed graph, and we ask, is it true that PCSP(G, H) is NP-hard for every graph H? And the thing you can show is that if there is any such graph at all, then K4 is such a graph.
In fact, later we were able to improve K4 to K3. The idea of the improvement from K4 to K3 is simply that if you apply the arc-digraph construction twice, δ(δ(K4)), without forgetting about the orientations when applying the second δ, then the digraph you get is homomorphically equivalent to K3. And that's what allows you to reduce the K4 case to K3. In general, the property we use is that whatever graph or digraph you have, you can apply δ many, many times, and if you do that sufficiently many times, you will get a digraph that is 3-colorable, a digraph that admits a homomorphism to K3. And then this result follows very, very easily. So that's what we were able to get from this very simple reduction. So, a question, Marcin? Yeah. So is there a sense in which this δ is, like, extremal? Could there be another miraculous adjoint which allows you to go even bigger than (n choose n/2)? Is there anything known in that regard? Yeah, so that's a very good question, and we really have no idea. So that's the problem. These kinds of adjoint graph constructions were studied before in graph theory, and they have many, many different applications. But the situation is that we have those two examples, the arc-digraph construction δ and this Ω_k construction, which prove very interesting results, and we don't know many more interesting examples. Somehow it's a very nice theory, a very nice definition, with surprisingly nice corollaries, but it doesn't tell you where to look for interesting constructions, unfortunately. And yeah, specifically for this question, I don't know where to look for a better construction. We know that for δ, if you apply δ over and over, then K3 is the best undirected graph for which you can prove this. So this is the best you can do with this specific construction.
Actually, if you look at digraphs, then delta of K3 is something even smaller in this ordering. So you can prove something slightly stronger, namely that PCSP with delta K3 is hard, and delta squared K3 is hard, and so on; you can apply delta as many times as you want. But this will always get you a digraph that contains directed cycles. So it's only interesting if you care about directed graphs; if you care about undirected graphs, this is the best we can do with this construction. And we don't know any other construction which would do something similar, something interesting like this. Thanks, yeah. Yeah, so those are the open problems; this is precisely one of the big open problems. We have this nice definition of adjoint functions, and we know that many simple examples give us very powerful reductions. But the question is, are there other interesting examples of such adjoint functions? More specifically, we know that in those interesting cases I mentioned, omega_k is the right adjoint of a function gamma_k, which is itself a right adjoint to lambda_k. So we actually have triples of adjoint functions. And the same is true for delta. And it's interesting to look at these triples of adjoint functions and to ask whether any more exist. And there's not much known about it; it seems we should look at more interesting constructions than just gadget constructions. So another open question is, we used Huang's hardness result here as a black box. I think it would be very interesting to look at that proof, apply delta there directly, and see if those two proofs can be merged into one, and whether that gives something more powerful. It could be hard, because Huang's proof is not based on the algebraic approach, but more on analytic properties of polymorphisms. But I think it's a very interesting question.
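To illustrate what an adjoint pair of graph constructions buys you, here is a toy check with a different, classical pair (not the delta or omega_k from the talk; I chose it only because it is easy to verify by brute force): the k-subdivision Sub_k, which replaces each edge by a path of k edges, and the k-th walk-power, which joins vertices linked by a walk of length exactly k. These satisfy the hom-equivalence Sub_k(G) -> H iff G -> H^k, which is exactly the shape of statement that transfers hardness from one PCSP to another. All function names are mine:

```python
from itertools import combinations

def clique(n):
    return {frozenset(e) for e in combinations(range(n), 2)}

def cycle(n):
    return {frozenset({i, (i + 1) % n}) for i in range(n)}

def adjacency(edges):
    adj = {}
    for e in edges:
        u, v = tuple(e)
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def subdivide(edges, k):
    """Sub_k(G): replace each edge by a path with k edges (k-1 fresh vertices)."""
    out = set()
    for e in edges:
        u, v = sorted(e)
        path = [u] + [(u, v, j) for j in range(1, k)] + [v]
        out.update(frozenset(p) for p in zip(path, path[1:]))
    return out

def walk_power(edges, k):
    """H^k: u ~ v iff some walk of length exactly k joins u and v (u != v)."""
    adj = adjacency(edges)
    reach = {v: {v} for v in adj}
    for _ in range(k):
        reach = {v: {w for u in reach[v] for w in adj[u]} for v in reach}
    return {frozenset({u, v}) for u in reach for v in reach[u] if u != v}

def has_hom(g_edges, h_edges):
    """Backtracking search for a homomorphism G -> H (G assumed connected;
    BFS order ensures each new vertex has an already-placed neighbor)."""
    g, h = adjacency(g_edges), adjacency(h_edges)
    start = next(iter(g))
    order, seen = [start], {start}
    i = 0
    while i < len(order):
        for w in g[order[i]]:
            if w not in seen:
                seen.add(w)
                order.append(w)
        i += 1
    img = {}
    def bt(i):
        if i == len(order):
            return True
        v = order[i]
        for c in h:
            if all(c in h[img[u]] for u in g[v] if u in img):
                img[v] = c
                if bt(i + 1):
                    return True
                del img[v]
        return False
    return bt(0)
```

For instance, Sub_3(K3) is a 9-cycle, so it maps to C9 but not to C11; correspondingly, K3 maps into the 3rd walk-power of C9 (the triangle 0, 3, 6) but not into that of C11. The two sides of the equivalence agree in both cases.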
Another question here is that this is not the first time topology has been used, even for promise CSPs. There's a very prominent example by Dinur, Regev and Smyth, where they also used the Borsuk-Ulam theorem, to prove hardness of hypergraph coloring. And for other notions of hypergraph coloring there are similar ways to apply topology. But all those ways seem to be very different from the way we applied topology. So it would be interesting to find a common denominator of those proofs, and so far we were unable to find anything like this. And then finally, for the topological proofs, it would be very interesting to push them further, namely to PCSP for odd cycles versus the next clique, K4. This would mean that we would have to understand maps from tori into S^2. Or, if you take the equivariance into account, maybe a better way is to think of it as maps from tori to the projective plane. And, well, we're working on that, and hopefully, if we understand the topology well enough, we can prove something. But then the problem is, we hope we can do something for K4 and so on, but as soon as you get to K5, there's already a problem, namely that if you look at the continuous maps that correspond to polymorphisms, even just the projections, then the continuous maps you get, up to continuous transformations, satisfy all minor conditions. So that means that if you forget all the combinatorial information, the function you get doesn't carry any interesting information anymore, and it's useless for proving hardness. But somehow we proved that those things only depend on topology, that the hardness of this problem only depends on the topology. So it's not quite a contradiction, but it's something I think we don't quite understand.
We don't know how to use topology in a different way, where we don't just look at the topology of one function but do something different; it's really an open question what we could do differently in those cases. So that's all I have to say, so thanks all for listening. Thanks, Marcin. Other questions? So maybe this is what you said in the first point here about triples, I might have missed it: is there a theory of adjunctions for higher-arity relations, which could shed light on some other problems? You can use the same definitions for other relational structures. Everything works the same: adjointness of constructions on relational structures will imply all of this, a reduction between PCSP problems. But the examples of such functions that people have studied all concern graph homomorphisms, which are studied a lot, while hypergraph homomorphisms are not so much. I don't know of any natural example, but yeah, it's very underexplored; maybe there's some very simple example of functions on hypergraphs that would prove interesting results as well. Some more questions? I might actually have a question in the meantime. Do you think there might be some quadruples of adjoint functions? No idea. It's really hard to look for them; it seems it's not the right approach to take this very general definition of adjoint functions and ask what all the possible adjoint functions are. There is this work of Tardif and others that shows that if you add some restrictions, like if you require the left and right functions to be actual adjoints in the sense of category theory, and then ask about further adjoints, then there are only a few examples. But yeah, in general, there's no need to assume the restriction that was there in that paper, and then I have no idea where to even start looking for such quadruples.
There is this idea to look, instead of at graph constructions that replace an edge with a gadget or a gadget with an edge, like in pp-powers, at constructions that replace gadgets with gadgets; this is something you mentioned at some point, but yeah, it's not that we have to... I think this replacing a gadget with a gadget is basically what Tardif does, I think. I know there are some very general principles in category theory that say that you cannot have more than five or something like this, but yeah, I don't know, really. You cannot have more than five adjoints? I'm not exactly sure, but I think it works for any category. Okay. For some reason, you cannot have more than five. Interesting. Okay, but I don't know any example for the case of graphs. Maybe before we get too deep into category theory, are there some more questions? If not, I guess thanks, Marcin. Thanks a lot.