We start by defining the not-all-equal predicate. As the name suggests, the NAE predicate evaluates to one when the inputs are not all equal, and to zero when the inputs are all equal. Here the inputs are boolean variables, and we assume they take values in {-1, 1}. The Max NAE-SAT problem is just the constraint satisfaction problem defined by the NAE predicate. More specifically, an instance of Max NAE-SAT consists of a finite set of boolean variables x_1, ..., x_n and a finite set of clauses, where each clause is a not-all-equal predicate applied to a subset of variables or negated variables. The goal is to find an assignment that satisfies as many clauses as possible.

A few remarks. We can assign a non-negative weight to each clause, and we usually normalize so that the weights sum to one; this lets us talk about things like sampling a clause, the expected value of a solution, and so on. We can also restrict the problem to certain clause lengths: for S a set of natural numbers, we let Max NAE-{S}-SAT be the problem where all clause lengths come from S. And by changing the NAE predicate to other predicates, we get definitions of other CSPs.

Let's look at an example. Suppose we have four boolean variables x_1, ..., x_4 and four clauses: NAE_2(x_1, x_2), NAE_2(x_2, x_3), NAE_2(x_3, x_4), and NAE_2(x_4, x_1). For this instance we can think of the variables as vertices in a graph and the clauses as edges, and what we need to do is assign 1 or -1, that is, give two colors to the vertices, so that the number of edges across different colors is maximized. For this instance we can actually satisfy all the clauses. As you have probably realized, this is just the Max Cut problem. So Max Cut is Max NAE-2-SAT without negated variables, and more generally, Max NAE-k-SAT without negated variables is max hypergraph cut in k-uniform hypergraphs.

The Max NAE-SAT problem is also very closely related to the Max SAT problem, which is the CSP defined by the OR predicate. We claim that Max k-SAT is reducible to Max NAE-(k+1)-SAT. The idea is very simple: we add a special variable x_0, and for every k-SAT clause we add in x_0 to create an NAE-(k+1)-SAT clause. Why does this work? Given an assignment to the NAE-SAT instance, we have two cases. If x_0 is false, we do nothing: since x_0 is already false, in order for the NAE clause to be satisfied, at least one literal among l_1, ..., l_k must be true, which means the original k-SAT clause is satisfied. If x_0 is true, we just flip the assignment. The observation here is that NAE is an even predicate, meaning that if we have a solution and flip it, it is still a solution, because a set of values is not all equal if and only if their negations are not all equal. So after flipping we are back in the first case. This reduction preserves the number of satisfied clauses, so an approximation algorithm for Max NAE-(k+1)-SAT immediately gives an approximation algorithm for Max k-SAT with the same approximation guarantee. A minimal sketch of this reduction appears below.
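Here is a minimal sketch of this reduction in Python, under an assumed (hypothetical) representation: a clause is a list of literals, a literal is a pair (i, s) meaning x_i if s = +1 and its negation if s = -1, index 0 is reserved for the special variable x_0, and an assignment is a dict mapping variable indices to +1 (true) or -1 (false).

```python
def reduce_to_nae(ksat_clauses):
    """Append the unnegated special literal x_0 to every k-SAT clause."""
    return [clause + [(0, +1)] for clause in ksat_clauses]

def recover_assignment(nae_assignment):
    """Map an assignment of the NAE instance back to the k-SAT variables.

    NAE is an even predicate, so flipping every variable preserves the set of
    satisfied clauses; we may therefore assume x_0 = -1 (false). Then an
    appended NAE clause is satisfied iff some original literal is true, i.e.
    iff the original k-SAT clause is satisfied.
    """
    if nae_assignment[0] == +1:
        nae_assignment = {i: -v for i, v in nae_assignment.items()}
    return {i: v for i, v in nae_assignment.items() if i != 0}
```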
So the question we want to ask is: what is the approximation ratio of Max NAE-SAT? Just to make sure we're on the same page, the approximation ratio is defined to be the maximum over all polynomial-time algorithms A of the minimum over all instances phi of the ratio between the performance of A on phi and the value of the optimal solution to phi.

So what do we know about this question? On the hardness side, Håstad showed that achieving a ratio of 7/8 + epsilon is NP-hard for Max 3-SAT. By the reduction above, the same hardness holds for NAE-4-SAT, and NAE-4-SAT is a subproblem of Max NAE-SAT, so a ratio of 7/8 is essentially the best we can hope for for Max NAE-SAT. On the other hand, we actually know a 7/8-approximation algorithm for every individual clause length. What are these algorithms? For k = 2 or 3, it is known that an SDP followed by hyperplane rounding achieves a ratio of about 0.878, which is a little above 7/8. For k at least 4, a random assignment achieves a ratio of 1 - 1/2^(k-1), which is the probability that a clause is satisfied by a random assignment. As a note, for k = 2 this is known to be optimal assuming the Unique Games conjecture, and for k at least 4 this is also optimal assuming P is not equal to NP; for k = 3 we actually have a better algorithm. So the question becomes: can we combine these different clause lengths and design a 7/8-approximation algorithm that works for all lengths?

Our result is the following: it is actually Unique Games-hard to achieve a 0.8739-approximation, which is a little below 7/8, for Max NAE-{3,5}-SAT. This is the problem where we allow clauses of size three and five.

In the rest of this talk, I will first introduce Raghavendra's basic SDP relaxation for constraint satisfaction problems and a generic family of rounding schemes called RPR² rounding schemes. Then I'll discuss the notion of moment functions of RPR² rounding schemes, which will be a key concept in our proof. Finally, I will briefly sketch the proof of our result. Any questions so far? Okay, great, let's move on.

So let me talk about the SDP approach to CSPs. I believe this originated from the Goemans-Williamson work on Max Cut. They gave the following semidefinite programming relaxation of Max Cut. Given a graph G, we have an SDP variable v_i, a vector-valued variable, for every vertex i, and the goal of the SDP is to maximize the sum over all edges ij of (1 - v_i · v_j)/2, subject to the condition that all these vectors have unit length. This is a relaxation because an integer solution corresponds to one-dimensional vectors: a one-dimensional unit vector is either 1 or -1, and this induces a cut in the graph. This SDP can be solved to arbitrary precision in polynomial time.

So the first step of the Goemans-Williamson algorithm is to solve this SDP and get a collection of vectors. The second step is to turn these vectors into an integral solution. For this they used so-called hyperplane rounding, which is the following procedure: we sample a random Gaussian vector r, and if r · v_i is positive we set x_i to 1, and if r · v_i is negative we set x_i to -1. This is called hyperplane rounding because we can think of the Gaussian vector r as the normal vector of a hyperplane: vectors on one side of the hyperplane are assigned 1, and vectors on the other side are assigned -1. A sketch of this procedure appears below.
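Here is a minimal sketch of hyperplane rounding, assuming the SDP has already been solved: `vectors` is a hypothetical n-by-d array whose rows are the unit vectors v_i, and `edges` is a list of vertex pairs.

```python
import numpy as np

def hyperplane_rounding(vectors, rng=np.random.default_rng()):
    # Sample a random Gaussian direction r and round each vertex by the sign
    # of its projection: x_i = sign(v_i . r).
    r = rng.standard_normal(vectors.shape[1])
    return np.where(vectors @ r >= 0, 1, -1)

def cut_value(x, edges):
    # An edge ij is cut iff x_i * x_j = -1; it contributes (1 - x_i x_j)/2.
    return sum((1 - x[i] * x[j]) / 2 for i, j in edges)
```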
Goemans and Williamson showed that this gives a 0.878-approximation to Max Cut. This was a great improvement: prior to this work, I believe the best algorithm for Max Cut achieved roughly one half plus some o(1) term. People were inspired by this and tried to come up with all kinds of different SDPs for different CSPs. But as it turns out, there is actually one canonical SDP relaxation that works for all constraint satisfaction problems. More specifically, Raghavendra showed that there is what he calls the basic SDP, which is essentially optimal assuming the Unique Games conjecture. What he means by this is that the integrality gap curve of the basic SDP is essentially the same as the Unique Games hardness curve. Here the integrality gap curve is the function S(c) defined as the infimum of the optimal value of phi over all instances phi with SDP value c, and the UG hardness curve is the function U(c) giving the best value achievable in polynomial time on instances with optimal value c, assuming the Unique Games conjecture. Raghavendra showed that these two functions are essentially the same.

Let me tell you the intuition behind this basic SDP. It first searches for a set of global biases b_i and pairwise biases b_ij. Their intended meanings are: b_i is meant to be the expected value of the variable x_i, and b_ij is meant to be the expected value of the product x_i x_j. Then, for every clause C_i, the SDP locally searches for a distribution of assignments that matches these biases and pairwise biases, in the sense that the first moments of the distribution are given by the b_i and the second moments by the b_ij, and that maximizes the probability that the clause C_i is satisfied by this distribution. The goal of the SDP is to maximize the sum of these probabilities.

Let me illustrate this intuition with the Goemans-Williamson SDP: I want to show that the GW SDP is actually the basic SDP for the Max Cut problem. The first step is to search for biases and pairwise biases. Here the biases are not needed, because Max Cut is an even CSP: if we have an assignment and we flip it, we get an assignment with the same value, so there is no reason for a variable to be biased towards 1 or -1. So we can assume every variable has zero bias, and the pairwise bias b_ij is given by the dot product v_i · v_j. The next step is to search for a local distribution that maximizes the probability that each edge is cut. As it turns out, once we have the pairwise bias, the probability that the edge is cut is already determined: the edge ij is cut if and only if x_i x_j = -1, so the probability that the edge is cut equals the probability that x_i x_j = -1, which is (1 - b_ij)/2. The goal of the SDP is then to maximize the sum of these probabilities, and that's exactly what the Goemans-Williamson SDP does.

Okay, so once we have the basic SDP, we feed in the CSP instance and get a collection of biases and pairwise biases.
The next step is to come up with a rounding scheme, which is an algorithm that turns these biases and pairwise biases into an integral solution. Let me talk about a generic family of rounding schemes called RPR² rounding schemes. RPR² stands for "random projection followed by randomized rounding", and it was proposed by Feige and Langberg in 2001. RPR² works as follows. It chooses a function f which maps a real number into the interval [-1, 1]. Given the SDP vectors v_1, ..., v_n, it samples a random Gaussian vector r. The first step, random projection, computes for every i the projection t_i = v_i · r of v_i onto r. The second step, randomized rounding, independently assigns for every i the value x_i = 1 with probability (1 + f(t_i))/2 and x_i = -1 with probability (1 - f(t_i))/2.

This RPR² framework captures the rounding schemes we mentioned earlier. Hyperplane rounding is the RPR² scheme using the sign function: if the projection is positive, we assign the variable 1 with probability one, and if the projection is negative, we assign it -1 with probability one. The random assignment corresponds to the RPR² scheme using the zero function: no matter what the projection is, we assign the variable by an unbiased coin flip. As you can see, these two functions are both odd, and one can actually show that an optimal RPR² function should be odd.

There is also a generalized version of RPR², which is essentially a higher-dimensional version: instead of sampling one random direction, we sample d random directions for some constant d, and we use a higher-dimensional function f which maps a d-dimensional vector into the interval [-1, 1]. For every variable i, we compute d different projections and feed them into the function f. The idea is the same; we just move to a higher-dimensional version. As a note, Raghavendra used this generalized RPR² to give an optimal rounding scheme for the basic SDP, but the proof of its optimality is indirect, so it tells us nothing about the actual approximation ratio. A sketch of the one-dimensional RPR² procedure appears below.
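Here is a minimal sketch of one-dimensional RPR² rounding, under the same assumptions as the hyperplane-rounding sketch above; `f` is any function mapping reals into [-1, 1], applied elementwise.

```python
import numpy as np

def rpr2_round(vectors, f, rng=np.random.default_rng()):
    r = rng.standard_normal(vectors.shape[1])  # one random Gaussian direction
    t = vectors @ r                            # projections t_i = v_i . r
    p = (1 + f(t)) / 2                         # Pr[x_i = +1], independently
    return np.where(rng.random(len(t)) < p, 1, -1)

# Hyperplane rounding is RPR^2 with f = sign; random assignment is f = 0:
#   x = rpr2_round(vectors, np.sign)
#   x = rpr2_round(vectors, lambda t: np.zeros_like(t))
```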
Now let's move on to the moment functions of RPR² rounding schemes. How can we analyze an RPR² rounding scheme? One observation is that how a group of variables is rounded depends only on their pairwise biases. Viewed geometrically, how a group of vectors is rounded has nothing to do with their absolute locations; it is determined only by the relative locations of these vectors, and those are given by the pairwise biases.

Inspired by this observation, we define the moment functions as follows. For the RPR² rounding scheme with function f, we define its k-th moment function, denoted F_k[f], to be the function on k choose 2 inputs, the pairwise biases among k vectors, whose value is the expected value of the monomial x_1 x_2 ... x_k. Since we are using an RPR² scheme, this is also equal to the expected value over r of f(v_1 · r) f(v_2 · r) ... f(v_k · r). Here the v_i are unit vectors whose pairwise dot products are the given pairwise biases, and x_i is the variable obtained by rounding v_i.

A few remarks. First, we omit the bracket and write F_k instead of F_k[f] when f is clear from context. Second, this definition also works for more general rounding schemes, and in general the moment function should be a function of both the biases and the pairwise biases. In our case the not-all-equal predicate is even, so we can assume the biases are zero and omit them from the input arguments; but in general the moment functions are also affected by the biases.

Let's look at a few examples. For a more general problem where we do have biases b_i, we can do epsilon-biased rounding, which is the following procedure: for every variable x_i, we independently assign x_i = 1 with probability (1 + epsilon b_i)/2 and x_i = -1 with probability (1 - epsilon b_i)/2. We can compute the moment functions of this rounding scheme very easily. The k-th moment F_k is by definition the expected value of x_1 x_2 ... x_k, which, by the independence of the rounding, equals the product of the expected values of these variables; and the expected value of x_i is epsilon b_i. So the k-th moment equals epsilon^k times the product of the biases. Setting epsilon to zero recovers the unbiased random assignment, and as we can see from this formula, with epsilon = 0 every moment is zero: for the random assignment, all moment functions are zero.

As another example, let's compute F2 for hyperplane rounding. Recall the hyperplane rounding procedure. By definition, F2(b_ij) is the expected value of x_i x_j, where the corresponding SDP vectors v_i and v_j have dot product b_ij. Since x_i and x_j are plus-minus-one variables, their product is either 1 or -1, so to compute F2 let's look at the probability that this product equals -1. That is the probability that v_i and v_j lie on different sides of the hyperplane, and this is a standard geometric fact: if v_i and v_j have dot product b_ij, the angle between them is arccos(b_ij), and the probability that a random hyperplane passes through this angle is arccos(b_ij)/pi. So the probability that x_i x_j = -1 is arccos(b_ij)/pi, and F2(b_ij) is simply one minus twice this probability, which equals 1 - 2 arccos(b_ij)/pi. Looking at the plot of this function, it's a nice-looking function: it is monotonically increasing, it is odd, it is convex on the interval [0, 1] and concave on the interval [-1, 0]. We will see this plot again later. A quick numerical check of this formula is sketched below.
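As a sanity check, here is a small Monte Carlo estimate of F2 for hyperplane rounding against the closed form 1 - 2 arccos(b)/pi; the two-dimensional embedding of the vectors is just a convenient choice.

```python
import numpy as np

def f2_hyperplane(b):
    # Closed form for hyperplane rounding: F2(b) = 1 - 2*arccos(b)/pi.
    return 1 - 2 * np.arccos(b) / np.pi

def f2_monte_carlo(b, samples=200_000, seed=0):
    # Two unit vectors with dot product exactly b, rounded by sign(v . r).
    rng = np.random.default_rng(seed)
    v1 = np.array([1.0, 0.0])
    v2 = np.array([b, np.sqrt(1 - b * b)])
    r = rng.standard_normal((samples, 2))
    return np.mean(np.sign(r @ v1) * np.sign(r @ v2))

print(f2_hyperplane(1 / 3), f2_monte_carlo(1 / 3))  # both around 0.2163
```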
So the question now is: how do we use these moment functions? Why did we define them? The answer is that we can combine them with the Fourier expansion. We know that every boolean function can be written as a multilinear polynomial. For example, NAE_2(x_1, x_2) = (1 - x_1 x_2)/2, which is just Max Cut. For NAE_3, we have NAE_3(x_1, x_2, x_3) = (3 - x_1 x_2 - x_1 x_3 - x_2 x_3)/4, and you can verify this very easily: if the inputs x_1, x_2, x_3 are all equal, then all three pairwise products equal one, so the expression becomes (3 - 3)/4 = 0; and if they are not all equal, then one variable differs from the other two, so two of the pairwise products are -1 and one is 1, giving (3 + 2 - 1)/4 = 1. In general, the Fourier expansion of NAE_k is 2^(k-1) - 1, minus all the products over an even number of variables, all divided by 2^(k-1); the short script below verifies this expansion by brute force.

What we can do now is think of x_1, ..., x_n as random variables, because they are the outputs of the rounding scheme, which is probabilistic, and take the expected value of the Fourier expansion. The expected value of an NAE_2 clause is (1 - E[x_1 x_2])/2, and the same goes for NAE_3 by linearity of expectation, and so on. If we use an RPR² scheme, the expected value of x_i x_j is the expected value of f(v_i · r) f(v_j · r), which is by definition F2(b_ij). So the idea I'm trying to convey is that the expected value of our rounding scheme can be expressed in terms of the moment functions.

Let's do this for the NAE predicates. NAE_2 can be expressed as (1 - F2(b_12))/2, and NAE_3 as (3 - F2(b_12) - F2(b_13) - F2(b_23))/4. For NAE_4, in addition to F2 we also get an expression involving F4, which, as you can see, is a function with six inputs. So in order to analyze the expected value of our rounding scheme on these clauses, it suffices to analyze the behavior of these moment functions.
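Here is the brute-force verification mentioned above, checking the NAE_k Fourier expansion over all of {-1, 1}^k for small k.

```python
from itertools import combinations, product

def nae(x):
    # NAE_k is 1 unless all inputs are equal.
    return 0 if len(set(x)) == 1 else 1

def nae_fourier(x):
    # (2^(k-1) - 1 - sum of products over even-size subsets) / 2^(k-1).
    k, total = len(x), 0
    for size in range(2, k + 1, 2):
        for subset in combinations(range(k), size):
            term = 1
            for i in subset:
                term *= x[i]
            total += term
    return (2 ** (k - 1) - 1 - total) / 2 ** (k - 1)

for k in (2, 3, 4, 5):
    assert all(nae(x) == nae_fourier(x) for x in product((-1, 1), repeat=k))
print("Fourier expansion of NAE_k verified for k = 2, ..., 5")
```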
So let's look at F2 first: what do we know about it? As it turns out, F2 is pretty well understood. It is the same as the Gaussian noise stability function. Given a function f, the Gaussian noise stability of f at b is defined to be the expected value of f(z) f(z'), where z and z' are standard Gaussian variables with correlation b. As you can see, this expression is very similar to our definition of F2, and we have the following simple fact: if v_i and v_j are two unit vectors with dot product b and r is a standard Gaussian vector, then v_i · r and v_j · r are standard Gaussian variables with correlation b. So F2(b), which equals the expected value over r of f(v_i · r) f(v_j · r), is exactly the Gaussian noise stability of f at b. And we actually know a lot about Gaussian noise stability. Suppose f is an odd function; then F2(b) can be written as an infinite power series in b with only odd powers and non-negative coefficients. It immediately follows that F2 is an odd function that is non-negative, non-decreasing, and convex on the interval [0, 1]. We computed F2 of hyperplane rounding earlier, and it matches all these descriptions: it is increasing, non-negative and convex on [0, 1], and concave on [-1, 0].

So, knowing F2: to analyze NAE_2, that is, Max Cut, it suffices to analyze F2. Using the properties described above, O'Donnell and Wu showed that the hardest distribution of clauses for Max Cut is supported on at most two points, 1 and some negative value -rho. Here, by hardest distribution I mean the hardest distribution of clauses for Max Cut; a Max Cut clause is determined by its pairwise bias, so this is essentially saying that the hardest instance for Max Cut has only two types of pairwise biases, either 1 or one fixed negative value. Why is this? Recall that the expected value of an NAE_2 clause is (1 - F2(b_ij))/2 and its SDP value is (1 - b_ij)/2. The idea is to keep the sum of the b_ij invariant while increasing the sum of F2(b_ij): keeping the sum of the b_ij invariant fixes the SDP value, while increasing the F2 values decreases the expected value of the rounding scheme, which means the instance becomes harder. For example, suppose we have two different negative biases. We can replace both with their average; that keeps the sum of the biases invariant, but since F2 is concave on negative inputs, averaging increases the sum of the F2 values and hence decreases the value of the rounding scheme. This shows that the hardest point has only one negative bias, and that's just one example of the argument.

In our work we extended this analysis to NAE-3-SAT and obtained the approximation curve for Max NAE-3-SAT, but that's not the main focus today: we want to analyze {3,5}-SAT. To understand clauses of length greater than three, we need to analyze F4, and as it turns out, F4 is very difficult to analyze. One reason is that F4 has six inputs, because it is the moment with four variables and there are 4 choose 2 = 6 pairwise biases. It also behaves very strangely: we know that F2 is non-negative on non-negative inputs, but for some rounding schemes F4 can be negative on positive inputs. So it's a weirdly behaved function. One possible way to simplify is to consider F4 when all the input biases are equal, and in this case we can actually prove something: we show that if all input biases equal some x between 0 and 1, then F4(x, ..., x) is at least F2(x) squared. This is our key lemma about F4; a quick numerical illustration is sketched below, and the proof follows after it.
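Here is a small numerical illustration of the key lemma for hyperplane rounding at x = 1/3; the vectors are built exactly as in the proof below, with a shared component of length sqrt(x) and orthogonal remainders.

```python
import numpy as np

x, samples = 1 / 3, 500_000
rng = np.random.default_rng(0)

# Four unit vectors with pairwise dot product x: a shared first coordinate of
# length sqrt(x), plus one private coordinate of length sqrt(1 - x) each.
vs = np.zeros((4, 5))
vs[:, 0] = np.sqrt(x)
vs[np.arange(4), np.arange(4) + 1] = np.sqrt(1 - x)

# Hyperplane rounding: x_i = sign(v_i . r) for Gaussian r, many samples.
signs = np.sign(rng.standard_normal((samples, 5)) @ vs.T)
f2 = np.mean(signs[:, 0] * signs[:, 1])   # estimate of F2(x) = E[x_i x_j]
f4 = np.mean(signs.prod(axis=1))          # estimate of F4(x, ..., x)
print(f2, f4, f4 >= f2 ** 2)              # the lemma predicts F4 >= F2^2
```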
The proof is not difficult. Take n vectors v_1, ..., v_n with pairwise dot product x; this is achievable because we can let the vectors share a common component of length sqrt(x) and be orthogonal beyond that. We round these n vectors and obtain random variables x_1, ..., x_n. Then, omitting the inputs, F2 is the expected value of x_i x_j, and F4 is the expected value of x_i x_j x_k x_l, where the indices are distinct. And we have the following inequality: the expected value of (x_1 + ... + x_n) to the fourth power is at least the square of the expected value of (x_1 + ... + x_n) squared. This is true because the difference between the left-hand side and the right-hand side is exactly the variance of (x_1 + ... + x_n) squared.

So let's count the number of F4, F2, and F2-squared terms in this inequality. On the right-hand side, the expected value of (x_1 + ... + x_n) squared is equal to n + n(n-1) F2, because the x_i are plus-minus-one variables: a term x_i squared is just one, and a term x_i x_j with distinct i and j gives an F2. Squaring, we get n²(n-1)² F2-squared terms plus lower-order terms. We can do the same for the left-hand side, where we get n(n-1)(n-2)(n-3) F4 terms plus lower-order terms. The lemma then follows by dividing both sides of the inequality by n to the fourth and letting n go to infinity: the lower-order terms vanish, and we are left with F4 at least F2 squared. Any questions? Okay, let's move on.

So now we're ready to sketch the proof of our result that it is Unique Games-hard to achieve a 0.8739-approximation for Max NAE-{3,5}-SAT. Recall that by Raghavendra's result, it suffices to construct an integrality gap instance for the basic SDP. I put the Fourier expansions for NAE_3 and NAE_5 here: NAE_3 is the same as before, and NAE_5 equals 15, minus all the pairwise products, minus all the four-wise products, the whole thing divided by 16.

Okay, so we want to construct a distribution of clauses; let's do it. For the NAE_3 clauses, consider the clause whose pairwise biases are all equal to -1/3. For this clause, the SDP can come up with the following distribution of satisfying assignments: the uniform average of the assignments in which exactly one variable is true or exactly one variable is false. This is a distribution over satisfying assignments, meaning the SDP will consider this clause perfectly satisfied. Let's verify that it matches our pairwise biases (a numerical check is also sketched below). Look at x_1 and x_2: with probability 2/3 they have different signs, and with probability 1/3 they have the same sign, so their expected product is -1/3. Okay, so this matches our pairwise biases.

For the NAE_5 clauses, consider the clause with pairwise biases b_ij = 1/3 for 1 <= i < j <= 4 and b_i5 = 0 for 1 <= i <= 4. For these pairwise biases, the SDP can find another distribution of satisfying assignments, again supported on the assignments where exactly one variable disagrees with the rest, now with suitable weights. Let's verify that this matches our pairwise biases. Take x_1 and x_2: with probability 1/3, in the first two assignments, they have different signs, and with probability 2/3, in the last three assignments, they have the same sign, so the expected value of x_1 x_2 is 1/3, the same as the pairwise bias. Now take x_4 and x_5: in the first three assignments they have the same sign, with total weight one half, and in the last two assignments they have different signs, also with total weight one half, so the expected value of x_4 x_5 is zero. So this matches our pairwise biases, and since this is again a distribution of satisfying assignments, the SDP thinks this clause is perfectly satisfied too.

Now we can express the value of the rounding scheme using moment functions. For the NAE_3 clause, the value is (3 - 3 F2(-1/3))/4, and since F2 is an odd function, this equals (3 + 3 F2(1/3))/4.
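Here is the numerical check of the NAE_3 local distribution mentioned above: the uniform distribution over the six assignments with exactly one disagreeing variable satisfies NAE_3 and has every pairwise bias equal to -1/3.

```python
from itertools import combinations, permutations

# The six {-1,+1}^3 assignments with exactly one true or exactly one false.
assignments = sorted(set(permutations((1, -1, -1))) |
                     set(permutations((-1, 1, 1))))

assert all(len(set(a)) > 1 for a in assignments)  # all six satisfy NAE_3
for i, j in combinations(range(3), 2):
    bias = sum(a[i] * a[j] for a in assignments) / len(assignments)
    assert abs(bias - (-1 / 3)) < 1e-9            # every pairwise bias is -1/3
print("NAE_3 local distribution matches pairwise biases -1/3")
```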
And for the NAE_5 clause, the value is (15 - 6 F2(1/3) - F4(1/3, ..., 1/3))/16. Here we use the simple fact that any moment involving x_5 is zero, because every pairwise bias involving x_5 is zero; that's very easy to prove. Then, using our key lemma, this value is at most (15 - 6 F2(1/3) - F2(1/3)²)/16. So we have expressed the value of our rounding scheme on these two clauses using just one variable, F2(1/3).

Now it's clear what we should do next: find the minimum of these two expressions. Let t = F2(1/3) and plot the two functions: the blue line (3 + 3t)/4 for the NAE_3 clause, and the red curve (15 - 6t - t²)/16 for the NAE_5 clause. The y-coordinate of the intersection of these two curves is about 0.8739; indeed, equating (3 + 3t)/4 = (15 - 6t - t²)/16 gives t² + 18t - 3 = 0, so t = sqrt(84) - 9, which is about 0.1652, at which point both expressions are approximately 0.8739. What this means is that no matter what rounding scheme you use, no matter what RPR² function, no matter what F2(1/3) is, you perform worse than 0.8739 on either the 3-clauses or the 5-clauses. So no RPR² function can do better than 0.8739 on this distribution of clauses.

But that's just the intuition for the proof; to make it formal we still need to construct a gap instance, and that's what we do next. One idea is to take the unit sphere as the set of variables and to sample clauses with the same distribution of pairwise biases. What do I mean by this? For example, if we want an NAE_3 clause with pairwise biases -1/3, -1/3, -1/3, we just sample three vectors from the sphere with pairwise dot products -1/3. We can then translate our previous arguments, stated for rounding schemes, into arguments about actual assignments to the unit sphere. If we take the unit sphere as the set of variables, the vectors themselves form an SDP solution, so this construction preserves the SDP value; and if we can translate our argument to upper-bound the performance of actual assignments, we obtain an integrality gap.

However, this is an infinite instance; it has infinitely many variables, so we need to discretize. In our case discretization is actually very easy, because we essentially deal with just one pairwise bias, 1/3 (up to sign), so we can handpick the points from the sphere. Let V be the set of points in R^n with exactly three non-zero coordinates, each being either 1/sqrt(3) or -1/sqrt(3). Suppose now we want to sample k vectors with pairwise bias 1/3. We use the following distribution, which we call D_k: we sample 2k + 1 distinct indices and 2k + 1 independent coin flips, let all k vectors share one common index with the same sign on it, and give each vector two fresh indices of its own. It may help to visualize this in a table: all the vectors share the same signed coordinate i_1, and each vector then uses two new indices. Any two sampled vectors thus overlap in exactly one coordinate, on which they agree, so their dot product is exactly 1/3. Using this distribution we can sample k vectors with pairwise bias 1/3; a small sampler illustrating the construction is sketched below.
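Here is a small sampler for D_k, following my reading of the construction just described (one shared signed coordinate plus two fresh signed coordinates per vector); the ambient dimension n is assumed to be at least 2k + 1.

```python
import numpy as np

def sample_dk(k, n, rng=np.random.default_rng()):
    idx = rng.choice(n, size=2 * k + 1, replace=False)  # 2k+1 distinct indices
    sgn = rng.choice([-1, 1], size=2 * k + 1)           # 2k+1 coin flips
    vecs = np.zeros((k, n))
    for a in range(k):
        own = [idx[0], idx[2 * a + 1], idx[2 * a + 2]]  # shared + two fresh
        s = [sgn[0], sgn[2 * a + 1], sgn[2 * a + 2]]
        vecs[a, own] = np.array(s) / np.sqrt(3)
    return vecs

# Any two sampled vectors overlap only in coordinate idx[0], where both equal
# sgn[0]/sqrt(3), so every pairwise dot product is exactly 1/3.
```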
And we can translate our key lemma as follows. Given an actual assignment A, we define F2 to be the expected value, over v_1 and v_2 sampled from D_2 (so they have pairwise bias 1/3), of A(v_1) A(v_2). Essentially, where the original definition used the RPR² function f, we now use an actual assignment A. Similarly, F4 is the expected value, over v_1, v_2, v_3, v_4 sampled from D_4, of the corresponding product. We can prove in a similar way that F4 is at least F2 squared.

Now we need to sample the clauses. The 5-clauses are pretty simple: we sample four vectors from D_4, so they have pairwise bias 1/3; we then sample a fifth vector v_5 that doesn't share any non-zero coordinate with the previous four vectors, so it is orthogonal to them; and we output the NAE_5 clause on these five vectors. We show that, given an assignment A, its expected value on this distribution is (15 - 6 F2 - F4)/16, same as before. Sampling the 3-clauses is a little more complicated, because now we want the biases to be -1/3. What we do is sample six distinct indices i_1, ..., i_6 and six independent coin flips s_1, ..., s_6, and assign the coordinates according to this expression. Again, it might help to visualize this in a table: essentially, every pair of vectors shares exactly one non-zero coordinate, on which they have different signs, so every pair has dot product -1/3. And again, given an assignment A, its expected value on this distribution is (3 + 3 F2)/4, same as before.

So let's wrap up. We end up with the same expression for the 3-clauses, the same expression for the 5-clauses, and the same key lemma, so we can translate our earlier argument very easily. But this time we conclude that no actual assignment can do better than 0.8739, while the SDP thinks the instance is perfectly satisfiable. This gives us an integrality gap instance, and that completes our proof. Any questions?

Can you say something about how you go from RPR² rounding to no assignment doing better than 0.8739? So we define F2 and F4 differently here. Originally the definition of F2 was in terms of the RPR² function, but now we change it to actual assignments. So to prove this, we take an actual assignment, form its F2 and F4, express the value of the assignment in terms of F2 and F4, and the same argument applies. There's a question in chat: could you remind me where the Unique Games conjecture comes in? Yes: by Raghavendra's result, if we have an integrality gap instance for the basic SDP, that implies the UG-hardness.

Okay, I still have a few more minutes, so let me show you some experimental results. As I mentioned earlier, we determined the approximation curve for Max NAE-3-SAT, shown in this figure: the x-axis is the completeness c and the y-axis is the soundness s, so the graph is essentially saying that on instances with optimal value c, we can achieve value s. The approximation ratio for Max NAE-3-SAT is approximately 0.9089.
And we also tried to find the optimal RPR² function for satisfiable Max NAE-SAT instances. For satisfiable Max NAE-{3,5}-SAT, we get this step function, which achieves a conjectured ratio of 0.8729, very close to our upper bound of 0.8739, but we don't know how to rigorously prove this ratio. What's cool about this is that it's a step function: previously people only used monotone functions for RPR², and it turns out that being non-monotone here actually seems to help. And for NAE-{3,7,8}-SAT, where we allow clauses of sizes 3, 7, and 8, we get this step function, which achieves a conjectured ratio of 0.8698. We believe this configuration is actually the hardest for satisfiable Max NAE-SAT, because adding in more clauses of different sizes doesn't seem to affect the ratio.

Okay, so let me end this talk with a few open problems. Again, what is the approximation ratio of Max NAE-SAT? Can we give a better analysis of F4 and the higher moment functions? And finally, can we apply this method to other predicates? So that's it. Thanks for your attention.

Okay, we can give a virtual round of applause. Thank you. Now, questions please. Can I ask a question? Yeah, sure. You mentioned satisfiable Max NAE-{3,5}-SAT. Did you also investigate satisfiable Max NAE-3-SAT, just NAE-3-SAT, and do you think the known algorithm for that is optimal? So we have this approximation curve for Max NAE-3-SAT, which I think covers the case of satisfiable instances. So the worst case is for satisfiable instances? For this problem? No: the worst case is around 0.9-something, I don't remember exactly. I guess for the satisfiable case the bound remains the same as previously known. I was just wondering if you get any improvement for the satisfiable case as well, or whether the value stays the same as the previously known results. I'm not sure there is previous work that focuses only on satisfiable instances. Thanks. Okay, any more questions, please? Yeah, go ahead. A naive question: I'm just wondering why you're calling these Fourier expansions. They don't look like Fourier expansions to me, so maybe I just don't understand. This is what's known as the discrete Fourier expansion. Maybe one could come up with a different name, but that's how it's called in the literature. Are you happy there? Okay. Right, any more questions, please? Sanyan, when you say you want to apply this to other types of predicates, what sort of results would you be aiming for? Well, the next predicate we're trying to look at is Max SAT. Max SAT is more difficult because it is not an even CSP, so we also have biases, and the moment functions become even more difficult to analyze: even F2 then has three inputs, b_i, b_j, and b_ij, so the knowledge on Gaussian noise stability is no longer enough. We don't have much in the way of results right now. Yeah, thanks, Sanyan. Apparently Libor has a question. Yeah, can you hear me? Yeah, yep. So my question is: you allow negations in the clauses. What if you don't allow negations? Can you still apply your methods at all? Yeah, that's a good question. I haven't thought about that, so I'd need a little more time to think about it.
Okay, any more questions? One last chance. Okay, it seems there are no further questions, so let's thank Nyang again. Thank you, Nyang. Cool. So next week we have a talk on the same day, at the same time, and the speaker will be Aditya Puttokhuchi. So yep, see you all next week.