Okay, I hope you can hear me. Thank you very much for the invitation. Today I would like to talk about random constraint satisfaction problems and the effect of introducing a bias in the measure that describes the set of solutions. Here is the outline: I will start by giving some definitions and introducing some of the phase transitions that arise in these random constraint satisfaction problems; then I will define the biased measures that we have studied and their effect on the clustering transition; and finally I will look at the particular case of the large-k limit, where k is the number of variables involved in one clause.

So, for the definitions: a constraint satisfaction problem can be seen as a set of n discrete variables subjected to a set of m constraints, and a solution is an assignment of the variables such that all the constraints are satisfied. In this talk I will focus on one particular problem, the bicoloring of k-hypergraphs. An instance is defined as follows: it is a hypergraph with n vertices and m hyperedges, where a hyperedge links a subset of k variables, and on the vertices live the spin variables (they are boolean, but here I use the spin notation). A solution is an assignment of all the spin variables such that all the constraints are satisfied, where the constraint is that the k variables involved in one hyperedge should not all be equal: there must be at least one variable equal to +1 and one equal to -1. In order to study the typical properties of this problem it is interesting to introduce random ensembles of instances, here random ensembles of k-hypergraphs. I will mention the two most common ensembles: the regular ensemble, in which the degrees of the vertices are fixed, and the Erdős–Rényi ensemble. It is then very interesting to look at the thermodynamic limit.
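To make the constraint concrete, here is a minimal sketch in Python; the instance, variable names, and the small example are my own illustration, not from the talk:

```python
def satisfies(assignment, hyperedges):
    """Bicoloring constraint check: in every hyperedge the k spins
    must not all be equal, i.e. both +1 and -1 must appear."""
    return all(len({assignment[v] for v in e}) == 2 for e in hyperedges)

# Hypothetical tiny instance: n = 5 vertices, k = 3, m = 3 hyperedges.
edges = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
sigma = {0: 1, 1: -1, 2: 1, 3: -1, 4: 1}
print(satisfies(sigma, edges))  # every edge sees both spin values
```

The all-equal assignment violates every constraint, so any solution must mix the two spin values inside each hyperedge.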
In the thermodynamic limit, the number of variables and the number of constraints are sent to infinity at a fixed ratio α = m/n, the density of constraints. When you increase α in this limit, the set of solutions undergoes several phase transitions, which I have tried to describe here: the squares represent the space of configurations and the black regions correspond to the set of solutions. The most prominent phase transition is the satisfiability transition, above which typical instances in the thermodynamic limit do not admit a solution, while below it they do. In this talk I will focus on the clustering transition, which can be defined in several ways. The first definition you can see from this sketch: below the clustering transition the set of solutions is rather well connected, and you can reach any solution from any other by rearranging a small number of variables at a time; above it, the solution set splits into an exponential number of clusters, and these clusters are separated by free-energy barriers. It can also be defined dynamically: when you run a Markov chain Monte Carlo to sample the set of solutions, above the clustering transition it has an exponentially large relaxation time, while below it is able to equilibrate in a time polynomial in n. Finally, it is related to the appearance of long-range correlations, the point-to-set correlations, which measure the correlation between one variable and the set of variables at distance d from it; this is in turn related to an information-theoretic problem called the reconstruction problem. What is interesting is to look at the typical performance of algorithms that try to find a solution on typical instances: can we relate their typical performance to one of the phase transitions that I have mentioned?
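As a rough illustration of the dynamical definition, here is a minimal single-spin-flip Metropolis sketch; this is my own illustrative choice (energy = number of violated hyperedges, finite inverse temperature beta), not the sampler used in the talk:

```python
import math, random

def energy(sigma, edges):
    # Energy = number of violated hyperedges (all k spins equal).
    return sum(1 for e in edges if len({sigma[v] for v in e}) == 1)

def metropolis(sigma, edges, beta, steps, rng):
    """Single-spin-flip Metropolis targeting exp(-beta * energy);
    at large beta the chain concentrates on solutions (energy 0)."""
    e = energy(sigma, edges)
    for _ in range(steps):
        i = rng.randrange(len(sigma))
        sigma[i] = -sigma[i]                  # propose a spin flip
        e_new = energy(sigma, edges)
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            e = e_new                         # accept the flip
        else:
            sigma[i] = -sigma[i]              # reject: undo the flip
    return sigma, e
```

On a tiny instance such a chain equilibrates quickly; the point made in the talk is that above the clustering transition the relaxation time of chains like this one becomes exponential in n.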
And is there a barrier, a value of the constraint density above which it is not possible to design an algorithm that finds a solution in polynomial time? To understand this we have to look at different regimes. In the small-k regime there are algorithms that are very efficient and can find a solution in polynomial time up to constraint densities very close to the satisfiability threshold. In the large-k limit the situation is quite different: up to now, the best algorithms can efficiently find solutions only up to constraint densities that are, at leading order, the same as the clustering transition. Here is the scaling of the clustering transition: 2^(k-1) log k / k. This has to be compared with the satisfiability transition, which scales as 2^(k-1) log 2. Between these two values there is a wide range of α in which typical instances do admit solutions, but up to now no algorithm is known to be provably efficient in this range.

So here is the approach that we have adopted. Many of the phase transitions that I have mentioned, and in particular the clustering transition, are obtained for the uniform measure over the set of solutions. This is its expression: it gives zero weight to configurations that are not solutions, and the same weight to all configurations that are solutions. If instead you introduce a non-uniform measure that still gives zero weight to configurations that are not solutions but weights the solutions differently, then you can move the value of the clustering threshold, but not the value of the satisfiability threshold. The goal is therefore to design a biased measure that increases the clustering threshold, and we have seen that, at least for the simulated annealing algorithm, this increases its performance.
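To get a feel for how wide this hard region is, one can compare the two leading-order scalings quoted above; these are asymptotic formulas, so the finite-k numbers are only indicative:

```python
import math

def alpha_clust(k):
    # Leading-order scaling of the clustering transition: 2^(k-1) log(k) / k
    return 2 ** (k - 1) * math.log(k) / k

def alpha_sat(k):
    # Leading-order scaling of the satisfiability transition: 2^(k-1) log(2)
    return 2 ** (k - 1) * math.log(2)

for k in (5, 10, 20):
    print(k, alpha_clust(k), alpha_sat(k))
```

Since log(k)/k decreases while log 2 stays fixed, the gap between the two thresholds grows rapidly with k.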
There is previous work that has used this idea of biasing the measure. The first example is on the problem of packing hard spheres, which can be seen as a constraint satisfaction problem; in the second item, the solutions are weighted according to their number of frozen variables; and in the third item, the solutions are weighted according to their local entropy, which was introduced in the previous talk by Carlo Lucibello.

So here I define the biases that we have studied. The first one introduces an interaction that factorizes over the clauses. This is the expression of the measure that we have used: it is a product over the constraints, defined through a function ω that takes as argument the set of k variables involved in one constraint. It still gives zero weight to configurations that violate the constraint, i.e. when all the spins are equal to +1 or all equal to -1; otherwise it gives a non-zero weight, but through the parameter ε it weights differently the almost-violated clauses, meaning the clauses in which exactly one spin takes the opposite value from the other variables involved in the clause.

Here is the second example, where we introduce interactions at distance one. This is a picture of a small k-hypergraph with k = 3: the vertices are represented by black dots and the hyperedges by white squares. The bias that we study introduces an interaction between one variable and its neighbours at distance one, and we have chosen this bias to count the number of forcing clauses. A clause a is said to be forcing for a variable i if the variables around it, except i, are all equal, which forces the variable i to take the opposite value so that the constraint is satisfied.
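A minimal sketch of the intra-clause weight ω described above; the function names are mine, and only the functional form (zero weight when violated, weight ε when almost violated, weight 1 otherwise) is taken from the talk:

```python
def omega(spins, eps):
    """Intra-clause weight: 0 if the clause is violated (all spins equal),
    eps if it is 'almost violated' (exactly one spin differs from the
    other k-1), and 1 otherwise."""
    s, k = sum(spins), len(spins)
    if abs(s) == k:          # all +1 or all -1: violated clause
        return 0.0
    if abs(s) == k - 2:      # exactly one minority spin: almost violated
        return eps
    return 1.0

def biased_weight(assignment, clauses, eps):
    """Unnormalized biased measure: product of omega over all clauses."""
    w = 1.0
    for clause in clauses:
        w *= omega([assignment[i] for i in clause], eps)
    return w
```

With ε < 1 this measure penalizes solutions surrounded by almost-violated clauses; ε = 1 recovers the uniform measure over solutions.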
This bias takes the following form, where p is the number of forcing clauses.

So now here are the results that we obtained at finite k. This is the clustering threshold for random regular hypergraphs, where the degree of the vertices is fixed and is related to α through a simple relation (for degree l, α = l/k). We have computed the clustering threshold for k = 5 and 6; the second column gives its value in the uniform case, and you can see that when you optimize over the parameters describing the first and the second bias, you are indeed able to increase the value of the clustering threshold. We also checked that this has a positive effect on the simulated annealing algorithm.

Now I move to the last part, which looks at the large-k limit of the clustering transition. In the large-k limit the clustering transition has the following asymptotic expansion. What is known rigorously about this expansion? The dominant term has been proven for the bicoloring of k-hypergraphs as well as for other constraint satisfaction problems, for instance the q-coloring problem, in which the variables can take q values; and there are also results, in two papers by Sly and Zhang, giving inequalities on the third term, the constant γ_d. With my advisor Guilhem Semerjian we have estimated the value of this constant, and we have shown that it is the same for the bicoloring of k-hypergraphs and for the q-coloring problem. What is interesting now is to look at the effect of the biases we have introduced on this asymptotic expansion of the clustering threshold. For this we need to choose a scaling for the parameters that describe the biases. For the intra-clause bias we took ε scaling as a constant over the square root of k log k, so that the rescaled parameter ε̃ is now our bias parameter.
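Recalling the definition of p, the number of forcing clauses around a variable, here is a minimal sketch of how one might count it; the helper names and the small example are my own, not from the talk:

```python
def is_forcing(clause_spins, i_index):
    """A clause is forcing for variable i if the other k-1 variables in it
    all take the same value (which, in a solution, forces i to take the
    opposite value)."""
    others = [s for j, s in enumerate(clause_spins) if j != i_index]
    return all(s == others[0] for s in others)

def count_forcing(assignment, clauses, i):
    """p = number of clauses containing i that are forcing for i."""
    p = 0
    for clause in clauses:
        if i in clause:
            spins = [assignment[v] for v in clause]
            if is_forcing(spins, clause.index(i)):
                p += 1
    return p
```

The distance-one bias then weights a solution through the values of p seen by each variable, rather than clause by clause.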
For the interaction at distance one we have taken a specific choice, where we distinguish the case p = 0 (no forcing clause) from the case where there is at least one forcing clause; b is our bias parameter, which we keep constant as k grows. With this choice of scaling we showed that the clustering transition arises at the same scale as in the uniform case, but we can now improve on the third term of the expansion: γ_d now depends on the bias parameters ε̃ and b.

This is the result for the intra-clause bias: a plot of γ_d against the rescaled parameter ε̃², where the dots represent the clustering transition. The uniform case corresponds to ε̃ = 0, and you can see from this plot that as soon as you turn on the bias you actually decrease the value of the clustering threshold. This is not what we want: we would like a bias that increases the clustering threshold, so this bias does not do the job. With the bias that introduces interactions at distance one, however, it is possible. This is the corresponding plot; b is the bias parameter, and we recover the uniform measure when b = 1. The clustering transition is now represented by the crosses, and you can see that when you move b away from 1 you are able to increase the value of the clustering threshold. The optimal value is found around b = 0.4, for which we obtain an increase of the third term of the asymptotic expansion: γ_d ≈ 0.98, to be compared with γ_d ≈ 0.87 for the uniform case.

Okay, so now I can conclude. With these biased measures we were able to increase the value of the clustering threshold both at small k and at large k. At small k we also checked that this indeed has a positive impact on the performance of simulated annealing, and at
large k we could improve on the third term of the asymptotic expansion. Of course, the perspective now would be to improve on the more leading terms of this asymptotic expansion, and to do that we could try to use more general biases. A first direction could be to increase the range of the interactions, since here we have only looked at interactions at distance one. Also, we have only looked at biases that count the number of forcing clauses surrounding one variable, but the bias could be more general and carry more information. Okay, so thank you.