I would also like to take the opportunity to thank the organizers for putting together this very diverse and interesting workshop, and for the opportunity to take part in it. I would like to talk about the replica symmetric phase of random constraint satisfaction problems, which is joint work with Amin Coja-Oghlan and Tobias Kapetanopoulos, both also from Frankfurt. Because the article is rather long and time is short, I will give a very high-level overview, beginning with an example that illustrates one side of the article, which is the planted coloring model. Afterwards, I will take the other point of view, starting from random constraint satisfaction problems, and present a very small sample of the results. So let me begin with planted coloring, which is generated in two steps. In the first step, you are given a vertex set of n vertices and q colors, and for each vertex independently you draw a uniform color from the set of all possible colors. In this very sketchy example that you see here, we have three colors, and the partition could roughly look like this. In the next step, you generate some edges between vertices of different colors: for each pair of vertices that were assigned different colors in the previous step, you include an edge in the edge set with probability d over n, independently of the particular colors involved. In this way you generate a graph, and then somebody takes away the colors, and you are just given the graph. The problem is to draw conclusions about the planted coloring from the first step just by looking at this graph. Now, in this inference problem there are of course various particular questions that you could look at.
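The two-step generation procedure can be sketched in a few lines of code; the function name and parameter choices here are my own illustration, not from the paper:

```python
import random

def planted_coloring_graph(n, q, d, seed=0):
    """Sketch of the two-step planted coloring model:
    1. each of the n vertices gets a uniform color from {0, ..., q-1};
    2. each pair of differently colored vertices is joined by an edge
       independently with probability d/n."""
    rng = random.Random(seed)
    colors = [rng.randrange(q) for _ in range(n)]
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            if colors[u] != colors[v] and rng.random() < d / n:
                edges.append((u, v))
    return colors, edges
```

By construction every edge joins two differently colored vertices, so the planted assignment is automatically a proper coloring of the generated graph.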
And the particular questions that I would like to focus on for now are, first, the most basic one: to tell whether the graph you are given was generated by the procedure I just described, or whether it was generated as a binomial random graph, an Erdős–Rényi random graph whose edge probability is chosen so that the expected degree matches the planted coloring model. So this is the first question: can you tell the planted model apart from some null model? The next question is a bit harder: can you infer something about the coloring? Does there exist a polynomial-time algorithm that, with high probability, outputs a coloring of the graph that does better than the random guess we already heard about today? This of course depends on the information we have for each vertex, which is measured by the number of its neighbors, so it is natural to parameterize the problem by the average number of neighbors of each vertex in the graph. In the analysis of this problem, it has turned out to be very useful to have some idea of how the planted coloring sits within the total set of solutions, or colorings, of the given graph. This is an adaptation of a picture from an article by Krzakala and Zdeborová. For a small average degree there are, of course, many colorings, and they are well connected: you can go from one to another by changing only a small number of colors. Then, as you increase the degree, the set of colorings first shatters into an exponential number of clusters, and in one of these you can see the planted coloring. After a second threshold (maybe you cannot see it in the picture, but this is the idea), the cluster in which the planted coloring sits contains more colorings than all the other clusters together.
And at some further threshold, the last cluster not associated with the planted coloring vanishes. So by looking at this picture, maybe you can see how well you can guess the planted coloring. Coming back to the two questions I mentioned two slides ago, there are two thresholds which have been predicted in the article by Decelle, Krzakala, Moore and Zdeborová. They conjectured that there is the condensation threshold, above which it is information-theoretically possible to discern the planted model from the null model. On the other side, there is a second threshold, the Kesten-Stigum threshold, which in our case is (q-1) squared, above which it is in fact efficiently possible to find an approximation of the planted coloring. And in between, the problem is conjectured to be hard. There has been some previous work on this by Noga Alon and Nabil Kahale from the 90s: they give a deterministic spectral algorithm with expected polynomial running time that finds a coloring correlated with the ground truth with high probability, provided the average degree exceeds some large absolute constant, which is larger than (q-1) squared, the Kesten-Stigum bound. This algorithm roughly proceeds in three steps. First, you do some clustering based on spectral methods and the adjacency matrix of a slightly modified graph. In the next step, you perform some local improvement. Afterwards, you uncolor some vertices and then recolor them, by exhaustive search, say, so that the result is compatible with the rest of the coloring. So this is a rigorous result from the 90s which, and that is the important point, finds a coloring: not just some assignment of colors to the vertices, but an actual proper coloring. And our result is that we exactly verify the prediction on the location of condensation, given as the solution of an optimization problem; it was previously known only for a large number of colors, or asymptotically.
And then we also identify this condensation threshold as the threshold below which it is not information-theoretically possible to discern the planted model from the null model, because below it the planted model is contiguous with respect to the Erdős–Rényi model. Okay, so in the second half of the talk let me come to the other side, where there is no planting: you first generate the graph and then look for colorings. In a general random constraint satisfaction problem, you have some variables which are bound by constraints, and in our case the constraints are generated as in the Erdős–Rényi model. You can picture this by the usual factor graph, where the constraints are the square nodes and the variables are the round nodes. In our case, each constraint binds exactly the same number k of variables, and for simplicity you assume a Poisson number of constraints, such that on average each variable participates in d constraints. Okay, and now also for this problem the question is how the solution space evolves when the density parameter d increases. And here, maybe I don't need to explain this to you, we can introduce a Boltzmann distribution, which in our case of random constraint satisfaction problems just counts the number of solutions. Each constraint forbids some value configurations of the adjacent variables, and by combining all the constraints we can count the assignments satisfying every constraint and normalize to get a probability distribution. Then we look at the evolution of the space of solutions as the average number of clauses in which each variable participates increases. Here again is an adaptation of a conjecture by Krzakala, Montanari, Ricci-Tersenghi, Semerjian and Zdeborová. As you can see, up to the condensation threshold the picture looks quite similar to the one I showed you before.
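For tiny instances, the Boltzmann distribution over hard constraints can be made concrete by brute force. This is a sketch with a hypothetical constraint encoding of my own (each constraint is a tuple of variable indices plus a set of forbidden value tuples), illustrated on the proper 3-colorings of a triangle:

```python
from itertools import product

def count_solutions(n, q, constraints):
    """Brute-force partition function Z of a hard-constraint model:
    an assignment is a solution iff it avoids every forbidden tuple of
    every constraint; the Boltzmann distribution is then uniform over
    the Z solutions."""
    z = 0
    for sigma in product(range(q), repeat=n):
        if all(tuple(sigma[v] for v in vs) not in forbidden
               for vs, forbidden in constraints):
            z += 1
    return z

# Graph coloring as a special case: each edge forbids equal endpoints.
mono = {(c, c) for c in range(3)}
triangle = [((0, 1), mono), ((1, 2), mono), ((0, 2), mono)]
```

For the triangle, `count_solutions(3, 3, triangle)` counts the 3! = 6 proper 3-colorings, and the Boltzmann distribution puts mass 1/6 on each of them.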
But afterwards, maybe there are more clusters, and then after the satisfiability threshold there are suddenly no solutions anymore. So one result is the identification of this condensation threshold, after which the number of clusters is sub-exponential and the mass of the Boltzmann distribution is carried by sub-exponentially many clusters. This functional B is a compact version of the Bethe free entropy, and from this functional we can read off the condensation threshold. The theorem states that the condensation threshold is strictly positive and not infinite, and that before condensation you can explicitly determine the exponential growth factor of the partition function. Roughly, if you had no constraints, the number of solutions would be q to the n: for each variable you have q spins. And this xi factor is the average impact of a constraint: for each constraint, you get a penalty factor. This in fact marks a phase transition, because the partition function behaves differently after condensation, which is the second result: with probability 1 minus e to the minus Omega of n, it is strictly smaller. And the main result of the paper, which is maybe interesting because to my knowledge it matches no physics prediction, is the determination of the limiting distribution of the suitably normalized partition function: it converges in distribution to this random variable here, which comes out of the small subgraph conditioning technique. So these are the main results. This is the third part in a series of articles; the previous two articles dealt with problems with soft constraints, and this is the main difference in our result: we deal with hard constraints, which is combinatorially considerably harder, because much of the proof is based on tracing the impact of adding or removing an edge, and adding a hard constraint can have a significant impact, since it can destroy lots of solutions. Okay.
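To make the "q to the n times a penalty factor xi per constraint" heuristic concrete, here is my own illustrative sketch for the special case of random graph q-coloring, where xi = 1 - 1/q is the probability that a single edge constraint is satisfied by a uniform assignment, and there are about d·n/2 edges; a bisection then locates the degree at which this annealed free entropy per variable hits zero:

```python
import math

def annealed_free_entropy(d, q):
    """First-moment growth rate per variable for random graph q-coloring:
    E[Z] is roughly q**n * xi**m with xi = 1 - 1/q and m ~ d*n/2 edges,
    giving log q + (d/2) * log(1 - 1/q) per variable."""
    return math.log(q) + (d / 2) * math.log(1 - 1 / q)

def first_moment_bound(q, lo=0.0, hi=100.0):
    """Bisection for the degree where the annealed free entropy vanishes,
    the classical first-moment upper bound on the satisfiability
    (colorability) threshold; the entropy decreases in d."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if annealed_free_entropy(mid, q) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For q = 3 the zero sits at d = 2·log 3 / log(3/2), which is about 5.42.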
And to end the talk, let me return to the beginning, to the planted coloring problem. Two open questions in this setting would be: first, does a variant of the Alon-Kahale algorithm work maybe down to the Kesten-Stigum bound, or maybe even down to condensation? And second, for three colors it is conjectured that the reconstruction, condensation, and Kesten-Stigum thresholds coincide; this is also an open problem. Finally, I would like to advertise a workshop, which is also held online, on inference problems, algorithms, and lower bounds, from the 31st of August to the 4th of September. If you would like to register, you should send an email to this email address. Thank you.