This is Marc Noy from the Polytechnic University of Catalonia. Last year we tried to invite Marc, but he had other commitments, so we are very happy to have him this year. Marc works in graph theory and in combinatorics, focusing mostly on enumerative combinatorics, and, what is very important for our community, he links combinatorics and graph theory with logic: he has published several papers on zero-one laws and limiting distributions. Today we will listen to a talk, co-authored with Alberto Larrauri and Tobias Müller, on limiting probabilities of first-order sentences in sparse random graphs. So please, Marc, the screen is yours.

Okay, thank you very much. Well, first of all, I am very grateful and very honored by this invitation. My co-authors are Alberto Larrauri, a PhD student under my supervision, and Tobias Müller from Groningen. My main area of research is combinatorics, but for several years I have been interested in logical properties of random structures. So, just to be sure that we speak the same language: all of you know what first-order logic is, the classical first-order logic. In the case of graphs we add an adjacency relation, a binary predicate that we shorten to x ~ y. We assume it to be symmetric, so there are no directions, and antireflexive, so there are no loops. What can you express in this first-order logic of graphs? Maybe not too much, but you can express interesting things. For instance, that the graph contains a triangle: three mutually adjacent vertices. And if you think for a moment, you see that you can also express the existence of any fixed graph H as a subgraph: you use as many quantifiers as there are vertices in H, and you postulate the adjacencies between them. Now, an even more complicated sentence, and this example will be of importance later: there are at most a cycles of length at most k, where a and k are fixed constants, say 10 or 50. This would be a rather long sentence, a bit boring to write down, but you can imagine it: there are no a + 1 subsets of at most k vertices each inducing a cycle. And it is very well known that you cannot express more global properties, like connectivity, planarity or 3-colorability, which you can indeed express in monadic second-order logic. But today is only going to be about first-order logic. Good. If there are any questions you can ask; I have planned my slides so as to have plenty of time, so you are invited to ask questions.

Now, this was the logic. For the graphs, we are using the classical G(n,p) model of random graphs. We fix some p between zero and one. The vertices are labelled from 1 to n, which means the vertices are distinguishable; we will not care about automorphisms. Then we put every edge ij in this random graph independently with probability p. This is not the uniform distribution: the probability of a graph is p to the number of edges times 1 - p to the number of non-edges, where n choose 2 is the total number of possible edges. So this distribution is not uniform, but the model has been very well studied for the last, I would say now, 60 years. The expected number of edges is the probability of an edge times the number of possible edges, n choose 2, so you see it behaves as p n^2 / 2. If p is constant, the number of edges is quadratic, and these are called dense graphs.
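For concreteness, the sentence and the two formulas just mentioned can be written as follows (a sketch in standard notation; the slides may display them differently):

```latex
% "Contains a triangle" as a first-order sentence:
\exists x\,\exists y\,\exists z\,(x \sim y \wedge y \sim z \wedge x \sim z)

% Probability of a fixed graph G on [n] with e(G) edges under G(n,p):
\Pr[G] \;=\; p^{e(G)}\,(1-p)^{\binom{n}{2}-e(G)}

% Expected number of edges:
\mathbb{E}[\#\text{edges}] \;=\; p\binom{n}{2} \;\sim\; \frac{p\,n^{2}}{2}
```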
But if p is proportional to 1/n, then, you see, if p is c/n the number of edges in expectation is linear. This is what I will call sparse graphs, and today we are going to concentrate only on sparse graphs. The sparse regime is the one in which the most interesting phenomena in this model of random graphs occur. So I just write G_n for the random graph in which the probability of an edge is c/n. Okay, I hope this is clear. Then maybe the most important thing here is the well-known phase transition. If c is below one, then all connected components in the random graph are either trees or have a unique cycle; a component with a unique cycle is a unicyclic component. Moreover, they are small: of size at most of order log n. But the important thing here is that all components are simple. Simple means that the number of edges is at most the number of vertices: a tree on v vertices has v - 1 edges, a unicyclic component has v. A complex component is a component in which the number of edges is larger than the number of vertices. But for c larger than one a sudden change occurs, and there is a unique component of linear size, which is called the giant component. Now, this c is the expected degree of a vertex, and the phenomenon is very well understood: if you start at a vertex and explore its neighbors, and the neighbors of the neighbors, this is a branching process whose expected offspring is c. If c is below one, the branching process is subcritical and dies out very soon. This is, let's say, the intuition behind this radical change of structure between the subcritical and the supercritical random graphs.

Now, one could say that first-order logic cannot capture this transition. Why? Because in first-order logic we cannot express whether a graph has a cycle, or whether it has a unique cycle. We can express that it has a cycle of length eight, or of any constant length, but it is provable that first-order logic is not powerful enough to express being acyclic. This intuition was made precise, very precise, in the following result. Again, G_n is this random graph. Given a property A, given by a first-order sentence in which every variable is quantified, so that it expresses a graph property invariant under isomorphism, we are interested in the probability that the random graph satisfies this property: for instance, that it contains a triangle, or that it has four cycles of a given length, or that it contains K4 as a component. And we are interested in the limiting probability as the number of vertices goes to infinity; we will always be talking about the asymptotic regime. Now, James Lynch, maybe almost 30 years ago now, proved the following: for every property this limit exists. The limit depends on the property and on the value of c we have taken, but here I emphasize that I consider it as a function of c. And this function of c can be expressed using just sums and products of a set of constants and exponentials. So it is not only a continuous function; it is a C-infinity function, an analytic function. Nothing happens when you go from c less than one to c larger than one. In some technical sense this says that the phase transition cannot be captured by properties that are satisfied asymptotically or not as the number of vertices goes to infinity. You don't see any difference with c.
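A concrete instance of the kind of function Lynch's theorem produces (this example is my own, not from the slides, and uses the standard Poisson limit for triangle counts in G(n, c/n)):

```latex
\#\text{triangles} \;\xrightarrow{d}\; \mathrm{Po}\!\left(\frac{c^{3}}{6}\right)
\quad\Longrightarrow\quad
\lim_{n\to\infty}\Pr\big[G_n \models \text{``contains a triangle''}\big]
\;=\; 1 - e^{-c^{3}/6}
```

This is an analytic function of c built from constants and a single exponential, perfectly smooth across c = 1.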
Can I ask a question? Yes. You could potentially construct something like e^x over 1 + e^x, a function which goes from zero to one, using the exponentials, sums and multiplications. No, no, no. These are not exponentials of exponentials; it is not every combination of these. The exponentials are just parameters of Poisson distributions, and you take sums and products of them. It is not that you can use any combination of exponentials, Sergei. It is really analytic and well defined. So what does it mean when you say a set of constants and exponentials? What does it actually mean? Well, maybe I should have been more precise: these are constants of certain distributions, parameters of Poisson distributions, and single exponentials, and sums and products of these. Okay. Good.

Now, Alberto, my student, has recently generalized this result to sparse hypergraphs, and maybe I will say something at the end about hypergraphs, which is also an interesting line of research. So these are the limiting probabilities, and the lesson is that first-order logic cannot distinguish between the two regimes. Now, what we did here is to consider not a single first-order sentence, but the set of all limiting probabilities of first-order properties for a given c, all of them. This is a countable set, because the number of sentences is countable, and it is symmetric with respect to one half, just by negation. And we consider its closure inside the interval [0,1]. So we ask ourselves: what does this closure look like? And then we discovered a rather unexpected thing. There is a constant c_0, which is 0.9 something; it is the unique positive root of a certain simple equation. And the following happens. First, the closure is always a finite union of closed intervals. Secondly, if you are above c_0, or exactly at c_0, then the set is dense: the closure is the whole interval. But if you are below, and of course positive, there is at least one gap, at least one subinterval that is not covered by the closure. I will write the statement schematically in a moment. This was a bit of a surprise. We were looking for something that changed at a given value of c, and more or less we were expecting this change to happen at c = 1, which is the phase transition; but it does not. And, which I think is even more interesting, this constant is below 1, and there is a sudden change not in the single limiting probabilities, but in the set of all limiting probabilities. Good. So this is the statement, and the rest of my talk, besides some comments at the end and some extensions we are working on, will be to give you an idea of how to prove this theorem.

Okay. Good. What was the motivation? Some years ago, with Tobias Müller and other co-authors, we looked at random planar graphs. This is a different model: we take the set of all labelled planar graphs on n vertices and consider the uniform distribution, so there are no independent edge probabilities. And we asked ourselves the same question: what is the closure of the set of limiting probabilities? We saw it was a finite union of intervals; in fact, these many intervals, which are very short.
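As promised, here is the statement schematically, writing L_c for the set in question (my notation, not necessarily that of the slides):

```latex
L_c \;=\; \overline{\Big\{ \lim_{n\to\infty} \Pr\big[G(n,c/n) \models \varphi\big] \;:\; \varphi \text{ a first-order sentence} \Big\}}

% Theorem (Larrauri, M\"uller, Noy):
%  (i)   L_c is a finite union of closed intervals for every c > 0;
%  (ii)  L_c = [0,1] if and only if c \ge c_0, where c_0 = 0.9...;
%  (iii) for 0 < c < c_0, L_c misses at least one subinterval of [0,1].
```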
For the class of random forests, the acyclic graphs, there are four intervals, and the endpoints are explicit numbers; for instance, this 0.6 is e^{-1/2}. And for every class that is closed under minors, which is a very natural concept in graph theory, the closure is always a finite union of intervals. So our starting point was: well, we have seen this phenomenon, and we know some techniques for proving this kind of result; what happens in the sparse regime? Because minor-closed classes are sparse: they have at most a linear number of edges. This was the motivation.

Now, what is the sketch of the proof? Again, here is the statement. You take the set of limiting probabilities of first-order properties, and this special number c_0, which is the unique root of this equation. These are the statements we have to prove, and there will be three parts. When c is at least 1, there is no gap: the closure is everything. When you are below c_0, there is at least one gap: you miss at least one subinterval. And the last part, which is maybe a bit more work, is that between c_0 and 1 there is again no gap. If you put this together, you get the two statements, and along the proof we will see that the closure is always a finite union of intervals, so it is not a complicated set. But the behavior depends on c, and we will see the combinatorial meaning of this constant.

So we proceed with the first part: no gap. This is in some sense the simplest one. We take, in this random graph, the random variable which is the number of cycles of length k, the number of k-cycles. It is well known that this random variable converges in distribution to a Poisson law whose parameter is precisely c^k divided by 2k. If you are familiar with this kind of probabilistic statement, it is not difficult to prove by the method of moments: you compute the first moment, the second moment, all the factorial moments, and then, using the fact that the Poisson distribution is characterized by its moments, you prove the convergence. Moreover, if you fix k, say k up to 10 or 20, then the numbers of triangles, quadrangles, and cycles up to length k are asymptotically independent. Now we consider the random variable which is the number of cycles of length at most k. Since the individual counts are asymptotically independent, this converges to a Poisson law whose parameter mu_k is the sum of the individual parameters. I observe that mu_k diverges as k goes to infinity: if c is larger than 1 the terms grow exponentially, and if c equals 1 it is essentially the harmonic series. In any case, when c is 1 or larger, the sum diverges. Is this clear? Okay. Again, if you have questions, you can ask.

Now, I recall, as I said at the beginning, that "the number of cycles of length at most k is at most a given constant" is expressible in first-order logic. This is a piece of the argument. And by the central limit theorem, a Poisson variable with a large parameter is approximately normal: the probability that the count is at most mu_k plus a deviation of order square root of mu_k converges to the distribution function of the normal law, which is a continuous distribution. So we have a sequence of discrete distributions, and they converge to a continuous distribution.
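In symbols, writing X_k for the number of k-cycles and X_{<=k} for the number of cycles of length at most k, the ingredients just described are (a sketch in my notation):

```latex
X_k \xrightarrow{d} \mathrm{Po}\!\left(\frac{c^{k}}{2k}\right), \qquad
X_{\le k} \xrightarrow{d} \mathrm{Po}(\mu_k), \qquad
\mu_k = \sum_{j=3}^{k} \frac{c^{j}}{2j} \xrightarrow[k\to\infty]{} \infty \quad (c \ge 1)

% and, by the normal approximation to the Poisson law,
\Pr\!\big[\mathrm{Po}(\mu) \le \mu + x\sqrt{\mu}\,\big]
\xrightarrow[\mu\to\infty]{} \Phi(x)
```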
Now we want to show that every number p can be approximated by limiting probabilities of first-order properties. So I take my p and an epsilon, which will be the error, and I want to approximate p to within this error. Because Phi is continuous and covers all the values between zero and one, there is an x such that Phi(x) equals p. Then, because of the convergence, we take a mu_0 such that the probability is closer to p than epsilon for all mu greater than mu_0; this is what it means to be the limit. Finally, we take a k such that mu_k is above mu_0, which is guaranteed because mu_k tends to infinity. Now, the event that this Poisson variable is below the given value is the event that the random variable counting cycles of length at most k is below that value, and this is first-order expressible. So we have shown that every p between zero and one can be epsilon-approximated by some limiting probability of a first-order sentence. Hence the set is dense and the closure is the whole interval [0,1]. This is an easy argument, and you see we are only using elementary tools. So, if everyone is happy, we move to the second part.

Okay, we move to the second part, and we want to show that below c_0 there is at least one subinterval that is not covered. Now we have to go deeper into the structure of this random graph, and we have to explain the combinatorial meaning of the constant c_0. What Erdős and Rényi actually proved in the subcritical regime (remember that here all components are either trees or unicyclic) is stronger. The graph is the union of F_n, which is a forest, and H_n, which is a union of unicyclic graphs; this is given by the result I just mentioned. But something else happens: every fixed tree appears in this forest as many times as you wish. For every fixed tree T and every multiplicity N, with high probability this random forest contains at least N copies of T. This means that if we restrict to the forest part and forget about the unicyclic part, then there is a zero-one law: the limiting probability that this forest satisfies a first-order property converges, and it is either zero or one. Why is this? Well, F_n contains arbitrarily many copies of each tree, so two random instances of these random forests cannot really be distinguished by first-order properties. If you know the Ehrenfeucht–Fraïssé games, this is clear: in this two-player game Duplicator can always win, because you can always find a fresh copy of a tree on which to play your round of the game. I am sure that in this community many of you are familiar with these combinatorial games, which are a basic ingredient of finite model theory; this is the intuition if you are not. So, restricted to the forest, the limiting probability is either zero, almost surely false, or one, almost surely true.

Now we come to the constant. Let F be the property that the random graph is acyclic, that is, that H_n is empty and you only have components which are trees. What is the limiting probability of F? Because of the asymptotic independence, it is relatively easy to show that this is an infinite product given by the Poisson distributions; you do a little algebra and you find this function f(c). Here is a plot of it: when c goes to zero, the probability of being acyclic goes to one.
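The little algebra presumably runs as follows, using the Poisson parameters c^k/2k from before; the closed form is the classical Erdős–Rényi formula, and the intermediate steps are my reconstruction:

```latex
f(c) \;=\; \Pr[\text{acyclic}]
      \;=\; \prod_{k\ge 3} \Pr\!\left[\mathrm{Po}\!\left(\tfrac{c^{k}}{2k}\right) = 0\right]
      \;=\; \exp\!\Big(-\sum_{k\ge 3} \frac{c^{k}}{2k}\Big)

% Using \sum_{k\ge 1} c^{k}/k = -\log(1-c) for 0 < c < 1:
f(c) \;=\; \exp\!\Big(\tfrac{1}{2}\log(1-c) + \tfrac{c}{2} + \tfrac{c^{2}}{4}\Big)
      \;=\; \sqrt{1-c}\;\, e^{\,c/2 + c^{2}/4}
```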
When you approach the critical point of the phase transition, the probability of being acyclic goes to zero. And c_0 is exactly the point, 0.9 something, where this probability is one half. So c_0 is the value of c at which the probability that the random graph is acyclic is exactly one half; you can solve the equation numerically, as in the small computation after this part.

Now I take a first-order property A. What I write here should really carry a limit as n goes to infinity, but then the screen would be too crowded, so I write it simply as a limiting probability. And I just use the elementary law of total probability: the probability of A is the probability of A conditioned on F times the probability of F, plus the probability of A conditioned on not F times the complementary probability 1 - f(c). Now, we have said that the probability of A conditioned on the random forest is either zero or one. If it is one, then the probability of A is at least f(c). And since we are below c_0, this value is larger than one half, because the function f is decreasing: at c_0 it is exactly one half, so below c_0 it is larger. Good. So in this case the probability of A is larger than one half. If, on the other hand, the probability of A conditioned on the forest is zero, then the probability of A is at most 1 - f(c), which is smaller than one half. So no limiting probability can lie strictly between 1 - f(c), which is less than one half, and f(c), which is larger than one half. And this is a gap, obtained just from this one property. I remark that being acyclic is not a first-order property; but if I take any first-order property and condition on F, it is either one or zero by the argument before. Good. Very good. So this is the second part. Any questions so far?

Now we finally have to show that between this critical value c_0 and 1 there is again no gap, and we will also see that the closure of the limiting probabilities is a union of finitely many intervals. Good. The structure of the random graph here: we are in the subcritical regime (actually, this should be c strictly less than one, because the case c = 1 has already been covered in the first part), so there are only tree components and unicyclic components. There is another piece that we need: the size of the union of the unicyclic components is bounded in expectation. So you have to imagine that this random graph has a giant forest, a super-giant forest, which with high probability contains everything except a bounded number of vertices, and this bounded number of vertices forms the unicyclic components. Maybe I should have included a drawing; I hope everyone has the image of a unicyclic graph: a cycle to whose vertices we attach rooted trees. Restricted to the forest we have a zero-one law, so with respect to first-order properties these forests always satisfy the same properties. Hence whether the random graph satisfies a property depends only on H_n, which we call the fragment. This notion of fragment was introduced by Colin McDiarmid in a different context, and we find it convenient to use it here as well.
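As promised, a minimal numerical sketch of the constant, assuming the formula f(c) = sqrt(1-c) e^{c/2 + c^2/4} reconstructed above (the equation on the slide may be written differently, but it should be equivalent):

```python
from math import exp, sqrt
from scipy.optimize import brentq  # bracketing root finder

def f(c: float) -> float:
    """Limiting probability that G(n, c/n) is acyclic, for 0 < c < 1."""
    return sqrt(1.0 - c) * exp(c / 2.0 + c * c / 4.0)

# c0 is the unique root of f(c) = 1/2 in (0, 1); f decreases from 1 to 0.
c0 = brentq(lambda c: f(c) - 0.5, 1e-9, 1.0 - 1e-9)
print(f"c0 = {c0:.4f}")  # prints roughly 0.94

# For c < c0, the total-probability argument forbids limiting
# probabilities in the open interval (1 - f(c), f(c)):
for c in (0.3, 0.6, 0.9):
    print(f"c = {c}: gap = ({1.0 - f(c):.4f}, {f(c):.4f})")
```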
Okay, well, I am making signs with my hands, but I realize you cannot see me, because this is really my first online talk, so I will try to explain verbally. There is this huge forest, which contains almost everything, and there is a part that is bounded in expectation and contains the unicyclic components; we call it the fragment.

Now, what do we prove? And this needs some work. The probability that the fragment is isomorphic to a given collection of unicyclic graphs converges as n goes to infinity. So the limiting probability that the fragment is isomorphic to, say, one triangle with certain trees attached plus one pentagon with certain trees attached, converges. And the limit depends only on the function f we defined before, on the constant c, on the number of vertices of H, and on the number of automorphisms of H. The number of automorphisms is rather natural, because H is an unlabeled graph: we ask for the probability that a labeled graph has a fragment isomorphic to an unlabeled one, so it is quite natural that the automorphisms play a role. Good. So the probability that the fragment is isomorphic to a fixed H converges to a constant p_H.

Now, the probability that the random graph satisfies a property: you have to sum over all fragments that, together with the forest, make the property hold, because we just said that the property depends only on the fragment. So the probability that G_n satisfies the property is a sum over all such fragments. This is possibly an infinite sum, but it is a subsum of a convergent series. We also have to show that this accounts for the full probability: if you sum p_H over all possible fragments, you get one. And then it follows that the closure we are interested in is the collection of subsums of this series. This is not immediate: when I say that the limiting probability equals this sum, some work is needed to show that the closure is precisely the collection of subsums of the series. It is not difficult, but you have to write it down, and this is done in our paper, which is available as an arXiv preprint. But at least now we have a description of the set we are interested in. The p_H are well defined for every fragment, and you can compute them. You take this series, which is convergent with sum equal to one, and you consider all the subsums, finite or infinite. The terms are non-negative, so even an infinite subsum always converges, because this is an unconditionally convergent series of non-negative terms. This is what really makes the analysis possible: we have characterized our set as the set of subsums of a series. Okay. Any questions before I continue? No? Okay, I am checking the time; I think I am doing fine.

Now there is the following result. It was somehow mentioned by Kakeya (the famous Kakeya of Kakeya sets) about a century ago, but he did not really prove it, and there were some intermediate papers. It is just a result in analysis; maybe I could say first-year analysis, well, for good students of the first year. It says: take a convergent series of non-negative terms, so the p_n are non-negative and the series converges, and consider the set of subsums of this series.
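Before the conditions are spelled out, here is a toy experiment of my own with the two geometric series that will appear below. For these particular series, greedily taking terms in decreasing order yields the largest subsum not exceeding a target, so a nonzero residue certifies a gap in the set of subsums:

```python
def greedy_residue(target: float, terms: list[float]) -> float:
    """Pick terms (given in decreasing order) while the running sum stays
    below the target; for these series the greedy choice is optimal.
    Returns target minus the best subsum; ~0 means target is a subsum."""
    s = 0.0
    for t in terms:
        if s + t <= target + 1e-15:
            s += t
    return target - s

half  = [0.5 ** n for n in range(1, 60)]   # ratio 1/2: each term equals its tail
third = [3.0 ** -n for n in range(1, 60)]  # ratio 1/3: each term exceeds its tail

for x in (0.1, 0.25, 0.4):
    print(f"target {x}: ratio-1/2 residue {greedy_residue(x, half):.2e}, "
          f"ratio-1/3 residue {greedy_residue(x, third):.4f}")
```

With ratio one half every target is matched to machine precision, while with ratio one third most targets leave a visible residue: exactly the interval-versus-Cantor dichotomy described next.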
If we have the condition that every term of the series is at most its tail, that is, the i-th term is at most the sum of all the later terms, then the set of subsums is the whole interval between zero, which is the empty subsum, and the sum of the series, which is the subsum consisting of all the terms. So if this condition holds, the set of subsums is exactly this interval. And it is a very interesting result, because it also says that if, on the contrary, some term is larger than its tail, then the set is a Cantor set or a modification of a Cantor set. So you have these two extremes: either it is a very simple set, a whole interval, or it is a Cantor-like set. If you want a quick check, take the geometric series of ratio one half, which satisfies the condition; and if you take the geometric series of ratio one third, you get exactly a Cantor set, because you are looking at the triadic expansions of the numbers. This is what the toy experiment above illustrates. Good.

So what we do is the following. This is the sum we are considering. It does not look like a series, but we can turn it into one by ordering the limiting probabilities of the fragments. We have all these infinitely many possible fragments, and we order their probabilities in decreasing order. Let me point out that this ordering depends on the constant c: since the probabilities depend on c, the ordering changes with c. What does not change is that the largest probability is always that of the empty fragment. The empty fragment means that the graph is acyclic. Is that clear? The fragment being empty means that the graph has no H_n, no unicyclic components, so it is acyclic. Good.

Now we have to check the condition that each term is at most the tail. We consider the number of vertices of the fragment H, and when we sum the probabilities of the fragments over all fragments with k vertices, because of the previous formula we get an explicit quantity. And we show that the condition holds for all fragments with at least four vertices: there are enough graphs H with very few automorphisms, so this sum grows large enough, and you can prove that it is above the corresponding term of the series. Now, every nonempty fragment has at least three vertices, and the only fragment with three vertices is the triangle, and there you can complete the argument directly. And, as I said before, p_0 is the probability of being acyclic.

Can you remind us what p_k is? Oh, the p_k are the limiting probabilities of the fragments, ordered in decreasing order. But what does the index stand for? The index is just the position in this ordering: the largest probability, which is always the probability of being acyclic, then maybe the probability that the fragment is a triangle, and so on; each possible fragment has a given probability, and we order them decreasingly, whatever the order is. But here this is not a p_k; this is the sum of the p_i over all fragments with k vertices. Is that clear now? Okay, good.

So the condition is true for all fragments of size at least four, and for the fragment of size three. But there is still one left, the empty fragment, and there we have to show that p_0, which is the largest of them, is at most the tail, which, because of the ordering, is 1 - p_0. So we have to check this condition, but it means exactly that p_0 is at most one half. And p_0 is at most one half precisely because c is at least c_0.
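In symbols, this last check is simply (my notation):

```latex
p_0 \;\le\; \sum_{i \ge 1} p_i \;=\; 1 - p_0
\quad\Longleftrightarrow\quad
p_0 = f(c) \le \tfrac{1}{2}
\quad\Longleftrightarrow\quad
c \ge c_0
```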
Because when c equals c_0, the probability of being acyclic is one half; when we are below, this probability is larger, because the function is decreasing; and when we are above, it is less than one half. So we really have covered all the cases, and this completes the proof. Well, except that there is a companion to this theorem: if the condition that the term is at most the tail holds not for all i, but for all i large enough, then the set of subsums is a union of finitely many intervals. And this condition is already contained in the checks we just did. So, summarizing, we have shown that the closure is always a finite union of intervals; above or at c_0 there are no gaps; and below c_0 there is at least one gap. Good. There are more questions one could ask, for instance how the number of gaps depends on c; if anyone is interested, you can ask about these things in the questions.

Just to summarize: what do you call a fragment? Is the fragment related to the first-order sentence, or to the structure? The fragment is defined here; it is the union of the unicyclic components. There is no logic in it, only combinatorics. The components are either trees or unicyclic; the fragment is like a complex subgraph, except that it is not complex, because these components have complexity zero: the number of edges equals the number of vertices; there are no complex components. So union means non-overlapping? Yes, the connected components are disjoint, right? You take those that are trees, and their union forms the forest; the union of the unicyclic components is the fragment. And then you enumerate over all finite possibilities, all possible shapes? Exactly. All possible fragments. These graphs are finite, so each fragment is finite. Good. How do you deal with the automorphisms? Are they almost surely trivial? No; the automorphisms appear here. H is an unlabeled graph, a class of isomorphic graphs, while in G(n,p) the graphs are labeled. So the probability that the labeled graph has a fragment isomorphic to H depends on the size of H, on the constant that defines the probability model, and of course on the number of automorphisms of this unlabeled graph. This is something you can check: you write down the probability of each unicyclic graph, you take the product over all of them (it is a computation), and the number of automorphisms appears there.

Well, now, what I wanted to say to finish are two things. We have generalized this to sparse hypergraphs. What is a hypergraph? Here we consider d-uniform hypergraphs, in which every edge has d vertices; ordinary graphs are 2-uniform hypergraphs. We take every d-edge independently with probability p. Sparse means that we take the probability proportional to 1 over n^{d-1}; notice that for d equal to 2 this is 1/n, which is what we had for graphs. The expected number of edges is the probability of an edge times the number of possible edges, the number of d-subsets, and sparse means that this expectation is linear. And again we have proved something similar: we take the unique positive root of this equation. Now, you see, the d is here.
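In symbols, the sparse scaling for d-uniform hypergraphs just described is (my notation):

```latex
p \;=\; \frac{c}{n^{\,d-1}}, \qquad
\mathbb{E}[\#\text{edges}] \;=\; p\binom{n}{d} \;\sim\; \frac{c\,n}{d!}
% linear in n; for d = 2 this recovers p = c/n.
```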
So the root of this equation will depend on d. And then, again, we prove that the closure of the set of limiting probabilities is a finite union of intervals, and we have the same theorem as before. It is not difficult, it is a matter of extending the arguments, but then you have to deal with automorphisms of hypergraphs, and you need some more work. Good.

And I think now my time is almost over, so this is my last slide, just to mention that there are several variants you can investigate. We have worked on random planar graphs, random forests, random graphs from a minor-closed class; we have worked on G(n,p), and then we generalized to sparse hypergraphs. And here is a project we have started with my former student, who is a colleague now in Barcelona: we consider random graphs with a given degree sequence. This model has been studied very much in the last decades, because it is perhaps closer to real-world networks than the G(n,p) model. Here d_i(n) is the number of vertices of degree i: for every n you consider a sequence of numbers giving the number of vertices of each degree. This has to be a feasible degree sequence (the degrees have to sum to an even number, and so on), but suppose this is the case. The total number of vertices is n, and suppose that the proportion of vertices of degree i among all vertices converges to a constant lambda_i. You also assume that the first and the second moment of the degree distribution are finite; the first moment is the expected degree. Now there is a classical result, which has been extended and refined, in particular by my colleague Guillem, about when there is a giant component in this model. These are sparse graphs, because the expected degree is finite, so the number of edges is with high probability at most linear. You take the combination which is the second moment minus two times the first moment, and there is a giant component precisely when this is larger than zero; I will write the condition below. And again there is a heuristic: you explore the graph from a starting vertex by a branching process, and you study when the process dies out with high probability.

And we consider the same question: the closure of the set of limiting probabilities of first-order properties for random graphs with a given degree sequence. Maybe you are aware that to study this model one takes what is called the configuration model, a perfect matching on half-edges, to produce a random graph with this degree distribution. And the question is what this closure is. We are trying to investigate this, because in fact the G(n,p) model is, in the end, a kind of particular case: in G(n,p) the degree distribution is a Poisson law with very high concentration. We have found some partial results that I will not mention, because this is ongoing work, but we believe that, probably depending just on the first and second moments, we will be able to characterize the set of limiting probabilities and its closure, and maybe to find a result similar to this one, which in particular would imply this. So this is just to say that there are several interesting models one can analyze. And we are not aware of other groups that have investigated not single limiting probabilities, but the set of all of them.
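The giant-component condition mentioned above, stated in the usual Molloy–Reed form (I am assuming this is the criterion the speaker refers to, since it is exactly the second moment minus twice the first):

```latex
Q \;=\; \sum_{i\ge 1} i\,(i-2)\,\lambda_i \;=\; \mathbb{E}[D^{2}] - 2\,\mathbb{E}[D] \;>\; 0
% where D is the limiting degree distribution, \Pr[D = i] = \lambda_i:
% with high probability a giant component exists if Q > 0 and not if Q < 0.
```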
And why do we study them? Because we believe it is a nice problem. Okay, so this is the end of my talk, and thank you for your attention.

Thank you very much; I do find it really interesting and nice to work with. Yes, let's thank Marc for this nice talk. Are there any questions, comments, ideas about all these models? Okay, please. Maybe just a remark about graphs with a given degree sequence: I could suggest a straightforward generalization, taking graphs with degree constraints, which is like a Boltzmann relaxation of a given degree sequence. And if the threshold point is expressed as the probability of having no unicyclic components being equal to one half, then of course you could try to express very similar things. It is interesting because the point of the phase transition in the model of graphs with degree constraints also shifts, so one could see how the two shift, or maybe overlap; I think it is totally doable, and very interesting. And you mentioned your approach is not purely analytic, but I think both can be done in this case. No; I think we have already obtained the limiting law for the number of cycles, but it is not as straightforward as in G(n,p). And this is why in the team we have Guillem, who is the real expert on this. For instance, Guillem, together with Reed, has really extended the conditions for the existence of a giant component. There are some things you can prove, also in the subcritical case, about the shape of the components and all that, but it is too early to say more; we are confident that we will get to the end of this problem. Okay. Thanks, Sergei.

Are there any other questions or comments? If not, I would like to ask a very general question, which concerns hypergraphs. I know about the phase transition phenomenon that happens for random graphs. Is it the same for hypergraphs, and is that why you take this p of this order, so that if p is of lower order, there are no big components? Yeah, it is the same. There is a phase transition, which, if I remember correctly, is at (d-2)! over n^{d-1}: above this probability you have a giant component, and below you have the same structure of trees and unicyclic components. Of course, for cycles in a hypergraph there is a notion (I never remember the name; the loose cycles) in which consecutive edges of the cycle intersect in exactly one vertex. I don't remember who proved this, maybe Karoński or someone from the Poznań group, but again there is a phase transition very similar to the one in random graphs. Okay, thanks.

And one more question: have you considered any monadic second-order sentences in this setting? Yeah. For monadic second-order logic, already in G(n,p) there is not necessarily convergence. So, regarding this result of Lynch: there is a paper by Shelah and Spencer, and they show that in MSO you do not necessarily have convergence. But even so, I believe that in this model, as for planar graphs (because for planar graphs we worked in MSO), every MSO limiting probability can be approximated by FO limiting probabilities, so very likely the closure will be the same. The main difference is that here there are already MSO properties whose limiting probability alternates between zero and one.
There is a technique of Spencer that allows the construction of sentences that do not converge. Okay, Shelah and Spencer; it is in Random Structures and Algorithms, and you can find a discussion there. Yes, this is a very good question about MSO. Thank you very much.

And if there are no more questions, let's thank Marc again. Oh, yes, there is a question; I'm sorry, I didn't notice. A question in the chat. Okay, I should be able to access the chat. Okay, so the question is... okay, I hear myself, I don't know why, excuse me. Can you hear me twice or once? Okay, now it's okay; some technical problems. So the question is: the fact that the closure L_c is a finite union of intervals, can it be shown by using the fact that... all right, this is how you construct exactly this set. I don't know whether you see the question; the chat would be much easier for you to understand. This is almost my first time with Zoom. So I think, yeah, there is a chat. Okay, there is a chat here. Okay: can this be shown by using the fact that the reals with exponentiation form an o-minimal structure? Okay, my answer is: I don't know, because I don't know what an o-minimal structure is. I mean, I am a relative newcomer to logic, so I cannot answer this question. But I would be happy to discuss it, because the real exponential field, I guess, is the reals together with sums, products and taking exponentials. Yes. So my question is whether the set L_c is a definable set in first-order logic in the language of the real exponential field. No, I don't think so, because these are limiting probabilities; how are you going to express these limiting probabilities in first order? But according to Lynch's theorem, these limiting probabilities can be expressed by some finite combination. Well, yes, by some finite combination, right; they exist, and there is a finite combination. So my question is... could you go back to Lynch's result? I think it is here. Okay. So, I don't know the proof of Lynch's result, but can we obtain some... okay, it is a bit hard to explain; maybe we can discuss later. Yes. Sorry. The proof actually depends on the following: you take the quantifier depth of the formula, suppose it is L, and then it is enough to look at cycles of bounded length and at neighborhoods of radius at most three to the L. This is a typical proof of this kind of locality result. And then you see that everything depends on this; there are finitely many types, and then you sum over the types, essentially. Okay. But we can discuss later if you wish. Okay, but I understand that maybe the answer is no. Thank you. Okay, and I would be grateful if you could tell me about these o-minimal structures, which I don't know. Okay, thanks again for this question; it might be very inspiring.

Are there any other comments or questions? Okay, I don't see and I don't hear any more, so let's thank Marc again. And with this we end this session and we meet at