worth is at least five. These are all easy to write out as first-order sentences. But some properties that are not definable include a graph being connected, or three-colorable, or planar, or having an even number of edges. When we get to Ehrenfeucht–Fraïssé games, we'll talk a bit about how you establish these non-definability results. Okay, so one early program of research in finite model theory, started by Gurevich and others in the 1980s, looked at the key theorems of classical model theory to see what happens to them when you restrict to finite structures. Two of the central pillars of classical model theory, the compactness and completeness theorems, break down when restricted to finite structures. Let's look at this quickly. The compactness theorem says that if you have a theory T, a set of first-order sentences, then T has a model if and only if every finite subset of T has a model. This fails on finite structures, which is easy to see from the following example. If T consists of the sentences saying "there exist at least n elements in the universe", one for each n, then every finite subset of T has a finite model, namely any large enough finite set. But T itself has only infinite models. So the compactness theorem fails on finite structures. Similarly, Gödel's completeness theorem implies that the set of first-order tautologies — the first-order sentences phi such that phi holds in all structures A, finite and infinite — is recursively enumerable, but not co-r.e. And it's interesting that on finite structures the opposite holds. This is Trakhtenbrot's theorem: the set of finite tautologies, the sentences true in all finite structures, is co-r.e. but not r.e. So that's another big difference between classical and finite model theory.
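The compactness counterexample can be sketched numerically: over the empty signature, a structure is just a set, so a model of the sentence lambda_n ("there exist at least n distinct elements") is any set of size at least n, and satisfaction reduces to comparing sizes. A minimal illustration (the numeric encoding is mine, for illustration only):

```python
# lambda_n, "there exist at least n distinct elements", holds in a
# structure of size m exactly when m >= n
def models(m, n):
    return m >= n

# every finite subset of T = {lambda_n : n >= 1} has a finite model,
# namely any set as large as the biggest n mentioned:
finite_subset = [2, 5, 9]
assert all(models(max(finite_subset), n) for n in finite_subset)

# but no finite structure models all of T: for any size m, lambda_{m+1} fails
assert all(not models(m, m + 1) for m in range(100))
```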
And maybe not surprisingly, nearly all of the classical theorems that rely on the compactness principle turn out to fail on finite structures. This includes a large class of classical preservation, interpolation, and amalgamation theorems. But understanding exactly which classical theorems fail on finite structures and which still hold turns out to be quite a delicate matter. I'll give a quick example. Here are three of the classical preservation theorems. The first, the Łoś–Tarski theorem, says that a sentence phi is preserved under injective homomorphisms if and only if phi is logically equivalent to an existential sentence. I'm not going to go into any detail about these results, so don't worry if the statement doesn't make immediate sense to you. A related theorem, Lyndon's theorem, says something similar, but this time preservation under surjective homomorphisms is characterized in terms of equivalence to a positive sentence, that is, a first-order sentence without negation. And a third theorem, known as the homomorphism preservation theorem, looks like the intersection of these two: it says that if phi is preserved under all homomorphisms — equivalently, under both injective and surjective homomorphisms — then it's equivalent to a sentence that is both existential and positive. These results are statements about the class of all structures. But if we restrict to just finite structures, then it turns out that the first two of these theorems, the Łoś–Tarski theorem and Lyndon's theorem, fail on finite structures. Counterexamples were given by Tait in 1959 for the first one, and later for Lyndon's theorem.
But it turns out that the homomorphism preservation theorem, which looks very similar, almost like the intersection of these two, holds on finite structures. So it's a delicate question which theorems hold and which ones fail. That's one program of research in finite model theory. And one technique which applies to both the infinite and finite settings is that of Ehrenfeucht–Fraïssé games. Okay, so that was a very, very quick overview of some of the differences between classical and finite model theory, and the rest of the talk will be substantially different, so if that went too fast, you can ignore it. Now I'll go into a bit of detail about Ehrenfeucht–Fraïssé games, and I'll try to slow down a bit here. Please, again, feel free to interrupt me with questions if something's not clear. Okay, so an important parameter of first-order formulas is quantifier rank. This is defined as the maximum nesting depth of quantifiers in a formula, and there's an inductive definition: an atomic formula, x = y or R applied to some tuple of variables, has quantifier rank zero, since there's no quantifier; negation preserves quantifier rank; for disjunction and conjunction you take the maximum; and quantification increases the quantifier rank by one. There's a corresponding notion of k-equivalence of two structures: we say structures A and B are k-equivalent, written A ≡_k B, if they satisfy the same sentences of quantifier rank at most k. One key property of k-equivalence is that for any fixed finite signature, there are only finitely many k-equivalence classes. And another nice property of k-equivalence is that it's a congruence with respect to a broad class of well-behaved structural operations, for instance disjoint union or categorical product of two structures.
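The inductive definition of quantifier rank just given can be transcribed directly. A small sketch, with formulas encoded as nested tuples (the encoding is mine, chosen for illustration):

```python
# formulas as nested tuples: ("atom",), ("not", f), ("and", f, g),
# ("or", f, g), ("exists", x, f), ("forall", x, f)
def qr(phi):
    op = phi[0]
    if op == "atom":
        return 0                        # atomic formulas have rank zero
    if op == "not":
        return qr(phi[1])               # negation preserves rank
    if op in ("and", "or"):
        return max(qr(phi[1]), qr(phi[2]))   # take the maximum
    if op in ("exists", "forall"):
        return 1 + qr(phi[2])           # each quantifier adds one

# forall x exists y (E(x, y) and not x = y) has quantifier rank 2
phi = ("forall", "x", ("exists", "y", ("and", ("atom",), ("not", ("atom",)))))
assert qr(phi) == 2
```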
So for example, a Feferman–Vaught-style theorem says that if I have two pairs of structures, A and A′ which are k-equivalent, and B and B′ which are k-equivalent, then the product of A and B is k-equivalent to the product of A′ and B′. Okay, so k-equivalence also gives us a way of proving that a class of finite structures is not first-order definable. Just an observation: if we have a class of finite structures C, then to show that it's not first-order definable, it's necessary and sufficient to show that for every k there exists a pair of k-equivalent finite structures A and B such that A belongs to C and B does not. This is a very useful necessary and sufficient condition. To see why, we can look at the contrapositive. Suppose that C is first-order definable, so there's some sentence phi which defines C. Then if we have structures A and B which are k-equivalent, where k is the quantifier rank of phi, A belongs to C if and only if B belongs to C. Okay, this is a simple observation, but I'm going to make implicit use of it. So the Ehrenfeucht–Fraïssé game gives us a way to characterize the quantifier rank you need in order to distinguish two structures in first-order logic, and it was introduced separately by Fraïssé and Ehrenfeucht in the 1950s and early 1960s. Okay, so you've seen this already, but I'll give the definition of the game one more time. The k-round Ehrenfeucht–Fraïssé game on a pair of structures A and B has two players, which we call Spoiler and Duplicator — there are other names in the literature, such as Samson and Delilah — but here I'll use Spoiler and Duplicator.
So the Duplicator wants to establish that A and B are k-equivalent, and the Spoiler wants to refute that A and B are k-equivalent. Those are their goals. The way the game works is that in each of the k rounds of the game, first the Spoiler selects a structure and an element in that structure, and then the Duplicator selects an element in the other structure. This continues for k rounds. At the end of the game, after all k rounds, we have a sequence of k distinguished elements in A and a sequence of k distinguished elements in B, and we say that Duplicator wins the game if and only if these two k-tuples of elements describe a partial isomorphism between the two structures. That's the definition of the game. I'll work through some concrete examples, so this will hopefully become very clear. Okay, so the key theorem about Ehrenfeucht–Fraïssé games is the following. By the way, this is a zero-sum game, so for any given k and any pair of structures A and B, either Spoiler or Duplicator has a winning strategy. The theorem: Duplicator has a winning strategy in the k-round Ehrenfeucht–Fraïssé game on structures A and B if and only if A and B are k-equivalent. This is the key property of the Ehrenfeucht–Fraïssé game, and it has a straightforward proof by induction on formulas. There are similar games and variants of this game which characterize definability in other logics. Okay, so now let's look at some concrete examples of Ehrenfeucht–Fraïssé games, and use them to prove that certain classes of structures are not first-order definable. The first example I want to look at is the class of even linear orders — finite even linear orders. I want to show that this is not definable in first-order logic. A linear order is just a structure, and here I'm always talking about finite structures.
That is, a finite set together with a linear order on that set, and by "even" I just mean that there's an even number of elements. So the claim is that the class of even linear orders is not first-order definable. How do you prove this using Ehrenfeucht–Fraïssé games? For an arbitrary k, we give an example of a pair of structures A and B where A has even size, B has odd size, and yet they are k-equivalent — in other words, Duplicator has a winning strategy in this game. The structures we're going to use: let A be a linear order of size 2^k, and B a linear order of size 2^k + 1, and we give a winning strategy for Duplicator in the k-round game on this pair of structures. Let's see graphically how the Duplicator strategy looks. Here are the two linear orders, and we can see that B has one more element than A does. Let's say in round one, Spoiler gets to pick which structure he plays in and selects an element; say he picks this red element in B below. Duplicator has to select an element in the other structure, and what he'll do is notice that this red element is closer to the left endpoint of B, so he'll match that distance from the left end of A. In round two of the game, Spoiler again picks one of the two structures and selects an element, say this blue element in structure A. The blue element is closer to the red point than to the right endpoint of A, so Duplicator will match that distance in structure B. Now in round three of the game, maybe Spoiler plays this green element, which is closer to the right endpoint than to its neighbor on the left, and similarly Duplicator will match that distance on the right-hand side. And so on.
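Because both players have only finitely many possible moves, the k-round game on two finite linear orders can be solved exactly by brute-force recursion. The sketch below (my own encoding, with orders represented as 0..n−1) confirms the pattern just described at the smallest interesting size: orders of sizes 4 = 2^2 and 5 = 2^2 + 1 are 2-equivalent, but one extra round lets Spoiler win:

```python
def duplicator_wins(nA, nB, k, pa=(), pb=()):
    """Brute-force the k-round Ehrenfeucht-Fraisse game on the linear
    orders {0,...,nA-1} and {0,...,nB-1}; pa, pb are pebbles played so far."""
    # the pebbled elements must form a partial isomorphism (order-preserving)
    for i in range(len(pa)):
        for j in range(len(pa)):
            if (pa[i] < pa[j]) != (pb[i] < pb[j]) or \
               (pa[i] == pa[j]) != (pb[i] == pb[j]):
                return False
    if k == 0:
        return True
    # Spoiler moves in either structure; Duplicator needs a reply to every move
    return (all(any(duplicator_wins(nA, nB, k - 1, pa + (a,), pb + (b,))
                    for b in range(nB)) for a in range(nA))
        and all(any(duplicator_wins(nA, nB, k - 1, pa + (a,), pb + (b,))
                    for a in range(nA)) for b in range(nB)))

# orders of sizes 2^k and 2^k + 1 are k-equivalent: here k = 2, sizes 4 and 5
assert duplicator_wins(4, 5, 2)
# but with one more round, Spoiler can expose the size difference
assert not duplicator_wins(4, 5, 3)
```

The recursion is exponential in k, so it is only feasible for tiny parameters, but it makes the game-theoretic statement completely concrete.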
Okay, so through these five rounds of the game, the pairs of pebbles that have been played in both structures give a partial isomorphism so far — so far, Duplicator is winning this game through the first five rounds. But at this point, Spoiler can win in the next round by playing this orange element in structure B, to which Duplicator has no reply: no matter which element of A he colors orange (and of course the same element can carry multiple colors), it won't be a partial isomorphism. So I hope this example visually illustrates how Duplicator can hold out for a certain number of rounds. What can we say in general for this game when A and B differ by one element? In general, Duplicator can win the k-round game provided both A and B have at least 2^k elements. So Duplicator can win for logarithmically many rounds, and the winning strategy is just: in round j of the game, preserve distances up to 2^(k−j). You can write out formally how this works, but I think it's pretty clear — a very simple example. Okay, now I want to use this result about even linear orders to show that the class of connected graphs is not first-order definable, and this will illustrate another technique in model theory known as a first-order interpretation. The observation is this: for a linear order A of size n, I'm going to define a graph G(A), which will be a graph on the same vertex set as A, where the edge relation puts an edge between every two elements at distance two cyclically. Let me illustrate it here: we've added purple edges between elements at distance two apart, wrapping around.
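The interpreted graph G(A) is easy to compute explicitly. A sketch (my own encoding, with vertices 0..n−1) that builds the cyclic distance-two graph and counts its connected components by depth-first search:

```python
def g_of_a(n):
    """The interpreted graph G(A): vertices 0..n-1, with an edge between
    every pair of elements at cyclic distance two."""
    return {frozenset({i, (i + 2) % n}) for i in range(n)}

def num_components(n, edges):
    """Count connected components by iterative depth-first search."""
    adj = {v: set() for v in range(n)}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), 0
    for s in range(n):
        if s not in seen:
            comps += 1
            stack = [s]
            while stack:
                v = stack.pop()
                if v not in seen:
                    seen.add(v)
                    stack.extend(adj[v])
    return comps

# odd n gives a single n-cycle; even n gives two disjoint (n/2)-cycles
assert num_components(7, g_of_a(7)) == 1
assert num_components(8, g_of_a(8)) == 2
```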
And so the observation is that if n is odd, the graph G(A) is a single n-cycle, a connected graph, and if n is even, G(A) is a disjoint union of two cycles of size n/2. In other words, connectivity of the graph G(A) depends on the parity of A. Okay, so here's our proof, using this little observation, that the class of connected graphs is not first-order definable. Toward a contradiction, assume we have some sentence phi which defines connectivity. I'm going to define a different sentence phi* by replacing each atomic subformula of the form E(x, y) — "there's an edge between x and y" — with the formula saying that y is x plus two or y is x minus two, cyclically, in the linear order. So phi is a sentence in the language of graphs, but the translated phi* is a sentence about linear orders. And the observation is that the negation of phi* would define evenness on the class of linear orders, which we just showed is impossible. Of course, one can also prove directly that connectivity of graphs is not first-order definable using Ehrenfeucht–Fraïssé games, for example on the pair of structures consisting of one n-cycle versus the disjoint union of two n-cycles. But this reduction to the non-definability of evenness of linear orders illustrates the technique of interpretation, so I just wanted to mention that. Is this clear so far? This is all very elementary. Okay, so now I'm going to give a slightly more complicated example, and this will lead to a more interesting result as well. I want to consider a class of structures which I'll call set / power set structures. I'll denote by SetPower_n the following structure.
The universe of the structure consists of two sorts: it is a disjoint union of a set of atoms, which will just be the integers 1 to n, together with the set of all sets of atoms, the power set of {1, ..., n}. There are two unary relations, Atom and Set, which name these two parts of the structure, and a binary relation In, which is just set membership: In(i, X) holds if and only if i is a member of X. Okay, so that's SetPower_n. In general, a finite set / power set structure is any structure isomorphic to SetPower_n for some n, and we'll say the structure is even or odd according to the parity of n — according to the number of atoms in the structure. So a structure has atoms and sets, and then the set membership relation between those two sorts. Okay, so the first observation is that the class of finite set / power set structures is first-order definable. This is not completely obvious at first, because the natural attempt — the top expression here, saying that for every subset of atoms there is some set X such that, for every atom i, that atom belongs to the subset if and only if In(i, X) — is not a proper first-order formula: we've quantified over all subsets of atoms. Instead, the way you define this class of set / power set structures is the following. First, we can write a formula expressing that the empty set belongs to the set sort. Then we say that for every set S and every atom, the union of S and that extra atom also belongs to the set sort. This gives a closure property of the collection of sets, and this formula exploits the fact that we're talking about finite structures in a rather essential way.
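A quick sketch of the structure SetPower_n and of the closure property behind the first-order definition (the representation choices here are mine):

```python
from itertools import combinations

def set_power(n):
    """Build SetPower_n: atoms 1..n, the full power set, and the In relation."""
    atoms = list(range(1, n + 1))
    sets_ = [frozenset(c) for r in range(n + 1) for c in combinations(atoms, r)]
    In = {(i, X) for X in sets_ for i in X}   # set membership between the sorts
    return atoms, sets_, In

atoms, sets_, In = set_power(3)
assert len(sets_) == 2 ** 3   # the set sort is the full power set

# the closure property used in the first-order definition:
# the empty set is in the set sort, and adding any one atom keeps you in it
assert frozenset() in sets_
assert all((X | {i}) in sets_ for X in sets_ for i in atoms)
```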
So if our set sort contains the empty set and is closed under adding one element at a time, then — since everything is finite — it contains every subset of atoms. Okay, so now the theorem I want to show you using Ehrenfeucht–Fraïssé games is that the class of even set / power set structures, however, is not first-order definable: we can't define the class of set / power set structures with an even number of atoms. The way I'm going to show this is by considering again a pair of structures A and B, where A is SetPower_n and B is SetPower_{n+1}, so one of them is even and one is odd. I'm going to show that Duplicator has a winning strategy in the (log n)-round Ehrenfeucht–Fraïssé game, okay? This establishes that the class of even set / power set structures is not first-order definable. And again, I'm going to illustrate the strategy visually. Here's the picture of the structures A and B. We can see that A and B each have the two sorts, the set of atoms and then some cloud of sets, and B has one extra atom here, this gray dot in the middle. We're going to give a winning strategy for log n rounds. Let's say that in the first round of the game, Spoiler picks an element in the set sort of A, and it happens to be this set of four elements. What should Duplicator's reply be? Which element of B should he play? Well, he'll play any set of four elements in B, and the key point is that four is less than half of the elements. Now in the next round of the game, let's say that Spoiler plays this blue set, which contains three points: it intersects the red set in one point and has two other points. Here, Duplicator will play a similar configuration in structure A, and so far everything looks fine, because these sets are both small — they contain fewer than half the elements.
But now in the next round, let's say that Spoiler moves back to structure A and plays this green set, which contains everything but two elements of A. Now, instead of matching the size of this set exactly, Duplicator will play something matching its complement: everything but two points in structure B. Then, say, there's this yellow set, which we match, and it keeps going like this. At this point we have two matched sets where one has two elements and the other has three, and now Spoiler will start playing in the atom sort. At this point in the game we still have a partial isomorphism, but in the next round Spoiler can win by playing this blue point, to which Duplicator has no reply. Okay, I hope this illustrates the general principle behind the winning strategy. The exercise is to take this intuition and show formally that if A has n atoms and B has n + 1 atoms, then Duplicator really can hold out for log n rounds. Basically, the invariant Duplicator tries to preserve is this: look at all Boolean combinations of the sets played so far in one structure and in the other, and match the sizes of corresponding combinations up to a threshold that decreases by a factor of two in each round of the game. That's how you can make this formal. Okay, so we've just shown that the class of even set / power set structures is not first-order definable, and now I'm going to give an application of this result — something nice about a variant of first-order logic known as order-invariant first-order logic. Here's the definition. We have a first-order sentence phi which involves an additional binary relation symbol, which is supposed to be interpreted as a linear order. We'll say that this sentence is order-invariant if the following holds for every unordered finite structure A.
For every finite structure A without a linear order, if we take any two linear orders on A, then A expanded with the first linear order satisfies phi if and only if A expanded with the second linear order satisfies phi. In other words, a first-order sentence which speaks about an extra linear order is order-invariant if and only if its truth doesn't depend on the choice of order that you put on the structure. Okay, so we say that a class C of finite structures is order-invariantly definable — definable in order-invariant first-order logic — if there exists some order-invariant sentence phi such that an unordered structure A belongs to C if and only if A together with an arbitrary linear order on A models phi. Okay, so now I want to show that the class of even set / power set structures — which we just showed is not first-order definable — is in fact definable in order-invariant first-order logic. We've already defined the class of all set / power set structures in first-order logic, so all we need to do is define the even ones within that class, given some arbitrary linear order on the entire universe. Recall that the universe of these structures is a disjoint union of atoms and sets. We have a linear order on the whole universe, and of course, by restriction, it induces a linear order on the set of atoms alone. Now, to define evenness of the set of atoms, we can say the following: there exists some set which contains every other atom. That is, it contains the minimal atom according to this linear order, it does not contain the maximal atom, and for every two consecutive atoms it contains exactly one of them, okay?
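This definition is easy to test concretely: relative to a given order on the atoms, the set of atoms in odd positions is the only candidate witness (the conditions force it to alternate starting at the minimum), and it satisfies all three conditions exactly when the number of atoms is even. A sketch, with my own encoding of the induced order as a Python list:

```python
def even_by_order(atoms_in_order):
    """Check the order-invariant definition of evenness: is there a set
    containing the minimal atom, not the maximal atom, and exactly one of
    every two consecutive atoms?"""
    S = set(atoms_in_order[0::2])   # atoms in positions 1, 3, 5, ...
    return (atoms_in_order[0] in S
            and atoms_in_order[-1] not in S
            and all((a in S) != (b in S)
                    for a, b in zip(atoms_in_order, atoms_in_order[1:])))

# the answer depends only on the number of atoms, not on the chosen order
assert even_by_order([1, 2, 3, 4]) and even_by_order([3, 1, 4, 2])
assert not even_by_order([1, 2, 3, 4, 5])
```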
So, okay, this is something which will hold if and only if the number of atoms is even. So what we've shown is this result of Gurevich, that order-invariant definability is strictly more powerful than ordinary first-order definability — we've given an example which shows this. And this result has an interpretation in relational database theory, which I'll mention briefly. In database theory, people of course consider relational databases abstractly, but in fact any physical implementation of a relational database will impose some extra relation on the data, for example a linear order or some successor relation. A question that arises is whether it's possible to exploit this extra structure in some query language like SQL — which you can think of as being like first-order logic — in a representation-independent fashion. If you're going to make use of an extra linear order imposed on your data, then, since that order depends not on the database itself but on its representation, you want to make sure your queries are not sensitive to the choice of linear order. Translated into finite model theory language, this question is precisely asking whether order-invariant definability is more powerful than first-order definability. And instead of order-invariance, you could consider some other kind of auxiliary relation and ask about invariance with respect to that.
So one other question people have asked is about invariance with respect to an auxiliary successor relation, and a result of mine is that even successor-invariant definability turns out to be more powerful than first-order definability. The counterexample is based on Gurevich's counterexample, but it has some extra ingredients: it combines various standard Ehrenfeucht–Fraïssé games — on set / power set structures, on long paths, on random graphs — so it's a nice application of many different Ehrenfeucht–Fraïssé games. Okay, so that concludes what I wanted to say about Ehrenfeucht–Fraïssé games, and now I'll move on to the main part of the talk, which will be about logic and random structures — zero-one laws in particular. The study of asymptotic properties of logical expressions is one major area in finite model theory, and here I'll give a proof of the classic zero-one law and then mention some connections to computational complexity. Okay, so I'm sure you're familiar with the definition of the Erdős–Rényi random graph G(n, p): this is a random graph with n vertices, vertex set {1, ..., n}, in which every pair of vertices is independently connected by an edge with probability p — we flip a biased coin independently for every pair of vertices and add an edge with probability p. In this talk I'll consider two versions of this. The uniform random graph is the one where each edge is included with probability one half, denoted G(n, 1/2). And I'll also mention some results concerning the Erdős–Rényi graph G(n, p) where p is n^(−α) for some constant α between zero and one. These are a particular model of sparse random graphs.
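Sampling G(n, p) is a direct translation of the definition — one independent biased coin per pair of vertices. A minimal sketch:

```python
import random

def gnp(n, p, seed=0):
    """Sample an Erdős–Rényi random graph G(n, p) as a list of neighbor sets."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:      # one independent biased coin per pair
                adj[i].add(j)
                adj[j].add(i)
    return adj

g = gnp(100, 0.5)
m = sum(len(nbrs) for nbrs in g) // 2
# the expected number of edges is p * C(100, 2) = 2475; a deviation past
# these loose bounds would be astronomically unlikely
assert 2000 < m < 3000
```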
Okay, so the classic zero-one law for first-order logic was proved independently by Fagin and by Glebskii and coauthors in the Soviet Union. The zero-one law says that with respect to the uniform random graph G(n, 1/2), for every first-order sentence phi, the probability that G(n, 1/2) satisfies phi tends either to zero or to one. So every first-order sentence either holds asymptotically almost surely, or its negation holds asymptotically almost surely, with respect to the random graph. And I'm going to give a proof of this. The proof of the zero-one law uses the following graph property, known as k-extendability or the k-extension property, which I'll denote X_k. This property says that for every set S of at most k − 1 vertices and every subset T of S, we can find some vertex in the graph outside of S which is adjacent to everything in T and non-adjacent to everything in S \ T. That's the statement. Just to illustrate: the extension axioms for k = 4 say that for every three vertices — for instance these red, yellow, and blue vertices — every subset of those vertices is witnessed, in terms of adjacency, by some other vertex in the graph. So for every three vertices we can find eight witnesses: one adjacent to none of them, one adjacent to all three, and one for every other subset. Okay, so that's X_k. The first lemma about k-extendability is that if we have two graphs G and H which are both k-extendable, then they are k-equivalent: they satisfy the same first-order sentences up to quantifier rank k. This is something we can show very easily by considering the Ehrenfeucht–Fraïssé game: we take two k-extendable graphs G and H and give a winning strategy for Duplicator in the k-round game, and this is extremely simple because all we need to do is maintain a partial isomorphism in each round. If Spoiler plays some yellow element which has an
edge, then we can find a yellow element with a matching edge; and similarly, if he plays a blue element with an edge only to the yellow and not to the red, then there exists some blue element adjacent only to the yellow and not to the red, simply by k-extendability; and so on. We can maintain a partial isomorphism for k rounds. Okay, and now the second key lemma about k-extendability is that the uniform random graph G(n, 1/2) is almost surely k-extendable. Here's the proof. I'm going to look at the probability that a random graph G(n, 1/2) is not k-extendable and show that it tends to zero. The event that G is not k-extendable is the union of the events that G fails the k-extension axiom with respect to some particular S and T from the definition, so we use a simple union bound: the failure probability is at most the sum, over all S and T — S a set of size at most k − 1 and T a subset of S — of the probability that G fails the extension property with respect to that S and T. What failure literally means is that every x outside the set S either lacks an edge to something in T or has an edge to something in S \ T. By independence over the candidate witnesses x, that probability is at most (1 − 2^(−(k−1)))^(n−k), and the number of such pairs S and T is at most n^k · 2^k. So we get a bound of the form n^k · 2^k · (1 − 2^(−(k−1)))^(n−k), and since k is fixed, this goes to zero as n grows. Okay, so we've proved that almost surely G is k-extendable, and this gives us our proof of the zero-one law.
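The extension property X_k is a finite condition, so it can be checked by exhaustive search, which makes both lemmas easy to experiment with. A sketch (graphs as boolean adjacency matrices, an encoding of mine): note that the complete graph K5 fails X_2, since the case T = ∅ demands a non-neighbor, while a 5-cycle satisfies X_2.

```python
from itertools import combinations
import random

def is_k_extendable(adj, k):
    """Exhaustively check X_k: for every S with |S| <= k-1 and every T ⊆ S,
    some vertex outside S is adjacent to all of T and to nothing in S \\ T."""
    V = range(len(adj))
    for size in range(k):                      # |S| = 0, 1, ..., k-1
        for S in combinations(V, size):
            for r in range(size + 1):
                for T in combinations(S, r):
                    rest = set(S) - set(T)
                    if not any(all(adj[x][t] for t in T)
                               and not any(adj[x][s] for s in rest)
                               for x in V if x not in S):
                        return False
    return True

K5 = [[i != j for j in range(5)] for i in range(5)]
C5 = [[abs(i - j) in (1, 4) for j in range(5)] for i in range(5)]
assert not is_k_extendable(K5, 2)   # no vertex has a non-neighbor
assert is_k_extendable(C5, 2)

# per the second lemma, a sample of G(n, 1/2) should pass X_3 at modest n
rng = random.Random(1)
n = 40
G = [[False] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        G[i][j] = G[j][i] = rng.random() < 0.5
print(is_k_extendable(G, 3))   # True with high probability at this size
```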
Actually, I'm going to give a slightly different, equivalent formulation of the zero-one law, in terms of two independent random graphs G and H. The zero-one law, I claim, is equivalent to the following statement: for every first-order sentence phi, almost surely G satisfies phi if and only if H satisfies phi, where G and H are two independent uniform random graphs. This follows formally because the probability that (G satisfies phi if and only if H satisfies phi) equals p² + (1 − p)², where p is the probability that G satisfies phi, and this quantity tends to one precisely when p tends to zero or to one. So here's the proof of the zero-one law: let k be the quantifier rank of our sentence phi, and take two independent random graphs G and H. Almost surely they are both k-extendable, therefore they are k-equivalent, and therefore one satisfies phi if and only if the other does. This proves the equivalent formulation of the zero-one law. Is that clear? So now I'm going to mention some other zero-one laws, without going into any proofs. A very beautiful result of Shelah and Spencer gives a zero-one law for first-order logic with respect to the random graph G(n, n^(−α)), where α is any irrational number between zero and one. Here it's actually essential that α is irrational — I'll come back to that in a moment — but for such graphs, first-order logic has a zero-one law. Another very interesting
result, which is not a zero-one law but what's known as a convergence law, is for first-order logic with respect to a random unary function. Here we consider structures consisting of a finite set together with a random unary function on that set; this is a result due to Lynch. For every first-order sentence phi in the language of a unary function, there is some real number c between zero and one such that the probability that a random unary function on n elements satisfies phi tends to c. Moreover, the limiting probability c is an expression you can write down using integer constants and the operations plus, times, division, and exponentiation. But there are other settings where there is in fact no zero-one law, and not even a convergence law. First of all, if instead of the random graph, where the only relation is adjacency, we look at ordered random graphs — a random graph together with a linear order — then clearly first-order logic has no zero-one law, because we can express, say, that there is an edge between the first and second vertices in the linear order, and that has limiting probability one half. But we can ask whether there's still a convergence law — some limiting probability for every sentence. In fact, that's not the case: there exists a first-order sentence phi in the language of ordered graphs such that the probability that a random n-vertex ordered graph satisfies phi does not converge. And similarly, if we take a rational number α between zero and one, then there is a first-order sentence in the language of (unordered) graphs whose probability of holding in G(n, n^(−α)) does not converge.
similarly, if we look at a random binary function instead of a random unary function, then again there's no convergence law. Each of these counterexamples has something in common: they show that in such random structures you can, with high probability, interpret some very small initial segment of arithmetic, which lets you somehow speak about the size of the structure, and that's how you get non-convergence. Now I want to mention an interesting open question. I already defined order-invariant first-order logic for you, and a very interesting open question is whether that logic has a zero-one law. Okay, so now let me discuss a different kind of convergence law. I'm going to look at an extension of first-order logic by what's known as a parity quantifier, or mod-2 quantifier. The way the mod-2 quantifier works is that from a formula φ with a free variable x we can form a formula mod2 x. φ, which expresses that φ(x) holds for an even number of x. This logic is only well defined on finite structures. A very nice recent result of Kolaitis and Kopparty gives what's called a modular convergence law for this logic, FO[mod 2]. The modular convergence law says that for every sentence φ of FO[mod 2] there exist two constants a₀ and a₁ such that the probability that G(n, 1/2) satisfies φ tends to a₀ as n goes to infinity over the even integers, and tends to a₁ over the odd integers. So there are two limits. And this result
actually holds for any mod-p quantifier with p prime, in which case you get p limiting probabilities. The proof is very nice because it combines techniques from complexity theory and algebra as well as logic; in particular it involves approximation by low-degree polynomials, the Gowers norm, and quantifier elimination. One open question coming out of that work is whether there's a similar modular convergence law if, rather than a mod-p quantifier for a prime p, we have for instance a mod-6 quantifier, or one for any composite modulus; indeed, even prime powers are open. In a moment I'll talk about connections to circuit complexity, but some of you may be familiar with the open questions about AC⁰ with mod-6 gates, and in my mind getting a result about first-order logic with a mod-6 quantifier could potentially shed some light on AC⁰ with mod-6 gates. So it's a very interesting question. Okay, so now I want to draw an analogy between zero-one laws and circuit complexity by talking about correlated pairs of random structures. This slide is a repeat of the previous slide, where I gave the two formulations of the zero-one law: when G and H are independent random structures, the zero-one law says that for every first-order sentence φ, almost surely G satisfies φ if and only if H satisfies φ. That's for independent random structures G and H. In fact, it follows from this that
if we take G and H not to be independent random structures but a correlated pair, conditioning on G and H differing at exactly one edge, so that the symmetric difference of the edge sets of G and H has size exactly one, then even under this conditioning, for any first-order sentence φ, almost surely G satisfies φ if and only if H satisfies φ. I'll refer to this as a kind of correlated zero-one law, and it follows formally from the previous statement. That's in the language of unordered graphs. But now the question is: what if we add some background relations to our graphs G and H, such as a linear order or arithmetic? What happens to this correlated zero-one law? So here's the question. I'm going to add to the background of our graphs G and H some arithmetic predicates: plus, times, and a linear order. What happens to the correlated zero-one law? Well, we already saw that the plain zero-one law fails if you have a linear order, because you can say there's an edge between the first and second elements of the graph. But the amazing fact is that this correlated zero-one law turns out to hold even if you have arithmetic in the background. And in fact this correlated zero-one law for random graphs with plus and times is hiding a statement about circuit complexity; it fairly directly implies that parity is not in AC⁰.
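The correlated pair can be made concrete with a small simulation; this is my own sketch, with an arbitrary test sentence ("G contains a triangle") standing in for a general first-order sentence, and arbitrary sizes:

```python
import itertools
import random

def random_graph(n, p, rng):
    return {frozenset(e) for e in itertools.combinations(range(n), 2)
            if rng.random() < p}

def flip_one_edge(n, edges, rng):
    """Return H differing from G at exactly one uniformly random
    potential edge: the correlated pair with |E(G) xor E(H)| = 1."""
    e = frozenset(rng.sample(range(n), 2))
    return set(edges) ^ {e}

def has_triangle(edges):
    """The first-order sentence 'there exist x, y, z forming a 3-clique'."""
    es = set(edges)
    verts = sorted(set().union(*es)) if es else []
    return any(frozenset((a, b)) in es and frozenset((b, c)) in es
               and frozenset((a, c)) in es
               for a, b, c in itertools.combinations(verts, 3))

def correlated_agreement(n, trials, seed=0):
    """Fraction of trials in which G and its one-edge flip H agree."""
    rng = random.Random(seed)
    agree = 0
    for _ in range(trials):
        g = random_graph(n, 0.5, rng)
        h = flip_one_edge(n, g, rng)
        agree += has_triangle(g) == has_triangle(h)
    return agree / trials
```

Even at modest n, flipping one random edge almost never changes the truth value of such a sentence, which is the phenomenon the correlated zero-one law captures.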
So we're taking random graphs G and H conditioned on differing at exactly one edge, measured by the symmetric difference of the edge sets; in other words, you can first take a uniform random G, then pick a uniform random potential edge and flip it, and that's how you get H. The first two statements here are just the zero-one law we saw before, for independent random G and H, and the point I'm making is that the zero-one law directly implies this weaker statement: even when G and H are not independent but differ at one edge, we still have this correlated version of the zero-one law. On the next slide I look at G and H again, but now enriched with extra relations, so that we can also talk about plus, times, and a linear order on the universe of these random graphs. And the point is that the zero-one law becomes false, but the weaker correlated zero-one law continues to hold even when we're allowed to talk about these background relations. This is in fact a disguised version of, literally equivalent to, the statement that DLOGTIME-uniform AC⁰ has average sensitivity o(n), where n is the number of input variables. This way of explaining zero-one laws and correlated zero-one laws is my own; I'm trying to draw some kind of analogy between the two things. I'm not committed to a particular way of looking at this, but I think it's an interesting view: you could look at different degrees of correlation and try to bridge the gap. Anyway, I think there's some relation.
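To unpack "average sensitivity": it's the expected number of coordinates of a uniformly random input whose flip changes the function's value. Here is a brute-force sketch of my own for tiny n; parity has average sensitivity exactly n, the maximum possible, which is one intuition for why parity cannot be computed in AC⁰:

```python
import itertools

def average_sensitivity(f, n):
    """Average over uniform x in {0,1}^n of the number of coordinates i
    such that flipping x_i changes f(x).  Brute force over all 2^n inputs."""
    total = 0
    for x in itertools.product((0, 1), repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1                      # flip coordinate i
            total += f(x) != f(tuple(y))
    return total / 2 ** n

def parity(x):
    return sum(x) % 2

def or_gate(x):
    return int(any(x))
```

Every single-bit flip changes parity, so its average sensitivity is n; an n-ary OR, a depth-one AC⁰ circuit, is only sensitive near the all-zero input, so its average sensitivity is tiny.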
Okay, so now in the last part of the talk I'm going to describe some connections to circuit complexity. This is the Venn diagram from Immerman's book on descriptive complexity. As we heard in Anuj's talk, descriptive complexity is concerned with the characterization of complexity classes using logic, and the prototypical result in the field is Fagin's theorem, which says that existential second-order logic captures the complexity class NP. In this talk I'm concerned with first-order logic, and there's a very nice descriptive-complexity characterization of first-order logic in terms of the complexity class AC⁰, that is, constant-depth polynomial-size Boolean circuits. So let me give the definition of Boolean circuits. Since I'm talking about properties of graphs, I'll consider Boolean circuits which take n-vertex graphs as inputs: there are n choose 2 variables x_ij indicating the presence of an edge between vertices i and j, and the circuit is built up out of AND, OR, and NOT gates, where the AND and OR gates can have unbounded fan-in, any number of wires coming in, and there's a single designated output gate. Every such circuit computes a Boolean function from n-vertex graphs to {0, 1}. AC⁰ is the complexity class of languages recognized by constant-depth polynomial-size families of Boolean circuits. The descriptive-complexity result saying first-order logic equals AC⁰ is the following; it was discovered independently by different people with slightly different formulations. The result is that for every first-order formula φ there's a constant-depth
Boolean circuit of polynomial size that computes whether φ is true or false, given a suitable encoding of a structure of size n. I'm not going to go into the details; the statement here is a bit hand-wavy. As I was saying before, here I've just copied the correlated zero-one law from the last slide of the previous part, and as I said, via this descriptive-complexity characterization it translates directly into the statement that every Boolean function in uniform AC⁰ has average sensitivity o(n), which implies that parity is not in AC⁰. Actually, we can refine this characterization even further by relating a certain parameter of first-order formulas, called width, to circuit size. I'll define width on the next slide, but if you look at this construction, a first-order formula φ of width k translates into an equivalent circuit of size O(n^k). So what is the width of a formula? First, to answer the question: yes, FO with the bit predicate is equivalent to FO with plus, times, and a linear order, or even to just plus and times; it's all equivalent. And when I talk about uniform AC⁰: there are versions of this equivalence for uniform and for non-uniform AC⁰, and if you have AC⁰ on structures with a bit predicate, that gives you DLOGTIME-uniform AC⁰. But for this talk, everything I say will apply to non-uniform AC⁰. Okay, so the width of a first-order formula is the maximum number of free variables in any subformula. Equivalently, a formula has width at most k if and only if
it's equivalent, via some renaming of the variables, to a formula which has at most k distinct variable symbols. Here's an example of that. Consider the following formula in the language of ordered graphs, saying that there exists an increasing path of length five: there exist x₁ and x₂ with x₁ < x₂ and an edge between them, then an x₃ greater than x₂ with an edge from x₂, and so on; so there's an increasing path of length five in the ordered graph. I claim that this formula actually has width two. It has five variables, but width two, and we can see this from the fact that we're able to rename the variables so as to use only two of them, recycling them along the path. So, after renaming, the number of variables is the same thing as width. I'll denote by FO^k the set of first-order sentences of width at most k; this is also known as the k-variable fragment of first-order logic, because you can think of it as the class of formulas with at most k distinct variable names. This gives a stratification of first-order logic known as the width hierarchy, or variable hierarchy, and there's a version of the Ehrenfeucht–Fraïssé game, often called the k-pebble game, which characterizes equivalence in the k-variable fragment. The basic question people have considered is: over which classes of finite structures is this width hierarchy, the number-of-variables hierarchy, strict in terms of expressive power? For example, over the class of all finite structures it's easy to see that the width hierarchy is strict, because the sentence "there exist at least k elements", saying the universe has size at least k, is something we can express
with a width-k formula, a formula with k variables, but not by any formula of smaller width. But if we look at the class of finite linear orders with no additional structure, then in fact it's easy to see that the hierarchy collapses to its two-variable fragment; in other words, every first-order sentence on the class of finite linear orders is equivalent to a sentence with only two variables. The previous example essentially shows this: you can talk about how large a linear order is by recycling just two variables. A much harder result shows that over the class of colored linear orders, linear orders with any number of unary relations, the hierarchy collapses to its three-variable fragment, and that holds both for finite colored linear orders and for the class of infinite colored linear orders. But one question along these lines, which had been open for a long time, is whether this hierarchy is strict on the class of finite ordered graphs. A natural property to consider in trying to separate the variable hierarchy is the existence of a k-clique, because it's obvious how to define the existence of a k-clique in an ordered graph with a k-variable sentence, but it's not clear how to do so with fewer than k variables, even if you have access to a linear order. For two variables, one can prove explicitly that they are insufficient to express that there exists a 3-clique, and this is something you can show by playing the two-pebble Ehrenfeucht–Fraïssé game on a suitable pair of ordered structures; for instance, if you take these two complete binary trees with this ordering,
and then we add an edge like this to make a 3-clique, a triangle, in the bottom graph, then you can show, by taking large enough structures, that the number of rounds Spoiler needs to win the two-pebble game grows without bound; and that shows that 3-clique is not definable with two variables on ordered graphs. But this is basically the limit of what we know how to prove using explicit Ehrenfeucht–Fraïssé games. By an ordered graph I just mean a structure with two relations, an adjacency relation and a second relation that we require to actually be a linear order. Okay, so now the result pertaining to this width hierarchy. I'm going to give a statement in the form of another correlated version of a zero-one law. Consider a random graph G(n, p) where p is the threshold probability for the existence of a k-clique, that is, G(n, p) contains a k-clique with probability exactly one half; it turns out that this threshold probability has the form n^(−α) for some constant α depending on k. So we consider such a random graph G, and I'll let H be the correlated random structure obtained by taking G and uniformly planting a k-clique somewhere in it. So this is a pair of structures where G has a k-clique with probability one half, but H has a k-clique with probability one.
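The planted pair can be sketched as follows; this is my own illustration, and the brute-force clique check is exponential in k, meant only for tiny examples:

```python
import itertools
import random

def random_graph(n, p, rng):
    """Sample G(n, p) as a set of frozenset edges."""
    return {frozenset(e) for e in itertools.combinations(range(n), 2)
            if rng.random() < p}

def plant_clique(n, k, edges, rng):
    """H: take G and add all edges among a uniformly random k-subset,
    so H contains a k-clique with probability one."""
    s = rng.sample(range(n), k)
    return set(edges) | {frozenset(e) for e in itertools.combinations(s, 2)}

def has_k_clique(n, k, edges):
    """Brute-force check for a k-clique (only for small n and k)."""
    es = set(edges)
    return any(all(frozenset(p) in es for p in itertools.combinations(c, 2))
               for c in itertools.combinations(range(n), k))
```

By construction H always contains a k-clique and differs from G only on the edges inside the planted k-subset, while at the threshold p the base graph G contains one with probability one half.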
What I showed is that for any sentence φ of width at most k/4, almost surely G satisfies φ if and only if H satisfies φ, even if you're allowed to talk about arithmetic on the vertices of G and H. So I'm making an analogy with the correlated zero-one law from before, except that here it applies not to all first-order sentences, but just to first-order sentences of width up to k/4. Some corollaries of this result: it shows in particular that the k-clique property is not definable with k/4 variables on the class of ordered graphs, and in fact this holds even in an average-case sense. Of course this implies that the (k/4)-variable fragment is less expressive than the k-variable fragment of first-order logic on the class of ordered graphs, so the width hierarchy for ordered graphs is infinite; it doesn't collapse. In fact, a nice corollary of these results, which was pointed out to me by Neil Immerman, is that the fact that the hierarchy doesn't collapse implies that it's strict for ordered graphs: for every k there's some property of ordered graphs which is definable with k + 1 variables but not with k variables. So this answers the question about the width hierarchy for ordered graphs. I also wanted to mention some upper bounds for this problem. In fact this k/4 turns out to be tight in the context of average-case definability of k-clique: Amano gave a first-order sentence of width k/4 plus a constant number of variables, in the language of graphs with arithmetic, which almost surely defines the existence of a k-clique for this random graph G at the threshold. And
a recent result of mine, still unpublished, is that even if you only have a linear order, then with k/4 plus constantly many variables you can almost surely define the presence of a k-clique. So that shows the lower bound is really tight for the k-clique property. I also wanted to mention that, unlike the two-pebble game before, where you can give explicit examples of structures, this result goes through circuit complexity: it really proves a lower bound about Boolean circuits and then uses the descriptive-complexity characterization of first-order logic to obtain the result about first-order logic. The statement of the result in terms of circuits is simply that constant-depth Boolean circuits of size O(n^(k/4)) cannot solve k-clique in the average case. To mention a few things about this result: it in fact holds not just for constant depth, but for depth up to log n / (k² log log n), and this is almost tight, because if you could improve it to depth log n / log log n, that would imply a separation of the complexity classes NC¹ and NP. This result also breaks out of what had been known as a size-depth tradeoff, which was a feature of the previous lower bounds for k-clique on depth-d circuits, and those were not even average-case lower bounds but worst-case ones. What had been known previously is that depth-d Boolean circuits defining k-clique need size something like n^(Ω(k/d²)); the point is that those lower bounds degrade very rapidly with the depth, whereas the n^(k/4) lower bound
holds without any decay, though only up to a certain depth. This answered some questions in complexity theory and led to a kind of size hierarchy theorem for AC⁰. I'm not going to get into the proof at all, but just to mention a few things about it: the previous lower bounds for the k-clique problem, like many AC⁰ lower bounds, used the technique known as Håstad's switching lemma, and they used it in a conventional way, which is what leads to the undesirable dependence on the depth parameter d in the exponent of the lower bound. My proof also uses the switching lemma, but in a very different way, and there are some key new ingredients; for instance, there's a new notion of average sensitivity with respect to a kind of shape, and we identify a class of bottleneck shapes, which is what gives the n^(k/4) lower bound. It looks like I'm finishing a little early, so here are some references you can look at to learn more about these subjects. Finally, let me just repeat the two open questions that I raised in the talk: is there a zero-one law for order-invariant first-order logic, and does the modular convergence law of Kolaitis and Kopparty hold for, for instance, first-order logic with a mod-6 quantifier? Thank you.