I suppose I should say thank you for coming back for the second part. I'll start with a quick recap of yesterday, in a one-slide version. We saw Fagin's theorem, whose proof Jam presented: a class of finite structures is definable in existential second-order logic if and only if the class is in NP, that is, decidable by a non-deterministic machine in polynomial time. It's an open question whether there is a similar characterization of polynomial time. We have characterizations obtained by putting restrictions in place, such as requiring an order on the structures, but nothing in full analogy to Fagin's theorem. And we saw that the question is equivalent to the existence of problems in P that are complete under first-order reductions. First-order logic is computationally very weak, as we saw, but that's not the essence of the result; the point is that first-order logic is itself invariant under isomorphisms. If you allow first-order reductions to use an order on the structure, then of course we do have complete problems, but in some sense that's a cheat. And we can replace first order with something like IFP and the statement is still equivalent: if there is a complete problem under IFP reductions, then we have a logic for P, and conversely. IFP, we saw, extends first-order logic with inflationary fixed points. It is powerful enough to express P-complete problems, yet still too weak to express such simple things as evenness; and it captures polynomial time on ordered structures. Now, back in the 1980s, Immerman argued that first-order logic is weaker than polynomial time on essentially two grounds. One is its inability to express inductive definitions, which is addressed simply by adding them, giving IFP. The other is its inability to do any kind of counting, and we've seen that IFP doesn't address that at all.
It still can't express evenness. Immerman proposed: just as you add fixed points to get IFP, add an ability to count, and that might be enough to give you all of polynomial time. So here is his proposal. This is not exactly the way he proposed it — I'll tell you what the difference is — but a slightly different presentation with twenty years of hindsight. We call it IFP+C, fixed-point logic with counting. We take IFP, which we defined yesterday, but now we have two sorts of variables — this is a two-sorted logic. We have variables which range over the domain of the structure, and variables which range over the non-negative integers: number variables. How are they incorporated? Here are the additional rules. If you have a formula φ with a free variable x, then you can form the term #x φ. This is a term of the number sort rather than the element sort, and it denotes the number of elements that satisfy φ. On number terms — number variables and the terms built from them — we have arithmetic operations. You can throw in any you want, as long as you stay within polynomial time: the constants 0 and 1, addition, multiplication, and so on. Now obviously, we don't want unrestricted arithmetic: if you allow quantification over number variables together with arbitrary terms, you would introduce the complexity of full arithmetic. So we restrict it by requiring quantification over the number sort to be bounded: whenever you quantify a number variable, it has to be bounded by a term. Every term you can construct is bounded by a polynomial in the size of the structure.

This guarantees that the number quantifiers range over polynomially many values, and so they can be evaluated by a search in polynomial time; the evaluation problem for the logic therefore stays within polynomial time. As I said, there's a slight variant: in Immerman's original proposal, the number variables range not over all non-negative integers but over the integers up to the size of the structure, and then you don't require the boundedness condition. That's the only difference; I just think this version is a little cleaner because it removes an arbitrary upper bound on the integers. Now, Immerman proposed this and put forward the conjecture that its expressive power is exactly polynomial time. That conjecture turned out to be false: it was established in a paper by Cai, Fürer and Immerman, which appeared in the early 90s, that this logic is too weak. What I'm going to do in the first half of the talk is tell you why it's too weak. We've learned a lot more about this in recent years, so I'll tell you what the original Cai–Fürer–Immerman construction was, but I'll give you a more detailed proof for a slightly different construction. Okay, so to analyze the expressive power of fixed-point logic with counting, we do as we did with IFP itself and look at a logic with a bounded number of variables. You may be detecting a certain theme here: we take various logics and, in order to analyze their expressive power — particularly to prove inexpressibility results — we try to identify an equivalence on structures which the logic respects, and which we can analyze, typically by means of a game, like an EF game.
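As an aside, the semantics of a counting term can be made quite concrete. Here is a small sketch (in Python; the example structure and formula are hypothetical, chosen just for illustration) of evaluating #x φ(x) by direct search, in time polynomial in the size of the domain:

```python
# Illustrative sketch: evaluating the counting term #x phi(x) over a small
# finite structure by direct search, in time polynomial in the domain size.
# The structure (a graph given as a vertex set and an edge set) and the
# formula (here: "x has at least two neighbours") are hypothetical examples.

def count_satisfying(domain, phi):
    """Value of the number term #x phi(x): how many elements satisfy phi."""
    return sum(1 for x in domain if phi(x))

# A small example graph on domain {0, 1, 2, 3}.
vertices = {0, 1, 2, 3}
edges = {(0, 1), (1, 2), (2, 3), (1, 3)}

def adjacent(x, y):
    return (x, y) in edges or (y, x) in edges

def has_degree_at_least_two(x):
    # phi(x) := "#y E(x, y) >= 2", itself evaluated by counting.
    return count_satisfying(vertices, lambda y: adjacent(x, y)) >= 2

# "#x phi(x)" -- the number of vertices of degree at least two.
print(count_satisfying(vertices, has_degree_at_least_two))  # prints 3
```

Nested counting terms, as in the degree test above, stay polynomial for the same reason: each term is evaluated by a search over the domain.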
So for first-order logic we had the equivalence, written ≡_m, say: two structures are indistinguishable by any first-order sentence of quantifier rank at most m. This has appeared in several talks now, and proving that a property is not invariant under this equivalence for any m proves that it is not first-order definable. Then we considered the equivalence where two structures are indistinguishable by any first-order sentence of width at most k, and we showed that if a property is not invariant under this for any k, then it's not definable in fixed-point logic. That's how we proved that evenness is not definable in fixed-point logic. And now I'm going to introduce another one, which will allow us to prove inexpressibility in fixed-point logic with counting. For that, consider the logic C^k. This is another way of adding counting to a logic — I've given you one on the previous slide, but now consider another possibility. To first-order logic we add counting quantifiers: you can write ∃^{≥i} x φ, to be read as "there exist at least i elements x satisfying φ". We have such a quantifier for every positive integer i. But we restrict the variables to be just x1 to xk — in other words, we again consider formulas of width at most k. Now, it's clear that we can always translate a formula of this logic back into plain first-order logic, since ∃^{≥i} x φ has an obvious first-order expression using i existential quantifiers and inequalities; but doing so blows up the number of variables, that is, the width of the formula. So it's the same trade-off as before: we add expressive power but limit the number of variables.

This gives us a notion of equivalence, which I'll write A ≡^{C^k} B: A and B cannot be distinguished by any formula of first-order logic with counting of width k. And you can prove, by pretty much the same method I sketched yesterday — where we showed that for any formula of IFP there is a k such that the formula is invariant under the k-variable equivalence — that for any formula of fixed-point logic with counting there is a k such that if A and B are C^k-equivalent, then they are not distinguished by that formula. In other words, you take the formula and unfold the fixed points up to some bound given by the size of the structure; and you unfold the counting terms into counting quantifiers, where again the size of the structure determines which counting quantifiers you need. It turns out the total number of variables you need is bounded by about twice the number of variables in the original formula φ, and that proves the claim. So to prove that something is not definable in fixed-point logic with counting, we just need to show that it is not invariant under this equivalence for any value of k. Something to keep in mind for the rest of the talk — I'll revisit the question. We also always have the notion of isomorphism; everything we do is invariant under isomorphism. And all of these equivalences can be seen as approximations from below of the isomorphism relation — in particular, polynomial-time computable approximations, whereas isomorphism itself we don't know to be polynomial-time computable. We don't really think of the earlier ones in those terms, because they're clearly far from actual isomorphism, but here it starts to get a bit more interesting. Okay, so now, just as we had the Ehrenfeucht–Fraïssé game characterizing the first of these equivalences and the pebble game characterizing the second:
I'll give you two different games characterizing this new equivalence. The first was presented in a paper by Neil Immerman and Eric Lander, and is what was used in the Cai–Fürer–Immerman paper. Then I'll give you a game defined by Lauri Hella, called the bijection game, which turns out to be a bit easier to use. I present the first one mainly because it makes very clear how the game corresponds to the counting quantifiers. So what's the idea? You have your structure A, your structure B, and our old friends Spoiler and Duplicator — or whatever it was you called them. Again we have k pairs of pebbles, as in the k-pebble game. But now, at each move, think of what it means to establish a distinction with a counting quantifier. At each move, Spoiler chooses a subset of the elements rather than a single element — think of Spoiler as claiming "there exist i elements here with some property". Duplicator has to respond with a subset of the other structure of the same size; he has to match the cardinality. Then Spoiler challenges: he picks an element of Duplicator's set, as if to say "hang on, you picked an element which is different". Duplicator responds by picking an element of Spoiler's set, saying "this one looks just like the one you picked". Then we just forget about the sets; the pebbled elements remain, and the game continues. That's one round. So the Immerman–Lander game is a pebble game, but the protocol for placing a pebble pair is: Spoiler picks a set, Duplicator responds with a set of the same size, Spoiler picks an element of Duplicator's set, Duplicator picks an element of Spoiler's set, and we forget the sets and continue. And you can show, in exactly the same way as for the other games, that if Duplicator has a winning strategy for q moves, then A and B agree on all sentences of C^k of quantifier rank at most q, okay?
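The winning condition after each round — implicit here, as in yesterday's pebble games — is that the pebbled elements must induce a partial isomorphism between the two structures. A small sketch of that check, on hypothetical example graphs given as edge sets:

```python
# Sketch of the winning condition checked after each round of the pebble
# games: the map sending each pebbled element of A to the correspondingly
# pebbled element of B must be a partial isomorphism.  The graphs and
# pebble positions below are hypothetical examples.

def is_partial_isomorphism(pebbles_a, pebbles_b, edges_a, edges_b):
    """pebbles_a[i] and pebbles_b[i] are the elements under pebble pair i."""
    n = len(pebbles_a)
    for i in range(n):
        for j in range(n):
            # Equalities between pebbled elements must match ...
            if (pebbles_a[i] == pebbles_a[j]) != (pebbles_b[i] == pebbles_b[j]):
                return False
            # ... and so must edges.
            a_edge = (pebbles_a[i], pebbles_a[j]) in edges_a
            b_edge = (pebbles_b[i], pebbles_b[j]) in edges_b
            if a_edge != b_edge:
                return False
    return True

# A: a triangle; B: a path (both directions listed for an undirected graph).
edges_a = {(0, 1), (1, 2), (2, 0), (1, 0), (2, 1), (0, 2)}
edges_b = {(0, 1), (1, 2), (1, 0), (2, 1)}
print(is_partial_isomorphism([0, 1], [0, 1], edges_a, edges_b))  # True
print(is_partial_isomorphism([0, 1], [0, 2], edges_a, edges_b))  # False
```

If the check fails after some round, Spoiler has won; Duplicator's task is to keep it true forever.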
I'm not actually going to use this game; when I get around to giving you a proof sketch, I'll use Hella's bijection game. I'm just telling you the ingredients that were used in the Immerman–Lander result, and then in the Cai–Fürer–Immerman result, to show that there are polynomial-time properties of graphs that are not definable in fixed-point logic with counting. Now, the property they came up with is not at all obvious, and not an easy one to describe. In fact, it used to be said that any natural polynomial-time property you can think of is definable in fixed-point logic with counting. I'll show you that that's actually not true — I'll give you natural examples as we go on — but it took some effort originally to come up with an example of something that's in polynomial time but not in fixed-point logic with counting. What they do is construct a sequence of pairs of graphs (G_k, H_k) which cannot be distinguished in C^k, but such that there is a polynomial-time decidable class of graphs that includes all the G_k and excludes all the H_k. They don't actually give an explicit definition of what the property is; they just show that there is an algorithm which distinguishes G_k from H_k, which is all you really need. And this last comment stands: fixed-point logic with counting is still a natural level of expressiveness, and I'll give you very strong reasons in a little bit why that's the case. I've decided not to present the proof of the Cai–Fürer–Immerman result, so don't worry too much about the details of this slide; I just want to give you an idea of how their construction went. So we're after these graphs G_k and H_k — actually, I won't call them G_k and H_k here, because that would clash with the G that's up there.
You start with a graph G and define a graph X(G) by replacing every edge of G by a pair of edges and every vertex by a certain gadget. This is the gadget in the specific case of a vertex of degree three, and the dashed lines going off to the three sides represent the six edges, since each of the three original edges has been replaced by two. What's the gadget? Off in each of the three directions you have a pair of vertices, called a and b — so we have an a_i and a b_i for each edge e_i that was incident on the original vertex. And in the middle we have four vertices — four because this vertex had degree three; in general, for degree d, the number of middle vertices is 2^{d-1}, one for every even-sized subset of {1, …, d}. I've labelled the middle vertices by the even-sized subsets of {1, 2, 3}, and the vertex labelled by a set S is connected to the a_i for i in S and to the b_i for i in the complement of S. So the vertex labelled by the empty set is connected to all the b's; the one labelled {1, 2} to a_1, a_2 and b_3; then a_1, a_3 and b_2; and a_2, a_3 and b_1. That gives us the graph X(G). Then we define the graph X̃(G) to be exactly like X(G), except that at one arbitrarily chosen vertex of G, instead of using the even-sized subsets, we use the odd-sized subsets. That's X(G) and X̃(G), and what we can prove is this: provided we started with a graph G which is connected and has large enough tree width — I'll tell you what that means in a moment —

X(G) and X̃(G) are not isomorphic, but they cannot be distinguished in C^k, where k comes from that parameter over there, tree width at least k. The Cai–Fürer–Immerman construction didn't actually say anything about tree width — they talked about balanced separators — but the fact that you can do it from tree width follows from some results in this paper over here. Once you have this, these graphs become what I called G_k and H_k before: they are non-isomorphic, and in particular they show there is a polynomial-time algorithm that distinguishes between them, which gives us a polynomial-time property. And in part two they prove, using this game argument, that the two cannot be distinguished; from that it follows that the polynomial-time property separating them is not definable in fixed-point logic with counting. As I told you, it's very hard to say what the natural polynomial-time property here is. I'm not going to give you the proof of these two claims, because I'll give you a slightly different version of the construction instead. Now, the comment that fixed-point logic with counting still forms a natural level of expressiveness is justified by the following sequence of results, which show that if we restrict ourselves to certain natural graph classes, fixed-point logic with counting does express all polynomial-time properties. Already in the Immerman–Lander paper back in 1990 it was proved that any polynomial-time property of trees is definable in fixed-point logic with counting. This was extended to any class of graphs of bounded tree width (Grohe and Mariño) and to any class of planar graphs (Grohe). And in a recent result of Martin Grohe, which is a technical tour de force — a 200-page proof using enormous amounts of graph structure theory — he proves it for any proper minor-closed class of graphs:
that is, if you take any class of graphs closed under taking minors, other than the class of all graphs, then all polynomial-time properties on that class can be defined in fixed-point logic with counting. As I said, it's a quite stunning piece of work. All of these results proceed essentially by showing that canonization — canonical labelling of graphs — can be done on the class in question inside fixed-point logic with counting. In other words, there is an interpretation, definable in fixed-point logic with counting, taking G to an ordered representative of G. And since on ordered graphs we know that fixed-point logic expresses all polynomial-time properties, combining that with the interpretation gives us that fixed-point logic with counting expresses all polynomial-time properties of graphs on the class. Note that canonical labelling in fixed-point logic with counting implies in particular that on these classes canonical labelling, and hence the isomorphism problem, is in polynomial time — although that was in general already known. But it implies something more: taking the most general case, number four, on any proper minor-closed class of graphs there is a k such that the C^k equivalence actually coincides with graph isomorphism. And that's not just a consequence — it's an essential ingredient in the proof. Sorry? It doesn't follow from the statement — no, exactly; that's why I said it's in fact the key ingredient in the proof. Right, so that's why I wanted to say that we've hit something fairly natural at this point in the development. But there's still a gap, obviously, to isomorphism.
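Coming back to the gadget for a moment: its inner vertices can be enumerated mechanically. A sketch, assuming the labelling by even-sized subsets described above (the data representation is purely illustrative; using the odd-sized subsets instead gives the twisted gadget of X̃(G)):

```python
from itertools import combinations

# Sketch of the inner vertices of the Cai-Fürer-Immerman gadget for a
# vertex of degree d: one inner vertex per even-sized subset S of
# {1, ..., d}, connected to a_i for i in S and to b_i for i not in S.
# (Using odd-sized subsets instead gives the "twisted" gadget.)

def gadget_inner_vertices(d, parity=0):
    """Subsets labelling the inner vertices (parity 0 = even, 1 = odd)."""
    return [frozenset(s)
            for r in range(d + 1) if r % 2 == parity
            for s in combinations(range(1, d + 1), r)]

def gadget_edges(d, parity=0):
    """Edges from each inner vertex to the a's and b's around it."""
    edges = []
    for subset in gadget_inner_vertices(d, parity):
        for i in range(1, d + 1):
            edges.append((subset, ('a', i) if i in subset else ('b', i)))
    return edges

inner = gadget_inner_vertices(3)
print(len(inner))                     # 2^(3-1) = 4 inner vertices for degree 3
print(sorted(len(s) for s in inner))  # subset sizes: one empty, three of size 2
```

For the degree-three case this reproduces exactly the four middle vertices of the slide: the empty set connected to all the b's, and the three two-element sets.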
But there's this explicit construction, which I wanted to explore a bit more — sorry, it's not up there; I mean the Cai–Fürer–Immerman one. I gave you 1992 as the date, which is when the journal paper appeared. Following that, the construction, or similar constructions, were used to give other examples of things not definable in fixed-point logic with counting, answering questions which arose in different contexts. For instance, in a paper by Gurevich and Shelah, they construct something they called multipedes; they were answering a specific question, namely to exhibit a first-order definable class of rigid structures on which no order is definable in fixed-point logic with counting. I don't want to get into the details of what all that means, but it does give us an explicit example of a polynomial-time property which is not in fixed-point logic with counting. Again, though, it's hard to say it's a natural property — if I were to give you the definition of multipedes, it would take me quite a while. This one, on the other hand, is a natural property which I was able to show is not definable in fixed-point logic with counting — in fact, not even in infinitary logic with counting: three-colourability. But of course, that's not a polynomial-time property; it's natural, but not polynomial time, as far as we know. And both of these proofs rely on constructions very similar to the Cai–Fürer–Immerman construction. So, as I said, people often said that any natural polynomial-time property is definable in fixed-point logic with counting. Well, there is a natural one that isn't, and it's sort of implicit in all of these; to draw it out — we stated it explicitly in this paper — it is the problem of solving systems of linear equations over the two-element field. This is something that's doable in polynomial time, I think it's fairly natural, and we can prove it's not definable in fixed-point logic with counting.
I'll tell you precisely what I mean by that. It's clearly in polynomial time, by means of Gaussian elimination. Now, if you think of a system of linear equations as given in its natural form, as a matrix, it comes with a natural ordered representation, and then of course everything is fixed-point definable. So we need to see what it means to have an unordered system of linear equations. I'll come to that in a moment. Okay, so what do I mean? Here is a system of linear equations over the two-element field, presented as an unordered structure — I'll generalize this in a moment. Our universe consists of a set of variables and a set of equations: x1 to xn, e1 to em. We're talking about linear equations over the two-element field, so all coefficients are zero or one, and an equation is just a sum of some of the variables on the left — say xi + xj + xk — and a zero or one on the right. So I throw in a unary relation E0, which picks out those equations with a zero on the right-hand side; a unary relation E1, which picks out those with a one on the right-hand side; and a binary relation M, which says that variable x appears on the left-hand side of equation e. That effectively gives me the matrix, and the unary relations give me the vector on the right; together they describe the system of equations. And crucially, there's no ordering on the variables and no ordering on the equations. So now the problem is: I give you a structure like this — does it represent a solvable system of equations? And that problem is decidable in polynomial time but not expressible in fixed-point logic with counting.
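The unordered encoding just described, and the polynomial-time solvability check, can be sketched concretely as follows (the representation of the relations E1 and M as Python sets is, of course, just an assumption of the sketch):

```python
# Sketch: a system of linear equations over the two-element field, given
# "unordered" by the relations described above -- E1 picks out equations
# with right-hand side 1 (the rest have right-hand side 0), and M(x, e)
# says variable x occurs on the left-hand side of equation e.  Solvability
# is decided by Gaussian elimination over GF(2), in polynomial time.

def solvable(variables, equations, E1, M):
    """Each equation becomes a pair (set of left-hand variables, rhs bit)."""
    rows = [({x for x in variables if (x, e) in M}, 1 if e in E1 else 0)
            for e in equations]
    for x in variables:
        pivot = next((r for r in rows if x in r[0]), None)
        if pivot is None:
            continue
        rows.remove(pivot)
        # Eliminate x from every other row by adding the pivot row mod 2
        # (symmetric difference on the variable sets, XOR on the rhs bits).
        rows = [((left ^ pivot[0], rhs ^ pivot[1]) if x in left else (left, rhs))
                for left, rhs in rows]
    # Solvable iff no remaining row reads "0 = 1".
    return all(rhs == 0 for left, rhs in rows if not left)

# x1 + x2 = 1, x2 + x3 = 1, x1 + x3 = 1 is unsolvable: the left-hand
# sides sum to 0 mod 2, the right-hand sides sum to 1.
V = {'x1', 'x2', 'x3'}
E = {'e1', 'e2', 'e3'}
E1 = {'e1', 'e2', 'e3'}
M = {('x1', 'e1'), ('x2', 'e1'), ('x2', 'e2'), ('x3', 'e2'),
     ('x1', 'e3'), ('x3', 'e3')}
print(solvable(V, E, E1, M))  # False
```

Note that the algorithm never uses an order on the variables or the equations beyond an arbitrary iteration order, which doesn't affect the answer — the difficulty lies in expressing it in the logic, not in computing it.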
Okay, so I'm going to work you through a sketch of a proof of this in the next fifteen minutes or so. The proof is very much inspired by the Cai–Fürer–Immerman construction, but instead of ending up with graphs, we end up with a system of equations. Start with a 3-regular connected graph G of tree width greater than k. I haven't told you yet what tree width means; if you know what tree width greater than k means, that's all right, and if you don't, just take it as a formal requirement for the moment — I'll give you a little bit of an explanation coming up. Just think of tree width greater than k as meaning that the graph is highly connected, in a certain sense — fairly densely connected. Now, I define equations as follows. For each edge e of the graph, I have two variables, which I'll call x_{e,0} and x_{e,1}. The subscripts zero and one are suggestive, but for now they're just names. And for every vertex of the graph, I put in eight equations. What are my eight equations? The graph is 3-regular, so each vertex has three incident edges, and for each of those edges I have two variables; there are eight ways of picking one variable from each pair, and those eight combinations give the left-hand sides of my eight equations. What do I put on the right? The sum of the subscripts mod 2.

There are only eight, so I may as well write them down: if I have a vertex v with incident edges e1, e2, e3, my eight equations are

    x_{e1,0} + x_{e2,0} + x_{e3,0} = 0
    x_{e1,0} + x_{e2,0} + x_{e3,1} = 1
    x_{e1,0} + x_{e2,1} + x_{e3,0} = 1
    x_{e1,0} + x_{e2,1} + x_{e3,1} = 0

and similarly the other four. That's my system of equations, and I call it E_G. And now — I hope you're seeing the similarity with the graph construction I gave you earlier — I get my system Ẽ_G by starting with E_G and, for exactly one vertex, flipping the right-hand sides: zero becomes one, one becomes zero, in all eight equations at that one particular vertex. And what we're going to show is that E_G is solvable, Ẽ_G is unsolvable, but they cannot be distinguished in the counting logic with k variables — where again k comes from the connectivity of the graph we started with. So there are three things to show: solvability of E_G, unsolvability of Ẽ_G, and the equivalence. The first two are relatively easy. That E_G is solvable is completely trivial: give every variable the value of its subscript, and every equation is satisfied. No problem there at all. Why is Ẽ_G unsolvable? This is a neat little argument, originally adapted from an argument of Tseitin. Consider just the subsystem of equations in which all subscripts are zero — in other words, I pick one equation from each vertex. I'm going to show you that this subsystem is already unsolvable, which of course implies that the whole system is unsolvable. Why? Well, each variable now occurs in exactly two places.

A variable x_{e,0} corresponds to an edge e, so it occurs in the equations at the two endpoints of that edge — exactly two occurrences. Which means that if I add up all the left-hand sides of all the equations, I count every variable exactly twice, and since we're doing arithmetic mod 2, the left-hand sides sum to zero. What do I have on the right-hand side? Zeros everywhere except at the one vertex where the zero was flipped to a one, so the right-hand sides sum to one. Zero on the left, one on the right — that's it, the system is unsolvable. So E_G is solvable and Ẽ_G is unsolvable; what remains to show is that they can't be distinguished in the counting logic with k variables. Which vertex do we flip? Any vertex — when I defined Ẽ_G on the previous slide, I said "for exactly one vertex v", so strictly speaking the system should be parameterized by v. But, funnily enough, you can prove — I don't want to get into that — that because the graph is connected, it doesn't matter which vertex you pick: you get isomorphic relational structures for every choice. But that would be a digression. So: E_G is solvable, Ẽ_G is unsolvable, and now I want to prove that they're equivalent. Rather than using the Immerman–Lander game, which is rather unwieldy, I'll use a different game, introduced by Hella, which gives another characterization of exactly this equivalence relation. The game is played on two structures A and B.
Again we have k pairs of pebbles: a1 to ak on A and b1 to bk on B. At each move of the game, Spoiler chooses a pair of pebbles ai and bi and announces that he is going to move those two. Duplicator responds by giving a bijection between the structures A and B, and that bijection has to respect the pebbles already in place: the pebbled elements must map to the correspondingly pebbled elements, extended to a bijection of the whole structure. Then Spoiler places the two pebbles he picked up on an element of A and on its image under the bijection. So think of it as the pebble game, except that Duplicator has to give in advance his response to every possible move Spoiler could make: rather than Spoiler picking a pebble position and Duplicator picking one in response, Duplicator gives the bijection, which specifies his response to every possible move. Now, this seems to make things harder for Duplicator than in the Immerman–Lander game, where you picked a set: a bijection requires that for every set there is an image of exactly the same size already specified. But actually Hella proves that the two games are equivalent — Duplicator has a winning strategy in one if and only if he has one in the other. For us, that needn't bother us: if we prove that Duplicator has a winning strategy in this harder game, that already proves the equivalence result, which is what we want. Okay, so we're going to prove that Duplicator has a winning strategy in the bijection game played on these systems of equations, and to describe it I have to tell you how I'm going to use the tree width. So let me now tell you a little bit about what tree width is. As I said, tree width is a measure of the connectivity of a graph.
The higher the tree width, the more strongly connected the graph is. It's also sometimes said to be a measure of how tree-like the graph is: the smaller the tree width, the more it is like a tree. In particular, graphs of tree width one are exactly the trees — or forests, to be specific. This is all for undirected graphs, remember. Now, roughly speaking, the intuition is that a graph has tree width k if you can cover it, in a tree-like fashion, by patches of at most k + 1 vertices each, where "cover" means that every edge of the graph must appear inside some patch. That's the intuition I want you to have before you look at the formal definition, because if all you get is the formal definition and you've not seen this before, it's not going to make a lot of sense. Formally, a tree decomposition of a graph is a tree T together with a relation D which associates with every node of the tree a set of vertices of the graph, with two properties. First, for each vertex of the graph, the set of tree nodes where it appears is connected: you don't have a vertex appearing in one node of the tree and in another node, but not in between — if it appears in two different nodes, it appears everywhere on the path between them. Second, every edge is covered: for every edge of the original graph there is a node of the tree such that both endpoints are related to that node. That's what a tree decomposition is. The width of the decomposition is the largest number of graph vertices associated with any node, minus one. The minus one is there simply to make sure that trees have tree width one.
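The two conditions of the formal definition can be checked mechanically; here is a sketch of such a checker (graphs and trees given as edge sets, with a hypothetical example — a path of four vertices, which indeed has tree width one):

```python
# Sketch of a checker for the definition just given: a tree decomposition
# assigns to every tree node a "bag" of graph vertices such that
# (1) every edge of the graph lies inside some bag, and (2) the tree nodes
# whose bags contain a given vertex form a connected subtree.

def is_tree_decomposition(graph_edges, tree_edges, bags):
    # (1) every edge of the graph is covered by some bag
    for u, v in graph_edges:
        if not any(u in bag and v in bag for bag in bags.values()):
            return False
    # (2) connectedness: the tree nodes containing each vertex form a subtree
    vertices = {v for bag in bags.values() for v in bag}
    for v in vertices:
        nodes = {t for t, bag in bags.items() if v in bag}
        # grow a connected component from one such node, along tree edges
        seen, frontier = set(), [next(iter(nodes))]
        while frontier:
            t = frontier.pop()
            if t in seen:
                continue
            seen.add(t)
            frontier += [s for a, b in tree_edges if t in (a, b)
                         for s in (a, b) if s in nodes and s not in seen]
        if seen != nodes:
            return False
    return True

def width(bags):
    return max(len(bag) for bag in bags.values()) - 1

# The path a-b-c-d has a decomposition of width 1: bags {a,b}, {b,c}, {c,d}
# along a path of tree nodes 0-1-2.
bags = {0: {'a', 'b'}, 1: {'b', 'c'}, 2: {'c', 'd'}}
graph_edges = {('a', 'b'), ('b', 'c'), ('c', 'd')}
tree_edges = {(0, 1), (1, 2)}
print(is_tree_decomposition(graph_edges, tree_edges, bags))  # True
print(width(bags))  # 1
```

The tree width of the graph is then the minimum of `width(bags)` over all valid decompositions — which this checker recognizes but, of course, does not search for.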
Okay — otherwise we'd have trees having tree width two, because you need a set of two vertices to cover both endpoints of an edge. Okay, that's the definition of a tree decomposition. And then the tree width of a graph is just the smallest k such that there is a tree decomposition of the graph of width k. Okay, that's the formal definition. As I said, if you haven't seen this before, it's not going to make a lot of sense, so keep the informal picture in mind. But what we're actually going to use is a lovely characterization of tree width in terms of games — games are a recurrent theme of this presentation, as you may have gathered. And that's the cops and robbers game. The cops and robbers game is played on an undirected graph between one player who has a team of k cops and another player who controls a robber. The idea is that the cops want to try and catch the robber. The robber can move along paths in the graph — anywhere, okay? The robber is allowed to follow any path in the graph at any speed; it's not one edge per move, the robber can go along any path. The cops are better equipped: they have helicopters. They can move from any node in the graph to any node, whether there's a path or not — they can just fly there in one step. Okay, so at any point in the game, there's a set of at most k nodes on which there are cops, and there is a node on which there's the robber. And what does one move consist of? A move consists in the cop player removing some cops and saying where they're going, okay? These helicopters are noisy; everyone knows where they're going to land, okay? So some subset of the cops moves to a new position. Now, while they're moving, the robber can move, as long as the robber doesn't go through one of the stationary cops, right? So the robber can move along a path from his node r to another node s, as long as the path doesn't go through one of the stationary cop positions, okay? And then the new position of the game is the new position of the robber
and the new position of the cops. That's the next move, okay? If a cop and the robber are on the same position, the game ends — the cops have caught the robber; otherwise the game continues. And now the theorem, which you're going to see more of in Thomas's lectures: there's a winning strategy for the cop player with k cops on a graph if and only if the tree width of the graph is at most k minus 1, okay? I'm not going to give you a proof of this, but just think of this picture. If the tree width is small, you can see how the cops will win. What does a tree decomposition give us? It gives us a tree where, associated with every node of the tree, there is a set of at most k plus 1 vertices of the graph — we're in the tree width k case, so there are k plus 1 cops. Now the cop player places the k plus 1 cops on the vertices at some node. That disconnects the graph, because any vertex which appears in the subtree on one side and is not covered by that node cannot appear on the other side, by the connectedness condition: if it appeared on both sides, it would have to appear on the path in between as well, okay? So this separates the graph into the vertices that appear on one side and the vertices that appear on the other. So the robber, if he's not already caught, must be on one side or the other. And now what the cop player does is move as many cops as are required so that in the next step they cover the root of the subtree where the robber is, okay? And you can do that without the robber being able to escape that subtree. And then, of course, you just proceed recursively until, at a leaf, you will have caught the robber, okay? But the Seymour and Thomas theorem actually says that this is an exact characterization of tree width. So there's a converse: from a winning strategy for the cops, you can actually derive a tree decomposition, okay? So we're going to use this game characterization, and the idea is this.
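As an aside, the move mechanics of the cops and robbers game are easy to make precise. This is a minimal Python sketch, assuming the graph is given as an adjacency dictionary; the function names are my own.

```python
from collections import deque

def robber_moves(adj, stationary, robber):
    """All vertices the robber can reach from his current node along
    paths that avoid the cops standing still during a helicopter move."""
    if robber in stationary:
        return set()
    seen, queue = {robber}, deque([robber])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen and w not in stationary:
                seen.add(w)
                queue.append(w)
    return seen

def robber_survives(adj, old_cops, new_cops, robber):
    """The cop player announces new_cops; the cops occupying a node in
    both old and new positions stay put while the others fly.  The
    robber survives the move if he can reach some vertex that is not
    occupied once the cops land."""
    stationary = set(old_cops) & set(new_cops)
    return any(v not in new_cops for v in robber_moves(adj, stationary, robber))
```

On a 4-cycle (tree width two), two cops at opposite corners leave the robber free to slip around while they fly, but three cops can keep two neighbours of the robber pinned and land the third on him.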
So remember, we started with a graph G. From this, we constructed two systems of equations, EG and E tilde G, right? And we assume that this graph G has tree width greater than k and is connected. From that, I want to derive the fact that there is a winning strategy for duplicator in the bijection game. This winning strategy is basically going to use the winning strategy for the robber in the k plus 1 cops and robbers game on G, okay? The fact that G has tree width greater than k means the robber has a winning strategy against k plus 1 cops, and we're going to use that to construct the winning strategy for duplicator in the bijection game. And the idea is simply this. In E tilde G, we picked an arbitrary vertex and basically created an inconsistency in the equations at that vertex. Duplicator's strategy in this game is to hide that inconsistency. And what you can do is this: if you have a path in the graph G from a vertex u to a vertex v, and the inconsistency is at u, you can just flip the variables along the path — along each edge of the path, pretend the variables marked zero are actually marked one, and the ones marked one are actually marked zero. Make that flip along every edge of the path, and the inconsistency moves to v: you can satisfy all the equations everywhere else, except at that one vertex, okay? So effectively there is a bijection between the variables: if you take the natural bijection between the variables of EG and E tilde G, that's not an isomorphism, but it is an isomorphism except at the equations at one vertex. And you can move that vertex to an arbitrary point — make it an isomorphism there and put the inconsistency somewhere else — just by flipping these zeros and ones along the path.
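The systems EG and E tilde G aren't restated here, but if, as in the usual construction of this kind, there is one 0/1 variable per edge and one equation per vertex summing its incident edge variables mod 2 (with the right-hand side flipped at one distinguished vertex in E tilde G), then the flipping trick is easy to demonstrate. A sketch under that assumption, with names of my own:

```python
def vertex_charges(edges, assignment):
    """For every vertex, the mod-2 sum of its incident edge variables.
    edges: list of frozenset({u, v}); assignment: edge -> 0 or 1."""
    charge = {}
    for e in edges:
        for v in e:
            charge[v] = charge.get(v, 0) ^ assignment[e]
    return charge

def flip_along_path(assignment, path_edges):
    """Flip the 0/1 value of every edge variable on a path.  Each
    interior vertex of the path is incident to two flipped edges,
    which cancel mod 2, so only the parities at the two endpoints
    of the path change."""
    new = dict(assignment)
    for e in path_edges:
        new[e] ^= 1
    return new
```

On a triangle with all variables zero, every vertex has even parity; flipping along the path 0-1-2 changes the parity at vertices 0 and 2 only, which is exactly how an inconsistency planted at one vertex can be carried along a path to any other vertex.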
And those are the bijections duplicator is going to play in this game: bijections which are consistent everywhere except at one vertex. Which vertex? The current robber position in the cops and robbers game being played on G at the same time. So whenever spoiler moves pebbles in the bijection game, we read the current pebble positions as a new cop position, find a place for the robber to escape to in the cops and robbers game, and give a bijection which hides the inconsistency at that new robber position, okay? This is somewhat hand-wavy, and there are a lot of details to be worked out, but that's the essential idea of the proof.