Thank you very much, Eric. Of course it's a great pleasure to speak at this conference. I'm very sad that we couldn't meet in person. It's been a while since I've seen Dirk, and I think it would have been tremendous fun, because we could have done some of this. I wonder if you can see my screen? No, you can't, that's very strange. Okay, let me find a different way. So, at the risk of embarrassing Dirk, this is a photo he sent me as a New Year's greeting in 2011, of a typical Kreimer family gathering. There's quite an extraordinary collection of bottles there; it's well worth close inspection. Okay, so now back to mathematics. I should say that I was in fact slightly relieved when the conference was delayed, I'm sorry to say, because this project wasn't finished, and I thought that when the time came I'd have made some progress. Of course nearly a year has passed and I haven't worked on this for a single minute. This talk touches on pretty much every single aspect of the work Dirk has done over the previous few decades. So with that said, let's begin. I'm going to talk about graph homology. We take G a connected graph; I denote the loop number, also known as the genus of the graph, by h_G; the number of edges is e_G; and the degree here is something slightly funny: it's the number of edges minus twice the number of loops, deg(G) = e_G − 2 h_G. This will be familiar to you as minus the superficial degree of divergence, so it's like a superficial degree of convergence. Next, an orientation on a graph will be, essentially, plus or minus a wedge product of symbols corresponding to the edges of the graph. So it's just an ordering of the edges modulo the action of even permutations: if you flip two edges, you get the opposite orientation. So that's an oriented graph.
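As a concrete sanity check, the two invariants just defined are one-liners. This is my own sketch, not from the talk: a connected graph is encoded as a vertex list and an edge list, and the loop number is the first Betti number e_G − v_G + 1.

```python
# Sketch (my own encoding, not from the talk): loop number and degree of a
# connected graph given as a vertex list and an edge list of vertex pairs.

def loop_number(vertices, edges):
    """h_G = e_G - v_G + 1, the first Betti number of a connected graph."""
    return len(edges) - len(vertices) + 1

def degree(vertices, edges):
    """deg(G) = e_G - 2*h_G, minus the superficial degree of divergence."""
    return len(edges) - 2 * loop_number(vertices, edges)

# Wheel with three spokes: hub 0, rim vertices 1, 2, 3.
w3_vertices = [0, 1, 2, 3]
w3_edges = [(1, 2), (2, 3), (3, 1), (0, 1), (0, 2), (0, 3)]

print(loop_number(w3_vertices, w3_edges))  # 3
print(degree(w3_vertices, w3_edges))       # 6 - 2*3 = 0
```

For the sunrise diagram (two vertices, three parallel edges) the same functions give h_G = 2 and degree −1.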
It's not oriented in the usual sense where you put arrows on the edges; it's an ordering of the edges. Okay, so the graph complex is defined by taking the Q-vector space spanned by oriented graphs (G, η), where we assume that G has no tadpoles, no self-edges like this, and no vertices of degree less than or equal to two. Then you impose some very simple relations. First, a graph with the negated orientation is the negative of the graph with that orientation, which is fairly natural. You also impose the relation that (G, η) is equivalent to the same graph with the edges permuted according to an automorphism of the graph: if a graph has an automorphism, it's equivalent to the same graph with the new orientation. In particular, if a graph has an automorphism which induces an odd permutation of its edges, then its class is zero. Equivalence classes will be denoted by square brackets. Okay, so on this vector space of oriented graphs you can define the following differential; this was done by Maxim Kontsevich many years ago. The differential of an oriented graph is a sum over all the edges of the graph: you contract each edge in turn, with the induced orientation and the appropriate sign. For this contraction I use a double-slash notation, and I always use it to mean the contraction in which contracting a tadpole, contracting a loop, gives zero, the empty graph. So you're not allowed to contract tadpoles. This differential is well defined and squares to zero, so it defines a complex, and furthermore you can check that it has degree minus one with respect to the degree, which, I remind you, is edges minus twice the number of loops. Great. Okay, so here are some examples which I'm sure this audience will find extremely straightforward.
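The contraction operation underlying this differential can be sketched in a few lines. This is my own toy encoding, not from the talk, and the orientation signs and induced orientation are deliberately omitted; the only convention kept is that contracting a tadpole gives zero (here `None`).

```python
# Sketch (orientation signs omitted): edge contraction in a multigraph,
# with the convention that contracting a tadpole gives zero (None here).

def contract(edges, i):
    """Contract the i-th edge; return None if it is a tadpole (self-edge)."""
    u, v = edges[i]
    if u == v:
        return None                      # contracting a loop is forbidden
    rest = edges[:i] + edges[i + 1:]
    relabel = lambda w: u if w == v else w   # merge vertex v into vertex u
    return [(relabel(a), relabel(b)) for (a, b) in rest]

# Sunrise graph: two vertices joined by three parallel edges.
sunrise = [(0, 1), (0, 1), (0, 1)]
print(contract(sunrise, 1))   # [(0, 0), (0, 0)]: a figure-eight of two tadpoles

# Contracting one edge of a triangle produces a double edge.
triangle = [(0, 1), (1, 2), (2, 0)]
print(contract(triangle, 0))  # [(0, 2), (2, 0)]
```

The second example is exactly the mechanism used below: contracting any edge of a triangle yields a double edge.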
So here's the differential at the top. The first remark is that any graph with a double edge is zero in this graph complex GC_2. That's because if you take a double edge with edges one and two, with the orientation e_1 ∧ e_2 ∧ (all the other edges), then you can flip edges one and two. That gives an isomorphic graph, but it reverses the orientation, so the graph is minus itself, and therefore it must vanish. So any graph with a double edge vanishes. And by the way, from now on I'm going to drop the orientations; it's boring to keep writing them, so they'll be implicit from now on. From the previous remark we get that any graph with the property that every edge lies in a triangle automatically has vanishing differential. That's clear because if you contract an edge of a triangle, you get a double edge, and as we've just seen, a double edge is zero. So graphs built out of triangles are going to have zero differential in the graph complex. Our favorite examples are therefore the wheels, the wheels with n spokes, and they satisfy, of course, that d of the class of the wheel, with any orientation, is zero. Here's another example of a differential in the graph complex. We're going to contract all the edges in this graph here on the left. There are two triangles, so contracting their edges gives zero, as we've already seen. The only edges that do something when you contract them are the red ones. When you contract the top one, you get this graph here. When you contract the middle one, you get the wheel with four spokes. And when you contract the bottom one, you get this graph here. The first and third graphs cancel out, and you're left with the wheel with four spokes. So the wheel with four spokes is exact. But in fact, more is true.
The wheels with an even number of spokes are always zero, because they have a symmetry which is odd, and that forces them to vanish in the graph complex. So only the odd wheels survive. Okay, so graph homology H_n(GC_2) is the kernel of this differential modulo its image, and the homological degree is the degree from before, edges minus twice the number of loops. It's also graded by the genus, the loop number. So we get a bunch of homology groups, and they are in turn graded. So what do we know? It's known that the homology vanishes in negative degrees for positive genus. And it has a lot of extra structure. The first bit of extra structure, and here we see the first interaction with the work of Dirk and Alain, is that graph homology has a Lie coalgebra structure, induced by antisymmetrizing the Connes–Kreimer coproduct. I'm sure everybody here has seen this before: you take a sum over a certain class of subgraphs, typically one-particle irreducible; on the left of the tensor you have the subgraph, and on the right the quotient graph. And I apologize, there's a typo here which I can't change: this should be a single slash and not a double slash. I'm sorry, that formula is wrong. The single slash means that you just contract the subgraph γ, and even if it contains a loop, it's not zero. So I'm sorry, I've used the wrong notation; that should be a single slash. Okay. So then Willwacher showed in 2014, a fantastic result, that the degree-zero homology of the graph complex is dual to the Grothendieck–Teichmüller Lie algebra, which is something explicitly defined but quite tricky to understand. We don't know much about the Grothendieck–Teichmüller Lie algebra at all. What we do know is something predicted by Deligne. This used to be called the Deligne–Ihara conjecture, but I was recently told by Ihara that it should be called the Deligne conjecture. So fine.
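The parity argument that kills the even wheels can be checked by brute force. This is a toy verification of my own, not from the talk: `wheel`, `parity`, and `has_odd_automorphism` are my names, and automorphisms are found by exhaustive search over vertex permutations.

```python
# Sketch (my own helper names): brute-force search for a graph automorphism
# inducing an odd permutation of the edges, which kills the graph's class.
from itertools import permutations

def wheel(n):
    """Wheel with n spokes: hub 0, rim vertices 1..n. Returns (#vertices, edges)."""
    rim = [(i, i % n + 1) for i in range(1, n + 1)]
    spokes = [(0, i) for i in range(1, n + 1)]
    return n + 1, rim + spokes

def parity(perm):
    """Parity (+1/-1) of a permutation given as a list of images."""
    sign, seen = 1, set()
    for i in range(len(perm)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j = perm[j]
            length += 1
        sign *= (-1) ** (length - 1)
    return sign

def has_odd_automorphism(n_vertices, edges):
    keys = [frozenset(e) for e in edges]
    adjacency = set(keys)
    for sigma in permutations(range(n_vertices)):
        images = [frozenset({sigma[u], sigma[v]}) for (u, v) in edges]
        if set(images) != adjacency:
            continue  # sigma is not an automorphism
        if parity([keys.index(img) for img in images]) == -1:
            return True
    return False

# Even wheels carry an odd symmetry (so their class vanishes), odd wheels don't:
print([has_odd_automorphism(*wheel(n)) for n in (3, 4, 5, 6)])
# [False, True, False, True]
```

For W_4, for instance, the reflection exchanging two opposite rim vertices induces three transpositions of edges, an odd permutation.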
And it states that there's a free Lie algebra with infinitely many generators σ_3, σ_5, σ_7, one in every odd degree 3, 5, 7, ..., which injects into grt. So grt has got this huge free Lie algebra inside it, and you can show, I think Willwacher showed, that these σ's end up being dual to the odd wheels. So σ_3 corresponds to the wheel with three spokes, σ_5 to the wheel with five spokes, et cetera. Now the first puzzle here is that this group has to do with motivic Galois groups: this free Lie algebra is the Lie algebra of a motivic Galois group. So we can rightfully ask: what on earth have motivic Galois groups got to do with graph homology? That's the first mystery I'd like to come back to. Okay, so here's a picture of graph homology in low degrees, and I'd like to spend a little bit of time discussing it. Up the left axis is the homological degree: H_0, H_1, H_2. Along the horizontal axis is the number of loops: one loop, two loops, three loops, four loops, and so on. You can draw this picture different ways; you can grade by edges, and that's actually probably a better thing to do, as we'll see. So the first remark is that there's this red diagonal line, and just below this line are the trivalent graphs, where every vertex has degree three. On the red line or above it, all the graphs have a two-valent vertex, so they're zero in the graph complex. So your intuition is: as you go up in this diagram, you have lots and lots of edges and very few loops; in quantum field theory that would be a very convergent graph. As you go vertically down in this picture, you get infrared divergences. Okay, so very convergent going up, and everything above this red line is zero.
Now, going across, H_0 is the only degree in which things are really well understood: Willwacher showed that this is the Grothendieck–Teichmüller Lie algebra. Here in degree three we have the wheel with three spokes, and you can show that it gives a non-trivial class. Then we have the wheels with five spokes, seven spokes, nine spokes, eleven spokes. Okay, so all these yellow classes are the ones that are understood, of sorts. The first really interesting class, here at eight loops, is G_{3,5}: it's something whose coproduct gives a wheel with three spokes tensor a wheel with five spokes. It's a complicated linear combination of graphs; you can compute it on a computer. But the only reason we know it exists is, unfortunately, because of my theorem. Likewise with these classes at 10 loops and 11 loops, such as G_{3,3,5}: unfortunately, there isn't a purely graph-theoretic way to get at these elements. Then it's conjectured that H_1 is always zero, so this line should always be zero. The next interesting stuff happens in H_3: there are some green classes here, and a class in H_7. They don't always occur in odd homological degree; there's a class in H_6 up here, which we'll see later on. And these green classes are kind of mysterious. The dual of the Grothendieck–Teichmüller Lie algebra, as many of you will know, can be thought of as formal multiple zeta values modulo products: symbols satisfying the associator relations, modulo products. Though, of course, we're just doing combinatorics here; there are no numbers anywhere in this picture just yet. The green classes, on the other hand, have no such interpretation. Okay. So I want you to hold this picture in your mind if you can. Good. Okay. So another reason why this is very interesting is a recent theorem of Chan, Galatius and Payne in 2018: they showed how to relate the homology of the graph complex to the cohomology of the moduli space of curves M_g in genus g.
In fact, the graph complex computes the top weight-graded piece of its cohomology. By Deligne, the cohomology has a mixed Hodge structure, and the graph complex sees the very top piece, which is Poincaré dual to a trivial motive, in fact. So the key thing in this equality, from a sort of algebraic-geometric point of view, is that the motives on the left are trivial: they're just vector spaces together with the data of a weight. So there's no grt or anything like that that comes out of this picture, and that's a puzzle. And the wheel with three spokes corresponds to an interesting class in M_3. Okay. So there are two questions that jump out. One is: how do we think of these higher-degree classes in graph homology, the green ones? And the other: how do we relate the graph complex to mixed motives? If we could relate it to mixed motives, then we would expect to see motivic Galois groups acting. We might be lucky and find this motivic Galois group acting, which would explain the appearance of the Grothendieck–Teichmüller group. This theorem here unfortunately doesn't do the job, because this weight-graded piece of M_g is a pure motive. So it doesn't do that. So today, what I'm going to do is define a notion of differential forms on a moduli space of metric graphs. Okay, so that's some variant of the outer space that Karen talked about this morning. So we're going to define it: I want to do de Rham theory on outer space. And using that, we'll be able to assign numbers, or motives, to classes in the graph complex. After explaining that, I will explain some conjectures about the meaning of these higher-degree homology classes. Okay. So here's a battle plan for how the talk will proceed. We're going to take this graph complex and promote it to a moduli space of metric graphs. A metric graph is a graph where every edge has a length, so the contraction of edges has a real, tangible meaning.
You're letting the length go to zero. So there's a space of all possible lengths on a graph, and that gives a moduli space of metric graphs. Now, you can embed that as the real points of an algebraic variety, just a projective space, in fact. And doing some shenanigans, some compactifications, which we first learned in work of Dirk, joint with Spencer and Hélène some years ago — this is a variant of that — you can glue all these algebraic varieties together to form a huge infinite-dimensional cosimplicial scheme. And on that you can take the de Rham complex. So what does that mean? It means that a differential form will be a collection of infinitely many differential forms, one for every graph. So for every graph you're going to get a differential form, and they have to fit together in a nice way that reflects the way you glue these metric graphs together. So a differential form is an infinite collection of forms satisfying some compatibilities. And the problem is: how do we construct such a differential form? It's not obvious. So the idea here is to imitate the Torelli map in algebraic geometry. There's a map from the space of metric graphs, called the tropical Torelli map, to a space of symmetric matrices; this came up this morning in Karen's talk. On the space of symmetric matrices, we know how to construct invariant differential forms. This goes back to work in the 1930s, I think: in a book of Hermann Weyl there are references to papers of Brauer from 1935, and it may go back before that. So we write down invariant differential forms on the space of symmetric matrices, then pull them back to the space of metric graphs, and then check that they satisfy all the properties we need. Okay, so that's the battle plan. So first, metric graphs. A metric graph is a graph — it'll be a connected graph — plus the data of a positive length for every edge; that length is L_e.
And you normalize it so that the total length, the sum over all the edges, is one. That means if you fix your graph and allow all the edge lengths to vary, the space of possible metrics is just a simplex: the set of positive numbers L_1, ..., L_n whose sum is equal to one. It's just a piece of a hyperplane. And the idea is that sending a length to zero should topologically correspond to contracting that edge, which gives a different graph, of course, with a different simplex associated to it. So here's a picture. Let's take this graph here, the sunrise diagram, with two vertices and three edges. It has two loops, genus two. The edges have lengths L_1, L_2, L_3, subject to the condition that their sum equals one. So that defines a Euclidean triangle, a 2-simplex in Euclidean 3-space, and it's the open triangle, the interior of this triangle. Now, as one of the lengths goes to zero — I don't know if you can see my mouse, but let's contract this length L_2, let that go to zero — we travel down the simplex and end up on the boundary L_2 = 0. The boundary is not in the simplex; it was an open simplex, but this is the boundary in its compactification. So now we get a new simplex in Euclidean 2-space, where L_1 + L_3 = 1, and we identify that with the simplex, the space of metric graphs of this type, still with two loops, in which the middle edge has been contracted. And now it's the space of all possible values of L_1 and L_3. We think of that as a degenerate version of this graph in which the second edge has zero length. So you take all these open simplices and glue some of them into the boundaries of others. What you get then is this open simplex and the three faces. But you don't get the three corners, okay?
The three corners would correspond to contracting a tadpole, contracting a loop, and we're not allowed to do that. So, as in outer space, all these faces fit together in a stratified way, but you don't have the corners, and that's very important. So I've written here that you can assemble all these simplices to form what I think is called reduced outer space; I'm not 100% sure of the terminology, there are lots of different variants. A caveat here is, of course, that in outer space you need marked graphs. The markings actually play very little role here, so I'm going to just ignore that. And the next stage, then — so that was the space of metric graphs — is to make it algebraic, which is very easy. We can just identify this simplex with a subset of the real points of a projective space: the set of positive reals L_i that add up to one can be viewed as the positive real coordinate simplex inside projective space. Okay, so here comes an approximate definition. It's actually an okay definition, but it's not going to be quite what we need. An algebraic differential form on the space of metric graphs, of degree k, will be an infinite collection of algebraic differential forms, ω_G for every graph G, where ω_G is a form on this open simplex. But when I say it's algebraic, I actually mean it's a form on the bigger projective space that the simplex sits in, and I'm going to allow it to have poles, because we're going to need that. What we want is for it to be smooth on the space of metric graphs, so smooth on this real Euclidean simplex. Then we want it to extend to the boundary, in such a way that on the boundary it agrees with the differential forms corresponding to the contracted graphs that sit on the boundary. And finally, we want this to be functorial with respect to isomorphisms of graphs.
So if you have isomorphic graphs, the corresponding differential forms should match. Here's a picture, the same simplex as before. What will a differential form look like? Well, it'll be a differential k-form here on the interior of the simplex. That form is indexed by the sunrise graph, and it's a form in the three variables L_1, L_2, L_3 corresponding to the three edge lengths. Then on each face we have differential forms for each graph, and the property should be that the form in the interior, when restricted to the boundary, should line up, should match, with the forms corresponding to these quotient graphs. Now, if you look at these three faces, in fact they all correspond to the same graph; it's just that the edge labelings are different. Therefore these three forms should in fact all be the same; they're just obtained from each other by changes of variables. And that's the third property. Now, the key point here is that it's not obvious how to construct such a thing, because if you try to do it inductively, you start with the boundary of your simplex, where you might have defined something, and then you need to extend it into the middle of this simplex. You might be able to do that. But then you need to extend it into the larger cells, and so on and so forth, infinitely many times. And you've got to do that in a functorial way. So it's not obvious. So the way we'll do this is using the Torelli map. So let me skip that. Basically, we'll replace the graph with its Laplacian matrix and define an invariant form on matrices. So let me quickly remind you of the graph Laplacian. I take a connected metric graph, and we have the usual complex that calculates the simplicial homology of the graph: Z to the edges mapping to Z to the vertices, with the boundary map sending an edge to its endpoint minus its source.
And the kernel of this map is the homology of the graph. Because the graph is connected, the cokernel is H_0, which is Z. Now what we do is define an inner product on the span of the edges. This will be extremely familiar to everybody here, I'm sure: the inner product of two distinct edges is zero, and the inner product of an edge with itself is its length. So the norms of the edges are the lengths, and that's an inner product on this space. It restricts to an inner product on the homology, and that restriction is the graph Laplacian. Another way to say it is that you can just write it in terms of matrices in a very straightforward way; it's probably easiest if I just show an example. Here's the wheel with three spokes. It's got three loops, and here's a basis for the loops: this loop (1,5,6), this loop (2,4,6), and this one (3,4,5). You write down this incidence matrix ε whose rows are indexed by the edges: the loop (1,5,6) involves edge 1, edge 5, and edge 6, with signs corresponding to a choice of orientation. The inner product I mentioned a minute ago, which gives length l_i to edge i, is just a diagonal matrix D in this basis. And for the graph Laplacian you just take the incidence matrix transpose times the diagonal matrix times the incidence matrix, Λ_G = ε^T D ε. When you do that, you get this matrix whose entries are sums of the lengths of the edges. If you think of the edge lengths as variables, this is a matrix whose entries are just polynomials, just linear forms in the variables. So this is called the graph Laplacian matrix; it's very, very classical. It depends on a choice of basis, so what we want to do is construct invariants of it that don't depend on the choice of basis. As many of you know, you could take the determinant, and that would give you the graph polynomial, which comes up all over the place in quantum field theory. But that's not good enough: we want forms. So now we turn to this classical theory of invariant forms.
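The Laplacian of the wheel with three spokes can be computed exactly as described. The following is my own sketch, assuming sympy: the cycle basis ε is read off as the nullspace of the incidence matrix (which for a graph is an integral, unimodular basis), and det Λ_G is checked against the Kirchhoff polynomial computed independently as a sum over spanning trees.

```python
# Sketch, assuming sympy: build the graph Laplacian Λ_G = ε^T D ε for the
# wheel with three spokes, and check det Λ_G against the Kirchhoff
# (spanning-tree) polynomial computed independently.
from itertools import combinations
import sympy as sp

# Wheel with 3 spokes: hub 0, rim 1, 2, 3; edges oriented (source, target).
edges = [(1, 2), (2, 3), (3, 1), (0, 1), (0, 2), (0, 3)]
n_vertices = 4
lengths = sp.symbols('l1:7', positive=True)

# Boundary map: Z^edges -> Z^vertices; its nullspace is the cycle space.
boundary = sp.zeros(n_vertices, len(edges))
for j, (u, v) in enumerate(edges):
    boundary[v, j] += 1   # target
    boundary[u, j] -= 1   # source
cycles = sp.Matrix.hstack(*boundary.nullspace())  # columns: cycle basis ε

D = sp.diag(*lengths)
laplacian = cycles.T * D * cycles   # 3x3, entries linear in the lengths

# Kirchhoff polynomial: sum over spanning trees T of the product of the
# lengths of the edges NOT in T.
def is_spanning_tree(tree):
    parent = list(range(n_vertices))
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for (u, v) in tree:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False           # the chosen edges contain a cycle
        parent[ru] = rv
    return True                    # 3 acyclic edges on 4 vertices: a tree

psi = sum(sp.prod(lengths[i] for i in range(6) if i not in rest)
          for rest in combinations(range(6), 3)
          if is_spanning_tree([edges[i] for i in rest]))

assert sp.expand(laplacian.det() - psi) == 0
print(sp.expand(psi))   # the 16 cubic monomials of the W3 graph polynomial
```

The determinant identity det(ε^T D ε) = ψ_G holds exactly here because the nullspace basis of a graph incidence matrix has entries in {0, ±1} and spans the integer cycle lattice.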
So how do we make invariant forms? This, again, is very old stuff. I'm going to take an arbitrary graded-commutative differential graded algebra; in the examples, just think of polynomials in some indeterminates, together with their differentials. If I take any matrix X whose entries are, say, polynomials, and the matrix is invertible, then you can do the following thing. You can form X^{-1} dX, which is a Maurer–Cartan form; you raise it to the power n and take its trace, and that produces a form β^n_X = tr((X^{-1} dX)^n) of degree n for every n. Now, very elementary properties of the trace show that this vanishes when n is even. It's always closed, it's a closed differential form. It behaves nicely with respect to transposing matrices: if you transpose the matrix, you get a sign, and that shows in particular that when your matrix is symmetric — and our graph Laplacian matrices are always symmetric — then half of these forms vanish. Another key property is that it's bi-invariant: if you multiply by a constant matrix on the left or on the right, the corresponding invariant form is unchanged. That's what's called invariant. Another thing that's quite important: if X is a k × k matrix, then all these forms vanish once the degree is bigger than twice the size of the matrix. And here's a plea. You can notice that, in fact, more is true: it's not just the trace, but the matrix (X^{-1}dX)^n itself that identically vanishes in that range. If anyone knows a reference for that, please could you let me know? So the first such form is just the logarithmic derivative of the determinant of the matrix. And then we have all these odd forms of degree 3, 5, 7, 9, 11, 13. If X is symmetric, which is our case, those of degree 3, 7, 11 vanish, and we're just left with 5, 9 and 13. And these, in fact, give you the classes in algebraic K-theory, which came up in the last talk, interestingly enough.
Okay, here are some examples. Take a generic 2 × 2 matrix X; then for β^3_X you take X^{-1}dX, multiply it by itself three times, take the trace, and get this nice differential 3-form. If you do the same thing with a 3 × 3 matrix, you're going to get a big mess, so we have to assume X is symmetric, otherwise the formula won't fit — well, it'll fit on a sheet of paper, but it's kind of ugly. If I take X a symmetric matrix, then of course β^3 now vanishes, but we get an interesting β^5, and here it is: something over the square of the determinant. You notice that when you do this you get massive cancellations, and basically what you're getting is what's called condensation of determinants. Those of you who have been around this subject for a while will recognize that phrase, due to Dodgson. So here's a theorem that I'll prove in the write-up of these notes. I'm sure it's the sort of thing that must be known, but I can't find it anywhere: the denominators are much smaller than you expect. If you take β of odd degree 2n+1, then because you've taken the inverse of a matrix and raised it to the power 2n+1, you expect the determinant to appear to the power 2n+1, but actually it only appears with half that power. And in the case when X is symmetric, it's even more spectacular: you only get one quarter of the power. Now, the reason I say this is because we see something very similar in quantum field theory. If X is the graph Laplacian, which it will be in a minute, then if you write out a formula for this form β, you're getting what are called Dodgson polynomials in the numerator, divided by the graph polynomial to some high power, and you're getting massive cancellations between the numerators and the denominators.
And this is exactly what happens in quantum electrodynamics in parametric form, which was formulated by Dirk and worked out in the thesis of Marcel Golz. So I think that's an interesting connection to quantum field theory. Okay, canonical graph forms. Now we apply invariant forms to graphs. We take a connected graph and take Λ_G, the graph Laplacian; as I mentioned before, its determinant is just the Kirchhoff graph polynomial. But now we define the canonical graph form — not the graph polynomial — as one of these invariant trace forms: the trace form built from Λ_G^{-1} dΛ_G, where Λ_G is just the graph Laplacian matrix. We can also take exterior products of these forms, and we get what I call the canonical algebra. Each such form is a map which to every graph assigns a differential form, so it's an infinite collection of forms. Okay, so here's an example for the wheel with three spokes, and you will recognize this. If you take a graph with edges e_1 up to e_n, we typically call the edge lengths α now, in the physics context, not l. So the graph Laplacian has entries which are linear functions of the α's; its determinant is a polynomial, the graph polynomial. You can work out this first form of degree five, and it gives exactly what many people will recognize as the Feynman differential form for the wheel with three spokes. Okay, the first theorem, then, is that for any graph, this canonical form is well defined: it doesn't depend on choices; it is closed; it is projective; and it has poles only along the graph hypersurface, which is the zero locus of the graph polynomial, the vanishing locus of the determinant of the graph Laplacian. It is functorial in G: an isomorphism of graphs induces an isomorphism of these forms. And it's compatible with edge contraction.
So if you contract an edge, you get the corresponding canonical form of the quotient graph. That is exactly what we want in order to define a form on outer space, on the space of metric graphs. In fact, the theorem is true for any wedge product: you can take any exterior product of ω^5, ω^9, ω^13, and so on. Those of you who know graph homology will be pleased to see that it has nice vanishing properties. If you restrict a form of the right degree — one whose degree matches the dimension of the graph's simplex — then this form vanishes when the graph has a two-valent vertex, when it has multiple edges, when it has a tadpole, and when it is one-vertex reducible. That's exactly what you see in the graph complex, so that's very nice. It has lots and lots of other nice properties that I shall skip. Okay. So now, armed with an infinite family of differential forms on spaces of graphs, the natural thing to do is to integrate them. To do this, we view the space of metric graphs — remember this simplex σ_G, whose points are the sets of possible lengths of the edges of a connected graph — as embedded, as I said before, in the real points of a projective space. So σ_G here, which is the same triangle we had earlier, is the set of points in projective space with homogeneous coordinates (α_1 : α_2 : α_3) where all the α_i are positive. I apologize, again I've written greater than or equal to zero; that's a mistake, it should be strictly greater than zero, because the simplex was the open Euclidean simplex, which was very important. Okay. And as many of you know, of course, these forms have poles along the boundary.
So there's a whole tricky business here, which in the case of finite integrals was initially worked out by Dirk and Spencer and Hélène, where you've got to do some blow-ups. There's a whole business that I'm not going to talk about for reasons of time, but many of you know it very well. Okay. So the first theorem is: take any connected graph and any canonical differential form — this invariant trace form of the Laplacian matrix — and suppose the form is of the right degree to be integrated, so it's a k-form on a k-simplex, which means the number of edges of the graph is one more than the degree of the form. Then you can try to integrate this form over the simplex. At this point, all the physicists adopt the brace position and take cover, because, as we know, Feynman integrals never converge: they are always infinite in pretty much all the interesting cases. But here it's the opposite: the integral is always finite. Even if you take a graph with the most horrible subdivergences you can imagine, the integral will always converge. That's kind of amazing, because we don't see that very often in quantum field theory — or ever, in fact. And in fact, this integral is a period of φ^4 theory, because the integrand, the differential form, is some very complicated numerator over a power of the graph polynomial, the first Symanzik polynomial. So it defines a period of φ^4 theory. Okay. Just a quick message to the organizers: I think I started about 10 minutes late, so I'll take the liberty of talking till 20 past. Sure? Yes, is that okay? Okay, thanks, I'm nearly done anyway. Okay, so here are some examples. The wheel with three spokes we've already seen: if you compute this canonical form, you get, in fact, 10 times the residue, the coefficient of 1/ε in dim reg, in φ^4 theory.
And everybody knows that that is 6 zeta(3). I had that explained to me first by Dirk and David many years ago; it's the cornerstone, the thing that got me interested in all of this in the first place. But now it gets interesting. The wheel with five spokes is not what you think; it's a complicated thing. The form is the Feynman integrand, where Omega_G is just the standard projective form, a sum of terms alpha_i d alpha_1 ... d alpha_n with d alpha_i omitted, plus some multiple of alpha_1 ... alpha_5, the product of the edge variables corresponding to the five spokes. And the integral is not zeta(7), the Feynman period of this graph, which is the integral of the first term alone. The Feynman integral is just that piece, and it would give zeta(7). But this integral gives a multiple of zeta(5): it's a weight drop. The second coefficient conspires exactly to cancel the coefficient of zeta(7) and leaves you with zeta(5). So this is incredibly reminiscent of quantum electrodynamics, where we know that the highest-weight parts in the quantum field theory cancel out. It strongly suggests that quantum electrodynamics, or other gauge theories, might have some motivic formulation in terms of these periods. The wheel with seven spokes: now it gets seriously hard to compute these forms. I worked it out, though I didn't check all the signs; it's something times zeta(7). Another class of graphs that we know how to compute are the complete graphs. You know that this integral is some multiple of a product of odd zeta values, zeta(3) zeta(5) ... zeta(2n-1). The reason you know that is that this is literally the Borel regulator in algebraic K-theory. And the calculation that this integral gives a product of zeta values goes back to Siegel, in a very beautiful paper in which he invented what's called the unfolding technique, which is used across the theory of modular forms.
So it's the unfolding for, say, an orthogonal group or a special linear group. And the beginnings, the birth of the whole subject of Tamagawa numbers, is in this calculation. So it's great: we can give a motivic interpretation, we can write down a motive associated to this Borel regulator, and it fits into this whole graph complex story. Okay. Stokes' formula; you can tell what's coming next. If I take a canonical form... so actually the canonical forms form a Hopf algebra. Don't worry about this in a first approximation; you just need to know that the basic forms like omega 5, omega 9 and omega 13 are primitive: delta of omega is just 1 tensor omega plus omega tensor 1. And a lot of terms in this formula will then simplify. But in general it's a Hopf algebra, and it's the obvious Hopf algebra structure: omega 5 wedge omega 9, for instance, maps to omega 5 tensor omega 9 minus omega 9 tensor omega 5. Okay. By applying Stokes' theorem to this compactification of the simplex in the blow-up of projective space, and using the properties of these forms, you show that the sum of the integrals corresponding to the faces of the compactified simplex is zero. That means that the sum of graph integrals over every possible contraction of your graph, plus a bunch of products corresponding to sub- and quotient graphs, vanishes. You'll recognise on the right the Connes-Kreimer coproduct. And the proof, once you've set everything up, is very straightforward. So we can rewrite this Stokes formula by splitting the coproduct into the trivial part, omega tensor 1 plus 1 tensor omega, and the reduced coproduct. When we do that and rearrange, we find that the Stokes formula has three terms. The first term is literally the differential in the graph complex.
The second term is what is sometimes called the second differential in the graph complex, the one where you don't contract edges but delete them; that's also a differential on the graph complex. And the third term is exactly the reduced coproduct: the coalgebra structure on graph homology induced by the Connes-Kreimer coproduct that I mentioned at the beginning of the talk. So this Stokes formula gives a very nice geometric interpretation of all these structures that are built into the graph complex. Some applications. You can use these integrals to detect non-vanishing graph homology classes; that's harder in practice than it looks, but in principle it's possible, and you can do it in some cases. We can associate a motive to any graph homology class, which perhaps explains why Grothendieck-Teichmüller and motivic Galois groups appear in graph complexes. And this third point I'm a bit embarrassed by, but it is actually true that the cosmic Galois group acts on the differential forms on outer space. It sounds like a joke I've concocted, but I promise you I didn't come up with the phrase 'cosmic Galois group': it was due to Cartier, as we learned in the last talk. And 'outer space' was coined by Culler and Vogtmann. It just happens that these things are in fact very closely related, in a meaningful way, by this machinery. So here's an example of the Stokes theorem. Let's go back very quickly to this picture of graph homology. Remember there was this class here, the first non-trivial Lie bracket, across G3,5 and weight 8. Using it, we're going to calculate an integral on this green class way up here. The way we do that is we take this graph here, which was a linear combination of graphs, and we integrate over it, applying the Stokes theorem.
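Schematically, and purely as my own paraphrase (signs suppressed, since I have not checked them), the primitivity of the basic forms and the three-term Stokes formula described above read:

```latex
% Basic canonical forms are primitive for the coproduct:
\Delta\,\omega \;=\; 1 \otimes \omega + \omega \otimes 1,
   \qquad \omega \in \{\omega^{5}, \omega^{9}, \omega^{13}, \ldots\},
% so that, for example, the reduced coproduct of a wedge is
\widetilde{\Delta}\,(\omega^{5} \wedge \omega^{9})
   \;=\; \omega^{5} \otimes \omega^{9} \;-\; \omega^{9} \otimes \omega^{5}.
% Stokes on the compactified simplex then gives, up to signs,
0 \;=\; \int_{\sigma_{dG}} \omega
   \;+\; \int_{\sigma_{\delta G}} \omega
   \;+\; \sum_{\gamma \subsetneq G}
         \Bigl( \int_{\sigma_{\gamma}} \otimes \int_{\sigma_{G/\gamma}} \Bigr)
         \widetilde{\Delta}\,\omega
% with d the edge-contraction differential, delta the edge-deletion
% differential, and the sum over subgraphs gamma with quotients G/gamma.
```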
What this produces, because its coproduct involves a wheel 3 and a wheel 5, which give you zeta(3) and zeta(5), is the following: we then apply the two differentials d and delta in the graph complex and zig-zag our way up to this class. Using Stokes' formula, we deduce that the integral of omega 5 wedge omega 9 over this class is in fact zeta(3) times zeta(5), and that proves that this green class is non-vanishing in homology. That was a bit quick, but here it is: for this class in H3, call it xi, you can show that the integral of omega 5 wedge omega 9 over it is a non-trivial product of zeta values. Okay, so now let's look again at this picture of graph cohomology. I've actually switched from graph homology to graph cohomology, because we're talking about forms and I want to dualise; it's easier to relate to graph cohomology, but it's really the same thing. So I've redrawn the conjectural, semi-conjectural, or in any case computer-calculated non-zero classes in graph cohomology, which is just the dual of graph homology, and I've given them names; the names are these differential forms and Lie brackets of these differential forms. So omega 5 corresponds to the wheel with three spokes, omega 9 to the wheel with five spokes, omega 13 to the wheel with seven spokes, etc. Then we have wedge products: the first wedge product is this class in H3, there's a triple wedge product up here in H6, and so on and so forth. Then we have Lie brackets of the original forms, and also Lie brackets of the wedge products of these forms. And these classes line up exactly with all the known graph homology classes. So here's a conjecture. I conjecture that the free Lie algebra on this canonical algebra of differential forms injects into graph cohomology. It does so in a funny way: the grading here, the degree, corresponds to the number of edges.
But they line up in funny homological degrees that I don't know how to predict. The obvious question is: is this map an isomorphism? Since I can always deny it later, I will conjecture here, during this talk, that it is an isomorphism, but I reserve the right to take that back. Anyway, it's a weaker, more tentative conjecture than the previous one. I was going to say something about the dual, but since I'm out of time I'll stop and leave you with this picture so you can contemplate it. If there are any questions... Thank you very much for listening.

Thank you, Francis, this was amazing. I have a million questions, but if anybody else wants to go first... I'll go first. The G3,5 linear combination, which you said was hideous: do you have it handy? I'd love to know.

Oh, it's known. I mean, I think it's written down in some papers. That one's okay, you can compute it. But I think the limit of the computer calculations is this column. I'm not 100% sure, and I'm out of date; as I said, I got this from slides of talks a year ago, and I don't know if things have moved on. But I think the 10 column and everything to the left of it is checked by computer, and this column 11 is possibly not. So you'll get your 3,5 graph, but you'll be in a mess. If you want to go much further than that, you're going to have to give up. If you just want to compute the graph, you can take the wheel 3 and wheel 5 classes and just compute the Lie bracket.

I mean, the natural question is whether this is a non-zero class.

Yes, but also, there's a difference between working in graph homology and graph cohomology. If you work with graph homology, the classes are not wheels; they're wheels plus a long tail that's not known, and you've got to take the Lie brackets of those, and you don't know what they are.

Do you have a question, David? Yes. I didn't mind you throwing away half of the wheels; I thought that was quite fun.
You got that the zeta values decreased. But as it's the week of the expert talk: do you have any thoughts about the zigzag diagrams? Those are, after all, the only infinite class of diagrams in quantum field theory which have a closed formula, which Dirk and I conjectured and you and Oliver proved.

That's a great question. Actually, I'm new to this subject, and when I first learned about it at a conference a year ago I asked exactly the same question of the experts, and they said they didn't know. It's a great question. Thank you.

I wanted to ask a question about the weights. You mentioned these weight drops, and I found it quite striking that you had these pure-weight evaluations of these wheels, with these complicated integrands, and you just got a zeta(7) or just a zeta(5). So is there an understanding, from the structure of the blow-up, that gives some bounds on the weights that would predict this?

It's not obvious from an algebraic geometry perspective, not that I know of. But I have the feeling that all these integrals come from this Borel regulator. So you know that, morally, omega 5 wedge omega 9 corresponds to zeta(3) zeta(5). What I would like to be able to do, but can't, is relate this complete graph, using Stokes' formula and identities, to wheel 3 tensor wheel 5; that would give it immediately. But I don't know how to do that. So I did that argument in the opposite direction to compute one of the examples.

Thank you very much. Let's thank Francis again. I see no other questions at the moment.