Well, thanks for the invitation and the introduction. And yeah, thanks, Karen and Eric, for making this happen, and the team at ERGS, against all odds at least, to meet and celebrate digitally. So what I'm going to talk about is a theorem of Kreimer, or rather, you could say, a future theorem of Kreimer. But before that, let me go back to this point of being one of the first PhD students of Dirk in Berlin. We were a group of six people, and we became known as the Kreimer gang. Here's a picture of us hanging out in Berlin. Something I want to mention is that half of us didn't have a real background in physics, and certainly not in mathematical physics and quantum field theory. I had never seen a Feynman diagram before in my life. Markus hadn't done any physics whatsoever. And Lutz had even started out as a butcher, then studied some kind of food science, did his diploma in some polymer physics, and eventually switched over to quantum field theory. I think that says a lot about Dirk and his approach and his laid-back style: giving us the chance, forming this group, and not only worrying about the science in this group, but also about a good atmosphere, a good group vibe, which we certainly had. We also had these nice field days together, and he would invite us at least once, I think in the first years even twice, to his house, where his lovely wife Susanne cooked fantastic dinners for us, and Dirk served the finest wine. These were always big feasts, really Lucullan meals. So it's been very much a joy ride being in this group and then later continuing to work with Dirk. And part of my work has been with Dirk on his vision. So I said maybe future theorems, but for now it's his vision. 
And this vision is that Cutkosky rules in outer space, which is a slogan arguably on the same level as the cosmic Galois group acting on outer space, which is also quite nice, I think. OK, for reference: I think the story started, at least in written form, with this article here by Spencer and Dirk, which contains basically all these ideas of Cutkosky rules, outer space, and the cubical chain complex. Then there's also this paper by Dirk alone, which actually has the title Cutkosky Rules in Outer Space. And most of the stuff I'm mentioning today is based on this article here that we wrote together this year, and, at least, my point of view, my take on these things. Regarding this term vision: something else that I feel is very special about Dirk is that he's a real visionary. As a PhD student it was sometimes a bit frustrating or challenging when you would ask him a question and you always got this sort of mystical answer. There was always a guide to where to find the truth, but he lets you find it for yourself. Over the years I have really come to cherish this oracle-like quality of Dirk, and as we heard throughout the talks yesterday and on Monday, he has inspired many people by it. So thanks for being our oracle in regard to Feynman diagrams and physics. OK, so let's get serious. Here's a baby example. We would like to study the analytic structure of a function that is defined by an integral, where the integrand depends on some parameter, here just a complex variable t. For this argument, let's take gamma to be a circle around 1; you see it here in the figure. The integrand has two poles, at plus and minus the square root of t. So for example, if we plug in t equals 1, we see that the integral is fine, because the singular points don't hit our integration contour, and we can do the integral, for example, using the residue theorem. 
Then we could ask ourselves: OK, for which values of t is this still well defined? The point is that if I move t around, these singularities move in the complex plane. But I can also deform gamma to stay away from them, unless t goes to 0, because then the two singularities approach each other and there's no way for gamma to be deformed away; it has to be hit by the two singularities. So I get a singularity in my function f of t. And here you can easily deduce the kind of behavior around this singular point, because if you let t encircle 0, the two square roots exchange their positions, each going half a circle, and in order to analytically continue, I have to deform gamma along the way. So I travel, so to speak, from 1 over square root of t to minus 1 over square root of t. OK, so that's an example of a function defined by an integral. Here's a more sophisticated one, which we all know: the Feynman integral. We have a graph G with edges, loops, and legs. Here's the momentum space representation: we have this product of propagators, one for each edge. These D_i are quadrics, q_i squared minus some mass squared, plus an i-epsilon term. The q_i's are always linear combinations of the loop momenta and the external momenta, which are associated to the legs of my graph. And M^d is d-dimensional Minkowski space. The point being, this looks in some way really similar to our baby example, but it's more complicated: there are more singularities and many more dimensions to deal with. And then there's also the problem of this non-compact integration domain sitting in a non-compact complex manifold, at least in the first place. Here's an example I think everyone knows. The point is just that we can view this as a function of the kinematical data, the p's and the m's. 
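Since the baby example is so concrete, here is a quick numerical sketch of it (my own illustration, not part of the talk): the function f(t) defined as the contour integral of 1/(z^2 - t) over a circle gamma around 1, evaluated by discretizing the contour. The function name, the radius, and the discretization are my choices.

```python
import numpy as np

def f(t, center=1.0, radius=0.5, n=20000):
    """Contour integral of 1/(z**2 - t) over a circle around `center`."""
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta) * (2 * np.pi / n)
    # trapezoidal rule on a periodic integrand: very accurate here
    return np.sum(dz / (z**2 - t))

print(f(1.0))  # approximately pi*1j, the residue 1/(2*sqrt(t)) times 2*pi*i
```

For t near 1, only the pole at plus square root of t lies inside gamma, so the residue theorem gives f(t) = pi i / sqrt(t); letting t run once around 0 swaps the two square roots and flips the sign of f, which is exactly the monodromy just described.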
In the following, we will fix the m's and just view it as a function of p. Of course, there's also momentum conservation among the p's; that's why no p3 appears here. But for this talk, I don't want to worry about this. So the problem is we want to understand the analytic structure of this function I_G as a function of the momenta p. In the mathematical case, vastly generalizing this baby example that I gave, there is a nice mathematical account: a book written by Frédéric Pham, and of course many others have worked on this. The problem is that it only almost covers the case of Feynman integrals; there are some technical problems which make the case of Feynman integrals way, way more complicated. So one could ask first: where are the singularities of this map? In 1960, Landau gave some necessary conditions for when such singularities occur and where. And quite remarkably, there had been no proof in the literature for 60 years now, although the conditions are commonly used and well agreed upon, I'd say. Max Mühlbauer, a PhD student of Dirk, at least that's what he promised me on Monday, told me that he'll upload his paper to the arXiv this week. So Max, I hope you did. The precise conditions are maybe not so important for this talk, but Landau's equations state the following. For now, let's attach a formal variable to each internal edge of the graph. The first condition simply states that some of these propagators produce poles, that is, the corresponding D_i vanish. And the second one is a formulation of the pinching condition: these singular hypersurfaces have to meet in such a way that I can't deform my integration contour. This translates into the condition that, for each loop j numbered from 1 to L, the sum over the edges i in that loop of x_i times q_i has to vanish. 
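In symbols, with a Feynman parameter x_i attached to each internal edge, the two Landau conditions just described can be summarized as follows (a standard rendering, in my notation):

```latex
x_i\, D_i = 0 \quad \text{for every internal edge } i,
\qquad
\sum_{i \in \ell_j} x_i\, q_i = 0 \quad \text{for every loop } \ell_j,\; j = 1, \dots, L,
```

so each edge either carries x_i = 0 (and may be collapsed) or sits on its mass shell, D_i = q_i^2 - m_i^2 = 0.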
Then you can solve this set of equations and, depending on the solution, you get some subset or variety in P, the space of external momenta. A solution where all the D_i vanish is called the leading singularity, and the others are referred to as reduced singularities. Whenever we set one of these x_i to 0, we can think of the graph where the edge with label i is collapsed to a point. These are the reduced diagrams of G, and their singularities are then the reduced singularities of G, or rather of the function I_G. OK, so the idea is that if we knew where these singularities are, and if we knew what kind of behavior this I_G has around them, then in principle, theoretically, we could reconstruct the whole function from this data. The magic word here is dispersion relations, some Hilbert transform. Certainly it's debatable how doable this is for very complicated expressions, but it's also important from a practical point of view, for numerical calculations and the stuff that real physicists do in their real-life applications. OK, so Landau tells us where the singularities are. Cutkosky, in the same year, formulated his theorem, or conjecture, depending on the point of view: that we can compute the discontinuity associated to such a Landau singularity by doing the kind of Feynman integral that I have written here. So if D_1 up to D_k vanish, so I have these propagators producing poles, then I get the discontinuity by computing this integral. Cutkosky tells me: leave all the other propagators as they are, and for the propagators that produce poles, take the residue at their polar hypersurfaces, which is expressed here by these delta-plus operators, so to speak. For today the precise form is not so important, but I put the definition down here. 
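Schematically, if edges 1 through r are the ones whose propagators are pinched, Cutkosky's rule for the discontinuity reads (my transcription of the standard formula; normalization conventions vary):

```latex
\operatorname{Disc} I_G = \int \prod_{j=1}^{L} \mathrm{d}^{D} k_j \;
\prod_{i=1}^{r} \delta_+(D_i) \prod_{i=r+1}^{n} \frac{1}{D_i},
\qquad
\delta_+(D_i) = 2\pi\, \theta(q_i^0)\, \delta\!\left(q_i^2 - m_i^2\right),
```

that is, the cut propagators are replaced by their positive-energy on-shell residues, and the rest are left alone.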
It takes some sort of residue with respect to the positive-energy part of the propagator. OK, so the message here is: there's a formula, and it's not proven. We would like to prove it. There's an unfinished proof, at least as far as I understand, maybe Spencer can correct me on that, in this draft that I mentioned by Dirk and Spencer. I think there are some details missing, and it's also not covering the most general cases. But as far as I know, it's also the only really modern account of this problem for the class of Feynman integrals. So if you're interested in that, I advise you to have a look at this paper. Apparently it's hard to prove, so maybe we could look for an alternative approach. One idea that's been around in this whole amplitudes business is to think about regrouping the Feynman integrals, or re-expressing them, finding some other way of expressing the amplitude in which we are eventually interested, and starting from there: studying the singularities of that expression. Maybe there's even some cancellation of singularities, which would make our life easier. In that regard, here's a theorem from the paper with Dirk. It says that I can write the integral I_G as a sum over the spanning trees of G. (Renormalization can be covered here too, but I don't want to comment in this talk on how to do this; maybe a few words later.) So I write it as a sum of integrals, each depending on G and a spanning tree T, and the sum goes over all spanning trees. Each is a sort of Cutkosky integral: for the edges in T, I do nothing, I just leave the propagators as they are; and the propagators not in the spanning tree I put on the mass shell by this delta-plus operation that I introduced before. So one says "the cut integral", or that one cuts the edges. Here's an example in terms of diagrams: a nice graph, two loops. It has five spanning trees, as you may check for yourself. 
So it means I have to sum these five integrals. The edges not in the spanning tree I've depicted here with a red slash, which means they are cut, or put on the mass shell, by these delta-plus operations. OK, and if you want to see why: for one-loop graphs it's rather straightforward. What you do is basically one integration using the residue theorem, and you get precisely this cut formula. One has to be a bit careful with these i-epsilons, but in the end it works out. Going from one loop to higher loops, as here, is then quite nice, because you somehow want to write your integral as an iterated integral of one-loop integrals. So the proof goes via a careful re-expression of these integrals. What's doing this for you is the iterated application of the core coproduct of the Hopf algebra, which disassembles your graph into pieces of one-loop graphs. These you put into this iterated-integral procedure, doing the step I did here for the one-loop example iteratively, and you get this formula: I_G equals the sum over T of I_G of T. OK, so maybe it's clear, but just to remark: we have a trade-off here. I have one integral on the left, and on the right I have more integrals, but they have fewer integrations, because the delta-plus operators take care of as many integrations as there are edges not in the spanning tree T. So I have more integrals, but maybe simpler ones. And then one could even go and try to repeat this whole Landau and Cutkosky business for these integrals. A priori, if you try it straightforwardly, it's not so easy, because there are some linear parts now in the poles; but with some clever coordinates one can maybe make progress even here, as future work and future stuff to exploit. OK, now I want to embed the theorem into a new picture. So let's go to outer space. 
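As a sanity check on the "five spanning trees" count, here is a small script of mine. I'm assuming the two-loop example is the triangle with one doubled edge, which indeed has five spanning trees; any small two-loop graph works the same way.

```python
from itertools import combinations

# A two-loop graph as a list of edges (vertex pairs); multi-edges allowed.
# Assumed example: vertices 0, 1, 2 with a doubled edge between 0 and 1.
edges = [(0, 1), (0, 1), (1, 2), (2, 0)]
n_vertices = 3

def is_spanning_tree(subset):
    # A spanning tree on n vertices has n-1 edges and no cycle;
    # union-find detects cycles and tracks connectivity.
    parent = list(range(n_vertices))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a, b in subset:
        ra, rb = find(a), find(b)
        if ra == rb:          # this subset contains a cycle
            return False
        parent[ra] = rb
    return True

trees = [s for s in combinations(edges, n_vertices - 1) if is_spanning_tree(s)]
print(len(trees))  # 5, matching the five cut integrals in the sum
```

Note that `combinations` distinguishes the two parallel copies of the (0, 1) edge by position, so both appear in spanning trees, while the pair consisting of the two parallel edges is rejected as a cycle.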
Here's a nice picture of Dirk in front of outer space. This guy here is outer space in rank 2. And the way to do this, or one way to get from Feynman integrals to outer space, to embed these ideas into outer space, is the parametric representation. Basically, Karen has told us this morning already what outer space and the moduli space of graphs are, and then Francis made this even more precise, but I'll still just repeat it here. The basic idea is you have these parametric Feynman rules. Now the integration domain is this positive piece of projective space, and we associate to each edge in G a variable x_e or x_i. This delta_G is just homeomorphic to an (n minus 1)-dimensional simplex. The integrand, after doing these transformations, is then, as we all know, expressed by powers of graph polynomials. And just to make the point here: all the data that appeared first in the momentum space representation is, of course, still here, the dimension D, the number of edges, the number of loops. The kinematics, so the masses and the momenta, are absorbed into the second graph polynomial here. I don't want to give the precise formulas for these graph polynomials, but they satisfy some crucial identities, which translate into identities for this form omega_G; Francis also mentioned this before. So if I set one of these edge variables to 0, it describes a boundary face of the simplex delta_G, and I can think of this boundary piece as the simplex associated to the graph G with the edge e collapsed. The identity here says that if I restrict omega_G to the boundary, then it's actually the same as the form associated to the graph that represents the boundary. Except if you have a tadpole; then this doesn't work, but let's not worry about this. And the other identity, I guess also well known to everybody here, says that, of course, I can repeat this procedure. 
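To make "powers of graph polynomials" a bit more concrete, here is a sketch that builds the first Symanzik polynomial from its spanning-tree description, psi_G equal to the sum over spanning trees T of the product of x_e over edges e not in T. The function name is mine; the talk does not spell out the formula.

```python
from itertools import combinations
import sympy as sp

def first_symanzik(edges, n_vertices):
    """psi_G = sum over spanning trees T of prod_{e not in T} x_e."""
    x = sp.symbols(f"x0:{len(edges)}")
    psi = 0
    for keep in combinations(range(len(edges)), n_vertices - 1):
        # union-find check that the kept edges form a spanning tree
        parent = list(range(n_vertices))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        is_tree = True
        for idx in keep:
            a, b = edges[idx]
            ra, rb = find(a), find(b)
            if ra == rb:
                is_tree = False
                break
            parent[ra] = rb
        if is_tree:
            # multiply the variables of the edges NOT in the tree
            psi += sp.prod([x[i] for i in range(len(edges)) if i not in keep])
    return sp.expand(psi)

# bubble graph: two vertices joined by two edges
print(first_symanzik([(0, 1), (0, 1)], 2))  # x0 + x1
```

For the bubble this gives x0 + x1, as expected; the second Symanzik polynomial, which absorbs the masses and momenta, has an analogous description in terms of spanning two-forests.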
But when my edges form a subgraph which is divergent, a condition on its topology, then omega_G has a pole. The nice thing about this pole is that if I look at the residue, it has a product form: omega of the divergent subgraph gamma times omega of the cograph, G with gamma contracted. And we've already seen this a couple of times: it's one piece of the renormalization Hopf algebra coproduct. And a remark, maybe it becomes clear later, but for those who know: a similar identity holds for the blow-up of the cell, or its compactification, which Francis also briefly described. So why are these crucial? The first, as I want to show later on, actually allows one to compare Feynman integrals of different graphs, or more precisely to compare the singularities associated to these functions. And the second one is the starting point and cornerstone of renormalization. So let's think about formulating an amplitude, not only a single Feynman integral, but say all the Feynman integrals that contribute to an amplitude for a fixed number of loops and legs. Then we could try to embed this integration procedure into a whole space. For that, let us fix the masses once and for all to a finite set; we don't want to change the masses. We think of these I_G's as a sort of family of functions: the graph determines the shape of the function, and these m_i's are certain constants that appear in it. Possibly, we could even have further restrictions on this coloring map, so to speak. Think of quantum electrodynamics, where only certain vertices with certain combinations of colored edges, and even directions of edges, are allowed. In principle, we can model all this with a coloring and/or direction of the edges. And then what we do, as also described by Francis and Karen, is build a topological space out of these pieces. So we take these integration domains. 
We take the interior points, one open simplex for each graph G, where this capital G is the set of all 1PI Feynman diagrams, all vertices at least three-valent, and C determines how the edges can be colored. So we take the disjoint union of all these open simplices, and then we identify them by a relation induced by edge collapsing. If I have a big graph G and a smaller graph H, and I get from G to H by collapsing a forest in G, then I identify the corresponding face of delta_G with the simplex that I associate to H. So whenever two graphs are related by some forest collapse, I identify the face of the larger simplex with the simplex of the smaller graph. And because we have colors around, we also identify points that are related by graph isomorphisms, and even by colored isomorphisms. This is actually what makes this space rather interesting from a topological point of view. Here's an example. The notation here means we ask these coloring maps to be injective; this is just a toy model case where you want all the masses to be different. If I have a one-loop graph with three legs, then I can have at most three internal edges, so I need three colors to color them differently, and I can build this space out of this data. There are six such graphs where all vertices are three-valent. And if I collapse here the blue edge, then I walk into this face. If I shrink the length of the black edge, then I walk into this face; now the black edge has zero length. By the procedure I described, this boundary of the simplex gets identified with the simplex associated to this graph here. Then I can reinsert, or slowly blow up, the black edge again; but I can also interchange the positions of two and three, and this would lead me to another simplex, another cell, going here. 
And what you get, if you walk around and check this out, is actually the torus. On the other hand, if you forget all the colors, then first of all, all these points in the interior of each simplex are related to the corresponding points in the other simplices, so these two simplices just get squished, folded onto each other. And these one-dimensional cells here also get folded in half, because if there are no different colors on these edges, then I can't distinguish whether, so to speak, the upper edge gets smaller or the lower one; there's an automorphism flipping the edges, so that's actually the same point. So in that case, without colors, you have just one sphere as the space. And if you like these things, you can try to figure out the case with two colors; it's quite a fun exercise. With more than one loop, something interesting happens: I now have some missing cells in my space. Here's the cell sigma that you can associate to this graph here on four edges. I can shrink the length of any edge to zero and end up in these two-simplices here at the boundary. I can also shrink two edges and get to these one-cells here, except for the edges numbered three and four, because collapsing those would drop the loop number, and I'm not allowed to do that in this space. So everything that's red in this picture is at infinity; it's deleted, missing from the space. If you are looking for a compact space, or want to study renormalization, you can do the following: you truncate this cell, this semi-open cell, before infinity, so you cut off this piece here. In this case it would look like this; the new faces appearing I've depicted in orange. And again we have this Hopf algebra structure appearing in the description of these new faces: the cell associated to gamma, which is this graph on two edges, is just a line, a one-simplex, and the same for the cograph. 
And that's describing this square that I have, sort of, inserted into the space. So the upshot of all of this is: when we recall our form of the Feynman integrand, we now see that the integration domain of I_G can be viewed as a cell in this moduli space of graphs. The integrand you can think of as some sort of volume, let's say a Feynman volume, of this cell. Or, more technically, it's a compactly supported distribution density on this space; it's compactly supported owing to the fact that I can only integrate without going to infinity. And then, as for how to regularize this: I use this compactification procedure, which is a sort of Borel-Serre compactification. If I do this, I have this nice structure on the faces at infinity, these new boundary faces, and so it's a natural setting to study renormalization. Whether I worry about renormalization or not, the other point that I want to introduce here is that I can now think of amplitudes as sort of semi-discrete volumes of these spaces. Semi-discrete here just means the following: if the graphs participating in my amplitude have different numbers of edges, then they belong to pieces of the space of different dimension, because the dimension of a cell is the number of edges minus 1. So to calculate the amplitude, I would need to sum the k-dimensional Feynman volumes of the k-cells in this space, plus the (k minus 1)-dimensional volumes of the (k minus 1)-cells, and so on. In some sense, this is a finite-dimensional picture of what Francis just described: we cut off everything that appears above a finite dimension, which is determined by the number of loops and legs. And also, in contrast to what he did, we only look at one form, which is dictated to us by the Feynman rules. So this omega_G is also a collection of forms on this space, but as I said, it's given to us by Feynman. 
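So, schematically, the "semi-discrete volume" picture of an amplitude at fixed loops and legs reads (my shorthand, suppressing symmetry factors and conventions):

```latex
A \;=\; \sum_{k} \;\sum_{G \,:\, \dim \sigma_G = k} \int_{\sigma_G} \omega_G,
\qquad
\dim \sigma_G = |E_G| - 1,
```

a sum of the k-dimensional Feynman volumes over the k-cells, plus the (k-1)-dimensional ones, and so on down through the dimensions.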
Yeah, so apart from being a nice background to study Feynman amplitudes, this space is also interesting in its own right, just as a topological space. Together with Max Mühlbauer, we studied it for a bit, following the vision of Dirk, who told us: study the space, play around with Feynman integrals, and see if you can find something. What we did find out about this space is the following. If all edges have to be colored differently and we look at one-loop graphs with S legs, then we can calculate the integral homology: it's given by Z to the power (S minus 1) factorial over 2. If we do the same calculation for the rational homology in the same dimension, but allow arbitrary colorings, then it's much more difficult; what we got was a polynomial bound on the Betti numbers. They grow with the number of colors as a polynomial whose degree is the number of legs. The third point is that all the lower Betti numbers are actually independent of the number of colors, which I find quite remarkable. Although, to be honest, that's a bit of a stretch: we don't have a full proof of this statement. We have a geometric proof that's not quite working and an algebraic proof which we maybe don't fully understand. But since this is a physics conference, maybe let's accept it as a theorem for now. Another interesting fact is that you have plenty of maps between these moduli spaces for various colors and various numbers of legs and loops. These maps are naturally given by changing the number of colors, permuting colors, forgetting colors, chopping off legs, adding legs, gluing legs together, gluing graphs together along their legs, or inserting graphs. So there's a whole zoo of stuff to explore. 
In the uncolored case, this has already been done in the study of the automorphism groups of free groups, where this uncolored moduli space of graphs with L loops and S legs computes the rational homology of the groups Gamma_{L,S}. If S is equal to 0, this is the outer automorphism group of the free group on L generators; if we have one leg, it's the full automorphism group of the free group on L generators. The sequence continues, but I think these higher groups don't have such classical interpretations. Anyway, let's go back to physics, and let's revisit our theorem, this loop-tree duality theorem, because there are similar theorems in the literature about how to rewrite Feynman integrals as a sum of integrals indexed by trees, or cut integrals, and these all go under the slogan of loop-tree duality. Let me just describe, somewhat abstractly, a nice reformulation; to make this precise and work it out in detail is actually quite hard, as probably everyone knows who has tried to play around with parametric integrals. The fact is, and Ralph yesterday quite remarkably produced this out of his Feynman categories, that there's a deformation retract that sits inside this moduli space of graphs. This is particularly nice because we have this space with these points at infinity, these missing cells. Here's an example of a two-loop graph; these three corner points are missing because they correspond to the metric, the edge lengths, vanishing on two edges. To get something more finite, something we can handle better, there's a procedure of deformation retracting onto a simplicial complex that lives inside this moduli space. And in fact, these simplices group together to form cubes. So we have a nice cubical complex that sits inside this moduli space of graphs, which is of quite a bit lower dimension and has a nicer structure. 
The point I want to quickly remark on here is that you can think of this integral as some sort of fiber integration: you view the cell as a fibration over this cubical complex that sits inside of it, and then the theorem can simply be thought of as doing the fiber integrations. You're left with an integral only over these one-dimensional subspaces here. Obviously, there's more than one way to describe this fibration, how to put the fibers, so to speak, and this could produce, or explain, the different formulations of loop-tree duality that are around. One last remark here: what's also interesting is that, depending on how you describe your fibration, different fibers might walk into these points at infinity. So you have to adapt your renormalization program to how you fiber-integrate, so to speak. Let me check the time. How much time do I have left? Well, I mean, 25 would be ideal for the next talk. OK, I can manage. So the last point. Let's consider a theory with only cubic interactions, which means all graphs participating in the amplitude are three-valent. In the geometric picture I just described, that means I'm integrating over the highest-dimensional cells in my space, because those are represented by the graphs where all vertices are three-regular. So you might say: OK, but then I can just forget all the other stuff in lower dimensions, I don't need to worry about it. But my claim is that those cells carry some information, and this information is related to the singularities of the Feynman integrals of the high-dimensional cells. The way to extract this information is the incidence of graphs, the incidence relation in the cell structure. 
One way to put this is: take a subgraph gamma of G and map it to the Landau variety associated to the reduced graph, G with everything in gamma collapsed to a point, or to its individual components if gamma is not connected. Here, without going into too much detail, you could also just think about the poles of the integrand; but you can also go one step further, to the solutions of the Landau equations associated to the points where D_i vanishes for i in some subset of edges not in gamma. So I forget the poles associated to gamma. If I take the union of these pieces, there's an obvious partial order; on the left I have to order by reverse inclusion, but that's just a technicality. And then I have this poset structure of singularities. It tells me, in some way, if and where two Feynman integrals actually have singularities in common. In terms of the space: I have a cell here and a cell there, and if some collapse of edges leads into a lower-dimensional stratum shared by both, then they have all the singularities that live in this lower-dimensional stratum and below in common. Then you might ask: OK, how do I probe this? Can I somehow study this incidence relation? There's more than one way to do this, like studying the Hasse diagram of this poset, or even trying to look at the incidence algebras that you can associate to posets. But in this whole outer space, graph complex setting, what we tried to do, the idea we had, was: is there a simple graph complex that tells us something about physics? So here's a baby graph complex, if you compare it to the one Francis showed us before. What you do is take just Z2 coefficients: you take the free Z2 module generated by all the graphs that form cells in my moduli space. 
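The bookkeeping in this incidence picture is elementary enough to sketch in a few lines. Here, singularity sets attached to graphs are compared by intersection, and the Hasse diagram of the inclusion order is read off; all the graph names and Landau loci below are invented purely for illustration.

```python
# Toy sketch: singularity sets attached to cells, ordered by inclusion.
# Every name and locus here is made up for illustration.
loci = {
    "G1": frozenset({"s=(m1+m2)^2", "s=(m1-m2)^2", "s=0"}),
    "G2": frozenset({"s=(m1+m2)^2", "s=4*m1^2"}),
    "G3": frozenset({"s=(m1+m2)^2"}),
}

def shared(a, b):
    """Singularities two Feynman integrals have in common."""
    return loci[a] & loci[b]

def hasse_edges(sets):
    """Covering relations of the poset of singularity sets under inclusion."""
    keys = list(sets)
    covers = []
    for a in keys:
        for b in keys:
            if sets[b] < sets[a]:  # strict subset: b lies below a
                # keep only covering relations: nothing strictly in between
                if not any(sets[b] < sets[c] < sets[a] for c in keys):
                    covers.append((a, b))
    return covers

print(shared("G1", "G2"))   # frozenset({'s=(m1+m2)^2'})
print(hasse_edges(loci))    # [('G1', 'G3'), ('G2', 'G3')]
```

The shared stratum ("G3" here) is exactly the kind of lower-dimensional cell on whose singularities the two bigger cells agree.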
So again, C is the number of colors, L the number of loops, S the number of legs. I grade them by some convention; let's take the number of edges minus 1, to stay with the dimensional picture that I had in the moduli space. And I define a differential exactly as Francis did: I collapse e, and if e is a tadpole, I set the result to 0. The c here is the coloring, which I've carried around implicitly all the time, and the rule just says: if e is collapsed, then forget the data of the color on e as well. So why so simple? Because if you want to study graph complexes and see how they interact with Feynman integrals, the signs that you have to introduce to make this differential work, say, over Q, and the symmetries that we saw (if a multi-edge appears, then this guy is 0, and so on) are very hard to make sense of in the Feynman integral setting. These Z2 coefficients get rid of all these problems. So the idea was to just play around and see what you can get out of this. And here, the message is that if I take the top-rank homology, which is the homology related to the graphs where all vertices are three-valent, then it gives me some sort of partition of the set of graphs in the amplitude. And this is a nice partition. I could wrap many words around this, but let's take a simple example. Again, let's take one loop, basically the only example that fits on the slide, and just two masses. Then you can compute and see that this guy is closed: d applied to this combination vanishes, and you get another class by simply exchanging m1 and m2. And according to the rules of how to form the amplitude, these are all the graphs from which you can build a one-loop, three-leg amplitude. If you solve the Landau equations, you get this set for the reduced singularities. I think for one-loop graphs you can also take the full set, I don't have to take only the reduced ones, but let's stick with this. 
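Here is a toy implementation of this Z2 differential in the simplest possible model, where a one-loop graph with distinctly colored edges is flattened to just the set of its edge colors (a drastic simplification of mine, ignoring legs and external structure). Collapsing an edge forgets its color, and a collapse that would act on a tadpole gives 0. Over Z2, chains are sets and addition is symmetric difference, and one can check that d squared is 0.

```python
# Toy Z/2 graph complex: a one-loop graph with distinctly coloured edges
# is modelled as the frozenset of its edge colours (my simplification).

def d_graph(g):
    """Boundary of one graph: the Z/2 sum (= set) of its edge collapses."""
    if len(g) <= 1:          # collapsing the edge of a tadpole gives 0
        return set()
    return {g - {e} for e in g}

def d_chain(chain):
    """Extend d to Z/2 chains: symmetric difference is mod-2 addition."""
    out = set()
    for g in chain:
        out ^= d_graph(g)
    return out

triangle = frozenset({"a", "b", "c"})   # one-loop graph on 3 coloured edges
boundary = d_chain({triangle})
print(sorted(map(sorted, boundary)))     # [['a', 'b'], ['a', 'c'], ['b', 'c']]
print(d_chain(boundary))                 # set(): d of d vanishes over Z/2
```

This is of course far from the real complex, but it shows why the Z2 coefficients are convenient: each pair of collapses appears twice and cancels mod 2, so no signs or orientation conventions need to be tracked.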
And so the point I'm making here is that the full amplitude I can now write as one function of p (p being p1 up to p3) plus a second one. And these two functions are given by summing the Feynman integrals of these homology classes. And they have this obvious symmetry here: permuting m1 and m2 translates these guys into each other, as well as their singularities. Of course, in that case it's a really simple statement. But what we can show is that in the setting where all edges are colored differently, this property holds for all one-loop graphs and every number of legs. And how do we show this? Because we know the topology of the moduli space, and here there's a direct interpretation of this graph complex as a chain complex associated to this topological space. For arbitrary colorings it's not so clear; there are some obstacles. In some cases it computes the relative homology of these moduli spaces that I introduced, but there are some difficulties. And OK, just to finish: for higher loops, we don't know anything about the homology. So there's still a lot to understand. I have some stuff I calculated by hand, but it's only working for 0 and 1 legs, so exactly the cases that are not so interesting for physics. And so we might have to put this on a computer, or we switch to the cubical world. Because, as you've seen, this is a much lower-dimensional space, and it has a simplicial complex structure. So there it might be easier to make sense of this comparison between the graph complex and the topology of the space, and in turn deduce something about the sharing or incidence relations between Feynman integrals. And of course, Dirk has always preached to us: consider the cubical chain complex, not the simplicial one. So of course, this was in total agreement with his prophecy. And yeah, thank you for listening. And once again, happy, happy birthday, dear Dirk. Thank you, Marco. Amazing graphic skills, I must say. Yeah, thank you.
Thanks are due to Dirk for the funny pictures. I think they are pieces of art; they're hilarious. Can I make one quick comment about my paper with Dirk? This is totally my fault. The methods, say the Thom isotopy theorem and also the work of Pham, just don't tell you the answer in situations where the zeros of the propagators intersect in complicated ways. So it's not clear what the boundaries are of how far the methods in that paper go, but they certainly don't cover all the complicated possibilities. So I'm sorry: early on, I communicated to Dirk more optimism than was warranted. So in that paper the methods are there, and they are interesting, but I think that's the problem. I suppose you should talk to Max Mühlbauer then, because he's also been working on understanding this geometric situation to prove the Landau equations. Yeah, well, the challenge is, when you get old and you've worked on many things in your career, it's hard to get everything back in your head at the same time. So anyway, I did want to set the record straight. I mean, I think there are interesting things in the paper, but it doesn't go all the way. Yeah, I think the problem with Pham, in a way, is that he wrote encouraging papers but never did the physics calculation at the end. Especially for Minkowski signature, the thing is a bit more complicated. Well, that is one thing that the paper does do. I mean, Pham doesn't consider the important issue of sign, and that is important: physics really cares about the Minkowski metric and not the positive definite metric. So that we do do. But the issue of how far you can push Cutkosky is open, as far as I can tell. Thank you. Are there further questions for Marco at this stage? I was just wondering, on a very general level, about this idea... I mean, you have presented a formula for the Feynman integral in terms of cuts.
And then there is this whole idea of Landau singularities and expressing a monodromy or a discontinuity in terms of something with cuts. So I'm just somehow getting confused in my head. Somehow we have representations with delta-pluses also for the actual integral. So is this just reflecting relations in the homology of the integration cycles, that we have many different ways to get the same integrals and that we can play around? No, I mean, the appearance of the delta-pluses in Cutkosky's formula is maybe more mysterious. But in this loop-tree duality, this is simply the residue theorem applied to these energy integrations. That's actually a remark from the sixties already. And it goes even further than that, in a way, because you could do all the energy integrals, one per loop, just by contour integration. And as was observed back then, that's the same as doing them by delta-plus, because the contour integration picks up the pole and the delta-plus concentrates on the pole. And actually the positive-energy part works out by itself, just because the integrand, when you do it for the first time, is even under the exchange of k0 with minus k0. So it's a complete accident that this works, but it's a very useful accident. And then, because we are only cutting one edge per loop, we are never cutting the graph into two parts; it's still connected by the remaining edges, because the complement of these edges is precisely the spanning tree. So it's not yet Cutkosky. So somehow it seems that the people in the 60s, they knew all these things, and we're just rediscovering them. Yeah, what makes me kind of hesitant is that there are only so many homology cycles you can write down with delta-pluses, right? I mean, for each edge you can put a delta-plus or not. And if I think of a very complicated graph, I have no reason to believe that the homology should be so small that I can really grab every cycle that I might need to compute a monodromy.
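The statement that the contour integration picks up the same pole that the delta-plus concentrates on can be checked on a single propagator. A standard one-propagator sketch (not from the slides), with E = sqrt of the spatial momentum squared plus the mass squared: closing the k0 contour in the lower half-plane picks up the pole at k0 ≈ E, and this reproduces exactly the delta-plus insertion.

```latex
% Contour integration: close below, pick up the pole at k^0 = E - i\epsilon/(2E):
\int_{-\infty}^{\infty} \frac{dk^0}{2\pi}\,
  \frac{1}{(k^0)^2 - E^2 + i\epsilon}
  \;=\; -\,\frac{i}{2E}\,.
% Delta-plus insertion: the same number, since
% \delta\!\left((k^0)^2 - E^2\right)\theta(k^0) = \frac{1}{2E}\,\delta(k^0 - E):
\int_{-\infty}^{\infty} \frac{dk^0}{2\pi}\,(-2\pi i)\,
  \delta\!\left((k^0)^2 - E^2\right)\theta(k^0)
  \;=\; -\,\frac{i}{2E}\,.
```

The theta function here is what selects the positive-energy pole, matching the remark above that the positive-energy part works out by itself because the remaining integrand is even under k0 going to minus k0.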
But let's discuss this individually. Thanks again to Marco for his nice talk.