All right, thanks for the introduction and thanks for the invitation to speak. And happy birthday to Dirk. As the ceremony celebrates 60 years of achievement, I'm pretty sure your parents thought you were achieving things at a very early age. I got to meet Dirk around 1998; his reputation had actually preceded him, since a friend of mine was his PhD student. We started talking at the IGS and kept talking through the years, and these interactions have been very important for me in my work. Some of the things I'm about to say wouldn't exist, or would exist in some other form, if I hadn't talked to Dirk.

All right, since this is going to be a little bit technical, I'll switch it up: I'll have pictures first and explain what I'm trying to make technical sense of. What usually happens is that after I've made technical sense of it, it's pretty long and very correct and very mathematical, but maybe the pictures and examples will provide a guide through the categorical structure I'm going to introduce. The main character will be Feynman categories; I'll define what that is. But what I want to stress is, actually, can you see my cursor? I never know if that's true or not. (I can see it, I think it's good.) Yes, okay, so then I can do this. The point is that these things are morphisms, and if you know something about categories, morphisms start out being sets, but then, like in vector spaces, I can add morphisms: the morphisms between vector spaces are again vector spaces. So having more structure on the morphisms is actually very natural, and this is important once you start to sum morphisms, as you do in these Hopf algebra constructions.

The main objective is a deeper understanding of the theoretical underpinnings. Something I've started to do recently is that once you have that, you can actually do calculations, and I'll have some examples of that. It also gives you unexpected links between algebra, geometry and physics. One larger point: if you want to go towards algebraic geometry, like in the last talk, you see that you'll have functors and a six functor formalism, so if you want to go that way, it looks a little bit like sheaves.

All right, so how do I get from one thing to the other? Very much as we hear in this conference, you want to go from combinatorics to algebra. The point is that either you can look at the morphisms, and they will give you colored or partial algebras, and then you get Hopf algebras and bialgebras, as we've been discussing. The other thing you can do is look at representations, and I'll give a very basic example of this a little bit later on. In general these will be functors into a target category, and immediately there are several possibilities: the target category can be combinatorial again, or algebraic, so linear, where you can have sums, or geometric, so spaces: topological spaces, moduli spaces and things like that. And then there's an interaction between these levels, where you can take these representations and put them back onto the category; that is called enriching. Then, just as I said before for vector spaces, you can now have spaces of morphisms, dg morphisms and so on. And if you stay in this framework, all the universal constructions remain true. All right, so then what else can we do?
So there is something called the Grothendieck construction, which tells you that you can decorate things and make more examples out of examples. What I will talk about is one way to get topological spaces: you do a W construction, a name that's well known in topology, and you get cubical complexes. If you apply this to the combinatorial structure of graphs, you end up with moduli space, and I found that one of the most amazing things in this whole story. I will talk about this later; it is also joint work with Clemens Berger.

Then, and I'll talk a little bit about this, there's a plus construction and there are certain hierarchies, where you start out with something simple, just an object. You apply this and you get something that is simplicial; we've seen that here a little bit, and it will reappear. Then you get to planar rooted trees, which are basic to one of the Hopf algebras of Connes-Kreimer. You could also do something a little bit more difficult, and then you would end up in crossed simplicial sets, and sometimes these are known; one flavor are the non-commutative sets. And then you get to rooted trees, so the symmetric guys. You could stay in the planar setting, and we've discussed that, or you could just start at graphs and get structures for those.

All right, so here are the promised pictures. What is the basic idea that a physicist can maybe relate to? We have a Feynman diagram; this is in phi-cubed theory. The idea is that I want to use this as a morphism. So what should the morphism be? It should go from a source to a target. What is the target? The target is basically obtained by contracting all the inner edges, and you end up with the vertex which just has the three external legs. And what should the source be? The source should be all the vertices, all the local structure, so I just break all the edges: that's my source. And then, if this is a morphism and I have a category, and that's the important thing about categories, I can compose morphisms. Here's a decomposition of the morphism, and already you see the story appearing if you're familiar with it, because you see that this piece of information here is a subgraph; it's this subgraph. If I contract it, then three of these vertices, U, V and W, just merge into one vertex, which I give a new name R. Then I can put these vertices onto a new graph and contract that. The theorem, which comes later, is that this is a natural structure; it just appears without me doing very much. And the factorization actually inserts this graph into this graph.

There's a little bit more, which I wrote as a note: you have to be careful if you want this to be an actual category. I was a little bit glib about just marking the vertices; I'll have an example later on. You have to mark all the flags; you have to, for instance, say that U, V and W actually are mapped to this vertex R. If you look at this example long enough, it actually reveals all the features that you need for a strict definition. And this is what I already said, the composition of morphisms, which goes like this: it is insertion of graphs into graphs. So I'm inserting this graph into the vertex R and I obtain this graph. And if I contract the subgraph, I get a factorization, because I can read off the subgraph: this will give me this morphism, and if I contract the subgraph, I get this one.
So this is also very much like in the last talk: looking at sub-objects and quotient objects. All right. The subtlety that comes with the marking is that you have to get the isomorphisms and automorphisms right, and you all know that there are all these factors with automorphisms that you have to take into account. So what you actually have is that these stars, these vertices, are vertices with automorphism groups, so we're looking at a groupoid. And one of the subtleties is that we actually get multiplicities coming out in these Hopf algebra structures.

The second part of the title is cubical complexes, and again I'll start with a picture. So here's a picture of what that may be. Let's say this is a rooted tree, but it actually doesn't even have to be rooted. I have edges, and I put labels on the edges which are parameters from zero to one, so S and T live in an interval, and this is an actual square. Then I can send S and T separately to one or zero. And what do I do? There are actually two things I can do. If I send something to zero, I just contract the edge and I multiply the labels. So if I send T to zero, I go over here and I multiply B and C. If I send T to one, there are two things I can do, and this actually comes out of talking to Dirk. Prior to talking to Dirk, I would have just marked that as one and called that a frozen edge. After talking to Dirk and looking at these Cutkosky rules, I could also just forget this edge: what's marked blue, I can forget, or cut that edge. And then you can go around and see what's going on. If I go up here, I kind of have two frozen edges, or two cut edges, and here I just multiply everything together. And if that looks familiar, that's because, as I'll discuss later, there are several other ways to look at this, for instance the bar complex.

So now let me just very briefly say what a Feynman category is. I said you have these basic objects; they will form a groupoid. Think of the local vertices. The category itself will be a symmetric monoidal category, so it has a tensor product, which for graphs is just the disjoint union. I have an inclusion, which gives me the basic objects. Then I need some notation: if I have this V, I can look at the free symmetric monoidal category on it. These are just words: words of objects and words of isomorphisms, and any isomorphism is a word of isomorphisms. And then I include this here, and that's already the basic structure. So I call such a triple a Feynman category if this inclusion on the word level is an equivalence of categories; that means basically every object is a tensor product of basic objects. For the graphs, I'm looking at aggregates of stars. And I also require an equivalence of symmetric monoidal categories on the morphism level, which I'll explain in one second. This is the main axiom; this is what will make all the constructions work. So it's not just any monoidal category: there is something special going on here, and this is sometimes called the hereditary condition. This is the thing that makes things work. The last one is technical; I just put it on here because it's needed so that some computations actually work.
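To fix notation, here is a hedged summary of the definition just sketched; the comma-category formulation below is how I would write the axioms, so the precise details are an assumption on my part.

```latex
% A Feynman category: a triple (V, F, iota) with V a groupoid (the basic objects, the stars),
% F a symmetric monoidal category, and iota: V -> F an inclusion, such that
\begin{aligned}
&\text{(i)}   && \iota^{\otimes}\colon V^{\otimes} \xrightarrow{\ \simeq\ } \mathrm{Iso}(F)
              && \text{every object is a tensor product of basic objects,}\\
&\text{(ii)}  && \iota^{\otimes}\colon \mathrm{Iso}(F\downarrow V)^{\otimes} \xrightarrow{\ \simeq\ } \mathrm{Iso}(F\downarrow F)
              && \text{the hereditary condition on morphisms,}\\
&\text{(iii)} && (F\downarrow *)\ \text{essentially small for each } * \in V
              && \text{the technical size condition.}
\end{aligned}
```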
So what's the basic consequence? Let me try this; this is the first time I've tried this live. What this is saying is that if I have any morphism, say X mapping to Y, then by the first axiom I can write Y as a tensor product of basic objects, and I'll write star V to remind you of the vertices of the graph. What this axiom then says is that if this is a phi, I can complete it by looking at, roughly, fibers of this morphism: there's another isomorphism here to this guy, and a map here which is a tensor product of phi V, and each of these phi V is a morphism from X V to a star. So, going back one slide, this says exactly what's written there: I can decompose any morphism into morphisms which go from a more complex object to a simple object, so from many stars to one star.

So what does this mean for physics, or for graphs? First of all, to make this rigorous, you can work in the so-called Borisov-Manin category of graphs, and then we restrict to those graphs which are just aggregates of corollas, and we get graphs back as the underlying graphs of morphisms. These three lines just explain that there is an actual, very strict categorical setup in which the pictures I'm drawing make strict sense. In physics, the way to think about it is that the basic objects are the vertices of the theory and the morphisms are the possible Feynman graphs; that's why I use phi-cubed. And of course, in a physical theory, you can embed one graph into another graph and get another graph of the theory, or you can read off, pull out, these graphs; that this is all you can do is clear from the expansion in Feynman diagrams. And this is of course the main thing that underlies the Hopf algebra. One interesting question might be: what are these morphisms if I map to a single vertex? That just means I have any graph lying over this, and these are usually the terms in the S matrix. And now you know why I don't have handwritten notes.

All right, so let me kill those. Here's again the same diagram of composing two things. Basically I will compose two morphisms in this category of graphs. They will have underlying graphs, and I use this double bar to denote that they're the ghost graphs. They are actual graphs, but they do not characterize the morphism completely. What they do characterize, and this is important, is the isomorphism class. So looking just at isomorphism classes, this diagram turns into this diagram, and what I see is the factorization of one graph into subgraphs and the quotient graph.

So what are the basic morphisms in this category of graphs? Okay, these are the basic morphisms in one sense, and for a large class of examples the basic ones are just connected graphs, and then these will be disconnected graphs. But for these graph morphisms, if I contract the subgraph, I can do it one edge at a time or one loop at a time, so even these morphisms can be decomposed into elementary morphisms, which perhaps should be called the basic ones. The elementary ones are: you have two vertices, you join them by an edge and contract the edge; you have two flags or legs of a vertex, you connect them by a loop and then contract it; and the last one, which is important for making non-connected things connected, is that you can just merge them together.
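In formulas, the decomposition described at the start of this passage reads as follows; this is a hedged restatement in the blackboard notation above (the X_v and phi_v are the names used on the slide).

```latex
% Hereditary decomposition of a morphism phi: X -> Y,
% after writing the target as a tensor product of basic objects:
Y \;\cong\; \bigotimes_{v\in I} *_{v},
\qquad
X \;\cong\; \bigotimes_{v\in I} X_{v},
\qquad
\phi \;\cong\; \bigotimes_{v\in I} \phi_{v}
\quad\text{with}\quad
\phi_{v}\colon X_{v}\longrightarrow *_{v}.
```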
So this has at least three applications. One is something we saw in Matthew Hyder's talk: this is getting props to work, taking two things and putting them together. The other one is geometric: this is actually an incarnation of the connected sum. I have two things which are not connected, and I put them together to get the connected sum. And the third: these are important for writing down BV master equations. But for now let's just note that they're good and then forget them for a while.

All right, so we move on to the next thing: what are representations? Representations are functors, and I can take the functors on the Feynman category itself (don't worry, I'll have a concrete example in a second) and look at just the restriction to the elementary objects. Those things I will call ops and mods, and there's a reason for that. There is a trivial functor; mathematically this is saying almost nothing. I always have a monoidal functor where I just send everything to the monoidal unit: if it's set-theoretical, think of mapping everything to a point; if you're looking at K-vector spaces, just map everything to K.

All right, and then how can I think about this? If I have a graphical category, I can think, as a physicist, of these representations of a graph category as giving you Feynman rules. So first I have to fix the target category. I'll be in a simple setting here: I just have a vector space, a vector space of fields, and I give a quadratic form, whose inverse gives me the propagators. And then what does the functor do? For each basic object (remember, just the one-vertex graph, which I'll call star S, where S are the legs of the graph) this should give me a morphism out of my vector space. So I'm associating the following thing to it: I pick a vector space W and, defining this functor O, I associate to each of these guys the vector space W tensored over S. Then the morphisms of the category are given by graphs, and if I want to know what to do with a graph, that should give me a morphism of these vector spaces; what I do is just contract the tensors with the propagators along the edges.

Let me write a very quick example. If I start out with just two vertices like this, with legs one, two, three, four, I can map that over to one, two, three, four, and, calling the extra legs five and six, what I did is put together five and six and contract. What is the operation I get? I want an operation from W tensor four to K, and I just take the sum over i and j of Y(phi_1, phi_2, phi_i) G^{ij} Y(phi_j, phi_3, phi_4). So I'm just contracting in this place and this place, and these are exactly the places indicated by these edges. It's a straightforward thing, and you can check that this gives you a nice functor. So this is functorial, and in this way you can think about Feynman rules. What I'm doing here is always mapping to K, and if I'm in an algebraic situation and the form is nondegenerate, that's good enough: I can dualize.
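Here is a minimal numerical sketch of the contraction just described, assuming we pick a basis of the space of fields W (so tensors become arrays) and treat the value on a three-legged corolla as a trilinear form on W; the names Y, G, d and the random data are illustrative choices, not from the talk.

```python
import numpy as np

d = 3                                 # dimension of the space of fields W (illustrative)
rng = np.random.default_rng(0)

Y = rng.standard_normal((d, d, d))    # vertex tensor: a trilinear form on W
Q = rng.standard_normal((d, d))
Q = Q + Q.T + d * np.eye(d)           # a symmetric quadratic form (generically nondegenerate)
G = np.linalg.inv(Q)                  # propagator = inverse of the quadratic form

# Graph morphism: glue leg 5 of the first corolla to leg 6 of the second corolla
# and contract the resulting edge.  The result is a 4-linear form on W:
#   O(gamma)(phi1,...,phi4) = sum_{i,j} Y(phi1, phi2, e_i) G^{ij} Y(e_j, phi3, phi4)
O_gamma = np.einsum('abi,ij,jcd->abcd', Y, G, Y)

# Evaluate on four field vectors phi1,...,phi4:
phis = rng.standard_normal((4, d))
value = np.einsum('abcd,a,b,c,d->', O_gamma, *phis)
print(O_gamma.shape, value)
```

The einsum pattern 'abi,ij,jcd->abcd' is exactly the statement that the two vertex tensors are glued along the internal edge by one propagator.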
Otherwise you have to be a little bit smarter, and that's of course what many of these things are about: being a little bit smarter. This is just a slide saying, okay, if I have these graphs, as I said, you can decorate them; operadic people will know this, non-operadic people not so much. I just want to highlight that I did talk about props, and Martin Heierach's talk was actually about wheeled props. So even if you don't know this whole zoo, these things do appear naturally as something important. For me, the modular operads and cyclic operads will play a large role, because modular operads are related to moduli spaces of curves.

All right, so then how do I get my general constructions going? Categorically, if I have functors, what I can look at is adjoint pairs. What actually happens if I have a functor between two Feynman categories, so maybe I should write this here: if I have an f going between one Feynman category F and another Feynman category F prime, is that I can push forward and pull back modules between F and F prime. And this feeds into the computational side: category theory actually tells me how to compute these things, namely the push forward and the pull back. And if you care about the six functor formalism, you can do other things, where you have to take a right Kan extension as well.

But let me make this a little bit more concrete. Let's see what happens in the most trivial case possible, where I just have one object in V and its identity. Let me start with the monoidal category: the free monoidal category is just words in the single letter 1, and if I repeat 1 n times, all I'm getting is the number n, so I'm getting the natural numbers as objects. If I do the free symmetric monoidal category, then the object 1 tensor n, which is 1, 1, 1, 1, carries an action by permutation: I can permute the 1's back and forth. So this will have an action of S_n, and this is a typical example of a groupoid. Another way to imagine this, to make contact with what we had before, is to think about the star whose legs are numbered from one to n. And then the V-modules are simply objects of C, because I just get to map the one object to something over there, and since everything is free symmetric, nothing more happens.

Now, if I want to extend that, I can just take the free construction and say this V tensor is F, which is what I was just saying, and what I get is just the representations of V, namely the groupoid representations: my objects with the morphisms in V. A special case is if I look at the category with one object where the morphisms are just the elements of a group; now the representations are really group representations. And if you go through the calculations, what you see is that if you have two groups, you have a functor of Feynman categories: you just send the morphisms along the group homomorphism and the only object to the only object. Then pullback is restriction, and push forward, which you can compute, is induction. The adjointness is known in representation theory as Frobenius reciprocity. So what the general theorem says is that this is true for all Feynman categories: if you find a functor between Feynman categories, you have restriction and you have induction.
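As a hedged way of writing the adjunction being described (the notation f_! and f^* is my choice, not the speaker's):

```latex
% For a functor f: F -> F' between Feynman categories, pullback f^* (restriction)
% has a left adjoint f_! (push forward, computed as a left Kan extension):
\mathrm{Hom}\big(f_{!}\,\mathcal{O},\ \mathcal{P}\big)
\;\cong\;
\mathrm{Hom}\big(\mathcal{O},\ f^{*}\mathcal{P}\big).

% In the one-object case, where the morphisms are group elements, this specializes to
\mathrm{Hom}_{G}\big(\mathrm{Ind}_{H}^{G} M,\ N\big)
\;\cong\;
\mathrm{Hom}_{H}\big(M,\ \mathrm{Res}^{G}_{H} N\big),
\qquad\text{i.e. Frobenius reciprocity.}
```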
This is going to be important later on. Maybe I'll skip this; looking at the clock, I'll skip it and directly make these comments. There is this layer where you can look at very combinatorial things, and this is what the slide would be about: if you just look at surjections, what you get out, and you can compute this, this is again a computation, is commutative algebras as representations. If you do the non-symmetric analog, you get associative algebras. If you actually look at all finite sets, you get unital commutative algebras. You can also look at the other part: instead of surjections you can look at injections, and then you're in the realm of FI-modules, which are popular now in representation stability and were introduced by Church, Ellenberg and Farb. If you add symmetries to the non-symmetric part, that's where the crossed simplicial groups come in, and this is also where the non-commutative sets come in. And I have a distinct feeling that this is very much related to Kurusch's talk, since non-commutative sets are maps where you have orders on the fibers.

All right, so getting to one of the main actors, namely the Hopf algebras of Connes and Kreimer, or the generalization of those. This is work that's just been published this year; unfortunately very many pages, or fortunately and unfortunately, because making these things strict is a little bit difficult, but the general idea is easy to state. What was the main point? The main point was that in a category I can compose, so I can also decompose, and so naturally I can write down a coproduct. Now, you see, I have to enrich, or take the free abelian group, and the coproduct of a morphism is over all its decompositions. You already saw in the basic example I gave that the decomposition is exactly this subgraph and cograph; if you want to think about it that way, that's perfectly fine. And this generalizes to any category which is decomposition finite, where this sum is finite.

Now you can ask yourself the following question: if it's a monoidal category, I also have a multiplication, so I get an algebra and a coalgebra structure, and the question is whether this is a bialgebra structure. That turns out to be a little bit subtle, and the answer is basically yes, but you first have to go to isomorphism classes. So you call two morphisms isomorphic if they're related by isomorphisms; you saw that implicitly in the last talk, because there the functions were defined on isomorphism classes, so that actually factors exactly through this quotient. Then the main theorem is that if you started with a Feynman category, indeed the bialgebra equation holds and you have a bialgebra. This bialgebra is usually not connected, and if you want the usual connected things you're used to, you have to take a quotient. Under explicit, checkable assumptions there is a canonical quotient which is indeed a Hopf algebra.

In the non-Sigma case this is easier: if you don't have automorphisms of your objects, say you're looking at planar stuff, then already before going to isomorphism classes you have a bialgebra. And then the quotient is also very easy: you just quotient by the identity morphisms minus 1, where 1 is the identity of your monoidal unit, so it's the identity of the identity. And the reason is that if you decompose a morphism, you see exactly that you get the following decomposition for any morphism phi: you always have the identity.
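Written out, the coproduct and product just described look roughly as follows; a hedged restatement, with the sum taken over the decompositions mentioned above.

```latex
% On the free abelian group generated by the morphisms of a decomposition-finite category:
\Delta(\phi) \;=\; \sum_{(\phi_0,\phi_1)\,:\ \phi=\phi_1\circ\phi_0} \phi_0 \otimes \phi_1,
\qquad
\mu(\phi,\psi) \;=\; \phi \otimes_{F} \psi \quad\text{(the monoidal product of the category).}
% For graphs, phi_0 corresponds to the subgraph and phi_1 to the quotient (co)graph.
```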
So if I'm going from X to Y, I can always split by going through X, or I can split by just going through Y, the same morphism. So I'll get these two terms, and in Hopf algebraic terms you see that this element won't be primitive; it will be at most primitive relative to id_X and id_Y. And that's actually an interesting structure that my student is working on currently.

Now, the incredible thing to us when we computed these things is the following. I did something quite general, so now I can feed in my surjections and look at what I get, and I actually get out Goncharov's Hopf algebra for multiple zeta values. I can put in a certain enrichment of surjections, namely by leaf-labelled trees, and I get the Connes-Kreimer Hopf algebra of trees. I can do this for the category of graphs which I had before, and I get the Connes-Kreimer graph Hopf algebra. And this was maybe the most surprising: if I look at this example here and take care of signs, I actually get a Hopf algebra that Baues invented in a completely different setting, to study double loop spaces. So there is some topological structure hidden in these Hopf algebras.

And here are the associated pictures; I hope the resolution of your monitor is good, usually I project this onto something larger. This is the Hopf algebra structure you're used to: this is a rooted tree, this dotted line is an admissible cut, and whatever is on top falls off. You see that if I label everything, then I have to label all these cuts, and so I did that here. This is the planar version of doing this, and you see I can't just stick with labels from one to n; I have to label by arbitrary sets, but this works out nicely. Another thing I can do is forget the labels and then cut, so I have no labels anywhere. This could mean two things. Either I'm in the planar case, where I can just label one, two, three, four, five (oh, this one I might label the other way around), but in the planar structure there is an automatic labelling, so I can look at this picture in the planar case. Or I could say, well, I forgot the labels; mathematically that just means I took coinvariants, and then I look at the coinvariants of this thing and I'm in the non-planar case. So: fully labelled, planar, or non-planar. I learned this from a talk of Kurusch. Another thing you could do is say that this i is the same as that i; if I want to keep track of the labels this way, I have the automorphism group permuting these labels, and that will permute these factors, so I can look at the equivariant setting.

And now comes the all-important quotient. What happens here is that I have these legs, and we've seen in earlier talks the actual Connes-Kreimer algebra without leaves, without legs. So what I have to do is pull in all these legs. When I do that, everything is fine, except if I only have a leaf here, which I'm allowed to have: I can cut just through some leaves, and what happens there is that this goes to one. And that is exactly taking the quotient I had before, making these into the unit. So that's exactly what happens there.

All right, so that explains this thing, and just summing up the upshot: we can produce these Hopf algebras. We get the Connes-Kreimer algebra, Baues' algebra for double loop spaces, there's a non-commutative graded version, and we have the threefold hierarchy: non-commutative, planar, commutative. And then there's an amputated version.
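In formulas, and hedged, the splitting through identities and the quotient just mentioned look like this:

```latex
% Every morphism phi: X -> Y admits the two trivial decompositions through its source and target:
\Delta(\phi) \;=\; \mathrm{id}_X \otimes \phi \;+\; \phi \otimes \mathrm{id}_Y \;+\; (\text{other terms}),
% so phi is at best primitive relative to the identities.
% In the non-Sigma case the Hopf quotient sets all identities equal to the unit:
H \;=\; B \,\big/\, \big(\, \mathrm{id}_X - 1 \;:\; X \text{ an object} \,\big),
\qquad 1 = \mathrm{id}_{\text{monoidal unit}}.
```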
All right, and then the decorations, which I will discuss; I think I'll have some time for that, maybe not that much. So what do you do to make this structure more elaborate? You want to put labels on the vertices and on the edges of the graph, and there are two general constructions for Feynman categories which allow you to do that.

Then some more remarks. The reason the Baues and Goncharov algebras appear is that the simplices form an operad, so this is just a simplicial structure that you have. You can go from this to looking at cooperads with multiplication, and from this you can deform. When you look at these deformations, and this might be of independent interest, you directly see deformations in the sense of Gerstenhaber, and these deformations are q-deformations. So instead of taking the quotient directly, as I did, there is an intermediate quotient where you have a q-deformation, which basically sends each of these leaf labels to q; this cut, say, would give me a q squared. That's still graded, and it is interesting in its own right.

And the last remark I'll make, which also relates to many things we've seen here before and things that are going to come afterwards: remember, the coproduct just performs a factorization. If I do multiple factorizations, what I get is composable maps, which is exactly what sits in the nerve; that's actually also where these simplices come from. An iterated coproduct gives me exactly an element of the nerve.

Okay, so here is the slide that tells you that what you expect is true. Once you go to isomorphism classes, the morphisms are now actually given by their ghost graphs, so this factorization of the morphism is exactly looking at the subgraph and the cograph. And at this point let me mention one other quick thing, something for the future: you start seeing a comodule structure appearing, because I had these special elements. If I start factoring a morphism that goes to a special element, I get a general morphism and again a morphism that goes to a special element. So you see that these form a comodule over the Hopf algebra. This is, if you want, the core Hopf algebra; that's the beginning of decorating and looking at core Hopf algebras. The other thing I can do is invert this: if I have an element like this and a general morphism here going to Y, I can just combine them, and these will be exactly the B plus operators, which take an element of this special form and, if the colors are correct, that is, if it actually has the target Y, just compose it in. And one last thing: why is this really the B plus operator? Because, if you think about it, this gamma with the phi's was a disconnected thing, so in the Connes-Kreimer tree version this actually takes a forest and makes it into a tree, and in the more general setting you can plug in your primitive elements up here.
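For orientation, in the familiar Connes-Kreimer rooted-tree case the operator just described is the grafting operator B_+, and it satisfies the well-known cocycle identity; I state it here only as a reference point, not as the general Feynman-category statement.

```latex
% B_+ grafts a forest onto a new root; on the Hopf algebra of rooted trees it satisfies
\Delta\big(B_{+}(x)\big) \;=\; B_{+}(x)\otimes 1 \;+\; (\mathrm{id}\otimes B_{+})\,\Delta(x),
% i.e. B_+ is a Hochschild 1-cocycle for this coalgebra.
```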
All right, this is just a computation showing that you do get these multiplicities if you start labelling everything. If you start with this graph, I'm certainly not the first one to tell you what the coproduct of it is, and other people know this much better than I do. The only point I'm trying to make here is that it now appears naturally in the same formalism; I do not have to introduce another ad hoc formalism or anything, it's just my Feynman category of graphs from before. If I look at the factorizations, I will get two such factorizations, and that is because I can tell the difference between the edge 3-3' and the edge 2-2'. This gives me two different factorizations, and you see it's now important that I labelled everything, because this graph, as an abstract graph, just looks like any other such abstract graph, regardless of whether it's the 2 or the 3.

All right. Now this one is an explanation of a relation which is also nice. Again, I'm just giving details and pictures of how the general story applies to things you may know and love. If you like the Hopf algebra of Goncharov, you know that the way to write it down is with half circles and segmentations. And I said this has some sort of simplicial feature and is actually related to the Connes-Kreimer trees. Now we actually know exactly why and how it is related; Goncharov had some guesses about how these two things are related. The idea is you take this half circle and you put this tree in here; this I learned a long time ago from people working in the field. What the Hopf algebra then does is decompose these trees, cut these trees. And the dual picture is taking these half circles and taking the segments; Herbert Gangl, that's the name I was looking for, Gangl taught me this, and the other part I learned from Francis Brown in his lectures in Cambridge. So you see the interaction working, and I'm really happy about these things. So you see the segments here which cut this tree, and this is a duality. This is actually something very deep which goes back to Joyal: there is a duality between intervals, that is, double base-pointed objects with base-point-preserving maps, and the simplicial maps here. You can find a nice explanation of this in the papers of Batanin and collaborators, and that's where I learned it from.
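The duality being referred to is, as far as I understand it, the following statement; a hedged summary, with an indexing convention that is mine, not the speaker's.

```latex
% Joyal duality: the simplex category is anti-equivalent to the category of finite intervals,
% i.e. finite linearly ordered sets with distinct minimum and maximum,
% with maps preserving the order and both endpoints:
\Delta \;\simeq\; \mathrm{Int}^{\mathrm{op}},
\qquad
\mathrm{Hom}_{\Delta}\big([m],[n]\big) \;\cong\; \mathrm{Hom}_{\mathrm{Int}}\big([n+1],[m+1]\big).
```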
All right, so as I said, there are several things you can do with this stuff: you can decorate, you can enrich, and you can do this W construction. So maybe let me just say something about the decoration, because I will need it. The decoration says I can actually decorate vertices. For graphs, the decoration one would care about is a cyclic order. Why? Because this makes the graph into a ribbon graph. If I look at this graph, a star with four edges, one, two, three, four, I can give it this cyclic order or that cyclic order, so either one, two, three, four, or one, four, two, three, and depending on this I get different ribbon graphs, different graphs. Now, if I do these decorations (this again is just a general categorical slide which says I can mix this with my push forwards and pull backs and decorations, so let me skip that), let's apply them to a very nice situation. I can look at the following subcategory, where I just look at graphs which are trees, not rooted, just trees, inside all of the graphs, say where the basic morphisms are connected. Then I can decorate this in two ways. Either not at all, which is the trivial decoration, and I can push forward this decoration, which by general theory gives me a way to look at this category with the decorations. This then has a cover where I resolve these decorations, and what I get out is actually something that's known: this is the category for modular operads, so this is graphs with genus markings. And now this is a computation that's important; I'm not defining something, I'm computing something. Same thing if I take the cyclic decoration, which I just told you gives these ribbon graphs; well, not quite: what happens is that I take trees with cyclic orders, and then I can unfold the tree into the plane, so I get planar trees. This is what's called the non-Sigma cyclic category. Then this general diagram says I can also look at the relationship up here: down here I can push forward this decoration and go up here, and I get something new, a nice category, the category for non-Sigma modular operads. This has meaning for many things. And actually, maybe I'll say this: this also says something about fibers of morphisms, so that I don't have infinite chains if I control this genus g.

So what are the nice things? We get back these moduli spaces. If you push this forward, which is now a calculation, you get types of open surfaces; and for this, thanks go out to Karen Yeats. So now you can compute this. The very succinct way of computing it is to use combinatorial knowledge, namely that the graph of spanning trees is connected. Doing this actually allows you to calculate the push forward, which in categorical terms is a colimit, but it is now just something you can sit down and calculate; and it's a calculation which works nicely with chord diagrams, if you wish, or some other format of that type. And what you get out is exactly what you expected.

This then immediately applies to something also somewhat combinatorial, so this is this relation between combinatorics and algebra. If I just do the little game that I did before with my correlation functions, you can figure out that if I just look at trees, or cyclic trees, I'm just looking at surfaces, and what I get is 1+1 dimensional TFTs. And there's a nice theorem saying that if I algebraize this, it's the same thing as looking at a Frobenius algebra. Then, going up here, I get an open TFT, and it is known that this is an open/closed 1+1 dimensional TFT. So something happened here: I can compute this stuff combinatorially, then do the algebra, and I get a nice theorem about these spaces of fields.

All right, and now I want to do this in a higher version: there the spaces of fields were just algebras, but now I want to get actual spaces. What lets me do that is this so-called W construction. There are technical details; I need something which is true for graphs, namely a commutation relation: if I contract two edges, I can do it in either order, with a slight subtlety because I get two different intermediate graphs. Then I can throw on the categorical machine again and compute a colimit. But what this does, in a non-technical version, is the following: for each graph with n edges, I glue on one n-cube. And just as I had in the beginning, I have two boundary maps, one where the parameter goes to zero, which means I contract, and one where it goes to one, which means I mark. And as I said before, marking means either freezing or deleting.
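A hedged sketch of what this cubical W construction assembles, in the spirit of the description above; the gluing relations are only indicated, not spelled out.

```latex
% To each graph G (the ghost graph of a morphism) assign the cube of its edge parameters,
% and glue these cubes along their faces:
W \;=\; \Big( \coprod_{G}\, [0,1]^{E(G)} \Big) \Big/ \sim,
\qquad
t_e \to 0 \ \leadsto\ \text{contract the edge } e,
\qquad
t_e \to 1 \ \leadsto\ \text{mark (freeze or delete) the edge } e.
```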
And then I get a complex by gluing along these faces. And again, this is interaction with Dirk, and again, thanks to you, Dirk: I know that this has something to do with Cutkosky rules, where I do this, and there is ongoing research thinking about what else one can get for these Cutkosky rules. So here's the picture again that I had in the beginning and already discussed; now we know exactly what it means. This has two edges, so I get a two-cube; this has one edge, so I get a one-cube; and the boundaries are as explained.

Now, where is the simplicial structure? I think I started five minutes late, so if I'm not cut off directly, I'll spend the five minutes. Here's the other thing to see, that this is actually a simplicial structure: you see these sequences of morphisms, and you see I'm contracting and splitting sequences of morphisms. So this relates to the simplicial structure of the nerve. And if you know what the bar complex is, you see that something is happening in the bar complex as well, because I'm just removing bars, which multiplies these A's and B's as markings.

Here's another picture where I apply this to something more interesting, and now I'm blowing things up: I'm looking at trees and associating cubes to trees, and it's well known, it probably goes back to Boardman and Vogt, that if I do this for rooted trees, I get associahedra. This plays a role for the cyclic Deligne conjecture, which was in my abstract. So this is a picture for the cyclic Deligne conjecture, and there we were marking with one as a frozen variable. In general, as I said, what you do is take a sequence and associate parameters t_1 up to t_n to these arrows; if one is zero, you contract, and if one is one, you freeze. But that's too technical.

And now I can apply that to the diagram I had before. So remember, here I'm just including (planar, sorry) here I'm just including trees into all graphs, and up here I make them planar. This is already interesting: up here it's not planar graphs, that's wrong, and it's not ribbon graphs, that's wrong either. What they are is almost ribbon graphs, and their vertex types are exactly given by these surface types, which is interesting in its own right. And so in work with Clemens Berger we computed these things. So what can you do? You can make this topological and push it over, or you can go over and make it topological. If you push it forward and make it topological, you get metric almost ribbon graphs and you get back the planar-cron-savage multiplication. What you actually get is something contractible, and this is where these cones come in that we've seen in several talks, and also last time in the comment section. When you have this cone with the full mass, what you're doing is exactly this type of thing: you have a cell and you're coning it off with one point, and there is a maximal cone point for everything, which makes this contractible. But if you throw out the cone point, you actually get the base simplex back, and this is compact. If you do it upstairs, that is, make it topological and then push it over, then something magical happens: you immediately get the moduli spaces of curves. And that was a slight lie: you don't immediately get them, but you get something that you can contract these moduli spaces onto.
So it's a strong deformation retract, and you can do this even on the cellular level. There is an algebraic version of this, but I won't have time. And now I'll close with a few nice pictures. Here is the picture of getting the moduli space, the combinatorial moduli space: you just take these nice graphs, so the theta graph and these bouquets of cycles, and you get the usual picture of your triangle, and now you see I contract one thing, and that has a boundary, and that continues on this way. So this is what I get if I do the construction downstairs; if I do it upstairs and push forward, I only get this, and this is a cubical complex. Notice this one is not cubical; this one is cubical, it just has one-cubes, and in higher genus there would be higher cubes. And how do these fit together? They fit together nicely: there is a contraction, which we can give, again with Clemens Berger, where you can push this onto the spine here and contract it. Such constructions are known from Culler-Vogtmann and Igusa and so on, but we have a very nice new combinatorial way of just writing it down, very simple, very straightforward: you can see exactly what's going on, it's a linear thing, and we can describe it. So again, these are not defined by hand.

So, last slide here. What you can do then, and this is work with Javier Zúñiga (I wanted to say something new that maybe I haven't said, so I shouldn't forget this), is to start truncating these things, and that's actually also what we did in the Deligne conjecture version. Once you truncate these things, you start blowing up these cells, and you can get a full blow-up of this complex, and then you can blow this down to the Kimura-Stasheff-Voronov compactification and to the Deligne-Mumford compactification. So this guy is the one that appears in string field theory by Zwiebach, this is the one that algebraic geometers care about, and this one is the combinatorial one I already discussed. And I'll leave you with a nice picture of a blow-up, or rather a relative blow-up. This is my simplex, and then I can do stages of blow-ups. You see, this is not a cube, but it has this cubical face; it's a simplex cross a one-cube. I glue that onto the faces, and then I glue these cubical things onto the lines, and that's already it. So this is actually one simplex and two cubes, and then I'm done. And what I get out is this wonderful picture, and that is one of the nice polytopes: it's a cyclohedron. All right, so I'll stop with that.

All right, are there questions for Ralph?

I have a quick question regarding graph complexes. I mean, there are many different graph complexes. You mentioned ribbon graphs, but, like in the Kontsevich world, the Kontsevich commutative graph complexes: where do they fit in this point of view?

So that's in the paper; I can give you the reference, so that has been discussed. What it is, is very similar to the story that you had for the external legs of your graph in the Connes-Kreimer tree picture. There's a general story, and there's another colimit you can take; this makes it a universal construction, and then you immediately get out these graph complexes of Kontsevich and what Willwacher is doing with them.

So do you have to pick a particular Feynman category and do this construction to get the different graph complexes?

Right, exactly. So you can then, this is this bit with the decoration.
So the Feynman category needs to be a little bit special, so that it allows you to actually contract these legs, but the graphical Feynman categories are nice. And then you can decorate them with all kinds of things you want, for instance these cyclic orders, or you can make the edges odd to get, you know, fermionic things. And I could have mentioned, I didn't do this, that I just had a cyclic order, but I can also make it anti-cyclic, and then you get these different graph complexes, like in Kontsevich's setting, where you think about exchanging two legs and they are either symmetric or anti-symmetric, to get a sign. Yeah, so yes.

Thank you.

Right, there's a question in the chat from Jonathan, who doesn't have a mic, so he's just written it. He says: your abstract mentions cones and simplices, sorry, cubes and simplices. What about cones? Are they important?

Right, the cones. They actually appeared here, so the cones are important. The thing is that if you do this construction here, then you see here's the cone. What happens is that the W construction is actually a resolution, which means that if you started out with something that was a point, you get something that's contractible, and the cone is contractible. The funny thing, though, is that if you remove the cone point, it doesn't need to be contractible anymore, because it retracts to the base. And this is where the cones appear. And then somehow I had this feeling, because just previously somebody was asking how you dissect these things, these simplices, and then in the talk of, I'm going to butcher the name, so I'd better look it up, the talk we had on the geometric decomposition of these integrals, you also have this cone, and then you dissect with these right angles, you dissect the base of this cone and then you look at that. So I have a feeling that this kind of picture fits in perfectly with that.

He follows up by asking if you have an understanding of the product of two cones.

The product of two cones, I haven't thought about that, but probably yes, because this cone comes in from a push forward here that automatically makes this cone happen, and I can see that. So there must be some interesting truncation going on there, because you get the cone here. Let me go to the graphs and say this here: in the ribbon graph you have these parameters, say A, D, B here, and the cone comes about because you can set all these things to zero. So you know what the cone point is. But there's only one cone point, because there's only one zero graph; there's only one zero, that's what everything contracts to. So I could venture a guess at what happens there.

All right, Megan has a question. I think she should be able to unmute herself now. Okay, or not. How about Alex Takeda?

Hi, can you hear me? Yes. Okay, yeah. So you mentioned quickly before this push forward and the six-functor formalism. Those are very interesting. I just wanted to ask if there are any examples I would have known of push forwards: like, if I give you an infinity algebra, can I push it forward to something over, like, a modular operad, or something of this sort?

Yes, you can. So the point was that what I computed was the modular envelope, I guess you know what that means, of an actual algebra.
That's, what was this? That's exactly it; I'm looking for something in particular, hold on. Right, so I was looking at this. The point is that I was just looking at algebras, which is related to this TFT business. But instead of just looking at cyclic associative, I could take an infinity version of that and then push forward the infinity version, so I can resolve here. Then I would get A-infinity algebras, and you can play the same game with A-infinity algebras and do the push forward. And yeah, that can be done. Yes.

I see. Because I would imagine that, for instance, in the infinity case, to push it forward you would need an infinity algebra and something like a pairing. So does this give you...

Right, I'm sorry, I should have said cyclic A-infinity, yeah.

I see. So, okay, then I already have the information of the pairing. Right. Okay.

There is a diagram which I don't have; I mean, I could add here the one that comes just for trees. And that has a relation, so you can take the free thing with that too. Or, part of the problem is something that I said before: the question is whether your infinity thing has a nondegenerate form or not. Without a nondegenerate form, without any kind of integral, I don't think you can do much, right? I mean, you can put it onto graphs, but you have to be able to contract indices, right?

I see. So you might get something like a sum over all possibilities.

Right. And then it's either going to be trivial, or free, or however you want to say it, if you don't put that information in.

I see. Okay, thank you.

But maybe one last thing on that: if you do trivial or free, one thing you can always do, and we've seen that here, if you don't want it to be trivial or free, is to just put a q there and count the number of vertices, the number of edges.

I see, so you can do some genus, or maybe the number of vertices.

Right, exactly, something like that. Great. Okay.

All right, time is getting on. I imagine people have to get on to the next thing in their lives, so let's have one final question from David.

A wonderful mixture of the very concrete and specific as well as the categorically abstract. Can you just remind me what your ghost graphs have lost? When you start with the very concrete things, what have you thrown away, and what do they still know?

Right, so that I can tell you. Let me just try; I actually prepared a slide with nothing on it, so I can write on it. If I were smarter, I could just click there. So what happens there? Let me do exactly the example I had before; look at this example, where I glue this together: I just drew a two-vertex thing with this graph, right? If I just have this ghost graph here, this guy, it keeps some information, but only the abstract graph. What I'm drawing now is what the morphism knows that the ghost graph doesn't know. I have two vertices here, and the ghost graph will not know which one is which, but the morphism will. So I need to identify this vertex and that vertex, and maybe, actually, maybe make two legs here. And then the other thing it forgets is, you know, that I have a symmetry on these two guys, which would be the symmetry exchanging these two things.
And so the ghost graph doesn't remember which one is which, but the actual morphism will remember which one is which of these. Okay, thank you. All right, let's thank Ralph for his talk. Thank you.