Thank you very much. First important question: you can see the slides, hopefully? Okay, yes, we can see. So, can I start, or should I wait for a minute? No, it is okay, you can begin now if you wish. So, first of all, thank you very much, John, and also the organizers, for this invitation. It's always very nice to be at CAP, this time virtually, but I hope at least this way some more people can participate. The talk I would like to give today is about a topic which I think is quite in the spirit of CAP, because it is about the type of research that tries to make sense of the deeper mathematical links between approaches to the manipulation of graph-like structures in computer science, mathematical physics and mathematics. Each of these disciplines has many examples of situations where it is meaningful to look at these types of manipulations. The very simplest one is drawn here at the bottom: manipulating indistinguishable particles, as in physics; this would be a model of chemistry, for example. In social network science, you look at processes which can be formalized as locally manipulating a network graph, like the rewiring of an edge shown here. But then there are also many more baroque examples: trees, for instance, can be seen as a type of graph with additional structure, like particular incidences. My personal motivation really comes from organic chemistry and biochemistry, where you encounter these beautiful theories in which, for example, you formalize proteins and other molecules through these types of rewriting steps. And part of my motivation for this work is that I am originally a mathematical physicist, but I am now working in computer science.
And I discovered that there is relatively little cross-fertilization between these three disciplines that would allow you, for example, to take continuous-time Markov chain theory and directly apply it to one of those organic chemistry or biochemistry simulation techniques. And of course, one of the main motivations is to ask: in these types of very complex systems, where can you attack with combinatorics? What are the combinatorial structures, beyond the data type of these graphical structures, that one could use to analyze the systems? I am putting this here because, at least for bio- and organic chemistry, these are not just abstractions made so that one can at least talk about toy models of these systems; they are really the state-of-the-art techniques for simulating them. Kappa is a framework built on so-called site graphs, a special type of graph where every vertex carries sites at which it can link to other vertices. And in organic chemistry it is even more fantastic: the molecules you draw by hand for chemical reactions are literally a data type on which you can formulate transformations. So these are the two main frameworks. I will say a little bit more about this later on; it so happens that there is a fully formal semantics for these types of theories, and that is a plus, because it is a very mathematical formulation. The problem is that if you then look at a realistic-scale example, and here is a little piece of the human metabolic network, every part of this huge graph has enormous complexity, and very many different types of molecules participate in these reactions. And here only the macromolecules, the larger molecules, are drawn.
And then you have enormously many reactions, and all of these transitions, these manipulations, are fired at random. So what can you really say about such systems? Especially in biology, which is still a very fast-evolving field, it is nice that you have this interesting data type, which is quite good at encapsulating new knowledge about individual transitions. But you can see immediately that, since each of these green blobs, these agents, typically has on the order of seven to ten sites, this is an extremely highly combinatorial data structure, and once you start manipulating it, it is very unclear what information you can actually extract. So the main problem is to understand the function of such systems as nature has given them; this abstraction is very accurate, so what can you say about the function of the systems? To motivate how one could approach this, I am going to take an example of a transformation system which is of course much simpler than these biochemical reaction systems, but complicated enough to explain the main route of attack. So here is a system where you have an input state, which is just some graph, on which we perform transformations. The language for these transformations always looks similar: you have little pieces of evolution, if you will; each of them asks for some input, drawn at the bottom in this little cartoon, and then produces some output. The input here is two vertices; the dashed lines indicate that they are kept identically throughout the transformation; and then you link them up with an edge. That is this local manipulation. The second part of this semantics is that to apply such a rule, you have to exhibit a match, and a match is nothing but an embedding of the input motif into the graph.
You see immediately that, even though I have drawn a very small graph and the rule is sufficiently simple, you already have quite a high number of possible matches. So part of this encoding is that it is usually quite simple to write the rules, but there are usually very many ways of applying them. Applying the rule then amounts to locally running this transformation, which here means inserting an edge between these two vertices. Let's do a few more of these steps: exactly the same rule, applied at a different place. Of course, I am only showing some very simple rules; you could also have one which unlinks, for example, like this one. And let's take another one to link, and another one. Okay, so now I have given you a small sequence of a transformation system. And very nicely, not only this particular case but an enormous variety of graph-like structures and their manipulations are all covered by a mathematical theory called categorical rewriting. Nicely enough, it is very close to implementations: it is something you can put very directly into algorithms, so it is, in a sense, the source code for these manipulations. But the typical problem is now, going back to the picture of what happens in biochemistry: how could you understand it? Imagine you were given these types of transformations, a little vocabulary of, say, just the linking and unlinking of edges. Now you ask: of all these possibilities, if you fire them at random, say with the same probability for the sake of argument, what is the likelihood of seeing a triangle appear? That is, you are trying to track how many triangles are newly produced, or maybe deleted, through the application of such sequences.
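To make the match-and-apply semantics concrete, here is a minimal sketch in Python (my own illustration, not the speaker's code): a simple undirected graph as a vertex set plus an edge set, the two-vertex linking rule, its matches, and one application.

```python
from itertools import permutations

def link_matches(vertices, edges):
    """All matches of the linking rule: injective embeddings of the
    rule's input motif (two distinct vertices) into the graph."""
    return list(permutations(sorted(vertices), 2))

def apply_link(edges, match):
    """Apply the rule at a match: both vertices are kept, one edge
    between them is inserted."""
    u, v = match
    return edges | {frozenset((u, v))}

vertices = {0, 1, 2, 3}
edges = set()
matches = link_matches(vertices, edges)   # 4 * 3 = 12 ordered matches
edges = apply_link(edges, matches[0])     # link vertices 0 and 1
```

Even for this tiny graph the rule admits twelve matches, which illustrates the point that rules are easy to write but typically admit many applications.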
What classical rewriting theory does not have much of an answer to is how to approach this question while utilizing the features of rewriting. So what one can do, and I will motivate this more later on, is the following: the starting point of the combinatorial analysis I would like to propose is that you can track how these rules were actually applied. In this picture, you see the five steps drawn vertically, one after another. Whenever you attack with a second, third, or fourth rule at a position where you have previously already applied a rule, this is marked with these blue lines; the red lines mean you touch something that was already in your original graph rather than produced by an earlier rule. So it is immediately obvious that there is some combinatorics in how you can plug these transformations together, interacting with each other, versus how they act on the graph state. If you now look at the problem of finding triangles, you discover something very important, namely that producing a triangle is an inherent feature of the above sequence of rules, because they are plugged together in a way that guarantees they produce a triangle. There might be other triangles produced accidentally, by simply linking up a V shape with one edge, but this one guarantee you can already give: this sequence will produce a triangle. The other point of interest is the two parts of the transformation sequence highlighted in orange: this is one possible way to apply them, but the two actually do not contribute anything to the production of the triangle. In fact, they only produce an edge that is deleted later on, so they do not contribute anything to this count. So what one needs is some mechanism for analyzing this combinatorially.
And the way to do it is to first promote these interactions of rules to first-class citizens, so to speak: to produce these types of diagrams via a recursive or other generative mechanism and to reason about their combinatorics. These objects are what I call tracelets. Okay, so the plan of the talk is to start from something which I think is quite close to the interests of many people in the audience, namely a certain type of Hopf algebra introduced by Gérard Duchamp, Karol Penson and others, which gives exactly the blueprint for this type of combinatorial construction. The second part of the talk gives you a little background on categorical rewriting theory, just enough to demonstrate how this works, and then shows you how you get to tracelets, and in particular to some notion of algebras that replaces these diagram algebras in the general case. Okay. For me personally, I started working on this about five years ago, when I was studying the 2011 paper by Gérard Duchamp, Karol Penson and their collaborators, called "Combinatorial Algebra for Second-Quantized Quantum Theory". From this idea, I developed my first version of a notion of graph transformation algebras, together with colleagues. So what I am presenting to you is a slight reformulation of Duchamp and Penson's construction in a language which later directly generalizes to the case of rewriting, but it is completely equivalent. The idea is to look at the manipulation of graphs that do not have any edges: then one can imagine some very elementary things one can do. We are always only looking at isomorphism classes of graphs, so in this sense vertices are indistinguishable. And then there are two elementary transformations: you can create, that is, add, a vertex, or you can remove a vertex.
Now, the starting point for getting to a notion of diagrams is that from these elementary building blocks you can assemble larger diagrams, which will typically look as follows. You have some occurrences of these little elements of creation and deletion. In each diagram, some vertices are characterized as outputs and some as inputs; in the pictures, since a creation only has one vertex, I draw these little dashed lines to indicate whether it is an output or an input. The diagrams are read from bottom to top. The second piece of information in a diagram is how some of these outputs, that is, some of the creations, are wired into inputs of some of the deletions. More mathematically speaking, I am giving two sets, the sets of input and output vertices, and a relation between them which should be one-to-one. And again I am only considering, essentially, isomorphism classes of these diagrams; more concretely, equivalence classes under joint permutations of the vertex sets which preserve this incidence structure. Okay, so these are the diagrams. Now, the beautiful idea from the aforementioned paper of Duchamp and Penson was that you can build mathematics over these diagrams, or rather their equivalence classes, by constructing a vector space whose basis is indexed by these equivalence classes. So for each diagram d you have a basis vector, which I always write δ(d). And beautifully, and you will immediately see how this links to rewriting, one can now construct on this vector space a binary operation, called the diagrammatic composition.
The idea is that if you are given two diagrams, first d1 and then d2, you can sum over all ways to consistently wire them together. In this drawing on the right, for example, between d1 and d2 the blue M12 marks the two lines that link some outputs of the first to some inputs of the second. Performing the composition then amounts to forgetting that these were two diagrams wired together, seeing the whole thing as one diagram, and of course taking its equivalence class. This gives a very nice composition operation, and the first important result is that the vector space together with this binary operation forms an associative unital algebra, called the diagram algebra, whose unit element is the basis vector associated to the empty diagram. For me, this was the first description in the literature I knew of such a diagrammatic operation which reproduces the combinatorics of these composition structures. Okay. Looking more closely, we see that our little catalogue of elementary diagrams was not yet complete: the elementary diagrams are the connected components of such larger diagrams, which are of course the vertex-creation and vertex-deletion diagrams, but the only other connected component you can get is creating a vertex and then deleting it again. So these are the three elementary mini-diagrams. And we now need a notation meaning the disjoint pasting of such elements, up to isomorphism; this happens to be composition along the trivial overlap in this definition.
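This composition admits a small computational sketch (my own toy encoding): a class of such diagrams is determined by three counts, namely o creations (outputs), i deletions (inputs), and e create-then-delete components, and the product sums over all ways to wire k outputs of the first diagram into inputs of the second.

```python
from math import comb, factorial
from collections import Counter

def compose(d1, d2):
    """Diagrammatic composition.  A diagram class is a triple
    (o, i, e): creations, deletions, create-then-delete components.
    Sum over all ways to wire k outputs of d1 into k inputs of d2;
    each wiring fuses a creation and a deletion into one
    create-then-delete component."""
    o1, i1, e1 = d1
    o2, i2, e2 = d2
    result = Counter()
    for k in range(min(o1, i2) + 1):
        ways = comb(o1, k) * comb(i2, k) * factorial(k)
        result[(o1 + o2 - k, i1 + i2 - k, e1 + e2 + k)] += ways
    return result

create, delete = (1, 0, 0), (0, 1, 0)
cd = compose(create, delete)   # create, then delete
dc = compose(delete, create)   # delete, then create
```

The difference between the two orders is exactly one copy of the create-then-delete diagram, which is the commutation relation appearing shortly.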
By looking at these diagrams as drawn, you see immediately that the equivalence class of any given diagram is completely characterized by the numbers of occurrences of these three elementary letters; that is simply by construction. And that, of course, is already an interesting sign that one could get some combinatorics. In particular, since we now have a binary operation, we can define a bracket, a commutator; I am just using the ordinary commutator symbol, meaning of course the commutator in this diagram algebra. These three elements have the interesting property that in the Lie algebra formed by them and this bracket, the only non-trivial commutation relation is the following: if you first create and then delete, you have one more option, namely the E pattern, than the other way around. That is the only commutation relation you have, so the basis vectors of these three diagrams have exactly the commutator structure of the Heisenberg Lie algebra. Now one can invoke a very nice result from the mathematical literature, the Poincaré–Birkhoff–Witt theorem. If you form the universal enveloping algebra of the Heisenberg Lie algebra, which is to say you tensor together arbitrary basis vectors chosen from these three types, modulo the ideal generated by exactly this commutation relation, then there is a very nice basis for this space, given by fixing some arbitrary order on the elements; in particular the one where you write first the string of all the V-dagger occurrences, then all the V's, and then all the E's. Then you can write a basis for this universal enveloping algebra of the form shown here. This is the Poincaré–Birkhoff–Witt theorem.
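In symbols (my transcription of the relations just stated, writing δ(V†), δ(V), δ(E) for the three elementary classes):

```latex
% Heisenberg Lie algebra structure of the elementary diagram classes
[\delta(V^{\dagger}),\,\delta(V)] \;=\; \delta(E), \qquad
[\delta(V^{\dagger}),\,\delta(E)] \;=\; [\delta(V),\,\delta(E)] \;=\; 0

% Poincar\'e--Birkhoff--Witt basis of the universal enveloping algebra
\delta(V^{\dagger})^{\,\otimes p} \otimes \delta(V)^{\,\otimes q}
  \otimes \delta(E)^{\,\otimes r}, \qquad p,\, q,\, r \in \mathbb{N}
```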
Now, the interesting thing is that this is again characterized just by a triple of integers, the numbers of times each of these elementary basis vectors occurs. Recalling that there was this notation of disjoint union of diagrams, and that the equivalence class of a diagram is characterized exactly by the numbers of connected components of each type, it is plausible that you can actually find an isomorphism between elements of this diagram algebra and elements of this universal enveloping algebra. And indeed, for every disjoint union you produce the ordered tensor product, and conversely, going the other way around, you essentially forget the order. But more than just a coincidence, it turns out that this is actually an isomorphism of algebras: it exactly preserves the algebra product from D into U and the other way around. That is the first nice observation. But since we have some people here very well versed in Hopf algebras, it is of course well known that this universal enveloping algebra of the Heisenberg Lie algebra is a Hopf algebra. So the question is: where do you get a coproduct from? One could ask whether the diagram algebra is then also a Hopf algebra. And indeed there is evidently this idea that the diagrammatic structure is already characterized through connected components, so you can dissect a given diagram into its connected components; for this we need some notation. Effectively, every possible diagram is just the disjoint union of connected diagrams. The only strange thing in the notation, which is for convenience, is that the disjoint union over an empty set is defined to be the empty diagram; this just makes the formula notationally nicer.
And now the coproduct is just the sum over all ways to partition the connected components of the diagram, with one part going into the left tensor factor and the other into the right tensor factor. This definition does give you a coproduct, and indeed the isomorphism I showed before extends to a Hopf algebra isomorphism, so you can define a Hopf algebra structure on these diagrams. I spare you, of course, all of the many axioms, but this was, I think, one of the core results in the work of Duchamp and Penson. Okay, and I apologize for the typos; I had to write this by hand this morning. Now, so far we have only talked about high-level properties of these diagrams, but one can simply write down the product of two elements in the diagram algebra; again, they are characterized just by the numbers of occurrences of these elementary patterns. The main point is that a very interesting coefficient occurs: the number of matchings, where the first diagram has k1 outputs, that is, vertices created, the second one has k2 inputs, that is, vertices that will be deleted, and the sum is over all ways of pairing those, forgetting the order of the pairing. This is exactly the combinatorial coefficient, the number of ways of doing this. Now look at the famous Heisenberg–Weyl algebra, or more precisely its representation on the number-vector basis, which everyone knows from quantum mechanics, for example. It is an algebra with two generators, represented on a vector space indexed by the natural numbers, acting as follows: a-dagger on the vector n gives the vector n plus one, and a, the annihilator, on the vector n gives either zero of the field, if n was zero, or n times the vector n minus one otherwise.
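The number-basis representation just described can be sketched directly; a minimal illustration (my own, using the stated convention a|n⟩ = n|n−1⟩ and a†|n⟩ = |n+1⟩):

```python
from math import perm

def normal_ordered_action(p, q, n):
    """Action of the normal-ordered word (a_dagger^p a^q) on the
    number vector |n>, with a|n> = n |n-1> and a_dagger|n> = |n+1>.
    Returns (coefficient, m), meaning coefficient * |m>, or (0, None)
    if the vector is annihilated."""
    if q > n:
        return (0, None)
    # applying a q times yields the falling factorial n!/(n-q)!
    return (perm(n, q), n - q + p)
```

For instance, the word a†a acting on |n⟩ returns n·|n⟩, the number operator, and the coefficients that appear are exactly the matching counts from the diagram algebra product.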
And indeed, the normal-ordered elements in this algebra, where normal order means all creation operators to the left of all annihilation operators, have precisely this number coefficient as in the diagram algebra structure. For Duchamp and Penson, that was the motivation to study this diagram algebra, because as you saw before, the coefficient is produced purely by the combinatorics of matchings in these diagrams. Now one can simply use different letters: before, these were called V-dagger and V for vertex creation and deletion; now I use capital A-dagger and A in the other algebra, just to say that these are exactly the same diagrams, but seen in a different algebra. For each, there is a little dictionary: with phi-bar, you map a diagram in this Hopf algebra structure by forgetting all the parts which are these create-then-delete pieces; this way you get a diagram on the right-hand side. And if you embed such a diagram into the larger algebra, you just have a Hopf algebra of diagrams without any of these create-then-delete patterns. It turns out that this completes the picture very nicely, because it gives you an algebra H, again defined over diagrams, which is exactly a diagrammatic encoding of the Heisenberg–Weyl algebra: the algebra whose only non-trivial commutator is that A commuted with A-dagger gives the identity. And finally, one more little piece of the picture is that you have a representation: to such a basis vector in H you can assign an action on number vectors. Everything put together gives you a very nice picture of the structure of the Heisenberg–Weyl algebra, explained mostly combinatorially; there is even a way of seeing the representation itself as a combinatorial action.
In summary of this part: I found this extremely interesting back then, because all of the combinatorics of how these transformation steps interact is encoded in these diagrams, and acting on states is postponed to a later step, through this representation ρ. This is exactly the type of decoupling that is needed to make progress on these transformation systems. Okay. Now I just want to show you one thing which is still a bit experimental. The key point about the Heisenberg–Weyl algebra is that you can formulate chemical reactions in a language where you do not look at the internal structure of molecules; you just abstractly count the numbers of occurrences of different molecules, and so one can quickly write down a continuous-time Markov chain. This is now in the Bargmann–Fock basis, where annihilation is d/dx, creation is multiplication by the formal variable x, and you track the probability distribution over states with n particles, encoded as monomials x^n. So this is a birth-death process, and if you draw these pictures again, the birth-death process chooses when to jump, when to perform transitions, either a creation or a deletion. And again, we could imagine looking at this picture in the Hopf algebra. There you would track when a creation was followed, immediately or maybe at a later point, by a connected deletion. It is impossible to put a measure on every possible time point for this, but what you can do is put a box of time, capital T, and then characterize the content of these transitions by exactly the classification into connected components: creation events, deletion events, and events where you create and then delete. And there is a little trick; so far there is no Markov chain theory for the Hopf algebra in that form, that is yet to be developed, but here at least, when you create, you can simply make the result a different type of molecule.
And then, when you delete, you can make this little artifact produce some third type of molecule. If you now track the numbers of each and run all of this machinery, in the end you can indeed get a nice expression which tells you a bit more about the dynamics: essentially, you ultimately stabilize on a Poisson distribution, but you also see that in the limit of time going to infinity, the number of these create-delete events grows linearly. That is the dynamics of the system. This is by no means a full theory yet; it is just a very first indication that presumably one can also extract interesting information from Markov chain theory this way, only that of course these particles have very little structure. And so I return to the question: how could you approach this for a much more complex situation, where you are not just counting vertices but actually describing graphs, and hitting all of this combinatorial complexity? Just to recall, the idea is pretty similar: we will mostly focus on classifying combinatorially how rewriting steps interact. The ultimate goal would then be to understand, if you want to count, for example, triangle patterns that are created or deleted by such sequences, how you can put a measure of likelihood on that. And again, we would like to find a way, through commutators of course, to drop out some of the possible contributions which do not actually influence the count. The only real obstacle is the following: this is a perfectly valid sequence of events, but at first sight one could imagine that the semantics of these diagrams would just be a pairing of subgraphs; before, we just paired vertices to vertices in these diagrams.
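Stepping back to the birth-death example for a moment: assuming (my assumption, for concreteness) a constant birth rate and a per-particle death rate, the mean particle number obeys a closed linear ODE, which already exhibits the stabilization just mentioned.

```python
from math import exp

def mean_particles(t, m0, birth, death):
    """Mean of the birth-death process with constant birth rate and
    per-particle death rate: solves dm/dt = birth - death * m exactly."""
    stationary = birth / death
    return stationary + (m0 - stationary) * exp(-death * t)
```

The mean relaxes exponentially to birth/death, the mean of the stationary Poisson law, while the expected number of create-then-delete events in a window [0, T] grows linearly in T, as stated above.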
But here it turns out to be a little more intricate, because actually you are pairing half-edges, and that is the real problem: how to come up with a concrete generalization. So after a lot of experiments, I came to formulate this in categorical rewriting theory. This is maybe a little bit of an exotic theory, but the nice thing about it is that it is completely formal and can immediately be implemented in algorithms. So I will tell you a little bit about it, and then afterwards show you how you can produce the analogue of these diagrams. Okay, the full generality of this theory is quite intricate, because in general you look not only at graphical structures, but also at ones which carry additional structural constraints, for example trees. This is from a paper jointly with Jean Krivine, which should be published soon in the journal Compositionality. I just want to show you enough of the theory to get the idea across, hopefully. Before, transformations were essentially specified as partial maps between input and output vertices; this now generalizes as follows. We take a category, which should typically have some nice properties called adhesivity. The important thing to note is that graphs, for example, form such a category, and we can have undirected graphs, multigraphs, and all sorts of variants of specifications. The main point is that you take such a category and formulate "partial maps", in quotation marks, as spans of monomorphisms, that is, spans of embeddings. In the most general framework, you can put some conditions on top, which essentially constrain how you can apply these rules. And again, to make a meaningful theory (this would in general be a proper class, even for graphs, even for good categories),
we typically also have to quotient by some notion of isomorphism, as we had to do in the diagram case. Okay. This is the analogue of the little building blocks of transitions. Unfortunately, while this is very general and covers pretty much all of the known cases of such transformations, it is for that reason also a little technical. The analogue of finding an input pattern in a state X is now finding an embedding of the input I into X. How you then apply the rule, how you unroll the transformation, depends on your semantics: it is typically performed either in the double-pushout semantics or in the sesqui-pushout semantics. The first one essentially tries to compute some notion of set complement to obtain this object K-bar, and the second uses a construction called the final pullback complement. In these graphical theories, a pushout is typically gluing together along a common overlap, a pullback is finding the intersection of two objects, and a pushout complement is roughly like a set complement, except that, for example, if you have a graph and try to delete a vertex, it depends on whether there are incident edges; so there are some differences. But other than that, it is roughly the idea of applying these steps as in the graphical description. The key point is that there exists a notion, from pure rewriting theory, of how to compose two steps, how two rules interact. Much as in the graphical language for the diagrams, you have two rules and you try to find an overlap of the output of the first rule with the interface, the input I2, of the second. This is again encoded as a partial overlap; you glue together along this partial overlap and then you just run the rules. That is essentially what this says.
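Coming back to the direct derivations mentioned above: a double-pushout step is often drawn as two squares (a standard sketch, with O ← K → I the rule span and m the match):

```latex
% DPO direct derivation: the right square is constructed as a pushout
% complement (yielding \bar{K}), the left square as a pushout (yielding X').
\begin{array}{ccccc}
O & \longleftarrow & K & \longrightarrow & I \\
\downarrow & & \downarrow & & \downarrow{\scriptstyle m} \\
X' & \longleftarrow & \bar{K} & \longrightarrow & X
\end{array}
```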
And of course, there are some technical details for how to do this with conditions, but I just wanted to show you that the intuitions are pretty good here, also for the graph case. I am now drawing the diagrams from right to left instead of bottom to top, to be more in parallel with the mathematical notation. So here I am drawing a picture where I take two vertices, link them up with an edge, and make an additional vertex and an edge. Then I use one of the vertices to ask for an incident edge, delete that edge, and make a new edge pattern. The way to interpret this diagram is to say: I am encoding here a sequence of events whose intermediate state is the shape where you glue together the patterns along the common overlap, and then you run one rule in the forward direction and the other in the backward direction. And this is exactly the analogue of the diagrams you saw before in the graphical case: a sequence of events produced through the interaction of these two rules. The overall input motif, here at the very bottom right, is the pattern you necessarily need to find in any state to apply the sequence. So this is the very first example of such a diagrammatic calculus. There is one complication which maybe prevented this from being useful from the start, because plain graphs are rarely very interesting: in most of these applications, you want more structured species. One very nice species, which I think also offers good common ground here, is planar rooted binary trees. If one wants to formalize these in the context of rewriting, the first thing to note is that planar rooted binary trees, and in fact forests, can be seen as a type of graph which is typed.
So you can take the slice category over some type graph. Every edge is given one of three exclusive types: left, right, or the bottom root edge. And that in itself is sort of a first step. But then, of course, you see it is not only a graph which is colored by these three types of edges; it also has a lot of structural properties. And of these, I am just listing some here. For example, you never have two leaves directly incident, and you never have two of these root edges, and so forth. I mean, this is not very pretty, but you can formalize these as conditions. Of course, this complicates the story a bit, but it can be done algorithmically, so at least that is a plus. And so, in the end, transformations on trees in this language can be formalized as rewriting rules, at the expense of that calculus of conditions, that is to be said. But in principle, it is now completely formalized how to do these transformations. Okay. And now, finally, the idea is: how do you get to combinatorics? How do you get to a calculus on these graphical rewriting steps? The idea is that, as in the pictures in the introduction, you want to reason about all possible ways of applying n different transformation steps, where each step is chosen from some finite vocabulary of possibilities, say, for example, vertex creation and vertex deletion. So you want to classify the ensemble of all possible trajectories for a fixed input state X0. And the strategy, which is very efficient, is to first classify all possible ways these n steps can interact among themselves, so sort of the minimal context you need to fire a given sequence of transitions. And then, as a separate step, ask how many ways there are to apply the overall input of such a sequence to the state X0. And this is precisely the idea of tracelets, completely formalized in this categorical rewriting theory now. Okay.
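The edge-typing idea can be sketched in a few lines; this is my own illustrative encoding, not the talk's. A planar rooted binary forest is stored as edges `(parent, child, type)` with type `'L'` or `'R'`, and one of the structural conditions mentioned on the slide, that every internal vertex has exactly one left and one right child edge, becomes a purely local check.

```python
from collections import defaultdict

def is_binary_forest(edges):
    """Check one local typing condition for planar rooted binary forests:
    every internal vertex has exactly one 'L' and one 'R' child edge.
    (Other conditions, such as acyclicity, are not checked here.)"""
    children = defaultdict(lambda: {"L": 0, "R": 0})
    for parent, child, edge_type in edges:
        children[parent][edge_type] += 1
    return all(c == {"L": 1, "R": 1} for c in children.values())

tree = [("a", "b", "L"), ("a", "c", "R")]
print(is_binary_forest(tree))                        # True
print(is_binary_forest(tree + [("a", "d", "L")]))    # False: two left children
```

The point of the talk is that such checks can be expressed as application conditions on rewriting rules, so they are preserved by every rewriting step.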
And so the idea is that if you look again, this is now the analogue of one of the diagrams from before in the graphical setting, where you now have three steps there in the shaded boxes, drawn from right to left. And this wiring diagram indicates a possible sequence of how these three steps could interact. And you can produce from it this type of tracelet. So the idea is that each of the wires codes for an overlap. And now, for example, you can zoom in on the first two and compute how they interact, producing this little subsequence of events. And now you see that indeed the overlap of the third diagram with what you just produced is now a proper graph, so there is no half-edge or anything like that, so you can glue them together. And you essentially just complete the full sequence of events that is minimally coded in these three rules as the bottom sequence. And finally, to obtain what I call tracelets, it turns out the only information you need to retain is just the outer hull of this diagram. So it looks precisely like just a sequence of transformation steps, but the specialty is that it contains only enough information to permit the sequence to occur, and not more; I mean, this could be happening in a much larger context. And this is exactly where you gain something in the complexity. And at first sight, this seems asymmetric, but one can show that it is equally possible to build up this tracelet from the same diagrammatic overlap structure by first computing how steps three and two overlap, and then overlapping with the first one. So that is part of this calculus. Okay, so I mean, it is just to say that these diagrams can be completely coded as just sequences of rule applications, and on those you can now do a combinatorial calculus.
So what is combinatorially interesting is that now you are given your vocabulary; it is just the abstraction of saying the top parts, the colored bars, are the individual rules. And so you can build up all possible tracelets, say here of length four, by recursively, or iteratively, composing your letters. And in each of these steps, of course, you can perform analysis. So this is exactly the philosophy of the diagrams: you can reason transitively, so to speak, on compositions. Okay, and I just want to briefly show how this looks in practice. So a tracelet of length one is just, you know, the special case where you have just one rule; it really only needs its input. So that is the trivial case. And what I showed just in pictures looks, in reality, like you are inductively building tracelets of length n + 1 from a tracelet of length n and a tracelet of length one. And it is just to show you that, even in the case of trees, this does not look pretty, I admit, but it can be encoded in an algorithm. So I mean, this is a fully formalized structure. And one of the interesting features of these tracelets is that you have, at the bottom of the diagram, the sequence of steps, and so you can read out the composite effect of that sequence. And in the language of the Duchamp-Penson diagrams, this was exactly the operation of evaluating the net effect, how a diagram acts in the Heisenberg-Weyl algebra. So this is now generalized here as reading out the net effect of the sequence of transformations. So that is nice; it is called an evaluation. And this evaluation is also compatible with composition. So indeed, the analogue of diagrammatic composition is now composing these tracelets, and again, it is only important to take an overlap of the output of one tracelet with an input of the next. So this is exactly analogous to the diagrammatic composition.
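The "evaluation" operation, reading off the net effect and the minimal input of a sequence, can be sketched in the discrete toy model. This is my own simplification under a specific assumption: I only evaluate the maximal-overlap tracelet, where each step reuses as many previously produced vertices as possible; other tracelets of the same sequence choose smaller overlaps.

```python
# Discrete toy model: a rule (i, o) consumes i vertices and produces o.
# For the maximal-overlap tracelet, evaluation computes the minimal input
# interface the whole sequence needs, and the net output it leaves behind.

def evaluate(seq):
    need, have = 0, 0
    for i, o in seq:
        take = min(have, i)   # overlap with vertices produced earlier
        need += i - take      # the rest must come from the input motif
        have = have - take + o
    return need, have

# create, create, then delete three vertices:
# one vertex must already exist in the state, nothing is left over.
print(evaluate([(0, 1), (0, 1), (3, 0)]))   # (1, 0)
```

In the full theory the evaluation is computed by pushouts and pushout complements on actual graphs; here it collapses to bookkeeping on interface sizes.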
And maybe just to say: there is now this operation of composing two tracelets, drawn with this vector notation at the bottom here. And this composition, if we go with the analogy, should of course be associative in some sense. And indeed it is associative, in the sense that, just as in the Duchamp-Penson case, the set of ways to wire two diagrams together and then wire the result with a third is exactly isomorphic to the set of ways of first wiring the third and the second and then wiring the result with the first. So that is the property this structure has. Now, finally, to put everything together, there is precisely this aforementioned characterization: if you have a sequence of transformations, you can equivalently count all possible ways of performing these transformations by first counting the number of ways you can compose up these tracelets, and then combining that with the number of ways of applying them. So it is just to say this now exists for all of these rewriting theories, including, of course, the Hopf case. Okay, and the final piece of the puzzle is then how to actually get from here to algebras, because we want to analyze everything using commutator relations. One thing one has to do from the start, I mean, we already had to go to isomorphism classes for rules to even obtain a set of equivalence classes, and there is something like this also for tracelets. So everything is constructed with pushouts and pushout complements and so forth, so you have to quotient by isomorphisms. Something less trivial is that traces are slightly too large. I mean, normally you do not want to keep every bit of this information. In particular, you do not want to keep, as in this diagram here, the information of the order when the steps are completely exchangeable up to their effect. And that is called shift equivalence.
And finally, there is one oddity, which is that, formally, the trivial rule intuitively should leave a transformation sequence invariant; but formally, it just produces an (n + 1)-length sequence with some repeated parts. So you can define an equivalence that simply quotients out such occurrences. And if you put all of these together, you finally get to the construction of the tracelet algebra. Basis elements are labeled by equivalence classes of these tracelets under the aforementioned equivalences. And now the product is precisely as in the Duchamp-Penson construction: the wiring together of the tracelets along their overlaps, in all possible ways. So this is the tracelet algebra product. And you can give an action of tracelets on states, precisely by the aforementioned tracelet characterization: if you have a sequence coded in a tracelet, you can apply the entire sequence by finding embeddings of the overall input into a state. And this gives you a representation of this algebra. So the theorem here is that not only do these tracelets give rise to an associative unital algebra, but moreover, this ρ is indeed a representation. Which means that, if you now want to do combinatorics on numbers, if you want to count the number of ways of applying rewriting sequences, you are free to partition your problem, by the bottom-right outcome of the equation, into first characterizing the number of overlaps of tracelets, which is very advantageous, because now you can use relations such as commutators and so forth. So this is the final outcome. And I then realized, while preparing the talk, that this was already much too much information. So let me just conclude by reproducing the special case of discrete graphs. So here you see on the left the elementary sequence of creating a vertex and deleting a vertex.
This is how it looks as a tracelet. So this is the analogue of the diagram of the generator E in the combinatorial Hopf algebra. And on the right, you see exactly why you need this shift equivalence: because creating a vertex and then deleting one is, up to taking equivalence classes, the same, in quotation marks, as first deleting and then creating. And this is, in the tracelet language, exactly obtained through such an equivalence. So it is not a single tracelet; we label the elements of the algebra by equivalence classes. And finally, indeed, the type of relations you can then derive are precisely commutation relations, and those are key to the analysis in these combinatorial arguments. Okay. And I mean, I wanted to speak about planar rooted binary trees, but I think I am out of time. So it is just to say that in this calculus, and I also showed this last year at CAP, one can now start to see combinatorial, simply counting, arguments for why certain commutators have the form they do. So that is the main motivation for this work. But I do realize I am out of time. So this will be a forthcoming paper for the beginning of next year, and one of the cases will be an explanation of why you see a certain commutator structure in these planar rooted binary tree computations. Okay. But let me conclude, to stay in the time limit. So I have given you a quick tour of a new concept, which I call tracelets, which is intended as a generalization of the Duchamp-Penson et al. construction of diagram algebras. It seems to be very useful. I mean, I am developing an implementation of this with the Z3 SMT solver, which is available online, but this is work in progress.
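The commutation relation for the discrete case can be checked in a few lines. This is my own toy encoding, not the talk's implementation: a state with n vertices is reduced to the number n, a weighted sum of states is a dict `{n: weight}`, and "apply a rule in all possible ways" means weighting by the number of embeddings of the rule's input pattern.

```python
# Creation C has the empty input pattern (one embedding); deletion D has a
# one-vertex input pattern (n embeddings in a state with n vertices).

def create(state):
    return {n + 1: w for n, w in state.items()}        # 1 way to add

def delete(state):
    out = {}
    for n, w in state.items():
        if n > 0:
            out[n - 1] = out.get(n - 1, 0) + n * w     # n embeddings
    return out

def commutator(f, g, state):
    a, b = f(g(state)), g(f(state))
    return {k: a.get(k, 0) - b.get(k, 0) for k in set(a) | set(b)}

# The Heisenberg-Weyl relation [D, C] = identity, checked on a 5-vertex state:
print(commutator(delete, create, {5: 1}))              # {5: 1}
```

Here `delete(create({5: 1}))` gives weight 6 (the new vertex or any of the 5 old ones can be deleted) while `create(delete({5: 1}))` gives weight 5, and the difference is exactly the identity action.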
And the mathematically interesting question is maybe whether you also have some Hopf algebra structure on these tracelets, which for some cases I know one can demonstrate, but in general it is a research question. And the long-term goal of this work is to bring combinatorics also into these bio- and organo-chemical reaction systems, which through the tracelets would now boil down to enumerative combinatorics on tracelets. Okay. And with this, I would like to thank you for your interest, and thanks a lot for your time. Thank you very much, dear Nicolas, for this. Are there questions? Yes, I have one question. What about directed graphs? Does your construction apply to those? Yes. Sorry, I just showed undirected graphs for the sake of the diagrams. No, no. It applies to any type of graph you can formalize categorically. So even for simple graphs, or for multigraphs, or for hypergraphs, or for attributed graphs, and so forth. I mean, it was just for the pictures. So directed graphs in particular are presheaves, you know, you can formalize them as a presheaf category, and any presheaf topos gives you an adhesive category. So that is, yeah. Okay. And in particular, for the directed graph case, in this paper, in this arXiv preprint, we even have the Hopf algebra structure. So may I ask a question? Yes. Yes. Are there questions? Yeah, I have a question. Yes. Nicolas, could we describe the construction of a Penrose tiling with your graphs? Is it possible? It is like simple chemistry after all, but in two dimensions. So I mean, the only question about this is whether you can generate your tiling in some process which only asks for local information. I am not quite familiar with Penrose tilings, but I mean, if you can describe it that way. You have just several tiles of different forms, and there are just rules, quite simple rules, for how to glue them together. Interesting. Yeah. Yes, so that sounds very much like an example you could study.
Yeah, I think all quasicrystals, because they are much more interesting than crystals: crystals are, of course, periodic, but these are aperiodic. Yes. So one of the things I could imagine is you might, for example, ask in such a quasicrystal: what is the average occurrence, say, of a particular subspecies? But probably with your technique it may be described in a very simple way. Yeah. So in particular, if you have a model for how these local manipulations are fired. Of course. Of course. Yeah. Yeah. Actually, I have a little comment on this, because in your stuff, it is really like chemistry, there is no special spatial structure. And of course, in a Penrose tiling you get things sitting on a plane. And in principle, for example, there are these universal tilings equivalent to universal Turing machines: you do dominoes in two dimensions, like square tiles, and each side is colored by some color, and then you try to build things, since it is a universal Turing machine. Of course, you can start to glue things, you get some pieces of planes, so you eventually get some kind of space-like structure. An abstract story. Yeah. I mean, one of the things we tried was, for example, polygons. So you can have these tissue models, which are simply polygons glued together without gaps in the plane, and you can divide the polygons, you can insert triangles by expanding vertices, and so on. And this is also in the realm of these rewriting techniques. And you can use it, for example, there is a problem with capsids, you know, the viruses which have icosahedral symmetry: they are surrounded by capsids. These are just coat proteins that come together, and they have five-fold and six-fold symmetry. And of course, they build these capsids. And again, there are very simple rules that make them glue together.
Yeah, so the main motivation here is that for these types of theories, if you have a problem which you think is of this nature, it is relatively quick to check how to formalize it into rewriting. So I mean, it is essentially to check whether you need more information than what you can express in the small local neighborhood of the rule or not. If not, there is a very good chance you can do it, and what you get, in quotation marks, for free are, for example, commutation relations. So you can, for example, implement the counting of a pattern as just the identity transformation of that pattern. And then you can ask how many more or fewer occurrences of the pattern you have before and after applying a transformation. And so these commutators really carry a lot of information, and it is much less information than if you were to analyze the entire structure. So you get average information, so to speak. Yeah. Are there questions, remarks or comments? Can I ask one little question? Have you studied which information on these algebras is carried by some homology of these algebras, like Hochschild homology or something like that? I mean, these algebras have not even been written down. So the answer is no from my side. I would not be surprised if you look at special examples; maybe you could, I mean, it would be more that you would recognize the tracelet structure in it. I do not have a good answer, sorry. I think it is a good question, especially because last year at CUP we had this nice talk about graph homology, or cohomology, I forgot, which also could be formalized through rewriting.
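The idea of "pattern counting as an identity rule" can also be made concrete in the discrete toy model; again this is my own sketch, not the speaker's code. The observable O applies the identity rule on a single vertex, so it multiplies each state's weight by its number of vertex embeddings, and the commutator with creation measures how creation changes the count.

```python
# Discrete toy model: weighted states are dicts {vertex_count: weight}.

def create(state):                  # add one vertex (one way to apply)
    return {n + 1: w for n, w in state.items()}

def observe(state):                 # identity rule on one vertex:
    return {n: n * w for n, w in state.items()}   # weight = n embeddings

def commutator(f, g, state):
    a, b = f(g(state)), g(f(state))
    return {k: a.get(k, 0) - b.get(k, 0) for k in set(a) | set(b)}

# [O, C] acts like C itself: creating a vertex raises the count by one.
print(commutator(observe, create, {5: 1}))   # {6: 1}, the same as create({5: 1})
```

This is the sense in which the commutators carry the "average information": they record how an observable's count changes per rule application, without tracking the full state.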
So, I mean, if you actually have a good case which you think could be of this elementary graph-like structure, I would be very interested, because I would like to play with it. Yeah, it should be possible from a general point of view, saying that they should carry some information. Interesting. No, I would be very interested if you could send me a link, maybe. Thank you. Any other questions, remarks or comments? Oh, thank you, dear Nicolas, for this amazing and huge expansion of our paper with Pawel Blasiak and Karol Penson. And I have a small question. Are your algebras graded? Do you have each time some way to count vertices or edges, so that... Yes, exactly. So the way this formalizes is that, in my formalism, they are filtered. So in principle, you could count the occurrences of these connected components in some system. I mean, if you give grade two to this E pattern, which consists of two sub-diagrams, and grade one to each of the others, that would give you a grading. But the thing that generalizes is a filtration, by essentially the cardinalities of the interfaces. And through this filtration, I mean, whenever you compose, you exhibit a little bit less of the interfaces to the outside world, so that decreases the filtration degree. So, I mean, it is clear that composition gives a decreasing sequence: at most you have the same filtration degree, if the overlap does not connect anything, because the degree is simply the sum of the cardinalities of the interfaces, and otherwise it decreases. And this happens to be compatible with the coproduct. So, yeah, okay. Thank you very much.
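The filtration argument at the end can be sketched in the discrete toy model (my own simplification; in the full theory the interfaces are graphs, not numbers): a tracelet with input interface of size i and output interface of size o gets filtration degree i + o, and composing along an overlap of size k hides k outputs of one tracelet and k inputs of the other, so the degree is non-increasing.

```python
# Filtration degree of an interface shape (i, o) is i + o; composing along
# an overlap of size k yields (i1 + i2 - k, o1 + o2 - k), so the degree
# drops by 2k: it stays equal only for the empty overlap (k = 0).

def degree(t):
    i, o = t
    return i + o

def compose(t1, t2, k):
    """Compose two interface shapes along an overlap of size k."""
    (i1, o1), (i2, o2) = t1, t2
    assert 0 <= k <= min(o1, i2)
    return (i1 + i2 - k, o1 + o2 - k)

t1, t2 = (2, 3), (2, 1)
for k in range(3):
    print(k, degree(compose(t1, t2, k)))   # degrees 8, 6, 4: non-increasing
```

This is the decreasing-sequence property mentioned in the answer: the composite's degree equals the sum of the degrees exactly when the overlap is empty.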