First of all, happy birthday to Dirk. So I'm going to start with a monoid or a group which is proalgebraic; it does not really matter exactly what this means, the point is that it has a good ring, or algebra, of polynomial functions. This algebra reflects the topology of the group, and you have more: a coproduct, which sends one polynomial function to pairs of polynomial functions (this is the coproduct data which you can see here) and which reflects the composition of the group. So this is what is called a bialgebra if you have a monoid, and if it is a group, it is a Hopf algebra. And you can recover, it's quite well known, the group or the monoid from the bialgebra just by taking its characters; the characters are just the algebra morphisms from your algebra to the base field, which for me is the complex field. And there is a product on characters, the convolution, which is in some sense the dual of the coproduct. Now, I'm not satisfied with only one group or one monoid; I need two groups, because in fact I want to do some semidirect product or something like this. So for this, I take two groups or two monoids with their bialgebras, and I suppose that the second one, G', acts on the first one by monoid endomorphisms, which is exactly what I need if I want to do a semidirect product. So what does this mean? I have first a monoid G, which has a Hopf algebra, or bialgebra, which I call A. I have a second monoid G', which also has a bialgebra, which is B. G' acts on G, so B coacts on A: there is a coaction, which is an algebra map ρ from A to A ⊗ B. This just reflects the action of G' on G. And if I want to translate the fact that the action of G' is by monoid endomorphisms, this means that A is a bialgebra in the category of B-comodules. So what does this fourth point mean exactly? It means these axioms. The first one just means that ρ is a coaction; this is the axiom of a right coaction.
The second one means that the product of A is a morphism in the category of comodules over B; it also means that the coaction is an algebra morphism, which is the same thing. The next one means that the counit of A is a comodule morphism from A to the base field. And the last one, the most intricate, means that the coproduct of A, the big Δ, is a comodule morphism from A to A ⊗ A, which is also a B-comodule. So this means that if you do first the coaction and then the coproduct on the left, this is equal to doing first the coproduct, then the coaction on both sides of the coproduct, and then regrouping the two factors which belong to B. This is this map m₁₃₂₄, which takes four factors and regroups the second and the fourth at the end. So, for example, let's take a very simple example. You can consider the group C with the addition, which is an abelian group. On it, the group C* with the multiplication naturally acts by group automorphisms. So this is exactly the situation from before. The first group G is (C, +). Its Hopf algebra is the algebra of polynomial functions on C, which is the polynomial ring in one indeterminate, C[X], with the additive coproduct: Δ(X) = X ⊗ 1 + 1 ⊗ X. This is a Hopf algebra. The second group G' is C* with the multiplication. The algebra of polynomial functions on C* is the Laurent polynomial algebra C[X, X⁻¹]; I have to add an inverse to X because there is no zero in C*. It carries another coproduct, the multiplicative one, which sends X to X ⊗ X. So for the first one, X is a primitive element; for the second one, it is a group-like element. This is also a Hopf algebra, because X is invertible, and B coacts on A, because C* acts on C, with the coaction ρ, which also sends X to X ⊗ X.
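The two coproducts on polynomials, and the way convolution of characters recovers the two groups, can be sketched in a few lines. This is a toy Python encoding of my own, not anything from the talk's slides:

```python
from math import comb

# A monomial X^n is encoded by its exponent n; an element of K[X] (x) K[X]
# is a dict {(i, j): coefficient} standing for a sum of c * X^i (x) X^j.

def Delta(n):
    # Additive coproduct: Delta(X) = X(x)1 + 1(x)X, extended as an algebra
    # map, so Delta(X^n) = sum_k C(n, k) X^k (x) X^(n-k).
    return {(k, n - k): comb(n, k) for k in range(n + 1)}

def delta(n):
    # Multiplicative coproduct: delta(X) = X(x)X, so delta(X^n) = X^n (x) X^n.
    return {(n, n): 1}

def convolve(a, b, coproduct, n):
    # Convolution of the characters "evaluate at a" and "evaluate at b",
    # applied to X^n: (ev_a * ev_b)(X^n) = (ev_a (x) ev_b) o coproduct.
    return sum(c * a**i * b**j for (i, j), c in coproduct(n).items())

# With Delta, characters compose like the additive group (C, +) ...
assert convolve(2, 3, Delta, 4) == (2 + 3)**4
# ... and with delta, like the multiplicative monoid (C, x).
assert convolve(2, 3, delta, 4) == (2 * 3)**4
```

So the evaluation characters ev_a convolve to ev_{a+b} through Δ and to ev_{ab} through δ, which is exactly the statement that the group is recovered from the characters of its bialgebra.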
In fact, the coaction and the second coproduct are more or less the same. Now, I don't like this X⁻¹ very much; I would prefer to have just C[X]. So I forget it, and I obtain not a Hopf algebra but a bialgebra, which is B: the same algebra as A, with another coproduct. It is no longer a Hopf algebra, because X no longer has an inverse; it is just a bialgebra. But it coacts in the same way on A, with the same coaction, and for this example the coaction and the second coproduct coincide. So this is the framework I will use now. What I have is an algebra A with a single product and two coproducts, big Δ and small δ, such that (A, m, Δ) is a bialgebra, (A, m, δ) is a bialgebra, and the second one coacts on the first one with the coaction ρ, which is also the second coproduct. So what I have is an object with one product and two coproducts; for both coproducts it is a bialgebra, and moreover there is a compatibility between the two coproducts, which is something like this: replace ρ by δ, and you obtain the compatibility between big Δ and small δ. To be complete, this should be called a bialgebra in the category of comodules over another bialgebra, which is quite long, so from now on I just call these double bialgebras, or cointeracting bialgebras, something like this. And the first example we have is just the polynomial ring C[X] with these two coproducts: big Δ, which is additive, and small δ, which is multiplicative. We know more examples. For example, the well-known Connes–Kreimer Hopf algebra of trees, which is based on rooted forests; I won't spend long recalling it. It is based on rooted trees, and for me, the roots of the trees are at the bottom. So here you have the trees with one, two, three, four vertices. The product is the disjoint union, so the algebra has a basis of rooted forests.
And the first coproduct is the Connes–Kreimer one, given by admissible cuts. You take your tree or your forest and you cut some branches; you put the branches on the right and the trunk on the left. For example, for this tree, you can cut nothing, and you obtain the tree tensor one, and so on. Or you can cut everything: one tensor the tree. Or you can cut a leaf, in two possible ways: you obtain twice the trunk tensor the leaf. Or you can cut both leaves, which go on the right, and only the root remains on the left. And the same for this one: you can cut nothing or everything, or the edge just after the root, or the edge just before the leaf, and you obtain these four terms. There is a primitive part, the first two terms, and the counit is very simple: the counit of a forest is one if the forest has no vertex, and zero otherwise. And you can observe that it is graded, obviously, by the number of vertices: if you cut a forest, you don't lose any vertex; some go on the left, the others on the right. So this is graded by the number of vertices. So this is the most famous coproduct on it, but there is a second one, which was first described by Damien Calaque, Kurusch Ebrahimi-Fard and Dominique Manchon in 2008, I think, which is given by the process of contraction and extraction. So what does this mean? For example, for this tree, you can separate your tree into disjoint subtrees. For example, you can split it into three subtrees, each with only one vertex; on the left you contract these subtrees, so nothing happens, and on the right you put these subtrees. Or you can contract the edge on the left, that is, this two-vertex subtree, which gives this tree, in two possible ways. Or you can contract the whole tree, so only one vertex remains, and put the whole tree on the right. So this is another coproduct, which is also coassociative. It is not cocommutative, as you can see.
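The admissible-cut coproduct can be made concrete with a small sketch. Here a rooted tree is encoded as a nested tuple of its child subtrees (a leaf is `()`); this encoding is my own assumption, not from the talk. Following the talk's convention (trunk on the left, branches on the right), an admissible cut severs at most one edge on each path from the root, and the total cut `1 ⊗ t` is a separate extra term.

```python
from itertools import product, chain

def cuts(t):
    """All admissible cuts of the rooted tree t (a tuple of child subtrees).

    Returns a list of (trunk, branches): trunk is the subtree still attached
    to the root, branches is the tuple of cut-off subtrees. For every child
    we either cut the edge above it (the whole child becomes a branch) or
    keep that edge and cut inside the child."""
    per_child = []
    for c in t:
        options = [(None, (c,))]                     # cut the edge above c
        options += [(tr, br) for tr, br in cuts(c)]  # keep it, recurse
        per_child.append(options)
    result = []
    for combo in product(*per_child):
        trunk = tuple(tr for tr, _ in combo if tr is not None)
        branches = tuple(chain.from_iterable(br for _, br in combo))
        result.append((trunk, branches))
    return result

leaf = ()
ladder3 = ((leaf,),)   # three vertices in a line, root at the bottom
for trunk, branches in cuts(ladder3):
    print(trunk, "(x)", branches)
# Together with the total cut 1 (x) t, this yields the four terms of Delta.
```

For the three-vertex ladder this produces the empty cut, the cut after the root, and the cut before the leaf, matching the four terms described above once the total cut is added.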
And it is not a Hopf algebra for this coproduct, because there is a group-like element, the tree with only one vertex, with no inverse. So you only obtain a bialgebra for this coproduct. And they prove that this is a double bialgebra, which means that this coproduct really coacts in a good way on the first, Connes–Kreimer coproduct. And there are similar constructions on posets, finite posets: you can see trees as posets just by taking the partial order to be "higher in the tree". So if you have a tree, you have a poset, and such a construction also exists on finite posets, or more generally on finite topologies. So this is the second example of a double bialgebra. I'm going to give another one, based on graphs. For this, the basis of my Hopf algebra of graphs is the set of all graphs, meaning simple graphs. So here you have all graphs, connected or not, with one, two, three, or four vertices. There is a simple product on it, which is the disjoint union. The unit is the empty graph, which is here. So, for example, this graph is the product of this one by itself, something like this. There is also a very simple coproduct, which is just given by separating the graph into two parts: take your set of vertices, put some of the vertices on the left with the edges between them, and the other vertices on the right, also with the edges between them, and you obtain a nice coproduct, which was first defined, I think, in a paper of Schmitt on incidence Hopf algebras. This is an example of an incidence Hopf algebra, based on the family of graphs with a given set of vertices and a given set of edges. So this is a coproduct. It is coassociative, which is really not difficult to see, and it is cocommutative. This is very different from the Connes–Kreimer coproduct: this one is cocommutative.
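This vertex-splitting coproduct is easy to enumerate by brute force. Below is a minimal sketch with a hypothetical encoding of my own: a graph is a pair (frozenset of vertices, frozenset of 2-element frozenset edges).

```python
from itertools import combinations

def induced(edges, subset):
    # Induced subgraph: keep the chosen vertices and the edges inside them.
    s = frozenset(subset)
    return (s, frozenset(e for e in edges if e <= s))

def split_coproduct(vertices, edges):
    """Schmitt-style coproduct: sum over all ways to send part of the
    vertices left and the rest right, each part with its induced edges."""
    terms = []
    for r in range(len(vertices) + 1):
        for left in combinations(sorted(vertices), r):
            right = vertices - frozenset(left)
            terms.append((induced(edges, left), induced(edges, right)))
    return terms

# Path a - b - c: 2^3 = 8 terms, one per subset of the vertex set.
V = frozenset("abc")
E = frozenset({frozenset("ab"), frozenset("bc")})
print(len(split_coproduct(V, E)))   # 8
```

Swapping a subset with its complement swaps the two tensor factors, which is exactly the cocommutativity observed above.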
There is a second one, which can also be found in the paper of Schmitt, with another incidence bialgebra, and which was also described in a paper of Dominique in 2011, with various examples on graphs, oriented graphs, acyclic oriented graphs, and things like this. So this is the same idea as for trees. For trees, the first coproduct was just cutting the tree into two parts; this is the same for graphs. The second coproduct for trees was given by extraction and contraction; this is the same for graphs. Just take a graph. You can take an equivalence relation on the set of vertices, which just means a partition of your vertices. You contract the equivalence classes, which means that you contract some subgraphs of your graph to vertices; and on the other side you keep the extracted pieces, forgetting the edges between vertices which are not equivalent. So let me give an example. For this one, you can contract everything: only a vertex remains, and on the right you obtain the whole graph. You can contract only one edge, for example this one: if you contract it, you obtain a graph with two vertices and one edge between them, this one, and the extraction is given by this edge and the other vertex, something like this. You have three possible ways to do this. Or you can just extract the vertices, which go on the right, and if you contract only vertices, nothing happens, so the graph stays itself. So this is a second coproduct, and this also gives a bialgebra. It is not a Hopf algebra, because there is a group-like element, the graph with one vertex, and it has no inverse, so you don't have any antipode. It's not a big problem, but, well, it's not a Hopf algebra. And the counit is very simple. In fact, this is because of this part: for a graph you obtain vertices tensor the graph, plus the graph tensor vertices, plus more terms.
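The contraction-extraction coproduct can also be enumerated directly. The sketch below is my own reading of the construction: it sums over partitions of the vertex set whose blocks induce connected subgraphs (the analogue of the "disjoint subtrees" for trees), putting the contracted graph in one factor and the disjoint union of the extracted block subgraphs in the other; encoding and conventions are assumptions, not the talk's notation.

```python
def partitions(items):
    # All set partitions of a list of vertices.
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def connected(block, edges):
    # Is the subgraph induced on `block` connected? (flood fill)
    block = set(block)
    start = next(iter(block))
    seen, todo = {start}, [start]
    while todo:
        v = todo.pop()
        for e in edges:
            if v in e:
                w, = e - {v}
                if w in block and w not in seen:
                    seen.add(w)
                    todo.append(w)
    return seen == block

def contract_extract(vertices, edges):
    """delta(G): sum over partitions of V into connected blocks of
    (contracted graph) tensor (union of induced block subgraphs)."""
    terms = []
    for part in partitions(sorted(vertices)):
        if not all(connected(b, edges) for b in part):
            continue
        cls = {v: i for i, b in enumerate(part) for v in b}
        contracted = frozenset(frozenset({cls[u], cls[v]})
                               for u, v in map(tuple, edges)
                               if cls[u] != cls[v])
        extracted = frozenset(e for e in edges
                              if len({cls[v] for v in e}) == 1)
        terms.append(((len(part), contracted),
                      (tuple(map(tuple, part)), extracted)))
    return terms

# Single edge a - b: two admissible partitions, hence two terms:
# (vertex) (x) (whole graph) and (whole graph) (x) (two vertices).
V, E = frozenset("ab"), frozenset({frozenset("ab")})
print(len(contract_extract(V, E)))   # 2
```

For the path a - b - c one of the five vertex partitions is rejected (the block {a, c} is not connected), leaving four terms.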
So the counit is just given by sending any graph to one if the graph is totally disconnected, with no edge, and to zero otherwise. And you can prove that this is really a double bialgebra: the second coproduct, this extraction-contraction coproduct, really coacts on the first one. And the last example is the Hopf algebra of quasi-symmetric functions. As an algebra, it is based on the set of compositions, a composition being just a finite sequence of positive integers, with this product, which was used just before by Dominique: the quasi-shuffle product on compositions. The first coproduct is given by deconcatenation: you just cut your word into two parts, between two letters. And there is a second one, which is given by extraction and contraction. So, for example, for this one, you cut your word into several parts: one part, two parts, two parts, three parts, two parts, two parts, one part. You contract the parts; contracting just means that you sum all the letters of the part, and because the letters are integers, you can add them. This gives the terms on the left, and on the right you quasi-shuffle your parts. So this is another coproduct. You can find it in the papers of Thibon, Novelli and their coauthors on this subject. I'm not totally sure they proved that this is really a double bialgebra; I'm not sure they proved the co-interaction, but it's true. And they prove it with a trick based on the manipulation of alphabets. So it's not really obvious, but you can do it without too much combinatorics, just with algebraic tricks. So, well, there is a counit, which I forgot, which is given by this: you take a word, a composition; if it is of length zero or one, the counit is one, and zero otherwise. And it turns out that this is a character of QSym which appears in another paper, by Aguiar, Bergeron, and Sottile.
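The quasi-shuffle product has a standard three-branch recursion, which a short sketch makes concrete (compositions encoded as tuples of positive integers, my own convention):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def quasi_shuffle(a, b):
    """Quasi-shuffle of two compositions (tuples of positive integers):
    at each step, take the first letter of a, or the first letter of b,
    or the sum of both first letters."""
    if not a:
        return (b,)
    if not b:
        return (a,)
    return tuple((a[0],) + w for w in quasi_shuffle(a[1:], b)) \
         + tuple((b[0],) + w for w in quasi_shuffle(a, b[1:])) \
         + tuple((a[0] + b[0],) + w for w in quasi_shuffle(a[1:], b[1:]))

# (1) * (2) = (1,2) + (2,1) + (3): the last term is the "contraction" term
# that distinguishes the quasi-shuffle from the ordinary shuffle.
print(quasi_shuffle((1,), (2,)))
```

Dropping the third branch gives back the ordinary shuffle product; the extra summed-letter branch is what makes the compositions behave like monomial quasi-symmetric functions.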
In this paper, they define the category of combinatorial Hopf algebras, which are graded connected Hopf algebras equipped with a character. And they prove that in this category, QSym with this character ε' (they don't mention that this is the counit of a certain coproduct, by the way) is a terminal object. So this means that if you take a combinatorial Hopf algebra, a connected graded Hopf algebra with a character, you automatically obtain a morphism to QSym compatible with this character. Okay, so that's nice. These are nice objects, but the question is: what can you do with this? What can you deduce from this construction, and what will it give on these examples of graphs and trees and things like this? So first of all, take a double bialgebra A, with one product and two coproducts, another bialgebra B, and look at the morphisms of Hopf algebras or bialgebras from A to B. It turns out that the monoid of characters of A for the second coproduct acts on the set of bialgebra morphisms, with the help of the coaction given by the second coproduct. So if you have one bialgebra morphism from A to B, in fact you have a lot more: you can deform any bialgebra morphism with the help of the characters of A. If you have this one, you will have all of them, just by using the action. So let's try to do this for forests. Forests form a double bialgebra, so there should be a unique morphism from forests to the polynomials compatible with both coproducts. You can prove that you can compute it inductively. For example, let's start with the first tree, the tree with only one vertex. It is primitive for trees, so its image should be primitive for polynomials. The space of primitive elements of K[X] is one-dimensional, generated by X. So this means that φ₁ of this tree should be a multiple of X.
Moreover, φ₁ is compatible with the second coproduct and with its counit ε'. Now ε' of this tree is one, so ε' of its image should be one. And ε' of λX is λ, so λ should be one. So you have entirely determined φ₁ of this tree: it should be X. For the second one, you do the same. Let's first compute the coproduct of this tree: this is this; there is only one non-trivial admissible cut. Let us apply φ₁ to this. φ₁ is compatible with big Δ, so you should find something like this: φ₁ of this is this. So φ₁ of this tree should be X²/2 plus a primitive element, λX. This morphism φ₁ is compatible with the counit ε', so ε' of this polynomial should be ε' of this tree, so it should be equal to zero, and you obtain that λ is equal to minus one half. And you obtain that φ₁ of this tree is exactly this. What you see from this computation is that this morphism is unique. What is not clear is that it is really compatible with the second coproduct; I only used that it is compatible with the counit, but in fact you get the compatibility with the second coproduct for free. So you can continue like this. For this tree, you obtain something like this, which is a Hilbert polynomial, quite famous. And for this tree, this is no longer a Hilbert polynomial, but something like this; maybe you recognize it: this polynomial counts the sums of squares, so evaluated at N it is 1² + 2² + ... + N². So these are quite special polynomials. You can do more, in fact. This is a nice way to compute your invariant φ₁, but it's quite long, and in fact you can do better: you can prove a formula like this. If you take an element of your double bialgebra A, you can compute φ₁ of a in this way.
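The determination of φ₁ on the two-vertex tree can be checked symbolically. This sketch (using sympy, under my reading of the argument) verifies that P = X²/2 + λX is the general solution of the compatibility with big Δ, namely P(x+y) - P(x) - P(y) = x·y, and that the counit condition P(1) = 0 (ε' on polynomials is evaluation at 1, since δ(Xⁿ) = Xⁿ ⊗ Xⁿ) pins down λ = -1/2.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')

# Candidate from the cut computation: Phi_1(ladder) = x**2/2 + lam*x,
# where the primitive-part ambiguity lam*x remains to be fixed.
P = x**2 / 2 + lam * x

# Compatibility with big Delta: P(x+y) - P(x) - P(y) must equal x*y,
# the image of the cut term (vertex) (x) (vertex). Note lam cancels here.
assert sp.expand(P.subs(x, x + y) - P - P.subs(x, y)) == x * y

# Compatibility with the counit eps' (evaluation at 1) fixes lam:
sol = sp.solve(sp.Eq(P.subs(x, 1), 0), lam)
print(sol)                               # [-1/2]
print(sp.factor(P.subs(lam, sol[0])))    # x*(x - 1)/2
```

So φ₁ of the two-vertex ladder is X(X-1)/2, exactly the binomial coefficient polynomial announced in the talk.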
First, you compute all the iterated reduced coproducts. The reduced coproduct is just obtained by forgetting the trivial part; for trees, this means forgetting the empty cut and the total cut. You compute it, and iterate, and iterate, and you know that at a certain point it stops: the iterated reduced coproducts are zero after a certain point. So you compute all of them; you obtain tensors of trees and things like this. You apply the counit ε' on each factor, and then you multiply the terms by the Hilbert polynomials. And this means that your invariant really counts something: if you evaluate X at an integer, these are binomial coefficients. So φ₁ of a forest really counts something, and it counts something like this. This is quite a well-known construction. If you take a forest with k vertices, choose an indexation of the forest; it doesn't really matter which one. Then you can associate to it a polytope of dimension k. This polytope is defined by some inequalities: if the vertex i is below the vertex j in your tree, then you associate to it the inequality xᵢ ≤ xⱼ. So this defines a polytope. You then dilate it by an integer and count the number of integral points inside. It's quite a famous result that this count is, in fact, given by a polynomial in the integer. So this defines a polynomial, called the Ehrhart polynomial. The strict Ehrhart polynomial is the same, but counts the number of points strictly inside your polytope. So it is quite well known that this defines two polynomials, which are related, as I will say later. Just to mention a problem: usually, in the literature, the Ehrhart polynomial at N counts the number of integral points of the dilation of the polytope by N.
Here I have a problem if I do this: it does not work, so I have to do a translation by one. So, for example, for this tree, you have three vertices, which I index by one, two, three, from bottom to top. One is smaller than two, two is smaller than three. So your polytope is defined by 0 ≤ x ≤ y ≤ z ≤ 1; the polytope associated to F is just a simplex. If you want to count the number of integral points in the dilation of F, what you count is the number of integer points (x, y, z) such that 0 ≤ x ≤ y ≤ z ≤ N - 1. And it's not very difficult to count them and to prove that this is N(N+1)(N+2)/6. So this is the Ehrhart polynomial. For the strict Ehrhart polynomial, you count the points strictly inside, which means that you replace the inequalities by strict ones, and N - 1 by N + 1. So this counts things like this, and it's not difficult to prove that this is this polynomial, which, by the way, is in fact φ₁ of F evaluated at N. You can do the same thing for this tree. Here you can try to draw the polytope: it is a pyramid with a square base. You can count the number of points inside, which is this, and so this is the Ehrhart polynomial and the strict Ehrhart polynomial, which is again φ₁ of F evaluated at N. And in fact, this is exactly the statement: the unique morphism compatible with the product and the two coproducts is the strict Ehrhart polynomial. Okay, so if you look at this, you can observe that the strict Ehrhart and the Ehrhart polynomials are very similar; more or less the same coefficients, things like this. And in fact, you can prove that there is another morphism from the Connes–Kreimer algebra to polynomials which is compatible with the product and the two coproducts. It is not the Ehrhart polynomial itself, but more or less it.
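The two counts for the three-vertex chain, and the reciprocity between them, can be checked by brute force. The conventions below (values 0..N-1 for the weak count, strict inequalities with the shifted bound for the strict one) are my reading of the shifted dilation described in the talk:

```python
from itertools import product

def ehrhart_chain3(N):
    # Weak count: integer points with 0 <= x <= y <= z <= N - 1.
    return sum(1 for x, y, z in product(range(N), repeat=3) if x <= y <= z)

def strict_ehrhart_chain3(N):
    # Strict count: integer points with 0 < x < y < z < N + 1.
    return sum(1 for x, y, z in product(range(1, N + 1), repeat=3)
               if x < y < z)

for N in range(1, 8):
    # The weak count is the polynomial N(N+1)(N+2)/6 from the talk ...
    assert ehrhart_chain3(N) == N * (N + 1) * (N + 2) // 6
    # ... and Ehrhart reciprocity gives the strict count as
    # (-1)^3 * ehr(-N) = N(N-1)(N-2)/6.
    assert strict_ehrhart_chain3(N) == N * (N - 1) * (N - 2) // 6
print("reciprocity checked for N = 1..7")
```

Substituting -N into N(N+1)(N+2)/6 and flipping the sign by (-1)³ indeed gives N(N-1)(N-2)/6, which is the algebraic content of the reciprocity discussed next.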
You just replace X by -X, and you have to correct the sign by multiplying by a power of -1. It's not very difficult to prove, combinatorially, that this is also compatible with the product and both coproducts. But there is only one morphism compatible with the product and the two coproducts, so this morphism φ is the same as φ₁. And what you obtain is an algebraic proof of the reciprocity principle for Ehrhart polynomials: the strict Ehrhart polynomial and the Ehrhart polynomial are really closely related, just by replacing X by -X and adjusting the sign. So here, as an application, you have a proof of Ehrhart reciprocity more or less without any real combinatorial work; the usual proofs, which use Möbius inversion in some posets, are really combinatorial. Can I ask a quick question here? Sorry. You exemplified this in the case of these polytopes you get from trees via this poset construction. Can you also use a similar kind of argument to get this Ehrhart polynomial, or the reciprocity result, for arbitrary polytopes? I did not manage to do it. In fact, I would need the help of a good algebraic structure on polytopes. I have a product, which is just the usual product of polytopes, but I don't find the coproducts. In fact, those coming from forests are very, very special; you can see they are defined by very particular inequalities, which are nice. If you take arbitrary polytopes, I don't know how to cut arbitrary polytopes. Yeah, thank you. So let's do it for graphs now. For graphs, I can apply the formula for φ₁ which I gave before. So what does this mean? I have to compute all the iterated coproducts of graphs, which means that I have to cut the graphs into any number of parts I want. The iterated coproducts just mean that I cut the graph into a lot of parts, and then, on each part, I apply the counit of the second coproduct.
So the counit is one if the part has no edge, and zero otherwise. This means that in φ₁ appear only the decompositions of your graph into parts with no edges. A decomposition of the graph is the same thing as a coloration of the graph: a coloration just associates a color, usually a number, to every vertex of the graph. So a partition of the graph is just the same as a coloration, and the colorations occurring in my φ₁ are the colorations which are called valid: if two vertices have the same color, they should not be neighbors, since the parts have no edges. So in φ₁ of a graph, I take into account only valid colorations. This is a polynomial which counts them, and it is called the chromatic polynomial. So in fact, what I find for graphs is that the unique morphism compatible with the product and both coproducts is the chromatic polynomial, which is perhaps not a big surprise: if you work with graphs, you know that the chromatic polynomial is an essential tool. So this is perhaps an explanation of why it is so important: it is the unique polynomial invariant of graphs compatible with all these structures of contraction and extraction of subgraphs. Okay, so, another application. I'm going back to a theoretical result: in fact, I'm looking for the antipode. In all my examples, for the big Δ this is a Hopf algebra, so it has an antipode, and for the second coproduct it is just a bialgebra, so no antipode. In fact, I can prove that if I want to compute the antipode of A, I just have to compute the inverse of a special character, which is the counit of the second coproduct. The counit of the second coproduct is the unit for the convolution associated with the second coproduct; but for the first coproduct, it is just a character, with no special property. Maybe it is invertible.
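Counting valid (proper) colorations directly gives the values of the chromatic polynomial. A minimal brute-force sketch, with a graph encoded as (vertex string, edge list), an assumption of mine:

```python
from itertools import product

def chromatic_count(vertices, edges, k):
    """Number of valid colorations of the graph with k colors:
    assign each vertex a color and keep the assignments where
    no edge is monochromatic."""
    vs = sorted(vertices)
    total = 0
    for colors in product(range(k), repeat=len(vs)):
        c = dict(zip(vs, colors))
        if all(c[u] != c[v] for u, v in edges):
            total += 1
    return total

path = ("abc", [("a", "b"), ("b", "c")])                 # path a - b - c
triangle = ("abc", [("a", "b"), ("b", "c"), ("a", "c")])  # triangle

# Path: k(k-1)^2 ; triangle: k(k-1)(k-2).
print(chromatic_count(*path, 3))       # 12
print(chromatic_count(*triangle, 3))   # 6
```

Because the counts agree with a fixed polynomial at every integer k, they determine the chromatic polynomial itself, the invariant φ₁ in this example.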
If it is, well, you know that A is a Hopf algebra and you have a nice formula for the antipode: just apply the second coproduct and then the character on the first component. So for double bialgebras, if you want to compute the antipode, you just have to compute a special character. There is something more. In fact, maybe you observed that all my examples of double bialgebras are commutative. What you obtain here, like this, is that S is a composition of algebra morphisms. So in a double bialgebra, the antipode of A is an algebra morphism; but usually it is an anti-algebra morphism. So double bialgebras are special bialgebras whose antipode is both an algebra and an anti-algebra morphism, which means that, essentially, if you have a double bialgebra, it should be commutative. This is the reason why all my examples are commutative: you cannot really obtain a double bialgebra which is not commutative, because the antipode should be an algebra morphism. You can do more. You have to compute the inverse of the character, which is not so obvious, in fact; you can do it inductively, but it is not so obvious. But if you know how to compute φ₁, it is very easy to find the inverse of the character: just take an element a of your algebra, compute φ₁ of a, which is a polynomial, and evaluate it at minus one. This is really the character α at a. So, for example, for rooted forests, if you want to compute the antipode, you need to compute the Ehrhart polynomials of the forests evaluated at minus one. And it's very easy: it is just a power of minus one. So α of a forest is just a power of minus one, and you obtain this formula for the antipode, which was proved by Connes and Kreimer, not in this way at all; theirs is an inductive proof. Here you obtain this, something like this, with no induction. It's more interesting for graphs. For graphs, for a long time, the antipode was not known.
Not really: you can compute it inductively, of course, but it was not so clear. Last year, in fact, a formula was proved by Benedetti, Bergeron and Machacek (I'm not totally sure of the pronunciation), with a combinatorial method which was quite complicated: there is a Möbius inversion, things like this. And, curiously, the number of acyclic orientations appears. In fact, this is obtained with my method like this: you have to compute φ₁ of the graph evaluated at minus one. φ₁ of a graph is the chromatic polynomial, and it is quite a famous result in graph theory that the chromatic polynomial evaluated at minus one counts, up to sign, the number of acyclic orientations. To prove this is not so difficult: it is a combinatorial proof by induction on the number of vertices. So what you obtain is the formula of Benedetti, Bergeron and Machacek with, more or less, no more combinatorics: it is just the chromatic polynomial evaluated at minus one. You can do better, with the chromatic character. In fact, there is a very simple Hopf algebra morphism from graphs to polynomials, which just sends a graph G to X to the power of the number of vertices of G. It is really easy to show that this is a Hopf algebra morphism; of course, it is only compatible with the first coproduct, not with the second one. The unique morphism compatible with the second one is φ₁, the chromatic polynomial. And I mentioned before that any Hopf algebra morphism from graphs to polynomials can be obtained from the chromatic polynomial by the action of a character. So we can write that φ₀, this very simple invariant, is obtained from the chromatic polynomial by the action of a character, which I denote by λ, and which is very easy to compute: it just sends any graph to one. So λ is a very simple character; what is more interesting is that it is invertible for the second convolution.
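Stanley's theorem, that (-1)ⁿ times the chromatic polynomial at -1 counts acyclic orientations, can be verified by brute force on a small graph. The sketch below is my own: it evaluates the chromatic polynomial via Whitney's rank expansion (a standard formula, not the talk's method) and counts acyclic orientations by repeatedly removing sinks.

```python
from itertools import product, combinations

def n_components(vertices, edge_subset):
    # Connected components of (V, A) by flood fill.
    adj = {v: set() for v in vertices}
    for u, v in edge_subset:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), 0
    for v in vertices:
        if v not in seen:
            comps += 1
            todo = [v]
            while todo:
                w = todo.pop()
                if w not in seen:
                    seen.add(w)
                    todo.extend(adj[w])
    return comps

def chromatic_at(vertices, edges, k):
    # Whitney: chi(k) = sum over edge subsets A of (-1)^|A| k^{c(V,A)}.
    return sum((-1) ** len(A) * k ** n_components(vertices, A)
               for r in range(len(edges) + 1)
               for A in combinations(edges, r))

def acyclic_orientations(vertices, edges):
    # Orient every edge both ways; an orientation is acyclic iff
    # repeatedly removing sinks empties the vertex set.
    count = 0
    for dirs in product((0, 1), repeat=len(edges)):
        arcs = {(e[d], e[1 - d]) for e, d in zip(edges, dirs)}
        vs = set(vertices)
        while True:
            sinks = {v for v in vs if not any(a[0] == v for a in arcs)}
            if not sinks:
                break
            vs -= sinks
            arcs = {a for a in arcs if a[1] not in sinks}
        if not vs:
            count += 1
    return count

triangle = ("abc", [("a", "b"), ("b", "c"), ("a", "c")])
n = len(triangle[0])
# Stanley: (-1)^n * chi(-1) = number of acyclic orientations.
print(acyclic_orientations(*triangle))            # 6
print((-1) ** n * chromatic_at(*triangle, -1))    # 6
```

For the triangle, 6 of the 8 orientations are acyclic, and the chromatic polynomial k(k-1)(k-2) evaluated at -1 gives -6, matching up to the sign (-1)³.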
So this means that you obtain the chromatic polynomial from this very simple morphism just by the action of a certain character, which is not so easy to find now. So you obtain a formula for the chromatic polynomial: the chromatic polynomial is a sum over all possible contractions of your graph of X to the power of the number of classes of the contraction, times a scalar, which can be inductively computed. So this character, the chromatic character: you can compute its values just by induction, and you can observe on this example that it is never zero. The chromatic character never vanishes, and its sign only depends on the number of vertices: with one vertex it is positive, with two vertices negative, with three vertices positive, with four vertices negative, and you can prove this by induction. So, just by something like this (I don't have much more time, so let me cut it a little short), and with this formula, you can prove that the coefficients of the chromatic polynomial are alternating in sign, which is a result proved by Rota in the seventies, I think, with complicated combinatorial methods. Here you obtain it with just a small combinatorial tool, which is the contraction and extraction of edges, and this formula relating the chromatic polynomial to the chromatic character. Okay. I think I still have five minutes. So, for the moment, I talked about morphisms with values in polynomials. Now I'm going to talk about morphisms with values in the quasi-symmetric functions. I mentioned before that, by Aguiar, Bergeron and Sottile, there are a lot of morphisms to QSym, because it is a terminal object. If I want a morphism to QSym, I just have to choose a character, and then I obtain a homogeneous morphism compatible with the product and the first coproducts. And I have a formula for this, which is similar to the formula for the polynomial invariant.
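The sign alternation of the chromatic coefficients is easy to observe symbolically. The sketch below computes the chromatic polynomial by deletion-contraction (using the contraction and extraction of edges mentioned above, though as a plain recursion rather than the character formula of the talk) and checks Rota's alternation on the triangle:

```python
import sympy as sp

x = sp.symbols('x')

def chromatic(vertices, edges):
    """Chromatic polynomial by deletion-contraction:
    chi(G) = chi(G - e) - chi(G / e), and the edgeless graph
    on n vertices gives x**n."""
    edges = {tuple(sorted(e)) for e in edges}
    if not edges:
        return x ** len(vertices)
    u, v = next(iter(edges))
    deleted = edges - {(u, v)}
    # Contract the edge: merge v into u, dropping loops and multi-edges.
    contracted = {tuple(sorted((u if a == v else a, u if b == v else b)))
                  for a, b in deleted}
    contracted = {e for e in contracted if e[0] != e[1]}
    return chromatic(vertices, deleted) - chromatic(vertices - {v}, contracted)

P = sp.expand(chromatic(frozenset("abc"),
                        {("a", "b"), ("b", "c"), ("a", "c")}))
coeffs = sp.Poly(P, x).all_coeffs()
print(coeffs)   # [1, -3, 2, 0]: the signs alternate
assert all(c == 0 or sp.sign(c) == (-1) ** i for i, c in enumerate(coeffs))
```

The triangle gives x³ - 3x² + 2x, whose nonzero coefficients alternate in sign, as Rota's theorem predicts in general.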
If I want to construct a morphism from A to QSym, I take all the iterated coproducts, I apply some projections onto the homogeneous parts, then I apply the counit on all the parts, and I take into account the degrees of the parts with a composition, something like this. So what I can prove is this: if I'm looking for a morphism from A to QSym compatible with both bialgebra structures, that is, with the product and the two coproducts, this is the only possibility; if something compatible with the product and both coproducts exists, it has to be this one. And, unhappily, it does not always work. I need another condition, a condition on the gradation: in fact, I need the gradation to be, more or less, respected on the first component of the second coproduct. If this technical condition is not satisfied, then φ₁ is not compatible with the second coproduct, and so I won't have any morphism compatible with both structures. And this is what happens for forests. My condition means that in the second coproduct I should obtain only terms with three vertices on the left, and this is not the case: there are some trees which should have three vertices, if I want to be compatible with the second coproduct, and, unhappily, they have only two vertices. So in this case, I won't have any morphism compatible with both structures from trees to QSym. So I have to cheat a little bit: I replace forests by decorated forests. This means that on every vertex of my forest I add a decoration, which is an integer. And when I do a contraction (for example, contracting the edge between a and b), I don't forget a and b: I replace the decoration of the new vertex by a + b. So now the coproduct is homogeneous: here the weight is a + b + c, and on the left and also on the right, I only obtain trees with weight a + b + c.
So I can just group them together. And now, with this trick, the technical condition on the second co-product, which I modified, is satisfied. So this means that I obtain for free a Hopf algebra morphism from decorated forests to QSym, which is homogeneous for this gradation by the weight and compatible with all the structures. And this is a generalization of the Ehrhart polynomial, which I call the Ehrhart quasi-symmetric function; it is something like this. Okay. And the same for graphs: what I obtain is a generalization of the chromatic polynomial, which is called the chromatic quasi-symmetric function. But in fact it is not only quasi-symmetric, it is symmetric, for a reason of co-commutativity. And this is an object which was already known by combinatorialists and graph theorists: they knew it, but they did not know that it was compatible with the second structure on the quasi-symmetric functions. And a last word: I said before that double bi-algebras do not behave very well with non-commutative bi-algebras. In fact, you can go non-commutative and replace forests by indexed trees or planar trees and things like this, and you can also define two co-products. Okay, there are two co-products; they are no longer compatible as before, but they still exist, so why not? And you can generalize the chromatic series or Ehrhart series: all the morphisms I mentioned before still exist in a non-commutative way, but you can no longer use the formalism of double bi-algebras, because there are no more double bi-algebras. So if you want to do this, if you want to explain it, you have to work in another category, which is in some sense bigger: the category of species. In fact, all my objects come from species: they are images by functors of objects which exist in the category of species.
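The chromatic symmetric function mentioned above can be illustrated by brute force on a small graph. This is Stanley's invariant truncated to finitely many variables (the representation as a Counter of exponent vectors is my own sketch): summing x_{c(v)} over all proper colorings produces coefficients that depend only on the multiset of exponents, which is the symmetry the talk attributes to co-commutativity.

```python
from itertools import product
from collections import Counter

def chromatic_symmetric_function(n, edges, k):
    """Truncation of the chromatic symmetric function X_G to k variables:
    sum over proper colorings c of x_{c(0)} * ... * x_{c(n-1)},
    recorded as a Counter keyed by exponent vectors (e_1, ..., e_k)."""
    terms = Counter()
    for c in product(range(k), repeat=n):
        if all(c[u] != c[v] for u, v in edges):   # proper coloring
            expo = [0] * k
            for color in c:
                expo[color] += 1
            terms[tuple(expo)] += 1
    return terms

# Path on 3 vertices with 3 colors: the coefficient of a monomial only
# depends on the partition of its exponents, so X_G is symmetric.
X = chromatic_symmetric_function(3, [(0, 1), (1, 2)], 3)
```

Specializing all k variables to 1 recovers the number of proper k-colorings, i.e. the chromatic polynomial evaluated at k.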
In the category of species, there are two functors, which give on one hand the commutative objects, which are double bi-algebras, and on the other hand the non-commutative objects, which are not double bi-algebras, but which are images of double bi-algebras in the category of species. So I will stop here, and thank you very much for your attention.

Thank you very much, Loïc. Thank you. Are there questions? Okay. Yes.

Did you actually upgrade all your formalism to the species setting?

Yes. In fact, in the species setting, you can do the same. So QSym and K[X] are replaced by the same object, which is the species of compositions, which is also a double bi-algebra, I think, like this. And it turns out that you have two functors, the Fock functor and the full Fock functor: one will send you to commutative objects, and the other one to non-commutative objects. So in fact, the double bi-algebras in the setting of species are commutative in the species setting — it is a sort of strong commutativity — but with the second functor they are no longer commutative in the usual vector-space setting. So you can do all of this in the setting of species. More or less, the proofs are the same; there are some technicalities in some sense, but they are the same ideas.

If I may, I'd like to make a comment or ask a question. Please. Thank you, Loïc, for your nice talk. You were wondering about co-products on polytopes. There is one on cones, and polytopes can be seen as intersections of cones, and that is the way formulae on polytopes of the kind you were talking about can be derived. So I think it should be possible, maybe, going that way, following the path of Barvinok, Berline, and Vergne; I wonder whether it is possible.

Okay, so I can say that I found some co-products on polytopes, but more or less they are useless.
They are stupid co-products, and you do not recover the Ehrhart polynomial as an invariant; you obtain stupid things, just by sending, for example, a polytope to its number of vertices or to its dimension, something like this. Yeah — I just found stupid co-products, not interesting ones.

Okay, the one I am thinking of is very geometric, and it serves similar purposes: it is for counting integer points in cones, so it is made for that. And it is implicit in people's work in toric geometry. So yes, it could be helpful, maybe. It is a geometric one.

Okay, so it should be better than what I found. I will look at it.

There is a question from Yannick Vargas in the Q&A, and I have allowed him to unmute, but he asks: is there a polytope associated to double posets, in the same way you defined polytopes associated to forests?

Yes, in fact, everything I did for forests can be done for posets or topologies. And you also obtain some Ehrhart polynomials with the same properties. I just restricted myself to forests because they were easier to describe than posets or topologies.

David, do you want to ask your question?

Yes, I wanted to make a comment, because it is directly related to the title of this conference, on algebraic structures and quantum field theory. Now, Loïc, you know very well Dirk's work on the Hopf algebra of renormalization, but I hope you are also aware of his recent work on the co-action associated with the monodromy of the functions we get from Feynman diagrams, which is associated with cutting lines. It was an interesting question as to how that relates to the co-action on multiple polylogarithms that we obtain, and functions beyond that. So there is, facing us at present in quantum field theory, a compatibility question; it might not be directly related to your talk.
And that is: what about calculations in which we both cut lines to discover analytic structure, but also have sub-divergences that we have to renormalize?

Shall I? David, yeah — that is precisely the right question, David, but I think there will be an answer pretty soon. And it has a lot to do with Loïc's co-interacting bi-algebras. Excellent, thank you. Okay, thank you.

I think there is also a simpler connection: when we were looking at the general tree Feynman rules, then having the co-structure on trees, the co-interacting structure on trees, picks out a particular choice for the lower-order terms. So that is a more trivial observation than what David and Dirk are getting at, but it is another connection to the quantum field theory situation. Agreed.

I see Michi has his hand up. Yes, thanks, and thanks, Loïc, for the nice talk. I also have a question regarding the co-products on polytopes: are you aware of the newer work by Aguiar, together with Ardila, on the Hopf monoid structure of generalized permutahedra, which are polytopes? On them, you can define this co-product. So is this too specific for your purposes? Because I think they also at least asked some questions about Ehrhart polynomials of these polytopes there.

Yes, but the permutahedra are very specific polytopes — in this case, they come from posets. So it is easier to define some co-products on these sorts of objects, which really have a strong combinatorial structure on them. If you take arbitrary polytopes, it is not the case.

Just a second. So the problem is really the generalization, then; this only works for these specific polytopes?

Yes, I think that for a special family of polytopes with strong structure, you can define a co-product which should give you the Ehrhart polynomial or things like this. All right.
For arbitrary polytopes, in fact, this co-product is quite unclear: you can see it easily on the posets, where you just cut, or things like this, but on the polytope, geometrically, it is not so clear for me. It involves faces or things like this, which is really something more complicated. Okay. So I don't know exactly what it is geometrically. Okay, but okay, I see. Thank you.

The problem for me is that I can only understand polytopes in dimension three, no more; and that gives only a few examples, three or four, which I can manage. So it is not enough for me to understand what happens for bigger polytopes. So I don't know exactly what the co-product is for polytopes. All right.

In the interest of time, why don't we thank Loïc again now?