Okay, so first of all I would like to thank everybody, and especially the organizers, for this beautiful conference despite the pandemic; I could not have hoped for a better conference at this moment. And of course I should greet Dirk for his birthday, but I would also like to thank him for what he did, especially this Hopf algebra of graphs and trees, because, as Raimar said, we started working on that maybe more than twenty years ago. I remember very well when, as a PhD student, I first met Dirk. I do not remember the exact year, maybe 1997; he was invited to Marseille, not by our group but by Robert Coquereaux, to talk about knots and Feynman graphs. I remember very well seeing him at the bus stop and asking him whether he knew how to get to the institute, because as a young PhD student I wanted to help him; I thought this was a necessary service I should offer to Dirk. That was my first encounter. After that, he proposed the Hopf algebra of Feynman graphs, and this had a very profound influence on the topics I have been working on, because it shed a new light, for me, on renormalization in various frameworks. In particular, as Anna said yesterday, the fact that you can summarize the BPHZ formulation of renormalization in a single equation, a Birkhoff-type factorization, is certainly one of the most beautiful equations you can write about quantum field theory. So I am very indebted to Dirk for all this, and I have kept working from time to time on these Hopf algebras since then. In fact, these algebras have at least three aspects. One of them is combinatorial.
They can be used to develop power series indexed by graphs instead of integers; you can then substitute one power series into another, and this leads directly to the coproduct of the Hopf algebra: the coefficients of the power series form a character of the Hopf algebra, and the multiplication of these characters is just reading off the coefficients of the power series obtained by composing two power series. So this is the combinatorial aspect. But there is also a group-theoretical aspect behind it, because the characters form a group, and it is a kind of Lie group, because there is a Lie algebra: you can also think of it infinitesimally, which is very useful, and also, in my opinion, a very deep aspect, as was noticed by Alain during his talk yesterday. And finally, perhaps the most interesting aspect, though the least studied, at least by myself, is the analytic aspect, because this Birkhoff decomposition in particular hides an important analytic fact: you can factorize a function of some complex variable z into an analytic function of z and an analytic function of 1/z. This is a Birkhoff-type factorization. In my talk I will mostly focus on the combinatorial point, so it will certainly be less demanding than other talks, because everything is quite visual; there are no really hardcore computations. The summary of this talk is as follows. I will start with a gentle reminder of the Hopf algebras of graphs and trees. Then I will introduce random tensors, because among all the possible occurrences of Hopf algebras in various fields and various models in physics, they appear very naturally in random tensors, and this is what I would like to explain; it will probably be the main part of my talk. Then there is also topological recursion. Topological recursion appeared only a bit more than a decade ago; I think it goes back to 2007, or 2005, I don't remember exactly.
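As a small aside, the analytic side of the Birkhoff-type factorization just mentioned — splitting a function of z into a part analytic in z and a part analytic in 1/z — can be illustrated in its simplest additive form. This toy sketch (mine, not from the talk) represents a Laurent series as a dictionary mapping powers of z to coefficients and separates the pole part from the regular part; for a single graph without subdivergences this is exactly minimal subtraction, while the genuine Connes-Kreimer factorization is convolutive rather than a plain split.

```python
# Toy illustration, assuming a Laurent series is stored as {power: coefficient}.
# The pole part (powers < 0) is analytic in 1/z and vanishes at infinity;
# the rest is analytic in z.  This is the additive shadow of the Birkhoff-type
# factorization; graphs with subdivergences need the full convolution product.

def birkhoff_split(laurent):
    """Split a Laurent series {power: coeff} into (pole part, regular part)."""
    phi_minus = {k: c for k, c in laurent.items() if k < 0}
    phi_plus = {k: c for k, c in laurent.items() if k >= 0}
    return phi_minus, phi_plus

# A one-loop-like amplitude: a simple pole in z plus a finite part.
amp = {-1: 1.0, 0: 0.577, 1: -0.25}
pole, finite = birkhoff_split(amp)
```

Here `birkhoff_split` and the sample coefficients are purely illustrative names and values; the point is only the separation of the two analyticity regimes.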
It was proposed by Bertrand Eynard and Nicolas Orantin. It is an extremely powerful technique, and in Raimar's talk we have seen just how powerful. It allows you to compute certain quantities order by order; in the original setup these were correlation functions of random matrices. It has since invaded many areas of mathematics and physics: think for instance of the work of Mirzakhani on the volumes of moduli spaces, which has recently been used in Jackiw-Teitelboim (JT) gravity. That approach is somewhat complicated, because it really relies on the analytic aspects of the theory. But recently Kontsevich and Soibelman have found a more algebraic approach, which is easier to explain. Of course, when you do the actual computations you have to use the analytic properties, and even in the paper of Kontsevich and Soibelman the analytic approach is used to treat all the examples, but you can formulate the recursion in a relatively simple way, as a kind of WKB expansion. Here I will just show you how the combinatorial aspects of the Connes-Kreimer Hopf algebra appear; to summarize, this will be more combinatorial than algebraic or analytic. And I hope I will be able to convince you of the underlying relevance of the Hopf algebra in the cases I will consider. But there are many other occurrences; let me just quote three of them on which I have worked myself: Polchinski's exact renormalization group equation, the multi-scale expansion in quantum field theory, and graph polynomials. For instance, there is a very famous graph polynomial, the Tutte polynomial, which assigns a polynomial value to every graph, and it can be characterized by a relation of deletion and contraction of an edge.
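The deletion-contraction characterization of the Tutte polynomial just mentioned is short enough to sketch directly. The following is an illustrative toy (not from the talk): it evaluates the Tutte polynomial of a small multigraph numerically by recursion — a loop contributes a factor y, a bridge a factor x, and any other edge is handled by deletion plus contraction.

```python
def tutte(edges, x, y):
    """Tutte polynomial T(G; x, y) of a multigraph, given as a list of
    (u, v) edges, evaluated by deletion-contraction recursion."""
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                      # loop: T = y * T(G - e)
        return y * tutte(rest, x, y)
    if is_bridge(edges, (u, v)):    # bridge: T = x * T(G / e)
        return x * tutte(contract(rest, u, v), x, y)
    # generic edge: T = T(G - e) + T(G / e)
    return tutte(rest, x, y) + tutte(contract(rest, u, v), x, y)

def contract(edges, u, v):
    """Contract v into u: relabel v as u in the remaining edges."""
    repl = lambda w: u if w == v else w
    return [(repl(a), repl(b)) for a, b in edges]

def is_bridge(edges, e):
    """e = (u, v) is a bridge iff removing one copy of it disconnects u from v."""
    u, v = e
    rest = list(edges)
    rest.remove(e)
    seen, stack = {u}, [u]
    while stack:                    # breadth-less graph search from u
        w = stack.pop()
        for a, b in rest:
            for nxt, cur in ((b, a), (a, b)):
                if cur == w and nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return v not in seen

# Triangle: T = x^2 + x + y, so T(1, 1) counts its spanning trees.
triangle = [(0, 1), (1, 2), (2, 0)]
n_spanning_trees = tutte(triangle, 1, 1)
```

The edge-list representation and function names are my own choices; specializations such as T(1, 1) (spanning trees) or T(2, 1) (spanning forests) give quick sanity checks.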
And those relations can be fruitfully formulated in terms of Hopf algebras, because the map assigning to each graph its polynomial value, which obeys the deletion-contraction relation, in fact defines a character of the Hopf algebra. The deletion-contraction relation is then translated into a differential equation for this character, and that differential equation can be solved as a product of exponentials. Again, I think this is a nice and beautiful thing, because it allows you to treat the graph polynomial in a single step, just as a map from graphs to polynomials. I will not talk more about that, because I have decided to focus on random tensors, a subject of my personal interest, but there are references, and of course there are many other occurrences I have not quoted here. Okay, so let us continue. I will first introduce the Hopf algebra of graphs. It is the free commutative algebra generated by classes of connected one-particle-irreducible graphs, and it carries a coproduct defined as follows. The coproduct Δ(γ) always contains γ ⊗ 1 + 1 ⊗ γ. Then you have a sum, over all disjoint proper connected subgraphs, of the product of those subgraphs — which can be considered as a single, possibly disconnected, subgraph — tensored with the graph γ in which all those subgraphs have been reduced to single vertices. So in this coproduct the connected subgraphs have been extracted from the graph γ on one side and have been contracted to single vertices on the other side.
Of course, for this to be consistent: if you contract a subgraph with a certain number of external legs, say four external legs, to a single vertex, then your theory must contain a four-valent vertex. Now let us see how it works on an example. Suppose I take φ³ theory, this simple example here, where you have only three-valent vertices, and consider this three-loop correction to the self-energy. You always have the terms γ ⊗ 1 + 1 ⊗ γ. Then you extract a subgraph γ₁, for instance this graph, contract it to a single vertex (I did not draw the resulting vertex here), and you get this formula; you can do it in two different ways, either by taking this subgraph or this one, and that gives the factor of 2 here. You can also extract the two subgraphs separately at the same time, and then you get the square of the one-loop subgraph tensored with the graph where both have been contracted. But since the generators are supposed to be one-particle-irreducible, you do not extract the whole subgraph made of one of these followed by the other. This coproduct in fact relies on a hierarchy of subgraphs, which is well captured by a tree: you can draw a rooted tree whose root is the overall graph and whose two leaves are the two subgraphs, and if there were yet another subgraph nested inside one of them, you would attach another leaf below it, and so on; then the tree will be more complicated.
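This hierarchy of nested subgraphs encoded in a rooted tree can be made concrete. Here is a small illustrative sketch (mine, not from the talk): a rooted tree is stored as nested tuples (a leaf is the empty tuple), and we enumerate its admissible cuts — edge removals with at most one cut edge on any root-to-leaf path — producing the pairs of pruned forest and trunk that enter the tree coproduct.

```python
from itertools import product

def admissible_cuts(t):
    """All admissible cuts of a rooted tree (nested tuples, leaf = ()).
    Each result is (pruned, trunk): 'pruned' is the forest that falls off,
    'trunk' is the part keeping the original root.  For every edge we either
    cut it (the whole child subtree is pruned, so no further cut below it)
    or keep it and recurse -- the 'one cut per root-to-leaf path' rule.
    The empty cut (pruned = ()) is included; the conventional t (x) 1 term
    of the coproduct is added separately."""
    per_child = []
    for child in t:
        opts = [((child,), None)]             # cut the edge to this child
        opts += list(admissible_cuts(child))  # or keep it and cut inside
        per_child.append(opts)
    out = []
    for combo in product(*per_child):
        pruned = tuple(s for p, _ in combo for s in p)
        trunk = tuple(tr for _, tr in combo if tr is not None)
        out.append((pruned, trunk))
    return out

def n_vertices(t):
    return 1 + sum(n_vertices(c) for c in t)

# The 4-vertex example discussed next: root -> middle vertex -> two leaves.
t = (((), ()),)
cuts = admissible_cuts(t)
```

For this tree one finds five cuts (the empty cut, the two single-leaf cuts, both leaves at once, and the edge below the root), and every cut conserves the total number of vertices between pruned forest and trunk. The nested-tuple encoding is just one convenient choice.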
Okay, so this brings us to the Hopf algebra of rooted trees, which is the second example of interest to us. The Hopf algebra of rooted trees is again a free commutative algebra, generated by rooted trees, with a coproduct which always has the same form: t ⊗ 1 + 1 ⊗ t plus a sum over admissible cuts. An admissible cut is a removal of edges such that every path from the root to a leaf meets at most one removed edge. The tree is then separated into several parts: one part, called Rᶜ(t), contains the initial root, and then there are several pruned subtrees, the trees that fall off once you have cut. Again, let me take an example. The coproduct of this tree is the tree tensor one, plus one tensor the tree; then, if you cut either of the two edges just above the two leaves, you get the first nontrivial term; you can also cut the two of them separately, and then you get the square of the single-vertex tree tensored with the two-vertex tree; and you can cut the edge just below the root. Of course you do not cut two edges on the same path, say this edge together with the edge above it, because in that case you would not have an admissible cut. Now, these rooted trees are almost everywhere in our perturbative computations, because they obey a universal property. This universal property is mainly due to the fact that a given tree can be constructed by gluing several subtrees to a root, and this operation is important because it enables all the inductive constructions on trees. Whenever you can formulate a problem using such an inductive construction — you start with an object of large size and you can reduce the statement you want to prove to a
statement about several objects of smaller size, which can in a sense be glued together onto an elementary object to give the larger object you are studying — then you can formulate your problem using rooted trees. Now, the rooted trees and the graphs are particular cases of graded connected bialgebras, in fact commutative ones. These are bialgebras which are graded as algebras: the bialgebra is a direct sum indexed by the integers, the product obeys the standard grading rule, and the coproduct obeys the dual grading rule. It is important to have an H₀ which is proportional to ℂ, with the counit vanishing outside H₀. Graded connected commutative bialgebras of this type are in fact Hopf algebras, because you can define the antipode by recursion. In the case of graphs the grading may be the number of edges or the number of cycles; in the case of trees the grading is the number of vertices. So both come equipped with an antipode. Moreover, if I consider the characters of such a Hopf algebra — the characters are algebra morphisms from the Hopf algebra H to any commutative ring, which means in particular that they obey this multiplicativity condition — then they form a group for the convolution product. The convolution product α ⋆ β is the multiplication of α and β evaluated on the two sides of the coproduct, m ∘ (α ⊗ β) ∘ Δ; the identity of the group is the counit, and the inverse is obtained by composing with the antipode. This group has the interesting property of being a Lie group, in the sense that you can define a notion of infinitesimal character, δ(γγ′) = δ(γ)ε(γ′) + ε(γ)δ(γ′); these infinitesimal characters form a Lie algebra, and it is precisely the Lie algebra of the group G, because every character α can be written
as the exponential of an infinitesimal character δ, and conversely you can get δ as the logarithm of α. All of this is perfectly well defined as a formal power series because of the grading: at each step there are only finitely many additional multiplications to perform. Okay, so let us come to renormalization and the characters of the Hopf algebra of graphs in perturbative quantum field theory. In perturbative quantum field theory we are interested in computing correlation functions, or Green functions, which are the expectation values of products of fields, and they expand over Feynman graphs. If you compute the expectation of a product of n fields, then you have Feynman graphs with n external legs, and those n external legs are decorated by the positions x₁, …, xₙ at which the fields are evaluated, or, in Fourier transform, by the momenta entering the Feynman graph through the external legs. As you all know, these amplitudes are in most cases divergent, so they cannot be properly defined just by applying the Feynman rules; you need to regularize these divergences. One very popular way is dimensional regularization: you evaluate any of these amplitudes using the Feynman rules in a complex dimension, which I will denote d − z, where d is the physical dimension and z is the deviation from it. The divergences are cancelled by replacing the action S by a new action, which is the initial action plus some counterterms; once you choose the counterterms suitably, as functions of z and of the renormalization procedure you have chosen, you can write down finite amplitudes for every Feynman graph. Now here comes the main theorem I want to present in this section, a theorem by Connes and Kreimer that goes back to the late '90s. Since you assign a value to every Feynman graph, you have a character of the Hopf algebra: let us write φ
gamma for the regularized amplitude A(γ), which is a Laurent series in z. Let φ₋ be the counterterm character: for the graph γ it is a pole part, though not simply the pole part of the amplitude A(γ) — it is much more complicated. It is some pole part that will cancel the divergence in A(γ), but before cancelling that divergence you first have to cancel the divergences in all the subgraphs. This was already said by Anna yesterday; it is what is called the preparation. Then φ₊(γ) is the renormalized, finite amplitude finally assigned to the graph γ. They are all related by the equation φ₊ = φ₋ ⋆ φ, which just says that you have to do the computation with φ, but inserting not the original bare parameters but the renormalized parameters, taking into account the effect of the counterterms. If you write it differently, multiplying by φ₋⁻¹, you see that the map φ, which is a power series in z and z⁻¹, can be factorized into a power series in z⁻¹ and a power series in z. This is, I guess, well known to everybody here, but I wanted to present it because this is precisely the work of Dirk which I alluded to in the beginning, and which made renormalization as concise as possible through this single equation. Moreover, it also started to produce analogies with many different fields, and I hope I will be able to present one of them later. Now something similar, but different, can be said about rooted trees. The characters of rooted trees can also be interpreted as a kind of power series: these are Butcher's B-series. They go back to Butcher, a mathematician from New Zealand if I remember well, who was studying numerical methods for differential equations; in particular he noticed that you can compose and invert the methods, so there is a group, and this group is
precisely the group of characters of the Hopf algebra of rooted trees. Let me just explain a few elementary things here. If I have a rooted tree t and a nonlinear operator X — think of a smooth map from one Banach space to another — then this map can be raised to the power of the tree. What does that mean? First you divide by the symmetry factor of the tree, the number of permutations of edges and vertices that preserve it. My trees are not embedded trees, so I do not care in which order the vertices are drawn; I can permute all the leaves and all the edges, and only the root is distinguished. Apart from this purely combinatorial factor, which is just a number like 1/2 or 1/3 or one over a factorial, the tree is assigned a combination of differentials of the operator X. For instance, I start from the root: there is one outgoing edge, so I differentiate once; I arrive at the next vertex, which has two outgoing edges, so I differentiate twice, X″; and at the end I arrive at the two leaves, where I do not differentiate, because there are no outgoing edges. Edges are always oriented from the root towards the leaves, and I compose all these operators. This X′, X″ notation is abstract, based on differential calculus, but you can have a more concrete version using partial derivatives: Xⁱ for the root, differentiated with respect to xʲ for the outgoing edge; then Xʲ for the second vertex, with two outgoing edges, so I differentiate twice and finally contract with the leaves. This is very convenient, because then you can compose power series of nonlinear operators indexed by trees, and the composition exactly reproduces the convolution product on the characters of rooted trees — this is equation 11. And of course there is a
very simple example: if the operator is linear — if X is really a linear map — then there are no second-order differentials, hence no branching of the tree, and you have just linear trees. In the general case this is always evaluated at some point x, so that each term is a map from E to E, from the Banach space to itself; it is a function of x, and this is what makes composition possible. Among the simplest possible series there is the geometric series. Suppose you want to solve the equation x = x₀ + X(x); then you just have to invert 1 − X, and (1 − X)⁻¹ can be viewed as a geometric series: it is the sum over all trees with weight one. And this is precisely what you obtain if you solve the equation recursively: you start with x₀, then you substitute x₀ into X(x) and you get x₀ + X(x₀); then you do it again, and order by order you get all those trees. Okay, so that is the framework I want to use in the following slides. In the remaining part of my talk, the first topic I would like to discuss is the theory of random tensors. Random tensors are a natural generalization of random matrices: a random matrix is a matrix M_ij subject to a probability law, and the generalization is just to add several indices. A rank-r tensor is an object T with r indices — i, j, k and so on — and I assume that all those indices take values from 1 to N; if r equals 2, you have an N × N matrix. The theory is random, which means that we are interested in computing expectation values of observables, which are just functions of the tensor: the expectation value of O(T) is the average of O(T) with the weight exp(−V_N(T)), where V_N(T) is a given potential — I will give examples later — and it is assumed to depend on N, because most of
the time we are interested in taking the limit where N goes to infinity. There are several applications of this. The first application is in fact the original one: it was an attempt to give a meaning to a sum over random triangulations in dimension d, with a view towards quantum gravity, because random matrices are related to two-dimensional quantum gravity, and it was hoped that a similar relation could occur in dimension d. This was a hope of the early '90s, and in the last decade it has been made more likely, because a lot of progress has been made in identifying the triangulations that contribute at leading order in N — the equivalent of the genus expansion of random matrices — especially by Razvan Gurau, Vincent Rivasseau and many others. Another possible application is to consider a random tensor as a random coupling constant. Suppose I have an N-vector model — a quantum field theory, or even an ordinary vector — with N components φᵢ, and I want to write an interaction for such a theory. The interaction must combine several fields, and since the field carries a vector index i, it is natural for the coupling constant to be a tensor of rank r if the interaction involves r fields. It is then possible to consider a model with a random coupling constant: you do your computations with the coupling tensor held fixed, and after this has been done you average the result over the coupling constant; since the coupling constant is a tensor, you use the techniques of random tensors. This is precisely what is done in the Sachdev-Ye-Kitaev (SYK) model. I am not going to enter into the details here, but it is in fact a quantum-mechanical model: φ is just a function of time, it is fermionic, and once you have averaged over the random coupling tensor with a Gaussian weight and you take the large-N limit, the famous Schwinger-Dyson equations for
that model simplify a lot and become exactly solvable: you can write down an exact solution for the two-point function in the infrared limit. And this is a strong-coupling problem, where you would not normally expect an exact solution, so it is very nice. The model was proposed in the '90s as a model for condensed matter, and it has recently been used by Kitaev as a toy model for an AdS₂/CFT₁ correspondence. Okay, so here are the definitions for random tensors. We would like a potential which is invariant under some O(N) or U(N) transformations; a tensor transforms as in equation 16, where O here is chosen to be an element of the orthogonal group, but if you have a complex tensor there is also the possibility of U(N) transformations. Now, if we want something invariant under these transformations, we have to generalize the trace: the trace was the way to construct lots of invariants for random matrices, but if you want to work with random tensors you have to generalize it a bit, and one way to do that is to use graphs. I will develop the potential over certain graphs, constructed as follows: for a rank-r tensor I consider graphs with vertices of degree r, I assign a tensor to each vertex, and I contract the indices along the edges of the graph. I will give examples right after, so it will be crystal clear, I think. But first let me just say that x_γ is a certain coupling constant — just a number, which will also be used to generate expectation values by derivation — and s_γ is a certain power of N, which I have to choose in order to have a nice finite limit when N goes to infinity for some observables. Now let me give a few examples of those invariants. For instance the dipole graph, which is just two vertices related by edges: each vertex is assigned a tensor, and the tensors are contracted
along the edges — here i with i, j with j, k with k along the three edges. Then there is this graph which is called a melon; it does not look at all like a melon, and it is hard to explain why it is called that. This one is the quartic melon, because you have four vertices: each vertex is assigned a tensor, and, for instance, the two vertices on the left are contracted with each other along two edges, while their remaining indices are contracted with the two tensors on the right, which are themselves contracted with each other. Similarly, you can consider a tetrahedral graph and contract the tensors along the tetrahedron. All of this generalizes what is done for random matrices with the trace. Let me be very quick about this, because time is getting a bit short. So far I have been talking about tensors in general, but if you want a large-N limit it is better to work with a complex non-symmetric tensor: you have T and T̄, and you do not assume any permutation symmetry of the indices. This is very nice, because then you get a very rigid graph structure: you have to color the edges with numbers from 1 to r, which tell you which pair of indices is being contracted. Then you can prove that the large-N limit exists, and that only a particular class of graphs is involved: the so-called melonic graphs. A melonic graph has the property that for every vertex there is a conjugate vertex such that, when you remove the two vertices, the graph falls into exactly r connected components, r being the rank of the tensor, which I sometimes write d. Now let me try to derive the Schwinger-Dyson equations for the random tensor partition function. By this I mean that I will make a change of variables in the partition function and look at what happens. The change of variables must be, for my purposes, a polynomial in the
tensor, and it must respect all the symmetries. The best way to do this is to index the monomials occurring in the change of variables by a graph γ with one vertex removed. Suppose I take the quartic melon and remove a vertex: the resulting contribution still has three free indices, and these three indices will be used to define a tensor — once I have removed the vertex I no longer have a scalar, I have a tensor. So I perform this kind of change of variables in the tensor model partition function, and there are two contributions: a Jacobian, and a variation of the integrand, in fact of the potential. The two contributions are slightly different. First, the variation of the potential occurs because every graph invariant appearing in the potential is modified by the change of variables, and the modification is very simple to understand: you take the graph, you take this δT, and you insert it at the vertices — because it is a tensor T and not a tensor T̄, you insert it only at the white vertices — and you pass from that graph to this bigger graph. This operation is just the opening of a graph: you remove a vertex and then you insert. The Jacobian comes from the trace of the derivative of δT: when δT has the form of an open graph, as here, taking the derivative means removing another vertex, taking the trace, and reconnecting everything. In the example on the last line you remove the two central vertices: one removal is the definition of δT, the second removal comes from differentiating with respect to T; then you reconnect everything, and in that case you obtain that contribution. Now, the tensor model partition function should be invariant under
this change of variables, and it turns out that the resulting constraint can be written as a differential operator L_γ acting on Z. Why a differential operator? Because all the graphs you collect from this operation can be rewritten as derivatives with respect to the parameters x_γ. This generalizes the Virasoro constraints of matrix models. The constraints are indexed by graphs γ, and the nice thing is that they reproduce the algebraic structure of the Connes-Kreimer Hopf algebra. You have already noticed that when you take one of these graph invariants and remove a vertex, you open the graph: so we have an algebra of graphs with r external legs, r being the rank of the tensor, and our operation is the substitution of these graphs into other graphs to obtain bigger graphs. This is precisely what the Connes-Kreimer Hopf algebra does — or better, its coproduct does the opposite, it disentangles a big graph into subgraphs, but here we are working in the dual, and that is perfectly fine. You can moreover show that the group of characters of the Hopf algebra is in fact the group associated to this constraint algebra. Again, in the case of random matrices you have just the trace invariants, indexed not by a graph but by an integer, and you find back the Virasoro algebra. One difference with respect to matrices is of course that the graphs are more complicated than integers, but the main difference is that you have a differential operator of higher order: as you have seen, for a graph with vertices of valence r, removing a pair of vertices can produce r connected components, and this is why the differential operator is of order r. Now, these constraints are a different form of the loop equations, and the loop equations are at the root of topological recursion. So, in the last years
konsević and doiberman have proposed to to to revisit topological recursion in the language of deformation quantization and in fact in the w k b language so let me introduce what is called a quantum error structure so a quantum error structure is just a quadratic differential operator of the type given by equation 26 the important point is that the first term is just the ordinary derivative then you have a quadratic polynomial then you have a differential operator with x and d and then you have a second order differential operator and then you have a constant term the constant term always comes with an h and all the differential operator always come with an h which plays the role of h bar now what is it good for in fact if you assume that they obey the algebra those operators then you can show that there is a unique power series which satisfies the equation li acting on exponential of s divided by h bar is equal to zero so you can find a unique w k b action s the the first exponential is is useless here but i i put it for for some reason but it's useless here because li acting on exponential equal to zero is equivalent to that equation the important idea is that you impose that you start with g greater than one or n greater than three this is precisely topological recursion there is a long route from this equation to topological recursion in fact you can you can look at papers original paper by concevich and zeugerman but also there are very good reviews by by geithan brought who wrote lectures about that it's very interesting so this is precisely topological recursion but written in a different format i will stick here to this format and then i will slightly enlarge it so i will add perturbation by differential operators of higher order because i'm interested by the by the random tensor models and i will show a similar result except that i have li which contains higher order terms and di s zero because i have a load for initial terms while in the concevich- 
In the Kontsevich–Soibelman paper, they started without these initial terms. So now... I don't know where I am anymore; I have to finish. So, I want to solve that differential equation, and solving it will give me the WKB solution S. I am going to use rooted trees. I first write it as an equation not for the action S but for the one-form dS, and the one-form dS I can write in the form ω = ω₀, the initial point, plus the perturbation, and this is precisely solved by rooted trees. At the end, the Lie algebra structure of the operators L_i allows me to check that the one-form ω is closed, and then that the action indeed exists. Now, the last point is to make precise that there is a Hopf algebra behind this expansion. It is a Hopf algebra made of graphs which are trees, and those trees are decorated by loop edges. The loop edges always go from one vertex to a child; there is never a loop edge between vertices that sit at the same rank on the tree, always from a parent to a child. And then there are also big vertices, which are blobs, and the blobs represent the D_i S₀. All this forms a Hopf algebra, and then you can combine objects one into the other. Okay, I know my tablet is not always working very nicely, so let's see. So there is a Hopf algebra which is very similar to the Connes–Kreimer Hopf algebra of trees, except that when I do a cut (the coproduct is always given by cuts) I also have to take care of the loop edges, and the loop edges all come contracted to the pruned subtrees, which are contracted to vertices; you see an example here. This Hopf algebra is nice in the sense that it allows you to compose different computations: you start with an S, you get an S′, then you can continue computing, starting with this S′ to get a new S″, and so on. So, my last slide.
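The cut-based coproduct is easiest to see on plain rooted trees, before adding the loop-edge and blob decorations just described. A minimal sketch in Python (my own illustration, not code from the talk, and ignoring the loop edges) that enumerates the admissible cuts behind the Connes–Kreimer coproduct:

```python
from itertools import product

def admissible_cuts(tree):
    """Enumerate admissible cuts of a rooted tree.

    A tree is a tuple of its subtrees (the leaf is the empty tuple).
    Yields pairs (pruned, trunk): the forest of cut-off subtrees and
    the part that stays attached to the root.  Cutting at most one
    edge per root-to-leaf path is automatic in this recursion.
    The empty cut (nothing pruned) is included."""
    per_child = []
    for child in tree:
        options = [((child,), None)]             # cut the edge above this child
        options += list(admissible_cuts(child))  # or cut deeper inside it
        per_child.append(options)
    for combo in product(*per_child):
        pruned = sum((p for p, _ in combo), ())
        trunk = tuple(t for _, t in combo if t is not None)
        yield pruned, trunk

def coproduct(tree):
    """Connes-Kreimer coproduct as a list of (left, right) terms:
    t (x) 1 plus, for every admissible cut, pruned-forest (x) trunk.
    The empty cut contributes the term 1 (x) t; the unit 1 is the
    empty forest () on the left and is marked None on the right."""
    terms = [((tree,), None)]                    # the term t (x) 1
    terms += list(admissible_cuts(tree))
    return terms

# Ladder with three vertices: root -> child -> grandchild.
ladder3 = (((),),)
for left, right in coproduct(ladder3):
    print(left, "(x)", right)
# 4 terms: t (x) 1, 1 (x) t, and the two single-edge cuts.
```

The decorated version in the talk would additionally carry the loop edges along with the pruned subtrees, but the cut combinatorics is the same.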
What I have been doing here is mostly a perturbative expansion, so we have to distinguish the perturbative expansion from the topological expansion. In the perturbative expansion you expand in powers of t; in fact t is related to the number of edges, so you really expand in Feynman graphs, and if you come back to the rooted trees from before, you will see all the Feynman graphs appearing. But the topological expansion is not this. The topological expansion, as was said in the talk by Raimar, is an expansion organized by 2 − 2g − n, where g is the genus and n is the number of boundaries. This is what is done by the Kontsevich–Soibelman approach, because it is really an expansion in the powers of ℏ and x. To pass from the more general perturbative expansion to such an expansion in powers of ℏ and x, you have to factor out the leading-order terms, and this is precisely what is done by topological recursion, provided you know the disc and the cylinder. Here the disc will be linear in x, that is, constant for dF/dx, and the cylinder quadratic in x, that is, linear for dF/dx. Once you know them, you put them into the equation and you get the higher-order terms. I will end by saying that I have mostly been mentioning combinatorial aspects here, but there is a deep analytic aspect behind both topological recursion and the Connes–Kreimer approach to renormalization, and it seems very likely to me that there is a Birkhoff decomposition behind topological recursion. You have already seen in Raimar's talk that there was a separation into an analytic term and a non-analytic term; this works fine at lowest order, as I have been able to check, but it remains to present this as a general theory. So I think I am in time to finish, and there are no more slides, so it is finished. Thank you.

Okay, thank you. Thank you, Thomas, for your talk.
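In formulas (my notation, not from the slides), the topological expansion just discussed organizes the action by Euler characteristic:

```latex
S(x;\hbar) = \sum_{g \ge 0,\; n \ge 1} \frac{\hbar^{\,2g-2+n}}{n!}\, S_{g,n}(x).
```

Topological recursion then determines every stable term, those with 2g − 2 + n > 0, once the unstable data, the disc S_{0,1} and the cylinder S_{0,2}, are supplied as input.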
I had never seen this kind of tree with loop edges before, so I wasn't aware of this combinatorial, Hopf-algebraic structure underlying topological recursion. Since we are a bit behind time, I just want to ask one question from the chat. Johannes Thürigen was asking: this tensorial Lie algebra that you mentioned, is it related to the Virasoro-type algebra of Gurau? What is the relation between these?

Yes, yes, it is precisely that. In fact Gurau derived those constraints, but he was not aware, at least he didn't tell me, that this was the Connes–Kreimer Hopf algebra that I told you about.

Okay, thank you, I think that answers the question brilliantly. Let's thank Thomas again and leave further questions for the discussion afterwards in the break. Thanks so much.