Thanks a lot for the invitation. It's too bad we can't be there in person, but that's life this year; I hope it's going to be better next year. As you maybe know, I'm really not an algebraist at all; I'm more of a probabilist slash analyst. But in the course of my research, the type of analysis and probability that I do has led naturally to the type of algebraic structures that you actually see in perturbative quantum field theory, though the context is slightly different. So in the first half of my talk, I want to basically show you in what sort of context these types of structures show up in probability theory, and for this I want to focus on one example. One way you can link, if you want, quantum field theory and probability theory is by this procedure of stochastic quantization. The basic idea, which was essentially introduced by Parisi and Wu back in the 80s, is that if you want to build a sort of Euclidean quantum field theory, it would formally be described by some kind of measure of this type, where this dφ would be a sort of Lebesgue measure on the space of fields, which doesn't really exist, and S would be some kind of action functional. The idea is that if everything were finite dimensional (of course your field configurations don't belong to a finite-dimensional space, but if you suspend disbelief for a second and pretend that they do), then you can actually write down a stochastic evolution equation, which is essentially a gradient flow. If you just divide by dt on both sides here, this is just a standard gradient flow. So you introduce a time, which has nothing to do with the time of your quantum field theory; it's a purely algorithmic kind of time. And you take a gradient flow, so this term simply tries to minimize that action. But then you add an additional noise term.
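In formulas, the evolution being described is, up to my choice of normalization (the talk's measure may carry an extra inverse-temperature factor β in the exponent), the overdamped Langevin equation together with its formal invariant measure:

```latex
\mathrm{d}\varphi \;=\; -\nabla S(\varphi)\,\mathrm{d}t \;+\; \sqrt{2}\,\mathrm{d}W(t),
\qquad
\mu(\mathrm{d}\varphi) \;\propto\; e^{-S(\varphi)}\,\mathrm{d}\varphi .
```

Here W is the Brownian motion discussed next, and the factor of two in the noise is the one that matches this normalization of the measure.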
Okay, so the dynamic here is going to try to minimize S, but it keeps on being kicked around by this noise term. This W you should think of as a Brownian motion, which means that the dW/dt that formally shows up on the right-hand side should be thought of as white noise. White noise you can think of as being independent random variables at every instant of time, so it's kind of as random as possible, in a way. Of course, if you have a gradient flow, you need to give yourself some kind of metric on the tangent space of your configuration space, because the differential takes values in the cotangent space, and you want to turn that into something in the tangent space to get an evolution. So you need to fix a metric. The important thing here is that the metric you use in order to define your gradient should be the same as the one that determines the covariance of that Brownian motion. If you think about the covariance in terms of its tensorial behavior, it actually behaves like the inverse of a metric, and so you take the inverse of your metric as the covariance of this Brownian motion. Then in finite dimensions it's a very simple, elementary theorem that you learn in introductory courses on stochastic analysis: if you take this evolution and start with an initial condition distributed according to this measure (assuming everything is finite dimensional, S grows at infinity in such a way that you can normalize this, and everything is smooth enough), then the solution to this equation leaves this measure invariant. So if you start with an initial condition that has that distribution, then at all subsequent times the solution has the same distribution.
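The finite-dimensional statement can be checked numerically. Below is a minimal sketch (my own toy example, not anything from the talk): an Euler-Maruyama discretization of dφ = -S'(φ) dt + √2 dW for the action S(φ) = φ²/2, whose Gibbs measure exp(-S(φ)) dφ is the standard Gaussian, so a long trajectory should have empirical mean near 0 and variance near 1.

```python
import numpy as np

# Overdamped Langevin dynamic  d(phi) = -S'(phi) dt + sqrt(2) dW
# for the toy action S(phi) = phi^2 / 2, whose Gibbs measure
# exp(-S(phi)) d(phi) is the standard Gaussian N(0, 1).
rng = np.random.default_rng(0)

def langevin_trajectory(phi0=0.0, dt=1e-2, n_steps=200_000):
    phi = phi0
    samples = np.empty(n_steps)
    for k in range(n_steps):
        # Euler-Maruyama step: gradient drift plus a white-noise kick
        phi += -phi * dt + np.sqrt(2 * dt) * rng.standard_normal()
        samples[k] = phi
    return samples

samples = langevin_trajectory()
burn = samples[50_000:]             # discard the transient
print(np.mean(burn), np.var(burn))  # should be close to 0 and 1
```

The small discretization bias in the variance vanishes as dt goes to zero, which is exactly the "small parameter for free" point made below.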
And furthermore, if you start with a basically arbitrary initial condition and look at the solution after a long time, then the law of the solution actually converges to this measure. So one idea is to go backwards: what you could do is actually build the dynamic and then try to show that this dynamic has an invariant measure, and that invariant measure would be the measure that you're actually after. The reason you want to do this is the hope that the divergences that show up in quantum field theory, and all the problems you encounter when trying to pass to the limit from discrete approximations of this measure, might actually be somewhat easier for the dynamic. The reason being that when you make sense of a dynamic, you first do so for a very short time, and then you try to extend it to longer times. So you automatically have a small parameter, namely your small time parameter, without having to have a small parameter in here. You don't need to do a perturbation in β here: your small parameter is not β, it's the time step for the dynamic. So the idea is that it should be easier because a small parameter comes in for free, even if there's no small parameter in this measure that you're trying to build. Now, Parisi and Wu had that idea in the 80s, but it actually took quite some time for this to bear fruit, the reason being that the theory of stochastic PDEs that you would need in order to build these types of dynamics wasn't sufficiently developed at the time; it took quite a long time to catch up.
Now, the specific example I want to focus on for today's lecture is that of the 1D sigma model, where your fields are just loops with values in a Riemannian manifold. That's an interesting example because, since the target space is not linear but a Riemannian manifold, there's no Gaussian reference measure, if you want. In this case the action functional is just the usual Dirichlet energy. Your field configurations are loops in a manifold, so they're curves from the circle into some Riemannian manifold, and the energy of a curve is given by the usual Dirichlet energy. The curve comes with a parametrization, because I really view it as a map from the circle into the manifold: you run along the curve, take the tangent vector at every point, stick it into the metric at that point, and integrate this along the curve. The minimizers for this are the closed geodesics. Now, you can actually just stick this into a computer: you can write down formally the corresponding Langevin dynamic, discretize it in some brutal way, stick it into a computer and see what you get. So I can show you a little movie of what this looks like. Here my target manifold is just a two-sphere, and you see this curve that wriggles around on the two-sphere. This is the type of dynamic that we're interested in constructing here. Let me go back to the talk. Now, in this particular example one actually knows how to build the measure, in the sense that there's at least a natural candidate for it, namely the Brownian loop measure.
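A brutal discretization of the kind just described might look as follows; this is my own sketch, not the code behind the actual movie. The loop is sampled at N points on the 2-sphere, the gradient of the discrete Dirichlet energy is the discrete Laplacian projected onto the tangent space, tangential noise is added, and every step ends with a projection back onto the sphere.

```python
import numpy as np

# Crude discretization of the loop Langevin dynamic on the 2-sphere:
# N points on the loop, the discrete Laplacian (projected onto the
# tangent space) as the negative gradient of the discrete Dirichlet
# energy, tangential noise, and a projection back onto the sphere.
rng = np.random.default_rng(1)
N, dt, steps = 64, 1e-5, 2_000

theta = 2 * np.pi * np.arange(N) / N
u = np.stack([np.cos(theta), np.sin(theta), np.zeros(N)], axis=1)  # start: equator

def tangent(u, v):
    # remove the radial component, making v tangent to the sphere at u
    return v - np.sum(u * v, axis=1, keepdims=True) * u

for _ in range(steps):
    lap = np.roll(u, 1, axis=0) - 2 * u + np.roll(u, -1, axis=0)
    noise = rng.standard_normal(u.shape)
    u = u + dt * N**2 * tangent(u, lap) + np.sqrt(2 * dt) * tangent(u, noise)
    u /= np.linalg.norm(u, axis=1, keepdims=True)  # back onto the sphere

print(np.linalg.norm(u, axis=1).max())  # the loop stays on the sphere
```

The step size is kept well below the explicit-scheme stability threshold dt · N² ≲ 1/2; run it longer and the loop wriggles around exactly as in the movie.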
You take the diffusion that has the Laplace-Beltrami operator as its generator on your Riemannian manifold, and you condition it on returning to its starting point after a fixed time. That gives you a measure on loops, and it's been known for some time that, at least formally, that measure can be written precisely as a Gibbs measure like this, with this Dirichlet energy showing up, except that you have an additional term involving the scalar curvature of your Riemannian manifold integrated along the loop. And here's an interesting thing: if you go and look at the physics literature from the late 70s and early 80s, where people derive these kinds of results, you see that depending on which papers you look at, you get different values for this constant c. There's a whole bunch of different values that show up in the literature, and they show up essentially because there's an ambiguity in how you actually interpret this kind of Lebesgue measure here, which again doesn't really exist. Now, you can write down the gradient dynamic for this Gibbs measure, and if you do, you get the following kind of equation. You see a sort of nonlinear heat equation: u is now a time evolution with values in that loop space, so it's a function of two variables, time and space, where the x is the parameter of your loop. So x takes values in the circle and t in the positive reals. Here you get the covariant derivative of ∂ₓu in the direction of ∂ₓu; that's some type of heat equation. Then you have this gradient of the scalar curvature showing up, which comes from this term. And then you have a noise.
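In symbols, the equation being described should read roughly as follows (my transcription; the sign and value of the constant in front of the curvature term are exactly the ambiguity discussed above):

```latex
\partial_t u \;=\; \nabla_{\partial_x u}\,\partial_x u \;+\; c\,\nabla R(u) \;+\; \sigma(u)\,\xi ,
\qquad u \colon \mathbf{R}_+ \times S^1 \to M ,
```

where ∇ is the Levi-Civita connection on the target manifold M, R its scalar curvature, ξ a space-time white noise, and σ(u) the "square root of the metric" discussed next.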
And in front of the noise, instead of having a constant, the natural thing to have here is the square root of the metric. The reason is that the natural gradient with respect to which you get a nice expression like this is the intrinsic gradient in the tangent space of your manifold, and so the natural metric is really the metric of your manifold at every point. You can write that in local coordinates, and you get some kind of horrible-looking PDE; the details don't really matter. You have the Christoffel symbols showing up here, and one way of taking the square root of the metric is to take a bunch of vector fields, which I call σᵢ, which generate the metric in this sense: the sum of σᵢ ⊗ σᵢ gives you the inverse metric tensor. So you get this stochastic PDE. Now, one thing you see is that even though my fields are perfectly continuous functions (these loops are continuous as functions of their parametrization), if you remember the little movie I showed you, they're continuous but actually not very smooth at all. What you can show is that x ↦ u(x) is typically Hölder of exponent α only for α less than one half, so it's basically Hölder continuous of order just below 1/2. So they're pretty irregular. And that means that this stochastic PDE doesn't a priori have an intrinsic meaning, because you have these nonlinear terms involving the derivative of the solution, and the solution is not differentiable. Even though u is a continuous function, the space derivative of u is a distribution, and so here you have a product of distributions, also multiplied by some quite irregular function. So you have the same type of problems that typically show up in quantum field theory, where you have no canonical way of multiplying distributions.
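To make the "square root of the metric" concrete, here is a toy check of my own for the sphere S^{n-1} ⊂ R^n in ambient coordinates: one choice of vector fields is σᵢ(x) = P_x eᵢ, where P_x = I - xxᵀ projects onto the tangent space at x, and then Σᵢ σᵢ σᵢᵀ = P_x, which plays the role of the inverse metric tensor in these coordinates.

```python
import numpy as np

# For the sphere S^{n-1} in R^n, the n ambient vector fields
# sigma_i(x) = P_x e_i (with P_x = I - x x^T the tangent projection)
# satisfy  sum_i sigma_i sigma_i^T = P_x,  i.e. they "generate the
# metric" in the sense described in the talk.
rng = np.random.default_rng(2)
n = 3
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                   # a point on the sphere

P = np.eye(n) - np.outer(x, x)           # tangent projection at x
sigmas = [P @ e for e in np.eye(n)]      # sigma_i = P_x e_i
S = sum(np.outer(s, s) for s in sigmas)  # sum_i sigma_i sigma_i^T
print(np.allclose(S, P))                 # True
```

This also illustrates the non-uniqueness discussed later: rotating the eᵢ by any orthogonal matrix gives different σᵢ with the same sum.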
So in this context, for these types of stochastic PDEs (not just for this particular equation, but for a large class of equations of that type), under just some kind of power-counting condition, one has a general theory. Essentially the power-counting condition says that you should look at an equation which only has finitely many elementary divergences, if you want; it's some kind of sub-criticality condition. There's now a general result, a combination of a number of works of myself with various collaborators, where we give a kind of black box showing that for these types of equations you can regularize them in many different ways. Here you don't have nice analytical expressions, so things like dimensional regularization don't really make sense. The natural regularization would be, for example, to replace this white noise by some smoothed-out version of white noise: white noise formally has a covariance that's a delta function, so you replace your delta function by some kind of approximate delta function. Then you have a small parameter ε, and you try to send ε to zero, the usual thing. In this case the theory tells you that you have a finite collection of symbols. These symbols are essentially the analogue of Feynman diagrams; you can think of them as being sort of half Feynman diagrams, where the actual Feynman diagrams would be obtained by taking two of these trees, or several of them, and gluing the leaves together in various ways.
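The regularization step can be illustrated numerically; the following is a sketch of my own, on a one-dimensional torus grid. Discrete white noise is convolved with an approximate delta function ρ_ε, and the variance of the smoothed field blows up as ε shrinks, reflecting the delta-function covariance of the limit.

```python
import numpy as np

# Mollified white noise on a periodic 1D grid: white noise convolved
# with an approximate delta function rho_eps (a narrow Gaussian).
# The covariance of the result is rho_eps * rho_eps, which
# concentrates like a delta function as eps -> 0, so the pointwise
# variance grows like 1/eps.
rng = np.random.default_rng(4)
n = 4096
h = 1.0 / n                                    # mesh of the unit torus

def smoothed_noise(eps):
    xi = rng.standard_normal(n) / np.sqrt(h)   # discrete white noise
    x = (np.arange(n) - n // 2) * h
    rho = np.exp(-x**2 / (2 * eps**2))
    rho /= rho.sum() * h                       # approximate delta
    # circular convolution via FFT approximates (rho_eps * xi)(x)
    return np.real(np.fft.ifft(np.fft.fft(xi) * np.fft.fft(rho))) * h

v1 = np.var(smoothed_noise(0.1))
v2 = np.var(smoothed_noise(0.01))
print(v1, v2)  # the variance grows as eps decreases
```

The expected variance is ∫ρ_ε² ≈ 1/(2√π ε), so shrinking ε by a factor of ten multiplies it by roughly ten.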
So they are sort of partial Feynman diagrams, and on these there is a number of interesting algebraic structures that are very similar to what we heard in the previous lecture, for example. If you look at the space of these kinds of symbols, they don't themselves form a Hopf algebra, but they have a co-module structure for two Hopf algebras in this context. One of the two Hopf algebras encodes the renormalization, which you can view here as some form of re-centering in probability space, and the other Hopf algebra encodes a re-centering in real space, where you perform kind of local Taylor expansions in real space in some sense. But for each of these symbols you also have a valuation that goes with it: each of these symbols you can actually interpret, for this equation, as a kind of vector field. The way this valuation works is as follows. You're given the Christoffel symbols, so you're given a connection, and you're given this collection of vector fields σᵢ. When you see these symbols, they're basically trees where you have different kinds of nodes: there are these kind of fat green nodes, and there are kind of small red nodes. The fat green nodes come paired up: either you've got just two of them and they're paired up, or here you have four of them, and I've drawn them in two different colors to show that they form two pairs. Every pair of green nodes like that you should think of as representing a sum over i of these σ's, and you should think of each of these nodes, since it's a tree, as having outgoing edges; these outgoing edges you can think of as representing the free indices. So here you have two free indices α and β, which correspond to the two outgoing edges here. And then the red nodes, so you
have these red nodes, which always come with two red edges; you should think of them as representing the Christoffel symbol, which has three free indices, one upper index and two lower indices. That's represented by the fact that here you have two free incoming edges, which represent the lower indices, and again you should think of it as having a free outgoing edge here, which represents the upper index. So outgoing edges at the bottom represent upper indices, and incoming edges represent lower indices. Then you can create new incoming edges by taking derivatives: if you take a derivative of an expression like this, that creates an extra free index, and it would be a lower index, so that corresponds to an incoming line above. And you can join lines by contracting indices. For example, given the Γ and the σ's, if I apply this procedure to the simplest one of these trees, what does it mean? Here I have these two guys, so that represents a σᵢ^α σᵢ^β, but one of them has an extra incoming line, which means it carries a derivative, and that incoming line is contracted with the outgoing line of the other guy, which means the index of the derivative should be the same as the index of the second guy. And then there's a free index here, which is the free index of this expression. So you have this correspondence that allows you to turn each of these little pictures into a function, basically some kind of multilinear expression in the σ's and the Christoffel symbols and their derivatives, which automatically satisfies the Einstein convention and has one free index left at the end. And then the general result, so as I was mentioning, this really builds on a whole series of works with Yvain Bruned, who's going to speak just after me, and A.J.
Chandra, who's also at Imperial, Ilya Chevyrev, who's now in Edinburgh, and Lorenzo Zambotti in Paris. That black box says that you can find renormalization constants (and here I view the renormalization as an element of the free vector space generated by these symbols, so there's just one constant per symbol) such that if you take some regularization of this equation and then do the usual thing, you add a counterterm. Here the counterterm would essentially change the value of this vector field h: you add to h some linear combination of the expressions corresponding to all 54 trees of that type that you can draw. Then there is a way of choosing these constants (again, by some BPHZ-type formula which allows you to actually compute them) so that if you take the regularized solution of the modified equation and send ε to zero, you get a limit, and the limit is independent of the approximation procedure. The important thing here is that you can really prove this: these are purely analytical statements, not merely formal constructions. I'm not going to go into the detail of the topology in which this limit takes place, but these are really analytical objects that converge to a limit. And the limiting object is very stable under approximations, in the sense that you can approximate it in pretty much any way you want (as long as it's stationary and has some kind of moment bounds), and you always get the same limit. The limit is also very stable as a function of the data here: you can make these guys depend on ε as well, and you're actually going to get the same limit. Now in this example, the problem is that you get a priori a 54-dimensional space of
possible limits, so it's not terribly canonical. You would like to exploit symmetries in order to reduce your space of admissible limits. You would essentially want to say: I have this class of equations here; if that class of equations satisfies, at a formal level, some kind of identity, then I would want the object that I build here to also satisfy this identity, for all choices of Γ, σ and h. And there are two such symmetries. So there's a sort of meta-theorem that one can prove. I call it a meta-theorem because for these symmetries there doesn't seem to be one good formulation that really covers all possible cases you can imagine; but for all cases that we've encountered, you can prove a theorem of that type, with a slightly different proof every time. Essentially it says that if you have a symmetry, and you can approximate your equation in a way that your approximation preserves that symmetry, then there's a way of renormalizing it so that the renormalized limit still satisfies the symmetry. The important thing here is that in general, if you cannot find an approximation which preserves the symmetry, then it may just not be true that any of the renormalized limits satisfies all of your symmetries, and I'm going to show you an example of that. In this case there are two natural symmetries. The first one is changes of coordinates in the target manifold: if you perform a change of coordinates in the target manifold then, in the way that I wrote things down in a coordinate system, you of course get a completely different equation, and what you would want is that the solution to that different equation is the same as the solution to the previous equation, simply pushed forward under the diffeomorphism that gives you the coordinate change. And in the case of usual stochastic differential equations, there's
a solution theory, called Stratonovich solutions, which has precisely that property, and in our case one can prove the corresponding theorem. What that tells you is that you can impose some restrictions on your renormalization procedure: instead of having 54 degrees of freedom, you can cut it down to 15 if you want to impose equivariance under coordinate changes. And there's another symmetry. Remember, in order to take my square root of the metric, I chose a bunch of vector fields such that the sum of σᵢ ⊗ σᵢ is the inverse of the metric tensor. Of course, there are lots of possible choices for these σᵢ's: if I'm just given g, that does not at all determine the σᵢ's. At the formal level one can convince oneself that the law of the solution shouldn't actually depend on the choice of square root of the metric here, and what one uses for this is something called Itô's isometry. Again, for usual stochastic differential equations there's a solution theory with the corresponding property, called Itô solutions, and in our case, by proving the corresponding theorem, we can show that you can reduce your 54-dimensional space of solution theories to something 19-dimensional. So now you could say: well, I have these two symmetries, and if you have two symmetries you can always mash them together into one big symmetry. Well, not always, but in this case the two symmetries can actually be mashed together: there's a kind of skew product of these two symmetry groups that acts on the whole thing. But we don't know of a good approximation that preserves that big symmetry. We have an approximation that preserves this symmetry, and one that preserves that one, but they're not the same type of approximation, and we don't know of any
approximation that preserves both at the same time. So there's a natural question: can you have both? In the finite-dimensional case, if you don't talk about stochastic PDEs but just stochastic differential equations, there is a completely analogous question, and the answer there is actually just no: there is no solution theory for stochastic differential equations that has both of these symmetries simultaneously. In our case it turns out that you can actually have both at the same time. That's this theorem we obtained with Yvain Bruned, Franck Gabriel and again Lorenzo Zambotti. You have this 15-dimensional affine space of theories that satisfy equivariance under changes of coordinates, and this 19-dimensional space of theories satisfying Itô isometry; these are two affine subspaces of a space of dimension 54. Generically, since 15 plus 19 is 34, which is much less than 54, you wouldn't expect these two affine subspaces to intersect, but what we can show is that they actually do intersect, and their intersection is of dimension 2. So now you have a natural two-parameter family of notions of solution that have all the symmetries that your class of equations satisfies. There's a more analytical property, which I don't really want to go into, that you might want to impose as well, and we can show that you can actually impose that more analytical property simultaneously with these two symmetries; that reduces things by one more degree of freedom. So at the end of the day you end up with a one-parameter family of very natural solution theories that in some sense behave as nicely as you could possibly expect. And furthermore, in the case that we're actually interested in, this Γ and these vector fields σᵢ are not unrelated, because the Γ
is given by the Christoffel symbols for the Levi-Civita connection that comes from your Riemannian metric, and the σᵢ's are some kind of square root of that Riemannian metric, so the two are related. And it turns out that the way in which they are related is such that, in this particular case, this one-parameter family of solution theories all coincide. So at the end of the day you have a completely canonical notion of solution. Still, there was this last degree of freedom which I eliminated and didn't spend much time on; it was of a more analytic nature rather than a geometric nature, so there was no actual symmetry involved. For that one you can still ask yourself: what's the effect of changing this last parameter? And that turns out to just add a term to the right-hand side of the equation which is proportional to the gradient of the scalar curvature. That's kind of cute, because in a way it gives you a different perspective on this fact that people had figured out back in the 70s: if you formally try to write this Brownian loop measure, you always want to write it like this, but you don't quite know what the constant c should be, and here are some of the different possible values of c that appear in the literature. The way this is interpreted here is that this constant c is the remaining degree of freedom in my solution theory for this stochastic PDE, which is not fixed by purely symmetry considerations. Now, the main step in the proof is to show that these two subspaces intersect. So we have this big space S, which is essentially just the vector space generated by all these symbols. Then we have a subspace which corresponds, if you want, to those linear combinations of symbols that can be written just in terms of g rather than in terms of these
σ's; that's this subspace S_Itô. And then there is this geometric subspace, which is the one that corresponds to those linear combinations of symbols that actually give you a vector field. All of these symbols give you an expression that satisfies the Einstein convention and has one free upper index, but such a thing is not necessarily a vector field, because the Christoffel symbols are not a tensor of type (1,2): they determine a connection, but they're not a tensor, and so just contracting them with other tensors in a way that satisfies the Einstein convention doesn't guarantee that you actually get a tensor in the end. So you have a subspace of this space consisting of those guys that actually do give you a vector field in the end. And what we can show is that if you take one of the solution theories that satisfies Itô isometry, and one of the solution theories that behaves correctly under changes of coordinates, then they differ by a counterterm belonging to the space of linear combinations such that, if I take two different square roots of my metric and look at the difference between the evaluations corresponding to these two different square roots, then what I obtain is a vector field, for every choice of σ's and Γ's. Now obviously this space contains both the geometric terms and the Itô terms. The Itô terms are precisely the ones for which this difference vanishes, because they are those terms such that if I choose two different square roots for my metric I actually get the same thing: they depend only on the metric and not on the choice of square root. Whereas the geometric ones are the ones such that each of them separately is a vector field, and therefore in particular their difference is a vector field as well. So each of these certainly belongs to that space, and hence their sum belongs to that space. The non-trivial fact is
that this sum is actually equal to this space here, and that's not obvious. In the case of stochastic differential equations, you could actually try to do exactly the same proof; everything works up until this point, and then what you realize is that your space consists of only one single symbol, and the evaluation of that symbol is the expression that I already wrote down early on. That expression is not a vector field, and it also really depends on the choice of σ, so it is not just a function of the metric: it belongs neither to this space nor to that space, so these spaces are both zero. But if I take two different choices of σ's that give me the same metric, then it turns out that this difference is actually equal to a difference of covariant derivatives, because the term involving the connection drops out, and therefore that's a vector field. So these two spaces in this case are zero, but this third space is non-zero, one-dimensional, and so the proof fails; and indeed the conclusion turns out not to be true in the case of stochastic differential equations. Now in the case of PDEs, if I just look at the trees that have two leaves, there are only two of them. It turns out that this guy here, if I hit it with my evaluation, gives you essentially just a contraction of the Christoffel symbol with the σ's: the red node represents the Christoffel symbols, the two green nodes represent the two instances of σ, and the fact that they're connected represents that contraction. And this here is nothing but g^{αβ}, and therefore this guy belongs to the Itô space. Similarly, for this other term, you can show that applying the evaluation map gives you the covariant derivative of σᵢ in the direction of σᵢ, and that's a vector field. And it turns out that there's no
other vector field you can build in this way. So in this case these two spaces are both one-dimensional, and their sum is actually everything; and you can actually show that in this case both of these elements have the property that they belong to this space. So in this case, this part of the argument works. Once you know that this difference is of the form of a sum of an Itô counterterm and a geometric counterterm, you know that these two affine spaces have to coincide, because there's one space in which you can move with terms of this type and another space in which you can move with terms of that type, and if the difference is a sum of these two kinds of terms, that means you can actually move both of them in such a way that they meet. The problem now is to prove this in general. In our case, for the trees that have two leaves it's kind of easy to just check it by hand; but for those with four leaves, well, there are 52 of them, and you have to somehow figure out what these subspaces are. It's not so easy to figure out what subspaces of a 52-dimensional space look like if you don't have something more systematic that you can do. For the two-leaf trees you could just about turn it into a simple kind of linear algebra problem, but if you look at the trees with four leaves, that's not really doable anymore, so you want some more systematic way of looking at it. And it turns out that the natural way of abstracting the algebraic structure that actually shows up, of which these trees with their various decorations are an example, is what we call a T-algebra. We didn't find it anywhere in the literature, but, well, I think we don't really know this literature, so maybe some
If any of you have seen it somewhere in the literature, we'd be very happy to have a pointer, but we haven't been able to find it. So this is essentially an abstraction of the notion of functions with multiple free upper and lower indices. How do we define it? As a vector space with a double grading, so it has two degrees, and you should think of these degrees as the numbers of free indices: u is the number of free upper indices and l is the number of free lower indices. A vector field, for example, has one free upper index, so it would be an element of V_{1,0}.

Then you have three additional pieces of structure. First, on each V_{u,l} you want an action of the symmetric group, or actually of two copies of the symmetric group: one acting on the u upper indices and one acting on the l lower indices, corresponding to permutations of indices. Second, you have a product: if you think of these as functions with free indices, you can multiply two of them, which is essentially a tensor product, so the numbers of upper indices add up and the numbers of lower indices add up; the product preserves degrees in this sense. In terms of permuting indices, multiplying a with b versus b with a is not quite commutative, but it is almost commutative in the same sense as the usual tensor product: you impose that b times a equals a times b followed by the permutation which takes the block of u_1 upper indices and the block of u_2 upper indices and swaps the two blocks, and does the same for the lower-index blocks of sizes l_1 and l_2. That is the natural property you would want the product to have. The product should also commute with the action of the symmetric group, in the sense that first permuting indices and then multiplying is the same as first multiplying and then permuting the indices in the natural way.

The final operation is a partial trace, which corresponds to contracting an upper index with a lower index. You view it as an operation from V_{u+1,l+1} into V_{u,l}: one upper and one lower index are contracted, so both degrees go down by one. You would want to be able to contract two arbitrary indices, but since we have the operations permuting indices, we can always reduce to the case of contracting the last ones; so think of the trace as contracting the last upper index with the last lower index, and with that interpretation its properties are very natural. You can then think about how the trace should interact with the symmetric group action. One important property is the following: take the last two upper indices and the last two lower indices and apply the trace operation twice, so you first contract the last upper index with the last lower index, and then contract the remaining pair. That should be the same as first exchanging the last two upper indices and the last two lower indices and then contracting twice.
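The product and its block-swap commutativity can be sketched in code. This is a minimal model of my own, not from the talk: an element of V_{u,l} is a numpy array whose first u axes play the role of upper indices and whose remaining l axes play the role of lower indices.

```python
# Illustrative model (not from the talk): V_{u,l} as arrays whose first u axes
# are "upper" and whose remaining l axes are "lower".
import numpy as np

d = 3
rng = np.random.default_rng(0)

def product(a, u_a, b, u_b):
    """Degree-adding product: the result has the upper axes of a, then the
    upper axes of b, then the lower axes of a, then the lower axes of b."""
    l_a = a.ndim - u_a
    t = np.tensordot(a, b, axes=0)  # axis order: up(a), low(a), up(b), low(b)
    order = (list(range(u_a))                                   # up(a)
             + list(range(u_a + l_a, u_a + l_a + u_b))          # up(b)
             + list(range(u_a, u_a + l_a))                      # low(a)
             + list(range(u_a + l_a + u_b, a.ndim + b.ndim)))   # low(b)
    return np.transpose(t, order)

# a in V_{1,2}, b in V_{2,1}
a = rng.standard_normal((d, d, d))
b = rng.standard_normal((d, d, d))

ab = product(a, 1, b, 2)  # axes: (ua, ub1, ub2, la1, la2, lb)
ba = product(b, 2, a, 1)  # axes: (ub1, ub2, ua, lb, la1, la2)

# b*a equals a*b followed by the block-swap permutation of the upper-index
# blocks (sizes 1 and 2) and the lower-index blocks (sizes 2 and 1):
assert np.allclose(ba, np.transpose(ab, [1, 2, 0, 5, 3, 4]))
```

The permutation [1, 2, 0, 5, 3, 4] is exactly the block swap described above: it moves b's upper block in front of a's and b's lower block in front of a's.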
In other words, contracting the two pairs in the reverse order should make no difference.

A typical example: take some fixed vector space V, and let V_{u,l} be the tensor product of u copies of V itself with l copies of its dual. Then you have a natural product, natural permutations of indices, and a natural trace operation which contracts the last copy of V* with the last copy of V, and this has exactly all the properties we just formalized.

I think I'm running out of time, so let me just give you one little result that we have in this direction, which is very useful. The point is that we are using this algebraic structure as a language, but at the end of the day we want to prove an analytical result, so we have to go back and forth between the algebra and the analysis. In particular, we have to make sure that at the analytic level we don't end up with spurious identities that are invisible at the algebraic level but appear because of some degeneracy. So you want a non-degeneracy result which tells you that, generically, there are no cancellations at the analytical level that you don't already have at the algebraic level. And we have a non-degeneracy result of this type, which says the following: for a large class of these T-algebras, including all the ones that show up in the proofs we care about, look at a finite-dimensional subspace of them.
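The concrete example just described, with V = R^d, is easy to model in code. The sketch below is my own illustration, not from the talk: it implements the partial trace on numpy arrays (first u axes upper, remaining axes lower) and checks the double-trace property, that contracting the two pairs after exchanging the last two upper and the last two lower indices gives the same answer.

```python
# Illustrative tensor model (not from the talk) of the partial trace.
import numpy as np

d = 3
rng = np.random.default_rng(1)

def trace(a, u):
    """Partial trace V_{u,l} -> V_{u-1,l-1}: contract the LAST upper axis
    (position u-1) with the LAST lower axis (the final axis)."""
    return np.trace(a, axis1=u - 1, axis2=a.ndim - 1)

def swap_last_two(a, u):
    """Exchange the last two upper axes and the last two lower axes."""
    order = list(range(a.ndim))
    order[u - 2], order[u - 1] = order[u - 1], order[u - 2]
    order[-2], order[-1] = order[-1], order[-2]
    return np.transpose(a, order)

# an element of V_{2,2}: axes (up1, up2, low1, low2)
a = rng.standard_normal((d, d, d, d))

# contracting twice in one order ...
lhs = trace(trace(a, 2), 1)
# ... equals swapping the index pairs first and then contracting twice
rhs = trace(trace(swap_last_two(a, 2), 2), 1)
assert np.allclose(lhs, rhs)
```

Both sides reduce to the same sum over a[i, j, i, j], which is why the property holds identically in this tensor model.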
Going back to the string in the manifold: if you take the dimension of the manifold large enough, and you choose the Christoffel symbols and these vectors in a generic way, then you can always guarantee that no spurious cancellations appear. That is the type of non-degeneracy result you can prove here. But I think I'm out of time, so this is maybe a good place to stop. Thank you very much for your attention.

Thank you very much, Martin, for your nice talk. Before I ask my questions: if anybody else wants to ask, please go first; raise your hand or just start speaking. One thought that came to my mind when you talked about these T-algebras, and I'm not an expert on these algebraic things either, but this thing with many inputs and many outputs, that's a prop, right?

Yes, so this is an example of one of the universal algebras that you can associate to an operad: if you have an operad, there is always a universal algebra that goes with it, and this would be one specific example. There are indeed whole books on universal algebra, but they tend to be too general for our purpose: they say, take an arbitrary operad, and there is this algebra that goes with it, with some general properties. Here we don't care about an arbitrary operad; there is one very specific operad.

I'm sorry, now I have to ask. I don't think you have an operad; you have a sort of strange structure with half of the structure of a prop. You're only taking traces, right, or are you also composing these things? You have an S_n action and an S_m action, so that's the first thing that would underlie a prop, and then you have what's called the horizontal gluing, where you take two elements, put them side by side, and the degrees add up.
Then you have these wheels, which give you the traces, but that's it, right? You don't have anything else; you don't have an operation that plugs the outputs into the inputs. Or do you have that?

Well, you can actually build it. What I described here was not the operad; what I described was the algebra. You can view this algebra as coming from an operad, and the operad would be the one whose objects are things of this type: you have a finite number of inputs, a finite number of outputs, and boxes, where each box also has a number of inputs and a number of outputs, all connected in this way. So I can do this, and then this, and there may be nothing connected to these outputs; there is stuff like that. And these objects you can plug into each other: I can take one with two inputs and one output, and stuff in the middle, and plug it into this box here, connecting its two inputs to these two slots and its output to that slot. That actually gives me an operad, and the T-algebra comes from there.

So technically speaking, what you have is a wheeled prop. An operad technically has only one output; having multiple outputs makes it a prop, and since you have things going back, that makes it called wheeled: in a prop you have a directed graph, and if you allow edges to go back up, you get this. And is your T-algebra the free algebra over this thing, or a specific one?

No, it doesn't have to be free.
The free ones you could describe as basically just linear combinations of objects of that type, but the ones that then show up in our context are not free.

And those are the ones you care about. So that's why you said you don't need the prop language: some of that structure you would only get in the free case.

Yes, exactly.

All right, thanks. I suggest we prepare for Yvonne's talk next. Thanks, Martin, thanks very much again. Thank you, Martin.