Hi. I am happy to see so many people gathered here, and happy to see this happening. I see there will be a certain degree of overlap between me and Vincent, because, well, when we prepared, you will see what I mean as I go along. So, the plan is the following: I will start with an introduction and motivation, then I will discuss random tensors, the 1 over N expansion and SYK-like models, and after that I will end with a few conclusions. So, the motivation: the reason people are interested in these models today is holography, and this is something there have already been several talks about. Basically, the idea is that a theory containing gravity in the bulk of space and a theory on its boundary should be in correspondence, and this correspondence is made more concrete in AdS/CFT. And the point is that, apparently, a simple example to look at would be a bulk in two dimensions with a boundary theory in dimension one. So this interests very much the people who work on AdS/CFT. Now, here the power counting is marginal. In the infrared, if you look at four-point functions, six-point functions, and so on, you have infinitely many graphs which contribute. And if you have a local theory with a cubic interaction, this is not at all independent, because there is only one power I can put here to make a cubic interaction marginal. So, let me make this power counting argument. Let's consider a graph in which all momenta are large, of order lambda.
So, each propagator scales like lambda to the 2 delta minus 1, like here. And then we can account for the independent loop momenta, of which there are edges minus vertices plus 1 in a connected graph, and each of them brings one power of lambda. So, putting this together, the minus 1 and the plus 1 combine, and we get lambda to the power 2 delta E minus the number of vertices, plus 1. E is the number of edges of the graph, and V is the number of vertices. If the interaction is a q-body interaction, then twice the number of edges is q times the number of vertices. This is because every edge connects two vertices, and every vertex is incident to q edges. So, you see that, trading the edges for vertices, the exponent becomes delta q minus 1 per vertex. If we want the interaction to be marginal, we need delta q to be one. Hence the exponent of the two-point function, because the dimension of the field is fixed, is 2 by q, here in the two-point function. So, this is the scaling which appears in SYK. Now, in order to study the full two-point function, one introduces the one-particle-irreducible two-point function, sometimes called a self-energy; then C plus C sigma C plus C sigma C sigma C and so on, this is a geometric series which can be summed. So, the full two-point function will be one over the inverse propagator minus sigma. Now this inverse propagator here is the quadratic part of the action, which is something which scales like momentum to some power. If my theory is massive, you know, in the ultraviolet, if my bare theory is massive, this would be omega to the power zero, at small omega.
If it's a fermion, it would be omega to the power one, if it's a boson, omega to the power two, and so on. Now, bottom line, if I now go towards the infrared, so this guy scales like omega to some positive power, and another power counting argument tells me that if I have a conformal field theory, the sigma should scale like omega to the minus one plus 2 by q, the 2 by q coming from the previous transparency. So, you see that in the infrared, so at small momentum, this is a positive exponent, and this is certainly negative for interactions which are at least three-body or more. So, you see that the second term, you know, g sigma, will always dominate over g times c to the minus one. So, for any interesting conformal field theory in the infrared, my Schwinger-Dyson equation will simplify to one equals minus g times sigma. Now, the problem usually is that sigma, the self-energy, is a complicated function of the two-point function. Now, because we deal with a conformal theory, it's massless, so I don't have a tadpole contribution, so the first-order contribution to sigma is actually zero, but then I get a first contribution consisting of two vertices connected by infrared two-point functions like this, which would be this term, but then again I have an infinite list of graphs, so here I get many, many corrections. Now, the idea is that the simplest way you can build an interesting CFT in dimension one is to imagine that, by some miracle, this equation truncates at the first order here, so you basically get a closed equation, if you just say, ok, the rest is smaller by some argument. So I would need to solve something like: one, this means a delta function, equals, well, some coupling constant g squared for the vertices, then a convolution of one g, the exterior g, and then q minus one g's in parallel, for a q-body interaction.
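As a side note, the power counting from the previous slide is easy to check mechanically. Here is a minimal sketch of my own (not from the talk): for a connected vacuum graph whose vertices are all q-valent, E = qV/2 and the number of loops is E - V + 1, so the overall power of lambda is (2 delta - 1)E + (E - V + 1) = (delta q - 1)V + 1, which becomes graph-independent exactly at delta = 1/q.

```python
# Power-counting check: for a connected graph with V q-valent vertices,
# each propagator scales like lambda**(2*delta - 1) and each of the
# E - V + 1 independent loop momenta contributes one power of lambda.
def scaling_exponent(V, q, delta):
    E = q * V // 2               # handshake: every vertex is incident to q edges
    loops = E - V + 1            # independent loop momenta of a connected graph
    return (2 * delta - 1) * E + loops

q = 4                            # q-body interaction, SYK-like
delta = 1.0 / q                  # marginality condition: delta * q = 1
exponents = [scaling_exponent(V, q, delta) for V in range(2, 12, 2)]
print(exponents)                 # the same exponent for every graph size
```

At any other value of delta the exponent grows (or shrinks) with the number of vertices, which is the statement that the interaction is relevant or irrelevant rather than marginal.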
Now, here, surreptitiously, I assumed that my theory is fermionic, so I use the anti-symmetry of the two-point function to, you know, eat up this minus here. So basically, you know, in such interesting CFTs, you would want to solve this equation, and you've seen a lot about it, so I just want to make a small comment about it. You know, I'm a mathematical physicist, and as a mathematical physicist, the first thing which struck me when I saw this equation was that, of course, it's ill-defined. Now, it makes sense to everybody, everybody can solve it, we find very good solutions, and by the way, very interesting physics, but, you know, if you look at the integral, you just plug in this ansatz for the two-point function and you remark two things. First, you can scale tau out of this integral, you scale u by tau, so this will tell you that you have a behavior 1 over tau on the right-hand side, and you have a delta on the left-hand side, and 1 over tau is not exactly delta. And the second thing, maybe even more dangerous: if you look in the region u close to zero, this term is something, a constant, and this one goes like 2 by q times q minus 1, which is 2 minus 2 by q, so the exponent down here is almost 2, so this is quite divergent when you go to zero. So, well, of course, you see what diverges, you want to regularize; the question is, you know, how do you regularize? It turns out that it's slightly less trivial than I would expect. For instance, one way to regularize is to say, okay, I try to avoid going exactly through zero, so I should add some small imaginary part to my u, for instance, like in an i-epsilon prescription, just, you know, for my integral to miss the zero a bit. These go sometimes under the name of retarded and advanced functions; you know, I can just add to tau some small imaginary part.
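Just to make the small-u exponent quoted here explicit, a quick check of my own arithmetic (not the speaker's slides): with G(u) ~ sgn(u) |u|^(-2 delta) and delta = 1/q, the factor G(u)^(q-1) in the convolution behaves like |u|^(-(2 - 2/q)) near zero, and that exponent exceeds 1 for every q > 2, so the integral is indeed power divergent there.

```python
from fractions import Fraction

# With the conformal ansatz G(u) ~ sgn(u) |u|**(-2*delta), delta = 1/q,
# the factor G(u)**(q - 1) behaves near u = 0 like |u|**(-e) with
# e = 2*delta*(q - 1) = 2 - 2/q, which is bigger than 1 for any q > 2,
# hence the power divergence at small u.
def small_u_exponent(q):
    return 2 * Fraction(1, q) * (q - 1)

print([small_u_exponent(q) for q in (3, 4, 6)])   # 4/3, 3/2, 5/3
```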
Now, it turns out that if I use either one or the other, things don't work out. In what sense? They regulate the divergence here, but when I send the epsilon to zero, the limit is singular. Now, there is one way, actually, of course, once you see it, it is obvious, to get a nice regularization: it's to preserve the anti-symmetry of the two-point function here. It turns out that neither the advanced nor the retarded function is anti-symmetric, but if you take an appropriate combination of the two, with an exact minus here and equal coefficients, well, this fine regularized two-point function, in fact, allows you to regulate this divergence, and you can even establish, at mathematical-physics standard, a theorem saying that if you build this object and, you know, the right-hand side of this equation at finite cutoff epsilon, and then you send epsilon to zero, then in this equation, you know, in the sense of distributions on good functional spaces, you find in the limit the delta function. Now, the point is that the exact compensation plays a huge role. It allows you to actually exploit the fact that this guy has a signum here, so it should be anti-symmetric around zero, and this is embodied in this perfect minus here; you need that for the cancellation of the minus here. And second, you also need the precise balance in order to get the delta function by what's called the Sokhotski-Plemelj formula, I don't know if you know it; it's basically saying that, in the sense of distributions, one over epsilon plus i tau, plus one over epsilon minus i tau, goes to the delta function, but you really need to have the same coefficients between the two, because otherwise you get some principal values which stay. Okay.
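The Sokhotski-Plemelj statement invoked here can be tested numerically. This is a sketch of my own, not from the talk: the symmetric combination 1/(eps + i tau) + 1/(eps - i tau) equals 2 eps / (eps^2 + tau^2), and smearing it against a smooth test function and dividing by 2 pi should converge to the value of the function at zero as eps goes to zero.

```python
import numpy as np

# Check that [1/(eps + i*tau) + 1/(eps - i*tau)] / (2*pi) acts like delta(tau)
# as eps -> 0, by smearing it against a Gaussian test function on a fine grid.
def smeared_delta(f, eps, grid):
    kernel = 1.0 / (eps + 1j * grid) + 1.0 / (eps - 1j * grid)
    dt = grid[1] - grid[0]
    return (kernel * f(grid)).sum().real * dt / (2 * np.pi)

tau = np.linspace(-50.0, 50.0, 1_000_001)
f = lambda t: np.exp(-t * t)          # smooth test function with f(0) = 1
vals = [smeared_delta(f, eps, tau) for eps in (1.0, 0.1, 0.01)]
print(vals)                            # increases towards f(0) = 1 as eps shrinks
```

Repeating the same experiment with only one of the two terms leaves a principal-value remainder, which is exactly the speaker's point about needing equal coefficients.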
So, this is just to explain that these field theories, the interesting one-dimensional conformal field theories, are actually very interesting, and one can try to, of course, either pursue computations for higher-point functions, or maybe, you know, a small, modest program, to make sense of what everybody already agrees works, you know, how to actually make it work. Okay, yes? Sorry, so, from the physics point of view, these divergences you're talking about are just fake, because the power-law behavior of the g of tau is valid when tau is very large. Yes, so there is a physical reason for which these divergences are actually absent: this conformal ansatz just works at tau large, it is not an ansatz at tau small. Okay, now, there are two ways to regulate the divergence. One is to say, physics tells me to do it like this, so I have a precise prescription to regulate it, and then the way you regulate it is very dependent on that prescription; or to search for universal prescriptions, to say, okay, given a theory like this, how can I regulate it? How can I get rid of the regulator? You have the regulator that physics tells you on your previous slide. It's just the Schwinger-Dyson equation. Yes, it has this term. So, here, yeah, you said sigma is much greater than c to the minus one, so you've neglected this term, and that's the origin of it. I totally agree. The problem is that if you regulate it like this, everything will depend on the J, for instance: if J is 10, you'll get one result, and if J is 200, you'll get a different result. So, I have nothing against it, you know; we all agree that QED is not the ultimate theory, there exists something which completes it, called the Standard Model. Still, I can look at QED ignoring the rest and try to understand by itself how it behaves.
So, you know, you tell me I should not worry that the flow of QED in, you know, the ultraviolet is bad, because there is something else to regulate it, it's called the Standard Model. And I'm saying, yeah, but that does not stop you from saying, I try to look at QED in the infrared, so I put a cutoff, I forget about what regulates it physically, and I just study it. So then, what I would like to say is that what you should prove is that the regulator that you propose matches with the infrared limit of the physical regulator, which corresponds to the UV completion, if you like, of the model. Yes, of course; somebody should prove that if you take the Standard Model and you compute the corrections in a certain limit, everything which comes from the QCD part decouples from the QED. I agree, this is what one should do, but, this being said, in the meantime, you study the flows of QED near the Gaussian fixed point by putting a cutoff and studying it. Ok, so this was kind of the more formal theoretical perspective. So, you see, these theories are interesting and simple enough. Well, they kind of simplify if, in some regime, this happens: the self-energy becomes a product of two-point functions. So, of course, the question is, are there theories which have this property? You've already seen some. And, well, you already know the answer: in fact, the theories in which this happens in some limit are melonic theories built on random tensors, in which the rest of the corrections, not the decoupling, sorry, the rest of the corrections die due to, you know, a large N limit in the tensor. Now, in the rest of this talk, I'll present these random tensors, and I will come back to this kind of equation just at the end. So, what follows is, well, an introductory talk on random tensors. Yes? A question at this point: the infrared theory we defined, can it stand on its own?
Can you make it valid at both scales, as a conformal theory? So, that is one question, because if, with this epsilon prescription, you are able in the end to make a sensible theory and send epsilon to zero, then it is a standalone theory, which, you know, you might hope makes sense. Yes. Not as far as I know; maybe Vladimir, if he's... He talked about that, he was studying this, wasn't he? No, the point is that here you forget the ultraviolet completely, you just take the infrared theory, you just take the infrared propagator, so you don't say that I have this physical theory which cuts it off. The question is, can it make sense? Maybe. Maybe you are studying this or... I mean, the usual statement is that CFTs make sense independent of where they came from. And in the SYK context it doesn't, but for the reason that it breaks conformal invariance because of the 4-point function, but that's sort of a sub-case. So, yeah. Ok, so, now, let's, well, go to the random tensors, ok? So, as you've heard already, random tensors are something which generalizes random matrices to higher dimensions, so let's just, you know, discuss a bit random matrices. So what are they? Well, they are objects which were introduced quite a long time ago, for the first time almost ninety years ago, in mathematics, and then applied in physics, famously by Wigner, to study nuclear spectroscopy. Basically, his idea was that, you know, for a big enough nucleus, the Hamiltonian, which is a matrix, must look a bit random, so if I want to look at the energy levels, then I kind of need to understand how random matrices behave, you know?
Ok, now, in the meantime, random matrices have found a plethora of applications. They are applied, for instance, in the study of growing interfaces: if you have an interface in a random environment, it will develop wrinkles, and if you want to understand these wrinkles, you know, there is an approach to understanding them which relies on random matrices. Now, they are also encountered in real life: for instance, they describe quite well the spacing of birds, you know, perched on wires, or the spacing of cars, you know, parked on roads. And they are also used in very practical applications, for instance, to distinguish signal from noise in very big data sets. If you have an enormous data set, you look at correlation matrices, which are enormous matrices, and you can study these by random matrix techniques, for instance. And one thing which interests you, in order to detect effects, is this: you know, this big correlation matrix is so large that I can't say that if one coefficient is large, it means there is a large correlation between these two guys; the question is, does the matrix, as a whole, look like a random matrix or not? These are the kind of questions which you can, you know, address in data analysis with random matrices. Ok, now, closer to physics, random matrices are also crucial in quantum chromodynamics. Here, as you all know, this is an SU(3) gauge theory, so basically the field is a matrix, you know, so this kind of tells us that random matrices can teach us something about, you know, QCD or other gauge theories, which you actually know very well. And just a small remark: once you know the invariance of your theory, then you kind of know what observables to look at; they are the gauge-invariant observables. That's an important point you need to remember for later.
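As an aside, the bird-spacing statement is easy to reproduce in silico. Below is a small Monte Carlo sketch of my own (the function names and parameters are mine, not the speaker's): eigenvalue spacings of a random Hermitian (GUE) matrix show level repulsion, so very small gaps are far rarer than for independent, Poisson-distributed points on a line.

```python
import numpy as np

rng = np.random.default_rng(0)

def gue_bulk_spacings(n, samples):
    """Nearest-neighbour spacings from the bulk of GUE spectra, unit mean."""
    out = []
    for _ in range(samples):
        a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        ev = np.linalg.eigvalsh((a + a.conj().T) / 2)   # Hermitian part
        s = np.diff(ev[n // 4: 3 * n // 4])             # keep the bulk only
        out.extend(s / s.mean())                        # crude unfolding
    return np.array(out)

gue = gue_bulk_spacings(80, 40)
poisson = np.diff(np.sort(rng.uniform(size=4001))) * 4000   # independent points
# Level repulsion: tiny spacings are strongly suppressed for GUE.
print((gue < 0.1).mean(), (poisson < 0.1).mean())
```

The normalization by the local mean spacing is a rough substitute for a proper unfolding of the semicircle density, but the suppression of small gaps is robust to that.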
Ok, now, last but not least, as Vincent mentioned, random matrices are also used to study random surfaces. This comes from the following thing: you know, of course, you can define a random matrix with no reference to any surface, no nothing, but it turns out that if you use a Feynman expansion to evaluate expectations or something, this Feynman expansion is not in terms of usual graphs, but in terms of embedded graphs. You know, because you deal with matrices, you automatically build embedded graphs which live inside surfaces. Now, this might interest you or not, but it turns out that they live in surfaces, and they have a built-in scale, which is the size of the matrix, N. Now, what happens is that this N sees the topology of the embedded graph. Maybe you don't care about the topology, but the N sees it, you know. So, it turns out that, in fact, the perturbative expansion can be reorganized in powers of 1 over N, and it is indexed by the genus, which is the topological number associated to the surfaces, you know. So, this expansion is beautiful, and it's beautiful for several reasons. One, because, in fact, at any fixed order in 1 over N you get an infinite series, so at least you capture a lot of the perturbative series, but it's summable, so it's much more controlled than the original perturbation theory, which was divergent. Now, of course, you know, there is a conservation of difficulties: the divergence goes into the sum over topologies at the end, but you can start exploring, you know, very important effects in your theory, which are infinite packages of the perturbation theory, without caring about, you know, the other topologies. Ok, now, well, given this spectacular success of random matrices, one idea is to try to generalize them in higher dimensions, to random tensors.
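The statement that "N sees the topology" can be made concrete combinatorially. Here is a small sketch of my own (not the speaker's notation): encode a ribbon graph by a permutation sigma giving the cyclic order of half-edges around the vertices and an involution alpha pairing half-edges into edges; the faces are the cycles of their composition, and Euler's relation V - E + F = 2 - 2g then yields the genus which weights the graph by N^(2-2g).

```python
def n_cycles(perm):
    """Number of cycles of a permutation given as a dict."""
    seen, count = set(), 0
    for start in perm:
        if start not in seen:
            count += 1
            x = start
            while x not in seen:
                seen.add(x)
                x = perm[x]
    return count

def genus(sigma, alpha, n_vertices, n_edges):
    """Genus of the surface in which the ribbon graph embeds."""
    faces = n_cycles({h: sigma[alpha[h]] for h in sigma})
    return (2 - (n_vertices - n_edges + faces)) // 2

# One 4-valent vertex with cyclic order 0 -> 1 -> 2 -> 3 -> 0 of its half-edges.
sigma = {0: 1, 1: 2, 2: 3, 3: 0}
planar  = {0: 1, 1: 0, 2: 3, 3: 2}   # edges (0,1) and (2,3): sphere, genus 0
twisted = {0: 2, 2: 0, 1: 3, 3: 1}   # edges (0,2) and (1,3): torus, genus 1
print(genus(sigma, planar, 1, 2), genus(sigma, twisted, 1, 2))   # 0 1
```

The two pairings build the same abstract graph, but the "twisted" one cannot be drawn on the sphere, which is exactly the distinction the 1/N factors keep track of.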
Now, it is true that this arose mostly from an attempt to find theories of random geometry in higher dimensions, but still, even without that, one could say, ok, matrices are so useful, so can we talk about random tensors? Now, well, the proposals date back to the 90s, and here I have a slightly longer list of names than Vincent. And, well, the problem is that, of course, you know, you can always write a tensor model, but then you want to have some tool, like the 1 over N expansion, to explore it, and this, well, was not so obvious, for, you know, various reasons. Now, it turns out that in the end, you know, some years later, we were able to find this 1 over N expansion, and since then, random tensors per se, you know, became, if you want, more of a live field, and here are the names of some of the people who work in this field these days. Ok, so now, what was the secret? Why was it possible to finally do this in 2010, 2011? It is that, in the beginning, people were studying tensors which had symmetry properties under permutation of indices, and, at least in a first stage, it turns out to be simpler if you distinguish the indices. Now, this does not mean that the 1 over N expansion does not exist for symmetric tensors, but if it exists, it's certainly much more difficult to identify. Ok, so, let's revisit very quickly matrices. So, what are random matrices? Well, because I just want to study the 1 over N expansion, I forget about space, and QCD, let's say. So, I see them as zero-dimensional gauge theories, which can accommodate up to two copies of the unitary group. So, they basically come, more or less, in two flavors. There are Hermitian matrices, with the unitary invariance, you know, where the field changes in the, how do you call this, adjoint representation, I think.
Or, you can take a theory for an arbitrary matrix, and then this accommodates two copies of the unitary group, because you can act with one unitary on the first index and a different one on the second index. So, you see that, in some sense, this theory actually has more gauge invariance than the first one. Ok, so, of course, you know, one natural question is, what do birds have to do with zero-dimensional gauge theories? It turns out that this is actually due to universality. So, if you use the gauge invariance, you gauge it out, you can integrate out the unitary here, and you get a theory for the eigenvalues, which is quite universal. Now, of course, it's a function of the precise details of the model, you know; if you change the model, the exponent here might slightly change, but the features of this action are very clear. On the one hand, these eigenvalues repel each other, you know, due to this term; they can't really stand to be exactly at the same point. But they live in a confining potential. So, they have these two effects, you know. In this sense, they are very much like birds: birds can't really occupy the same space, but they have to live on a finite wire. So, it turns out that, in fact, if you measure, as in this study from 2013, with actually thousands of pictures, the spacing of birds, it fits very well the gap distribution between eigenvalues. Ok, so now, let's go on to zero-dimensional gauge theories with larger gauge groups. So, tensor models will be zero-dimensional gauge theories which accommodate three or more copies of the unitary group. This means I will deal with fields, with objects, which have at least three indices, or maybe four, five, whatever, and under a gauge transformation each index will transform by its own unitary. So, the first index, a, turns with the unitary U1, which lives in its own copy of the unitary group.
The second index will change with the unitary U2, the third index with the unitary U3, and so on and so forth. So, each index has its own unitary. Now, in fact, you see that there is nothing which tells me that the dimensions of these unitary groups must be the same. They can even be taken to have different dimensions. And, if you want, the talk we had yesterday can be translated as saying: I pick one of these unitary groups U(N) to actually be a unitary group U(D), which I consider like a rotation symmetry in spacetime. So, basically, that comes down to just not choosing the same unitary group here. And then, of course, this leads to other interesting 1 over N expansions and so on. Okay, now, once you understand how the field behaves, so this is like my gauge field, the action and the observables will need to be invariants which I can build out of this field. So, the first question is, how do I build invariants out of a tensor which transforms like this, you know? Of course, from now on, I'm not forced to, but I will choose here to work with the unitary group; I could choose the orthogonal one. So, namely, this means that I have a tensor T and its dual, or complex conjugate. So, I have a pair of fields, T and T bar, and if T transforms like this under a unitary transformation, T bar, well, just picks up bars all over the place, you know? Okay, so, now I will discuss how I build unitary invariants out of these tensors which behave like this, okay? So, it turns out that, okay, here I leave out the transformation law, some invariants are easy to understand. Let's call them traces, and they are built as follows. I take polynomials in T and T bar, and I contract indices with the rule that I will always contract the first index on a tensor T with a first index on a tensor T bar. So, if you look at the transformation law, this makes that the unitary U1 will always come into contact with the unitary U bar 1 if I change bases.
So, the two unitaries will cancel, because it is the same one here; they will cancel and drop out. So, obviously, such an object is an invariant. I always contract indices respecting the position. Now, these invariants can be very nicely represented as colored graphs, and the construction goes as follows. Let's take an example of an invariant. It's built by my rules: I have three tensors T, three tensors T bar, and I contract indices by the rule that, every time, a first index on a tensor T, for instance a1 here, is contracted with the first index on a tensor T bar, for instance p1 here. And then the second index, a2, is contracted with q2, which is a second index here, and so on. Okay. So, well, actually, this equation is much more complicated than the underlying graph. The underlying graph is obtained as follows. First, I draw vertices: for each tensor T, I draw a white vertex, and for each tensor T bar, I draw a black vertex. So, here, I just put the tensors and their associated vertices. Now, for each contraction, for instance, let's take this contraction, I draw an edge which connects the corresponding vertices. You see? This guy connects a1 on this tensor with p1 on this tensor, so I will connect the two corresponding vertices by an edge. Now, because the position of the index is fixed, it is the first position, I can promote this position to a color. I can say, ah, this is an edge of color one, because it represents the contraction of first indices. Now, I apply the same rule for the other contractions. This one is a second index; it connects these two guys, and it has color two. And finally, the last one is a third index; you know, this one goes from a3 on this guy to r3 on this guy. So, these three delta functions are these three edges, and the edges represent contractions. Okay. Now, well, I can do this for all the contractions present in here.
This will give me a graph which will look like this. And what are the rules? The graphs are bipartite: I have T's and T bars, so I have white and black vertices. All edges connect a black and a white vertex, because indices are contracted between tensors T and complex-conjugated tensors T bar. The edges have colors, such that every vertex has exactly three edges incident, because the tensor has three indices, so I have an edge one, an edge two, and an edge three coming here, and the edges have different colors. So, I have exactly one edge of color one, one edge of color two, and one edge of color three attached to this vertex, because, you know, each of them represents a contraction of exactly one index, the first one, the second one, or the third one. Okay. Now, as Vincent showed in his talk by giving you some numbers, there is a wealth of invariants I can build like this. For instance, for tensors with three indices, out of three T bars and three T's, I can build these guys. You see that these two guys are identical up to recoloring, for instance; I draw them separately, but, in fact, it's enough here to switch the roles of the edges one and two to get, you know, combinatorially the same graph, but these others are different. So, these are the kind of graphs Vincent showed to you. Now, one remark about these invariants: as I said, these invariants, which can be represented as graphs, are nice, they are easy to understand, maybe easier than others. You know, once you give me a graph built with these rules, I put a T for each white vertex, a T bar for each black vertex, and then I put a delta contracting the appropriate indices for each edge. So, okay, that's simple. Well, it turns out that they form a basis, well, actually an overcomplete system at finite N; it becomes a basis if I send N to infinity. So, any invariant can be written as a linear combination of these guys. So, this makes them nice.
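These trace invariants are easy to check numerically. The snippet below is a sketch of my own (the quartic contraction pattern is one illustrative choice among such graphs, and all names are mine): it verifies with numpy that an invariant built by always contracting position i on a T with position i on a T bar is unchanged when each index is rotated by its own unitary.

```python
import numpy as np

rng = np.random.default_rng(1)
N1, N2, N3 = 3, 4, 5
T = rng.normal(size=(N1, N2, N3)) + 1j * rng.normal(size=(N1, N2, N3))

def quartic_invariant(t):
    """Trace invariant with two T's and two T-bars: color 1 is exchanged
    between the two pairs, colors 2 and 3 stay within each pair."""
    tb = t.conj()
    return np.einsum('abc,dbc,def,aef->', t, tb, t, tb)

def haar_unitary(n):
    """Random unitary from the QR decomposition of a complex Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

# Rotate each index with its own unitary: T -> (U1 x U2 x U3) T.
U1, U2, U3 = haar_unitary(N1), haar_unitary(N2), haar_unitary(N3)
T_rot = np.einsum('ia,jb,kc,abc->ijk', U1, U2, U3, T)
print(np.allclose(quartic_invariant(T), quartic_invariant(T_rot)))   # True
```

Note that the three dimensions N1, N2, N3 are deliberately different, illustrating the point that each index carries its own unitary group.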
I know that if I study an arbitrary invariant, I can write it as a sum of these guys. You know, it's a bit like the traces of the powers of a matrix: they are nice because any invariant of the matrix can be written as a sum over traces of powers of the matrix. So, this is really the appropriate generalization in higher dimensions. And, of course, just as for finite-dimensional matrices the traces are independent only up to some power, at finite N these guys at some point become again linearly dependent, but when I send N to infinity, they become a full basis. Okay. Now, given this, I understand how to build invariants, and I understand what my field is, you know. So, what does it mean to study a tensor model? My fundamental field is the tensor and, of course, its complex conjugate. The action will be an invariant, you know. So, the action I will split into two parts. First, a quadratic part, which is the simplest kind of invariant one can think of: it has a T and a T bar, and all the indices are contracted between the two. You know, the index in the first position here is contracted with the one in the first position here, and so on and so forth. So, this is the simplest invariant, with just a T and a T bar; I can't do it with less than this. I contract the indices. Okay, then, well, the rest, the perturbation part of the action, is a sum over all possible invariants, which are, if you want, graphs. So, this is a sum over all possible connected graphs. And this connectedness I put in by hand: I decide to look at single-trace models. You know, it's like in matrices: I can just look at a matrix action which is the trace of some function; I don't want to have double traces in the action. I could, but, you know, for simplicity, I will just restrict to single traces. Okay, so, now, if I have an action, I build a partition function. This will be an integral over all possible tensor entries, T and T bar.
Times the exponential of minus some scaling factor, and we'll discuss the scaling a bit later on, times the action S. Now, the gauge-invariant observables, well, they are these invariants themselves, you know. So, each of them can be a gauge-invariant observable of the theory. So, the objective is to compute things like the logarithm of the partition function, the connected expectations of products of invariants, and so on and so forth. These are the kinds of objects I try to compute in this theory. So, how do I do that? How do I compute it in practice? Well, you know, I can try a Feynman expansion first. So, a Feynman expansion consists in expanding in the coupling constants here, you know. So, I just write this partition function as a sum of Gaussian integrals, and here these are just products of invariants. So, each of these guys is represented by a graph with three colors. Let's imagine that this one represents this invariant, and I have a second one, this invariant, and so on and so forth. So, now I only need to compute a Gaussian integral with a given measure here. This Gaussian integral, well, it can be computed, by Wick's theorem, in terms of contractions, which consist in pairing tensors two by two. So, if I compute these contractions, I need to pair the tensors by Wick contractions, which comes to adding another category of edges. These are, if you want, the propagators of my theory. So, this guy here would be like the effective interaction, and the green guy is a propagator, you know, which goes from a white tensor T to a black tensor T bar. By the way, you know, because I deal with a complex measure, these green edges keep bipartiteness: they still go from white to black all the time.
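The Gaussian integral step can be illustrated with a quick Monte Carlo of my own (the normalization is my convention, chosen so that the covariance is one delta per index pair, not necessarily the talk's scaling): Wick pairing then predicts that the quadratic invariant averages to N1 times N2 times N3.

```python
import numpy as np

rng = np.random.default_rng(2)
N1, N2, N3, samples = 2, 3, 4, 20000
vals = np.empty(samples)
for i in range(samples):
    # complex Gaussian tensor with unit covariance per entry:
    # <T_abc conj(T)_a'b'c'> = delta_aa' delta_bb' delta_cc'
    T = (rng.normal(size=(N1, N2, N3)) + 1j * rng.normal(size=(N1, N2, N3))) / np.sqrt(2)
    vals[i] = np.sum(T * T.conj()).real       # quadratic invariant sum T T-bar
print(vals.mean())   # Wick's theorem predicts N1 * N2 * N3 = 24
```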
Okay. So, if I start with a tensor with D indices, say three, since for the purposes of the figures I always draw tensors with three indices, then my partition function will be a sum over graphs which have one extra color, that is, four colors, you see? Now, it turns out that each such graph is dual to a D-dimensional space. Of course, I might not care about that; sometimes when I deal with random matrices I might not care that they have to do with random surfaces, but they always do have to do with random surfaces. In the same way, this graph is always dual to a D-dimensional space, which may or may not be relevant for the problem I try to describe. For people who try to do random geometries, this is of course an important point. I can study tensors without caring about this D-dimensional space, but it exists there, behind. Okay, so how do I build this space? Well, it's all in the colors. A graph has vertices and edges; my graph also has colors, and that is what makes it a space in D dimensions. To understand this, which is quite simple, let's look at dimension three. I have graphs with three colors, corresponding to the indices of the tensor, plus a fourth one, which corresponds to the contractions. But now I look at them just as combinatorial objects: black and white vertices connected by edges with colors. Now, to each vertex I will associate a dual tetrahedron. So around this guy I draw a tetrahedron. The edges which emanate from the vertex will be dual to the triangles which bound the tetrahedron, and because I have colors on the edges, I can color the triangles. This edge of color one goes through that triangle there, so I say: ah, that's the triangle of color red. The edge two comes out here, so this will be the triangle of color blue. And the three goes there.
So, this will be a triangle of color, what's that, yeah, okay, whatever, green. Okay. Now comes the following nice idea. In a tetrahedron, there is always exactly one vertex opposite each triangle. So far this was the red triangle; now I will say: in fact, I will call the opposite vertex the red vertex. So I transfer the colors to the opposite vertices. The red one goes here, the green one, which was down there, goes up there, the blue one, which was in front, goes behind, and this one, which I think is magenta, goes here. Okay. Now, you see that I did all this coloring of my tetrahedron just looking at the vertex and not at the graph, so I can do it independently at each vertex. It turns out that the colored graph represents the unique gluing which respects the colors of the vertices. So this edge here will glue two tetrahedra such that the green vertex on the left is glued to the green vertex on the right, the blue on the left to the blue on the right, and the red on the left to the red on the right. So these colors encode a gluing which preserves the color labels of the vertices. Now, if a graph with four colors represents a gluing of tetrahedra, then the invariants, which are graphs with only three colors, like this one, represent gluings of triangles: they are triangulations in one dimension less. So the objects dual to the graphs here are, if you want, three-dimensional gluings of tetrahedra, whereas the invariants are just their boundaries. Okay. Now, pictorially, these theories generate geometries in the following manner. The observables, as I said, are boundary triangulations. So if I look at an expectation, I put here some insertions, which are the boundaries, and then computing this integral amounts to filling in the bulk between the boundaries.
So, I have here a boundary state, which can be, for instance, a sphere, and here a boundary state, which can be a torus. They are two-dimensional boundary states, and this integral really fills in the bulk with all possible triangulations, topological triangulations if you want, which are compatible with these boundaries, and to each triangulation it assigns a canonical weight. Of course, this weight depends on the coupling constants and so on and so forth. But, you know, tensor models are quite an effective way to sum over triangulations between boundaries. Now, this drawing should be very familiar to some of you, because this is exactly what people draw in string theory. They say: I have a boundary state which is a circle, another boundary state which is a circle, maybe a third boundary state which is a circle, and then I fill in the bulk. So I draw objects like this, and sometimes I have 1 over N corrections, which are surfaces like this, and so on. This is exactly the same framework, but one dimension higher: my boundary state is a surface, which can already have its own non-trivial topology, and then I just fill in the bulk between them. Okay, good. So, having discussed the framework of random tensors a bit, let me go to a more precise statement, namely the 1 over N expansion. I promised you that these models have one, and I will explain it to you. Okay. Let's first revisit the 1 over N expansion for matrices. So, let's consider a matrix model, and in order to really glue onto my picture with tensors T and T bar, I will take a model of an arbitrary matrix M, so I have an M and an M dagger. My action will have a quadratic part, and then the interaction will be an invariant: a sum over traces of powers of the matrix M. Okay.
Now, the point is that each of these graphs is embedded in a surface. How do I see this embedding in two dimensions? I have vertices, I have edges, but then I have two-dimensional patches, what one usually calls faces, which correspond to cycles of two colors. For instance, here I have a patch for red-blue, down here I have a patch for blue-green, and here a red-green patch. Now, it turns out that these patches are not innocent: they have to do with the sums I need to perform when I compute the amplitude of a graph, and in the end the scaling with N of an amplitude senses the topology of this surface. The surface is described by the vertices, the edges, and these two-dimensional patches, and this is captured by the scaling in N: the amplitude of the graph goes like N to the power 2 minus 2g, where g is the genus. Here is a second example; this graph is a bit more complicated. If you trace it carefully, and I don't know if you can really see it there, it has one face for the colors blue-red. Then, if I go along red-green, I also get one face: I go along the red, then the green goes behind, then the red, the green goes along, then the red, then the green goes behind again and comes back here. So this is a non-trivial graph: it requires at least a genus-one surface to be drawn. Of course, I can add extra decorations, but I need at least genus one. So this graph has genus one, and it will be more suppressed than this one. So, in the 1 over N expansion, the topology of the underlying surface is captured by this scaling in N.
Now, once I see this, the fact that each of these amplitudes starts as a Feynman expansion, I expand in graphs and then reorganize in powers of N, I can do this famous 1 over N expansion, with the property that at each given order in 1 over N I get a sum at fixed topology, while out front I get a sum over topologies. The sum over topologies is the complicated part, but at least at each order I have simpler, if infinite, sums. Now, this can be translated to higher dimensions. It's just that the genus is replaced by something which we call the degree. The main technical point is that in two dimensions we have the Euler relation, which relates the numbers of vertices, edges, and faces of a graph. Here we have a generalization of this, which relates the number of faces to the number of vertices, and that's all, because at fixed coordination the number of edges is fixed by the number of vertices. And in this relation a number arises which is always positive or zero. So the point is that the number of faces, which counts the number of free index sums when I compute the amplitude of a graph, that is, the number of independent sums, can be expressed in terms of the number of vertices. Then I can set up my problem, exploiting the fact that this degree is non-negative, so as to get a 1 over N expansion in higher dimensions. Instead of trying to explain in detail where this relation comes from, I will just give you some examples. This is the simplest possible graph one can draw: two vertices connected by all four edges. Three of the edges represent the indices of the tensor, and this one is a contraction. This is a graph of degree zero. Now, look at this graph: you can compute that it has degree four. That one has degree 14, and so on and so forth. So this degree is a non-negative number. Okay.
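The face counting behind the degree can be made concrete. In the sketch below (my own encoding, not from the talk), a connected bipartite 4-colored graph with 2p vertices is stored as four permutations, one per color, each sending a white vertex to the black vertex it connects to; a face of colors (c1, c2) is a cycle of inverse(sigma_c2) composed with sigma_c1, and the degree then follows from the standard rank-3 relation F = 3 + 3p - omega:

```python
from itertools import combinations

def compose(p, q):
    """Permutation composition: (p o q)[i] = p[q[i]]."""
    return [p[q[i]] for i in range(len(q))]

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return inv

def cycle_count(p):
    seen, count = set(), 0
    for start in range(len(p)):
        if start not in seen:
            count += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = p[i]
    return count

def degree(sigmas):
    """Degree of a connected bipartite 4-colored graph.
    sigmas[c][w] is the black vertex reached from white vertex w by color c.
    Faces of colors (c1, c2) are cycles of inverse(sigma_c2) o sigma_c1;
    the degree uses F = 3 + 3p - omega for 2p vertices."""
    p = len(sigmas[0])
    faces = sum(cycle_count(compose(inverse(s2), s1))
                for s1, s2 in combinations(sigmas, 2))
    return 3 + 3 * p - faces

# Elementary melon: one white and one black vertex joined by all 4 colors.
print(degree([[0], [0], [0], [0]]))              # 0

# Melonic two-vertex insertion on the color-0 edge: degree stays 0.
print(degree([[1, 0], [0, 1], [0, 1], [0, 1]]))  # 0

# A non-melonic example: two colors wind cyclically around three vertex pairs.
c = [1, 2, 0]
print(degree([[0, 1, 2], c, compose(c, c), [0, 1, 2]]))  # 4
```

The last example shows a graph of degree four, of the kind quoted in the talk; unlike the genus, the degree jumps in integer steps but is computed by exactly the same bookkeeping of faces versus vertices.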
Now, in what sense does this degree act like the genus in higher dimensions? Well, if I start with an action which is a sum over invariants, and here I chose to put a certain particular scaling in N on the invariants, then the free energy of the model admits a 1 over N expansion in the following sense: I can rewrite it as a sum over non-negative degrees omega of some power of N, and at each power of N I just have an infinite sum over the graphs of that degree. Okay. This is exactly what you see in two dimensions; I basically replace g by omega, and instead of having minus 2g here, I put the appropriate prefactor. It's exactly the same structure. Now, similar expansions exist for arbitrary observables, and just one last remark: I can in fact obtain 1 over N expansions indexed by variants of this degree if I don't put this kind of scaling here, and there is a lot of debate about what the maximal scalings one could put are, and so on and so forth. Okay. Now, just a small comment. I said that this degree is like the genus, but it cannot literally be like the genus. The point is that in dimension three or higher, topology is complicated. Dimension two is not exactly simple, but dimension three or higher is really complicated, and notably there does not exist a single topological invariant number which can discriminate topologies. In two dimensions you at least have the genus; okay, you also have a sign, plus or minus, for orientability, but assuming you only discuss orientable two-dimensional topologies, they are characterized by one number, the genus. This separates the sphere from the torus, from the donut with two holes, and so on. Now, such a thing cannot exist in higher dimensions: mathematicians tell us that the topology cannot be indexed by one number.
Okay. So this degree, of course, cannot be a topological invariant; it is automatically sensitive to some aspects of the triangulation as well. It mixes topological and triangulation-dependent information. Still, as a number indexing a 1 over N expansion, it works relatively well. It has three qualities and one flaw. The qualities: at leading order, that is, at degree zero, one obtains only spheres, and we'll see in a minute that these spheres are exactly the melonic graphs. A second point, which is again a good feature, is that at any fixed degree one has a finite number of topologies. We know we can't have only one, because the mathematicians told us we can't discriminate them, but we have a finite number at any degree, which is basically the best we can do. A third good feature is that if a topology contributes at some degree, then it contributes an infinite number of triangulations. This is important: it means that my 1 over N expansion is non-trivial. The packages I resum at fixed order in N are non-trivial packages; they contain an infinite number of triangulations, an infinite number of graphs at the same scaling. Now, the flaw is that a topology starts contributing for the first time at some degree, for instance the sphere starts at zero, but then there are other topologies in three dimensions, like RP3, the projective space, or S1 cross S2, or whatever, and once a topology starts, it contributes at all subsequent degrees. So a topology will contribute at an infinite number of degrees. This is something we would have appreciated not to have, but unfortunately we do. Okay. Now, at leading order: going back to just the graphs, forgetting the geometry behind them, I generate graphs, each with a scaling in N. I want to look at the leading order, so I look at graphs of degree zero.
So, the graphs of degree zero, it will come as no surprise to you, are the melons. This one is already an example: I told you that this guy has degree zero. Now, it turns out that the degree is invariant under the following move: I take two vertices connected by three edges and insert them on any of the edges here. For instance, for the first insertion, I can insert it on the red line up there, or on the blue line in the middle, or on the green line, by the way, this is the second one inserted here, or on this magenta edge, which gives that graph there. So I have one graph with two vertices, four graphs with four vertices, and so on and so forth. Now, it's relatively straightforward to see, first, that the degree of this one is zero, and second, that it is invariant under these insertions. You'll need to believe me that this covers the full class of graphs of degree zero. Okay. Now, you see that from afar, after these insertions, a melonic graph contributing at leading order to this partition function will look like this: I go along an edge, I encounter a self-energy, which is a one-particle-irreducible two-point function, each of its branches can carry an arbitrary two-point function, and so I build up a full two-point function here. So this really is what you've seen so far with these melonic graphs. Okay. Now, the crucial point is that the graphs of degree zero are melonic, so they dominate. Whenever one sees random tensors, the degree, or some version of it, will govern the expansion; of course, one can cook up models where this doesn't happen, but typically random tensors will have leading orders dominated by melonic graphs. This is what one would call the melonic universality class: everything built on random tensors will, at leading order, behave like a melon. Now, one should be very careful about what the consequence of this is.
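The counting just described, one melonic graph with two vertices and four with four vertices, continues as the Fuss-Catalan numbers. A small sketch (mine, assuming rank-3 tensors unless d is changed) computes them both from the closed formula and from the generating-function equation M(x) = 1 + x M(x)^(d+1), which encodes exactly the insertion move: each inserted pair opens d+1 places where further melons can nest:

```python
from math import comb

def melon_count(n, d=3):
    """Number of melonic graphs with 2n vertices for rank-d tensors:
    the Fuss-Catalan number C((d+1)n, n) / (d*n + 1)."""
    return comb((d + 1) * n, n) // (d * n + 1)

def poly_mul(a, b, trunc):
    """Product of two coefficient lists, truncated at length trunc."""
    out = [0] * trunc
    for i, ai in enumerate(a[:trunc]):
        for j, bj in enumerate(b[:trunc - i]):
            out[i + j] += ai * bj
    return out

def melon_series(order, d=3):
    """Coefficients of M(x) solving M = 1 + x * M**(d+1), up to x**order,
    by iterating the equation to its power-series fixed point."""
    trunc = order + 1
    M = [1] + [0] * order
    for _ in range(trunc):
        P = [1] + [0] * order
        for _ in range(d + 1):
            P = poly_mul(P, M, trunc)
        M = ([1] + P)[:trunc]  # M = 1 + x * M**(d+1), truncated
    return M

print([melon_count(n) for n in range(5)])  # [1, 1, 4, 22, 140]
print(melon_series(4))                     # [1, 1, 4, 22, 140]
```

The agreement of the two computations is the combinatorial core of melonic dominance: the melons form a tree-like family, so they are summable in closed form, which is what makes the leading order solvable.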
So, in some cases, this means that the self-energy factorizes at leading order in terms of the two-point function, which is the kind of feature you get in SYK-like models. In other cases, for invariant models, it means that random tensors are universal in a very precise manner. Okay. This I will discuss at the end, on my last slide, this universality statement. Can I ask a question here? Yes. Is the property of matrices, for example, that all correlators factorize at leading order, also true here? Yes, but it's even stronger here: I have an even stronger property than freeness. What you are describing is freeness, and here I have even more than freeness. Let me get to that on the last slide. Okay. So, now, let's discuss a bit why random tensors basically describe the SYK model. You already saw this model: it's built on a vector of Majorana fermions, with a quadratic part and a q-body interaction whose random couplings are drawn from a Gaussian ensemble with a certain covariance. Okay. Now, the object of interest here is the disorder average of the thermal two-point function. So, what do I do? I insert two fermions, I integrate over the fermionic degrees of freedom, I normalize, which is why this is the thermal two-point function, and then I take the disorder average. This is the quenched model; if I wanted the annealed model, I should put the disorder integral before the division. Okay. So, this is the object I care about, and the claim is that, because of this disorder average, the melonic graphs dominate at leading order. So, let me explain how this follows from what I discussed before. I will just ask you to allow me one small modification of the model: allow me to distinguish the fermions, since I told you everything with colors, so let's just color them too.
So, I will discuss a model of the following form: the free part is chi^1_a (d/dtau) chi^1_a + ... + chi^q_a (d/dtau) chi^q_a, and the interaction is J_{a1...aq} chi^1_{a1} chi^2_{a2} ... chi^q_{aq}, with one fermion of each color. Good, okay. Now, the point is that, in order to understand the 1 over N expansion of this guy, the simplest thing is to integrate out the fermions. This just serves to understand the 1 over N scaling. So, if I have this action and I integrate out the fermions, and let me even put two chi's here as an insertion, say chi^1_a at two times, what do I get? Well, for each vertex I will get a tensor J: this one with indices a1 through aq will be a vertex, then there will be another vertex there, J_{b1...bq}, then here a vertex J_{c1...cq}. Okay, the propagators of the fermions will identify the indices, so I draw them as edges. For instance, the edge of color 1 connects these two guys and represents the identification of the indices in position 1; the edge of color 2 represents the identification of the indices in position 2. So, you do recognize that if I integrate out the fermions, I get a sum over exactly this kind of colored graph, that is, a sum of invariants. Now, if I divide by the vacuum, I keep just the connected graphs. So, this object, once appropriately normalized, is just a sum over bubbles, over connected invariant graphs B, of Tr_B(J, J bar). In this particular case I took a real model, but it's basically the same with a complex one. Okay, and besides this piece, which is everything the random tensor sees, there is some complicated function which depends on the particular times I have here, something which depends on tau 1 and tau 2.
Now, all the conformal field theory content is in that guy, the explicit amplitude, but the 1 over N structure is entirely captured by this guy. And you see that this is a tensor model, and in fact a very simple one, because I just need to integrate this guy against a Gaussian. So, in order to understand the quenched average, I just put here the exponential of minus N to the d minus 1 times J squared, and I need to look at how these expectations scale with N. Now, this is a bit of a technical slide, which I included because during Vladimir's talk there were some questions about the N scalings in this vector SYK. So, the statement is the following. One can prove that the scaling N to the d minus 1 here is the unique interesting one, in the following sense. If I look at the expectations of these invariants with the Gaussian measure, I have this theorem, which looks impressive: there exists a non-negative number, which I call the order of convergence, such that this limit is well defined, and on the right-hand side I have the covariance to some power. It means that if I look at this sum here, it organizes as a sum over orders of convergence, capital Omega if you want: N to the power minus the order of convergence of the graph, times the covariance, let's call it J squared, to the appropriate power p. Okay. Now, you see that this is a 1 over N expansion indexed by this order of convergence, which is, by the way, in general larger than the degree. So, the Gaussian order of convergence has the following properties. First, this is the unique scaling in N which makes the order of convergence positive or zero for all graphs. If I modify the scaling, I might end up with graphs which blow up, which is not good: if I want to have an expansion, I want this Omega to be positive or zero.
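A quick Monte Carlo sanity check of this scaling (illustrative only; d = 3, a complex rather than real tensor, and the values N = 8 and 400 samples are my choices, not from the talk): with covariance delta over N^(d-1), the quadratic bubble averages to N, while the quartic melonic bubble averages to N + 1, because the melonic Wick pairing contributes at order N and the non-melonic one at order 1:

```python
import numpy as np

rng = np.random.default_rng(1)
N, samples = 8, 400

def sample_tensor():
    """Complex Gaussian tensor J_{abc} with E|J_{abc}|^2 = 1/N^2,
    i.e. the weight exp(-N^(d-1) * sum |J|^2) for d = 3."""
    z = rng.normal(size=(N, N, N)) + 1j * rng.normal(size=(N, N, N))
    return z / (N * np.sqrt(2))

# Quadratic bubble: N^3 terms, each of expectation 1/N^2 -> mean N.
quad = np.mean([np.einsum('abc,abc->', J, J.conj()).real
                for J in (sample_tensor() for _ in range(samples))])

# Quartic melonic bubble: the melonic pairing gives N, the other gives 1.
quart = np.mean([np.einsum('abc,abd,efd,efc->', J, J.conj(), J, J.conj()).real
                 for J in (sample_tensor() for _ in range(samples))])

print(quad)   # close to N = 8: one free index sum survives
print(quart)  # close to N + 1 = 9
```

This is, of course, only the two lowest bubbles; the content of the theorem is that the same scaling keeps every invariant's expectation finite, with melons and only melons surviving at order one relative to their maximal scaling.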
So, this is the first property. Second, it is the unique scaling such that there exists an infinite family of invariants at every given order of convergence. So, this is the only scaling of the Gaussian which makes this sum non-trivial, because the sum over graphs at fixed order is infinite. So there is no choice: I cannot play with the scaling in N in the SYK model, that is, in a Gaussian tensor model. Okay, and last but not least, this order of convergence is zero only for melons. So, at leading order, this restricts to melons, and for melons this particular combinatorial number here is one, so I just get a sum over melons with weight one. This is why in the SYK model, at leading order, I obtain the melons: it's a property of the Gaussian tensor model, which is, if you want, the simplest tensor model of them all. Okay, now, of course, one can do more interesting stuff. For instance, one can consider an SYK-like tensor model in which I don't have a special guy called the coupling which is a tensor; instead, all the fields are tensors. So, in some sense, this is a bit more, I don't know if it's nicer, but it is certainly different from the SYK, because there I had one special individual guy called the coupling, and here I don't: I have no random coupling anymore, but all the fields are tensors. Now, it turns out that in this model the same property holds: because the fields are tensors, melons dominate, and at leading order the two-point function factorizes. Okay, so, just some comments. Why would one want to complicate one's life and do a tensor version of SYK? From the point of view of the conformal field theory this doesn't change much, but it changes the 1 over N expansion. Well, for once, it eliminates the quenching, which is something you may or may not like to do, but it eliminates it in a specific manner.
All the fields have the same number of degrees of freedom. One other way to eliminate the quenching would be to promote the coupling J to a dynamical field, but that would mean that basically all the degrees of freedom are in J and almost none in the fermions; this version has the advantage of distributing the degrees of freedom among the fermions. Now, a second thing is that it's slightly nicer to take the N to infinity limit in this case, because in a tensor version of SYK it is just the gauge group which grows. It's not the number of fermion flavors which grows, it's the gauge group, and that is a bit closer in spirit to what happens in things like QCD. Now, it also addresses the question of singlets which was asked during Vladimir's talk, because if you have a gauge-invariant theory, you just discuss gauge-invariant observables, so there is no question about singlets. And the 1 over N series in this kind of tensor SYK model is indexed by the degree, whereas the 1 over N series of the Gaussian theory is indexed by the order of convergence; it turns out that the degree is a slightly nicer number than the Gaussian order of convergence, so the series for the tensor model is somewhat better controlled than the series for the non-tensor version. Okay. So, now, this is my last slide. I told you that this melonic structure, the fact that the leading-order graphs become melonic, sometimes has the consequence that the self-energy factorizes in terms of the two-point function. This was the case on the previous two slides, for the vector SYK as well as for the tensor SYK. But in general it leads to what I would call the appropriate definition of the melonic class. This comes from the following. I put here again the theorem about the Gaussian distribution for a tensor; this is just the simplest example, with a T and a T bar. So, what happens? Ah, and this answers the earlier very good question about freeness.
What happens for an arbitrary invariant tensor model in the large N limit, and this is a leading-order statement (the corrections are of course very interesting), is that any measure becomes Gaussian. This is much stronger than freeness, because if you take an observable in a matrix model, you get four-point cumulants which contribute, and six-point cumulants, and so on; it's just that only the non-crossing ones survive, and that is the notion of freeness. Now, for tensors, it's much stronger than this: the measure becomes Gaussian. But, and this is a big but, the subtlety is that the effective covariance is not trivial. So, basically, under very general assumptions, a model with interactions here will see them enter and alter the covariance: I get an effective covariance which obeys a certain self-consistency equation which sees the melonic part of the potential, and so on. Maybe the precise statement is not very important; what is important is that the covariance is not trivial. So, the measures at leading order become Gaussian, but with non-trivial covariances. Now, of course, a lot of research these days is dedicated to finding scalings such that I still get a reasonable 1 over N expansion but escape this universality theorem. The theorem can actually be made much more precise: it works for this model, but one can give mathematically precise conditions under which it holds, and so on, and we try to escape them. Okay. So, in conclusion, I hope I convinced you that we have today, and have been building for some time, a theory of random tensors. The nice thing is that it is built on a canonical path-integral-like formulation, it has a built-in scale, the size of the tensor, and it exhibits a universal 1 over N expansion in which melons dominate; and, well, apparently this is interesting for some people.
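The self-consistency structure just mentioned can be sketched in zero dimensions (a toy model of my own, with a quartic melonic interaction assumed, not the talk's general statement): the effective covariance G obeys G = 1/(1 - Sigma) with melonic self-energy Sigma = lam * G^3, equivalently G = 1 + lam * G^4, whose weak-coupling coefficients are again the melon counts 1, 1, 4, 22, ...:

```python
def melonic_G(lam, tol=1e-12, max_iter=10_000):
    """Toy melonic self-consistency in zero dimensions:
    full 2-point function G = 1/(1 - Sigma), with the melonic
    self-energy Sigma = lam * G**3, solved by fixed-point iteration.
    At the fixed point this is exactly G = 1 + lam * G**4."""
    G = 1.0  # start from the bare covariance
    for _ in range(max_iter):
        G_new = 1.0 / (1.0 - lam * G**3)
        if abs(G_new - G) < tol:
            return G_new
        G = G_new
    raise RuntimeError("no convergence (coupling too large?)")

G = melonic_G(0.1)
print(G)                        # ~1.23, the solution of G = 1 + 0.1 G^4
print(abs(G - 1 - 0.1 * G**4))  # ~0: the self-consistency equation holds
```

This is the zero-dimensional shadow of the full statement: the measure looks Gaussian at leading order, but its covariance G is determined self-consistently by the melonic part of the potential rather than by the bare quadratic term.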
It can be useful for things like obtaining interesting CFT1s, as I said in the beginning, it may be useful for random geometries in higher dimensions, and so on. And with respect to these interesting CFT1s, the SYK model, well, a lot remains to be understood there. So, this is what I had to say. Sorry, the fact that the model saturates the chaos bound: is it automatic, is it implicit, or did I miss it? So, at leading order in 1 over N, if one takes the Witten model, one reproduces exactly the same equations as in the SYK model. The four-point function is also identical: not only the two-point function but also the four-point function. So, if I understand correctly, and maybe Vladimir knows better than I do how this comes about, once you solve the Schwinger-Dyson equation, which is the same, and plug it into the four-point function, which is the same, you get the same kind of four-point correlator, hence it will saturate the chaos bound. So that is basically built in: it is the same model, it's just that the 1 over N limit is obtained differently and the corrections are organized differently. What is not clear at all is this: imagine that by some computation we were able to compute 1 over N corrections to this exponent. They will be different in the two models, and it would be interesting to see by how much, and which one goes less or more; I guess they can't go over the bound, so probably both will go under. You should think of them as different quantizations of the same semi-classical theory. Yes. And the question is in what sense they are not equivalent: when do they start to really differ? Can you say the difference again? In one case the couplings are quenched and random; in the Witten case they are fields. Yes. Is that all?
And in one case the fields are vectors, while in the Witten case the fields are tensors. So the gauge symmetry also changes. Okay, since I didn't give many details for the Witten model: this is the usual vector model, so I call these vectors; now you put a lot of indices here and contract them in a very specific pattern, and notably the gauge group changes. So it's quenched versus non-quenched couplings, and the gauge group is quite different, actually. So, yes. Related to the question: in the dual of the tensor model, well, in the tensor model there is an enormous number of gauge-invariant singlets, so in the bulk there will be an enormous number of fields, an absolutely enormous number. Is there a way to make some of the invariants preferred, so that there won't be so many fields in the bulk? Very good question. So... What would be the dimensions of these operators, could they all be the same? So, one thing I know is that if you look at the expectations of these guys, there will be many, but not all of them arise at leading order; some of them are very subleading in 1 over N. Look again at this theorem here: you take here an invariant, and imagine you take something more complicated. This Omega, for some of these invariants, these gauge-invariant observables, is such that many of them arise only at very high orders in 1 over N. So I certainly know that they don't all contribute at leading order. Can I just add a small comment? If you do the large D limit of matrix models, as I explained yesterday, then the observables are just the usual traces that you have in all the string theory constructions. So this is really not a problem: you can have models where the observables are just the usual matrix-model gauge-invariant observables, nothing more and nothing less.
I think this is a problem: the density of states will grow factorially. It would be exactly as in a usual matrix model, so you will have traces, with D going to infinity. So it's like a tensor model, so the density of states will grow factorially. No, because only the U(N) will be gauged. That's very crucial; that's one of the conceptual differences between the matrix case and the tensor case. I think you're answering the question I asked: can you make some invariants preferred to others? I think you're saying you can make some preferred. Absolutely, that's one of the main points of bringing in these large D matrix models versus tensors. Okay. Thank you.