Okay, I think I'll allow one more minute as I see people joining. Yeah, we still have people coming in. So I guess we can start the recording. Okay. All right. So this is the last day of our school. Let's look forward to hearing the final lecture by Kazuya Yonekura. Yes, go ahead. Okay. Thank you very much. So yesterday I talked about the construction of chiral fermions as boundary modes. So I considered this kind of situation: we have a d-dimensional manifold W and a (d+1)-dimensional manifold Y. And on Y, we put the massive fermions. And then if we take the mass parameter to be negative in the bulk, we can obtain chiral fermions on the boundary. And in that way, we can realize chiral fermions. And then, so this was the important point: in the modern formulation of anomalies, everything is formulated in a completely gauge-invariant way. Maybe yesterday I didn't write this explicit formula, but in this situation we can define the partition function by using the Pauli-Villars regulator like this. So this small m is the physical mass, and this capital M is the Pauli-Villars mass. And then we can just take the ratio between these two determinants, and we get a completely gauge-invariant definition of the partition function. So everything is completely gauge invariant. But the definition of the d-dimensional chiral fermion theory may depend on the (d+1)-dimensional bulk. And that is the modern interpretation of the anomaly. And today I want to discuss this dependence on the (d+1)-dimensional bulk, and I will compute this dependence more explicitly. It turns out to be given by the Atiyah-Patodi-Singer eta invariant. So I consider this quantity. This is the partition function on Y with the boundary condition L. And so maybe I write it again. I don't use this explicit form today, but the boundary condition is given by this condition, where tau = 0 is the boundary.
And then I impose the condition that this psi is an eigenvector with eigenvalue plus one under this gamma_tau. Okay, now to compute this quantity, it's convenient to change our point of view. So far we have looked at the system in this way: this direction tau is a space direction, and this boundary is a spatial boundary. But we are working in Euclidean signature, so there is no distinction between time and space. We can regard any direction as a time direction and any direction as a space direction. So we can see the system in this way. In this interpretation of the manifold, the tau direction is now a Euclidean time direction, and this W is a time slice; it is the time slice at tau = 0. Then we can interpret the path integral in a different way. The path integral on Y actually gives a state vector. This is W. The path integral was originally introduced to compute transition amplitudes from one time to another, from an initial time to a final time. Because we are looking at tau as a time coordinate, if we perform the path integral on Y, we get an amplitude. But on this manifold there is no initial time; there is only this final time. So this is similar to the Hartle-Hawking creation of the universe from nothing, but I'm not discussing quantum gravity: here the metric is just a background field, and this manifold is just a background manifold. But we can still consider it as a transition amplitude from nothing, the empty state, to W. So in this interpretation, the path integral gives a state vector on W. Here H_W is the Hilbert space on W. And I also imposed this boundary condition, and this boundary condition can also be seen as a state vector.
So this boundary condition L also gives a state vector, which I denote by the ket |L>. I won't explain the details of this state, but we can characterize it very explicitly in the canonical quantization formalism. I won't use any explicit realization of this state; I just use its existence. Then the result of the path integral on Y with boundary condition L can be written as an inner product between this state <L| and this state |Y>. So the partition function can be seen in this way. For the computation, there is another important point. Near the boundary of the manifold, a neighborhood of the boundary is just a product of some interval times W. So near the boundary we have this. I assumed this yesterday. Then the path integral on this region is described by time evolution: the path integral on this interval corresponds to the Euclidean time evolution operator. Here H is the Hamiltonian of the massive fermion system, and epsilon is the length of the interval. So the path integral on this region is equivalent to this time evolution operator in Euclidean time. And this operator goes to the projection operator onto the ground state. Here the ket |Omega> is the ground state. We have a massive fermion theory, and because we are taking the mass to be very large, this theory has a very large mass gap. So all excited states are suppressed under this Euclidean time evolution. In the limit that epsilon times the mass parameter goes to infinity, all excited states decay under this time evolution and only the ground state survives. Then, in this infinite-mass limit, the state |Y> is proportional to the ground state, because we have this region which is represented by this time evolution, and then we get this projection.
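The projection mechanism just described can be checked in a finite-dimensional toy model, which is my own illustration rather than anything from the lecture: for any Hermitian "Hamiltonian" with a non-degenerate ground state, Euclidean evolution exp(-epsilon H) applied to a generic state converges to the ground state once epsilon times the gap is large.

```python
import numpy as np

# Toy illustration (not from the lecture): Euclidean time evolution
# exp(-eps*H) projects any generic state onto the ground state when
# eps * (gap) >> 1, as claimed for the gapped bulk fermion.
rng = np.random.default_rng(0)

# Random Hermitian "Hamiltonian" on a small Hilbert space.
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
H = (A + A.conj().T) / 2

evals, evecs = np.linalg.eigh(H)   # ascending eigenvalues
ground = evecs[:, 0]               # non-degenerate ground state

# A generic state, playing the role of the path-integral state |Y>.
psi = rng.normal(size=8) + 1j * rng.normal(size=8)

def evolve(psi, eps):
    """Apply exp(-eps*(H - E0)) and normalize (E0 subtracted for stability)."""
    out = evecs @ (np.exp(-eps * (evals - evals[0])) * (evecs.conj().T @ psi))
    return out / np.linalg.norm(out)

overlap_small = abs(np.vdot(ground, evolve(psi, 0.1)))
overlap_large = abs(np.vdot(ground, evolve(psi, 200.0)))
print(overlap_small, overlap_large)  # the second overlap should be ~1
```

The same exponential suppression of excited states is what makes the bulk massive fermion act as a projector in the epsilon*M -> infinity limit.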
So this is proportional to the ground state. Now we can use this property to compute the partition function. It is given by the inner product between <L| and |Y>. And because |Y> is proportional to the ground state, we can insert the projection onto the ground state, and then we get this. So now we can separate the contribution of the boundary from that of the bulk. This contribution, the inner product between the boundary state and the ground state, is completely determined on the boundary, so we can interpret it as the contribution from the boundary. On the other hand, the inner product between |Y> and |Omega> is determined purely by the properties of the bulk, and it does not depend on what boundary condition we impose, so we can interpret it as the contribution from the bulk. So we can separate the boundary contribution and the bulk contribution. But this separation is not complete, because in this separation we needed to introduce the ground state, and the phase of the ground state is not canonically determined. So the ground state has a phase ambiguity. In particular, this system has a Berry phase, so there is no canonical way to determine the phase of the ground state. Anyway, up to this phase ambiguity, we can separate the boundary contribution and the bulk contribution. Oh, excuse me. Yes. Could you say again about the Berry phase part? I didn't explain it at all. Yeah, we can compute the Berry phase of this ground state when we change the background field. For example, if we have a U(1) symmetry — yeah, I'm assuming that I put a U(1) gauge field, and then I can change the background U(1) gauge field, and the ground state gets a Berry phase under the change of the background field. And in fact we can, for example, reproduce — I forgot the name, TKNN? I mean, the Berry phase that appears in the integer quantum Hall system. Thouless-Kohmoto, you mean? Yeah, yeah, yeah. Thouless-Kohmoto. Yeah, sorry.
That phase is usually computed in momentum space. But here, maybe, it is a different formulation. Yeah, yeah, yes. We can reproduce that Berry phase computation from the point of view of this ground state on background fields, and we can really reproduce the same result. I'm sorry, can I ask a second question? Maybe I misinterpreted something, but the |Y> ket, right? It belongs to a Hilbert space of the (d+1)-dimensional theory, and L is a basis. I don't know whether you have made some input regarding the fact that we already know this (d+1)-dimensional massive fermion can be reinterpreted in terms of a massless fermion in d dimensions, because it would naively seem to me that in your last equation you have put the same state, the same vacuum state, bracketed with things that should belong to different Hilbert spaces. Am I making myself clear? Okay, so let me go back to this one. Yeah, originally we regarded this W as a spatial boundary, and then we have a localized fermion here. But now I have changed the interpretation: W is just a time slice, and there is no spatial boundary on W. So we can just consider — okay, so we are always dealing with the same Hilbert space in this new picture. That was what I was fearing, because if you have a theory in d+1 dimensions and then a second theory in d dimensions, those Hilbert spaces should not be the same. But in this interpretation, one is preparing the Hartle-Hawking-like wave function, and the other is just a state at the fixed time slice. So everything is okay, in a sense. Yeah, that was my question. I think I resolved it with your comment. Thank you. I got confused for a second. Thank you. Yeah. Okay, so maybe I should emphasize that in this new interpretation, I'm only discussing the massive fermion.
So this is the Hilbert space of the massive fermion, which is defined on this W. We can imagine performing canonical quantization of the massive fermion on W. Because this W does not have any boundary — maybe I should write it: it doesn't have any boundary — this is the Hilbert space of just the massive fermion, and there is no massless mode in this Hilbert space. And the path integral gives a state vector here. Maybe my question is better posed in the following way. This |L>: you cannot expand an arbitrary state of the H_W Hilbert space in terms of L, right? You would miss some; there are other states that are not linear combinations of L. L is just a state, but you cannot expand states in it; it's not a basis in that sense. L is not a basis. Yeah, I mean, this L is something similar to — okay, it's a particular state. Yeah, in quantum mechanics we can consider a state which is localized at some definite position; this L is something similar to that. So this L is also a state vector of the massive fermion theory. Yes. I think there's a question in the chat: is the ground state always non-degenerate? Ah, yes. Actually, the ground state is non-degenerate. That can be seen by performing explicit canonical quantization of the system on W. Sorry, I didn't show that canonical quantization, but by quantizing the fermion on W, we can see that the ground state is isolated; the other states are very highly excited. Yes. Thank you. Okay, so I separated the contributions from the boundary and the bulk. Now I can study the dependence of this partition function on Y. So let's take another manifold, Y', which has the same boundary. Then, to study the dependence on Y, I compute the ratio between the partition function on Y and the partition function on Y'.
So this Y' and Y have the same boundary, but their bulks are different. So let's compute this quantity. By looking at this expression, we can see that the boundary contribution is determined purely by the boundary; it is common to both partition functions. So this part cancels out and we get this result. And I forgot to say: let's assume that the absolute value of this inner product is equal to 1. Sorry, I won't explain this point, but it is possible to show it explicitly. The reason for it is that the bulk is almost empty: there are no degrees of freedom in the bulk, because the bulk has a very large mass gap. So in the large-mass limit there are no degrees of freedom in the bulk, and the bulk theory is in a sense trivial. This condition says that the bulk is trivial. But the bulk is not completely trivial: it contains a phase, and that phase ambiguity is the point for anomalies. Anyway, for simplicity of discussion, let me assume this. Then this is now given like this. Again I use the fact that the state |Y> is proportional to the ground state, so I can eliminate the projection and just write it in this way. So this is now the inner product between the state |Y> and the state |Y'>, and this is actually a partition function on some closed manifold. Let me explain the situation. We have this manifold Y with boundary W, and we also have another manifold, maybe with different topology, which I call Y', but its boundary is the same. Then we can glue them together along the common boundary, and we get a manifold which has no boundary. I call it Y_c, for "Y closed". So this has no boundary; its boundary is empty. And we can interpret this inner product as a partition function on this closed manifold Y_c. So in this way, we get the formula for the ratio of the two partition functions.
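For reference, the chain of manipulations just described can be summarized in one line. This is my own transcription of the blackboard steps, in the notation of the lecture, using the assumption |⟨Ω|Y'⟩| = 1 stated above:

```latex
\frac{Z(Y)}{Z(Y')}
 \;=\; \frac{\langle L | Y \rangle}{\langle L | Y' \rangle}
 \;=\; \frac{\langle L | \Omega\rangle\,\langle \Omega | Y \rangle}{\langle L | \Omega\rangle\,\langle \Omega | Y' \rangle}
 \;=\; \frac{\langle \Omega | Y \rangle}{\langle \Omega | Y' \rangle}
 \;=\; \langle Y' | Y \rangle
 \;=\; Z(Y_c),
 \qquad Y_c \;=\; Y \cup_W \overline{Y'}.
```

The boundary factor ⟨L|Ω⟩ cancels in the ratio, and the last two equalities use that both |Y⟩ and |Y'⟩ are proportional to |Ω⟩ and that gluing the two path integrals along W produces the closed manifold Y_c.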
So the ratio of the partition functions on Y and Y' is given by the partition function of the bulk theory on Y_c. This is a very general formula for the anomaly. In the derivation I didn't even use the properties of fermions; I just assumed that the ground state is isolated — that there is a non-degenerate ground state and all other states are very highly excited, that is, a very large mass gap. Under that assumption I derived this equation. So actually this formula describes very general anomalies of any theory. We can consider not only fermions but also other theories, such as chiral bosons or something like that. And then, if this partition function on Y_c is equal to 1 for any closed manifold, that means the partition function does not depend on the choice of the bulk: the partition functions with Y and Y' are the same. So we can use it for the definition of the chiral fermion partition function. I define the chiral fermion partition function by this bulk partition function with the boundary condition L. This is well defined because the right-hand side does not actually depend on the choice of Y; it only depends on the choice of W. So it is reasonable to regard it as a partition function on the d-dimensional space. In this way, we can define the chiral fermion partition function. This is possible if — I'm just repeating what I already wrote — so we see that this bulk partition function is the anomaly. If it is equal to 1, there is no anomaly; and if it cannot be taken to be 1, then there may be an anomaly. Actually, there is still some freedom to modify this partition function, but maybe I don't have time to explain that point. Okay, so I gave an abstract discussion, and I still need a bit more abstract discussion about this bulk theory. This bulk theory is called an invertible field theory.
An invertible field theory is characterized by the property that the dimension of the Hilbert space is always equal to 1. This means that there is only the ground state on any space which has no spatial boundary: if there is no spatial boundary, the ground state is isolated and is the whole Hilbert space. This is the property which characterizes an invertible field theory. This terminology often appears in the recent literature on anomalies and topological phases. The fact that the Hilbert space is one-dimensional means that there are no degrees of freedom, so the theory is almost trivial. The invertible field theory is almost trivial, but its partition function may be non-trivial. In our case, the massive fermion theory becomes an invertible field theory when we take the mass parameter to infinity. Okay, now let me compute this bulk partition function on a closed manifold in the case of the massive fermion. This partition function was the important quantity for the anomaly. I wrote that it is given by the determinant of (D-slash + m), divided by the Pauli-Villars regulator determinant. For simplicity, assume that the absolute value of the physical mass parameter is equal to the Pauli-Villars mass parameter; anyway, we take both of them to infinity — I take the physical mass to infinity and the Pauli-Villars mass to infinity — and then I take them to be equal. The sign of the physical mass parameter is very important. The case of negative m is the interesting case. If m is negative, this partition function is — maybe I should write it again — given by det(D-slash - M) divided by det(D-slash + M). And this is equal to an infinite product over eigenvalues of the Dirac operator, where lambda runs over the eigenvalues of the Dirac operator. And we notice that each factor in this infinite product has absolute value one.
So each factor is a pure phase, and I define this phase as exp(-2 pi i s_lambda). Here s_lambda is defined through the argument of this ratio, (-i lambda - M)/(-i lambda + M). So this is a pure phase, and I define s_lambda in this way. Just as a convention, I take this argument to be between minus pi and plus pi. This is just my convention; it doesn't matter how we define the argument, but I need to fix some convention. Okay. And in the large-M limit, this s_lambda goes to one half times the sign of lambda — sign(lambda) means this one. You can check this limit from the expression; you can check it by yourself. I will just use this fact. Also, just by convention, I take the sign at lambda = 0 to be plus one. This is again just a convention. Then the sum of s_lambda over all eigenvalues of the Dirac operator goes to this — like this, with the one half here, which I almost forgot to put. And this is the definition of the Atiyah-Patodi-Singer eta invariant. Let me write it more clearly. The eta invariant is defined to be one half of the sum of the signs of the eigenvalues of the Dirac operator, with some regularization. It is an infinite sum, so to make it well defined we need some regularization; in our case the regularization is naturally provided by the Pauli-Villars regularization. In fact, Pauli-Villars regularization is not the regularization which mathematicians use, but I believe there is no problem in this discussion. So anyway, we define this eta invariant; this is the Atiyah-Patodi-Singer eta invariant. By using it, the bulk partition function is given by exp(-2 pi i eta). Okay. So I was computing this partition function: it is given by the infinite product, the product is given by this infinite sum in the exponent, and that sum is the eta invariant. So this was the definition.
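The limit s_lambda -> (1/2) sign(lambda) that the lecture asks us to check by ourselves can be verified numerically. This is my own check, not part of the lecture; the function below just evaluates the phase of one Pauli-Villars factor:

```python
import numpy as np

# Each factor (-i*lam - M)/(-i*lam + M) is a pure phase exp(-2*pi*i*s);
# as the regulator mass M -> infinity, s -> (1/2)*sign(lam).
def s_of(lam, M):
    ratio = (-1j * lam - M) / (-1j * lam + M)
    assert abs(abs(ratio) - 1.0) < 1e-12   # pure phase, |ratio| = 1
    # np.angle takes the argument in (-pi, pi], matching the convention above.
    return -np.angle(ratio) / (2 * np.pi)

for lam in (3.0, -3.0, 0.5):
    print(lam, s_of(lam, M=10.0), s_of(lam, M=1e6))
# the large-M values approach +0.5 for lam > 0 and -0.5 for lam < 0
```

Summing these values of s over the Dirac spectrum is exactly the regularized sum (1/2) sum of sign(lambda) that defines the eta invariant.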
So finally I get this result for the massive fermion. This is the result for negative mass; for positive mass it is very easy to compute the partition function. It is just given by this ratio, and because I took the physical mass to be the same as the Pauli-Villars mass, it is completely trivial: just equal to one. So let me summarize what I got. When the mass parameter m is negative, I get the boundary chiral fermion, the chiral fermion localized on the boundary, and in the bulk I get the non-trivial partition function given by the eta invariant. So the boundary degrees of freedom are non-trivial and also the bulk partition function is non-trivial, and this eta invariant corresponds to the anomaly of the localized mode. On the other hand, if the mass parameter is positive, there is no localized mode and the partition function is trivial, so the case of positive mass is completely trivial. In this way, there is a correspondence between the bulk and the boundary. So this is the general formula for anomalies of fermions. For Majorana fermions, the result is just one half of the Dirac case: instead of minus 2 pi i eta in the exponent, we get minus pi i eta. Let me just say this. Okay, so anyway, in this way we get this general formula for fermion anomalies. I'm almost running out of time, but sorry, let me very quickly show some examples. For example, if we take d+1 = 2, and consider a bulk massive Majorana fermion with O(N) symmetry, then on the boundary we get one-dimensional fermions which have an anomaly under it. And for example, if we consider S^2 with some topologically non-trivial O(N) bundle — here I show an example of a topologically non-trivial SO(3) bundle — then it turns out that the exponential of the eta invariant is given by minus 1. This shows the existence of a non-trivial anomaly.
And this reproduces the anomaly which I discussed before by the traditional approach. So this is the anomaly of d = 1 Majorana fermions with O(N) symmetry. That is one example. In the same way, if we take d = 3, so d+1 = 4, and consider a fermion with SU(N) symmetry, then the partition function on S^4 with an instanton number turns out to be given by this: the exponential of the eta invariant is minus 1. In this case it is called the parity anomaly. And in the same way, if we take d = 4, so d+1 = 5, with SU(2) symmetry, and consider some five-dimensional manifold, then by computing the eta invariant we get minus 1, and this reproduces the Witten SU(2) anomaly in four dimensions. The Witten SU(2) anomaly is very famous, and it was first obtained by the traditional approach; it was the first example of a global anomaly. But now, by using this general formula, we can reproduce the Witten SU(2) anomaly. Yes. The computation of the eta invariant is actually not so easy: there is no straightforward way to compute it, so usually it is hard. But there is some math technology, such as bordism, generalized cohomology, spectral sequences, and so on. By using these techniques, sometimes we can compute the eta invariant; sometimes it is still difficult. But anyway, in that way, we can determine global anomalies. In my lectures I only talked about fermions with ordinary symmetries, but of course we can consider more general theories. One example is p-form fields: for example, in type IIB string theory, we have the four-form field which satisfies a self-duality equation, and that field has an anomaly. We can also consider more generalized symmetries, such as higher-form symmetries, higher-group symmetries, non-invertible symmetries, and so on. So what I discussed in my lectures is just the basic story.
So from here, you can go in any direction you like. Okay, that's all. Thank you very much. Well, thank you, Kazuya, for giving us these lectures this week. Maybe we should stop recording. Okay, I think I'll pass it on to you, Irene. This is the fourth and last lecture on aspects of the swampland. Okay, thank you very much, and thank you, everyone, for coming until the last moment, the last lecture. I hope you learned some things during these three days, and I hope you will enjoy the last one. Yesterday we introduced the weak gravity and the distance conjectures, which are at the core of the swampland program, because they are more useful for phenomenology than the absence of global symmetries, but are still on very solid ground. In recent years there have been many works testing them, especially from string theory, and we discussed yesterday what evidence we have for the conjectures. I gave you a few examples of string theory compactifications in which we always find towers of states at the asymptotic, infinite-distance limits which are super-extremal, so they satisfy both the weak gravity and the distance conjecture. Studying this in string theory also gave rise, although I didn't have time to discuss it, to many connections with mathematics, because sometimes these conjectures are satisfied in very non-trivial ways, and there's a lot of research there. And at the very end, I showed you the slide which was a summary of the evidence that we have at the moment for these conjectures coming from string theory. The cases of extended supersymmetry are very well understood, and as usual, what is left are the cases with less and less supersymmetry. There are also more works in the context of AdS, but many more things can be done in the future. So this is also a very interesting avenue to pursue.
Okay, so the plan for today: yesterday was about the evidence from string theory; today we are going to discuss how one can get a motivation for the weak gravity conjecture independent of string theory, just invoking black hole physics. We'll see that the weak gravity conjecture arises as the kinematic requirement to avoid stable black holes, and that this can be connected to weak cosmic censorship and other things. I will also briefly discuss some of the main phenomenological implications that these conjectures could have, and especially the open questions. Since you are attending lectures on an active topic of research, I think it's interesting to point out exactly what the main open questions are, because the more people work on these things using different approaches, the better. And at the very end, we will come back to the map of conjectures that we had, and I will very briefly, in very few words, introduce the rest of the conjectures; if you have any questions or are curious about any of them, you can ask in more detail. Okay, so this is the plan, so let's start with the weak gravity conjecture from black holes. When we discussed the absence of global symmetries, I told you that there was a heuristic motivation based on black hole physics: if we have a global symmetry, we are going to get in trouble with remnants. Okay, so this is the metric for a black hole — the Schwarzschild case, a neutral black hole. As you can see, the metric, the near-horizon geometry, only depends on the mass; it's not sensitive to the global charges. And that's why, when we were computing the number of stable states, we had to sum over all possible global charges and masses, and this was infinite.
Okay, and this infinity is what gives the heuristic motivation that something is wrong: it's weird to have that many stable states. Now, how does this get resolved if we have a gauge symmetry, as in the weak gravity conjecture? Well, let me write it more concretely now. This would be the metric for a Reissner-Nordström black hole, a charged black hole, where there are two horizons that depend on the mass and the gauge charge. So now the near-horizon geometry is characterized by both the mass and the gauge charge. This black hole, depending on the values of the mass and the charge, can be a smooth solution with a horizon, or it can be singular — it can have a naked singularity, not hidden behind a horizon. So, in order to avoid naked singularities, which is what weak cosmic censorship demands, one requires this extremality bound: the mass of the black hole has to be bigger than the charge. This is the extremality bound to avoid naked singularities. And if we now count, as we did for the global symmetry, the number of stable states — remnants — below a certain energy scale: before it was infinite, but now, when we sum, the charge has to be smaller than the mass, which means we only have to consider black holes that are sub-extremal, so the charge is smaller than the mass, and we sum up to this energy scale. If you do this computation, using that the entropy is the typical result proportional to the charge, then you get that the number of states goes as one over the gauge coupling. So as long as the gauge coupling is finite, that is, as long as the symmetry is gauged, there is never an infinite number of remnants; you have a finite number of states below a certain energy threshold.
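The scaling just stated can be made concrete with a toy count. This is my own sketch, not the lecture's exact entropy computation: with charge quantized as q = n*g, the sub-extremal condition q <= m together with a mass cutoff Lambda (in Planck units) leaves of order Lambda/g distinct charged states, which diverges only as g -> 0.

```python
# Toy remnant counting (my own sketch): charge is quantized, q = n*g,
# and sub-extremality plus a mass cutoff require q <= m <= cutoff,
# so the number of distinct stable charge sectors scales as cutoff/g.
def n_remnants(g, cutoff):
    """Count charges n >= 1 with q = n*g <= cutoff (sub-extremal, m <= cutoff)."""
    return int(round(cutoff / g))

for g in (1.0, 0.1, 0.01):
    print(g, n_remnants(g, cutoff=100.0))
# shrinking g by 10x multiplies the count by 10; g -> 0 recovers the
# infinite answer of the global-symmetry (ungauged) case
```

This makes the point in the text explicit: the count is finite for any nonzero gauge coupling but becomes parametrically large, and eventually infinite, as the symmetry approaches a global one.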
However, you can see that this of course becomes problematic again if we send the gauge coupling to zero, that is, if we try to restore the global symmetry. And then you can wonder: how close can I get to this situation of restoring the global symmetry? Because if the gauge coupling is very, very small, I don't get infinitely many remnants, but I still get a parametrically large number of them. So at what point am I going to start violating the entropy bounds and so on? This happens when we try to restore a very approximate, very nearly exact global symmetry. So this serves as a motivation to show, again, that when you try to restore a global symmetry something may go wrong. And a way out of this problem is just to say that the states I'm counting are actually not stable — that this computation fails because they are going to decay. So a way out is that these black holes do not count as stable remnants; we have to allow the black holes to decay. And this is what the weak gravity conjecture provides: a mechanism, a way that all the black holes, in particular the extremal ones, can decay. So the weak gravity conjecture we are going to derive now is just the kinematic requirement to allow black holes to decay. And this is possible if there is indeed a particle to which the black holes can decay. So what are the charge and the mass of this particle such that the decay is kinematically allowed? Well, let's imagine that we start with a black hole that is extremal — otherwise we can just let it evaporate until it reaches the extremal value. We could think that this black hole is going to decay; let's assume for a moment that it decays into just two black holes.
One of them can have a charge which is smaller than its mass, so this one is fine in terms of the extremality bound that we had before. But if one has a charge smaller than its mass, the other one, just by charge conservation and energy conservation, has to have a charge that is bigger than its mass. Which would mean that this is not really a black hole; this is a naked singularity, because it doesn't satisfy the extremality bound. So this is not possible if we don't want to generate naked singularities. To have a smooth semi-classical process, the point is that this second state has to be a particle, because particles do not need to satisfy the extremality bound. You can show this as an exercise if you wish; it's very easy, just using that energy is conserved and that the charge is also conserved. So, as I just told you, the charge-to-mass ratio of the extremal black hole that we start with has to be smaller than or equal to that of one of the products. And this is precisely the weak gravity conjecture. In other words, we need to have a particle with a charge-to-mass ratio bigger than the charge-to-mass ratio of the black hole that is extremal initially. Okay. Any questions? Maybe a couple. So first: there are lots of cases in which we do have to take into account states even if they decay, right? In QCD, for example, we would account for the rho mesons in loops, or other things that have very short lifetimes. So it's not clear to me that, even if these black holes can decay, if they are stable for a reasonable amount of time, we still have to sum over them, no? The thing is that — okay, it's like when you are summing, for example, over particles running in a loop: which particles are running there? I mean, it's true that you also have to consider resonances.
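The kinematic argument above can be packaged as a one-line check. This is a hedged sketch of my own, in units where the extremality bound reads Q <= M and the initial black hole is exactly extremal, M = Q; the function names and numbers are illustrative, not from the lecture:

```python
# Sketch of the WGC kinematics: an extremal black hole (M, Q) = (M, M)
# emits a state (m, q); the remainder then has mass at most M - m and
# charge M - q, and stays sub-extremal only if q >= m, i.e. only if the
# emitted state is super-extremal (charge-to-mass ratio >= 1).
def decay_allowed(M, m, q):
    """Can an extremal BH of mass M = Q emit (m, q) and leave a
    sub-extremal remainder? Best case: all energy goes into rest mass."""
    remainder_mass = M - m      # energy conservation (no kinetic energy)
    remainder_charge = M - q    # charge conservation
    return remainder_charge <= remainder_mass   # extremality bound Q' <= M'

print(decay_allowed(100.0, m=1.0, q=2.0))   # super-extremal particle: True
print(decay_allowed(100.0, m=2.0, q=1.0))   # sub-extremal particle: False
```

The inequality in the return statement rearranges to q >= m, which is exactly the statement that the decay product must satisfy the weak gravity bound.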
But to what extent? If the resonance is very unstable, if its lifetime is shorter than the timescale of the process you are considering, it does not really count, right? It does not make sense to consider it anymore, because otherwise we would always sum over infinitely many states: we always have infinitely many multi-particle states that are very unstable, and that would give an infinite answer in any computation that we do. So what we actually do is compare the lifetime of those particles with the timescale of the process we are considering, and only if the resonance is sufficiently long-lived does it enter the computation. So here it is kind of the same: they need to be sufficiently long-lived to enter the computation, for example, of loop corrections to the graviton propagator, to see whether we are consistent or not. But precisely, it is not clear to me that these things would have a short lifetime; in fact, I would expect them to have a long lifetime. Yeah, so it depends. The thing is, for the black holes, it is true that even if you have a weak gravity particle, a very large black hole can still be quite long-lived. But I think when you then do the computation, since it is a black hole, you also have a suppression with the entropy and so on. So I think the problem only appears if you really have many of them that you have to sum over. As long as they carry this suppression with the entropy, they will not really dominate the scattering processes and so on; they have to fight against that. So even if they are long-lived, in the sense that the decay rate is exponentially suppressed, you have this e to the minus S suppression with the entropy.
You already have this exponential suppression before, so it is less dominant than if you do the same for particles. That is why, if you do the computation naively, it sort of makes sense. Still, I agree that it is not very quantitative; it is just a way out. At least if they are unstable, I see a way out, and there is no problem. And also, this lifetime depends on the gauge coupling. Once you check how the lifetime behaves when the gauge coupling goes to zero, that is when the number of remnants increases. Yes. So, if you consider the extremal black holes, for example, these things have Hawking temperature zero. So semiclassically they are stable in that sense. Even if you had an extremal black hole with a lot of charge, even though there is an electron in nature, semiclassically it does not evaporate. So what I am describing here, indeed: even if they have zero temperature, this decay mode is semiclassical. You can compute the instanton, and you can show that indeed you have an exponential suppression. But that can compete with the entropy, because both depend on the gauge coupling. And you can ask about the entropy. So for the global case, the counting of the entropy summed over all the global charges that could exist on the black hole, but a far-away observer was blind to this. But for the gauged charge, technically isn't every single charge part of a different ensemble, because you can distinguish them? Even if you have many, many charged states, they do not all contribute to the same entropy, right? Each one of them is in a different ensemble. You mean when doing this computation, summing over the remnants? Yes, I mean the entropy for a black hole with a certain charge.
So in the global case there was this huge entropy because we could not count global charges on the black holes, so every global charge contributed to the same ensemble. But when the charge is gauged, we can distinguish the charges of these things, so there is not really the same type of entropy problem. Okay, it depends on how you are doing the computation and what type of ensemble you are considering and so on. Here I am taking a very loose version; I am just counting the number of states. And intuitively, even if you wonder how exactly to count the entropy and so on, if you have different states with different charges, each of them will be a different state running in the loop. Yeah, no, I agree with that. But just on the entropy... sorry, go on. No, I think I see your point: to make this explicit one would have to try the computation in a worked example, choose the ensemble, and see what the entropy estimate is and everything. Yeah. So I guess I agree that there are still many states in the loops, of course, because you have more and more charged states. So I agree with that argument, but in terms of entropy arguments, I just feel like, even though there are many, many states, all of these are distinguishable, because we can distinguish gauge charges. In the global case they are not distinguishable, in the sense that there is no way from far away to measure what the charge is. You can wonder how this enters the computation, but they are not distinguishable in a physical way; there is no experiment that can see it. Well, can't I just measure the charge of an object, the charge of a black hole? No, but this is gauged. I mean the gauge case. Sorry.
Yes, in the gauged sense. Okay, sorry. Yeah. So I guess I am saying that there still is this problem of too many states in loops, but it is not the same kind of entropy problem, it seems to me, as in the global case, because you can distinguish them. Okay. So, I am going to tell you something new that has not been published yet, but I have already advertised it. I was worried about the same questions, because it is always a bit heuristic. I think it is possible to make this much more quantitative and precise if you think of small black holes. In the sense that whenever you have a theory in which you have such black holes, you can also construct small solutions, where the area goes to zero. And then if you apply the argument there, you can do it quantitatively in Planck units, requiring that the Bekenstein bound is not violated; I mean, using the Bekenstein bound to constrain how many states you can really have for these small black holes. But it is very important that they are small, because the intuition is that as they become smaller and smaller, the area going to zero, you can put many of them in a box of a given size, and this is going to violate the Bekenstein bound, unless somehow the area of the black holes increases with the charge, so that in the end there is a maximum number of black holes that you can fit. So I think the point is that all these arguments, when we use the usual Reissner-Nordström black holes, do not really give a problem. But when you use these small black holes, they really give a quantitative problem, because they are becoming point-like objects. And you can show that the way to avoid this is that the cutoff of the theory goes to zero with the gauge coupling.
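As a gloss on that last statement (my paraphrase, schematic and with order-one factors omitted), this is what is usually called the magnetic form of the conjecture:

```latex
% Magnetic Weak Gravity Conjecture (schematic, order-one factors omitted):
% the quantum-gravity cutoff must vanish with the gauge coupling,
\Lambda \;\lesssim\; g \, M_{\rm pl}
```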
So that effectively the area of the black hole increases in such a way that you cannot have too many states in a box of a given size, which would violate the bound. By small, what do you mean precisely? I mean that there are solutions called small black holes, because the area of the black hole actually goes to zero in the extremal limit. Okay, sure. The same way the temperature goes to zero? Yeah, but the area is still finite in the usual extremal case, right? For Reissner-Nordström you have a finite area, a finite horizon, because it is proportional to the charge. But you can also have solutions that are not of that type, where the area actually goes to zero in the extremal limit, and these really give rise to problems. I think you have to use these solutions to properly do these computations. Okay. Yeah, thanks. Sorry, I took a lot of time. It's fine. Yes, it is difficult for me to argue more quantitatively for this without using the small black holes; that is why it was too much for the lectures. Okay, so let me move on. So this is the Weak Gravity Conjecture. This was actually one of the original motivations in the original paper, together with the absence of global symmetries; so we had the two things, and they felt that this was what people were obtaining in string theory as well, so it was a kind of combination of all these things. Now, I want to say two more things about weak gravity, and then I will just move on to the open questions and the implications. So one thing I want to say is that I was just talking about the vanilla case of particles and one gauge field. But of course it has to be generalized when you have many gauge fields, or extended objects. So let me just explain how it generalizes. So, when you have more than one gauge field.
If you want to allow all the black holes to decay, it is not enough to have one particle for each gauge field independently; you need to satisfy the convex hull condition. So imagine that we have two gauge fields; then each particle has a charge-to-mass ratio under one of them and a charge-to-mass ratio under the other. And imagine the extremal region: if we do not have scalar fields, the extremal region is just going to be a ball in this plane, such that the black holes live inside, because they need to have a mass that is larger than the charge by the extremality bound for black holes. And now you need to require that, for the particles charged under the gauge fields, the convex hull of their charge-to-mass vectors contains this extremal region. Because otherwise, if you just have one particle here and another one here, each saturating the weak gravity bound for one of the two gauge fields, the hull is going to cut into the extremal region, and a black hole along the diagonal is not going to be able to decay. So you need the convex hull of the particles to contain the extremal region. This is the general condition. Let me also put here an exercise that you can do, which is the following: starting with one gauge field and one particle that satisfies the Weak Gravity Conjecture, one can show by dimensional reduction on a circle that the theory in one dimension fewer does not satisfy the Weak Gravity Conjecture, unless we started with more than one particle at the beginning. So if we start with only one particle, you can compute: when we reduce on a circle, we are going to have two gauge fields, the original one but also the KK photon. And we are going to have the particle and all its Kaluza-Klein copies. Each of them is going to be charged under the original gauge field, but all these KK copies are also going to be charged under the KK photon.
And if you compute this, just using dimensional reduction, one can show that the convex hull condition is not satisfied in general. It can be satisfied if the original particle is sufficiently superextremal, but there is always some point at which, even if you start by satisfying the conjecture in the parent theory, you do not satisfy it in all dimensions. So this exercise was actually used to motivate the fact that it is not enough to have just one particle to satisfy the Weak Gravity Conjecture. If instead you start with a tower of weak gravity states in the parent theory and you dimensionally reduce, then you automatically satisfy the Weak Gravity Conjecture after dimensional reduction. So this is a way to motivate that we need towers of states satisfying the Weak Gravity Conjecture, which is indeed what we obtain from all the string theory examples. Okay, so the black hole motivation only motivates one particle, but consistency under dimensional reduction and the string theory examples give you a motivation for having more than one. Okay. Now, the other generalization I wanted to mention is that, of course, this should hold for any p-form gauge field, not necessarily a one-form coupled to particles. And in that case, what you need is some electrically charged (p-1)-brane with a charge-to-tension ratio that is bigger than some order-one factor that one can compute explicitly given the theory. Okay. Now, if we have massless scalars, I want to remark that this order-one factor, this extremality factor gamma, depends on the scalars. Which means also that in this convex hull condition the extremal region is not a ball: it can be an ellipsoid, or it can have straight edges, different shapes depending on the scalars. This is something that eventually you have to compute.
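The convex hull condition can be made concrete with a small numerical sketch. The function below is hypothetical (not from any library) and uses the fact that, for convex sets, the unit ball is contained in the hull if and only if the hull's support function is at least 1 in every direction:

```python
import math

def hull_contains_unit_ball(vectors, samples=3600):
    """Unit ball is inside conv(vectors) iff, for every unit direction u,
    the support function max_i z_i . u is at least 1 (convex-set fact)."""
    for k in range(samples):
        th = 2 * math.pi * k / samples
        ux, uy = math.cos(th), math.sin(th)
        if max(zx * ux + zy * uy for (zx, zy) in vectors) < 1.0 - 1e-9:
            return False
    return True

# Two particles, each exactly extremal under one U(1) (plus antiparticles):
# the hull is a diamond whose edges cut into the unit disk, so a black hole
# charged along the diagonal cannot decay.
marginal = [(1, 0), (-1, 0), (0, 1), (0, -1)]
assert not hull_contains_unit_ball(marginal)

# Scaling the charge-to-mass vectors by sqrt(2) pushes the diamond's edges
# out to touch the disk, so the convex hull condition is satisfied.
s = math.sqrt(2)
ok = [(s, 0), (-s, 0), (0, s), (0, -s)]
assert hull_contains_unit_ball(ok)
```

This is only a two-field toy with the extremal region taken to be the unit ball; as noted in the lecture, with scalars the region deforms and the check would need the computed extremality surface.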
You compute what the extremal value is for the black holes or black branes in the theory, and that tells you the order-one factor in the conjecture that you have to satisfy. Just to give you an example: if you have some dilatonic theory, that is, a p-form gauge field whose gauge kinetic function is parameterized by a scalar in exponential form, which is the typical behavior at weak coupling points, then this gamma is in general given by the dilatonic coupling alpha squared over two, plus p times (d minus 2 minus p) over (d minus 2), where p is the degree of the p-form and d is the spacetime dimension. This second piece is the gravitational contribution that we usually have, and the first piece comes from the dilatonic scalar. Okay, so something I wanted to point out: without scalars, if you do not have massless scalars, we sometimes say that the Weak Gravity Conjecture is a repulsive-force condition, because saying that the charge has to be bigger than the mass is equivalent to saying that the gauge force on this particle is bigger than the gravitational force. And actually, if you require this, you get the same order-one factor that you get from the extremality bound. So it is not just qualitative; quantitatively it is exactly the same, and that is why we have this name. It is called the Weak Gravity Conjecture because then gravity is the weakest force: requiring that gravity is the weakest force is equivalent to saying that you have a charge-to-mass ratio bigger than the extremal one. This correlation between extremality and the repulsive-force condition does not work in the presence of scalar fields. It is no longer true that we get the same numerical factor if we compute the extremality bound in the presence of scalar fields and if we compute this order-one factor by requiring that the sum of the repulsive forces is bigger than gravity.
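For reference, the dilatonic extremality factor being described, written in the notation standard in the literature (my reconstruction of the spoken formula), is:

```latex
% Extremality factor for a p-form gauge field with dilatonic coupling
% e^{-\alpha\phi} |F_{p+1}|^2 in d spacetime dimensions:
\gamma \;=\; \frac{\alpha^{2}}{2} \;+\; \frac{p\,(d-2-p)}{d-2}
% The second term is the usual gravitational contribution;
% the first comes from the dilatonic scalar.
```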
However, this difference between the two seems to always disappear for the towers of states that we get at weak coupling. It does not disappear in general for any value of the coupling, but when we take the limit of small gauge coupling, going to these infinite-distance limits, the two conditions coincide: the scalar contribution in one becomes equal to the scalar contribution in the other. And that is interesting, because the one in the repulsive-force condition depends on how the mass behaves (the scalar contribution goes like the derivative of the mass over the mass, that is, how the mass behaves in terms of the scalar field), while the extremality bound depends on how the gauge coupling behaves, since this alpha is like the derivative of the gauge coupling over the gauge coupling, the exponential rate. So at the weak coupling points we obtain that both coincide, which means that indeed the exponential rate for the mass is equal to the exponential rate of the gauge coupling. And this is also why the exponential rate of the Distance Conjecture (how the mass goes to zero exponentially, with what exponential rate) can be determined in terms of the extremality bound as well: it is going to coincide with the dilatonic contribution to the extremality. So in the asymptotic limits, at the weak coupling limit, the Distance Conjecture exponential rate can be fixed by the extremality bound of the black holes. Okay, this is nice because the Weak Gravity Conjecture has a very nice feature, which is that there are no free order-one factors; everything is fixed in terms of the extremality bounds. The Distance Conjecture, on the other hand, had this factor which was free, namely the exponential rate.
But whenever we have a gauge coupling that goes to zero asymptotically and we have a tower that satisfies both conjectures at the same time, the exponential rate of the tower is also fixed by the extremality bound of the black holes. So there is no free factor in those cases. Okay, any questions? There are a couple of questions in the chat, on things you mentioned quickly, that maybe you can address. Yes, sorry. There was a question that I did not see, which was: if there is a finite gauge coupling, so a finite number of remnants, do we get a physical problem? The physical problem appears when you send the gauge coupling to zero, and then you get a parametrically large number of remnants; so, at the weak coupling points. So the motivation from remnants for the Weak Gravity Conjecture applies at these weak coupling points, where you need this gap in the states to avoid it. And that is why you combine it with the argument of black hole decay, which is more general and valid for any value of the coupling. And how do extremal black holes decay via the Schwinger effect? Okay, that is the point: even if the temperature is zero, the Schwinger effect is non-zero. You can compute the decay rate, and it is non-zero if you have a particle with a charge-to-mass ratio bigger than the extremal value. Otherwise, if the charge-to-mass ratio is not bigger, the Schwinger decay rate is simply zero if you compute it. Okay, any other question? Okay. Now, you could wonder to what extent we always have these charged states asymptotically. That is something we already discussed: so far, every string theory example has them, but we do not know whether it follows from general principles. Okay, very good. So this is all I am going to say about the Weak Gravity Conjecture and its generalizations.
Okay, so what I am going to do next is just to tell you very briefly about the phenomenological implications and then discuss the open questions for these conjectures. Okay, so, phenomenological implications. The Weak Gravity Conjecture and the Distance Conjecture concern theories that have either very small gauge couplings or very large field ranges. So in general, if you have some beyond-the-Standard-Model proposal that requires one of the two things, you can try to check whether these conjectures can tell you something about it or not. Take the Distance Conjecture. It says that the cutoff, the mass of the tower, decreases exponentially in terms of the field distance, the geodesic field distance that we traverse in field space, as you approach an infinite-distance limit. This means that there is an upper bound on the field range that you can describe within an effective theory with a finite cutoff: the larger you want the cutoff of the theory to be, the larger the energy of the processes you want to describe, the smaller the field range you can accommodate. For example, for inflation, at the very least one has to require that the Hubble scale is smaller than the cutoff of the theory, the quantum gravity cutoff, and then you can write an upper bound on the field range in terms of the Hubble scale. And in large-field inflation the Hubble scale is very large: it is only a few orders of magnitude below the Planck scale. So that is why maybe you have heard people say that you cannot have trans-Planckian field ranges: this logarithm is going to be of order a few, depending exactly on what the Hubble scale and the exponential rate are. That is why it is also very important to determine the exponential rate. Now, for the implications of the Weak Gravity Conjecture, there are different applications that you can consider.
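The field-range bound just described can be sketched with a toy estimate. Both the exponential rate `lam` and the value of Hubble below are illustrative placeholders, not numbers from the lecture:

```python
import math

# Toy version of the Distance Conjecture bound on field ranges:
# the tower mass m ~ M_pl * exp(-lam * dphi) must stay above the Hubble
# scale H for the tower to sit outside the inflationary EFT, giving
#   dphi <= (1/lam) * log(M_pl / H)   (in Planck units).

def max_field_range(H_over_Mpl, lam):
    return math.log(1.0 / H_over_Mpl) / lam

# Large-field inflation: H a few orders of magnitude below M_pl.
dphi = max_field_range(H_over_Mpl=1e-5, lam=1.0)
assert 11 < dphi < 12   # log(1e5) ~ 11.5: at most O(10) M_pl, less for larger lam
```

The bound tightens linearly as the exponential rate grows, which is why pinning down that rate matters so much for the phenomenology.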
I am just going to mention one, which is that we can apply the Weak Gravity Conjecture to axions; we can constrain axion physics, because the axion is like a zero-form gauge field, so you can also apply the Weak Gravity Conjecture there. And what it implies is that you must have some electrically charged state, which is an instanton, with a charge-to-mass ratio bigger than one. Let me write it as I explain it: the role of the mass of the instanton is played by its Euclidean action, and the decay constant of the axion plays the role of the inverse gauge coupling. So the decay constant, which sets the periodicity of the axion, is the inverse of the gauge coupling; you can just see it, for example, in the Lagrangian, where the decay constant multiplying the kinetic term is the coupling of this zero-form gauge field. So the Weak Gravity Conjecture is telling you that the instantons satisfy this inequality. So if you want to have perturbative control over the instanton expansion, with the instanton action bigger than one, it means that the decay constant of the axion cannot be trans-Planckian. And that is maybe why you have heard that trans-Planckian axions are problematic. Of course, this is just for one scalar. In general you have multiple fields and so on, and you have to check whether you satisfy the convex hull condition for all the axions; depending exactly on these order-one factors, it will be violated or not. So, as a conclusion: trans-Planckian axions are also used for large-field inflation, and one of the conclusions of this analysis is that large-field inflation is constrained. It is not correct to say that it is ruled out, because it depends on the details of the model; you have to check each model.
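Schematically, the axion statement reads (order-one factors omitted; a paraphrase of the standard form, not a verbatim copy of the slide):

```latex
% Axion Weak Gravity Conjecture: the instanton action S plays the role
% of the mass and 1/f (inverse decay constant) that of the gauge coupling:
f \, S_{\rm inst} \;\lesssim\; M_{\rm pl}
% Control of the instanton expansion requires S_{\rm inst} \gtrsim 1,
% which then forces f \lesssim M_{\rm pl}: no trans-Planckian decay constant.
```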
But at least it is fair to say that it is constrained, because when you have large field ranges or trans-Planckian axions, it is very easy to get into contradiction with these conjectures. Any questions? Okay, so what are the main open questions? This is where a lot of research is devoted now, or maybe it will change in the following years, maybe changed by some of you. So let me point out the main open questions. For the Weak Gravity Conjecture, I want to point out the most important one, which is: who is satisfying the Weak Gravity Conjecture? Do I have just one particle, do I have infinitely many, do I have these towers? Even if I have the towers, what are the charges, what is the minimum charge? By black hole arguments: let me remark that black hole arguments only motivate a mild version of the Weak Gravity Conjecture, in the sense that only one particle is required, with some mass and some charge. And then you can wonder what the charge of this particle is, how light this particle is going to be, whether it is going to be within the effective theory, or whether it is just going to be very heavy, so that it does not have any effect on the EFT. Now, string theory examples, modular invariance of the CFT, consistency under dimensional reduction and so on motivate that you need to have a tower, a sublattice of weak gravity states. However, even if you have a tower or a sublattice, the charge of the first state is not necessarily one; you can still have a tower. And then the main question is: what is the index of this tower or sublattice? So again, whether the first state satisfying the Weak Gravity Conjecture is light enough to be within the effective theory, or not. This is the main open question, because the phenomenological implications depend on it.
Okay, I mean, if you want to have phenomenological implications, you need the weak gravity state to be within the effective theory; otherwise you will not be sensitive to it. And there is a lot of research here trying to put lower bounds on these charges. So far, in all the examples in string theory, these charges, this index of the sublattice, are always of order one, in the sense that maybe you can make the charge two or three or four, but you cannot make it parametrically large, which means that the states are going to impact the EFT. But there is no proof of this. Okay. And then for the Distance Conjecture. So one issue is that, so far, the evidence for the exponential behavior in terms of the field range comes from string theory or holography, but there is no black hole motivation or bottom-up rationale for why this should be true. And that is important if we want to argue that this is a general quantum gravity constraint. So at some point we need to give these general explanations, even if they are not so quantitative; at least give us an intuition for what would go wrong in general if it is not satisfied, and we can then use it to argue. So this is one of the main questions. Also, the exponential rate of the tower, as I said, is very important, because it is needed to make the phenomenological implications precise. Here, recently we got some lower bounds from string theory that are very useful, and let me also point out that what matters for these lower bounds are, for example, the discrete symmetries of the theory. So I think it is very likely that in the next years we will get a better understanding of how to derive these exponential rates purely from EFT data, and maybe how it relates to the global symmetries.
And finally, I was mentioning a lot the evidence in cases with extended supersymmetry. But of course, to derive phenomenological implications we need to have a scalar potential, right; we need to break supersymmetry and so on. And you could wonder what the fate of these conjectures is when you have a scalar potential and you are not moving just in a flat moduli space. Here there are also several works about it. The idea, just to give you the intuition, is that these conjectures, these quantum gravity constraints, should be satisfied at any energy scale; they should not care about the energy scale at which you are defining the EFT. Right, they should be consistent under RG flow, which means that if I have a scalar potential, I can always integrate out the heavy modes, the heavy directions, and focus, for example, on the valleys of the potential. And this is like my new moduli space. So the Distance Conjecture, in particular, should be satisfied along the valleys of the potential, and that is why we can use it for inflation, because inflation moves along these valleys of the potential; this is what consistency under RG flow requires. But this is highly non-trivial, because not every potential that you engineer randomly is going to satisfy that the valleys of the potential move in such a way that the Distance Conjecture is satisfied. Because if you have a very turning trajectory, like some spiral, it seems that you could get a very large field range without having towers of states becoming light. So the Distance Conjecture itself is already putting constraints on the scalar potentials that are consistent with quantum gravity. Okay. So this is something that has to be explored much, much more, but it is one of the main open avenues. And at the moment, as usual, this has been studied first in the context of string theory; people have checked that this works in the context of Calabi-Yau compactifications, but we need more general evidence. Okay.
Any questions? Okay, so with this I finish this part. Irina, is there any preliminary work connecting the field range to black hole arguments and so on? You had a question mark there, but is there anything? You don't have to say if it's not published yet. No, it is actually the one I told you about with the small black holes. Okay, okay. So, let me... this is the map of conjectures that I showed you at the beginning of the lectures. So, what have we done? We have discussed the absence of global symmetries in quantum gravity, and we have seen how it relates to these other conjectures: the completeness hypothesis, which actually seems to follow just from the absence of generalized global symmetries, maybe non-invertible symmetries as well; and the generalization to cobordism classes, which includes topological global charges. So this is the theme of topological operators. Then we have the Weak Gravity Conjecture and the Distance Conjecture, which are about towers of states that become light when the gauge couplings become small or the distances become very large, such that they are also superextremal in terms of the black hole bound. So these constrain what we should expect. And what is left, which I did not discuss in the lectures, are these other, newer conjectures, which one can actually derive as consequences of having these towers of states. Okay, so what I am going to do, if I have five minutes left, is to say a few words about what they are, even if we are not deriving them in detail, just so you know what they are about. But just before that, I want to remark on one thing that I have here on the right-hand side, which is the way we work in the Swampland program. It depends on who you talk to, but it is not just using some techniques to answer many different questions; we have some concrete goals that we need to address, and you can approach them in many different ways.
And that is good, because it is of course very hard to prove these conjectures, since we do not have a complete framework of quantum gravity in which we can just check them and be done; we are trying to understand quantum gravity in a sense. So we need to approach it in many different ways. And I want to remark that the first step is to identify the conjectures, I mean, universal patterns that we think might be general constraints of quantum gravity, but this is only the first step. Okay, so: identify quantum gravity patterns and formulate them in terms of conjectures. Now the next step, of course, is to test them quantitatively. In order to test them quantitatively we need a framework of quantum gravity, and typically we do this in string theory, because it allows us to test them quantitatively. But then this is also not enough, because we want to argue that this is a general quantum gravity principle, so we need to provide some explanation of what would go wrong in the EFT if we do not satisfy a conjecture, of why it should hold in general. Here we cannot prove that it holds in general, because we do not have a general quantum gravity completion, but the goal is to try to prove it in string theory, or in string compactifications, and then to understand what the underlying quantum gravity principle is. So this is the way to proceed, and that is why it is a mixture of working in string compactifications but then also using black hole physics, holography, or positivity bounds to give more general explanations. Okay, so as I said, this is going to be the last slide, in which I just want to mention briefly what the other conjectures are. You can ask me during the discussion if you want to hear more, but let me just state them. So we have this statement that non-supersymmetric vacua are at best metastable, so that if you do not have supersymmetry, there is no way to protect the vacuum from decay.
This is motivated from the weak gravity conjecture when you apply it to codimension one. Because then, when the charge-to-tension ratio is bigger than one, this is equivalent to the instability condition: you can nucleate a bubble that will expand, because the electric force pushing it outward is bigger than the energy cost of expanding the bubble. So that was the original motivation: if the weak gravity conjecture is satisfied by codimension-one branes — because you have fluxes and so on — then the vacuum will be unstable. But the other motivation now, and the evidence, comes from these bubbles of nothing that we discussed on the second day. And they seem to be always topologically allowed, at least if you don't want to have these topological global charges. So that's why we have these two lines here. Okay. Then we have the de Sitter conjectures, which are sometimes the most controversial, just because they are the ones that have the biggest impact. And at the moment, I want to remark that the evidence we have for this is at the asymptotic limits: when you go to these infinite-distance limits in the moduli space, it seems that there is always a runaway behavior of the scalar potential — that the slope is bigger than the potential itself. But this is something that has only been established at infinite distance. And you can also show that indeed this runaway behavior of the potential is induced by the tower of states that is becoming light — that's why it's consistent with, and correlated to, the distance conjecture. And finally, the AdS distance conjecture. This is a generalization of the distance conjecture that implies that there is a tower of states...
...that becomes light whenever you take a flat-space limit: say you are in AdS or in de Sitter, and you take a limit in which the cosmological constant goes to zero. You are going to get some tower of states whose mass scales as the cosmological constant to some power α. And this is interesting because understanding what this tower is can shed light on whether we can have scale separation in AdS. If this α is one half — this is the strong version — then there is no scale separation. Because if you think of this tower as a Kaluza–Klein tower, and we have AdS cross something, then when the mass scale of the tower is of order the inverse AdS length, there is no scale separation between the non-compact and the compact dimensions. So this is an open research question. And again, this is motivated from the distance conjecture applied to the space of metric configurations: instead of thinking of distances in the scalar field space, we think of distances in the space of all fields, like the metric. And if you do that, then you get this relation, which is a kind of generalization of the distance conjecture. Okay, so all these three are important because they are telling us information about the asymptotic structure of the potential, so they can obviously have many phenomenological implications. And they also address important questions by themselves, like the stability of the vacuum, whether we can have de Sitter in quantum gravity, whether we can have scale separation in AdS. Of course, the level of evidence for these three is much, much less than for the previous ones. They are motivated by the previous ones, but whenever you generalize something, you have to work harder again to provide the evidence for it. So these are open questions for research. This field is evolving very quickly, so this will also change.
Any questions? Maybe — okay, let me just finish, and then you can ask any question. So, this is it. I want to finish with a positive message, because I'm a positive person. I think that if there is a swampland, the existence of a swampland is good news. It means that we can have predictive power from quantum gravity. And the goal is to try to determine precisely what goes in the swampland — what is the space of consistent EFTs — because then we can use it to provide new constraints for phenomenology. Also because it's interesting by itself: what are the constraints that quantum gravity imposes? In the same way that we already understand this very well for gauge theories — we have anomalies and all these things — what is the equivalent for gravity? We have to continue improving our understanding of gravity. And it's very promising that quantum gravity has something special: it can provide UV/IR mixing beyond what we have in quantum field theory, and that is exemplified by some of these conjectures. So something that I find very exciting is whether we can understand this UV/IR mixing from quantum gravity better and use it to address the naturalness issues that we observe in our universe — like why the cosmological constant is small, the electroweak hierarchy problem, or why neutrinos are so light, and so on. Because all these fine-tuning issues and hierarchy problems have to be revisited if not the full space of parameters is allowed. So things that seem natural from a quantum field theory perspective maybe are not natural from a quantum gravity perspective, and vice versa: some natural things from quantum gravity can be very surprising for the EFT, and this is an opportunity to learn new things. So let me finish here. Thank you for your attention. I hope you enjoyed it, and let's see if we have more questions. Okay, thank you.
Maybe while we digest the last few slides and see if we have any questions — let me thank you for giving us these lectures, in one form or another. I can never find the clapping reaction... but yes, here it is. Okay, so are there any questions or comments? Oh, sorry, let me first stop the recording. — So the question is: are there other quantitative tests, in other frameworks, that support any of the conjectures? Okay, let me come back here. From black holes — right now I gave you heuristic motivations, but, for example, one important line of work was to show that there were some examples violating weak cosmic censorship, and that when you introduce a particle satisfying the weak gravity conjecture, they are fine again with weak cosmic censorship. So there is an interconnection between the two things, and it's quantitative, because you really need the charge-to-mass ratio a bit bigger than the extremal value to satisfy weak cosmic censorship. And it's not the usual story of a naked singularity — it's a more complicated setup, where you have an electric field that grows unboundedly, and that would produce the violation. Then from holography we have, as I said, evidence — especially for the weak gravity conjecture — and proofs on the worldsheet, or equivalently in AdS3/CFT2, using modular invariance of the CFT: by modular invariance you will always generate states that have charge-to-mass ratio bigger than one. And then there are other works from holography: for example, the absence of global symmetries also has a proof in holography, using the locality of the CFT, which requires that a global symmetry on the boundary is a gauge symmetry in the bulk.
So for global symmetries we have many of these quantitative tools, but for the weak gravity conjecture it's mainly these: holography and weak cosmic censorship. And for the distance conjecture, we know from the CFT side that whenever we have a free point, we also get a tower of higher-spin operators. Yeah, these are the examples I can think of right now; all the rest comes from string theory. And there are people now working on positivity bounds, but this is work in progress. Positivity bounds can also give you some proofs of the weak gravity conjecture, but of a very mild version — that there exist some small black holes that satisfy the weak gravity conjecture. They are not enough at the moment to give you the stronger versions, where you have a light particle. More questions? — Can I ask perhaps quite a dumb question? The fact that some global symmetries are bad, and these black hole arguments, boils down to the fact that we can't tell the charge from asymptotic space...

Are you in your office? — I am. — I'm very much in favour of offices with a couch. — Well, yeah; maybe I can share this. Sometimes I'll come in in the morning, and one of the janitorial staff clearly enjoys using my couch to take his breaks. And every once in a while I'll find evidence that someone's been hanging out on my couch for the evening. I begrudge no one the chance to sit down and have a rest. — Are you ready to go, if I start this? — Yeah, please. Okay, let's start recording, please. So we have the fourth and last lecture, by Alex Maloney: holography and averaging. Take it away. — Thanks, everyone, for coming again today. As always, let me remind you that I encourage everyone to ask lots of questions and to interrupt me at any point. Feel free to just unmute yourselves and go ahead and ask a question.
No need to raise your hand or anything like that. Feel free also to write questions in the chat, although of course I can't necessarily monitor that in real time — so you should also just feel entitled to unmute yourself and ask a question right away at any point. So, today, what I'd like to do is spend our last lecture investigating an example — or I should say, a conjectured example — of an averaged holographic duality between a theory of gravity in asymptotically AdS3 spacetimes and an ensemble of two-dimensional CFTs. As we articulated last time, in order to formulate an ensemble average over a family of CFTs, we really need to do three things. The first is to understand exactly the space of conformal field theories. The next is to understand how to define a probability distribution on this space of field theories. And the third is to learn how to compute averages over this space of conformal field theories. Those are very hard things to do — we don't know how to do them in general. So the example I'm going to talk about today is one where we add tons of symmetry to the problem in order to make it solvable. In particular, I'll be considering ensembles of 2d CFTs with central charge equal to some integer that I'll call capital D, and with a big symmetry algebra. In particular, we'll assume that there is a chiral algebra generated by currents that form a U(1)^D × U(1)^D current algebra. That's a very fancy way of saying something that you probably already know, which is that the theories we're going to be thinking about, with this giant algebra, are really just free boson conformal field theories — free boson conformal field theories in two dimensions. And they will be dual to a somewhat exotic sort of 3d theory of gravity.
But it's going to be a theory of gravity where we can compute everything that we will want exactly. So in particular — just to spoil the punchline — one can compute the average partition function. Here this is the average over the space of CFTs, and this is the partition function on some general Riemann surface. You can compute this, and what we'll discover is that the result takes exactly the form you would expect in a three-dimensional theory of gravity. In particular, it takes the form of a gravitational path integral, where we have a sum over classical saddle points — these will be locally AdS3 geometries — weighted by some classical action, plus a one-loop correction and, in principle, a bunch of higher-loop corrections. Now, the theories of gravity that turn out to appear here are so simple that they're always one-loop exact, so in fact all of those higher-loop corrections vanish. So there's a sense in which this is a theory where we can compute all of the perturbative and non-perturbative corrections exactly, and we can perform a completely explicit matching on both sides of this duality. This is an exactly solvable version of AdS/CFT, in the sense that on the gravity side we have a complete set of saddle points and the complete perturbative expansion around each saddle point; we can sum it all up, and we get some CFT answer. Now, in order to get this exactly solvable theory, we had to sacrifice a lot — I assumed some huge symmetry group. So this is a very, very simple theory of gravity. In fact, when I say that it's an exotic theory of gravity, I'm not kidding: it's a theory that in a sense looks more like a gauge theory than like, you know, Einstein gravity, or some genuine theory of fluctuating metrics in higher dimensions.
Nonetheless, because it does have the form of a sum over geometries, I do think it is still a rich enough example that we can draw interesting lessons for our understanding of quantum gravity from it. So that's where we're going today. At this point I'm ready to dive into some of the details, but maybe before I do so I'll just pause and see if there are any questions or anything that requires clarification. So let's just go ahead and dive in and get started. Okay, so let's start by thinking about the example where D is equal to one — that is to say, a free boson with central charge one. So this is a free boson; let's call it X. And because we want a discrete spectrum — that is to say, we want each element of this ensemble to be a compact, unitary conformal field theory — I have to take this to be a compact boson. So it'll be a boson that lives on some circle of radius R, with the standard action — the worldsheet action, or just the 2d action, of a single free boson of radius R. This has a U(1) × U(1) current algebra. What are the U(1) symmetries? They're just translations around the target-space circle. Here the space of CFTs is one-dimensional, right, because it's labeled by this parameter R, and the radius R is a coordinate on this moduli space of CFTs, which I'll call M_1. So this is just a very simple space of CFTs. But now we can start trying to average over the space of CFTs. The first thing we have to ask is: what sort of quantities do we want to average? For example, you might want to average the partition function of the theory. So let's just think about the torus partition function of a free boson — that's the thing that computes the spectrum of the theory. And for a free boson...
...this is an easy thing to compute; it's a straightforward exercise. It involves two pieces: a one-loop determinant, coming from the fluctuations of this free boson, and a sum over classical saddle points. Right — free theories are one-loop exact, so if you know the classical saddle points and you know the one-loop corrections, you can compute the partition function of the theory exactly. Here, this theta function is the sum over classical saddle points, which is a sum over momentum and winding modes. So if n is the momentum and w is the winding mode, then the theta function that describes the sum over classical saddle points — the sum over classical solutions of a free boson mapping from a torus into a circle — takes a form that you have probably seen before. I only write it down to emphasize that what I'm doing here is really nothing terribly sophisticated. Here τ is the modular parameter of my torus, if I'm considering the torus partition function, and I'm writing it as x + iy. y, if you like, is the inverse temperature, if I wanted to think of this as a thermal partition function, and x would be some sort of angular potential. So that's the piece of the partition function that comes from the sum over classical solutions. And the one-loop determinant — well, that's just given by the usual formula that would appear even for a non-compact free boson. This is the function that counts the descendant states in the theory. So, if you like, the theta function that came from the sum over classical saddle points is the thing that counts primary states, and the one-loop determinant counts descendant states. And by descendant states I mean descendants under the U(1) current algebra of the theory — or U(1) × U(1), which is why we have those absolute values. So what would we like to do?
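For reference, the two pieces just described — the theta-function sum over momentum and winding, and the one-loop eta-function determinant — combine, in one common convention (the α′ = 1 normalization here is an assumption of this sketch), into:

```latex
% Compact boson of radius R on a torus with modular parameter \tau = x + iy
Z(\tau,R) \;=\; \frac{\Theta(\tau,R)}{|\eta(\tau)|^{2}},
\qquad
\eta(\tau) \;=\; q^{1/24}\prod_{k\ge 1}\bigl(1-q^{k}\bigr),
\qquad q = e^{2\pi i \tau},

\Theta(\tau,R) \;=\; \sum_{n,w\,\in\,\mathbb{Z}} q^{\,p_L^{2}/2}\,\bar q^{\,p_R^{2}/2},
\qquad
p_{L,R} \;=\; \frac{1}{\sqrt{2}}\Bigl(\frac{n}{R} \pm w R\Bigr).
```

Note that Θ is invariant under R → 1/R together with the relabeling (n, w) ↔ (w, n); that is the T-duality invariance discussed next.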
We would like to do things like compute the average value of this torus partition function, by integrating it over the space of conformal field theories. Now, when I perform this integral — let me write it a little differently — we're going to need to integrate with some probability measure, some probability density on the space of free boson field theories. So the first thing we have to ask is: what is that probability density? It turns out that there is a very natural answer, when we're talking about spaces of two-dimensional CFTs, for what that probability density should be. In fact, even given just what I've told you so far, there's already a very simple guess for what that probability measure should be, and it's found by remembering T-duality. Remember that we're studying a free boson on a circle of radius R, and a free boson on a circle of radius R is the same as a free boson on a circle of radius 1/R — that's T-duality. You know, I wrote down this formula for the partition function partly to remind you of that fact: you'll notice that the partition function is invariant under the T-duality symmetry that takes R to 1/R. That means two things. First, when we think about the moduli space of CFTs, we really don't want to count the same CFT twice. So you should really think of this moduli space as labeled by the coordinate R — the radius of the free boson — where R runs from one to infinity, not from zero to infinity, because otherwise we'd be double-counting our field theories. Second, any probability measure that you write down should be invariant under the symmetries of your theory. So in particular, we want a probability distribution that is invariant under R → 1/R.
Otherwise we're not defining a good probability density on the space of theories. So, in fact, given that, you could already guess what the correct probability density is going to be. Because there is a unique — no, that's a lie — there is a natural measure, that is to say a natural metric, on this space of theories that is invariant under R → 1/R. If you stared at it for a second, you would immediately write down that metric, and you can easily check that dR²/R² — equivalently, the measure dR/R — is invariant under R → 1/R. So our guess for the probability distribution is just going to be the natural measure inherited from this metric: I'm going to take the probability density to be proportional to 1/R. So, when computing these averages over the space of free boson CFTs, this T-duality symmetry teaches us what the probability measure is, and what the correct range of integration over the space of CFTs is. — I have a possibly naive question. The moduli space of c = 1 CFTs also includes the orbifold branch. So why don't you consider it? — Good. Remember, my starting point was that I wanted to keep the U(1) × U(1) current algebra, and the U(1) currents are ∂X and ∂̄X. So if I took this Z2 orbifold that takes X to −X, I'd be projecting out those operators. You could expand your perspective to include those as well, but the starting formulation of my problem is that I wanted to preserve this symmetry algebra. It would be a separate discussion if we wanted to consider a larger space of CFTs without the symmetries — once you don't have the symmetries, you've unleashed a Pandora's box of possibilities. For c = 1 it's not so hard; you can actually go ahead and do the calculation explicitly. It's not bad. But for higher central charge it gets pretty thorny, and I don't know exactly what happens. Good question, though. — Yeah, thank you. — Good. Now, please.
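As a quick numerical sanity check of the T-duality statement Z(R) = Z(1/R), here is a sketch evaluating a truncated version of the compact-boson torus partition function; the α′ = 1 convention p_{L,R} = (n/R ± wR)/√2 and the truncation cutoffs are assumptions of this illustration, not the lecturer's normalization:

```python
import cmath
import math

def z_torus(tau, R, nmax=20, kmax=200):
    """Truncated torus partition function of a compact boson of radius R:
    a theta sum over momentum n and winding w, divided by |eta(tau)|^2."""
    q = cmath.exp(2j * math.pi * tau)
    qbar = q.conjugate()
    theta = 0.0
    for n in range(-nmax, nmax + 1):
        for w in range(-nmax, nmax + 1):
            pL = (n / R + w * R) / math.sqrt(2)
            pR = (n / R - w * R) / math.sqrt(2)
            # each term pairs with its conjugate under (n, w) -> (-n, -w)
            theta += (q ** (pL * pL / 2) * qbar ** (pR * pR / 2)).real
    eta = q ** (1 / 24)  # Dedekind eta via its (truncated) product formula
    for k in range(1, kmax):
        eta *= 1 - q ** k
    return theta / abs(eta) ** 2

tau = complex(0.3, 1.1)
zR = z_torus(tau, 1.7)
zDual = z_torus(tau, 1 / 1.7)
# T-duality: the two radii give the same partition function
assert abs(zR - zDual) < 1e-8 * abs(zR)
```

The symmetric truncation in (n, w) preserves the exchange (n, w) ↔ (w, n) under R → 1/R, so the agreement here is exact up to floating-point rounding.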
— Alex, could I ask something else? You mentioned before that there is a very natural answer for spaces of 2d CFTs, and I was wondering: would this be for any given CFT, or...? — Let me clarify that statement, because it was an offhand remark and I didn't explain it fully. Whenever you have CFTs that live on a moduli space — that is to say, CFTs that are continuously connected to one another — being connected to one another means that they're related by the addition of a marginal operator to the action of the theory. And you can take the two-point function of those marginal operators and think of it as a metric on the space of theories. This is known as the Zamolodchikov metric on the space of conformal field theories. And it turns out that the metric I have written down here coincides with the Zamolodchikov metric. It's easy to see what the operator is, in the present case, that changes the radius of the free boson: it's just ∂X ∂̄X. Because if you added ∂X ∂̄X to the action of the theory, you could absorb it into a rescaling of X, which is essentially a rescaling of the radius. So you can compute the two-point function of that operator, and indeed you'll see that it exactly coincides with the metric I wrote down below. And the same will be true for higher values of the central charge as well. So: I motivated this choice of metric by saying that it's invariant under T-duality, but it's also the canonical metric that you would define on a space of CFTs. Now, if you're considering a fancier problem, where you have conformal field theories that are not related by marginal deformations, then we would have to have a separate discussion about the proper choice of measure on that space of theories.
I have a guess, but I don't have as good a justification for it. — Okay, that makes sense. Thank you very much. — So, there's a big problem, however, in what I've discussed, which is that when I said my probability density was 1/R, I lied to you. Because one thing I know about probability densities is that they should be normalizable: the integral of a probability density over all possibilities should be one. However, if you try to compute that integral, you get infinity. If I had gotten a finite number, that would be no problem — I would just divide P(R) by that number — but this probability distribution I've written down is not normalizable. And indeed, if I were to take that theta function I wrote above, plug it into this formula here, and try to perform the integral, I would get infinity. At this point, you might just throw up your hands and give up. But it turns out that this divergence is just an artifact of the fact that I was considering only a single free boson, and that once you consider more than one free boson, all of these problems go away. In particular, if you take the similar set of examples with central charge D, then you can write down analogous formulas — they're a little more complicated, but it's basically the same thing I wrote down — and suddenly you have a normalizable probability distribution. So let me just tell you in a few words how that works. We now have D free bosons; let's call them X^p, where p runs from 1 up to D. And the action — so let's now write down the action of D free bosons. The most general action you can write involves a symmetric D-by-D matrix and an antisymmetric D-by-D matrix, which I have called G and B here.
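The action just described can be written out explicitly; one common convention (the overall normalization here is an assumption, matching the α′ = 1 choice sketched earlier) is:

```latex
% D compact bosons, X^p \sim X^p + 2\pi, with constant couplings:
% G_{pq} symmetric (target-space metric), B_{pq} antisymmetric (B-field)
S \;=\; \frac{1}{4\pi} \int d^{2}z \;\bigl( G_{pq} + B_{pq} \bigr)\, \partial X^{p}\, \bar\partial X^{q}.
```

With the periodicities fixed to 2π, all the radius-like information sits in G and B, which is why they serve as coordinates on the space of theories.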
I've called them G and B for the same reason that I called the central charge capital D: if we were doing worldsheet string theory, then G and B would be the target-space metric and B-field, and capital D would be the number of spacetime dimensions. These coupling constants G and B should be regarded as coordinates on the space of free boson field theories with central charge D, which teaches us that this space of CFTs has dimension equal to D². A little like the space of N-by-N matrices — except now, instead of integrating over the space of N-by-N matrices, we're integrating over a space of conformal field theories, which again has dimension N², or rather D², where D is the central charge. And again, I'm taking these to be compact bosons. Basically, the metric components G are the analogs of the radii of these free bosons. Here, for example, I'm taking all of my free bosons to be periodic — to live on a target-space circle of radius one, or circumference 2π — and the diagonal components G_pp, for example, play the role of the radii of the target-space circles. So I'm normalizing everything so that the X fields live on circles of radius one, and I'm treating the metric and the B-field as the independent coupling constants that label my space of CFTs. Again, it turns out that these theories have a T-duality symmetry. We now have a T-duality group, and it's much bigger: the T-duality group for a single free boson was Z2; the T-duality group for D free bosons is O(D,D) valued in the integers, O(D,D;Z). So this is an infinite T-duality group. And again, it turns out that there is a natural metric on this space of theories that is invariant under this T-duality symmetry. What is that metric? It's actually very similar to the metric that we wrote down here, so I'll just write it down for you: instead of factors of 1/R, we have factors of the inverse metric.
I remind you that G and B are coordinates on my space of theories, so when we write down the metric on the space of theories, it'll be in terms of these G and B coordinates; and I'm using the usual convention where G with upper indices is the inverse metric. So: the metric on this space of theories. This is the natural metric that is invariant under the O(D,D;Z) T-duality symmetry. Again, it also coincides with the Zamolodchikov metric. And with respect to this metric, the moduli space now has finite volume. In fact, although I've written down an explicit set of coordinates and a metric on this space of theories, there's a much more geometric way of thinking about this moduli space: as a coset of O(D,D). Here this moduli space — again, a space where I mod out by the T-duality symmetries — can be represented as this coset, and this is known as Narain's moduli space. If you're familiar with this, that's great; if not, don't worry about it. It just happens to be the case that the space of theories under consideration has a group-theoretic structure of this sort. Roughly speaking, the way to think about it is that these free bosons are coordinates on a D-dimensional torus, and any two tori can be related to one another by a rotation. Because what is a torus? A torus is R^D modded out by a lattice, and you can relate any two lattices by an orthogonal rotation — that's what the orthogonal group is doing here; it's the set of rotations relating these lattices to one another. The only slight wrinkle in that story is that it's O(D,D) instead of O(2D), because the left- and right-movers are more or less treated independently in this way of thinking about it. In any case, if you don't like thinking in terms of abstract groups, just remember that I've written down coordinates and a metric on the space of theories here.
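To spell out the two structures just mentioned — the invariant metric in (G, B) coordinates and the coset presentation — in one common form (the overall normalization of the metric is a convention, assumed here):

```latex
% T-duality-invariant (Zamolodchikov) metric on the D^2-dimensional space of couplings:
ds^{2} \;=\; \operatorname{Tr}\!\left[\, G^{-1}\,(dG + dB)\; G^{-1}\,(dG - dB) \,\right].

% The same space presented as Narain's moduli space:
\mathcal{M}_{D} \;\cong\;
O(D,D;\mathbb{Z}) \,\backslash\, O(D,D;\mathbb{R}) \,/\, \bigl( O(D)\times O(D) \bigr).
```

For D = 1 the first line reduces to dR²/R² (with G = R² and B = 0), matching the single-boson case discussed earlier.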
And now, because this moduli space has finite volume, you can think of the measure induced by this metric on the space of theories as defining a normalizable probability distribution on the space of theories. Here I'm just labeling by m a point in this space of CFTs. And now we can go ahead and start computing averages over the space of theories using this probability distribution. So when I talk about an average over the space of CFTs, that is exactly what I mean. Notice the crucial role that T-duality played here: the T-duality group is infinite, so if I hadn't quotiented by the T-duality group, I would have gotten a divergent answer for the volume of the space of theories, and I wouldn't be able to talk about a nice normalizable probability distribution on the space of theories. — One question, please. As you did before: now that you have a metric which is not just a number, in order to define P(m) you would use the density — the square root of the determinant of the metric? — Exactly. I wrote down a metric — it's a metric in D² dimensions — and by dm P(m) I literally just mean the volume form, the square root of the determinant of the metric. We could write it that way. — Okay, fantastic, thanks. — Okay. So we now have a nice normalizable probability distribution, and we can go ahead and start computing things like averages of partition functions. In particular, we can compute the average of the torus partition function. Here m is a point in moduli space and τ is a point in the torus moduli space. This is going to give the average spectrum of the theory — the average of the trace of q^{L₀} q̄^{L̄₀}. If we were doing a matrix integral, this is the thing that would give you the semicircle law.
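Concretely, the average just described is the finite-dimensional integral (schematic; m collectively denotes the couplings (G, B), and dμ is the volume form of the metric above, normalized so the total mass is one):

```latex
\bigl\langle Z(\tau) \bigr\rangle
\;=\;
\frac{1}{\operatorname{Vol}(\mathcal{M}_{D})}
\int_{\mathcal{M}_{D}} d\mu(m)\; Z(m;\tau),
\qquad
d\mu(m) \;=\; \sqrt{\det g(m)}\; d^{D^{2}}\! m .
```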
Okay, but now we're doing it on the space of CFTs. And you can see that I'm just doing some finite-dimensional integral — how hard can it be? Upstairs, on the last slide, I wrote down the partition function of a free boson; here it is, the partition function of a single free boson: there was a one-loop determinant and a theta function. For D free bosons, it's just going to be a one-loop determinant and a theta function, and I'm not going to bother writing down the formula for you, because it's so similar to the formula I wrote down above — it's just that n and w, the momentum and winding, are now going to be vectors of integers rather than just a pair of integers. And one could, at least in principle, just go ahead and compute the integral. I say "in principle" because it's actually a hard integral to do. Fortunately, however, mathematicians are very smart — in particular, Siegel and Weil were very smart in exactly the right way that we need, and they computed this integral for us. And they did it long before anyone ever thought it might be interesting from a physics point of view. I won't bother to derive the formula; I'll just write down the answer. The first thing to notice is that there's a one-loop determinant: here we have D free bosons, so instead of one factor of one over the square root of that one-loop determinant I wrote down above, we get D factors of it. That one-loop determinant doesn't depend on the moduli — it's the piece that was there even for the non-compact boson, the piece that counts descendants — so it just goes along for the ride; it doesn't depend on where you are in moduli space. It's the other piece, the theta function, that depends on where you are, and so that's the thing you need to average.
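The shape of the Siegel–Weil answer being described, as it appears in the recent literature on Narain averaging (the precise normalization here is quoted from memory and should be treated as a sketch rather than the lecturer's formula), is:

```latex
\bigl\langle Z_{D}(\tau) \bigr\rangle
\;=\;
\frac{E_{D/2}(\tau)}{\bigl( \sqrt{y}\,\lvert \eta(\tau) \rvert^{2} \bigr)^{D}},
\qquad
E_{s}(\tau) \;=\; \sum_{\gamma \,\in\, \Gamma_{\infty}\backslash SL(2,\mathbb{Z})}
\bigl( \operatorname{Im}\gamma\tau \bigr)^{s},
\qquad \tau = x + i y .
```

The denominator is the D-fold one-loop determinant (made modular invariant by the factor of √y), and the numerator is the averaged theta function — the real analytic Eisenstein series discussed next.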
And when you do that average, you get a result that I'll write down for you here. The result turns out to be what is known as an Eisenstein series. Not the sort of Eisenstein series that you may have encountered before: if you studied Eisenstein series in number theory, those were most likely holomorphic Eisenstein series. This is what's known as a real analytic Eisenstein series. So what sort of form does it take? Here we've integrated over the moduli M, so this is a function only of tau, the torus modulus. And you know that it had better be a modular invariant function of tau, that is to say, invariant under SL(2,Z) transformations. In particular, if gamma is some element of the modular group, that is to say a two-by-two matrix with unit determinant and integer entries, then it acts on the torus modular parameter in the familiar way, by fractional linear transformations. And we know that our answer for this partition function had better be modular invariant. The way this happens in this formula is that the average partition function is a sum over this modular group; it's an average over this modular group. Typically, such Eisenstein series are denoted E with a subscript that is the weight of the Eisenstein series, and in this case the weight is D over two. That's the number of powers of Im tau appearing here. Good. In the sum, did you mean modded out by Z2 or just Z? So in particular, what is that Z? Well, the summand is invariant under tau goes to tau plus a constant, and tau to tau plus n is the subgroup of SL(2,Z) generated by elements that look like this. So note that the summand appearing in this Eisenstein series is invariant under tau to tau plus constant. If you want to get a finite answer, you shouldn't sum over all of SL(2,Z); that's why I've taken that coset of SL(2,Z) by the Z. Thank you.
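To make the real analytic Eisenstein series concrete, here is a sketch (my own illustration, with an illustrative weight s = 2, i.e. D = 4, and a finite truncation): the coset sum can be written as a sum over coprime pairs (c, d), identified up to overall sign, of (Im tau)^s / |c tau + d|^(2s), and its modular invariance can be checked numerically.

```python
from math import gcd

def eisenstein(tau, s=2.0, N=60):
    """Truncated real analytic Eisenstein series of weight s:
    E_s(tau) ~ sum over coprime (c, d), modulo (c, d) -> (-c, -d),
    of (Im tau)^s / |c*tau + d|^(2s).
    Each summand is invariant under tau -> tau + n together with
    (c, d) -> (c, d - n*c), which is the Z being quotiented out.
    Truncation to |c|, |d| <= N converges for s > 1."""
    y = tau.imag
    total = 0.0
    for c in range(0, N + 1):
        for d in range(-N, N + 1):
            if (c, d) == (0, 0) or gcd(c, d) != 1:
                continue
            if c == 0 and d < 0:        # identify (0, d) with (0, -d)
                continue
            total += y**s / abs(c * tau + d)**(2 * s)
    return total
```

The invariance under tau to minus one over tau is exact here even at finite N, because the symmetric truncation box maps to itself under (c, d) to (d, -c); the shift tau to tau plus one only holds up to a small truncation error.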
Which actually has a physical interpretation. But in any case, you needed to do that to get a finite answer. Good. Now, there's one other thing that I should mention, which is that although we've only written the torus answer here, it turns out that we can also compute the average of the partition function on an arbitrary Riemann surface. Here I'm schematically representing that by a genus-two surface. This is what's known as a Siegel Eisenstein series, or more properly, a higher-degree real analytic Siegel Eisenstein series. I'm happy to write down some details if people are interested, but for the sake of keeping things relatively simple, I won't do so. So we're now really almost done, surprisingly enough. Because our goal was to find a space of theories where we can average over CFTs, compute the average of observables, and compare this to our gravitational expectations. In particular, this formula that I've written here turns out to have a very simple interpretation as coming from a gravitational path integral. Let me give a name to that formula: I'll call it star. What I mean is that this formula star is exactly what you would expect from a gravitational path integral, where you sum over classical saddle points, each weighted by a classical action with a one-loop correction. And that sum over the modular group SL(2,Z) I'm going to interpret as a sum over geometries. In fact, it's a very famous sum that has appeared many times in the literature when people try to understand the sum over geometries in theories of gravity in AdS3. So what is the idea? The idea is that this modular group SL(2,Z), or I should say rather the coset SL(2,Z) mod Z, labels geometries, and in particular labels a class of geometries known as handlebodies that fill in the boundary torus. So now I want to think about my boundary theory as living on a torus.
And I want to think about a gravitational path integral, a bulk path integral, that can be used to compute this partition function by summing over geometries whose boundary is a torus. Now, what three-manifold could you write down whose boundary is a torus? Well, a solid donut: for example, I'm filling in the interior of this torus, and I'll indicate that by drawing a cycle in the torus, the circle which is going to be contractible in the interior. But if you think about it, there are actually many different ways of filling in a boundary torus. How can you see that? Well, remember that if you have a torus, you have two cycles that are treated completely democratically; there's no difference between them. When I drew this picture here, I had to choose one of those cycles to make contractible, while the other cycle is not contractible. So what that means is that I have an entire family of possible topologies or geometries that I could use to fill in the boundary, labeled by a choice of which cycle is contractible. You can show that it is exactly this coset that labels which cycle is contractible in the bulk. And in fact, we already have names for the geometries that appear in the sum. For example, we started out these lectures by talking about AdS as a geometry with a time direction and a spatial direction. So if you were to get a torus just by identifying a Euclidean time direction, then you would get a solid donut where the phi circle is contractible. That means the geometry where the phi circle is contractible is what you would call thermal AdS; it's the geometry you would use to study a thermal gas of particles propagating in AdS. On the other hand, we discovered that there was another Euclidean geometry, the one that comes from a black hole, and that's one where the time circle is contractible, not the phi circle. Right? Because what does it mean for the time circle to be contractible?
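The statement that the coset labels which cycle fills in can be made concrete. A handlebody is labeled by a coprime pair (c, d), the homology class of the boundary cycle that becomes contractible, and each such pair lifts to an SL(2,Z) matrix that is unique only up to the Z of shifts tau to tau plus n. A small sketch (my own illustration) constructing such a lift with the extended Euclidean algorithm:

```python
def ext_gcd(p, q):
    """Extended Euclid: returns (g, x, y) with p*x + q*y == g."""
    if q == 0:
        return p, 1, 0
    g, x, y = ext_gcd(q, p % q)
    return g, y, x - (p // q) * y

def coset_rep(c, d):
    """Lift a coprime pair (c, d), labeling the contractible boundary
    cycle of a handlebody, to an SL(2,Z) matrix [[a, b], [c, d]] with
    determinant 1. Any two lifts differ by an upper-triangular shift
    matrix, i.e. by the Z we quotient out of SL(2,Z)."""
    g, x, y = ext_gcd(c, d)
    if g == -1:                  # normalize Bezout identity to c*x + d*y = +1
        g, x, y = 1, -x, -y
    assert g == 1, "need gcd(c, d) = 1"
    a, b = y, -x                 # then a*d - b*c = d*y + c*x = 1
    return [[a, b], [c, d]]
```

In this labeling, (0, 1) would correspond to thermal AdS (spatial cycle contractible) and (1, 0) to the Euclidean black hole (time cycle contractible), with the other coprime pairs giving the so-called SL(2,Z) family of black holes.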
It means that there's some location in the geometry where the coefficient of dt squared shrinks to zero size. And we have a name for a place where the coefficient of dt squared becomes zero: that's an event horizon, right? So the Euclidean continuation of the AdS-Schwarzschild black hole is one of these handlebodies, but where the Euclidean time circle is contractible instead of the spatial circle. And so you can see that the sum over geometries we're talking about here is a sum over bulk manifolds, locally AdS3 manifolds it turns out, whose boundary is a torus, and that includes things like black holes. There's one final question that we should address, which is: what about the loop corrections? In order to think about these loop corrections, we need to remember one more thing about AdS/CFT, which is that every time you have a global symmetry on the boundary, that's going to correspond to some kind of gauge symmetry in the bulk. And we started out with a family of CFTs with a huge global symmetry, the U(1)^D times U(1)^D global symmetry. So let me say that a different way: because we have this huge global symmetry on the boundary, we expect a huge gauge symmetry in the bulk, and in particular we expect a U(1)^D times U(1)^D gauge symmetry in the bulk. And what sort of gauge theory could you write down in three dimensions that would realize this gauge symmetry? Well, the simplest option, and it turns out the correct one, is a Chern-Simons theory. And I want to identify this Chern-Simons theory with the perturbative degrees of freedom of our bulk gravity theory. You can now see why I said our bulk gravity theory should be one-loop exact: if you have a Chern-Simons theory for some gauge group, the structure constants of that gauge group are the things that appear as the coupling constants, the three-point couplings, of the theory.
So if you've got a U(1) gauge theory, then it's a free theory, and everything in the bulk is going to be one-loop exact as well. And indeed, you can actually take this a bit further: you can go ahead and compute the one-loop determinant of the Chern-Simons fields in one of these handlebody geometries. This is now going to be a 3D calculation. Now, Chern-Simons theory has no local degrees of freedom, so you might think things like computing one-loop determinants are trivial. They're not entirely trivial. The reason is that when you do this calculation carefully, you need to take into account the fact that the degrees of freedom are removed by gauge transformations. So really the proper one-loop determinant calculation involves a gauge-fixing procedure, with Faddeev-Popov ghosts and all of that stuff. So even though you might think the one-loop determinants of Chern-Simons theory are completely trivial, it turns out they're not: when you do the calculation, you compute a one-loop determinant for a bunch of vector bosons and also a one-loop determinant for a bunch of ghosts, and they cancel out only up to a small correction term. You have one-loop determinants of three-dimensional differential operators, and they cancel up to the one-loop determinant of a two-dimensional differential operator. And what is that differential operator? It's exactly the thing that sits right there. For those of you who know about the relationship between Chern-Simons theories and WZW models, this is not at all surprising, because that prefactor is a conformal block for a current algebra. To some extent, what Chern-Simons theories do for a living is compute conformal blocks on the boundary, so all I'm describing is a version of that calculation. But nevertheless, it means that we have an interpretation for every single term that is appearing in this sum.
The sum over SL(2,Z) is a sum over saddle points described by manifolds of different topology. The one-loop determinant comes from the excitations of gauge fields in the bulk that correspond to the global symmetries of your theory. And that's it. Can I ask where these factors of Im tau come from? Well, those factors of Im tau are more or less guaranteed to appear in the calculation by modular invariance. It turns out that those one-loop determinants on their own are not modular invariant, but transform in a particular way under modular transformations, and you need those factors of Im tau there to soak everything up and make the whole result modular invariant. So if you go through the one-loop determinant calculations carefully, you'll find that those factors of Im tau are indeed there. So what we have here is a picture where a sum over geometries, and in particular a sum over handlebodies, exactly reproduces an average over a space of CFTs. And although when I was writing down the formulas I was only stating things for the torus case, exactly the same thing happens at higher genus. In particular, the higher-genus sum over geometries includes a corresponding sum over handlebodies, that is to say, geometries where you imagine filling in the interior of some higher-genus surface, and it includes a sum over all possible ways of doing that. It will involve a sum over a higher-genus, or higher-degree, modular group, which is some sort of symplectic group. Similarly, you can compute the Chern-Simons partition function that counts the perturbative stuff in the bulk, and you're again going to get some sort of character, or I should say some sort of conformal block, for a U(1)^D current algebra on the boundary. At higher genus, it turns out that you can't write down those one-loop determinants explicitly. On the torus I wrote down that one-loop determinant explicitly in terms of the Dedekind eta function.
It turns out there's no exact analytic expression at higher genus. But nevertheless, you can show that the Chern-Simons one-loop partition function does exactly what you want. Even though you can't write down the functions in simple analytic form, you can still show that this exact relationship between the average partition function and the sum over handlebodies goes through. So here, schematically, you might think about this average partition function on a genus-two surface as being a sum over handlebodies of this sort. And again, it's one-loop exact for the sort of dumb reason that U(1) Chern-Simons theory has vanishing structure constants. I'm just about out of time here, but I'll end by saying that this also gives us a very explicit realization of the Euclidean wormhole picture that we started these lectures with. I have a space of CFTs, and there's no rule that says I can only average one partition function; I could average the square of a partition function. And this can be computed exactly in this theory: I wrote down the torus partition function for you, I wrote down the probability distribution on moduli space, and it's an integral. It's a hard integral, but it's an integral that you can do, and you can write down the answer. And what do you get? You get terms that, in the bulk gravity interpretation, I think of as disconnected contributions, plus terms that come from wormholes. So here I'll try to sketch a wormhole; roughly speaking, they are bulk geometries that connect disconnected boundaries. And the fact that they are there is no surprise. They were baked in from the beginning, because I have a probability distribution with a non-trivial variance, so you know such things have to be there.
The nice thing about this theory is that I now have a geometric interpretation of all of these wormholes. And in fact, although I won't write it down for you explicitly, it turns out that it's possible to compute this variance exactly. For those of you who may have studied JT gravity or SYK or something like that, you know that typically the spectral statistics, which is what we're computing when we talk about this two-point function, the two-point function of the density of states, is packaged into what's known as a spectral form factor, and one can compute the spectral form factor exactly in this model, completely in terms of this sum over geometries. I won't bother writing down the details, but I'm happy to discuss features of that if people are interested. I am out of time, and I think this might be a good place to end. There's obviously a lot more that can be said about this class of theories, so I'm happy to speculate further, but maybe I'll do so in the discussion period. Thank you. Oh, wait, before we end: I believe I am the last lecturer on the last day of the school, so I want to take this opportunity to thank the organizers and all of the administrative support for helping this all run smoothly. I don't know how many of the organizers are here, but you should all raise your hands, and let's all unmute ourselves and give them a round of applause for organizing such a great school that I think we all enjoyed. Thank you again on behalf of everyone. Yeah, and Pavel mentions that at the end of the discussion section, which is to say in 15 minutes, there will be a group photo, so everyone who wants to can turn on their camera and we'll take a screenshot. That's just how we do things nowadays. Okay, thank you very much, Alex. Yeah, as you said, this was the last lecture on the last day. Some of you are probably tired, but let's see if there are any questions or comments.
It's morning for me. Please. Okay. Yeah, thank you very much for the lecture, Alex; it was very nice. I wanted to ask a bit more, if you don't mind sharing, about the intuition you mentioned about CFTs that aren't connected through marginal operator deformations. It seems really obscure to me so far; I guess it's a problem that a lot of people are thinking about. Did my video disappear? Oh no, here I am. Okay, good. Let me mention a little bit of speculation. So, my guess, and I can tell you where this comes from, concerns what happens when you sum over CFTs. In general, you don't expect CFTs to be connected by a moduli space; that's a sort of weird accident that happens when we're considering this particular family of CFTs. It's kind of a miracle for a CFT to have a marginal operator. In the present case, we had symmetries that guaranteed the existence of marginal operators, but generically it would be a miracle to have a marginal operator, which means that a typical CFT you would expect to be isolated. And generically, I would expect that a sum over CFTs would be a discrete sum rather than an integral. You then would have to guess what sort of measure would appear, and in principle you might imagine that different gravitational theories would be associated with different measures. Of course, something like type II string theory would be associated with a delta function measure. If you were to ask me what the generic measure is, I would say it's probably one over the automorphism group of the CFT. There's a couple of reasons to think that. One is that if you look at the Siegel-Weil formula, the original form of the Siegel-Weil formula, the thing you would find on the Wikipedia page, if it has a Wikipedia page, there's not going to be this fancy integral over a moduli space.
It's going to be a discrete sum over a discrete space of lattices. That's because mathematicians generally don't think about lattices in Lorentzian signature, which is the thing that's relevant for this Narain moduli space. They think about Euclidean-signature lattices, of which there are a finite number: for even self-dual lattices in Euclidean signature, there are something like 24 of them in 24 dimensions. So there's a finite set of lattices, you get a finite sum, and the measure that appears is this one-over-automorphism-group measure. This is a kind of natural thing from our point of view; I think there are some physics reasons to think it might be the right thing, but in any case I think it's the most natural guess. So if you asked me what quantum general relativity, to the extent that such a thing makes sense, is dual to, I would say it is an average over all CFTs with this one-over-automorphism-group measure. That's a conjecture, however. Was that a reasonable answer? Thank you so much, yeah, very interesting. Thank you. I think you had your hand up. Yeah, my question is about the family of CFTs for which D is small. I think even for D equals two you would have a well-defined probability distribution, but then the central charge would be small and your background would be highly curved, so I was wondering whether the stringy corrections would be very important in that case. For such a case, where all the members of the ensemble of CFTs have a very strongly curved gravitational dual, how should one think about it? Okay, great. There are a couple of things to say about the finite central charge case. The first thing is that I lied to you, as I usually do, because I called this a classical action and I called this a one-loop piece.
Usually, in order to have a clean separation between a classical piece and a one-loop piece, you need a coupling constant, an h-bar, that is small. But what is the perturbative description? I said the perturbative description involved the U(1)^D Chern-Simons theory. And what is D? D is the central charge. So remember that the central charge is like one over h-bar, and here the central charge also counts the number of perturbative degrees of freedom. So the number of perturbative degrees of freedom is of order one over h-bar. When you talk about loop corrections, loop corrections are suppressed relative to tree-level contributions if you fix the number of degrees of freedom as you take h-bar to zero. But if you have a number of perturbative degrees of freedom that grows as one over h-bar, then loop corrections are not suppressed relative to tree-level contributions. What that means is that although I wrote this as a classical action and a one-loop piece, they're all mixed up together; they're all of the same order. So the way I compute them is using the technology of loop corrections and classical actions, but they're all of the same order. Your question then is: why do I trust any of this at all? And the answer is that everything is one-loop exact, because I'm studying a free theory in the bulk, the U(1) Chern-Simons theory. Under normal circumstances, if loop corrections are the same size as tree-level contributions, all of perturbation theory has broken down; everything is one-loop exact here, though, so I don't need to worry about that. In a real 3D theory of gravity, I would expect that c is genuinely a large number, and I don't have a number of light particles that scales with the central charge in this way, so I would expect a good separation between classical and loop effects.
Okay, so it's really wrong to think of this, even at large central charge, as a kind of normal semiclassical theory. So the question that I thought you were going to ask is actually a different question, which concerns one of the things that we noticed in the paper: what happens if you consider a very complicated observable. Were you considering the...