Okay, so as announced, this is the first of a series of talks on topics in quantum field theory and string theory. I'm very happy and honored to be here, appointed to the Louis Michel chair. As a graduate student I of course knew the name, and one of the themes Michel researched was the phase structure of various theories, so this will be part of my talks. I'm also very happy to share this chair with my colleague and neighbor from a northern country, from Lebanon, Ali Chamseddine. Now, my general theme in these first two lectures is going to be rather esoteric things. So I would like to start by pointing out a letter which Oskar Klein, after he had been appointed a professor in Stockholm, received from his very good friend Wolfgang Pauli. In case you can't read it: it congratulates him, and then Pauli writes, "I am not of the opinion that finding new laws of nature and indicating new directions of research is one of your great strengths, although you have always developed a certain ambition in that direction. I find much more beautiful those of your papers which deal with applications of known theories, such as, for example, the paper with Nishina about the new scattering formula," etc., etc. So isn't it good to have such nice friends, who tell you exactly what they think of you: that you should not leave your limitations, that you should stay within them and do applications. Now, I should mention that I found this in a short biography which discusses Pauli and Klein, and one should note that Pauli himself worked on Kaluza-Klein theories, so probably he just thought that Klein had gotten to them by accident and that it was really he who should work on them. Maybe that is worth mentioning.
Now, for the last few years nature has been sending us a similar letter. What appears here are various bounds, I'm afraid, on the possible masses of all kinds of particles which the fantasy of theorists said should be there. And we find that if they are there, they are heavier than what one expected. Right now there is some local, or maybe global, excitement about one particle at CERN, and in a few months we will know if it's there or not. In any case, it doesn't immediately fit into any of the fantasies anyhow. So this is the letter we got from nature: supersymmetry, strings, extra dimensions. Now, how are scientists reacting? We can say that we are just not listening, even though nature is trying to tell us something. Or we can, as many have for many years, simply ignore it: we have our own convictions about how things should be, we hope that eventually our convictions will be borne out, and we continue to work on those problems whether we get this confirmation or not. That is of course a risky attitude, but take Thibault Damour, who was just very deservedly honored with a prize: he studied for decades something which Einstein in the beginning refused to accept at all, and nevertheless other people ignored both what Einstein thought about black holes and what he thought about gravitational radiation, and then the two of them together did give something. So these lectures should perhaps be taken in that spirit, or maybe not; I will continue. The first set of works I'm going to describe concerns issues related to geometry and space-time singularities, and trying to understand them. The work was done mainly with José Barbón. I will discuss a long series of works on these subjects and some things that we learned, and I will also talk a lot about things that we don't really understand, because I didn't want to present here only closed, finished works.
Those you can just find in the literature. Instead, a theme which will keep appearing is works which still have open ends in them, which need to be filled. All of them, no matter when they were done, are work in progress. So the basic question that I will try to answer here is: what does geometry capture when one studies quantum gravity? The diagnostic tools, on which I will spend some time, are very-long-time correlations, which can be very small; these go under names like quantum noise and complexity, and I will define them in detail as we go on. I'm going to discuss these issues mainly in a holographic setup. The key point of a holographic setup is that when you are in the conditions of AdS/CFT, the theory is really defined by the CFT. In the CFT you can obtain finite, non-perturbative results. How to attribute a geometrical description to them, when it is possible and to what extent it is possible, is already an approximation; the definition actually comes from the quantum field theory. That is more or less the attitude. Now, it would be nice if one could calculate in this duality on both sides independently, but it's not obvious: we definitely don't have the tools to do non-perturbative calculations on the string-theory side, while in the quantum field theory we do have much more knowledge and many more developed techniques. So the point of view is really that the data, the information, to the extent that it exists, is secure on the CFT side, and it is less secure when you talk about the geometry and the bulk physics. If I were to say there is a relationship here, I would say the responsible adult in this relationship is the quantum field theory, because there the things are, in many cases, really clear cut.
While when one goes to the other side, you will see there are approximations, which is exactly the question: to what extent does geometry, I mean semi-classical geometry, solutions of equations of motion, capture what is going on? This will be related to issues like black hole information, and also to the structure of space-like singularities. The bottom line of a large part of the talk, and this will be shown by concrete calculations, is that geometry is very good at giving averages, even averages of very non-perturbative, small quantities. However, there are things which it misses, and we will show what it misses and what it captures. This will be related both to the so-called black hole information paradox and to the claim that ER = EPR, Einstein-Rosen equals Einstein-Podolsky-Rosen, on which I will say a few words; I won't go into great detail, but I will show where the agreement lies and where it is discussed. Okay, so I'm not going to review AdS/CFT; I'm going to assume that you have already heard something about it. This condensed slide captures the basic claim. The claim is that N=4 super Yang-Mills theory, which has an SU(N) gauge group and lives in four dimensions, describes the spectrum of a string moving on a background which includes, at the very least, AdS5 x S5, and there are specific ways to relate the parameters; I've written them here, and in any case, when I'm going to use them, I will bring them up again. Okay, now one of the most striking things, and here I'm discussing the phase structure of the theory, is the very surprising spectrum that N=4 supersymmetric Yang-Mills is supposed to give us.
So the result I'm showing you here is the result of an imaginary graduate student, who appeared, let's say 25 years ago, in the office of his PhD advisor and told him: I have just succeeded in diagonalizing the full spectrum of N=4 SU(N) Yang-Mills gauge theory in four dimensions, I have calculated the density of states, and I find the following band structure of the theory. The band structure contains a piece where the entropy goes like a power of the energy which is near to one but smaller than one, nine tenths; then a part where it goes like the energy; then a part where it goes like the energy to the eight sevenths; and from a certain point onward, the entropy behaves like the energy to the power three quarters. Now, the reason this graduate student is unknown to us is that his advisor kicked him out of his office and told him there is no way this can happen, and everything written here runs counter to how we usually think of field theories. For example, consider Kaluza-Klein theories. In Kaluza-Klein theories there are supposed to be extra dimensions, but we don't see those extra dimensions; we live in our four-dimensional world. So when are we promised to see the extra dimensions? When the energies are large enough to excite those small extra dimensions, so that we see not just the zero modes but the particle excitations there. And that is when we should begin to see the ten-dimensional aspects of string theory, if it is a supersymmetric theory. But if that is the case, then when you have compact dimensions, the higher the energy, the larger the dimension looks, and the lower the energy, as where we live, the smaller the dimension looks. Now we are going to see that this behavior is totally the opposite: here the behavior at low energy indicates a ten-dimensional system.
At high energy, the dimension is reduced. That part is not why the graduate student was kicked out; there it made sense, it's a four-dimensional theory. So I'm going to show you how these different regimes come about, and they are going to play a role in our checking of what role geometry can play. So first the claim. The claim is that this part here signifies a ten-dimensional theory: it is characteristic of the high-energy excitations of a ten-dimensional field theory. This part is characteristic of a free string theory; this is the Hagedorn spectrum, I'll remind you. Then comes a region which is characteristic of a ten-dimensional Schwarzschild black hole, a Schwarzschild black hole in flat space. And then comes, indeed, the four-dimensional theory. So let's go through these various components. In general, if I have a field theory, and I will demonstrate this to you, the entropy at high energies goes like the energy to a power which is smaller than one, namely the dimension of space-time minus one, over the dimension of space-time. For free strings, one expects that the entropy goes like the energy. For black holes in flat space, in a general number of dimensions, you expect a power of the energy which is larger than one, and I will remind you how you get that. Of course a power larger than one creates heavy suspicion, because you have a negative specific heat for the system, so it is really unstable: you can't do thermodynamics in such a system, while black holes are supposed to be related to thermodynamics. And there are many other issues which arise once you have such a high density of states. So let's take first the radiation, the field theory. We are in a box; I am always doing thermodynamics in a box, and for the black hole we need the box anyhow.
For the other systems, the box is a convenience for putting in an infrared cutoff, so that I can define how the system behaves. So consider all theories which are conformal or asymptotically free, or, as I would characterize them, all theories which have a finite number of thresholds. That means that as you go to higher and higher energies, you do not keep producing, at rest, lots of new particles; the number of particles or constituents that the theory has is finite. (By the way, this is the loophole for string theory, where there are always new particles.) Then, if you go to a very high energy above all the scales, you can get the relation I stated in the following manner. In a field theory there is no gravity involved, so black holes are not created, so everything is extensive. The only scales you have in the problem are the temperature and the box. So if you count the number of states at very high energy, the entropy must be the number of constituents times the volume of the box, and because the only scales available are the temperature and the length R of the box, this fixes the entropy. The energy is calculated in the same way, except that to get an energy one needs to multiply by one extra scale, and that gives the temperature. Eliminate the temperature from these equations and you get that the entropy goes like the number of degrees of freedom at high energy to the power one over d, times the energy to the power (d-1)/d. So this is the general argument: at high energies, you should always get this dependence. Now, in recent years there were arguments by Casini trying to obtain in a more rigorous way various types of bounds, in particular what was called the Bekenstein bound. But in all those cases, you are not really looking at the high-energy spectrum of the system.
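The box argument above can be checked symbolically. Here is a minimal sympy sketch, with all order-one coefficients dropped: d is the space-time dimension, N the number of constituents, and V the volume of the box.

```python
# Sketch of the box argument: extensivity and dimensional analysis give
#   S ~ N * V * T**(d-1)   and   E ~ N * V * T**d   (d = spacetime dimension).
# Eliminating the temperature should give S ~ (N*V)**(1/d) * E**((d-1)/d).
import sympy as sp

d = sp.symbols('d', positive=True)
N, V, T, E = sp.symbols('N V T E', positive=True)

S_of_T = N * V * T**(d - 1)
E_of_T = N * V * T**d

T_of_E = (E / (N * V))**(1 / d)          # invert E ~ N V T^d
# Check the inversion, then eliminate T from the entropy:
assert sp.simplify(sp.powsimp(E_of_T.subs(T, T_of_E), force=True) - E) == 0
S_of_E = sp.powsimp(S_of_T.subs(T, T_of_E), force=True)

# The claimed high-energy behaviour: S ~ (N V)^(1/d) E^((d-1)/d)
target = (N * V)**(1 / d) * E**((d - 1) / d)
assert sp.simplify(sp.powsimp(S_of_E / target, force=True) - 1) == 0
```

For d = 10 the exponent is the 9/10 of the student's first band, and for d = 4 it is the 3/4 of the last one.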
You are studying something else, and the bound there was just one, and from the point of view of field theory it is obvious that the bound is going to be one. But there are issues about short-distance behavior and so on which I'm not entering into here. So take this as a derivation which gives the correct result, but with subtleties which one has to take care of. This vindicates, if we go back and look at what the student calculated, that this part is characteristic of high energy in ten dimensions and this part is characteristic of high energy in four dimensions. Now you realize that such a system tells you that low-energy people can be megalomaniacs: they believe they live in a ten-dimensional world, while actually the number of dimensions is much smaller, smaller by six. And of course in such a framework we too could be megalomaniacs: we think we live in four dimensions, but from such a point of view, maybe at high energies our number of dimensions is actually smaller. If that were somehow the case, it would help with many renormalizability properties, many properties we think about at short distances. In any case, this is there, and this year I am also looking for examples in solid-state physics, because my experience in the past was that outside general relativity, most phenomena that we found in particle physics had an earlier counterpart in condensed-matter physics. So I'm looking for solid-state systems which have such crossover phenomena; maybe they are there, and I'm trying to see what would characterize them there, but that's part of an open problem: to try and find systems which do not start out as GR-like systems (maybe they have to have a gravity dual, I don't know) and which have the property that the infrared dimension is larger than the UV dimension. And I remind you again: for normal compactifications, the UV dimension is the higher one.
Here it's the opposite: it is the IR dimension which is higher and the UV dimension which is smaller. Okay, here I do a fast calculation to remind you why, for a Schwarzschild black hole, the entropy goes with a power of the energy which is larger than one. I use the Bekenstein-Hawking result that the black hole entropy goes like the area. This line just translates the Newton coupling, in whatever dimension, into the string coupling; you change language and go to the language of strings. In string units, instead of the box you get the scale L_string, and you find that the Schwarzschild radius is related to the energy by this power. Plug it back in and you find that the entropy goes like the energy to the power (D-1)/(D-2). And note: D here is the number of space dimensions. You can see it from the area R_s^(D-1) already; D is not space-time here. In the previous transparencies D was space-time, here D signifies just space, I'm sorry. Now I make a side comment which many people have emphasized. We know that we are lucky: in order to understand the physics which we have measured up to now, not including quantum gravity, scales are separated. The fact that we are ignorant of what happens at high energy did not prevent us from calculating g-2 with astounding precision, calculating the weak interactions to fractions of a percent, calculating the strong interactions with percent accuracy, and some phenomena in each of these with very high accuracy, even though we don't really know what happens at higher scales and may never know. So the fact that scales are separated is very helpful: when we look at galaxies or we look at atoms, we have a big separation of scales, and this enables us to work. Nobody owes this to us, of course; we could just be lucky, so that we can use the renormalization group, effective Wilsonian ideas, and so on. But it is a very helpful thing.
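The fast calculation just described can also be done symbolically, keeping only the powers and dropping all order-one coefficients:

```python
# Sketch of the Schwarzschild scaling in D *space* dimensions, powers only:
#   S ~ R_s**(D-1) / G    (Bekenstein-Hawking: entropy ~ horizon area / Newton's G)
#   E ~ R_s**(D-2) / G    (Schwarzschild mass-radius relation G*M ~ R_s**(D-2))
import sympy as sp

D = sp.symbols('D', positive=True)        # number of space dimensions (D > 2)
G, Rs, E = sp.symbols('G R_s E', positive=True)

S_of_Rs = Rs**(D - 1) / G
Rs_of_E = (G * E)**(1 / (D - 2))          # invert the mass-radius relation

S_of_E = sp.powsimp(S_of_Rs.subs(Rs, Rs_of_E), force=True)

# d(ln S)/d(ln E) is the exponent; it should be (D-1)/(D-2) > 1,
# which is the origin of the negative specific heat.
exponent = sp.simplify(E * sp.diff(S_of_E, E) / S_of_E)
assert sp.simplify(exponent - (D - 1)/(D - 2)) == 0
# For D = 9 space dimensions (10d spacetime) this is the 8/7 of the band:
assert exponent.subs(D, 9) == sp.Rational(8, 7)
```

The exponent being larger than one is exactly the statement that the temperature decreases as the energy increases.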
But somehow when gravity comes in, we have to take into account that this stops being the case, and I'll remind you of two ways you can see that. One point of view: usually we expect that if you take the mass of a particle and its Compton wavelength, then the larger the mass, the smaller the Compton wavelength. And that is true until we reach objects which are black-hole-like, at the Planck length and Planck mass. Past that point, the mass increases and the object's radius, the equivalent of the Compton radius, actually increases as well. So the separation of scales stops being so clear. And I remind you that the same happens for strings. If there are compact dimensions, strings have winding numbers and momentum modes, and their mass is related to the length of the compact dimension: the mass of momentum modes goes like the inverse radius, but the mass of winding modes goes like the radius. So for the objects which are non-field-theoretic, those which behave like winding modes, again, the heavier the object, the larger its radius actually is. So this idea that we have separation of scales becomes more complex when gravity is involved, and this is something which people think about time and again. In any case, we see that for black holes the Schwarzschild radius increases with the mass, with the energy. Strings, which are like random walks, carry their entropy in the occupation numbers; I just write this down, I'm not entering now into exactly how this goes. And when you add it all up, even though these objects are different from what we expect, we find that their entropy, I didn't write it here, actually goes like the energy. So for a free string theory, without taking interactions into account (afterwards you do have to take interactions into account), the entropy goes like the energy. In particular, this means that the system has a limiting temperature.
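Why an entropy linear in the energy implies a limiting temperature can be seen in one line: S(E) ~ bE means a density of states growing like exp(bE), and the canonical partition function then only converges below T = 1/b. A short sympy sketch:

```python
# Sketch: S(E) ~ b*E means a density of states rho(E) ~ exp(b*E), so the
# canonical partition function Z(beta) = integral_0^oo exp(b*E - beta*E) dE
# converges only for beta > b, i.e. for temperatures below T_H = 1/b.
import sympy as sp

E, b, beta = sp.symbols('E b beta', positive=True)

Z = sp.integrate(sp.exp((b - beta) * E), (E, 0, sp.oo), conds='none')
assert sp.simplify(Z - 1/(beta - b)) == 0       # finite only when beta > b
assert sp.limit(Z, beta, b, '+') == sp.oo       # Z blows up as T -> 1/b from below
```

So for free strings the temperature 1/b plays the role of the Hagedorn limiting temperature.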
Now, maybe in one of the later talks I will discuss in more detail what happens if you cross that temperature a little bit, but asymptotically, for free strings, you definitely have a limiting temperature. Now, if what I told you here about AdS/CFT is correct, let's go back here, then you see there is actually no limiting temperature in that system. Because when you reach high energies, the claim is that the entropy goes like E to the three quarters, which has a positive specific heat: you can keep pumping in energy, and the temperature will increase. So there is no limiting temperature. So here is a case where it would seem that in flat space you have a Hagedorn maximal temperature, but in a system that you can study in detail, which is AdS, this bound on the temperature disappears and you can increase the temperature at will. Okay, so we are back to this graph. I explained the E to the nine tenths and the E to the three quarters in retrospect, namely what I would identify, from the student's result, as the claim here and the claim there. The advisor probably also threw him out because: how come in this field theory there should be strings? And why, in this field theory which has no gravitation, should there be objects which behave like black holes in terms of their entropy? So these are all the reasons that the student disappeared. However, later on, once AdS/CFT was accepted (even though it was never proven, there are, let's say, a lot of indications that many aspects of it are correct, and we will see here how non-trivial these things are), this was actually argued in the opposite direction: from the point of view of the phases of gravity, this is exactly what you would expect. You would expect a phase of gravitons in ten dimensions, because the string theory is ten-dimensional. You would expect a correspondence point where the highly excited gravitons actually turn into strings.
And this is why you see that. Then there is another correspondence point where the string begins to form a characteristic length, like the Schwarzschild radius related to the mass of these objects, and that correspondence point turns strings into black holes. And then, as you go up in energy, because you're in AdS, and now AdS is important, the black holes, and I will show them, are in the beginning small and don't realize that there is a curvature to the system. But when they become as large as the curvature scale, they begin to realize they are living in AdS; and in AdS, this is Gibbons and Hawking. These transition points, by the way, are calculated in a very naive way which is nevertheless very powerful. You just say: these are the effective degrees of freedom, and you build a cocktail. The cocktail contains gravitons, the cocktail contains strings, the cocktail contains small black holes, and the cocktail contains large black holes, which are the AdS-Schwarzschild ones. For each one of them, you know how the entropy should behave, and you ask who dominates the cocktail at a given energy scale. The words I told you before correspond to this comparison of entropies. Now, this method was very useful in gauge theories. When one studies the phase structure of a gauge theory, the degrees of freedom in the most general case would be dyons, objects which carry both electric and magnetic charge; I may come back to that later. But the simpler objects have just electric or magnetic charge, either one or the other. And you can find out which phase the system is in by doing, again, entropy arguments. So the key assumption is that you have identified the relevant degrees of freedom; afterwards, you just do simplistic thermodynamic arguments. And they were actually borne out by very sophisticated calculations going under the name of Seiberg-Witten theory and what came after it.
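The cocktail argument can be sketched as a tiny program: each candidate phase comes with an entropy law and a window of energies where the configuration exists at all, and the dominant phase at a given energy is the valid one with the largest entropy. The exponents below are the ones quoted in the talk; the prefactors and validity windows are made-up toy numbers, chosen only so that the crossovers happen in the stated order, not the actual N- and coupling-dependent values.

```python
# Toy "cocktail" of phases: who has the most entropy at energy E?
# Exponents are from the talk; prefactors and windows are illustrative only.

def dominant_phase(E, phases):
    """Name of the valid phase with the largest entropy at energy E."""
    valid = [(name, S(E)) for name, S, (lo, hi) in phases if lo <= E <= hi]
    return max(valid, key=lambda p: p[1])[0]

INF = float("inf")
phases = [
    # name,            S(E)  (toy prefactors),   validity window (toy numbers)
    ("10d gravitons",  lambda E: 2.0 * E**(9/10), (0.0, 1e6)),
    ("free strings",   lambda E: 1.0 * E**1.0,    (0.0, 1e8)),   # up to the correspondence point
    ("small 10d BH",   lambda E: 0.2 * E**(8/7),  (1e3, 1e9)),   # branch ends when R_s ~ L_AdS
    ("large AdS BH",   lambda E: 0.001 * E**(3/4), (1e3, INF)),
]

for E in (1e2, 1e4, 1e6, 1e10):
    print(f"E = {E:.0e}: {dominant_phase(E, phases)}")
```

Note the role of the validity windows: the E^(8/7) phase has the steepest entropy growth, but its branch simply ceases to exist once the horizon reaches the curvature scale, which is how the positive-specific-heat AdS black hole ends up dominating at the highest energies.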
So the idea is to take this same type of argument, which worked so well in gauge theories, and use it also in gravity. And then these are the different transition points: this is where radiation ceases to dominate, this is where strings become small black holes, and this is where small black holes become large black holes inside AdS. [Question: What parameter needs to be small to have a separation between these correspondence points?] Well, as you see, there is the string coupling g_s, and the transition point would be at 1 over g_s squared, so you need that not to be pushed too close to zero, otherwise you would not see anything. [But g_s times N is lambda.] Yes, you have two parameters. One is the 't Hooft coupling lambda, and, as is part of the ethos of AdS/CFT, the geometric properties become more apparent when this coupling is very large; and then you do perturbation theory in string theory in the other coupling, g_s. Okay, now I want to take the graph we had here and plot the temperature of the system. If I plot the temperature of the system, you see that I have positive specific heat for the gravitons, a constant Hagedorn temperature for the strings, negative specific heat for the Schwarzschild black holes in flat space, and then positive specific heat for black holes in AdS, which was, again, the big surprise. This was found by Gibbons and Hawking long ago, and somehow its significance escaped notice for many years, that these are black holes which have that property. Now, Gibbons and Hawking actually used them as an example related to confinement, and we will see where that comes in. Okay, so say we have this type of behavior. From your course in statistical mechanics, this invites a Maxwell construction. Namely, you see that in a microcanonical analysis, which is what I have done, I've shown you the full information.
But if you go to the canonical ensemble, you will actually jump and have a first-order phase transition: you will go from the system here, which is a system of gravitons, of radiation, to the system where you already see only the four-dimensional properties of the system. The parts which are strings and flat-space Schwarzschild black holes are transient; they disappear, those phases don't materialize. In the canonical ensemble, where the temperature is your control parameter, you go directly from one phase to the other, and this goes under the name of the Hawking-Page transition. I will now discuss it from the side of gravity. So, the AdS metric: I'm going back, after telling you what the consequences would be of the work of the student who didn't exist, to the bulk side. On the bulk side we have an AdS5 metric, and you see the r squared which appears in the metric up there. What Gibbons and Hawking were actually doing was building a Faraday cage for energy. When one studies such problems, it is important to be able to say there is a wall where you put the system, and energy cannot pass through; but energy passes through anything. So their way of building a Faraday cage, which you can do in electrodynamics but not in gravity, was to put the system in AdS, where the first metric component, this is Euclidean, when you go to Euclidean signature, can be related to a temperature. And you see that the local temperature goes to zero as r, the distance in AdS, goes to infinity. So in a sense, you arrange for the system to live in a box by using this metric, and that was their trick for confining energy to a certain range. Or, looking at the effective temperature: the effective temperature goes like one over the square root of g_00, and hence this behavior.
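The "Faraday cage" statement is just the Tolman redshift of the temperature in the AdS metric. A minimal sketch, using the global-AdS form g_00 = 1 + r^2/L^2 with L the AdS radius:

```python
# Sketch: the locally measured (Tolman) temperature in AdS,
#   T_loc(r) = T / sqrt(g_00),  g_00 = 1 + r^2/L^2,
# redshifts to zero at large r, so thermal energy is confined near the center.
import sympy as sp

r, L, T = sp.symbols('r L T', positive=True)

g00 = 1 + r**2 / L**2           # time-time component of the global AdS metric
T_loc = T / sp.sqrt(g00)

assert T_loc.subs(r, 0) == T                         # temperature T at the center
assert sp.limit(T_loc, r, sp.oo) == 0                # zero at the boundary
assert sp.limit(T_loc * r / (T * L), r, sp.oo) == 1  # falls off like T*L/r
```

This is the sense in which the AdS geometry itself acts as the box.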
Then, if all the data you are given is that the system has a boundary with a certain structure, another object which has the same boundary is a black hole sitting in AdS, which I've written down here. And you see that when you take r to infinity, the boundaries are the same; they differ when r is of an order related to the mass here. So these are two configurations, both solutions of Einstein's equations of motion: one can be viewed as thermal AdS, and the other can be viewed as a black hole in AdS. Now, these are two solutions of the equations of motion. There is no general analysis, so we don't know what the most general solutions of the equations of motion are. There could be others, and you will see that it would actually be nice if there were others, even though nobody owes us that. You will see that the final description is like that. So these two different solutions are going to be the competitors for the two different phases which I've shown here: what sits here is thermal AdS, and what sits here is the black hole. And the way you do it, a la Bekenstein, Hawking, and Gibbons, is to calculate from general relativity the free energies of these two different objects, and to ask who dominates. Now, from the picture we already expect a first-order phase transition, which is what Hawking and Page found for this configuration. And what actually happens, if our conformal boundary is a sphere, S3 times a circle which gives the temperature, is that at small temperatures there is actually no black hole at all: the black hole would be singular, you could not close it off with a smooth cap. The black hole only begins to appear as a candidate configuration once the temperature is of the order of the curvature. So there is a phase which is just thermal AdS, because it has no competitors. Then comes a regime, when the temperature is of the order of the curvature, where there is a competitor.
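The free-energy competition just described can be made concrete with the standard AdS5-Schwarzschild formulas; I quote them here rather than derive them (units with G and overall positive constants dropped, L the AdS radius, r_+ the horizon radius):

```python
# Sketch of the Hawking-Page competition in AdS5.  Quoted standard formulas:
#   T(r+) = (2 r+^2 + L^2) / (2 pi L^2 r+)     horizon temperature
#   F(r+) proportional to r+^2 (L^2 - r+^2)    free energy relative to thermal AdS
import sympy as sp

rp, L = sp.symbols('r_+ L', positive=True)

T = (2*rp**2 + L**2) / (2*sp.pi*L**2*rp)
F = rp**2 * (L**2 - rp**2)          # overall positive constant dropped

# Below a minimum temperature there is no black hole at all:
rp_min = sp.solve(sp.diff(T, rp), rp)[0]
assert sp.simplify(rp_min - L/sp.sqrt(2)) == 0
assert sp.simplify(T.subs(rp, rp_min) - sp.sqrt(2)/(sp.pi*L)) == 0

# Above T_min there are two branches (small and large black hole), but the
# free energy only changes sign at r+ = L, i.e. at T_HP = 3/(2 pi L):
assert F.subs(rp, L) == 0
assert sp.simplify(T.subs(rp, L) - 3/(2*sp.pi*L)) == 0
```

So for T below sqrt(2)/(pi L) only thermal AdS exists; between that and 3/(2 pi L) the black hole exists but loses the competition (F > 0); above 3/(2 pi L) the black hole dominates.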
But in the beginning, the competitor loses: the black hole does not win the free-energy competition. This holds over an order-one range of temperatures around the curvature scale. Eventually, when the temperature exceeds the curvature by some specific amount, it is the black hole which takes over, and you find that the black hole is the dominant configuration. So the two phases which we saw here, just by this construction, actually materialize in the thermodynamics of general relativity: you have the black hole, which I've drawn, and you have the cylinder, which is thermal AdS. Now, once you look at this, you find yourself involved in another question, something people asked without even having a good handle on general relativity: is there a Klein paradox in general relativity? What do I mean? When people realized that one has to write down relativistic field theories, they realized that the number of particles is a good conserved number only as long as there are no interactions; once you have interactions, you have to have superpositions of states with different particle numbers. Here, the analogous question is about topology. Namely, do you allow yourself to mix topologies which are different, or don't you? Maybe there is a superselection rule which says that once you live in a certain topology, you never need to go to another topology. As I said, in the Klein paradox the question was whether you can keep yourself content working with a fixed number of particles; the answer is that there is no such superselection rule once there are interactions, and you have to have superpositions beyond the fixed particle number you started your theory with. Okay, so what one finds here is that if AdS/CFT is to work, you have to have the permission, the license, to mix topologies. As I said, there is very strong indication that it is correct; I don't know of any proof that there is a hole in it.
So if you take that as your basic point of view, you have learned that topologies have to mix. Why, how does this come about? When you look at the theory on the boundary, at short distances its entropy goes like N squared, where N is the N of the SU(N) gauge theory. When you look at the dictionary and calculate the entropy of the black hole, it also goes like N squared. On the other hand, in thermal AdS there is no N squared; everything is of order 1. So suppose you start by saying the theory lives only in thermal AdS, and that you should not allow yourself to mix in the black hole because it has a different topology (classically it doesn't have a pi_1; the geometry is capped off here, so classically you don't see a pi_1). If you did not allow the black hole, you would get a contradiction, and you would have to give up AdS/CFT. You may want to give it up, but the idea, which I think Edward Witten actually suggested, was: let's allow the topology to change. We know that in gravity the black hole will dominate, and the black hole gives contributions of order N squared. And the question which is really left open, not solved to the end even though people have conjectured about it, is this: we have shown this behavior in gravity; in the field theory there was this imaginary student who diagonalized it, but the imaginary student left, so we don't actually have a diagonalization there. So here is a prediction: on S3, at very large N, you should have a phase transition. There have been suggestions as to what that transition is, but I am not yet aware of a proof in the field theory that it is there. In any case, this was a case where gravity actually taught us about the gauge theory. Okay, so here I repeat: when the temperatures were much smaller than the curvature, there was only thermal AdS anyhow; when the temperatures were of the order of the curvature, you had both thermal AdS and a black hole in AdS.
And when the temperature is high — actually there are two black holes there, one with positive specific heat and one with negative specific heat — in any case, at intermediate temperatures it is thermal AdS which dominates, and eventually it is the black hole which dominates. These are the two phases of the theory. So keep this in mind: we learned something about AdS/CFT, with some open issues involved, as I told you. Now let's relate this information to the original question, which was: what can geometry capture? And in particular, let us go to the case of the black hole information paradox. The original work on this was by Maldacena, and the tool he used was AdS/CFT. The idea was to punch a hole in the arguments of Hawking. The calculation you can do in principle is the following. You are asking whether there is a black hole information paradox or not. Take some initial bulk state. If you knew the dictionary — which we don't — map it to an initial CFT state. You know the Hamiltonian of the CFT; it's a field theory; let it evolve, and you learn what the final state is. Go back with the dictionary, which doesn't exist in enough detail, to the bulk, and see what happened. So you could answer the question of how exactly a state evolves by going to the field theory. But as I said, this direct approach is still beyond us; the dictionary is not developed enough to do that. So instead one looks at a different way to address the same issue: you take a system at thermal equilibrium — now a quantum field theory; it's true we think about the black hole like this too, and we'll come to it — you perturb it, and you ask how it reacts to this perturbation. What happens as time increases? So one considers a thermal ensemble with some density matrix rho, one calculates the correlation function C(t) of A(t) with A(0) in that ensemble, and one looks at very long time scales.
And one wants to see what that does. If you look at this operator A(t), you can write it with e^{iHt} on one side and e^{-iHt} on the other. Insert, as we usually do in elementary quantum mechanics, a complete set of states, and you will see that this function C(t) is a sum of density-matrix elements times matrix elements of A times many phases. How this system eventually behaves does depend on details — I think this is appreciated more now than it was 10 or 15 years ago. It depends on the details of the spectrum, and it depends on the details of the operators one inserts; I will show you the classes of results one gets. Now, what does one know about such a correlation function? There are theorems in the classical case. In the quantum case, I don't really know how strong the theorems are — maybe people in the audience know. There are things which are proven in quantum mechanics, some things in quantum field theory, and some are, let's say, common wisdom and intuition; I don't know what basis of rigor they really have. Classically, if one tries to characterize this time-averaged correlation function — without yet entering into the details of what the spectrum and the matrix elements look like — one says the following. If the phase space is compact, and if the Liouville theorem holds, namely volume in phase space is conserved, then if you start in some element of phase space, there will always be a time t at which, for any epsilon, you return to within a distance epsilon of that volume. To translate this into quantum mechanics: instead of the compact phase space you need a discrete spectrum, and instead of volume conservation you need unitarity.
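In symbols — a sketch of the step just described, assuming rho is diagonal in the energy eigenbasis (as for a thermal or microcanonical ensemble; the notation is mine, not from the slides):

```latex
C(t) \;=\; \mathrm{Tr}\!\left[\rho\, A(t)\, A(0)\right]
     \;=\; \sum_{m,n} \rho_{mm}\, |A_{mn}|^{2}\, e^{\,i (E_m - E_n) t},
\qquad A(t) = e^{iHt} A\, e^{-iHt} .
```

Everything that follows is about what this sum over phases does at very long times.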
Instead of asking where you are in phase space, you calculate some correlation function at time 0. And the claim is: for any such correlation function there will always be a time t_P — P for Poincaré — at which the correlation function differs from its initial value by less than epsilon. Such a time always exists; this is the claim. So people who are very conservative and say that they have seen it all — it could be true, because you do see it all. I don't know if Humpty Dumpty is part of French culture — is there an equivalent? No? So: an egg which falls and breaks will come back together again, contrary to what you teach in thermodynamics. Now, what is the way to make this useful for us? You look at the quantity G(t)/G(0) for some not-too-large correlation function, you take its absolute value squared, and you take its time average: you integrate |G(t)/G(0)|^2 from 0 to capital T, divide by T, and send T to infinity. Now what does this measure? The claim was that things essentially return to what they were — there are things which recur, and the function cannot just go to 0. You want to measure with what weight this recurrence occurs, and it turns out it is enough to divide by T; you don't need a higher or lower power of T to see that the thing actually survives. I'll show you the claim of how this goes, but the claim is that this quantity is bounded from below by e to the minus a number times the entropy — not just minus the entropy, but with a number sitting in front of the entropy. This is a very strong statement, and you will see that in the context of AdS/CFT it is a non-perturbative statement: it is a statement whose lower bound would be 0 to all orders in 1/N.
But it is actually not 0, because it has a non-analytic part which gives this contribution. So what does this tell us? It tells us that our usual intuition that systems just thermalize — which would be just an exponential drop-off — isn't correct, because if that were the case there wouldn't be a lower bound. So things have to recur, and the question is: what happens, and how does this manifest itself? Right now I just want us to keep in mind that we are speaking about gravity, but this is a lecture in quantum field theory; gravity doesn't appear. It will reappear, thanks to AdS/CFT, but for the time being it's not here. So now you define a quantity, the noise, which is the square root of this time-averaged value — we'll normalize it in a while. This is, again, I remind you, the quantity. So let's go and calculate it. In general, you take this object and take its absolute value. (I thought it was called G? — You're right, you caught it: the letter changed between slides; what was called L is now C. — So it's the ratio G(t) divided by G(0)? — Right now, without G(0); G(0) will appear in a slide or two. But thank you. I didn't know how to change it, so that's it — these are two different letters which have been identified.) So we take this quantity and we average it. And to make things simple, to get a feeling, let B be a matrix from which we have removed the diagonal elements: take the operator — I think before it was called G, now it's called B — and write a new object where everything on the diagonal has been removed.
So in that case it is clear that if the diagonal matrix elements are 0, then when I take the time average it enforces E_m = E_r and E_n = E_s, because all the other phases average to zero in time. (Yes — the overline is the average with respect to time.) So in that case the time average of |C(t)|^2 is the sum shown there. Rho is a density matrix, so it is positive, and the squared matrix elements are positive, so this quantity is actually positive — and you see where the lower bound comes from. For a general B things could fluctuate around 0, but this captures the feature that there is a lower bound. So now you do an estimate — I will show you a rough one — and the estimate gives you e^{-S} here, though another number can also appear in the exponent. The main thing is that the decay stops: you cannot just keep going down, because you have to satisfy a lower bound. Here is the rough estimate. The maximum of C is actually C at t = 0, and this is the normalization factor you asked about. Analyzing the numerator and the denominator, you find that the ratio — the noise, after taking a square root — is of order e^{-S}. I leave the rest as an exercise; I have given everything here. (What is the definition of S? — The entropy. — Yes, but how do you estimate S as a function of B and rho? — S comes out of the density matrix. Say it is a microcanonical ensemble: then S enters through the normalization of the microcanonical ensemble, and the same in a canonical ensemble. I didn't complete the proof here; I invite you to look at the paper. And I will show you an example where S appears also through the operator B.) So here B drops out of the estimate, but I told you things do depend on B, and B can reintroduce factors.
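The estimate can be checked in a toy model. Here is a minimal numerical sketch — everything in it (the dimension D, the random GOE-like matrices, the infinite-temperature ensemble, the seed) is my own choice, not from the lecture — showing that the time-averaged noise does not decay to zero but settles at order 1/D = e^{-S}, with S = log D:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 40                       # Hilbert-space dimension; S = log D
# A random "chaotic" Hamiltonian and a generic Hermitian observable
H = rng.normal(size=(D, D)); H = (H + H.T) / 2
B = rng.normal(size=(D, D)); B = (B + B.T) / 2
np.fill_diagonal(B, 0.0)     # drop diagonal elements, as in the argument
E, V = np.linalg.eigh(H)
Bt = V.T @ B @ V             # observable in the energy eigenbasis
rho = np.full(D, 1.0 / D)    # infinite-temperature density matrix

def C(t):
    """C(t) = Tr[rho B(t) B], written as a sum over phases e^{i(E_m-E_n)t}."""
    phases = np.exp(1j * np.subtract.outer(E, E) * t)
    return np.einsum('m,mn,nm,mn->', rho, Bt, Bt, phases)

C0 = C(0.0).real
ts = np.linspace(0.0, 2000.0, 4000)           # long-time window
noise2 = np.mean([abs(C(t) / C0) ** 2 for t in ts])
noise = np.sqrt(noise2)                       # expected to be of order 1/D
print(noise, 1.0 / D)
```

The printed noise comes out small but nonzero, of the order of 1/D, rather than decaying to zero: the spectral sum cannot forget its discreteness.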
And I will show later — I don't know if today or next time — a B related to the eigenstate thermalization hypothesis, where the matrix elements of B themselves contain an e^{S}. So where, in other words, does the entropy appear? It appears in rho; it appears in the sum, because we are summing over states and the number of states is e^{S}; and it can also appear in the matrix elements of B. These are the places where the entropy emerges, and the final outcome is that in certain cases the estimate goes like e^{-S}. (So the statement is that the Poincaré recurrence time is e^{+S}? — We haven't reached that yet. This was just a statement about the average noise; next we have to see about the recurrences, because so far we are only discussing averages.) OK, so the first thing, when you look at these graphs of how this behaves: usual thermalization would have the correlation function drop exponentially. Actually, when there is a large number of conservation laws, you first relax to a (generalized) Gibbs ensemble — you first remove stuff — and then, going further, you can even get power laws. But in any case the thing goes down. So let's say there are not too many conservation laws and the thing just thermalizes: it goes, depending on the operator — definitely not universal — like e^{-gamma t}. And because you need to reach an average of e^{-S}, the time when you reach the average is of order S. So the first time scale in the problem as formulated here is S: that is the time when the system begins to reach the average and cannot continue its decaying behavior. Intuitively, you look at the sum and say: as long as t is smaller, much smaller, than S, the spectrum looks continuous to you. By the Heisenberg uncertainty relation between time and energy, you don't yet feel the discrete nature of the energy levels.
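The arithmetic behind "of order S", under the stated assumption of a simple exponential decay with an order-one rate gamma:

```latex
\frac{C(t)}{C(0)} \sim e^{-\gamma t}
\quad\Longrightarrow\quad
e^{-\gamma t_{1}} \sim e^{-S}
\quad\Longrightarrow\quad
t_{1} \sim \frac{S}{\gamma} .
```

With gamma of order one, the decay meets the e^{-S} floor at a time of order S.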
So the exponential-decay part is the era when everything looks continuous to you, and it stops when you reach times of order S. Now in truth — this is what I've written here — these are gedanken calculations. I have not yet seen anybody really do even a computer simulation showing that all these intuitive, accepted ideas are what actually occurs. So once you have reached this value e^{-S}, what happens? There is a next time scale, which we called the Heisenberg time scale, when the system begins to feel that it is a sum of many different clocks. Think of e^{i(E_m - E_n)t}: each phase is a point on a circle. The clocks have dephased, and you begin to feel that each has its own frequency. How do you estimate that time? You look at a certain energy band. The characteristic time will be related to one over the average energy difference, because you have a sum over energy differences. The average difference in energy is related to one over the number of states in this discrete band, which means the time goes like the number of states: this time goes like e^{S}. This is not yet a time particular to you — you can be much more demanding than me once we decide which epsilon you need; you may want a much smaller epsilon than I do. This number, e^{S}, does not depend on how picky we are. It is just the statement that I am going to feel the average energy differences of the system: the differences are not of order zero, they are of order e^{-S}. So this goes on, and this is the Heisenberg time, and it goes like e^{S}. Now you can ask yourself: OK, I look at this set of clocks, and I want, let's say, all of them together.
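The Heisenberg-time estimate just described, written out: in an energy band of width Delta E containing of order e^{S} states,

```latex
\overline{\delta E} \;\sim\; \frac{\Delta E}{e^{S}},
\qquad
t_{H} \;\sim\; \frac{1}{\overline{\delta E}} \;\sim\; \frac{e^{S}}{\Delta E} \;\sim\; e^{S} .
```

The mean level spacing is exponentially small in the entropy, so the time at which the discreteness of the spectrum is felt is exponentially large.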
I look at a global shift: I want all the clocks to be within an angle Delta alpha of where they were at time t = 0, which was 0, and I want to see how long that takes. And now comes your precision, because Delta alpha is related to how much precision you want. You have this motion on a torus: each clock defines a compact variable. The time it takes should be one over the probability, which is related to the volume. The volume is Delta alpha over 2 pi to the power of the number of degrees of freedom — this should be a power, not a multiplication. So the volume to which you want to return is (Delta alpha / 2 pi) to the number of clocks you have, and you want to return in all of them. That means the estimated time goes like the exponential of the number of degrees of freedom times log(2 pi / Delta alpha). And this gives you the Poincaré time, which is the exponential of e^{S} times log(2 pi / Delta alpha). If you don't care very much and you take Delta alpha = 2 pi, you are back to the Heisenberg time. But if you are very particular and want Delta alpha significantly smaller than 2 pi, then you get times of order exp(exp(S)). And then you are really returning — not just failing to be zero, but returning to the original value you had. This is when the spikes become of order 1. So these are the three times: the time when the drop of the thermalization, or whatever the behavior is, stops; then the time when you begin to feel the average spacing of the energies — of course the smallest gap gives the longest time scale and the largest gap the shortest, and you look at the average; and then exp(exp(S)), when you begin to return to where you once were. Here is how this would look, with these repetitions in time.
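The clock picture can be tested in a toy Monte Carlo — a discrete-time sketch under my own assumptions (uniform random frequencies per trial, a window of total width Delta alpha around the starting phase; none of this is from the lecture) — checking that the mean recurrence time grows like (2 pi / Delta alpha)^N in the number of clocks N:

```python
import numpy as np

rng = np.random.default_rng(1)

def recurrence_steps(n_clocks, dalpha, n_trials=300):
    """Mean number of time steps until all n_clocks phases simultaneously
    return to within dalpha (total window width) of their starting point."""
    steps = []
    for _ in range(n_trials):
        # each clock advances by a fixed random increment per step
        omega = rng.uniform(0.0, 2 * np.pi, n_clocks)
        phase = np.zeros(n_clocks)
        t = 0
        while True:
            t += 1
            phase = (phase + omega) % (2 * np.pi)
            # circular distance of each clock from its start (which was 0)
            d = np.minimum(phase, 2 * np.pi - phase)
            if np.all(d < dalpha / 2):
                steps.append(t)
                break
    return np.mean(steps)

# Expected scaling: T_P ~ (2*pi/dalpha)**n_clocks, here 4, 16, 64, ...
for n in (1, 2, 3):
    print(n, recurrence_steps(n, np.pi / 2), (2 * np.pi / (np.pi / 2)) ** n)
```

Each extra clock multiplies the waiting time by roughly 2 pi / Delta alpha, which is the exponential-in-N (and hence doubly exponential in S, since N ~ e^{S}) growth quoted above.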
Now it is interesting that these times are each an exponential of the previous one. So you can ask: what is log S? Because we had S, e^{S} and e^{e^{S}}. And log S is related to something that we did not and will not describe here, called the scrambling time, which is more or less the time it takes to spread information, let's say, over the horizon of a black hole. So that's another time, related again by a log, but I'm not using it here. OK, but I'm sitting here in front of Thibault, so one cannot not put in numbers. What does this give? I'm afraid the natural unit is universe lifetimes; we'll call it UL. And let's look only at S — not at e^{S}, and not at e^{e^{S}}. Times of order S go under the name of the Page time (which also arises for other reasons). The Page time for a black hole the size of a proton is 10^10 ULs. The Page time of a black hole of 10^9 solar masses — a quasar-scale black hole, with a Schwarzschild radius of about 3 x 10^9 kilometers — is 10^87 ULs. And this is just S. So we are talking here about something which science fiction doesn't dare touch; one is going to enormous epochs. So why? What is the justification for doing such a thing? Let me go back to phenomenology. Many years ago, John Iliopoulos and his colleagues were looking at something which was of no significance to most people: flavor-changing neutral currents. At the time there was no nice picture of the standard model in most people's minds. So why concentrate on flavor-changing neutral currents and their absence — who cares? And nevertheless, from that they derived a deep understanding of the structure of nature; in particular, that you need at least one more quark to complete the doublet with the strange quark. So here we are in a situation where we have a very consistent theory.
And in a way, I would say that any hole you can find in this bastion of consistency is worthwhile, because consistency is our tool — we can't do the experiments, as we discussed. So even if the problem only shows up after e^{e^{S}}, let's identify the problem; once there is a hole, maybe one can inflate it and help the fort crack. Or maybe you will find that, once again, the fort reacts and does not allow you to have a problem. So the justification for going to these very long time scales is that one is looking somewhere for a hole in the argumentation. OK, so to summarize: there are time scales related by logarithms. There is the scrambling time; there is the Page time, the end of the decay; there is the Heisenberg time, when you begin to feel the average level spacing; and there is the Poincaré time, when you recur to what you really started from. So how is this related to the original problem we were discussing? The way it is related is through AdS/CFT, and as I told you, the responsible adult in the relation is the CFT. We know that if we put N = 4 super Yang-Mills theory on a sphere, the theory is unitary — we have no reason to suspect otherwise, no evidence of that — and the spectrum is gapped, because I am sitting on S3. So one is sitting in the conditions of a general conformal field theory, and there is a lower bound on the object at hand. It's there. The point is: there is a lower bound. The lower bound — now not for the general theory, but for this theory at hand — goes like e^{-S}, and the exponent goes like minus N squared, up to numbers, because we are in the adjoint representation. This goes like e^{-1/G_Newton}, by the dictionary between the field theory and G_Newton that appeared in the slide I showed you. That means that from the gravity point of view this is non-perturbative.
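The chain of scalings being invoked — schematically, using the standard AdS5 x S5 entry of the dictionary, l^3 / G_5 ~ N^2 (the precise coefficients, here c and c', are not needed):

```latex
\text{lower bound} \;\sim\; e^{-a S} \;\sim\; e^{-c\,N^{2}} \;\sim\; e^{-c'\,\ell^{3}/G_{N}} ,
```

a function of G_N all of whose derivatives vanish at G_N = 0: no finite order of the gravity (1/N) expansion can produce it.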
To any finite order in G_N this vanishes; and if N is infinite it also vanishes. So you have to think of an N which is extremely large but not infinite. From the CFT you know there is a lower bound. Now, OK, you found it — let's also find it in the bulk. Go to a temperature well above the Hawking-Page transition, so we have the black hole, and calculate a typical correlation function in the background of the black hole, and try to recover the lower bound. And here is the point Maldacena emphasized: there is a paradox. Why? When you calculate the correlator of two scalar fields in the background of a black hole, you go to tortoise coordinates, which push the horizon to minus infinity. You find that the propagator can be calculated by solving a non-relativistic quantum-mechanical problem in a certain effective potential. And the characteristic of that effective potential is that it is Liouville-like as you approach the horizon, which now sits at minus infinity. Such a spectrum has no gap. So the grounds you had for being sure that things would behave as you expect, and that you would get the lower bound — you have lost them. You do the calculation, and you see why it was important to have a gapped spectrum: indeed, you find that the black hole gives a result which is 0. So you could drop AdS/CFT, because you get 0 from the black hole while you know from the responsible adult that there is a lower bound. (You are computing the correlation function of a scalar around the black hole, and you say it goes to 0 at large time? — Yes; quasinormal modes, whatever words you want to use. But in the assumptions I used the fact that the spectrum was gapped. Once you have the Liouville behavior, the spectrum is not gapped, so you have lost that ground. And indeed you don't get the bound; you get, if you wish, the quasinormal-mode behavior you know anyhow.)
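Schematically — hedging the details of the mode decomposition, which the lecture does not spell out — a scalar mode phi = e^{-i omega t} psi(r_*) times an angular part obeys a Schrödinger-like radial equation:

```latex
-\frac{d^{2}\psi}{dr_{*}^{2}} + V(r_{*})\,\psi = \omega^{2}\psi ,
\qquad
V(r_{*}) \;\sim\; V_{0}\, e^{\,2\kappa r_{*}} \;\longrightarrow\; 0
\quad (r_{*} \to -\infty,\ \text{horizon}),
```

with kappa the surface gravity: the potential dies off exponentially toward the horizon (the Liouville-like behavior just mentioned), so the half-line in r_* supports a continuum of frequencies and there is no gap. In thermal AdS, by contrast, the potential is a box and the spectrum is discrete, with gap set by the AdS curvature.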
So you get an exponential drop-off, and an exponential drop-off gives 0 for the lower bound — finished, a contradiction. So has Maldacena proven that Maldacena was wrong? No — it's Maldacena, so no. He took, let's say, a non-US attitude: let's not concentrate only on the winner. We had two configurations — and I remind you there can be many more, which we don't know about. As long as the leading configuration delivered, we followed it. But if the leading configuration gives us 0, we should go to the other saddle point, and the other saddle point is thermal AdS. First, if one runs the argument again, just calculating the entropy as before, you see that you need to allow several topologies; you cannot live with a superselection rule on topology. OK, so in politically correct language, you need topological diversity: you have to allow different topologies if they solve the problem. So you get out of the black hole and go to thermal AdS, which was the highly non-leading configuration. What characterizes it in tortoise coordinates — how does the effective potential look? I described it before: there is a box, and that is the characteristic of AdS. You get a potential which is a box, and the box appears also near where the horizon used to be — the horizon has moved away. At this stage you have a background in which the spectrum is gapped, and the gap is determined by the curvature of AdS. So now when you calculate the object, you find that it respects the lower bound. You see that a configuration which was down by e^{-S} was called in to help when the leader failed, and it delivered: it gives the right answer. But it delivered for what? It delivered for the average.
And I told you in the slogans at the beginning: geometry works for averages. So we had here an average which is extremely small, of order e^{-S} — I think you have an idea of how small that is. It's a very small effect, and nevertheless this small effect can be reproduced by geometry. But this is the average. What happens when you ask about more? (OK, I will end in five minutes, and by the way the lectures will be an hour and a half and not two hours; that's the good news from today.) One way to think about this: we had a black hole, and as it moves in phase space it suddenly becomes a thermal ensemble of gravitons. (This was a large black hole? — Yes, everything is large, always; I am way above the Hawking-Page transition. We can also discuss what happens below, but I am taking a thermal average well above it, so everything is large black holes. — It's a decay? — I hope, in addition, that it transits to the thermal phase. Call it what you like — instanton, tunneling, superposition, whatever your fantasy tries to build.) And then the dissipation, instead of happening in the presence of the black hole, happens in the presence of AdS. This is, somehow, the kind of attempt to see how these recurrences come again and again. However, if you look at the exclusive quantity — not the average, but C(t) at every time t — then you see that this cannot explain the problem; the geometry fails here, in this way, and we knew it from the start. We know that the Poincaré recurrences are of order one. But because we had to resort to the loser configuration to give us the peaks rather than the zeros, everything gets multiplied by e^{-N squared}: the probability to be in it, and not in the leading configuration, is suppressed by e^{-S}. So what happens is that you get the average by having short, Heisenberg-time recurrences.
The recurrences happen very frequently, but the height of each recurrence is minimal, e^{-S}; while the true behavior should be large recurrences, of order one, which happen extremely rarely — and that we know from the quantum field theory. Both give the same average — it so happens, and it is very nice to see in detail — up to coefficients in the numerator. However, they do not give the same exclusive behavior. So we saw here an example where geometry gave an extremely small quantity correctly when we were talking about the average, and failed to give the correct behavior when we asked about the very fine-grained behavior. We see here the limitations of the semiclassical description: as far as we know up to today, it does not mimic the detailed behavior. And I think it is nice to see that coarse-grained quantities are given by black holes, while more detailed, exclusive ones are not reproduced. This gives some meat to a general feeling — which I will illustrate with another example in the next lecture — that there really is some coarse-graining involved when we talk about geometry, and when the questions become very, very detailed, we have to resort to other stuff; semiclassical geometry is not giving it to us. (But maybe we need a very special operator here. You took a generic operator, so you don't know what it corresponds to in the bulk; maybe there are operators that measure something geometrical and are something very special in the CFT. — It could be. There are many things proved which I haven't shown, and things I have not proved that I didn't show. But I would say: if geometry is a description, it should be generic. There never is such a claim for special operators. Also averages, if you wish, are something non-generic — we decided to pick the average, and then geometry works. So there is no doubt that there can be quantities for which geometry works.)
But the idea, when you use geometry, is not that you look through e^{S} properties and find the one for which it works; the idea is what happens generically. So I think the statement here would be: for generic objects the averages do work, but the more detailed, exclusive quantity misses — and you can see by how much it misses. Now, one way out that we suggested — and maybe this will be the answer in the end — is the following. It reminded us of QCD, where people were trying to find the single configuration which leads to confinement; in the end that didn't happen, and it is a more complicated phenomenon. So maybe taking just thermal AdS and the black hole for the average is only the tip of the iceberg of lots and lots of stories you could tell geometrically, and when you sum over all of them, maybe they look like 't Hooft's brick wall. Because if you do put in a brick wall — a firewall, whatever you want to call it — you can arrange to reproduce also the exclusive C(t), for every t. You can do it, because of course a brick wall produces a gapped spectrum. The problem — and this we show in our paper — is that you don't know a priori, from the string-theory point of view, what the width of the brick wall is; you just impose it. For the theory, there exists a brick-wall width for which it reproduces C(t). But to have a real understanding of whether that is the right way, you need to get it from the string-theory side: you have to derive the width, and then show that that width actually corresponds to what you get from the QFT. So there may be many, many configurations whose sum is effectively a black hole with a brick wall, so that the spectrum is gapped — which was again the idea of 't Hooft, and maybe others, for how to get a gapped spectrum. (And also to get the factor of 2 wrong in the Hawking spectrum. — OK, but here the main thing is to get the gap.)
OK — without that, there is nowhere to go; then one does the next level of detail. But the first thing is to turn the continuous infinity into a discrete spectrum; that is the key thing you need to do. So, the conclusions of what was done. Maldacena punched a hole in Hawking's argument: he showed that Hawking's arguments did not include the possibility of changing topology. So now the burden has shifted — other people may have picked it up or not, and I think one is still discussing this issue. Topological diversity is required; without it, it won't work. I would also say that the fact that it works even for the average, for such a minute quantity, shows you how difficult it is to breach this hole in the wall of the consistency of string theory; we have not yet succeeded in doing it. And these are the slogans. Next time I will pick up from here and continue, because one question that comes out is: what did we have here? We had a very strange situation in which the long-time behavior of correlation functions was dictated by the thermodynamically disfavored configuration. Is this generic, or did it just happen by accident in this case? We will start by addressing this and relating it to ER = EPR. OK, I will stop here. Let's see.