Well, it's a great pleasure to be here. Thank you very much for the invitation, and thanks especially to the organizers for putting together such a nice workshop conference. I chose the title "Non-Perturbative Studies of Membrane Matrix Models" primarily to emphasize that the physics I'm interested in has to do with the matrix, or membrane, aspect of these matrix models, and to highlight the non-perturbative studies of them that I will focus on as well. The starting point, which I believe was Jens's starting point when he began with these membrane models, is the Nambu-Goto action. The Nambu-Goto action is just the induced volume pulled back onto the membrane from the ambient embedding space. So we're pulling back the metric g_mn from some ambient embedding space onto this (p+1)-dimensional surface, the p-brane, where the one is meant to be time, and that gives the Nambu-Goto action. The membrane can be charged, in the same sense that if p were 0 this would be the world line of some particle, and we could take it to be a charged particle: we would add the integral of some one-form along the world line, an electromagnetic potential. So in general we want to consider the volume together with these additional p-form gauge fields. One could complicate these theories by adding an antisymmetric part to the metric to get a Dirac-Born-Infeld action, and in some contexts it is of course very natural to add extrinsic curvature. The focus of my lecture will mostly be on the supersymmetric version. Supersymmetry restricts the dimensions in which one can define these models, and I will only be considering supersymmetric extensions of these charged Nambu-Goto strings or membranes.
It's pretty clear that there has to be a maximum dimension if we're going to deal with supersymmetric extensions, because if we've got a spinor, the number of components of a spinor grows exponentially with the dimension of spacetime, whereas the number of bosonic components only grows linearly with the number of spacetime dimensions. So there has to be an upper limit. And then, for consistency, there are special Fierz identities that close the supersymmetry algebra only in very special dimensions. These are one more than the special dimensions for N equals 1 supersymmetry; so for 10 dimensions, this is 10 plus 1. You can already see some of the relation between that and the matrix models. One can discuss it in Polyakov form, but that doesn't seem to lead to any additional insight: if I put in the Lagrange multiplier field, a Lagrange multiplier metric h, one can integrate it back out, and it looks more or less the same, but it hasn't led to any additional insight into how to quantize these models. What did prove successful, and this was the initial observation by Jens which made significant progress possible, was to go to light-cone coordinates, with the shift gauge fixing in light-cone coordinates. So basically one writes down the ambient metric, I'm taking a flat-space embedding, and pulls it back onto the membrane; the sigmas are the coordinates on the membrane. If I choose light-cone coordinates, and choose my time parameter tau to be x-plus, then the special thing that happens is that x-minus-dot appears only linearly. The fact that it is linear rather than quadratic plays a crucial role. And one chooses the gauge fixing so that the shift vector N_j is zero; this is an additional gauge-fixing constraint. In this gauge-fixed action, the Nambu-Goto form becomes the product of these two terms.
One takes the momentum dL/d(x-minus-dot); on the equations of motion this is actually a constant, and the derivative d_j of x-minus doesn't appear anywhere, so one has avoided any dependence on that quantity. In two dimensions, the additional special ingredient is that the determinant of the metric can be rewritten: the determinant involves two epsilon tensors, and you can rearrange them so that you pull out something that looks like a Poisson bracket, and the potential that's left takes this form. One ends up going from the Nambu-Goto action to the flat-space Hamiltonian with an additional Gauss law constraint. So this is the structure of what's going on. The obvious thing then is to quantize by replacing the Poisson brackets with commutators. It looks like one should be able to make further progress and really quantize this system properly without going to any non-commutative version of it; however, to the best of my knowledge, that has not been successful. I'm sure somebody here will let me know if there's some promising progress on that. If one looks at higher-dimensional objects, those are also of interest. The determinant is now going to have more than two epsilons, but we can pull out one of the epsilons, attach it to derivatives of the x's, and get something that looks like the square of the Nambu bracket. The residual symmetry in all of these cases is the group of area-preserving diffeomorphisms, and this can be reformulated as a gauge theory of area-preserving diffeomorphisms; I'll say a little bit about that later. How do we quantize these? The quantization, as we heard very nicely this morning, is again to go to a non-commutative version: one deforms functions to matrices, and the Poisson brackets to commutators.
And the momenta become just the Laplacian, so this is the Hamiltonian in quantum-mechanical form. It's just a many-body Hamiltonian with this peculiar potential. The potential, you will note, has flat directions: there are directions going outwards where it is flat, and it becomes very narrow. In the bosonic setting these flat directions are lifted by the zero-point energy of the fluctuations of the other coordinates. However, that is not the case in the supersymmetric version, which I'll come back to. We have replaced the diffeomorphism invariance by a U(N) symmetry, and the Gauss law constraint says that we should choose singlets: the physical states are going to be U(N) singlets. So this is the bosonic system; H is the matrix membrane, or fuzzy membrane, in d+1 dimensions. You'll see these flat directions at low energies. All of the saddle points of the potential are given by these equations, which we heard a lot about in the second talk this morning, the minimal surface equations. An interesting quantity from the physics point of view is the partition function of such an object: Z is the trace over the physical states of e to the minus beta H, where physical states again means we're focusing on U(N) singlets. The path-integral version of this object is to integrate over x with the action associated with the Hamiltonian H, periodic in imaginary time, doing a path integral over the x's, which are now matrix versions of the embedding coordinates. The fact that they are matrix versions means that somehow the embedding space itself should naturally be considered a non-commutative object as well. The Gauss law constraint is implemented by a Lagrange multiplier field, which is this gauge field here. And of course there is no curvature for A; it's flat, we're in one dimension.
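The deformation of functions to matrices can be made concrete with a small check: the spin-j su(2) generators are the matrix, or fuzzy-sphere, analog of the coordinate functions on the round sphere, with the Poisson bracket replaced by the commutator. A minimal editor's sketch in Python (the spin-j construction is standard; only numpy is assumed):

```python
import numpy as np

def su2_generators(N):
    """Spin-j generators, N = 2j + 1, obeying [J_a, J_b] = i eps_abc J_c."""
    j = (N - 1) / 2
    m = j - np.arange(N)                      # Jz eigenvalues: j, j-1, ..., -j
    Jz = np.diag(m).astype(complex)
    Jp = np.zeros((N, N), dtype=complex)      # raising operator J+
    for k in range(1, N):
        # <m+1| J+ |m> = sqrt(j(j+1) - m(m+1))
        Jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    Jx = (Jp + Jp.conj().T) / 2
    Jy = (Jp - Jp.conj().T) / (2 * 1j)
    return [Jx, Jy, Jz]

N = 7                                         # spin j = 3
J = su2_generators(N)
comm = J[0] @ J[1] - J[1] @ J[0]
assert np.allclose(comm, 1j * J[2])           # [Jx, Jy] = i Jz
C2 = sum(Ja @ Ja for Ja in J)                 # Casimir: j(j+1) = (N^2 - 1)/4
assert np.allclose(C2, (N**2 - 1) / 4 * np.eye(N))
X = [Ja / np.sqrt((N**2 - 1) / 4) for Ja in J]
assert np.allclose(sum(Xa @ Xa for Xa in X), np.eye(N))  # fuzzy sphere: x.x = 1
```

The normalized matrices X_a square to the identity, so they form an N-level non-commutative regularization of the unit sphere, the simplest instance of the function-to-matrix map described above.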
The only physical degrees of freedom of A are the values you can't undo because of the periodicity in time. You can diagonalize it, and one can imagine that there is an Aharonov-Bohm flux through this time circle: for each diagonal component there is a flux that we can't get rid of, and those are the physical degrees of freedom of the matrix A. The exponential of A, e to the i times A integrated around the closed path around that period, is the Polyakov loop that was referred to this morning. So an interesting object to study will be the physics of the variable A. Just a small comment: one can think of these bosonic matrix models, as you see, as a dimensional reduction of Yang-Mills theory. If I take Yang-Mills theory in p spatial dimensions, I will be left with just one covariant derivative, and the commutators in the other directions are the only things that survive. So we get this as the zero-volume limit of Yang-Mills on a torus, and that gives us a hint as to some of the physics that's going to go on here: if this picture is correct, the bosonic case is going to be captured by the global physics of Yang-Mills theory. And that's exactly what happens, in fact; you get massive degrees of freedom. One can consider these membranes embedded not just in flat space but in other backgrounds. An interesting family of backgrounds, which preserves essentially all of the properties I've described, is what I call pp-wave backgrounds. A pp-wave, a parallel-propagation wave, in a given spacetime deforms the metric by a potential along one of the light-cone directions, dx-plus squared; so there's a V which enters here.
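Coming back to the Polyakov loop for a moment: once A is diagonalized, the loop is just the average of phases e^{i theta_k} of the holonomy angles. An editor's toy illustration in Python; the two sampled distributions below are illustrative stand-ins for a circle-covering (confined-like) and a clustered (deconfined-like) phase, not output of the actual model:

```python
import numpy as np

def polyakov_loop(theta):
    """|P| = |(1/N) Tr exp(i beta A)| from holonomy angles theta_k = beta a_k."""
    return abs(np.mean(np.exp(1j * theta)))

rng = np.random.default_rng(0)
N = 4000
# Angles spread uniformly over the circle: the loop averages to ~ 1/sqrt(N).
uniform = rng.uniform(-np.pi, np.pi, N)
# Angles clustered around zero: the loop stays of order one.
clustered = rng.normal(0.0, 0.5, N)
P_uniform = polyakov_loop(uniform)
P_clustered = polyakov_loop(clustered)
assert P_uniform < 0.1        # uniform covering: |P| -> 0 at large N
assert P_clustered > 0.7      # clustered: |P| ~ exp(-sigma^2/2) ~ 0.88
```

The vanishing versus non-vanishing of |P| is exactly the diagnostic used later in the talk to detect the deconfinement-type transition.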
When I go through exactly the same procedure, pull back my metric onto this space, and quantize, it induces a potential, given by this expression here, in the Hamiltonian of the system. So we see how to deform it in rather nice ways, which can be quite useful. An interesting special case, which I'm going to focus on and which has many sub-cases, is when the potential is that of the BMN matrix model. Basically it chooses, I should have written down the potential, you can see from this blue part up here and this part here what the potential is. This cubic term here comes from the fact that we require a three-form gauge field in the model as well: for consistency we have to turn on one of these three-form fields, and the membrane has to be charged under it. There's a slightly different factor between the x_i, with i from 1 to 3, and the x_a, with a from 4 to 9. When mu is 0, one gets rid of this term, this term, this term, and this term, one can absorb all of the indices into one, I don't need to spell them out, and this becomes what is known as the BFSS model, which was certainly presented by de Wit, Hoppe, and Nicolai prior to that. And it was originally discovered, I believe, in the early 80s by people trying to extend supersymmetric quantum-mechanical gauge theories, just playing with the Q operators and squaring them, trying to get maximal supersymmetry in the quantum-mechanics setting. But this is the model. You will notice as well that if I regroup these colored terms, these SO(3) terms, the model has SO(3) cross SO(6) symmetry, which is extended to SO(9) symmetry if I set mu to 0. The psi's are Majorana-Weyl spinors; they are 16-component spinors, and the model has 16 supercharges. Psi-bar is just the transpose of psi; I'll say more on this in a minute. The potential has rather special minima.
Aside from the trivial minimum where the x_i are 0, it has minima where x_i is mu times L_i, where the L_i are SU(2) generators, not necessarily irreducible; any representation will give this object here. These are what are referred to as fuzzy spheres, which we've heard about already. If I look at the Dirac sector of this, which people don't usually do, and focus on the fuzzy sphere sector, we get our standard time derivative, and we get psi-transpose gamma_i L_i psi with a factor of minus 3i over 4. If you compare that with the standard Dirac operator on the fuzzy sphere, which is gamma_i L_i plus 1, there's a slight difference. These are not masses on the fuzzy sphere; the Dirac fermion has not become massive. In fact it's a spin-c structure: there's an additional coupling to the spin connection. So that's what's happening in these settings, and it would be interesting to explore a little more what's going on there. If I go back and take the first line here and focus on this aspect, you see that if I take mu to be large I can forget about all these cubic interactions; I should keep this term. It just becomes a Gaussian model, but a supersymmetric Gaussian model. I'll return to its structure in a second, but let's focus first on the properties of a gauged Gaussian model. This is just a matrix harmonic oscillator, and we are demanding that the physical states are the singlets; non-singlet states are not physical states. There's a Gauss law constraint, implemented by this gauge field. The physics of this is that if we focus on very high temperatures, well, this kinetic term, we can Fourier transform it; it's got a zero mode, so only the potential survives.
We're left with pure Hermitian matrices, and each will have a Wigner distribution for its spectrum. Sorry, there is one other ingredient: the commutator of x_a with x_i survives as well, since this is once more the dimensional reduction of the Yang-Mills term. The time derivative disappears, this reduces to just the commutator, and we get a factor of beta in front. In practice, what it means is that the eigenvalue distributions of the x_i are Wigner distributions. At T equals zero, just to go back, the periodicity is irrelevant: there is no loop to put our Aharonov-Bohm fluxes in, so A becomes irrelevant and can be gauged away. At high temperatures, A again behaves like a pure matrix and becomes localized: its eigenvalues will be governed roughly by a Wigner distribution near zero. Since at zero temperature A can be gauged away, its eigenvalue distribution is flat there, so the eigenvalues have to undergo some phase transition: they don't cover the circle at high temperatures, they cover the circle at low temperatures, and there's a point of non-analyticity when they start to cover the entire circle. That is a Hagedorn-type transition. It occurs when the constraint is no longer that relevant, at a temperature one can evaluate exactly: T_c is m over the log of d, where d is the number of matrices. So we need more than one matrix for this to exist; with two matrices we get T_c equal to m over the log of 2. The transition can be observed as center-symmetry breaking in the Polyakov loop. What happens is that the Polyakov loop goes down, hits one half, and then suddenly jumps to zero: there's a jump discontinuity in the Polyakov loop.
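Two quick editor's checks of the statements above, in Python: the formula T_c = m / log d for the gauged Gaussian model, and the Wigner semicircle that governs the eigenvalues of a generic random Hermitian matrix in the high-temperature limit (a Gaussian orthogonal ensemble is used here purely as a stand-in):

```python
import numpy as np

# Hagedorn-type temperature of the gauged Gaussian model: T_c = m / log(d)
# for d > 1 matrices of mass m (here in units m = 1).
Tc = lambda d, m=1.0: m / np.log(d)
assert abs(Tc(2) - 1.0 / np.log(2)) < 1e-12

# High-temperature limit: each matrix behaves like a random Hermitian matrix,
# so its eigenvalues follow a Wigner semicircle.
rng = np.random.default_rng(1)
N = 1500
X = rng.normal(size=(N, N))
H = (X + X.T) / 2                        # Gaussian real symmetric matrix
ev = np.linalg.eigvalsh(H) / np.sqrt(N)  # scaled eigenvalues
# Off-diagonal variance 1/2 -> semicircle of radius 2*sigma = sqrt(2),
# second moment sigma^2 = 1/2, fourth moment 2*sigma^4 = 1/2.
assert abs(np.mean(ev**2) - 0.5) < 0.05
assert abs(np.mean(ev**4) - 0.5) < 0.1
assert np.max(np.abs(ev)) < np.sqrt(2) + 0.15   # compact semicircle support
```

The second and fourth moments landing on the semicircle values (the Catalan numbers times powers of sigma^2) is the standard fingerprint of the Wigner distribution mentioned in the talk.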
Or, if one looks at it in terms of the eigenvalue distribution rho of lambda: as we move toward the transition, the eigenvalues, spread between 0 and 2 pi, reach this critical value, it's meant to be normalized to one, and after that they become the uniform distribution. There's a sharp transition to a uniform distribution. This is the transition that occurs here, and it is one of the characteristic transitions that occur in matrix models. There's another very famous one, the Gross-Witten transition; it is not that one, it is a different transition. In the Gross-Witten case the Polyakov loop goes down continuously, and it is the square of the Polyakov loop that exhibits the transition. Now I want to focus on a three-matrix model sector, just to flash back. I want to focus on this setting here, with these three matrices. One of the features we will see in the non-perturbative physics of the full model is that fuzzy spheres emerge because of the balance between the bosons and the fermions: there can be transitions to fuzzy spheres. That balance is not exact: if you integrate out the fermions, there's no reason to believe the symmetry of the quadratic terms survives; there can be linear terms, or cubic terms, that are induced. So I'm going to choose a three-matrix model. I'll study... Three or three? Three. Okay. Purely bosonic. This is a toy model now, to exhibit some features that are going on, and you will see how the physics related to this plays a full role in the full model. So I've called the matrices D, and I've normalized them slightly differently, because I want to take beta out here in front, and I've pulled things out so that the minima of the D's are the L_a's rather than some mu times L_a. So D has a minimum...
If you look at the extrema of this potential, the minimum of the energy arises when D_a equals L_a, and among these it is lowest when L_a is the maximal irreducible representation allowed for the matrix size; the dimension of the representation is the size of the matrix. If you plug that back in, you see that this term becomes a quadratic Casimir, this one also becomes a quadratic Casimir, and the overall energy is negative. Because the overall energy is negative, the maximal representation is the one that gives the ground state, and I've written down the ground state energy here: E is minus (N squared minus 1) over 48 in these units. Now consider this as a statistical-mechanical model, so Z is, I'm taking a zero-dimensional model now, the integral over the D's of e to the minus beta times this energy functional. At very high temperatures you expect that the cubic term is not going to play much of a role. We've got some relatively flat wells, a little distorted by the cubic term, but for fluctuations at high energies the dominant term should be the quartic one. So at high temperatures, that is, low beta, we expect the dominant distribution to be Wigner semicircles for the D's. At large beta, low temperature, the dominant physics should be associated with this ground state. So we expect to see some sort of phase transition between the two, and it is a phase transition where a geometrical phase emerges.
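One can verify numerically that the fuzzy-sphere configuration D_a = L_a in the maximal irreducible representation beats any reducible choice. The normalization of the toy potential below is an editor's illustrative one (it gives E = -N(N^2-1)/24 rather than the talk's units), but the mechanism, an energy proportional to minus the quadratic Casimir summed over blocks, is the same:

```python
import numpy as np

def su2_generators(N):
    """Spin-j su(2) generators of size N = 2j + 1."""
    j = (N - 1) / 2
    m = j - np.arange(N)
    Jp = np.zeros((N, N), dtype=complex)
    for k in range(1, N):
        Jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    return [(Jp + Jp.conj().T) / 2,
            (Jp - Jp.conj().T) / (2 * 1j),
            np.diag(m).astype(complex)]

def commutator(A, B):
    return A @ B - B @ A

def V(D):
    """Toy potential Tr(-1/4 [D_a,D_b][D_a,D_b] + (2i/3) eps_abc D_a D_b D_c);
    an illustrative normalization, chosen so D_a = L_a solves the equations."""
    quart = sum(np.trace(commutator(D[a], D[b]) @ commutator(D[a], D[b]))
                for a in range(3) for b in range(3))
    cub = sum(s * np.trace(D[a] @ D[b] @ D[c])
              for (a, b, c, s) in [(0, 1, 2, 1), (1, 2, 0, 1), (2, 0, 1, 1),
                                   (0, 2, 1, -1), (2, 1, 0, -1), (1, 0, 2, -1)])
    return (-0.25 * quart + (2j / 3) * cub).real

N = 6
L_irr = su2_generators(N)                        # irreducible spin-5/2 fuzzy sphere
L_red = [np.kron(np.eye(2), Ja) for Ja in su2_generators(3)]  # two spin-1 blocks
E_irr, E_red = V(L_irr), V(L_red)
assert abs(E_irr - (-N * (N**2 - 1) / 24)) < 1e-9   # -6*35/24 = -8.75
assert E_irr < E_red < 0                             # maximal irrep wins
```

Each irreducible block contributes minus its Casimir times its size, so splitting the matrix into smaller blocks always costs energy: that is why cooling condenses the system into the single maximal fuzzy sphere.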
As you cool the system down, you get a condensate, which is a fuzzy sphere in its maximal representation. One can simulate this; it should also be possible to analyze the system analytically, and I'll show you that you can get the essential results analytically, but this is a numerical simulation of it. If you take the expectation value of the action, the internal energy is just S; there are clearly two phases with a jump between them. The specific heat, which is the standard deviation of this energy, has a characteristic peak: there's a high-temperature phase and a low-temperature phase. And just to get it right: beta is alpha to the 4, and alpha is large here, so this is the low-temperature phase. As we heat the system up, the fluctuations of the fuzzy sphere get larger and larger; it explodes in some sense, and then collapses into a collapsed eigenvalue distribution. It goes into larger and larger fluctuations and collapses. One can look at the eigenvalues of one of the matrices, D3, or of the commutator of two of them. There's a little distortion here in these plots, which is partly due to the fact that each of these is a Wigner distribution in its own right, and you're measuring these, and not every one has exactly the same occupancy. And this is the high-temperature phase: again, a Wigner distribution. To analyze this, one can look at an effective potential, taking D_a to be phi times L_a. Phi is an order parameter; it's essentially the radius of the fuzzy sphere. If we calculate the effective potential, normalized by pulling out a factor of N squared, we get beta times a phi to the 4 from the commutator-squared term and a phi cubed from the cubic term, and, from the Vandermonde, because once you gauge-fix there is a log of phi squared that plays an essential role.
You can view this as coming from the fact that if I diagonalize one of the matrices, there's a Vandermonde determinant, so there's a log of the eigenvalues of that matrix. The eigenvalues of that matrix are proportional to phi times some fixed values, so the phi comes out, and when we take it up into the exponential it gives us a log of phi, once we keep track of all the factors. So this is the effective potential. The location of the minimum tells us that there should be a critical value, beta_c equal to (8/3) cubed. It also tells us that the characteristic divergence of the specific heat has exponent alpha equal to one half, and those match excellently with the numerics. So this little model captures the essential features of the phase transition rather well. From that effective potential one can derive that S is 5/12 as the transition is approached from the fuzzy-sphere side and 3/4 on the other side; so there's a jump in this internal energy, a rearrangement of it. What was the definition of S? S was the expectation value of the action; it's the internal energy. So there's a jump in the internal energy, and the specific heat has this divergence. I didn't try superimposing the plots, but you can see it in some of my references; it works quite well. So let me go back and make some comments on membranes. How about other fuzzy spheres? Are there other fuzzy solutions of the equations? You did the fuzzy spheres, right? For the toy model? The toy model has all of the solutions that we heard about this morning, but they are saddle points; they are not the minimum of the energy functional. They satisfy the equations. I mean, the potential is just this commutator of x_a with x_b squared, with a trace over it.
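The effective-potential analysis just described can be sketched numerically. Assuming the form V(phi) = beta (phi^4/4 - phi^3/3) + log phi^2, an editor's reconstruction chosen because it reproduces the quoted beta_c = (8/3)^3 exactly, the fuzzy-sphere minimum exists only above beta_c and merges with the barrier at phi = 3/4:

```python
import numpy as np

def g(phi, beta):
    """phi * V'(phi) for V(phi) = beta*(phi^4/4 - phi^3/3) + log(phi^2):
    g = beta*phi^4 - beta*phi^3 + 2. Extrema of V are positive roots of g."""
    return beta * phi**4 - beta * phi**3 + 2

beta_c = 512 / 27          # = (8/3)^3: where the fuzzy-sphere minimum disappears
phi = np.linspace(0.3, 1.5, 100001)
# At beta_c the minimum and the barrier of V merge: g has a double zero at phi = 3/4.
i = np.argmin(g(phi, beta_c))
assert abs(g(phi, beta_c)[i]) < 1e-6
assert abs(phi[i] - 0.75) < 1e-3
# Just above beta_c the fuzzy-sphere extremum exists (g dips below zero);
# just below, it is gone (g stays positive) and only the matrix phase remains.
assert np.min(g(phi, 20.0)) < 0 < np.min(g(phi, 18.0))
```

Solving g = g' = 0 by hand gives beta phi^3 = 8 and beta phi^4 = 6, i.e. phi_c = 3/4 and beta_c = 512/27, which is the (8/3)^3 quoted in the talk; the square-root disappearance of the minimum is also what produces the specific-heat exponent 1/2.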
And then there's a cubic term, epsilon times x x x: this term here gives you the commutator of the x's, and this one gives a commutator with an epsilon times an x. You have some matrices there, right? They are matrices, yes; in the model they are random matrices. But if you want to find solutions to these equations... Maybe the question is: at low temperature, when you say x goes to L, which L? There is a moduli space of L's. In that case it goes to the largest one. In the model that I gave, it went to the largest irreducible representation, because that has much lower energy than any of the others. But there is a continuous symmetry; it has to go somewhere. No, the symmetry just rotates these matrices L_a, all of them together: if I do U L_a U-dagger, this is an R_ab times L_b. But we are covering all of them at low temperature? Yes, this is a symmetry of the model, an adjoint symmetry, which is never broken. It's the same as the rotational symmetry of the continuum model, and it is related to a residual gauge symmetry associated with the fluctuations around those backgrounds. So, I want to go back to supersymmetric membranes. Again, as I mentioned, they only exist in four, five, seven, and eleven dimensions, and we are expanding around flat space there; the matrix models are dimensional reductions of supersymmetric Yang-Mills in one dimension lower. There's a fermionic symmetry of the model.
There's a kappa symmetry, which plays an essential role if one wants to construct the model on, and ensure that it is consistent on, a generic background geometry. The kappa symmetry says that the only consistent backgrounds on which we can define these models are solutions of the corresponding supergravity; so 11-dimensional supergravity for the branes that I'm discussing, this BMN model and this M2-brane. Any background on which you can define these models has to be a solution of supergravity. Aside from that, anything goes. I was saying this is reminiscent of sigma models having to satisfy vanishing beta functions, and I'm wondering whether the same holds in the type IIB model. The kappa symmetry was mentioned this morning as well, and therefore, from the supergravity side, the type IIB model should only be defined on solutions of the supergravity; I'd expect that there as well. Again, the BFSS model is the flat-space case. It's often thought of as a system of N D0-branes, and it's a particularly interesting model in that it's the simplest one; many people have attempted to construct its ground state wave function and various other aspects of it. One of the nice aspects is that it looks like it should have a well-defined partition function. The zero modes associated with this potential, the exact flat directions, suggest that it may be slightly pathological. However, if we look at the 16 fermions and quantize them, they are real objects; they satisfy a Clifford algebra in their own right, and quantizing them defines a Hilbert space of 256 dimensions.
However, even though these objects are SO(9) invariant, because creation and annihilation operators mix under SO(9), this 256-dimensional representation cannot be irreducible; it must be reducible. If you break it up under SO(9), it breaks up as the 44, the 84, and the 128. The 44 is identified with the graviton, the 84 with the antisymmetric tensor, and the 128 with the gravitino of 11-dimensional supergravity. One can see that the fermionic sector is again hinting that it wants back the 11-dimensional supergravity, and there was a nice attempt at building the ground state by Jens and collaborators using some of these ideas. The BFSS model, as I said, is dimensionally reduced Yang-Mills; this is the action for it. I've now called the fermion psi rather than theta, because this is just the classical Grassmann variable and we're doing a path integral over it. Psi is generically a 32-component spinor, but it has only 16 non-zero components; it transforms under Spin(9). 11-dimensional supergravity, I probably don't need to say anything about that. Sorry, I'll come back. There's an additional story going on here: as you go to strong coupling, this matrix model should have a gravity dual, the solution of 11-dimensional supergravity that's meant to be dual to it. The gravity dual has been exhibited, and it should correspond to this geometry: N coincident D0-branes of type IIA theory. If one asks what N coincident D0-branes look like in type IIA theory, one can solve for them; one gets a harmonic function H, which is this object, and this metric, lifted to 11 dimensions on the M-theory circle.
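The SO(9) counting quoted above is easy to verify: the graviton (traceless symmetric tensor), the three-form, and the gravitino representations of the transverse SO(9) add up to the 256 = 2^8 states built from the 16 fermionic components. An editor's arithmetic check:

```python
from math import comb

d = 9                                    # transverse SO(9) little group
graviton = d * (d + 1) // 2 - 1          # traceless symmetric tensor: 44
three_form = comb(d, 3)                  # antisymmetric 3-index tensor: 84
spinor = 16                              # real dimension of the SO(9) spinor
gravitino = spinor * d - spinor          # gamma-traceless vector-spinor: 128
assert (graviton, three_form, gravitino) == (44, 84, 128)
# 16 real fermions pair into 8 oscillators: Hilbert space of dimension 2^8.
assert graviton + three_form + gravitino == 256 == 2 ** (spinor // 2)
```

So the fermionic zero-mode Hilbert space decomposes as 44 + 84 + 128, precisely the on-shell field content of 11-dimensional supergravity.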
If I include temperature in this setting, there should still be a dual, and the temperature should be matched to the Hawking temperature of the system: we match the Hawking temperature with the temperature of our matrix model and look for black hole solutions. The instability associated with the flat directions has been argued to be related to Hawking radiation; I'm not completely convinced of that, but there's a suggestion by Hanada and collaborators to this effect, and some evidence. The statement is that you can extract the physics of the matrix model by going to a dual theory. The dual theory is 11-dimensional supergravity; because you are dealing with the matrix model at finite temperature, the dual is a finite-temperature solution of supergravity, and the finite temperature in the supergravity setting is the Hawking temperature of the black hole. So you match the Hawking temperature of the black hole to the physical temperature of your matrix model, and the claim is that the two should agree if the conjecture is correct. You were looking at a BPS solution with zero temperature? The solution I showed you was the BPS, zero-temperature one, yes; it has to be that when you turn the temperature to zero you recover that BPS solution, and when you turn the temperature on you have a black hole solution, hopefully.
And you do, because you can exhibit it: there's a solution, a black hole solution, and it turns out to be rather trivial to write down, in that you just put in the blackening factor F associated with the black hole; the previous, BPS solution corresponds to F equal to 1. Now you claim that this is the one that describes the dual at finite temperature in the strong-coupling regime. The prediction is that the temperature is related to the surface gravity in the usual way; we extract it, make the identification, take the area over 4G for the entropy, get the energy, and the claim is that the energy has to go like T to the 14/5. And you can go and look for improvements on this. Well, we've put it on a computer, and other people have put it on a computer; the best results are by Berkowitz and collaborators, Hanada is part of that collaboration, and this is the result of the numerical simulation, with these error bars. For the first term here, I think they've fed this one in, and the errors are on the subsequent ones, but there's a rather comprehensive analysis, and checks on how you compare these. If you don't feed the coefficient in and just feed in the exponent, you get a good corresponding value for the coefficient; if you feed the coefficient in and don't feed the exponent in, it becomes quite difficult to get. So it's consistent. But is it convincing? Well, I prepared far too much here, because I wanted to show you checks on the geometry in the next 10 minutes. What is lambda? Lambda is the coupling constant; it's large. Actually, you can set it to 1; it plays no real role, because t over lambda is the only variable here, so set lambda to 1 and forget about it. But is t large or small? Okay, it's confusing, so: it's near
This set of dual ideas allows us to explore things a little more comprehensively, because one can say: let me add probes to this scenario. I could put in an M5-brane probe or a D4-brane probe. The previous system was a D0 system; let me consider adding D4-brane probes to it. I'm going to add a small number of them, so that they won't change the geometry but will still be able to act as probes on it. So the number of D0's is large, and the number of D4's is small and finite. To incorporate that effect, you have to change your matrix model: you add these extra fields, the Φ's, and the additional terms amount to quadratic couplings between the Φ's and the X's. But what it's meant to do is focus a little on the physics. What it says is: you put in a D4-brane, and now I can put it into this geometry. The background geometry has a black hole in the middle somewhere, and now we're going to put in a four-dimensional plane or surface: it can cut the black hole, it could just barely touch it, or it might not intersect it at all. The suggestion is to calculate the free energy of this D4-brane, and the D4-brane is now meant to be described by a Nambu-Goto action in its own right (more generally a Dirac-Born-Infeld one, but in this case it's just a Nambu-Goto action) where you pull back the geometry of this solution, and you calculate properties of that quantity. For instance, as we vary this separation, the derivative with respect to it gives us a condensate: we can measure quantities that are observables of the theory. So m_A is this quantity, and the condensate is the derivative of the action with respect to m_A; this is a matrix action, so we know how to take the derivatives, and we can compare the two, and we get a prediction for the expectation value.
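The condensate here is just a derivative of the (free) energy with respect to the mass parameter, and numerically one typically estimates such derivatives by finite differences. A minimal sketch, where `F_toy` is a made-up stand-in for the probe free energy, purely illustrative and not the actual D4-brane result:

```python
# Estimate a condensate <O> = dF/dm by central finite difference.
# F_toy is a hypothetical stand-in for the measured free energy.
import numpy as np

def F_toy(m):
    return np.sqrt(1.0 + m**2)

def condensate(F, m, h=1e-5):
    # central difference, O(h^2) accurate
    return (F(m + h) - F(m - h)) / (2.0 * h)

m = 0.75
print(condensate(F_toy, m))          # numerical dF/dm
print(m / np.sqrt(1.0 + m**2))       # exact dF/dm for the toy F, = 0.6
```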
If one works out what that expectation value is as one varies the mass parameter, it gives this particular curve. The curve is universal, in the sense that all temperatures should fall onto this particular one if you scale things correctly, and numerical simulations of that particular scenario agree quite well with it. This is the point where the embedding no longer intersects the black hole; here it intersects the black hole, and this is a maximal intersection. I wanted to go back and describe a little bit of the BMN model; I have about five minutes left, so I'm going to be relatively quick. The BMN model, just to remind you, has a metric with this potential in the pp-wave scenario (this is the V that I described), and it has this constant three-form gauge field, which lives on x1, x2, x3 (this is a coordinate, not a matrix) and x plus, and it is a constant field strength. That's the scenario; it induces this change in the Hamiltonian of the system. To remind you, I have written the action again here, coloured a little differently, because I want to focus on the large-μ limit of it. The advantage of these pp-waves is that they allow us to take large potentials and analyze those. In the large-μ limit, if I focus in on it, you see it's just a supersymmetric Gaussian model. You might say it doesn't look very supersymmetric, because the fermions have μ/4 and the bosons have μ/6 and μ/3; however, if you check, it is supersymmetric: it has exactly the supersymmetry of the system. This model has a phase transition for the gauge field that enters in here. Now, this is supposed to be describing a membrane as well. If I went back to the original membrane, not writing it out in gory detail but focusing just on the bosonic membrane, it would look like the analogue: before going to the quantum version, it would look like this, where this is the ω√g object, with ω the gauge field of the diffeomorphisms.
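One quick way to see that the large-μ Gaussian model with masses μ/3, μ/6 and μ/4 really is supersymmetric is to check that the bosonic and fermionic zero-point energies cancel. A small sketch of that counting (3 bosons at μ/3, 6 bosons at μ/6, and 16 real fermion components, i.e. 8 oscillators, at μ/4; this is the standard BMN counting, quoted here from memory):

```python
# Zero-point-energy check for the large-mu (Gaussian) BMN model:
#   bosons contribute +omega/2 per real mode,
#   fermionic oscillators contribute -omega/2 each.
from fractions import Fraction

mu = Fraction(1)                                             # overall scale, drops out

zpe_bosons = Fraction(1, 2) * (3 * (mu / 3) + 6 * (mu / 6))  # = +mu
zpe_fermions = -Fraction(1, 2) * 8 * (mu / 4)                # = -mu

print(zpe_bosons + zpe_fermions)  # -> 0, as supersymmetry requires
```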
So one can ask: what's going to happen, what does the physics of the phase transition correspond to, and should this model have a phase transition? This is a Gaussian model with diffeomorphism invariance, and it looks like it doesn't have a phase transition: the analogue of ω, as far as I can tell, gives no phase transition. What that suggests is that the matrix models really only describe one phase. They seem to be very close to the membrane, the supersymmetric versions seem very close to this membrane, but they probably only describe one phase of it, and that should be the deconfined phase, the high-temperature phase of the model. These authors did some further analysis: once you have a mass, once you have a Gaussian model that you can expand around, it's very natural to do higher-order expansions, and as you see there is a lot of work; they went up to order λ² and found Tc to behave like this. They found a nice series. If you plot this series, you see it's going to break down: it goes through zero here. As for the supergravity prediction, our colleagues did some nice work connecting the black hole solution to small μ, a dependence linear in μ; they found the linear μ dependence from the gravity-dual prediction, which gives the linear piece here. So they put the two of them together, they look like that, and they match them. The obvious thing, it seems to me, once you have a nice series, is what a physicist will do: see whether I can Padé-approximate and get to the other end. If it's going linearly there, then I have a reasonable idea of what's going on. Rearranging this into a Padé approximant gives you something like this, where these numbers
enter in here. If you plot the Padé approximant, well, I told you it goes linearly here, and I have deduced what the linear coefficient is: it predicts the linear coefficient should be this value, which is actually not bad for such a short series. So that might give you some belief that the phase transition in the holonomy does behave like that. There doesn't seem to be any spurious Padé pole. The Padé is this; but supergravity tells us that there should be linear behaviour here, so once I know it should be linear there, I put that into my Padé and it gives this. It tells me I can try to improve this, so I suggest it is well worth going to higher order in this series. We can then do some numerical analysis of this. This is a phase diagram from the collaboration: these are physical measurements, put on a supercomputer, looking at the Polyakov loop and where it undergoes a transition. So it's not that far out, not terrible. It says there's probably a lot more going on here. There's a Myers term, the cubic term here; following the expectation value of that quantity, it seems to do its own thing, coming in here. These are the two of them put together: they seem to merge around here and then go back towards the supergravity solution. In detail, what's going on? Numerical simulations show that as one decreases the temperature for a given μ, these fuzzy-sphere phases emerge. You can see the fluctuations; these correspond to different representations of SU(2) for the X's. All nine X's are plotted here: six of them are lying down here, and three of them just break out. This is Monte Carlo time: as the system generates new random configurations, it jumps in here, some collapse back down, and then back up again. So these are fluctuating fuzzy spheres. That one was at μ equal to this value, and this one here is at a much lower temperature, where it has settled down into one of these configurations.
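The Padé step mentioned above, rearranging a short Taylor series into a ratio of polynomials and reading off the behaviour at the other end, can be sketched generically. The coefficients of exp(x) below are placeholders for the demonstration; the actual Tc(μ) series coefficients from the perturbative calculation would be substituted:

```python
# Build the [L/M] Pade approximant from the first L+M+1 Taylor
# coefficients of a series. Demonstrated on exp(x); for the real
# application one would feed in the T_c(mu) series coefficients.
import numpy as np

def pade(c, L, M):
    """Numerator a and denominator b (b[0] = 1) of the [L/M] Pade."""
    c = np.asarray(c, dtype=float)
    # Denominator: solve sum_j c[L+k-j] b[j] = -c[L+k] for k = 1..M
    A = np.array([[c[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)]
                  for k in range(1, M + 1)])
    b = np.concatenate([[1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])])
    # Numerator from the low-order coefficients
    a = np.array([sum(c[i - j] * b[j] for j in range(min(i, M) + 1))
                  for i in range(L + 1)])
    return a, b

# Taylor coefficients of exp(x): 1, 1, 1/2, 1/6  ->  [2/1] Pade
a, b = pade([1.0, 1.0, 0.5, 1.0 / 6.0], L=2, M=1)
x = 1.0
approx = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
print(approx)  # 2.75, close to e = 2.718... from only four coefficients
```

The point of the exercise in the talk is the same: a short series, resummed this way, can reach a regime (large μ, the linear behaviour) that the raw truncated series cannot.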
What happens is that these ones are fluctuating near the transition where the fuzzy spheres emerge; once you cool it down further, the fuzzy spheres settle out onto this particular one, and as you see they are very happy with what they are getting. For the eigenvalue density, there is no indication here that it is spreading to cover the unit circle, whereas here it is gapped, and this is after something closer to a Gross-Witten-type transition; it hasn't gone flat, as you see, this is anything but flat. So that's what that one got. When you have the two, the pink and the blue: you said the blue is the fuzzy sphere, so what about the pink one? Sorry, this one is the fuzzy sphere; the others are the other matrices. There are nine matrices: three of them live up here and six of them are down here. So six of them are fluctuating around zero, and three are fluctuating around non-zero values, which corresponds to a particular fuzzy-sphere configuration. Right, and the reason I focused on this one is that this is the value at which the holonomy covers the unit circle, so that's the second transition. There really are two transitions, not one, and possibly three transitions in fact. So this is what the Polyakov loop looks like, that is the Myers transition, and here are the two of them together. If you look at what's happening in the energy, it's not that noticeable. If I go to a smaller value of μ, something similar happens, but the two transitions appear to occur more or less together; the transitions seem to merge. This is what's happening in the energy: there is a relatively clear jump in the energy here, and now the Polyakov loop is undergoing a significant rearrangement. That has to do with the fact that the constraint to singlet states is quite different around a background with SU(2) Casimirs than it is around zero. Recently we have looked at the bosonic model for this.
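The fuzzy-sphere backgrounds described above are, concretely, three matrices proportional to SU(2) generators in some representation, with the remaining six fluctuating around zero. A small sketch, building the spin-j generators and checking the algebra and the quadratic Casimir (these are the standard angular-momentum matrices; the identification X_i ~ J_i is the schematic fuzzy-sphere ansatz, not the full simulation data):

```python
# Spin-j SU(2) generators: the building blocks of fuzzy-sphere
# configurations X_i ~ J_i. We verify [Jx, Jy] = i Jz and the
# Casimir Jx^2 + Jy^2 + Jz^2 = j(j+1) * identity.
import numpy as np

def su2_generators(j):
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)                  # m = j, j-1, ..., -j
    jplus = np.zeros((dim, dim), dtype=complex)
    for k in range(1, dim):
        # <m+1| J+ |m> = sqrt(j(j+1) - m(m+1))
        jplus[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    jminus = jplus.conj().T
    jx = (jplus + jminus) / 2.0
    jy = (jplus - jminus) / (2.0 * 1j)
    jz = np.diag(m).astype(complex)
    return jx, jy, jz

j = 1.0                                     # spin 1: a 3x3 "fuzzy sphere"
jx, jy, jz = su2_generators(j)
print(np.allclose(jx @ jy - jy @ jx, 1j * jz))      # True
print(np.allclose(jx @ jx + jy @ jy + jz @ jz,
                  j * (j + 1) * np.eye(3)))         # True
```

Different reducible or irreducible representations of this algebra are exactly the different fuzzy-sphere vacua the Monte Carlo runs jump between.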
The bosonic model is relatively boring in comparison. It has no fuzzy spheres, because the bosonic version has just this quadratic, complete-square potential. It does however have two phase transitions: one corresponds to the eigenvalue distribution becoming ungapped, and the second to when that distribution becomes flat. Some conclusions, and thank you for your attention. I'll let you read them, but I should actually say something about the conclusions, because I wanted to make some comments as well. So we have seen these membrane matrix actions and models in action, so to speak. The BMN model, the plane-wave matrix model, is very rich, with both emergent geometry in the form of fuzzy spheres and confining/deconfining phase transitions. These models have gravity duals that predict their strong-coupling behaviour, and that seems to work. There is much overlap with the large-N reduction that we heard about this morning. Saddle points of the bosonic action can be quite interesting; the comment here really has to do with where they are likely to be interesting in this setting, and why I would still encourage people to put effort into them. There is another topic that I didn't touch on at all here, called large-N resurgence. In that setting, the claim is that all of the saddle points are important, that the properties around one saddle point influence those around another, and that you can extract a lot of information if you know those saddle points. So I would say the place to try to make contact is what's called resurgence in this setting. Can one find a background-independent formulation of string theory? That is one of the questions you often hear in the string theory setting. Well, the same question really arises here, in that we had to choose a background, a flat background or a pp-wave background.
So what is the difference? Kwaisan told us this morning that he thinks that if I work around the flat background, I may be able to get everything; that's certainly one way of approaching it, and that would be nice. These models have a countable number of degrees of freedom, which is quite nice, and as Harold told us, and I agree with him, I don't think you should need an infinite number of degrees of freedom: a finite number of degrees of freedom per Planck length should be quite sufficient, and we should be able to do any physics we need with that. These models are very close to string theory, and I would say the big difference is that they cut out the multiverse part of that structure, all of that strangeness; because they have a finite number of degrees of freedom, that aspect cannot be there. OK, so thank you for your attention. Maybe I just didn't understand properly, but these phase transitions and so on, do they have an implication for some kind of emergent geometry, this analysis of these models? Yes, Dave, that's a significant point of what I was trying to make. What's the implication? Well, the implication is that emergent geometry should emerge in a phase transition; from a physics point of view, that is how it is likely to emerge. If I have a large number of degrees of freedom that reorganize in a drastic way, that really is a phase transition; it would be very surprising if it could not be well approximated by thinking of it as a phase transition. It might be just a crossover, but it is useful to think of it as a phase transition. Now, I got lost in some of them: did you explicitly quantize the fermions using a Clifford algebra, as Jens did here? For the numerical simulation? No. Were all the calculations for bosonic models? There were bosonic models and there were fermionic models; the fermionic models were treated with dynamical fermions on a lattice, and the
lattice was the time lattice, the temperature lattice in τ. What was the largest fermionic space dimension? You had 256 to the power of... Well, in your case the 256 would correspond to the N; it was 256 to the power of N² minus 1, if I remember correctly. Yes, and in your case the largest one that we treated was N equal to 16. But the 256 is also hidden in the time parameter: this gets replaced by a lattice, with τ running from zero to β, and this lattice was approximated with Λ equal to 48 and sometimes 96, though many of these runs were smaller, typically 24. I think the ones I showed you had 24 and dynamical fermions; some of these are bosonic, but this one here, to see the transition, I have done for N equal to 6 with dynamical fermions, and this has Λ equal to 24, so it's dynamical fermions on 24. So there are 16-component fermions, the Dirac operators are 16 times (N² minus 1) times Λ in size, and those are the matrix degrees of freedom. Does it mean you have a representation of the Clifford algebra of the dimension for this number of fermionic degrees of freedom? Just to understand what you do. What you do is represent them by pseudofermions, which means you integrate the fermions out and get a Pfaffian for the fermions. OK, so you get a Pfaffian for the fermions, and you replace the Pfaffian... I should have said, in fact, that the computations were done phase quenched: you assume that the Pfaffian has no phase. The Pfaffian is the square root of a determinant; you have integrated the fermions out, so you have a square root of a determinant, and you put that back into the action. So what you have is an integral over your lattice fermions, call the operator M, of e to the fermion action; this gives the Pfaffian of M, and you say this one is the determinant of M†M to the one half, and you say it's e to
the iθ times the modulus, and you quench: you ignore the phase. So the determinant of this quantity is... sorry, there is no M† here, I was jumping steps: the Pfaffian is the determinant of M to the one half, and it is the modulus of that times a phase, so the modulus is the determinant of M†M to the one quarter. Right, and you replace this: you write the determinant of M†M to the one quarter as the inverse determinant of (M†M) to the minus one quarter, with a minus one there, and you say this is an integral over ξ† and ξ of e to the minus ξ† (M†M)^(−1/4) ξ, where the ξ are bosonic. The next step is to say: actually, I don't need to compute this inverse fractional power exactly, because I'm doing numerical work, so I can approximate it to the accuracy I'm working at. If I know the range in which the spectrum lies, I can approximate x to the minus one quarter by a rational approximation. So you use a rational approximation for that, and you reduce it to a linear problem: you get something that involves this matrix without fractional powers on it, and you can solve a linear system for those. You shuffle things backwards and forwards: you build a Gaussian distribution for your ξ's, you run your linear solver, and you feed them back in to get pseudofermions that are equivalent to this, and you use those. Those are the technical details; I have some slides on it. A last question: what kind of information do you expect, knowing that... I mean, you were speaking about the fact that you had non-commutative manifolds for each saddle point, so what kind of information will that give you in that case? Well, you see, if you know what the solutions at those saddle points are, then you would put those solutions back into the action functional that you were working with, and you would have e to the minus that action; but you would like to know a little bit more.
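The rational-approximation step just described, replacing (M†M)^(−1/4) by a rational function valid on the operator's spectral range, can be illustrated in a toy way. Production RHMC codes use optimal Zolotarev or Remez coefficients; here we just least-squares-fit a sum of partial fractions with hand-picked poles, purely to show the idea:

```python
# Toy rational approximation to f(x) = x**(-1/4) on a spectral window,
# r(x) = a0 + sum_k a_k / (x + b_k), fitted by linear least squares.
# Real RHMC uses optimal (Zolotarev/Remez) coefficients instead.
import numpy as np

eps = 0.01
x = np.geomspace(eps, 1.0, 400)            # sample the assumed spectrum
target = x ** (-0.25)

poles = np.geomspace(1e-3, 1.0, 8)         # hand-picked pole positions
basis = np.column_stack([np.ones_like(x)]
                        + [1.0 / (x + b) for b in poles])
coef, *_ = np.linalg.lstsq(basis, target, rcond=None)

rel_err = np.max(np.abs(basis @ coef - target) / target)
print(f"max relative error on [{eps}, 1]: {rel_err:.1e}")
```

Once the fractional power is a sum of poles, applying it to a vector reduces to shifted linear solves in M†M, which is exactly the "linear problem" mentioned above.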
You would like to know the small fluctuations around that saddle, so that you can compute a determinant and have a proper Gaussian around that saddle point. Then what resurgence tells you is that if I know these saddle points, sums over those saddle points should be related to properties around a different saddle point. We know that most of the physics is happening around zero, but you can get much of the physics around zero from the other saddle points. They often focus on complex saddle points, and there is some very beautiful work in those directions, because once you tidy up the model, you can say: let me test it in this particularly nice situation. A last question: Joachim told us about some formulas, with commutators and so on, with which we can maybe compute geometric properties, something like that, for these models. So if you look around the saddle points, at the configurations that you find numerically, can you somehow analyze them? The configurations we find numerically, if they are quite well described around the saddle points by fuzzy spheres or by zero... does it help to also compute some geometric quantities? Not that I can see; but if you have further results around those saddle points, then it probably would help, though directly I don't see it. The same type of question always arises: why work around a different saddle point than the one that really gives the steepest descent? If I'm going to do an ordinary integral, you can ask yourself: there are many saddle points; there is the one that is going to give me the steepest descent, and it is going to give me a good approximation; but there can be information around the other ones. Thank you very much.