I think there's a small delay. So let's go. Welcome, everyone, and thank you for joining us for today's webinar. My name is Alejandro and I'll be your host today. Today we're presenting "Quantum Gravity and Matrix Quantum Mechanics" by Sean Hartnoll. Sean did his undergraduate degree and then his PhD at Cambridge University in the UK. (Just a second, we're getting some feedback... okay, cool. Sorry about that.) After he finished his PhD at Cambridge, he held research positions at the University of California, Santa Barbara, and then at Harvard, and since 2011 he has been a professor of physics at Stanford University in California. He has been awarded several prizes; I'll just list three: he won a New Horizons in Physics Prize, the Presidential Early Career Award, and he is currently a Simons Investigator. Sean's research focuses on problems in condensed matter, high energy, and gravitational physics, among other topics. Please remember that you can ask questions over email, through our YouTube channel, or on Twitter, and the questions will be read at the end of the talk. So without further ado, we'll turn the time over to Sean. Thanks for joining us.

Many thanks, Alejandro. Let me share my screen. Perfect. Good, great. Well, thank you, Alejandro; it's nice to have this opportunity to speak to you. I'd like to talk, indeed, about quantum gravity and matrix quantum mechanics. I'm going to assume that I'm not talking to an audience of experts, so I'll try to motivate everything and mainly give a big-picture overview of what some of the interesting questions are. And by all means, I'm very happy to take questions as I go along, so please don't hesitate to ask.

Okay, so one motivation for thinking about matrix quantum mechanics is the idea of emergent space. Space is a very intuitive concept: we find ourselves immersed in space. However, it seems that space is probably not a fundamental concept in its own right, but something that arises as an emergent phenomenon, from degrees of freedom that are not geometrical in nature. This is an old idea, but in the last couple of decades the idea of holography has shown us concretely how this works in certain specific models. The best understood examples are in the AdS/CFT correspondence, where a field theory — large-N, N = 4 super Yang-Mills theory, a certain supersymmetric quantum field theory — is dual to gravity living in more dimensions. This super Yang-Mills theory lives in three space dimensions plus time, but it contains within it the dynamics of a spacetime in ten dimensions. So there are various dimensions of space that are emergent. How does a field theory even have enough information to build a whole space? The reason it does is that it has these large-N matrices. The fields of this theory are not just ordinary scalar fields; they are N-by-N matrices of fields, so there are of order N squared fields. And there are so many fields that they are able to reorganize themselves in a way that makes it look like the theory lives in more dimensions of space. However, in N = 4 super Yang-Mills theory you already start with some space: this quantum field theory already lives in three dimensions of space.
And so the absolute purest examples of the emergence of spacetime start with a theory that has no space at all. It does have time — I have nothing to say about the emergence of time. So you start with what's called a large-N matrix quantum mechanics; that's in the title of the talk. And what that is, it's just the quantum mechanics of a matrix. What does that mean? Of course, in quantum mechanics the operators are matrices; that's not what this means. Imagine the quantum mechanics of, say, a harmonic oscillator: then maybe you'd have p² + x². Now imagine having two oscillators: then you'd have p₁² + p₂² + x₁² + x₂². Now imagine having N² oscillators: p₁², p₂², a whole bunch more, all the way up — a lot of operators — and similarly for the x's. You take these N² oscillators and organize them into a matrix. Let me squeeze this in here: I define a matrix X, which has entries X₁₁, X₁₂, all the way down to X_NN. That's a matrix, and similarly there's a P. And then I can write down a Hamiltonian that is something like the trace of P² + X². That would be the simplest matrix quantum mechanics, and it's just a silly way to rewrite N² harmonic oscillators. But using this trace structure, I can also write interactions between them: I can have a trace X⁴ term, for example.

So this is what I mean by a large-N matrix quantum mechanics: a completely normal quantum mechanics, except that you have N² oscillators that you put together into a matrix, and then you're allowed to couple them, with anharmonic terms like this one. And it turns out that when N becomes large, these kinds of models can have an emergent space. There's no space in this model — there are just N² oscillators — but it turns out that when you have this trace structure, you can get an emergent space.

So let me show you how this works in a very simple example that was understood in the early 90s. This is a very, very baby version, and I'm going to go through it a little quickly because it's not what I mainly want to talk about. But I want to show you one example where you can really understand how a large-N matrix leads to an emergent space. The theory you start with is this matrix quantum mechanics action on the slide. Really, this is just an action for N² point particles: there's a matrix M, with entries X₁₁ all the way up to X_NN — a big matrix; Ṁ gives the kinetic term; and then there's some potential V of this matrix M, and everything has this trace structure. That's the action for the matrix quantum mechanics, and there's only time here, no space — just time. Now, there's only one matrix, so you can diagonalize it: you do a rotation, which is the symmetry of this theory, and since there's just one matrix, you can basically diagonalize it.
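Collecting the formulas just described in symbols (the coupling g is an illustrative label; normalizations are schematic):

\[
H \;=\; \mathrm{Tr}\left(P^2 + X^2\right) \;+\; g\,\mathrm{Tr}\,X^4,
\qquad
\mathrm{Tr}\,X^2 \;=\; \sum_{a,b=1}^{N} X_{ab}\,X_{ba},
\]

so the trace terms reproduce the N² oscillators, while the Tr X⁴ term couples them. The single-matrix action referred to on the slide is likewise of the form

\[
S \;=\; \int dt\;\mathrm{Tr}\left( \tfrac{1}{2}\dot M^2 - V(M) \right).
\]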
When you diagonalize the matrix, you get the eigenvalues, and — I'm not going to have time to explain this fully — a certain measure factor appears, and this measure factor has the consequence that the eigenvalues become fermions. It turns out they anticommute; you're just going to have to believe me, as this is not the main point of the talk, I just want to show you an example. So when you diagonalize this matrix, the theory of this big matrix becomes N non-interacting fermions in the potential V. And the potential you want to take is upside down: V(x) = −x². Now, if you have a lot of fermions in a potential, what do they do? As you know, fermions obey the Pauli exclusion principle, and so they build up a Fermi sea: the fermions fill up the available phase space. (Actually this is not quite phase space, but it doesn't matter.) The fermions fill up all the states available to them. And then what are the low-energy excitations? A low-energy excitation is a ripple on the Fermi surface. You have your Fermi surface here, these are fermions; you take one fermion from underneath and put it just above. That's the lowest-energy thing you can do to a Fermi surface: you take one and move it out. And what you can show, it turns out, is that if you excite one fermion, it propagates ballistically along the Fermi surface like this. In condensed matter physics this is called zero sound. And what this looks like, it turns out, is a wave moving in one dimension. But we didn't have this dimension to start with! This dimension was built by the eigenvalues — by the fermions. And you can show in this case that the low-energy dynamics of these excitations of the Fermi surface is described by a field theory that lives in 1+1 dimensions. So we started with 0+1 dimensions, just time, and it turns out there's a map of this whole problem — an exact map — into a theory in 1+1 dimensions, which you can actually describe as gravity coupled to a dilaton, coupled to what's called a tachyon. (Actually it's not tachyonic, but that's what it's called.) Now, in 1+1 dimensions gravity doesn't really have any propagating degrees of freedom, so all this complicated-looking action is really just an action for a scalar field. The main point is that there's an emergent space: matrix quantum mechanics with a single matrix is equivalent to, basically, a scalar field living in one extra dimension. But that extra dimension emerged — it wasn't there to start with. This is a very, very simple version where everything is solvable; in the early nineties this model was solved to death, essentially. You can calculate everything you want to calculate, and you can see this extra dimension, and everything matches. So all I wanted to give you here is a flavor of how a large-N matrix can give you a space: in this case the eigenvalues of the matrix build up the space, and ripples on the eigenvalues are like fields that live in that space. However, one matrix is really not enough. What did we get here? A one-dimensional space with a scalar field living in it. That's not an extremely exciting space.
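To make the diagonalization-to-fermions step above concrete, the standard argument runs schematically as follows. Writing

\[
M = \Omega\,\Lambda\,\Omega^\dagger, \qquad \Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_N),
\]

the measure on matrices becomes \( dM \propto \Delta(\lambda)^2 \prod_i d\lambda_i \), with the Vandermonde determinant \( \Delta(\lambda) = \prod_{i<j} (\lambda_i - \lambda_j) \). Absorbing one factor of \(\Delta\) into the wave function, \( \tilde\Psi = \Delta(\lambda)\,\Psi \), makes \(\tilde\Psi\) totally antisymmetric in the eigenvalues, so the singlet sector is N free fermions:

\[
H = \sum_{i=1}^{N} \left( -\tfrac{1}{2}\,\frac{\partial^2}{\partial\lambda_i^2} + V(\lambda_i) \right).
\]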
And so it's been appreciated since the nineties that for many of the deeper questions in quantum gravity — things involving black holes, and what happens to spacetime at short distances — you need more structure than just a single matrix. Now, it's known what we have to do: we have to consider multi-matrix quantum mechanics. The matrix quantum mechanics I showed you had just a single matrix X. N = 4 super Yang-Mills theory, for example, has many matrices: it has six scalar fields, a bunch of fermions, a gauge field, and so on. However, once you have more than one matrix, you can't diagonalize them all at once — generically, two or more matrices are not simultaneously diagonalizable — and once you can't diagonalize the matrices, it turns out the models are not solvable. So if you want to understand how AdS/CFT and holography really work, at some point it's going to be essential to understand how to solve matrix quantum mechanics with more than one matrix. And all of this, in a way, is the motivation for what I want to talk about: in the last few months, we've developed two new approaches for solving matrix quantum mechanics with more than one matrix. So in the remainder of the talk, I'm going to be talking about methods to solve a certain class of quantum mechanical problems that involve more than one matrix. With one matrix, we can solve it — you do get an emergent space, but in a way it's too simple; it's just one dimension.

To be very clear, here are some multi-matrix quantum mechanics. There's one that is very, very famous: the BFSS model, after Banks, Fischler, Shenker, and Susskind. Here's the Hamiltonian; this is what we would like to solve eventually — we're not going to get there today. It's maximally supersymmetric; it's got everything you could possibly want. It's a theory of nine bosonic matrices plus some fermions. This is very much like what I wrote down before, except that each of these P_I is now a matrix, with I running from 1 to 9: nine matrices, each with its conjugate momentum, just as in normal quantum mechanics. So this kinetic term is just 9N² momenta squared — you can think of the model as 9N² coupled harmonic oscillators. This commutator-squared term is the interaction between them; it's supposed to make you think a bit of SU(N) gauge theories, which is where this kind of thing comes from. And there are also fermionic matrices coupled to the bosonic matrices, where this gamma is a gamma matrix. So this is a theory of bosons and fermions, but it's just a quantum mechanics; it's not a quantum field theory. It does, however, have a lot of matrices, and it is tricky. What I'm going to talk about today is a baby step in the direction of this model: two simpler models. One is the simplest thing you could do going beyond one matrix, which is to involve two matrices. That's this Hamiltonian: there's a P_x², a P_y², an x and a y — again like harmonic oscillators — and this term is the interaction between them.
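For orientation, the BFSS Hamiltonian mentioned above has the schematic form (coefficients and conventions vary between references; this is a sketch, not the slide's exact normalization):

\[
H = \mathrm{Tr}\left( \tfrac{1}{2} P^I P^I \;-\; \tfrac{1}{4}\,[X^I, X^J][X^I, X^J] \;+\; \tfrac{1}{2}\,\psi^T \gamma^I\,[X^I, \psi] \right), \qquad I, J = 1, \dots, 9,
\]

with nine bosonic N×N matrices X^I, their conjugate momenta P^I, and fermionic matrices ψ coupled through the gamma matrices γ^I.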
And then I'm also going to talk about a supersymmetric model which has three matrices. Again, these are the bosonic terms — the three P_i² — and now it turns out the potential is this one, which we'll come back to later. It's a nice potential, we'll see: it includes a commutator-squared term, it includes a mass term, but it also involves a certain cubic term with an epsilon. The reason for going to three matrices is this ε_ijk; and there are also some fermions λ. And the problem I'm going to discuss is, in essence, an undergraduate problem: find me the ground state of these Hamiltonians. What can we say about the ground-state wave function? I'm going to discuss two approaches, both numerical in nature, but in different ways.

The first one I want to talk about involves machine learning. I resisted for a long time, but I guess, being in Silicon Valley, it was eventually necessary to do something with deep learning at some point. It turns out this is really quite simple at the end of the day. We're not going to be able to go into a lot of technical detail, but I want to give you a flavor of what's involved. The details can be found in this paper, on the arXiv and published in PRX, with my student Xizhi Han.

So deep learning involves neural networks — what is a neural network? It's actually nothing very complicated at the end of the day. In our context, the neural network is going to be a variational wave function. Remember, in quantum mechanics, if you can't find the ground state analytically, something you can do is write down a wave function that depends on a few parameters — an ansatz, called a variational wave function — then calculate the energy of that wave function and use the fact that the ground-state energy is always lower than the energy of any other wave function you can write down. So you minimize the energy within the variational family and hope that you approximate the real ground state. A neural network is going to be our variational wave function. This is a method that's been used successfully in a condensed matter context for spin systems, and we're going to try to do the same for these matrix quantum mechanics. The motivation for doing this is that the reason deep learning has become so popular in the last few years is that this neural network structure has been found, empirically — it's not really understood — to be capable of robustly approximating complicated functions with relatively few parameters. So it seems, for reasons that are not totally understood, that a neural network is a good way to parametrize an arbitrary function. If you have a linear function and you want to approximate it, you can just use a matrix, or maybe a Fourier series; those are good ways to approximate linear functions. Here, what we want to approximate is the wave function. This is a complicated, non-linear function of many variables — it depends on of order N² variables — and those are exactly the kinds of functions that these neural networks seem to be good at "learning".
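To recap the variational principle underlying all of this in symbols (standard quantum mechanics):

\[
E_0 \;\le\; E[\psi_\theta] \;=\; \frac{\langle \psi_\theta | H | \psi_\theta \rangle}{\langle \psi_\theta | \psi_\theta \rangle}
\quad \text{for every trial state } \psi_\theta,
\]

so minimizing E[ψ_θ] over the parameters θ — here, the weights of the neural network — pushes the trial state toward the true ground state.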
("Learning" here just means approximating.) So let me show you how this works in practice. We're going to take N — the size of the matrices — to be some number: two, three, four, five, six. And we have a wave function ψ(x), where x stands for the matrices — x, y, z — each of which has N² entries, so ψ is a function of order N² variables. Actually, what we're going to try to approximate is the probability distribution, which is the wave function squared: we're trying to learn this probability distribution p(x). And once we have the probability distribution p(x), we can calculate things like the energy, because the energy is — written in some fancy way — the expectation value of the Hamiltonian in this probability distribution. And how do we compute that? We draw many samples from the probability distribution, calculate the energy of each sample, and average — that's the expectation value. What I just described is a Monte Carlo method: we have some probability distribution, and we calculate the energy by drawing samples from it. If you can actually do this — if you have a way of efficiently drawing samples from your distribution so you can calculate things like the energy — you have what's called a generative model. So: I'm going to write down a probability distribution numerically, calculate the energy by drawing samples from it, and then minimize over certain parameters in the distribution to make the energy as small as possible, so that our probability distribution approximates the ground state. That's the name of the game.

All right, so how do we get a generative model? It turns out it goes like this. First you start with a simple distribution, which could be N² independent Gaussians: a probability distribution for N² variables where each variable is drawn from a Gaussian. That's nice because, for example, Mathematica and many other packages will just give you a random number drawn from a Gaussian probability distribution — there are many algorithms for sampling from Gaussians. So you can just do that. But of course the ground-state wave function is not Gaussian. So what we're going to do is take this Gaussian distribution and feed it through a neural network. The name of the game is the following: we sample from this Gaussian distribution and get a number y; then we feed this number into some function f to get another number. And if we do this many times, this other number will be drawn from a different distribution. And what is this different distribution?
(One of these on the slide should have been a q — I think there's a typo.) Essentially, the distribution that the x's are drawn from is the base distribution q pushed through f, with a measure factor from changing variables: if y is drawn from q and x = f(y), then p(x) = q(y) |det(∂f/∂y)|⁻¹. The important point is that if I have some function f and I sample y from some distribution q, then x = f(y) is a sample from some different distribution. And I'm going to show you an f — it's going to be a neural network, you'll see. The y is drawn from this Gaussian distribution, and that's how I'm going to model the wave function: my variational wave function is such that it gives me a probability distribution which is a Gaussian probability distribution fed through a neural network.

Now, a neural network is, again, something relatively simple. You take your N² variables — X₁₁, X₁₂, X₁₃, the whole matrix, since the wave function is a function of this matrix — and you put them into one big vector of length N². And then what do you do to this? You take this vector, you multiply it by a matrix — some N² by N² matrix — and you shift it. That's an affine transformation. This matrix and the shift are not determined in advance: the variational parameters of this wave function are going to be which matrix you multiply by and how much you shift by. These are often called the layers of the neural network. So you take your vector and act on it with a matrix; then, once you have the new vector, you take each component and apply a non-linear function to it — this tanh, which looks something like this. This is supposed to be inspired by neurons: roughly, inputs well below zero get sent to one saturation value and inputs well above zero to the other. Then you get a new vector, and you apply this affine transformation again. So this network is a linear step, then a non-linear step, then a linear step, then a non-linear step. The linear bit is the matrix multiplication; the non-linear bit is the activation function applied to each entry. Okay, so that's the neural network. It's just a function: you feed it a vector, it does something linear and something non-linear, and it outputs another vector. You take a y sampled from the Gaussian distribution, you feed it through this network, and it pops out an x — remember, this x really stands for all N² entries. And that defines the probability distribution that is our wave function.
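To make the pipeline concrete, here is a minimal sketch in code — assuming PyTorch, with illustrative names (FlowWavefunction, energy_estimate) and a placeholder energy. This is not the paper's actual architecture, and a real calculation also needs the Jacobian log-determinant and the kinetic term:

```python
import torch
import torch.nn as nn

class FlowWavefunction(nn.Module):
    """Gaussian base distribution pushed through affine + tanh layers:
    y ~ N(0,1)^n, x = f(y), so p(x) = q(y) |det df/dy|^{-1}."""
    def __init__(self, n_vars, n_layers=4):
        super().__init__()
        layers = []
        for _ in range(n_layers):
            # linear step (affine map), then non-linear step (tanh)
            layers += [nn.Linear(n_vars, n_vars), nn.Tanh()]
        layers.append(nn.Linear(n_vars, n_vars))  # final affine map
        self.f = nn.Sequential(*layers)
        self.n_vars = n_vars

    def sample(self, n_samples):
        y = torch.randn(n_samples, self.n_vars)  # sample the base Gaussian
        return self.f(y)                         # push through the network

def energy_estimate(x):
    # Placeholder: stands in for the Monte Carlo estimate of <H> on
    # samples drawn from |psi|^2 (the real estimator also involves the
    # log-det Jacobian of f and the kinetic term).
    return (x ** 2).sum(dim=1).mean()

model = FlowWavefunction(n_vars=9)   # e.g. one 3x3 matrix of oscillators
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):
    opt.zero_grad()
    loss = energy_estimate(model.sample(256))  # variational energy
    loss.backward()                            # gradients w.r.t. the weights
    opt.step()                                 # lower the energy
```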
So we did that on this model. Why did we choose this one? Well, it turns out to be one of the simplest that can be supersymmetric, and this potential term is nice. Suppose you just take the model classically: what would the ground state be? You just minimize the potential. This potential is minimized on a certain set of matrices, and these matrices define something called a fuzzy sphere, which I'm not going to have time to talk about, but basically the entries of the matrices form themselves into a sphere. So that's actually an emergent space, in fact, though I won't be able to explain that right now. The point is that the classical limit is known, and that's a nice thing about this model. The classical limit of this quantum mechanics is when the parameter ν becomes large: when ν is large, the model becomes classical, so we know the answer there.

This model has one parameter, ν, which basically measures how quantum it is: large ν is sort of classical, small ν is quantum. What I'm plotting here, against ν, is the radius, which gives you roughly a sense of how big the emergent space is. We have these three matrices; we take the traces of the X_i², sum over the matrices, and define that to be R². That is a measure of how spread out, if you like, the entries of these matrices are. So this is R against ν, and for the classical fuzzy sphere this R is just the radius of the sphere. The dashed lines are the classical answer, and the different solid lines are different values of N — remember these are N×N matrices and we're solving the problem numerically, so we fix N: this is N = 2, N = 4, N = 6. What I'm showing you here is that we've taken this model, written down this neural-network variational wave function, minimized the energy within the variational family, and then, with our best approximation to the ground state, calculated R — how spread out the wave function is. And what this shows is that it works, in the sense that the wave function has learnt the classical regime: the data points sit on the dashed lines. So it works, at least in the classical regime. However, what we can now do is go beyond the classical regime into the quantum regime. And here we see that the collapse of the sphere, as you go into the quantum regime, stabilizes. And this stabilization is not something you can obtain analytically: the classical regime is solvable semi-classically, but this regime is not. So we're using this neural network to understand the matrix quantum mechanics in the quantum regime, where I don't think any other method currently works. Let me zoom in on the quantum regime — don't worry about this; again, this is the radius as a function of ν — and let's see what's happening. Going back to the previous slide for a second: as ν goes to zero, you lose this term, and you're left with just the commutator-squared term, while this term includes a sort of mass term, and the mass term has the effect of stopping the matrices from spreading out. So it makes sense that as the mass term goes to zero, the matrices can spread out more, since along the directions where the commutators vanish, the only potential keeping them in is that quadratic mass term.
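For reference, the fuzzy-sphere statement from a moment ago can be written schematically as follows (normalizations may differ from the paper's conventions):

\[
V(X) \;=\; \tfrac{1}{4} \sum_{i,j} \mathrm{Tr}\left( i\,[X_i, X_j] + \nu\,\epsilon_{ijk} X_k \right)^2 \;\geq\; 0,
\]

which vanishes precisely when \([X_i, X_j] = i\nu\,\epsilon_{ijk} X_k\). That is solved by \(X_i = \nu J_i\), with \(J_i\) the generators of the N-dimensional representation of su(2) — the fuzzy sphere — giving a classical radius

\[
R^2 = \sum_i \mathrm{Tr}\,X_i^2 = \nu^2\,\frac{N(N^2 - 1)}{4},
\]

which grows with ν, as the dashed classical curves show.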
So actually this upturn reflects the fact that in the quantum mechanical regime the matrices spread out more. The upturn — the stabilization of the radius — is a quantum mechanical effect; the fact that the matrices do not go springing off to infinity is quantum mechanical. Okay, that's all I'm going to say here. What I'm trying to do is give you a flavor of what we're doing, and what I'm trying to show is that we can solve these matrix quantum mechanics models numerically with this method: we can get the wave function, and we can calculate features in the quantum regime, such as this upturn, that I don't think have been accessible via other methods. All right, I want to show you quickly another method, but if there are any questions, this might be a reasonable time for someone to ask them.

Yes, I'm checking. Nothing on the YouTube channel; however, I have one from email: what about degeneracies when you are looking to minimize? How do you know whether you've found a local minimum or the true minimum?

Absolutely, very good question. There are two separate questions there. On degeneracies: if there are two degenerate ground states, then at large N they're probably quite separated, and what you can try is various different initialization points — depending on the initialization, you might flow to different minima. But indeed, with this method you can definitely get stuck: if there's some metastable state — if the potential landscape looks something like this — you could definitely get stuck in a local minimum and never find the actual minimum. This is a weakness of this method, of any variational method: you are not guaranteed that it converges to where you want to go. And furthermore, even if you don't get stuck in a local minimum, you have no way of knowing how close you are to the real answer. That's why it's very important to benchmark the method. For example, here it's very reassuring that we match the results in a regime where we know the answer; we have that as our starting point, and we can take a baby step into the unknown by going along the curve. But absolutely: in this method you're not guaranteed to have found the true minimum, and you have no estimate of how close you are to the real answer. So it's very important to benchmark these kinds of methods — and actually we did some other tests that I don't have time to tell you about — but indeed, that's definitely a weakness of this approach.

Thank you. And I think there's another question now on the YouTube channel, regarding the introduction. The question is: when you talk about emergent spacetime, is it the usual spacetime? Does it respect causality and locality?

Wonderful — good question. What do we mean? Indeed, in all the cases I'm talking about, I have in mind really a spacetime like what we normally mean: one with relativistic dynamics, with gravity, and with causality.
And in a way, the whole point of space is that it defines locality for you: space tells you that not everything interacts equally — you interact more strongly with things that are closer to you. And in this case, this really is a relativistic field theory in an emergent space, so yes, it should have causality and all these other properties.

Okay. And there are two more questions, but I'm going to keep one of those for the end. There's one regarding the last slide you showed: could you say again what happens to the flat directions in the quantum regime? (I'm not pronouncing the askers' names because they're a little hard for me, in particular the last one, so I apologize for not naming them; they're on the YouTube channel.)

Yeah, that's no problem. Wonderful, thank you. So, very schematically, this potential has an x² term — this is a bit schematic — and then there's this commutator-squared term. What's happening here is that we're killing the mass term — turning it off. And the question is about the fact that the commutator-squared term classically has a flat direction. Say there are two matrices, x and y (I know there are three, but let's just say two for the moment): if the matrices commute, then this potential [x, y]² is zero, and so you can go all the way to infinity at no energy cost along directions where the matrices commute. So as ν goes to zero, this problem has flat directions, and you could worry that there's no normalizable ground state. That's a bit of a long story, actually, but it's not true in general. A famous example, which you can find for instance in Griffiths's quantum mechanics textbook — this is just normal quantum mechanics in x and y: say the potential is infinite outside a cross-shaped region and zero inside it. That potential classically has flat directions, but it turns out that quantum mechanically it has a normalizable ground state. Because if you want to go off along one arm of the cross, the wave function has to vanish on the walls here and here, and that forces it to bend in the transverse direction, which makes it decay exponentially as you go out along the arm. So in quantum mechanics, there can be potentials that classically have flat directions but quantum mechanically these are lifted. And the question is: does that happen in this model? So, if I take the bosonic model, with no fermions, then it turns out that quantum mechanically these flat directions are lifted, and there is a normalizable ground state. And in BFSS — let me just go back a little bit — in this model it's actually known that there is a normalizable ground state: the fermions, it turns out, flatten the directions again, but the model nonetheless does have a normalizable ground state despite the fact that there are flat directions in the potential. For this supersymmetric model, it's believed that there are still flat directions surviving quantum mechanically — although this is not completely clear.
And so there would be no normalizable ground state, which would suggest that as ν goes to zero this quantity should be diverging. But we do not see that, and there's also another numerical study that doesn't see it. So I think there's a real question about the ν → 0 limit: we don't see a divergence, but it might be there.

Thank you, Sean. You have around another fifteen minutes or so, if you want.

Very good, thank you. Great — thanks so much for the questions; it's good to know I'm not talking into the void.

Yes, that's right. So now, again in the spirit of giving an overview, I want to describe another approach. I think these neural networks are quite powerful, but they're certainly not perfect, and so it's good to have other approaches. So I'm going to tell you quickly about another approach we worked on recently. That's in this paper on the arXiv, again with Xizhi Han, and also with Jorrit Kruthoff, a postdoc at Stanford. And this paper, I should say, was very much inspired by a very nice paper by Henry Lin, at this reference. So: bootstrap. You might have heard the word "bootstrap" used recently in the context of quantum field theories — there's a whole bootstrap program that has been very exciting recently for things like conformal field theories. What we're going to be doing here is a bit different, but it's very much inspired by that philosophy. And to give you a sense of how it works, I want to show you how it works for an anharmonic oscillator: everything I'm going to do later for matrices is already basically present in this very simple case. So here, I'm going to show you how to bootstrap the anharmonic oscillator. These p's and x's, for a couple of slides, are not matrices; this is just textbook quantum mechanics: one particle in one dimension with an x⁴ potential. That's not solvable exactly; it's very easy to solve numerically, of course, and you can do perturbation theory. I want to show you a different way of solving it, and the reason is that it's going to generalize nicely to matrices. And I want to go through this because it might be useful to some of you for completely different reasons — it's just an interesting way to solve this problem.

Okay, so we're going to do it in two steps. What I'm going to calculate is expectation values, like ⟨xⁿ⟩, the expectation value of x to some power in the ground state. First, I'm going to establish a recursion relation between expectation values — it's going to be this one; I'll come back to it. Given some small expectation values like ⟨x²⟩, I can calculate complicated expectation values like ⟨x²⁰⟩. And then I'm going to use some positivity constraints. Here's a trivial one: ⟨x¹⁰⁰⟩ is positive — x¹⁰⁰ is a positive operator, so its expectation value is positive. And the spirit is the following: I'm going to guess a simple expectation value like ⟨x²⟩; with that, I'm going to calculate lots of complicated expectation values — ⟨x¹⁰⁰⟩, ⟨x¹⁰²⟩, ⟨x¹⁰⁴⟩; I'm going to impose that those are positive; and these positivity constraints, it turns out, strongly constrain the simple expectation value I started from.
And in fact, the positivity constraints exclude almost all possibilities except the correct one, so you can calculate the expectation value very, very quickly. Okay, so let's see how this goes. The first step is to get a recursion relation between expectation values — how do I get that? In an eigenstate of a quantum mechanical problem, the expectation value of the commutator of the Hamiltonian with any operator is zero: the Hamiltonian can act to the left or to the right, and it gives you the same energy in the two cases, which cancels. So for any operator O, ⟨[H, O]⟩ = 0. Now, if I put in the operators x^t and x^t p, then just by commuting things through — very simple manipulations, nothing fancy — you get this recursion relation. I take this relation, I take this Hamiltonian, I put in these operators, and out comes the recursion. And the energy comes about because at some point I get a ⟨p²⟩, and the expectation value of p² is the expectation value of the Hamiltonian minus x² minus g x⁴ — and that first piece is the energy. So the energy appears because I want to eliminate the expectation value of p². Very, very low-tech, undergraduate-level stuff gives you this recursion relation. And what does it give me? As I told you: higher powers in terms of lower powers.

Now, the bootstrap step is the following. It's certainly true that ⟨O†O⟩ ≥ 0 for any operator O. And in particular, I can take O to be a linear combination of powers of x: O could be any number times x plus any number times x², and so on, and the expectation value of O†O should be positive — also trivially true. So how can I use that? I won't go through it in the interest of time — it's just two steps to show — but requiring this positivity for all operators of this form is the same as saying that this matrix of expectation values, with entries 1, ⟨x⟩, ⟨x²⟩ and so on (M_ij = ⟨x^{i+j}⟩), must have non-negative eigenvalues. If I consider all linear combinations up to degree K — so, for instance, O = c₁ + c₂x would be K = 2 — I truncate to polynomials of some degree, and this constraint requires the K×K matrix of expectation values to be positive semidefinite. So the algorithm is: fix K, the size of the matrix — that's how many constraints we're imposing; guess values of E and ⟨x²⟩; calculate all the higher expectation values using the recursion relation I just showed you; and impose this positivity constraint by calculating the smallest eigenvalue of the matrix.
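Here is a small sketch of this algorithm for H = p² + x² + g x⁴ (ħ = 1). The recursion below is one standard form, obtained from ⟨[H, x^t]⟩ = ⟨[H, x^t p]⟩ = 0; conventions may differ from the paper's by overall factors, and the scan window is illustrative:

```python
import numpy as np

def moments(E, x2, g, K):
    """Moments <x^0>, ..., <x^(2K)> generated from guesses for E and <x^2>."""
    m = np.zeros(2 * K + 1)
    m[0], m[2] = 1.0, x2  # odd moments vanish by parity and stay zero
    for t in range(1, 2 * K - 2):
        # 4 t E <x^{t-1}> + t(t-1)(t-2) <x^{t-3}>
        #     = 4 (t+1) <x^{t+1}> + 4 g (t+2) <x^{t+3}>
        lhs = 4 * t * E * m[t - 1]
        if t >= 3:
            lhs += t * (t - 1) * (t - 2) * m[t - 3]
        m[t + 3] = (lhs - 4 * (t + 1) * m[t + 1]) / (4 * g * (t + 2))
    return m

def allowed(E, x2, g, K):
    """Check positivity of the K x K moment matrix M_ij = <x^{i+j}>."""
    m = moments(E, x2, g, K)
    M = np.array([[m[i + j] for j in range(K)] for i in range(K)])
    return np.linalg.eigvalsh(M).min() >= -1e-12

# Scan the (E, <x^2>) plane; the allowed island shrinks as K grows.
g, K = 1.0, 7
for E in np.linspace(1.0, 1.8, 9):
    row = ["#" if allowed(E, x2, g, K) else "." for x2 in np.linspace(0.2, 0.5, 16)]
    print(f"E={E:.2f}  " + "".join(row))
```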
So: you pick E, you pick ⟨x²⟩ — that turns out to be enough to calculate all the other expectation values up to degree K — and you impose that the lowest eigenvalue of this matrix be non-negative. That excludes a whole bunch of possibilities. And this is what you get. This plane is E versus ⟨x²⟩: I guess E, I guess ⟨x²⟩, and I scan over the whole plane. For example, if K is seven — that's how many constraints I'm imposing — this whole region is excluded: the lowest eigenvalue of that matrix is negative there, so it's not allowed, and I know the ground state has to lie in the remaining region. Then I go to K = 8 and even more is excluded, and at K = 9 I'm forced into the green region — and look at these numbers, they're already quite zoomed in. And this red dot here is the exact answer: we know the energy and the expectation value of x² for the ground state of the anharmonic oscillator; you could easily calculate them by solving the differential equation numerically. But here, with a relatively small matrix, you can do this essentially instantaneously on a laptop. These positivity constraints very, very quickly zoom you in on the answer. And you don't just get the ground state; you also get the first excited state — again, look how zoomed in we are at modest values of K. So this bootstrap — this positivity — excludes certain possible values of E and ⟨x²⟩, and what I'm showing you is that it very quickly zooms in on the ground-state values.

So what we want to do is apply this method to matrices, and as a warmup, let's consider one-matrix quantum mechanics. This is the one I showed you right at the beginning, and it's solvable, so this is a test. Here's the Hamiltonian: there's one N×N matrix of oscillators, so there's a Tr P² and a Tr X², and let's just add a Tr X⁴ — but these are now N² oscillators. I'm going to go a little quickly because it's the same idea. We impose that for any operator O, the expectation value of the commutator of H with that operator is zero. For example, if you take the operator Tr XP and commute it with this Hamiltonian — just normal quantum mechanics commutators — you'll find that ⟨Tr P²⟩ equals ⟨Tr X²⟩ plus a multiple of ⟨Tr X⁴⟩. This is actually an example of the virial theorem. So here I'm just showing you an example of how you put an operator into this formula and get a relationship between expectation values. Unlike the anharmonic oscillator, where we got a closed-form recursion that went to arbitrary powers, here it's not going to be so nice: we're going to have to go case by case.

One more ingredient: it turns out these matrix quantum mechanics typically want to be gauged. They have an SU(N) symmetry, which is basically X → U†XU — you can see that the trace stays the same if you do that — and this symmetry needs to be gauged, which means that physical states should be annihilated by the generator of gauge transformations.
It turns out the generator of the SU(N) symmetry is basically this operator — let's not worry about the shift. It generates the SU(N) rotations; it's a bit like an angular-momentum generator, like an epsilon contracted with x's and p's, but for SU(N) rotations rather than spatial ones. I won't go through it, but this gives you more constraints — for example, this one. Okay, so by choosing certain operators O, we can get certain constraints between expectation values. And the philosophy is the following. You take all strings of X's and P's — like X²PX³P²X — of length less than L; there are 2^L such strings, because at each position in the string there can be an X or a P. From each of these strings I get an operator, the same way that we built the matrix before, and I impose that ⟨O†O⟩ ≥ 0 for any linear combination of such operators. This positivity constraint becomes a matrix with of order 2^L × 2^L entries, whose entries are expectation values of long operators — traces of strings of X's and P's — and this whole big matrix has to be positive semidefinite. However, using the kinds of relations above, not all the entries of this matrix are independent: there are relationships between the traces. So I write down all the relationships I can between the entries of this big matrix, which leaves a few expectation values that are not fixed. I take those independent values and impose the positivity constraints, which restricts me to some allowed region of expectation values. And then, within that allowed region, I minimize the energy to try to get the ground state.

And here's how it works. This axis is the quartic coupling; this is the lowest energy we get; and this is the ⟨Tr X²⟩ we get. The curves are L = 2, 3, 4 — that's how long the strings of operators are that we're considering. Here's L = 2, and then for L = 3 and 4 you get this line, and the exact answer is the solid line. So very quickly — you don't even have to consider very long traces — you converge very rapidly onto the correct answer. And this, it turns out, for reasons I won't get into right now, is actually at N = ∞. We're not picking N = 3 or 4 or whatever; this is the infinite-N limit, which is where you want to be for emergent space. So I'm showing you that it works: by deriving relationships between expectation values and imposing positivity, you can very efficiently find the ground-state energy and the expectation value of Tr X² in the ground state of these matrix quantum mechanics.
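Schematically, the positivity condition being imposed in the matrix case is the following (a sketch of the structure, not the paper's exact conventions):

\[
\mathcal{M}_{IJ} \;=\; \left\langle\, \mathrm{Tr}\left( \mathcal{O}_I^\dagger\, \mathcal{O}_J \right) \right\rangle \;\succeq\; 0,
\]

where the operators \(\mathcal{O}_I\) run over all words in X and P of length less than L. The trace relations, the gauge constraint, and identities like the virial relation above reduce the independent entries to a handful of expectation values, and the energy is then minimized over the region these positivity constraints allow.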
So in the last two minutes, I just want to say that you can basically do the same thing with more than one matrix. We did it with this model, which is the simplest model with two matrices: there's an X, a Y, a commutator-squared term, no fermions in this case, and some mass terms. Here we calculate the ground-state energy and the expectation value of Tr X². And the interesting thing about having two matrices is that you can also calculate the expectation value of Tr [X, Y]². So you can ask questions like the following: naively, you might have thought that when the coupling g becomes large, you want to minimize the potential, which means the matrices want to try to commute. But we can now see that that's not true: the commutator, it turns out — and it's not obvious — does not go to zero in the appropriate sense. So here, L = 3 and L = 4, and this is our best estimate for the ground-state energy. And these dashed lines are rigorous bounds on the energy that we were able to prove, and the fact that the answer lies between these bounds suggests it has almost converged. So, basically, we can calculate things in this two-matrix quantum mechanics using this bootstrap method, which in essence is the same as what I told you for the anharmonic oscillator — it's just a bit more complicated. And I believe I don't know any other way to make these kinds of plots. So: the matrices do not become commuting at strong coupling.

All right, so to wrap up — this was a whizz through three or four things, and I wanted to give a flavor of some of the questions. Let me just rephrase the motivation: to understand how holography actually works, at some point we're going to need to solve matrix quantum mechanical theories with more than one large-N matrix. There's just no way around that; that is the model we're trying to solve, and we want to understand how spacetime comes out of these models. The technical step we need to get there is to solve these models with more than one matrix. Today I talked about two attempts to do this using new methods — I'm sure both of what I said can be improved on a lot — and what I quickly showed you is that these methods are successful in the sense that they capture non-trivial aspects of the ground states of multi-matrix quantum mechanics. We'd like to go to more matrices eventually, to try to attack BFSS; we'd like to do better in terms of how quickly these methods converge; and ultimately, you'd want to use these wave functions to understand the quantum structure of the emergent spacetime. That's revealed in things like entanglement — some of you might have heard about the Ryu-Takayanagi entanglement entropy — and that's the kind of thing one would ultimately like to actually calculate. Okay, that's all. Thanks for listening.

Thank you, Sean, for this nice webinar. Let me see — the question I said I would postpone is on the YouTube channel, and it says: how could we apply this in particle physics?

Good. Well, I don't know exactly what you mean by "this", but let's see — there are several different answers, depending on what you mean by particle physics and what you mean by "this".
In a way, quantum gravity is particle physics taken to the limit. But if you mean collider physics: I do not know a lot about collider physics. I think neural networks are actually being used to analyze the output of collisions, but I don't know very much about that — whenever you have a big data set and you're trying to scan it, neural networks are useful. As for wave functions: if you look at a textbook on quantum field theory, there are normally no wave functions in it. However, quantum field theories do have wave functions, and it may be interesting to think about the role of wave functions in the quantum field theories of particle physics — I haven't thought about it. If you find yourself thinking about wave functions in quantum field theories, I think it's conceivable that these methods might be useful. I suspect there might be similar positivity approaches to scattering amplitudes, but I haven't thought about that either.

Okay, we have another question on the YouTube channel. It says: hi Sean, is the emergence of space always dependent on a Fermi sea, or is there another mechanism?

Very good, thank you. No — the Fermi sea is very special to this one-dimensional case. In fact, that case is sort of too simple, because there the space is really made out of particles, essentially: the fermions just build up the space for you. But in these higher-dimensional cases, the matrices won't commute, and there will also be strings. Very, very schematically, the eigenvalues are like particles, but when you have two matrices, there are also off-diagonal modes. For example, besides the particles corresponding to eigenvalue one and eigenvalue five, there's also an off-diagonal mode, X₁₅, which you can think of as a string connecting the particle at eigenvalue one to the particle at eigenvalue five. And it turns out these off-diagonal modes of matrices are quite closely connected to strings: a string has two labels — an eigenvalue has one label, but an off-diagonal mode has two, the row and the column in the matrix. And these strings are essentially more complicated than particles and fermions. So that's one reason that two matrices are very different from one matrix: you probably don't have a point-particle picture anymore of what's happening. And that's crucial for black holes — we know that to account for the entropy of black holes, you need all these excitations of strings; it's not enough just to have particles. So the Fermi sea picture is almost certainly too simple for more than one dimension. That's a good question.

Do we have questions from our coordinators? Well, there are actually a couple there — let me see. Yeah, I'll go over those. (I was testing this. Oh, thank you. Thank you.) So, do we have questions from our coordinators, or should I just keep reading the ones on the YouTube channel?

I have a little one. Sean, I liked the talk a lot — I'm not from this area, so for me it was all new material. I was wondering what happens if you change the Hamiltonian. You were assuming a particular form, and one of the properties you get is that the emergent dynamics is relativistic — the fluctuations propagate on the space.
But if you change the Hamiltonian to something more relativistic — like the Hamiltonian of the Klein-Gordon equation — can you apply this same technique to any kind of Hamiltonian, or only to ones that look like this?

Let me see — are you talking about the Hamiltonian with the space or the one without the space?

I mean, you were taking the example of the harmonic oscillator. But in principle a relativistic Hamiltonian depends on a square root...

Okay, so I think I understand what you're saying now. Yeah, good, good. You're saying — in fact, it's a little strange, yes, thank you. So it's true that these matrix quantum mechanics don't look very relativistic; they look quite non-relativistic. And it's a bit of a miracle, in a way, that the emergent theories are relativistic — they didn't have to be. Now, let me see if I can say this without saying something wrong. There are different ways to write a relativistic particle: the most naive relativistic quantum mechanics would have some square-root action, but there's the Polyakov way of writing it where you don't use a square root. Anyway — these types of matrix quantum mechanics that I'm writing down, with the p² and the commutator-squared, are what naturally come out of string theory. If you take certain objects called D-branes in string theory, these are the kinds of actions they want to have. But I don't think there's a rule that you have to start with those ones. The best thing I can say is that this is motivated from AdS/CFT, where there are cases where we actually know that there's an emergent space (sorry for the background noise). In the cases where we know there's a duality, the Hamiltonians have this kind of form — but that doesn't stop you looking at other Hamiltonians. It's a good question, basically. Thank you.

Thank you. There's another question — I'll post it in the chat in case you want to read it too. It says: I guess my question doesn't relate directly to what you have been talking about, but is there an emergent space corresponding to the exactly solvable two-matrix integral?

Yes, yes, wonderful. This is a good question, and it does relate. So in matrix quantum mechanics there's a time. But there's something even simpler you can do with matrices, which is just a matrix integral. Think of the path integral: the partition function is some integral over matrices that depend on time, weighted by e to the action, which is some integral over time of a Lagrangian for the matrices. That's the kind of structure I've been talking about. In a quantum field theory there would be a t and an x; I've made a simplification to remove the x. But of course you can go even further and remove the t as well, and then what's left is called just a matrix integral — it's just an integral.
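Schematically, the comparison being made is the following (conventions illustrative):

\[
Z_{\rm QM} = \int \mathcal{D}M(t)\; e^{\,i \int dt\, L(M, \dot M)}
\qquad \longrightarrow \qquad
Z = \int dX\, dY\; e^{-N\,\mathrm{Tr}\left( V(X) + V(Y) - c\,X Y \right)},
\]

where the right-hand side is a two-matrix integral with no time at all. Its angular part can be done in closed form with the Harish-Chandra-Itzykson-Zuber formula,

\[
\int dU\; e^{\,\mathrm{Tr}\left( A\, U B\, U^\dagger \right)} \;\propto\; \frac{\det\left( e^{\,a_i b_j} \right)}{\Delta(a)\,\Delta(b)},
\]

with \(a_i, b_j\) the eigenvalues of A and B, and Δ the Vandermonde determinant.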
This is an integral over a matrix with n squared entries, and you do the integral, and it turns out to have a very rich mathematical structure. In particular, with two matrices, there's a certain two-matrix integral that is in fact doable: there's an integral dX dY, where these are just matrices, no time, of e to some function of X and Y, and this Itzykson-Zuber integral turns out to be one you can do. And actually this paper by Henry Lin that I mentioned in passing very briefly, that's what he did: he's applying this bootstrap philosophy to matrix integrals, with no time. And the issue is that these things have no time, right? So that means that if you really want to have an emergent space, you have to worry about the emergent time as well, which is just a little more conceptually confusing. But definitely, yes, you can get emergent Euclidean spaces from these kinds of matrix integrals, and in fact various string theories are related to these integrals. So what I recommend is reading this paper by Henry Lin, which talks about this. But yes, the answer is yes. Thank you. There's another question on the YouTube channel, and it says: in the bootstrap method, you focus on the bosonic part of BFSS. Is it possible to use this bootstrap method for the fermionic part as well? What would be the differences? Yeah, wonderful, that's a fantastic question. We are actually thinking about that at the moment, and I hope it's possible. I'm sure it will be possible. The fact that they are fermions doesn't really matter too much, because at the end of the day I had these traces of long strings of operators, and there's nothing that stops me putting some fermions into such a string. However, then I have more matrices: with two matrices the strings of length L already numbered two to the L, and if I have three matrices there are like three to the L, and things start to get numerically harder. So at one level it's just a numerical problem. I mean, not just numerical, it's a problem that there are more matrices. On the other hand, these models also have more symmetry. They have supersymmetries, right? Here I used the Hamiltonian, but in these supersymmetric models the H is basically Q dagger Q or something; you can sort of take the square root of the Hamiltonian, there's a supercharge. So there are more fields, but there are also more symmetries, and you might hope that these two can somehow cancel each other out. I'm sure it's possible, but I don't know how to do it at this point. Thank you. There's another question that says: is the ground state that you found independent of the choice of the operator? Wonderful. Yes, it had better be, right, if this method is working. So another way to put that question: we actually have lots of operators, right? We take a whole space of operators, all these strings, and we calculate all their expectation values in this trace, and with those constraints we do the best we can. So the more operators you add, the stronger the constraints get, okay? And the question is whether this converges, right? And the evidence so far is that it does converge, meaning that the more operators you take, the smaller the allowed region becomes. There isn't a theorem that it converges, but it seems to be converging as you include more operators. Okay.
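To make that convergence statement concrete, here is a hedged, minimal sketch of the same bootstrap philosophy in the simplest setting, a single harmonic oscillator H = p² + x² rather than a matrix model; the recursion, grid, and tolerance are my own choices, not anything from the talk. In an energy eigenstate, ⟨[H, O]⟩ = 0 and ⟨HO⟩ = E⟨O⟩ give a recursion for the moments ⟨x^t⟩, and positivity of the moment (Hankel) matrix carves out the allowed energies.

```python
import numpy as np

def even_moments(E, n_max):
    """Moments m[t] = <x^t> for H = p^2 + x^2 (hbar = 1) in a putative
    eigenstate of energy E. Odd moments vanish by parity; the even ones
    follow from 4*t*E*m_{t-1} + t(t-1)(t-2)*m_{t-3} = 4(t+1)*m_{t+1},
    which comes from <[H, x^t p]> = 0 together with <x^t H> = E m_t."""
    m = np.zeros(n_max + 2)
    m[0] = 1.0                        # normalization <1> = 1
    for t in range(1, n_max + 1, 2):  # odd t relates the even moments
        rhs = 4 * t * E * m[t - 1]
        if t >= 3:
            rhs += t * (t - 1) * (t - 2) * m[t - 3]
        m[t + 1] = rhs / (4 * (t + 1))
    return m

def allowed(E, K):
    """Positivity test on the Hankel matrix M_ij = <x^{i+j}>, 0 <= i, j < K."""
    m = even_moments(E, 2 * (K - 1))
    H = np.array([[m[i + j] if (i + j) % 2 == 0 else 0.0
                   for j in range(K)] for i in range(K)])
    return bool(np.all(np.linalg.eigvalsh(H) >= -1e-9))

# As K grows, the allowed bands should shrink toward the exact spectrum
# E = 1, 3, 5, ... in these units -- the convergence described above.
for K in (4, 6, 10):
    band = [E for E in np.arange(0.5, 4.0, 0.01) if allowed(E, K)]
    print(K, round(min(band), 2), "...", round(max(band), 2), len(band))
```

The matrix-model version of this replaces the moments ⟨x^t⟩ by expectation values of traces of words in the matrices, which is exactly where the two-to-the-L growth mentioned in the previous answer comes from.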
And I think we have time for another question. It says: is it possible to link these matrix quantum mechanics to supersymmetric quantum mechanics? Yes, yes, that's exactly what I was just talking about. So, right, where did it go, yeah. Right. So BFSS, definitely: this one is the bosonic part, and you can supersymmetrize it by adding this term. This one here that I talked about is the bosonic model, and you can supersymmetrize it using this term. I believe there are supersymmetric extensions with two matrices, but I think the simplest may be with three. So one-matrix quantum mechanics has a supersymmetric extension; three-matrix quantum mechanics definitely does; and I think there are also ways of extending two-matrix quantum mechanics supersymmetrically. So not every matrix quantum mechanics has a supersymmetric extension, but many, many of them do. Thank you. I guess there are no more questions so far, so we want to thank Sean again for his time and for being part of the LA-CoNGA physics webinar. We have another four seminars this season, and then we will be back in the fall with a very special 100th webinar. So thank you, everyone, and let us meet next time. Thank you. Thank you, Sean. I'll stop sharing. Yes.