Hello? This is Paolo Zanardi from USC, the University of Southern California in Los Angeles. First of all, I want to thank the organizers for giving me the opportunity to be here today, even though my talk doesn't necessarily fit the main focus of the conference, so I guess this is something different today. You know, you get one of those talks at every conference that is completely out of place, and that's me today. Okay, so the keyword of this conference so far is, very aptly, quantum annealing. Today we're going to be talking about something different, which is quantum scrambling. This is work done at USC over the pandemic, and I'm going to share with you at least the initial part of it; we have a bunch of papers, and I'm going to focus on the first of them. Let me start, by way of introduction, by telling you what the setup of my problem is. Well, it's pretty much everybody else's setup: a quantum many-body system with finite-dimensional local spaces. You can think of your favorite spin chain or lattice model, and so on and so forth, a bunch of subsystems. The goal of the talk is to relate three seemingly distinct concepts and to argue that they form a unified framework, namely quantum information scrambling, operator entanglement (entanglement at the operator-space level), and entangling power; I'm going to define each of them momentarily. Let me start by defining, very loosely and possibly lazily, what information scrambling means here. So I'll just read it: a many-body quantum state psi is called scrambled if the reduced state of every subsystem of size k up to half of the system size is nearly maximally entangled, or maximally mixed, I should say, apologies.
Accordingly, a quantum dynamics, a unitary quantum process, and this talk is going to be about unitary evolution, no open systems, well, we dealt with the open-system case later on, but today we're focusing on some quantum dynamics U(t), generated say by some local Hamiltonian, is said to perform quantum information scrambling if, for many or almost all simple enough initial states, for example product states or low-entanglement states, after some initial transient period the time-evolved state is approximately scrambled. The idea is that you've got a multipartite quantum system, say a subsystem A over here, and you encode information, say a bit of information, by some local process. Then you let the system dynamics unfold over time and the information starts to get delocalized: information initially localized over here leaks all over the place. After a little while you're unable to tell the two different bits, the two different encodings, apart, unless you're able to perform a large measurement, namely one involving a set of qubits, a subsystem, larger than half of the system size. Okay, so we say that information is lost. This is relevant to scenarios like the black hole information paradox, and to many-body systems in general. Okay, let's keep going. There are different ways, of course, of thinking about this quantum information scrambling process, but at least over the last few years, people working in the field typically characterize scrambling in terms of the growth of the strength of commutators, or the decay of certain out-of-time-order correlators, known as OTOCs. Let me define those for you. So you've got this quantity here, where V and W are two local operators.
You want to think of V as an operator localized, at time t = 0, in subsystem A, say over here, and W as localized in some other subsystem. At t = 0, because they have disjoint supports, they commute. But as time unfolds, V(t) evolves according to the Heisenberg picture, and the support of this operator grows accordingly, until it overlaps with the support of the operator W. From this point on, the operators are no longer commuting. This is really the idea: the commutator encodes this delocalization of information. The norm here is the 2-norm, the Hilbert-Schmidt norm; there will be lots of operator norms in this talk. Once you expand out this expression for the commutator, if you assume that V and W are unitary observables, you get a simplification: 1 minus this object here, a four-point correlation function, W V(t) W† V(t)†. And this is precisely what is known as an out-of-time-order correlation function, the OTOC. The idea is that as time goes by, scrambling is measured by the growth of this commutator, which is initially zero and then starts growing, or equivalently by the decay of this out-of-time-order correlator, the OTOC. So roughly speaking, we can use this quantity to characterize different levels of scrambling power for different families of Hamiltonians. On the left-hand side here it says low; I probably should say weak scrambling. You may have models that are integrable, and by that I mean free-fermion models, Bethe-ansatz-solvable models, or localized models like the Anderson model, or even many-body localized Hamiltonians. They have low scrambling ability. And as soon as you move rightward, you get stuff that is still local but is in the chaotic family.
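As a concrete illustration of this commutator growth, here is a minimal numerical sketch (my own, not from the talk): a 3-qubit transverse-field Ising chain with illustrative parameter values, V = Z on the first site, W = Z on the last site. At t = 0 the commutator vanishes because the supports are disjoint; at later times it does not.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(P, site, L):
    """Embed the single-qubit operator P at position `site` in an L-qubit chain."""
    out = np.array([[1.0 + 0j]])
    for s in range(L):
        out = np.kron(out, P if s == site else I2)
    return out

L = 3
d = 2 ** L
# Transverse-field Ising chain, H = sum Z_i Z_{i+1} + g sum X_i + h sum Z_i
# (g, h chosen so both fields are nonzero, the chaotic regime mentioned later)
g, h = 1.05, 0.5
H = sum(site_op(Z, i, L) @ site_op(Z, i + 1, L) for i in range(L - 1))
H = H + g * sum(site_op(X, i, L) for i in range(L))
H = H + h * sum(site_op(Z, i, L) for i in range(L))

evals, evecs = np.linalg.eigh(H)

def otoc_commutator(t):
    """C(t) = ||[V(t), W]||_2^2 / (2d) = 1 - Re Tr[V(t) W V(t) W] / d,
    for V = Z_0, W = Z_{L-1}, which are Hermitian and unitary."""
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    V, W = site_op(Z, 0, L), site_op(Z, L - 1, L)
    Vt = U.conj().T @ V @ U          # Heisenberg-picture evolution
    return 1 - np.trace(Vt @ W @ Vt @ W).real / d

print(otoc_commutator(0.0))  # ~0: disjoint supports commute at t = 0
print(otoc_commutator(2.0))  # > 0: the support of V(t) has spread to W
```

The second identity in the docstring uses V² = W² = I; the commutator norm and the four-point function are two faces of the same quantity, as in the expression on the slide.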
You've got some quantum chaos going on, and I'm going to show a few examples of that, like the transverse-field Ising model, or some specific forms of the XXZ model with next-nearest-neighbor couplings. If you keep going to the right, with stronger scrambling ability, you have systems that are better described, or might be described, in terms of random matrix theory. There you have highly non-local operators like the SYK model, if you know what I'm talking about: an all-to-all coupled model with random couplings between Majorana fermions. And eventually you get things like the Gaussian Unitary Ensemble over here. So this is the idea: the larger this quantity, the better I should be able to tell apart these different classes of operators, or I should say of quantum dynamics. Okay. So let me try to be a bit more precise concerning our setup. We've got a bipartite Hilbert space: H is the tensor product of H_A and H_B, where H_A and H_B are the Hilbert spaces of subsystem A and subsystem B, B being the complementary subsystem. And I start off with a pair of operators: V_A, localized in A, and W_B, localized in B. Now, what I have defined so far, the commutator or the OTOC, depends both on the dynamics and on the particular choice of observables. In order to have something that depends only on the dynamics, which is what I want to study, I need to get rid of the dependence on the choice of the operators V_A and W_B. To do that, of course, you may go down different paths. What we decided to do, following Yan et al. from Wojciech Zurek's group at Los Alamos National Lab, is to consider the averaged OTOC, the bipartite averaged OTOC: namely, we perform a Haar uniform average over all possible unitaries V_A and W_B localized in A and B respectively.
And the good thing is that you can actually perform this average analytically. Well, this wasn't done in that paper; there they went through a bunch of more or less standard approximations, weak coupling, Markovian, and so on, and drew a few interesting conclusions. What we found out during the pandemic, well, actually my student Georgios Styliaris found, is that if you sit down long enough, and you have some familiarity with performing Haar group averages, you can indeed find a closed analytical formula for it. And this is the first result I want to share with you. Here's the formula; it's a beautiful formula, as you will agree right away, and it contains information only about the dynamics. But let me tell you what the other ingredients in there are. This has been published in the paper on which the present talk is based. Basically, you've got to move from your Hilbert space to a doubled version of it: you've got subsystem A, subsystem B, and then you've got another copy, subsystem A′ and subsystem B′. And S_{AA′} is the swap operator between subsystems A and A′, leaving B and B′ alone. And perhaps, just in case you were wondering: we are performing averages, and we certainly achieved the goal of getting rid of the dependence on the initial choice of V and W, but you may still wonder whether this average is representative of anything. Well, in fact, as it turns out, in this problem you get measure concentration phenomena. Namely, in sufficiently high dimension this average is very, very representative of the whole ensemble: the probability that you pick V and W at random and find a result appreciably different from the average is exponentially suppressed. Okay, so we've got typicality going on here.
But let me try to make this formula a little clearer. They say a picture is worth a thousand words, and probably, I guess, ten thousand equations, so let me draw it for those of you who like quantum circuits. So this is what it is: my bipartite averaged OTOC is given by 1 minus 1/d² times the trace of the operator defined by this quantum circuit here, which depends on U. Basically, you've got the two copies of the system: you first enact the unitary process U on both copies, then you swap A and A′, then you enact the inverse process U† on both copies, and then you swap again. This is a quantum circuit, a unitary quantum operator, and the trace of this operator, which you may think of measuring using standard quantum computing tricks, is precisely my quantity. Okay, so we are focusing on the trace of this operator, which contains the bipartite averaged OTOC. So you can measure it in principle. There's a little asterisk here, because "in principle" means that for me, as a theorist, this is going to be pretty easy, but in practice, of course, it is going to be extremely challenging, since what we are really interested in here are many-body quantum systems. Okay. Good. So now I want to introduce the other two concepts that I ultimately want to relate to quantum scrambling as measured by the bipartite OTOC: the notions of operator entanglement and entangling power. Of course, we are all used to the notion of quantum entanglement at the state level. Well, as it turns out, and it's not hard to see, as soon as you have a bipartite quantum system, namely one whose Hilbert space is a tensor product of two factors, then necessarily the operator algebra on top of it, namely the space of operators, is a bipartite Hilbert space as well.
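This doubled-space circuit can be sketched numerically. The following is my own illustration (assuming qubit subsystems and the ordering A, B, A′, B′ for the four factors; the gates are just examples): the trace of swap · (U†⊗U†) · swap · (U⊗U), divided by d², gives 1 − G, so G vanishes for the identity and is strictly positive for an entangling gate such as CNOT.

```python
import numpy as np

def swap_A_Aprime(dA, dB):
    """Swap operator exchanging subsystems A and A' on H_A ⊗ H_B ⊗ H_A' ⊗ H_B'."""
    D = (dA * dB) ** 2
    S = np.zeros((D, D))
    idx = lambda i, j, k, l: ((i * dB + j) * dA + k) * dB + l
    for i in range(dA):
        for j in range(dB):
            for k in range(dA):
                for l in range(dB):
                    S[idx(k, j, i, l), idx(i, j, k, l)] = 1.0  # |i,j,k,l> -> |k,j,i,l>
    return S

def averaged_otoc(U, dA, dB):
    """G = 1 - (1/d^2) Tr[ S (U†⊗U†) S (U⊗U) ], the circuit described above."""
    d = dA * dB
    S = swap_A_Aprime(dA, dB)
    UU = np.kron(U, U)                     # U acting on both copies
    return 1 - np.trace(S @ UU.conj().T @ S @ UU).real / d ** 2

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

print(averaged_otoc(np.eye(4, dtype=complex), 2, 2))  # 0.0: identity scrambles nothing
print(averaged_otoc(CNOT, 2, 2))                      # 0.5 for CNOT
```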
As we know, the operator algebra of a tensor product is the tensor product of the operator algebras. So, very much as you do with quantum states, you can ask yourself: what is the entanglement of an operator? An operator is an element of this bipartite Hilbert-Schmidt space, if you will. This is what I did years ago, and the way you do it mimics entirely what you do at the state level. Namely, for a bipartite state you've got the Schmidt decomposition, and here you've got the operator Schmidt decomposition for the bipartite operator. You can write the corresponding Schmidt decomposition in terms of a bi-orthogonal basis of operators fulfilling these normalization conditions, and you get the usual set of Schmidt coefficients. Now there are many more of them, because we are in operator space, so their number gets squared, but they are still nice non-negative numbers: the λ_j form a probability distribution. And once you have a probability distribution, you're one step away from defining entanglement measures, or entanglement monotones, because once you have these operator Schmidt coefficients you can define operator entanglement. Of course, there are many ways: basically any measure of uniformity of the probability distribution would do it for you, with the von Neumann, excuse me, the Shannon entropy being the main character there. If you go down that very high-brow route, you can prove nice quantum-information-theoretic results. But if you want to get stuff done, namely perform calculations, you'd better be a little humbler, and we are humble people here. So what we do is pick the linear entropy, whoops, there's a square missing here: it's 1 minus the squared Euclidean norm of the probability vector, 1 minus the purity, okay? And this is a good operator entanglement monotone.
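Here is a minimal sketch (my own, not from the talk) of how the operator Schmidt decomposition works in practice for a two-qubit gate: reshuffle the matrix so that rows carry the A-operator content and columns the B-operator content, take an SVD, and normalize the squared singular values into a probability distribution; the linear entropy of that distribution is the operator entanglement.

```python
import numpy as np

def op_schmidt_probs(U, dA, dB):
    """Operator Schmidt coefficients of U on H_A ⊗ H_B, as probabilities λ_k."""
    d = dA * dB
    # Realignment: M[(i,k),(j,l)] = U[(i,j),(k,l)] groups the A row/column indices
    # together, so the SVD of M is exactly the operator Schmidt decomposition.
    M = U.reshape(dA, dB, dA, dB).transpose(0, 2, 1, 3).reshape(dA * dA, dB * dB)
    s = np.linalg.svd(M, compute_uv=False)
    return s ** 2 / d            # sums to 1 because Tr U†U = d for unitary U

def op_entanglement(U, dA, dB):
    """Linear-entropy operator entanglement: 1 - sum_k λ_k^2."""
    lam = op_schmidt_probs(U, dA, dB)
    return 1 - np.sum(lam ** 2)

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

print(op_entanglement(np.eye(4, dtype=complex), 2, 2))  # 0.0: product operator
print(op_entanglement(CNOT, 2, 2))                      # 0.5: operator Schmidt rank 2
print(op_entanglement(SWAP, 2, 2))                      # 0.75: maximally entangled operator
```

CNOT = P₀⊗I + P₁⊗X has two equal Schmidt coefficients, hence 1 − 2·(1/2)² = 1/2, while SWAP decomposes into all four matrix-unit pairs with equal weight, hence 1 − 1/d_A², which is the maximum for this bipartition.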
And the first result is, quite surprisingly I have to say, that this OTOC people have been talking about for a long time, or at least for a few years, in the scrambling business, the bipartite averaged OTOC, is exactly the operator entanglement of the time-evolution unitary. So really, this notion of OTOC and scrambling can be mapped onto the, I would say, fundamental and fairly intuitive notion of how much entanglement your quantum dynamics carries at the operator-space level. So, first result: G(t), my main object here, the protagonist of the talk, if you will, is just operator entanglement, if you pick as your entanglement measure the linear entropy, one minus the purity, of this operator. Okay. You guys happy? Okay. Well, here we don't take questions during the talk; I wonder whether this is an absolute policy... Anyways, if that's allowed and you want to ask a question in the middle of the talk, I'm happy to try to answer. I can't promise a good answer, though. Okay. So let's now move to the next character; I always think of these as movies, we are from Hollywood after all. The next notion I want to connect to the former ones is the notion of entangling power, and it again is a fairly intuitive idea. You want to quantify the ability of a quantum dynamics to generate entanglement, and there's a very simple way to do it. You have a bipartite situation, system A, system B. You start off from some product state, you evolve the system according to the unitary U, and then you see how much entanglement you have generated for this particular initial product state. And in order to get something that does not depend on the particular choice of initial state, of course, we're going to do it again: we perform a uniform average over all possible initial product states.
If you do that, you get this quantity here, which I call the entangling power of U. Of course you have freedom in the choice of the entanglement measure there, and again, in order to get stuff done, I'm going to pick my favorite linear-entropy quantity, namely this one here, with the square over here. This is a measure of how much entanglement, as measured by the linear entropy of the reduced density matrix, you produce acting upon product states. It's a very intuitive measure, and it was introduced many years ago, actually by myself with the late Christof Zalka and Lara Faoro, back in 2000. And even back then, we realized that there is a straightforward, well, not entirely straightforward, but I would say rather interesting, relation between the entangling power, namely, repeating myself, the average ability of a unitary quantum process to produce entanglement starting off from product states, and the operator entanglement. This was pretty much the only connection I could find back in the day: operator entanglement seemed to be a very natural notion to have, but I wasn't sure what it was trying to tell us. So you've got this relation here. If you were wondering what the relation between operator entanglement and entangling power is, look at this formula. Forget about the prefactor, which at large system dimension d is basically one. For a symmetric bipartition, namely two subsystems of the same size, the entangling power defined by this formula is connected to the OTOC, or the operator entanglement.
It's simply this linear combination: the operator entanglement of U(t), plus the operator entanglement of U(t) pre-processed by the swap between the two subsystems, minus the operator entanglement of the swap itself, which turns out to be a maximally entangled operator. So there's this very simple relation. And in fact, as it happens, for most random unitaries the operator entanglement of U(t) and the operator entanglement of U(t) pre-processed by the swap are very similar, in fact identical, and in that situation the entangling power and the operator entanglement are proportional quantities: knowing one, you know the other. In the general case, once you have the operator entanglement, you would have to use this formula to get the entangling power. Again, let me use a picture to show some physics. Whoops. Okay. So in this graph the x-axis is time, and I'm plotting the entangling power of the dynamics as a function of time for different models. (The bar here doesn't go away; anyway, it doesn't matter.) You see different curves with different colors. The black one is kind of cheating: it's a random unitary from the GUE ensemble, and it has maximal entangling power, as you may have guessed right away. Then there's the blue line: a chaotic transverse-field Ising model, where both the transverse X field and the longitudinal Z field are different from zero. This model is known to be chaotic, and you see that its entangling power, after some short transient over here, becomes nearly maximal; it basically becomes indistinguishable from the GUE for the chaotic Hamiltonian.
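Both the definition and the relation just quoted can be checked numerically. In this sketch (mine; I write the prefactor for equal subsystem dimensions as d_A d_B / ((d_A+1)(d_B+1)), which should be treated as my assumption about the exact normalization), a Monte Carlo average of the linear entropy over Haar-random product inputs is compared against the operator-entanglement combination; for two qubits, CNOT attains the value 2/9.

```python
import numpy as np

rng = np.random.default_rng(0)

def op_entanglement(U, dA, dB):
    """Linear-entropy operator entanglement via the operator Schmidt SVD."""
    M = U.reshape(dA, dB, dA, dB).transpose(0, 2, 1, 3).reshape(dA**2, dB**2)
    lam = np.linalg.svd(M, compute_uv=False) ** 2 / (dA * dB)
    return 1 - np.sum(lam ** 2)

def haar_state(dim):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def entangling_power_mc(U, dA, dB, samples=5000):
    """Average linear entropy of rho_A over Haar-random product inputs."""
    acc = 0.0
    for _ in range(samples):
        psi = (U @ np.kron(haar_state(dA), haar_state(dB))).reshape(dA, dB)
        rhoA = psi @ psi.conj().T            # partial trace over B
        acc += 1 - np.trace(rhoA @ rhoA).real
    return acc / samples

def entangling_power_formula(U, dA, dB):
    """e_p(U) = pref * [E(U) + E(U·SWAP) - E(SWAP)] for dA = dB."""
    S = np.zeros((dA * dB, dA * dB), dtype=complex)
    for i in range(dA):
        for j in range(dB):
            S[j * dA + i, i * dB + j] = 1.0  # SWAP: |i,j> -> |j,i>
    pref = dA * dB / ((dA + 1) * (dB + 1))
    return pref * (op_entanglement(U, dA, dB)
                   + op_entanglement(U @ S, dA, dB)
                   - op_entanglement(S, dA, dB))

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
print(entangling_power_formula(CNOT, 2, 2))  # 2/9 ≈ 0.2222
print(entangling_power_mc(CNOT, 2, 2))       # agrees within Monte Carlo error
```

Note that for U = SWAP the formula collapses to zero, as it must: swapping two subsystems never entangles a product state.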
On the other hand, if you consider an integrable model, say the same Ising model with, say, the longitudinal field h equal to zero, then you get the green curve over here, and you see two main differences. First, the long-time average, and we will have more to say about those momentarily, is lower: you don't have as much entangling power as with the chaotic one, quite intuitively I would argue. And second, the temporal fluctuations, the variance of the temporal fluctuations, are larger. (Can you guys hear me well? Yes. So I don't have to do this. No. Let's not do that. Okay, very good.) So the green line is integrable: fairly high entangling power, but you can still tell it apart from both the chaotic model and the GUE ensemble. And then I've got another couple of examples here that are clearly way below in terms of their entangling power. This MBL model is basically the model down here, where you randomly select the transverse field coefficients, drawn from a uniform distribution over some range; this model is known to be many-body localized. And in fact, if you also set h to zero, you get something that can be mapped onto a one-body problem, and that is basically Anderson localization. So you see that the localized models are very easily detected just in terms of their entangling power behavior. Okay, so, long-time averages. Oh, perhaps before that, there's an interesting point here, I believe. One could think, and I thought that too, that the entangling power could be a great measure to tell these different classes of Hamiltonians apart even over a short time, during the transient: if you have a chaotic model as opposed to a localized model, you might think that even the growth of entanglement at small t should be different, right? Short-time behavior.
Well, it turns out that, at least using entangling power, this is not the case. You can actually see that both the integrable model, the green one, and the blue chaotic Ising model have the very same slope at the beginning. So if I were just to focus on the initial growth of the commutator, or of the entangling power for that matter, I would see exactly the same thing. And this is because the two models share, given the bipartition, the same interaction terms, and the interaction terms are basically what determines the slope of the growth at short times. But the long-time behavior should be different, right? That's what this graph, I think, is trying to tell us: if you average over time, if you wait long enough, you can tell them apart very easily. And this is precisely what I'm going to do next, namely focus on long-time averages. As I said, the short-time growth cannot distinguish chaotic and integrable models in lattice systems. This wouldn't be true for continuous-variable systems: there, people in the chaos community have seen that the growth is indeed different depending on whether the system is chaotic or integrable. But for lattice systems, discrete systems, it is not. So now we're going to focus on infinite-time, or long-time, behavior. We've got a little problem here; it's a standard one, with a standard solution. Let me tell you what it is. Of course, these are finite-dimensional systems, so whatever observable you're focusing on is going to be a quasi-periodic function of time: it's a sum of complex oscillating exponentials, and it never converges to anything. You can't just take the limit for t going to infinity, because that limit will not exist.
You will have recurrences, and eventually, if you are patient enough to wait until the Poincaré recurrence time, the signal will go all the way back to its initial value. In order to overcome this difficulty, the answer is standard: you take time averages of this object, the infinite-time average. What I'm going to do next is compute this time average under some mild assumptions that are generically fulfilled in many-body systems, and see what it is trying to tell me. Well, as you can guess, this can be done. The generic mild assumption I'm going to make about the spectrum of the Hamiltonian is called the no-resonance condition. In short, it just says that the energy levels, and the energy gaps, namely the differences between the different energy levels, are non-degenerate. Of course, this is generically true; the enemy of genericity here is symmetry, and as soon as you add a perturbation you break the symmetry and split those levels. So this is generically fulfilled. Of course, "generic" doesn't mean always: there are lots of very interesting systems that do not fulfill it. But if yours is one of those, you add a little perturbation, which is what happens in real systems anyway, and you get the NRC, the no-resonance condition. If you do that, and you write down the spectral resolution of your Hamiltonian this way, with the φ_k the eigenstates, or the projections onto the eigenstates, and the E_k the spectrum, and you define the reduced density matrix for each of the eigenstates, so ρ_k is the reduced density matrix on, say, subsystem A or subsystem B, χ being a label that stands for either A or B, then you find the following formula, this one here. Okay. So what? Well, first, the nice thing about this formula is that it is indeed a fully analytical, rigorous result.
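The standard trick can be seen in a toy numerical example (my own, with made-up spectrum and overlaps): for a quasi-periodic signal Σ_{k,l} p_k p_l e^{−i(E_k−E_l)t}, a survival-probability-style quantity, the long-time average kills every off-diagonal oscillating term and converges to the diagonal sum Σ_k p_k², which is exactly the kind of dephasing argument the no-resonance condition lets you carry out analytically for the operator entanglement.

```python
import numpy as np

# Toy non-degenerate spectrum with well-separated levels and gaps (illustrative)
E = np.array([0.0, 1.3, 2.9, 5.1, 7.8])
p = np.array([0.40, 0.25, 0.15, 0.12, 0.08])  # overlaps |<phi_k|psi>|^2, sum to 1

def signal(t):
    """Quasi-periodic survival probability |<psi| e^{-iHt} |psi>|^2."""
    amp = np.sum(p * np.exp(-1j * E * t))
    return np.abs(amp) ** 2

# Finite-time average over a window much longer than the inverse minimal gap
ts = np.arange(0, 500, 0.1)
time_avg = np.mean([signal(t) for t in ts])
diagonal = np.sum(p ** 2)                     # dephased, infinite-time value

print(time_avg, diagonal)  # agree up to O(1/T) corrections
```

The off-diagonal terms average away at a rate set by the smallest gap; degenerate gaps, the case the NRC excludes, would leave extra cross terms surviving the average.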
It's a little theorem, if you will: one minus the time average, under the NRC, the no-resonance condition, of the operator entanglement, expressed in terms of these matrices here. It doesn't really matter that you understand the details; the point is that these matrices are basically Gram matrices, namely matrices made out of the scalar products between the reduced density matrices associated with each of the eigenstates of the Hamiltonian. And the thing is that this object does contain global information about the state-entanglement structure across the full set of eigenstates of the Hamiltonian. Inside here, secretly, and in fact not even so secretly, is how much entanglement the eigenstates of the Hamiltonian carry, all of them; this is an infinite-temperature thing, if you will. Okay? Let me show how this works. (Whoops, there's a funny symbol there, a question mark. Okay.) Let's keep it simple and consider a hypothetical model. Let's pick the symmetric bipartition d_A = d_B, so each subsystem's dimension is the square root of the total dimension, and say you've got L qubits, so that d_A is 2^(L/2). And let's assume all the eigenstates of your Hamiltonian are maximally entangled. If this is so, then you sit down a little bit, you stare at the equation I've shown you before, and you actually find this very simple result: the infinite-time average of the operator entanglement, of the bipartite averaged OTOC, is 1 minus 1/d. And if you expand this out, well, this is just a square, and you see that this is exponentially close, in the system size, to the maximum possible operator entanglement you can have.
And if you take the log of this quantity here, well, this quantity is the operator purity; remember that the operator entanglement is 1 minus the purity, so 1 minus G is just the operator purity. You put a minus sign there in order to have a positive quantity, and this is the Rényi-2 operator entanglement entropy of the system. Again, under this assumption, if all the eigenstates are maximally entangled, it turns out that this scales, and this is a lower bound, but a pretty tight one, extensively with system size: that's the case of maximally entangled eigenstates across the full spectrum. Now you might say: this sounds like a very strong assumption. Well, first, it's not really: even local systems may have low-entanglement eigenstates, but those live in the low part of the spectrum, and maybe at the very high end, while the bulk of the eigenstates of any local Hamiltonian is going to be maximally entangled, or I should say nearly maximally entangled, okay? But nevertheless, good point: to disentangle the different contributions to this object, let's make up another model where all the eigenstates are product states. I still assume the spectrum is sufficiently complex, namely that the NRC assumption holds. A few more remarks first. Of course, if the eigenstates are not exactly maximally entangled but, say, epsilon away from it, you have this nice bound here, so things are robust: if they're close to maximally entangled, your result is going to be close as well, using this upper bound. But let me now focus on this hypothetical quantum many-body system whose eigenstates are products, the other extreme case, and let's see what happens there, okay? Here, okay, you see that.
You go through the math and you find an exact formula. You have to sit down; it's not a trivial calculation, but it's not even hard. You sit down and you find this formula: now it's 1 minus 1 over the square root of d. So if you now compute the average operator entanglement Rényi entropy, you see that the scaling is still extensive in the system size, but there's a prefactor, a very neat prefactor. Namely, if all your eigenstates are product states, or, say, low-entanglement eigenstates, then it is as if you had only half of the qubits in the system, through the lens of operator entanglement. So the scaling is L over here versus L/2 over there. This is a very neat separation between these different types of models. Of course, having all eigenstates maximally entangled, or all of them product, is kind of a joke, right? So we want to look at what happens when you take a realistic model, and by realistic I mean realistic for a theorist like myself, namely one of those spin chains, okay? Let's see, hopefully this is the next slide. Okay, the next slide is probably the most important of the talk, but, time, are you keeping track of the time? I know there's lunch coming, so as you see, I'm hurrying up, I'm going very fast. Ten more minutes? Can't slow down, and I want to tell you about this. So let's consider the same models: the first is the transverse-field Ising model, basically the same guys we had before, chaotic if both fields are different from zero; then the MBL, the many-body localized model, where the coefficients of the transverse X field are random; and then, again, my fake model, where all the eigenstates are declared to be product states, okay, really you have to think of something with low entanglement there. Let's see what happens.
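To summarize the two extreme cases in formulas, using the numbers stated above, with total dimension d = 2^L and the symmetric bipartition d_A = d_B = 2^{L/2} (prefactors and subleading corrections suppressed; this is my condensed restatement, not a slide):

```latex
\overline{G} \approx 1 - \frac{1}{d}
  \;\Longrightarrow\;
  \overline{S}^{\,\mathrm{op}}_2 = -\log_2\!\bigl(1-\overline{G}\bigr)
  \approx \log_2 d = L
  \qquad \text{(all eigenstates maximally entangled)},

\overline{G} \approx 1 - \frac{1}{\sqrt{d}}
  \;\Longrightarrow\;
  \overline{S}^{\,\mathrm{op}}_2 \approx \tfrac{1}{2}\log_2 d = \tfrac{L}{2}
  \qquad \text{(all eigenstates product states)}.
```

The slopes 1 and 1/2 in system size L are exactly the "prefactor one versus prefactor one-half" separation discussed next.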
So now I'm plotting basically the log of the purity; if you will, this is just the negative of the Rényi entropy. And again, remarkably, I would say astoundingly, the simple prediction of those two extreme cases, the L and the L/2, seems to be exactly reproduced by these objects here. You see, this is the slope for the integrable models, including the localized ones, which are in some sense integrable, as people here know all too well; for example, the many-body localized phase is characterized by the existence of an extensive number of quasi-local integrals of motion. Whereas down here, where the slope is steeper, we've got the chaotic models and the random matrix model. So again, the long-time behavior of this quantity very clearly distinguishes the classes, a sort of order-parameter-like behavior, in terms of the prefactor in the scaling with system size. Of course, this axis is the system size, so this is like 14 qubits; this is as far as we could go to illustrate our results, but it's very clear: the integrable and localized models fall on the smaller slope, and the chaotic ones on the steeper slope, prefactor one versus prefactor one-half in the scaling of the operator entanglement, okay? So this is a pretty neat result that I wanted to share with you, and it seems like I have a few more minutes left, if that's the case. Sure, over here, great. You mean just the variance, the variance of the distribution? Okay, I don't have the specifics of the numerics here, though of course they have been worked out.
This is exactly what Namit Anand, my student, did. But, so, you're saying... wait. You believe you should see a crossover? Actually, I don't know, that's a great point, because that sounds cool, right? This is what we really want to have in these situations. Whether it really tunes with the parameters... Professor Skardik is telling me: okay, why don't you try to make something out of your claim and hand-waving; you just change the strength of the random fluctuations, and you should tune from a many-body localized model, with very large variance, to a chaotic model with small variance, and see whether this curve goes exactly from the one scaling to the one-half scaling. Next conference, or I'll send an email to my student right now. Very, very good point; we'll try to do that. Victor? No, no, no. Good point. What you see is G_R of beta; there's lots of notation there, because this is from one of the other papers. It really is one minus G; it's the log of the purity, the negative of the quantity that had the scaling, so yeah, sorry, it's the purity, right? The operator entanglement grows, namely the purity decays, with the system size, and what I'm trying to convince you of is that the scaling with the system dimension is different depending on which class you are in, chaotic or not. Yeah, sorry about that. Thank you, good point; I should change this notation. I see a question in the chat; I'm afraid if I click on it, I won't be able to go back. "We do not hear..." Yeah, actually, I wrote that, and I wanted to ask a question. Please. We did not hear the question, because it was asked away from the mike. Good point, thank you for bringing this up; you see, I just broke the rules, and there was a price to pay. Well, I was concerned, because I was personally working on this kind of thing at that moment, writing this paper that we are going to write.
I did not really understand what you considered as the time increases. The entanglement for the Anderson-localized system is wiggly, but when it is many-body localized it is quite steady in the long-time limit. Can you please comment on that? Well, I'm not sure whether... okay, let me go a few slides back. Yeah, I understand. I can't say anything very intelligent about this off the top of my head. You mean... no, it's here, right? This one? Yeah. Look, I see it, and it's exactly a feature of integrable models: they are more wiggly. So I don't know the answer; it's a good point. You see, that's not the only difference, because even the average value is higher; so both the average value and the variance of the signal differ between the MBL and the Anderson case, and for integrable models in general, real integrable models, this is a generic feature: when you plot these quantities you see very wiggly stuff. Of course we could go into that, but yeah, good point. Not sure. By the way, this plot is the entangling power, and that's really not the main focus of this work; it is about operator entanglement and OTOCs. But good point; thanks for the comment. I actually have one more question. Please. This is an answer we are trying to find as well; maybe you know this already. You are presenting this talk about the entangling power of different gates, of different operators. How does it relate to the quantum annealing schedule? Oh, very good question. This would have been the way to make my talk coherent, or at least somewhat coherent, with the rest of the conference, had I talked a little bit about it; in fact, I think perhaps back in the day Daniel and I briefly considered that, but I don't know. The adiabatic schedule is indeed a perfectly good unitary evolution, and I should be able to apply to it all the tools and ideas that I have been sharing with you today.
Whether we are going to find something interesting there is, I think, an open question for me as well; so it is your question, but it is also my question, and again, perhaps next talk, next conference. Thanks a lot for bringing this up. It's a good point. I don't know. Maybe you have an idea you want to share with us? I do; maybe I will talk to you about this over lunch. Oh, so you are in this room? I will be. Oh wow, that's nice. I thought you were in some different time zone, or possibly on another planet. No, I'm just here. Look, I have another pretty awesome section, on entropy production, but in the interest of time, and lunchtime especially, which makes it more important, I'll just skip to the summary here and take some more questions. Thanks a lot for your attention. Dimitry? Yes, I have the following basic question. There was this formula for the unitary evolution, the very first one. Oh, you want me to go all the way back? Yes, okay. Going back to the history of why people were thinking about... Right, right, this one. People were thinking about some formula in time; you were showing how the correlator behaves in time, with these two unitary operators and two swap operators. Yes, you mean this one? Yes, this one, exactly. What we call the fundamental formula. Okay, good, yeah, exactly. So what was fundamental about the discovery of Maldacena, Shenker, and Stanford is that they told us there is a maximum exponent with which the correlator may grow, and the SYK model is just one example of a system where this bound is saturated. So now the question about this unitary quantum evolution: is it known what the bound is, and is it known whether, for example, your example saturates such a bound? So the bound, the Maldacena-Shenker-Stanford bound, plus other people's bounds, is about the rate of growth, right? The temporal behavior. As I said, theirs is a fairly different setup: continuous variables, or even second-quantized fields.
And the quantity is slightly different. So, certainly, we have bounds for this, but if you look at the time behavior, it's exactly this. So thank you for bringing this up; let me real quick go back here. No, bear with me, I'll be right there. It's a very good point. Basically I'm going back to a comment that I made earlier on, but let me make it again, because I think it's relevant to your question. For continuous-variable systems or quantum fields you can have diverging behavior, you have Lyapunov exponents, and basically this allows you to focus, for this quantity here, on the short-time behavior. But for these discrete models, and for this particular version of the OTOC, right, this bipartite averaged OTOC, this big family is not necessarily, though it is quite related to, the one you have in mind. For this model, the short-time behavior is not able to tell chaotic and integrable systems apart. It's a weakness of the approach. This is why we can sit down and show the formula; we actually found a very nice formula, and I was very excited about that, and then the students said, okay, it's a great formula, but it exactly shows that there's nothing we can say in this case, because it depends just on the boundary. You can have a model that's chaotic, but when you look at the boundary between the two subsystems, it's got exactly the same interaction, and what this formula here is telling you is just the strength of that boundary interaction. So there we are powerless; we don't have that type of result. What we do have is this very nice, neat scaling telling the two categories of many-body systems apart. Okay, thanks. Thank you. Good question. So my question is related to that plot of the entangling power for the spin chain. Entangling power? Yeah, this one. So, as you said, the plots for the integrable and the chaotic case were almost of the same order.
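An editorial aside on the boundary-dependence point above, with a toy check of my own (the 4-qubit models and coupling values are made up, not the talk's): two Hamiltonians that share the same interaction across the cut but have different bulk terms give nearly the same short-time operator entanglement for exp(-iHt), because local terms act as product unitaries across the cut and only enter through higher-order commutator corrections.

```python
import numpy as np

I2, X, Z = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])

def op(site, P, n=4):
    # Embed the single-site operator P at position `site` of an n-qubit chain.
    out = np.array([[1.]])
    for k in range(n):
        out = np.kron(out, P if k == site else I2)
    return out

def evolve(H, t):
    # U(t) = exp(-i H t) for Hermitian H, via eigendecomposition.
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

def op_ent_lin(U, dA, dB):
    # Linear-entropy operator entanglement of U across the dA|dB cut.
    M = U.reshape(dA, dB, dA, dB).transpose(0, 2, 1, 3).reshape(dA**2, dB**2)
    s = np.linalg.svd(M, compute_uv=False)
    p = (s / np.sqrt(dA * dB)) ** 2            # operator Schmidt weights
    return 1.0 - np.sum(p ** 2)

# Cut A = {0,1} | B = {2,3}; both models share the boundary term Z_1 Z_2
# but have different (arbitrarily chosen) bulk terms inside each half.
Hb = op(1, Z) @ op(2, Z)
H1 = Hb + 0.9 * (op(0, X) + op(1, X) + op(2, X) + op(3, X))
H2 = Hb + op(0, Z) @ op(1, Z) + op(2, Z) @ op(3, Z) + 0.7 * op(1, X) + 0.4 * op(2, X)

t = 0.02
E_b, E1, E2 = (op_ent_lin(evolve(H, t), 4, 4) for H in (Hb, H1, H2))
```

At this short time, E1 and E2 are close to each other and to E_b, the value for the boundary term alone, which is exactly why a short-time, bipartite-averaged quantity cannot separate a chaotic bulk from an integrable one.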
I was wondering, for fixed system dimensions, do they... They do, though; before you go on, let me first say that the long-time averages are clearly different. You can eyeball this and see that you can tell them apart, even if they are of the same order. And there is some wiggliness; well, one is more wiggly because it's lower, so the fluctuations weigh more. But if you take the long-time average, the scaling as the system size grows is very different. Just staring at this plot, I wouldn't bet my life; I may perhaps bet somebody else's life. You see what I mean? But please, go on. The question is basically how these curves change with system size. Yeah, well, basically all the other plots are about that, right? This here is entangling power; most of the talk was focusing on a slightly different yet connected quantity, the operator entanglement, but you can say that in some circumstances they are proportional to each other. And if you take the long-time average of this, integrable and chaotic, then you see that when the system size grows they have different scaling behaviors. That's the answer to your question: you take the long-time average, you plot it for different system sizes, and you can tell them apart; there's a clear separation. Okay, and do they all look this way? Well, they don't all really look this way. And the second question I have is: are there any restrictions on the type of operator which you use? For example, operators which do not satisfy ETH, or something like that? No, no. It's a good question. Of course, if you were to plot different operators, and again this is what most of this talk was about, you would get different results, and this is why we believe this is useful and interesting; but per se, no, and thank you for allowing me to say it again.
The fundamental formula, if you allow me to keep kidding here, this one, makes no assumption whatsoever about the nature of the unitary. In fact, it doesn't have to be generated by any local Hamiltonian; it's just U. This is a group-theoretic result. So no, there's no restriction. Once you apply it to different models, you find different results, and this is what we believe makes it interesting. Thanks. Any other questions? At the risk of putting you on the spot, just trying to connect it to the conference... Oh, you want to connect me to the conference? So, if you take U to be the adiabatic intertwiner, any intuition as to what might happen? An honest answer, or what? Both. Excuse me? A speculative answer. The speculative answer is that it is going to be extremely interesting and will solve many of the problems you guys are unable to solve. More seriously, let me make it more specific: I don't know, because I don't know; I've been thinking about it. Actually, I had a plan, because I felt so guilty being invited to give a talk that didn't fit the conference: I thought about the entangling power before the conference and realized that this is a program for one of my students. So, I don't know. Daniel, do you think the entangling power of the adiabatic intertwiner would be higher or lower than that of a generic unitary? I think, in fact, thank you for reminding me of a paper that I wrote embarrassingly many years ago with your postdoc Kaliusha, where we studied exactly the entangling power of adiabatic evolutions. That is not what this work is about; this is about operator entanglement. But on the other hand, because of the formula connecting the two concepts, I might be able to partially answer your question, if I could just remember what we found back then. And that was, you don't want to know how many years ago. Good point, thank you. I have another, very technical question.
It's not the kind of many-body thing you guys are doing here; for that, I would do numerics. In fact, we could do numerics with our master equation. But maybe I'll say this: this is the first of a series of papers, a trilogy, or maybe there are four of them by now, published in different journals, and we have been extending this to open quantum systems, to general quantum channels, and, even more peculiarly, to a different notion of subsystem: virtual subsystems defined by subalgebras of observables. Most of this extends very nicely to that setting, even theoretically, mathematically, right? When it comes to adiabatic evolution, the intertwiner, I don't think there's much we can say other than going to simulations. So I could sit down there, I thought... Could you go to slide six, the operator entanglement and entangling power? Yes. So there's this curious formula at the bottom where you relate the entangling power to the OTOC. The question I have is: is there any intuition for why the OTOC for the swap operator is being subtracted? Why does it appear with a negative sign? Intuition, no; I would say it comes out rather straightforwardly from the calculation. In fact, look... well, let me show this, though; it must be there. Okay, maybe this makes it clear for you. The entangling power of a swap is zero: you've got a product state, you swap it, and it's still a product state, right? So the swap has zero entangling power in this simple-minded, workmanlike definition of entangling power. So now plug it in here: this is G of S_AB, and then you have G of S_AB times itself, namely G of the identity, which again is a product operator, so this term here is going to be zero. So they must cancel out. I could say more sophisticated things, that this is a Z2 group average, blah-blah-blah, but I don't want to go to the mathematical-physics side of my work, because I think this is alien enough. But yes, that term must be there.
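The "swap of a product state is still a product state" argument can be checked directly. A Monte Carlo sketch of my own (the sample count is arbitrary, and the linear entropy is my choice of measure, following the usual definition of entangling power as the average entanglement generated on Haar-random product inputs):

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_qubit():
    # A Haar-random single-qubit pure state.
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

def linear_entropy(psi):
    # 1 - Tr(rho_A^2) for a two-qubit pure state psi.
    M = psi.reshape(2, 2)                  # amplitudes as a 2x2 matrix
    rho_A = M @ M.conj().T
    return 1.0 - np.real(np.trace(rho_A @ rho_A))

def entangling_power(U, samples=2000):
    # Monte Carlo average of the entanglement U creates on random product states.
    return float(np.mean([linear_entropy(U @ np.kron(haar_qubit(), haar_qubit()))
                          for _ in range(samples)]))

SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                 [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
```

SWAP comes out zero to machine precision, while CNOT gives a positive value (near 2/9 under this definition, if memory serves), so the G(S_AB) term in the formula must indeed cancel the swap's contribution.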
It isn't there by mistake. So, I will try to please the organizers and relate your talk to quantum computing. Let us take a two-dimensional system, not a one-dimensional one, one which you can describe by a topological field theory with non-abelian anyons. Okay, and there is this classical result by Kitaev and Preskill that the entanglement entropy is proportional to L, plus a correction proportional to the number of non-abelian anyons. So, a question to you: first of all, probably the power will be one; it looks like an integrable system. Or... What's the question, really? If you take, for example, a system with non-abelian anyons, two-dimensional, like the Ising or the Fibonacci anyons, and apply your technique, will you see in your entanglement entropy an L, minus a correction proportional to the number of anyons? So what will you see, probably... That's a claim about the ground-state entanglement, the so-called topological entanglement entropy. Actually, I cannot help but say that Preskill and Kitaev found this very nice result, but in fact we were the first to find it. Hello, could you please use the microphone, otherwise people on Zoom cannot hear. Sorry about that, guys. Okay, maybe I won't be bragging anymore. So, I know that result very well, because I found it first. But that's a result for the ground state, and here we have operator entanglement. So, on the face of it, I don't know whether I would see anything like that, because it's not a state property of the ground state. This is a property involving, as all the formulas I've been showing you do, the full system of eigenstates; if you will, this is an infinite-temperature result, if you want to phrase it that way. So I don't expect to see the topological entanglement entropy, which I happen to love very much for the above-mentioned reasons. Any more questions? If not, I just have one question; not lunch yet.
Are these operators local operators, or do they have to be? Well, the formula is very general; they can be whatever, and indeed we have used it for GUE matrices, which are certainly not local. But all the graphs I've shown are of operator entanglement where the evolution has been generated by local Hamiltonians, in fact one-dimensional spin chains. So yes, in this sense they are local: they are generated by local Hamiltonians. So let's thank the speaker once more. Thank you.