about dynamical quantum phase transitions. Thanks again for coming. Yesterday I started the lectures by first motivating why we want to study non-equilibrium dynamics, and then I told you a bit about the basics of these dynamical transitions, in particular the relation of the central object, the Loschmidt amplitude, to complex partition functions. I also discussed one important aspect, namely that they can obey scaling and universality, for one particular example. The aim of the remaining parts of the lecture is not so much to address the question of why these transitions occur, but rather what they actually mean, and why non-analyticities in this Loschmidt amplitude have any significance for other observables, those which are typically measured, such as local observables and correlation functions. The first part I would like to discuss today concerns dynamical phase transitions in a rather broad class of problems, namely in systems which in equilibrium exhibit symmetry breaking. For such models we have developed a rather general understanding of what these dynamical transitions mean, and this is what I would like to tell you in the following. Essentially this will follow the line of one experiment performed in the trapped-ion group in Innsbruck roughly one year ago, plots of which you have seen already on previous slides and which you can see again here on this slide. OK, so now we want to study dynamical phase transitions in models which have symmetry breaking in equilibrium.
Let me go back one step and show you again one of these central objects I am discussing all the time, which I call the Loschmidt echo. In the following I will not discuss the amplitude, but rather the probability associated with it: the overlap of your time-evolved initial condition with the initial condition itself. In the quantum quench protocols I am considering here, this initial condition is the ground state of some initial Hamiltonian. But I already told you I would like to study dynamical transitions in systems with symmetry breaking, and this leads to the following question: what should you do when your ground state is not unique, when you have a degenerate ground-state manifold? Then this quantity is not uniquely defined anymore, and you have to think about some generalization. This generalization is not unique; there are different choices you can take. The one that, as I would like to convince you in the following, turns out to be very useful is the full probability to return to the ground-state manifold. The idea is the following. Suppose you take the simplest case, a simple magnet, which has a doubly degenerate ground-state manifold: either all spins pointing up or all spins pointing down. You choose one of these two symmetry-broken states as your initial condition, your psi_0, you time-evolve your state, and then you study not just a single return probability to one of these symmetry-broken ground states, but rather the full return probability, which is the sum over the individual probabilities to return to each of these states. So why do we want to study this?
Along the lines of the experiment: there they realized some long-range Ising model, which has symmetry breaking in its ground state, at least when the transverse field is weak. The full Hamiltonian they can implement in the experiment consists of two parts. H_0 denotes our initial Hamiltonian, for which we prepare the ground state, and this H_0 is a classical Ising model, but of infinite-range type, meaning that not only nearest-neighbor spins interact, but also very distant spins interact with each other; it is a kind of mean-field Ising model. Then there is a second term, V, which is a transverse field. The protocol implemented in the experiment was to take one of the ground states of this H_0 and then suddenly switch on a rather strong transverse field. For our initial Hamiltonian we have precisely the situation of a magnet: the ground-state manifold is doubly degenerate, all spins pointing up or all spins pointing down. The protocol is then the following: let's choose one of these symmetry-broken ground states as our initial condition, say all spins pointing up, and ask for the full return probability to the ground-state manifold. Since the ground-state manifold of our initial Hamiltonian is doubly degenerate, this full return probability is a sum of two terms, where this nu refers to the state polarized with plus or minus magnetization: either you return to your initial condition, or to the state with all spins completely flipped. Now, as I told you in the last lecture, such probabilities have a large-deviation scaling, meaning that they depend exponentially on the number of degrees of freedom N in our system.
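As a concrete illustration, the quench protocol just described can be simulated by exact diagonalization for a handful of spins. This is a minimal sketch under assumed parameters (N, J, g are illustrative choices, not the experimental values), computing the two rate functions lambda_up(t) and lambda_down(t) of the return probabilities to the two polarized states:

```python
import numpy as np

# Illustrative parameters (not the experimental ones): N spins, Ising
# coupling J, strong transverse field g switched on at t = 0.
N, J, g = 6, 1.0, 2.0
dim = 2 ** N

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, i):
    """Embed a single-site operator at site i into the full 2^N-dim space."""
    out = np.array([[1.0 + 0j]])
    for j in range(N):
        out = np.kron(out, op if j == i else I2)
    return out

Sz = [site_op(sz, i) for i in range(N)]
Sx = [site_op(sx, i) for i in range(N)]

# Infinite-range classical Ising H0 plus the transverse-field term V.
H0 = -(J / N) * sum(Sz[i] @ Sz[j] for i in range(N) for j in range(i + 1, N))
H = H0 - g * sum(Sx)

# Symmetry-broken ground states of H0: all spins up / all spins down.
up = np.zeros(dim, dtype=complex); up[0] = 1.0
down = np.zeros(dim, dtype=complex); down[-1] = 1.0

evals, evecs = np.linalg.eigh(H)  # diagonalize once, then evolve exactly

def rates(t):
    """Rate functions lambda_nu(t) = -ln P_nu(t) / N for nu = up, down."""
    psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ up))
    p_up = abs(np.vdot(up, psi_t)) ** 2
    p_down = abs(np.vdot(down, psi_t)) ** 2
    return -np.log(p_up) / N, -np.log(p_down) / N
```

Scanning t and plotting the two rate functions is then how one would look for the crossings discussed below; for a sufficiently strong field the lecture's argument says such crossings appear.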
Here N just corresponds to the total number of spins realized in the chain. OK, so what can we now learn from that? There is one important property that I will use very often in the following. This full return probability is the sum of two terms, and now the large-deviation scaling comes in: both of these terms are exponentially small in the system size N, and because of that, one or the other term will always completely dominate the sum in the thermodynamic limit. For example, if you have a situation where lambda_up is 0.1 and lambda_down is maybe 0.2, then when you take N to be 1000, the second number will be much smaller than the first. So in the thermodynamic limit only one of the two contributions survives. As you can see on the right-hand side: if lambda_up is smaller than lambda_down, P(t) converges to the value given by the first term, and in the reverse case, where lambda_down is smaller than lambda_up, P(t) is given by the lambda_down contribution alone. That is a property of the thermodynamic limit. In the end it means the following: suppose you are able to measure both of these numbers separately, and you plot the individual rate functions lambda_up and lambda_down, as done here. From the above considerations we know that in the full P only the smaller of the two dominates in the thermodynamic limit. So when you observe a crossing of these two functions, the dominant contribution, indicated here by the yellow line, switches from one to the other, and this leads to a kink, as you can see here.
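The dominance argument with the numbers 0.1 and 0.2 can be checked in a few lines. This toy sketch (the two rate values are the hypothetical ones from the example, not measured data) shows that the rate function of the sum converges to the minimum of the two rates as N grows:

```python
import math

def rate_of_sum(lam_up, lam_down, n):
    """Rate function -ln(P)/n of P = exp(-n*lam_up) + exp(-n*lam_down)."""
    p = math.exp(-n * lam_up) + math.exp(-n * lam_down)
    return -math.log(p) / n

lam_up, lam_down = 0.1, 0.2   # the hypothetical values from the example
for n in (10, 100, 1000):
    print(n, rate_of_sum(lam_up, lam_down, n))
# As n grows, the rate of the sum converges to min(lam_up, lam_down) = 0.1:
# the smaller rate function completely dominates the full return probability.
```

This is exactly why taking the pointwise minimum of two crossing rate functions produces a kink in the thermodynamic limit.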
This is very suggestively similar to a first-order ground-state phase transition in a quantum many-body system, where you look for the one state which minimizes the energy; here you accordingly minimize your rate function. Now this might look as if it were constructed in such a way that it has to lead to non-analytic behavior, but the main point of the following five to ten slides will be to argue that there is a lot of physics behind it; this is not just a construction but has physical meaning. OK, and now, yes, please. For this experiment it is rather simple, because in this trapped-ion experiment they can prepare essentially any product state you would like, which means you can program the experiment such that it outputs the all-spins-up state; there is no problem with that. In general, for a realistic system that is large enough, we would have symmetry breaking, and we could, for example, ensure that the system chooses the all-spins-up state by additionally applying a magnetic field. But here it is simpler: the initial condition is initialized directly, such that you are almost certainly assured of a specific spin configuration. OK, this construction has now been used in the experiment, as I showed you before. Suppose you can measure these individual lambda curves; this is essentially what you see in this plot. There are some gray data points up there which are not very prominent, but they are included to show that what has actually been measured individually are the lambda_up contribution and the lambda_down contribution, and as you can see, these two curves cross. When you take, by hand, the minimum of the two curves, you of course automatically get this kink here.
Now, we had a question yesterday about this: these are rather small numbers for which they did the simulations, six, eight, and ten spins, so how can you deduce non-analytic behavior? Of course, it is only possible here due to theoretical input. If you were to compute the full return probability for six, eight, and ten spins, you would see that the kink is rounded off. What has been done here is to measure both rate functions individually and use the theoretical input that only the minimum contributes. If you use that input, and if these individual rate functions have almost converged at the accessible system sizes, you can get a rather accurate prediction for the thermodynamic limit. That is not the case over here: for the second kink you see a rather strong dependence on system size, but for this first one the finite-size dependence is already rather weak. Yes, please. You mean this remaining slight system-size dependence here? Yes, slightly, but as you see, the dots are experimental data and the red lines are theoretical calculations, and from the theoretical calculations we know that this converges to something which gives you a kink in the thermodynamic limit. From that side we are sure, and the theory also compares well to the experiment. Still, as I tried to emphasize, you should be careful about what predictions you can make from this experimental data: to argue about non-analytic behavior you first have to use theoretical input, and secondly there is still some weak finite-size dependence, which does not allow you to directly extrapolate to the thermodynamic limit from these three system sizes alone. That is not possible; it is just consistent. But maybe I can also take the chance here to make one further remark.
Actually, I would say it is remarkable that this data exists, because you have to keep in mind that the two probabilities being measured are exponentially small in system size. These are extremely small numbers, and you cannot, in principle, measure them for a large system; that is impossible. The only way to measure these quantities is in a small system, and to find a way to nevertheless make some prediction about potential non-analytic behavior in the thermodynamic limit is rather remarkable. OK, so far I have shown you further examples where this quantity becomes non-analytic, but I have never really told you what that actually means. Let me initially take a pessimistic perspective. This psi_0(t) is our full time-evolved wave function, with a lot of information in it, and for this amplitude, or for this return probability, we are projecting onto one basis state in Hilbert space. When you want to visualize that: Hilbert space is a huge object, and what this G(t) measures is the projection of your full time-evolved wave function onto one single dot. So how can this single overlap possibly be important for understanding the dynamics of this whole object in Hilbert space? Yes, please. You can of course also take a superposition of those two. OK, I didn't mention that; there was a question after the lecture yesterday concerning this as well. Everything I present to you relies at the moment on having a pure state. You can generalize to mixed states; however, as for this full return probability, the generalization is not unique, and there are many different choices which are inequivalent for mixed states but reduce to the same quantity when you make your state pure again. So you have to make a choice, and that question is not settled, I would say.
There are many different proposals for how one should generalize it; I will not cover that in my lecture. There are ways to do it, but it is not unique and not settled. OK, so how can this single overlap be important for understanding the dynamics of the wave function in Hilbert space, and for any local observable? Essentially, the question boils down to the following: is what we observe at this single point in Hilbert space singular behavior, or are the properties we observe at this point continuously connected to a larger portion of Hilbert space? Is the influence of this single overlap spreading over larger portions of Hilbert space or not? Can this spreading occur at all? That is not clear beforehand. Fortunately, there is one example, which many of you probably know well, where such a situation happens in equilibrium, and that is a quantum phase transition. A quantum phase transition is a transition which happens only in the ground state of a quantum many-body system. Here you see a plot illustrating the properties of a system exhibiting a quantum phase transition, with temperature on one axis and some external control parameter g on the other. The quantum phase transition occurs at zero temperature, in the ground state, corresponding to the zero-temperature axis down here. Now one could say that in experiment you can never reach zero temperature; by the third law of thermodynamics it is impossible. So how can a quantum phase transition be relevant at all if you cannot observe it in experiment? The reason is that the properties you see in this single state, the ground state, spread beyond it: there is a quantum critical region in the temperature-control-parameter plane where the properties of your system are still controlled by the underlying quantum critical point.
So the influence spreads, although the quantum phase transition occurs only in one state, namely the ground state. That is what I would like to convince you of in the following, and it also justifies the notion of a dynamical quantum phase transition: this kind of transition is, to some extent at least, an analog of a quantum phase transition in the dynamical regime, upon making the following replacements, which I will detail later. Of course there is no temperature in our dynamical setup. As I told you at the beginning, the states we consider have no thermodynamic description; there is no free energy and hence no temperature. But we can think about energy densities. And instead of a control parameter we have time, because these transitions occur as a function of time. So instead of thinking in a temperature-control-parameter plane, you should think in an energy-density-time plane. Why does it make sense to do that? Because of the following, and here I am mostly repeating what I already said: this amplitude projects the time-evolved state onto the initial condition, and the initial condition was the ground state of some Hamiltonian. If you measure energy with your initial Hamiltonian, then in this energy-density-time plane these dynamical transitions tell you what happens down on this zero-energy-density line. The main question is then whether there exists some analog of a critical region, indicated here as the white area, where the properties are still controlled by the underlying dynamical quantum phase transitions. I will now make this much more concrete. Yes, please. Probably the best thing is to let me go two slides further; then I will explain that in detail. But it is a subtle point. Good question, I don't know.
I am thinking about it, but I don't know yet, and I don't want to make any definite statement. You mean these crossover lines, how one should think about them. That is why I did not put the words "quantum critical region" here, because we do not know yet whether we can actually have that. We are working on it, but we are not yet sure whether there is really scaling in this region or just some other kind of influence of this point up here. In the equilibrium case, the shape of the crossover lines of the quantum critical region depends crucially on the critical exponents of the underlying quantum phase transition; whether something like this holds in our case, we cannot say yet. OK, here you can already see an actual measurement of what I would like to discuss now, the analog of this quantum critical region. This was again done in the trapped-ion experiment, where for this data set they realized a slightly different Hamiltonian: not an infinite-range Hamiltonian where all spins interact equally with each other, but one with an algebraic dependence on the distance between two spins. This is not important at all for the general picture; I have written it down mainly to point out the following. Suppose you are interested in the dynamics of the order parameter, the magnetization. Initially we prepared the system in the fully polarized state, where this order parameter is 1, and now we want to monitor the dynamics of the order parameter upon switching on the transverse field. For the particular initial Hamiltonian we chose, which includes only the spin-spin coupling, the order parameter actually commutes with that operator.
Of course that is a fine-tuned point, and we are currently working on the generalization, but for simplicity let's consider that case. Because these two operators commute, you can measure them simultaneously, and that is what has been done in the experiment: a projective measurement of the energy, followed by a measurement of the magnetization of the resulting state. That is possible because the two operators commute. On a more formal level, when you have two commuting observables you can measure them simultaneously, so there exists a joint probability distribution: when you fix some time t, there is a probability that your state has energy E and some magnetization density, which I call m. Since energy is an extensive quantity, in the following I will also work with the energy density, which is intensive. From this joint probability distribution of energy density and magnetization density you can also compute your order parameter: the order parameter is nothing but this integral over energy density and magnetization density, with the magnetization density weighted by the corresponding probability distribution. But that is only possible because you can measure the two quantities simultaneously. What I will tell you now is the part which also took me the most time to digest myself, so don't worry if it takes a bit of time to digest this. Yes, please. Yes, that's the system. We do the time evolution up to some time t, and with the state at that time we perform a series of measurements: we first measure the energy with our initial Hamiltonian, so this projective measurement gives you an energy, and the system is afterwards in the corresponding eigenstate. Yes, in the end it is just a sigma-z measurement of all the spins in the chain.
And from that you can then compute the magnetization and the energy. Yes, V is in some sense the perturbation, but it is strong here, not a weak one. Whether integrability plays a role here, you mean? The long-time limit, especially when you think about adding perturbations, is something highly nontrivial, where I cannot give you any definite answer. The phenomenon I am discussing here happens on short to intermediate times, and there are many studies which hint that weak perturbations on these short to intermediate timescales do not play a crucial role. The long-time limit is a completely different aspect, and much, much more challenging to study. Coming back: the fact that we have this joint probability distribution allows us to express the mean value of the magnetization during the dynamics, which by the way is the one plotted here over time, by this expression. Now, since both the energy density and the magnetization density are related to extensive variables, there is a central limit theorem for the probability distribution: it is very sharply peaked around its mean values. This allows us, at least in the thermodynamic limit, to perform the integration over the magnetization density analytically, because this P is just a delta function at some mean value of the magnetization. I am doing that only to show you that the full magnetization can be decomposed spectrally, as an integral over energy density involving the probability to be at that energy density and the corresponding magnetization at that energy density.
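The joint measurement just described can be made concrete in a small sketch. Because the classical H0 is diagonal in the sigma-z basis, every computational basis state has a sharp energy density and magnetization density, so a sigma-z readout samples the joint distribution P(eps, m) directly. The parameters and the tilted product state below are illustrative stand-ins (not the experimental state or values) for the time-evolved state psi(t):

```python
import numpy as np
from collections import defaultdict

N, J, theta = 6, 1.0, 0.4   # illustrative: N spins tilted by theta from the north pole

# Example state: product state with every spin rotated by theta.
one_site = np.array([np.cos(theta), np.sin(theta)])
psi = one_site
for _ in range(N - 1):
    psi = np.kron(psi, one_site)

def eps_m(s):
    """Energy density (under the infinite-range H0) and magnetization density
    of computational basis state s."""
    z = [1 - 2 * ((s >> i) & 1) for i in range(N)]
    e0 = -(J / N) * sum(z[i] * z[j] for i in range(N) for j in range(i + 1, N))
    return e0 / N, sum(z) / N

# Joint distribution P(eps, m): sum |amplitude|^2 over basis states with the
# same energy density and magnetization density.
P = defaultdict(float)
for s in range(2 ** N):
    e, m = eps_m(s)
    P[(e, m)] += abs(psi[s]) ** 2

# Order parameter <M> from the joint distribution, and the energy-resolved
# magnetization m(eps) = sum_m m P(eps, m) / P(eps).
mean_M = sum(p * m for (e, m), p in P.items())
P_eps = defaultdict(float)
for (e, m), p in P.items():
    P_eps[e] += p
m_of_eps = {e: sum(p * m for (e2, m), p in P.items() if e2 == e) / P_eps[e]
            for e in P_eps}
```

For this product state the computed order parameter equals cos(2*theta) per spin, which is a quick sanity check that the spectral decomposition reproduces the plain expectation value.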
Practically, in the experiment, to get this object, which is now the main object we are interested in, you would perform a projective measurement giving you some energy density epsilon, and then measure the corresponding magnetization at that energy density; that is this quantity. Formally you can derive it in the way shown, but you can also think in the simpler terms of this projective measurement. Furthermore, we can go one step more: there is also a central limit theorem for this probability distribution, in that it is sharply peaked at its mean value, which I denote by epsilon average, so that the overall magnetization is just this energy-resolved one evaluated at the mean value of the energy density. Now let me go to the actual experimental data. What you see here is a color plot. On this axis is the energy density, normalized such that the full spectrum goes from zero to one; this is time; and the color scale is the energy-resolved magnetization at the given time t. In the upper plot you see a measurement of the full return probability, indicating a dynamical quantum phase transition at this point and at this point here. This red line is the dynamics of the mean energy density; that is the value of the energy density which is relevant for local observables, because the mean value of a local observable is determined only by what happens at this mean energy density. OK, so let's try to interpret this plot in more detail. At zero energy density, as I pointed out, the dynamics is controlled completely by these dynamical quantum phase transitions, because the full return probability lives down there. Remember from a few slides back: the origin of the kinks that you see here is that the two rate functions cross.
It means that at this point you have a switch from a dominating probability for all spins pointing up to a dominating probability for all spins pointing down. Now do the sequence of two measurements: first project onto the ground-state manifold, and afterwards measure the magnetization. If you did that in the thermodynamic limit, you would observe the following along this line: initially you always measure plus one for the magnetization, because with almost certainty you are still in the up state of the ground-state manifold, and at the point where the dynamical transition occurs, you suddenly jump to minus one, because the probability of all spins pointing down takes over and now dominates. At the next dynamical transition you find the opposite switch, from minus one back to plus one. So along this axis down here you actually find non-analytic behavior of this m*(t) in the thermodynamic limit: plus one, jump to minus one, then back to plus one, consistent with the color scale you see. But as you can also see, the jump which occurs down here does not represent a singular point in the sense I cautioned about before: its influence extends to non-zero energy densities. The crossover from positive to negative magnetization is smoothed out, but you still see a region in between where the magnetization has to be zero, and that one could interpret as the dynamical analog of a critical region, which extends up to the mean energy density, which in turn controls the magnetization. Let me give you another physical argument. What this dynamical transition tells you in this model is the following: the initial condition was chosen as all spins pointing up, and this initial condition explicitly breaks the Z2 symmetry of the transverse-field Ising model.
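The switching behavior of m*(t) along the zero-energy-density line follows directly from which rate function is smaller. This toy sketch uses hypothetical smooth rate functions that cross periodically (not the measured curves) to show how the projected magnetization jumps between +1 and -1 exactly at the crossings:

```python
import math

def lam_up(t):
    """Hypothetical rate function for returning to the all-up state."""
    return 0.5 * (1 - math.cos(t))

def lam_down(t):
    """Hypothetical rate function for returning to the all-down state."""
    return 0.5 * (1 + math.cos(t))

def m_star(t):
    """Magnetization measured after projecting onto the ground-state
    manifold, in the thermodynamic limit: the dominating (smaller-rate)
    sector fixes the sign."""
    return 1.0 if lam_up(t) < lam_down(t) else -1.0

# The crossings of these toy rate functions sit at t = pi/2, 3*pi/2, ...;
# m_star jumps there, mimicking the non-analytic +1 -> -1 -> +1 sequence.
print(m_star(1.0), m_star(2.0), m_star(5.0))
```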
From the analysis of the full return probability to the ground-state manifold, the point of non-analytic behavior is precisely the point where the system is able to restore the Z2 symmetry, because the probabilities for up and down become the same. The magnetization, which is the order parameter measuring to what extent the Z2 symmetry is broken, then has to become zero, and that is what you observe here. So the zero of the magnetization is in some sense a remnant of the underlying dynamical phase transition. In other words, if you observe in this model a sequence of dynamical transitions as here, you can conclude that your order parameter shows an oscillatory decay. There is actually a vast literature in the non-equilibrium dynamics context discussing precisely this aspect: there are very often parameter regimes where the order parameter decays in an oscillatory fashion. I know it takes some time to digest this line of argument, but it tells you that you can trace this behavior back to an underlying dynamical transition, namely those points in time where the system is capable of restoring the Z2 symmetry which you initially broke. OK. Questions on this? Yes, please. No, the magnetization: I am almost sure that a local observable or some correlation function can never exhibit non-analytic behavior in the dynamics, probably due to principles such as causality and locality. I think it is not possible; those quantities are always smooth. But, in analogy to a quantum phase transition, that does not mean their behavior is disconnected from these dynamical transitions. Still, I am almost sure that magnetizations and correlation functions will always be analytic functions; measurable quantities are analytic.
It is like a quantum phase transition in equilibrium: any susceptibility will always be analytic when you measure it. Suppose you don't have a finite-temperature phase transition; then any susceptibility is always analytic, and only upon cooling to zero temperature would you get true non-analytic behavior. Here the analogy would be some post-selection procedure: if you were able to post-select your energy density such that you effectively cool your system down to the ground-state manifold, then you could see non-analytic behavior. But as long as you are at some finite energy density, like at finite temperature, everything is smooth. Yes, please. Yes. Yes. Yes, the origin of the non-analyticities there is completely different, and that is actually the case we are currently analyzing with a PhD student. The problem is that you don't satisfy this commutation condition, so you cannot measure these two quantities in the same way, and you have to use a completely different framework to draw a similar analogy; this is what we are developing right now. We observe behavior similar to what we have here, but this work is not finished. OK, the third question concerns this part. Sorry, to do what? To use which kind of propagator? Sorry, I didn't get it. Do you mean whether one could think about different kinds of operators, not just unitary evolution, or do you mean essentially the Fourier transform of that object? So you mean looking at the Fourier transform. This question came up yesterday in a different context: the Fourier transform of this Loschmidt amplitude is an energy distribution function, or work distribution function, and we were trying to see signatures there, but they are rather hidden in that quantity. At least, there is nothing spectacular immediately visible when you look at it.
Of course, when you Fourier-transform a function that is non-analytic in time, there has to be some corresponding property, some special structure, in the Fourier-transformed quantity, but it is only hardly visible, not very prominent. OK, I know the details take time, so maybe just as a take-home message: this is an example where you can connect these dynamical quantum phase transitions, or their influence, to the dynamics of a local observable for a rather general class of problems. This goes via some analog of a critical region, and even that critical region has, to some extent, been measured in this trapped-ion experiment. OK, now let's go to simpler things. Let me maybe point out one further interesting aspect of this experiment: they also measured the entanglement dynamics in this quantum spin chain. You see two different quantities, both for a six-site chain: here the half-chain entanglement entropy S, and here the so-called Kitagawa-Ueda spin-squeezing parameter. Let me first discuss the entanglement entropy. You see time rescaled in units such that one corresponds to the occurrence of the first of the observed dynamical phase transitions, and three to the second of the two kinks seen in the experiment. Initially the entanglement entropy should actually be exactly zero, because in a perfect world the initial condition would be the fully polarized state, which is a product state and should not have any entanglement. That is what the red line, the theory curve, gives you; the black dots are the actual measured data.
You see that there is some overall offset, and this can be explained rather simply by the fact that the initial condition was not a perfectly polarized state; the spins were slightly tilted away from the north pole. Incorporating these slight deviations in the initial condition, the blue line is the corresponding theoretical prediction of a simulation for that case, and it matches the experimental data rather nicely. OK, so initially the entanglement is weak, and as we generally believe for non-equilibrium dynamics in a quantum many-body system, when we are not dealing with systems that are disordered or many-body localized, the entanglement entropy is supposed to increase linearly. But when you look here, there is maybe an overall linear growth, and in theory it goes on and on in this way, but there is a substructure: the entanglement entropy shows its actual growth in the vicinity of the dynamical quantum phase transitions, levelling off in between and then starting to increase again near the next transition. The same thing you see for the Kitagawa-Ueda spin-squeezing parameter; you can focus here on the red curve. A smaller value of this number means the system is more squeezed, has more entanglement in it. Again you see a rather sharp drop of that quantity in the vicinity of these lines.
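For reference, the half-chain entanglement entropy discussed here is obtained from a pure state by a Schmidt decomposition across the middle of the chain. This is a minimal sketch (the experiment reconstructs the entropy from measured data, not from a known state vector; the two test states below are illustrative):

```python
import numpy as np

def half_chain_entropy(psi, n):
    """Von Neumann entropy of the left n//2 spins of an n-spin pure state."""
    n_left = n // 2
    # Reshape the 2^n amplitudes into a (left, right) matrix; the squared
    # singular values are the Schmidt weights of the bipartition.
    mat = psi.reshape(2 ** n_left, 2 ** (n - n_left))
    s = np.linalg.svd(mat, compute_uv=False)
    w = s ** 2
    w = w[w > 1e-12]          # drop numerically zero Schmidt weights
    return float(-np.sum(w * np.log(w)))

n = 6
# A fully polarized product state has zero half-chain entanglement ...
up = np.zeros(2 ** n); up[0] = 1.0
print(half_chain_entropy(up, n))   # ≈ 0

# ... while a GHZ state (equal superposition of all-up and all-down, the kind
# of cat-like superposition relevant near symmetry restoration) gives S = ln 2.
ghz = np.zeros(2 ** n); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
print(half_chain_entropy(ghz, n))  # ≈ ln 2
```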
This indicates that the entanglement production in this model is strongly connected to these dynamical transitions, and you can actually understand this, because the long-range Ising model they realize is actually used for spin squeezing, albeit in a different parameter regime; recognizing that connection, you can understand why at least spin squeezing should occur mainly in the vicinity of these dynamical transitions. So there is also a connection between these dynamical transitions and entanglement production in this model. OK, that was the hard part of the lecture. I see now that my time is almost over for this part. In the following I would like to discuss such transitions in topological systems, but I will probably stop here before continuing. Let me just say two or three more words: this has turned out to be a very interesting application of the concept of dynamical transitions. On the one hand one can develop a rather good analytical understanding, and for this type of system we also have a rather generic way of constructing order parameters for these dynamical transitions, which I have not shown you before. I will also show you a few experiments that have in the meantime measured these order parameters in such systems. And with that, I would say, let's stop and go for lunch. Thank you.