Okay, so first of all, I would like to thank the organizers for giving me the opportunity to talk about my recent work. It was done in collaboration with Jamir Marino, who is here, Sebastian Diehl from the University of Cologne, and Andrea Gambassi, who is also my supervisor here at SISSA. So, you have probably heard that there is now a lot of interest in the non-equilibrium dynamics of isolated quantum systems, and that one of the most celebrated protocols to drive these systems out of equilibrium is the so-called quantum quench. That is, you take the Hamiltonian which governs the dynamics, and then you suddenly change a parameter of this Hamiltonian. But actually the word "quench" comes from a more humble framework, which is material science. So if you take a piece of metal and you heat it a lot, until it reaches a really high temperature, and then you suddenly put it in contact with a vessel containing some water or some oil at a much lower temperature, then you are changing the properties of the metal with respect to a slower cooling. This is a quench, and it is used in metallurgy to give certain properties to metals. But how does one model this kind of process theoretically? Well, you take your favorite model for the metal, which is maybe some Ising model, and you prepare it in some high-temperature configuration. Then at t = 0 you couple it to a reservoir, which imposes a dissipative dynamics that will eventually lead your system towards an equilibrium configuration at the temperature of the bath itself. For instance, when you run Metropolis dynamics on your Ising model, you see that your observables fluctuate a lot around some quasi-stationary value. But before reaching that, there is a transient time in which they change their behavior fast.
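As a concrete illustration (my own minimal sketch, not part of the talk's material): a 2D Ising model prepared in a random, infinite-temperature configuration and then evolved with Metropolis dynamics at a bath temperature T, recording the magnetization as it relaxes. The lattice size, the temperature (chosen near the exact critical value 2/ln(1+√2) ≈ 2.27), and the number of sweeps are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_quench(L=24, T=2.27, sweeps=50):
    """Quench a random (infinite-temperature) Ising configuration to bath
    temperature T and record |magnetization| after each Metropolis sweep."""
    s = rng.choice([-1, 1], size=(L, L))      # disordered initial condition
    mags = []
    for _ in range(sweeps):
        for _ in range(L * L):                # one sweep = L*L attempted flips
            i, j = rng.integers(L, size=2)
            # sum of the four nearest neighbours, periodic boundaries
            nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            dE = 2 * s[i, j] * nb             # energy cost of flipping s[i, j] (J = 1)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] *= -1                 # accept the flip
        mags.append(abs(s.mean()))
    return mags

mags = metropolis_quench()
```

Plotting `mags` against sweep number shows the fast transient relaxation towards the quasi-stationary regime described above.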
This time, which I call the relaxation time, is absolutely not universal: it depends on the microscopic details of your system, on the temperature, and even on the observables themselves. But what happens when the temperature of the bath is the critical one for your system? In that case, at equilibrium, you know that the correlation length and the correlation time diverge, and your system is critical: it is at the onset of a phase transition and exhibits a universal behavior, determined only by macroscopic properties of the system, such as dimensionality and symmetry. The same happens for the relaxation time, which diverges accordingly: your system takes an infinite time to reach its equilibrium. And the question is: is there any universal behavior related to this divergence? Is the quench affecting the properties of your system? The answer is yes, and it is the so-called aging dynamics, which was first predicted by Janssen in 1989 and is reviewed in an article by Pasquale Calabrese and Andrea Gambassi. The main feature of this aging is the following. You put yourself at the critical point, so the correlation length diverges, and you look at the correlation function and the response function of the order parameter, and you see that they obey some scaling relations, which are characterized by the equilibrium exponents eta and z. But then you can ask: since they depend on two different times, is there any other non-analytic behavior hidden in this scaling function? The answer is that if you take the two times to be well separated, then you can isolate a peculiar dependence on the ratio of the two times, which is controlled by a new non-equilibrium exponent theta, an exponent independent of the equilibrium ones.
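Schematically, the scaling form in question can be written as follows (a hedged sketch in standard notation; the exponent a is fixed by the equilibrium exponents eta and z). For the two-time response function of the order parameter at criticality,

```latex
R(t,t') \;\sim\; (t-t')^{\,a}\,\mathcal{F}_R\!\left(\frac{t'}{t}\right),
\qquad
\mathcal{F}_R(x)\;\xrightarrow{\;x\to 0\;}\;x^{-\theta},
```

so that for well-separated times t ≫ t' a singular dependence on the ratio t/t', governed by the new exponent theta, factors out of the equilibrium-like part; the correlation function C(t,t') obeys an analogous form.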
And if you look at this scaling function, you see that it depends on the two times separately; in fact, it depends on their ratio, which is a clear signature of non-equilibrium, because at equilibrium it would depend only on the difference of the two times. Another even more remarkable consequence of aging concerns the dynamics of the order parameter. You slightly change the quench protocol: you start from an initial configuration with a small external magnetic field, and after the quench you switch it off. Then the dynamics of the order parameter is the following: after the quench, it starts to increase algebraically in time with an exponent theta prime, which is related to the exponent theta, and after some time it starts to relax, still algebraically, towards its equilibrium value, which is zero. This non-monotonic behavior is clearly a non-equilibrium effect, and it is one of the most remarkable features of aging. The take-home message is that whenever you break time-translation invariance, or, to some extent, the fluctuation-dissipation theorem, new exponents with respect to the equilibrium ones are expected to arise. This is what happens in equilibrium systems in confined geometries, for instance in a slab or in a semi-infinite volume, and the same effect is also observed in homogeneous systems in which the fluctuation-dissipation theorem is broken by some other mechanism; this was studied in recent years, for instance, by the group of Sebastian Diehl in Cologne. So what is the proper theoretical description for computing these exponents?
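In formulas, the non-monotonic evolution of the order parameter reads (standard short-time-dynamics result, written here in common notation; m0 is the small initial magnetization imprinted by the field):

```latex
m(t) \;\sim\; m_0\, t^{\theta'} \quad \text{(short times)},
\qquad
m(t) \;\sim\; t^{-\beta/(\nu z)} \quad \text{(long times)},
```

with the initial-slip exponent of the magnetization related to theta by θ' = θ + (2 − z − η)/z, and the crossover between growth and decay occurring at a time scale that grows as m0 is reduced.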
Since we are working at criticality, coarse-graining is allowed, because microscopic details do not matter, and the universality classes of the dynamics of these systems were defined in the 70s by the work of Hohenberg and Halperin, who found that a handful of universality classes defines the different possible dynamical behaviors. For instance, for the Glauber dynamics of the Ising model, the dynamics that I showed before, the proper coarse-grained model is the so-called model A of that classification, which tells you that the order parameter, which is now a field, obeys a Langevin equation characterized by an external noise that accounts for the thermal fluctuations of the bath. Moreover, since the initial configuration is random, at high temperature, the initial condition of your equation is a stochastic variable described by some probability distribution. What is important here is not the precise form of this distribution, but the fact that the initial correlation length must vanish, because you want a really disordered configuration. But it is not easy to perform RG directly on a Langevin equation, so what you do is resort to the so-called response functional, also known as the Janssen-De Dominicis, or Martin-Siggia-Rose, functional, which tells you that averages of observables can be evaluated as a functional integral, instead of by computing the solution of your Langevin equation. This introduces a new auxiliary field, the so-called response field, which basically plays the role of the noise and encodes the information about it. This way, you are in a framework suitable for doing renormalization, and one thing you notice is the following: renormalization basically tells you that your field theory has some divergences, and in order to reabsorb these divergences you have to renormalize some parameters of your theory.
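To fix notation (standard conventions, up to the usual choices of sign and factors of i for the response field; this is not taken from the speaker's slides): model A is the relaxational Langevin dynamics of the order-parameter field, and the response functional trades the noise average for a path integral over the field and an auxiliary response field,

```latex
\partial_t \varphi = -\Omega\,\frac{\delta \mathcal{H}[\varphi]}{\delta \varphi} + \zeta,
\qquad
\langle \zeta(x,t)\,\zeta(x',t') \rangle = 2\Omega T\,\delta^{(d)}(x-x')\,\delta(t-t'),
\\[6pt]
S[\varphi,\tilde\varphi] = \int\! dt\, d^d x \left[
\tilde\varphi \left( \partial_t \varphi + \Omega\,\frac{\delta \mathcal{H}}{\delta \varphi} \right)
- \Omega T\, \tilde\varphi^{\,2} \right],
\qquad
\langle \mathcal{O} \rangle = \int\! \mathcal{D}\varphi\, \mathcal{D}\tilde\varphi \;\,
\mathcal{O}\, e^{-S[\varphi,\tilde\varphi]},
```

where the last term shows how the response field φ̃ takes over the role of the noise.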
For instance, the mass, the field, and the response field in this case, and this gives you the critical exponents: nu, eta, and z, respectively. But what Janssen found in 1989 was that when you do a quench, you have extra divergences which cannot be accommodated in the renormalization of these parameters, and therefore you have to renormalize something else. The correct thing to renormalize is the value of the response field at the initial time, and this gives you exactly the initial-slip exponent, which in RG language is basically the anomalous dimension of this field. The value of this exponent can be computed using perturbative RG; this was actually done by Janssen and collaborators, who computed it at second order in the epsilon expansion. You can also compute it in the large-N limit of the O(N) models, where there is an exact solution, and even with Monte Carlo simulations. So what can the functional renormalization group add to these techniques, if anything? Let me be brief on this: you can define the functional RG equation for the response formalism as well; it works straightforwardly, as for equilibrium systems. Basically what you have to do is solve this exact equation, but that is impossible, so what one typically does is resort to some approximation, that is, to choose some ansatz. The ansatz in this case is to write down an effective action which contains information about the initial condition and about the post-quench dynamics. You see that this theta function ensures that the second part of this effective action holds only at times after the quench. Then you can make some choice; for instance, we keep just the lowest derivatives in time and space and some potential for the order parameter. And you see that this can be interpreted as a field theory which lives on a boundary.
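The exact flow equation being referred to is, in its standard form, the Wetterich equation, with the trace here running over times, positions, and the doublet of order-parameter and response fields:

```latex
\partial_k \Gamma_k \;=\; \frac{1}{2}\,\mathrm{Tr}\!\left[
\left( \Gamma_k^{(2)} + R_k \right)^{-1} \partial_k R_k \right],
```

where R_k is the regulator suppressing fluctuations below the scale k, and Γ_k^(2) is the second functional variation of the effective action.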
So the initial condition lives on the temporal boundary, and the post-quench dynamics constitutes the temporal bulk. This is a useful analogy for later. The choice of the initial action, gamma naught, is quite simple, and it is determined by the initial probability distribution and by causality arguments, because the response function has a causality structure which must be satisfied. Given this, we can try to compute this initial-slip exponent, but the point is that now, since time-translation invariance is broken, one of the typical tools you would use, the Fourier transform in time, is no longer available, so you have to do without it. Our approach is divided into three simple steps. The first one is that you have to choose a form for your potential, that is, you have to truncate. The simplest way to do this is to take the usual phi-to-the-fourth potential, which describes the breaking of the Z2 symmetry at a second-order phase transition. Then you put this potential inside your FRG equation, and you realize that the crucial point is to compute the inverse of the second variation of the effective action, which is what actually appears on the right-hand side of the FRG equation. This is where you would typically use the Fourier transform; here you cannot, so you have to do something else. And to do this, you realize that your G, which is this inverse, can be rewritten by separating the field-independent part, which corresponds to the quadratic part of the effective action, from everything else, which depends on the field. When you do this, you can rewrite this relation as an integral equation for G, a Dyson-like equation, which can be solved iteratively. Then you have an infinite sum of terms, and each of these terms can be computed exactly, no problem.
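The iterative solution of such a Dyson-like equation can be illustrated with a toy numerical sketch (my own, with time discretized so that operators become finite matrices; the matrices below are random placeholders, not the actual propagators of the talk). Writing the second variation as G0⁻¹ − Σ, with Σ the field-dependent part, its inverse G obeys G = G0 + G0 Σ G, which one solves by iterating:

```python
import numpy as np

def dyson_series(G0, Sigma, order):
    """Iterate G = G0 + G0 @ Sigma @ G, truncating at a given order.

    In the FRG application each power of Sigma carries a definite power of
    the fields, so truncating the series at a fixed `order` amounts to
    keeping a fixed power of the fields.
    """
    term = G0.copy()
    G = G0.copy()
    for _ in range(order):
        term = G0 @ Sigma @ term   # next term of the geometric-like series
        G = G + term
    return G

# toy check: for a small field-dependent part, the truncated series
# reproduces the exact inverse (G0^{-1} - Sigma)^{-1}
rng = np.random.default_rng(1)
G0_inv = np.eye(6) + 0.1 * rng.normal(size=(6, 6))   # well-conditioned quadratic part
G0 = np.linalg.inv(G0_inv)
Sigma = 0.02 * rng.normal(size=(6, 6))               # "small" field-dependent part
G_exact = np.linalg.inv(G0_inv - Sigma)
G_series = dyson_series(G0, Sigma, order=12)
```

Here `G_series` agrees with `G_exact` because the series converges for small Σ; in the actual computation one does not need convergence of the full series, only the few terms carrying the right power of the fields, as explained next.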
The point is that you do not need to sum this series; you just need one or two of these terms, because for this choice of the potential each term contains a precise power of the fields. For instance, if you want to renormalize the quadratic term, you just take the second term in this series, and that's it. Sorry? Okay, this just reflects the fact that there are two variables, the order-parameter field and the response field, and this is a Pauli matrix, so it is nothing fundamental, okay? Then you evaluate these terms and plug them back into the FRG equation. What you find is that, as a consequence of the breaking of time-translation invariance, you have a constant term which renormalizes your bulk action, and then some time-dependent terms which decay exponentially fast in time. The role of these terms is to renormalize the boundary, because when you perform the integration of a given operator against this function f of t, you see that this integral can be expressed as an infinite sum of derivatives of the operator evaluated at time equal to zero. But since, for larger n, these derivatives are more and more irrelevant, you just need to keep the first of them, which is the term that contributes to the renormalization of the fields on the boundary, the one you need to compute theta. We did this for a more refined ansatz, not so refined actually, but more precise: we included the background field and a sextic term, which is needed because we wanted to compute the value of theta in three dimensions. In three dimensions this operator is marginal and is therefore expected to contribute a sizable correction to the determination of the fixed point. And this is our result: a plot of the exponent theta as a function of the dimensionality. The black dots are the results from Monte Carlo.
The blue curve is the result of our ansatz, while the green and the red ones come from the first-order and second-order perturbative RG. You see that for d approaching four, which is the upper critical dimension, the curves merge and go to zero, as they should. For d equal to three, both the FRG and the perturbative RG results are compatible, as you can see in the inset, with the Monte Carlo result, and the FRG gives a different value with respect to the perturbative one. Of course, for lower dimensions our ansatz is not good, because more operators become marginal or even relevant, and they are not taken into account in this expression; whoever wants to improve this result should account for these terms in the ansatz. This concludes the talk, so let me just summarize. The idea is that when you do a critical quench, or in general when you have a boundary in a system, this generates new exponents which do not exist at equilibrium. This exponent, for the case of a critical quench in Ising systems, can be computed within the FRG, and we find, I would say, good agreement with the results of the numerics and with the results of the perturbative RG. As for what is left to do, there is actually a lot; for instance, one could try to do this computation with the full potential, which should be possible. Another thing, which is the subject of ongoing work and is the topic of Jamir's talk right now, is how this technique can be used to compute aging and critical dynamics in isolated quantum systems. One can also try to study other kinds of models, like the O(N) models or Potts models, or even attack different models in the Hohenberg-Halperin classification. A really far-future goal could be to use these techniques to shed some light on the so-called non-thermal fixed points, a general concept introduced by Berges and by Gasenzer which describes universal non-equilibrium dynamics.
Coarsening dynamics may also be attacked, which is what happens when you quench not at the critical point but across the critical point, and maybe one could try to devise some experimental implementation to see these exponents, because, as far as I know, the aging exponents have never been measured in experiments. So thank you for your attention.