I hope you can hear me, and you should be able to see my slides. Okay, then let me begin. I'd like to thank the organizers for giving me the chance to speak at this conference. This is work done in collaboration with Alessio Baldazzi, who is at SISSA, and Riccardo Ben Alì Zinati, who is now in Paris and was previously a student at SISSA. What we are proposing is a new scheme for the method called the non-perturbative renormalization group, also known as the exact renormalization group or the functional renormalization group, depending on what you want to emphasize — three names for the same method. It has applications in many areas of physics, including statistical physics, quantum many-body systems, high-energy physics, and gravity. I actually come from applications in gravity, but since arriving at SISSA I have started to explore some applications in statistical physics. There is a very nice review that came out in the summer, which is on the arXiv; if you want to know more, it is a thorough review covering all the different applications. The new scheme we are putting forward is a practical implementation of a much older idea, advocated in the 1970s by Steven Weinberg, Giovanni Jona-Lasinio, and Franz Wegner — there are some references here to those old papers. The idea is that problems in physics can usually be made simpler if we use a particular coordinate system or frame of reference, which we can reach by making a change of variables: we make a certain choice of variables in which to express the problem. The renormalization group itself is an embodiment of this idea, where we make a change of variables in order to better describe physics at different length scales.
An RG transformation then gives us a new description at each length scale. Usually, in this non-perturbative RG — what I'll call the standard scheme — one makes only a simple linear rescaling of the fields. I will be talking about a field theory here, so this is where the field chi is rescaled and one then works in terms of a field phi; this rescaling is where the word "renormalization" comes from. But the idea advocated in the 70s — back when the non-perturbative renormalization group was first invented by Wilson, and developed by Weinberg and Jona-Lasinio — was that one can also make much more general transformations, such as non-linear ones, and ones where derivatives of the fields are involved. Of course, an arbitrary transformation of this type will just make things more complicated. The point is that specific transformations can really simplify the RG: one can choose them so that, in each RG step, only a subset of the coupling constants — called the essential couplings — actually become scale dependent, while the other couplings — the inessential couplings — do not need to change under the transformation. This simplifies the renormalization group equations, which in general can be very complicated. So this is the idea we are pursuing. The overview of the rest of my talk: first, I will introduce these transformations, which I will call frame transformations, in the simple setting of a classical field theory — classical in the sense that there are no fluctuations, either of statistical or quantum nature.
Then I will talk about frame transformations when we consider correlation functions in statistical field theory or quantum field theory; this will set up the framework in which I develop these ideas. I will then introduce the non-perturbative renormalization group in the standard scheme, and tell you how it can be generalized in the essential scheme — I'll explain what our new idea is. Finally, I'll show you an application to the 3D Ising model, in particular the critical point, the Wilson-Fisher fixed point, and how these ideas can actually be used in practice. To get started: in this talk I will consider a real scalar field in d dimensions for simplicity, but everything can be generalized to other cases — more fields, fermionic fields, and situations where you might lose certain symmetries. I will take for granted translational invariance and also Z2 invariance: if I take chi to minus chi, the equations are symmetric. Consider some classical field theory, or even a classical model, described by an action given by the integral of a Lagrangian density. At the classical level we just want to solve the equations of motion obtained by minimizing the action. But in the original variable chi the equations of motion could be quite complicated, and it may be that we can move to a different variable phi to simplify them. The action then gets transformed into one depending on the field phi, which is some functional of the original field. Where I use square brackets, it means a functional not just of the field but also of its derivatives.
More generally, this change of variables can be thought of as a diffeomorphism, or change of coordinates, on configuration space. We can think of chi as coordinates on configuration space, and we are simply choosing a different coordinate system. For the equations of motion to be equivalent, we need this map to be a proper diffeomorphism: suitably differentiable, with a differentiable inverse. Let me give an example, which I will come back to when we talk about the Ising model. Imagine a classical theory whose action depends on two derivatives of the field and on two independent functions of the field: a potential, and a function Z that multiplies the two-derivative term. We can ask whether there is a transformation that gets rid of one of these functions — the function Z in particular. One can easily check that there is a transformation under which the potential transforms as a scalar, and in order to remove the factor of Z, the first derivative of chi with respect to phi must be the inverse of Z to the one half. As long as Z does not go to zero or infinity for any value of the field, you can perform this transformation, and then you can solve the equations of motion worrying only about a potential, and not about the extra function Z. That is the idea at the classical level, where we are just solving equations of motion. Now let us consider a statistical field theory.
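To make this example concrete, the transformation just described can be written as follows (the label for the transformed potential is my own notation):

```latex
% Classical action with two independent functions V and Z:
S[\chi] = \int d^dx \left[ \tfrac{1}{2}\, Z(\chi)\, \partial_\mu \chi\, \partial^\mu \chi + V(\chi) \right]

% Choose the new field \phi such that
\frac{\partial \chi}{\partial \phi} = Z(\chi)^{-1/2}
\quad\Longleftrightarrow\quad
\phi(\chi) = \int^{\chi} d\chi'\, Z(\chi')^{1/2}

% Then the kinetic term is canonically normalized and only a potential remains:
S[\phi] = \int d^dx \left[ \tfrac{1}{2}\, \partial_\mu \phi\, \partial^\mu \phi + \tilde V(\phi) \right],
\qquad \tilde V(\phi) = V(\chi(\phi))
```

The invertibility condition mentioned in the talk is visible here: the integral defining phi(chi) is monotonic, and hence invertible, precisely when Z stays finite and non-zero for all field values.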
Here we are interested in computing averages of quantities depending on the fields, obtained by integrating or summing over configuration space, with the action appearing in the Boltzmann weight — if you like, S is the inverse temperature times the Hamiltonian. For something like the Ising model, we would have this simple action. The normal procedure is to introduce a generating functional for the fields chi: we couple a source J to the field chi and can then compute correlation functions. A note on my notation: wherever you see this dot, it implies that I perform an integral, and a trace over a two-point function means I evaluate it at equal arguments and do the integral — the standard field theory notation. Then W is the generating functional of connected correlation functions. What we want to do instead is consider a generalization where, rather than coupling the source to the original field chi, we couple it to a composite operator phi-hat, which again, as in the classical example, is a diffeomorphism of configuration space — a new coordinate frame in which we can choose to do our calculations. Now I should say a few words about the interpretation of the source in this case. In some applications we might think of the source as a physical external field, and that would give physical meaning to whatever composite operator we have here, because the interaction obviously depends on the choice of composite field. This isn't really the spirit we want to work in.
But we can take a different perspective: we can make a change of variables in the measure, so that J appears to couple to the same field, at the price of the action itself transforming. So if we want to think of J as coupling to the same field, we have to accept that the action is different. What this means in the end is that non-universal quantities will of course change, because we are looking at a different source, or alternatively a different action. But if we are interested in universal quantities, like critical scaling exponents, we are free to do this: once we tune the theory to the critical point, we find the same scaling exponents independently of this choice. So if we only want to look at universal quantities, we can proceed this way. The last point of view is that we can always think of J as a purely technical device — the framework coming from a high-energy physics point of view — and always compute observables at J equal to zero, in which case we don't have to assign any physical meaning to it. If we would like to do this and still have some sort of source, it must be put into the action: here H is some other, physical source — not the Hamiltonian — which always couples in the same way. This is the price to pay if we really want to compute non-universal quantities: essentially two sources, one physical source inside the action, and one source that is just a computational device. With these points in mind, we adopt, in this spirit, the idea that we can pick these frames however we like.
We will therefore restrict to quantities that can be computed independently of the frame, which include the universal scaling exponents at a critical point. In the end the application will be to the Ising model, where one can really show this — the paper by Wegner back in the seventies showed that you can make these general transformations and the critical exponents at a critical point are invariant under the choice of frame. So that is the idea. For our purposes, we want to develop this within the modern formulation of the renormalization group, which I will explain in a minute. This is formulated in terms of a modified form of the one-particle-irreducible (1PI) effective action — the functional that generates the 1PI correlation functions — obtained by a Legendre transformation of W. I should say that I have dropped the indices I included before, where they are no longer really necessary; it is implied that we work in this general frame from now on. Performing the Legendre transformation, we obtain a generating functional where phi is now the mean field in the presence of the source, and we can give this alternative definition of the effective action. Expectation values then depend on the value of the field, and as we vary phi we are, in a sense, also varying J — it is just a change of variables in this sense. There is one last ingredient we will need to include, whose purpose will become apparent later.
We now also include an extra two-point source K, which couples to the fluctuations around the mean value of the field, so that we work with an effective action depending on phi and on this two-point function K, which I will specify in a moment. Now the idea is that we have these generating functionals, and we want to apply the principle that the physics does not depend on which frame we choose. The natural question is then how these functionals transform under frame transformations. We take the original frame and add to it an infinitesimal part xi — the infinitesimal frame transformation — and you can see here how the different generating functionals transform. Note that W and Gamma transform in quite simple ways, with just one extra term: under the transformation they get re-expressed in terms of the expectation value of xi. For W, the transformation gives a term which is linear in J; for Gamma, a term proportional to the first functional derivative of Gamma times the function xi. With the extra two-point source, however, we get this more complicated term, with an actual loop correction: since there is a trace, in momentum space this is a one-loop term, and it involves the modified propagator of the theory. This more complicated transformation is the one we will be interested in when we look at the non-perturbative renormalization group equations. Now I can finally come to the word "essential", which is in my title.
The point is that the correlation functions depend on the various coupling constants we have, and these couplings fall into two classes. The quantities we compute — the critical exponents — should always be invariant under frame transformations, and the essential couplings are those that actually enter such physical quantities. The inessential couplings, on the other hand, are those for which the variation of the generating functional with respect to the coupling omega can be expressed exactly as one of these frame transformations, for some xi. These are the couplings we ultimately want to get rid of, because they do not enter the physical quantities we are interested in. So that is the idea, and now I want to talk about the non-perturbative renormalization group and how it incorporates it. As I explained in the introduction, it is a method you can use in statistical or quantum field theories to actually obtain correlation functions. The key idea is not to average over all fluctuations at once. Instead, you first introduce a cutoff scale k, which is a momentum scale, and average first over the short-distance, or large-momentum, fluctuations. You then get a new description — a new action depending on new coupling constants — in which only the remaining long-wavelength fluctuations are left to integrate out. Ultimately, we take the scale all the way to zero, and then we have included all the fluctuations.
If you like, we have then performed the functional integral, and at the end of the RG flow, when we take k all the way to zero, we obtain the correlation functions, which is ultimately what you want. The modern formulation of this idea — first introduced by Wilson in the 70s, followed by Polchinski's work in the 80s, and then, since the 90s, the work of Christof Wetterich, Tim Morris, and others — has really turned this into an industry that can be used in all these different areas of physics, and most applications are based on what is called the effective average action. The effective average action depends on the cutoff scale k. When k approaches the ultraviolet cutoff scale, this action should approach the microscopic action, or the Hamiltonian if you like, of your system — some simple theory describing your microscopic degrees of freedom. In the other limit, when k goes all the way to zero, this functional should be just the 1PI effective action, so that you obtain your correlation functions once all the modes have been integrated out. In a picture, the idea is this: different actions are just different points in theory space, labelled by all the coupling constants that can appear in the action. You start at the point where the action is the microscopic one, flow as k goes down towards zero, and at the end of the flow reach the full effective action, which contains your correlation functions. The different lines represent the fact that this can be done in different ways.
There are different regulators you can introduce, and other choices you can make, to construct different RG flows, but all of them start from the same microscopic action and should end with the full effective action. Within the formalism I introduced, what you do is take the two-point source and turn it into an infrared regulator. In position space it has this form, and transforming to momentum space it depends on a kernel which is a function of the momentum squared divided by k squared. The point is that this kernel should vanish when the momentum is very large, so that the large-momentum modes are integrated out unsuppressed, and in the limit that p goes to zero it should either go to a finite limit or diverge, so that the low-energy modes are suppressed. You then have this expression for the effective average action. When you take k towards Lambda — and we assume here that Lambda is essentially infinity — the regulator term essentially becomes a delta function in the path integral, and evaluating the delta function gives the limit in which this goes back to the microscopic action. So this is the idea. As I said, this function r can be chosen in many ways, as long as it has the right limits. Here I just give some different choices: smooth ones involving exponentials, which have finite limits; ones depending on a theta function, which are zero until the ratio becomes one and then the regulator turns on; and power-law type regulators, which diverge when the momentum goes to zero.
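As a concrete illustration, here is a sketch in Python of the three regulator shapes just described and their limits. The particular normalizations are my own choices for the sketch, not necessarily the ones on the slides.

```python
import numpy as np

# Regulator R_k(p^2) = p^2 * r(p^2 / k^2), for three common kernel shapes r(y).

def r_exponential(y):
    """Smooth exponential kernel, r(y) = 1/(e^y - 1)."""
    return 1.0 / np.expm1(y)

def r_theta(y):
    """Theta-function kernel, r(y) = (1/y - 1) for y < 1, and 0 above."""
    return np.where(y < 1.0, 1.0 / y - 1.0, 0.0)

def r_powerlaw(y, n=2):
    """Power-law kernel, r(y) = y^(-n): the regulator diverges as p -> 0."""
    return y ** (-n)

def R(p, k, r):
    """Regulator evaluated at momentum p with cutoff scale k."""
    return p**2 * r(p**2 / k**2)

# Large momentum p >> k: R vanishes, so those modes are integrated out unsuppressed.
# Small momentum p << k: R -> k^2 for the first two kernels (and diverges for the
# power law), suppressing the long-wavelength modes still to be integrated out.
```

One can check the two limits numerically: at k = 1, R(10, 1, r) is negligibly small for the exponential and theta kernels, while R(p, 1, r) approaches k² = 1 as p → 0 for both, and blows up like 1/p² for the power law with n = 2.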
At the exact level, all physical quantities are independent of which choice you make, because these are just different ways to do the coarse graining, if you like. But obviously, once you make approximations, the choice of regulator becomes important. This is where you have the technical problem: you make some approximation, you are expanding somehow, and for different regulators the radius of convergence of this expansion could be zero or could be finite; you have to choose a good regulator to actually get physical results. That is the technical work you have to do when using this method. In terms of what I am calling the standard scheme, the field we use is, as I said before, related to the field chi just by a simple rescaling — there is just a simple relationship between the two fields. Taking a k-derivative of this expression, to get how the action depends on k, you end up with this equation, the Wetterich equation (also derived by Morris), here with the anomalous dimension included. The anomalous dimension is related to the derivative of the wave function renormalization, which is the single inessential coupling in this formulation. You can see that the terms proportional to eta have the form of a frame transformation where xi is linear in the field: minus one half times eta, the anomalous dimension of the field, times the field. This is the standard scheme, the standard setup. The purpose of introducing eta is that the action, when expressed in terms of phi instead of chi, will not depend on the wave function renormalization.
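For orientation, the flow equation being referred to, before the field rescaling is applied, is the standard one (my transcription of the well-known formula, with t the RG "time"):

```latex
% Flow of the effective average action (Wetterich equation), t = \ln k:
\partial_t \Gamma_k[\phi]
  \;=\; \frac{1}{2}\,\mathrm{Tr}\!\left[\,\partial_t R_k
        \left( \Gamma_k^{(2)}[\phi] + R_k \right)^{-1} \right]
```

Rescaling the field by the square root of the wave function renormalization then generates the extra terms proportional to the anomalous dimension, eta = -d ln Z_k / d ln k, that the speaker describes, and these have exactly the form of a frame transformation with xi = -(1/2) eta phi.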
I could re-express it in terms of the mean value of chi, and then I would reintroduce the dependence on Z; you see that by working with phi, I am just removing the dependence on Z. But I still have the freedom to say what exactly Z is, right? I have the freedom to choose which coupling I am freezing, and this is what is known as setting renormalization conditions. What is typically done, let's say, is to expand the action in derivatives, take the term with two derivatives, choose a specific value of the field — for example where the field vanishes, or where it sits at the minimum of the potential — and demand that the function Z equals one there at every scale. Under this condition I then solve the flow equation, and since I have made this one condition, I get one equation which now determines eta. So I am basically trading the dependence on this one inessential coupling: eta will not depend on Z itself, but will just be a function of all the other coupling constants. That is the idea. In this simple setup, you can actually see that when you arrive at a fixed point of the renormalization group, this eta of k corresponds exactly to the critical exponent eta, which is why it is given the same name. So that is the idea within the standard scheme. Then, how do you actually solve these equations? There are many different approximation schemes you can choose, but I will talk about the derivative expansion, since it is the one we know the most about, and it has shown good convergence properties. The idea is that you expand the action in derivatives.
You start with a simple potential, then a term with two derivatives, then terms with four derivatives, and you can go on with this expansion, getting more and more functions of the field — here we are not expanding in the field, only in the derivatives. What this amounts to, once you plug it all into the equation and expand the trace in the same way, is a set of coupled differential equations for each of these functions. Practically, one then picks some order at which to truncate the expansion — s here is the order of the truncation — and solves a finite number of coupled differential equations; increasing the order, what you should find, if the expansion converges, is that physical quantities hopefully approach their physical values. That is the idea in the standard scheme. Now I want to discuss our idea for the essential scheme. The point is that we take the field phi-hat, which is now a function of k, to be a completely general, suitably local function of the field chi. Deriving the flow equation, it has a similar form, but the term that was previously just minus one half eta times phi is now replaced by F, which is the expectation value — sorry, this should be an angle bracket on the slide, not a curly bracket — of the derivative of this field. We now have the freedom to choose this F, and if we make a derivative expansion we can expand it in derivatives as well; here my notation on the slide is also a bit off — these are just the gradients of the field.
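Schematically, the two expansions described here run in parallel (my notation; the functions are those on the slides up to labelling):

```latex
% Derivative expansion of the effective average action, shown to order \partial^2:
\Gamma_k[\phi] = \int d^dx \left[ V_k(\phi)
  + \tfrac{1}{2}\, Z_k(\phi)\, \partial_\mu \phi\, \partial^\mu \phi
  + O(\partial^4) \right]

% In the essential scheme, the function F is expanded in the same way:
F_k[\phi] = f_k(\phi) + O(\partial^2)
```

At each order the action acquires new functions of the field, and F acquires a matching set of functions that can be used to impose renormalization conditions on them.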
The point is that we can expand F, just as we expanded the action, in the number of derivatives: the first term has no derivatives, then come terms with two derivatives and higher — these are the two-derivative terms, and then there should be order four here. Just as before, we can now choose renormalization conditions to remove the inessential couplings, but now we can really remove all of them, order by order in the derivative expansion. Whereas in the standard scheme we removed just a single coupling, now we will attempt to remove all the inessential couplings, so that we get a flow for the essential couplings alone. In principle there are many different renormalization conditions one could impose — there is no unique way to do this. Pragmatically, since we have this frame invariance, or reparameterization invariance, we can think of a renormalization condition in analogy with a gauge-fixing condition in a gauge theory: just as picking a gauge-fixing condition reduces the redundancy introduced by gauge invariance, here we are removing the redundancy associated with frame invariance.
So we can think of different conditions as different gauge fixing conditions and the simplest thing we can do is we can look for a free fixed point so where we have a Gaussian model so where the action is just given by this this simple two derivative term and then we can find well what are the inessential couplings around this this point and then we choose the renormalization group equation so that at least around the free fixed point we remove all the the inessential couplings and then as we flow we we we this this this will then give us a way to close the equations so if we do this this means that apart from this term in the action which will get normalized with the one half we then remove any terms of this form where now this capital phi can be any operator which is suitably local in the fields so all of these terms will then not appear in the in the ansatz for the for the effective action and then we solve the effective action we solve sorry the flow equation under this renormalization condition and then this will determine all these functions f and g as functions of the remaining coupling constants just like previously what happened is we would just get the equations just for for the single eta now we have all the eta is replaced by this functional expression for given by f so this is the idea and then we can just see how this idea works where we make the the order two approximation in this derivative expansion so in the standard scheme this would mean that the action has this form where we have this function zeta which is playing the same role as z in our in this example I gave in the classical field theory and in the standard scheme we can just fix one of the couplings in z so we can just pick a value of the field for example the field equals to zero and set this guy to one but other than this constraint we have to solve the flow equation to to find to find zeta but what happens in the essential scheme is that we can oops we can actually set the entire 
functions zeta to one so this means that we can remove this entire function and at this level of approximation this means that this this f is just expressed as just a function of the field so without any any derivatives so then if you look at the at the structure of this equation you see that it's non-linear in the action because here you have the second function the derivative of the action in in this in this propagator so if it depends on zeta as in the standard scheme then it depends non-linearly on on zeta whereas what we're doing then is we're trading this for dependence on the function f but the equation is linear in f so this this automatically makes much things much much simpler so you will actually have the same the same number of equations but you will have this linear dependence on on f rather than this non-linear dependence on zeta which really makes everything much much simpler when you're doing doing these these calculations because essentially because this propagator that you get by taking two functional derivatives of this action and then inverting is going to be much simpler so this is really a practical idea you see we've introduced this this this this quite a lot of this machinery but in the end the point is that the equations that we actually solve they will be simpler once once we once we have done this so then I can turn to an application of this so this is what we've done to to test this this formalism is we've gone to the 3d easing model since this is like the benchmark where we can really compare how well this this new scheme compares to to to other schemes so in particular we look we study the wilson-fischer fixed point in in in three dimensions so when you look for fixed points what you want to do is you want to actually remove the scale k because you want to look at something where you have scale invariance so this means that you you work in dimensionless variables so you you just essentially rescale everything by factors of of k and 
Then you look in this dimensionless formulation for fixed points, which are where the dimensionless variables become independent of k. For this case we just use this simple regulator, and when we look for fixed points we get these differential equations. Here b is a number which you can set to one if you like; it just depends on exactly how you do the rescaling, so you can think of it as one. So we get differential equations for these two functions, and as you can see they are linear in f, which is the dimensionless version of the capital F. I am not going to show you the equations in the standard scheme, but they are much more complicated than these: you also get two equations, but they are more involved. Thanks to this form of the regulator, our equations depend only on rational functions of the derivatives of f and v, whereas for the same regulator in the standard scheme you do not get just rational functions. This means that once we arrive at these equations, actually finding the fixed-point solution is much simpler within this scheme. Since we are looking at the 3d Ising model, so now we have gone to three dimensions, we also impose the Z2 symmetry on the potential, and then we solve these equations looking for solutions which are valid for all values of the field, from minus infinity to plus infinity. Once you do this, you find that there are only a finite number of solutions; this essentially picks out what the initial conditions for these differential equations need to be. The only fixed points you find are the trivial free fixed point, where the potential is just constant, and the interacting Wilson-Fisher fixed point.
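To make the statement that only a finite number of solutions survive concrete, here is a small sketch of the standard shooting strategy ("spike plot"). It is written for the plain local-potential-approximation fixed-point equation with an optimized-style regulator and the constant rescaled to one, which is a simpler stand-in for, and not identical to, the order-two essential-scheme equations on the slides:

```python
# Toy LPA fixed-point equation in d = 3, constants rescaled to 1:
#     0 = -3 v + (1/2) phi v' + 1/(1 + v'')
# solved as  v'' = 1/(3 v - (1/2) phi v') - 1  with v'(0) = 0 (Z2 symmetry).
# Generic initial conditions v(0) = sigma hit a singularity at finite phi;
# only a discrete set of sigma values gives solutions valid for all fields.

def rhs(phi, v, vp):
    """Return v'' from the fixed-point equation; None at the singularity."""
    den = 3.0 * v - 0.5 * phi * vp
    if abs(den) < 1e-9:
        return None
    return 1.0 / den - 1.0

def reach(sigma, phi_end=10.0, h=1e-3):
    """Integrate with classical RK4 from v(0) = sigma, v'(0) = 0.
    Return the field value where the solution blows up (or phi_end)."""
    phi, v, vp = 0.0, sigma, 0.0

    def f(p, y):  # y = (v, v'); returns (v', v'')
        vpp = rhs(p, y[0], y[1])
        if vpp is None or abs(vpp) > 1e6:
            raise OverflowError
        return (y[1], vpp)

    while phi < phi_end:
        try:
            k1 = f(phi, (v, vp))
            k2 = f(phi + h/2, (v + h/2*k1[0], vp + h/2*k1[1]))
            k3 = f(phi + h/2, (v + h/2*k2[0], vp + h/2*k2[1]))
            k4 = f(phi + h, (v + h*k3[0], vp + h*k3[1]))
        except OverflowError:
            return phi
        v  += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        vp += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        phi += h
    return phi

# The free fixed point is the constant solution v = 1/3, for which
# v'' = 1/(3*(1/3)) - 1 = 0 exactly, so it survives to any field value.
# Scanning sigma and looking for the values where reach(sigma) spikes
# picks out the discrete set of fixed points, as described in the talk.
print(reach(1.0/3.0))  # the free fixed point reaches phi_end
```

The same scan-and-shoot logic applies to the essential-scheme equations; only the right-hand side changes.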
This story is the same in the standard scheme, where you would be solving for v and for zeta, but these equations are much simpler to solve, so practically it is much easier. This is the form of the dimensionless potential: you can see it has the Mexican-hat form which you expect for the Wilson-Fisher fixed point in three dimensions. This allows you to find the fixed point. Then, to find the critical exponents, you perturb the solution: you look at the linearized flow around the fixed point and express it as a sum of eigenperturbations. You make a separation of variables, so that all the k dependence is in a prefactor, where theta will be the critical exponent, and all the phi dependence is in this psi of phi. One can show, looking at these equations, that the spectrum is quantized, so it is not a continuous spectrum but... I think there is a problem. Kevin, there was a problem with the connection; could you repeat the last few sentences? Okay, I will go back to this slide. So the point is that after solving the fixed-point equation, which means that in dimensionless variables there is no k dependence, you look for a perturbation of the fixed point, and this is how you can extract the universal critical scaling exponents. This is done by making a separation of variables, so that the k dependence comes only in this prefactor, where theta will be the universal critical exponent, and all the phi dependence is in this psi. You can show, both in the standard scheme and in the essential scheme, that this spectrum is quantized, which is what you expect, because you expect discrete values for these critical exponents.

One can then solve these linearized equations. To find the critical exponent nu, you look at the relevant perturbation, the one where the exponent is positive; there is just one of these, and you can identify its inverse with the scaling exponent nu. The scaling exponent eta is related to the scaling of the operator phi-hat, if you like, so this turns into an odd perturbation: you have to make a perturbation with an odd function of the field to extract this critical exponent, and a scaling relation then allows you to identify the value of eta. I should say at this point that this calculation of eta is actually more complicated in our formulation; this is the one drawback. It is related to the fact that, as I was saying, you can think of eta as being related to a perturbation which is just chi. In the standard formulation you can just read eta off from its value at the fixed point, and the psi would just be phi, but in the essential scheme the expression for this psi is now some complicated odd function of the field. You can find this perturbation and then compute the value of eta. These are the values of nu and eta which we obtain, and it makes sense for us to compare with what we would get in the standard scheme. There have been papers that looked into this in 2003, and you can see that our results are very comparable: we actually get the same value of eta, while our value of nu is slightly different, and if we compare to the best results, which come from the conformal bootstrap, our value for nu is slightly better.
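The quantization of the spectrum can be seen explicitly around the free fixed point, where the eigenvalue problem is solvable in closed form. This sketch again uses the toy LPA flow from before (not the essential-scheme equations on the slides), linearized about its constant Gaussian solution v = 1/3:

```python
# Quantized spectrum around the free (Gaussian) fixed point of the toy
# LPA flow  dv/dt = -3 v + (1/2) phi v' + 1/(1 + v'')  (constants set to 1).
# Linearizing about the constant solution v = 1/3 gives the operator
#     L[w] = -3 w + (1/2) phi w' - w'' ,
# and separation of variables  w = k^(-theta) psi(phi)  turns the flow
# into the eigenvalue problem  L[psi] = -theta psi .
from fractions import Fraction

def apply_L(c):
    """Apply L to a polynomial with coefficients c[m] of phi^m."""
    out = [Fraction(0)] * len(c)
    for m, cm in enumerate(c):
        out[m] += (Fraction(m, 2) - 3) * cm   # -3 w + (1/2) phi w'
        if m >= 2:
            out[m - 2] -= m * (m - 1) * cm    # -w''
    return out

def eigenperturbation(n):
    """Polynomial eigenfunction of degree n and its exponent theta = 3 - n/2."""
    lam = Fraction(n, 2) - 3                  # eigenvalue, read off the phi^n term
    c = [Fraction(0)] * (n + 1)
    c[n] = Fraction(1)
    for m in range(n - 2, -1, -2):            # fix lower coefficients by recursion
        c[m] = (m + 2) * (m + 1) * c[m + 2] / (Fraction(m, 2) - 3 - lam)
    return c, -lam

# The spectrum is discrete: theta_n = 3 - n/2 for even n.  The n = 2
# perturbation (psi = phi^2 - 2) gives the mean-field nu = 1/theta = 1/2;
# at the Wilson-Fisher fixed point the same construction is done numerically.
psi, theta = eigenperturbation(2)
assert apply_L(psi) == [-theta * cm for cm in psi]   # genuine eigenfunction
print("theta =", theta, " nu =", 1 / theta)
```

At the interacting fixed point the eigenfunctions are no longer polynomials, but the same separation-of-variables construction, solved numerically, yields the discrete exponents quoted in the talk.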
I should say here that so far we have only used one regulator and have not studied the dependence on the regulator, whereas those results have been obtained by scanning the space of regulators: introducing one parameter, looking at the dependence on this parameter, and finding the point where the sensitivity of the critical exponent is minimized, that is, looking for a minimum of these quantities. This is known as the principle of minimum sensitivity, and it is something you are in a sense obliged to do, because for most regulators the derivative expansion will not converge well, so you have to find the regulator which is picked out by this principle. We have not yet obtained values in this way, but these values for the single regulator are already in good agreement with those ones. So this really gives us the indication that our method is not only simpler, but in the end it also seems to converge in a similar fashion to the standard scheme.

We can of course then go to higher orders in the derivative expansion, and we will get more complicated equations. But what is going to happen is that at each order in this expansion our equations in the essential scheme will always be simpler than those in the standard scheme, because we now have the power to remove all these different terms. Going back: we will remove all terms which involve, essentially, the equation of motion of this free theory. Once you have done this at different orders, then for example at fourth order you get just one new function of the field, and there is this table where we compare with the standard scheme: in the standard scheme there are five functions of the field at fourth order, whereas for us in the essential scheme there are only two. This repeats at higher orders. Once you go to sixth order, which is the highest order currently being investigated, there are 13 coupled differential equations, so you can imagine that it is very complicated. We will also get 13 equations, but they will have only a linear dependence on the f's; the non-linear dependence will then be on only five of these potentials at order six. So you can see that at each order in this expansion we get simpler equations to solve, which hopefully means that one can go to higher orders with less effort. So it is really of practical advantage to use the essential scheme over the standard scheme.

So then I can just make my conclusions. I have given a quite biased and personal introduction to this non-perturbative renormalization group, introducing not only the method but what I think is perhaps an even better scheme to use. This takes advantage of the fact that we can perform these general reparameterizations, or frame transformations as I have called them, and then one can apply not just a single renormalization condition to remove one coupling, but actually remove all of the inessential couplings. This is really just a practical idea: it is there to reduce the complexity of the calculation so that one can access physical quantities with less effort. And the point here is that, although I have only discussed a single scalar field, this essential scheme can be adapted to all the other applications of the RG, for example to ultracold atoms, to QCD, to quantum gravity, and to many other things. So hopefully this idea can be applied universally to make this method
more useful and to allow us to obtain quantities more easily. So this is the message. Thank you very much for listening to my talk.

Thank you, Kevin, for this nice presentation. Any questions? We have plenty of time for questions. I actually have one. I have never actually worked on the renormalization group from the analytical point of view, but one of the first requirements I remember being taught back in university is that you usually restrict yourself to short-range interactions. If things are long range, the theory might work, but maybe there are corrections, some things to take care of, and as far as I understand this is actually controversial. Does your approach change anything in this regard? Can long-range interactions be integrated naturally into it, and does it work just fine out of the box?

I think there is no reason why, in this exact, non-perturbative formulation, you cannot do this, because the idea is always that the long-range interactions are going to be integrated out last; in a sense what you are doing is always removing the short-range interactions first. So certainly this is the idea, and therefore if there are long-range interactions, yes, it might be more difficult. You would have to look at the specific example, and obviously, like any tool, there are certain problems where it can be more useful and certain problems where it can be less useful. What I should also say is that where you have points where, at some scale, your fundamental degrees of freedom are no longer the relevant ones, you can also use these sorts of frame transformations to describe transitions to other physical degrees of freedom. For example, you can use them to describe things like bosonization. So there are also perhaps more physically motivated ways to use this freedom to fix things: I am introducing it in a very practical way, but maybe for problems with long-range interactions there is some more physical way of choosing this f which would aid this. I think that is a strong possibility. Okay, thank you.

Pierre? So I also have a question, more on the application of this essential renormalization to QCD, as it was mentioned in your conclusion. How would this renormalization scheme compare to some other, let us say partial-resummation, renormalization scheme, like the one that gives the Schwinger-Dyson equations? I mean, would you say that the renormalization scheme you present is more powerful, in the sense that it is more non-perturbative but more difficult to apply to a generic problem, or is it more complicated?

So the functional renormalization group is already applied to QCD, and I would say that compared to the Schwinger-Dyson equations it is equally non-perturbative, because they are both based on exact equations: the Schwinger-Dyson equations and these functional renormalization group equations are both exact identities. In the applications to QCD done in Heidelberg, which is where I was before, they would typically take this f to be of the form we discussed for the standard scheme, but they allow this eta to depend on the momentum. This makes sure that the gluon propagator essentially just depends on p squared and does not become non-trivial, so you always get rid of
the non-trivial momentum dependence of the propagator. Okay, so this idea is then incorporated in that comparatively simple, well, non-trivial way. But in this approach we really have the power to include more non-linear transformations of the field, and I really hope that this can be applied to QCD. Personally I work more in gravity, which is also a gauge theory, so it is quite similar to QCD, and there I already know that we can apply this idea; we have already started doing this, and there are certainly certain terms in the flow equation which you can again get rid of. The same thing will happen in QCD: essentially, any term that appears proportional to the equations of motion can be removed. So yes, I think that it can really be applied there as well. Of course, I think it is also possible to incorporate the same idea into the Dyson-Schwinger equations; it will just appear in a slightly different way. So wherever you are doing some sort of renormalization, you can always choose this more general approach and therefore hopefully simplify your equations. Okay, thank you. Any other questions?

I have one question, again relevant to what Pierre asked: how can this machinery be compared with the so-called two-particle-irreducible (2PI) effective action methods? My understanding is that there one has a systematic expansion in loops, which makes the equations more and more complicated. One starts with the so-called cactus diagrams, the simplest case that can be solved relatively easily, which accounts for corrections like mass shifts, as in Hartree-Fock; then one can numerically solve the next-to-leading-order corrections coming from sunset-type diagrams, and I
wonder if this approach corresponds to some simplification or some special case. So, broadly speaking: I have introduced the derivative expansion, which is used mostly for scalar fields. In applications to QCD what is used is another expansion, where you expand only in the fields but keep all the momentum dependence, so you do the opposite of what is done here: what I do here is keep all the field dependence but expand, if you like, in the momenta, or the derivatives. In QCD what they are usually doing instead is a vertex expansion, expanding in the field amplitudes, and because of the structure of the equations, because there are two derivatives here, the flow of the n-point function will always depend on the (n+2)-point function. So you have to truncate the system, exactly as you must truncate this derivative expansion. You can look into the review which I gave in the introduction, and into the work of the group in Heidelberg, Jan Pawlowski's group, where they are really using these flow equations to study QCD. With this non-perturbative approximation they can actually study things like confinement, using the flow equation with this vertex expansion, which is the approximation scheme they mostly use. So I hope this at least answers your question, or maybe you can look into these papers to understand exactly how this lines up with the 2PI effective action. I think there is also a recent paper where they are trying to incorporate this 2PI effective action into the functional renormalization group, so there I think they are trying to combine these methods. This paper, by Jan Pawlowski and collaborators, came out a few weeks ago, talking about how to incorporate the 2PI, and they make at least some statements that this can really improve the approximations.
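For reference, the closure problem mentioned a moment ago, the flow of the n-point function involving the (n+2)-point function, comes from the structure of the exact flow equation, which in its standard (Wetterich) form reads:

```latex
% Exact flow equation (standard form; R_k is the infrared regulator
% mentioned in the talk):
\partial_t \Gamma_k \;=\; \tfrac{1}{2}\,
\mathrm{Tr}\!\left[\left(\Gamma_k^{(2)} + R_k\right)^{-1}\partial_t R_k\right],
\qquad t \equiv \ln k .
% Taking n functional derivatives of the right-hand side generates
% \Gamma_k^{(n+1)} and \Gamma_k^{(n+2)}, so the hierarchy never closes
% and must be truncated: by the derivative expansion for scalar fields,
% or by the vertex expansion in the QCD applications discussed here.
```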
So there are generalizations of these equations where you can work with the 2PI effective action instead of the 1PI one.

Thanks, and one more question, again on the 2PI approach. I know that there are generalizations, extensions of the machinery, that apply to non-equilibrium cases, like when I start with some initial state and I want to do the dynamics. How much does your approach rely on the specific problem of calculating ground-state properties? When you want to include time dependence, what do you typically have to do?

Well, I think that one approach used is to do some analytic Wick rotation, to be able to describe some time dependence and also to work out of equilibrium. I am not an expert on these techniques, but again, in this review you can see that there is some work in this direction. You can see from the structure of the equation that this R is becoming an infrared regulator here; essentially, without it you would get an infrared divergence. So if you try to have a time derivative, then typically you either have to break the symmetry or you have to do some analytic continuation. I think these are the two methods used when you are studying time dependence. Thank you.

Any more questions? I think there are no more questions, so let's thank Kevin again. I don't know if there is any announcement: no particular announcement, except that the next and last session will start at 1:30 today.