Thank you. First I would like to thank the organizers for the possibility to be here and to speak. My topic is the functional renormalization group in fermionic systems, and I would like to concentrate on three particular topics where the presence of a fermionic background may need more thorough treatment. One is how the local potential approximation works for fermions. The second is the case of finite chemical potential when the temperature is small and the Fermi-Dirac distribution function becomes a theta function. And the third is some words about the Higgs stability bound and the role of irrelevant operators. First, some introduction, and let us take a somewhat wider view of the exact renormalization group. By observation we know that the behavior of a subset of the world is governed by some relevant concepts. For example, for a free-falling body these concepts are time, the position of the body, and so on; everyone knows that. There are some special concepts, the laws, which determine the relations among the other concepts. We also know by observation that these concepts are far from uniform: different segments of the world work with different concepts. That is why distinct scientific disciplines appear, each speaking a different language; from quantum mechanics to, say, sociology, we have vastly different concepts, and the relevant concepts of one segment are mutually irrelevant for the others. It is not hard to realize the analogy between renormalization-group fixed points and these subsets of the world with different concepts. From this point of view, the exact renormalization group describes the subsets of the world. And since we perceive the existing world through our intelligence, it is not surprising that artificial intelligence also has a deep relation to the renormalization group.
There are some recent papers on that on the web: deep learning and the renormalization group somehow work the same way, using the same technique to understand the world. Going a step further and concentrating on quantum field theory: quantum field theory is a representation of the world where the concepts are represented by fields, or powers of fields, and the laws are represented by the effective action, which comes from the well-known path integral. The advantage of this representation is that it is constructive, and we can in principle describe how one fixed point goes over to another. In particular, we can tell, if we change the action by a quadratic regulator term, how the effective action changes under this change. This is the Wetterich equation, which is in some sense a generalized Callan-Symanzik equation. It is formally a one-loop equation, and yet exact, which is a very important point: it can be treated as an alternative definition of the quantum field theory. Now let us see how the fermions present themselves in this formula. There is a supertrace, which means that a fermion loop brings in a minus sign. And the second derivative must be a left-right derivative, not just a one-sided derivative, which is also a property of the fermionic modes. If we assume one-parameter scaling for the regulator R introduced on the previous slide — which is not always true, but if we assume it — then we can require the regulator to have certain special properties, and in that case the resulting effective functional interpolates between the classical and the effective action, which is a good property.
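For reference, the flow equation described here is usually written in the following standard form (a sketch of the textbook expression, not the speaker's slide):

```latex
\partial_k \Gamma_k[\Phi] \;=\; \frac{1}{2}\,\mathrm{STr}\!\left[\left(\Gamma_k^{(2)}[\Phi] + R_k\right)^{-1}\partial_k R_k\right],
\qquad
\Gamma_k^{(2)} \;=\; \frac{\overrightarrow{\delta}}{\delta\Phi}\,\Gamma_k\,\frac{\overleftarrow{\delta}}{\delta\Phi},
```

where the supertrace STr carries the fermion-loop minus sign and the second functional derivative is the left-right derivative mentioned in the talk.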
By choosing the concrete form of the regulator, we can mimic different techniques existing in quantum field theory, from the perfect-action description, which corresponds to the choice of a sharp cutoff, through the optimized regulator. So we have different techniques which can be applied here; I can also mention the CSS regulator, a multi-parameter form which can accommodate a lot of possibilities. Going now directly to the presence of fermions: the effective action now depends not only on the bosonic variables but on a generic fermionic background as well. If a fermionic background is present, there can be special links between fields which are not present in a purely bosonic theory: it can link a fermionic fluctuation to itself, or a fermionic fluctuation to a bosonic fluctuation. I will introduce the Nambu representation, where the fermionic field and its conjugate are treated in a single vector. In this case the fermion matrix, which is the left-right second derivative of the effective action, becomes a supermatrix, having not only diagonal elements: we have boson-fermion mixing, so the fermionic and the bosonic modes are mixed. These are supermatrices, and if we want to calculate the supertrace, we have to take special care with them. How should one do this calculation? We want to calculate the supertrace of this formula, which is equivalent to the supertrace log of the regularized Γ⁽²⁾. One finds a good representation of this supermatrix, for example in the form shown, where the coefficients a, b, c and c-bar come from consistency — four parameters for the four elements of the supermatrix. Then the right-hand side, this trace log, is easy to evaluate, and we see that three terms contribute to it.
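As a minimal numerical illustration of the supertrace bookkeeping (block sizes and matrix entries here are made up, and the boson-fermion mixing is switched off so that the trace log factorizes into the two blocks):

```python
import numpy as np

def supertrace(M, nb):
    """STr of a square matrix whose first nb rows/columns are bosonic:
    STr M = tr M_BB - tr M_FF, the minus sign being the fermion-loop sign."""
    return np.trace(M[:nb, :nb]) - np.trace(M[nb:, nb:])

# Illustrative diagonal blocks with no mixing, so STr log M reduces to the
# log-determinants of the two blocks with a relative minus sign.
B = np.diag([2.0, 3.0])   # bosonic block (made-up values)
F = np.diag([4.0, 5.0])   # fermionic block (made-up values)
M = np.block([[B, np.zeros((2, 2))],
              [np.zeros((2, 2)), F]])

str_log = supertrace(np.diag(np.log(np.diag(M))), nb=2)
# str_log = log 2 + log 3 - log 4 - log 5 = log(6/20)
```

In the mixed case of the talk, the off-diagonal blocks are non-zero and the evaluation goes through the parametrized representation described above.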
The first term corresponds to the propagation of a fermionic field in a bosonic background. The second corresponds to a bosonic propagation in the presence of the bosonic background. And the third corresponds to a mixed propagation in a fermionic background. So this is how to calculate the trace log in a generic fermionic background. The next topic is how it can be approximated: how can we do actual calculations in this fermionic theory? The Wetterich equation is exact, but we still need some means to treat it. Usually one chooses an operator basis and expands the effective action in this basis; then we have evolution equations for the coefficients, which gives a flow. But a general basis is too large — we cannot treat a general basis in general — which means that in practice we need an ansatz to restrict our calculations. The most beloved ansatz is the local potential approximation. For bosonic fields we all know what happens: we assume a generic potential which is a function of the field, and a wave-function renormalization can be added too. With these assumptions — using the optimized regulator and going to finite temperature and chemical potential, with the KMS relations defining the distribution functions — one can work out the generic structure of the evolution equation for the potential U. It contains the multiplicities of the different fields as well as the generic frequencies containing background-dependent masses, and the Bose and Fermi distribution functions appear. So this is the general formula, at finite temperature and chemical potential, for the evolution of the local potential in the local potential approximation. The question is how it works for fermions.
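Schematically, for the optimized regulator in 3+1 dimensions, the LPA flow described here takes a form like the following (a representative expression with assumed notation, not the speaker's slide; the multiplicities and mass functions are model-dependent):

```latex
\partial_k U_k(\phi) \;=\; \frac{k^4}{6\pi^2}\left[
\sum_b N_b\,\frac{1 + 2\,n_B(\omega_b)}{\omega_b}
\;-\;
\sum_f N_f\,\frac{1 - n_F(\omega_f - \mu) - n_F(\omega_f + \mu)}{\omega_f}
\right],
```

with $\omega_i = \sqrt{k^2 + M_i^2(\phi)}$, the Bose distribution $n_B(\omega) = 1/(e^{\omega/T}-1)$ and the Fermi distribution $n_F(\omega) = 1/(e^{\omega/T}+1)$, showing how the background-dependent masses and the two distribution functions enter.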
The problem with fermions is that in principle one could assume a potential containing the fermionic variables. But we also know that for fermions the product of two identical fields at the same position is zero. So it seems that locality, together with this restriction, also restricts the form of the available potentials: since we anyhow build quadratic forms from the fermions, there exists a large enough power which is already zero by this requirement. That would suggest that the potential which can be written for the fermions is only a finite polynomial. But there are also counter-arguments. We know that if we bosonize the fermion fields — for example, if instead of quarks we speak about the pions — then for the pions we allow any form of the potential, even in the local potential approximation; expressed through the quark fields, this is nevertheless a local potential for the quarks. The other argument is that if we calculate n-point functions of fermions, they can be non-zero even in the local limit. It depends on how we take the limit, but there are cases where the result is non-zero even in the local limit. All of this means that there must be a way to define a local fermionic potential which works, and there are works on that. Let us see how it can be done. First we have to understand what the assumption behind the local potential approximation is: the assumption is that the propagators vary much more slowly than the vertices — the vertices are much more local objects than the propagators.
That means that if we calculate in perturbation theory a diagram with proper vertices, then its numerical value is very close to that of another one where we contract these vertices to a single point. This is the philosophy behind the local potential approximation: the LPA is a limiting process also in the case of bosonic fields. We can use this idea to define the fermionic local potential approximation. We do the following: we introduce a quadratic variable in the fermionic fields, the bilinear ψ̄ψ, smeared out over a certain volume. This is a bosonic operator; we can take its n-th power and it is not zero. After this procedure we let the volume ΔV go to zero, and in this notation we can write (ψ̄ψ) to the n-th power, which is then not zero. So heuristically, what allows us to define the local potential approximation for fermions is that the position is not a point but a patch, a finite patch, and we can work with the fermionic operators within this finite patch. To see whether the fermionic local potential approximation is a valid concept, we have done some work in the Gross-Neveu model, where the invariant I built from the fermionic variables is the sum of ψ̄ψ over the different flavors, finally squared. This model has a chiral symmetry and a flavor symmetry, and the goal is to construct the effective action, which must depend on the invariants made from the fields. This is the ansatz we have used.
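In formulas, the smeared bilinear described verbally here can be sketched as (notation assumed):

```latex
I_{\Delta V}(x) \;=\; \frac{1}{\Delta V}\int_{\Delta V(x)} d^d y \;\bar\psi(y)\,\psi(y),
\qquad
\left(\bar\psi\psi\right)^n(x) \;:=\; \lim_{\Delta V \to 0} \left[I_{\Delta V}(x)\right]^n \;\neq\; 0 .
```

Since $I_{\Delta V}$ is a bosonic (commuting) object, its powers do not vanish before the limit is taken, which is what evades the pointwise nilpotency of the Grassmann fields.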
You see that in the ansatz, instead of using a linear function of I, we use a potential U(I), and we neglect other invariants. In principle the action could depend on other invariants as well, but, as it has turned out, it is a self-consistent assumption that it is a function of I alone. After doing all the calculation I mentioned before, we obtain a result for the running of this U, which is already a fermionic effective potential, a fermionic local potential. A very interesting feature of the result is that besides the N_f fermionic modes there appears another contribution with a negative sign — the sign that a bosonic field would contribute if we had both bosonic and fermionic fields from the start. So a bosonic contribution appears without explicitly introducing a bosonic operator. We can analyze this flow; here is the fixed-point pattern and the critical exponents, which agree well with the bosonized version of this model. We can also reconstruct the fixed-point potential within the approximation — I do not want to speak in great detail about this topic. So this is how we should define the fermionic local potential approximation. The next topic is fermionic systems at finite temperature and chemical potential; let us see what the problem is and what the solution is. As a toy model, let us consider a Yukawa-type model with a single bosonic and a single fermionic degree of freedom — the simplest boson-fermion model that exists.
The FRG equations can be written down, and the ansatz for this model is that now we assume a potential only for the bosonic field, while the fermionic field interacts through the Yukawa term, so there is no complication with higher ψ̄ψ terms. For the running of U we then have, as I have already mentioned, the generic form containing the bosonic contribution at finite temperature and the fermionic contribution at finite temperature and chemical potential. If the temperature and the chemical potential are finite, there is no problem: it is a perfectly bona fide differential equation, and you can sit down and solve it. Of course there are complications, but no conceptual problem, and people have done these computations. The problem arises when we go to zero temperature, because at zero temperature the Fermi-Dirac distribution becomes a theta function, which makes the right-hand side of this differential equation non-analytic. Therefore analytic methods will not work at all, and in the numerics, if one does not tune the scales very precisely in that regime, one can easily find non-convergent results. The presence of the theta function actually signals the presence of a Fermi surface. If we describe it in the (k, φ) coordinate system — k the scale variable, φ the field — then this Fermi surface is an ellipse, or a circle if we rescale φ with the Yukawa coupling g. How can we treat such a differential equation? As is generally the case in such situations, one separates the study into an external and an internal regime and requires continuity on the borderline. This is what I will present here: how this regime must be treated.
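The zero-temperature limit described here is easy to see numerically; a small sketch (energies, chemical potential and temperature in arbitrary units):

```python
import math

def fermi_dirac(E, mu, T):
    """Fermi-Dirac occupation n_F(E) = 1 / (exp((E - mu)/T) + 1)."""
    return 1.0 / (math.exp((E - mu) / T) + 1.0)

# As T -> 0 this approaches the step function theta(mu - E): states below
# the Fermi surface are fully occupied, states above are empty, and the
# right-hand side of the flow equation loses analyticity exactly at E = mu.
T_small = 0.01
below = fermi_dirac(0.5, 1.0, T_small)   # essentially 1
above = fermi_dirac(1.5, 1.0, T_small)   # essentially 0
at_fs = fermi_dirac(1.0, 1.0, T_small)   # exactly 1/2 at the surface
```

With the Yukawa dispersion E = sqrt(k² + g²φ²), the locus E = μ of this step is exactly the ellipse in the (k, φ) plane mentioned in the talk.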
So we have two regimes: the outer regime at large energies and the inner regime at small ones. Let us see what the differential equations are and how to treat them. In the outer regime we start the evolution from k = Λ and go down, and from this point of view the presence of the Fermi surface does not affect the evolution at all. This is the differential equation we have, which means that its solution is the same as at μ = 0. Once we have the solution, we read it off along the Fermi surface, and this provides the initial condition — the boundary condition — for the inner-regime evolution. Since this is the standard renormalization-group evolution, we have assumed the most simple polynomial form in the field (a φ² is missing here on the slide). Then, as I said, we take the value of the potential at the Fermi surface, and this provides the boundary condition for the inner regime. But how do we treat the inner regime? The differential equation is not too complicated; the problem is that we now have a curved boundary, which is not easy to treat — especially because the equation is not symmetric: in the k direction it is first order, in the φ direction it is second order. So we cannot mix the two directions arbitrarily, because that would introduce a second-order derivative in the k direction. The idea is therefore to transform this curved surface — here it is a circle — into a rectangle, because in the rectangular setup the evolution is very easy. This is the way we can do that.
We introduce new variables x and y: instead of k we introduce x(k, φ), and instead of φ we introduce y(k, φ). With this choice — and there can be different choices — we have transformed the Fermi surface onto a one-by-one rectangle. The differential equation becomes a little more complicated, but it is very straightforward to derive. The boundary conditions are simply zero around this boundary, and the initial condition appears here: v₀, the value of the potential read off on the Fermi surface. So now we have a rectangle, zero boundary conditions, and a second-order differential equation. What to do? This is also well known; it is like an electrodynamics problem. You need a basis which satisfies the boundary conditions — here, zero at the boundaries — and we have the initial condition on the upper line. In our work we have chosen harmonic functions, but there can be other choices, like the Chebyshev polynomials used by Borchardt and Knorr. The actual form of the basis is not very important, but it gives very nicely convergent results, as you will soon see. There is another good property of choosing an orthonormal set of basis functions: by integration we can transform the partial differential equation for U into ordinary integro-differential equations for the coefficient functions, and it is much easier to solve ordinary integro-differential equations. So this is what we have done; we have this equation.
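The inner-regime strategy — zero boundary conditions on a unit square, a basis that satisfies them automatically, and a Galerkin projection turning the PDE into ordinary equations for the coefficients — can be sketched on a toy analogue. The equation below, first order in the scale-like direction x and second order in the field-like direction y, is only a stand-in for the actual flow equation, and a sine basis stands in for the harmonics of the talk:

```python
import numpy as np

def solve_spectral(u0, nmodes=16, x_end=1.0, ny=400):
    """Solve d_x u = d_y^2 u on the unit square with u = 0 at y = 0 and y = 1
    by expanding u(x, y) = sum_n c_n(x) sin(n pi y). The basis satisfies the
    boundary conditions automatically, and the Galerkin projection decouples
    the modes into ordinary equations c_n'(x) = -(n pi)^2 c_n(x)."""
    y = (np.arange(ny) + 0.5) / ny                 # midpoint grid for projection
    n = np.arange(1, nmodes + 1)[:, None]
    # project the initial condition: c_n(0) = 2 * int_0^1 u0(y) sin(n pi y) dy
    c0 = 2.0 * np.sum(u0(y)[None, :] * np.sin(np.pi * n * y), axis=1) / ny
    c_end = c0 * np.exp(-(np.pi * n[:, 0]) ** 2 * x_end)
    yy = np.linspace(0.0, 1.0, 201)
    return yy, np.sum(c_end[:, None] * np.sin(np.pi * n * yy), axis=0)

yy, u = solve_spectral(lambda y: np.sin(np.pi * y))
# exact solution for this initial condition: exp(-pi^2 * x) * sin(pi * y)
```

In the real problem the projected equations stay coupled through the nonlinear potential, so the coefficient system is an integro-differential one rather than decoupled, but the boundary-condition and projection mechanics are the same.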
And since these are cosines, we can also do the integration of some powers by hand; that is the topic of the next slide. We have this 1/ω expression, which contains the square root of k² plus the second derivative of the potential. What we do is power-expand it around some appropriate m², and we choose m² for the best convergence — it is an optimization parameter here. How far we expand this series around m² controls the "loop order", in quotation marks. That means that p = −1, where we just neglect all of these contributions, is mean field; p = 0 corresponds to one loop; and p > 0 is not a strict loop order, but some RG-improved higher loop order. As for the parameter choice, we have chosen some nuclear-physics-motivated parameters for the later application; I will show you results on how this has been used. So these are the results. I have chosen two specific values of μ here, but you can do it for any μ. Let us look at the right panel. The orange dashed line is the mean field, the blue is one loop, and all the others are the higher-loop approximations. The physical quantity is the minimum of the potential, because this is what describes the phase transition; here is one minimum, and this is the external region. You see that the mean field approximates the minimum very badly; one loop is much better; and at higher orders all the other contributions converge very well around the minimum.
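The "loop order in quotation marks" controlled by the expansion depth can be illustrated with a small sketch: expand 1/ω = (k² + U'')^(−1/2) around a reference curvature m² and keep terms up to order p. The numerical values below are arbitrary:

```python
import math

def omega_inv_series(k2, u, m2, p):
    """Partial sum through order p of the Taylor expansion of 1/sqrt(k2 + u)
    around u = m2. In the talk's counting, p = 0 plays the role of one loop
    and larger p of RG-improved higher orders (p = -1, no terms at all,
    would be the mean-field level)."""
    total = 0.0
    for order in range(p + 1):
        # n-th derivative of (k2+u)^(-1/2) is
        # (-1)^n (2n-1)!! / 2^n * (k2+u)^(-(2n+1)/2)
        dblfact = math.prod(range(1, 2 * order, 2))
        total += ((-1) ** order * dblfact / 2 ** order
                  * (k2 + m2) ** (-(2 * order + 1) / 2)
                  * (u - m2) ** order / math.factorial(order))
    return total

exact = 1.0 / math.sqrt(1.2)
one_loop = omega_inv_series(1.0, 0.2, 0.0, 0)   # leading term only
improved = omega_inv_series(1.0, 0.2, 0.0, 12)  # converges toward exact
```

Choosing m² close to the actual curvature U'' at the point of interest is what the optimization of m² for best convergence amounts to: the expansion parameter (u − m²) is then small.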
What we see here — and it is a generic experience in this type of study — is that if the curvature of the effective potential is positive, so we are in the physical regime, then the convergence is very nice. If the curvature of the effective potential is negative, meaning we are in an unphysical situation, then the convergence is bad, but we do not care, because it is an unphysical regime. So from this limiting process we can identify the minima of the potential very well. And if we go to p → ∞, very large p, then we recover the Maxwell construction: the procedure flattens out the potential between the minima, so the convergence goes that way, it converges to the Maxwell construction. And since this is just a differential equation — the mathematics does it for you, it is not a grid method — it is very cheap, very fast to do the calculations. For example, one can easily produce this phase diagram, exploring the coupling-constant regimes where the transition is second order or first order; here are the borderlines. What we see actually supports the common wisdom that the mean field predicts the strongest transition, one loop a less strong one, and the FRG the least strong phase transition. That means if we pick a point here, it is already a first-order transition according to the mean field, but according to the exact result it is still a second-order phase transition; in that case the FRG predicts a weaker phase transition. But we can go on and obtain some physical results as well — so it is physical in that sense, though really a study of the model. And we can do other things as well.
For example, we can calculate the pressure curve for different chemical potentials, and this is in principle already measurable. We see that for a given chemical potential the mean field gives the largest pressure and the FRG the smallest. But the chemical potential is not a physical observable, so why not calculate the equation of state? Interestingly, in the equation of state the trend reverses: for a given energy density the one loop is the softest, then the mean field, and then the FRG. So if we start from mean field and calculate the one-loop correction, this softens the equation of state; but in the exact result it turns around, and the FRG equation of state is stiffer than the mean-field one. This has consequences, of course, because if we want to calculate the stability of compact stars, for example, it is exactly the stiffness of the equation of state that matters. So what we did next: from the equation of state we computed the stability curve of compact stars — here really a "fermion star", since with just one bosonic and one fermionic field this is a very simple-minded model with nothing to do with real neutron stars. It is not a realistic case, but it still gives a very good-looking curve. What we see is that the red line is the mean field, and the FRG allows a somewhat larger limit for the mass of the star — in this model just 5 to 10 percent, but we have studied other models as well. How much time do I have? Not much, really. A couple of minutes? Two?
Okay, since I do not have a lot of time, let me just very shortly sketch what our results say about the stability of the Higgs. We have heard a lot about the stability of the model, for example in the talk by René Sondenheimer, so I do not want to go into detail on this slide. We take the top-Higgs sector and try to analyze the upper and lower bounds for the Higgs mass. We use the FRG equation for the potential; we introduce in general the fermionic local potential, but at the end of the day we keep only the linear order in the fermionic invariant — this is the Yukawa coupling. There are two issues here. One is that when we linearize a function, the point where we do it can matter. That is the first point we have studied: since we know the complete dependence of the system on the Yukawa invariant, we can examine different points around which we linearize the effective potential. So we linearize the right-hand side of the Wetterich equation. We can do it near I = 0, but we can do it around other points, and we can even neglect the linearized term completely, which means that the Yukawa coupling does not run. We then have three versions, and we can study the upper and lower bounds. The message of these figures is that it does not really matter: even if you do not run the Yukawa coupling at all, it leads to the same lower stability bound for the Higgs field.
So this is the maximal cutoff allowed by the stability of the vacuum, and the choice of the expansion point is actually irrelevant. But there is another issue we have studied, written here — and I apologize, I have only one or two sentences left. The point was to understand the effect of other, in principle irrelevant, operators. In addition to the Yukawa coupling we introduced a background-dependent Yukawa coupling as well: we have h₀, which is the Yukawa coupling, and h₁, which is the coefficient of ρ times the Yukawa term. We started from the stability bound, from λ = 0, and considered the running. You see that, as we expected, h₁ is irrelevant: the runnings with h₁ = 0 and h₁ ≠ 0 are parallel towards the infrared, which is what irrelevance means — with the couplings defined at the cutoff, everything is reproducible. But if you go in the other direction, towards the ultraviolet, a surprise can happen: the running of λ does not cross zero, so it does not enter the unstable-vacuum regime; it turns back. This is because h₁, which is irrelevant in the infrared, can in the ultraviolet change the slope of the running of λ, so λ turns back and runs away from zero. We can study what happens in this case, and after a while we have found that the Yukawa coupling becomes negative, and this results in an instability of the vacuum. But anyhow, it means that we have improved the stability bound for the Higgs very much, and this is the effect of an irrelevant operator which you cannot measure in experiments — I mean accelerator experiments.
And actually one does not really know in advance which operators do this job of turning back the running and which do not. For example, if you treat gravity carefully — which is irrelevant according to the standard model in the first place — it can also shift the stability bound towards higher cutoffs, up to the Planck scale. Okay, conclusions. We have studied three problems. The first was the local potential approximation for fermionic systems, and we have seen that it can be defined, in the way I have told you. Second, we have studied the problem of the sharp Fermi surface at T = 0 and finite chemical potential, and I have presented that, with a proper choice, we have to separate the evolution into two domains, and we can use a complete basis to treat the evolution in the small-energy domain. The third one is the Higgs stability, with the message that the stability bound itself can depend on irrelevant operators, which cannot be measured in experiment. Thank you very much.