The live transmission is the start... Oh, actually, it's going live right now. So I guess I have to introduce Sander again, sorry about that, connection problems. I think we are now live. So welcome everyone to the Lowe's 360 webinar. My name is Alejandro and I'll be today's host. Today we have the pleasure of having Sander Mooij, and he's going to talk about renormalization, finite renormalization and naturalness. He did his PhD at the University of Amsterdam, then moved to Santiago de Chile for a postdoc, and then to another postdoc in Lausanne, in Switzerland. He's an expert in perturbative QFT, at the intersection between cosmology and particle physics. We're very happy to have you today. Remember that you can ask questions over YouTube or Twitter, or email us the questions, and they will be read at the end of the transmission. So thank you, Sander, and take it away.

Thank you, thank you. I told you two minutes ago that I'm very honored by the invitation, and I still stand by that statement. Let me see, so I share my screen... wait, now my computer has to think about this... I think now you have the whole thing. Perfect, yeah, we can see yours. Thank you.

So, good. I wanted to talk about this work that I did with my boss at EPFL, Misha Shaposhnikov. We've been working on it for quite some time. I think it's very nice; other people, like the referees, think it's either wrong or trivial. So I'm very curious for your opinion. But now I see that Alejandro froze on my screen, so are you still with us? Yes, yes, that's right, yes. Okay, good.

So the whole aim of the talk is to challenge the standard naturalness picture, and I will explain exactly what I mean by that. First we will review that picture and I'll show you what I want to say. Then we're going to talk about the standard way to do renormalization, with all the intermediate infinities. And then I'll show you another way to do it, without infinities or large corrections. With that, well, I hope to convince you that naturalness is not a good guiding line in physics.

So, let us begin from the Standard Model. Here it is; it has been around for 30 years already, at least. And the question is, will there be something else? Probably the Standard Model is valid until some energy scale Lambda, and at that energy scale we expect new physics to show up. The question is, what is the value of Lambda? There are the proposals of supersymmetry and composite Higgs models, and there you will typically see that this scale of new physics is around 1000 GeV. It could be that new physics comes up much later, in theories of grand unification; the typical scale there is 10^16 GeV. Quantum gravity will happen around the Planck scale, 10^18 GeV. Or perhaps there's no new physics at all; then Lambda is, of course, up at infinity. The point with Lambda is that apparently the fact that there is new physics, that there is some very high scale in the game, also changes low-energy physics. Here you see just a cartoon of how you would compute the Higgs mass. There is the standard tree-level contribution, and then there's a loop. And if in this loop there's a particle running which is a new particle, part of the extension of the Standard Model that comes into play at this large scale Lambda, then this gives a very large contribution to the mass. That means that when you renormalize, you will need a very large counterterm to cancel it and still end up with the mass at the scale that you observe, to still end up with 125 GeV. So the larger Lambda, the larger the cancellation you need between the heavy loop and the counterterm. That's why I say unnaturalness: this unnatural cancellation grows with increasing Lambda.
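A toy numerical illustration of that scaling may help here (this is my own sketch of the argument above, not part of the speaker's formalism): with a hard momentum cutoff Lambda, the one-loop correction to a scalar mass grows like Lambda squared, so the counterterm that has to cancel it grows just as fast.

```python
# Toy illustration (my own sketch, not the speaker's method): the Euclidean
# one-loop tadpole with a hard cutoff grows like Lambda^2, so the counterterm
# needed to keep the renormalized mass at its observed value grows with it.
import numpy as np
from scipy.integrate import quad

def tadpole(m, cutoff):
    """(1/8 pi^2) * integral_0^cutoff dl  l^3/(l^2+m^2)  ~  cutoff^2/(16 pi^2)."""
    value, _ = quad(lambda l: l**3 / (l**2 + m**2), 0.0, cutoff)
    return value / (8.0 * np.pi**2)

m_obs = 125.0                      # observed light mass, think GeV
for Lam in (1e3, 1e5, 1e8):        # TeV-ish, intermediate, very high scale
    loop = tadpole(m_obs, Lam)     # grows ~ Lam^2
    counterterm = m_obs**2 - loop  # the number you would have to tune by hand
    print(f"Lambda = {Lam:8.0e}   loop ~ {loop:.3e}   counterterm ~ {counterterm:.3e}")
```

Running this, the loop piece already overwhelms the observed mass squared once the cutoff is well above a TeV, which is exactly the statement that the required cancellation grows with Lambda.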
So what about Lambda? Well, it could be that there is supersymmetry. And if supersymmetry were unbroken, then the effect of every loop is canceled by a loop with the corresponding sparticle, the superpartner of the particle in the original loop. Then there would be no fine-tuning at all. Now, by now we know that unbroken supersymmetry doesn't exist. It could still be broken at the TeV scale, and then the cancellation that we need is of order TeV, so still relatively small. So the general statement is that any new TeV physics lowers Lambda and therefore improves the naturalness of the theory. And that is why people are very interested in these TeV physics models: just because they would make our world more natural, just because the cancellations we have in the computation of the mass, for example the mass of the Higgs field, become much milder if there is low-energy new physics. That's why there's a big hunt for TeV physics guided by this line. So that's the standard naturalness picture, and in the rest of the talk we want to challenge it.

So first I just give you a small overview of the talk. As I said, the aim is to show the absence of meaning in UV divergences and also in the cancellation of large quantum corrections. This is exactly where naturalness issues come in; it also goes under the name of the hierarchy problem. The method to do that is to compute observables in a way which does not encounter all these intermediate corrections that we need to cancel. Of course, the method is the more technical part of the talk, but for the discussion, for the spirit of this talk, the only thing you should take away from the method part is that it exists. For the argument we want to make, the only thing we need is that this method exists. I will show how it works, but if you don't like the technical stuff, you just take away: it exists. So why do we do all that? I would say it makes theories more consistent and it is very good for the intuition, because I always thought that these UV infinities really had some physical sense, and now I learned that actually there is no physical sense there, because there's a method in which you can just circumvent them. So the intuition is already nice, but the relevance here, as I pointed out just before, is: well, what are we doing in physics? We test some hypotheses. And this can be a pretty costly business, because the LHC, for example, doesn't come for free. So intuition might change the choice of which new experiment to build and which hypothesis to test. That's why, in the end, it looks like innocent intuition, but there's also hard money involved, I would say.
But that's what we hope to achieve. Okay, so let me review with you the standard way to renormalize; this is all textbook material. So what is the picture? There's just one slide to connect experiment and theory. On one hand, in some experiment, for example the LHC, one measures cross sections and decay rates, and these fill the entries of the S-matrix. At the other end, all these S-matrix elements can be computed, because they all follow from correlation functions, from n-point correlation functions that are computed by summing Feynman diagrams. In the rest of the talk, this gamma with the bar over it denotes a renormalized, physical correlation function. So if you want, the aim is to compute the right-hand side here in a finite way. I will work with the simplest theory possible to make our points: phi-four theory. How does it work? You need to compute its Green's functions, its correlation functions. And once you have them, you need two initial measurements at some momentum scale that you choose, at some value of the external momentum. With these, you determine the values of the parameters in your Lagrangian. And once you have them, you are all set to make predictions for any new experiment, at any new external momentum scale that you want. And of course the first, the red, if you want calibrating measurements can be done at any external momentum scale, and the choice of that momentum scale will change the values for m and lambda, which are intermediate quantities. But in the end, the predictions that come out will be the same, at least up to the same order in perturbation theory. So this is how phi-four theory should work.

So what do we need to do? We need to compute these correlation functions, these n-point functions, the gamma-bars. At tree level, that's not so difficult, because there you see the propagator, which is the inverse of the two-point function; you see a perfectly finite result. And there's the tree-level contribution to the four-point function, and that's all perfect. Of course, all these correlation functions are expansions in the coupling lambda: the two-point function begins at order lambda to the zeroth power, and the four-point function begins at order lambda. That is at tree level. When we go to loop level, we will find the next contributions to the two-point function and the four-point function.

So what happens when we go to loops? We run into these UV divergences. This is a very old observation, almost a hundred years old. Here I show you the simplest example: we compute the one-loop contribution to the two-point function, and that is going to give us a quadratically divergent integral, just because there's a propagator running in the loop, which goes like one over l squared, and we need to integrate it over four momentum dimensions. So we have something which is quadratically divergent. What to do? First, we need to isolate these divergences. In the process of regularization, you're not solving for them, you're isolating them. And of course, the best way to do that is dimensional regularization, which was invented partly in South America, by Bollini and Giambiagi, and partly in my country, by 't Hooft and Veltman. So what's the idea here? Instead of doing the integral over four dimensions, I formally continue it to an integral over four minus two epsilon dimensions.
Of course, this is all analytic continuation; there is nothing physical in it, it seems. But what is the result? Now, instead of doing a computation over four dimensions, I do it over four minus two epsilon dimensions, and this is an integral I can do. So what did we achieve? On the left-hand side, there's a divergent integral that we didn't know how to approach. The effect of dimensional regularization is that the divergence gets isolated: when we go back to four dimensions, we take the limit epsilon goes to zero, and the first contribution on the right-hand side is the divergence, while what's left is all finite stuff. So this is a very smart way to peel off the infinity from the integral and still end up with something physically meaningful. That's dimensional regularization. It looks like a mathematical trick; if you want, there are some lecture notes where you can find that there's actually some physics in it. This is just the integral that I showed you before. If you formally do the integral over the two epsilon extra dimensions first, you end up with the integral you had before, times some factor that goes to one in the limit epsilon goes to zero, as it should. But for every value of epsilon that you choose, you can still find some large momentum, some large value of l, for which that factor differs appreciably from one. So this is just an aside, a physical interpretation of dimensional regularization: it actually modifies the UV physics. It acts like a cutoff, but a smart cutoff, a cutoff that preserves all the physics.

Okay, but that was just an aside. Once you have managed to isolate the divergences in these one-over-epsilon poles, then comes the step of renormalization. And in renormalization, what you effectively do is just subtract the divergences. There are many words for it: counterterms, bare quantities, physical quantities. But in the end, what you do is add some terms by hand to your action and compute the values that you need to end up with something finite. Since we just found divergent contributions, you need divergent counterterms: these are delta Z, delta m squared, delta lambda. And if you tune them very precisely — and this is a formal tuning — you end up with renormalized, finite correlation functions that you can use for predictions. Then there are different schemes, because you have to subtract the one-over-epsilon poles, of course, but you can decide for yourself whether you want to subtract a finite piece as well. In the MS-bar scheme, you only remove the one-over-epsilon. In the mass-dependent schemes, you organize your counterterms such that at one chosen momentum scale there are no quantum corrections at all. So these are just choices: you need to subtract the one-over-epsilons, and besides that you subtract what you want. That was the two-point function. In the four-point function, you have a logarithmic divergence which you need to handle as well. And from then on, at least in phi-four theory, everything is convergent, because here you see the six-point diagram with three propagators: that is one over six powers of the loop momentum, you integrate over four powers of it, so you have something manifestly convergent.
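To put the step just described into one formula (written schematically; the finite constant and sign conventions depend on the scheme, so take this as indicative rather than as the exact expression on the slides): the dimensionally regularized one-loop two-point integral isolates the divergence as a pole in epsilon, and the MS-bar counterterm removes exactly that pole,

\[
\mu^{2\epsilon}\!\int\!\frac{d^{4-2\epsilon}\ell}{(2\pi)^{4-2\epsilon}}\;\frac{1}{\ell^{2}+m^{2}}
=\frac{m^{2}}{16\pi^{2}}\left[-\frac{1}{\epsilon}+\ln\frac{m^{2}}{\mu^{2}}+\text{const}\right]+\mathcal{O}(\epsilon),
\qquad
\delta m^{2}\big|_{\overline{\rm MS}}\;\propto\;\frac{\lambda\,m^{2}}{16\pi^{2}}\,\frac{1}{\epsilon}.
\]

In a mass-dependent scheme one would instead subtract the whole bracket at a chosen reference momentum, which, as the speaker notes later, is the choice that the finite method's boundary conditions mimic.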
And this goes through at all orders in perturbation theory. Sometimes, of course, you can have internal sub-divergences when you go to higher loop order, but that can all be handled by what is known as the skeleton expansion. So the picture you arrive at — and I remind you, this is the standard interpretation of renormalization — is: you begin with some measurements, the two calibrating measurements, you use your renormalized correlation functions to find the values of the parameters in your action, and from there on you generate all the predictions that you want. So this works. If you take the intermediate step as a black box, it works. When you open the black box, you see there's an issue, because what you actually did was subtract one infinity from another. And as soon as you have encountered infinities, your perturbation expansion doesn't make any sense anymore. So physically it does work; mathematically, of course, it's wrong from the beginning. From the moment you encounter the intermediate divergences, everything has become meaningless. As people have said, it's hocus pocus; it's a miracle that the correct physics still comes out. So the first aim of the story is to see whether we can play the same game but without any intermediate infinite cancellation, so without encountering any one-over-epsilon pole in the computation of the Green's functions. That's the first aim of the talk: to find such a finite QFT. This QFT will, for sure, give exactly the same results for correlation functions, but its computation doesn't require any divergences that need to be removed by divergent counterterms.

Okay. So here comes the technical part, because now I want to show you how that works. If you don't like all the details, as I said before, the important point here is that this stuff exists. So how does it work? As I said, we want to find a way — or at least dig out from the literature a way — to do finite renormalization: a way to compute n-point functions without running into these intermediate cancellations. This is a problem that people have looked at for many years already. There was LSZ — Lehmann, Symanzik and Zimmermann, the same people known from the famous LSZ formula — who had a program in which they wanted to compute all correlation functions without ever running into any divergence. That was in the 50s. People tried to continue this program in the 60s, and we've been through a lot of the literature, but it seems that this program has never come to an end, so it's still ongoing business. It looks promising, but it has not resulted in any firm result. Then there's an approach based on the Callan-Symanzik equations, proposed by Callan and other people in the 70s. I put this one in boldface because this is the approach that we want to follow: this is the one that we want to explain, to use, and to generalize. Other approaches are BPHZ, by Bogoliubov, Parasiuk, Hepp and Zimmermann. They use the so-called R operation, which is a subtraction on the integrands. So they still subtract something, but they do it before integrating: before hitting the one-over-epsilon poles, they already subtract something.
So it works, but we are still after something better, because we want something without any subtraction: not after integrating, and not before integrating either. My countryman, Nobel Prize winner Gerard 't Hooft, has a paper from 2005 where he proposes an ingenious construction which also involves subtractions: he subtracts correlation functions at different momenta, and with that he tries to end up with some renormalization group equation. Also, just as with the first point, this looks promising, but to our surprise it seems this work has never been picked up. It's already 15 years old; I think it has 10 citations or something. It looks very promising to us, but for some reason no one really worked on it, including Gerard 't Hooft himself. I gave a talk in Utrecht two weeks ago and I asked him, but he said he's after other problems now; he wants to solve quantum mechanics first before he returns to this problem. Which opens the way for us. And there are many more; if you want, you can check our paper, and there you have an overview of previous attempts to do QFT without any UV divergences.

But as I said, I put the second approach, the Callan-Symanzik approach, in boldface, because that's the approach we want to follow here. So how does it work? Callan invented this thing called the theta operation. What does a theta operation do? If you want to think about it in an algebraic way, it's just the derivative with respect to the mass parameter, with respect to the bare mass parameter. And if you want to think about it in a graphical way, it is an operation which cuts one propagator in two. You know that the propagator goes as one over l squared plus m squared; if you take a derivative of that with respect to the mass squared, you get two powers of the propagator. So you can either think of it in terms of derivatives, or you can just say: every theta operation cuts a propagator in two. On the left, you see the one-loop contribution to the two-point function. When you perform one theta operation, you cut a propagator in two, so now the loop has two propagators. And if you want, you can do another theta operation, which again cuts a propagator in two, one by one. Of course, the two diagrams on the right-hand side are equivalent, but I just draw them like that to show you that the theta operation cuts propagators in two, one by one. So the first object on the left is just the two-point function; then we use the subscript theta to show that one theta operation has been performed on it — that's what we call gamma-two-theta. And on the right-hand side you see gamma-two-theta-theta, to show that there we performed two theta operations. What is the point of that? Well, the degree of divergence of a diagram reduces by two with every theta operation, because with every theta operation you get one extra propagator; one propagator goes as one over l squared, so you get two more powers downstairs, if you want. So for any divergent diagram, if you perform enough theta operations, you will convert it into a convergent diagram with theta insertions. Of course, if you have any questions — I don't know how it works, because I see there is a chat... Yeah, questions will be read at the end, if any, so you can just continue, Sander. Okay, good. Of course, we can go back to... Wait, wait, wait.
There is a question now; you opened the can of worms. Go ahead, Nicolás. Just a question: is d the dimension, the one in the theta operation? Sorry, what d? The d, the d. The d is the derivative. Ah, the derivative, okay, sorry, I was thinking about the dimension. Oh, sorry. No, it is meant to be the derivative with respect to the mass parameter. I see, thanks. And the reason to do that is that the derivative of one over l squared plus m squared is one over l squared plus m squared, squared. So instead of one propagator you get two propagators, which is what you want, because now you get something which is more convergent: you get two more powers of the momentum downstairs.

So these are the theta diagrams, and now how does this work? Callan derives — you can go to our paper, or you can go to the original work — two equations that we need. The first one relates gamma-theta to gamma; I remind you, gamma is still the correlation function we want to find, with n external lines. And the second equation relates gamma-theta-theta to gamma-theta. You see these equations contain three more parameters, called beta, gamma and gamma-theta. For the moment they are undetermined; we just need to determine them in the process. So these are the equations we have. Then there are the boundary conditions, which are, of course, very much like the usual renormalization conditions: at one particular scale, you want to have only the tree-level theory. Here we choose that scale to be k squared equal to zero, just because it's easier to work with. Then, as an input, we need the finite tree-level results I showed you already. So this is all finite: there are finite boundary conditions, there are finite tree-level results, and with that, well, that's enough to find everything. If you want, you can even just forget about conventional QFT; you can forget about the derivation of the two equations above and just take this as a package to compute any correlation function, as a new formulation of the problem.

So how does it work? Here I take the first equation for n equal to four. We are trying to compute the four-point function, to be precise its first loop correction. That is the first diagram on the right-hand side, the diagram that normally is UV-divergent. Now we try to compute it in a new way. As you see, on the left-hand side we have something which is finite: it contains three propagators. On the far right-hand side there is the tree-level input, which is also finite. So the only divergent quantity we have here is the one we're after: the standard one-loop correction to the four-point function. What can we do now? I showed you two slides ago that we have the equations and we have the boundary conditions. So if you impose the boundary conditions, you get a condition on beta and gamma. The left-hand side is still completely finite, so the result of this first exercise is that you get a differential equation for the object you're after, and you have one condition already on beta and gamma, and this is all finite.
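For readers following along, here is my schematic summary of the ingredients just described; the exact coefficients and sign conventions are those of Callan's original papers and of the speaker's article, so treat the prefactors below as indicative only. The theta operation is a mass derivative that adds a propagator,

\[
\theta:\qquad \frac{\partial}{\partial m^{2}}\,\frac{1}{\ell^{2}+m^{2}}=-\frac{1}{(\ell^{2}+m^{2})^{2}},
\]

so each insertion lowers the degree of divergence by two, and the two equations relating the correlation functions with zero, one and two theta insertions have the schematic form

\[
\Big[m^{2}\partial_{m^{2}}+\beta(\lambda)\,\partial_{\lambda}-\tfrac{n}{2}\,\gamma(\lambda)\Big]\bar\Gamma^{(n)}\;\sim\;m^{2}\,\bar\Gamma^{(n)}_{\theta},
\qquad
\Big[m^{2}\partial_{m^{2}}+\beta(\lambda)\,\partial_{\lambda}-\tfrac{n}{2}\,\gamma(\lambda)-\gamma_{\theta}(\lambda)\Big]\bar\Gamma^{(n)}_{\theta}\;\sim\;m^{2}\,\bar\Gamma^{(n)}_{\theta\theta},
\]

with the boundary condition that at the reference point (here k squared equal to zero) the correlation functions reduce to their tree-level values.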
But that was the computation of the four-point function. Now we need to compute the other divergent object, the two-point function. Again we take the simplest thing, the one-loop correction to the two-point function — the diagram I showed you in the very beginning. On the right-hand side you see the diagram that we want to compute; it is quadratically divergent, as I showed you already. So we need two steps here. On the very left-hand side we have the object gamma-two-theta-theta, which again has three propagators, so it is completely finite. Then I use the second equation — I showed you there are two equations — which converts gamma-two-theta-theta into gamma-two-theta. That's one step. But this thing is still a little bit divergent; it has two propagators. Then you use the first equation to compute the thing you're actually after, the two-point function. So here you need to use both equations. There's no need to go into details, but you have two equations from which, in the end, you find the thing that you are after, the diagram on the very right-hand side. And in the process, you also find two conditions on gamma and gamma-theta. So in the end, this computation gives you the one-loop correction to the two-point function and the one-loop correction to the four-point function — these are the two objects that normally involve UV infinities, but now we haven't encountered them — and along the way we found these objects beta, gamma and gamma-theta. The nice thing about this approach is that it works recursively, because now you have enough material to go to the next step: from these results you can compute two-loop diagrams. Order by order, of course, it becomes very complicated, but this Callan-Symanzik approach helps you to find correlation functions up to any order without ever running into any infinities.

And the very nice thing is, you can generalize it. You can compute the cosmological constant without any intermediate cancellation. You can compute the effective potential without any intermediate cancellation. You can show that the VEV of the Higgs field doesn't shift from heavy new physics — this is an urgent problem in theories of grand unification, where the worry is always that the Higgs vacuum gets shifted by a large amount by the existence of some high UV completion of the theory. All to say: this helps you to eliminate any UV infinity from your computation.

So that was the point where I showed you how any UV infinity can be circumvented. But I think it is more or less known that the infinities are formal. Now we go to the more practical part: now I don't want to look at infinities, I want to look at these large loop corrections. For this, the simplest setting is a model of two fields, with a large hierarchy between the masses. There's small phi, which is the light field; its mass is given by small m. Then there's capital Phi, which is the heavy field; its mass is given by capital M. And we assume capital M to be much larger than small m. Then there are the three couplings allowed by the symmetries of the theory. So what happens? If you want to compute the two-point function of the light field — I call it gamma two-zero, just to indicate that it has two light fields and zero heavy fields — this is exactly the thing that we're after; this is the thing that I showed you in the very beginning.
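To make the setup concrete, a potential of the kind just described could be written as below; the normalization of the three couplings is my own shorthand, not necessarily the one used on the slides:

\[
V(\varphi,\Phi)=\tfrac12\,m^{2}\varphi^{2}+\tfrac12\,M^{2}\Phi^{2}
+\frac{\lambda_{\varphi}}{4!}\,\varphi^{4}+\frac{\lambda_{\Phi}}{4!}\,\Phi^{4}+\frac{\lambda_{m}}{4}\,\varphi^{2}\Phi^{2},
\qquad m\ll M .
\]

The mixed coupling is what lets the heavy field run in loops attached to light external legs, which is where the large corrections discussed next come from.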
This is the problem: the two-point function gets a loop correction from the light field running in the loop, which is fine, but there's also the heavy field running in a loop. After renormalization, so after you have already gotten rid of the infinities, you are left with the renormalized two-point function that you see at the bottom: there is a tree-level contribution, then the one-loop contribution from the light field, which is proportional to small m squared, which is still fine. But the loop with the heavy field running in it gives a contribution which is proportional to capital M squared. This is what I showed you in the very beginning. There you see the new physics; in the beginning I just called it Lambda, and now there's a more detailed description of this new physics. Now you see how this heavy field has a large influence on the two-point function of the light field. And this is exactly the thing people are worried about: here you see how, if you want, the Higgs field gets very large mass corrections induced by some heavy fields living in a high-energy UV completion of your theory. So it's exactly the presence of this capital M squared that is the root of the problem. Because what happens now? If you want to compute the mass of the light field, you find, as I just showed you, this very large quantum correction, this very large loop correction to that mass. So you need an order capital-M-squared cancellation between tree level and loop level to explain the low value that you measure. Again, if you think of the intermediate steps as just some black box, it all works: you take some measurements — now you need five in this theory, for the two masses and three couplings — and at the end of the day you get your predictions for all new experiments. So viewed as a black-box process, it all works. But if you open the black box, you see that to get to your predictions there was again a very large cancellation, a very large fine-tuning if you want, because there are two numbers of order capital M squared that have to conspire to give an observed value of order small m squared. And that's exactly what people call the hierarchy problem. As I showed you, if you think about this heavy field Phi as a field which occurs in grand unification, it has a mass of order 10 to the 16 GeV. So to find a Higgs mass of order 100 GeV, you need two numbers of order 10 to the 16 to conspire in such a way that you get a result of order 10 to the 2. It's like an order 10-to-the-14 cancellation. That's what people call the hierarchy problem. Is that a problem? Yes, people say, because even if such a cancellation occurs only at an intermediate stage of the computation, it is unnatural. And from there, as I showed you in the very beginning, that motivates searches for new TeV physics, because if new physics comes in at a lower scale, the cancellations that you need are much smaller, so if you want, much more natural. That has been one of the main motivations for all these low-energy supersymmetry models and all the others that you see here. So this is how the hierarchy problem, how naturalness, is really a hot topic in our field.
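The arithmetic behind the quoted "order 10 to the 14" figure, spelled out (this is just the speaker's estimate rewritten, with a schematic dimensionless coefficient absorbing loop factors):

\[
m_{\rm obs}^{2}\;=\;m_{\rm tree}^{2}\;+\;\#\,\lambda_m\,M^{2},
\qquad M\sim10^{16}\ {\rm GeV},\quad m_{\rm obs}\sim10^{2}\ {\rm GeV},
\]

so the two terms on the right are each of order ten to the thirty-two in GeV squared and must cancel down to ten to the four, a tuning of roughly one part in ten to the twenty-eight in the mass squared, i.e. the fourteen orders of magnitude in the mass itself that the talk refers to.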
In this plot you see the number of papers which have "hierarchy problem", "fine-tuning" or "naturalness" in the title, as a function of the year. This is just to convince you that this is really seen as an urgent problem. Okay, it's a formal problem, but it's still something that attracts a lot of attention. So we hope that if at some point everyone accepts our paper, this graph will go to zero and no one will talk about naturalness anymore. But I will come to that. There are other people that say: well, is there a problem at all? Because in the end, the cancellations are between unphysical parameters. These are just parameters that occur in your action; there is no way to measure them, it's all formalism. So even if there are large cancellations, why worry about it? Other people say: why is 10 to the 14 unnatural? Okay, people like numbers of order one, but well, that's just us describing nature. And there's a more intermediate point of view that says: well, you can still talk about natural and unnatural even if you don't know so well how to interpret these numbers. The whole aim of our work is that we stand with the "no": we want to convince you that the answer here is no. And to do so, what I'm doing in this talk is to show you a formalism which doesn't run into these corrections at all, into these cancellations at all. If you see that there's no fine-tuning in that formalism, then you understand that fine-tuning is really formalism-dependent. So the whole naturalness concept is formalism-dependent. As I say, the whole point of the talk is to give you new evidence why you should choose "no" at this point.

To do that: this Callan-Symanzik approach that I just showed you was written down in the seventies in the context of one field, just the phi-four theory I showed you. But if we want to address this problem, we have a theory of two fields, in the very simple setup of a light field and a heavy field. There's really no reason to go into details, but now there are two masses in the theory, so there are two kinds of theta operations, and everything becomes more complicated. We devote an entire new paper to showing how it works. So this is, if you want, the new work: we generalize this method to two fields, and here you see that the first equation becomes very large and the second equation is really hopeless. But again, the thing to take away from this part is that yes, it can be done: this Callan-Symanzik approach can be generalized so that even in the two-field model you can compute any correlation function without any divergence and without any large cancellation in the middle. And I'll show you quickly how it works. I'm interested in loop corrections to the two-point function of two light fields. This correction begins at second order in the mixed coupling, so the first corrections come from this sunset diagram that you see here: light external fields, because we are computing the correlation function of light fields, and a heavy loop running through it. This is a large computation, but these will be the gamma two-zeros, and you can convert them into gamma two-zero theta-thetas. So downstairs on the left you see four diagrams that are hard to compute, but for sure they will be finite.
And then you can use the boundary conditions and the equations I just showed you to compute the contribution from this first diagram, the sunset in the middle, in a finite way. In this result, C1 and C2 stand for complicated but finite expressions involving the finite parameters. So you get the result that you always get, but without having made any subtraction. And you see, this is what you want, because if you expand it in the region of low external momentum that we're interested in, you see that it is the standard result, plus contributions from heavy physics, but these are all suppressed by powers of the large mass. So you get the result that you always get, but now without having subtracted any large number, which is exactly what we were after.

So, to conclude, to put everything in context: what have we done? We have looked at standard lambda phi-four theory, but we compute its correlation functions in a way in which we never find any intermediate UV divergence. To do that, we use this finite Callan-Symanzik method, which gives you the package — the equations and the boundary conditions — to compute all the correlation functions you're interested in. If you want, you can even take these Callan-Symanzik equations and these boundary conditions as a new formulation of QFT. Then you say: I just forget about these divergent loop diagrams; in my new formulation they don't exist. What I have are the equations and the boundary conditions, and from there I compute everything. Now, the technical work that is behind this talk and behind our papers is the generalization of this method to the two-field case, which is a nightmare. But again, what we need for this talk is the fact that it exists. And with this formalism, one is able to compute the light field's two-point function — or, if you want, the Higgs mass — without having to subtract any large, order capital-M-squared piece at any point. And these subtractions, these cancellations, are exactly the root of all naturalness problems. So with this, we show that all these cancellations, all these naturalness worries, are actually formalism-dependent, just because we have found a formalism in which they don't occur. And that should strengthen the intuition — that's our motivation to state — that there's really no physical content in hierarchy problems and naturalness problems. And in particular, they should not be taken as guiding lines for the design of new experiments or for deciding which new hypotheses to test.

Remarks. I put "don't try this at home" because, you see, to compute any diagram in this way you need to compute many new diagrams and then integrate them back. Of course, if you just need some correlation function for practical purposes, you take the usual way, the divergent way: subtract the divergences and you're done. So all we're doing here is a matter of principle. You should not use this formalism for real computations, because it's very complicated — needlessly complicated, unless you want to make a point of principle.
Another important point is that we have not explained why the Higgs mass is so much smaller than the scale of new physics, for example the Planck mass. And we don't give any explanation for why the cosmological constant is so much smaller than the Planck mass to the fourth power. So we are not explaining why this hierarchy exists, but we are dealing with the consequences of this hierarchy. What we are showing is that low-energy physics is stable against these huge quantum corrections. With that, we argue that new TeV physics is of course possible, but it's not necessary: it would not solve any naturalness problems, because these don't exist. On the technical side, it would be interesting to think about gauge fields, because with gauge fields you have massless fields, and it's not clear how this method would work for massless fields. I'm sure that it has to work, but the question is still how to generalize this Callan-Symanzik formalism in such a way that it can also deal with massless fields. Another thing is that it should be possible to write the same kind of method based on the MS-bar formalism. I mentioned the paper by Gerard 't Hooft; with this in mind, it should be possible to work that out further and see whether we get the same results. And the very last point: so far I only talked about phi-four theory, which is renormalizable, but in principle you can do as many theta operations as you want. So even if you have a diagram with degree of divergence 14, if you do eight theta operations you can in principle bring it back to a manifestly convergent diagram and then work your way back up. So in principle this should also be able to address non-renormalizable theories, like phi-six as the easiest example. And of course, the very last statement is that your comments, questions and emails are very welcome, because there's still a large ongoing debate on whether this is all useless, or very interesting and very meaningful. We are on the second line, but we encounter people that take more the first point of view. So with that, that's all I wanted to say. I'm curious for questions. Thank you.

Thank you, Sander, for this very interesting talk. Let me see, yeah, there are a few questions. We can start with the first one that appeared on YouTube. It is related to your first two slides explaining the Callan-Symanzik method. The person is asking: in the last equation, where there was a gamma — the question is, why do you have gamma and not gamma bar, and whether there should be any difference. I think you want to go to the next slide. Okay, let me see. So, in the beginning — and this is a very good point — I was interested in what I call gamma bar, which are the renormalized n-point functions, right? And in this Callan-Symanzik method, I think I did not write the bars. Where are we? This would be here. No, all the bars are here. So what is the question? Let's see, there's a little bit of delay, Sander, so we can ask this person for a follow-up. No, but what I want to say: the idea of this method is that everything is renormalized from the very beginning, because this method doesn't need and doesn't require any regularization or renormalization.
Once you have these equations, these are renormalized n-point functions from the very beginning. So yes, every capital gamma you see here should have a bar, and maybe I forgot one. Can you go one slide back? Yeah, okay. This one? I guess so, yeah. Yeah, that is a very good point, because Callan derived these equations, which are all in terms of renormalized quantities, but to do so he actually began from an equation in terms of bare quantities. Then he converted the left-hand side and the right-hand side into renormalized quantities, so he ends up with something that only involves renormalized quantities. So the answer to the question is that to derive these formulas, you indeed begin from bare quantities. But what we argue is: well, we are not really interested in the derivation. We just begin from here; we just want to begin from this package. And then we only have renormalized n-point functions. So yeah, I think that is a very good question.

Okay, thank you. There is another question over YouTube. I think you addressed it at the end of the talk, but maybe you want to comment a little bit: how do you do renormalization — it seems you can do this — even for non-renormalizable theories? Can you comment? And I think that was your last remark, right? Oh, the very end — the question is how you would do it for non-renormalizable theories. Yes, if you can comment, yeah. Well, look, people always say that non-renormalizable theories are not renormalizable — that's why they're called like that. But the precise statement is that you can still compute any n-point function, up to any finite order. If you're interested in the six-point function up to some given order, you're still able to do it. So I think you would find some equivalent statement to that here. Of course it's not that we have done it, but I think it can be done, and you would arrive at the statement you also find in the Weinberg book: at one point Weinberg writes that actually non-renormalizable theories are as renormalizable as the renormalizable ones. I think you will end up with this statement. Okay, thank you.

I think we have some questions here from the connected participants. Also, people on YouTube are saying: many thanks for your clear and nice talk, congratulations on your work. I think Joel has a question, then Nicolás. Yeah — can you go back to your Callan-Symanzik equation? Let me see. The first one — so we did this for the... Yeah, the first part. Yes. Right, yeah, that's where you were. And the next one, the next couple, yeah, that one, okay, great. So here, you have finite quantities here, right? Yes. So the point would be: okay, now I need to integrate the derivative of gamma bar over m, right, to get my finite correlation function. Yes. So, okay, first, just to get out of the way something that is bugging me: what are the integration limits there? They're defined by the boundary conditions. Exactly, all the integration limits are set by the boundary conditions. So then this m in the second one is an m hat or something, right? So it's a different... Sorry? The second one is an m hat, right? That's correct, right? Yes, yes, but everywhere, because once you're here, everything is renormalized from the very beginning. So it's m hat, it's lambda hat, as you wish.
So in that case, of course, you will not be getting any divergence here, by construction. Exactly. Right, that's the whole point, because I was concerned about the possibility of getting a divergence after integrating over m, right? But of course you're not going to get that, because by construction everything is coming out of finite quantities. Okay, that does make sense. Okay, so I'm going to leave it to Nicolás for a while and think about my other question, because I wanted to get this clear. So thanks for that, and I'll be back. Thank you, thank you. No, but just to be very clear: this formalism was not invented by us. What we do, on the technical side, is generalize it to the case of two fields, and on the physical side we are using it to make points about naturalness, about the absence of meaning in these cancellations. Sure, that's what I need to think about right now, but yeah, I just wanted to be sure about this part. Thank you.

Okay, thank you very much, Sander, very nice. So you were talking about naturalness, and I'm not sure if I got your point exactly. You're saying that, for instance, this theory is not natural, but at least you can guarantee that it's technically natural, right? No, I mean, for me that's something else. Technically natural means that when some parameter is small, that is still technically natural if, in the limit where it goes to zero, there is a symmetry being restored. But the whole picture that we want to challenge is this very idea that if I have a large mass... okay, I'm trying to compute this diagram. Here I'm interested in the mass, or the two-point function, of a light field. Then there's a heavy field running in the loop. This is going to give a very large answer, which means that I need to cook up a very large counterterm to explain why the Higgs mass, for example, is as small as it is. The idea of our method is that there is a new way to compute this, in which the result is not proportional to this large scale, which means that I don't even need any renormalization, I don't need any counterterm to cancel it. So basically both numbers of order the large mass squared don't occur in this method. There is no cancellation between the numbers, because both numbers are absent.

Okay, so maybe I got confused at the very end: you say that you have no explanation for why the Higgs mass is much smaller than the Planck mass. Ah, yeah, yeah, that's a very good point. Look, of course the fact remains that there is a heavy field running here, and the heavy field is, as its name suggests, heavy, and the light field is light. And this is something we see: we have the light Higgs field, and if there is grand unification, for example, there are very heavy scalars. So what we're doing here is computing the effect of the existence of this heavy physics on low-energy physics. But to explain why such a heavy field would exist in the first place — I don't know. That's what I'm trying to say here. All we're doing is dealing with the consequences, the consequences for low-energy physics of the existence of high-energy physics, of new extreme-UV physics. But if you ask why the Higgs mass is so much smaller than the Planck mass, we don't know. All we're saying is that that does not have any consequences for naturalness or fine-tuning or anything.
But of course, the cosmological constant problem, for example, is: why is the cosmological constant so much smaller than the Planck scale? Well, I don't know. Yeah, exactly. But I mean, if we agree that the cosmological constant is much smaller, we don't have to care — I mean, about the question of why such a big difference exists. We understand why it's stable, because these corrections will not conspire to raise the cosmological constant. Okay, sorry. So the aim of this talk is indeed to show you: look, in the standard method, the existence of heavy physics is going to give large contributions to your correlation functions, which means that you need to make large subtractions — you need to subtract another large number to end up with the small number that you observe. In this method, even if this heavy physics exists, it never gives any large contribution to the correlation functions. Yeah, okay, thank you. You're welcome.

Thank you, Sander. There's another question that just came in here on Zoom, so let me read it. It says: can we imagine this finite renormalization scheme essentially deriving some mapping between the IR value of a parameter and the UV value of the parameter? Note that two close values of the parameter in the IR can become very highly separated in the UV. Is this a problem? Yeah, I would argue that that is still a manifestation of the first problem. Look, what we're doing is computing correlation functions in a new way, right? But in the end, if you ask questions about the running from a low scale to a high scale, that still involves the same renormalized correlation functions. So no, we're not doing that, but in the end I would say that all these running issues... okay, look, let me put it in the right way. When people worry about running, about the light field becoming heavy in the UV, the underlying question is always: how can I generate a light scale out of heavy physics? But for us, that is a question about the generation of this scale, about the generated value. So that is exactly this question again. All we are dealing with are the consequences, not the generation of the two scales.

Okay, thank you. So I have a follow-up. Could you go to your — I don't know which one it was — can you go back a couple of slides, like three slides maybe? Yes, right there. So this is the basis, right? This is the point that bothers everybody: the fact that the loop corrections will be proportional to the heavy masses, right? Which is what you're showing on the first slide, proportional to the heavy mass. No, but watch out, this one is already in the region I'm interested in. Sure, sure, exactly, I know. So the point here is that you're getting your correction, which is proportional to big M squared. And then you set your boundary conditions and you say: okay, my boundary conditions determine that when k is very small, I need to have the correct mass. This one — the mass that I insert, which is a little m, right? Yes. So that is what drives you to use a very small k. Yes, but, you know, when I'm measuring the Higgs mass in some experiment, I only have access to small values of the external momentum. Sure, sure, but the point is that that comes from your... I mean, because when you're measuring the Higgs mass, right?
You're using it like... no, give me a second, let me rephrase that. When you set your boundary conditions, you say: okay, I want my propagator to give me the Higgs mass when k is around some value, right? So I will get a pole in the propagator at that k. So there we're saying: okay, I want my mass to be small, I want my pole in the propagator to be at small values of k. Yes. Right? So that's how you are... yes. Sorry, I'm thinking about this while I speak, so I'm going very slowly. The point is that you are putting this into your prescription: you want your mass to be small for small k, right? Small for small k. Yes. Which is what you also do in the standard case, where the divergences come out. And since the standard case is uglier than your method — which is much more elegant because it always uses finite things, and the things that you observe experimentally — of course here we will not be seeing these cancellations, because your requisites are that everything must be finite and everything must be the way I want it to be, which means a small mass at small k, right? Yes, but in the standard approach to renormalization — well, at least because this is all equivalent to the mass-dependent schemes — as you say, you impose that at one value of the external momentum, the renormalization scale, you want to have zero quantum corrections, right? Yeah. So of course, what you see here is very similar to that; I mean, this is exactly that condition. Yes. So if you want, here you're putting in the same renormalization conditions — here we call them boundary conditions, because there is no renormalization going on. So of course, if you put in the same input, you will also get the same output, because what you find here is exactly what you want. But the point is that if you do it in the standard way, you have to make a large subtraction to get it, and here you find the same result without any subtraction. That's the difference. Yes. Yeah, my point is that in the standard approach, you're subtracting non-physical things in a way, right? While in your approach, you don't have those things; all of them are physical, right? No, no, I'm still not... I mean, in the end, lambda and m are still parameters that appear in my Lagrangian. I would always keep this picture: all parameters that appear in your action are Lagrangian parameters, and they're only tools. The only thing you do is some first measurements — if you have five parameters in your action, you do five measurements — then you put them into your Green's functions; it doesn't matter how you found them, and from there you find predictions. So parameters in the Lagrangian are always unphysical. That's why you should never worry about cancellations between them — that's exactly our point, which is the point of the first one. What we do with this method is just give more meat to that statement. But I would say that even in the renormalized Green's functions — no matter how you found them — these parameters are still unphysical, because their values depend on your choice of initial experiments. The only thing which is physical — and still within perturbation theory — is the conversion from initial experiments to resulting predictions.
That's the only physical thing there is in all of this business. Okay, nice. It's very interesting what you're doing, thank you; it's making me think a lot. But we get one rejection after the other from the JHEP referees. People all say it's either trivial or it doesn't solve the problem. Which in a way is also encouraging. So yes, thank you. Okay, thank you very much. I think we're past the hour. So if you have more questions, you can find Sander's information on the webinar's WordPress page or wherever, and continue this discussion there. Thank you, Sander, for sharing your work; it's very interesting work. And for all our audience, remember we have another webinar next week; it's going to be about extreme mass-ratio inspirals with the future LISA instrument. So thank you very much, and stay tuned. Thank you very much. Bye. Bye. Okay, that was nice. Thanks. Okay, yeah, cool. Thank you.