Cool. Okay, perfect. Let me check... Okay, I think we are live. Welcome, everyone. Thank you for joining us for today's webinar. My name is Alejandro and I'm going to be your host today. Today we're presenting "A numerical renormalization-group-based approach to secular perturbation theory" by Tomás Galvez. Tomás did his undergraduate degree in physics at Universidad Nacional de Ingeniería in Lima, Peru. He then went to Imperial College London in the UK for a master's in quantum fields and fundamental forces, and did his PhD in physics at Simon Fraser University in Vancouver, Canada. Tomás is currently a postdoc at the University of Mississippi, and he will soon be moving to the Canadian Institute for Theoretical Astrophysics in Toronto, Canada. Tomás' research focuses on many things, in particular inflation, quantum field theory, quantum gravity, and strong-gravity phenomenology. Remember that you can ask questions over email, through our YouTube channel, or on Twitter, and the questions will be read at the end of the talk. Now, without further ado, we will turn the time over to Tomás. Thank you for joining us, Tomás, and everybody.

Thanks to all of you, ladies and gentlemen, first and foremost for joining me, and also for inviting me and giving me the chance to give this talk. I hope you very much like it. Just give me one minute; this is completely off schedule, but there's something I really want to show you. Let me begin with something very interesting that we physicists do all the time, based on the idea of renormalization. Renormalization is the procedure that allows us to understand how physical phenomena behave at different scales. Well, apparently this demo is not going to work that well, but it doesn't matter.
So I'm going to use the machinery of renormalization-group flow to improve certain things about perturbation theory that we as undergrads learned, loved, and hated during the course of our careers. So let me begin and share the screen with all of you. There we go; I hope it's visible. (Yes, we can see it.) Thank you very much. This is the table of contents, but I will put it in very simple language: we are going to solve a problem all together. I'm going to introduce you, more or less, to the dynamics of the problem I'm going to propose, the complications that come with it, and the ways to actually solve it and get something very nice working.

So here is the small setup I prepared for you. Imagine the following circumstance: you want to study the KdV (Korteweg–de Vries) equation. The KdV equation is normally used in inverse scattering problems, and it is also used to describe the propagation of solitons, these very nice waves that propagate through water unchanged, as if left all alone. All of a sudden you decide to introduce a small damping term. That damping term is a small perturbation, and it is the term added on this slide; you can see it in the nice small orange rectangle. This is very typical of what we do in physics all the time: when we introduce extra terms, they bring extra complications, high nonlinearities, and things that make the situation much harder to handle. So what did perturbation theory teach us? That we can take a very complicated problem and divide it into a set of much simpler problems that can be solved one at a time. The first problem we're able to solve is the KdV problem itself: a free soliton just moving across space without being altered, completely free to do whatever it has to do.
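The setup the speaker describes can be written out explicitly. The slide's exact normalization is not visible in the transcript, so the convention and the damping coefficient below are assumptions, in one common form of the equation:

```latex
% KdV with a small damping term on the right-hand side,
% and the naive perturbative expansion in the damping strength:
\partial_t \phi + 6\,\phi\,\partial_x \phi + \partial_x^3 \phi
  = -\epsilon\,\phi ,
\qquad
\phi(x,t) = \phi_0(x,t) + \epsilon\,\phi_1(x,t) + \mathcal{O}(\epsilon^2).
```

Here phi_0 is the free-soliton background and phi_1 the first perturbative correction discussed next.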
What follows is the first contribution coming from the very first small correction added to the full solution; that is going to be the first correction in perturbation theory. The other interesting feature of the full problem we're going to solve is that, by itself, it is not ridiculously hard to solve. That gives us the opportunity to test whatever we're doing and see whether it makes sense or not, or whether I'm just talking complete nonsense. In this nice figure I'm showing you, more or less, how the solutions behave. In the left panel you can see the background solution: the soliton, the KdV thing, just moving across space. This picture shows a peak that moves without being altered, without feeling any perturbation; nothing is in the way of this soliton, which propagates freely through space. In the right panel we have solved the full problem with no perturbative decomposition at all. What we can tell is that the full solution shares some similarities with the original one: it's still a soliton, it propagates through space, but it reduces its amplitude and it also leaves a kind of path behind it. It's very similar to a soap bubble moving through a material: if we move the bubble ever so slightly, a path of soap is formed along the way, which makes the bubble more frail, and eventually it might break. Something very similar happens in the solution of this full KdV problem with damping. So let's try to follow the standard perturbative approach: take a background solution, take a first-order perturbation, add them up. And what we find is somewhat fatal, because if we allow the system to evolve for sufficiently long, the full solution blows up.
It breaks down miserably, and this happens essentially because at some point the first-order perturbative solution takes over the background solution and blows up. The other important thing to mention is that most perturbative approaches are truncations of a solution. Every time one decides to truncate a solution, there are going to be terms left over, and those leftover terms are precisely the things that make the solution converge to something finite. So what are we going to do with that? How can we treat the solution? In this case we're very lucky, because the full system can be solved numerically without much of a problem. Unfortunately, that's not always going to be the case. So this poses a very interesting situation that allows us to see whether we can actually make some progress. Okay, lo and behold: on the very first line, phi_0 is the standard KdV soliton solution. This is exactly the thing on the left panel, a soliton just moving through space. The interesting thing is that this solution depends on only one parameter, which regulates the speed of the soliton peak, and with it the amplitude, the width, and the peak position. What we're going to try is the following: what happens if we replace that parameter by a time-dependent function that captures what happens to the system once we have introduced the extra term in the equations of motion that perturbs the solution? So we change V, the parameter you see there, into calligraphic V, a function of time. As you can tell, the original solution depends on only one parameter, so we change that parameter into a time-dependent function. The name of the game now is the following.
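As a minimal sketch of the one-parameter soliton the speaker calls phi_0, here is the standard KdV soliton in the convention u_t + 6 u u_x + u_xxx = 0. The talk's normalization may differ, and the function name and default peak position are illustrative:

```python
import numpy as np

def kdv_soliton(x, t, v, x0=0.0):
    """Standard one-soliton solution of u_t + 6 u u_x + u_xxx = 0.

    The single parameter v sets the speed of the peak, and through it
    also the amplitude (v/2) and the inverse width (sqrt(v)/2), which is
    why one parameter controls speed, amplitude, width, and position.
    """
    arg = 0.5 * np.sqrt(v) * (x - v * t - x0)
    return 0.5 * v / np.cosh(arg) ** 2
```

Promoting `v` to a time-dependent function V(t), as the speaker proposes, is what lets this single parameter absorb the effect of the damping term.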
Let's try to find which function V(t) is going to do the trick and give us a very nice solution. The full solution is going to have the following shape: it will depend on the parameters, and two things are going to happen to them. First, a parameter is allowed to change, to be reparameterized as it moves: you're able to absorb and also let go of certain terms as the evolution goes on. At the same time, the parameter is also allowed to change as time goes by. So there are two things happening in parameter space: one is the possibility of doing reparameterizations, and the other is the possibility of doing time evolution. Accordingly, there are going to be two generators: one, which I call beta, is the generator of time evolution, shown in this very nice yellow rectangle; the other is the generator of reparameterizations, which allows us to change the identity of the parameters as we move along. This schematic plot tries to show, more or less, what we are going to do. We are going to connect the deformations in functional space that take one background solution and change it into something else, and map those deformations into transformations in parameter space. So the generators of translations and movements in functional space are going to generate deformations in parameter space. It is very much possible to link the two transformations and make the flow in parameter space still represent what happens at first order of perturbation in the field. To do that, we're going to link the two sets of deformations: as I've been telling you, I'm going to calculate something that allows me to minimize the full solution under deformations.
In general deformations, both in functional space and in parameter space: the deformation in functional space comes from the only generator it has, which is the perturbative solution. The deformations in parameter space are given by the second line, a complicated derivative expression that depends on the background solutions and on the parameters of the system, which in this case are V and the peak position. If these two things are somehow connected and the identity on the very first line is true, then the difference between the two derivatives has to be a minimum. That defines a particular quantity that needs to be minimized over the whole domain. So here is the plan of action I'm suggesting. Let's build a bunch of background solutions as functions of different values of the system's only parameter, V. Let's calculate the perturbative solution for all of those different choices of the parameter. And let's calculate the values of alpha and beta that minimize that quantity, the squared norm of phi particular. This is, more or less, consistent with what I've been trying to explain so far: we relate deformations of a particular solution in functional space to deformations in parameter space, and therefore we require that the integral of that quantity over the whole domain be a minimum with respect to the parameters alpha and beta, the generators of translations and of reparameterizations in parameter space. That integral can be written as a nice quadratic form, and minimizing a quadratic form, something that looks like a parabola, an ellipse, or a hyperbola, is not a huge deal; it's more or less simple. The only thing we have to do is a matrix inversion, and that inversion is going to happen for every value of V that we test.
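The minimization of that quadratic form reduces to linear least squares. Here is a minimal sketch under that reading, with hypothetical names: `phi1` for the first-order perturbative solution sampled on the domain, and `d_repar`, `d_time` for the two parameter-space deformation directions:

```python
import numpy as np

def fit_generators(phi1, d_repar, d_time):
    """Least-squares fit of the generators (alpha, beta).

    Minimizes || phi1 - alpha * d_repar - beta * d_time ||^2 over the
    sampled domain, i.e. projects the first-order perturbative solution
    onto the two parameter-space deformation directions.  Because the
    residual is quadratic in (alpha, beta), the minimization is a small
    linear inversion, repeated for every tested value of V.
    """
    A = np.column_stack([d_repar.ravel(), d_time.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, phi1.ravel(), rcond=None)
    alpha, beta = coeffs
    return alpha, beta
```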
So this means we need a bunch of simulations for all those different values of V in order to reconstruct what alpha and beta are as functions of V. Once we have found alpha and beta as functions of V, we can easily reconstruct what phi particular is. And the very interesting thing is that, if we do our job properly, it is perfectly possible to reconstruct the first-order perturbative solution in terms of things that depend only on the background. That's one part of the story. The other part is to actually plot this phi particular and see what it means. As you can tell in the left panel, you see the growth of something like a step that starts to stretch as time goes by. I really want you to look at that figure and compare it, more or less, with the full solution I showed you earlier. It's very interesting, because a first-order perturbative scheme is also telling us about what we thought were nonlinear effects in the solution: the thing that allowed the soliton packet to dilute along the way. It's telling us something we never thought would appear in a first-order solution, which is the full dilution of the solution as it moves along. So this means that first-order perturbative schemes might carry more information than we think, and it's all a matter of how to combine that information into a solution in order to make things simpler. Okay, very good, let's proceed. Yes, this is exactly what I've been telling you: you have the very nice step that grows in time, and also the growth of these so-called nonlinear effects. It turns out the nonlinear effects can be reconstructed from a first-order analysis, which is very cool.
If we have done this nicely, then in principle we know beta, the generator of time evolution in parameter space, and alpha, as functions of V. If we take the definition of what alpha and beta really are, it turns out we have an equation of motion for that parameter as a function of time. The yellow rectangle over there also gives us a very interesting and important message: parameter space has more geometrical structure than we think. There are dragging effects; there are reparameterizations and time evolution happening along the way. So this parameter not only evolves in time, but also changes its coordinates along the way; the geometrical structure of parameter space is very rich. I need to say something else: the dragging effect that appears due to these changing coordinates as the parameter moves in time is a second-order effect. But it also poses a very interesting situation. If we try to make this routine scalable to higher and higher perturbative orders, the problem is that it is not enough to calculate alpha(V) and beta(V) at the higher order we want; calculating those alone does not give the full picture. The dragging effect comes with contributions that mix terms in order to make the orders match. So higher-order perturbation theory is, in general, not a simple deal. If the story I've been telling you makes sense, then we're going to be able to find beta and alpha as functions of V, and there's going to be numerical error due to the convergence to the full solution of the problem. As you can tell, beta as a function of V looks, in logarithmic space, like a power law.
It just looks like a straight line, and there are going to be small deformations, due to numerical error, in the slope of that line and also in the intercept. So this is a reminder that it's very important to be careful about the accumulation of numerical error in the system, because in the end it's going to bring something into the solution that maybe you don't want to see. Things are going to have error bars, and error bars need to be put into the system. Luckily, in this exercise the error bars are not that big; I would say they are so small that, in order to plot the errors, I need to multiply them by something like a million to see the covariance matrix change ever so slightly. Yes, this red cloud that goes along the line is all due to errors, and in order to plot it I need to multiply the error by a factor of a thousand or a million to see it at the same scale as the data points. So yes, the errors are there, but they are not that big; please do not be afraid. If I've done the job properly, then it's perfectly possible to reconstruct the parameters of the solution as functions of time, and this plot shows, more or less, the errors made when comparing to the full solution. Remember, we are in the rare circumstance that we do have the full solution of the system, so we are in very good standing to compare whatever we're doing against that solution. So instead of just telling you "okay, believe me, this is the solution," I prepared a nice movie so you can actually see what happens. The blue curve is the solution we just generated from the renormalization-group flow.
The purple curve you're going to see is what comes from naive perturbation theory, and the black curve comes from the full nonlinear solution. So what you're going to see is the following: the first-order perturbative solution blows up very rapidly on you, while the renormalized solution stays more or less close to the dynamics of the nonlinear solution. So apparently we haven't done a bad job in reconstructing a solution that decently approaches the full solution and doesn't blow up on you. Okay, that seems fine, and everything seems to be in order with this single-parameter story. But what happens if you are not really sure what the correct parameterization of your problem is? In this case I'm introducing some decoy parameters: one is going to be the amplitude and the other the inverse width of the soliton, and I'm going to allow them to be independent parameters of the system. Of course, I know that eventually those two parameters, amplitude and inverse width, are going to be somewhat connected to the single parameter V. If I repeat the full procedure, what I find is that the system tells me: your system is truly only one-dimensional in parameter space; these extra parameters you introduced converge very nicely to the original parameter prescription. Therefore the method itself gives us very good control over not making errors about how many parameters the system really has. There are ways to test the flow in parameter space and to check that we are really flowing along one line.
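The last few steps can be sketched together: the power-law fit of beta in log-log space (whose fit covariance carries the error bars mentioned above) and the integration of the flow dV/dt = beta(V) that produces the renormalized parameter as a function of time. The names and the fixed-step RK4 integrator are illustrative, not the speaker's implementation:

```python
import numpy as np

def fit_power_law(v, beta_vals):
    """Fit |beta(V)| = A * V**p as a straight line in log-log space.

    Returns the slope p, the prefactor A, and the covariance matrix of
    (slope, intercept), which carries the fit error bars.
    """
    logv, logb = np.log(v), np.log(np.abs(beta_vals))
    (p, logA), cov = np.polyfit(logv, logb, 1, cov=True)
    return p, np.exp(logA), cov

def evolve_parameter(v0, beta, t_grid):
    """Integrate the parameter-flow equation dV/dt = beta(V) with RK4.

    beta is any callable; for a damping flow one would pass, e.g.,
    lambda v: -A * v**p built from the fitted A and p.
    """
    v = np.empty_like(t_grid)
    v[0] = v0
    for i in range(len(t_grid) - 1):
        h = t_grid[i + 1] - t_grid[i]
        k1 = beta(v[i])
        k2 = beta(v[i] + 0.5 * h * k1)
        k3 = beta(v[i] + 0.5 * h * k2)
        k4 = beta(v[i] + h * k3)
        v[i + 1] = v[i] + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return v
```

Feeding the evolved V(t) back into the background soliton is what produces the renormalized (blue) curve in the comparison above.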
What makes it even more interesting is that if we also compare with the extraction of beta from the full solution, which, as I keep telling you, we already have, we notice that as we make the value of the perturbative parameter smaller and smaller, we converge to the actual value of beta coming from the full solution. There is very nice convergence along the way, so at the very least the story for beta is very good. This tells us that we are able to reproduce the parameter flow in a very accurate way. When we compare certain values, the amount of evolution time we allow the system to grow or change over is vital: it tells us whether the parameters we're considering are going to converge to a fixed value or not, and what we find is that not all the alphas and betas have stable values. It is very much possible that some of the values we find are not converging to anything. So yes, this method also has a way to self-regulate, to tell you whether the generators you're finding are behaving well and converging as they should. In the case of the alphas for the amplitude and inverse-width parameters, there are problems and things do not converge nicely, and this problem is connected to something I'm about to show you right now. When I try to measure the convergence of the alphas with respect to the full solution, things do not go that nicely; there is no smooth convergence to the full alpha solution. But if we completely ignore the presence of alpha, that is, if we completely ignore that there are reparameterizations along the way, we can expect a change in the renormalized solution that introduces an error of around 20% with respect to the full solution. So it is actually important to take into account that reparameterizations happen along the way.
The method I'm describing is not perfect, but it still produces very decent solutions and takes into account these two things: an evolution of the parameters, and reparameterizations along the way. Okay, very cool, we went through this problem and solved it together; everybody, hands up, great job, guys. What is this good for? Here's an application. Since 2017 we have been able, for the first time, to listen to gravitational waves coming to us. (Give me one minute... good.) So we are perfectly able to hear gravitational waves, and what we are learning is that this very nice spectrum of gravitational waves looks pretty much like what we know from general relativity. So our theory of gravity, predicted by Einstein more than a hundred years ago, seems to be fairly accurate, but there is still some room for modification. The residuals are there; they are not big, and the shape of the waveform is more or less the same as in GR, but we are still allowed some modifications in both the amplitude and the frequency of the gravitational waveform. So if we try to reconstruct templates for modified gravity, what do we need to do? Well, it turns out that secular divergences, like the ones we saw in the explosion of the KdV solution, also pop up in perturbative schemes for modified gravity. So the idea of doing a perturbative expansion, doing reparameterizations, and allowing the parameters to change as functions of time might be a very good way to control the divergences that appear when we try to model gravitational waveforms in modified gravity. This brings many challenges ahead of us. First of all, I just showed you how to do this calculation in a simple setting without further issue; now let's try to extend it in a gauge-invariant way, which is itself a challenge, or at the very least do it at first perturbative order.
Then there is another problem, one that doesn't have anything to do with modified gravity: it comes from the case of extreme mass-ratio inspirals. In that case in particular, there are also going to be blow-ups and secular effects popping up. So there is another opportunity, using the same parameters as in GR, to see how things get corrected in a scale-dependent way that allows us to still replicate the waveform solution without caring too much about how extreme the mass ratio is. So that is pretty much all I have to tell you. Thank you very much; I really hope you liked it and had fun. Please shoot me all your questions, I'm fully available for you.

Thank you, Tomás, for this informative talk. Let me check if we have some questions for you. Okay, not yet. Let me open the floor for questions from the coordinators, and I'll start with one: the solutions might blow up at some point, but is there a way, or a timescale, to know how long the perturbative solution is going to be good for?

Normally that information comes from the order in the perturbative scheme; let me show you. What happens is that, depending on the order of the perturbative scheme you have and the time dependencies introduced in parameter space, you're going to have different scales appearing along the way. In this particular case there is only one true parameter dependence, which is on V. But in trying to resolve the peak position of the KdV soliton, we see that there are two timescales being combined: one linear in time and another quadratic.
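The two timescales in the peak position can be made explicit with the standard phase expansion the speaker alludes to just below; the symbols here are generic, not taken from the slides:

```latex
% Orbital phase: linear and quadratic running in time,
% giving secular growth on two distinct timescales:
\phi(t) \simeq \phi_0 + \omega\,t + \tfrac{1}{2}\,\dot{\omega}\,t^2 ,
```

with the KdV peak position behaving the same way, with (V, x_peak) playing the roles of (omega, phi): the linear term sets one divergence timescale and the quadratic term a second, shorter one.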
So that's very important, because it tells you there's going to be a very short timescale over which things diverge as one over epsilon, and another timescale over which things diverge as one over the square root of epsilon times beta. So it really depends on which parameters you have and what their time dependence is. Let me talk very quickly about the problem we want to solve. For the situation of the inspiral, we have the rotation frequency of one body around the other and the orbital angle. The orbital angle and the rotation frequency are very similar to the speed and the peak position in the KdV problem. So let's go back to physics 101: phi is going to vary quadratically in time as a function of the angular acceleration of the system, and linearly in time as a function of the angular velocity of the system. It's extremely similar to KdV.

Okay. And regarding this: as you said, GR works pretty well for the events we have so far, so your idea is to treat the modifications as perturbations and go to a particular order. What about getting full solutions instead?

In some of the theories proposed... well, let's remember one important thing. Einstein's theory of general relativity has been with us for more than a hundred years, and Newtonian gravity was with us for hundreds of years before that. But here's the thing, and particle physicists are not going to let me lie about this: in the absence of a full theory, candidates are going to pop up from everywhere, and they're going to flood the space of theories in front of you. For many of the candidates for modified gravity, people are not really sure whether initial-value problems are well-posed.
In some cases the tests have only been done in certain perturbative schemes. In some other cases, the problem becomes a complete mess when you try to do the separation of scales, because you start from a partial differential equation that is hyperbolic, and throughout the evolution it becomes elliptic; the nature of the problem changes completely during the evolution. So what happens in certain regions does not necessarily hold everywhere for the problem. Not all the problems in these theories of modified gravity have been solved, and the only statements that can be made at this point are perturbative. Trying to extract full solutions is not going to fly. In this case I'm even trying to encourage you to avoid second-order perturbation theory: it's not only overkill, it's a problem to compute, and in most cases it's not even done well.

I see, okay, thank you. I don't see any questions; let me just check... somebody's thanking you on the YouTube channel. Do we have more questions?

I have a couple of very basic questions. In your first plots you were showing how the soliton would propagate, and you were comparing the full solution, the background solution, the perturbative solution, and so on. What I noticed is that your x-axis was changing when you were comparing the different plots, right? — Yes, you're right. — So is this due to the redefinition of x? — No, I've just been careless, I'm sorry; it's just negligence on my part, don't worry. If we keep the x-axis fixed, it shows the same situation; there's absolutely no problem. And I will say something else: the KdV soliton is a space-translation-invariant solution. Take the same soliton and move it 300 units to the right, and nothing's going to happen.
Sure, great. My other question is: do these alphas and betas have fixed points?

That is interesting. In principle, the idea is to do the expansion around a fixed point, and the alphas and betas tell you how natural things are with respect to those fixed points: if the beta function is huge, it tells you the theory doesn't want to take that fixed point seriously, it's trying to move away very fast. So imagine that I'm not interested in calculating corrections to the solutions at all, and I just calculate the alpha and beta functions for the sake of doing it. That is also going to tell me about the naturality of the theory, and make me wonder: is this fixed point, is this theory, the next reasonable step for the theory to move toward or not? So, beyond trying to improve the solution, you get a side effect that also tells you a little bit about how things look in parameter space, and whether that correction makes sense or not. We know that in GR, for example, the background values of the parameters are mostly fixed points. If your theory tries to escape from the fixed point very rapidly, then the theory is doing something that doesn't like the nature of that fixed point, and that of course poses a question of naturality for the theory; we don't know.

Okay, great, thank you very much, and I think we have time for another question that appears on the YouTube channel. I pasted it in the chat, Tomás, if you also want to read it. The question is: in the solution of the KdV equation and the renormalized one, there seems to be some room for improvement as well; is it possible to do this considering second-order terms?

Yes, it is possible to consider second-order terms as well.
The implementation might not be that simple, but it's still possible. The one thing I want you to retain from this message is that up to first order there is absolutely no conundrum with the situation. Higher-order realizations of the solution do not simply amount to calculating the perturbative expansion and computing something like phi_2 and alpha_2 and beta_2; it is not as simple as that. A second-order effect that shows up right away comes from the fact that you now have to take actual coordinate dragging very seriously. Why? Because at first order in perturbation theory, the only thing the reparameterization does is shift the initial condition of the solution: instead of being one value, it's going to be another, and that's it, and the beta function just lets the system evolve. If we take higher-order contributions seriously, we have to add the dragging effects, and the first non-trivial dragging effect comes from a Lie bracket of the alpha flow along beta. So yes, it is possible to extend this to higher orders in the case of the KdV solution, but it is not as simple as just calculating higher-order solutions.

Okay, and there's a follow-up question; there's a delay of a couple of seconds, so I'll read it again. There's another question on the YouTube channel, which I also pasted in our chat. It says: could you please talk a little bit about first- versus second-order effects, and also the role of the parameters in this computation? How well do you know the parameters in real physical systems?
That is a very interesting question, and part of it I more or less tried to answer in the previous one: even just from considering reparameterizations and allowing them to have a dynamical effect, we notice that the first non-trivial second-order effect comes from first-order perturbation theory itself. That is a nice first message to take away. The other issue here is: how well do we know the parameter space in real physical systems? The first guess comes from stating what the parameter space of the background solution is. In principle, testing what the time dependences are going to be lets us explore, more or less, how we expect the parameter evolution to flow; what we are trying to see is the way in which the flow is going to happen. The information about how well this represents real physical systems comes from something very interesting. Imagine that we do not do any of this at all: we are not trying to renormalize anything in the solution. In that case, the way in which the solution blows up is going to tell us what the dominant terms and the running in time are going to be as we let the solution progress. Is the divergence in amplitude linear in time, quadratic, or exponential? Depending on the order of the divergence you have, you will need to correct the equation of motion for the flow to match that order, in order to actually allow the solution to converge nicely.
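The diagnosis described here, asking whether the secular growth is linear, quadratic, or exponential, can be automated in a small sketch. The data below is synthetic, standing in for the measured envelope of a divergent perturbative correction (the numbers are hypothetical):

```python
import numpy as np

# Synthetic stand-in for the measured envelope of a secular term in a
# naive perturbative solution (quadratic in t by construction).
t = np.linspace(1.0, 100.0, 200)
envelope = 0.3 * t**2

# If the growth is a power law t**p, the slope of log(envelope) versus
# log(t) recovers p; p then tells you to what order in time the flow
# equations must be corrected. (Exponential growth would instead give a
# straight line in log(envelope) versus t.)
p, log_c = np.polyfit(np.log(t), np.log(envelope), 1)
```

Here the fit returns p close to 2, so a quadratic-in-time correction to the flow equations would be called for, in the spirit of "do not throw out your divergent solution; measure how it diverges".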
So, as in every perturbative scheme, the first thing to do is analyze how badly it blows up. Evaluating how the running goes in time lets us identify the dominant terms in the time evolution and correct the equations of motion, and the deformation in parameter space, accordingly. So do not throw out your divergent solution: observe very carefully what the divergences in parameter space are and which things are diverging. Once you know that, evaluate whether that divergence is quadratic, or whatever order in time, and construct, at infinitesimal order, the equations of motion or the reparameterization accordingly. Then you derive the alpha and beta functions, allow the system to evolve, and renormalize the solution. So again: do not throw out your divergent solutions; understand very well which parameters are diverging and how they are diverging. Okay, thank you, Tomás. I do not see any more questions. Just out of curiosity: at the beginning you wanted to do something with your hands, like you had an instrument or something? Oh yes, the thing is that I just wanted to show you a small experiment with a spring that I really wanted to do, but for some reason it failed on my system and did not allow me to proceed. A technical failure just happened, so I will try to be a happy camper and leave it. Okay, well, thank you. Yeah... oh yes, sorry.
I have one question. The method is clearly very powerful for non-linear equations, which are the hard part, but what happens if you apply it to cases that are linear differential equations, like the typical ones we know? And a second question: could you apply the same thing to, for instance, the Navier-Stokes equation, or something with more spatial dimensions? Would you have an alpha and beta for each dimension, or one alpha and beta common to all the dimensions? That is very interesting. It is true that in principle I worked with a non-linear equation, but notice that the correction I mentioned here is linear, so in principle there is absolutely no problem if you introduce something linear or non-linear into the system; nothing excludes that. You make a very good point: a very nice next thing to try would be fluid equations. I would say: take particular solutions of the fluid equations and allow them to evolve in a more complicated environment. You have a solution that you know and understand very well, including how its parameters behave, and all of a sudden you expose that solution to something much more complicated; that is exactly what we are doing here. I am taking a soliton, which I know very well how it behaves, and exposing it to a more complicated background. And there is going to be an alpha and a beta for every parameter that you have in the system. So it comes back, again, to understanding the nature of your background solution and how compatible it is with the perturbative correction you are trying to implement; or, more precisely: does your background solution allow the system to evolve as a whole, and is the reparameterization sufficient to actually solve it? Because if the values of your beta and alpha functions are huge, that tells you one of two things very clearly: either you are using the wrong initial condition, or the modification you are making to the system is doing something completely crazy to the solution. Okay. Thank you, Tomás, for this very nice and very passionate talk. People can find your email on the physics department page of the University of Mississippi website, in case there are more follow-up questions. This was our 97th webinar; please stay tuned over the next three months of this season, and then we will come back, potentially in August or September. Thank you very much, everybody, for attending. Bye, Tomás; bye, everybody. Thank you very much.
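As a closing illustration of the linear case raised in this last question, here is a minimal numerical sketch using the weakly damped harmonic oscillator. This example is not from the talk: the equation, the value eps = 0.05, and the amplitude flow dR/dt = -eps*R are illustrative assumptions. Naive first-order perturbation theory produces a secular term growing linearly in t, while letting a beta function run the amplitude resums it into an exponential decay.

```python
import numpy as np

eps = 0.05                      # small damping (hypothetical value)
t = np.linspace(0.0, 40.0, 2001)

# Exact solution of x'' + 2*eps*x' + x = 0 with x(0) = 1, x'(0) = -eps.
w = np.sqrt(1.0 - eps**2)
exact = np.exp(-eps * t) * np.cos(w * t)

# Naive first-order perturbation theory: the secular term -eps*t*cos(t)
# grows without bound and eventually ruins the approximation.
naive = (1.0 - eps * t) * np.cos(t)

# RG-improved solution: the beta function dR/dt = -eps*R resums the
# secular term into an exponentially decaying amplitude.
rg = np.exp(-eps * t) * np.cos(t)

err_naive = np.max(np.abs(naive - exact))
err_rg = np.max(np.abs(rg - exact))
```

By t of order 1/eps the naive expansion has an O(1) error while the renormalized solution stays close to the exact one, which is the "renormalize, don't throw out the divergent solution" message of the talk in its simplest linear setting.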