We consider a complex Ginzburg–Landau equation of this form. Here ν is the viscosity, u is the unknown state, and η is a random force which I will specify later. For the domain K we simply choose an n-dimensional cube; as in the title, n is an arbitrary space dimension, so n can be any positive integer. For the boundary condition we impose Dirichlet boundary conditions, and for this special choice of domain the Dirichlet problem can be regarded as an odd periodic extension: the solution extends to a function that is odd and periodic in each coordinate x_j, j = 1, …, n. As for the random force, we take it smooth in space and white in time: a sum over modes of real coefficients b_d multiplied by standard complex Brownian motions β_d and by the basis functions e_d. Each β_d can be written as β_d^R + i β_d^I, where β^R and β^I are independent standard real Brownian motions, and {e_d} is the standard orthonormal basis of eigenfunctions of the Laplacian on this domain, given by products of sines of d_1 x_1, …, d_n x_n. So this is the setup. What we are interested in is the behaviour of solutions of this equation when ν is very small, over long times. The motivation of this work is, of course, turbulence in fluids, so let me give a naive observation of how we can relate this equation to turbulence. For turbulence in a fluid one of course has to consider the Navier–Stokes equations, possibly with a random noise added.
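Written out, the setup just described is presumably the following (a hedged reconstruction from the spoken description, assuming the standard stochastic complex Ginzburg–Landau model; the exact normalisations are my guess):

```latex
% Equation and domain (reconstruction)
\dot u - \nu\,\Delta u + i\,|u|^2 u = \eta(t,x),
\qquad x \in K = [0,\pi]^n, \qquad u\big|_{\partial K} = 0.
% Random force: smooth in x, white in t
\eta(t,x) = \frac{\partial}{\partial t}\sum_{d\in\mathbb N^n} b_d\,\beta_d(t)\,e_d(x),
\qquad \beta_d = \beta^R_d + i\,\beta^I_d,
% Eigenbasis of the Dirichlet Laplacian: products of sines
e_d(x) = \Bigl(\tfrac{2}{\pi}\Bigr)^{n/2}\prod_{j=1}^n \sin(d_j x_j).
```

With Dirichlet conditions on the cube, odd 2π-periodic extension in each coordinate x_j turns this into a periodic problem, which is the reduction mentioned above.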
For turbulence we need to take the limit as ν goes to zero, and the turbulence is then characterized by the Euler equation. It is a nonlinear equation: if we also drop the force, the turbulent part is determined by this nonlinear equation. And if we take the same limit in our equation, we get an equation of a slightly similar form: again an equation with only the nonlinear part and no linear dissipation. That is the main similarity between the two equations. Since fluid turbulence is completely out of reach at this moment, it is fairly standard to consider this equation as a toy model for turbulence, and that is why we want to study it. Another characteristic of turbulence is the small-scale effect: on arbitrarily small spatial scales one can always observe oscillations, no matter how small the scale is. If we formulate this effect mathematically, the low-order Sobolev norm, say the L² norm, is of order one, while the higher Sobolev norms are much larger than one. So the problem of turbulence can, in some sense, be expressed in the language of the growth of Sobolev norms. So much for the motivation; this is enough to justify our efforts here. As I said, we consider the case of small, but not vanishing, viscosity. For this equation there is a mixing result — the equation is mixing — which is a work by Kuksin and Shirikyan (2000); I will explain this later.
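Schematically, the analogy between the two inviscid limits, and the Sobolev-norm formulation of the small-scale effect, can be recorded as follows (my notation):

```latex
% The two inviscid limits drawn in parallel
\text{Navier--Stokes} \;\xrightarrow{\ \nu\to 0\ }\; \text{Euler},
\qquad
\dot u - \nu\Delta u + i|u|^2 u = \eta
\;\xrightarrow{\ \nu\to 0\ }\;
\dot u + i|u|^2 u = \eta.
% Small-scale (turbulent) regime in Sobolev norms
\|u\|_{L^2} \sim 1, \qquad \|u\|_{H^m} \gg 1 \quad\text{for larger } m.
```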
So we already know a lot about this equation, because it is mixing. Now let me first state the main result and then give a little bit of the proof. Our main theorem is the following. Take any m bigger than half the space dimension plus two, and assume B_m is finite, where B_m is defined in terms of the coefficients of the force. Take any δ smaller than 1/8 — this number is innocent, it is just taken to be small — and any δ′ smaller than 1/100, which is also innocent, just a small number. We choose κ positive and smaller than an explicit bound involving a quantity d_α, where α is taken as the minimum of m − 2 and a certain number, and d_α is then given by an explicit expression. We additionally assume that α is bigger than α_n, where α_n = 0 for n = 1, 2, and α_3, α_4 are explicit constants, approximately 3.6 and 0.6 respectively. We also assume that κ is bigger than an expression in γ and δ′. These are assumptions which you can simply ignore at this moment. Then for these fixed m, δ, δ′, κ, γ there exists ν₀ such that if ν is smaller than ν₀, then for any initial data u₀ in the Sobolev space H^m and any time T bigger than some T₀ = T₀(δ, ν), we have the following statements. The first is that there is an event Γ = Γ(T, ν) whose probability is bigger than 1 − δ.
For ω inside this set Γ, the maximum of the Sobolev norm of the solution over the time window is bigger than ν^(−κm). This is the first statement: for large enough times, with large probability — at least 1 − δ, so in particular at least 7/8 — the Sobolev norm of the solution grows at least to this power of the viscosity constant. The second statement concerns the time average: for T bigger than T₀, the average over time of the expectation of the Sobolev norm is at least of the order ν^(−l), where l is an explicit exponent built from κ, γ and m. For this to make sense we need this exponent to be positive, and that is exactly why we imposed the constraints above. So this is the statement; let me comment on it. We obtain a Sobolev norm of this order, which means that on spatial scales of some positive power of ν we see changes of order one — and this is precisely a characteristic of turbulence. As for upper bounds: an upper bound of a similar power-law order is also available, so we have both a lower bound and an upper bound for the Sobolev norm. Of course, there is still a gap between the two exponents — roughly, one is at best one fifth while the other is two — but in both bounds you can see that as ν goes to zero the norm blows up. (In answer to a question: yes, there is a constant here, and this constant depends on the initial data.)
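As far as I can reconstruct them, the two statements have the following shape (the time window and the exponent l are placeholders where the talk was unclear):

```latex
% Statement 1: lower bound with probability at least 1 - delta
\mathbf P(\Gamma) \ge 1-\delta,
\qquad
\sup_{t \in [T,\,T']} \|u(t)\|_{H^m} \;\ge\; \nu^{-\kappa m}
\quad\text{for } \omega \in \Gamma.
% Statement 2: lower bound in time average over a window I
\frac{1}{|I|}\int_{I} \mathbf E\,\|u(t)\|_{H^m}\,dt \;\gtrsim\; \nu^{-l}.
```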
It depends on the initial data — on its size — and it is bounded by a negative power of ν. And there is no blow-up: no explosion here. The equation is globally solvable and the solution obeys an upper bound; with viscosity present you simply do not expect explosion. So much for the statement. I have about 50 minutes left, so I can show some of the details. The proof of this theorem consists of two parts, but let me first explain the mixing, because we are going to use it, together with an upper bound, in an essential way. This equation is globally well posed for any initial data. Consider the Markov transition operators: for a set Γ inside the function space — a subset of C⁰ — they give the probability that the solution at time t lies in Γ; this is the same notation as before. Mixing then means that there is a unique measure on this space of continuous functions such that the law of the solution converges to it quickly, for any initial data. That is mixing: a uniquely mixing system. There is also a very good upper bound for the supremum norm of the solution: for any t bigger than 0 and any initial data, the expectation of the supremum norm of the solution is bounded by some large constant. So with these two prior results — the mixing and this upper bound — we need two further estimates.
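In standard notation, the two prior results invoked here read roughly as follows (a reconstruction; D denotes the law of the random variable and μ the unique stationary measure):

```latex
% Mixing: convergence of laws to the unique stationary measure
\mathcal D\bigl(u(t;u_0)\bigr) \;\rightharpoonup\; \mu \quad (t\to\infty)
\quad\text{for every initial data } u_0.
% Uniform bound on the supremum norm, independent of nu
\mathbf E\,\|u(t;u_0)\|_{C^0} \;\le\; C \quad\text{for all } t \ge t_0 > 0.
```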
So the proof of the theorem is based on these two estimates, together with the mixing and the upper bound. The first concerns the stationary measure: under the stationary measure, the probability that the L² norm is smaller than δ is at most a constant times δ, where the constant depends only on the coefficients b. Number two: if the initial L² norm is bigger than δ, then for ν small enough the Sobolev norm becomes bigger than a negative power of ν at some time s in a short time interval; here we assume the initial time is 0, and the statement holds for ω in an event Ω whose probability is close to one. So these are the two lemmas. With the mixing, the first lemma says that in the stationary state the L² norm of the solution cannot stay small with large probability: it can be small only with probability of order δ. The second says that if the L² norm of the initial data is larger than δ, then the H^m Sobolev norm grows at least up to the stated order within the stated time interval; and of course the probability here is chosen accordingly. Combining the two: for any initial data the solution converges to the stationary state, so eventually, with large probability, for most of the time it has L² norm bigger than this order; and after that we apply Lemma 2, so the norm grows at least up to the stated order. It is a two-step statement. I will not prove the first lemma here; instead let me show how we prove the second. Let us see how we choose the event Ω; here we assume time starts from zero.
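Schematically, the two lemmas are (exponents are placeholders where the recording was unclear):

```latex
% Lemma 1: under the stationary measure the L^2 norm is rarely small
\mu\bigl(\|u\|_{L^2} \le \delta\bigr) \;\le\; C(b)\,\delta.
% Lemma 2: growth of the H^m norm from L^2-large data
\|u_0\|_{L^2} \ge \delta \;\Longrightarrow\;
\mathbf P\Bigl(\exists\, s \le \nu^{-\gamma}:\;
\|u(s)\|_{H^m} \ge \nu^{-\kappa m}\Bigr) \;\ge\; 1-o(1).
```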
(In answer to a question: the stationary state does not depend on time — for any starting time it looks the same.) We choose Ω so that the growth of the Brownian motions on this time interval is restricted to a certain order, where a is any small positive number; you can make it as small as you like. Since a is positive, for γ small enough this event has probability close to one, with the data fixed. Then we want to show that, for ω inside this set and for initial data at least of the stated L² order, the Sobolev norm reaches the stated order. The claim is proved by contradiction: assume it is not true, which means that for every ω inside this Ω and every t in the interval the norm stays below ν^(−κ); we want to show that this leads to a contradiction. For the solution of this CGL equation we can write the mild (Duhamel) formulation: the solution equals the heat semigroup applied to the initial data, plus a time integral of the semigroup applied to the nonlinearity, plus a stochastic convolution. For this formula we need to make some estimates. One term carries the stochastic force, and one is the linear term.
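The mild (Duhamel) formulation used here, for the equation in the form written at the beginning, is:

```latex
u(t) \;=\; e^{\nu t\Delta}u_0
\;-\; i\int_0^t e^{\nu(t-s)\Delta}\,|u(s)|^2 u(s)\,ds
\;+\; \int_0^t e^{\nu(t-s)\Delta}\,d\eta(s),
```

with the three terms on the right being the linear term, the nonlinear term, and the stochastic convolution.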
Label the terms (1), (2), (3). For term (1) — the nonlinear one — we estimate as follows. By the contradiction assumption, the H¹ norm is smaller than ν^(−κ), and likewise the H³ norm is smaller than the corresponding power of ν^(−κ); and from the supremum-norm estimate, the C⁰ norm is bounded by a constant independent of the viscosity. Putting these together, this term is bounded by a power of ν — schematically, a norm of order ν^(−γ+2mκ) over the interval. For the second term we bound from below: taking the H¹ norm of the semigroup applied to the initial data gives at least a constant times the L² norm of the data. For the last term — the stochastic convolution — we use the integration-by-parts formula for the stochastic integral. In the usual integration-by-parts formula for stochastic integrals there would be an additional term, the quadratic covariation between the two factors; but since one factor is a C¹ function of t, its quadratic covariation with the Brownian motion is zero, so that term is absent here. Then, with our choice of Ω, this term is bounded by the stated small power of ν. So terms (1) and (3) are bounded by suitable powers of ν, and we want the second term to dominate both of them.
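The integration-by-parts step for the stochastic term is the following standard identity: for a deterministic f of class C¹ the quadratic covariation [f, β] vanishes, so no Itô correction appears:

```latex
\int_0^t f(s)\,d\beta(s) \;=\; f(t)\,\beta(t) \;-\; \int_0^t f'(s)\,\beta(s)\,ds,
\qquad [f,\beta]_t = 0 \ \text{ for } f \in C^1.
```

On the event Ω this converts the stochastic convolution into a bound through the running supremum of the Brownian motions, which is exactly what the growth restriction defining Ω controls.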
If we sum up the exponents, we find that if κ and γ are chosen to satisfy the conditions in the theorem, then we get a contradiction: the main term is of order a negative power of ν, while the sum of the error terms has a strictly smaller negative order, with κ and γ chosen to satisfy the relations from before. So we obtain that the H¹ norm of the solution at time ν^(−γ) is at least c·ν^(−e) for some exponent e smaller than γ; hence for ν small enough this is bigger than the required quantity. Then, by Sobolev interpolation (embedding), one factor is of order ν^(−γ) and another of order ν^(−κ), and since γ is bigger than κ we get a contradiction. So that is more or less the argument, and the main point is that the growth of the Sobolev norm is due mainly to the nonlinear term: it is not caused by the random noise, and of course not by the dissipation. So now we have proved that the Sobolev norm at some moment grows at least to order ν^(−κm). To show that it remains large in time average, we go back to the equation: applying the Itô formula to the squared L² norm of the solution, we obtain a relation on an interval [t₁, t₂]: the change of the expected squared norm equals minus 2ν times the time integral of the expected gradient norm squared, plus the Itô term; here the bracket denotes the inner product in the Sobolev space.
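The Itô balance for the squared L² norm is standard for this equation: the nonlinearity is conservative, since Re⟨i|u|²u, u⟩ = 0, so only the dissipation and the Itô correction survive (reconstruction; B₀ is a constant built from the force coefficients, e.g. B₀ = 2∑ b_d² up to normalisation):

```latex
\mathbf E\,\|u(t_2)\|_{L^2}^2 \;-\; \mathbf E\,\|u(t_1)\|_{L^2}^2
\;=\; -\,2\nu\int_{t_1}^{t_2}\mathbf E\,\|\nabla u(t)\|_{L^2}^2\,dt
\;+\; B_0\,(t_2-t_1).
```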
The Itô term is always bounded by some constant times the length of the interval. We have already proved that in each such time interval there is at least one moment at which the norm is bigger than ν^(−κm). So choose intervals of the form starting at t₀, and take the intersection of the corresponding events for the different windows: in each time interval there is a moment where the norm is bigger than this order. Now there are two cases. If at every moment in the interval the norm stays above one half of that order, then the integral with respect to t is trivially bounded below. If instead at some moment inside the interval the norm drops below one half of that order, then take t₁ to be the large moment, say t₀, and t₂ the small moment inside the interval; the difference of the squared norms between them is at least of the stated order. The Itô contribution is innocent, and the dissipative part has a definite sign, so the integral of ν times the gradient norm squared dt from t₁ to t₂ is at least of this order. And since the length of the interval is at most of order ν^(−γ), taking the time average introduces an extra factor: you divide by ν^(−γ), and you arrive at the stated exponent. So in both cases we get the lower bound for the time average. So it is like this. Okay, thank you.
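As a purely illustrative toy, separate from the talk, here is a minimal numerical sketch of the model: a 1D sine-spectral Euler–Maruyama discretisation of the stochastic CGL equation above, with Dirichlet conditions on (0, π). All concrete choices (grid size, time step, force amplitudes b_d = d⁻³, mode count) are my own arbitrary assumptions, not values from the talk.

```python
import numpy as np

def simulate_cgl_1d(nu=1e-3, N=64, dt=1e-4, steps=2000, seed=0):
    """Toy Euler-Maruyama sine-spectral scheme for the 1D stochastic CGL
    u_t - nu*u_xx + i|u|^2 u = eta on (0, pi), Dirichlet b.c.
    Returns (final L^2 norm, max H^2 norm observed). Illustration only."""
    rng = np.random.default_rng(seed)
    d = np.arange(1, N + 1)                   # sine-mode indices
    x = np.pi * (np.arange(N) + 0.5) / N      # collocation grid on (0, pi)
    S = np.sin(np.outer(x, d)) * np.sqrt(2 / np.pi)  # e_d(x_i), orthonormal basis
    W = S * (np.pi / N)                       # quadrature weights: coeffs = W.T @ f
    b = 1.0 / d**3                            # force amplitudes (smooth in x; assumed)
    u_hat = np.zeros(N, dtype=complex)
    u_hat[0] = 1.0                            # initial data: first eigenmode
    h2_hist = []
    for _ in range(steps):
        u = S @ u_hat                         # spectral -> physical space
        nl = W.T @ (np.abs(u)**2 * u)         # sine coefficients of |u|^2 u
        # complex Brownian increment: each real part has variance dt/2
        noise = b * (rng.standard_normal(N)
                     + 1j * rng.standard_normal(N)) * np.sqrt(dt / 2)
        # explicit step: dissipation - i * nonlinearity + stochastic force
        u_hat = u_hat + dt * (-nu * d**2 * u_hat - 1j * nl) + noise
        h2_hist.append(np.sqrt(np.sum(d**4 * np.abs(u_hat)**2)))  # H^2 norm
    l2 = np.sqrt(np.sum(np.abs(u_hat)**2))    # final L^2 norm
    return l2, max(h2_hist)
```

One can then compare the final L² norm with the largest H² norm seen along the trajectory; for small ν the theorem predicts the higher norms should run well above the low ones. The explicit scheme is only suitable for rough experiments at these parameter sizes.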