Can I show something? This one. The first talk of the session is about the transport of a passive scalar by a non-Gaussian turbulent flow in the Batchelor regime, by Il'yn, Sirota and Zybin.

My name is Valeria Sirota, and I'm presenting our work with Anton Il'yn and Kirill Zybin. Our topic is passive scalar transport, but I'd like to begin with a mathematical preface. It concerns the time-ordered exponentials of random matrices, which are the formal solutions of the linear equation dQ/dt = A(t) Q. The equation is linear, but these are matrices, so they do not commute. The matrix A is assumed to be a given stationary random process with given statistics, and the question is how to find the statistics of Q. Here is a good tool for such problems. This equation appears in many turbulence problems, so I hope it may be of use not only for the passive scalar but for many other things as well.

The important tool is cumulant functions. This is the definition: if a probability density function exists, then the exponential of the cumulant function is just the Fourier transform of the probability density. When we deal with random processes, we should consider cumulant functionals instead of functions; but if the probe function, the argument of W, changes slowly compared to the correlation time of the process, then the functional reduces to a function in such a way. So in any case we can deal with cumulant functions.

Here is an example of how this tool works. In the one-dimensional case everything is easy: if we want to calculate, for example, the moments of the quantity Q, we just recall the definition and immediately get the answer. By the way, we see that these moments either grow or decrease exponentially as functions of time.

So it is all right in the one-dimensional case, but what shall we do in several dimensions? It is convenient to make the so-called Iwasawa decomposition of the matrix Q: any matrix can be decomposed into a product of an upper triangular matrix, a diagonal matrix with positive elements, and a rotation matrix. This is true for any matrix; but if Q is the solution of our equation, then these three components behave in very different ways as time goes to infinity. Namely, the rotation matrix remains simply random; the upper triangular matrix stabilizes at some random limit, which depends on the particular realization of the process; and the elements of the diagonal matrix grow or decrease exponentially, in the sense that (1/t) ln D_ii tends to a definite limit lambda_i as t goes to infinity. Moreover, these limit values lambda do not depend on the particular realization of the process: they are the same for all realizations and depend only on the statistics of A. So they are important characteristics of the process A, and of the process Q; in this sense they are universal, and they are called Lyapunov exponents. In the case of a Gaussian process A, the expressions for these Lyapunov exponents have been calculated, but the question remains what to do if the process A is not Gaussian. Since the dominant part of the Q matrix turns out to be the diagonal matrix, our goal is to find the statistics of the diagonal matrix as a function of the statistics of A. And the good news is that now we know how to do this.
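To make the Lyapunov exponents concrete, here is a minimal numerical sketch (not part of the talk). It integrates dQ/dt = A(t) Q for a white-in-time Gaussian matrix process A, which is an illustrative assumption, and extracts the exponents by the standard Benettin QR re-orthogonalization; the QR factor plays the same role as the diagonal part of the Iwasawa decomposition. The matrix size, time step, and step count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def lyapunov_exponents(n=3, dt=1e-3, steps=200_000):
    """Estimate the Lyapunov exponents of dQ/dt = A(t) Q for a
    white-in-time Gaussian matrix process A (illustrative assumption)."""
    Q = np.eye(n)
    log_diag = np.zeros(n)
    for _ in range(steps):
        # Euler-Maruyama step: the noise increment over dt is Gaussian
        # with standard deviation sqrt(dt) (delta-correlated A)
        A_dt = rng.normal(scale=np.sqrt(dt), size=(n, n))
        Q = Q + A_dt @ Q
        # Re-orthogonalize: log |diag(R)| accumulates the exponential
        # growth rates, while Q keeps only the rotation part
        Q, R = np.linalg.qr(Q)
        log_diag += np.log(np.abs(np.diag(R)))
    return log_diag / (steps * dt)

print(lyapunov_exponents())  # the limits lambda_i, largest first
```

Rerunning with a different seed changes the realization of A but, up to statistical error, not the printed values, which is exactly the universality described above.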
The main idea is that we have to make a change of variables: we rotate the A matrix in correspondence with the way the Q matrix rotates. It is not easy to guess why this change of variables is good, but it is. Just as the Q matrix is decomposed into a product of three components, the X matrix can correspondingly be decomposed into a sum of three components; and unlike the Q matrix, these components are easy to separate from one another. Luckily, the diagonal part of X corresponds to the diagonal part of the Q matrix. This change of variables is rather complicated because of the random matrix R; still, it is possible to calculate the Jacobian of the transformation, and so we can eventually calculate the cumulant function of X from the cumulant function of A. The relation turns out to be very simple. And since we are interested in the diagonal part of X, we can quite easily restrict ourselves to the diagonal components, and we get the cumulant function of the diagonal components. So now, knowing the statistics of A, we can calculate any averages of the matrix D. In particular, it is easy to obtain the expression for the Lyapunov exponents in this case.

Well, now we pass on to the passive scalar transport. Here is the equation: theta is the density of the passive scalar, and v is a random velocity field, which is assumed to be given. The Kraichnan model assumes that the velocity is delta-correlated in time and Gaussian. This model is very well studied, and there are very many results; but its drawback is that it is rather far from reality. For example, in this model there can be no energy cascade, because of time reversibility.

There is a common mistake: to think that if we have a sum of a large number of summands, or an integral of a random quantity, it behaves just like a Gaussian, so we can replace any integral or sum by a Gaussian value. It is not so: indeed, all the fixed-order moments of the two random quantities coincide in the limit, but if we take the exponential of the random quantity, the result is quite different. If we take exponentials, the non-Gaussianity is very important: it makes a contribution of the same order as the first two cumulants.
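The point about exponentials can be checked with a short computation. Here is a sketch using a centered exponential variable as the illustrative non-Gaussian building block: a sum of many such variables has the same mean and variance as its Gaussian surrogate, but the exponential average of the sum grows at the rate given by the full cumulant generating function, not just by its first two terms.

```python
import numpy as np

# Per-step cumulant generating function K(a) = ln E[exp(a*x)] for the
# centered exponential variable x = Exp(1) - 1 (mean 0, variance 1,
# nonzero higher cumulants); finite only for a < 1
K = lambda a: -np.log(1.0 - a) - a

alpha, n = 0.5, 50
true_rate = K(alpha)           # exact: (1/n) ln E[exp(alpha*(x1+...+xn))]
gauss_rate = 0.5 * alpha**2    # Gaussian surrogate with same mean and variance

print(true_rate, gauss_rate)                 # 0.193... vs 0.125
print(np.exp(n * (true_rate - gauss_rate)))  # moment ratio ~30 already at n = 50
```

A direct Monte Carlo estimate of the true moment is unreliable here, because the average is dominated by rare large realizations; this is the same effect that will matter in the saddle-point calculation below.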
So what do we do if we want to deal with non-Gaussian fields? The usual way is to use the Batchelor limit. This corresponds to the viscous range, and then we deal with the velocity strain tensor instead of the velocity itself. The problem was investigated by Balkovsky and Fouxon, and the result was that the moments of the scalar density decrease exponentially, and the exponents saturate at large values of alpha. But the relation between these exponents gamma and the statistics of A remained unknown. In this paper, and in this talk, we find the exact expressions for gamma in terms of the statistics of A. So, given the statistics of A, we can calculate everything; and the interesting fact is that the saturation always occurs at the same value of alpha, which does not depend on the statistics of A at all. The saturation is universal.

Well, so we take the Batchelor limit, and we take the quasi-Lagrangian reference frame: this means that we move along together with one particle. The equation becomes like this, and we first solve it and then do the averaging. This is rather easy, because the equation is linear: we make a Fourier transform and a change of variables, in either order, and in the new variables there appears the equation we discussed above.

So this is the result in Fourier space, and to get the value in real space we have to make one more Fourier transform. For the initial condition we take an isolated blob with a Gaussian profile, without any loss of generality; the only important thing is that it has some scale, and it is not even important which scale. So now we calculate the passive scalar density, and we see that it is expressed via the determinant of such a matrix. So we have to calculate such an average. Sometimes people consider homogeneous initial conditions instead of the evolution of a blob, and it is just the same: one only has to average additionally over the initial conditions, and the result is the same up to a renormalization of alpha.

So we have to calculate this average, and now we recall that the Q matrix can be decomposed into three components, only one of which remains random. If we look at the matrix whose determinant we have to calculate, we see that Q enters it in such a combination that the rotation matrix drops out. So the matrix Q is random, but this matrix is not random at large times. Moreover, to calculate its determinant we have to take only the growing components of the D matrix: the growing components contribute to the determinant, and the decreasing ones do not, because of this summand. So this determinant is proportional to the product of those components of the D matrix that are growing.

Now we recall that we know everything about the matrix D, or the matrix Z, once we know the matrix A. In particular, it is easy to calculate the cumulant function of the Z matrix for any given time; and we will also need, in what follows, the probability density, which is easy to get by taking the Fourier transform.

Now, how do we calculate this integral? The most naive consideration is this: the components of Z have definite limits at large time, so we just take these limits instead of the components themselves, and we get such a solution. But of course this works only for very small values of alpha. A less naive consideration is this: all right, the Z are not equal to their limits, but since the time is large enough, we may hope that at least the sign of each Z equals the sign of its limit, so that it is not so far from its limit. Then we can again easily calculate the result using the definition of the cumulant function. But this is also not true: it works only for rather small alpha. The reason is that for any finite time there exist particular realizations of the process in which the value of Z has the opposite sign to that of lambda, or at least the value of Z is zero. And though these realizations are very rare, they may be very important, and we will see that this is exactly what happens.

So the honest way is to calculate the probability density of Z and then calculate this integral. Luckily, all these integrals contain a large parameter, the time t, in the exponential, so we can use the saddle-point approximation. We then find that all the averages are exponentials, and the exponents are the maxima of such functions. What do we do with these functions? Here it is: this function is not analytic, because of this kink-like part. We have to take into account all possible values of Z or, which is the same, all possible values of k.
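The saddle-point logic can be illustrated by a one-dimensional toy model; this is my own simplification, not the matrix formula of the paper. Suppose the moment of order alpha is the average of exp(-alpha t max(z, 0)) over a large-deviation density proportional to exp(-t S(z)) for a single stretching rate z, so that gamma(alpha) = min over z of [S(z) + alpha max(z, 0)]; the Gaussian rate function S and the parameters lam, delta below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy saddle-point computation: <theta^alpha> ~ exp(-gamma(alpha) * t),
# gamma(alpha) = min_z [ S(z) + alpha * max(z, 0) ], with S the
# large-deviation rate function of the stretching rate z.

lam, delta = 1.0, 0.5                       # mean stretching rate and its variance
S = lambda z: (z - lam)**2 / (2 * delta)    # Gaussian rate function (illustrative)

def gamma(alpha):
    f = lambda z: S(z) + alpha * max(z, 0.0)
    return minimize_scalar(f, bounds=(-10, 10), method="bounded").fun

for a in [0.5, 1.0, 1.5, 2.0, 3.0, 5.0]:
    print(a, gamma(a))
# gamma grows until the minimizer reaches the kink at z = 0 (the rare
# realizations with zero or negative stretching), then stays at S(0)
```

In this toy the saturation point is lam/delta, so it depends on the parameters; the universal saturation at alpha equal to two in the talk requires the full three-dimensional incompressible treatment.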
So if the maximum is reached inside an analytic region, then the result is like this: this is the maximum value. But the maximum may as well be reached on the boundary between two analytic regions. This boundary corresponds to the sum of the components of Z being equal to zero, and then the maximum looks like this; one of the equations is the equation determining the boundary. In both cases the moments are exponentials, and the exponents are the values of the cumulant function at the point where the maximum is achieved.

In the three-dimensional case everything becomes simpler, because of incompressibility: we can make a change of variables such that one variable drops out, and everything depends on only two variables. Also, one of the Z components is definitely positive and another is definitely negative, so only one of them may change its sign.

Here are some examples, just to see what all this means. The simplest case is Gaussian statistics. In the Gaussian case the cumulant function of A looks like a paraboloid, and the cumulant function of rho looks like a paraboloid shifted to this point; this is the minimum. And what happens? If we start with alpha equal to zero, we are at this starting point: the maximum corresponds to the zero point, and we are already situated on the boundary. This boundary looks like a roof: this is one region of analyticity, this is the other, and this is the boundary between them. The maximum is reached here. As alpha increases, the maximum moves along this boundary towards this point, and when alpha becomes equal to two, the maximum reaches this point. Then it stays there whatever happens: alpha increases, and the maximum stays here. This is the saturation of the exponent.

If we take a small deviation from Gaussianity, then the boundary no longer includes the zero point, so we start at a point inside a region of analyticity. But as alpha increases, the point of maximum moves inside this analytic region towards the boundary, and when it reaches the boundary it begins to move along it again. Again it goes to this point, and here it stops. If we consider the opposite sign of the deviation, the situation is very similar: we are again in a region of analyticity, but now we move to the left, straight towards the boundary. This boundary is z equal to zero, the boundary between the two regions of analyticity. So whatever we do, the point of maximum moves towards the boundary z equal to zero and then goes along this boundary.

In these two cases the eventual deviation from the Gaussian result is very small. But it need not be so if the deviation of the A process from Gaussianity is large enough. For example, if we consider such a model for the process A, we see that the exponents also saturate at alpha equal to two, but the shape of the curve is different.
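The same toy model shows what a strong deviation from Gaussianity does: replacing the parabolic rate function by a non-parabolic one, here the Legendre transform of a cosh-shaped cumulant function chosen purely for illustration, changes the shape of the gamma(alpha) curve while leaving the saturation mechanism intact.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# As before, gamma(alpha) = min_z [ S(z) + alpha * max(z, 0) ], but now with
# a strongly non-Gaussian rate function (illustrative choice):
# S = Legendre transform of W(k) = lam*k + delta*(cosh(k) - 1).

lam, delta = 1.0, 0.5

def S(z):
    u = (z - lam) / delta
    return delta * (u * np.arcsinh(u) - np.sqrt(1.0 + u * u) + 1.0)

def gamma(alpha):
    f = lambda z: S(z) + alpha * max(z, 0.0)
    return minimize_scalar(f, bounds=(-10, 10), method="bounded").fun

for a in [0.5, 1.0, 2.0, 4.0, 8.0]:
    print(a, gamma(a))
# gamma(alpha) still freezes once the minimizer reaches z = 0, but the curve,
# and in this 1D toy also the saturation point, differ from the Gaussian case
```

This mirrors the talk's last example: the saturation survives, but the curve between zero and the saturation point depends on the statistics of A.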
So, this is the summary. We analyzed passive scalar advection in a turbulent flow at times much larger than the correlation time of the flow, and at scales much smaller than the viscous scale, and we obtained exact expressions for the exponents of the Lagrangian scalar density moments in terms of the Lagrangian velocity strain tensor statistics. If the velocity is assumed to be delta-correlated, the latter coincides with the Eulerian strain tensor statistics. The exponents saturate at the universal value alpha equal to two, independently of the statistics. In the range between alpha equal to zero and alpha equal to two, they can differ significantly from those in the Gaussian case. Thank you for your attention.

When we look at turbulence, we're often told that it's not Gaussian. Did you describe your turbulence as a Gaussian field?

Turbulence is a much more complicated problem than passive scalar transport in a given velocity field. If we are successful, we are thinking of trying to apply these techniques to turbulence itself, but of course that is a much more complicated step. Here the field may be turbulent, but its statistics are assumed to be given: we do not solve for the statistics of the velocity. We assume them to be given, and if they are given, then we can calculate the statistics of the passive scalar.