OK, thank you, and thanks also to the organizers. Today I want to introduce a new method, multi-level segment analysis, which we have developed in recent years for data analysis. If you are interested in extracting flow structures and scaling relations, this method may be interesting for you.

When we talk about turbulence, we are always interested in its multi-scale nature. A turbulent flow system contains many different scales, and those scales interact. Such interaction inevitably mixes physical properties; "property" here is a general word, and you can take it to mean many different quantities. Nowadays, when we do data analysis, a lot of data is available: we can run very large simulations, and we also have very good data from experiments. With so much data available, the next step is to ask what information, what physics, we can extract from it. This question is becoming even more important than acquiring the data itself.

Scale interaction means that the statistics we measure come out mixed. From a fundamental viewpoint, if we want to understand turbulence, we want to understand the statistics of the different regimes separately. Physically they are mixed together, but we would like to disentangle that mixing; if we could obtain the clean physics of each regime, that would be great. However, separating such mixing is not easy, and today I will introduce a new method that attempts to do it. Consider, for example, the structure function used in scaling analysis: as we know from Fourier theory, every wavenumber contributes to every scale.
Once you take the average, those cross-contributions inevitably lead to strong mixing. So the question is how to access the statistics of each regime separately. One possible answer is to run simulations at very large Reynolds number. But we know that is hard to do, and even with a very large Reynolds number the physics sometimes remains unclear. Our view is that, besides pushing toward ever larger Reynolds numbers, another direction is to develop new data-analysis methods, and this direction may be more effective in helping us understand the physics.

The idea starts from a very simple picture; simple things can be beautiful and important. Suppose we have a one-dimensional profile of some quantity, velocity or temperature, plotted against a length coordinate; here we look at a single component. Typically we see two different kinds of structure. One is the small-scale structure, the ripples, which show a large functional fluctuation over a very short spatial extent, including a sharp peak. The other is the large-scale structure, with a smaller functional difference but a large spatial extent. So in this profile there are two distinct structures. Based on such typical examples, we believe that scale should be determined by structure. Mathematically, scale can be an arbitrary input: you can vary it freely as an independent parameter. But physically, especially when we talk about turbulence, scale should be determined by structure.
For this profile there are essentially only two typical scales: the large-scale component and the small ripples. If scale is determined by structure, the next step is how to separate, or how to define, those scales exactly. Of course a Fourier decomposition also defines scales, but a Fourier mode is not a local structure, so that definition of scale is only an approximation and mixes many things together. Here we want to define an exact spatial scale.

The key observation is very simple: an extremum is conditionally valid. What does that mean? If you use a small observation window, you pick up a certain set of extremal points; if you use a large observation window, you pick up a different set. A local extremum is valid only when the observation window is small enough; enlarge the window and the same point may no longer be extremal. So "conditionally valid" means that the validity of an extremum is adjusted by the window size. Looking at the same profile with different window sizes, the distribution of extremal points is very different.

Based on this observation we developed the multi-level segment algorithm. It works as follows. Let S be the window size, which can be varied continuously from small to large. For a specific S, find the set of extremal points x1, x2, ..., xn. From this set we define segments: a segment is the structure between two adjacent extremal points. Each segment has two characteristic parameters: the function difference and the spatial scale, i.e. the segment length. Going back to the plot: with a small window size you obtain segments 1, 2, and 3.
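The window-conditioned extremum detection and segment extraction described above can be sketched in a few lines of Python. This is only a minimal illustration of the idea, not the authors' reference implementation: the function names are my own, the window size is measured in samples rather than physical length, and the brute-force scan is O(n·S).

```python
import numpy as np

def extrema_at_window(f, s):
    """Indices i where f[i] is the max or min of f over a window of
    half-width s (in samples) centred at i. These extrema are
    'conditionally valid': valid only for this window size."""
    n = len(f)
    idx = []
    for i in range(n):
        lo, hi = max(0, i - s), min(n, i + s + 1)
        w = f[lo:hi]
        if f[i] == w.max() or f[i] == w.min():
            idx.append(i)
    return np.array(idx, dtype=int)

def msa_segments(f, windows):
    """Collect (length, function difference) pairs for the segments
    between adjacent extrema, over a range of window sizes."""
    lengths, diffs = [], []
    for s in windows:
        ex = extrema_at_window(f, s)
        for a, b in zip(ex[:-1], ex[1:]):
            lengths.append(b - a)          # segment spatial scale
            diffs.append(abs(f[b] - f[a])) # segment function difference
    return np.array(lengths), np.array(diffs)
```

On a smooth test signal, a small window picks up every local peak and trough, while a large window keeps only the globally dominant extrema, exactly the behaviour described in the talk.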
With a large window size you obtain a different set of segments. The next step is to vary S continuously. Because the signal contains structures at small lengths and at large lengths, we want to capture all the different segments, so S must sweep from small to large. Different values of S may yield different segments, each with its own characteristic parameters. Finally, you collect all the segments obtained from all window sizes and compute statistics over that collection.

As an illustration, for the same example profile, a small window size yields one set of segments, and a large window size yields segments 1 and 2. This is a special example; for turbulence you also obtain many intermediate segments. We collect all of them and carry out the analysis.

Once we have the algorithm, we check what it gives for test cases and for real turbulence. As a benchmark we consider fractional Brownian motion; the turbulence cases we checked are Lagrangian turbulence, 2D turbulence, and sea-surface temperature. Let us go through those examples. For fractional Brownian motion there is an exact theoretical result: the scaling exponent is proportional to the Hurst number. With our new method we recover this quite well. When H is very small there is some deviation, which may come from the following: the algorithms typically used to generate fBm data, for example the Wood-Chan algorithm, are not exactly equivalent to the theoretical process and introduce some artificial effects.
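For reference, the Wood-Chan (circulant-embedding) generator mentioned here can be sketched as below, together with a quick sanity check that the Hurst exponent is recovered. This is the standard textbook construction written from memory, not the exact code used in the talk, and the check uses the classical second-order structure function (for exact fBm, S2(r) is proportional to r^(2H)), not the MSA method itself.

```python
import numpy as np

def fbm_wood_chan(n, hurst, seed=0):
    """Fractional Brownian motion of length n+1 via circulant embedding
    of fractional Gaussian noise (Wood-Chan), then a cumulative sum."""
    rng = np.random.default_rng(seed)
    k = np.arange(n + 1.0)
    # exact fGn autocovariance gamma(k)
    gamma = 0.5 * ((k + 1) ** (2 * hurst) - 2 * k ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))
    # first row of the circulant embedding matrix, length 2n
    row = np.concatenate([gamma, gamma[-2:0:-1]])
    lam = np.fft.fft(row).real
    lam[lam < 0] = 0.0                     # guard tiny numerical negatives
    m = len(row)
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    fgn = np.fft.fft(np.sqrt(lam / m) * z)[:n].real
    return np.concatenate([[0.0], np.cumsum(fgn)])

# recover H from the slope of log S2(r) vs log r
b = fbm_wood_chan(8192, 0.7, seed=3)
lags = np.array([1, 2, 4, 8, 16, 32])
s2 = np.array([np.mean((b[m:] - b[:-m]) ** 2) for m in lags])
h_est = np.polyfit(np.log(lags), np.log(s2), 1)[0] / 2
```

The estimate h_est comes out close to the input Hurst value, with small deviations from finite sample size, consistent with the benchmark role fBm plays in the talk.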
If that is the case, it means our new method may be sensitive enough to detect such subtle differences.

For turbulence data, the first case is Lagrangian turbulence. This is a challenging case: it is not easy to find the scaling relations, so we used our method to check whether we could. The data come from a DNS of the incompressible Navier-Stokes equations; the Taylor-microscale Reynolds number is about 400, and in total we have 0.2 million Lagrangian particle trajectories. Each trajectory's velocity components are resolved very finely, at 0.05 Kolmogorov time units, which captures all the detailed information. With the classical structure function you typically cannot see any inertial-range scaling: the compensated result, normalized by the Kolmogorov prediction, is just a rounded curve with no plateau, hence no scaling relation. With our new method, by contrast, the scaling relation is very clear, and it agrees well with the prediction from the dimensional argument; the inertial regime is quite wide.

The second example is 2D turbulence, which is also very challenging. Here we use 2D DNS data of the incompressible Navier-Stokes equations, at a resolution that is actually not very large. Even with this relatively small 2D DNS we obtain very good results.
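As an aside on what "compensated" means above: one divides the structure function by its predicted power law, so that a genuine scaling range appears as a flat plateau. A minimal synthetic illustration, using ordinary Brownian motion, for which S2(tau) = 2*D*tau holds exactly; this is my own toy example, not the DNS data from the talk.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n, diff = 1e-3, 400_000, 0.5
# ordinary Brownian path: S2(tau) = 2 * diff * tau exactly
v = np.cumsum(np.sqrt(2 * diff * dt) * rng.standard_normal(n))

lags = 2 ** np.arange(1, 10)                 # lags in samples
taus = lags * dt
s2 = np.array([np.mean((v[m:] - v[:-m]) ** 2) for m in lags])
comp = s2 / (2 * diff * taus)                # compensated: ~1 on a plateau
```

For the Brownian toy signal the compensated curve sits near 1 across all lags; for real Lagrangian turbulence at moderate Reynolds number, the talk's point is that no such plateau appears with the classical structure function.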
Compare the scaling relations from the classical structure function and from our multi-level segment analysis (MSA) method. The structure function again lacks a clear scaling relation: the curve is rounded, and you cannot tell which part of the inertial range carries the scaling. With the multi-level segment method we see two different regimes very clearly, the forward-cascade regime and the inverse-cascade regime, and the scaling relations are very clean by comparison. We further checked the inverse-cascade and forward-cascade parts against the two solid lines given by theory. Without such a method we could not obtain the scaling, and without the scaling relations we cannot address 2D turbulence physics such as intermittency. Because we do obtain good scaling relations, we can examine the scaling exponents as a function of the order q, and from that compute the singularity spectrum. These results show clearly that the forward cascade is more intermittent and the inverse cascade less intermittent: very strong evidence for the two scaling regimes and their intermittency properties.

The physical reason this method is so powerful at detecting scaling is that it is able to separate different structures and different scales. That is the physics, and the numerical tests confirm our intuition: the method works successfully for 2D and Lagrangian turbulence, very challenging cases for extracting scaling. Cases one and two were numerical data; we also wanted to check experimental data, namely the sea-surface temperature, an open-source data set.
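The singularity spectrum mentioned above is obtained from the scaling exponents zeta(q) by a Legendre transform: h = d zeta/dq and D(h) = q*h - zeta(q) + 1 for a one-dimensional signal. A short numerical sketch, using a lognormal (K62-type) model curve with an intermittency parameter mu = 0.2 chosen purely for illustration, not a value quoted in the talk:

```python
import numpy as np

def singularity_spectrum(q, zeta):
    """Legendre transform of scaling exponents zeta(q):
    h = d zeta / dq,  D(h) = q * h - zeta(q) + 1  (1D signal)."""
    h = np.gradient(zeta, q)
    d = q * h - zeta + 1.0
    return h, d

# lognormal model zeta(q) as an illustration; mu is an assumed value
q = np.arange(-20, 61) / 10.0          # q from -2 to 6
mu = 0.2
zeta = q / 3 + mu / 18 * (3 * q - q ** 2)
h, d = singularity_spectrum(q, zeta)
```

The spectrum peaks at D = 1 at q = 0 and falls below 1 away from the peak; a wider spectrum means stronger intermittency, which is how the forward and inverse cascades are compared in the talk.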
This plot shows the sea-surface temperature obtained from floats. There are many floats globally, and the map roughly shows how they are distributed by the large-scale motion. From these measurements we define a mean sea-surface temperature; this is its profile, and we want to analyze it. The physical meaning of this mean temperature is not entirely clear, but at least it is related to the large-scale motion, the 2D turbulent structure, and so on. With our new method we again see the scaling relation very clearly, whereas the structure function does not give such a clean result, only a rounded curve. We also show the scaling-exponent relation obtained from MSA and from the Hilbert transform: those two methods agree closely with each other, while the structure function differs considerably from both. This means that the structure function may give you incorrect physics here: it indicates that the field is strongly intermittent, but in fact it may not be, since the other two methods indicate much weaker intermittency.

The last example is a new result I also want to show: wind-tunnel experimental data from Lille 1 University, measured with many cameras. With our new method we detect the scaling very cleanly, and at different wall-normal distances the scaling, blue dots versus red dots, is slightly different, which also reveals some important physics. Again, the structure function does not give such results.
Now let me state the main conclusions. This method is very interesting for turbulence, but you can also use it to analyze other complex systems, not necessarily turbulence; we tried financial data, for example, and also found some very interesting features. The method is generally valid: it works even when the data are very noisy or when some data are missing. A Fourier analysis has special requirements, for example the data need to be periodic; our method has no such requirements, so general data are always possible. Even if your data carry a lot of noise, the method will still help you extract the physics, at least in the range you care about. Imagine the noise is random: then it lives at very small scales, so with our method it only influences the small-scale part of the statistics, while at large scales it has little influence and you can still recover good physics. Physically, the reason this method is able to detect the correct physics is that it separates the structure mixing, and hence the scaling relations, helping you obtain the pure physics in each regime. Those are the key points. Any questions are welcome. Thank you.

Thank you. Does anybody have a question?