My talk is somewhat related in subject to the first talk of the session; if you were here for it, you will see similarities on some topics. Today I am going to talk about hierarchical wavelet-based turbulence modeling. For most of you the subject will probably be unfamiliar — it is not a mainstream approach — so I will try to talk at a more philosophical, higher level without going into too many details. If you are interested, we can talk after the session, or at any time we have left in the conference I can fill you in on the details or point you to references. This is the list of my collaborators: university collaborators, former students and postdocs who worked with me while I was at the University of Colorado, and students and postdocs of my collaborators at different universities, who one way or another contributed to the subject of this talk.

So, the plan of the talk. I will talk about motivation. I will talk about the new paradigm we are pursuing of direct, physics-based coupling of adaptive numerical methods and turbulence modeling. I will talk briefly about the parallel adaptive wavelet collocation method, because this is the essential methodology used not only for the numerical simulations but also for the modeling. Once I have described this mathematical framework briefly, I will talk about hierarchical turbulence modeling. In particular, you will see there are four levels. The hierarchy starts from wavelet-based direct numerical simulation and coherent vortex simulation, followed by adaptive large-eddy simulation, which in one sense is similar to LES but has some fundamental philosophical differences — we will discuss that — and then the low-fidelity approaches. Finally, I will talk about what we are trying to do in the future: glue all of this together.

Now, most of you are aware of LES, but let me briefly say what it is, to put my talk in perspective. In large-eddy simulation you take the velocity field and decompose it into two components: a large-scale component and a small-scale contribution. Then you filter the Navier–Stokes equations, you end up with a term which is not closed, and you have to model that term. If you think about what people actually do, you can put it into a kind of block diagram. You have a CFD engine, typically; then some sub-grid-scale model; then a grid and filter definition; then the numerical mesh with a prescribed filter width if the filtering is explicit, or, if the filtering is implicit, the filter width is very often a function of the local mesh resolution; and you get results. This is basically the block diagram of the classical LES method. The main thing I want to show you is that typically the simulation engine is independent of the sub-grid-scale model: people take a computational tool, take a model, plug them together, and expect to get results. And more or less the same is done for the RANS approach, or for anything common nowadays.
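For reference — and this is just the standard textbook formulation, not anything specific to this talk — the decomposition and the unclosed term I just mentioned can be written as

    u_i = \bar{u}_i + u_i',
    \qquad
    \frac{\partial \bar{u}_i}{\partial t} + \frac{\partial (\bar{u}_i \bar{u}_j)}{\partial x_j}
      = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
        + \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j}
        - \frac{\partial \tau_{ij}}{\partial x_j},
    \qquad
    \tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j,

where the overbar denotes the low-pass filter and the sub-grid-scale stress \tau_{ij} is the term that is not closed and has to be modeled. In classical LES the filter width defining the overbar is usually tied to the mesh spacing, which is exactly the coupling shown in the block diagram.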
So what are the main deficiencies of classical LES? Number one, classical LES does not take advantage of one of the main features of turbulence: intermittency. You decide beforehand what is large and what is small, and you simply don't exploit it. As a result, your LES flow fields don't have uniform fidelity: in some very important regions of the flow you have under-resolution, while in other regions, where nothing is going on, you have over-resolution. And the main issue with classical LES is that you have already decided what is large and what is small, while in reality what you are really after are the dynamically important structures, and their scale depends on the particular location in the flow, the particular moment in time, and the particular structures in the flow field.

So we are pursuing a different direction — call it a philosophy or a paradigm — in which we directly couple adaptive numerical methods with the turbulence modeling. That is in line with the message given in the first talk of the session. The idea is that we want the numerics to be aware of the turbulence model and the turbulence model to be aware of the numerics; they go together, take advantage of each other, and, if necessary, compensate for each other's deficiencies. Now, if you want to develop this type of approach, you need active control of fidelity, and you need an adaptive spatial mesh to do that. You have to be able to capture the desired flow physics, and you have to end up with methods that give considerably better scaling than classical ones. Most importantly, you need adaptive mesh refinement methods to do it.

So let me start with the adaptive methods we developed, to give you a perspective on what we are trying to do and on the capabilities that allow us to pursue this research. The idea has to do with adaptive mesh refinement. Since a lot of talks here have been about Rayleigh–Taylor instability, here is an example of Rayleigh–Taylor instability: mole fraction, density, vorticity. But more important, what I want to show you here is the mesh. The mesh is colored by level of resolution — the higher the level, the finer it is — and you can see that the structure of the mesh more or less mimics the structure of the flow. The idea is that we put mesh points only where they are necessary. Here is another simple example, the flow around a cylinder: once again we put the mesh points in the vortex street, only where we need them.

Now, we want to use wavelets for this, and the question is why. There are four particular properties. Number one, they are good at image compression. Number two, the algorithms are very fast. Number three, they are good at denoising signals. And number four, they are very good at feature recognition. The first two properties we use to construct numerical methods, and the second two we use for modeling. Combined, they give a nice uniform, hierarchical framework for wavelet-based turbulence modeling.

For those of you who are not familiar with wavelets, here is a brief example that shows why wavelets are good at image compression. On the left I keep the original image — this is Rayleigh–Bénard convection. On the right is the wavelet-compressed image. Here I use one fifth of the wavelet coefficients — basically only 20% of the information — but you don't see the difference; the results are essentially identical. With 6% of the wavelet coefficients you still see only a small difference and you practically capture the fields. Even with less than 1%, all the major features are captured quite well. Only if you go beyond that do you start losing the approximation. This is an a posteriori analysis of the solution.
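To give a concrete feel for this kind of a posteriori wavelet compression, here is a minimal sketch in Python. It assumes the PyWavelets package and a two-dimensional field stored in a NumPy array; the synthetic field, the wavelet family, and the 6% retention level are purely illustrative, not the ones behind the slide.

    import numpy as np
    import pywt  # PyWavelets

    def wavelet_compress(field, wavelet="db4", keep_fraction=0.06):
        """Keep only the largest wavelet coefficients of a 2D field."""
        # Forward multi-level 2D wavelet transform.
        coeffs = pywt.wavedec2(field, wavelet)
        arr, slices = pywt.coeffs_to_array(coeffs)

        # Threshold at the magnitude of the (keep_fraction)-largest coefficient.
        n_keep = max(1, int(keep_fraction * arr.size))
        threshold = np.sort(np.abs(arr).ravel())[-n_keep]

        # Zero out everything below the threshold and invert the transform.
        arr = np.where(np.abs(arr) >= threshold, arr, 0.0)
        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
        return pywt.waverec2(coeffs, wavelet)

    # Toy usage: a smooth stand-in for the convection field.
    x = np.linspace(0.0, 1.0, 256)
    X, Y = np.meshgrid(x, x)
    field = np.sin(4 * np.pi * X) * np.cos(2 * np.pi * Y)
    compressed = wavelet_compress(field)
    print("max pointwise error:", np.max(np.abs(field - compressed[:256, :256])))

The point of the sketch is simply that ranking and thresholding the coefficients is all there is to the compression; the adaptive solver described next builds the mesh directly from the retained coefficients instead of compressing after the fact.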
What we are after is getting this compressed solution right away, on the fly. So why wavelets? Because, compared to classical methods, wavelets have a very nice property. If you compare wavelets to, say, Fourier expansions: Fourier has very good localization in wavenumber space and very non-local behavior in physical space. Finite differences, finite volumes, finite elements are the opposite: they have very good localization in physical space but not in wavenumber space. Wavelets have near-optimal behavior, because they are localized in physical space and also, in this case exponentially, decaying in wavenumber space. That means that if a coefficient of the wavelet expansion is large, then at that location k, at level j, there is something going on. This is in contrast with the Fourier transform where, as I mentioned, you completely lose the locality of the information.

Now, the wavelets form a basis by dilating and translating the original wavelet, and you can see here how that looks: the higher the scale, the smaller the wavelets. If I decompose, say, a signal with a sharp transition, I get wavelet coefficients at each location, but what is important is that only at a few of these locations are the wavelet coefficients large. In this particular case I marked in red the coefficients which are above 10^-3. And the most important property of wavelets, which allows us to do the adaptation, is that the difference between the original signal and the truncated signal obtained by wavelet thresholding is controlled by this threshold parameter, epsilon: the smaller it is, the smaller the error; the larger it is, the larger the error. That allows us to actively control the accuracy of the simulation.

If you compare the wavelet decomposition with Fourier, you see that Fourier has modes while wavelets have bands, because once again a wavelet is localized in wavenumber space but is not a delta function. If you apply a sharp Fourier cutoff filter, you keep the low-wavenumber modes and throw out the high-frequency modes, which is what is done in LES. With wavelet thresholding, you keep the most energetic, most dominant modes, marked in red, and throw out the noise, or the less energetic, less coherent structures.

Here is an example of what the adaptation does, once again to give you an idea. This is a simple example of the Burgers equation: as the small scales appear, you get more and more points at the finest scale. The same happens when a structure moves around — this is just to show you how the mesh changes at the different levels of resolution. And this is a more complicated example, basically an initiation process. You can see it is a very big domain; you only put the mesh points where necessary, you track the solution on the adaptive mesh, and you resolve all the scales as necessary.
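To summarize the machinery everything below relies on, the thresholding can be written schematically as follows (this is the generic form of the estimate just mentioned; the precise constants and norms depend on the wavelet family):

    u(x) = \sum_{j,k} d^{j}_{k}\,\psi^{j}_{k}(x),
    \qquad
    u_{\ge\epsilon}(x) = \sum_{|d^{j}_{k}| \ge \epsilon\,\|u\|} d^{j}_{k}\,\psi^{j}_{k}(x),
    \qquad
    \|u - u_{\ge\epsilon}\| \le C\,\epsilon\,\|u\|.

So epsilon directly controls the error of the adaptive approximation: decrease epsilon and the set of retained wavelets — and hence the mesh — grows; increase epsilon and it shrinks.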
So now that you have more or less an idea of the numerical methods we developed, the next step is the topic of my talk: hierarchical wavelet-based turbulence modeling. The idea was originally suggested by Marie Farge, who took the vorticity and decomposed it into two parts which, at that point, she called coherent and incoherent. And at that point, for people who deal with turbulence, the definition of coherence was not really clear. But if you decompose the field using a wavelet denoising procedure, then an optimal threshold exists, and with this optimal threshold you can decompose the original field into coherent and incoherent parts; you can then show that the incoherent field really is incoherent, because it is Gaussian white noise. She and her collaborators were able to show, for different applications but mostly for homogeneous turbulence, that the incoherent part behaves like Gaussian white noise — its energy spectrum goes like k squared — and if you compare the PDF of the incoherent vorticity with a Gaussian — this is the Gaussian PDF — it matches quite well.

That was her motivating work. Now let us take the velocity field and decompose it in terms of a wavelet expansion. The value of this threshold parameter affects everything. Remember, the wavelet functions are localized in physical and wavenumber space, so we decompose the velocity field into dominant structures, which are kept by the wavelet thresholding, and less dominant structures, and the choice of the threshold controls at what level of description you are. If epsilon is very small, you are at the level of direct numerical simulation — we call it wavelet-based DNS, WDNS — chosen such that all the features below epsilon can be neglected and do not contribute to the dynamics of the solution. Coherent vortex simulation uses the optimal threshold parameter, where you resolve the coherent structures; and it was shown that, because the incoherent structures really are incoherent, you basically do not even need a model for the momentum transport — their contribution to the sub-grid-scale stresses is almost zero. But if epsilon is larger, we go to a limit similar to LES, where you can no longer neglect the effect of the filtered motions; you have to model their effect on the large scales, or on the modes that you kept.

One way to understand this process is to think of a diagram: on one axis you put the wavenumber, on the other axis the threshold. If you take the whole square, that means you resolve everything — down to the Kolmogorov length scale, something like that. There is this optimal threshold which separates the coherent structures, both small and large, from the incoherent ones. Classical LES corresponds to taking a line at a given wavenumber: these are the filtered-out modes and these are the resolved large scales. And, as in LES, we have to model the sub-grid scales, depending on the level of epsilon. If epsilon is very small, that effect is negligible, we can throw it away, and you end up with direct numerical simulation. Here is an example of the vortex shedding from a cylinder at Reynolds number 500; this is the same thing with the mesh superimposed, and you can see the regions of high resolution where things are happening. This is an example of the flow around a sphere. Now, if you do coherent vortex simulation, as I mentioned, you can neglect the effect of the incoherent modes; and if you go to adaptive large-eddy simulation, you have to model that effect.
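Formally, the whole hierarchy looks like LES with the filter replaced by the wavelet threshold filter. Roughly — this is my shorthand based on the description above, with (·)^{>ε} denoting the wavelet-thresholded field — the resolved momentum equation reads

    \frac{\partial u_i^{>\epsilon}}{\partial t}
      + \frac{\partial (u_i^{>\epsilon} u_j^{>\epsilon})}{\partial x_j}
      = -\frac{1}{\rho}\frac{\partial p^{>\epsilon}}{\partial x_i}
        + \nu \frac{\partial^2 u_i^{>\epsilon}}{\partial x_j \partial x_j}
        - \frac{\partial \tau_{ij}^{>\epsilon}}{\partial x_j},
    \qquad
    \tau_{ij}^{>\epsilon} = (u_i u_j)^{>\epsilon} - u_i^{>\epsilon} u_j^{>\epsilon}.

For epsilon going to zero this is WDNS and the sub-grid term vanishes; at the optimal coherent/incoherent threshold it is CVS and the sub-grid term can be neglected for momentum transport; for larger epsilon it is adaptive LES and the sub-grid term has to be modeled.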
Now, this is a very instructive slide; let us look at it. These are a posteriori studies: you run a direct numerical simulation, in this case done with a spectral method, and then you use wavelets to decompose the fields into the coherent and incoherent contributions. What you find is the following: you calculate the sub-grid-scale dissipation coming from all the modes, then from only the coherent modes, and from only the incoherent modes. What you notice is that the coherent modes are less than 5% of the modes — the remaining 95% are incoherent — and yet the sub-grid-scale dissipation from the coherent modes really mimics the full sub-grid-scale dissipation. What it means is that only the coherent modes are really responsible for the sub-grid-scale dissipation, at least in the context of homogeneous turbulence. And that actually explains why LES works: if all the dissipation came from the incoherent modes, it would be impossible to come up with a sub-grid-scale model that uses the large-scale velocity field to model the effect of the small scales. The fact that they are coherent is what makes it work, and the same is actually true for classical LES.

So what we are after is similar to LES: we resolve scales, but we resolve the energy-dominant structures and model the effect of the less dominant ones. And we have a number of different models; because there is no homogeneous direction anymore and we do not want to do averaging, all the models are either local or use something like Lagrangian path-line averaging. You can get the details in the papers we have published.

Now, if you compare with the original block diagram I showed for LES, things are slightly different. We have the adaptive wavelet collocation solver, the sub-grid-scale model, and the grid. But now the results of the simulation come back to the grid filter, because the grid filter depends on the flow realization. Remember, this is a threshold filter; it is no longer a linear filter, it is a nonlinear filter, and that is what is different. Of course, you still have the numerical mesh and the filter width, similar to what people use in LES, but the main difference is that now there is a feedback.

The next question we want to ask is the following. When you fix epsilon, you only truncate the velocity field. But instead we want to fix the fidelity — in other words, fix the turbulence resolution. Of course, the question is: what is turbulence resolution? There are multiple ways to define it. One way to think about it is the ratio of sub-grid-scale kinetic energy to the total turbulent kinetic energy. Another way is the ratio of sub-grid-scale dissipation to the total dissipation. At first glance it does not seem to make much difference, but which one is the better quantity for fidelity?
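To be concrete, the two candidate measures can be written as follows (my notation, not necessarily the speaker's exact definitions; k_res and k_sgs are the resolved and sub-grid-scale kinetic energies, Pi_sgs is the sub-grid-scale dissipation, and D_res is the resolved viscous dissipation — D is used here to avoid a clash with the threshold epsilon):

    \mathcal{F}_E = \frac{k_{sgs}}{k_{res} + k_{sgs}},
    \qquad
    \mathcal{F}_D = \frac{\Pi_{sgs}}{D_{res} + \Pi_{sgs}}.

Both measure what fraction of the turbulence is left to the model; the question is which one behaves sensibly as the Reynolds number grows.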
One way to decide is to think about scalability studies. Suppose you want to do simulations — say LES — and you want to keep the fidelity fixed while increasing the Reynolds number. What happens? If you do classical LES and keep the fidelity based on kinetic energy, then because the energy spectrum decays, as you keep increasing the Reynolds number the sub-grid-scale kinetic energy at some point becomes insignificant compared with the resolved kinetic energy. So at some point your fidelity no longer depends on the Reynolds number: you can increase the Reynolds number, but effectively finite-Reynolds-number LES and infinite-Reynolds-number LES are the same — you will not see the difference. That means that fidelity based on kinetic energy is not a good measure. A better measure is based on the dissipation. In that case, if you do classical LES, as you increase the Reynolds number the peak of the dissipation spectrum moves to higher wavenumbers, and so does the required cutoff. If you work out the scaling, it is basically almost the same as the DNS scaling, except that it is cheaper — parallel to it, but lower.

So we do it differently: we want to control the fidelity locally, and the way to do it is quite simple. You have the wavelet threshold, which controls the resolution. If the sub-grid-scale dissipation is locally too high, what does that mean? Your solution is too coarse there; you have to make it finer, and to do that you simply decrease epsilon. If the sub-grid-scale dissipation is too low, that means you are too fine; you need to coarsen the mesh, and to do that you simply increase epsilon. That is the whole idea of local control of fidelity. You do this more or less in a Lagrangian sense: you make epsilon a field that you have to track — you can either track it or simply use linear interpolation. But most importantly, this epsilon field also needs to be smooth; if it is not smooth, it can produce high frequencies which translate into the velocity field. We have multiple approaches. One approach is to solve an evolution equation for epsilon directly — this acts as your filter and can also be related to, for example, a path-tube-averaging kind of approach — and we solve these equations either by direct evolution or by backtracking along characteristics.

The idea then is, for example, to run a homogeneous turbulence case and, every few eddy-turnover times, change the fidelity goal: 20% fidelity, then 25, 30, then, say, jump to 40, and see whether we can control this fidelity in an adaptive fashion. Here are examples of our simulations using first-order backward interpolation, third-order interpolation, and different diffusion coefficients for the epsilon field; here I mark the times when we change the fidelity goal. As you can see, when you change the fidelity, epsilon adjusts very quickly — it either grows or drops. And with more diffusion it changes smoothly; with less diffusion it is more oscillatory. Now look at the average fidelity. Remember, we are controlling the local fidelity, but here I am plotting the global quantities — the average resolved and sub-grid-scale dissipations. If you control the dissipation locally, the global quantity follows as well, and it is actually a very good quantity: it tracks the goal quite well. But there are two different regimes, one for small diffusion and one for large, and if your diffusion is very large you are off. What it means is that if you smooth your epsilon too much, it does not have time to adapt, because you are smoothing out the details. You have to make the diffusion small — but at the same time not so small that it creates artificial scales.
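Here is a minimal sketch of the local feedback idea, stripped of the actual wavelet machinery. The field names, the relaxation rate, and the Gaussian smoothing used as a stand-in for the diffusion of epsilon are all illustrative assumptions, not the speaker's actual equations:

    import numpy as np
    from scipy.ndimage import gaussian_filter  # stand-in for smoothing/diffusing epsilon

    def update_threshold(eps, sgs_dissipation, resolved_dissipation,
                         goal=0.25, rate=0.1, smoothing=2.0):
        """One step of local fidelity control for the wavelet threshold field.

        goal : target fraction of the dissipation carried by the SGS model
        rate : how aggressively epsilon reacts to the local fidelity error
        """
        # Local turbulence resolution: fraction of dissipation left to the model.
        fidelity = sgs_dissipation / (sgs_dissipation + resolved_dissipation + 1e-30)

        # Too much modeled dissipation -> too coarse -> decrease epsilon (refine).
        # Too little                   -> too fine   -> increase epsilon (coarsen).
        eps_new = eps * (1.0 + rate * (goal - fidelity) / goal)

        # Epsilon must remain a smooth field, otherwise it injects spurious
        # high frequencies into the velocity field.
        return gaussian_filter(eps_new, sigma=smoothing)

    # Toy usage on a 64^3 box with made-up dissipation fields.
    shape = (64, 64, 64)
    eps = np.full(shape, 0.2)
    sgs = np.random.rand(*shape)
    res = np.random.rand(*shape)
    eps = update_threshold(eps, sgs, res, goal=0.3)

The sign convention is the one described above: where the modeled fraction of the dissipation exceeds the goal, epsilon decreases and the mesh refines; where it falls below, epsilon grows and the mesh coarsens. The smoothing plays the role of the diffusion term, with exactly the trade-off just discussed.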
Now, the next question is how to extend this approach to wall-bounded flows. The equation for the epsilon model is the same, but what we found when we started solving it is that the original idea of the feedback mechanism no longer works there. If you go close to the wall, the turbulent kinetic energy dies out toward the viscous sublayer, and as the turbulence becomes weaker, the prescribed fidelity level drives the resolution in the wrong direction — it is the opposite tendency to what you want. So you have to distinguish the regions of low turbulent kinetic energy from the regions where the stress and strain fields are low. That is what we did: we introduced a turbulent region; laminar regions, where the strain rates and the vorticity are very small; and a near-wall region, where the kinetic energy goes to zero. Then you have the corresponding forcing mechanisms: this is the turbulent forcing mechanism; this is the laminar one, which is more or less active in the laminar region; and here is what we call the wall-region forcing, which is quite active in the wall region but also away from it, in the low-kinetic-energy regions. And this is the combined forcing. Here is a slice of the mesh for two different goals of sub-grid-scale dissipation, 15% modeled and 5% modeled. As you can see, the less we need to model, the more we have to resolve — you get quite a dense mesh there.

These are examples where we applied this approach. Here the solution is compared with the experimental results of Brun — quite good agreement. Here you see the levels of the instantaneous threshold field: it is quite noisy and basically follows the structures of the solution. And here is a three-dimensional example: an iso-surface of epsilon, an iso-surface of vorticity, and the mesh — they all align with each other. This is an example of the flow evolution: this is the vorticity and this is the mesh colored by the level of resolution.

So what is different now, compared with the previous results, is that we add a feedback loop based on local measurements: we estimate quantities such as the turbulence resolution, and we change the local mesh spacing and the local sub-grid-scale model. So this is a different approach in which we not only change the resolution but can also change the model as well. And if you take the classical LES approach, it can actually be modified to do exactly the same thing. If you think about classical LES in the context of adaptive mesh refinement, and you provide a similar feedback loop based on, for example, turbulence resolution and you allow the mesh to change, then you can also change the local mesh resolution and it basically becomes the same approach. The only difference with wavelets is that they give a nice framework in which this is easy to do; but it can be done with any numerical code, provided it has adaptivity in it. If you are interested, we can discuss it later — you can incorporate these ideas into your own code.
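Transplanted to a conventional adaptive code, the same feedback might look roughly like this — a sketch only, with hypothetical names; the point is just that the refinement criterion is the local turbulence resolution rather than a purely numerical error estimate:

    import numpy as np

    def mark_cells(sgs_dissipation, resolved_dissipation, goal=0.3, band=0.1):
        """Per-cell refine/coarsen flags from the local turbulence resolution."""
        fidelity = sgs_dissipation / (sgs_dissipation + resolved_dissipation + 1e-30)
        refine = fidelity > goal * (1.0 + band)   # model doing too much -> refine
        coarsen = fidelity < goal * (1.0 - band)  # model doing too little -> coarsen
        return refine, coarsen

    # Schematically, the outer loop of an existing adaptive LES code would be:
    #   advance solution -> estimate local fidelity -> mark_cells ->
    #   adapt the mesh -> rescale the local SGS model on the new mesh -> repeat.

This is deliberately solver-agnostic; in the wavelet framework the "mark and adapt" step collapses into simply changing the local threshold, as in the previous sketch.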
So what you have seen is a very nice framework for going from direct numerical simulation to adaptive LES, simply by changing epsilon: the contribution of your model changes accordingly. In particular, you can do adaptive LES and simply make epsilon smaller, and the model automatically becomes less significant and basically turns itself off. So it is the same uniform framework, which is very nice. But now suppose you want to solve the same problem by taking, say, the Reynolds-averaged Navier–Stokes equations, solving them in a steady fashion, and doing it on the adaptive mesh. At this point we take the equations as given and simply apply the numerical method to them. The next step — we are not there yet, we are starting to experiment with it right now — is how to put it all together: provide feedback between the unsteady wavelet-based runs, the steady RANS runs, and adaptive LES, and do it in a systematic fashion.

Here is an example of the mean statistics: mean pressure, mean streamwise vorticity. This is the instantaneous computational mesh colored by vorticity, and this is the vorticity itself. And here I want to show a couple of things which, for some people, are not commonly known facts about the different models we use. What I am showing here is that when people run RANS models, more often than not they do, number one, steady-state simulations and, number two, two-dimensional simulations. If you open the literature you will even find lots of 3D RANS simulations, but they are done with a very coarse spanwise resolution — very few spanwise modes. Here we ran the k-omega and Spalart-Allmaras models on the same mesh — I will show you the mesh — and what you see depends on the model. With the Spalart-Allmaras model, which is very dissipative here — the eddy viscosity is very high — you see more or less two-dimensional structures. With the k-omega model the behavior is different: it is unstable to three-dimensional perturbations. Why? Because the eddy viscosity is much smaller. So the common assumption that two-dimensional RANS results are the same as three-dimensional results is wrong, number one; and number two, there is no way you can get the steady RANS equations to converge to the averaged solution. These are results where I show the two-dimensional versus three-dimensional effects; we also compare with the conventional URANS approach — notice the resolution, 150 by 150 by 16 modes in three dimensions, compared with what would be needed to capture the higher modes. So it means you have to be very careful when you do RANS, but it also gives us hope of merging these approaches: once you see unsteadiness in the RANS simulations, you can better come up with hybrid approaches and glue them together, because they share some intermittency. Here is an example of the sensitivity: the same case with Spalart-Allmaras and k-omega at the same resolution, and the results are also different. So there is a variety of models, some of them work well in the context of hybrid LES-RANS, and there is room to investigate.

So, to conclude: we have developed a nice hierarchical framework for wavelet-based turbulence modeling. It gives nice control of the fidelity of the simulations. As a future topic of research we need to tie it tightly to uncertainty quantification, but in this framework the numerics and the modeling are nicely integrated together.
And the whole idea, later on, is that we want to control not only the turbulence model but also the structure of the mesh itself. If you want more, you can read the Annual Review of Fluid Mechanics article or the more recent papers published on this subject. With that, I am open for questions.