Okay, so our first speaker in this session is Fernando Grinstein, on coarse-grained simulation of turbulent material mixing.

Okay, thank you. So basically, the thing I want to be focusing on is the consequences of under-resolution, which is the typical problem we have to deal with, for example, at the national labs, for very complex systems. So we're interested in under-resolved material mixing driven by under-resolved velocity fields, and in the more complicated problem where we try to compound under-resolution of the velocities with under-resolved initial conditions. So the plan I have here is to focus a little bit first on what the basic issues of models and observations are, and what the basic issues of coarse-grained simulation are, which is essentially LES and ILES. And I'm going to concentrate the discussion on two areas. The problem of scalar mixing in isotropic turbulence is a very basic problem in which to address the issue of under-resolved mixing driven by under-resolved velocities. And then I'm going to look at shock-driven turbulent mixing, and I'm going to try to look at the challenges in the modeling that come out as consequences of the basic difficulties we have to address. So one thing I'm going to be looking at is a challenge to the basic mix modeling that we normally have to deal with and try to keep as simple as possible. And the other problem is related to initial conditions, which is the issue of how initial conditions end up affecting late-time predictability for actual science problems.

So a quick reminder of my favorite lines here, which I think are very relevant. George Box has this line that says, essentially, all models are wrong, but some are useful. I would add that this is also true for experiments. And what I'm thinking of is that wrong means inadequate to address your particular question. The other line, from Albert Einstein, is that everything should be made as simple as possible, but not simpler. So this is the issue of what the minimum set of ingredients is that you need to build into your code and model to be successful in your prediction, and your prediction is essentially addressing the question of interest, which could be many things.

So in practice, you start with some theory, you do some modeling based on your theory, you make hypotheses, and you have laboratory and computational observations that are going to tell you whether your hypothesis is appropriate or not. And you use validation and uncertainty quantification metrics to decide when your model is successful. Now, the big challenge here is that modeling itself is limited: it's limited by your knowledge, and it's limited by your ability to put your knowledge into some mathematical form, for example. And when you're actually doing observations, observations are intimately linked to initial and boundary conditions, and you can never know enough of them. And even if you know enough, the other challenge is how to model them. So this comes back to the final line here, by Heisenberg, that what you actually observe is nature exposed to your method of questioning. And here, concretely, your method of questioning is your modeling and your observations, and both have inherent limitations.
So what we know from turbulence is, unfortunately, that we still don't have the universal, nice theory of turbulence that we'd like to rely on. But there's a bunch of things that we generally agree on empirically as known. Usually, if the Reynolds number is high enough, at some point you have an inertial range, and that usually happens above what's now called the mixing transition. This is also very closely connected to the fact that beyond a certain Reynolds number, which is very close to the mixing-transition Reynolds number, your dissipation becomes not only finite but also independent of Reynolds number. So these are the two major pieces that we tend to use when we decide how to model the flow in computer simulations.

And there are three approaches that we normally talk about. The point I want to try to make is that all simulation models reduce something, and essentially what they reduce is the range of scales that you can capture. If you're doing DNS, you're focusing on the very small-scale physics, for example, in a region. And the price you pay in doing that is that usually you take, for example, a cube or a nice periodic domain, and you focus there, and the assumption is that you can do that independently of the large-scale dynamics of the flow. If you're doing LES, the assumption is that you can focus on the large-scale behavior and model the small-scale part in some hopefully universal fashion. And if you're doing RANS, you typically think, OK, the only thing that matters, or rather what I'm going to focus on, is the statistical behavior of the flow, and, for example, predict statistical quantities that I can actually get in laboratory measurements. So there are assumptions in all the simulation models: both DNS and LES presume scale separation, and RANS typically presumes that you have developed turbulence. So in a nutshell, there is no best method; there is the method that is appropriate and adequate for what you're trying to do.

So what you do with LES, going back to what I said, is based on the big assumption that the small-scale dynamics is enslaved to that of the large scales. And there's a bunch of ways of doing this. Historically, we've had a lot of so-called classical methods, where you need to formulate a subgrid-scale model, and there are functional forms and structural forms; Sagaut's book is probably the best textbook on that. Implicit LES, which is the field I've been heavily involved in, tries to focus on the numerics and what the numerics can implicitly do, and again, there are a couple of books, one that came out last year that I'll mention again later. The idea is that you have appropriate physics built into the numerics. And there is a common framework to analyze both, because usually you have both of them mixed, and this is the modified equation analysis, which focuses on the equation satisfied by the actual computed solutions. So if you were to do modified equation analysis of a very simple system, and I'm focusing here on incompressible flow to keep it simple, just adding one coupling, which is a scalar equation, then you normally have to deal with terms for the closure of the filtered equations, and then you have numerical terms and commutation error terms that have to do with the filtering not commuting with derivatives, for example.
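To make that bookkeeping concrete, here is a minimal sketch of the kind of filtered equations and error terms being referred to, written for an incompressible velocity field with one passive scalar. The notation, the grouping of the numerical terms N and commutation terms C, and the filter convention are illustrative, not the exact form on the slide.

```latex
% Filtered incompressible momentum and passive-scalar equations (illustrative
% notation): overbars denote the filter, tau_ij and q_j are the subgrid
% stress and scalar flux, N are the numerical (truncation) contributions
% revealed by modified equation analysis, and C are the filter/derivative
% commutation terms.
\begin{aligned}
\partial_t \bar{u}_i + \partial_j\!\left(\bar{u}_i \bar{u}_j\right)
  &= -\partial_i \bar{p} + \nu\,\nabla^2 \bar{u}_i
     - \partial_j \tau_{ij} + N^{u}_i + C^{u}_i ,\\
\partial_t \bar{\phi} + \partial_j\!\left(\bar{u}_j \bar{\phi}\right)
  &= \kappa\,\nabla^2 \bar{\phi}
     - \partial_j q_j + N^{\phi} + C^{\phi} ,\\
\tau_{ij} &= \overline{u_i u_j} - \bar{u}_i \bar{u}_j ,
\qquad
q_j = \overline{u_j \phi} - \bar{u}_j \bar{\phi} .
\end{aligned}
```

At typical LES resolutions the N and C contributions can be comparable to the divergence of the subgrid terms, which is the point made next.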
And so in the best of worlds, you have incredibly good resolution, you don't worry about the numerical terms, and usually the commutation terms are lumped into the explicit model terms. In practice, what happens, and Ghosal has a very nice paper on that, and actually Hirt has an earlier paper, probably from the late 60s or 70s, in which he also addresses this explicitly, is that typically, for the resolutions you use with LES, these terms turn out to be comparable. So in practice this has motivated people like Jay Boris to say, well, then why don't we just do it with the numerics? And that has been the basis for ILES, which works very well if you have high Reynolds numbers, when almost all the physics of interest is convection driven. But in general, you need to have a mixture of explicit and implicit models, for example if you have to address diffusive mixing, if you have to address combustion, or if your Reynolds number is just low. So the important thing is that you need a framework, and the modified equation analysis gives you a framework to actually analyze this.

So let me start with the first subject I wanted to address, which is scalar mixing. Scalar mixing at high Reynolds numbers is basically driven by large-scale convection. That's the entrainment and stirring part of the mixing, which is many times called interpenetration, and at high Reynolds number this dominates the scalar mixing. If you have a low Reynolds number, you're going to have to worry also about diffusive mixing. So again, my focus comes from complex systems. Suppose you're doing urban transport, transport of a toxic substance of some sort in an urban environment. Typically, you have maybe 10 or 20 cells across the street; you're not going to be able to do a very well resolved simulation between buildings. The other extreme is: suppose this is a grid cell, and you want to make some model for what your volume fraction of some material is inside. There are many choices you can make. And the question is, what kind of integral consequences of high-Reynolds-number mixing can you predict in some sensible simulation framework? This is what the two subjects I had in my first slide are trying to focus on.

So first of all, I want to look at the problem of scalar mixing. The focus is a scalar with a fixed mean gradient. This is a problem that I think was introduced in the 90s by Steve Pope. Pullin had LES simulations with the stretched-vortex model around 2000, there's been DNS by Gotoh in 2012, and it's been a test problem that has been looked at by many people. And for us, we decided, OK, this might be a good problem to test ILES on and see how well or how badly it does. So the idea is that we did a compressible problem; we focused on Euler-based simulation. The momentum forcing has both a solenoidal and a dilatational component, and we can control that through the initial conditions. The scalar also has a forcing, for which we made a particular choice here that is very close to what Dale did in his own work earlier. And what we used here was a multidimensional 3D FCT. We used a forcing scheme by Petersen and Livescu, and we focused on low-wave-number forcing. And basically, there were two extremes: we chose a very low Mach number and a much higher Mach number, although still fairly low, 0.27 in terms of fluctuations. The effective Schmidt number was 0.7.
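For concreteness, this is the generic mean-gradient setup that the test problem is built around: a scalar fluctuation about an imposed uniform gradient, with the gradient term acting as the forcing. The actual forcing in the runs follows the particular choice just described, so this is only the textbook form, not the exact implementation.

```latex
% Generic mean-gradient scalar setup (textbook form, not the exact choice
% used in the runs): phi is the scalar fluctuation about an imposed uniform
% mean gradient G taken along y, and the -G u_y term acts as the forcing.
\begin{aligned}
\partial_t \phi + u_j \partial_j \phi &= \kappa\,\nabla^2 \phi - G\, u_y ,\\[4pt]
\tfrac{1}{2}\,\frac{d}{dt}\langle \phi^2 \rangle
  &= -\,G\,\langle u_y\, \phi \rangle - \chi ,
\qquad
\chi = \kappa\,\langle |\nabla \phi|^2 \rangle .
\end{aligned}
```

In the statistically stationary state, production balances the scalar dissipation and the scalar variance stays constant, which is the behavior referred to in a moment.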
So there was a student, Adam Wachtor, who was working on this topic for his PhD thesis. And we tried to do all the formal studies that we normally do in turbulence. We looked at spectra; we were able to see that the scalar spectra had this additional bump that the energy spectra didn't have, which is seen in DNS also. And we tried to measure an effective Reynolds number associated with the grid resolution that would allow us to characterize the inertial-range emulation that we were getting. So basically, we're calculating an effective viscosity based on grid-resolved quantities, velocities and derivatives. We can calculate a Taylor microscale also, in terms of the usual definition with variances and gradients, and we can then look at an effective Reynolds number that we can calculate that way. These are the resolutions, and these are the Reynolds numbers that we get. So we're just getting above the mixing transition for the two finest resolutions, and those are the two resolutions for which you can also see that there seems to be a nice suggestion of an inertial range in the energy spectra, with the bump that you're supposed to get.

So you can look at very detailed PDF analyses of various things. If you look at velocity and the scalar, you get the expected Gaussian behaviors. If you look at vorticity, and I'm comparing here with the DNS of Jimenez for isotropic turbulence, one of the things you can see in those PDFs is that they tend to start piling up on each other above the mixing transition, and this is what you start seeing here as a function of grid resolution. So this concept of an effective viscosity associated with the actual resolution is very meaningful in that respect. That involves transverse derivatives. If you look at longitudinal derivatives, they're known to have certain biases; it's known that the tails are not Gaussian. And all these properties are captured, both for the scalar and for the velocity longitudinal derivatives. I'm not showing any here, but there's also analysis you can do in terms of joint PDFs, and that also looks like the LES results.

So here I'm looking at the scalar variance, and these are the results that Dale published in 2000. One of the things that Dale showed at the time is that if he turned his scalar subgrid model off, he would get bad results. So one question was: is that what you get if you do ILES in general? Or is it that some numerical methods do it and some do not? That was one of the motivations for why we wanted to do this problem. And you can get the constancy of the scalar variance above the mixing transition, as the previous LES showed. There is some sensitivity to the forcing, to how you actually force the isotropic turbulence, and the choice of this non-dimensional constant probably makes a difference here too; it could be one half instead of one, but Dale's data originally had one, so we stuck to that.

So this was the really interesting part. We did the rest a little bit for completeness, and this is where we found interesting new results. Up to 2002, there was this DNS by Pope that showed that the ratio of the two Taylor microscales, from the scalar and from the velocity, seems to grow in Pope's DNS. Pullin was getting constancy after going well beyond the Reynolds numbers for which DNS was available. And the DNS by Yeung was able to show a consistent result at the time.
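As an illustration of how such grid-resolved diagnostics can be assembled, here is a minimal post-processing sketch. The stationary production-dissipation balance used to back out an effective viscosity, the function name, and the array conventions are assumptions for illustration, not the actual analysis pipeline used in the study; the ratio of the scalar and velocity Taylor microscales returned at the end is the quantity whose Reynolds-number trend is discussed next.

```python
# Illustrative sketch (not the actual pipeline): estimate an effective
# viscosity, the velocity and scalar Taylor microscales, and an effective
# Reynolds number from grid-resolved fields of a stationary forced run.
import numpy as np

def resolved_diagnostics(u, v, w, phi, dx, eps_injection):
    """u, v, w, phi: 3D resolved fields; dx: grid spacing;
    eps_injection: mean kinetic-energy injection rate, assumed to balance
    dissipation at stationarity (the key assumption of this sketch)."""
    u = u - u.mean(); v = v - v.mean(); w = w - w.mean()
    phi = phi - phi.mean()
    urms = np.sqrt((u**2 + v**2 + w**2).mean() / 3.0)

    # Resolved gradients (simple central differences; periodic wrap is
    # ignored at the boundaries for simplicity in this sketch)
    grads = [np.gradient(f, dx, edge_order=2) for f in (u, v, w)]

    # <2 Sij Sij> from the resolved strain rate
    S2 = 0.0
    for i in range(3):
        for j in range(3):
            Sij = 0.5 * (grads[i][j] + grads[j][i])
            S2 += 2.0 * (Sij**2).mean()

    # Effective viscosity from the stationary balance eps = nu_eff * <2 Sij Sij>
    nu_eff = eps_injection / S2

    # Taylor microscales from variances and longitudinal-gradient variances
    lam_u = np.sqrt((u**2).mean() / (grads[0][0]**2).mean())
    dphi_dx = np.gradient(phi, dx, edge_order=2)[0]
    lam_phi = np.sqrt((phi**2).mean() / (dphi_dx**2).mean())

    Re_lambda = urms * lam_u / nu_eff
    return nu_eff, lam_u, lam_phi, lam_phi / lam_u, Re_lambda
```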
So constancy of that ratio would actually be good for modelers, because the conventional wisdom is to assume that the Taylor microscales for the scalar and for the velocity behave similarly at high Reynolds number, so that you don't need to expect any variation there. But if you look at the data that came after that, there's a 2006 theoretical analysis by Ristorcelli and 2012 DNS by Gotoh et al., and we did our ILES around 2012. We found that our ILES was exactly on top of the DNS by Pope, to our surprise; we didn't expect we were going to do that well. And then we also noticed that the trend seems to be consistent with both the theory and the DNS. So there is this continued growth as a function of Reynolds number, and to the best of our knowledge people have not looked at up to what Reynolds number it persists. And it is an issue for modeling.

One suggestion we came up with, when we were trying to understand why the ILES was in some way capturing the expected results better than classical results, for example by Pullin, is that there might be realizability constraints built into ILES; we didn't do that by design. And it might be that what is needed is not just coming up with the right subgrid-scale model, but how the subgrid-scale model connects with the rest. This brings up the issue of the co-design between your theory and your computational paradigm, and that's the main thing I wanted to talk about here today. So when you're working with ILES, you're kind of mixing theoretical results that you're building into the models and into the numerics, and you may or may not get a better result that way. This seems to suggest that choosing a very good co-design, and for us very good means conservation equations, space averaging, time averaging, is crucial, and then having a non-oscillatory numerical scheme that will ensure that you have positivity, for example, and that you can also capture shocks. So it turns out that this is the kind of thing that seems to be behind the good capturing of the DNS results in the case of the ratio of Taylor microscales.

More fundamentally, the issue is, what are we doing with ILES? It's not just a no-model thing; again, there's a number of physical features that are built into the co-design there. One thing that we found working with Len Margolin and Bill Rider at Los Alamos is this issue that everything you observe involves finite scales. So this is probably what we should be dealing with: equations for physical observables. And we could historically have gone directly from kinetic theory to this. We have gone through a very strange path, historically, in understanding ILES, which is integrating kinetic theory, going to the continuum Navier-Stokes equations, and then, when we happened to be solving these equations with non-oscillatory methods, like we did here, finding that we can get these results. I don't have time to get into this, but this is the work that Margolin and Rider did in published papers. Basically, what they showed is that the so-called truncation error terms are not really truncation errors; they're corrections that appear when you look at the difference between your finite-scale equations and your continuum equations. They're the additional source terms that you need to get the equations for the finite-scale observables. That's basically the bottom line.
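To give the flavor of that finite-scale argument, here is a one-dimensional Burgers illustration of the idea, which is only a sketch of the general point; the actual finite-scale Navier-Stokes development is in the Margolin and Rider papers.

```latex
% Illustration on 1D Burgers: for a top-hat average of width L,
\bar{u}(x) = \frac{1}{L}\int_{x-L/2}^{x+L/2} u(\xi)\, d\xi ,
\qquad
\overline{u^2} = \bar{u}^2 + \frac{L^2}{12}\left(\partial_x \bar{u}\right)^2 + O(L^4),
% so the finite-scale (cell-averaged) observable obeys
\partial_t \bar{u} + \partial_x\!\left(\tfrac{1}{2}\bar{u}^2\right)
  = \nu\,\partial_x^2 \bar{u}
  - \frac{L^2}{24}\,\partial_x\!\left[\left(\partial_x \bar{u}\right)^2\right] + O(L^4).
```

The O(L^2) term is a genuine source term in the equation for the averaged observable rather than a numerical error, which is exactly the reinterpretation of the truncation terms just described.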
So I don't have time, like I said, to get into details, but there's a book that came out last year, and the first part of the book has the work on scalar mixing, which is the part I talked about earlier. There's a chapter by Ye Zhou talking about his minimum state for turbulence, which is very relevant here, a chapter by Len Margolin discussing these finite-scale Navier-Stokes equations, and Ristorcelli trying to see how you formalize the turbulence analysis to understand the goods and the bads of ILES and finite-scale Navier-Stokes. So I'll refer you to the book for more details and hope you will be motivated.

So the other problem is shock-driven turbulent mixing. We went through the issue that we have a finite-resolution grid, so we need a subgrid-scale model; this is essentially the same problem that people have in experiments, where they have instrumentation of finite size. And likewise you have initial conditions: you have a domain which is finite, so you need to introduce initial and boundary conditions. In other words, this is really a textbook issue. If you want a unique, well-posed solution, you have to provide all the additional information: the subgrid-scale information and the initial and boundary condition information. So when you're doing transitional flows, and the shock-driven turbulence problem is one of them, you have to deal with initial conditions. Typically you care about these problems for applications like ICF capsule implosions. And the idea is that you would like to be able to understand the effect of the initial conditions at the initial interfaces and how you can control the late-time effects, for example, in the case of the capsule, how well or how badly you were able to concentrate this small volume so that you're closer to having actual fusion there.

So there's a bunch of test problems that you do. We have been looking at the ICF capsule implosion problems in our context. These are simulations that were made mainly by Brian Haines, of Omega experiments, and this was very close to the state of the art a few years ago. Basically it's this game of starting with initial conditions and having the right numerics and the right co-designed computational paradigm, and I'll get back to this towards the end when I wrap up.

ILES has been very effective for these shock-driven turbulence problems, because essentially it gives a way of emulating both the turbulence and the shocks. And at Los Alamos we have been trying a number of different configurations leading up to the ICF capsule problem. I'm going to give you a quick summary of some of the issues. One issue we found is this: you start with a bunch of different initial conditions. There's an egg-crate perturbation which has a characteristic wavelength lambda-0, and on top of that you have the ability to add noise, which is there in the experiment, but you never know with full accuracy what it actually is, what is superimposed on this membrane pressed with a grid that imposes the egg-crate structure. So you can try different ranges of wavelengths, and Dale Pullin has himself done simulations like this. When we started ours, we were trying to compare with him first. So this is the Vetter-Sturtevant experiment at Caltech, and this is a case in which you are able to get the growth rates.
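To make the kind of initial interface just described concrete, here is an illustrative construction: an egg-crate mode of wavelength lambda-0 plus band-limited random noise standing in for the poorly characterized experimental roughness. The functional form, the band limits, and all parameters are assumptions for illustration, not the prescription actually used in the simulations or the experiments.

```python
# Illustrative construction (assumed form, not the actual prescription):
# egg-crate interface perturbation of wavelength lambda0 plus band-limited
# random noise whose long-wavelength content is the poorly known part.
import numpy as np

def interface_perturbation(nx, ny, Lx, Ly, lambda0, a0, a_noise,
                           kmin, kmax, seed=0):
    """Return eta(x, y): interface displacement about the nominal plane."""
    x = np.linspace(0.0, Lx, nx, endpoint=False)
    y = np.linspace(0.0, Ly, ny, endpoint=False)
    X, Y = np.meshgrid(x, y, indexing="ij")

    # Egg-crate mode with characteristic wavelength lambda0 and amplitude a0
    k0 = 2.0 * np.pi / lambda0
    eta = a0 * np.cos(k0 * X) * np.cos(k0 * Y)

    # Superimposed random noise restricted to the wavenumber band [kmin, kmax]
    rng = np.random.default_rng(seed)
    noise_hat = rng.standard_normal((nx, ny)) + 1j * rng.standard_normal((nx, ny))
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    kmag = np.sqrt(KX**2 + KY**2)
    noise_hat[(kmag < kmin) | (kmag > kmax)] = 0.0
    noise = np.fft.ifft2(noise_hat).real
    noise *= a_noise / max(noise.std(), 1e-30)   # set the noise rms amplitude

    return eta + noise
```

Changing kmin, the lower end of the noise band, is exactly the long-wavelength assumption whose late-time consequences are discussed next.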
The big challenge is what happens at late time, where, depending on the assumptions, especially the long-wavelength assumption for that random noise superimposed on the egg crate, you can get very different results. So one issue there is the predictability issue: how well can you expect to do in this process of building predictive shock-driven turbulence simulations?

So one of the things we found, looking at ways of prescribing initial conditions, was when we focused at one stage on the morphology. Eta-0 is basically a measure of the characteristics of the initial condition in terms of the amplitude and the characteristic wavelength; that's what the eta-0 is. We found that for small eta-0, which is when the initial interface is fairly thin, you get the traditional Richtmyer-Meshkov behavior, and when it is big, not surprisingly given what I'm going to say later, you can get very different behaviors; for example, your mix width increases or decreases depending on the actual eta-0, and we call this the bipolar behavior of Richtmyer-Meshkov. There are two regimes associated with this. In one case it's just ballistic growth, and it's what you would find as linear growth here. In the other case you have nonlinear mode coupling, and it's actually not Richtmyer-Meshkov, we would argue; it's a combination of a number of instabilities. So we tried this on the very complex problem of the shocked gas curtain, which is a problem we were doing in parallel at the time with the planar Richtmyer-Meshkov, and we found that, instead of the eta-0, we could define a structural characteristic parameter that characterizes the gas curtain cross section; in one limit we have the counterpart of the low eta-0 and we found linear growth, and otherwise we found nonlinear growth. So the initial interface morphology seems to be a big issue that we need to be able to characterize in our problems.

If you really want to look at it as a function of eta-0, you look at it in terms of the vortex-generation mechanisms that are allowed in each case: if the interface is very thin, you have your traditional Richtmyer-Meshkov; if it's thick, for example, you can see here that you could have a Kelvin-Helmholtz instability acting, because the transport velocity is going to be different within the different materials. So Richtmyer-Meshkov is just one of the baroclinic instabilities that are possible, and when you have a complex initial condition, what really matters is the initial balance between the various possible instabilities, depending on the initial conditions that you're imposing. And this is the typical thing that you have, for example, at reshock conditions. So the issue is how you characterize this in your simulations.

So we envision that we need some kind of useful, complete description of the initial conditions for predictability, and some kind of ensemble averaging over the relevant initial-condition variability might be necessary. Hybrid RANS/LES might have a role here, because RANS in itself already involves some kind of averaging. So there's a bunch of questions here, and possible directions we could look at. Machine learning to learn the initial conditions might be a way to go, and proper orthogonal decomposition has, perhaps surprisingly, come up as one of the possibilities (a small sketch of that idea follows below). So this was the outline. Let me just go through; I have three quick slides to wrap up.
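Since proper orthogonal decomposition was just mentioned as one possible way to characterize the relevant initial-condition variability, here is a minimal snapshot-POD sketch via the SVD. The snapshot layout, the function name, and the usage are illustrative assumptions about how one might apply it to an ensemble of interface perturbations, not a description of any existing workflow.

```python
# Minimal snapshot-POD sketch (illustrative): extract dominant modes of
# initial-interface variability from an ensemble of perturbation fields,
# e.g. to build reduced ensembles of initial conditions for ILES runs.
import numpy as np

def snapshot_pod(snapshots, n_modes):
    """snapshots: array of shape (n_samples, n_points), one flattened
    perturbation field per row. Returns the ensemble mean, the leading POD
    modes, their energies, and each snapshot's coefficients in those modes."""
    mean = snapshots.mean(axis=0)
    fluct = snapshots - mean                      # remove the ensemble mean
    # Thin SVD: rows of Vt are the spatial POD modes, s**2 their energies
    U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
    modes = Vt[:n_modes]
    energy = s[:n_modes] ** 2
    coeffs = fluct @ modes.T                      # projection coefficients
    return mean, modes, energy, coeffs

# Usage (illustrative): low-order surrogate of one ensemble member
# ensemble = np.stack([field.ravel() for field in measured_interfaces])
# mean, modes, energy, coeffs = snapshot_pod(ensemble, n_modes=5)
# surrogate = mean + coeffs[0] @ modes
```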
I like to say this every time I can: subgrid-scale modeling is primarily an empirical activity within a pragmatic practice. And the reason for this is that we don't have a nice universal theory of turbulence. So this is the problem we have, whether we like it or not. Most subgrid-scale models are designed to be dissipative, because this general assumption of coarse-grained simulation seems to be what we see most of the time in the experiments. In practice you have non-dissipative physics, backscatter for example. The hope is that for sufficiently high Reynolds number these are presumably less important. Again, for each question of interest you have to address that, and coarse-grained simulation might not work for that case. So I want to try to focus on that: it's not ILES versus LES, it's whether LES is the appropriate tool for what you're trying to do. If you have backscatter, LES as usually practiced might not be the right thing to do.

There are many uncharacterized variable-density processes that need to be looked at. Material interface dynamics: there's not a sound mathematical formalism here that we can rely on; VOF is probably the best we have right now, but it still has a lot of issues. The effect of material interface fluctuations: the issue here is characterizing them and then finding out how to model them in your actual simulation code. Interacting shocks, shocked material interfaces, and turbulence: there are a lot of open issues there. Various mechanisms of baroclinic production beyond the Richtmyer-Meshkov and Rayleigh-Taylor ones we typically look at. And then if you have exothermicity, due to either chemical or thermonuclear reactions, there are a lot of unknowns there.

So one interesting thing that I think we noted here is this issue of: do we have an appropriately co-designed theory and algorithm paradigm? I think we have normally not looked at these things very much; we kind of found it a little bit by accident, and I think it's a big issue. If you're doing DNS, you have all the resolution you want, and you're doing what DNS really should be, which is resolving all the necessary scales in space and time, which we never really do, then all these things are not issues; technically, whatever you're doing is independent of the numerical scheme, the grid, and so forth. But that's not what you do in practice. In practice you have complex systems and you have under-resolved conditions, so you need to address this. You're going to be able to do some of it with the algorithm and some of it with your actual numerical model, your simulation model. So modified equation analysis is really the right way to either assess or to reverse engineer the co-designed subgrid-scale physics. And in addition to the things we talked about, and I only touched on this a little bit, there's the issue of continuum versus discrete formulation, which is an issue in itself, and the issue that you have to mix explicit and implicit subgrid-scale models. There's a new field coming now, which is going to be data-science-driven research, for example machine learning; this is going to have to be incorporated. There are new architectures, new software paradigms, new ways of computing things. And I would say that, overall, you have to deal with the interaction of all these pieces. We have been looking at this part in this talk: you start from theory, you make some theoretical approximation, and it's going to interact with the numerics. We talked a little bit about this.
More generally, you have a software implementation and you have a hardware capability, and all these pieces are interconnected. And anybody who has been involved with the new computing architectures knows that most of the time you don't reproduce results exactly, just because you didn't get the same number of processors or the same distribution of processors. So we're going to have to find a way of factoring all this in. And then there's the issue of big data science, which is definitely going to be affecting the ultimate theoretical approximation. But that's another talk. So let me stop there.