Good afternoon everyone. My name is Mung Chiang. I'm the John A. Edwardson Dean of the College of Engineering. It's a distinct pleasure to introduce our distinguished speaker today. But first of all, to those who are watching, later in the archive or right now streaming, I want to highlight that we've got a truly standing-room-only crowd here in the theater in Wok. I can see there are about 50 people standing in the back. I hope they will not become turbulent over the course of this distinguished lecture. Now, this is a new, exciting program that the College of Engineering introduced last semester. Every academic year we'll invite six to eight distinguished lecturers to the college, across the different schools in the college. This is part of our effort as we aspire to attain the pinnacle of excellence at scale. And today we are so excited to welcome Dr. Jacqueline Chen, Jackie Chen, as the distinguished lecturer. Jackie is a distinguished member of the technical staff at Sandia National Labs. She has contributed broadly to research in direct numerical simulations, or DNS, of turbulent combustion, elucidating turbulence-chemistry interactions in turbulent flames and ignition processes. These interactions govern the overall combustion rate, emissions, the degree of local extinction, and ignition timing. She and her collaborators have discovered new physical insights related to turbulent premixed and stratified flame propagation, preferential diffusion, intrinsic flame instability, lifted flame stabilization in heated coflows, reactive scalar mixing, compression ignition, and flashback in boundary layers. These benchmark simulation data have also been used by the modeling community to validate turbulent combustion models. Now, we'll have to keep the introduction short so that we have more time to listen to Jackie Chen.
And I do want to highlight that within the past year alone, she was elected a member of the National Academy of Engineering and received the Combustion Institute's Bernard Lewis Gold Medal and the Society of Women Engineers Achievement Award in 2018. What a pleasure to welcome you here to Purdue Engineering. Jackie, thank you so much.

Can everybody hear me? I guess I'm wired. It's a pleasure to be here at Purdue. Thank you, Dean Chiang. And I'd like to also thank Bob Lucht for inviting me. He's been trying to get me to come out here to give a seminar for a while now. So it's a pleasure to be here. I'm going to talk today about direct numerical simulations of turbulent combustion in complex flows. For those of you who aren't familiar with DNS, as it's called, it's where we resolve all of the turbulent scales, from the largest length scales that correspond to a device, for example an internal combustion engine, all the way down to the Kolmogorov scales where heat and kinetic energy are dissipated. We solve those exactly, without any models, with very accurate numerical methods. But then we do have to incorporate chemical kinetics models, spray models, radiation models, et cetera, if we want to look at the coupling of turbulence in reacting flows. So what we've been chasing for the last 20 years, and I call us the tornado chasers in some sense, is high performance computing: to perform DNS in parameter ranges of interest, you have to have really large computers and a lot of computing cycles. Like tornado chasers who chase after the next storm, though without risking our lives, we are also chasing and watching very closely what the high performance computing industry and research communities do, because we depend upon their research and their advances in order for us to do our science. And so we've formed a very tight collaboration with high performance computing folks.
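To make that scale-resolution requirement concrete, here is a rough back-of-the-envelope sketch (my illustration, with assumed values for viscosity, dissipation rate, and domain size, not numbers from the talk) of how the Kolmogorov scale sets the DNS grid count:

```python
# Back-of-the-envelope DNS resolution estimate (illustrative values only):
# the Kolmogorov scale eta = (nu^3 / eps)^(1/4) sets the smallest eddies,
# and a DNS grid must place points roughly every eta across the domain.

def kolmogorov_scale(nu, eps):
    """Kolmogorov length scale eta = (nu^3/eps)**0.25 in meters."""
    return (nu**3 / eps) ** 0.25

def dns_grid_points(L, nu, eps):
    """Rough 3D grid-point count if spacing ~ eta over a domain of size L."""
    eta = kolmogorov_scale(nu, eps)
    n_per_dir = L / eta          # points needed per direction
    return n_per_dir ** 3        # total points in three dimensions

# Assumed example: air-like viscosity, moderate dissipation, 1 cm domain.
eta = kolmogorov_scale(nu=1.5e-5, eps=1.0)
n = dns_grid_points(L=0.01, nu=1.5e-5, eps=1.0)
```

Even this mild case needs tens of thousands of points; device-scale Reynolds numbers push the count into the billions, which is why DNS tracks the supercomputing frontier so closely.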
And the next frontier: right now we're at petascale, 10^15 floating point operations per second. And in the very near future, maybe in less than five years, we're pushing towards exascale, which is another thousandfold increase in computing power. And so what's driving us is to use these machines to gain fundamental insights into multi-scale, multi-physics problems associated with turbulent flames and ignition, and to use that fundamental knowledge not only for gleaning scientific understanding, but also to generate high-fidelity DNS database benchmarks for model development, both for RANS as well as for large eddy simulation. And so the regimes that we're now pushing towards are higher-Reynolds-number simulations, more representative of conditions in actual devices at high pressures. Gas turbines and IC engines operate at high pressures, 20 to 100 atmospheres, with large, intense turbulent velocity fluctuations, and in some cases at very high speeds with compressibility effects. A lot of what we know in our community is inherited from non-reacting flows, but when you add combustion and reacting flows, you have variable density, heat release, and dilatational effects that may render some of those closures invalid. And so we want to explore, using these high performance tools, what closures might work in these more complex reactive flow settings. We're also interested in what happens if we inject energy at small scales. We usually think of energy coming in at large scales, driven by large-scale phenomena like mean shear or compression or something like that, and then energy just goes down the turbulence cascade in the forward direction. But what if you generate sources of energy in the dissipation range, in the small scales? Does that energy move back up into larger scales, or does it stay localized and then just dissipate? And so there are some fundamental questions with multi-scale energy transfer processes.
And then, the direction I'll say a little bit about, where we're moving in the next couple of years, is doing these high-fidelity DNS but also considering hybrid DNS and LES methodologies in combination. So we're trying to throw everything that we know how to do into the picture, including adaptive mesh refinement and hybrid schemes, so that we can get into realistic device-level regimes. And we also are paying close attention to what our chemist friends are doing in terms of providing adequate chemical fidelity, enough to differentiate fuel effects when there are very strong turbulence-chemistry interactions. So the trend these days with gas turbines and IC engines is to burn overall more fuel lean, more dilute, mixing with EGR, exhaust gas recirculation, and so on, in order to still get high efficiencies but reduce the emissions of soot as well as nitric oxides and CO. And burning at those ragged limits, or leaner limits, presents challenges and greater coupling between finite-rate chemistry and turbulent mixing. So a lot of the research in my group has been motivated by autoignition processes in engines, both engines for power generation and for the airplanes that fly, as well as internal combustion engines. And for example, if you look at this local equivalence ratio plot, where fuel-rich conditions are up here at the top, stoichiometric is this horizontal dashed line, and lean is below that, plotted versus temperature, you see that a lot of the gasoline engines, or spark-ignited engines, that we drive are burning at more or less stoichiometric conditions, and they sit in this NOx island for emissions. If we look at diesel engines and the trucks on the road, they end up straddling richer conditions that also introduce soot particulates, which are harmful to the environment and human health.
And so the trend in the IC engine world has been to go to higher compression ratios and greater efficiencies, to use compression ignition technologies, and to burn at lower temperatures and leaner conditions, what's known as low-temperature combustion, or LTC, types of conditions. And similarly, in the gas turbine world, people are looking at introducing more hydrogen and syngas types of fuels for carbon capture and storage types of technologies. And there are companies like Alstom Power and Ansaldo Energia who are looking at staged, sequential combustors, where the products of combustion from the first stage are vitiated at higher temperatures, and you would inject additional fuel like hydrogen into the second stage, which might burn, because of the hot vitiated gases, through different modes of combustion including autoignition, perhaps in combination with premixed flames. And so we'd like to dive into some of the details of these types of technologies. As I said already, we're motivated by using DNS to understand mixed combustion regimes where we're overall fuel lean under partially premixed conditions. We're interested in exploring multi-stage autoignition types of problems, where you have low-temperature ignition followed by high-temperature ignition, and looking at the sensitivity of fuel chemistry and its coupling with turbulent mixing. So over the years, I've developed a DNS code in my group called S3D. It's a compressible reactive flow solver. It solves the compressible reacting Navier-Stokes, total energy, and species continuity equations, and uses high-order finite difference methods, eighth order in space. Do you have a different one? This one died. I think the battery died. Thanks. And it has detailed reaction kinetics treatment, with models for detailed, skeletal, or reduced chemical mechanisms, and molecular transport models.
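As a small illustration of the kind of spatial discretization just mentioned, the standard interior stencil for an eighth-order central first derivative looks like the following (a generic textbook stencil, not S3D's actual implementation):

```python
import math

# Standard eighth-order central-difference coefficients for the first
# derivative, at offsets 1..4 from the evaluation point (antisymmetric).
C = (4/5, -1/5, 4/105, -1/280)

def ddx_8th(f, x, h):
    """Eighth-order central approximation of f'(x) with grid spacing h."""
    return sum(c * (f(x + k*h) - f(x - k*h)) for k, c in enumerate(C, 1)) / h

# On a smooth function the truncation error falls off like h^8:
err = abs(ddx_8th(math.sin, 1.0, 0.1) - math.cos(1.0))
```

Even with a coarse spacing of 0.1, the error on a smooth function is already below 1e-9, which is why high-order schemes let DNS codes resolve flame structure with fewer points per wavelength.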
We also have a Lagrangian particle tracking method embedded in the software, to either allow us to track tracer particles or to treat polydisperse dilute sprays, as well as soot. More recently, I've been working with computer science groups to allow us to easily incorporate in situ analytics as well as visualization methods while the code's running. And so we can start to look at chemical analyses on the fly, or various machine learning methods coupled with our calculations as they're running, or volume viz or particle viz on the fly. This code has been refactored numerous times, from MPI-only code to MPI plus X, where X can be OpenMP, or OpenACC if I want to run it with pragmas and directives on graphics processing units, or, in more research-oriented environments, dynamic task-based runtime systems that orchestrate the mapping of the code onto the compute resources on heterogeneous machines with GPUs and CPUs. We got into DNS early on, in the 80s, and what I'd like to just show is that the computational intensity of DNS has kept up with Moore's law. So we've had exponential growth in our problem sizes from about 1995 through the present. This is on a semi-log plot. In the early days, we were only able to perform DNS either at very low Reynolds numbers with one single global step, or in two dimensions, where you can't represent turbulence but maybe can afford a little bit more fidelity in the chemistry. And so it's only been recently, with the advent of supercomputing at the terascale and petascale, that we've been able to bring real turbulence together with detailed chemistry. And so we're at the point now where we can, at least for small flames, smallish flames, do direct comparisons with laboratory experiments, for example high-Karlovitz turbulent premixed flames, or multi-injection diesel types of problems.
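Conceptually, the Lagrangian tracer tracking mentioned above just integrates particle positions through the local flow velocity. A minimal sketch (my toy example using an analytic solid-body-rotation field, not S3D's interpolation machinery) might look like:

```python
import math

# Toy Lagrangian tracer advection: a second-order Runge-Kutta (midpoint)
# step through a prescribed velocity field. Solid-body rotation is used
# here because particles should stay on circles about the origin, which
# gives an easy accuracy check.

def velocity(x, y, omega=1.0):
    """Solid-body rotation: u = -omega*y, v = omega*x."""
    return (-omega * y, omega * x)

def rk2_step(x, y, dt):
    """Midpoint rule: evaluate the velocity at a half-step position."""
    u1, v1 = velocity(x, y)
    u2, v2 = velocity(x + 0.5 * dt * u1, y + 0.5 * dt * v1)
    return x + dt * u2, y + dt * v2

# March one particle through a full revolution in 1000 steps.
x, y = 1.0, 0.0
for _ in range(1000):
    x, y = rk2_step(x, y, 2 * math.pi / 1000)
radius = math.hypot(x, y)   # should remain very close to 1
```

In a real DNS the velocity at a particle position comes from interpolating the Eulerian grid solution, but the integration idea is the same.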
So what I'd like to do in my remaining time is to give you a taste of the kinds of things we've been studying with this tool. So I'll give you a couple of vignettes, and then I'd like to discuss the path to moving to exascale, given the changes and advances in hardware architecture, software stack, and computing capability. So the first vignette I'd like to describe is our work looking at turbulent autoignition of a fuel like n-dodecane, which is a diesel surrogate fuel, at 25 bar. And so this is at relatively high pressure. And what's visualized here is one of the important low-temperature intermediate species, called ketohydroperoxide, or KET for short. So the interest here is to really try to understand the coupling between low-temperature combustion and turbulent mixing. And the idea is to burn, as I said earlier, at lower temperatures to reduce the emissions while keeping the efficiency high. And in these LTC conditions, combustion occurs in both premixed as well as spontaneous autoignition modes, but it also occurs kind of sequentially, through low-temperature ignition followed by intermediate-stage ignition all the way to hot ignition. And so there's very, very strong sensitivity to mixing and transport effects coupled with low-temperature chemistry. So what's known about ignition has largely been known about high-temperature ignition. And if you look at the plot on the right here, what we see is the ignition delay time on a semi-log plot versus mixture fraction. This is the degree of mixing between fuel and oxidizer. And you see that there's a minimum point for various mechanisms. And so these homogeneous reactor calculations, that is, there's no transport here at all, it's 0D, show that there is a minimum ignition delay time, and that occurs at a preferred mixture fraction in each of these instances.
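The existence of that minimum can be illustrated with a deliberately crude one-step model (entirely my own toy construction, not the mechanisms on the slide): fuel loading grows with mixture fraction while the mixing-line temperature falls, and a Frank-Kamenetskii-style ignition delay estimate captures the trade-off, producing a "most reactive" mixture fraction.

```python
import math

# Toy homogeneous-reactor ignition delay versus mixture fraction Z.
# All constants below are assumed for illustration.
A, Ta = 6.0e7, 1.0e4            # pre-exponential [1/s], activation temp [K]
T_ox, T_fuel = 1000.0, 400.0    # hot oxidizer stream, cold fuel stream

def mixing_line_T(Z):
    """Frozen mixing line between oxidizer (Z=0) and fuel (Z=1)."""
    return (1.0 - Z) * T_ox + Z * T_fuel

def ignition_delay(Z):
    """One-step Frank-Kamenetskii estimate: tau ~ (T0^2/Ta)/(A*Z*exp(-Ta/T0)).
    More fuel (larger Z) speeds ignition, but a colder mixing line slows it."""
    T0 = mixing_line_T(Z)
    return (T0**2 / Ta) / (A * Z * math.exp(-Ta / T0))

Zs = [i / 1000 for i in range(10, 990)]
Z_mr = min(Zs, key=ignition_delay)   # the "most reactive" mixture fraction
```

The minimum sits at an intermediate Z: neither the fuel-starved lean side nor the cold rich side ignites fastest, which is the qualitative content of the homogeneous-reactor curves described above.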
We also know from strained 1D flamelet simulations that the ignition delay time, in this bottom plot, increases with the mixing rate until a critical value is reached, at which point it's going to take forever to ignite. And so mixing, or strain rate, impedes the progress of ignition. If left alone, the thought is that the homogeneous delay is the fastest you'll ever ignite a mixture in the absence of any kind of transport. And so the question then becomes: which of these high-temperature ignition features carries over when you have low-temperature ignition? And so there's been some experimentation done in engines. These are some experiments done by Scott Skeen at the Combustion Research Facility in an optical engine. And the kinds of measurements they can make as a function of increasing time, or crank angle degree (these are after start of injection, from 140 microseconds all the way to about half a millisecond), are things like formaldehyde imaging and time-resolved schlieren imaging, which measures the density gradient. And so you can see that, okay, 190 microseconds after the fuel is injected into a diesel engine, you start to see formaldehyde, which is a nice marker of low-temperature ignition, happening on the sides, near the head of the jet. And then that grows into the head of the jet, which starts to ignite. And then the entire volume inside of the leading edge of this jet ignites. I should say the fuel is injected from the left here into a heated ambient, typically about 900 Kelvin or so, at very high pressures, 40 to 60 bar.
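The competition between self-heating and mixing losses that underlies that critical mixing rate can be caricatured with a toy 0D model (my own construction; the statement above refers to full strained flamelet calculations): a one-step heat-release term fighting a linear relaxation back toward the unburned temperature. Below a critical mixing rate the mixture runs away; above it, the losses pin the temperature at a low steady state and ignition never happens.

```python
import math

def ignition_delay_with_mixing(chi, A=2.0e6, Ta=1.0e4, T0=1000.0,
                               dT_ign=400.0, dt=1e-4, t_max=10.0):
    """Explicit-Euler integration of dT/dt = A*exp(-Ta/T) - chi*(T - T0).
    Returns the time at which T first rises by dT_ign, or None if the
    mixing loss term (rate chi) wins and the mixture never ignites.
    All parameter values are assumed, chosen only to make the point."""
    T, t = T0, 0.0
    while t < t_max:
        T += dt * (A * math.exp(-Ta / T) - chi * (T - T0))
        t += dt
        if T >= T0 + dT_ign:
            return t
    return None

tau_weak = ignition_delay_with_mixing(chi=0.5)     # weak mixing: ignites
tau_strong = ignition_delay_with_mixing(chi=10.0)  # strong mixing: quenched
```

Weak mixing only lengthens the delay; past the critical rate, the loss term balances the chemistry at a slightly elevated temperature and the runaway never occurs, mirroring the flamelet result quoted above.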
And if we then look at schlieren images, you see a similar pattern. If you focus your attention at point A, at one time and then a little bit later, point A, which is like an eddy that's been ejected sideways outside of the leading edge of this jet, starts to disappear and vanish, because it's undergone low-temperature ignition, which generates a tiny bit of heat release and raises the temperature maybe 100 to 200 degrees, which causes a decrease in the schlieren signal. And likewise you see point B, present here, hasn't ignited yet; low-temperature ignition has happened here. So this is a consistent picture with what the formaldehyde images are also showing: ignition seems to happen in these less fuel-rich, or more lean, mixtures first, and then it propagates into the center, where now, 100 microseconds later, you see that at the leading edge of the jet the schlieren signal is almost completely gone, where you have volumetric ignition occurring. And so that's about all you can surmise from these types of measurements. So if we want to understand what's really going on, the only way at present is to do this computationally and drill down into the details. And so we set up a DNS configuration, which is a temporally evolving ignited jet at 25 bar. This is reduced oxygen, I'm sorry, reduced air: it's 15% oxygen, 85% nitrogen in the ambient, heated to 960 Kelvin. And then the fuel stream is n-dodecane at an equivalence ratio of 0.3. People have measured a slightly richer premixture in the engine, but after the fuel is evaporated and before it's ignited, that's about the condition, at 450 Kelvin. And we've used a chemical kinetics mechanism for n-dodecane involving 35 species, reduced by Tianfeng Lu, that includes both the high-temperature oxidation as well as the low-temperature oxidation.
And the turbulent Reynolds numbers that we can achieve at present are about a thousand, with a jet Reynolds number of about 7,000. Because of the high pressure and the resolution we needed to resolve these ignition fronts, this computation required three-micron grid spacing. And so that's a really tiny mesh in order to resolve the internal structures of these premixed flames and of the spontaneous ignition fronts. And so it required three billion mesh points and 40 primitive variables, including the 35 species that we transported plus density, momenta, and total energy. And to give you an idea, this is quite a small domain. It was only several millimeters on each side, and we were able to go about one millisecond in physical time, taking very, very small time steps, to observe the dynamics of ignition through to the full burning happening in that domain. So just to give you a sense of what to expect, first we look at multi-stage ignition when it's 0D, no transport, and so you plot the ignition delay versus mixture fraction. And when you have two-stage ignition, you have the low-temperature ignition stage shown in the red dashed line and the high-temperature ignition shown in the solid black line. The stoichiometric mixture fraction is sitting here at kind of lean conditions, at about 0.05. And you see that, consistent with what Mastorakos had found for high-temperature ignition, you have a minimum ignition delay time at a preferred mixture fraction: slightly rich of stoichiometric for low-temperature ignition and much richer, at 0.12, for the high-temperature ignition. So they're kind of separated in mixture fraction space. We also find that there's about a three- to four-fold difference in ignition delay times between the low- and high-temperature ignition.
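For a sense of what the problem size quoted above implies, here is the quick arithmetic (my own back-of-the-envelope on the quoted figures): three billion grid points times 40 double-precision variables is already about a terabyte per solution snapshot.

```python
# Arithmetic behind the quoted DNS problem size: 3 billion grid points
# times 40 primitive variables in double precision per snapshot.
grid_points = 3e9
variables = 40            # 35 species + density + 3 momenta + total energy
bytes_per_value = 8       # double precision (64-bit)

snapshot_bytes = grid_points * variables * bytes_per_value
snapshot_tb = snapshot_bytes / 1e12    # size of one snapshot in terabytes
```

One time snapshot alone is roughly 0.96 TB, which is part of why in situ analysis, rather than writing everything to disk, becomes attractive at this scale.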
And so you might expect, because of this large separation in time scales, that the low-temperature ignition might occur first and then, sequentially, the high-temperature ignition. However, when you get to richer mixtures, that gap shrinks, and so there may be some significant overlap in the ignition processes for low- and high-temperature ignition for rich mixtures. Now, just to give you a sense of the dynamics of what's going on, I'm going to show you a video of a low-temperature intermediate species marker called ketohydroperoxide on the left. And on the right, I'll show you what hydrogen peroxide looks like. These are things that you'd like to measure in the laboratory if you're going to demarcate low-temperature and intermediate-temperature ignition processes. Then on top of that, I'm going to show you on the left the hot ignition, when the hot ignition kernels form, and these will be demarcated by a temperature threshold of 1150 Kelvin and shown in red. The KET, I think, is shown in blue. I'll go ahead and play it so you can see what happens. So this is in a shear layer. There's a slab of this premixture in the middle and then the ambient on either side of that. And so you can see the KET, the low-temperature marker, forming on the left, and now all of a sudden you see these little red spots, which are the hot ignition kernels forming sequentially. And likewise on the right, you see the formation and then the disappearance of the KET, and also the disappearance and consumption of the H2O2 as it thermally dissociates to form OH when you undergo hot ignition. Okay, so then eventually the ignition kernels propagate out towards the stoichiometric mixture and you end up with edge flames on the edges of the shear layer.
So if we look at this phenomenon of sequential ignition in its conditional statistics, plotted as a function of mixture fraction, we see the temperature evolution in the top row, the ketohydroperoxide evolution in the bottom row, and the conditional means and standard deviations of H2O2 in the middle row. Initially you just have a frozen mixing line for temperature; then you see the low-temperature ignition occur near the preferred mixture fraction, which is pretty lean for low-temperature ignition, and then it propagates into richer mixtures. And then the richer mixtures autoignite first, here at slightly richer conditions than predicted from the homogeneous scenario, which I'll explain in a minute, and then eventually it marches back to stoichiometric conditions, where you have a high-temperature flame. And likewise for ketohydroperoxide: low-temperature chemistry builds up here under very lean conditions. It forms cool flames. The cool flames propagate towards richer mixtures and are consumed when the mixture undergoes low-temperature ignition and moves into intermediate-ignition types of chemistry. The standard deviations are indicated by the vertical bars; the red solid lines are the conditional means. So what's happening here, and why is this quite different from the homogeneous reactor simulations that engine designers have relied on for a long time to make their predictions? What we're finding is that if you plot the ignition delay time versus mixture fraction, as I showed you earlier for homogeneous systems with no transport, we have the high-temperature ignition denoted by the black solid line and the low-temperature ignition delay curve denoted by the red dashed line. But what you see here is that the turbulent parcels of fluid undergoing low-temperature ignition are shown by these blue isocontour values, where the parcels are between 5 and 50% ignited.
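For readers unfamiliar with conditional statistics, the procedure behind these plots can be sketched as binning samples on mixture fraction and averaging within each bin (a generic illustration on synthetic data, not the DNS data itself):

```python
import math
import random
from collections import defaultdict

def conditional_stats(z_samples, q_samples, nbins=10):
    """Mean and standard deviation of q conditioned on mixture fraction z
    in [0,1), returned as {bin_index: (mean, std)}."""
    bins = defaultdict(list)
    for z, q in zip(z_samples, q_samples):
        bins[min(int(z * nbins), nbins - 1)].append(q)
    stats = {}
    for b, vals in bins.items():
        m = sum(vals) / len(vals)
        var = sum((v - m) ** 2 for v in vals) / len(vals)
        stats[b] = (m, math.sqrt(var))
    return stats

# Synthetic "temperature" data: linear in z plus scatter, seeded for
# reproducibility. Real conditional statistics would use DNS fields.
random.seed(0)
zs = [random.random() for _ in range(10000)]
qs = [1000.0 + 500.0 * z + random.gauss(0.0, 10.0) for z in zs]
stats = conditional_stats(zs, qs)
```

The conditional mean in each bin recovers the underlying trend (about 1025 in the leanest bin, about 1475 in the richest here), while the per-bin standard deviation is what the vertical bars on the slides represent.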
So if you follow this, initially you see that indeed low-temperature ignition happens at the preferred mixture fraction, consistent with what Mastorakos found for hot ignition, and it's delayed a little bit because of mixing effects. I told you earlier that strain rate impedes ignition. But then what you see in time is that the low-temperature ignition, as it moves out to richer mixtures, as we saw in the conditional statistics, ends up actually igniting at a shorter time than the corresponding homogeneous scenario. So that kind of leads to a question in our heads: how did that happen? And then, more interesting, is what happens with the hot fluid. If you look at the fluid parcels that undergo hot ignition, shown by the red isocontours, between 1% and 50% ignited, we see that the first hot ignition kernel happens out here at rich conditions, quite a bit richer than what homogeneous high-temperature ignition would predict, and at earlier times. So at fuel-rich conditions and at smaller times. These are both kind of contradictory to what our knowledge of ignition is. So as I said, the low-temperature ignition happens at the preferred mixture fraction. The ignition wave propagates into richer mixtures. High-temperature ignition happens richer and at smaller times. And so we were very curious as to what was happening. And what we found is that if you analyze this data in detail, you see the propagation mechanism: once you have low-temperature ignition, it propagates into richer mixtures either as a premixed flame, where we find a nice balance between reaction and diffusion for a species like KET or H2O2, or as just a spontaneous ignition front, with lots of reaction and very little diffusion. So it basically is propagating down an ignition delay gradient in temperature or composition.
And so we analyzed this. What we did was take an H2O2 isocontour and, using a marching cubes segmentation method, figure out the local ignition front normals and, going along each normal of that ignition front, identify whether it was a flame or an autoignition type of front propagation. And if we plot the percentage of fronts that are propagating as a flame, we find that under very lean conditions and under very rich conditions it's propagating almost exclusively as a flame, but for mixture fractions in between, you have both autoignition as well as low-temperature cool flame propagation responsible for the propagation mechanism. So these are just diffusively supported cool flames. And so what's happening is that these cool flames that first ignite propagate towards richer mixtures, in many instances much faster than those rich mixtures could ignite on their own. So they're basically delivering the goods, the enthalpy and the low-temperature intermediate radicals, that bootstrap those harder-to-ignite rich conditions, and that leads to much faster ignition. And so the experimental engine guys have kind of observed this in the laboratory, but were not able to explain the mechanism, which we have uncovered from these kinds of calculations. And so I won't show more of these details, but we can start to look at issues like how well the KET or the H2O2 low-temperature markers are correlated with the turbulent mixing rate, the scalar dissipation rate, in mixture fraction space. And the essence of this, without going into the statistical details, is that ignition kernels like to form initially where they're sheltered from losses. And so they like to form where the scalar dissipation rate and the mixing rates are low. But once they're...
And for those regions that have very high mixing rates, they're not able to ignite, and what happens is that whatever buildup of radicals or enthalpy there is gets removed from those sheltered environments and, through turbulent diffusion, brought into much richer mixtures. And so the very rich mixtures and the very lean mixtures that have very long ignition delay times, that would take a long time to self-ignite, depend on turbulent diffusion and laminar cool flame propagation to bring the heat and the radicals to those locations. So then, I just want to say that if we sum up the different combustion modes that happen during the course of this autoignition, we find that low-temperature induction, that is, the buildup of the low-temperature radical soup, and the transition of low-temperature ignition to high-temperature induction processes, these blue and green and yellow regions, contribute about 30% of the overall heat release rate. And the rest of the heat release rate is predominantly due to premixed flame propagation once the high-temperature kernels have ignited, with a very, very small percentage due to the high-temperature diffusion flame shown in purple. But the low-temperature ignition has contributed a non-negligible fraction of the overall heat release rate and the pressure rise rate. And so it's worth getting the physics right and the models right for the low-temperature region. We also see, as kind of hinted at in the ignition delay curves, that there's largely a separation, with the low-temperature ignition occurring sequentially before the high-temperature ignition processes occur. They don't really overlap much in time. So from this vignette, we learned that low-temperature reactions create the conditions for hot ignition to occur faster than under homogeneous conditions.
These low-temperature ignition fronts propagate as a diffusively supported cool flame through much of the region, and a high scalar dissipation rate delays the low-temperature ignition; however, it leads to faster ignition under very rich mixture conditions. And the high-temperature ignition starts at conditions richer than homogeneous conditions and eventually forms edge flames at stoichiometric conditions. And so what we're doing now is to look at multi-injection processes. So we inject a pilot fuel followed by a primary injection, with a dwell time in between, and look at the effect of ambient temperature and the presence of these low-temperature ignition intermediates generated by the pilot, and what their effect is on the primary injection's autoignition development. We're also layering in on top of that the spray aspects, to look at the enthalpy and momentum exchange between the phases, and we're starting to look at adding soot into this problem as well. So let's see. What time is it? Okay, so I want to skip the next part. Well, why don't I just quickly say that this next part is motivated by how to stabilize a flame, for example, in a scramjet. And so we have an ongoing project together with Harsha Chelliah, sponsored by the Air Force and NSF, where we're doing the DNS and he's doing the experiments in the wind tunnel. And so we really want to understand how to stabilize a flame under these high-speed, compressibility conditions. So what we've done to our DNS code is to extend it to be multi-block, so basically think of it as Lego blocks that you can piece together to get some geometry, very simple geometry, into it. So we can do flows that look like cavities, or, using immersed boundary methods, include a close-up linear ramp cavity on the right. And then what we have done is to generate separate turbulent inflow feed data by running a turbulent periodic channel flow. And the conditions here we're looking at: it's a scaled-down cavity that Harsha's built.
And we're looking at ethylene-air at lean equivalence ratios of about 0.4, flow velocities of 200 meters a second, RMS velocities about 10% of the bulk flow velocity, and preheated conditions of 1125 Kelvin. And so this just gives you a snapshot of what the instantaneous turbulence field looks like, through the enstrophy on the left and the heat release rate on the right. It's not very clear, but there is a rectangular cavity in there; the outline of it is not projecting very well. If we take a slice through the center plane, we see the enstrophy and we also see the boundaries of the flame, as shown by progress variable isocontours in black. So you can see the flame is this corrugated black line, two lines of progress variable between 0.05 and 0.95, and enstrophy coming from the boundary layer. Here's the step. I should say this is a case that is a little simpler than the cavity. So this is just a backward-facing step, but it still looks at the dynamics of how you anchor that flame. The interesting thing that caught our eye was the turbulence: we always think of turbulence happening on the reactant side. So the ethylene-air mixture is flowing in here from left to right, and products are on the other side of the flame. And the odd thing that we saw here was that the enstrophy switches, or moves, from being on the reactant side to being predominantly on the product side as we move downstream in the cavity. And so we thought that was a little odd, and so we're trying to understand what happened. So if you look at the conditional enstrophy plot versus progress variable at three different axial locations downstream from the step, indeed it's peaked on the reactant side right immediately behind the step, and then the peak migrates over to the product side. So if we look at the stabilization here, what we see is that there is this nice recirculation zone if you do RANS averages of the streamlines. So that's recirculating radicals like OH and so on.
And so if you look at the flux of OH in the axial direction, there's definitely a flux going to the right and then coming back to the left due to this large-scale recirculation region behind the step, and likewise for the transverse OH flux. The other interesting thing we found is, if you look at the reaction rate for OH at these different axial positions, red, green, blue, and purple as you go downstream, there's no production of OH immediately behind the step; it's not until you get a considerable distance downstream of the step that you see production start to kick in, whereas you do see consumption of OH throughout. We see a similar picture for CO. And so what I think is happening is that the strain rates are so high and the residence times so short here, due to this high-speed strained flame, that you have basically quenched your oxidation layer. So you're no longer producing CO2 and water; in fact, the flame sits out closer to the product stream in the near field behind the corner, and those oxidation reactions are actually operating in the reverse direction. And so you're not producing radicals, and this flame is staying alive solely by the effective transport of OH generated farther downstream and brought back up through the recirculation bubble. Okay. Let me skip some of the rest of this, and for those of you who are interested I can talk about the reheat combustion problem. Maybe I'll just show one quick slide of that. Okay. So we looked at reheat combustion with hydrogen-air systems, and we are doing this together with Ansaldo Energia and SINTEF. They're interested in the flame stabilization mechanism when you have vitiated gases coming into the mixing section at high temperatures, in excess of a thousand Kelvin.
What I'm showing here is the enstrophy in the boundary layers colored by temperature, and then the combustion rate, or heat release rate, as this highly wrinkled triangular red region in the combustor section. So basically we have a mixing section followed by a sudden expansion into a combustor, a duct within a duct. And what we found from these calculations is that there are two combustion states observed. There's the design state, which is mainly due to autoignition in the combustion chamber with a little bit of premixed flame propagation at the corners. Again, this is a backward-facing step with recirculation, so you have premixed flames here and then autoignition in the middle. But more interestingly, we found that intermittently we see autoignition happening in the mixing tube, in the mixing section, which is an off-design state. [Audience: Is this for an exhaust gas turbine?] This is for a stationary gas turbine, for power generation. Right, it's a sequential burner for the GT36. So for the design state, here's the heat release and a slice of it, and this part is autoignition near the center line. These weaker, thin heat-release regions are flames anchored just like in the backward-facing step problem I showed before, due to the recirculation behind the sudden expansion. And we see the temperature burns more brightly for the autoignition near the center, less brightly, as you'd expect, near the flames. And then we also see evidence of autoignition in the presence of HO2 ahead of the heat release, because for hydrogen you get chain branching without thermal explosion first.
Then we confirm through various methods, including transport budget analysis, that it is a flame in cross-section B, where we see a nice balance between diffusion and reaction when you take a cut along the normal, whereas it's just autoignition in the middle: when you take a cut along A, you see a balance between reaction and advection. We've also applied, though I don't have time to talk about it, a more detailed chemical explosive mode analysis, which is an eigen-analysis of the reaction rate Jacobian; projecting the diffusive flux onto the explosive mode actually quantitatively distinguishes between the different combustion modes. When this ratio of the non-chemical source, the diffusion term, relative to the chemical source term is greater than one, you have assisted ignition propagation; when it's between minus one and one, this parameter from the chemical explosive mode analysis shows you that it's primarily autoignition; and when this parameter is less than minus one, it's quenching. If we plot these parameters, it's consistent with the transport analysis I showed previously, just more quantitative. It shows the propensity to autoignite towards the center line; the premixed flame propagation shows up in the red regions; and near the wall, where there are heat losses, the propensity is for diffusion to dominate chemistry. And if we weight the fuel consumption by the different modes, we find that overwhelmingly the fuel consumption is due to autoignition. So now, just briefly, what happens in the off-design state. When we have intermittent ignition, we occasionally see these blobs, these kernels, that extend from one wall to the other in the mixing section, which you don't want to have happen because of combustion dynamics and so on. It shows up in temperature, and it also shows up in HO2.
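The chemical-explosive-mode classification just outlined can be sketched as follows. The Jacobian and source vectors here are toy placeholders; a real implementation evaluates them per grid point from the chemistry and transport terms:

```python
import numpy as np

def cema_classify(jacobian, chem_source, diff_source):
    """Sketch of chemical explosive mode analysis (CEMA): find the most
    explosive eigenmode of the chemical Jacobian, project the chemical
    and diffusive source terms onto it, and classify the local mode
    from their ratio alpha."""
    w, vl = np.linalg.eig(jacobian.T)   # left eigenvectors of the Jacobian
    k = np.argmax(w.real)               # most explosive mode (largest Re)
    if w.real[k] <= 0.0:
        return "non-explosive"
    b = vl[:, k].real
    phi_w = b @ chem_source             # chemical projection
    phi_d = b @ diff_source             # diffusion projection
    alpha = phi_d / phi_w               # sign-invariant to eigenvector sign
    if alpha > 1.0:
        return "assisted (flame) propagation"
    if alpha > -1.0:
        return "auto-ignition"
    return "quenching"

# Toy 2-species example: one explosive mode, weak diffusion projection.
mode = cema_classify(np.diag([2.0, -1.0]),
                     np.array([1.0, 0.0]),    # toy chemical source
                     np.array([0.1, 0.0]))    # toy diffusion source
```

The thresholds at plus and minus one are the ones quoted in the talk; diffusion dominating chemistry in the same direction reads as assisted propagation, opposing it strongly reads as quenching, and everything in between as autoignition.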
And the source of this, we believe, is that you have autoignition happening in the combustor section, and this generates lots of pressure fluctuations, both longitudinal and transverse. Those pressure waves, or compression waves, emanate from these ignition kernels and propagate to the right out of the domain, but they also propagate upstream back into the mixing tube. These waves can ricochet off the walls, and occasionally you get constructive interference patterns that lead to a slight pressure rise in the mixing tube; then through isentropic compression you also get a temperature rise of maybe 20 to 30 Kelvin, and hydrogen being as reactive a fuel as it is, that will modify your ignition delay times by as much as 30%. So we do occasionally see evidence of flashback and ignition happening in the mixing tube, which is not desirable. So last, can I have one more minute? Okay, the last thing, just to change the subject completely, is what we're going to do to get onto these more difficult platforms that exascale is bringing. The constraint we face is that you don't want a power plant dedicated just to running a supercomputing facility. So power is the major design constraint, and what consumes a lot of power is not the computing itself but rather moving data even a tiny distance across a chip: moving data a few millimeters across a chip will cost around 50 picojoules per operation, compared to only about 10 picojoules to compute a floating point operation. What that implies is that you want to reuse your data and keep it local as much as you can, because if you move it even a little bit within a node it's going to cost a lot, and across the interconnect it's going to cost even more. Furthermore, since processor speeds have stalled, the only way we're going to get from petascale to exascale is through massive concurrency. You're going to need on the order of millions of processes running concurrently to get the parallelism.
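The isentropic-compression estimate quoted here is easy to check with the relation T2/T1 = (p2/p1)^((gamma-1)/gamma). With illustrative values (gamma and the mixing-tube temperature are assumptions, not taken from the simulation), a 5-10% pressure rise indeed gives a rise on the order of tens of Kelvin:

```python
# Isentropic temperature rise from a small pressure rise in the mixing tube.
# gamma and T1 are assumed, illustrative values, not from the DNS.
gamma = 1.33   # hot vitiated hydrogen-air mixture (assumed)
T1 = 1100.0    # K, mixing-tube temperature (assumed)
rises = {}
for dp in (0.05, 0.10):                  # 5% and 10% pressure rise
    T2 = T1 * (1.0 + dp) ** ((gamma - 1.0) / gamma)
    rises[dp] = T2 - T1
    print(f"{dp:.0%} pressure rise -> dT = {T2 - T1:.1f} K")
```

Because hydrogen's ignition delay is so sensitive to temperature at these conditions, even this modest rise is enough to shift the delay substantially, which is the mechanism behind the intermittent mixing-tube ignition.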
So the big challenge for software developers of science applications is how to express data locality and independence, express massive parallelism, minimize data movement, reduce any need for synchronization, and detect and address faults, because having millions of processors all up and running 24/7 isn't going to happen. There has been a program at DOE called the Exascale Computing Project, and it has adopted a holistic approach that uses the principles of co-design and integration to achieve a capable exascale machine in the next few years. It combines application development, software technology, and hardware and systems integration, involving collaborations between computer scientists, computational scientists, and applied mathematicians. We have a piece of this project developing the combustion application, and the extensions of our application include things like adaptive mesh refinement, so we can handle the disparate scales when you move up in pressure, as well as multi-phase spray multi-physics and thermal radiation. It's a multi-laboratory project between Argonne, Berkeley Lab, Oak Ridge, Sandia, NREL, and several university partners, funded by ECP. I'll just stop here by saying that we're excited about the prospects of doing DNS at these engine conditions. We've started some preliminary AMR calculations to look at multi-injection diesel fuel with n-dodecane. In the movies on the left I'm showing the mixture fraction of a pilot and a main jet, followed by the temperature, some of the low-temperature species markers, and OH on the right. And we can start to get really detailed knowledge if we take a cut: again, mixture fraction of the pilot on the left, followed by mixture fraction of the main on the right after a dwell time, then temperature, H2O2, and OH on the right.
So we get just an incredible amount of information, and we're trying to push to much more practically relevant regimes. I'll stop here. Thank you. [Moderator] Is the mic on? She has a tremendous amount of material to share. Those of us in the combustion community know that Dr. Chen answers questions very well, even by email, so I will take that privilege to ask her questions by email; if you mail those to us, we can forward them to Dr. Chen. In the hurry to get started, I handed the mic directly to Dr. Chiang. Our dean, Mung Chiang, is the John A. Edwardson Dean of the College of Engineering. [Dean: There's no need to talk about me.] Only two more sentences. His research received the 2013 Alan T. Waterman Award, his online courses have been taken by 250,000 students, and he has founded several start-up companies. We were delighted to have Dean Chiang join us. Would you take one question or two questions, ma'am? We have two minutes for two questions, so who would like to ask them? One from a student. [Student] Thank you for your speech. My question is: I'm a graduate student who works on computational methods. For us, developing the code itself can become the research problem, while the most important thing is to find some breakthrough in the physics. So what do you think of the tension between what we need to learn in our studies and what we really want to do in our research? [Chen] Well, I would have said 20 years ago that you can do it all. Given the complexity of where computing is headed today, I don't think the one-person shop is really feasible anymore: to develop algorithms, code them up on a large heterogeneous machine, and then do the physical sciences after that.
I think other communities, like the climate community and many others, have formed community codes and community ways to share data and software, and I think we're evolving in that direction in our turbulent combustion community. There's a lot of software; maybe not your specific setup for your flow configuration, which you're going to have to come up with yourself, but maybe sharing some of the tools and the infrastructure underneath that, including codes. There are codes like OpenFOAM and others that are open source, and the new code we're developing for exascale is also open source, so pretty soon you should be able to download that. And then having gateways so that our communities can share not only software but also databases previously generated from computation, making those available for modelers to investigate other physics or model assumptions that the person generating the data may not have had interest in or thought about. These calculations are very expensive; they're not something everyone can just hop on a computer and run. They take some expertise, so I think that is a very valid point: what do we share, and what parts do various people need to contribute or build on top of? [Moderator] Let me take a moment to say that tomorrow at 9:30 a.m., a panel session, "Unleashing the Power of Computing and Data at Scale," will be held in this building in room 3122. So your question was very valid, and the panel will continue discussing it tomorrow. One last question or comment. Professor Mayer. [Question] Yeah, just a curiosity from someone who's not specializing in computations. You had a four-nanosecond time step, which is way shorter than needed to resolve anything in the flow. Is that based on the stability of the code? How did you arrive at that? [Chen] Well, there's the CFL condition for compressible flows. We need a fine mesh in order to resolve the flame front or the ignition front scales.
And so that dictates the time step we can take, since it's a compressible solver. Also, when the chemical mechanism was reduced, it was reduced such that you still preserve all of the relevant chemical, radical time scales, and some of those are quite short. Some of the species are not put into steady state because that would make the chemistry less accurate. So it's a combination of the Courant-Friedrichs-Lewy (CFL) condition and trying to resolve the chemistry accurately. [Question] And so you have multi-resolution meshing for the spatial domain. Is there also multi-resolution in the temporal domain? [Chen] No, but there is what's called a low-Mach limit formulation, where we filter out the acoustic waves, the sound waves, and then we can take much bigger time steps that aren't restricted by the acoustic CFL condition. So both of those: doing adaptive mesh refinement, which allows us to put the mesh where the high-gradient regions are, like the flames, as well as filtering out the sound waves for conditions that are not so compressible. Certainly right near the injector it is highly compressible, so you need to resolve that, but as you move further downstream, where it becomes highly subsonic, you can maybe get away with the low-Mach formulation. So that's the direction we're headed. [Moderator] Let's thank Dr. Chen for a great seminar. Thank you.
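As a back-of-the-envelope check on the few-nanosecond time step discussed in this exchange, the acoustic CFL constraint dt <= CFL * dx / (|u| + c) lands in the right range. The mesh spacing and CFL number below are assumptions for illustration; the bulk velocity and preheat temperature are taken from the earlier ethylene-air case:

```python
import math

# Acoustic CFL estimate for a compressible solver (illustrative numbers).
gamma, R, T = 1.4, 287.0, 1125.0     # preheated air-like conditions
c = math.sqrt(gamma * R * T)         # speed of sound, roughly 670 m/s
u = 200.0                            # bulk velocity from the earlier case
dx = 4e-6                            # few-micron mesh to resolve the flame (assumed)
cfl = 0.8                            # stability factor (assumed)
dt = cfl * dx / (u + c)
print(f"dt ~ {dt:.2e} s")            # a few nanoseconds
```

A low-Mach formulation removes c from the denominator, which is exactly why it permits the much larger time steps mentioned in the answer.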