go live. Okay, I think we are live. Let me put this, come on. Awesome. Welcome back, everyone. Thank you for joining us for today's low physics webinar. My name is Alejandro and I'm going to be your host. Today we are presenting "Spectral difference methods for astrophysical fluid dynamics" by David Velasco Romero. David earned his bachelor's in physics from the University of Guadalajara, then did his master's at the Instituto de Ciencias Físicas at UNAM and his PhD in Computational Modeling and Scientific Computing at the Universidad Autónoma del Estado de Morelos. He then moved for his first postdoc to the Institute for Computational Science at the University of Zurich. David is currently a postdoctoral research associate at the Department of Astrophysical Sciences at Princeton University. His research interests include high performance computing, parallel and GPU computing, computational fluid dynamics, and planet formation, with emphasis on planet-disc interactions. We're delighted to have him in low physics today. Remember, you can ask questions over email, through the YouTube channel, or on Twitter, and the questions will be read at the end of David's talk. So without further ado, we will turn the time over to David. Thanks for joining us, and thank you, David, for being with us today.

Thank you for having me here. Should I start sharing my slides now? Yes, you can. Perfect. Okay. Perfect. So I'll be presenting this work on spectral differences that we have been developing for the past almost three years with these people: Maria Han-Veiga, Contan Begna, and Romain Teyssier. And, well, you are all aware that computational fluid dynamics for astrophysics covers a large range of scales: galaxy clusters, galaxies, stars, circumstellar disks, planets. Here are some examples: a galaxy, the Sun, and one of these circumstellar or protoplanetary disks, here with a planet. This is a simulation against an observation.
And we use these magnetohydrodynamics equations to simulate these systems, to replace what experiments do in physics: we do simulations to then have something to compare against theory. In this case, I'm going to talk about one family of methods for discretizing these equations, which is mesh-based methods; there are also particle-based methods. And I'm going to start with the equations, just the Euler equations or Navier-Stokes, which are conservation laws for mass, momentum, and energy, and which can actually be written in this compact form, ∂U/∂t + ∇·F(U) = 0: the update of a quantity is given by the divergence of the flux, where, for the Euler equations, we have a U-vector comprising the conservative variables, density, momentum, and energy, and the respective fluxes of those quantities. Now in these mesh-based methods, what we do is approximate the solution of this partial differential equation with segments, so we discretize our space. For instance, here I'm showing you what would be the representation of this nonlinear solution in a piecewise constant, or first-order, approximation. Then I'm going to talk about one of the most commonly used methods, the Godunov method for the Euler system of equations, which is a finite volume scheme for these conservation laws, now in integral form. So here again I have an exact solution, and I approximate it by piecewise constant values. We assign to each one of these segments the control volume average of that segment, which is also the value at the midpoint of the cell. And then what we need is to compute the fluxes between these different cells to know what the update, the evolution of our solution, would be. Now these fluxes are computed with what is called a Riemann problem, so we use Riemann solvers to compute what the flux between these two cells would be.
And these fluxes are unique between two cells, in order to enforce conservation of each of those quantities I mentioned before. Now, how do you compute these fluxes? The naive approach would be to take the average of the two values at the interface between the cells. And if we were to do that, we would find out that it's actually unstable. It's unstable because we are neglecting the proper propagation of information. If we take into account that the velocity, for instance, in this case is positive, then we see that at the next time step the solution should look like this, so the value at the interface should be the one coming from the left. Whereas if the velocity were negative, the value at the interface should be the one coming from the right. We can write this in a compact notation, which is called upwinding. As you can see, this reduces to what I mentioned before: if the velocity is positive, these terms cancel and we recover u_i, and if the velocity is negative, the other two terms cancel and we recover u_{i+1}. Now we can rewrite this equation in a form where these are the fluxes, and in that form we see that it's simply the gradient of the quantity multiplied by this factor. So if we plug this new flux back into the equation, we see that the modified equation has an extra term, a diffusive term with a diffusion coefficient nu given by this factor. We now see that the numerical diffusion is going to scale with the velocity of the fluid and also with the size of the mesh. What you see here is what is called the Courant factor, a value that is supposed to be below one, so that the time step taken in a simulation is controlled by the velocity and the size of the mesh, such that information does not skip a cell.
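The upwind scheme and its numerical diffusion can be sketched in a few lines of Python. This is a minimal illustration, not the speaker's code; the function name and parameter choices are my own. The modified-equation analysis above says the diffusion coefficient is nu = a*dx/2*(1 - CFL), so a sine wave advected once around a periodic box keeps its shape but loses amplitude:

```python
import numpy as np

def upwind_step(u, a, dx, dt):
    """One first-order upwind update for u_t + a u_x = 0 on a periodic domain.

    For a > 0 the interface flux F_{i+1/2} is taken from the left (upwind)
    cell, as described above; the scheme is stable for CFL = a*dt/dx <= 1.
    """
    flux = a * u if a > 0 else a * np.roll(u, -1)   # F_{i+1/2} from the upwind side
    return u - dt / dx * (flux - np.roll(flux, 1))  # conservative flux difference

# Advect a sine wave once around a periodic box and watch the amplitude decay.
n, a, cfl = 64, 1.0, 0.5
dx = 1.0 / n
dt = cfl * dx / a
x = (np.arange(n) + 0.5) * dx          # cell midpoints
u = np.sin(2 * np.pi * x)
for _ in range(int(round(1.0 / (a * dt)))):  # one full period
    u = upwind_step(u, a, dx, dt)
# The wave keeps its shape but its amplitude drops noticeably below 1.0,
# while the total (the conserved quantity) is preserved exactly.
print(u.max())
```

Halving dx (or raising the order, as the talk does) reduces this artificial diffusion, which is exactly the motivation for the high-order methods discussed next.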
So the propagation of information, or of any quantity, should just be advected from one cell to the neighboring one. Now, as I mentioned before, this is a first-order approximation, but we want to go to higher order to actually reduce this numerical diffusion. So we can go to a piecewise linear approximation of the solution, which would be second order, or even a piecewise parabolic approximation, which would be third order, where you see that we are adding more terms. Now, in practice, we have, as I mentioned before, the finite volume method, the Godunov scheme, here at second order, and here another kind of method, the finite element method, of which SD, the spectral difference method that I'm going to present, is one. And here what I want to show is the kind of stencil needed to interpolate values at the interfaces, to then solve the Riemann problem. Here we have disconnected cells in the finite volume approach; we solve the Riemann problem and compute the fluxes between these cells. This is the stencil at second order to compute those interpolated values at the interfaces, whereas for finite elements at second order it's actually compact: all the subcells within what is called an element share a polynomial stencil, so they are all computed using all of the other values. If we move to third order, the stencil for finite volume would still be this one, just to interpolate two interface values, and for finite elements it would be something like this: now we have nine subcells within the element, and again all of those are used to interpolate to different positions within the element. And we can extend this to fourth order.
And now the important thing here is what happens if we take parallelization into account. If we are doing massive simulations, which is commonly done in astrophysics, we have this main domain, and we have what are called these ghost cells, which allow the interpolation to happen at the boundaries. If this domain is too large, the common thing is to subdivide it into multiple subdomains, assigning each one of these subdomains to a different CPU to be computed in parallel. Now each one of these CPUs is going to have boundaries, these ghost cells, which are boundary conditions given by the neighboring processes. So you see that there is a need for communication between these processes. And the issue is that we need to rely on methods that are not bounded by the cost of this memory traffic, so that there is a balance between the computational cost and the memory cost, the cost to communicate this information. Now, if we look at the stencil of the reconstruction, to actually solve the problem: for instance, in third-order finite volume, in order to compute these values at flux points on this side, let's say that this is a given CPU and that this is another CPU computing this domain. Then this, in blue, is the information needed to compute these points in blue, and to then solve the Riemann problem we would also need these red points on the neighboring processor. You see that in this case we need two layers of information: either we communicate these layers and allow this process to compute both pairs of points, or we communicate this number of layers, allowing it to compute these points, and then we send the flux points so we can solve the Riemann problem.
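The ghost-cell communication described here can be sketched serially. This is a hedged stand-in for the MPI exchange the talk alludes to (the function name and layout are assumptions, not the speaker's code): each 1D subdomain carries n_ghost ghost layers per side, and a higher-order finite volume stencil needs correspondingly more layers of the neighbour's interior data, which is the communication cost being discussed:

```python
import numpy as np

def exchange_halos(left, right, n_ghost):
    """Fill the ghost ('halo') layers of two neighbouring 1D subdomains.

    Each array is laid out as [n_ghost ghosts | interior | n_ghost ghosts].
    In a real run these copies would be MPI send/receive calls.
    """
    # left's right-hand ghosts <- right's leftmost interior cells
    left[-n_ghost:] = right[n_ghost:2 * n_ghost]
    # right's left-hand ghosts <- left's rightmost interior cells
    right[:n_ghost] = left[-2 * n_ghost:-n_ghost]

# Two subdomains of 4 interior cells with 2 ghost layers each, as a
# third-order finite volume scheme would require.
g = 2
left = np.concatenate((np.zeros(g), np.arange(4.0), np.zeros(g)))
right = np.concatenate((np.zeros(g), np.arange(4.0, 8.0), np.zeros(g)))
exchange_halos(left, right, g)
print(left[-g:], right[:g])  # each side now sees its neighbour's interior edge
```

Doubling the order doubles n_ghost here, whereas in the finite element picture below only the interface flux points cross the boundary, which is the trade-off the talk emphasizes.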
In the finite element method, you only need to communicate flux points, as these flux points are computed with the information of the element; they don't need information from the adjacent element. The only thing we need to do is communicate the flux points. So in this case, you can see that it costs twice the information to be sent in this case than in this one, and if we move to fourth order, the same thing applies: now the cost in terms of information to be sent is actually three times larger in this case than in this one. So in short, finite volume methods are light in computations, they're quite simple, but they're quite heavy in communications as you go to high order, and vice versa: high-order finite element methods are really heavy in computations, they're expensive because of this interpolation using all of these stencils, but they're really light in communications. And this turns out to be a really good match for the highly parallel hardware that we have available now, like GPUs, graphics processing units. That's why more and more people are playing with finite element methods, to make use of the power given by these highly parallel processing units. Now I'm going to talk a little bit about the spectral difference method that I mentioned before. As I said, we divide the domain into Cartesian cells, or in this case elements, and inside each one we set P+1 solution points. The solution inside the element is given by Lagrange polynomials of degree P, so we represent the solution inside the element using this basis of Lagrange polynomials. We also have another set of points, P+2 flux points, now using a different set of Lagrange polynomials, of degree P+1. Here I'm showing you in red the flux points in x, and in the salmon color the flux points in the y direction.
And we compute the divergence of the flux using the derivatives of the Lagrange polynomials at the flux points. So the interpolation in this case looks like this: we have solution points, we apply this operation on the solution points using the Lagrange polynomials evaluated at the flux points, and we obtain the values at the flux points, where the flux points internal to the element don't need a Riemann problem. The only ones that need a Riemann problem, a Riemann solver, are the ones at the edge, which are shared with another element. After computing these, we then use this operation to compute the update at the solution points. Of course, this being 2D, there is a part in x and its counterpart in y, so there are two contributions to the update of a given quantity. And this is how these Lagrange basis polynomials look. Now, in order to ensure stability, there is the need to collocate the flux points at the Gauss-Legendre quadrature points plus the two end points. And this is actually one of the pitfalls of this kind of method: for stability reasons, it forces the flux points to be at those positions, squeezed closer and closer to the boundary, and that gives a non-uniform mesh within the element. As for the solution points, we can place them wherever we want within what we call control volumes, or subcells, inside the element. For our method, we use a time integration that is an ADER method: an implicit method in time that is updated using a predictor-corrector. And in this case, the stability condition that I mentioned before, the Courant condition, now also depends on the order of the solution.
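The solution-point/flux-point machinery just described can be sketched concretely. This is an illustrative setup under stated assumptions (node placements as described in the talk: equidistant solution points, Gauss-Legendre interior flux points plus the element endpoints), not the speaker's implementation; the helper name `lagrange_eval` is mine:

```python
import numpy as np

def lagrange_eval(nodes, x):
    """Matrix L with L[j, k] = ell_k(x_j): the k-th Lagrange basis
    polynomial built on `nodes`, evaluated at the points x_j."""
    nodes, x = np.asarray(nodes, float), np.asarray(x, float)
    L = np.ones((len(x), len(nodes)))
    for k in range(len(nodes)):
        for m in range(len(nodes)):
            if m != k:
                L[:, k] *= (x - nodes[m]) / (nodes[k] - nodes[m])
    return L

# One SD element on [0, 1] at P = 2 (third order):
P = 2
sol_pts = (np.arange(P + 1) + 0.5) / (P + 1)               # P+1 solution points
gauss = 0.5 * (np.polynomial.legendre.leggauss(P)[0] + 1)  # P interior points
flux_pts = np.concatenate(([0.0], gauss, [1.0]))           # P+2 flux points

# Interpolation operator from solution points to flux points; it is exact
# for any polynomial of degree <= P, e.g. x^2 - x:
interp = lagrange_eval(sol_pts, flux_pts)      # (P+2) x (P+1) matrix
u_sol = sol_pts**2 - sol_pts
print(np.allclose(interp @ u_sol, flux_pts**2 - flux_pts))  # True
```

The derivative operator back to the solution points would be built the same way from the degree-(P+1) flux-point basis; only the two endpoint values (shared with neighbouring elements) go through a Riemann solver.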
And something I forgot to mention is that we use this ADER method to match the order of the integration in time with the order of the spatial discretization. So these are how the control volumes look for different orders in this method, and, as I mentioned before, you can see that this stability requirement on the flux points makes the mesh non-uniform within the elements. Now here I'm going to show you the performance of finite volume at second order against SD-ADER at fourth order, comparing in both cases 20 degrees of freedom per side. So here we would have 20 cells, and here five elements with four subcells inside each element. If we let this evolve, we see that in finite volume the diffusion is already dominating the solution, whereas at fourth order we can actually preserve the initial profile after one full period of this advection of a sine wave. And if we make a convergence study of how well it preserves this initial condition after advection, we see that these kinds of high-order methods have what is called exponential convergence: the error, the difference between the initial solution and the solution after being advected, scales with the order of the approximation. We see here that, for instance, at eighth order, well, at seventh order, sorry, it's already getting to machine precision, so the error cannot go lower than this, because this is already the error given by the machine precision; you cannot make a difference smaller than that. Whereas for something like second order, even with more elements or more cells, you are really far, by a large number of orders of magnitude, from what you can achieve with eighth order. Now, that was the behavior of these kinds of methods with smooth solutions.
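The exponential convergence described here is a generic property of polynomial approximation of smooth functions, and can be illustrated independently of the SD machinery. A minimal sketch (my own, not from the talk): interpolate sin(pi*x) with a degree-P polynomial through Chebyshev nodes and watch the max error fall by orders of magnitude per increase in degree, until it bottoms out near machine precision:

```python
import numpy as np

def cheb_interp_error(P, f):
    """Max interpolation error on [-1, 1] for a degree-P polynomial
    through P+1 Chebyshev nodes, fitted in the Chebyshev basis."""
    k = np.arange(P + 1)
    nodes = np.cos(np.pi * (k + 0.5) / (P + 1))  # Chebyshev nodes on [-1, 1]
    coeffs = np.polynomial.chebyshev.chebfit(nodes, f(nodes), P)
    xs = np.linspace(-1, 1, 2001)
    return np.max(np.abs(np.polynomial.chebyshev.chebval(xs, coeffs) - f(xs)))

f = lambda x: np.sin(np.pi * x)  # a smooth profile, like the advected sine wave
errors = [cheb_interp_error(P, f) for P in (2, 4, 8, 16)]
print(errors)  # each entry is orders of magnitude below the previous one
```

A fixed-order mesh refinement, by contrast, only gains a constant factor of 2^order per doubling of resolution, which is why second order stays so far above eighth order in the convergence plot.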
But if we move now to discontinuous solutions and do the same experiment as before, we encounter these oscillations, which are simply the Gibbs phenomenon. We can only avoid this at first order: as soon as you go higher than first order, you are going to encounter these kinds of oscillations, which are not part of the solution; they are an artifact of your finite discretization. And in physics this can actually be quite dangerous, because these oscillations, if you are thinking about density or pressure, for instance, can mean that your density or pressure becomes negative, which is not physical. So we want to avoid this, both for the numerical part and for the physics part. What we decided to do in this case is to mix these two kinds of methods. You can see here that this second-order finite volume Godunov method, although it is diffusive, is well behaved: there are no oscillations. So we want to use this method to control the oscillations; wherever we encounter an oscillation, we are going to replace the oscillatory solution with something well behaved, or bounded. And in SD we can do that, because there is actually an equivalence between the two methods: the update of an inner control volume, these subcells or control volumes inside the element, can be written, instead of in the way I showed you before, in a finite volume way. Therefore we can replace the high-order fluxes with second-order fluxes whenever we encounter problems. Now we need a criterion to distinguish which values are incorrect. We have two criteria: one is numerical and the other is physical.
For the numerical one, you can see here that we require that the solution at n+1, the next time step, is within the values of the solution at the previous time step. If we have a new extremum, then we run a detection of the smoothness of this extremum: if the new extremum is smooth, we allow it to evolve, we say it's not something incorrect; but if it's not smooth, we need to correct it. And as I mentioned before, we also require the density and pressure to be positive. Instead of correcting the whole solution within a cell or an element, we make use of this equivalence to finite volume at the subcell level and only replace the fluxes of subcells within the element. So we compute a candidate solution with the high-order fluxes, and if this candidate solution is considered troubled by the criteria I mentioned before, we then replace the fluxes with second-order fluxes. For instance, in 1D, a troubled cell will have its fluxes replaced with second-order fluxes, and the neighboring cells will also have one of their fluxes replaced, so one high-order flux and one second-order flux. Let's see how the high-order solution behaves now, adding this correction of troubled cells. Here in this video, in red, I was showing the subcells that were detected as problematic and then corrected. And now we see that the solution is bounded by the initial condition; there are no longer oscillations. If we evolve this not just for one lap but for five laps around this box, we see that second order, for instance, is getting quite diffusive, whereas fourth order preserves this shape, which is not symmetric, but is now well behaved in terms of not being oscillatory. So we see that, for different orders, the oscillations that were happening before at discontinuities are now controlled.
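A simplified 1D sketch of the two admissibility criteria just described (my own illustration, with the smooth-extremum relaxation omitted for brevity; the function name and tolerance are assumptions): the candidate value must lie between the min and max of the previous solution over the cell and its neighbours, and density and pressure must stay positive.

```python
import numpy as np

def troubled(u_old, u_new, rho_new, p_new, eps=1e-12):
    """Flag subcells whose candidate high-order update is inadmissible.

    (1) numerical: u_new must lie within [min, max] of u_old over the
        cell and its two neighbours (periodic here for simplicity);
    (2) physical: density and pressure must remain positive.
    """
    lo = np.minimum(np.minimum(np.roll(u_old, 1), u_old), np.roll(u_old, -1))
    hi = np.maximum(np.maximum(np.roll(u_old, 1), u_old), np.roll(u_old, -1))
    not_bounded = (u_new < lo - eps) | (u_new > hi + eps)
    not_physical = (rho_new <= 0) | (p_new <= 0)
    return not_bounded | not_physical

# A candidate with Gibbs-like over/undershoots next to a discontinuity is
# flagged; the scheme would then recompute those subcells' fluxes at second order.
u_old = np.array([1.0, 1.0, 1.0, 0.1, 0.1, 0.1])
u_new = np.array([1.0, 1.0, 1.05, 0.05, 0.1, 0.1])  # overshoot and undershoot
rho = np.full(6, 1.0); p = np.full(6, 1.0)
print(troubled(u_old, u_new, rho, p))  # only the two oscillating cells are True
```

In the actual scheme a flagged subcell keeps a genuinely new smooth extremum; this sketch would flag it too, which is the conservative side of the trade-off.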
And we also see that, if we go back to smooth solutions, this trouble detection is not hindering the performance that we were observing before. Now we can move to hydro tests. This is the shock tube test, where we have a discontinuity between two fluids; the simulation is just like removing the boundary between these two fluids, and then we see a rarefaction, a contact discontinuity, and a shock wave evolving. Again, in red, I'm showing you which cells are corrected in a slice of these two simulations. If we make a study of the performance of the code at second, fourth, and eighth order, we see that there is an advantage in going to high order; well, in this case it's a clear advantage. Also, the nice thing here is that, here in gray, I'm drawing the elements, so we see that these discontinuities are actually captured within elements. And if we now check the behavior of the method with a constant number of degrees of freedom, so again second, fourth, eighth order, but now such that the product between the number of cells and the order plus one is constant, in this case 160, we see that there is a slight benefit in going to high order. Now if we move to a 2D hydro test, this is known as the Kelvin-Helmholtz instability. We have two fluids with different densities, one moving to the right, one to the left, and eddies are going to form due to the shear between these fluids. The size of these eddies depends both on the size of the control volumes and on the numerical diffusion: if your code is too diffusive, these vortices are not going to form. Again, here I want to show, similar to what I mentioned before for the 1D test, that these instabilities can actually be captured within the subcells of an element.
And as I also mentioned before, one of the drawbacks of these kinds of methods is that they imprint the non-uniform geometry of the element. Now, the previous one was a test with a shock, but not a strong one. So now we move to what is called the double Mach reflection, which is a flow driven against a reflective wall. Here I'm showing you the solution for different resolutions at second order: here just the density, whereas here I'm showing you some contours of the density, superimposing on these contours the cells, or subcells, that were corrected. You see that these detections are aligned with the discontinuities, and if we go to higher order, we see that, because of that, the detections appear only at discontinuities. Actually, even in these cases where you have discontinuities and require this triggering of the second-order fallback scheme, you still gain from going to high order. So again, the same experiment as before: second, fourth, eighth order with a constant number of degrees of freedom, doubling the resolution row by row. We do see that there is a benefit in going to high order, where, for instance, eighth order with 100 cells is arguably better than what you get with second order and 1,600 cells. Now, the work that I mentioned so far has already been published; it's the first paper of this code. What we want to do now is to use this for some astrophysical problems, and what we have in mind is to tackle low Mach number flows and stellar interior problems. In stellar interiors you have these highly subsonic flows that, as mentioned here, are characterized by small perturbations around an equilibrium solution that develop into thermal turbulent convection. The usual approach to this kind of problem is to solve either Navier-Stokes or, including magnetism, MHD.
These are usually solved in some sort of approximation, and typically what these approximations do is neglect the propagation of sound waves. Because these flows are really subsonic, which means that the sound speed is way larger than the velocity of the fluid, the time step is constrained by the sound speed, so more and more time steps accumulate, and there you become dominated by numerical diffusion. So, in order to see the performance of the method in this kind of scenario, we started with a low Mach number study with just a smooth profile, which is called the Gresho vortex: a rotating smooth profile. I'm showing you again the same experiment, second, fourth, eighth order with the same number of degrees of freedom, here for a Mach number of 10^-2. You see that all of them perform well; you can see this one being a little bit more diffusive. But as you decrease the Mach number, if you go to 10^-3, you start seeing that the solution is dominated by numerical diffusion, and if you go to an even lower Mach number, 10^-4, you see that this solution is completely washed out, whereas fourth and eighth order can still preserve the solution. In this case, you can fix this by just doubling the resolution, which would be equivalent to something like this in terms of the cost of the computation, but here you can see that it would take a lot more resolution to do something like this. So that is the performance of the method with smooth profiles. Now, what happens if we have low Mach number flows with discontinuities? This is the Rayleigh-Taylor instability, where I'm showing you the solution for the second-order method with increasing resolution.
You see that, increasing the resolution, you have these Kelvin-Helmholtz secondary instabilities developing around the initial Rayleigh-Taylor instability. We did the same study with high order, and you do see that this trend of getting more instabilities continues, even at the same number of degrees of freedom, by increasing the order. So we see that at low Mach number, even with discontinuities, you do get a benefit from going to high order at the same number of degrees of freedom, where again it's an arguably similar solution, with fewer degrees of freedom than in the second-order case. Now, another part of this stellar interiors scenario is the evolution of a small perturbation over a hydrostatic equilibrium. Here the challenge is that your perturbation is so small compared to the equilibrium solution you have to evolve that, if your solution approaches machine precision, your method can wash out that perturbation entirely. We can see that here, where I have a small pulse over a hydrostatic equilibrium: again the same exercise, second, fourth, and eighth order, for a pulse of 10^-4. You see that even at 10^-4, second order is already smearing the solution. If we go to 10^-8, we see that now fourth order is not able to keep this small perturbation. And if we go even to 10^-12, which is really close to machine precision, eighth order is still able to do it; you start seeing some small oscillations, it's having trouble, but it's still able to do it, whereas second and fourth order are by no means able to. All these three things combined are actually needed to make these kinds of simulations, stellar convection simulations. Here I'm showing you a simplified scenario, which is a 2D stratified medium.
We start with a small perturbation in density that grows by buoyancy into thermal convection, and the diffusion of your method is going to have an impact on the evolution of this thermal convection. So, in order to be able to compare the performance of the method at different orders in this kind of experiment, we made a further simplification of this problem: instead of these small random perturbations in density, we have just a single perturbation, a bubble. So an underdense hot bubble in the stratified medium, which we let evolve in this convective region, and you see it here evolving into turbulent convection. We did the same experiment with different orders; again, the same thing as before, quite repetitive: second, fourth, eighth order. And again, you see that also in this scenario you gain from going to high order, at least here in quality: you see more instabilities. But what about a study that quantifies the performance of the method? We let these runs evolve to longer times and then analyzed the spectrum of the kinetic energy, and we started seeing that going to high order allows you to have more energy at smaller scales. This is also the trend that you get at, for instance, second order if you increase the resolution, which I'm not showing here: you see the same trend, more energy at smaller scales. So this is the state of the art of what we are doing. In summary, we have coupled these methods with a finite volume fallback scheme that allows us to control oscillations, which is the first paper, and we included in this method what is called a well-balanced scheme, to enhance the performance of the code with these small perturbations over equilibrium solutions.
We have a working code in 2D, and we have another version of the code, in 3D, in C++, to allow massive simulations dividing the domain. The ongoing work is, as I showed you before, these low Mach number tests that suggest the benefits of going to high order. We are testing the MHD features in 2D, and the next steps of my work would be to include MHD in 3D in the C++ code and to include CUDA; the code already has MPI, but to include CUDA to actually be able to use GPUs and use the full power of the clusters available right now. And that's it. Thanks.

Thank you, David, for this wonderful talk on spectral numerical methods. Let me check if there are questions over here or if someone here has a question. Let me start with one of the questions. When you were comparing the second order with the, okay, I forgot the names, but you have this pulse that is a square, and you said the implementation of one of those creates an asymmetry. Is there a way to understand what causes that asymmetry, given the symmetry of the problem?

Yes. So the asymmetry comes from here. You see here that in high order, in the non-controlled version, you already have an asymmetry. The asymmetry is given by the propagation, by the advection. Now, once you try to correct these asymmetric oscillations, you're going to have something that, even though it's controlled, bounded, sorry, is going to keep the asymmetry that you were trying to control, right?

And knowing that, is there a way, okay, maybe, is this a problem? Is that something you want to solve? Given that you know where it is coming from, maybe is there a way to correct for that?

Not that I can think of. I mean, I don't think it is a problem. Because at the end, it's given by how you are advecting.
So if I were to advect the other way around, yeah, in this case it doesn't seem asymmetric, but it's also asymmetric: you see that there is more diffusion in this direction. Okay. Okay. Thank you.

Okay, I received another question: how easy is it to use your code or numerical method for other applications? As long as it's the same PDE, the same partial differential equation, it's easy. The issue right now is that it was not written in a way that you could easily modify the partial differential equation. So if you are doing something with hydrodynamics, then yeah, it's generic, so you can tackle whatever, in principle.

Okay. If I may, a follow-up question: but then the combination of these two methods, in some sense, the scheme, even if your code might not be used like that directly, maybe the scheme or the idea can be extrapolated to other applications? Right. Yep.

Okay. Do we have any more questions? Let me check again. Yes, Roberto.

Hi, David. I wonder, because when you were presenting how these instabilities were appearing, my question was related to the type of the cell. Since you are using a rectangular grid: is there any effect if you modify the shape of the cell? Or, the other question, just to complement this doubt: are there some directions in which the code manifests larger noise in the solution? I mean, I think that if the movement is directly in the x or y direction, it's kind of a smooth movement, but if it is on the diagonal or at an angle, it would present more instabilities. Is that correct? What I'm saying is, if your fluid is moving in a way that is not horizontal or vertical, is it actually harder to tackle these instabilities than for one that is just moving along the grid?

No, actually it's the opposite.
When things are aligned with the mesh, that can create artifacts; for instance, there is the carbuncle instability. Okay, maybe it was the opposite then. And the other, the first question was: is there any improvement or gain if you change the shape of the grid? Instead of rectangular or square cells, you could have hexagonal cells, in order to tile the plane, or something like that. More commonly, people use triangles; that's another way to do it. But actually I have no idea of the benefits of doing something with those kinds of meshes. Okay, I was just wondering whether the mesh has an impact, I mean, not the number of cells, but the morphology. It has an impact, for sure. But maybe it's harder to code than something that is regular. For some applications related to structure formation, dark matter and things like that, there were codes that tried to use adaptive grids, but they said it was a nightmare. Yes, there are different approaches to this in these kinds of mesh-based methods. With non-uniform meshes, the common thing to do is to increase the resolution only locally, so that you don't pay the cost of using the same small cell size everywhere: in the locations where you know there is going to be a need for more resolution, you increase it. That is adaptive mesh refinement, and yes, it can be quite convoluted to implement. There are other things like Voronoi tessellation, where at each time step you form cells of irregular morphologies. And all of these approaches actually have the same issue I mentioned before: whenever you don't have a uniform mesh, you imprint the geometry of the mesh on the solution.
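One way to see how a mesh imprints itself on the solution: every time the solution is re-projected (resampled) onto grid cells, the sampling damps and distorts it a little. The toy Python sketch below is a hypothetical illustration, not code from the method presented; it repeatedly resamples a sine wave onto a grid shifted by half a cell, where each pass of linear interpolation reduces the wave's amplitude.

```python
import numpy as np

n = 64
x = np.arange(n) / n
u = np.sin(2 * np.pi * x)   # one sound-wave-like mode on a periodic grid
amp0 = u.max()

# Re-project the wave onto a grid offset by half a cell, 50 times.
# Linear interpolation midway between samples is a neighbour average,
# which damps Fourier mode k by a factor cos(pi * k / n) per pass.
for _ in range(50):
    u = 0.5 * (u + np.roll(u, -1))

amp = u.max()  # expected ~ cos(pi/64)**50, about 0.94 of the original amplitude
```

After 50 projections the amplitude has dropped by about six percent even though nothing physical happened; a non-uniform mesh does the same thing in a direction-dependent way, which is how the geometry of the mesh gets imprinted on the solution.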
Okay, I understand. So, for instance, if sound waves are propagating in the fluid, after passing several times through this kind of non-uniform mesh, you are going to modify the sound waves, the front of the wave. Yes, because you're projecting the solution onto the mesh, so each time it's a projection of a projection. Okay, thanks. Okay, I think we have time for another question; I'll have to summarize this one. For the small perturbation: how do you know that a small perturbation should not be washed out, and that what is happening is not an artifact of the higher order? How do I know that it's not? I guess the question is: if you have a small perturbation, this was what you talked about at the end of your talk, how small is too small? Such that, for example, if you add the perturbation, nothing should happen, and if something does happen, it is due to your numerical method. I guess that's the question. Okay, in this case, for instance, we know what the solution is; it's an analytical solution. We know that this is supposed to happen. So if the method is clipping the solution, not allowing it, totally smearing what was happening there, we know that that is not part of the solution: it's an artifact of the method, the method not being able to capture what is supposed to happen. And here it is capturing what is supposed to happen. Okay. If there's a follow-up, we'll know. Thank you. Okay, do we have any other question? I have a question, David, which is: you showed all these classical tests. Is there a test that your method doesn't pass, or where you expect it to have a lot of issues?
Right now, with MHD at high order, there is this test called the Orszag–Tang vortex, where I see that, going to high order, the simulation can actually crash. And more or less, what's happening there, if you can comment a little bit? I'm still trying to understand what's happening there. I guess that at high order you have more, or better defined, instabilities; the discontinuities are more discontinuous, let's say. So I think that's the issue. Whereas at low order, these kinds of features are smoothed and diffused by the method, and you don't see these sharp discontinuous things happening. If I go to high order... I think it's something like that. Okay, thank you. And thank you very much for this nice webinar. I guess a lot of people who are very interested in this method, and might be able to apply it in their own research, will be happy to see your talk. I don't see any more questions, so we hope to see everyone in the next webinar. Thanks for participating, David, and see you soon. Bye-bye, everyone. Okay. Okay. [In Spanish] We're no longer live.