I think, to try to be fair, it's hard the way things are architected now and I don't have a good answer. Thank you very much. Thank you. The next and final talk of this workshop, by John Naliboff from CIG, will explain CIG's activities. Thanks.

All right. Well, first, thanks for the introduction, and thanks to the organizers. This talk is going to be a little bit different than the last one. Rather than focus just on CIG's activities in the tectonics community, I'm going to try to give a broader overview of the challenges and also the resources in the larger long-term tectonics community. So I had two goals. Number one was to talk about longstanding challenges in the tectonics community and also some solutions to them. For that I should acknowledge the rest of the CIG group, who helped me pare down a three-hour monstrosity of challenges into something digestible in 30 minutes; hopefully I've done that. The second goal was to provide resources to the surface processes community: if you want to learn more about tectonic modeling, or even start doing it, I've provided links throughout the talk to educational resources, different codes, key papers, and computational resources like XSEDE. All of that is hyperlinked in here and it should be downloadable somewhere.

First, a few key slides about CIG. The mission of CIG is developing and disseminating software for geophysics and related fields. This applies to a large number of scientific domains: the geodynamo, mantle convection, long-term tectonics, seismology. So it's much larger than just the tectonics community. There are over 33 codes at this point. The ones we use mostly for long-term tectonics include ASPECT and SNAC, which isn't shown up here, and also Gale a little bit, though not so much anymore. The range of codes at CIG includes codes which are entirely funded and developed by CIG on one side, and then codes which were developed independently of CIG and contributed to it. You can download any CIG code by going to geodynamics.org, where there's a direct link, or you can go to our GitHub page, github.com/geodynamics, where all the codes are available.

If you're in the surface processes community and you need a starting place to learn about the computational techniques and methods used in modeling tectonic processes, I recommend starting with one of these three books. I'd say a large portion of us learned geodynamic modeling by going through Taras Gerya's book. There's also a fantastic set of free lecture notes put together by Thorsten Becker and Boris Kaus, and those are available for free on the web.

For the rest of the talk there are basically two topics I'm going to cover. First, I want to get everyone on the same page, so I'm going to give an overview of the tectonics problem and the numerical methods. The second part goes through a series of challenges, and it was really encouraging to see that in the breakout groups, both today and yesterday, you've effectively covered all of my talk here. So this will be a reiteration of a lot of those points, but hopefully in a bit more detail.

So what are long-term tectonic processes in terms of the modeling space? I would define that as anything ranging from crustal deformation to lithospheric deformation to mantle convection.
Effectively, what a long-term tectonics code is trying to do is model plate boundary deformation. This includes materials deforming in the elastic, brittle, and viscous regimes, and in some cases plate interiors as well. But really the way to define this problem is to think about the spatial and temporal scales. When we talk about long-term tectonic models of crustal deformation, lithospheric deformation, and mantle convection, effectively we're looking at spatial scales from tens of kilometers to thousands of kilometers, and temporal scales ranging from hundreds to thousands of years all the way up to tens of millions, hundreds of millions, or in some cases, I suppose, billions of years of convection. You can certainly go outside of these ranges, but most of the studies you see in long-term tectonics fall within them. The challenge, and we've covered this extensively in the meeting, is that even though we're modeling processes on these temporal and spatial scales, we know there are processes outside of them, such as earthquakes and faulting (the seismic cycle), magmatic processes, and surface processes, that affect all of this. So a long-standing question is: are our approximations for these processes accurate, and if not, how do we effectively couple them?

For the basic physical approximations, you start with the conservation equations, and here I'm assuming we start from the viscous approximation. A number of codes start from an elastic-plastic perspective, but the majority of codes do this from the viscous perspective. The way I've written the conservation equations down is for incompressible viscous flow, that is, the Boussinesq approximation. If you want to move towards a compressible formulation, so something a bit more realistic, you need to include terms like adiabatic and viscous heating and phase changes, but for the most part, if you're just thinking about lithospheric or perhaps upper mantle deformation, these equations suffice.

The real trick is when you get into the rheology, and most current studies are using nonlinear rheologies. In terms of formulating the viscous part, you might see an equation of this type, which can apply to dislocation or diffusion creep. If n is one, you're in the diffusion creep regime; if n is greater than one, it's nonlinear and you're in the dislocation creep regime. There are also the activation volume and activation energy, a grain-size term, and a pre-exponential factor. If you want to include elastic effects, you might modify your viscosity to include viscoelasticity; I've put a little note here saying that if you include viscoelasticity you actually need to modify your governing equations as well. Finally, we introduce brittle deformation, or a proxy for it, through a yield criterion. What we do is compute an effective yield stress, in this case a Drucker-Prager form, where the yield stress depends on the pressure, the coefficient of friction, and the cohesion. If the viscous stress exceeds the yield stress, we reduce the viscosity so the stress is brought back down onto the yield surface. There are various other ways of doing this, but this is the most common.
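Written out, the formulation just described looks roughly like the following. The slide equations themselves aren't in the transcript, so this is a sketch of the standard forms, not the speaker's exact notation: u is velocity, p pressure, T temperature, η viscosity, ε̇ the strain rate tensor with second invariant ε̇_II, A the pre-exponential factor, n the stress exponent, d grain size with exponent m, E and V the activation energy and volume, φ the friction angle, and C the cohesion.

```latex
% Incompressible (Boussinesq) conservation of momentum, mass, and energy
-\nabla \cdot \bigl[\, 2\eta\,\dot{\varepsilon}(\mathbf{u}) \bigr] + \nabla p = \rho\,\mathbf{g}, \qquad
\nabla \cdot \mathbf{u} = 0, \qquad
\rho C_p \Bigl( \tfrac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T \Bigr)
   = \nabla \cdot (k \nabla T) + H

% Diffusion/dislocation creep viscosity (n = 1: diffusion creep, n > 1: dislocation creep)
\eta_{\mathrm{creep}} = \tfrac{1}{2}\, A^{-1/n}\, d^{\,m/n}\,
   \dot{\varepsilon}_{II}^{\,(1-n)/n} \exp\!\Bigl( \tfrac{E + PV}{n R T} \Bigr)

% Drucker--Prager yield stress and viscosity capping onto the yield surface
\sigma_y = C \cos\phi + P \sin\phi, \qquad
\eta_{\mathrm{eff}} = \min\!\Bigl( \eta_{\mathrm{creep}},\; \tfrac{\sigma_y}{2\,\dot{\varepsilon}_{II}} \Bigr)
```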
Another thing you might see in a nonlinear rheology, and these are quite important, is grain size evolution and strain weakening of both the brittle and viscous terms. Commonly what we do is track finite strain, and as it accumulates we weaken the friction angle, the cohesion, or both, and so forth.

So how about the numerical methods? The general procedure, and I apologize, this is grossly oversimplified because I tried to fit it into one fourth of my slide, is as follows. If you're modeling a tectonic process, or really any process, you choose a numerical method; in our case this is normally a finite element or finite difference approximation, which allows you to rewrite your PDEs as algebraic equations. You specify the boundary and initial conditions for your given problem, and then you can actually start the solution process. You might solve for velocity first; if you have a nonlinear rheology you should probably be doing nonlinear iterations to iterate out the nonlinearity. Then you might solve for temperature, and possibly composition. At the end of that first step you might advect the free surface and advect the tracers, or whatever you're using to track material properties. Then, if you want to do more than one time step, you go back to step four and keep going through this loop until you reach your final model state.

What I thought would be useful is to show a model result, so you can see what these models actually look like. This is a model from the CIG code ASPECT. It's a simple thermal-mechanical model of continental extension: I took a 300 by 100 kilometer box, pulled on the sides, and somewhere near the center of the model, beneath the Moho, I put a mechanically weak seed. This allowed two conjugate faults to form, but it's important to remember these are not faults, they are shear bands; they're in the brittle regime right here. The resolution in this model is one kilometer. Topography is developing in the horst and graben system, and here's our free surface right here; what's bounding it is not a discrete fault, it's a shear zone, and the width of the shear zone is governed by the resolution. If you take one thing away from this talk, take this: these results are resolution dependent. The thickness of our shear bands is directly dependent on the size of the grid.

So then a bit more about software design. What do the codes we use to model these kinds of things look like? They range from a simple design to a complex design, and most of the codes I'm going to talk about on the next slide fall either in the complex design or somewhere in between. The simple design might be hundreds of lines, possibly a single file, with straightforward assumptions, probably 2D, and you can write it in an interpreted language. The complex codes are parallelized so you can do massive 3D problems and simulate a wide range of physical behavior; they're hundreds of thousands to millions of lines, built on multiple packages, most likely in a compiled language like C, C++, or Fortran, although some of these codes do use Python, for example TerraFERMA, and they almost always rely on parallel libraries.
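To make the general procedure above a bit more concrete, here is a minimal, code-agnostic sketch of the time-stepping loop just described. All the solver pieces are passed in as callables, and every name here (solve_stokes, solve_temperature, and so on) is a hypothetical placeholder, not the API of any particular code.

```python
from dataclasses import dataclass, field

@dataclass
class State:
    """Minimal container for the evolving model state (hypothetical)."""
    time: float = 0.0
    fields: dict = field(default_factory=dict)  # velocity, pressure, temperature, particles, ...

def run_model(state, solve_stokes, solve_temperature, advect, cfl_timestep,
              t_end, nonlinear_tol=1e-4, max_nonlinear_iterations=50):
    """Operator-split time loop of the kind described above.

    The callables are user-supplied: solve_stokes returns the remaining
    nonlinear residual after one linearized solve, solve_temperature updates
    the temperature (and composition) fields, advect moves the free surface
    and tracers, and cfl_timestep returns a stable time step.
    """
    while state.time < t_end:
        # 1. Nonlinear (Picard/Newton) iterations: viscosity depends on strain
        #    rate and stress, so the Stokes solve is repeated until converged.
        for _ in range(max_nonlinear_iterations):
            residual = solve_stokes(state)
            if residual < nonlinear_tol:
                break

        # 2. Advection-diffusion solve for temperature (plus composition, if a field).
        solve_temperature(state)

        # 3. Advect the free surface and the tracers/particles carrying properties.
        advect(state)

        # 4. Pick a stable (e.g. CFL-limited) time step and advance to the next step.
        state.time += cfl_timestep(state)

    return state
```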
If you're looking to get a sense of the variety of codes out there, I've compiled a list, and I apologize if I left someone's code, or their favorite code, out of it. These are what I call tectonics codes which are either actively being developed or actively used in the community, and they're divided into three groups. The first two, Abaqus and COMSOL, I don't have links for because they're commercial codes. There's a huge advantage to using commercial codes, which is that they're well tested; the disadvantage is that they're not open source, and for the most part I discourage people from using those kinds of codes. But if you are set on using a commercial code for a tectonics problem, it would probably be Abaqus or COMSOL. For the next group I've provided links, and these are links to papers; these are codes which are open source but not open access yet. You might be able to ask the developers of these codes for access, but you can't just go on the web and pull them down; pTatin3D and a number of the others might be open access at some point soon. The rest of them, ASPECT, LaMEM, DynEarthSol2D, TerraFERMA, Underworld 2, SiStER, MVEP2, all have links here that take you directly to their GitHub repositories, and you can just pull them off the web. One point on this: if you learned to program in MATLAB or Python and you want a code you can really sink your teeth into and start modifying, I'll note that SiStER and MVEP2 are written in MATLAB; one is finite difference and one is finite element. So if you're really comfortable programming in MATLAB or Python, that might not be a bad place to start out.

So, next, on to the challenges. At this point I think it's useful to define verification and validation; these terms have come up in the breakout sessions. These are direct quotes from William Oberkampf, from a great report he wrote for Sandia National Laboratories; there's also a CIG webinar he did a few years ago. To break these down more simply: verification is basically the process of making sure your code is doing what you intended, in other words, are your algorithms working as intended. Validation is basically determining whether the results of your code are applicable to the real world. The verification part is a lot easier than the validation part, and I'm going to spend most of the rest of the talk on validation. But for verification, how can you break this down? How do you know your code is doing what you intended it to do? Well, you can look at your numerical implementations and ignore the physics altogether. If you have an equation y = x + 1, you can go inside your code and write a print statement saying: if x is equal to one, then y had better equal two. That's the simplest possible example, but you can do all sorts of tests like that to ensure your numerical implementations, independent of the physics, are correct. In terms of the physics implementation, the first step is to compare your code to an analytical solution, something like the solution for Stokes flow around a sphere. Most of the problems we do are highly nonlinear, so the next thing you can do is compare your code, for a nonlinear problem, to other codes; we call this code replication.
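As a toy illustration of the verification idea, in the spirit of the y = x + 1 test, here is a small self-contained example (not from the talk) that verifies a finite-difference solver against an analytical solution. It uses simple 1D diffusion rather than Stokes flow around a sphere so that it fits in a few lines, and checks that the error shrinks at the expected second-order rate as the grid is refined.

```python
import numpy as np

def solve_diffusion(n):
    """Finite-difference solve of u''(x) = -pi**2 * sin(pi*x), u(0) = u(1) = 0,
    whose exact solution is u(x) = sin(pi*x)."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    # Tridiagonal second-derivative operator on the interior points.
    main = -2.0 * np.ones(n - 1)
    off = np.ones(n - 2)
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
    rhs = -np.pi**2 * np.sin(np.pi * x[1:-1])
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

def verify_convergence():
    """Verification test: the max error should drop by ~4x every time the grid
    is halved (second-order accuracy); if it doesn't, the implementation is wrong."""
    errors = []
    for n in (16, 32, 64, 128):
        x, u = solve_diffusion(n)
        errors.append(np.max(np.abs(u - np.sin(np.pi * x))))
    rates = [e0 / e1 for e0, e1 in zip(errors, errors[1:])]
    assert all(rate > 3.5 for rate in rates), f"unexpected convergence rates: {rates}"
    print("errors:", errors)
    print("rates:", rates)

if __name__ == "__main__":
    verify_convergence()
```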
One example of this code replication is a community effort to benchmark numerical models of brittle thrust wedges, and this should be required reading for anyone who wants to think about tectonic models, if for nothing else than the discussion on plasticity. In this case they compared, I think, well over 10 codes, and they were also comparing to analog models. The lessons from this were fascinating, and one key takeaway was that when it comes to plasticity, the details of how it's implemented in your code really matter.

The last point on verification is that at CIG every code we have is open source, and we strongly encourage best practices in code design. So what are best practices, and what are their advantages? Use version control; here I've provided links to tutorials on how to use Git, GitHub, or Bitbucket. Code review: here I've linked to a discussion on the ASPECT page. Code review basically means that before anything gets pushed into the main repository, say the ASPECT code, we do a review of the code to make sure all of the maintainers are in agreement with what's being implemented. In this example all we're doing is discussing how to name different solvers, and it was a lengthy discussion, but discussions like that pay off in the long run. If you make the effort to hone in on the details from the get-go, it definitely pays off later, in terms of not having to figure these things out or address confusion afterwards. Documentation: you can extensively document your code, either in the source code, in a manual, or through example problems. Another advantage of open source is that you're more likely to get more users; this means more testing, more contributors, and sustained innovation. It also increases the bus factor. That's basically the number of developers you could have leave a project and still keep the project going; it's a bit more than that, but that's the general idea. It also means you have long-term support for most features: if people are contributing to the main part of the code, and not just keeping their own branch of it, those features are going to be maintained in the long run. It also helps reproducibility and replicability. If you have an open-source code and you publish your parameter files for a given study, it should be fairly straightforward for someone to download the same version of the code, take your parameter files, and reproduce your results. It also makes replicability a lot easier, because you can look in someone else's code and see what kind of equations they're using.

So, on to the challenges, starting with sensitivity analysis, which has come up in the breakout groups a bit as well. I thought it would be useful to talk about sensitivity analysis in terms of a fairly simple nonlinear tectonic problem someone might run. This is a model of continental extension. Rather than having a true free surface, we're going to simulate one with a sticky air layer, which is basically an isoviscous, low-viscosity, low-density layer. Beneath that is a uniform crust of 30 kilometers, which might have a nonlinear viscous flow law, and beneath that is the mantle with a different nonlinear flow law and perhaps a different brittle yield criterion. You have a geotherm characteristic of continental lithosphere, you're driving deformation by pulling on the sides, and you're balancing that outflow. The typical resolution you see in these models varies anywhere from 0.1 kilometers, so 100 meters, all the way to 5 kilometers. I'd say most models are in the 0.1 to 1 kilometer range, but I have seen studies recently using higher resolution.
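Purely as a schematic, and not tied to the speaker's actual input files or to any particular code's syntax, the setup just described might be summarized like this; all values and flow-law names are placeholders.

```python
# Schematic description of the continental-extension setup described above.
# Every number and name here is a hypothetical placeholder, not a recommendation.
extension_model = {
    "layers": [
        {"name": "sticky air", "thickness_km": 20.0,
         "rheology": {"type": "isoviscous", "viscosity_Pa_s": 1e19, "density_kg_m3": 1.0}},
        {"name": "crust", "thickness_km": 30.0,
         "rheology": {"type": "nonlinear creep", "flow_law": "placeholder crustal flow law",
                      "yield": {"friction_angle_deg": 30.0, "cohesion_MPa": 20.0}}},
        {"name": "mantle", "thickness_km": 70.0,
         "rheology": {"type": "nonlinear creep", "flow_law": "placeholder mantle flow law",
                      "yield": {"friction_angle_deg": 30.0, "cohesion_MPa": 20.0}}},
    ],
    "initial_temperature": "continental geotherm",
    "boundary_conditions": {"sides": "outward extension velocity",
                            "elsewhere": "balancing the outflow"},
    "resolution_km": 1.0,   # typical range quoted in the talk: 0.1 to 5 km
}
```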
So then the question, in terms of sensitivity analysis, is this: let's say you run this model and you produce a beautiful set of conjugate shear bands, you develop topography along the crust-sticky air interface, and you have a nice symmetric result. What parameters are going to affect your solution? The answer, because the problem is nonlinear, is absolutely everything, and I'm not kidding about that. If this were me, and I were just starting out modeling an extension problem, here are the parameters I would vary from the get-go: the grid resolution, particles per cell if you're using particles, time step size, the solver convergence settings (this is no joke, it does have a large influence), the model geometry, lithology, initial temperature, boundary conditions, and most importantly the rheology. There's an expression, never turn your back on the ocean; when it comes to sensitivity tests, never turn your back on the rheology. I guarantee you, if you change these parameters it's probably going to affect your conclusions; the rheology absolutely makes an enormous difference in the final outcomes, especially if you're running for tens of millions of years. With that said, the solution is that you just do lots and lots of sensitivity tests, and you do them in 2D; it's certainly not computationally feasible to do them all in 3D. And if you do the sensitivity tests, please report them in your papers; it makes everyone's job a whole lot easier if you report them in an appendix or in supplementary material.

So how about the challenges in terms of validation? The challenges arise from the variations in spatial and temporal scales of the problems we're dealing with, rheological uncertainty, solution uniqueness, and then the comparison to observations. I thought I'd start by looking at observations: this is a compilation of observations at different scales, and you see there's heterogeneity at scales ranging all the way from one millimeter, or even smaller, up to hundreds of meters, and within those scales you see variations in composition and grain size. The point I'm getting at here is that the grid resolutions our models typically run at, 100 meters at best and more typically on the kilometer scale, certainly in 3D, mean we're not capturing this heterogeneity at any level. So our assumption is that the rheological approximations we use, averaged over the 100-meter to one-kilometer scale, are reasonable, and it's unclear that's the case.

In terms of which rheological flow laws we use in these models, what you'll typically see is something like these strength profiles, which are based on laboratory deformation experiments on different materials. One thing to keep in mind is that the experiments used to derive these flow laws are done at extremely high strain rates and then scaled down to geologic strain rates. This is not a criticism of those experiments, it's just the reality of the constraints they have to work with, but it's useful to remember that these flow laws are extrapolated over many orders of magnitude. When you're running these models you also have a large choice of flow laws: wet versus dry rheologies, different compositions, diffusion versus dislocation creep, all of which are being used. The way to handle this is, again, simply to run lots of sensitivity tests.
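In the spirit of that advice, here is a minimal sketch of how such a sweep might be organized. run_extension_model is a hypothetical stand-in for whatever code and driver script you actually use, and the parameter names and values are placeholders, not the talk's recommendations.

```python
import csv
import itertools

def sensitivity_sweep(run_extension_model, output_csv="sensitivity.csv"):
    """Brute-force sweep over the kinds of parameters flagged in the talk.

    `run_extension_model` is a user-supplied callable (hypothetical) that accepts
    keyword arguments and returns a dict of summary quantities, e.g. rift width,
    shear-band dip, or time to breakup.
    """
    sweep = {
        "resolution_km":       [0.5, 1.0, 2.0],
        "particles_per_cell":  [15, 40],
        "friction_angle_deg":  [20.0, 30.0],
        "crust_flow_law":      ["crust flow law A", "crust flow law B"],   # placeholders
        "mantle_flow_law":     ["mantle flow law A", "mantle flow law B"], # placeholders
        "nonlinear_tolerance": [1e-3, 1e-5],
    }
    names = list(sweep)
    with open(output_csv, "w", newline="") as f:
        writer = None
        for values in itertools.product(*(sweep[n] for n in names)):
            params = dict(zip(names, values))
            summary = run_extension_model(**params)  # one forward model per combination
            row = {**params, **summary}
            if writer is None:
                writer = csv.DictWriter(f, fieldnames=list(row))
                writer.writeheader()
            writer.writerow(row)
```

In practice, since each run is expensive, most studies vary one parameter at a time around a reference model rather than running the full product, but either way the resulting table is exactly the kind of thing worth putting in an appendix or supplement.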
That way you see how these particular flow laws affect your solution, and certainly your initial conditions and the tectonic setting you're modeling can help guide the choice. But then again, back to the point on the previous slide: how applicable are these to the lithosphere? These are all monomineralic flow laws, and we know the lithosphere is not monomineralic; it's unclear whether these monomineralic flow laws actually approximate the strength of more realistic geological materials. And again, there are the length scales of heterogeneity in the real Earth: you see heterogeneity and variations in strength at the outcrop scale, and if you talk to field geologists, a lot of them will say that deformation at this scale can control the larger-scale deformation. We don't have a solution for this yet; we haven't been able to figure out, at least in my mind, how you can couple deformation at that outcrop scale all the way up to the large scale at which we model.

Here's the worst one, and this is the brittle rheology. This has come up a few times so far: these results are highly resolution dependent. What happens is, in an experiment undergoing either extension or compression, you have a little weak notch and you develop conjugate shear bands coming off of it. The angle of the shear bands and their thickness are directly dependent on the resolution; the angle starts to converge once you get to high enough resolution, but the thickness of the shear bands is always set by the resolution. In these kinds of problems you can also often see horrible convergence behavior in the nonlinear solvers. Another factor, as I mentioned earlier, is that we typically apply strain softening: as finite strain accumulates, we weaken the friction angle or the cohesion. The rates and magnitudes of this weakening are highly uncertain; in some cases it may be appropriate to reduce the friction angle from 30 to 15 degrees, in other cases even further, and we don't have a good constraint on the rates at which these things should change. So part of your sensitivity analysis should probably be to vary these and see how they affect your results. Another whole question is whether this is a reasonable approximation of seismic behavior. Typically, in the brittle regime of the lithosphere, we think of deformation as being accommodated on faults through the seismic cycle, so it's stick-slip behavior; is a plastic shear band a reasonable approximation of integrated seismicity? I think that's an open question. One response is that a number of groups have been putting rate-and-state friction models into their codes, and that may be one way to bridge the gap and test whether our Drucker-Prager or Mohr-Coulomb way of generating shear bands is equivalent to integrated stick-slip behavior.
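Going back to the strain-softening approximation mentioned a moment ago, here is a minimal sketch of the kind of rule many codes apply. The particular strain interval and the start and end values are placeholders, not recommendations, which is exactly the point: they are poorly constrained and belong in the sensitivity tests.

```python
import numpy as np

def softened_friction_and_cohesion(plastic_strain,
                                   friction_deg=(30.0, 15.0),    # placeholder start/end values
                                   cohesion_mpa=(20.0, 4.0),     # placeholder start/end values
                                   strain_interval=(0.5, 1.5)):  # strain over which weakening occurs
    """Linear strain softening of the Drucker-Prager parameters.

    Below the first strain bound the material keeps its initial friction angle and
    cohesion; above the second bound it keeps the fully weakened values; in between
    the parameters are linearly interpolated with accumulated plastic strain.
    """
    frac = np.clip((plastic_strain - strain_interval[0]) /
                   (strain_interval[1] - strain_interval[0]), 0.0, 1.0)
    phi = np.radians(friction_deg[0] + frac * (friction_deg[1] - friction_deg[0]))
    cohesion = 1e6 * (cohesion_mpa[0] + frac * (cohesion_mpa[1] - cohesion_mpa[0]))
    return phi, cohesion

def drucker_prager_yield_stress(pressure, phi, cohesion):
    """2D Drucker-Prager yield stress: sigma_y = C*cos(phi) + P*sin(phi)."""
    return cohesion * np.cos(phi) + pressure * np.sin(phi)

# Example: yield stress at 200 MPa pressure after an accumulated plastic strain of 1.0.
phi, C = softened_friction_and_cohesion(plastic_strain=1.0)
sigma_y = drucker_prager_yield_stress(pressure=200e6, phi=phi, cohesion=C)
```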
I mentioned that all of this brittle behavior is highly resolution dependent. One solution is to introduce a characteristic length scale; unfortunately no one in geodynamics has done this yet. The method is quite common in engineering and material science, it's been around since maybe the 1980s, and it's commonly referred to as gradient plasticity. What the results right here are showing, in a model of gravitational failure of soils with shear bands developing in response to that failure, is that regardless of the resolution of the model, they get a consistent thickness of the shear bands and a consistent total offset along them. The reason we haven't introduced this type of formulation is that it's mathematically really complex, and it's not clear how these kinds of formulations would scale over to the type of models we run. Fortunately there are a number of groups working on this, so hopefully in the next five years there will be ways to use this type of length scale in our models and get around this resolution dependence.

Another solution, to the length-scales-of-heterogeneity issue, is adaptive mesh refinement, and this has been one of the bigger advances over the last 10 years. What I'm showing here are results from a code called Rhea. They wanted to model global plate motions, so they put a density structure into the Earth based on Slab1.0; the model goes all the way from the surface down to the core-mantle boundary, and they did a single instantaneous solve for the predicted plate velocities. In reality you want high resolution near the surface and near the plate boundaries, so they used adaptive meshing to resolve those plate boundaries: the resolution goes all the way down to one kilometer in those areas, with a maximum of around 50 kilometers in the mantle. This reduces the computational size of the model by over three orders of magnitude, so the benefits here are enormous, and there are a number of codes available to the community which have adaptive meshing. But there is a disadvantage, and the disadvantage is that if you reduce your finest grid size by, let's say, a factor of 10, you also need to reduce your time step size by roughly a factor of 10. So a model which initially took a thousand time steps to reach your end point may now take 10,000; even though you're at higher resolution, you may still increase your total model runtime by up to a factor of 10. AMR has huge potential benefits, but it's not a magic bullet for everything, and it doesn't fix the brittle behavior until we have an internal length scale.
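The time-step penalty mentioned above follows directly from the advective CFL condition, dt ≲ C·h/v. A small worked example (with made-up numbers, not figures from the talk) shows why refining the finest cells by a factor of 10 multiplies the number of time steps by 10, even when AMR keeps the number of unknowns per step manageable.

```python
def cfl_timestep(cell_size_m, max_velocity_m_per_yr, courant=0.5):
    """Largest stable advective time step for a given grid size, in years."""
    return courant * cell_size_m / max_velocity_m_per_yr

# Made-up numbers for illustration: plate-like velocities of 5 cm/yr.
v = 0.05  # m/yr
for h in (10_000.0, 1_000.0):            # 10 km grid vs. 1 km grid near the plate boundary
    dt = cfl_timestep(h, v)
    steps = 20e6 / dt                    # steps needed to reach 20 Myr of model time
    print(f"h = {h/1e3:>5.1f} km  ->  dt = {dt:,.0f} yr, steps to 20 Myr = {steps:,.0f}")

# 10 km cells  -> dt = 100,000 yr, 200 steps to 20 Myr
#  1 km cells  -> dt =  10,000 yr, 2,000 steps to 20 Myr
```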
So then, what I thought would be helpful to do, in terms of summarizing the validation issue, is to critique a model. It would be rude for me to critique someone else's model, so I'm going to critique one of my own. The geologic observation I was trying to reproduce here is that in rift systems, what you typically see at the beginning of extension is distributed normal faulting, and as extension progresses you transition to something which is much more localized; eventually you get to the mid-ocean ridge, which is extremely localized. The way I did this is a thermal-mechanical model, 500 by 500 kilometers across and 100 kilometers deep, and I'm driving extension by pulling on the sides; this is orthogonal extension. I was able to get distributed faulting: what this is showing is the strain rate invariant, and the red colors are showing shear bands, so these would be the equivalent of faults. I seeded the middle of the model with an initial distribution of randomized strain, and that allowed the shear zones to localize, so you get distributed faulting; as extension progresses at a constant rate, it starts to localize. So, fantastic, I reproduced an observation. But why might this not be applicable to the real Earth? My own criticisms of these models are, number one, that this is fairly low resolution, 2.5 kilometers, which means my shear bands are probably around five kilometers thick. There are no five-kilometer-thick faults at the surface; maybe at a subduction zone you have something like a one-kilometer-wide shear zone, but these are grossly larger than that. The initial conditions also play a key role: I had to tinker endlessly to get this to work, so it only works under a very specific set of initial finite strain distributions. It's using a simple rheology, just three different flow laws, all monomineralic, and I'm using simplified boundary conditions, pulling on the sides, whereas in reality plate tectonic forces are likely driving an extension system from farther afield. So the question in all of this is: how do I know this is relevant to the lithospheric process? It looks relevant, but how would we even quantify that and say, yes, it is relevant? The answer, of course, is that I would need to compare this to some sort of natural observation in a quantitative manner to make sure it's actually relevant.

Some examples of ways people have done the validation part, the comparison to natural observations, include fault populations: this is a comparison of normal fault length versus maximum displacement, and these are of course time-dependent properties; the gray bars are natural results and these are numerical results from a series of models. Another thing you can compare to is topography. And in reality it doesn't matter which observation it is, whether it's fault populations or topography or chemistry or sedimentation and erosion patterns; going forward, because of all this uncertainty in rheology, we really need as a community to start comparing in a quantitative manner to observational data sets. There's no other way around it. The best effort on this front, not only comparing to the data sets but using them to constrain the rheology, comes from Boris Kaus's group; this one was worked on by Tobias Baumann, Boris Kaus, and Anton Popov. What they do is actually a probabilistic inversion: they run a series of models with a wide range of variations in rheology and other parameters, compare them to observations, which could be topography or surface velocities, and then do an inversion to constrain what those rheological parameters are. In my opinion this is the best effort at validation in recent years, and I imagine it's probably the way forward for most of us.
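As a cartoon of what "comparing in a quantitative manner" can look like, here is a minimal sketch of the kind of misfit and likelihood functions such a probabilistic inversion might be built on. The observation vector, error estimates, and loop over forward models are all hypothetical placeholders, not the actual Baumann, Kaus, and Popov implementation.

```python
import numpy as np

def misfit(model_output, observed, sigma):
    """Chi-squared style misfit between model predictions and observations
    (e.g. topography sampled along a profile, or surface velocities)."""
    r = (np.asarray(model_output) - np.asarray(observed)) / np.asarray(sigma)
    return float(np.sum(r**2))

def likelihood(model_output, observed, sigma):
    """Gaussian likelihood used to weight each forward model in a probabilistic
    (e.g. Monte Carlo) inversion for rheological parameters."""
    return float(np.exp(-0.5 * misfit(model_output, observed, sigma)))

# Hypothetical usage: score forward models run with different friction angles and
# flow-law prefactors against an observed topography profile (sigma in meters).
# observed = np.loadtxt("observed_topography.txt")
# for params, predicted in forward_model_results:
#     print(params, likelihood(predicted, observed, sigma=100.0))
```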
So I'm almost at the end here, and the last point I wanted to talk about is the issue of computational resources. 3D models are quite expensive, and they're particularly expensive if you have strong variations in viscosity, which is the normal case if you're doing nonlinear models. So the question is, how do you overcome this? The answer, and it's already here, is parallelism and efficient solvers. What I'm showing here is an example of a weak scaling result for the code ASPECT on Stampede2, and what's plotted is the wall clock time of the solves as a function of the number of cores; the ideal case is this optimal line, and as the number of cores increases you want this type of behavior in the solve times. Each one of these lines represents a different resolution, and the encouraging thing here is that we're getting all the way up to hundreds of millions, in some cases around a billion, degrees of freedom for a number of different codes, and we're getting reasonable scaling. Models on the order of hundreds of millions of degrees of freedom get you pretty far in 3D; that's a resolution of maybe one kilometer, with higher-order elements, in something like a 1000 by 1000 by 100 kilometer domain. So that gets you a fair bit of the way. If we want to go up to models with tens of billions or a hundred billion degrees of freedom, it's probably going to need new solver technology.

In terms of resources, if you're not aware of it already, there is an NSF-funded program called XSEDE; it's basically a consortium of high-performance computing centers, and it's fairly easy to get computing time on these. Generally the process is that you get a startup allocation, which gives you a certain amount of CPU hours to do initial testing and scaling results, and then you write a proposal, which gets you a much larger allocation. The bigger long-term issue, I think, is actually not the computing time, it's the storage. These large 3D models are generating data sets, especially if you output frequently, on the order of terabytes to tens of terabytes, and over the long term, petabytes. The question is, where the heck are we going to store all of this data, and how are we going to make it accessible? I don't think there's any real answer to that yet, and my guess is NSF will have to come up with a program similar to XSEDE, some sort of national facility, to help us, especially if we want to make these models available to the community.

So then, briefly, the conclusions and outlook. If there's one point to take away from here, it's that there are huge challenges in validating numerical models, and this arises mostly from the uncertainty in the rheological behavior and the scales of time and space involved. On the positive side, I think if we start to couple codes, a tectonics code and a surface processes code, or a tectonics code and a melting model, this is actually going to allow us to generate new, more accurate data sets, topography, geochemistry, which we can compare to observational data. The downside is that you're introducing more uncertainty into the models, and the question is how we handle that: how do we assess uncertainty in the two separate codes, and then in the coupled problem. Just to make it clear, I did not bribe him. Sure, questions.

Greg Tucker, University of Colorado. Thanks, John, for a stimulating talk. You mentioned best practices; how do you educate your community, or help your community appreciate the value of best practices? That's almost a question for Louise, but I would say, whenever a new code is donated, and if you go to the CIG website, which I linked here, we have a list of best practices; there are minimum best practices and then there are the full best practices, and I believe, Louise, correct me if I'm wrong, the policy is that anything which is donated to CIG has to at least meet the minimum best practices. The minimum best practices really are sort of minimal, right, but any code that is donated to CIG and that we will host has to meet those minimum best practices, and we'll help people meet them: if they don't have documentation, if they haven't adequately done version control, that kind of thing, licensing, we'll help them with that. But for our codes we try to move everybody up through the three tiers, and the top tier is really where we're aiming to go.
It's a little bit aspirational, in the sense that not every code needs it. I think one of the ways we educate people is that we do have events, talks like this one and so on, but also a key point is that the best practices were developed by the community, essentially as part of our broader question of what we are doing and how we elevate the whole field. So the community was involved in developing them, and that really helps make them useful to the community.

So, moving from petascale to exascale: you indicated that there are some challenges with the existing codes, and I think we also have a feeling that more cores but less memory per core is an issue. Where do you see the specific challenges, which codes do you think have the best chance of running on the next generation of machines, and, this is perhaps for everybody, what are the things that we cannot yet do that we would want to do? Well, ideally we want to run high-resolution 3D models, say 100-meter resolution, because then you're capturing the heterogeneity scale. Sure, but do we know the heterogeneity length scale at 10 kilometers depth? At 10 kilometers depth, or 20 kilometers? No, that's a fair point, but you can at least look at what is exposed and get some sense.

So my question: the last two talks have been talking about tools and toolboxes, but I think most of us in this room are driven by science and have science-driven questions; we're clearly not in it for the money. I'm involved in rheology, and I took a direction away from modeling because I felt I wanted to understand the lower crust well enough, how it was coupled to the mantle, to be able to solve some problems. So how do I, as a user, say I have these science-driven questions; I don't know that ASPECT would allow me to answer them. How do we get CIG to take misfits from first-order observations and to work with new advances in rheology, new ways of extracting rheology? We're all trying to get at it not just from the rock mechanics side. Sorry, just trying to provoke, to stir the pot. Well, I think there is certainly a motivation to have more geologically realistic models where we're coupling surface processes, two-phase flow, melting. But one of the points I was hoping people would take away from this talk is that before we do that, maybe we should take a step back and think independently about what the challenges are, because there are long-standing issues, plasticity for example, that maybe we haven't resolved. So is it worth pushing forward with more complexity if we're not necessarily doing any one method particularly well? I'm not saying we shouldn't, but it's just something to consider.

I wanted to try to give an answer to the petascale-to-exascale question. There's no easy answer, and you certainly want to involve the experts in numerical methods, computer science, and so on, and I think CIG is on the correct path by building on existing tools and frameworks that are made and developed by those experts. So, for example, ASPECT does not implement the finite elements itself but builds on other libraries that have implemented them. Without concretely telling you exactly how I believe finite elements will scale to exascale, and I can go into that if you want, the point is just this:
you rely on other people who build the infrastructure underneath the codes and who hopefully make that step. The corollary to that, and this is what my question was getting at: of course we don't want to reinvent the wheel, clearly we don't want to do all of that ourselves, but are we making the right choices right now that will position us to make use of these new machines? And I guess one underlying thing is that, as a community, we have great community organizations like CIG, but is that enough to allow us to really ride this wave to the next level of hardware, and do we want to do that, and are we setting ourselves up in the right way? Are, say, explicit codes the way to go because they have, I don't know, a smaller memory footprint or something like that? Are there any thoughts in the community right now about that? I have no clue about this, but sure, we want to rely on packages; yet if you build something around a package that made the wrong bet, then you're screwed.

I do have a remark. You give, I think, maybe a bit of a negative impression of plasticity, and these benchmarks you are showing are the absolute worst-case scenario you can do. They also don't exist on Earth: you don't have a brittle crust with nothing underneath it that is only brittle. On the Earth you have a multi-layer crust, probably a viscous lower crust, and it turns out that if you start to include those things, the models are actually much better behaved, except that we haven't done benchmarks for that yet. And especially if you include viscous rheologies, those are not only perfectly well behaved, they are mesh-resolution independent and method independent. So that is one remark. The other remark is that you said it may take five years to have this resolution issue resolved; I think it might come much faster, because what happened is that we invited one of the main people in computational mechanics to our European lithosphere and mantle dynamics workshop last September, and that turned out to be very fruitful, because he started talking with a whole bunch of different people in different directions, and two weeks ago at EGU the first results came out and were shown by Thibault Duretz, who seems to have mesh-resolution-independent results that converge at every time step. So maybe, exactly how to do it is something we will still figure out, but it will happen soon; it won't take five years. So I think, by addressing these problems, there is a way forward. But one question that I would have for you is: do you think that we as a community know which numerical method or approach is the best one to move forward with, or should we leave this completely open? Do you mean between, say, finite element and finite difference, or even between the finite element codes? Well, if you were to ask me right now which code I would recommend for a large 3D simulation, I would probably list, in alphabetical order, ASPECT, LaMEM, pTatin3D, or Underworld. I think there are a number of codes which have similar capabilities and similar functionality, each with its own pluses and minuses.
But ideally, and no one wants to hear this, to get a handle on how much uncertainty your method is producing, maybe every study should use two different codes with two different methods; then, of course, we would need infrastructure to make that easy. So I think it's super important to ask the questions you're asking and to question the different numerical choices, but I think it might be just as important to question the first-order assumptions we're making: describing the viscous flow, as you mentioned, but also describing brittle behavior with an elasto-plastic or just a plastic formulation. We're talking about refining the numerics within a framework that we're all using, but maybe we should question the framework itself to some extent.