quick recap on each of the breakout sessions, to see what was similar and what was different. Each group is going to have four minutes. We're going to start with the online group. So Katie or Lorraine, can you share your screen through Zoom? You're on the projector, I'm sure. Lorraine, can you do this? Because I'm not sure how to do it. Oh wait, there you are. Great, thank you so much for doing that.

All right, so for question number one, outstanding questions and grand challenges: a lot of the discussion kept coming back to how we link between surface processes and long-term tectonics. It requires extrapolating, or making cross-scale jumps, both spatially and temporally, and there were a lot of questions about the choices of characterization at each time scale, and then what scale we need for the processes. A more specific question, brought up I think by Ramón (sorry if I said your name incorrectly), was that in models we can show how spatial patterns in erodibility can drive patterns of deformation, but can we find this in natural settings? We see it in the models. And then, correspondingly, what's driving the differences in erodibility: is it climate gradients, rock strength gradients, and how do we evaluate the relative importance of these parameters? That led to another discussion about what's important in each study when we're talking about this: what's important in climate, versus what's important in tectonics, versus what's important in erosion.
Is this kind of dependency on what's important helping connect us across the scales, or is it actually limiting some of those connections? And then a big, overarching, more philosophical question: are the two different communities asking very different science questions, leading to incompatible data sets? That feeds into the data-needs question. Again, we broke that out into the more philosophical questions and the more specific needs. The philosophical need is that we need the data at the right time scales, but overall there's a big need for sensitivity analysis, both for numerical models and for data sets, and I think everybody was incredibly excited and encouraged, and had some thought-provoking ideas prompted, by Katie's talk. We were thinking that this kind of sensitivity testing can help drive data-collection choices, whether that's boots-on-the-ground fieldwork or remote sensing. There's also a struggle in that different types of data are available at the two different scales, surface versus deep processes: there's abundant data for surface processes but much less for deeper tectonic processes. For surface processes this allows both data-oriented and physical-model-oriented approaches, but the data-oriented approach is more limited for tectonics and deeper processes. More specific data needs: more information about fault kinematics with age constraints, and a more in-depth catalog of material resistance beyond just lithology, getting into grain mineralogy, content, and particle arrangement. On the model side of things, again there was a large philosophical discussion as well as specific needs. The philosophical question: are more complicated models more useful? I think this is an important question that requires some very careful thinking.
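The sensitivity testing mentioned above could look something like the following minimal sketch. It is an illustration added here, not anything a group presented: a one-at-a-time parameter perturbation of a toy 1-D detachment-limited stream-power profile, dz/dt = U − K·A^m·S^n, evaluated at steady state where S = (U / (K·A^m))^(1/n). All parameter values (uplift rate, erodibility, Hack's-law coefficients, profile length) are hypothetical placeholders.

```python
def steady_state_relief(U, K, m=0.5, n=1.0, hack_c=6.7, hack_h=1.7,
                        length_m=10_000.0, dx=100.0):
    """Total relief of a steady-state stream-power river profile,
    integrating slope along the profile. Drainage area follows
    Hack's law, A = c * d**h, with d the distance from the divide."""
    relief = 0.0
    x = dx
    while x < length_m:
        dist_from_divide = length_m - x
        area = hack_c * dist_from_divide ** hack_h
        slope = (U / (K * max(area, 1.0) ** m)) ** (1.0 / n)
        relief += slope * dx
        x += dx
    return relief

base = dict(U=1e-3, K=1e-5)        # uplift (m/yr), erodibility (placeholder values)
r0 = steady_state_relief(**base)
for name in ("U", "K"):
    bumped = dict(base, **{name: base[name] * 1.1})   # +10% perturbation
    change = (steady_state_relief(**bumped) - r0) / r0
    print(f"+10% {name}: relief changes by {change:+.1%}")
# -> +10% U: relief changes by +10.0%
# -> +10% K: relief changes by -9.1%
```

With n = 1 the steady-state relief scales as U/K, so the printed sensitivities follow analytically; the point of such a toy is that even a crude forward model ranks which parameters a new data set should constrain first.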
Theoretical models versus data-testing models: I think, particularly for surface processes, we need to keep in mind how, and which, models can be tested against reality, and which ones are truly hypothetical. Which models need lower-resolution data over large scales, versus very high-resolution data over smaller scales? Another interesting philosophical question is whether there is a benefit to centralized modeling efforts, like for the climate models, or whether distributed modeling is more efficient for hypothesis testing. Specific needs: adaptive meshing, again, and then testing whether we are actually capturing the physical processes as currently implemented.

That same exact thing is probably going to happen again unless someone can show me how to use my iPhone in the next four minutes. Josh and Pedro, can you guys come forward, and then Scott and Jessica. Do any of the breakout groups want to project? I think you can just stand up here, real close. All right, here we go.

Yeah, I think there's going to be a lot of overlap between what is being said between the groups, but I'll just highlight some of the main points. We discussed a lot, but there's a need to define a tangible overlap between the two disciplines, so that we can begin to focus on the links, the bridge, that we need. How much does complexity in the surface models matter, or in the deeper models, the geodynamic models, and vice versa? That's another question that popped up. And one thing that I didn't hear so far was that we need to better explore the sedimentary record in the models, because that's basically what we use for gathering the data, so we need to incorporate that more in our surface process models in order to begin to understand what our data mean, what our observations mean.
Oh, and this was just brought up too: how can we actually test the geodynamic models with observable data that we can go out in the field and collect? And in all of these there's a resolution problem between the two fields, between the models in the two fields: how do we reconcile resolution in the surface process model when we're trying to integrate the geodynamics? Scales, temporal and spatial, are also an issue between the two, and there's a need for an integrated model, so that we can start with the simplest way to link the two and then begin advancing in that realm. Do you have anything to add?

I guess since we have a moment I'll just add that I think the communities are still very much compelled by the potential for coupling. I think there would be a really interesting discussion if we said: give me the ingredients, geological, climate, tectonic setting, what have you, where couplings do and don't tend to occur. I think you would get a really interesting set of outputs from a discussion like that, just in terms of the number of people who have gone out looking for couplings and found them in some places and not in others. So a systematic approach to where these things occur, and why, would be a useful endeavor.
Another theme that came out is that we don't know how much complexity each other's models require, so we had a little soul-searching exercise where we asked: what's the biggest thing you don't understand about the other side, if we all put on a surface or a geodynamic hat? It's interesting that a geodynamicist might ask: is it enough to just trim off a landscape at a critical taper, in terms of representing erosion? And geomorphologists often simulate uplift just by having a block go up. So what's good enough? That gets back to this bigger question of where, and at what scales, couplings are achieved, and more of this type of communication was absolutely one of the major themes that came out. Any other group members who would want to chime in? Perfect timing, still on. Okay, Scott and Jessica.

Yeah, so for the grand challenges we tried to distill them into the three we have here. One has come up a lot already, and that's the disparate length and time scales: what levels of complexity are important at different scales to capture the process-level feedbacks, and how do these relate to one another? The second was uncertainties: how do we quantify uncertainties in the models, especially as we start to couple different models and then compare them to data? And finally, one of the things that we thought is being worked on in both surface models and tectonic models is trying to quantify the controls on the evolution of rock strength and rheology. This is a challenge for both types of models and can be a bridge between the two, so it might be a fruitful area of focus for the future. Then, to meet those challenges, we have this general theme that we need more efficient means to compare model outputs with geologic observables, and so we kind of saw two
parallel databases, or servers. We really just need to organize the data sets that are already in existence on both sides of the aisle, so it's partly a compilation and organization task: serve these data sets on some kind of global platform so that we could see what observables there already are, geochemical data, geologic data, and then have an analogous database for model output, so that you could more efficiently see what data exist and also make that comparison between model outputs and observations. We thought that first step would let us see where we need new data. What seemed to emerge from the modeling part was that our group felt we largely have the pieces implemented that could be put together into a fully coupled model, so let's do it. In that discussion it was said: well, we have surface process models, we have geodynamic models, and we know how to make models talk, so let's try it with what we have today. That started a general discussion around this grand challenge of what degree of complexity is needed. Can we do some parts cheaply, as Josh said: can we get away with just having block uplift at a uniform uplift rate, or can you just diffuse away your topography? So there was a discussion saying: let's make a fully coupled model, but think carefully about the complexity required there, and the resolution needed. Another theme was that to actually build this fully coupled model there will be technical challenges that will likely require more efficient parallel algorithms, so that we can solve bigger problems as well as run more than one realization. For many of these types of models the only real way forward is some kind of forward model, so how can we actually get efficient enough to have many realizations and do some kind of Monte Carlo
type modeling to quantify the sensitivity and uncertainty with those models. Anything else from group members? Perfect.

Great, thank you, you guys come up. And then we'll have Stacia and Paula on deck, and last but not least will be Patricia and Allison; the later you go, the more repeating you get to do.

We also focused on spatial and temporal scales, how to use both together, or how to integrate different spatial and temporal scales, and on complexity. We thought about how complexity and different spatial and temporal scales are used in models, and also in the validation of those models with field evidence, with every type of field evidence being biased and limited by its own set of spatial and temporal constraints. In terms of complexity, we talked about how we need modular pieces of models that can be put together, because there's too much for any one person to be a full expert in all of this. But we also don't want black boxes: we don't want to have to trust a community without being able to see the raw data and the assumptions that were used to make those modular pieces. For that we talked a lot about the meta-process: how do you have a database that can capture what's out there at a very broad scale, but also a system parallel to the way we write papers, where we have the raw data available, we also capture the methods that were used and the results, and then we separate that out from the interpretations? That applies both to the data that we use to validate models and to the modeling results themselves: some sort of meta-information around these models to capture this and make them reusable, and reinterpretable, in the future.

Yeah, so other things that I have written down here that weren't repeats: like a couple of groups, it sounds like, we spent a lot of time talking about the needs for large data sets, but we also spent a while talking
about the necessity of easy ways of actually inputting standardized data, to basically lower the bar for entry into those data sets, so actually working on good practices for database management and things like that. And it also came up a couple of times, in various parts of the conversation, that we need a wide range of actual real-world examples of some of the processes we're studying, trying to get away from fixation on very particular places that may be end members, which are fun to study but might not be telling us as much about underlying processes. Yeah, that balance between understanding one place really, really well, at a high level of complexity, and then being able to simplify that in a smart way so that you can start generalizing to other places. Okay, Stacia and Paula.

So you guys will see that our group said a lot of things that have been said already, which is good, because it means we had already organized this to sum up everything. For the first question, the first thing that we said was finding common goals: if you don't have common goals it's more difficult to do the linkage between surface processes and long-term tectonics. The other thing that everyone has already talked about is the scales: when you have different temporal and spatial scales you cannot compare them, and the type of data that you will need is different, so what sort of resolution would you need, and how can you combine all of these different things? If I'm missing something, just jump in. Then another thing: do we really understand each of the individual processes in isolation? If we are missing some of these individual processes, then how can we link them? It's more difficult. Let me scroll up. And are the processes that we are observing in nature transferable to other scales, and to the models themselves? Can we directly
transfer the things that we are observing? Should we need to adjust something, should we calibrate what we are seeing? And then there's the type of scientific questions that we want to address, because that conditions the models that we want to implement and controls the resolution and the scale that we want to use.

For the second question, what are the data needs: we believe that in order to define the data needs everyone should communicate a little more, so that every different type of expertise is included and everyone shares their opinions, in order for everyone to understand what is the best type of data that we need to acquire. One thing, for instance: the long-term data normally has lower resolution and the short-term data has higher resolution, so how can we combine them, and is it possible? Also: how to conduct more controlled natural experiments at certain sites and evaluate the data; what density of data we should use; and what type of data is needed to address each of the questions that we want to model. And then, finally, we think that the data should be standardized and shared across disciplines, the data need to be shared in a format that can be used by everyone, and we should get industry on board in order for us to have more data as well.

For the third one, what are the modeling needs: we believe that we should have a facility to support and promote community growth and the community vision, so that we can all have access to codes, and not only have access to the codes and models but also learn how to run them. So it would be really useful to have guides and tutorials. One thing is that writing those guides and tutorials is a lot of effort, so they should be recognized and should have a DOI so that they can be citable. Another thing is that we believe in a combination of simple and complex models, both explained and benchmarked with
geological data, and in bringing the models out to bring the whole community together. That's it for my group, thank you.

All right, well, at the risk of heckling from Jean Braun, I think glaciers have been missing from our discussion so far; that's a grand challenge. But actually, in all seriousness, more broadly: Quaternary climate variability. We know that drives important time variability in climate and ecosystems and base level, and I'd say it's a challenge that these climate factors may overprint tectonic signals any time we're thinking about going beyond the Holocene. I think that's a real, true challenge. So we talked about using data to constrain or ground-truth models being a challenge in itself, and facing this probably requires better communication between modelers and data people. We talked about trying to figure out what data types and sampling strategies provide the best constraints and allow for discrimination between alternative hypotheses, on both the tectonic and the surface process sides. When and where are there true feedbacks between tectonics and surface processes? Let's see: spatial and temporal scales vary a lot, and how can we span them, especially if we have different processes acting at different characteristic scales? Related to that: to what extent is an event-based, stochastic approach necessary, and when can we assume more continuous processes? So storm events; we also heard about earthquake cycles; and then maybe glacial cycles can just be stochastic events, I don't know.

Okay, on data: we need better constraints on timing, in particular of surface processes, tectonics, and climate. It would be ideal to have multiple independent records of landscape change, maybe sedimentary archives, maybe even nested within the landscape. Ideally, high-resolution 3D mapping of lithology and fracture density. And, let's see, there was a point that we need to allow for consideration of heterogeneous stress states in the upper crust,
especially from the geodynamic side. Paleotopography, paleoclimate, paleotectonics: all of these would be things we'd like to know. And then thinking carefully about how we're going to do data assimilation, or use big data, or use machine learning to help us handle large data sets while keeping intact the information they contain about uncertainty. Then, for modeling, we talked about the difficulties in coupling because of scales, and whether we need to consider two-way coupling or whether we can do one-way coupling; about the importance of efficiency and algorithms, so what can be parallelized and how, because efficient algorithms will allow us to use ensemble approaches and inversion techniques that give us a better handle on uncertainty. We need flexibility to allow maybe the same model, or the same components of a model, to be used for different time and space scales; we need to allow for variability in model parameters in space and time; and what am I forgetting? Thanks, everyone.

So if anyone has additional comments, please get those to your group leaders, and we are moving on.
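The one-way coupling mentioned in several of the reports could be sketched, in its very simplest form, like this. This is an illustrative toy added by the editor, not any group's actual code: a "tectonic" component hands an uplift-rate field to a "surface process" component each step (here just linear hillslope diffusion, dz/dt = U + kappa·d²z/dx²), with no feedback in the other direction. All rates, grid numbers, and the choice of diffusion as the erosion law are placeholder assumptions.

```python
def uplift_field(n_pts, rate=1e-3):
    """Tectonic side: uniform block uplift (m/yr), pinned ends."""
    u = [rate] * n_pts
    u[0] = u[-1] = 0.0          # fixed base level at the boundaries
    return u

def diffuse(z, kappa, dx, dt):
    """Surface side: one explicit linear-diffusion step (fixed ends)."""
    c = kappa * dt / dx**2      # must stay well below 0.5 for stability
    z_new = z[:]
    for i in range(1, len(z) - 1):
        z_new[i] += c * (z[i + 1] - 2.0 * z[i] + z[i - 1])
    return z_new

n_pts, dx, dt, kappa = 101, 100.0, 50.0, 0.01   # 10 km grid; m, yr, m^2/yr
z = [0.0] * n_pts
u = uplift_field(n_pts)
for step in range(20_000):                      # 1 Myr of model time
    z = [zi + ui * dt for zi, ui in zip(z, u)]  # tectonics -> surface only
    z = diffuse(z, kappa, dx, dt)               # no feedback on tectonics
print(f"max elevation after 1 Myr: {max(z):.0f} m")
```

Two-way coupling, the harder case the groups discussed, would feed the surface component's erosion and unloading back into the tectonic side (for example as an isostatic or stress response) instead of treating the uplift field as prescribed.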