Good morning, everybody, and good afternoon. I hope you can hear me. This is the second webinar in the EuroCSDMS webinar series, organized by CSDMS as well as by Sam, who is co-hosting. CSDMS supports a numerical modeling community, and the webinars are one of the ways we do that, but we also have annual meetings, we provide cloud computing for those who are interested, as well as course materials, et cetera. So if you're interested, go to the website; we'll paste the link in the chat later so you can find more information, and if you have any questions, please feel free to reach out. I'll put my email in the chat as well. Today's speaker is Gordon, and I'm going to hand it off to Sam to give a little bit broader an introduction. Go ahead, Sam.

Thank you very much. Yes, as Albert just said, this is a EuroCSDMS webinar, and I wanted to give a little broader context to what that means. I can see a nice selection of people in the audience who are definitely already aware of CSDMS, but also quite a few people from over this side of the Atlantic who possibly aren't so aware of CSDMS yet. And actually that was part of the goal of these EuroCSDMS workshops: to bring communities together. We've got a project funded by the Natural Environment Research Council over in the UK, and the project is all about creating a community of practice around software that supports environmental modeling. In part that means looking at existing communities that are out there, like the CSDMS community, and potentially bringing communities together or extending their reach, hence this EuroCSDMS aspect.
So this time round, within the full webinar series, we're hosting two of these as EuroCSDMS webinars, which basically means in a slightly more friendly time zone for us folks over in Europe. So apologies to you guys over in the States; I know it's early for you, and thank you for joining so early. And with speakers from Europe: we've got Gordon from UKCEH here today. Just as an intro to myself, I'm Sam Harrison, an environmental exposure modeler at the UK Centre for Ecology and Hydrology. My main role is modeling pollutants, like microplastics and pharmaceuticals and metals, and how they move around the environment, and I've got a broader interest in software-engineering-type things and how software can help support environmental modeling. This isn't the only community-building activity we've got going on in the project, so I just wanted to flag that next year, probably in the autumn or fall, we're going to be hosting a EuroCSDMS workshop of some kind, probably based in the Northwest of the UK. So watch this space for more details about that. I also wanted to introduce the CEEDS community, and actually this is quite a nice segue into introducing Gordon. I see there are quite a few people on the call from CEEDS. CEEDS is the Centre of Excellence for Environmental Data Science, a joint initiative between Lancaster University and UKCEH, which is my institute, all around bringing together expertise in environmental data science. It's great to see people from both CSDMS and CEEDS here, because I think these two communities could really have some great synergy, to use a buzzword, by bringing together their combined knowledge. Gordon is one of the co-directors of CEEDS.
He's also Head of Environmental Digital Strategy, I believe is the correct title, at UKCEH, and Distinguished Professor of Distributed Systems at Lancaster University. I'm delighted to have him here today; he's going to be talking to us all about digital twins of the natural environment. So I will leave it there and hand you over to Gordon.

Thank you very much. I shall start by sharing my screen, which hopefully you can see now. All looking good. Excellent. Well, thank you, Albert and Sam, for inviting me. I do lots of seminars, and I do lots of seminars around digital twins, but this one today particularly excites me for reasons that will become apparent. Maybe I'll give you a hint: the reason I'm excited today is that I'm now highly motivated by this journey towards digital twins, and I can't do it alone. And I feel that this community has got a lot of the key building blocks. So what I would love to get out of today is momentum and drive to do something really bold and ambitious together around digital twins of the natural environment. I'll unpick as we go through why I think you are the right community to be talking to, and how you've got a lot of the key building blocks that I think are actually missing from the digital twin debate. I also want to unfold my own journey a little bit here on digital twins, because I started off thinking: oh, is this just another buzzword? Is there anything in this? I've been through a kind of journey of discovery, thinking about what digital twins mean in the UKCEH context, and I've come to realise that there is something of real substance here. And it is a real challenge for the modelling community. I think what digital twins are doing is offering a catalyst to really think again about how we model complex environmental systems. And again, that's a subtext that I'll unpick as we go through.
Sam has kindly already introduced me, but I'll put up this slide as well, if only to include a cute picture of me with one of my true passions, our flock of rare-breed sheep. When I'm not lambing or shearing, I'm thinking about the role of technology in supporting environmental science. Through my career I've made a transition to where I am now, starting off as a computer scientist: I'm rooted in the fields of distributed systems, cloud computing and software architecture, which I think are really useful building blocks, and I've gradually transitioned to more applied work. As Sam said, I'm now Head of Environmental Digital Strategy at UKCEH, which includes driving forward an agenda around digital twins and also around data science, and I'm also a co-director of the marvellous collaboration we have, which is CEEDS. So that's me; that's the perspective I'm coming from today.

So, to start off, a little bit of background on what a digital twin is, and then moving on to what a digital twin of the natural environment is, which I think is actually where things get really exciting. Digital twins have been around for some time. They're relatively new in the environmental domain, but there's been a lot of work in the engineering domain, building digital twins of engines, of aircraft, and so on, and to a certain extent that community has established the principles and starting points for thinking about digital twins. I actually think digital twins of the natural environment introduce added dimensions which, for me, are even more exciting. But looking back at the roots: what is a digital twin? I quite like this diagram. It came from a UK study about digital twins of cyber-physical infrastructure, and it shows the relationship between physical and digital systems.
You may build a model, and people in this community build lots of models, but often there's no real connection to the physical system or to what we know about the physical system; or at best the two are relatively loosely coupled, with certainly no real-time information flowing between them. Then you can think of digital shadows (that arrow should actually be the other way around, by the way), where some information from the physical system is flowing to the digital system, maybe improving the model, providing constraints or boundary conditions, or changing the parameterization of the model. But then we get to digital twins, and one of the defining characteristics for me is that two-way relationship: the flow of information from the physical to the digital and then back to the physical again. They are tightly coupled. We have a digital system, a virtual system, a digital twin, that's in constant two-way communication with what we know about the physical system, through the underlying cyber-physical infrastructure and the monitoring that's inevitably, and increasingly, going on around physical systems. It's that constant update and two-way relationship that I think is absolutely paramount. So that's my definition, if you like, of digital twins.

But then we move on to digital twins of the natural environment, and in many ways I see this as one of the killer applications, one of the driving forces for why I think digital twins are so exciting and important going forward. Environmental systems are incredibly complex; by definition, they are complex systems. Modeling the natural environment is an incredibly demanding area, and I feel there are things to be done here where we can really push that modeling paradigm forward by embracing digital twins, and I'll unpack what I mean by that.
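That two-way relationship can be made concrete with a minimal sketch. Everything here is hypothetical and invented purely for illustration, not any real UKCEH system: a toy "physical" water tank leaks, the twin assimilates each observation into its state (physical to digital), and the twin's forecast drives a control decision back to the physical side (digital to physical).

```python
# Minimal sketch of a two-way digital twin loop (hypothetical toy system).

class TankTwin:
    """Digital twin of a leaking water tank."""

    def __init__(self, level=100.0, leak_rate=1.0):
        self.level = level          # twin's estimate of the water level
        self.leak_rate = leak_rate  # twin's assumed leak per time step

    def assimilate(self, observed_level, gain=0.5):
        # Physical -> digital: nudge the twin's state toward the observation.
        self.level += gain * (observed_level - self.level)

    def forecast(self, steps):
        # Project the current state forward with the (simple) process model.
        return self.level - self.leak_rate * steps

    def control(self, threshold=20.0):
        # Digital -> physical: request an intervention if a shortfall is forecast.
        return "refill" if self.forecast(steps=10) < threshold else "ok"

twin = TankTwin()
actions = []
true_level = 100.0
for _ in range(50):
    true_level -= 1.5               # the real leak is worse than the twin assumes
    twin.assimilate(true_level)     # observations flow in each step
    actions.append(twin.control())  # control decisions flow back out
```

Because the twin is constantly updated, it tracks the real (faster) leak and eventually switches from "ok" to "refill": the defining two-way loop, in miniature.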
So there's massive potential in this area, huge potential, potential maybe for a step change. And I think that's also fuelled by the unprecedented amount of environmental data that is now around us and available. Many of you on this call are environmental scientists: look at the richness of data you have available now, whether it's from remote sensing (satellites, drones, aircraft), from sensors in the ground as we start to deploy sensors at a level probably never dreamt of in the past, from citizen science, or from mining the web for additional information, for example about flood events. Just an incredible amount of environmental data that can really drive our understanding. But what's happening to all that data? How much of it is actually looked at, and how much of it is actually being used to improve our understanding of the natural environment? Or are we all just floundering and drowning in a sea of data? Can we do better? Can we drive forward our capabilities, both in terms of nowcasting and forecasting? Because I don't believe digital twins are limited to answering questions now, in real time. That's part of the purpose, but I think digital twins should operate over a range of time scales.

So moving on. One of the areas that excites me around environmental data, and also challenges me intellectually, is the complexity of the data that we have. If you look at some fields of physics or engineering, the issue is often, in terms of this diagram you see before you, the volume of data: it's traditionally big data. There's lots of data, and it's coming at you fast. So in many fields of data science the debate centres around volume and velocity, and we're thinking about engineering techniques to process very large data streams that come at you very quickly and to make sense of that data. That's in some ways the easy bit.
From the field I was in, distributed systems, knowing how to parallelise and distribute computation, we can do that. Scalability in terms of the size of data sets has never been an issue for me. Where it gets much more exciting is the other Vs that are sometimes second-class citizens: the sheer variety that I hinted at earlier in environmental data, and also the veracity, the different levels of provenance or accuracy that data brings. That's where understanding the data becomes really challenging, and that's what motivated us to set up CEEDS, the Centre of Excellence for Environmental Data Science, to really understand this. And then you can layer more onto that; it's not just variety and veracity. No doubt many of you know, probably better than I do, that environmental data is really challenging. It's messy. There are missing values; sensors go down; you want to look at longitudinal data, but there are important gaps just when you need the data. The extremes are really important: extreme events can have very significant and extreme impacts, so you want to treat the extremes carefully, with a much more focused lens, rather than looking at averages over time. We need particular techniques that study extremes and extreme values, and there's a whole branch of data science around extreme value theory. The data can exist at different scales, and no doubt many of you are grappling with how to combine, say, remote sensing data with instruments in the ground, and how to build a broader understanding by integrating across scales. Our data is spatiotemporal in nature; not all data has that complexity. So how do we extrapolate across space and time? And non-stationarity is almost the norm: we have significant change over time due to a changing climate. All of this implies that one of the most important areas of study now is environmental data science.
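The point about extremes needing their own techniques can be illustrated with a tiny extreme value analysis. This is a toy sketch only: a method-of-moments Gumbel fit to annual maxima of synthetic daily data, standing in for the proper GEV fitting an extreme value analyst would use. The "river flow" data and all parameter values are invented.

```python
import math
import random

def gumbel_fit(block_maxima):
    """Method-of-moments fit of a Gumbel distribution to block maxima
    (a simple stand-in for full GEV estimation)."""
    n = len(block_maxima)
    mean = sum(block_maxima) / n
    var = sum((x - mean) ** 2 for x in block_maxima) / n
    beta = math.sqrt(6 * var) / math.pi  # scale parameter
    mu = mean - 0.5772 * beta            # location (Euler-Mascheroni constant)
    return mu, beta

def return_level(mu, beta, period):
    """Level expected to be exceeded once per `period` blocks on average."""
    return mu - beta * math.log(-math.log(1 - 1 / period))

random.seed(42)
# Synthetic daily 'river flow' for 50 years. The extremes live in the annual
# maxima, which a mean-focused analysis would smooth away entirely.
annual_maxima = [max(random.gauss(10, 2) for _ in range(365)) for _ in range(50)]
mu, beta = gumbel_fit(annual_maxima)
flood_100yr = return_level(mu, beta, period=100)
```

The 100-year return level sits well above the day-to-day mean of about 10: exactly the kind of quantity that averages over time cannot reveal.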
To take advantage of that data, to understand that data, to extract meaning from that data. And I could extend this conversation to AI, which is maybe one of the biggest challenges: a lot of people are looking at AI techniques, but how many of those techniques will actually deal with extremes, reason across space and time, and deal with non-stationarity? So what I'm saying about data science also applies, perhaps even more so, to AI algorithms. Big, big challenges, and the need to really invest time and resources in understanding how we extract meaning from the complex data we have. I could give a whole talk on that, and I have given many talks on that, but that's not where I'm heading today. That's an important building block, but I'm going to switch focus now to what really is at the heart of what I want to say, and it's captured in this very simple diagram, which will be amplified by a diagram to follow. Because the other big characteristic, perhaps not entirely unique but certainly a defining characteristic, of environmental science is the importance of process understanding and process models. Sam, for example, as he introduced, works in this field and thinks about process models and pollutants in the environment, and that's really important work. There is a perspective, as we move towards more and more data, a rather dystopian view of science going forward, that we simply extract meaning from data and that is the science. And that, for me, is horribly incomplete, because environmental science cannot exist without process models, and process models have the added feature of capturing the best of breed of science: they are our understanding of the science at this moment in time. Without that, the science withers and dies.
So we're left with this picture: large repositories of data, with lots of information to extract and analyse; new capabilities in data models, the new kid on the block being AI; and the tradition of process models, which is so, so important. Can we reach a synergy? Can we conceive a modeling paradigm that actually refines and extends our process understanding while still building on the potential of these new data-driven techniques? That is the heart of what I'm going to talk about today, and that's where I want to start a debate and discussion, and hopefully even collaboration, with this community.

So that was a very simple model. What about this one? I'm not going to go through it all in one go; I'm going to step through it slowly and allow you to absorb what I'm talking about. This came from a short paper I worked on with Peter Henrys, whom some of you will know, a statistician and data scientist within UKCEH. We have many conversations about where his skills as a data scientist and statistician can fit in with the broader field of environmental modeling. We sat down and wrote a short paper that was us working through this issue, and the heart of that paper is this diagram, which shows the potential synergy between data understanding and process understanding. We gave it the rather provocative title 'In Praise of Arrows', because what we were discovering was that what's important in this diagram is not the individual blocks, not the process models, not the data science or AI algorithms or emulators: it's the arrows between them, and how they can impact each other. So, in praise of arrows: it's the animation of this diagram, how it works together, how it flows, how the parts inform each other. And the more perceptive amongst you might also notice that this diagram is cyclical.
There are actually cycles here, and I'll unpick what I mean by that in a second. That cycle is really important, because it means a constant iterative process in which data understanding and process understanding move forward and improve together over time. I'm going to quote the words of Keith Beven, one of the environmental scientists responsible for me being here today in this community, because Keith uses the phrase 'modelling as a learning process'. And that's effectively what we've captured here: modelling as a learning process that's constantly evolving and learning, so that data understanding and process understanding are edging forward. I'll refine that a little bit shortly, introducing another of Keith's concepts. Keith is a hydrologist, by the way, if you don't know him.

So, in praise of arrows: let's unpick how data and process understanding can advance in this kind of dynamic dance. I think one of the real opportunities is at the early phase of data ingestion. One of the bottlenecks I've felt, having moved over to the field of environmental science, is getting data available in a clean and accessible format, especially now with the rate at which data is coming at us, whether from citizen science, from sensors in the ground, or whatever the source. Data is only useful when it's available from a repository; well, it's findability and accessibility as well, the FAIR credentials, but there's a lot of work to be done just to get to that point. And I see huge, huge potential for, for example, machine learning techniques in doing quality assurance on ingestion, cleaning the data, and perhaps annotating the data so that you've got the right metadata. These are the bits that many environmental scientists really don't like, but how much of that can be automated or semi-automated?
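The automated QA-on-ingestion idea can be sketched very simply. This is a hypothetical, deliberately naive flagger (a rolling z-score test), standing in for the machine learning techniques a real ingestion pipeline might use; the "temperature" trace and thresholds are invented.

```python
import statistics

def qa_flags(series, window=5, z_thresh=3.0):
    """Flag suspect points in a sensor series using a rolling z-score.
    A toy stand-in for (semi-)automated QA applied on data ingestion,
    before data reaches a clean, quality-assured repository."""
    flags = [False] * len(series)
    for i in range(len(series)):
        lo, hi = max(0, i - window), min(len(series), i + window + 1)
        neighbours = [series[j] for j in range(lo, hi) if j != i]
        mu = statistics.mean(neighbours)
        sd = statistics.stdev(neighbours)
        if sd > 0 and abs(series[i] - mu) / sd > z_thresh:
            flags[i] = True
    return flags

# A smooth 'temperature' trace with one implausible sensor spike at index 6.
trace = [10.1, 10.2, 10.2, 10.3, 10.2, 10.4, 99.0, 10.3, 10.2, 10.1, 10.3]
flags = qa_flags(trace)
```

The flagged point would then be quarantined or annotated with QA metadata rather than silently propagated into downstream models.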
And what's the role of data science and advanced techniques in quality assurance and cleaning the data? I see names on the call, many people who know more about this than me. So even at the very early stages of data understanding, there's real potential to be transformative, so that we've got good, clean, quality-assured, and perhaps even annotated data to work on. Then you get the first relationship between the two: process models can be nudged into a new state based on what's being observed. That's traditional data assimilation; nothing new there, we've been doing it for years, so I'm not going to dwell on it. It's important, but not where I want to centre the discussion today. Next, patterns, insights, or potential novel features can be extracted from the data. We can use clustering techniques; we can look for potential hypotheses emerging from the data in a bottom-up fashion; we can look for change points, significant points of change in time-series data. We can use a whole raft of techniques to look for insights. Perhaps we just don't have time to work this data to its full potential, because today's data sets tend to be vast and multi-dimensional; perhaps one of the important roles of AI is to see if there's something in the data that might be indicative of something we've missed as scientists. That's quite interesting, and it becomes more interesting when you combine it with process models: you look at the observations and ask what they're saying about the process models. How are the process models reacting? If there's an extreme event and you're seeing an extreme response in the environment, is the process model capturing that, or is it representing more average behaviours? Is it missing the extreme event and its impacts? That's something we actually worked on in CEEDS, and in a previous project called Data Science of the Natural Environment, around ice sheet melt.
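The change-point idea mentioned above can be made concrete with a few lines of code. This is a toy CUSUM detector, one of the simplest techniques in that "whole raft"; the "streamflow" series and thresholds are hypothetical.

```python
def cusum_change_point(series, threshold=5.0, drift=0.0):
    """Simple CUSUM detector: return the first index at which the cumulative
    deviation from the running mean exceeds `threshold`, or None.
    A toy illustration of change-point detection, not a production method."""
    mean = series[0]
    pos = neg = 0.0
    for i, x in enumerate(series[1:], start=1):
        pos = max(0.0, pos + x - mean - drift)   # cumulative upward deviation
        neg = max(0.0, neg - (x - mean) - drift) # cumulative downward deviation
        if pos > threshold or neg > threshold:
            return i
        # Update the running mean only while no change has been detected.
        mean += (x - mean) / (i + 1)
    return None

# A synthetic 'streamflow' series that shifts regime at index 20.
series = [5.0] * 20 + [8.0] * 20
cp = cusum_change_point(series)
```

A detected change point like this is exactly the kind of bottom-up signal you would then hold up against the process model: did the model see this regime shift coming, or not?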
We were finding that this really matters: data can give insights that process models are not capturing. We can use the patterns and insights, which may be rubbish or may be significant, to examine how process models represent them, validating or invalidating process models, or ensembles of process models, and picking the one that seems to be the best fit. I think that's getting quite interesting, but let's keep going, because this is one of the areas where I really want to start a conversation with you. Process models can actually adapt themselves, either as black-box structures or, and this is where the software engineering comes in, through the decomposition of process models that many people in the CSDMS community are really thinking about. Can you adapt the process models to represent the observations you seem to be getting? Can the process model improve, self-organise? Self-organising systems are not run of the mill, but to me as a computer scientist this is a well-established area, in which large, complex bodies of code can change their internal structure to reflect what's happening in the external world. Self-organising code. Can a process model self-organise? Is there an alternative module, representing part of the physics of whatever system you're modelling, that might more accurately represent this place, at this time, in this context? Now that is really starting to get exciting to me, because you can start to tailor your process models to now, and to the particular circumstances you're studying. That assumes, and becomes even more exciting when, process models have a clean, well-defined software architecture, engineered or reverse-engineered so that you can get access to their internal structure. Is that something we can look at together? I certainly hope so. And then, to continue the journey around this diagram: this is studying a complex system.
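The self-organising, module-swapping idea can be sketched with a toy strategy pattern. Everything here is hypothetical: two invented rainfall-runoff formulations, and a model that compares candidate internal modules against recent observations and swaps in whichever fits best. It assumes exactly the clean, decomposed architecture described above.

```python
# Sketch of a self-organising process model: candidate internal modules are
# scored against observations, and the best one is swapped in automatically.

def linear_runoff(rain):
    """Hypothetical module A: runoff is a fixed fraction of rainfall."""
    return 0.5 * rain

def threshold_runoff(rain):
    """Hypothetical module B: runoff only above a 2 mm storage threshold."""
    return max(0.0, rain - 2.0)

class SelfOrganisingModel:
    def __init__(self, modules):
        self.modules = modules
        self.active = modules[0]

    def reorganise(self, rain_obs, runoff_obs):
        """Swap the active module for whichever candidate best reproduces
        the observed rainfall-runoff pairs (sum of squared errors)."""
        def error(module):
            return sum((module(r) - q) ** 2
                       for r, q in zip(rain_obs, runoff_obs))
        self.active = min(self.modules, key=error)

    def simulate(self, rain):
        return [self.active(r) for r in rain]

model = SelfOrganisingModel([linear_runoff, threshold_runoff])
# Observations from a 'flashy' catchment: no runoff until rain exceeds 2 mm.
rain = [0.0, 1.0, 3.0, 6.0]
runoff = [0.0, 0.0, 1.0, 4.0]
model.reorganise(rain, runoff)   # the model restructures itself to fit this place
```

The point is not the trivial physics but the architecture: because the model exposes its internal modules through a clean interface, it can tailor itself to this place, at this time, in this context.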
Complex systems have emergent behaviour. Can we build more complex data models, probably moving into the sphere of AI, using deep learning or integrated statistical modelling, that represent the system and allow us to reason about emergent behaviour in what are highly complex multivariate systems? Could that sit in tandem with process understanding, so we can look at the relationship between the two? Can we actually discover missing behaviour? You realise as a scientist, when you look at your data, that something is not right with the process model; something is missing, an important relationship or physical process or behaviour that's just not captured. Can you then look at injecting new behaviour into the model automatically, or probably semi-automatically, or even manually? Is that still a step forward? And see the process understanding grow over time as part of this learning process? Your process models are complex, so if you have a process model you're happy with, one that represents the behaviour of a given place and time, can you then train an emulator from it? And if the behaviour of the process model changes over time, could you retrain the emulator, and then run the emulator many times as a valid representation of that process model? Then you can really look at combining that emulator with other emulators to study cascading effects in complex systems, or to look at sensitivities in more depth. And one thing Peter Henrys is very keen on, and if he were here he'd be jumping up and down about it: is there a loop back to the sensing system, so that if uncertainties are apparent, maybe from your data models, you can use them to drive adaptive sampling strategies? I saw that Tom August is on the call, one of my colleagues from UKCEH.
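The emulator idea can be sketched as follows. This toy "trains" an emulator by sampling an (invented) expensive process model on a grid and interpolating between the samples: a deliberately simple stand-in for the Gaussian process or neural network emulators used in practice.

```python
import bisect

def expensive_process_model(x):
    """Hypothetical stand-in for a costly simulation (imagine hours of compute)."""
    return x ** 2 + 0.5 * x

def train_emulator(model, grid):
    """'Train' a cheap emulator: sample the process model on a grid, then
    interpolate linearly between the samples. If the process model changes,
    just re-run this function to retrain the emulator."""
    xs = list(grid)
    ys = [model(x) for x in xs]   # the only expensive step
    def emulator(x):
        i = min(max(bisect.bisect_left(xs, x), 1), len(xs) - 1)
        x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return emulator

grid = [i * 0.25 for i in range(17)]   # 0.0 .. 4.0
emulator = train_emulator(expensive_process_model, grid)
# The emulator can now be called thousands of times, e.g. for sensitivity
# studies or coupling with other emulators, at negligible cost.
errors = [abs(emulator(x) - expensive_process_model(x))
          for x in [0.1, 1.3, 2.7, 3.9]]
```

A real emulator would also carry an uncertainty estimate with each prediction, and it is exactly that uncertainty that could loop back to drive the adaptive sampling strategies mentioned above.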
He's been looking at this in great depth around biodiversity data and citizen science, directing citizen science participants to fill the missing gaps. I think that's really important. So hopefully that's brought this diagram to life. This is, for me, an exciting future. I don't know how to build all of it, but I'd love to learn and I'd love to try, and to build it all would require a massive effort: a cross-disciplinary team of people who understand the data world, people who understand the process model world, people who really think about internal structures, the software engineers and software architects, and people who understand complex systems and their interactions. This is a grand challenge, a grand modeling challenge, and I can't think of anything more exciting to look at going forward.

Keith Beven is really annoying. He's a lovely guy, but he's really annoying, because he knew all this years ago, and it's only over time that I've come to realize how profound his insights into modeling were. He would wrap all this up and refer to it as 'models of everywhere': modelling as a learning process, but models that capture everywhere. You can have very fine-grained models that represent the particulars of particular places, run a model that represents a place, and let the model change over time as you learn about that place. Models of everywhere. If you don't know the literature Keith put out on this, go and seek it out. I've also included a reference at the end to a collaboration I did with Keith, 'Models of Everywhere Revisited', on why, in terms of technology, it's actually the right time to build these kinds of modeling futures.

And just to throw in something else, and I'm going to wrap up fairly soon because I want to have time for discussion. This is something hot off the press.
It's something I've been thinking about very recently, and it links to a proposal we just submitted, so it may or may not happen; we're looking for funding to think about this. I've taken a lot of inspiration from this diagram. This comes from a model that's important within UKCEH and broader communities across the UK, from a programme called Hydro-JULES, which looks at land surface-atmosphere interactions and the importance of the water cycle. The reason I like this diagram is that it starts to show a software architecture. The system is decomposed into three parts: the surface layer, the subsurface layer, and the wetlands, rivers and lakes, the hydrology part. They've worked out the interactions between these parts and a relatively sophisticated way of coupling them, including across scales. But then it's kind of fractal: you look at how this, as an entity, would interact with broader systems. If you have this, which is potentially the architecture of a digital twin once you start to combine it with data understanding, how does it then interact with an atmospheric digital twin? And how does that interact with an ocean digital twin? This has led me recently to focus on and think a lot about federated structures and federated digital twins, and to think about federation intrinsically as you address digital twins: don't build a digital twin, build something that slots naturally into a federated structure, and think of the federated structure upfront, not as an add-on. If we start to do that, the potential of this technology goes through the roof in terms of understanding complex interactions. My frustration with this diagram is that I want to go further. Pick any one of these components, say the subsurface: you could unpick that and have a complex structure underneath, consisting of components and interactions.
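The federated-structure idea can be sketched in code. This is not Hydro-JULES or UniFHy; it is an invented toy in which three component twins (surface, subsurface, river) each expose the same tiny `step` interface, and a federator couples them by routing fluxes between them each time step. The physics is deliberately trivial.

```python
# Toy sketch of a federated structure: components designed up front to slot
# into a federation, coupled only through fluxes passed across interfaces.

class SurfaceTwin:
    def __init__(self):
        self.storage = 10.0
    def step(self, rainfall):
        self.storage += rainfall
        infiltration = 0.2 * self.storage   # flux handed to the subsurface twin
        self.storage -= infiltration
        return infiltration

class SubsurfaceTwin:
    def __init__(self):
        self.storage = 50.0
    def step(self, infiltration):
        self.storage += infiltration
        baseflow = 0.05 * self.storage      # flux handed to the river twin
        self.storage -= baseflow
        return baseflow

class RiverTwin:
    def __init__(self):
        self.discharge = 0.0
    def step(self, baseflow):
        self.discharge = baseflow

def federate(surface, subsurface, river, rainfall_series):
    """Couple the component twins by routing fluxes between them each step."""
    for rain in rainfall_series:
        river.step(subsurface.step(surface.step(rain)))

surface, subsurface, river = SurfaceTwin(), SubsurfaceTwin(), RiverTwin()
federate(surface, subsurface, river, rainfall_series=[2.0] * 100)
```

Because each component hides its internals behind the same narrow interface, any of them could itself be a federation (the fractal point above): the subsurface twin could be replaced by a coupled soil-moisture-and-groundwater structure without the federator changing at all.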
And even within that, thinking of soil moisture, you could have components and interactions. I would want to explode this out; I'm greedy, I want to explode this out much further. So, just a final thing to throw in: I think to do digital twins well, you have to think about the underlying digital research infrastructure. As well as thinking about these kinds of modelling architectures, we've also been thinking about digital research infrastructure platforms that support this, that enable the interoperability between data and models and the federation between different structures. We need to think seriously about an underlying platform that supports data discovery, model discovery, interoperability and federation. In a separate project that I don't have time to talk about, we've been working on that: it's called an information management framework for the environment, which underpins digital twins. But that's for a different day, because I really wanted to focus on the modelling side, so I'm going to skip over it.

So, I'm really excited about the potential here, and I can't do this on my own. What I'm doing is reaching out to you fantastic modelers. I know the community out there is thinking about some of the things I've said, and hopefully there's a meeting of minds, because if there is, I think we can really drive forward modelling futures and start to get the kind of models we need to understand complex systems and interactions. That sounds like a good place to stop. If you get copies of the slides, which I'm happy for you all to have, there are a few musings of myself and others at the end that you can look at. Thank you for listening.

Thank you very much, that was fascinating. We've got quite a bit of time now for questions, comments and discussion. A few things have come in on the chat already.
There was a comment, when you were talking about modelling as a learning process, from Moira, saying that this is very much in line with the concept of modelling cycles in complex systems modelling and collaborative, participatory modelling: KILT, 'keep it a learning tool'. I hadn't heard that before; I like that acronym. I don't know if you had anything to add to that, Moira?

Yeah, I very much prefer it to the KISS principle, 'keep it simple, stupid'; I don't like being demeaning to people. Thinking of modelling as a learning process has been a practice in a lot of complex systems, process-based modelling, but I also wanted to raise how this fits within the framework of participatory or collaborative modelling as we think about interdisciplinary work: not just across academic fields, like hydrology and soils, but also working with people and decision makers, both as sources of data and as users of data, the people who in the end make sense of all this information, the modelling and the data, and make decisions with it. So I just wanted to present that as another dimension.

Yeah, and that's exactly what I was looking for. Some of the literature and areas you alluded to there I've heard of but am not really familiar with, so I think that's why we need collaboration: to all get in a room and thrash out what we can learn from what you've been looking at, what I bring from my perspective, and so many other voices around the audience.

Absolutely, I'm happy to share beyond this talk.

That'd be great. Thanks, and thanks to Matt, who has posted a link to UniFHy, which I think is the model underlying the Hydro-JULES programme you just described. So there's a link in the chat there; thank you, Matt.
And yeah, in terms of exploding out the UniFHy code to include nutrient flows, that sounds exciting too. If there are any questions from anyone, feel free to unmute and ask, or put them in the chat. If not, I've got a bit of a question I was thinking about, and you've actually addressed this already: the skill set. Listening to your description of what we need for environmental digital twins, it's a huge skill set. We need people not just with the process understanding of the environment, which we absolutely do need, the chemists, the physicists and so on, but you described a lot there that is very much computer science and software architecture. So we need those people too: the software architects, the data scientists, the environmental scientists; a really rich consortium of people to be able to build these things. So the question is, has this been done in other disciplines before, or is the environment really unique because of those four V's that you mentioned? Are other disciplines simpler to tackle, and therefore don't need quite as rich a set of skills?

Yeah, I think the answer to your question is yes to both, in some ways. Has it been done in other fields? I would say yes, quite a lot in engineering systems, which is obviously a closer cousin to computer science. There's lots of exciting work around self-organizing software architectures informed by data, and driverless cars are probably one of the best and prime examples: you've got lots and lots of sensors and lots of data, and you've got internal models that represent the behaviour of the car, constantly updating, adapting and learning from traffic conditions. So in some ways, for me, that would be one of the prime examples. I think environmental science is unique in its challenges.
And it's unique because of the breadth of disciplines that need to be involved, and that really is a step change in terms of the complexity of the discourse required. It's easy for engineers working on driverless cars to bring in a software engineer; that's a relatively easy dialogue. The kind of dialogues we're talking about here are much more complex, and you really need teams. To have teams you need resources, and to get resources you need ambition. And, I don't know if this will quite come out right, but maybe the modelling community is stagnating a little bit. I use that term carefully, but maybe we're stuck with legacy code, doing the same things, maybe more and more, maybe with bigger computers, when what we need is a bold vision. There's something exciting to be done to really understand climate change and environmental change in all their complexities and to drive forward our understanding. We need a paradigm shift in modelling, and we need to be bold and get out there and say to the funders that this needs investment. It's that vision and passion and direction that may help attract resources, because without that it would be quite hard to achieve this. It's one of the grand challenges of our time; let's go for it.

Yeah, and I agree with that. "Stagnating" is probably not too bad a description when I think about some of the models I've written: they're just big monolithic code bases, and even where I've written them object-oriented, so you could split them into little objects, they're all intertwined, all stuck together, and difficult to take apart.

And a caveat there, because that doesn't sound quite right: I have huge respect for the modellers out there, so I would never say they're stagnating, but I think there's an opportunity here to really drive things forward. Absolutely.
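The contrast drawn here, between a monolithic model and loosely coupled components, can be sketched with a minimal coupling interface, loosely in the spirit of the CSDMS Basic Model Interface (BMI). The method names, components and numbers below are illustrative assumptions, not the actual BMI API or any real model:

```python
from typing import Protocol

class Component(Protocol):
    """Minimal coupling interface (illustrative, BMI-like)."""
    def initialize(self) -> None: ...
    def update(self, dt: float) -> None: ...
    def get_value(self, name: str) -> float: ...
    def set_value(self, name: str, value: float) -> None: ...

class Rainfall:
    """Toy rainfall driver: a constant rate, purely illustrative."""
    def initialize(self) -> None:
        self.rate = 2.0  # mm per step
    def update(self, dt: float) -> None:
        pass  # a real component would advance its own state here
    def get_value(self, name: str) -> float:
        return self.rate
    def set_value(self, name: str, value: float) -> None:
        self.rate = value

class SoilMoisture:
    """Toy soil-moisture store fed by whatever supplies 'rainfall'."""
    def initialize(self) -> None:
        self.storage = 10.0  # mm
        self.forcing = 0.0
    def update(self, dt: float) -> None:
        drainage = 0.1 * self.storage
        self.storage += (self.forcing - drainage) * dt
    def get_value(self, name: str) -> float:
        return self.storage
    def set_value(self, name: str, value: float) -> None:
        self.forcing = value

# The coupler only sees the shared interface, so either component can
# be swapped for another implementation without touching the rest.
rain, soil = Rainfall(), SoilMoisture()
for c in (rain, soil):
    c.initialize()
for _ in range(10):
    soil.set_value("rainfall", rain.get_value("rainfall"))
    rain.update(1.0)
    soil.update(1.0)
print(round(soil.get_value("storage"), 2))  # prints 16.51
```

The point of the sketch is the seam: once components only exchange named values through a small interface, the "all intertwined" coupling of a monolith moves into an explicit, replaceable driver loop.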
You mentioned funding. Do you think the funding is already there for this type of work, or is that also an issue: do we need to convince funders that this is important?

No, I think the funding's not there; people have got their eyes on other areas. I was in a talk yesterday about exascale computing, and funders have got their eyes on exascale and building bigger and bigger computers. There's a lot of traction around AI in environmental science, and massive investment in that, but for various reasons that I could go into, I don't think those investments alone are what we need. I think we need to look at modelling as a whole, at science and scientific understanding as a whole, and then piece all these bits together; that would place process modelling and modelling futures at the heart, not as an add-on. So the areas that are attracting funding are not necessarily the ones that need to attract it to the level that they do.

That was a great talk, Gordon. I'm learning more and more about digital twins, so this is really helpful, and maybe you've already touched on a few elements of what I'm about to ask. You often see buzzwords within science that really get traction. I remember back in the day "source to sink" was quite popular: understanding systems, how the landscape erodes and transports sediment, which then gets deposited somewhere in the ocean, and seeing that connection. That was big in the 90s and 2000s, and then slowly you heard less and less about it; there's still research on it, but the focus has shifted. Maybe the same goes for AI now, and maybe for digital twins as well. So how do you see it: what are the challenges for digital twins in not just being a buzzword but becoming established within science?
What do you see as the roadblocks we need to overcome?

Yeah, I think better communication. I started off as a bit of a digital twin sceptic; certainly when you work in technology and computer science you get fed up with the latest buzzwords, so I started off thinking, is this just another buzzword of the time? Then I started to dig deeper and think through in my own mind what it meant, and I've kind of talked about that throughout the talk. I convinced myself there was something in this. To be honest, I'd be quite happy if the term "digital twins" disappeared, because I think what we're really talking about is modelling futures. The buzzword can evaporate, but what should remain is the understanding of where we need to go in terms of how we model complex systems and how we take advantage of the increasing availability of data, and not just data: simulation too. So I'd be happy if I never heard the term again, apart from the fact that I've just put in two bids for digital twins of X. I hope that answers your question. Yeah, it does. Great.

Are there any more questions? Let me just check the chat. Joel says: thanks for the talk; my niche is to use this concept to drive the data collection programmes of the future, and I'm very interested in getting a feedback loop in the process to help organize more deliberate data collection.

Yeah, absolutely. Designing our monitoring systems, and improving them over time, is so important. I know UKCEH is massive in this area; we effectively run a monitoring programme. I think the potential for feedback loops and more optimized data collection is just so, so important. That's the bit that my collaborator Henry is passionate about these days, as am I.

Good stuff. We've only got a few minutes to go and we're starting to lose people, so should we start wrapping up there?
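The feedback loop Joel describes, using model uncertainty to steer the next observation, can be sketched as a toy loop in which the site whose ensemble has the largest spread is sampled next, and that site's ensemble is nudged toward the measurement. All numbers, the site values and the simple 50/50 nudging rule are illustrative assumptions, not any real assimilation scheme:

```python
import random

# Toy model-guided monitoring loop: observe where the model is least
# certain, then fold the observation back into the model.
random.seed(42)
n_sites, n_members = 4, 20
truth = [3.0, 7.0, 5.0, 9.0]  # hypothetical true values per site
ensemble = [[random.gauss(5.0, 2.0) for _ in range(n_members)]
            for _ in range(n_sites)]

def spread(members):
    """Population variance of one site's ensemble."""
    mean = sum(members) / len(members)
    return sum((m - mean) ** 2 for m in members) / len(members)

initial_uncertainty = sum(spread(e) for e in ensemble)

for cycle in range(8):
    # Feedback step: pick the site with the largest ensemble spread.
    site = max(range(n_sites), key=lambda s: spread(ensemble[s]))
    obs = random.gauss(truth[site], 0.5)  # noisy measurement
    # Crude update: pull each member halfway toward the observation.
    ensemble[site] = [0.5 * m + 0.5 * obs for m in ensemble[site]]

final_uncertainty = sum(spread(e) for e in ensemble)
print(final_uncertainty < initial_uncertainty)  # prints True
```

Each update shrinks the chosen site's spread, so targeted sampling reduces total uncertainty faster than observing sites at random; a real system would replace the 50/50 nudge with a proper data assimilation update.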
Unless there are any last questions or comments, quickly? Yeah, can I just say: I do want to reach out to people, so please get in touch and let's see if we can take this forward. A few people have said they're interested in this or that, so let's keep it going, because, as I say, I can't do this on my own. There are many skills here, and some really interesting observations and reactions from the audience.