Effectively, what we'll do in the next hour is start by asking Liz Lewis to give us an overview of the project. Then we'll look forward to a more open discussion, and perhaps, Liz, you can introduce the rest of the team once you've given everyone a flavour of what PYRAMID is. So, without further ado, thank you very much and over to you. Thanks, Stephen. So I'm here to talk about the PYRAMID project, which is looking at modelling the dynamic aspects of flooding. Our current operational models are based on static topography, but things really do evolve during a flood, and one of the things we're primarily focusing on in this project is the movement of debris. A really stark example is the 2004 flash flood in Boscastle, where very heavy rainfall fell on a very steep-sided catchment. This created a wall of water coming down the valley, and critically, before it hit the village, it went through a car park with over 100 cars parked in it. It suspended those cars and washed them down the valley. They blocked a bridge, which caused water to back up; the bridge then failed and washed everything down, and this caused far more damage to the village than would have happened otherwise. We see this at smaller scales too: debris is really prevalent in causing nuisance floods. Often you'll get woody debris, bins or rubbish coming through and blocking culverts and trash screens, causing water to back up in smaller areas as well. So debris plays an important part in flooding, and we don't capture this dynamic aspect in our current models.
We do have the capability to model this: Qiuhua Liang, who's on the project, and his team have developed a model called HiPIMS over the years, which can model the movement of debris and how it affects evolving flood risk. But it hasn't really been used operationally, because it requires so much data and extra information to run. That's what we've been looking at in PYRAMID: we've been building a platform that allows us to run this kind of debris flood model. To do that we need lots of extra information, so we use a physically based hydrological model to provide the boundary conditions to the flood model; this gives us the river flows and the water coming in at the boundary of the city we're looking at. We also have a machine learning model that detects objects that could float during a flood: it takes satellite imagery and point cloud data to identify where cars and trees are, so that we can put them into the flood model and see how they move around during a flood. All of this obviously requires a huge amount of data, and we're getting it from multiple sources. We get our official data from the Met Office and the Environment Agency, agencies that collect lots of data to a high standard. But we're also supplementing that with really dense sensor networks from places like the Urban Observatory and the National Green Infrastructure Facility, which have hundreds of sensors across Newcastle city centre, which is where we're testing this modelling platform. We also use community and citizen science data to add even more data into the system. So a lot of our work has been getting all of that data into the platform and bringing it together.
We've had to think of clever ways to merge this data and make sure it's of good enough quality to drive these models. Because there's so much data and so many models involved, we've tried to consolidate everything into a really easy-to-use visualisation tool. What you can see here is all of the input data sources being displayed, things like rainfall data and trash screen visualisations. You can look at the outputs from the different models: this is the output of the floating object detection model, so you can see all the cars that have been identified. You can look at the river flows, the output from the hydrological model, where each of these is a river link and you can see how the flows change throughout the simulation. You can also click on one of these cells and look at a time series of the data, so we can explore this in a really interactive way and zoom in on areas of interest. Ultimately, what we're really interested in is the changing flood extent and depths in the city centre, which is what we see here. To do this we obviously need a big project team, because we're drawing on so many different areas of expertise. Many of the team are here on the call today: Hayley and I specialise in hydrological modelling, which is what Ben Smith is doing; Sarah and Claire have been working on the community and citizen science aspect; Jon, Shidong and Maria have been doing the computer vision and developing the machine learning algorithms to identify these floating objects; Qiuhua and his colleagues are working on the actual hydrodynamic model and the debris modelling; and Robin and Amy have been working on bringing all this data together and building the software platform needed to do it.
We also work with a whole range of project partners and stakeholders, who have been really valuable in giving us insight into what a platform like this needs to do and how we can make it useful and valuable, as well as helping them think about ways they can visualise and leverage their own data in a really useful system. And that's my brief overview of PYRAMID. Well, Liz, thanks so much for that. It's great to get that initial overview of the project, its focus, and of course all the stakeholders you're working with. The focus really is on Newcastle, as you've outlined, so it will be interesting to see how some of the approaches you're demonstrating might be applied in a wider context. Thanks also for introducing all the colleagues here; there's a whole range of the project team present. One of the fascinating things about projects of this type and magnitude is the range of different skills that need to be reflected in the team: it's a very interdisciplinary team, with research software engineers, hydrologists, and a whole range of folk looking at demographic data and the impacts as well. So, Hayley Fowler, you're the originator and PI of this, and Liz is sitting with you driving it forward. I wonder if we could start with you, following that introduction, to ask you to give a feeling for the big challenge you've been trying to address in this project and how it affects society. What is the magnitude of the problems you're addressing? So, Liz gave a really great summary of the project. It's very much about trying to improve the immediate management of flooding.
The ultimate aim is to provide information in near real time, or eventually real time, for people like the emergency services to make decisions about, perhaps, roads, or where problems may be focused. It's about bringing lots of new data sets together, as Liz has already mentioned, focused in particular on floating debris, and trying to model that within a near-real-time setting. I think flooding is very much an increasing issue in the UK and around the world. In particular, the sort of floods that affect cities, city centres and urban settings mainly occur in the summer, mainly from these big convective storms, and we expect those to increase substantially in both frequency and intensity in the future. So we're going to see more of these flooding events, and this is really about how we manage them while they're occurring, and whether we can provide information in near real time; as I say, ultimately it would be good to try and provide this information in real time. This project is very much a demonstrator of how you would do that in a city, which can then hopefully be rolled out to other cities across the UK and potentially elsewhere as well. That near-real-time aspect is so important; there's no use knowing that your house flooded a while ago, it's about people and authorities being able to take action. That's a real strength of this, and we'll come to some of the technologies enabling it later in our discussion, but it's clearly a huge strength of what you've tried to do with this project. I think so. I'm not going to talk too much about the computing capabilities, because other people in the team are much more expert on this than me.
Computing power has improved substantially over the last five years or so, enabling this sort of approach to be taken now, and I know that Qiuhua Liang and his team at Loughborough have demonstrated this in previous projects as well. We certainly did some work in the SINATRA and TENDERLY projects, where we demonstrated that a real-time approach was possible, for the Storm Desmond event for example. But trying to bring all of these models together on a platform to actually do this in real time is quite different from demonstrating it for a single event. So getting involved with DAFNI and producing a platform that people can then use beyond the project, which hopefully will be taken up and used as well, is quite exciting really. That's DAFNI, the Data and Analytics Facility for National Infrastructure, which people can find out about online. And I'm just wondering, for yourself and indeed opening out to other members of the team: a lot of the thrust of the work here has been around this concept of the digital environment. What is your collective take on digital environments, and what is it about the environmental challenges you face that makes a digital approach the appropriate way to progress? I'm going to pass this over to Liz; Liz is the acting PI of this project and she's very excited about the digital environment. Yeah, I think what's really exciting now is that we've got the computing power to run complex models, we've got really well-developed, physically based models of the environment, and we're beginning to bring together whole suites of data that we can use to drive those models.
So I think we're in the era of testing and bringing everything together in one place, and the challenges we've really faced in PYRAMID, and in other projects I've been part of, have been about the glue that holds everything together. That's the big thing we've had to overcome: there's so much available, and we just need to bring it together, make it coherent, and see what is and isn't currently possible, so that we can start making innovations in the right areas. There are some other points we might come back to, but we've spoken about data, and this is clearly a very data-heavy project; you also, of course, have the technologies that sit around that data to process it, across the processing chain mentioned earlier. Can we have a bit of an overview of what sorts of technologies, and maybe, Robin, if I may turn to you, what sort of technologies are you using to address this challenge? Yes. Well, one interesting aspect of the project is that somebody like me is involved at all. I'm a research software engineer, and that's a relatively new discipline in research; it's only been around about ten years. The scale of the computing challenges has increased sufficiently that dedicated software engineers really need to be involved in these kinds of projects now. You mentioned DAFNI earlier, and the challenge of a project like this, as Liz has just mentioned, is really about bringing everything together in one place. There are multiple simulators involved: the SHETRAN simulator and the HiPIMS simulator, both very complex pieces of software in themselves. There are lots of different conversions that need to be done to move data between different pieces of the technology chain.
There's the object detection technology, and the machine learning technology is very complex and needs a lot of data. Traditionally, certainly in university research environments, it's been quite a struggle to get those kinds of things together in one place: different data formats, different ways of interpreting data, even different ways of thinking about running programs. People might have used HPC, but that has its own problems around access; jobs have to be queued, they might be expected to run for quite a long time, and HPC is a time-sharing system. The other aspect of this project is that we're not co-located either: there are colleagues in Loughborough and in Newcastle, and I'm not in the same part of the university as Liz's team; I'm not a civil engineer, and I work in a different building. So we've used DAFNI as a platform. DAFNI offers a solution to those kinds of problems, which is to co-locate simulation and data storage. I won't go into the details of how it's constructed, but it's a platform that allows you to host data; it's huge data storage, it can handle petabytes of data, and one of its prime purposes is to host data sets that can be used for national research. Projects of this nature need a new generation of platform to operate, so maybe I'll just give a quick pen picture of what DAFNI is. There are around four components, really. Data storage, and models, where a model is essentially a program that transforms data from one form to another: it can be as simple as a Python script that copies some data from one form to another, or it could be a full simulator like HiPIMS. And then a workflow: a workflow allows you to chain models together.
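As a toy illustration of that chaining idea, here is a minimal sketch in which each "model" is just a Python function that transforms one dataset into another. All the names and numbers here (the tipping-bucket conversion, the depth proxy) are illustrative assumptions, not DAFNI's actual API or the project's real simulators.

```python
# A "model" in the DAFNI sense: a program transforming data from one form
# to another. Here, raw tipping-bucket counts (assumed 0.2 mm per tip,
# reported every 5 minutes) are converted to mm/h.
def to_mm_per_hour(raw_readings):
    return [tips * 0.2 * 12 for tips in raw_readings]

# A stand-in for a full simulator like HiPIMS: a crude, purely illustrative
# depth proxy derived from the peak rainfall intensity.
def simulate_flood_depth(rain_mm_per_hour):
    return max(rain_mm_per_hour) * 0.01  # metres, illustrative only

# A "workflow": pass data through each model in turn.
def run_workflow(raw_data, models):
    data = raw_data
    for model in models:
        data = model(data)
    return data

depth = run_workflow([3, 10, 25, 7], [to_mm_per_hour, simulate_flood_depth])
```

The point of the workflow abstraction is that each stage only needs to agree on data formats with its neighbours, which is exactly the conversion problem Robin describes.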
So you can pass data from one model to another and transform it from raw measurements into some kind of simulation output, like flood levels. And then visualisation is the final component, so you can actually see what you've produced. We've used those components to host, I think, that diagram that Liz showed earlier. The data coming into the project is composed of static and dynamic data: the static data can be hosted on DAFNI, and the dynamic data can be pulled from external sensor APIs like the Environment Agency's or the Urban Observatory's. Those components feed into our models. Robin, I see there are some questions from colleagues coming in on the Q&A; thank you very much to everyone for proposing those, and let's have some more. I have a question for you, and indeed opening out to wider members of the team, about the sources of the data you're drawing together for this project. In particular, the question from Rutger, thank you, is about the debris that's recorded, the sources of that data, and whether such data would be available in other places. I don't know if other colleagues from the team would like to address this as well. Yeah, Amy, do you want to answer this one? So, in terms of the data for the debris, at the moment we're using a combination of different sources, such as lidar data, which I guess will be available in other places. And a lot of the way of getting this data is through machine learning algorithms, so that I guess will be available elsewhere. And what was the other question, sorry? It's really about the sources of the data, particularly the debris, Amy, and where that's coming from; from what was said earlier this might be to do with machine vision. Where does the debris data come from?
So what we do is identify a bounding box; a good example is vehicle data. For the cars, we identify a bounding box which gets a score, so you know how likely it is that it's a car. Then from that bounding box we source additional information: we've got a big car data set, so you can look at the length of a car and identify things like its weight, which you need for the debris modelling. A lot of it is combining different data sets. We're also trying to consider woody debris, and we're going to look at working with other partners to include a model that will look at areas we've identified as trees and give, almost, a probability that they would result in woody debris. Thank you very much, Amy. I see you nodding vigorously there; did you have some comments on that sort of approach? Well, for the debris, obviously one of the very big challenges is accurately knowing where the debris is and where it would be moved. In this team, we're using data from multiple sources, for example from remote sensing or from CCTV, where we capture this kind of debris, and then the team uses this machine learning approach to identify the objects, their shape and size, and the possibility of them moving. That can then be fed into our model, and the model can simulate, according to the type and size of the debris, whether it would be moved by the flow dynamics.
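The two-step process Amy describes, keeping only detections whose confidence score is high enough and then estimating a weight for each car from its bounding-box length, might be sketched as follows. The score threshold and the length-to-weight table are made-up illustrative values, not the project's actual data set.

```python
# Keep only detections whose confidence score passes a threshold (assumed 0.7).
SCORE_THRESHOLD = 0.7

# Illustrative length bands (metres) mapped to a typical kerb weight (kg).
# These numbers are invented for the sketch, not taken from a real car data set.
WEIGHT_BY_LENGTH = [(4.0, 1100), (4.6, 1400), (5.2, 1800)]

def estimate_weight(length_m):
    """Return the weight for the first length band the car fits into."""
    for max_length, weight_kg in WEIGHT_BY_LENGTH:
        if length_m <= max_length:
            return weight_kg
    return 2200  # anything longer: assume a large vehicle

def cars_for_debris_model(detections):
    """detections: dicts with 'score' and 'length_m' from the object detector."""
    return [
        {"length_m": d["length_m"], "weight_kg": estimate_weight(d["length_m"])}
        for d in detections
        if d["score"] >= SCORE_THRESHOLD
    ]

cars = cars_for_debris_model([
    {"score": 0.92, "length_m": 4.4},  # confident detection: kept
    {"score": 0.35, "length_m": 3.9},  # low confidence: dropped
])
```

The output is the list of candidate floating objects, with the derived attributes the debris model needs.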
And then how that debris follows the flow dynamics throughout the process, and whether it interacts with infrastructure, buildings and bridges; Liz mentioned Boscastle. Basically, the aim is to assess flooding including this kind of extra component, because this is not really done in current flood risk assessment or management practice. So for PYRAMID, what we're doing here is, as mentioned, super exciting: it's bridging current technology with future technology. It's at the forefront of today's technology, and it can be put into practice, for example for the EA or the Flood Forecasting Centre to take up, to save lives. If you look at the year before last, the events in Germany and in China killed hundreds of people. This kind of technology, in real time, can really help the public to see the risk and to manage themselves. Yeah, I remember the incredible pictures from China, with the underground stations filling up, and of course the Rhine flooding. Systems like this, if they can be made real time as you say, could clearly be a tremendous boost. But just sticking with the data for a minute: you have CCTV, you're looking at machine vision, using AI approaches to extract and classify objects, and you also presumably have existing data sets like building outlines and kerb heights, and then you have projections of rainfall and so on. One of the things I'm intrigued by, from the introduction, was the discussion of the hundreds and hundreds of sensors that are out there.
I'm just wondering, team, how you integrate sensor data arising from all these different environmental sensors and fold it into the modelling. Would anyone like to... Robin, perhaps you could give us a view on that? Yeah, I just wanted to go back a little bit to the floating debris. The main approach at the minute is satellite based; I'm probably speaking for Shidong here, but it's an identification of bounding boxes, as Amy said, from satellite imagery. And then Shidong, I know, is working on another approach at the minute, which is identification of objects from point cloud data. If a general approach like this to near-real-time environmental disaster prediction becomes really important, it raises big questions about how you get data like that in a timely way, because we don't have a very timely feed of satellite data. We don't actually know what we could get at the minute, but it's certainly not a real-time feed of satellite data. One of the limitations will be bandwidth: purely transferring satellite imagery of that volume and then analysing it. So that's a big open question at the minute. I think the importance of having an analysis of floating objects and debris is clear, but whether we can get data of sufficient quality in a timely fashion to be able to analyse it is another matter, and I think we've only just started to broach that; it's a bigger question for national infrastructure and data provision, really. I think it feeds into the sensor question as well. The way we try to bring all these different data sets together, to take rainfall as an example: the Urban Observatory has rain gauges around the city, and it also has radar data.
There's community-science-collected rainfall data as well as the Met Office and EA gauge data too, so one of the big challenges is thinking about how we bring all of those different data sets together into one driving rainfall data set for the various models. And as Robin's just said, the reliability of that data in real time is really variable, so we can't just focus on the blending method we use; we have to think about how it adapts to the rainfall currently available from the various sources at any given time step. It has to change for each time step depending on what is available. So a really big outcome of the project has been thinking about increasing the reliability and density of the various input data sources to the platform. So many things you've had to grapple with: all these different types of data coming in, different temporal frequencies, different scales, and so on. It's a real challenge, and it sounds like the DAFNI platform is the thing to support that. I see that Jill Thompson, thank you for your question, has posed a challenge which I think is something we've just been discussing: have you a source of data on the potential for different tree species to be uprooted or broken up in a flood, to create the woody debris in the first place? And also, regarding cars, how well are cars sealed to prevent water ingress, which would affect how likely they are to float? I don't know whether certain types of cars are better floaters than others, but do you have that sort of metadata around the data you're collecting? No, definitely not. A quick comment here: I think that's actually a good question in terms of whether we can really simulate, forecast or predict the world as it is.
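The adaptive, per-timestep rainfall blending Liz describes could be sketched like this: for each timestep, build one driving rainfall value from whichever sources actually reported, preferring the higher-quality sources when they are available. The source ranking and the "mean of the two best available" rule are illustrative assumptions, not the project's actual blending scheme.

```python
# Preference order for rainfall sources (an assumed ranking, best first).
PREFERENCE = ["ea_gauge", "met_office_radar", "community_gauge"]

def blend_timestep(readings):
    """Blend one timestep's rainfall.

    readings maps source name -> rainfall in mm, or None when that source
    dropped out. Returns the mean of the two most-preferred sources that
    reported, so the blend adapts to whatever is available right now.
    """
    available = [readings[s] for s in PREFERENCE if readings.get(s) is not None]
    if not available:
        return None  # nothing reported this timestep; leave a gap
    chosen = available[:2]
    return sum(chosen) / len(chosen)

series = [
    {"ea_gauge": 2.0, "met_office_radar": 2.4, "community_gauge": 1.0},
    {"ea_gauge": None, "met_office_radar": 3.0, "community_gauge": 2.0},  # gauge dropout
    {"ea_gauge": None, "met_office_radar": None, "community_gauge": None},
]
blended = [blend_timestep(r) for r in series]
```

The key design point is that the blending rule is re-evaluated at every timestep against the sources that actually reported, rather than assuming a fixed set of inputs.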
There's a big scientific question to answer there, obviously, for some of these cases, because we can't simulate exactly what happens in the real world; for example, whether a car is sealed or not would turn into a different story in terms of floating. But from our data we can identify, say, thousands of cars out there, and then statistically we can tell a good story about the possibility of them floating. What we're dealing with here is a kind of chaotic system in the real world, because the model is highly sensitive to the initial conditions of the objects. But again, by running forecasting for some scenarios, or working in this kind of statistical way, simulating multiple events and multiple scenarios, we can get a good story that reflects the behaviour of the real world. Yeah, that's an interesting perspective, but I guess you have a challenge about whether you look at specific car brands, take an average position, or adopt the worst-case position; there are lots of strategies I guess you'd have to think about in doing that. Indeed. Yeah. We heard earlier about the dreadful floods last year in the Rhineland and in China; that's very much in the news, and of course in Pakistan and other parts of the world. As you say, it seems to be a phenomenon of the times we live in.
It's so important that authorities have this information at their disposal to try to address these issues when they come up. I'm just wondering what the scope of the policy engagement in this project has been; you showed a wide range of stakeholders in the presentation, thanks for that, but what's planned for the future in that arena? This is more of an operational tool than a high-level policy tool, and the way I see it interacting with policy is more in shaping the national strategies around data collection and computing provision. While this tool would certainly give policy-relevant outputs, it's definitely more of an operational tool to help the Environment Agency or lead local flood authorities and the like. But the beauty of doing a demonstrator project is that we've really had a go at pushing everything to its limits: we've pushed DAFNI to its limits, we've pushed all the data to its limits, and we've pushed our models to their limits. That has really forced us to show where all the gaps are in our various facilities and data sets, so I think from this we're going to have a really interesting perspective piece on what needs to be done to make a digital environment actually feasible, and that will be the main impact on policy I see coming from this. You've had a lot of interest from stakeholders; do you get a sense that these sorts of digital tools might play a part in future policy responses, or indeed tactical as well as strategic responses?
Yeah, I think something like this would really benefit from being a central resource, maybe run by someone like the Flood Forecasting Centre or the Environment Agency, and its power will come from bringing in more and more different assets that interact with each other. One of the particular challenges of building digital twins for the digital environment, as opposed to a digital twin for modelling an engine or something like that, is, as Qiuhua said, the chaos and how interconnected everything is in the real world. So I could see this being used as a central operational resource that people bring their data sets to, which gives us better detail on how things in the landscape will interact with each other, and then provides better information to everybody affected by flood risk. I'm interested that you've raised the theme of digital twins; this is clearly an area gaining a lot of traction in the environmental sciences at the moment, following on the heels of established approaches in engineering, medicine and other applications. What's your take on digital twins? Have you developed a digital twin, or what would need to happen to make your approach a digital twin in the future? It's a question to all of you, really. I think this is a step towards a digital twin rather than actually being a digital twin; we've definitely not closed the loop from data to models to the automated action at the end of the digital twin process. It's been a challenge just getting to the stage where we're actually bringing everything together in one platform, so I think there's a long way to go for making digital twins of the environment, as opposed to some of the other areas. I don't know if anyone else has thoughts.
I think there's an interesting question about what near real time or real time actually is, because we've had this discussion: if it were truly real time, say it took no computational time to evaluate the simulation, you could just look out of the window and see what's going on. So it's actually not useful to have a real-time simulation of flooding; it needs to run ahead of events, so it's more predictive. But I don't think we quite know where that boundary is. And then there's what we can learn from a digital twin in terms of accuracy, how well these car models or the woody debris reflect the real world; I think that's a way away yet. I suppose one use is where one tries different scenarios, and maybe planning decisions for the future could be run through this model to see what sense it would make to place an economic development or housing plans in a particular area; that's a challenge. We've been thinking about PYRAMID in terms of different use cases, and we've probably chosen the most difficult one, trying to do things in near real time, because just bringing all of that data in in near real time is really difficult. We've been thinking about it in terms of a forecast mode, which would be a bit more straightforward because it would be driven by a model, or an ensemble of forecasts, from the Met Office. But then we could also run it in a historic mode, with complete historic data sets rather than this patchy real-time data, or with longer-term projections of how the weather might change, which could be used for planning. What's really neat about what we've done is that we've used physically based models at the heart of it, which makes it as flexible as possible for use in all of these different modes.
And so if you did want to test, say, building flood defences or natural flood management measures, these are about the most flexible models you could use, and we could build all of that into the platform. So it's been really interesting thinking about the potential futures for a tool like this. When you bring together all these different data types from different sources — you've just mentioned a range of them, the Met Office and so on — at different temporal and geographical scales, how do you deal with the different confidences that you have in them, and does that follow through into the confidence you have in the projections the models are giving you? How do you deal with uncertainty as it passes through the processing chain? Right at the end, do you know it's right? I do want to answer this. Yeah, I think this goes back to the question we discussed a few minutes ago regarding uncertainty in the data. Obviously, we all know that no modelling results or forecasts reflect the real world 100%; that's a problem. However, in practice, and in the scientific community, we do have approaches to deal with uncertainty — for example ensemble forecasting, which is in operational use now. I think the great thing about PYRAMID is, as Liz already mentioned, that we're not really a digital twin yet but we're moving in that direction, and, more importantly, it helps us identify where the gaps are — what the differences are between the data side and the modelling side, and between the different components, so that we can match them to each other.
So, going back to uncertainty: if we have a system like this, we can now model floating debris, we can model cars moving around the city, and we can even simulate social dynamics, like how people move around. Once we have that system in place, we can test different scenarios, and because we're running on high-performance computing, we can make it near real time. For example, if a council or another organisation would like to know, for emergency planning, what the consequences of plans A, B and C would be, they can test them out using the system we're building here. They might find that plan A is better in one respect, while plan B saves more people but causes more economic damage, and that gives them information for decision-making. And if we run multiple scenarios, we can calculate statistics across them and really start to quantify the uncertainty. So there's no 100% accuracy, but we can understand the uncertainty and build it into decision-making, because all decision-making is under uncertainty; we need to balance that. Yeah. That question is really important, but there's so much work that has to happen before you can address it. We've spent two and a half years and we've only just got a complete workflow where you can even begin to run these kinds of simulations.
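The ensemble scenario testing described above can be sketched in a few lines. This is an illustrative sketch only: the toy damage model, the mitigation factors, and the peak-flow numbers are all invented for the example, not taken from the PYRAMID system.

```python
import statistics

def flood_damage(peak_flow_m3s, plan_mitigation):
    """Toy damage model: damage grows with peak flow and is reduced by a
    plan's mitigation factor. Purely illustrative, not a real flood model."""
    return max(0.0, peak_flow_m3s * (1.0 - plan_mitigation))

# An "ensemble" of boundary conditions, e.g. peak river flows from
# different forecast members (hypothetical numbers, in m^3/s).
ensemble_peak_flows = [120.0, 150.0, 180.0, 210.0, 260.0]

# Candidate emergency plans with assumed mitigation factors.
plans = {"plan_A": 0.30, "plan_B": 0.45}

for name, mitigation in plans.items():
    damages = [flood_damage(q, mitigation) for q in ensemble_peak_flows]
    # Summary statistics across the ensemble are what supports
    # decision-making under uncertainty, as discussed in the panel.
    print(name,
          "mean:", round(statistics.mean(damages), 1),
          "worst:", round(max(damages), 1))
```

The point of the sketch is that no single run is trusted as "the" answer; each plan is scored across the whole ensemble, and statistics such as the mean and worst case feed the trade-off (plan A versus plan B) described above.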
So the technical details and the software engineering behind building something like this are an absolutely enormous challenge; it's a really important question, but there's a lot that needs to be done before you can start getting there. Hayley, yes, please. I think there's a bit of a philosophical question here as well, actually. If we're talking about digital twins, then in terms of scenario planning and thinking about future plans — how do you reduce flood risk in cities — you can put in future climate scenarios or whatever and look at how you would manage those changes or those risks. But when you talk about near real time, it's also a question of how accurate you need to be. One of the reasons we're quite keen to work closely with stakeholders, and with citizen science as well, is to try to understand what people need, because we can produce all-singing, all-dancing models, but at the end of the day we may not need quite that level of state of the art. Some of the things that we as scientists think are important to include in the model perhaps aren't actually that important for managing floods. So there has to be a balance: as scientists we're really interested in pushing the boundaries of the modelling and the science, and bringing in as many data sets as we can, but a more streamlined approach might be more appropriate for stakeholders. This is very much a pilot demonstrator, as I've said before, and, as Liz was saying, we've thought a lot about how we might move this forward to a PYRAMID 2 or some kind of extension; there are very many ways we could take this, which I think has come out of the discussion.
But to be practically useful, it very much has to be co-created with stakeholders. I think that's a really important point: how good is good enough? And the answer is that it depends on the stakeholders and the sorts of challenges they're facing. There's that interface between science and policy, which is interesting — the requirements of the decision-makers, and the ability of scientists to provide almost tailored, packaged information to support that decision-making process. This has been a really interesting discussion. We started off by talking about the digital environment, and the interesting thing now is what strides we're making in constructing one. You've explained several of the challenges, but what would you say are the main challenges that remain in constructing a digital environment — to anyone on the panel, really? Well, I definitely think data reliability and density, and data being available easily and in a timely manner, are really important, along with improving the quality of the data being provided, because the data comes in really variable states of quality. There are also some key missing processes and models: we touched on the limitations of the data we've got for creating those floating objects, and having better estimates of debris generation from wind and trees, and also from things like rubbish and fly-tipping, is a really big gap that needs to be filled. And then there are the platforms themselves: one of the really big challenges has been bringing together so many different models, with different computing requirements, into one place.
And Robin has been working really closely with the DAFNI team to make the changes to DAFNI that are needed for doing something like this in real time. One of the things that changed during the project was being able to actually run a loop, rather than everything being a linear process on DAFNI. Really, there are improvements to be made in all aspects of a challenge like this. One of the really important things that's come out of it is the communication between all of the different groups and areas of expertise that have been working on it. We've got a really great team on PYRAMID, and something that has been really important is getting everybody together in the same room really regularly, so that we can learn to talk each other's language and understand how all these models and data sets need to interact with each other. That's been a really important aspect of doing such a broad-scope project. It seems that in addition to the fairly technical things you've mentioned about data and platforms, which are absolutely critical of course, you're highlighting, among the challenges of constructing a digital environment, the importance of bringing the right people together to address these complex challenges. The composition of your project is very interdisciplinary, bringing lots of different skills together, and maybe the challenge is finding ways of working for these sorts of projects. I'm also detecting the policy-science interface: how to get the pitch right, providing the right information at the right time to people who can actually make use of it in future applications. Any other comments on that from others? Just on a related note: all of this is about flooding.
Last year we had a lot of drought, and I think the model structure that has been developed in PYRAMID could be applied to other environmental simulations. Having been through these issues of data, of working with colleagues from different departments and different universities, and of pulling varying data and model sources together into some kind of analytical structure, I think that could be applied to other environmental problems on a similar platform. It's still a challenge, but hopefully some of what we've learned can be reused for other, similar projects. I see your hand up. I was actually going to ask how these approaches could be applied in other areas — Robin, I think you've started to address that, with drought as an example — but Hayley, you had a point. Yeah, it's a more general point, but it applies to this project and others I've been involved in in the past: often we take these academic projects through to produce something as a pilot, and it doesn't go anywhere. So in general it would be good to see more funding to actually develop these sorts of systems through to useful operational tools. I know there was recently a software-focused call — I can't remember which — from the research councils, but there really does need to be more funding to move from just producing research projects to developing these systems into something useful for society, which can't happen with the current funding mechanisms. One of the things that I'm really proud of in PYRAMID is that we've actually tried to do things properly, in a reusable way. That's been one of the amazing benefits of having an actual research software engineer on the project: everything has been properly documented, and everything is open source and available on GitHub.
When we update the workflow or make any changes to any of the packages that are part of PYRAMID, it all gets pulled through, tested and incorporated automatically. So, relative to lots of other science and engineering research projects, it's been done really rigorously, to make it as robust and reusable as possible, and I think funding for that kind of tedious but really important work is critical to making proper advances with things like this. I see there's a kind offer in the chat from Andreas — thank you for that. And thinking about sensors: what about the citizen as a sensor? We're talking about flooding here, affecting homes, businesses, people's lives. Is there an interest in gathering information from citizens — citizen science — as a way of feeding into this sort of tool, and what role do you think that might have? Yeah, so we already have some citizen science and community science as part of this: in some of the more rural areas in the Tyne catchment we've got data coming in from community groups. But you do have to be careful in how you handle that data, because it requires special quality control before it can add value. Do you want to say something about this as well? Yeah, I think, again, in PYRAMID we didn't really restrict the data sources; we collect data from multiple sources, which is part of what makes the project challenging.
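The "special quality control" mentioned here for community-supplied readings might, as a rough sketch, look like the following. Everything in it — the function name, field layout, plausibility thresholds, and the spike check — is an illustrative assumption, not the project's actual pipeline.

```python
# Hypothetical quality control for citizen-supplied water-level readings.
# All names and thresholds below are illustrative assumptions.

def qc_water_levels(readings, max_level_m=10.0, max_jump_m=2.0):
    """Return only the readings that pass basic plausibility and spike checks.

    readings: list of (timestamp, level_m) tuples, assumed time-ordered.
    """
    passed = []
    previous_level = None
    for timestamp, level in readings:
        # Range check: discard physically implausible values.
        if level < 0.0 or level > max_level_m:
            continue
        # Spike check: discard sudden jumps between consecutive accepted readings.
        if previous_level is not None and abs(level - previous_level) > max_jump_m:
            continue
        passed.append((timestamp, level))
        previous_level = level
    return passed

readings = [("10:00", 0.4), ("10:15", 0.5), ("10:30", 7.9),   # 10:30 is a spike
            ("10:45", 0.6), ("11:00", -1.0)]                  # 11:00 is implausible
print(qc_water_levels(readings))  # keeps the 10:00, 10:15 and 10:45 readings
```

Operational QC for crowd-sourced data would of course involve much more (cross-checking against nearby official gauges, flagging rather than dropping, provenance tracking), but the basic idea is filtering a noisy, high-volume stream down to values trustworthy enough to drive the models.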
Yeah — I've worked at Newcastle for about 12 years, and we've also been involved in a couple of other projects that directly collect and make use of data from citizens, such as photos and videos of floods, and a team here developed a tool to estimate things like water depth from that kind of public, citizen-supplied information. So that's part of the source of the data. But, as Liz mentioned, quality is the big issue we need to deal with: we have a huge volume of data, and we need very, very careful quality control to make sure the data really is of a quality we can use for decision-making. I'm just looking at the time — unfortunately we're drawing towards the end of our fascinating discussion. I had a couple of quick-fire questions, if I may, just to finish off with. One is: are there areas where the digital environment approach doesn't work, do you think? We're talking about the digital environment as an enabling facility, but are there other areas where perhaps it wouldn't work, and why not? Or maybe that's not the case. May I start first? Please do, quickly. Obviously, when we talk about a digital environment, we need digital infrastructure.
So I can see that — again, I always view this as technology for tomorrow, although we should really be making the technology available to stakeholders for future use, for example in flood management. One typical example: in a low-income country that doesn't really have the infrastructure or anything related in place, this kind of concept may not work — but not working now doesn't mean it won't work tomorrow. When I was developing a proposal about seven or eight years ago, one of the challenges for some of the countries involved was that not many people had mobile phones; yet within about three years, more or less everybody had one. So the technology may not be in place everywhere yet, but it will be in the future. That comes back to the question of transferability we discussed just now: the approach is transferable, but some places may not have the technology in place for implementation now. There's no getting away from digital; it's the future. All right, thank you for that. And the final question: this is a demonstrator project within the Constructing a Digital Environment programme, and, just looking back — it's still obviously under way, but just looking back —
I'm interested in what you would characterise as the best practices you've been able to demonstrate, and the learning points for others following in your footsteps — what best practices would you point others towards? I think having a very close team, and spending a lot of time in person with each other, is really important for communication, for helping each other, and for transfer of skills and things like that. And then there's documentation and testing, and trying to make everything reusable and robust, which is really important for the legacy of it as well. Well, looking at the clock, that's about all we've got time for today; it's a shame, as I'm sure we could carry on talking about this for a long time. I hope it's been interesting for everyone listening in as well. I'd like to sincerely thank all of the panellists, who've given up their time to come and tell us about this fascinating project. I'd also like to thank the audience: the questions that came in were great. I'm going to pass some of those on, and I think, looking at the chat, some have been answered already, so that's good. It's been a great discussion, really, about how digital environment approaches can be used to address pressing and complex issues, and of course it follows on from other fascinating talks we've already had; the videos are there if you want to go and watch them. And, thinking about the future, we have the next webinar coming up in three weeks' time, on Friday the 2nd: the DECIDE project. Dr Michael Pocock and team will be speaking about delivering enhanced biodiversity information with adaptive citizen science and digital engagement, so we'll learn all about his team and their take on citizen science, which will be very interesting. Just to remind everyone, the video for today will be on our YouTube channel.
Thank you so much for the contributions in the chat. Do come and discuss things with us digitally: the link to the Constructing a Digital Environment programme website should also be in the chat. It remains for me just to thank again everyone who's contributed to the talk today. Thank you, and have a pleasant rest of the day. Thank you very much indeed.