On behalf of my co-conveners, Dr. Sujay Kumar and Professors Nikolay Strigul, Rico Fischer and Annikki Mäkelä, I would like to extend our sincere gratitude to the EGU organizers and the geoscientific community for this opportunity. Without your support, we would not be here today. Thank you. Let's begin.

We live at the cusp of ubiquitous observation, computation, and communication, a time when squeezing ever-smaller gains out of the von Neumann architecture is no longer sufficient and we are transitioning to hybrid architectures with dedicated accelerators and shared memory; a time when programs learn to generate programs, or images from natural language. Microprocessors and sensors alike have shrunk into space-, energy- and cost-efficient designs, while storage capacities and internet speeds have grown exponentially, parallel to transistor densities, giving rise to computing from cloud to edge. This has democratized information collection, storage, processing and communication, blurring the lines between the real and the virtual and leading to the term "digital twin". Earth observation systems have increased in number and sophistication while shrinking in size and cost, as earth system models have expanded to incorporate newly discovered processes. Integrated circuits designed for 3D games and smartphones have enabled everything from drones, CubeSats and modern HPC systems to the deep learning breakthrough. Sensors key to robotics, such as the inertial measurement unit, have shrunk from 45-kilogram, 200-watt systems to fingernail-sized, 100-milliwatt circuits integrating global positioning receivers. Laser scanners, too, are now commonly implemented on a chip. Graphics accelerators in the latest mobile chips can perform as many floating point operations per second, or FLOPS, as the fastest supercomputers of the 1990s, using 10 rather than 1 million watts. Desktop graphics accelerators have gone from just five to 103 billion FLOPS per watt, a roughly 21-billion-fold efficiency increase, and domain-specific accelerators for AI are another 20 times more efficient, at two trillion FLOPS per watt. These numbers are so staggering that current technology would have appeared alien just 20 years ago, an era to which dynamic global vegetation models can trace their roots.

These advances carry profound implications for the earth sciences, enabling new capabilities in earth observation and simulation modeling, or EOSM, from geometric reconstruction to scientific machine learning. It is upon this technological substrate that we present an exciting series of talks that together paint a coherent vision for the future of land surface modeling, from observation to simulation to their union. The key role and wide uncertainty of the land carbon sink in regulating atmospheric CO2 have placed increased importance on terrestrial biosphere models. Much of this uncertainty regards the world's forests, defined by their light-intercepting canopies. All existing land models contain detailed one-dimensional physical and physiological processes, yet geometric realism is such that an entire forested landscape would appear as a single homogeneous leaf or stack of leaves. Forest dynamics and many other spatial processes remain absent. Evolution, the generative process of life, is missing. This has vital implications not only for our ability to model the earth system from abiogenesis to the present, but to model any planet that may potentially harbor life. An exciting merger is currently underway.
As global vegetation models look to downscale to improve geometric, physical and biological realism, the computer graphics and game development communities have turned to biology and physics to improve visual realism. Computer graphics has matured from physically based rendering to biologically and physically based 4D vegetation models and related observation systems. Game developers have begun applying deep learning to earth observation records to produce realistic global environments, or worlds. One recent game provides a world over three petabytes in size, at one-meter to eight-centimeter resolution, with some two trillion trees, 117 million lakes and two billion buildings. The entire land surface is procedurally generated every 72 hours using machine learning and photogrammetry programs running on graphics accelerators in the cloud. The game also blends physics with real-time observations to dynamically model atmospheric conditions. Such detail would be the envy of any earth scientist. Long after the invention of L-systems, detailed vegetation models may soon find their way into earth system models. Earth observation data, process models and the universal approximation abilities of artificial neural networks will show the way forward. Now is the time to blend physics-based botanical tree models, gap models, forest landscape models and terrestrial biosphere models into new hybrid land surface models of unrivaled geometric, physical and biological realism. My personal journey down this line of research began over six years ago with two computer scientists at Stanford, Young Lee and Sören Pirk, who unfortunately are unable to be here today but who deserve due credit for their work. Let us now proceed with the talks. Thank you very much.

We are going to start off, then, with Werner Rammer and Rupert Seidl on new deep learning based approaches for forest modeling beyond the landscape scale.

Thanks for coming. I hope the rest of the talk will go smoother than the start. Adam already introduced the title of the talk, and you heard it echoed a couple of times, so I will skip this part. The perfect model should have high resolution in the functions of ecosystems that Adam already mentioned in the introduction, but should also have high resolution in demographic processes like regeneration, seed production and so on, and in forest structure and composition; that would be nice too. It should run at a fine spatial grain, should include spatial context, and should include human and natural disturbances, all in a way that the model still performs well and is able to address large areas. In a way, this perfect thing is a little bit unrealistic, as this little cartoon depicts; it's like the "eierlegende Wollmilchsau" in German, the egg-laying wool-milk-sow, the one farm animal that can do everything at no cost. This is clearly unrealistic, and most models usually concentrate on these two parts: high resolution in the functions, and high performance, or at least performance that is good enough to be applied at large scales. So I'd like to introduce a different approach here.
It's called the Scaling Vegetation Dynamics (SVD) concept, something we developed at TU Munich, and I'd like to highlight some of its features and show some results of an example application. From a slightly different perspective: detailed models of forest dynamics are typically applicable to small areas such as single trees or forest stands, one hectare or a couple of hectares, and those models are quite good at including structural and demographic processes. What we try to do is scale those approaches up to larger scales. The basic idea of how we do this is called a state-and-transition approach; that's nothing new, it has been around for many years. The key idea is to have vegetation in discrete states, and changes in vegetation are expressed as transitions between those discrete states. For example, here we have a cell, let's say it is in state one; over time it can change to another state, state two, and this move is called a transition and it takes a certain time. Of course it doesn't need to be deterministic, it can be probabilistic by nature, so it could take different paths and so on.

So how could you define those states? In this SVD approach we have three dimensions: forest composition, that is the species on the landscape; forest structure, which we capture using canopy height; and a functioning perspective, for which we use the forest density. These are combined, and such a system gives you rather simple and easy to interpret states, for example a spruce-dominated state with some beech, in a certain height class and a certain density class. The point here is that you can have hundreds or thousands of states in such a system, not just a handful of them. The question is then how you could model the transitions between those potential states, and if you think about it, you will see that there are many factors influencing these probabilities. One is the current state and the current residence time, which is the time a cell has already been in a certain state; then environmental drivers such as climate or site conditions; and also the spatial context, the neighbors around that cell. All these points taken together, this could be a rather complex equation, and for this equation we needed some kind of machinery. In our context that machinery is deep learning.

So, just very quickly, what is deep learning? It is basically a subfield of machine learning, based on neural networks, which have been around for maybe 50 years and are very loosely inspired by the way the human brain works. Here is a single neuron in the brain, and below it a single compute node, a neuron in the sense of deep learning. It is basically a set of weights and connected nodes that are processed by an activation function and connected into a network that has inputs on one side, outputs on the other side, and a number of hidden layers in between. Having more than one hidden layer is the reason why these are called deep neural networks, and deep learning, because they have more than one such layer. In the simplest, or most frequently used, case such networks are used in the context of supervised learning, which means that the network learns from examples, the so-called training data.
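To make that picture a little more concrete, here is a minimal sketch in Python (NumPy only) of a single artificial neuron and a tiny network with one hidden layer; the weights, sizes and feature names are purely illustrative and not the networks actually used in SVD:

    import numpy as np

    rng = np.random.default_rng(0)

    def neuron(x, w, b):
        # weighted sum of the inputs followed by a nonlinear activation (ReLU here)
        return np.maximum(0.0, np.dot(w, x) + b)

    # a tiny fully connected network: 4 inputs -> 8 hidden units -> 3 output classes
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

    def forward(x):
        h = np.maximum(0.0, W1 @ x + b1)   # hidden layer; stacking several of these makes it "deep"
        logits = W2 @ h + b2
        p = np.exp(logits - logits.max())
        return p / p.sum()                 # softmax: probabilities over the output classes

    x = np.array([0.3, 1.2, -0.5, 0.8])    # e.g. some climate, site and state features
    print(forward(x))                      # three class probabilities summing to one

Stacking more hidden layers, and learning the weights from examples rather than drawing them at random, is all that separates this toy from the deep networks discussed next.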
The network is shown an example, the network makes a prediction, and the error, the difference between the example and the prediction, is back-propagated and in the end updates the weights between the nodes. In that setup the training data is key, a very important part of the system.

In recent years, and that's also something that Adam already alluded to, deep learning has raised a lot of interest, also from industry. Nowadays we have hardware that is very well suited to running deep learning predictions on GPUs, the graphics cards in your computers, and there is even specialized hardware being built, like tensor processing units. Then there is the software side: nowadays we have a lot of easy-to-apply frameworks that allow us to define these DNNs, train them and run them, and they can be used in basically your favorite computing environment, for example R. And there are a lot of other resources that give you insight into deep learning, training material, and it is easy to find answers if you run into questions, on Stack Overflow and other sites. So it is increasingly easy to apply deep learning in science, and it is already being used. In the environmental sciences the most typical use cases of deep learning are, first, the very classical thing, classification, for instance the identification of species from images, or other applications in remote sensing. Then there is the field of ecological prediction, where features that are difficult to define by hand are learned by a network. And there is the field of ecological models, where DNNs are used either instead of, or as a sub-part of, a classical model, or in this idea of metamodeling. Now the really interesting part, and this is an image from a paper from Reichstein et al., is that basically the same algorithms and the same applications that are used in industry, at Google and Facebook, can be used as well in the environmental sciences: algorithms that can differentiate between cats and dogs can also be used to classify patterns in climate data and so on. So it is very easy for us to use the latest developments in this very active field of deep learning and to apply those approaches in our little modeling world.

So, back to SVD. In our case the deep neural networks are used for the prediction of the transition probabilities between the states. And how are they trained; where does the training data come from? This is a crucial point in the whole deep learning approach. In our case, we use process-based models at smaller scales to generate the training data that is then used to train the networks. In a way, it is very simple: you take environmental conditions, climate, site conditions and so on, you take vegetation, you put that into a process-based model, run simulations, and from those simulations you derive the training data. The response variable is the transitions, and the predictors are all the factors that I described before: basically climate, site, the spatial context and so on, which lead to a simulated response, a transition in the vegetation. And when you apply the learned DNN on the landscape, it works as follows: you have the landscape, you look at the little red cell, you have the DNN, and then in a dynamic simulation you just put the same attributes into the DNN as during the training.
That is, the state and residence time, the spatial context and the environmental drivers. The DNN then predicts a classification result, a probability distribution over the future state and over how long it takes until the state will change. This information is put back into the landscape and the landscape is updated accordingly. The whole system is designed to run at a spatial grain of one hectare and at an annual time step.

To show an application, I'd like to move you to the greater Yellowstone ecosystem in the US. It is a landscape with about 3 million hectares of forest, dominated by conifers: lodgepole pine, Douglas-fir, Engelmann spruce and, at higher elevations, subalpine fir. The historically typical disturbance regime is one of stand-replacing fires. The question we face now is how resilient those forest types are to potential future climate and fire conditions. Why is that a question? Because more fires are expected in the GYE. This is the result of statistical modeling and a clear effect of climate change: it is expected that in the future, say at the end of the century, exceptionally large fires such as the 1988 fire will occur much more frequently, and the fire return interval, which is historically between 100 and 300 years, will in extreme cases drop to below 20 years. This has severe consequences for the vegetation, because larger and more frequent fires limit the delivery of seeds, the establishment after fire and the seed supply: trees are no longer able to grow to an age where they produce seeds, the distance to seed sources gets too large, and so on.

It is rather straightforward to include disturbance in the SVD system. In a way, disturbances are just rapid transitions to a different vegetation state, compared to the slower transitions of dynamic vegetation development, and we can basically just add a module to SVD that, in case of a disturbance, updates the states more frequently and faster. In our case, SVD has a built-in fire module, similar to the module used in the process-based model iLand that will be explained a little later. The fire spread is influenced by topography, slope, wind direction and wind speed, and, importantly, it is constrained both by the availability of biomass that can burn and by a maximum fire size, which in our case is an external parameter that comes from statistical modeling. Just to give you a better feeling for the complete system: in this application we first simulate with the process-based model at stand level, running many different simulations with different vegetation, climate and fire conditions. We then generate data and use a DNN to learn, basically, the response of the model, whether regeneration succeeds or fails depending on the forest type, and then run the combination on the whole GYE, the greater Yellowstone ecosystem.
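To illustrate the pipeline described above, here is a minimal sketch in Python of the metamodeling idea: a classifier is trained on simulated transitions and then applied cell by cell in an annual simulation loop. A scikit-learn multilayer perceptron stands in for the DNN, and all data, feature names and state definitions are synthetic placeholders of mine, not the actual SVD or iLand setup:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(42)
    n_states = 5

    # 1. Training data derived from process-based stand-level simulations (synthetic here):
    #    predictors per cell and year, response = the simulated next state (the transition).
    n = 5000
    X = np.column_stack([
        rng.integers(0, n_states, n),   # current state (species x height x density class)
        rng.integers(1, 30, n),         # residence time in years
        rng.normal(8.0, 2.0, n),        # mean annual temperature
        rng.normal(800.0, 150.0, n),    # annual precipitation
        rng.random(n),                  # share of neighbours in the same state
    ])
    y = rng.integers(0, n_states, n)
    dnn = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300).fit(X, y)

    # 2. Dynamic simulation: apply the metamodel on a toy landscape of 1-ha cells, annual steps.
    state = rng.integers(0, n_states, size=(100, 100))
    residence = np.ones_like(state)
    for year in range(10):
        feats = np.column_stack([
            state.ravel(), residence.ravel(),
            np.full(state.size, 8.0), np.full(state.size, 800.0),
            np.full(state.size, 0.5),          # neighbourhood share (placeholder)
        ])
        probs = dnn.predict_proba(feats)       # probability distribution over future states
        nxt = np.array([rng.choice(dnn.classes_, p=p) for p in probs]).reshape(state.shape)
        residence = np.where(nxt != state, 1, residence + 1)   # reset the clock on a transition
        state = nxt   # a disturbance module would force additional, rapid transitions here

In the real system the training examples come from the process-based simulations and the feature set is richer, but the control flow, train once and then predict transition probabilities every year for every cell, is the same.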
Here is how it looks, just one example animation. On the map you see the different forest types in different colors. When I start the animation, you will also see patches where actual burns happen, and after some time, if they fail to regenerate, they will turn black. The same is shown on the graph on the right-hand side; it starts now. In the first years the fire frequency is not that high, and most burned areas change back to normal vegetation. In the second half of the century, however, fires are increasingly large, and particularly on the Yellowstone plateau, the big high-elevation plateau, there are many fires and a lot of the burned area does not return to forest. Of course, you can then run different climate scenarios and different analyses. In this particular case, the point is that with some probability there will be a very large part of the area of the GYE with failing regeneration, where we expect no closed forest in the future.

That brings me to the conclusions. The whole approach of using deep learning and metamodeling might be one way to build large-scale vegetation models that also include more demographic processes and more structure and composition. And the SVD framework scales very efficiently: we did some tests where we were able to run at least 25 million hectares on very moderate hardware, so to say. It is maybe not applicable for global simulations right now, but at least continental-scale applications should be in reach. Okay, thank you very much for being with us, and sorry again for the start. I hope you will have some questions.

Good afternoon, everyone. Thank you for the opportunity to present our previous experiences and the latest developments of the Finnish Geospatial Research Institute in doing 4D measurements. Here you can see an overview of the presentation. I will first present some of the earlier FGI experiences with 4D measurements from past years. Then I'll present the experiments of applying hyper-temporal laser scanning time series to monitor structural dynamics in plants, and the FGI monitoring system, the permanent LiDAR phenology station. Finally, I'll summarize shortly the key experiences from the presented study cases.

First, let's start with the earlier FGI experiences with 4D measurements. Already in the early 2000s, FGI experimented with multi-temporal change detection studies that used airborne laser scanning data. These experiments were carried out both in built and in forest environments, with the goal to automate change detection. The studies were multi-temporal, with point clouds collected one or more years apart, and focused on object-level changes like the removal of full trees or their large branches, as presented in the figure. In these earliest studies, the change detection was based on height model comparisons. The ALS-based change detection studies were further refined to monitor forest growth and to assess growth dynamics. Here, the bottom figure visualizes how the younger vegetation, here on the left and on the right side of the panel, shows faster growth over the five-year period than the older trees here in the middle. FGI has also developed several new laser scanning platforms to be utilized in different measurement contexts. One of the longest-running monitoring experiments that uses mobile laser scanning platforms is the time series of topographic changes on the River Teno, located in Finnish Lapland. The time series collection started there in 2008 and it has been updated annually ever since, up to this date. Several different mobile platforms have been used in the change mapping over the years, including car-based, boat-based, UAV and personal mapping units.
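As a side note on the height-model comparisons mentioned above, the core of such object-level change detection can be as simple as differencing two canopy height models and flagging large losses; a toy sketch with made-up numbers, not the FGI processing chain itself:

    import numpy as np

    # canopy height models (metres) from two airborne laser scanning campaigns, years apart
    chm_t0 = np.array([[18.2, 17.9], [0.4, 12.1]])
    chm_t1 = np.array([[18.5,  0.3], [0.5, 13.0]])   # one tree removed, the others growing slightly

    diff = chm_t1 - chm_t0
    removed = diff < -5.0                 # a large height loss suggests a removed tree or large branch
    growth = np.clip(diff, 0.0, None)     # positive change read as growth
    print(diff, removed, growth, sep="\n")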
And as you know, drone-based imaging and mapping platforms have made a breakthrough over the past few years. FGI built its first prototype drone platform carrying a laser scanner already in 2010. One of the test cases for this system was to detect artificially caused defoliation in conifers. The results showed a clear agreement between needle number and point cloud density, and showed the potential of multi-temporal UAV data for biomass change detection.

The previous experiments were about 3D change over time, but 4D measurements can also consider the object structure together with its reflectance spectrum. In 2007, an FGI research group led by Dr. Sanna Kaasalainen started experiments with hyperspectral laser sources. This work culminated in 2012 in the development of a hyperspectral LiDAR prototype that was able to record simultaneous waveforms at eight user-selected wavelengths. On the right, you can see an example point cloud scanned with the hyperspectral LiDAR prototype. The point cloud represents a small apple tree colored with false colors using wavelengths selected from the visible range. In 2013, we decided to take the hyperspectral LiDAR prototype into the field to do a new hyper-temporal change detection experiment. Our goal was to determine any changes in object reflectivity over a relatively short, one-day time interval. Several artificial and natural targets were present in the study scene to compare temporal differences in their reflectivity. The scene was scanned once per hour with the FGI hyperspectral LiDAR. The experiment showed that the near-infrared response of the point clouds of natural targets presented clear changes just before sunrise. A closer look into the point cloud structure revealed clear systematic movements in the tree canopy. The zoomed-in windows here on the right show a clear drooping motion of birch branches that started around sunset and continued until about one hour after sunrise. We knew from the measurement records that the conditions had been stable, with no airflow, no lightning and no precipitation. Therefore, the movements were likely related to internal tree processes. To confirm this finding and to rule out any measurement errors, another experiment was set up a year later in Austria. The new study site was 1,500 kilometers away from the original experiment location, and while the tree and the laser scanner were different, the new experiment took place in otherwise similar late summer conditions. Also in this experiment we were able to verify a similar branch movement overnight, which was dubbed "tree sleep" by one of the authors. However, the exact reason behind the movement was still left open at the time. After several talks with University of Helsinki and University of Eastern Finland researchers, we formed the hypothesis that one of the main reasons behind the branch movement could be changes in the relative water content. The hypothesis was tested with data from three separate laser scanning time series experiments. Two experiments were done in the field, one in summer 2016 with leaf-on conditions, shown here on the left, and the other in autumn 2016 with leaves off. The third experiment was a long-term dehydration study done in controlled laboratory conditions, where two test trees were left to dry over a period of 40 days. The figure here illustrates the branch movement detected in the summer time series. On the left are selected point cloud clusters located at branch tips; on the right, it can be seen how these clusters move with respect to their initial position over time, with the movement having its maximum after sunrise. On the right is also shown the vapor pressure deficit, VPD, over time. VPD is a function of air temperature and relative humidity and tells about the atmospheric water demand. It can be seen how the cluster movements have a lagged correlation with the VPD.
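For reference, since VPD will come up again: it can be computed from air temperature and relative humidity, sketched here with the common Tetens approximation for the saturation vapour pressure (the exact formulation used in the study is not stated in the talk):

    import numpy as np

    def vpd_kpa(temp_c, rh_percent):
        # saturation vapour pressure over water (kPa), Tetens approximation
        e_sat = 0.6108 * np.exp(17.27 * temp_c / (temp_c + 237.3))
        # deficit between saturation and actual vapour pressure = atmospheric water demand
        return e_sat * (1.0 - rh_percent / 100.0)

    print(vpd_kpa(20.0, 60.0))   # about 0.94 kPa; warm, dry air gives a high VPD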
In the autumn data with leaf-off conditions, the cluster movements attenuated significantly or ceased altogether. These results were in good agreement with the hypothesis that the branch movement is related to changes in the plant's relative water content, and a study about these experiments was published recently as a preprint.

After these experiments with individual trees, an opportunity arose to expand the concept of hyper-temporal LiDAR time series towards more extended monitoring over a wider forest area. In late 2018, FGI acquired a high-performance laser scanner for an extended monitoring experiment. To monitor branch movements over a wide enough area, the scanner needed to be able to resolve two consecutive points with about one centimeter spacing at 100 meters distance, and to operate fast. An additional requirement was full-year outdoor operability in Finnish conditions; to guarantee this, the system was installed in a protective cover. In 2019, the system was first tested on FGI premises in southern Finland for the duration of a full growth season, lasting from April until early November. The system repeatedly measured, twice per hour, the scene shown here in the left figure. The red rectangle highlights a birch branch whose reflectance response is shown here on the right. The panels clearly demonstrate how the birch branch reflectance was changing between early spring, midsummer, and early and late autumn. This initial experiment tested system operability, and we did not collect additional ground reference data to link the time series with environmental effects.

To make full use of the new LiDAR station and to better understand the daily and seasonal changes visible in the point cloud time series, we obtained permission from the University of Helsinki to install our system at the Hyytiälä forestry field station. The Hyytiälä station was founded over a hundred years ago and is located in central Finland. It is a member of several international research infrastructures that monitor ecosystem, soil and forest-atmosphere processes of the forests and peatlands within and near the station area. Our scanner is installed about 30 meters above ground level, from where it monitors the surrounding boreal forest. The installation was started in autumn 2019, when an FGI-designed, custom-made aluminum frame was first installed in the tower. A few months later, the scanner was transported to the forest station and fixed to the frame. After an initial two-month testing period, regular scans were started in April 2020. In this animation, you can see an overview of the scan area between early April and late November 2020. The viewed area covers about four hectares of the forest next to the tower where the scanner was installed. Each frame in the animation shows the point cloud reflectance mapped into the image plane as seen from the scanner. The scans were selected at about one-week intervals and the scan timings were aimed near midnight; windless and rainless conditions were also required. The areas highlighted with the blue and magenta rectangles mark zoom-in windows into the point cloud and demonstrate the level of detail.
For example, it is possible to see how new leaves appear in May in the highlighted birch tree and how they are later shed in autumn. At present, the scanner operates around the clock, all year round. It measures once per hour a point cloud with 600 million points. The system operation has been robust: after one full year of operation, we have had only a few days with missing data due to technical or software issues. Our time series now consists of 7,000 scans, and the number is incrementing every day. Out of these 7,000 scans, some 2,000 scans have been collected in stable conditions, meaning low wind and no precipitation. Overall, the collected data set is now about 60 terabytes in size and increases by about 200 gigabytes of new data per day. The scanner sees over 1,000 trees at least partially within a 200-meter radius, and from these trees we have been able to automatically detect over 900 tree stems near the scanner unit.

At the individual tree level, the time series has shown lots of interesting seasonal dynamics between different tree species. The results presented on this slide are still very preliminary and the animations are not synchronized, so please view each panel individually. All three panels show the point cloud of a single tree, and every point is colored with its normalized reflectance value. The point cloud of the deciduous birch in the left panel shows a clear shift in volume and reflectance levels as the seasons progress from spring to summer and then to autumn. What is maybe more interesting is that if you look at the lower branches of the pine in the middle panel, it is possible to see how its branches can move up to tens of centimeters even within one week's time. In the right panel, the dynamics of this spruce's point cloud are the most delicate of these examples: if you look at the top of its canopy, the branches lift gradually upwards as autumn advances towards winter. It is easy to say that every tree makes its own dance through the seasons.

Now, let's summarize our key experiences in doing 4D measurements with long-term hyper-temporal LiDAR time series. Hyper-temporal laser scanning time series present a new, exciting way to collect information on internal plant dynamics, both at the individual plant and at the forest level. However, to make the most use of this new data, we need accurate, up-to-date ground reference information. Already existing time series will help to select and validate the most interesting phenomena seen in our time series. To do the key phenomena selection right, good knowledge of the local ecosystem processes, plant physiology and weather conditions is essential. The hyper-temporal, high-density point cloud studies presented here produce large amounts of data by design; our permanent single-wavelength station alone has produced tens of terabytes of raw data in a year. In the future, it will be increasingly important to be able to select the optimal spatial, spectral and temporal resolutions without losing the main signals in the data, while still limiting the data amounts. Our main goal now is to demonstrate the added value of long-term LiDAR monitoring stations in studying vegetation dynamics. To achieve this, we need to be able to select and calibrate our time series for the most important detectable phenomena. The more long-term goal will be to optimize the resolution of the time series collection, both spatially and temporally, to allow more flexible joint work together with other remote sensing platforms.
When these requirements are met, we hope to see a growing number of similar long-term LiDAR monitoring units installed at other research sites around the world. Thank you for your attention. Are there any questions? Thank you for the opportunity to present our previous experience... sorry about that.

Thank you, Eetu, for the very fascinating talk. And now, in the interest of timeliness, we will move on to the next presenter: Renato Braghiere, presenting on better representing vegetation canopy structure in Earth system models.

Thank you, Adam, for the introduction. Can you hear me well and see my slides? Awesome. Yes. Thanks. Okay, so thank you very much for the invitation, Adam. Such an important topic, and hopefully now it's going to go smoother than the previous ones, which were also very interesting. Today I want to share with you a bit of the work I have been developing over the past few years, thinking about how to better represent vegetation canopy structure in Earth system models. Of course, I would like to thank all my collaborators who helped me with this work. I also want to say that I'm all the way over here in California, so it's very early, and I am at the NASA Jet Propulsion Laboratory, so we are very interested in remote sensing and 3D canopy structure in general.

I always like to start this presentation about Earth system models by showing these three plots from the latest CMIP simulations. These three plots show the land uptake from 1850 up to the end of the century, according to 11 Earth system models in three different intercomparison generations: the one on the left-hand side shows C4MIP, the one in the middle CMIP5, and the one on the right-hand side shows the CMIP6 simulations. And what is striking to the eye is the fact that over the years, although these Earth system models have been developed and have been adding more and more processes, they still disagree not only on the sign but also on the magnitude of the land uptake by the end of the century. And why is that? Why do these models disagree so much among themselves? According to the IPCC report, some crucial processes are still missing in those Earth system models. In this plot here I show the sensitivity of land carbon uptake to precipitation versus temperature, and each dot in the plot shows a land surface model from those 11 Earth system models that I showed you before. As you can see, each model has a different carbon uptake sensitivity, and of course different models will have different sensitivities because each of these models has different processes: some represent the nitrogen cycle, some others don't; some have different representations of different processes. But in general the IPCC also points out that these models have something in common, and here I quote: "the transfer of radiation, water and heat in the vegetation soil atmosphere continuum are treated very simply in the global ecosystem models".

I was lucky enough to spend some time in the Amazon rainforest working with flux tower data, and that is when I took these two pictures. The one on the left-hand side shows me at the top of the flux tower pointing to the horizon, and when we look at the Amazon rainforest from above the flux tower, and this flux tower is 82 meters high, we actually see that the Amazon rainforest is this massive green carpet of leaves.
So from above the flux tower we can say that the Amazon rainforest looks like a big leaf from up there. From below the flux tower, though, we see that the Amazon rainforest is a little different: the leaves have different angular orientations, the canopy has gaps where light can propagate through, and the canopy has some vertical organization, a vertical profile of leaf area index. So the approximation of a big leaf, even for the Amazon, is probably not very accurate.

One way to test that is to think about how these land surface models treat the transfer of radiation. Here I show a figure, a representation of what a vegetation canopy looks like in one of those models. These models often use the two-stream scheme, and they use it because it is efficient, very fast, and can be run over very large areas for very long periods of time. It also treats direct and diffuse radiation separately and accounts for all orders of multiple scattering. The idea is that we have some solar radiation reaching the top of the canopy, interacting with this cloud of randomly distributed green dots, I must say, and interacting with a radiatively effective surface underneath the canopy. The Earth system model, or the land surface model, mainly wants two variables from the radiative transfer scheme. The first is the amount of absorbed photosynthetically active radiation, because that will drive photosynthesis later on when coupled to the ecophysiology model, and the second is the albedo of the surface, which will impact the atmospheric processes later on in terms of the surface energy budget.

In reality, we know that a canopy doesn't look much like a cloud of randomly distributed dots. So one way to calculate the bias between the homogeneous canopy case and a heterogeneous canopy is to compare the two-stream scheme, this 1D radiative transfer model, with more complex 3D radiative transfer models; in this case I used the Maestra 3D radiative transfer model. What I did was to fix the LAI, the optical depth of this vegetation canopy or scene, constant and equal to 1.5 square meters per square meter, and all I did was vary the density of the canopy, adding spaces in between the trees. So we have three different densities: the dense case, the medium case and the sparse case. In the 3D model you have to give the position of every tree, how the LAI varies vertically, and so on. By doing that and calculating the sun-angle profile of FAPAR, which is the fraction of absorbed photosynthetically active radiation, and of the albedo, we see that the two-stream scheme overestimates the absorption of all the other canopy scenes, and when we go to the sparse case we actually see that the bias can be huge, on the order of 40-something percent. Also shown in the plots is the case where the radiation is totally diffuse, so we can see that it really doesn't matter much whether the radiation is direct or diffuse: the heterogeneity of the canopy, its gaps, impacts the absorption and reflectance of the surface in comparison to the 1D scheme.
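To see why the homogeneous assumption absorbs too much, here is a toy illustration in Python, not the Maestra or two-stream calculation itself, just Beer-Lambert extinction with an assumed crown cover fraction, so the numbers are illustrative only:

    import numpy as np

    def fapar_beer(lai, cos_zenith=1.0, k=0.5):
        # fraction of incoming radiation absorbed by a horizontally homogeneous canopy
        return 1.0 - np.exp(-k * lai / cos_zenith)

    lai_scene = 1.5                      # scene-average LAI (m2 m-2), as in the example above
    fapar_homogeneous = fapar_beer(lai_scene)

    # same total leaf area, but concentrated on half the area (the crowns),
    # with bare gaps in between that absorb nothing
    crown_fraction = 0.5
    fapar_gappy = crown_fraction * fapar_beer(lai_scene / crown_fraction)

    print(fapar_homogeneous, fapar_gappy)   # ~0.53 vs ~0.39: the homogeneous case absorbs more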
I think the first question people ask is: okay, if we already have 3D radiative transfer models, why do we still use 1D ones? I guess for two main reasons. The first is that we don't have enough data to parameterize 3D models globally; the second is that we don't have enough computing power to run these models globally. So one question we may ask is: can we keep the efficiency and robustness of the two-stream scheme while getting the accuracy that these 3D models provide in terms of absorption and reflectance of the surface? To do so, we brought into these models the concept of the clumping index. The clumping index is this variable, or this combination of variables, that we multiply our true LAI, our optical depth, by, turning the optical depth of the canopy into an effective optical depth in radiative terms, in order to approximate the black curve to the other, colorful curves in terms of FAPAR. So the clumping index accounts for the gappiness of the vegetation, how clumped the trees are, or the leaves within a tree.

When people talk about canopy structure, it's funny, because people often mean different things: some talk about canopy height, others about leaf angular distribution, others about canopy spaces or gappiness. But in reality, canopy 3D structure is all of that: the vertical profile of LAI, the gappiness of the canopy, and so on. The way we added the clumping index into these radiative transfer models and land surface models was by changing the light extinction coefficient. This equation here, and I promise it is the only equation in the presentation, shows the direct transmittance of the canopy, which, together with absorptance and reflectance, is the remaining term of the radiation partitioning. The direct transmittance follows a Beer-Lambert law: it is an exponential decay related to the optical depth of the canopy, which is the LAI divided by the cosine of theta, where theta is the angle of the incident radiation on the canopy, or, in the case of a land surface model, the sun's zenith angle. The light extinction coefficient is the term that modulates LAI up or down to account for the whole heterogeneous structure of the canopy, and often this light extinction coefficient equals a leaf angle distribution function times the clumping index. So by inverting this equation we can derive the clumping index that we need so much, add this value into land surface models, and see what happens.
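Written out, my reconstruction of that verbally described equation and its inversion, in LaTeX, with T_dir the direct transmittance, theta the solar zenith angle, G(theta) the leaf angle distribution (projection) function and Omega the clumping index, is:

    T_{\mathrm{dir}}(\theta) = \exp\!\left(-\frac{k(\theta)\,\mathrm{LAI}}{\cos\theta}\right),
    \qquad k(\theta) = G(\theta)\,\Omega,
    \qquad \Omega = -\frac{\cos\theta\,\ln T_{\mathrm{dir}}(\theta)}{G(\theta)\,\mathrm{LAI}}.

Measure T_dir(theta), from hemispherical photographs or from a 3D model, assume G(theta) and the true LAI, and the last expression gives the clumping index directly.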
Here I show the results for two sites that we tested: one is the dense Old Aspen site in Canada, and the other is the sparser blue oak savanna site in California. We compared two different estimates of the direct transmittance of these canopies. The first is the observed direct transmittance, where we go into the field and take digital hemispherical photographs of the canopy. The idea, and the first figure you see here is a digital hemispherical photograph taken in California, is that we threshold the picture into black and white pixels and say that whatever is black is canopy and whatever is white is background sky. By doing that, we can then calculate the direct transmittance for all zenith angles, from the middle of the picture, where the zenith angle is 0, up to the border of the picture, where it is 90, and over all azimuth angles as well, from 0 to 360. Of course we average out the azimuth, because what we are interested in here is really the zenith profile of the direct transmittance. The second methodology uses 3D modeling: over Tonzi Ranch in California, the blue oak savanna, we got lidar data from a flight over the forest, measured the position and shape of every single tree, put that into the radiative transfer model and calculated the direct transmittance. For the Old Aspen site there is a much more, let's say, hardcore methodology, where people actually went into the field and measured the position of every tree within the plot, and so on.

When we derive the direct transmissivity as a function of zenith angle, we get these two results. The figure on the left-hand side shows the direct transmittance for the Old Aspen site in Canada, and the right-hand side for the blue oak savanna. The Maestra model is the 3D scheme, so the red line represents the 3D model, and the dashed lines represent the digital hemispherical photographs. What we can see is that the 3D model and the observations are underestimated by the two-stream scheme. So what we asked was: what number, obtained by inverting that equation I showed before, would make this black curve move on top of the other two curves? For the Old Aspen site this value is 0.66, and for the blue oak savanna it is 0.46.

Now that we have the clumping index, we run a land surface model with it. The land surface model we ran was JULES; JULES is the land surface model of UKESM, the UK Earth System Model. Here I show the diurnal profile of FAPAR for the Old Aspen site and the blue oak site. The green lines show the results from the two-stream scheme, the default land surface model JULES, and the red lines show the results for the modified version with the clumping index. What we can see is that for both sites, when we add the clumping, the fraction of absorbed radiation decreases in comparison to the default version of the model. Okay, so if we have less absorption, we would maybe expect less productivity. So what we did was, now that we have the radiative transfer scheme within a land surface model, simply run the full land surface model and get an estimate of gross primary production, GPP, the productivity of that forest. Here I show GPP versus time of day; the green lines show the default version of the model and the red line shows the new model. For the Old Aspen site, what we see is that the clumping index version increases productivity, and for the blue oak savanna we don't see much difference. That's key, because the blue oak savanna is a site in California, so it's very, very bright and very, very sparse, and light is not a limiting factor for productivity at that site; that's why we don't see much change when we correct the land surface model. Now, for the dense site we see an increase in productivity even though we decreased the fraction of absorbed radiation. Why is that? How is it possible that we decrease absorption but increase productivity? That is because of the nature of JULES, this land surface model: JULES is a multi-layer model, so photosynthesis is calculated in layers, and what happens when we add the clumping into JULES is that we create gaps in the canopy, allowing radiation to penetrate
deeper into the canopy and reach the bottom layers. All the extra productivity we see is extra GPP coming from the bottom of the canopy. This plot shows the vertical profile, and every time you see a red or pink area here, that is more productivity, so the productivity is really coming from the bottom layers.

Now, if we can do that at the site level, we can do it for the globe. We used a global clumping index map derived from MODIS that looks like this one. When the clumping index is closer to one, the canopy is closer to the homogeneous case, and when it is closer to zero, it is closer to a more clumped canopy, the more heterogeneous case. As you can see, the dark green areas represent needleleaf forests, which are very, very clumped, and that is why we have this pattern of clumping index across the world. When we introduce that global clumping index map into JULES and run it globally, we can actually see what the bias would be. In this plot I show the GPP difference between the model with clumping minus the model without clumping, and we get an extra productivity of 5.5 Pg of carbon per year globally, with much of the bias, 75%, coming from the tropics. When we compare that with the FLUXNET-MTE GPP, a global product of GPP that was upscaled from flux tower data with machine learning techniques, we get a better agreement with the observations when we add this heterogeneity of canopies.

To conclude, I think the main message is that it is possible nowadays to make a 1D radiative transfer scheme reproduce the radiation partitioning of more complex 3D ones, without losing the efficiency still needed by these Earth system models. When we add the clumping index into a land surface model, we get an impact on productivity, but mostly in light-limited regions like the tropics or very dense sites, and when we add clumping into this land surface model we increase photosynthesis worldwide by some 5 Pg of carbon per year. In the future, what would be the next steps, or what do we think would be the next steps? First off, model benchmarking: we have these models that are very simple but efficient, the big-leaf models, and other models that are very, very complex and very, very heavy to run; we should compare more of these models and see whether they agree, and if they don't agree, why not, and where they do agree. And we now live in a very exciting era: Eetu showed a beautiful data set of canopy structure from a tower, and we also have GEDI LAI profiles now, so we actually have remote sensing data scanning all over the globe and measuring the vertical profile of leaf area index. Now we can actually capture the vertical structure of heterogeneous canopies and hopefully bring all that extra information into Earth system models and perform better projections. Thank you very much for your attention.

Thank you, Renato, for the very interesting discussion and for the excellent scientific contribution. Next we'll move on to general ecosystem models, moving towards modeling responses and effects of whole ecosystems, with Mike Harfoot presenting.

Can you see my slides okay? And then I'll begin. So yeah, welcome everybody, thank you for the opportunity to present here, it's very kind. I'll go back to my slide view, sorry; can you still see my slides? (We can see them and we can hear you talking.) Okay, great, lovely. So thanks a lot, Adam, and the conveners, for the opportunity to present today. I'm going to be talking about some of the work we've been doing developing a class of
models called general ecosystem models, and my thoughts on how they might move us towards representing responses and effects of whole ecosystems under global change. I'm going to start off by presenting this figure, which demonstrates one of the striking aspects of our planet, and that is the diversity of life on it. That life is obviously fundamental for us having a habitable planet, and it is fundamental to the processes that we want to represent in the Earth system. But biodiversity is also complex, and this demonstrates that there are a load of processes happening at different levels of organization in that system, from evolutionary and biogeographic ones, through demographic, population and community ones, up to the functioning of organisms at broader ecosystem levels. Perhaps because of that complexity, many approaches to modeling ecosystems, particularly from a biodiversity perspective, take a statistical point of view, and that is particularly true from a terrestrial perspective. I'm going to make the argument here, since we're in the domain of the climate system, that the climate system is also complex, but we've had mechanistic models of the whole climate system, and we've been developing them for over 60 years now, from back in 1956 when the first general circulation model was put forward. I'm just going to give a health warning: from my point of view, these are early days for general ecosystem models; I think we're back in the 1950s and 60s of general circulation models. We're no longer on computers with only 5 kilobytes of memory, that has improved, but the simplicity of the model compared to reality is certainly something I would ask you to take home. Still, we are developing, and with lots of promise.

So the model we wanted to build really has some analogies to the climate system: like with those models, where we have a grid representation of space and we model the processes by which the climate system dynamics are determined, we wanted to generate a spatially explicit, mechanistic model of whole ecosystems, by which we mean all organisms that can be captured with a similar model formulation, on land and in ocean environments. That is specified at the level of individuals, the individual level, because that is the level at which ecology really plays out, and then we have emergent, higher-level properties that arise from the many interacting individuals. We are trying to develop the model in an open and reproducible manner, and so the code is available on GitHub. But, you know, to model life in that general way we have to make several abstractions away from reality, at least we do in the current state. The model is formulated by describing organisms, and I'll focus primarily on animals and touch on the plant part of the model only a little. We describe organisms according to a set of functional traits, so we don't treat species explicitly, with a taxonomic identity; we describe animals according to what their functional traits are. That means a set of categorical traits: their feeding mode, whether they're herbivorous, omnivorous or carnivorous; their metabolic pathway, whether they're warm-bodied or cold-bodied; how they move; and how they reproduce offspring in the model. And then a set of continuous traits: the adult body mass, the mass at which they become reproductively mature; a juvenile mass, which is the mass at which they enter the model; and then, at any point in time, a current body mass. And lastly, because we can't model every individual separately or independently, we cluster organisms together into what we call cohorts, so the fundamental agents of the model have a set of traits, including a current body mass that varies through time, and also an abundance of individuals in that cohort. Those agents undergo a set of what we describe as fundamental ecological processes: they metabolize, they feed, they reproduce depending on their environment, and they also disperse from one location to another.
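As an illustration of the trait-based, cohort-as-agent formulation just described, here is a minimal sketch in Python; the names, trait values and the metabolic rule are placeholders of mine, not the actual general ecosystem model code:

    from dataclasses import dataclass
    from enum import Enum

    class Feeding(Enum):
        HERBIVORE = 1
        OMNIVORE = 2
        CARNIVORE = 3

    class Metabolism(Enum):
        ENDOTHERM = 1   # "warm-bodied"
        ECTOTHERM = 2   # "cold-bodied"

    @dataclass
    class Cohort:
        """A group of functionally identical individuals: the fundamental agent of the model."""
        feeding: Feeding          # categorical traits
        metabolism: Metabolism
        adult_mass_g: float       # continuous traits: mass at reproductive maturity
        juvenile_mass_g: float    # mass at which individuals enter the model
        current_mass_g: float     # varies through time
        abundance: float          # number of individuals represented by this cohort

        def metabolize(self, dt_days: float) -> None:
            # placeholder mass-loss rule standing in for a real metabolic-rate formulation
            self.current_mass_g *= 1.0 - 0.001 * dt_days

    # one grid cell holds many cohorts that feed on, compete with, and eat each other
    cell = [
        Cohort(Feeding.HERBIVORE, Metabolism.ENDOTHERM, 5e5, 2e4, 3e5, abundance=120.0),
        Cohort(Feeding.CARNIVORE, Metabolism.ENDOTHERM, 2e5, 1e4, 1.5e5, abundance=8.0),
    ]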
We can run the model flexibly anywhere in the world, as I say, on land and in the water, and also with varying grid sizes, reflecting the different experiments that we want to perform. And what we find really amazing is that, I mean, the model is incredibly simplistic, but because it has this individual-based approach it can generate a large number of different estimates or predictions for variables that are relevant for biodiversity and for ecosystem function. Although in any given location those can be quite wrong, and I'm happy to talk about some of those mismatches in detail if you want to, they also generate predictions which are very plausible compared to reality. So you can compare the model to empirical data across a range of scales. On the individual scale, the model predicts the growth rate of individual organisms in a really realistic way; apologies, I have that the wrong way around: the black points are empirical and the grey points are model predictions. Still on the individual scale, the success of individuals in the model follows a similar trend to that of the empirical data, but with greater variation, which may reflect sampling bias or other processes going on both in the empirical data and in what the model is capturing. At the community scale, we make predictions about the numbers of individuals of different body sizes in an ecosystem, and here I'm showing them in red for carnivores, blue for omnivores and green for herbivores; when we compare those to empirical data for the same trophic levels within an ecosystem, the slope of that relationship resonates with the empirical data. And if we look at broad-scale, macroecological patterns, in this case comparing, on the top, the model-predicted functional richness, the diversity of ecosystem functioning in a given environment, here focusing on animals, with empirical functional richness, we find broad similarities: where the model finds greater richness of animals, it compares well to the empirical data.

In addition, the model can speak to ecosystem functioning, so to functional rates. Beyond the functional diversity that I started to introduce on the last slide, the model can talk about the mean trophic level of ecosystems, a variable which describes how long food chains are in the system, which has been used quite widely in fisheries science to talk about how marine ecosystems have been affected by fishing pressure, for example, and which is a measure of how fast energy flows through ecosystems from primary producers up to apex predators. We can talk about secondary production, so how much of primary production has been consumed by herbivores. We can talk about biomass and nutrient turnover rates, and indeed insectivory, how much feeding on insects, for example, is occurring, which is important from a pest control perspective, and then also functions like
nectarivory or frugivory, which are things we're focusing on at the minute: how much pollination service might be provided by nectar-eating organisms, or how much seed dispersal might be provided by fruit-eating organisms, for example. The model has already been used for some quite interesting experiments, particularly in the realm of rewilding and restoration, and it has, I think, great potential for trying to inform how nature-based solutions might play out in the future and how effective they might be. From a rewilding perspective, on the left here I'm showing a trophic interaction network from the model for a few sets of sites where we've removed the big, large carnivores from the model and looked at what effect that might have on the ecosystem structure. The orange points on the left show the original ecosystem state, and the blue points and interactions represent the state that emerges once we've taken out large carnivores, so we see this real restructuring of the ecosystem, and that would have effects, essentially, on the vegetation dynamics in that system as well. On the right we're modelling what happens when we remove, essentially, mega-herbivores, the largest herbivores, from the simulations, starting in a pre-human world, in a Holocene state where we had really large grazing mammals around across lots of the world: what happens when we remove those, and what even happens if we take out large herbivores from the present day. What we find from the simulations is that the smaller animals, particularly smaller herbivores, don't compensate for the loss of those larger animals from the ecosystem, so we see a reduction in the total biomass of organisms.

Then, in parallel to all of that, we've been using the model in the same way that climate models are used, for scenario exercises, as part of the intergovernmental platform on biodiversity and ecosystem services, which is the biodiversity equivalent of the IPCC. To be very succinct, we've been using the model to reconstruct ecosystems in the year 1900 and run them forward in time using historical data on climate and land use through to the present day, or 2005, and then under a set of scenarios into the future, looking at how ecosystems, and here again I'm talking mostly about animals, might change into the future. When we do that, we find that there have been some very consistent, also very persistent, changes happening to ecosystems through time. The middle two figures I'm showing here represent global averages for the mean trophic level and for ecosystem metabolic rate. We can see that food chains have been getting shorter through time: we've been removing apex predators, the larger, higher-trophic-level individuals, from systems and squashing food chains down to be shorter, and we've seen an increase in the metabolic rate of ecosystems overall as the energy turnover of the systems has increased. And metrics of functional diversity show we've lost quite a lot of functional richness. In these panels on the right, the thick line is, in a sense, the global average and the thin lines represent sub-regions, essentially different parts of the world, and we see quite a lot of variation across regions, with some regions declining by 25% compared to their state in 1900, while on average across the world we're declining by 5% to 10%. In conjunction with that functional richness change, where functional richness describes the diversity of that animal
Now, we know from recent analyses that there will be significant effects of those changes in the functional composition of animals on the Earth system, in addition to on human society. There's a nice paper from 2018 in which Schmitz and colleagues tried to summarise the effect of animals on Earth system dynamics, particularly nutrient cycling, and they coined the phrase zoogeochemistry to describe how animals might be playing a role in geochemical cycling. Another paper, by Chris Doughty and colleagues back in 2016, tried to estimate the effects of the removal of large mammals from ecosystems, in this case on phosphorus cycling: in previous times, large mammals both in the oceans and on land, combined with anadromous fish and seabirds, may have been recycling large amounts of phosphorus from marine systems back onto land and dispersing it across the landmass. Just to dig into that in a little more detail, the Schmitz paper summarised various empirical studies in which particular types of animals have been excluded from ecosystems and the rates of carbon cycling, in this case, have been measured to see what effect those animals had on cycling rates. The figures are actually quite hard to interpret, but the green lines represent cases where the animal exclosures decreased carbon cycling rates, in other words where animals were having a positive effect on carbon cycling, and the pink lines represent the opposite, with herbivores on the left and carnivores on the right. The key point I want you to take away from this is that the magnitude of the effects of animals is really quite big, similar in size to the rates seen when animals are not present in the system, so in other words they could be having quite a significant or prominent effect on carbon cycling rates in different ecosystems. Putting it all very simply, the plot on the right-hand side shows how much of primary production is being consumed by animals across different ecosystems, based on empirical data: in some ecosystems, particularly grasslands, we get as much as a third of the NPP being consumed by herbivores, and across a lot of forests, temperate and tropical, we get around 7% of the NPP being removed by animal consumers, by herbivores. Of course, in forested systems the primary agents of that herbivory are insects, and a really neat study by Dan Metcalfe and colleagues back in 2013 compared the nutrient budgets of study locations where insect herbivores had been excluded and where they were present. They found that insects feeding on leaves, particularly leaves in a really nutritious state, with high nitrogen-to-carbon and phosphorus-to-carbon ratios, led to additions of nitrogen and phosphorus to the soils that were comparable to background rates for nitrogen, or even far greater than background rates for phosphorus, so the animals in this system are playing a really important role in nutrient cycling. So this leads me on to the kinds of developments we're interested in for our general ecosystem model that might relate to linking animals into Earth system models.
We've been comparing the predictions we make of insect herbivory rates in tropical forests to empirical data. This is very early work, done in collaboration with Dr. Andrew Abraham at NAU, and we find a really interesting pattern: the pleasing thing is that the model isn't outrageously bad, though it's not entirely consistent either. The most interesting thing to me is that in some locations, like this set here which runs along an elevation gradient, because we're not representing anything about elevation effects in the model, we don't capture any of the variation in herbivory rates across the gradient, so it points to processes that might be important; but as a broad, finger-in-the-air assessment, the model is not doing a terrible job of representing insects and their herbivory rates in tropical forests. One of the changes we're making is incorporating vertical structure into the model. We've been working again with Chris and colleagues at NAU, using GEDI lidar data to try and get a better handle on the vertical structure of vegetation in ecosystems so that animals in the model can interact with it. At the minute the model works as in panel A, so all the animals in the model interact with is essentially a green slime spread over the floor, whereas clearly we'd like to have some vertical structure to differentiate where productivity, and the different types of productivity, occur in ecosystems, so that different animals can access them. In essence, in tropical forests much of the production is unavailable to larger animals, which can't access the tops of trees because the branches aren't strong enough, so that work is ongoing, trying to have animals interact with vertical structure. Another key piece of work is allowing the animals in the model to interact with a more sophisticated model of vegetation dynamics. In the original version of the model we used a semi-mechanistic carbon cycle model to determine net primary production and the allocation of production to different plant structures or compartments, but in collaboration with Almut Arneth and Jens Kraus at KIT down in Garmisch we've been linking the model to the LPJ-GUESS model which they run there, which, as many of you probably know, is an individual-based model of vegetation dynamics that also uses a functional trait-based approach. Here's just a snippet of the outcome of that, which I find really fascinating. At the moment the only coupling we've got is LPJ-GUESS driving herbivory in the model, so the herbivores aren't affecting the vegetation structure in LPJ-GUESS, but this simple change seems to make a big difference to the realism of both the biomass of herbivores we have in the model and the rates at which they consume vegetation. In these figures at the bottom, the black line represents the empirical data I showed earlier on herbivore biomass and herbivore consumption as a function of net primary productivity, the green points represent the original model version, and the blue triangles are this one-way coupled version. We're finding a far more consistent relationship of herbivore biomass and herbivore consumption to productivity with the one-way coupled model, so just a better representation of plants seems to improve the model greatly. Obviously the next step for us there is to incorporate a two-way coupling, where the animal herbivory affects the vegetation composition in LPJ-GUESS and nitrogen dynamics can play a role in herbivory.
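As an illustration of what the one-way coupling described above means in practice, here is a toy sketch; the update rules are simple placeholders standing in for LPJ-GUESS and the animal model, and only the direction of information flow reflects the setup described.

```python
import numpy as np

# Illustrative sketch only: a one-way coupling in which a vegetation model drives
# herbivory without receiving the herbivore offtake back. `vegetation_step` and
# `herbivory_step` are toy placeholders, not LPJ-GUESS or the Madingley model.

rng = np.random.default_rng(2)

def vegetation_step(leaf_biomass, npp):
    """Toy vegetation update: growth from NPP, constant background turnover."""
    return leaf_biomass + npp - 0.05 * leaf_biomass

def herbivory_step(leaf_biomass, herbivore_biomass):
    """Toy herbivore update: intake saturates with available leaf biomass."""
    intake = 0.02 * leaf_biomass * herbivore_biomass / (leaf_biomass + 100.0)
    herbivore_biomass += 0.3 * intake - 0.01 * herbivore_biomass  # growth minus mortality
    return herbivore_biomass, intake

leaf, herb = 500.0, 10.0
for year in range(50):
    npp = rng.normal(50.0, 5.0)
    leaf = vegetation_step(leaf, npp)          # vegetation model runs first...
    herb, eaten = herbivory_step(leaf, herb)   # ...and its output drives herbivory
    # one-way coupling: `eaten` is NOT subtracted from `leaf`;
    # a two-way coupling would instead do `leaf -= eaten` here.
print(f"leaf biomass {leaf:.1f}, herbivore biomass {herb:.1f}")
```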
So I'm going to finish up with some next steps, particularly for the Madingley aspects of this modelling. I think one of the key things we need to do is bring the model closer to data, using data integration techniques and Bayesian statistics, so that we can do several things. One is to formally understand where uncertainty comes from in the ecological processes that we have in the model, because I think, at least in ecology, we don't have that broad overview of which processes are more uncertain than others, and that will also guide more objective model development, in terms of which improvements we try to make to the model. It obviously also allows us to make predictions with uncertainty, which we can attach to the information we provide to decision makers, and it will probably help with the development of ecological forecasting techniques, combining simulations with observational data to generate a reanalysis-type approach. Lastly, to my mind there's really great potential, and real importance, in linking animals into our consideration of land surface processes, primarily because animals can be very important in driving dynamics. We see that in some important examples, for instance boreal ecosystems, where pest outbreaks can drive nonlinear dynamics, interacting with fire, and where the whole ecosystem context is important for those outbreaks; the same goes for reindeer, for example, in tundra-type ecosystems. With that, I'll say thank you very much for your attention, thank you again to Adam for the opportunity to speak, and I'm happy to answer any questions if there are any.

Thank you, Mike, for the fascinating and excellent results from the Madingley model. It's absolutely wonderful to see, and I think it addresses one of the major underserved areas of land surface models currently, so that's wonderful work. In the interest of time we'll probably skip the questions, as we have one last talk to give, which is Milos's. We're going to conclude our session with a true digital twin that directly couples visually and physically realistic atmospheric and vegetation models: Milos, from Adam Mickiewicz University, will now present an exciting development of 4D, or interactive 3D, eco-climate simulations.

Alright, I hope you can hear me well and can see my screen share, can you see the slides? I hope so. Alright, thank you for the opportunity to present our results. As Adam introduced me, I'm a PhD student at Adam Mickiewicz University in Poznań, and in our team we have worked on ecosystem and cloud simulation, mainly for the graphics community. Our aim was to create realistic 3D simulations with an emphasis on computational efficiency. The results we want to present today are based on our attempt to combine three models, the previously mentioned ecosystem and cloud models together with a soil model. This allows us to describe a more comprehensive water cycle, and consequently a more accurate simulation of the eco-climate, suitable for generating highly realistic renderings of outdoor scenes. The joint simulation of these models is a challenging task. In our method we propose several different simulation spaces; the main goal is to maintain a balance between accuracy and computational efficiency. First of all, we use a continuous ecosystem space, which describes the positions of plants. We also use a voxel space which expresses the varying light conditions available for plant development, and another voxel space for the simulation of clouds. Finally, we use vapor and precipitation maps, which describe the water exchange between the different simulation spaces.
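To make the idea of separate simulation spaces more concrete, here is a minimal sketch of plausible containers for them; the grid sizes, field names and types are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from dataclasses import dataclass, field

# Illustrative sketch only: plausible containers for the simulation spaces listed
# above (continuous ecosystem space for plant positions, voxel grids for light and
# clouds, 2-D vapor/precipitation maps, plus soil water). Shapes are placeholders.

@dataclass
class Plant:
    x: float          # continuous position on the terrain [m]
    y: float
    species: int
    biomass: float

@dataclass
class EcoClimateState:
    plants: list = field(default_factory=list)                                        # continuous ecosystem space
    light_voxels: np.ndarray = field(default_factory=lambda: np.ones((64, 64, 16)))   # light available to plants
    cloud_voxels: np.ndarray = field(default_factory=lambda: np.zeros((64, 64, 32)))  # cloud water content
    vapor_map: np.ndarray = field(default_factory=lambda: np.zeros((64, 64)))         # water leaving the surface
    precip_map: np.ndarray = field(default_factory=lambda: np.zeros((64, 64)))        # water returning to the surface
    soil_water: np.ndarray = field(default_factory=lambda: np.zeros((64, 64)))        # plant-available soil water

state = EcoClimateState(plants=[Plant(12.5, 40.0, species=0, biomass=1.0)])
print(len(state.plants), state.vapor_map.shape, state.cloud_voxels.shape)
```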
Here you can see an overview of our model. First of all, a user can specify the plant species, an elevation model of the terrain, as well as the climatic parameters used in the simulation. The vegetation model is responsible for generating the distribution of plants. This distribution is then transformed into a vapor map according to the transpiration attributes of the different plant species. The vapor map is used by the weather model to evaluate the formation of clouds; as a result of simulating cloud dynamics, precipitation occurs and is subsequently accumulated in a precipitation map. The precipitation map is the input to the soil model, which describes the amount of water available to the plants. This closes the water cycle in our model, and we repeatedly evaluate these steps over time to model eco-climate dynamics. Our vegetation model is based on the observation that trees are self-similar structures. We propose a multi-scale approach which lets us achieve interactive simulation times while maintaining high accuracy. We build the instances of trees by composing a number of modules which describe the topology of tree branches; the distribution of modules is a result of their self-organization. Finally, we create the geometry of branches for the purpose of rendering, based on the topological description. The self-organization of tree modules is based on the competition of branches for light and for space. We calculate the available light at each node of the tree graph representation, accumulate the light values, and compute the vigor as the plant response; vigor, which describes the growth potential of the plant, is then used to control the growth of tree modules. Furthermore, we compute the intersections of the bounding volumes of the modules and minimize them using a gradient-based method; as the result we obtain a distribution which minimizes the intersections between the modules. We simulate the weather by solving a set of partial differential equations using an Eulerian solver. This allows us to model transitions between the different states of water in clouds. We extended the Kessler scheme by accounting for plant evapotranspiration and micro-climatic vapor, which is external vapor not considered directly by our models. The essence of this model is the phase transitions of water between vapor qv, condensed water qc and rain qr; the computed rain qr is accumulated in a precipitation map used to calculate the amount of surface water over time. Furthermore, our soil model describes the infiltration of surface water and stores the amount of soil water available for plant growth. Specifically, plants take up soil water to grow and improve the rate of infiltration from surface to soil water. We also take into account the topography by calculating water diffusion and advection based on the gradient of terrain slopes. Our soil model is based on the work by HilleRisLambers et al.
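As a rough, heavily simplified illustration of the phase transitions and the surface-to-soil water hand-off just described, a single-column sketch might look like this; the rate constants are placeholders, and the real model solves the full PDEs on a 3D voxel grid with an Eulerian solver.

```python
# Illustrative sketch only: a single-column, Kessler-style set of phase transitions
# (vapor qv -> cloud water qc -> rain qr) plus a toy surface/soil water exchange.
# Saturation, autoconversion and infiltration constants are made-up placeholders.

def kessler_like_step(qv, qc, qr, qv_sat=0.01, k_cond=0.5, k_auto=0.1, dt=1.0):
    cond = k_cond * max(qv - qv_sat, 0.0) * dt   # condense excess vapor into cloud water
    auto = k_auto * qc * dt                      # autoconversion of cloud water into rain
    return qv - cond, qc + cond - auto, qr + auto

def surface_and_soil_step(precip, surface_water, soil_water, biomass, dt=1.0):
    # plants improve infiltration of surface water into the soil, as described above
    infil_rate = 0.1 + 0.4 * biomass / (biomass + 1.0)
    infiltration = infil_rate * surface_water * dt
    uptake = 0.05 * biomass * soil_water * dt    # plants take up soil water to grow
    surface_water += precip - infiltration
    soil_water += infiltration - uptake
    return surface_water, soil_water

qv, qc, qr = 0.02, 0.0, 0.0            # toy mixing ratios (kg water per kg air)
surface, soil, biomass = 0.0, 0.1, 0.5
for step in range(100):
    qv, qc, qr = kessler_like_step(qv, qc, qr)
    precip, qr = qr, 0.0               # rain reaching the ground goes into the precipitation map
    surface, soil = surface_and_soil_step(precip, surface, soil, biomass)
    qv += 0.001 * biomass              # evapotranspiration returns vapor to the atmosphere
print(f"soil water after 100 steps: {soil:.3f}")
```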
We extended the model by adding an advection term and also by computing the biomass factor from the discrete plant distribution computed by our ecosystem model. We also sample time-averaged precipitation values from the weather model instead of assuming a constant value. Now let us show you some results. Using our hybrid continuous and discrete modeling approach, we are able to express well-studied spatial vegetation patterns arising predominantly in arid climates; this includes the dynamic formation of spots, labyrinths and gaps. Here you can see the impact of vegetation on cloud formation; we are able to simulate different types of clouds based on the distribution and types of plants. Of course, there is also an impact in the opposite direction, from weather to vegetation. In the top row of images you can see a dense forest growing in Yosemite valley and a thick cloud layer passing through. We then modify the vapor values and continue growing the forest; typical ribbon-like structures emerge due to the spatial patterning of plants, and finally severe forest dieback occurs, resulting in an arid impression of the landscape. We can also observe a bi-directional feedback between forests and clouds. First we develop a mature forest; after deforestation we can observe a change in cloud composition, and as a result of wind and precipitation, plants are able to regrow on the empty terrain, which in turn leads to a recreation of the initial cloud composition. Due to the local description of temperature and water, we are able to simulate varying microclimates. For example, we can observe edge effects forming at the boundaries of a forest due to the different climatic preferences of species; when the forest regrows after deforestation, a characteristic species composition emerges at the new edges. We also simulated the foehn effect, which occurs when cold air approaches a mountain from one side; as the result of the warming on the lee side, a different microclimate with a different plant species composition develops. This scene is composed of about 200,000 trees, which are composed of over 1 million modules; the terrain is 4 kilometers wide, and the average computation time of a simulation step is about 8 seconds. You can also notice that we simulate this on two different time scales: one is the scale of the ecosystem simulation, where one step corresponds to one quarter of a year, and simultaneously we calculate the weather simulation at a much finer step of 6.5 seconds. These are the simulated-time values, and to compute such a step we require just a few seconds of computation. Thus our interactive simulation enables us to validate the underlying model hypotheses very efficiently compared to more accurate but computationally expensive methods. Thank you. If you have any questions, I'm open to them.
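As a small illustration of the two nested time scales mentioned in the talk, the sketch below sub-cycles a fast weather step inside a slow ecosystem step; the step lengths follow the talk, while the update functions are empty placeholders rather than the actual solvers.

```python
# Illustrative sketch only: nested time stepping with a slow ecosystem step
# (assumed here to be one quarter of a year) sub-cycled by fast weather steps
# of 6.5 simulated seconds, as stated in the talk. Updates are placeholders.

ECO_STEP_S = 0.25 * 365.25 * 24 * 3600   # one quarter of a year, in seconds
WEATHER_STEP_S = 6.5                     # fine-grained weather step, in seconds

def weather_step(state, dt):
    return state    # placeholder: advance clouds and precipitation by dt

def ecosystem_step(state, dt):
    return state    # placeholder: grow, kill and disperse plants over dt

state = {}
n_weather_substeps = int(ECO_STEP_S / WEATHER_STEP_S)
for _ in range(n_weather_substeps):          # many fast weather sub-steps...
    state = weather_step(state, WEATHER_STEP_S)
state = ecosystem_step(state, ECO_STEP_S)    # ...then one slow ecosystem step
print(f"{n_weather_substeps} weather sub-steps per ecosystem step")
```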
Thank you, Milos, for the fascinating talk, truly impressive work. If we have time now for discussion and questions, perhaps this would be a good moment, if there is some time remaining. Katja, do we have time for a Q&A?

Yes. Okay, there aren't any in the chat, if people have questions; otherwise we just see a DOI request in the chat from Fabian.

I'm sorry, I have some problem with your question. Yes, there is one now, from Fabian in Reading: hey Milos, really interesting presentation and nice visualizations, could you provide the address, the DOI, that is the question.

Okay, so I can only provide references to the two previous works that I mentioned, namely the ecosystem simulation and the cloud dynamics simulation, because the joint simulation of those two, which I presented today as the eco-climate project, is not published yet; I can only provide the references to the previous works.

There's another question from Peter Rubig: are there any filters or sensors used for the data, or is there a database? What exactly is the question about, what kind of data do you mean? Yes, I don't understand it either; Peter Rubig is the person who asked whether any filters or sensors were used for the data. Okay. No, we actually did not use any sensors for the data; we just used some visual references we could find and tuned the parameters ourselves.

That seems to be all the questions in the question-and-answer log. Great, thank you. Well, it sounds like this concludes our symposium. On behalf of everyone, we would like to thank the speakers and the co-authors for their remarkable scientific contributions. We've had a series of truly amazing works here that I think paint a picture of the future of land surface models. Thank you all for joining us today and taking part in the symposium. Exciting times are ahead as we push technology to new limits.