Our next speaker is Alexey Voinov. He'll be talking about low carbon and climate mitigation. Well, thanks for the invitation. Thanks for the opportunity to talk to you. It would probably be hard to choose two talks more different than the ones you're going to be listening to, so basically you need to really switch gears from, let's say, real science to the kind of hand waving that I'll be doing. I've been associated with CSDMS for quite some time, probably more than a decade, and it's really impressive how the whole program has evolved. Apparently at the moment we are also at the stage where there is at least some consideration of the humans in landscapes and in these geomorphic forms, and that's probably one of the reasons why I'm here talking to you about something that we're doing right now in this project. I'm a computer scientist by training, but I've worked a lot in environmental modeling, doing some watershed modeling in the past, then gradually evolving more and more towards the human aspects through stakeholder involvement and eventually to economics. I'll try to explain some of the reasons that drive me in this direction and make it interesting for me to explore these issues. First of all, this is something that really bothers me, honestly. We're at a period of time with what I call the triple whammy of today, when we have three components that are interacting quite heavily. Let me try to show you what I mean. We've got climate change. As a result of climate change, we're expecting increased variability in floods and droughts and hurricanes and all those kinds of nasty things coming, which you are well aware of. But if you start to extend this chain of events, what you also realize is that we need to somehow prepare for that, if not avoid it, which means that we need more energy to do that.
We need to build more dams, we need to build more levees, and there's more energy required for that. At the same time, we're running out of cheap energy. Let's face it: the energy that we're extracting today costs us ten times more than what we extracted 60 years ago, which means that the efficiency of the fossil system that we are largely based on is declining. So immediately we also see a feedback effect, because in fact the burning of these fossil fuels is what's driving climate change. Whenever we have these kinds of positive feedbacks, that's already something that makes you worry. So indeed, one of the things we're looking to in order to avoid peak energy would be alternative energy: let's find something else. The nasty thing is that in order to do that, we also need to build new infrastructure. Another positive feedback. Basically, alternative energy doesn't come for free. We don't have the infrastructure; we have to build it, we have to develop it, and these things also require more energy. The other thing is that all this is happening globally. Whereas before, these kinds of problems were more or less localized, the Maya civilization and the different empires that collapsed in the past were more or less local, now everything is really globalized. That's what really drives my interest in this. Something is going to happen, let's face it. The only question is when, and to what extent we can avoid it. The interesting thing is that we actually know what we have to do, except we don't do it, because this involves changes in human behavior, and we hate that. We see the past history of climate mitigation, and we see that even up till now, well, first of all, climate change is supposedly not real, we don't want to believe in it, in some countries especially. And the result is that the trends are more or less the same, per capita more or less the same.
So even though we know that something is wrong, we're not really doing much about it. The question is, why is this? And how can we, as scientists, the "real" scientists as opposed to the social scientists, help? I'm saying this not because I think so; it's just the kind of sentiment that I hear very often, even in the university where I work, that there is a big difference. I think that's actually wrong. But let's try to apply some of our thinking to analyze some of the models that the social scientists are using. Immediately you find that there are a lot of problems with the economic models that are used today. First of all, they still assume that we have abundant natural resources. Secondly, as a result, they are really based on demand: whenever you're looking at models of future economic development, it's demand that drives supply. The assumption is that if there is demand, there will be supply. That's definitely wrong. They use some absolutely ridiculous assumptions about the goals of the whole economic system: even now, everybody's talking about how economic growth is great and how this is something that has to be maintained and developed on a limited planet. Well, give me a break. They use some absolutely crazy indicators, such as GDP, again assuming that this is actually an indicator of our welfare, of our well-being, which is totally wrong, and there are a lot of explanations and examples of why this is wrong. Most of the economic models actually operate at the margin, meaning they don't really expect any dramatic system change, any structural change. They assume that the whole system adapts and evolves by accommodating some of the changes in the world that we are looking at. No structural change. And they are also spatially uniform: either they're global or they're local, and there are very few models that I know of which actually link the different spatial and temporal scales.
They also use very simple assumptions about human behavior: rationality and homogeneity are some of the basic ideas about what economic systems are about, and obviously there is no account of social learning and the fact that people are actually adapting somehow to what's going on. This is a European-funded project that I'm part of at the moment where, in a way, we're trying to apply some of the knowledge that we have gained from natural systems modeling to try to improve some of the economic models that are available. It's a joint effort between economists, social scientists, and natural scientists, where we have this kind of model zoo at the moment: a mix of open-source and proprietary models which describe all sorts of systems, covering a whole suite of different modeling paradigms, starting with integrated assessment models, hydrology models, land use models, agent-based models, computable general equilibrium models, simple data models, and some of the conceptual models that come from stakeholder interaction. What we're trying to do is make some sense of these models, and in particular a work package on model integration is designed to try to understand how these different models talk to each other, how they can actually provide information from one model to another. It sounds similar to what we've heard about the other types of models that CSDMS is about, except that one major difference that we're running into, and in fact the complication, is that we're dealing with models that are driven by very different paradigms. Connecting, for instance, an agent-based model with a computable general equilibrium model is a problem by itself. Integrated modeling is what we're all so fascinated by, and there are some problems which, when you're doing this kind of transdisciplinary modeling, are only aggravated. We've seen this before in the models that we've been developing on the natural side of the story.
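To make the coupling problem concrete, here is a minimal sketch of what loose coupling between two paradigms can look like: a toy agent-based model and a toy market-clearing step standing in for a CGE model, exchanging only aggregate quantities. Every name, equation, and number here is invented for illustration; nothing in this block comes from the project itself.

```python
# Hypothetical sketch: loose coupling between an agent-based model (ABM)
# and a toy market-clearing step standing in for a CGE model. Only two
# aggregate quantities cross the model boundary: total demand and price.

def cge_step(demand, supply, price, adjust=0.1):
    """Toy market-clearing rule: nudge the price toward equilibrium."""
    return price * (1 + adjust * (demand - supply) / supply)

class Agent:
    """An agent with a fixed budget, spent entirely at the current price."""
    def __init__(self, budget):
        self.budget = budget

    def demand(self, price):
        return self.budget / price

def coupled_run(agents, supply, price, steps):
    """Alternate the two models, exchanging only aggregates each step."""
    for _ in range(steps):
        total_demand = sum(a.demand(price) for a in agents)  # ABM -> "CGE"
        price = cge_step(total_demand, supply, price)        # "CGE" -> ABM
    return price

agents = [Agent(budget=10.0) for _ in range(5)]
final_price = coupled_run(agents, supply=40.0, price=1.0, steps=50)
```

The point of the sketch is the interface, not the economics: the two components never see each other's internals, which is exactly what makes the semantic questions (what does "price" mean in each model, in what units, at what scale?) the hard part.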
Here, things become even further complicated. We run into pretty much the same list of software issues as with those systems. In fact, this is from Scott's paper, where these issues are very well described, and pretty much what CSDMS has been doing over these years is gradually trying to sort out some of these issues, with a varying level of success, but these things are being fixed. From the modeling angle, we still see some problems. First of all, what we realize, and this is actually something that comes out very nicely once you start discussing these models with the stakeholders and with the users, something that is in a way addressed with the EKT efforts, is that there is still a long way to go in terms of delivery of these results: visualizing them, and explaining how the different terms and assumptions used in conceptual modeling, for instance, can be translated and transferred into numerical quantitative models and vice versa. How do we actually visualize our results to bring them back to the stakeholders and make them understand them? To what extent modeling is art and to what extent it's science is still a big question. In fact, this is something that I like to illustrate with artistic metaphors. I think I've shown it already too many times, but I still think it's a great illustration of what might actually happen when we put models together without sufficient reasoning and thought put in place. Here is a supposedly nice way to integrate two models: we have one component that is coupled with another component, and the result, at least to most people, seems to be visually appealing. If we use more or less the same rules and do another type of integration, we get something like this, called Collective Invention, by Magritte. The rules are the same, yet the result does not seem to be as visually appealing as the previous one. So I suggest we call these kinds of things "integronsters."
So that's what comes from integrating pieces without putting enough thought into how we actually do it. Another simple example: in geometry, we can also run into pretty nasty constructs. We can take a geometrically perfectly well-designed picture of a belvedere, and another one which is also geometrically perfectly well arranged; everything is in place, all the columns, everything makes sense. And then if we try to put them together, again following very simple rules (a column is a column, we connect columns to each other), suddenly we realize that we start climbing a ladder from the inside and end up on the outside, and something is totally twisted here, and that's a problem. Again, we applied the rules and got results which were not necessarily what we expected to see. What also comes with piling models one on top of another is this complexity curve. It's harder to communicate the models, harder to explain them, harder to get stakeholders to actually use them appropriately. This is something that constantly comes to the fore when we start communicating these models to the decision makers and to the policy makers. Another alternative, which has also been developing, and which describes some of the models that we're dealing with in COMPLEX, is the so-called integral models. Again, that's the terminology that I suggest we use: a model that is used to describe the whole system, but from a certain angle. So you can build a regional model from the point of view of its ecology, or from the point of view of its business, for instance. Then basically you're talking about knowledge integration: what kind of knowledge comes from one type of model of the whole and another model of the whole, and how can you actually put these things together?
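An integral model in this sense can be sketched as a small stock-and-flow system built as one piece from the start, in the system-dynamics style. The stocks, flows, and parameter values below are purely illustrative assumptions, not any model from the project.

```python
# Hypothetical sketch of an "integral" model: the whole region is
# described from one angle (resource economics) by two stocks built
# together from the start, rather than by wiring up pre-existing
# components. All parameters are illustrative.

def integral_model(resource=1000.0, capital=10.0, years=100, dt=1.0):
    """Run the toy stock-and-flow model; return the (resource, capital) trajectory."""
    history = []
    for _ in range(int(years / dt)):
        extraction = min(0.2 * capital, resource)  # flow out of the resource stock
        investment = 0.3 * extraction              # part of output is reinvested
        depreciation = 0.05 * capital
        resource -= extraction * dt
        capital += (investment - depreciation) * dt
        history.append((resource, capital))
    return history

history = integral_model()
```

The disciplinary knowledge (geology, economics) is baked into the flow equations at design time, which is exactly what distinguishes an integral model from an integrated one assembled from separately built components.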
The metaphor that comes to mind when you're dealing with these sorts of models is the old parable about the elephant being explored by six blind men coming at it from different sides. The one who touches the trunk thinks it's a snake, the one who touches the tail thinks it's a rope, and so on; you know the story. So again, it's not really clear how you communicate these models, and what's even more important, and more difficult, is how to connect them with the other, quantitative types of models that we use, which are more sector-based and discipline-based. At the moment, this is the first cut of models that we're dealing with and trying to connect. The agent-based model, for instance, is supposed to use the CGE model for updating the state of its agents at different local or regional levels. There is the system dynamics model, which is kind of an integral model, and again, what we want to do is compare, for instance, the results that come from here to what can be extracted from this other integrated effort. In any case, the technological imperative seems to be quite similar to what CSDMS has been dealing with over these years. In a way, we're trying to make these models less of an art and more of a science, more of a software tool, which then allows us to adopt and use some artificial intelligence approaches to sort through the different features and different options we have with these models, using ontology engineering and metadata. The development of standards is an extremely important step that we're very much involved in at the moment. For example, and I don't know if it's a good term, it's still under discussion, we call it a metamodel. I know it's an overloaded term, but if we have metadata, which are data about data, I thought that a metamodel could also be a model of a model. So why not?
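To give a feel for what such a metamodel record might contain, here is a hypothetical example: a structured description of a model (paradigm, scales, inputs, outputs, assumptions) plus a minimal validity check. The schema, field names, and the example record are all invented for illustration; they are not the standard under discussion in the project.

```python
# Hypothetical "metamodel" record: metadata describing a model so that
# integration tooling can reason about compatibility. Schema and field
# names are invented for illustration only.

REQUIRED_FIELDS = {"name", "paradigm", "spatial_scale", "temporal_scale",
                   "inputs", "outputs", "assumptions", "license"}

def validate_metamodel(record):
    """Check that a record carries the minimum information needed for integration."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"metamodel record missing fields: {sorted(missing)}")
    return True

abm_record = {
    "name": "regional-land-use-abm",            # hypothetical model
    "paradigm": "agent-based",
    "spatial_scale": "regional",
    "temporal_scale": "annual",
    "inputs": [{"variable": "commodity_price", "units": "USD/tonne"}],
    "outputs": [{"variable": "land_use_change", "units": "ha/yr"}],
    "assumptions": ["boundedly rational agents", "no migration"],
    "license": "open-source",
}
validate_metamodel(abm_record)
```

Note that "assumptions" is an explicit field: as the talk argues, communicating the assumptions baked into each model is the hard and essential part, and a metamodel standard forces them out into the open.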
What we're talking about here is a hierarchy of different standards that can be used to describe these models, with the goal of facilitating a whole series of different uses and applications, including model integration and interfacing these models with each other and with the stakeholders. The real trick in this case, and something that, again, Scott is very well aware of, is the fact that we need to communicate the assumptions that are in these models. That's the tricky part. The challenges are, first of all, the buy-in and acceptance of this process by the modeling community, but hopefully that's something that can still be done. The model integration that we're pursuing at the moment is still very much based on loose coupling. We assume we're going to be using web services for wrapping the different models, and we intend to apply some semantic integration based on the model documentation and the ontologies that are available. So the service-oriented architecture will use web services as wrappers to connect these models together, with semantic mediation for the different calling conventions, which, hopefully, we can also resolve with better documentation about the models. And then we get back to the social imperative. First of all, yes, we want these models to be open source, which is not the case in our project, where about a quarter of them are proprietary. What do we do with them? How do we convince these people that it would actually be better, even for them, if these things became open source? So far, it's still a challenge. This is a kind of social process of integrating scientists, if you will. Then there is modeling with stakeholders and integrating stakeholders into the process. How do we actually allow people to understand what's in these models? And how do we develop the toolboxes for participatory modeling? Participatory modeling seems to be a very powerful tool to integrate the users and the stakeholders into the process.
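The semantic mediation mentioned above can be sketched very simply: each model publishes variables under its own names and units, and a mediator translates between them through a shared vocabulary. The model names, variable names, and conversion factors below are illustrative assumptions, not anything defined by the project.

```python
# Hypothetical sketch of semantic mediation: a shared vocabulary maps each
# model's local variable names and units onto a common concept, and a
# mediator converts values when they cross between models. All entries
# are invented for illustration.

VOCABULARY = {
    # (model, local variable) -> (shared concept, factor to shared units)
    ("hydro_model", "Q_out"): ("river_discharge", 1.0),                 # m3/s
    ("econ_model", "water_supply"): ("river_discharge", 1.0 / 86400.0), # m3/day
}

def mediate(source_model, source_var, value, target_model, target_var):
    """Translate a value between two models' naming and unit conventions."""
    concept_s, to_shared_s = VOCABULARY[(source_model, source_var)]
    concept_t, to_shared_t = VOCABULARY[(target_model, target_var)]
    if concept_s != concept_t:
        raise ValueError("variables do not refer to the same shared concept")
    return value * to_shared_s / to_shared_t

# A hydrology model's output (m3/s) delivered in the economic model's units (m3/day):
daily = mediate("hydro_model", "Q_out", 2.0, "econ_model", "water_supply")
```

In a service-oriented setup this table would live with the model metadata, so that a wrapper can refuse, or convert, an exchange whose endpoints do not refer to the same concept.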
But there's still a very big distance between the complicated, quantitative, excellent tools that we've got and the actual visualization and buy-in of these results by the stakeholders. How do we bring in the conceptual models that come out of these workshops, when people are drawing diagrams and discussing how things work and how they are connected? Is there any way that we can actually bring some quantitative dimension into this discussion and add to what they are discussing during these meetings? Visualization is really paramount for this process, and we do need to learn how this is done in the business and advertising sector; they know very well how to sell their results and their issues. So the need is really to put the user up front and go out there and communicate with them. We are very much embedded in this kind of linear scheme of model development, where we start with defining the modeling process, specifying the modeling content, going through all the steps of putting together the best algorithms, programming them, running them, identifying the parameters, calibrating for these parameters, running verifications, quantifying uncertainty, and then cranking out results. The assumption is that everybody is out there waiting for our results, but the reality is that they're not. They really don't care about these results. I put together this little animation to try to explain the point. What seems to be happening is that we've got this big, complex world out there, and we're sitting in front of our computer, very much on top of the ivory tower, looking at what's happening in this world and assuming that somebody is telling us what to do and what kind of decisions, what kind of solutions we have to provide. And then we provide that, but the reality is that the goals for these modeling efforts very often come from the ideal world, not the real world.
The results that we produce are then sent back to this world, but then they end up actually in the... you do realize that this is paradise, the world, and hell, right? So that's where they end up, and we end up there as a result. Basically what we should probably try to do is get out of this ivory tower, get inside this world, and actually be part of the process of formulating the goals of our models, formulating the tasks for us to do, and then... oops, and then... oh no, you don't want to watch this whole thing again. Somehow the last part got lost, but you do want to really get inside and continue working with the world. We published a little paper where we actually tried to come up with some commandments for modeling, which start with the notion that we really want to stop pretending that applied science is value-neutral. I put "applied" in bold because I know that in this audience there are a lot of non-applied scientists, and for them science is value-neutral; the gravity on Mars, for instance, is probably really independent of values. But as soon as you start applying these things, like what David was telling us about earlier today about digging for oil, then values become important. What we actually put into our models and how we deal with this becomes part of our decision. We need to communicate these values, and it is important for the integration process to involve the stakeholders. We need to really engage with them to define the problems together on the receiving end, and we also want to engage with the policymakers to make sure that what we're doing is taken up and used for action. And we do want to try to turn the weapons around.
So far the advertising industry is actually driving us, and we think this should really change, because unless we learn to communicate our results in a vivid and powerful way, we're not producing the action that is so much needed. Something that comes out of the stakeholder process, where you have a much more complex interaction between the different stages of modeling and you constantly cycle through these stages, including the conceptual models that are also part of the integration, is quite promising in this context. So that's the real challenge. Actually, just when we were finishing this paper, this article came out in The Guardian, and it was very nice to see that it's not just us thinking that scientists actually have to do something now. Just quickly to conclude: I think it's really important that we start thinking beyond the discipline that we are in, and trust me, it's actually a lot of fun; you find a lot of interesting things coming from what people in other disciplines are doing. Integral models may actually sometimes be more useful than integrated ones; their complexity is sometimes easier to control, because you really choose the important things at the model development stage. The real issue is how you scale this complexity, and how you continuously move from a simple model to a more complicated model as more data and more needs evolve. On temporal and spatial scales, we had an interesting discussion today in the Anthropocene group, where we immediately realized that even the definition of the Anthropocene is very much dependent upon the temporal and spatial scale that we assume. What is the Anthropocene, and is this even a proper name for this group? And how do we deal with the complexity that comes out of these models?
It's becoming only more difficult to communicate these models, and again, the integration of knowledge, of the conceptual, mental models of stakeholders, with the quantitative tools that we have at hand becomes a real challenge when you're dealing with the transdisciplinary types of research that I've been referring to. Thank you.

Could you elaborate a little bit on the difference between integral models and integrated models one more time, please?

Okay, the difference is this. For an integral model, a perfect example would be World3 by Meadows; you know that one? Basically, you're developing a model of the system as a whole. It contains information from a lot of different disciplines, but this information is put together at the stage of building the model. It's not a result of integrating components that come from other people who developed those tools before.

We and many others have found that in the pedagogical treatment of some of these complex families of equations, the best way to get much across in teaching is to make experiential education part of it. And I wonder if that's something that you can see used in the stakeholder group; that is, the model should have an ability for individuals to gain intuition by experimenting and finding things that are new to them, and making that part of the exchange process.

Absolutely, and it goes even beyond that. What we found with these stakeholder processes is that what actually helps people understand and buy into the model is their participation in the model design. So what we do in participatory modeling, even though we sometimes have a pretty clear idea of our own of what the system is and how it should be modeled, is start with interactive discussions with the stakeholders, where we ask them what they think the problem is. And then we guide them through this discussion, through the conceptual model, towards more or less the model that we have in mind, which also evolves from their input.
So the visualization, the gaming part, and the playing with this model become obviously very important, but also the sense of them actually finding some of the ideas and concepts that they produced in the model turns out to be very powerful in helping them understand what's happening and also accept the model as a decision-making tool. Am I off the hook?