Okay, let's get started. Our next speaker is Alessandro Vespignani, a computational epidemiologist. [largely unintelligible introduction] Thank you. I'll try to fit what I want to say into a talk of roughly 30 minutes. It is all about computational epidemiology, and it needs plenty of computational power, so I hope it fits this meeting well. Thank you all for having me. [unintelligible] One thing I can say is that we learned a lot during the pandemic, the recent pandemic: in many places, in my own institution, but also in the United States, in Italy, and elsewhere, people have been asking how this kind of modeling has been done. Coming into this conference, it's true, we have to take into account major, what I would define as really existential, threats for us in the future: rising inequality, the demographic crisis, migration, the loss of biodiversity and the way we interface with biodiversity. [unintelligible]
[largely unintelligible passage on the political dimension of these problems] In recent years a lot of data have become available: big data, data available to most of the community, and we are really just at the beginning of the data era, in which you can do search and delivery of data. [unintelligible] And I don't want to advocate for any specific approach or way of tackling those problems. But for sure, what we need is interpretability. And we can achieve this through mechanistic approaches, the classic equation-based understanding. [unintelligible]
[largely unintelligible] If you want to predict, you need data, and a lot of it: demographic data, mobility data, information gathered over the past 20 years or so. [unintelligible] We have a whole hierarchy of models, global models as well as models zooming in on single urban areas, and depending on the question you choose a resolution and a methodology. [unintelligible]
[partly unintelligible] The approach generates multiscale and multilayer descriptions. The pattern for those models starts from microcensus data. That means that you look at data that really collect very detailed information about single households, and you re-sample those single households to create, in each geographical area, your synthetic population, which has to respect the microscopic statistical properties. On top of that you layer the contact patterns, who mixes with whom in each setting, stratified by age, 20 years old, 40 years old and so on, so forth. And these are all stratified according to the settings. Those settings change with the specific disease that you are looking at. And here it's important to keep in mind that there is no solution that fits all diseases. Respiratory diseases have the typical household, school, workplace and community settings, while if you work on other diseases, you need to take into account the specific mode of transmission, which could be, for instance, nosocomial, or could be concentrated in a specific stratum of the population. When you have all that, you model the infection transmission in these synthetic populations. And then, if you go for mechanistic approaches, you can, I would say, look at those systems as what we call metapopulation models. Briefly, this says that you have a large network where each node specifies a subpopulation. And this subpopulation could be a small rural area, or a county, or an urban area, or even a country in itself. So it depends on the scale.
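The re-sampling step just described can be sketched in a few lines. This is a toy illustration, not the actual pipeline: the microcensus records and the target size are invented, and real generators match many more marginals (age structure, household-size distribution, school and workplace assignment).

```python
import random

# Hypothetical microcensus: each record is a household, listed as the
# ages of its members. Real microdata carry many more attributes.
microcensus = [
    (34, 36, 5),        # couple with one child
    (71,),              # single elderly person
    (28, 27),           # young couple
    (45, 44, 16, 12),   # family of four
]

def build_synthetic_population(records, target_size, rng):
    """Re-sample whole households (with replacement) until the
    synthetic area reaches the target number of individuals."""
    population = []
    while len(population) < target_size:
        household = rng.choice(records)
        hid = len(population)  # crude unique household id
        for age in household:
            population.append({"household": hid, "age": age})
    return population

rng = random.Random(42)
pop = build_synthetic_population(microcensus, target_size=1000, rng=rng)
print(len(pop) >= 1000)  # True: target population size reached
```

Because whole households are sampled, household-level statistics (size, age mix) of the microcensus are preserved in expectation, which is the "microscopic statistical property" the talk refers to.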
You have a mechanistic description of the disease internally that could be very complex. This is an example of what you would do, for instance, for COVID-19, in which you have to take into account the symptomatic individuals, the asymptomatic individuals, hospitalization. And then you have the diffusion of individuals in this metapopulation network. But the diffusion is not the classic, how to say, random diffusion that in many cases we have used as a toy model; it's actually diffusion that is driven by data. So it is our mobility patterns that we record from data. It could be long range, through the international transportation, and you can get real-time data that really map every single flight on Earth, down to the level of the commuting patterns when you drop your kids at school. Just to give you an example, this is one of the models that we used during the COVID-19 pandemic for the United States. Here, each county of the United States is a single node of this network, this metapopulation, and then it is connected with all the other possible nodes in the system through those long-range and short-range commuting patterns. And then you have to take into account that in each county, the layers of population that provide the mesoscopic or the single-individual-level description are changing, because it depends on whether it's a rural area or an urban area, and on the demographics of each county. And those are, even looking only at the United States, very different. So you see that we need to collect those data, and that understanding this problem, infectious disease modeling, is something that requires a lot of context and quantitative data. And the disease itself, I'm just showing you this for one second to flesh out that when you then have multiple variants of a disease and many, many other things, you just get a very complex model. This is just for the natural history of the disease.
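A minimal, deterministic sketch of such a metapopulation model: three subpopulations run an SIR dynamic internally and are coupled by a commuting matrix. The matrix, rates, and population sizes are made up here, standing in for the data-driven flows the talk describes.

```python
beta, gamma = 0.3, 0.1          # transmission and recovery rates (per day)
N = [10000.0, 5000.0, 2000.0]   # subpopulation sizes (invented)

# mobility[i][j]: fraction of residents of patch i spending the day in j
mobility = [
    [0.90, 0.08, 0.02],
    [0.10, 0.85, 0.05],
    [0.05, 0.05, 0.90],
]

S = [n for n in N]; I = [0.0] * 3; R = [0.0] * 3
S[0] -= 10; I[0] = 10            # seed 10 infections in patch 0

def step(S, I, R):
    # effective number of people (and infectious people) present in each patch
    present = [sum(mobility[i][j] * N[i] for i in range(3)) for j in range(3)]
    inf_present = [sum(mobility[i][j] * I[i] for i in range(3)) for j in range(3)]
    newS, newI, newR = S[:], I[:], R[:]
    for i in range(3):
        # force of infection felt by residents of i, averaged over the
        # patches where they spend their day
        foi = sum(mobility[i][j] * beta * inf_present[j] / present[j]
                  for j in range(3))
        new_inf = foi * S[i]
        rec = gamma * I[i]
        newS[i] -= new_inf
        newI[i] += new_inf - rec
        newR[i] += rec
    return newS, newI, newR

for _ in range(200):
    S, I, R = step(S, I, R)

attack_rate = [R[i] / N[i] for i in range(3)]
print(all(0 < a < 1 for a in attack_rate))  # True: epidemic reached every patch
```

Even though infections are seeded only in patch 0, the commuting coupling seeds the other patches, which is exactly the mechanism that flight and commuting data drive in the real models.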
So you have all these compartments, and then, even if you work in a homogeneous population, what you actually work with is stochastic models that preserve the discrete nature of individuals, and it's all very computationally expensive. This can be even more complex if we look at other kinds of diseases. I don't want to keep us fixed only on COVID, although we are all focusing on COVID. There are other things; this is, for instance, Zika. Zika caused a major epidemic in the Americas in 2016. And the disease needs a vector: it needs basically a mosquito that gets infected by biting a human who is infected, and then, biting another human, transmits the disease across the population. Now, mosquitoes do not travel a lot, but humans travel. And at this point it's not just a matter of, I would say, the human-to-human interaction, but of the human-mosquito interaction. And as well, you know, you have to consider that while generally, if you are a carrier of a disease and you travel somewhere, you can seed an epidemic, here this is not always the case. If you are infected with Zika but you travel to Norway, since there is no mosquito population abundance there that favors this vector-borne disease spreading, there is no problem. Instead, if you travel from Brazil to Colombia, you are going to seed the epidemic in that population. And then you have to consider climate, temperature, you know, the mosquito life cycle. You have to consider that climate changes are changing the landscape of vector-borne diseases. So the areas which are more and more at risk of experiencing outbreaks of those diseases are changing, and actually at quite a fast pace. And then you have the economy. You know, if you are in an area of Brazil, in a downtown major metropolitan area, and you live in a skyscraper that has air conditioning, your exposure to mosquitoes is zero, basically, virtually.
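The point above about stochastic models that preserve the discrete nature of individuals can be illustrated with a chain-binomial SIR: transitions are random draws, so compartment counts stay integer-valued and every realization differs. Parameter values are illustrative only.

```python
import random, math

def chain_binomial_sir(N, I0, beta, gamma, days, rng):
    """Discrete-time stochastic SIR: each susceptible is infected with a
    probability depending on current prevalence, each infectious person
    recovers with a fixed daily probability."""
    S, I, R = N - I0, I0, 0
    history = [(S, I, R)]
    for _ in range(days):
        p_inf = 1.0 - math.exp(-beta * I / N)   # per-susceptible infection prob
        p_rec = 1.0 - math.exp(-gamma)          # per-infectious recovery prob
        new_inf = sum(rng.random() < p_inf for _ in range(S))
        new_rec = sum(rng.random() < p_rec for _ in range(I))
        S -= new_inf
        I += new_inf - new_rec
        R += new_rec
        history.append((S, I, R))
    return history

rng = random.Random(7)
hist = chain_binomial_sir(N=1000, I0=5, beta=0.4, gamma=0.2, days=120, rng=rng)
S_end, I_end, R_end = hist[-1]
print(S_end + I_end + R_end == 1000)  # True: individuals are conserved
```

The per-individual Bernoulli draws are what make these models expensive: cost grows with population size and with the number of realizations needed to characterize the output distribution.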
But then if you move just a few miles, perhaps you are instead in a high-exposure setting, with no mosquito nets and so on, so forth. And so the economy, and information on the economy of the society, is crucial. To integrate that into the model is really not an easy feat, but it can be done. For instance, this is mosquito abundance at cell level for Aedes aegypti. And you see that now, and again, this is thanks to entomological data and mosquito traps, basically, but as well to machine learning, that we have those wonderful maps. You have climate data and meteorological data at all the resolutions that we need. And at the end, you have a model that becomes like the one that I was mentioning before, with those metapopulations, but then the metapopulation model zooms into geographical resolutions at a finer scale, because you have to consider, for instance, the level of urbanization and the abundance of mosquitoes. And then you also have economic data that tell you what the risk of exposure is and what kind of urban development you have there. You sum up all those layers of data, and then you transform them into equations with effective coupling terms, all of which has to be calibrated on the data. And actually, this is work that was done in 2016 by several teams, separately, to try to get a hold of the understanding of the spread of Zika. So you see, here is where you have this interface between mosquitoes, humans, economy. Everything has to be packed together if you really want to reach some understanding, quantitative understanding. This is even more true if we zoom in a little more and want to understand, for instance, what happens in those population networks at the urban level. If you want to understand an epidemic in a specific urban area, you have to work in that specific urban area and get a flavor of it. This is where I move to networks, and let me give you an example. Where we start from is the census data.
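The "effective coupling terms" mentioned above can be illustrated with a toy example: a local transmission rate modulated by a temperature-driven mosquito abundance and by exposure (urbanization, air conditioning). The functional forms and constants here are assumptions for illustration, not fitted values.

```python
import math

def mosquito_abundance(temp_c, peak=29.0, width=6.0):
    """Gaussian-shaped abundance response to mean temperature
    (an assumed form; real models use entomological data)."""
    return math.exp(-((temp_c - peak) / width) ** 2)

def effective_beta(base_beta, temp_c, urbanization, ac_coverage):
    """Scale transmission by mosquito abundance, urban exposure, and the
    share of the population shielded (air conditioning, screens)."""
    exposure = urbanization * (1.0 - ac_coverage)
    return base_beta * mosquito_abundance(temp_c) * exposure

# downtown high-rise vs peri-urban neighborhood at the same temperature
shielded = effective_beta(0.5, 28.0, urbanization=0.9, ac_coverage=0.95)
exposed = effective_beta(0.5, 28.0, urbanization=0.9, ac_coverage=0.10)
print(shielded < exposed)  # True: air conditioning collapses exposure
```

Terms like these are what end up multiplying the force of infection in each patch of the metapopulation model, and each one has to be calibrated against entomological, climate, and economic data.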
From those you get the household composition, the workplaces, the schools, et cetera. And then you generate bipartite networks that give you the locations and the opportunities of spreading for people, by meeting other people or being in the same environment. And then you can do unipartite projections and get a network like this one, in which each edge of the network has a different probability of transmission, because the transmission of a disease is different in different settings. Or you can work with something more complex, like multiplex networks, in which each layer of the network is a specific setting. Individuals are not in all settings, they are not on all layers, but they connect the layers, and the disease can jump and transmit from layer to layer because of the contacts that individuals bring into the different transmission settings. And this can be analyzed in time as well. If we want to go down to that level, because we want to understand the epidemic spreading in New York, what we need is mobility data, location data, stratified by gender, age, occupation. And that also has to be longitudinal, because we act on diseases by imposing our mitigation policies. This is where you need to access another frontier in terms of data. Here I'm trying to show you, for instance, the Boston area, starting basically in January 2020. And you see, this is the percentage of typical contacts, the mobility range, the commute volume of people. And you see here, in real time, in the city of Boston, what happened during the COVID-19 pandemic: we start in a more or less stationary state, with some peaks and bumps due to the holiday breaks in early January, and then there is the huge drop, due to the various policies adopted to counter COVID in the urban area. Those little dots on the maps are specific census tracts.
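The bipartite construction and its unipartite projection can be sketched as follows. The people, settings, and per-setting transmission probabilities are invented; shared settings are combined as independent transmission chances.

```python
from itertools import combinations
from collections import defaultdict

# person -> list of (setting_id, setting_type); all names hypothetical
memberships = {
    "ana":   [("house1", "household"), ("school1", "school")],
    "ben":   [("house1", "household"), ("office1", "workplace")],
    "carla": [("school1", "school"), ("office1", "workplace")],
}

# setting-specific transmission probabilities (illustrative values)
p_transmission = {"household": 0.30, "school": 0.10, "workplace": 0.05}

# invert the bipartite structure: setting -> attendees
attendees = defaultdict(list)
for person, places in memberships.items():
    for sid, stype in places:
        attendees[(sid, stype)].append(person)

# unipartite projection: each shared setting contributes an independent
# chance of transmission, combined as 1 - prod(1 - p)
edge_escape = defaultdict(lambda: 1.0)
for (sid, stype), people in attendees.items():
    for u, v in combinations(sorted(people), 2):
        edge_escape[(u, v)] *= 1.0 - p_transmission[stype]

edges = {pair: 1.0 - esc for pair, esc in edge_escape.items()}
print(round(edges[("ana", "ben")], 2))  # 0.3: a household-only contact
```

Keeping the settings as separate layers instead of collapsing them gives exactly the multiplex view described above, where the disease jumps between layers through the individuals shared across them.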
So that means that you can really look at the change that each policy has created depending on income, on socio-economic status, and on many other indicators that characterize the population and the connectivity of the urban area of the city. Well, these data are thanks to the location intelligence provider Cuebiq, which made those data available, as we say, through a Data for Good program. But there are many other providers that did that. And I think this is extremely important, because that was one of the major barriers in many of our approaches to understanding infectious disease spreading and public health problems; now, hopefully, going through the trauma of the pandemic, a lot of infrastructure and pipelines have been generated to handle those data in a privacy-preserving way so that they can be used for scientific purposes. And here you see a lot of interesting things. For instance, the percentage of typical contacts, even after things were relaxed, never goes back to the pre-pandemic stationary state; and you see that the commute volume actually remains always very low, although the mobility range instead increased. So there is a lot of information to unpack, relevant both for infectious diseases and public health and for other problems. Well, this kind of computational epidemiology has to be done at scale. You can write all these codes and pipelines, and if you do a simple model, even with all the data that you ingest, you can do one simulation in a minute or two. But when you work for real, you have to calibrate models. Each simulation can take up to a few hours, and that's something that requires specific machines, requires specific pipelines for the data analysis; and the output data size, and the way you deal with that, depends on what you are trying to describe.
So when we are in the real world, this also means that we need computational power, and that's, again, another dimension of this interdisciplinarity. In many areas, like climate, meteorology, weather, et cetera, they are 50 years ahead of us in a sense, and they know how important the computational part is. Just to give you an example from work that we did: we are really working with terabytes of data, and you have to design things which are specifically tailored around the problem. I don't want to get too much into the technical side of this, but you have a computational engine, and the only way to overcome the problem of running time, and the fact that you have to produce, and generally analyze, on short timescales during health emergencies, is that you have to run thousands of machines in parallel, you have to use cloud computing, you have to use cloud storage. And, you know, before your results can really become actionable at the end of the pipeline, they go through a very large computational effort, of which the big problem, as you can all imagine, is the calibration. The problem is how we deal with those models that are stochastic, that have priors for many of the quantities that are statistically distributed. So what we have to do is explore those initial conditions and those priors, define what the modeling assumptions are, and then calibrate with respect to the evidence. The calibration can be done in many different ways. For instance, we use, and I give you an example here, approximate Bayesian computation, but there are many, many other possible techniques that, again, can be used to generate, from your ensemble of simulations, your outcome and your confidence interval on the outcome. This brings us into scenario modeling and forecasts.
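As a toy version of the approximate Bayesian computation mentioned above: draw a parameter from the prior, simulate, and keep the draws whose summary statistic lands close to the observed one. The simulator and the "observed" value here are invented stand-ins for the large stochastic pipelines described in the talk.

```python
import random, math

def simulate_final_size(beta, gamma, N, I0, rng):
    """Toy stochastic SIR; returns the final attack rate (R / N)."""
    S, I, R = N - I0, I0, 0
    while I > 0:
        p_inf = 1.0 - math.exp(-beta * I / N)
        p_rec = 1.0 - math.exp(-gamma)
        new_inf = sum(rng.random() < p_inf for _ in range(S))
        new_rec = sum(rng.random() < p_rec for _ in range(I))
        S -= new_inf
        I += new_inf - new_rec
        R += new_rec
    return R / N

rng = random.Random(1)
observed = 0.80        # pretend attack rate measured from data
gamma = 0.2
tolerance = 0.05       # how close a simulation must land to be accepted

# ABC rejection: flat prior on beta, keep draws matching the evidence
posterior = []
for _ in range(300):
    beta = rng.uniform(0.1, 0.8)
    if abs(simulate_final_size(beta, gamma, 500, 5, rng) - observed) < tolerance:
        posterior.append(beta)

print(len(posterior) > 0)  # True: some prior draws are compatible with the data
```

The accepted draws approximate the posterior over the transmission rate; in production this is done over many parameters and initial conditions at once, which is why it needs thousands of machines.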
Those pipelines are built to generate different visions of the future in infectious diseases. For instance, you can do short-term forecasts one week ahead, or you can look instead at scenarios that are projected three or six months ahead, but you have to be careful. The one-week predictions are based on the status quo, on information that is available and can be integrated in the model, while the scenarios are made on assumptions about things that are very far out, and generally they are not forecasts at all, in the sense that no real curve will be, I would say, very similar to those scenario projections. Those scenario projections are made to envelope and provide the possible map of the future that helps the reasoning for policy-making. On the one-week-ahead to four-weeks-ahead horizon, instead, the hope is to really get something that is, I would say, as close as possible to reality. So these are two different exercises, which are also done in different ways with the policy makers. I will come back to this shortly. Everybody always talks about forecasts. I want to spend one minute here just to say that this modeling, our way of approaching quantitative and actionable infectious disease modeling, is much more than forecasting. There is situational awareness, intervention planning, structured reasoning, that means endless counterfactuals to try to understand what's going on in the system. And the situational awareness is particularly important. For instance, in the initial stage of the pandemic, when testing was not available, we all know now that there was cryptic transmission. So the disease was actually spreading in the population, but it was not measured. So while in Europe or the United States there were a handful of cases officially notified, we had thousands of transmissions per day by the end of February in each of those geographical areas. And this is something that with the model you can see.
So with the model we were telling policy makers: look, we have this; it's not measured, but it's there. You know, the pandemic is inevitable. We have to move from, how to say, containment, as in China, to mitigation and so on, so forth. So there was real impact. And the same goes, for instance, if you look at where the cases were coming from. Again, this is very difficult; phylogenetic analysis can help a lot, but here too, through simulations and using evidence from real data, you can access maps of how importation sources have contributed to the seeding of the epidemic in different areas. And you see that the emerging picture is very different from, how to say, the classic narrative, with China as the culprit or Europe as the culprit. You see that, for instance, for most of the United States and Europe, most of the circulation of the virus was domestic; it is within the countries and states of these two geographical areas that the epidemic was seeded. So how have these approaches worked in the early pandemic months? I would say that, especially at the beginning, they raised very important flags. They quantified the extent of the epidemic in China before we had the capability to measure that. Projections of epidemic dispersal came very early, early February, in the timeline. And the evidence of presymptomatic and asymptomatic transmission, the early estimates of the infection fatality rate: a lot of important information was available basically in the first few weeks of this pandemic, thanks to quantitative modeling happening in real time. And here we're talking about the work of a community, of really many groups. However, at a certain point, it's very relevant to talk about what I call the policy intelligence disconnect. Those red flags were not isolated voices; there were actually coordinated groups of scientists saying what the situation was. But, you know, they were not acted upon.
And mostly because of some fallacies of past-history-based thinking. It was very difficult to think of COVID, initially, as something different from SARS or the flu; those were always, you know, the two cornerstones. And actually it was something completely different. The other problem was no action under uncertainty, and also the hubris that you sometimes have when you think: okay, this is happening in China, it's not happening in other places. And again, here is where, you know, a different perspective on human ecology is a mindset from which the world would benefit. But the problem of no action under uncertainty is something very, very important. I report here something that a governor said in the United States; I don't want to say who he or she is: "Decisions should be based on things that actually happen and not the result of some mathematical equation." Actually, I have to be honest, this was not said by the policy maker in an aggressive way. It was more a kind of desperate thing: so, you know, we need to shut down cities, and this is just in models, and we don't yet see what's going on. However, there are two fallacies in this reasoning. One is that in our world we actually take endless decisions based on equations; just look, weather-wise, at the evacuation of cities, you know, for hurricanes. The other is that acting just on the basis of what actually happens at that very moment generally means that you are acting too late. And so one of the problems is that, for many of the issues we are discussing in this conference, we are in a situation very different from the classic example of the hurricane. The hurricane, for policy makers, is tangible. Although the trajectory and the information provided are based on equations, on models if you want, you know, you also have the satellite photo of the big hurricane.
With many of the problems that we are dealing with, infectious diseases being one, we don't have that satellite picture. You might have a model that tells you, you know, that you have thousands of transmissions, but still the hospitals are not reporting those cases. And so you see that this is a major problem in our way of communicating things. And the other issue is uncertainty. And here again, I want to give in one minute an example that is very recent. This was a blizzard on the east coast this past January. On one side you see the European forecast model, and on the other side the American forecast model. For those who know about weather forecasting, you know that these are two of the major models used. Both of them were in agreement on a major blizzard and snow storm hitting the east coast. But then if you look, for instance, at the area of New York, you see that in New York City the forecasts of the two models differed by 20 inches. And 20 inches, in terms of policy making, as you can imagine, is a world of difference. Because what do you do? With one inch of snow, you don't do anything. At 20 inches of snow, you have to mobilize an entire city, close it down, et cetera. So why is it that, in a sense, in meteorology, we are used to this and indeed we act upon those models? Because, actually, we know that models are not oracles. The results vary from model to model. A model does not always perform optimally in a specific situation, and what we do is exactly what we call multi-model forecasting. We build the super-ensemble of different models, and so we move more and more into the situation in which, okay, this is our forecast, which is the aggregation of this seeming mess of different results from different models. Unfortunately, while in some areas, and I'm talking again about weather, climate, et cetera, this has been done and there is a long tradition, it is much less so in many other areas, and in infectious diseases for sure.
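A minimal sketch of multi-model aggregation of this kind: each hypothetical model contributes forecast samples, the samples are pooled, and the ensemble reports a median and an interval. The model names and numbers are invented, echoing the blizzard example above.

```python
import statistics

# hypothetical snowfall forecasts in inches, one sample list per model
model_samples = {
    "model_EU": [18.0, 20.0, 22.0, 19.0, 21.0],
    "model_US": [1.0, 2.0, 1.5, 3.0, 2.5],
    "model_C":  [8.0, 10.0, 12.0, 9.0, 11.0],
}

# pool all samples into one sorted ensemble
pooled = sorted(s for samples in model_samples.values() for s in samples)

def quantile(sorted_xs, q):
    # simple nearest-rank quantile, adequate for a sketch
    idx = min(len(sorted_xs) - 1, int(q * len(sorted_xs)))
    return sorted_xs[idx]

median = statistics.median(pooled)
low, high = quantile(pooled, 0.05), quantile(pooled, 0.95)
print(f"ensemble: {median:.1f} in (90% interval {low:.1f}-{high:.1f} in)")
# -> ensemble: 10.0 in (90% interval 1.0-22.0 in)
```

The wide interval is the point: it carries the disagreement between models to the decision maker instead of hiding it, which is what the super-ensemble efforts described next try to institutionalize.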
Indeed, we did start to use ensemble and super-ensemble approaches during COVID, both for the short-term predictions and the long-term scenarios. And you see, these are major efforts in which we really need to change our perspective on how we do science on those topics. And it was done on the run, because of the emergency of COVID. There were initiatives also before COVID, and actually these things at the Centers for Disease Control happened building on experience, for instance, with the flu and so on, so forth. But this is important, because it's telling you that this is the way we need to move for many of those problems. We can't go on, I think, and this is my conclusion, with, in many cases, an academic model that doesn't work for major, major societal problems. You know, there is a sentence by, I think, Sam Scarpino, if I'm not wrong, who in an interview said: you know, when you have a hurricane approaching the coast, you don't randomly ask modellers what they see and tell them, "drop what you're doing and please create a model." No, you know, we have the National Hurricane Center for forecasting and prediction and so on, so forth. So I think, for many of the challenges we are facing and discussing these days, we need to overcome the academic model: to go from single-team work, competition, the elite publication model, and, in many cases, a lack of dependability and reproducibility of what we do, to coordination, collaboration, best practices, and data sharing, in a way that is completely new and that really becomes a new way of approaching our problems. And although from the outside it may seem that we're already halfway there, I don't think so, and COVID-19 has been a good example of how much we still have to progress in this direction. I think I can stop here so that we have a few minutes for questions, which I would be very happy to answer. Thank you. Okay, thank you. Yes. Okay, so, any questions? So, Simon.
Yeah, thanks very much for your talk. I'm interested in your thoughts about agent-based models for infectious diseases. Obviously, agent-based models are part of the story, but there's so much freedom there in terms of what you put in that I think some sort of multi-model approach is essential for reducing the dimensionality. So, what are your views on the role of agent-based models? Okay, first of all, thanks, this is a great question, and I think there are two answers to it. First of all, I wanted to present a spectrum of approaches, and you have to calibrate the kind of approach to the questions and to the stage of the process that you are in. So, for instance, an agent-based model cannot be used at the very early stage, when you don't have even basic information, or when the level of knowledge of the transmission mechanisms is very limited. In many other cases, they might not be computationally ideal in settings where the information is not of high quality. In some of our countries, like, for instance, the United States, and in Europe, the level of data that you can achieve now allows you to really reduce drastically the number of parameters. So, in a sense, your synthetic population is data-driven, and you are left with a few parameters, like the transmissibility of the disease; you can look at how much time people are spending in their home dwelling, whether they meet in a Starbucks or in a restaurant, and so you can really do a lot. But I agree, there is the problem of always finding the right dimensionality reduction, and also of having something that is validated, and I would say that that's the other major problem. Using an agent-based model is wonderful, fantastic, but then how do you validate the results that you see at the level of single settings? Say I find there is much more transmission in restaurants than in schools: okay, how do you validate that? And so I totally agree.
One has to be very, very careful: the more you zoom in, the more you need to have the parameters under control and to have validation data. And I think this is part of the work that has to be done during peacetime, to prepare for that. And especially, I think agent-based models are very useful for specific questions, like: what if I close a school in this place for those days, given that I have very good data on that area? At the global level, you know, many other coarse-grained approximations could work and provide the same quality of response, actually even with less uncertainty. Do you think there's the potential, for example, for creating a functional taxonomy of cities based on things like degree distributions, or something of that sort? I think yes. I think this is what we experienced in the United States, and I think it can be done in Europe. When you have those data at the level of the census tract, the census tract, even without doing, so to say, privacy-intrusive studies, provides you a lot of information. You can stratify the population in virtually endless ways at this level. And it can be done, you know, because you see that Seattle is very different from New York. Actually, New York is a city per se, you know, kind of different from anything else, and then there is Boston, then Miami. And here is where we need support, as a scientific community, from the data providers. We need to really generate those taxonomies. We need to get those "weather stations" in place. We need to have those privacy-preserving pipelines in place. And I think it can be done. It's really a lot more work, but it should be done. Thank you. And just to mention, you know that the Centers for Disease Control now has a new Center for Forecasting and Outbreak Analytics, directed now by Marc Lipsitch, which I think is working, in a sense, toward ideas of that kind.
My hope is that this will happen in many other countries across the world. Any other questions?

Yes. You were mentioning at the beginning the issue of data availability, and the fact that some of the data you use were private, from some companies. And on the other hand, you have privacy issues. So I was wondering, what are the constraints? I mean, on one hand, say, the privacy issues, and on the other hand, the economic cost, if you have to pay for these data.

Well, you know, this is another good question. There are different phases to the problem of data. The first one is that some of the data we use are just available, but at a very high cost. For instance, if you want the airline data, you just have to buy them, and we have had to buy them for many years; they cost a lot. So that's one problem, and there is a situation in which you would like to lower the cost for the scientific community. Then there is the data about single individuals. There are privacy issues, but there are also costs involved for the companies. And I'm talking about the big providers, from Facebook to Cuebiq to Google. During the pandemic, they were very helpful: they just put teams to work on those data, making them available to the scientific community. But there is a cost involved for the company, because basically they have to generate those privacy-preserving pipelines; you have to have people working inside those providers so that we can then access the data in a safe way. And this is something that has a cost. And so again, we need to create brokers that are able to provide resources to keep this process alive, also when COVID is no longer on the radar, and that's crucial. Then we have public health data, human data. We need to go back and fund public health. We need to collect more data.
We need to create ways to access clinical records, also in that area, and this is even more delicate, in a privacy-preserving way. All that has a cost, also in terms of software development: how you interface different databases. And then you can just imagine what is going to happen in the next few years, when everybody will have a watch measuring temperature, pressure, and heart rate. This is a goldmine of data that can be used, also stratified by socioeconomic status, gender, age indicators, et cetera, and it could enrich so much what we do. But we need to have a plan for that, and not wait for the next emergency to strike.

Joel, you want to ask a question?

Yes, thank you very much. Thank you for this very thoughtful and helpful overview. In one slide, you showed the 20-inch difference in precipitation between the American model and the European model for the East Coast. And that is useful to show the margin of uncertainty, depending on the assumptions made by the different groups. In a later slide, you promoted the advantages of integrating research groups instead of having single investigators going their own way, coming up with disparate projections. And I see a tension between the advantages of diverse forecasts and the necessity of large teams working to come up with an integrated multi-model forecast. I don't see a discussion of how you manage that trade-off. How should I think about that? If you have thoughts about it, I'd like to hear them.

Yeah, this is a very good question, in the sense that, on one side, you see that the models provide diversity, and you need to value that diversity, because it tells you what level of uncertainty you have depending on assumptions. At the same time, you want to get something that is actionable, and if you put everything together, the fear is that you have a herding effect in which, at the end, you in a sense wash out the diversity. And the process...
Yeah, the process, indeed, that we used in our experience with COVID was to have the different teams work independently and produce their own forecasts. So we are not working together to generate a single model; each one of us works with its own model. And then there is a third party, the CDC and other teams, doing the aggregation by using ensemble and super-ensemble approaches, so that at the end you generate from those models an output that takes that diversity into account. This is crucial: the ensemble and super-ensemble models that are generated are not washing out that uncertainty. Actually, our problem now is that, being respectful of that uncertainty, those models always have large confidence intervals. And it's crucial that you avoid any herding effects. However, for COVID, I can just give you the scale of the effort: you have 20 to 25 modeling teams working independently that have to be supported, and then there is another group of people at the CDC, and another group of modelers and statisticians, that assembles that information. And so that's a large-scale effort, but it's the only way we can move forward. In that sense, I really have to rely on the experience that has been done in weather and climate, where they are decades ahead of us in doing that. But we need to be mindful of what you say: you want to keep the diversity, and that's crucial. That's a major point.

OK, so thank you very much. Thank you very much. Thank you. OK, so if there are no other questions, then we thank again Alessandro for this nice talk.
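As a closing aside, the aggregation step described in the last answer, combining independent teams' forecasts while preserving their spread, can be sketched as an equal-weight quantile ensemble. This is a minimal illustration under that assumption, not the actual CDC pipeline; team names and numbers are invented.

```python
QUANTILES = [0.05, 0.5, 0.95]

# Hypothetical weekly-cases forecasts from three independent teams,
# each submitted as a set of predictive quantiles.
forecasts = {
    "teamA": {0.05: 800, 0.5: 1000, 0.95: 1300},
    "teamB": {0.05: 600, 0.5: 1200, 0.95: 2000},
    "teamC": {0.05: 900, 0.5: 950,  0.95: 1100},
}

def ensemble(fcasts, quantiles=QUANTILES):
    """Average each quantile across models with equal weight.
    Disagreement between teams survives as a wide predictive interval,
    rather than being washed out by averaging point forecasts."""
    return {q: sum(f[q] for f in fcasts.values()) / len(fcasts)
            for q in quantiles}

print(ensemble(forecasts))  # median 1050.0, 90% interval roughly [767, 1467]
```

Note how the combined 90% interval stays wide because teamB disagrees with the others: that width is the uncertainty the answer insists must not be washed out.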