All right. Good morning, folks. I see a number of familiar faces in the audience. And whoever has the mic on, please turn it off so I don't reverberate between what your mic picks up and what I say. I wanted to give an overview of some of the connections between weather models, climate models, observations, and, in a sense, parallel Earths. There are differences, and there are components that are shared among the different models. We live on a fairly complex Earth. Both weather models and climate models have to have radiation transport: solar radiation, UV to near-infrared, coming down through the atmosphere to the surface, and terrestrial infrared radiation going up through the atmosphere, getting absorbed, and finally part of it getting radiated out to space. Among processes important to both, but in slightly different ways, we have evaporation of water and heat exchange at the ocean surface or the land surface. And over here, we have biological processes interacting with the atmosphere. So in both climate models and weather models these processes are included, but in slightly different ways, and the reason is the difference in time scales. First off, we can't numerically model the planet on a continuous basis, which means we have to chop up the Earth horizontally into cells, like the icosahedral grid here, or down here what's called a cubed sphere, and then add vertical layers on top of each horizontal cell. The horizontal gridding and the vertical layering give us a 3D grid, and it's that 3D grid on which any numerical model operates. And by the way, feel free to ask questions along the way, and hopefully a lot of them. The natural grid has been a latitude and longitude grid, but that has a big problem around the poles: the longitude lines all come together at the pole, so the distance from longitude line to longitude line decreases as we approach the pole and finally goes to zero.
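A quick way to see the pole problem is to compute the east-west width of a one-degree longitude cell at different latitudes (a sketch; the function name and the spherical-Earth radius are just illustrative choices, not from the talk):

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean spherical-Earth radius

def zonal_cell_width_km(lat_deg, dlon_deg=1.0):
    """East-west width of a grid cell spanning dlon_deg degrees of
    longitude at a given latitude: arc length shrinks as cos(latitude)."""
    return math.radians(dlon_deg) * EARTH_RADIUS_KM * math.cos(math.radians(lat_deg))

for lat in (0.0, 45.0, 80.0, 89.5):
    print(f"{lat:5.1f} deg latitude: {zonal_cell_width_km(lat):7.2f} km wide")
```

At the equator a one-degree cell is about 111 km across; by 89.5 degrees it has shrunk below a kilometer, which is exactly the singular-pole problem the icosahedral and cubed-sphere grids avoid.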
At the pole we could go around the world just by turning our body around in place. On this grid, we want to model the atmosphere, and we want algorithms that conserve mass and energy as they move heat and air around. And air moves around because of pressure differences. These days there's a new emphasis on having a grid and algorithms suited for parallel computing, and as we move toward exascale computing, that becomes even more stringent. Exascale being a billion billion floating-point operations per second. Yes, Vic, to answer your question, the grid extends upward, with cells stacked above the ones below. The vertical extent isn't as large because we don't go 6,000 kilometers up, but we do have to have multiple vertical layers for the atmosphere. These days models extend to the top of the stratosphere. It used to be that climate models only went up through the troposphere, but it's been learned that stratospheric processes and chemistry are important. The stratosphere starts at about 15 kilometers near the equator and about 8 kilometers closer to the poles, and models now extend probably higher than 20 kilometers. Well, the icosahedral model can also be thought of as a hexagonal grid, because the triangles in the icosahedral grid do make up hexagons. So, yes, the cells are close to equal-area. The cubed sphere is not entirely equal-area, and there are boundaries at the edges of each face. But both of them get away from having a singular pole. The grid elements are numbered; I can't immediately bring up how they're numbered, but there is a reference system for any grid. So, as we increase the resolution and go to larger problems with a 3D grid, if you double the resolution, you basically have 8 times as many calculations. And what it's turning out is that moving data in memory or between CPUs becomes much more expensive than the actual computing.
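The "double the resolution, eight times the calculations" point is just the 3D cell count (the grid dimensions below are made up for illustration; a real model would also have to shorten its timestep, making the true cost factor even larger):

```python
def cell_count(nx, ny, nz):
    """Total number of grid cells in a 3D grid."""
    return nx * ny * nz

base = cell_count(360, 180, 50)      # a coarse, purely illustrative grid
doubled = cell_count(720, 360, 100)  # grid spacing halved in all three dimensions
print(doubled // base)  # prints 8
```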
So, moving to exascale computing means changing algorithms, understanding a lot more about data movement and trying to minimize it, and moving away from some of the old paradigms. One difference between climate models and numerical weather prediction (NWP) models is that the climate model has coarser, that is, lower, resolution. The difference is something like a base horizontal resolution of 12 kilometers for the weather prediction models and more like 30 kilometers for the climate models. And the weather prediction models may have some areas of greater resolution, run nested within the main grid, to make a more accurate forecast at certain places. Yes, to go with Syzygy's question, the climate models do predict atmospheric conditions, but not in the same sense as the weather model, and I'll be getting into that. So, to run a weather model, it needs good input: a best estimate of the current state of the atmosphere and ocean surface. The process of taking all those observations and getting them onto the model grid is known as assimilation, and that's a necessary step before the prediction can begin. The data sources I've listed here, and some of these resources, are listed on the note card, so you can look in more detail: weather stations, ship reports, ocean buoys, these days autonomous ocean buoys that only require occasional servicing. One of the problems with the current pandemic is that people haven't been able to go out in ships and service them. So there are lots of side effects that people might not think of immediately. If the ocean buoys start going out of service, the forecasts become less accurate. And terrain, as Syzygy said. In your note card there's a link to the shuttle radar topography data; data like that have actually been brought into Second Life in the past to do the topography of a whole sim. So it's possible to take real-life topography, scale it correctly, and bring it into Second Life.
Land cover: there are several decades now of Landsat data, which includes images of different surface types and analyses from them. Radiosondes: these are weather balloons carrying expendable radio packages beneath them, and one of the ways of getting vertical data profiles through the atmosphere is to launch a radiosonde. It's a local measurement, but it gives you that important vertical profile. And satellite data. Satellite data has a relationship to the modeling because it also requires a similar radiative transfer model to retrieve chemical compounds and temperature from what the satellite sees from above the atmosphere. Yes, lidar. Lidar is another instrument, used both ground-based and from satellites, to detect dust and aerosol particles. Oh, and I have one of the newer ocean buoys pictured here. What the model does is take differential equations, the atmospheric equations of fluid flow, which are written in terms of small, infinitesimal changes, and move them forward in time; that moving forward is called integrating in time. So you have the fluid flow dynamics, a whole domain of research in fluid dynamics there. Thermodynamics, which is the treatment of heat exchange and the change in temperature with pressure. I think most of us have sprayed a compressed-air can into a computer to remove dust. You probably should. Notice that after a few sprays, the can has cooled down. This is because the air, in expanding, is doing work, and that work comes at the expense of temperature. So as air rises in the atmosphere, it will cool, and that's part of the thermodynamics. Then there are processes of evaporation and convection that move air up from the surface, including the formation of clouds when the air has enough moisture that it cools to its dew point. The water vapor condenses and releases more heat, which allows the parcel to go up higher.
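The compressed-air-can effect, adiabatic cooling, can be put in numbers with the dry adiabatic lapse rate and the common rule of thumb for cloud-base height (both the 9.8 K/km rate and the roughly 125 m per degree of temperature/dew-point spread are standard textbook approximations; the function names are mine, not from the talk):

```python
DRY_LAPSE_K_PER_KM = 9.8  # dry adiabatic lapse rate, ~9.8 degrees C per km

def parcel_temperature_c(surface_temp_c, height_km):
    """Temperature of a dry air parcel lifted adiabatically (no condensation yet)."""
    return surface_temp_c - DRY_LAPSE_K_PER_KM * height_km

def cloud_base_km(surface_temp_c, dew_point_c):
    """Rough height at which a lifted parcel cools to its dew point,
    using the ~125 m per degree of temperature/dew-point spread rule of thumb."""
    return 0.125 * (surface_temp_c - dew_point_c)

print(parcel_temperature_c(25.0, 2.0))   # a 25 C parcel lifted 2 km cools to 5.4 C
print(cloud_base_km(25.0, 15.0))         # cloud base near 1.25 km for a 10 C spread
```

Above the cloud base, the released latent heat of condensation slows the cooling, which is what lets the parcel keep rising.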
And as I mentioned, the transport of sunlight and infrared radiation through the atmosphere is common to weather models, climate models, and retrieving properties of the atmosphere from satellite observations. What the satellite sees is just the infrared radiation, for example, at the top of the atmosphere in multiple wavelengths. And as Shiloh mentioned, there's the influence of biological release of methane from agricultural fields and from feedlots; I'll get to that in a moment. One of the resources on that note card is Fluxnet, which does just that. And on the website for this talk, one of the pictures, which I don't actually have in the talk, is of a tower to measure such fluxes of gases from a surface. For numerical weather prediction, the atmosphere has limited predictability, and the theoretical limit has been determined to be about 14 days. You never have perfect initial conditions, and the model is an approximation, so the current practical predictability is probably 6 to 10 days. And some features can have significantly shorter predictability, so predictability isn't always just that 6 to 10 days; I give an example on the next slide. Before the 1990s, weather prediction was done with single runs of a weather model, and basically that was the basis for the forecast. Since the 1990s, prediction has used ensembles of runs. This means maybe 100 runs of the model in which the initial conditions, meaning everything about the current state of the atmosphere, and some of the parameters within the model are varied. Parameters generally come in because on a 30-kilometer grid you don't resolve turbulent diffusion, you don't resolve evaporation and recondensation at the surface molecule by molecule with a net flow, and individual cumulus clouds aren't resolvable either.
So there's a collection of parameterizations, which means approximations of some of these individual effects over the size of a model grid cell. These generally have parameters with probability distributions of their own, and the parameters are basically determined by observations. Every individual sub-module gets looked at and checked against observations to set the parameter values. But because the parameters have some uncertainty, in making these ensemble runs some of the parameters will be varied, which means sampling from the probability distribution. I don't know if that makes sense. Then the ensembles are further improved by using multiple models, since different models created by different groups will be doing these parameterizations in slightly different ways; that helps expand the evaluation of the uncertainty. And it turns out that different places use their own weighted ensembles of different models. Florida State, for example, has a weighted average for looking at tropical storms. So, I mentioned I'd give an example of where predictability failed, pretty much totally. During 15 to 16 October 1987, a violent storm tore into southern England. Steve Easterbrook, in reviewing a talk by Tim Palmer, noted that the town of Sevenoaks, where he used to live, became the town of No Oaks. The wind was that strong; it caused a number of deaths and millions of dollars of damage. I've put links in that note card on the Great Storm of 1987. Early forecasts had indicated there might be a storm, and then later forecasts missed it. This was before ensemble modeling, so it was single runs of the then-current models. One of the problems was a lack of observations offshore in the Atlantic. So the forecast basically said there would be strong winds, but the storm would stay south of England. The real storm tore into both France and southern England.
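To return to the ensemble idea for a moment, the perturbed-initial-condition, perturbed-parameter sampling can be sketched like this (everything here is a toy: the relaxation "model", the Gaussian widths, and the 100 members are illustrative assumptions, not any operational system):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def toy_forecast(initial_temp, mixing_coeff, steps=48):
    """A deliberately simple stand-in for a model integration: relax toward
    a fixed 'environment' temperature at a rate set by an uncertain
    parameterization coefficient."""
    temp = initial_temp
    for _ in range(steps):
        temp += mixing_coeff * (10.0 - temp)
    return temp

# Each ensemble member perturbs the analysis and samples the parameter
# from its (assumed Gaussian) uncertainty distribution.
members = []
for _ in range(100):
    init = 15.0 + random.gauss(0.0, 0.5)        # initial-condition uncertainty
    coeff = max(0.0, random.gauss(0.05, 0.01))  # parameter uncertainty
    members.append(toy_forecast(init, coeff))

mean = sum(members) / len(members)
spread = (sum((m - mean) ** 2 for m in members) / len(members)) ** 0.5
print(f"ensemble mean {mean:.2f}, ensemble spread {spread:.2f}")
```

The spread across members is the useful output: it is an estimate of forecast uncertainty, which single-run forecasting could not provide.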
Years later, probably in the 90s or the early 2000s, there was a re-analysis, a re-prediction using current technology. And out of something like 100 runs, a few of the runs showed the severe storm. The conclusion was that the storm was simply very difficult to predict. Current technology would include it as a possibility, but both the observations and the technology didn't exist back in 1987. So what happens when we reach the end of predictability in a numerical weather model? Do the model weather and the observed weather diverge? Diverge means separate, go their own ways; it's sort of like a divorce. You're no longer able to predict whether next Saturday will be a good picnic day. Does the model produce garbage? Definitely not. It keeps on producing what looks like reasonable weather. And since a numerical prediction model doesn't have to include seasonality, it sort of continues on with the weather of a single month. But a person on a parallel Earth, see, there we get the parallel Earth, experiencing the model's weather couldn't tell, from just the weather they experienced, that they're not on our Earth. Yes, it's different weather, but it's valid weather. Yes, Vic, weather is a two-week term at most. Climate is long-term and generally taken to be 30 years. Part of the reason for the 30 years is that the models produce features like El Niño, or ENSO, the El Niño-Southern Oscillation. They don't produce those features at the same time that the Earth might produce them, but they do produce them with the right statistics. Because of ocean time scales, it takes about 30 years to determine the climate. That's why, during the strong El Niño, skeptics looked at a model result and said, well, the model isn't doing this. And that's because climate models are not tracking these ocean oscillations at the same time as the Earth, but they do produce them.
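The divergence described here is the hallmark of chaotic systems. A minimal sketch using the classic Lorenz 1963 system (not any actual weather model; the forward-Euler integration, step size, and step count are illustrative choices): two trajectories that start almost identically end up in completely different "weather", yet both remain inside the same bounded envelope.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(state, steps=5000):
    for _ in range(steps):
        state = lorenz_step(state)
    return state

a_end = run((1.0, 1.0, 1.0))
b_end = run((1.0, 1.0, 1.000001))  # a nearly identical initial condition

# The two "parallel Earths" have gone their separate ways...
separation = sum((p - q) ** 2 for p, q in zip(a_end, b_end)) ** 0.5
print(f"separation after 5000 steps: {separation:.2f}")

# ...but neither has wandered off the attractor into unphysical values.
print(max(abs(v) for v in a_end + b_end))
```

This is the same picture as the model weather after day 14: different from the observed weather, but still valid weather.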
I have a couple of talks in that notecard that show model patterns that are emergent phenomena of the model. The one by Steve Easterbrook shows model patterns and observed patterns at the same time, and they're not identical, because even at the same year, the climate model is more like a parallel Earth. So that last bit, about what happens when we run a weather model longer than two weeks, is a step toward climate modeling. So what's different about a climate model? Now we're doing the ensemble runs for 30 years each, or up to 100 years, rather than two weeks. We generally have less resolution on the horizontal grid, which also ties into less resolution on the vertical grid: 30 kilometers resolution instead of 12 or less. We now have to include the seasonal cycle. Since the Earth orbits around the Sun, it has summer at the North Pole and at the South Pole at different times. And the distance of the Earth from the Sun also changes slightly. The eccentricity, which is how the orbit differs from being circular, is approaching a minimum in about 2,000 years, and the current eccentricity is historically small. Instead of the land and ocean surface just affecting the atmosphere, we now have to consider the way the atmosphere affects the land surface coverage and the ocean. So there are new feedbacks the model now has to consider: melting ice, changing plant coverage, changing moisture patterns. Shadow, numerical weather prediction only goes out two weeks, so the concentration of CO2 is not going to change appreciably in those two weeks. But as model runs are updated over time, the concentration of CO2 has to be updated correspondingly. In climate modeling, we're generally looking at a scenario for trace gas emissions, and this includes CO2 and methane.
But the dynamical core, the radiation transport models, and the moist processes, evaporation, cumulus convection, we can basically inherit from a numerical weather prediction model. If they're working in numerical weather prediction, they're going to work in the climate model; in fact, it's easier because the resolution is less fine. When you get down below 10 kilometers resolution, there are some additional considerations in how things are treated. And for a global model, there aren't any edges. If you're running nested, limited-region models, there are edges, and basically the edges are handled by the less-resolved global model. And how do we know it works? Well, the radiation transport models are used both in the numerical weather prediction models, which, within their predictability, are successful, and in satellite retrievals. Radiosondes and local measurements are used to determine their accuracy. So, local measurements calibrate the radiative transfer model, and we know that works. Satellite retrievals have been validated by local measurements. Sub-modules in a climate model are compared with observations. And I've included in the note card links to ARM, the Atmospheric Radiation Measurement program, which was started in 1989 specifically to take coincident measurements of atmospheric properties, moisture, clouds, at multiple sites around the world, along with measurements of the sunlight and infrared radiation. That program has been used to look at model treatments and parameterizations within climate models. Another important source of observations is the biological fluxes of gases like methane and CO2 from agricultural fields and from feedlots; to cover that, I've included a link to Fluxnet and, more recently, an ecological monitoring system called NEON.
For the total model, there is statistical analysis of the model output and comparison of model control-run statistics with observed statistics. Does it capture the rainfall patterns right, and the seasonal temperature patterns? Is it evaporating the right amount of moisture? So the models are validated, in a sense, both locally in the sub-processes and by looking at the overall statistics of output runs. And finally, there's a process of model intercomparisons, sort of a bake-off, called CMIP, the Coupled Model Intercomparison Project. So, observation sets. I've been mentioning them: the Atmospheric Radiation Measurement program with the co-located measurements of atmospheric properties and how they affect the sunlight and infrared radiation transport. Fluxnet, field measurements of biological sources and sinks of gases; these are people going out into the field, building measurement towers, climbing up into plant canopies in the tropics and taking measurements up there. I read something recently about it taking an hour to climb into the canopy to do the measurements. I also heard a story told at a conference of a scientist getting an early-morning phone call to rescue his instruments because the sprinklers had been turned on; he was out there in the field taking out the instruments while getting sprinkled with reclaimed groundwater. We now have autonomous ocean buoys that can measure temperature and CO2 on floating platforms, and Landsat for surface-type characteristics. Plus, NASA has an Earth Observing System, EOS, that measures various properties of the atmosphere; these are your satellite measurements. I don't think I included that on the resource card, but I probably should have. So, Earth Observing System. So in summary, weather prediction is an initial value problem, and it depends very strongly on the current state of the atmosphere given to it at the beginning of a prediction.
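The control-run-versus-observations comparison works on statistics like the monthly climatology rather than on individual days. A toy version (synthetic data stand in for both the "observed" and "modeled" series; the amplitudes, bias, and noise levels are made up for illustration):

```python
import math
import random

random.seed(0)

def monthly_series(amplitude, mean, noise, years=30):
    """Generate a toy monthly temperature series: seasonal cycle plus noise."""
    series = []
    for year in range(years):
        for month in range(12):
            seasonal = amplitude * math.cos(2 * math.pi * (month - 6) / 12)
            series.append(mean + seasonal + random.gauss(0.0, noise))
    return series

def monthly_climatology(series):
    """Average each calendar month over all years: the statistic we compare."""
    years = len(series) // 12
    return [sum(series[m::12]) / years for m in range(12)]

observed = monthly_series(amplitude=10.0, mean=15.0, noise=2.0)
modeled = monthly_series(amplitude=9.5, mean=15.3, noise=2.0)  # a slightly biased "model"

obs_clim = monthly_climatology(observed)
mod_clim = monthly_climatology(modeled)
rmse = (sum((o - m) ** 2 for o, m in zip(obs_clim, mod_clim)) / 12) ** 0.5
print(f"climatology RMSE: {rmse:.2f} C")
```

The point is that the two series never agree day by day, or here month by month in a given year, yet their long-term statistics can be compared meaningfully.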
And the prediction, because of the sensitivity and non-linearity of the equations, has a limited range into the future before it diverges from what's going on on the Earth. In contrast, climate models don't care terribly about their initial conditions. They run until, spin-up is the word used, basically the initial conditions don't matter anymore, and it becomes an energy input and distribution problem, what's known as a boundary value problem. It's solving a different type of problem, though it's still producing what looks like weather. There hasn't been a way found to do that other than running a continual stream of weather prediction and looking at the statistics of where the model goes. So one looks at the long-term average of such weather, which is the climate. Both types of models are now run using ensembles of runs as well as runs by multiple models. And I hear these, or actually read them, on my NOAA forecast app on my phone when I look at the forecast discussion. The forecaster will mention specific models, like the ECMWF model predicting slightly warmer weather at the 850-millibar level. There are both differences and shared components between the models, and the shared components, in a sense, help validate the physics that's gone into the models. When we're modeling sub-grid processes, those are partly from theory. For instance, evaporation depends on the wind speed and the difference in saturated water vapor between the surface and the first atmospheric layer. So we include those theoretical concepts, with a constant which itself has a probability distribution. Modeling of sub-grid processes is partly from theory and partly from comparison with observations. And that's the talk. As far as other hard-to-predict storms that were missed by forecasts, I'd have to look at the specifics, and I haven't. I don't think weather models include earthquakes. Oh, chaos theory.
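The evaporation example above is the standard bulk aerodynamic formula, E = rho * C_E * U * (q_sat - q_air), where the exchange coefficient C_E is the "constant with a probability distribution". A sketch with typical values (the 1.3e-3 exchange coefficient is a common textbook value, an assumption here, not a number from the talk):

```python
RHO_AIR = 1.2  # near-surface air density, kg/m^3

def bulk_evaporation(wind_speed, q_sat_surface, q_air, exchange_coeff=1.3e-3):
    """Bulk aerodynamic evaporation flux in kg of water per m^2 per second:
    E = rho * C_E * U * (q_sat(T_surface) - q_air).
    C_E is the empirically tuned coefficient that ensemble runs perturb."""
    return RHO_AIR * exchange_coeff * wind_speed * (q_sat_surface - q_air)

# 8 m/s wind, saturated specific humidity 20 g/kg at the sea surface, 15 g/kg in the air
flux = bulk_evaporation(8.0, 0.020, 0.015)
print(f"{flux * 86400:.2f} kg/m^2/day")  # roughly equivalent to mm of water per day
```

Note that the theory supplies the form of the expression, wind speed times humidity difference, while observations supply the coefficient, which is exactly the "partly theory, partly observation" split described above.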
Well, it's chaos theory, and Ed Lorenz did a lot of the early work on this, that limits the predictability range in numerical weather prediction. That's the property that initially close initial conditions don't stay close, or don't have to stay close, and after a while go off on totally different paths; that's where I use the concept of parallel Earths. There's been modeling of other planets: Jim Pollack, no longer living, and Carl Sagan did atmospheric modeling of other planets. Basically, a climate model is operating within a strange attractor, which means the weather doesn't just go off into something unreasonable. It's not like extrapolating a polynomial outside of its range of fitting, where it may look fine within the original data range, but go outside it and you get really strange, meaningless stuff. A climate model stays within an envelope, even though model runs with different initial conditions or slightly different parameters won't have the same weather in the same place at the same time. So we're basically taking an ensemble of models on tours around the strange attractor. Ah, yes. Tag, I'm reading something that says, in some ways, the acceptance of the disinformation has little to do with actual climate and more to do with attachment to what people consider their way of life. And of course now it's also become sort of a political identity. And simply correcting information, without having become a trusted source of information, just doesn't work against identity politics. What's sad is that people are going to discover further on that the science was right after all. In some ways the COVID-19 pandemic is climate science on the short term, because we've reached a situation where epidemiologists and healthcare practitioners give information that gets contradicted by people who don't have the expertise and are believed, and those chickens are going to come home to roost, because the virus basically doesn't care.
You can infect people before you show any symptoms, which is one of the difficulties with this virus; you can't wait to quarantine people until they know they're sick. If each case infects more than one person, say two people, then the number of cases roughly doubles every infection period, until everybody has either had it or died. Part of the sheltering in place is to give time, but also an infected person has a limited time during which they stay infectious. Once that passes, they're no longer infectious, and if they haven't infected anybody, you're on the way to killing off an outbreak. So there's science there, and there's baloney, just as there is in dealing with climate science. Against the theory that climate science is just a way to get funding for doing climate science: there are a lot easier ways to make money than by writing climate research grants. It takes a lot of dedication, because for any proposal written, except by very well-known people, your chances of getting funding are something like 10%. Plus there's the idea that thousands of scientists are all able to plot a conspiracy together; scientists tend to be kind of individualistic. If millions die, it is as much human action as anything else. There's the old saying that God helps those who help themselves. And I think the inverse is true too: why rescue those who are insisting on willful ignorance and not in touch with reality? As far as viruses go, both pigs and bats have lowered immune systems, which means they can exist as carriers of viruses without becoming all that sick. And with the current COVID virus, it's thought that it was carried in bats and transferred into another animal, possibly a pangolin, before transferring into humans. Sometimes we're finding new viruses because we've pushed into natural terrain that we weren't in before, and there are species there that are carriers of viruses.
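The doubling arithmetic mentioned above is simple geometric growth (an illustrative function; real epidemics slow down as immunity builds and interventions take effect):

```python
def cases_after(periods, initial_cases=1, reproduction_number=2):
    """Cases after a number of 'infection periods' if each case infects
    reproduction_number others, with no immunity or interventions."""
    return initial_cases * reproduction_number ** periods

print(cases_after(10))  # prints 1024: ten doubling periods from a single case
```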
Viruses mutate regularly, so it's a matter of time before some manage to infect humans. Selling science is hard work, because people make a lot of decisions by intuition or gut feelings rather than rationally. There are programs for that; NASA has had some, and there's an organization of science educators who work to ensure that proper science is taught. Some sectors tend to look at climate change through risk assessment, such as insurance agencies, so they do have concern about sea level rise; Ontario has also tended to look at climate change as a risk-increaser. Similarly, Daniel Kahneman, from his book Thinking, Fast and Slow, has a lecture that's online. Yep, it's a classic. In influencing people, in a sense, you have to build trust, and unfortunately people are trusting those who don't have their best interests in mind. Well, thank you all for coming and participating and commenting and asking questions. And do use the resource note card if you want to look at things further. The two TED talks at the top of it are some great summaries. Gavin Schmidt makes the point that models have skill, and that's the real bottom line. It's not that models are correct, because in this venue of atmospheric modeling you're never going to be correct in the sense of a fixed engineering problem. But it is important that a model have skill and give you information that you didn't have before, which is what's happening. I'll see you here.