Hello, I'm Chancellor Rodney Bennett. Thank you for joining me for today's Nebraska Lecture. Since 2003, this distinguished lecture series has elevated some of the University of Nebraska-Lincoln's most notable scholars, researchers, artists, and thinkers: faculty who live our university's land grant mission every day and make a positive impact on our communities through their work. Committed to making Nebraska, our nation, and our world a better place to live, our lecturers are experts in their fields, devoted to mentoring and shaping future generations and solving the most pressing challenges our society faces. They truly embody the spirit of our flagship university's land grant promise to Nebraska and the world. I am pleased to be able to share this distinguished lecture series with the broader Lincoln community and beyond. It is truly a celebration of our three primary missions of research, teaching, and service. And I'm so proud of our faculty's accomplishments and their dedication. Thank you to the Office of Research and Economic Development, the University's Research Council, and the Osher Lifelong Learning Institute for partnering to sponsor this lecture series. I hope you enjoy today's lecture. So as you know, the Nebraska Lecture is our twice-a-year opportunity to shine a spotlight on faculty members who exemplify our vision of discovery, creativity, and innovation advancing the state, the nation, and the world. And as you saw in the video, the past lecturers truly reflect proudly on that vision. And so does today's lecturer. In fact, Adam Houston's topic is very timely considering that we are now in the unofficial kickoff of severe weather season here in Nebraska and the Great Plains. Dr. Houston is a professor of Earth and Atmospheric Sciences in the College of Arts and Sciences, and he is a national leader in researching how scientists might use drones to improve weather forecasting. 
He also leads the Severe Storms Research Group and is a principal investigator with Targeted Observation by Radars and UAS of Supercells, a multidisciplinary research team better known as TORUS. This project is funded by the National Science Foundation and the National Oceanic and Atmospheric Administration and is charged with using radars and unoccupied aircraft systems to study supercell thunderstorms. And in Nebraska, we all know what thunderstorms are. So if you are fascinated by severe weather, you will want to visit torus.unl.edu to access a wealth of data from severe storms in the region. TORUS has been featured in local, national, and international publications, including the New York Times, Omaha World-Herald, Gizmodo, WeatherNation, NET Nebraska, The Washington Post, and many others. Adam joined the UNL faculty in 2006. Previously, he served as both a visiting assistant professor and a postdoctoral research assistant at Purdue University. He received his PhD in atmospheric sciences from the University of Illinois at Urbana-Champaign and his bachelor's degree in meteorology from Texas A&M University. And as the chancellor mentioned, we definitely are grateful to the chancellor's office and the Osher Lifelong Learning Institute, which partner with us and our Research Council to sponsor the Nebraska Lecture. So after Adam concludes his lecture, we will have a brief Q&A session. So start thinking about your questions, and I will join Adam back here at the podium at the end of his presentation. So please join me in welcoming Dr. Adam Houston. Thank you for the nice introduction and, of course, thank you to the Research Council for allowing me to give this talk. I'm truly honored. I think this is a fantastic lecture series, and I think the chancellor's office, the Office of Research and Economic Development, and the Osher Lifelong Learning Institute should be given a lot of credit for it. So I wanted to start off with a bit of a story. 
A few months ago, I went into a local business and chatted up the person behind the desk, and she asked what I did. I said I was a professor of atmospheric science, and she said, oh, that's interesting, at some point I was interested in becoming a meteorologist. You don't often hear that. So I was like, oh, that's interesting. And her reasoning was that she thought it'd be great to have a job where she could be wrong all the time and still get paid. Now, I've heard this enough that I'm mentally braced for such a response, and I was kind in my response. But it raises the question: are we really wrong that often? And of course, she was not the only person to say something like that. Perhaps the most famous fictional TV meteorologist, Bill Murray from Groundhog Day, commented on Twitter: fool me once, shame on you; fool me twice, shame on me; fool me 350,000 times and you are a weatherman. So it's a trope that meteorologists unfortunately have to endure. But again, the question: are we really that bad? The short answer is no, but let me show you why we're not actually that bad. So this is a plot, and there'll be a few plots in this talk, but in this one, you can all see that the lines are trending down. These are the errors in temperature between what was forecast and what actually happened, collected by the Weather Prediction Center, a group associated with the National Centers for Environmental Prediction. The fact that they're all trending down is good. It means that over time, the errors in our forecasts are going down. So they're getting better. The other thing that's interesting: you'll notice the black line up there, that's the seven-day forecast. The accuracy of the seven-day forecast now is about as good as the three-day forecast back in the 80s. So we're doing better. And actually, if you look at the raw numbers, three degrees Fahrenheit is not too bad of an error. 
There are other ways that we can show this. So this is another parameter that's forecast, another metric used for quantifying the error. The takeaway is that three-day forecasts nowadays are accurate about 97% of the time. For comparison, and this is just one comparison, the accuracy of financial market predictions, according to a fairly recent study by financial experts, is on average 48%. That's worse than just guessing. Now, I admit that there are big differences between financial market predictions and weather forecasts, but the takeaways are these: number one, weather forecasts are actually pretty good; they're getting better; and they're better than a lot of other forecasts that we rely upon. Now, I don't wanna make it sound like I'm just patting weather forecasting on the back, because the reality is forecasts aren't always perfect. And the thing about weather forecasts is sometimes when they're wrong, they can result in death and destruction. Okay, so let's talk about death a little bit. We have to, because weather fatalities are part of the challenge and part of the motivation for a lot of the work that we as atmospheric scientists do. So this is just 2022. There are some 10-year and 30-year averages on here as well. A couple of takeaways from this. Number one, overall, hundreds of people, approaching 1,000 every year, are killed by weather. The other thing that's interesting is that extreme heat kills more people than most other weather phenomena. But the main point here is not that all of these deaths are because of poor forecasts. There are a lot of things that cause fatalities due to weather. In fact, there's a whole talk that could be given, not by me, about the social science aspects of disseminating and responding to weather advisories. So that's part of this, but weather forecast errors are an issue. 
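To make the verification idea concrete, here is a minimal sketch, with invented numbers rather than the Weather Prediction Center's actual data, of the kind of error statistic behind those trend lines: the mean absolute error between forecast and observed temperatures.

```python
# Illustrative only: mean absolute error (MAE) of a temperature
# forecast, the kind of metric behind verification trend plots.
# All numbers below are made up for illustration.

def mean_absolute_error(forecast, observed):
    """Average magnitude of forecast error, in the same units as the data."""
    return sum(abs(f - o) for f, o in zip(forecast, observed)) / len(forecast)

# Hypothetical high-temperature forecasts vs. what actually occurred (deg F)
forecast = [72, 75, 80, 78, 69, 71, 74]
observed = [70, 77, 83, 78, 66, 73, 75]

print(round(mean_absolute_error(forecast, observed), 2))  # prints 1.86
```

A downward trend in this number over the years is exactly what the plot in the talk shows.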
In fact, a recent study showed that forecasts that were one degree too cool on hot days corresponded to a 3% increase in deaths. So there's a very real impact from forecast errors on fatalities. In this talk, I wanna convince you that there are opportunities to improve weather forecasts. I also wanna talk a little bit about how observations of the atmosphere are really critical for improving weather forecasts. And of course, as the name of the talk suggests, we'll talk a little bit about drones and how drones might be used to fill in the missing pieces that could lead to improvements in weather forecasting. Okay, so to talk about weather forecasting, we need to talk a little bit about the primary components that make it up. I call these the four pillars of weather forecasting. They are, so you don't have to tilt your head: fundamental knowledge, environmental observations, heuristics, and numerical weather prediction. Fundamental knowledge: basically, you can't forecast something well if you don't understand it, so fundamental knowledge is obviously an important part of this. Environmental observations: these are observations of the state of the atmosphere. Heuristics are tools and tricks that forecasters use, drawing on environmental observations and other sources of data, to forecast. And numerical weather prediction models are the complex fluid dynamics models that we use for prediction. I'm gonna talk a little bit about numerical weather prediction, because some of the significant increases in forecast accuracy over the last few decades can be attributed to improvements in numerical weather prediction. Okay, so a numerical weather prediction model, as I said, is a complex fluid dynamics model that predicts the state of the atmosphere into the future. Essentially, a three-dimensional grid of points is imposed upon the area for which you're forecasting. It could be the United States, it could be the entire globe. 
And at every single point in this grid, the processes responsible for the evolution of the atmosphere are predicted. A bunch of equations are used to do this, so essentially, at every single one of the points that make up this grid, all of these calculations are undertaken. And there are a lot of calculations, because there are millions of points that make up this grid. The result is a forecast of the three-dimensional state of the atmosphere into the future. In this case, this is pressure and temperature, but the point is that it is the four-dimensional, three dimensions of space and one dimension of time, state of the atmosphere. Three things limit the accuracy of numerical weather prediction models. First, the spacing between the grid points. The idea here is that if you mash grid points together very closely, you can resolve smaller-scale structures, smaller-scale phenomena. The second is computing power. With all these grid points that need calculations done on them, you need powerful computers. The third is how well the current state of the atmosphere is observed. Now, the first two are related: you could mash together all the points and have a very fine grid, but it requires a lot of computing power, so there's a trade-off. And the good news is that over the last century, computing power has increased pretty much exponentially. So we are to the point now where we can resolve very small processes using numerical weather prediction. This is a simulation by Leigh Orf at the University of Wisconsin, and it is of a thunderstorm, actually a supercell thunderstorm. And the fidelity is phenomenal. I mean, this is a very high-resolution model. This is basically a top-down, almost top-down kind of side view of this thunderstorm. And even a layperson can look at this and go, hey, that's a thunderstorm. 
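For a sense of what "calculations at every grid point" means, here is a drastically simplified sketch. Real NWP models solve the full three-dimensional primitive equations on millions of points; this toy version just carries a one-dimensional temperature field along with the wind using an upwind finite-difference scheme, applying the same update rule at every point of the grid.

```python
# A drastically simplified sketch of what an NWP model does: march a
# field forward in time by applying an equation at every grid point.
# Here we advect a 1-D temperature field on a periodic domain with an
# upwind scheme; real models do far more physics in three dimensions.

def step(temps, wind, dx, dt):
    """One forward-in-time, upwind-in-space advection step (periodic domain)."""
    n = len(temps)
    c = wind * dt / dx  # Courant number; must stay <= 1 for stability
    return [temps[i] - c * (temps[i] - temps[i - 1]) for i in range(n)]

# A warm bump on a 10-point periodic grid, carried along by a steady wind
temps = [20.0] * 10
temps[3] = 25.0
for _ in range(4):
    temps = step(temps, wind=10.0, dx=10_000.0, dt=500.0)  # c = 0.5
# The bump drifts downwind and smears out (numerical diffusion),
# just as coarse grid spacing smears out small-scale weather features.
```

The stability constraint tying the time step to the grid spacing is one concrete reason finer grids demand so much more computing power.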
The problem with this is that it is not a forecast. It's a prediction, but it's not a forecast. The distinction here is that this thunderstorm never existed in reality. It's a representation of a thunderstorm, and a very accurate representation, one that we can use to conduct numerical experiments, to expose the fundamental processes, to control the morphology and strength of the storm, but it will never be used, at least in the near future, to actually forecast the state of the atmosphere. One of the reasons is that this simulation took many weeks to create. It's a two-hour-long simulation. That is, the state of the atmosphere was initialized, and two hours later he had the end of the simulation, but it took many, many weeks to get it. If you're trying to make a forecast two hours into the future and the forecast doesn't even arrive at your door for a month, that's not a useful forecast. The other reason this is not an actual thunderstorm that ever existed is that the environmental conditions used to initialize it were very idealized. They were realistic, but they did not have the heterogeneities, the reality of the environment, that you really need to capture in order to simulate the state of the atmosphere accurately. And that last point is an important takeaway here: the current state of the atmosphere is really important to know in order to accurately forecast the weather. In fact, if we return to the four pillars of weather forecasting, fundamental knowledge, environmental observations, heuristics, and numerical weather prediction, environmental observations actually control two of the others, heuristics and numerical weather prediction. So yes, there are four pillars, but the reality is that environmental observations are perhaps the most important of the four. Now, the environmental observations required to forecast anything in the atmosphere depend on what you're trying to forecast. 
So here's an example: the United States, where all those contours are isobars, lines of constant pressure, and this thing here, this guy, is a mid-latitude cyclone, an extratropical cyclone, a low pressure center. We've probably all seen these on weather maps; they usually have an L associated with them. And what you can tell from this is that it's pretty big; it spans multiple states. In fact, most of the southern part of the country is occupied by this mid-latitude cyclone. In contrast, if one were trying to predict this tiny little thunderstorm right here in northwest Texas, which I've zoomed in on here, the observations required are very different. Basically, what we can say is that the density of observations required to forecast the weather scales inversely with the size of the phenomenon being forecast. In other words, a big feature like a mid-latitude cyclone doesn't require the density of observations required for a thunderstorm. For a small-scale phenomenon, you need a lot of observations packed in to actually represent the environment and get a good forecast. Okay, so this brings me to severe thunderstorms. This is a focus of much of my research. Severe thunderstorms are not just your ordinary thunderstorms; they're thunderstorms that produce either a tornado, big hail, or strong winds. And the problem with thunderstorms, number one, is that they're small. Now, if you looked at this thing coming towards you, I doubt anyone with any sense would say that it is small. It's much bigger than you, it's much bigger than the car, it's much bigger than a house; it's frightening. But the reality is that relative to other geophysical phenomena, other atmospheric phenomena, this is actually pretty small. Remember that picture of the mid-latitude cyclone, which spans multiple states? This guy doesn't even span an entire county. 
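The inverse scaling between feature size and observation density can be illustrated with a back-of-envelope, Nyquist-style calculation: to resolve a feature, station spacing must be no more than about half the feature's width. The sizes below are round illustrative values, not an operational network design.

```python
# Back-of-envelope illustration of the scaling claim: observation
# spacing must be at most about half the width of the feature you
# want to resolve (a Nyquist-style argument). Numbers are rough,
# round values chosen purely for illustration.

def stations_needed(domain_km, feature_km):
    """Stations along one side of a square domain, squared for the area."""
    spacing = feature_km / 2.0          # max spacing that still resolves the feature
    per_side = int(domain_km / spacing) + 1
    return per_side ** 2

domain = 1000.0  # a Plains-sized square domain, in km
print(stations_needed(domain, feature_km=1000.0))  # cyclone scale: prints 9
print(stations_needed(domain, feature_km=10.0))    # storm scale: prints 40401
```

A handful of stations suffices for the cyclone; the thunderstorm demands tens of thousands over the same area, which is the heart of the observation problem for severe storms.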
So we know that part of the problem is that the observation density has to be really high. But the other problem goes back to the work of Ed Lorenz. Back in the mid-20th century, Ed Lorenz showed that small-scale phenomena have an intrinsic limit to their predictability. In other words, it's possible to reasonably accurately forecast a mid-latitude cyclone seven days in advance, right? We can say the mid-latitude cyclone in seven days will be here, and it'll be about this intensity. There is absolutely no way to do that with thunderstorms, regardless of the environmental observations you have available. There is an intrinsic limit to the ability to forecast small-scale phenomena. So that's a problem. The other problem with severe thunderstorms is that they have an outsized risk. They're small, but they have a big impact. Returning to these fatalities from 2022 (these are all fatalities due to natural hazards), over 50% of them are attributable to thunderstorms. So clearly thunderstorms have a big impact, but they're really small. In summary, what we can say is that the problem with severe storms is that they're small, therefore they require higher observation density; they have intrinsic limits on their predictability; and they have a big impact. This is a very challenging phenomenon to deal with. Okay, so I've been alluding to the idea that observations are important. Let's talk a little bit about observations. Generally, we classify observations by two things: number one, what's being measured? Is it temperature or moisture or winds or pressure? And also, where is it being measured? So this is an example of an ASOS station, an Automated Surface Observing System station. You can see these in lots of places in the United States; they measure temperature, humidity, pressure, and winds. 
Temperature is measured at two meters above the ground, winds are measured at 10 meters above the ground, but generally at the surface, right? So very close to the surface. This is an example of a weather balloon. That's a helium-filled balloon, sometimes they're filled with hydrogen, but it's basically a buoyant gas that lifts this sonde, which measures temperature, humidity, pressure, and winds, now not just at the surface but in a vertical profile through the atmosphere. Commercial aircraft actually have sensors on board that measure temperature, humidity, pressure, and winds. So every time a commercial aircraft takes off, lands, or is at its cruising altitude, it's collecting data that are sent to a repository. And those observations are important for weather forecasting. And then we have satellite data. With satellite data, obviously these are clouds, but you can also track clouds. There are routines that track the clouds, and from them you can infer wind speed and direction. This is just a sampling; there are other observations out there, but these are some of the most common. If we were to plot the surface observation network, and these are actual positions of actual observations, it would look like this. And you can tell that in parts of the country it is very dense: the eastern part of the United States and parts of the central United States have very dense observations, probably good enough to forecast thunderstorms. But there are other parts of the country that do not. In the Mountain West, there are clearly huge gaps in our surface observing network. This picture changes significantly if we go just 500 meters up in the atmosphere. This is the density of observations 500 meters above the ground. That's only like 1,500 feet above the ground, and yet this is what we have: radiosondes, the balloon-borne sensors, contributing to this, and commercial aircraft as they take off and land, obviously from the major airports. 
There are some radars here that point vertically and get vertical profiles of wind, no temperature or humidity, but at least wind. There are a few profilers. Interestingly, this map shows the radiosondes, the ones I've just blown up into red, and those are only launched nominally twice a day. So for most of the day, this is what the pattern looks like. Radiosondes are not launched very often, and they're not launched from very many sites. And so our picture of what's happening just 500 meters above the ground is much different than what we see at the surface. Now, if we go higher in the atmosphere, say 5,000 meters and higher, the picture changes significantly. You have commercial aircraft at their cruising altitudes, and you have satellite-derived winds. So basically we have this gap in our observations that is pretty huge. So why does the lower atmosphere matter that much? Well, for one, it's where we live, but for another thing, the lower atmosphere is actually the source of thermal energy for the rest of the atmosphere. There's a saying in the atmospheric sciences that holds true, which is that the atmosphere is heated from the ground up. The sun doesn't actually heat the atmosphere very much at all. The sun heats the ground, which converts visible radiation to infrared radiation, which then heats the air right above it, and then the lower atmosphere communicates that energy higher up. So the sun doesn't really heat the atmosphere directly; it heats the ground, which heats the lower atmosphere, and then that energy is spread vertically. Same thing with water vapor: the source of water is surface water, rivers, lakes, oceans, groundwater that is pumped out of the ground and used for irrigation. That water evaporates and creates water vapor. So the source of water vapor is in the lower atmosphere, and then that's communicated deeper into the atmosphere. 
And the other thing about the lower atmosphere is that it's highly variable, much more variable than the mid-troposphere. So for these reasons, we really care about the lower atmosphere. And this gap, between about 10 meters above the ground and several kilometers, is what we call the lower atmosphere data gap. It is a blind spot for weather forecasting. So how do we fill it? Well, one possibility is drones. Okay, so what's a drone? I mean, most people, I think, are at least somewhat familiar with drones, but strictly speaking, a drone is any reusable unoccupied aircraft piloted remotely or operating autonomously. It goes by a whole collection of other names: unmanned aircraft system, unoccupied aircraft system, uncrewed aircraft system, unmanned aerial vehicle, remotely piloted aircraft. In this talk, I'm just gonna call them drones. Now, drones are typically categorized in two ways. The first is by how they get lift. One way is with rotors. The multi-rotor aircraft up here in the top right are what we call multi-rotor or rotary-wing aircraft. The ones on the bottom row are called fixed-wing aircraft. They're more like typical aircraft; their lift is achieved by flow over the wing. The other way that we categorize drones is by size. Clearly, this one in the middle you can hold with one hand, and the one on the bottom right is basically the size of a commercial aircraft. Now, small aircraft, both fixed-wing and rotary-wing, have been used in the atmospheric sciences for research purposes. There are examples here: the top left is some work that we did to study severe storms. Unoccupied aircraft have been used to study the Arctic, fluxes off of ice versus open water. Drones have been used to survey tornado damage. Drones have been used to study the atmosphere's response to different land characteristics. 
So this is just a smattering, a small representation, of all the ways that drones have been used for atmospheric science. One example is TORUS. TORUS is an example of how we have used drones and other platforms to study the atmosphere. TORUS stands for Targeted Observation by Radars and UAS of Supercells. I was the lead investigator, but it involved a number of other institutions: the University of Colorado, Texas Tech, and the Cooperative Institute for Severe and High-Impact Weather Research and Operations. It was funded by NSF and NOAA. And we used drones. This is the drone: we used the RAAVEN, a fairly small fixed-wing drone. We also used mobile mesonets and mobile radars. We actually had a manned aircraft, not in the storm, but outside of the storm. And we had three years of field work. We targeted 46 supercells. The focus of this research was on supercells and tornadoes. We all know what a tornado is. A supercell is a thunderstorm with storm-scale rotation that we call a mesocyclone. Literally, the entire thunderstorm is rotating. And the reason we care about supercells is that they produce nearly all of the significant tornadoes and nearly all of the significant hail. So clearly, supercells are hazardous, which is one of the reasons we use drones as opposed to manned aircraft. You're not gonna fly a manned aircraft into a supercell, particularly at the altitudes at which we're operating. The other reason is that while there are other platforms that can be used to collect observations above the surface, even inside of a storm, most of those you have no control over. It's like a balloon that is released into the storm carrying a little sonde package. Those are good, we use them as well, but you have no control over them. So if you want targeted observations within the storm on a particular phenomenon, you have to use something that you can control. This is a video from the 27th of May of last year of operations underneath a supercell. 
Now, to be clear, TORUS's main focus was on the leftmost pillar of forecasting: advancing fundamental understanding of supercells and tornadoes. However, some of the cases that we collected during TORUS illustrate how we might use drones to directly impact weather forecasting. And I'm gonna talk about some examples now. One of them is from the 28th of May in 2019. You can see the evolution of the storms on the left; all the little dots moving around are TORUS assets. The interesting thing about this case is that there were two supercells, the northern one and the southern one, and only the southern one produced a tornado. In fact, it produced an EF2 tornado near Tipton, Kansas. Our analysis of this case revealed that the reason, we think, the reason the evidence suggests, that the supercell in the south produced a tornado is that it had access to a very narrow ribbon of environmental air that was particularly volatile, particularly energetic. It had conditions that were favorable for tornadoes, whereas most of the rest of the environment around it didn't. And this narrow ribbon of air was ingested by the storm, and because that air had conditions supportive of tornadoes, the storm produced a tornado. The interesting thing about this thing we call a MAHTE, a mesoscale air mass with high theta-E, is that this narrow ribbon wasn't sampled by any of the conventional observations that were in place in northern Kansas. It was completely missed. It's pretty small, and that's the main reason. But what's also interesting is that even when we saw it, we didn't see it at the surface. And so this plot here on the left shows a variable that we call equivalent potential temperature, theta-E. What this plot illustrates is the vertical profile, so this is height versus the value of theta-E. Theta-E is a measure of the potential energy of the atmosphere. High values of theta-E mean there's lots of potential energy. 
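For readers curious how theta-E is actually computed, here is a sketch using a common simplified approximation; published analyses typically use more exact formulas (e.g., Bolton's), and the input values below are hypothetical.

```python
import math

# A rough sketch of equivalent potential temperature (theta-E) using a
# common simplified approximation, not the exact formula used in the
# TORUS analyses. Inputs: temperature T (K), pressure p (hPa), and
# water-vapor mixing ratio r (kg/kg). All values here are hypothetical.

def theta_e(T, p, r):
    Rd = 287.0    # gas constant for dry air, J/(kg K)
    cp = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)
    Lv = 2.5e6    # latent heat of vaporization, J/kg
    theta = T * (1000.0 / p) ** (Rd / cp)       # potential temperature
    return theta * math.exp(Lv * r / (cp * T))  # moisture raises theta-E

# Two hypothetical air masses at the same temperature and pressure:
# the moister one has the higher theta-E, i.e., more potential energy
# available if a storm ingests it.
drier = theta_e(T=300.0, p=950.0, r=0.008)
moister = theta_e(T=300.0, p=950.0, r=0.014)
```

This is why theta-E is a useful single number: it folds heat and moisture together into one measure of how energetic an air mass is.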
If that potential energy is ingested into the storm, it can be converted to kinetic energy and add vigor and severity to the storm. Okay, so if you look at just the surface, just the lowest levels, this orange profile here represents the air ahead of the MAHTE, and this red profile represents the air inside the MAHTE. And if you look at just the surface, it looks like there's really not even a MAHTE there. There's no significant difference in theta-E. In fact, theta-E was higher in the air mass ahead of the MAHTE. But if you look at a deeper profile, that's where the MAHTE shows up. This region of higher potential energy doesn't even show up at the surface. And so this is an example where important small-scale features are completely missed by the current observing network. And some of these features exist only in that gap. Remember that gap from 10 meters to several kilometers? This MAHTE existed in that gap, not at the surface. This actually isn't the first time we've seen this. We had another project, not TORUS, back in 2018 in the San Luis Valley of southern Colorado. In this project, a bunch of institutions, including the University of Nebraska, flew drones in this valley. And the reason we chose this valley is that the western part of the valley is irrigated and the eastern part is not. So we wanted to ask the question: what happens to the atmosphere over the irrigated versus non-irrigated parts of this valley? We flew drones, a bunch of them, some over irrigated land, some over non-irrigated land. And we looked at the vertical profiles of temperature, humidity, and theta-E, that measure of potential energy. What we would expect is that over the irrigated parts of the valley, where groundwater is being pumped up to the surface and the water is evaporating, we'd see higher values of water vapor, and because evaporation cools the air, we would expect the temperatures to be cooler. 
And in fact, that's what we see. The vertical profile of temperature over the irrigated land is cooler than the vertical profile over the non-irrigated land. So that's consistent with what we would expect. If we look at water vapor, literally the amount of water in its gas state, the non-irrigated part of the valley is drier and the irrigated part is more moist. But that changes about 200 to 300 meters above the ground. So yes, it is more moist over the irrigated land compared to the non-irrigated land, but not aloft. And when you combine these two into this thing called theta-E, the equivalent potential temperature, which is a measure of the potential energy, we see that at the surface, it looks like the irrigated part of the valley has more energy than the non-irrigated part, but aloft, that picture completely flips. And so the potential for thunderstorms to gain kinetic energy from this air mass changes depending on whether you focus just on the surface or look higher up in that gap. And the last example I'm gonna talk about is from June 8th of 2019. This is a supercell that formed near Goodland, Kansas. In fact, two supercells formed. The first one formed and then dissipated. The second one formed and produced multiple tornadoes. So my PhD student Matt Wilson took this case, took all of the data that TORUS collected, and assimilated it into a numerical weather prediction model to see whether all of these supplemental observations could impact the forecast of this storm. And the takeaway is: yes, they did. These orange bars, these are time series plots, represent the times when the forecast with the supplemental observations was better than the forecast with just the conventional observations. And you can see that there are significant periods when the forecast improved through assimilation of these supplemental observations. So taking these supplemental data and using them to supplement the conventional observation network improved the forecast. 
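The core idea behind assimilating a supplemental observation can be sketched in a few lines: blend the model's background guess with the observation, weighting each by its error variance. Real NWP assimilation (ensemble Kalman filters, variational methods) does this jointly for millions of variables; the numbers below are invented for illustration.

```python
# The simplest possible sketch of the idea behind data assimilation:
# blend a model background value with an observation, weighting each
# by its error variance. Real systems do this for millions of
# variables at once; every number here is invented.

def assimilate(background, obs, bg_var, obs_var):
    """Optimal blend of one background value and one observation."""
    gain = bg_var / (bg_var + obs_var)  # trust the obs more when the background is uncertain
    return background + gain * (obs - background)

# Hypothetical numbers: the model background says 28 C, a drone
# measures 31 C. The drone has the smaller error variance, so the
# analysis lands closer to the observation than to the background.
analysis = assimilate(background=28.0, obs=31.0, bg_var=4.0, obs_var=1.0)
```

When the supplemental observation is accurate and the background is uncertain, as in the lower-atmosphere data gap, the observation pulls the analysis strongly, which is why those orange bars show forecast improvement.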
And this isn't the first time this has been shown. Going back to that project in 2018 in the San Luis Valley, Anders Jensen at the National Center for Atmospheric Research assimilated those data into a numerical weather prediction model and showed that the drone observations improved forecasts of thunderstorms. Leuenberger et al. focused on fog in Switzerland. They took drone observations, assimilated them into a model, and found that fog forecasts are better when drone data are used. So clearly there's evidence that drone data can be impactful. And what's interesting is, with the exception of the Leuenberger work, all of these other projects weren't focused on answering the question: how do drones impact weather forecasting? Almost all of them were focused on advancing fundamental understanding of some aspect of the atmosphere, and the drone observations were essentially observations of opportunity. We threw them into a model, and sure enough, they improved the forecast. But what would happen if we developed a network of drones that was intentionally designed to improve weather forecasting? What would that look like? Well, one possibility is that we would identify a region where there's high forecast uncertainty or a high likelihood of a significant impact from the weather. We scramble drones into this region, they sample the air mass, we assimilate those data into numerical weather prediction models, we send those data to the National Weather Service or other agencies, and then hopefully the forecast is improved, because these data would complement the existing network of observations. 
Alternatively, we could just throw all the drones possible at the problem, essentially cover the entire United States with drones that are constantly profiling. That would be kind of like the ASOS network, the surface observation network, but now not just at the surface but also above it. Take those data, assimilate them into a numerical weather prediction model, give them to forecasters, and see what the impact would be. The problem is that really what we wanna determine is whether it's worth it to make the billion dollar, multi-billion dollar investment in a network of drones like this. But to prove that, we would have to make the multi-billion dollar investment in the drone network, right? We would have to put all these drones out there and prove that they have an impact. But there's another way to do it, and that's with OSSEs, Observing System Simulation Experiments. This is a really unique way of answering the question, what might the impact of a new observing system be on forecasts, particularly numerical weather prediction? With an OSSE, synthetic observations are used, not real observations. And the way it works is this. We take a really high resolution simulation, one that you couldn't actually use to create a forecast, because this particular simulation took many weeks to be created, right? So it does not have any utility for an actual forecast. What it does, though, is it serves as the truth. This is called the nature run. Essentially, it is a surrogate for reality. It has all of the state variables, and it has them on a four-dimensional space and time grid. And so we use that as truth. Then we take conventional observing systems like ASOS, like commercial aircraft, like satellites, and we operate these inside the nature run. So we place an ASOS station, a surface observing station, inside the nature run, and it collects, there are a lot of air quotes with OSSEs, it collects the observations in this simulation. We fly, okay, I'm gonna stop with the air quotes.
We fly commercial aircraft in the simulation and they collect data. You operate satellites, you operate radiosondes, and so you are creating synthetic data within this nature run. And then you fly something new, or you operate something new, drones for example. So you operate these drones within the nature run, collect the data, and then you assimilate all of these data into an actual forecasting system, one that is typically used on a day to day basis. And you see what the difference is between including the new observations and not including them. You compare the simulation accuracy without the supplemental observations to the simulation accuracy with them. And if there's a big impact from this new observing system, you can make a case that this is a valuable contribution to weather forecasting. And we're doing this right now. In collaboration with the Global Systems Laboratory, Shawn Murdzek and Tara Ladwig, we are conducting these tests. We're trying to see the impact of drones on numerical weather prediction. And you can do some crazy things. Because you don't actually have to fly any drones, you could put drones every 35 kilometers across the entire United States. This is thousands of drones. Or you could say, all right, what would happen if you reduced that to 75 kilometers, or you increased that spacing, reduced the density, to 150 kilometers? How does the accuracy of the forecast change as you change the configuration of the system? You could fly the drones not just in vertical profiles, but horizontally, so essentially map out a planar space, and see how that affects the forecast. You could replace all the radiosondes with drones. Remember, radiosondes are operated twice per day. But what if we used drones and operated them every two hours? How would that change the forecast? So the beauty of the OSSE is you could test some of these really important questions without making the multi-billion dollar investment in actually building the network.
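The OSSE loop just described, a nature run as truth, synthetic observations sampled from it, assimilation, then a comparison of forecast error with and without the new observing system, can be caricatured in a few lines. Everything here is a toy stand-in: a sine wave for the nature run, a biased copy for the first guess, and direct insertion for the assimilation scheme, not the actual GSL system.

```python
import math
import random

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def observe(truth, sites, err, rng):
    # "Fly" an observing system inside the nature run: sample the truth
    # at the given grid points, plus simulated instrument error.
    return {i: truth[i] + rng.gauss(0.0, err) for i in sites}

def analyze(background, obs):
    # Toy assimilation: direct insertion of observations into the first
    # guess (a stand-in for a real variational or ensemble scheme).
    return [obs.get(i, b) for i, b in enumerate(background)]

rng = random.Random(42)
n = 200
truth = [math.sin(i / 10.0) for i in range(n)]   # the "nature run"
background = [t + 0.5 for t in truth]            # biased first guess

conventional = list(range(0, n, 40))             # sparse conventional sites
drones = list(range(0, n, 5))                    # dense drone profiles

ana_conv = analyze(background, observe(truth, conventional, 0.05, rng))
ana_both = analyze(background, observe(truth, conventional + drones, 0.05, rng))

print("conventional only:", round(rmse(ana_conv, truth), 3))
print("with drones:     ", round(rmse(ana_both, truth), 3))
```

Because the dense "drone" network corrects far more of the biased background, its analysis error is lower; a real OSSE runs the same with-versus-without comparison using full NWP models and realistic observation-error characteristics.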
This information can be given to policy makers, people who are actually responsible for changing the observation network. It can be given to engineers. So if it's really important that we fly where the winds are 100 knots, we probably need to modify aircraft, but that's an engineering problem, and engineers can then be tasked to make those types of aircraft, to configure the aircraft to satisfy those tolerances. Okay, so I'm a big fan of OSSEs, but OSSEs don't answer two really important questions. One of the questions is, how do you deal with the regulatory environment? Matt Waite gave a Nebraska lecture back in 2017, a really interesting lecture, and one of the foci of his talk was how stupid the regulatory system was back in 2017. And it was. Now that was seven years ago and a lot has changed. Fortunately, the regulatory environment has changed significantly in that timeframe. And as evidence, we're able to do TORUS. Not just my group, but other groups have been able to use UAS in the national airspace system to do really important research. But the thing is, any vision for using drones to modernize the weather observing network requires what's called beyond visual line of sight operations. In other words, the aircraft has to be operating where a person who can take control of the aircraft cannot see it. They cannot see it with their naked eye. So imagine scrambling drones from major cities to a region where there is high forecast uncertainty. No one's gonna see those drones. That is, the person who's responsible for controlling a drone cannot see that drone. There are ways to solve this. One of the ways is to actually put transponders on every drone that is being used for weather observation. Transponders, ADS-B for those who know, are on all cooperative aircraft. These transponders tell other aircraft where an aircraft is, and they tell air traffic control where that aircraft is.
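The deconfliction logic behind both ideas, ADS-B tracks or a local radar triggering a landing, reduces to a geometric check. This sketch is purely illustrative: the protective-cylinder thresholds are made-up numbers, not FAA separation standards or any real system's values.

```python
import math

# Illustrative thresholds only; not actual regulatory separation values.
HORIZ_KM = 5.0
VERT_M = 300.0

def should_land(drone, tracks, horiz_km=HORIZ_KM, vert_m=VERT_M):
    # Command a vertical-profiling drone to land if any ADS-B or radar
    # track enters a protective cylinder around its location.
    # Positions are (x_km, y_km, altitude_m).
    x0, y0, z0 = drone
    for x, y, z in tracks:
        if math.hypot(x - x0, y - y0) <= horiz_km and abs(z - z0) <= vert_m:
            return True
    return False
```

For example, a track 3 km away at a similar altitude would trigger a landing, while one 20 km away, or directly overhead but several hundred meters higher, would not.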
This is a map of all the aircraft at a given time using these transponders. Another way to do this, for vertical profiling drones, ones that are just going up and down over a given point, is for a radar to be observing that airspace. And if a general aviation aircraft, either a cooperative or uncooperative aircraft, comes into that airspace, the drone can land, and so you avoid any collisions. Both of these are theoretically possible, but neither of them has gotten full approval from the FAA. The other issue that OSSEs can't answer is the public perception of drones. And for this, we actually have good news, and that is that people generally feel positively toward drones used for weather research purposes and toward other drone uses that are perceived as being for the public good, such as drones used for firefighting and agriculture. Now, this isn't a pass. We can't just say, oh, we're good, the public's gonna love what we're doing. For one, there are issues with noise pollution. There are issues with privacy that we cannot ignore. The other thing is we need to make sure that people know what the drones are being used for. So if a drone is flying over your head, it won't be right over your head, but I mean, if it's flying above you, you should know that that aircraft is going to a region where thunderstorms might form, and it's going there to take observations that might improve the forecast of said thunderstorms. Okay, so I'm gonna wrap up this talk with a bit of a pivot, but it's definitely relevant to some of the themes that I've already talked about. Anyone who's gotten funding to support technology development knows the term readiness level. Any government agency that funds work to develop a new technology requires that you declare what your current readiness level is and, most importantly, where you expect the readiness level to be once you have used their money. Readiness level is a spectrum.
It basically is a mapping of where a technology is, from initial inception to final deployment. And so what they often want you to do is say where you are now and where you are going to be, and they expect that you'll have a higher readiness level once you have spent their money. This is a good thing. I mean, the readiness level system makes sense. However, there are often cases where it stands in the way of innovation, because some innovations require a significant risk. They may not advance the readiness level. You may spend lots of money to develop a new technology that doesn't work. There are whole businesses that have grown out of people who are willing to take a risk and invest significant dollars in new technology. And so what I wanna point out here is that there's often this question, what is the readiness level of a technology? But sometimes maybe it's better to ask, what technology would you develop if you didn't have to think about readiness levels? And maybe more importantly, what resources do you need to make that happen? I was really fortunate early in my career that there were program managers at the National Science Foundation who were willing to invest in this crazy idea of flying drones into storms. I credit them for giving us the initial funding that then led to some of the advancements, including TORUS, that emerged later on. What I hope is that future scientists and future engineers will have administrators and program managers who are willing to take those risks, because sometimes it requires a big investment of money that doesn't go anywhere. But sometimes the technology that is developed advances science and advances engineering. And with that, I'll take questions. Thanks. Thank you, Adam. You have definitely changed my perception of weathermen, and I think I wanna be a meteorologist. Okay, we have a couple of people stationed, myself and Becky, with microphones for questions from the audience.
But I will get us started with a couple. So one, I have a question about the OSSE experiments that you explained. And that is, if those experiments were to actually suggest that we do need drones in this gap area in very close proximity to each other, would we be able to afford that? I have no idea. It's a really good question, and I don't know how much that would cost, to be honest. I mean, I'm throwing out the term multi-billions of dollars, and that's probably correct. If you look at some of the other platforms that we use on a day-to-day basis, weather radars, the ASOS network, it's millions and billions of dollars. So, yeah, it could be very expensive. Now, there are ways to make it cheaper. That depends on the modality, whether you have this kind of targeted approach where the drones fly into an area and sample and then go home. It's not cheap, but it would be cheaper than perhaps outfitting thousands of drones over the country. But that's kind of the decision-making process that we hope to contribute to. So if you have 15 drones versus 1,500 drones, what is the reduction in forecast accuracy that comes with that? And then the people that are way above my pay grade can say, yeah, it's worth the investment, this is gonna save hundreds of lives, it's gonna improve whatever, and it's worth the investment. Also, maybe you could talk just a little bit about why you started using drones for your research. So, back in 2008, 2007, actually, I had this idea that we could sample these features that were actually part of my dissertation research and research that I wanted to carry on, studying air mass boundaries. Air mass boundaries are boundaries between large chunks of air with different properties, so temperature and moisture. And there's been evidence, and in fact some of the evidence from TORUS suggests, that these boundaries can impact the morphology and the strength of a thunderstorm.
And so I thought, well, we could fly across these boundaries with drones, because you need observations above the surface. You couldn't just drive a mobile mesonet across them at the surface; you need observations aloft. And we could measure the characteristics and see how that affects thunderstorms. And in fact, the very first proposal that I submitted to VORTEX2, which is a project that predated TORUS, was gonna look at these. And the reviewers said, you probably won't see enough of them. You need to focus on something that you're guaranteed to see. And so we pivoted to looking at things within the supercell. The irony, of course, is that one of the things that came out of both VORTEX2 and TORUS is the importance of boundaries in supercells. And so I still think, in the future, I wanna try to do this: take these aircraft and fly across these boundaries and see how they impact the evolution of the storm. Very cool. Do we have questions from the audience? Raise your hands. I wanna see those hands. Thank you for your talk, Dr. Houston. Very interesting stuff. I had a question, I guess a clarification, about the OSSE stuff there at the end. So you're attempting to solve what appears to be, I guess, a spatial sampling optimization problem with the OSSEs, to try to find this happy medium for distribution of sampling at atmospheric levels. Is that correct? To some extent. I'm not sure it's a happy medium, because that happy medium is really about the financial aspect of it. So I mean, I would expect that if we reduced this to one kilometer spacing, 10,000 drones, we'd get an improvement in the forecast. Now, I think maybe what you're saying is that at some point we might see a tailing off of that improvement. So you double the density and you only get a 50% increase. So yeah, to some extent it'd be nice to see where that curve is. But my expectation is, our expectation is that as you continue to increase the density, you're gonna continue to increase the benefit.
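The tailing-off curve at issue in this exchange can be pictured with a toy error model. This is not fitted to any real OSSE results; it just assumes, for illustration, that analysis error decays like one over the square root of the number of observations toward an irreducible floor.

```python
import math

# Hypothetical error model: analysis error decays like 1/sqrt(N) toward an
# irreducible floor (a crude stand-in for the predictability limit).
# The floor and scale values are made up for illustration.
def analysis_error(n_drones, floor=0.2, scale=3.0):
    return floor + scale / math.sqrt(n_drones)

for n in (15, 150, 1500, 15000):
    print(f"{n:6d} drones -> error {analysis_error(n):.3f}")
```

Each tenfold increase in drones buys a smaller error reduction than the last, which is exactly the diminishing-returns shape the questioner is describing, even though the error keeps decreasing.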
Yeah, there'll be diminishing returns of sorts. And then on this point in your research, is there a proposed plan to physically validate something that comes from this optimization solution? Not with this project. Now, one of the ways that we do validate is we compare error characteristics using the conventional observations, which we do have on a daily basis. We compare the error characteristics when we assimilate the synthetic conventional observations to real models assimilating real conventional observations. And so if the error characteristics are similar, then we can say that the model and the overall OSSE system are performing correctly. But that's not really what you're asking. I think what you're saying is, what if we actually put in a bunch of drones and see how it improves the forecast? Not with this project, but definitely in the future. And again, we're probably not gonna be able to scatter 1,000 drones across the country, but we could do it in regional chunks. So we could take, say, all of Kansas and put a bunch of drones out there, fly them, assimilate those data, and see if it makes an impact, at least regionally. The hope is that in the next five or so years, we can pursue that. But part of it is showing that something like that would have an impact, at least theoretically, through the OSSEs. Like a local validation. I don't mean to hog the mic, but I have one more thing. So would you say that currently, ground observations, less than 10 meters AGL, are effectively decoupled from helping observe or forecast anything in the upper atmosphere? I wouldn't say they're decoupled. There's certainly communication between the surface and the upper atmosphere. There's a time lag, of course. I would say that there's certainly a coupling between the surface and this gap area. But sometimes, as I showed some evidence for, they are decoupled.
And it's when they're decoupled, when you cannot say that the surface observations are representing what's happening aloft, that it can be really impactful to the forecast. And the reason I ask this is because I wonder whether there may be something embedded there that's very subtle, that humans can't perceive, but that applications of unsupervised machine learning and deep learning might be able to glean and provide us insights from, maybe new heuristics? Sure, sure, possibly, yeah, possibly. I think even with AI, AI still depends on observations. So, I mean, numerical weather prediction is not AI, but AI is being used increasingly either to complement or in some cases replace numerical weather prediction. But there will never be a time, that I can imagine, that AI can be divorced from the observations. So you have to have the observation network, and probably a pretty dense observation network, because again, there are fundamental limitations on our ability to resolve phenomena, even with complex AI tools; some things can only be resolved, literally, by the observations. Thank you. Other questions? Thank you for your talk. Let's see if I can verbalize this as well as I was trying to write it down. You talked a lot about the financial cost of setting this up if you plan to do it in the future, but I know you briefly mentioned the kind of human cost of having pilots being able to see the aircraft and things like that. Is there potential, if you are able to test this in a system correctly, to use it in the field with, like, crop duster pilots or local pilots, especially in the rural areas, to kind of help with some of that demand? Which would also maybe help with public perception that this is a good thing, because it will protect their own fields and things like that. Yeah, it's a really good point. A few years ago, we put in a proposal, a big proposal, to explore that along with some other things. I used the phrase observations of opportunity; that's a good example.
So commercial aircraft are observations of opportunity. They're not out there collecting data for weather forecasting, but they're out there, so throw a sensor on them and see if it can help. So we thought about this with crop dusters, and, as urban air mobility becomes more common, these essentially air taxis: putting sensors on them. It's possible that it could make an impact. The problem is, number one, some of those aircraft operators don't really want you to know what exactly they're doing. Not necessarily urban air mobility, not necessarily the life flight helicopters, but some of the crop dusters don't really want you to know the kind of stuff they're doing. So positioning is clearly important. The other thing is that the density of those observations, while it would be great as a complement, really isn't gonna be a replacement for really trying to fill some of those gaps. Particularly with urban air mobility, those kinds of platforms, because they're all focused in urban areas. And if you can remember back to that map of observations at 500 meters, basically all of the observations were in urban areas. The gaps were in rural areas. I think that's kind of your point. Can you fill those with some of these other platforms that are operating in rural areas? I think the answer may be yes, but I don't think it's gonna be nearly enough to really fill these gaps. Thank you. And I guess I was also taking it one step further, which I didn't elaborate on. I know some of these require a pilot license to operate the drone. So would they be able to use a drone from your lab as a resource and take it out and operate it, so it wouldn't be on their plane as well? Yeah, yeah, right. I mean, the hope is that the regulatory environment will allow us to do some of these beyond visual line of sight operations, so we don't even have to worry about having a pilot on site. And I think it's possible.
There aren't many people who can do that now, but I think we're getting there. At least the FAA is listening, at least entertaining proposals for that. As always, a great pleasure. Love the research. Thanks for a wonderful talk. I can't remember the name of the guy who said there were some practical limits on prediction, right? Some of the work you're doing makes it seem like you might question some of that. So if we could move to a hypothetical, theoretical world for a second, if you could just strip away all the practicality, in the end, do you think he's wrong? I would never question Ed Lorenz. As a quick aside, I hope someday Ed Lorenz will get a Nobel Prize, because the work that he's done has applicability to physics and, obviously for me, to atmospheric science, but that does not happen. He's earned a number of prizes, but okay. So I don't think that what we're proposing to do challenges that assumption, that there is a fundamental limit of predictability. What I think we're doing is saying that we are nowhere close to that limit. And to get even closer to that limit, you need to essentially increase the number of observations that are out there. But so I don't think we're anywhere close, close to that limit. Is it? Yeah, that's a good question. Yeah, yeah. Yeah, okay. I don't know. It's a, I don't know. And in some ways, the OSSE is a good way to test that, because you can do things that you can't do in reality. Very good question. I don't have an answer. We're gonna bring you the microphone just so our audience online can appreciate the conversation. Do you think that, like, mathematical advancement could be something that would send that limit to zero? I'm not gonna say no, because I don't know. I mean, I would say that what Ed Lorenz did was almost purely mathematical.
So he was using fundamental fluid dynamics to change the scale and show that there was a limit of predictability. For those of you who aren't familiar with who he is, Ed Lorenz is the father of chaos theory. So I'm not gonna say no. I mean, maybe someone a lot smarter than me can do that. But I would approach it probably from the other perspective, and that is, can we create an observational network that eliminates all other errors except for that? And then what really is that limit? Other questions? Yeah, thanks for the talk. In your talk, you mentioned that simulation data and historic data have been utilized for planning those drone flights. So I wonder if it is possible to leverage the real time observation data obtained from one drone across multiple drones, to adjust the trajectories of those drones in real time as they collect the data, steering those trajectories in situ to increase the cost effectiveness of the data collection. Yeah, yeah. Yes. We have a project right now that's hoping to do that. I don't know whether we're gonna get there, but yeah, being able to adapt either flight trajectories or the types of data that are being collected, but probably the former, based on the observations that are being collected, is something that I think is really intriguing. There's a thing that we can do with the OSSEs, I know I'm coming back to that because it is a really cool tool, but there's a thing that we can do with the OSSEs that would allow us to test that. So essentially, we're flying the drones in this simulated environment. Can we use the observations that they're collecting to adapt their positioning to maximize impact? And there are ways that we can actually characterize the uncertainty of the model as a way of informing where they should at least start sampling. And we've done a little bit of this work in my group, but I think there's a lot more that can be done.
So you take the model to determine where there's uncertainty and, more importantly, where the observations might reduce that uncertainty, because those are two different things. And then you have flights that are adapting not only to that, but also, as those data are being assimilated into the model, to the way the model itself is adjusting. So yeah, it's this very, very complex but very intriguing interplay between the real time data that are being collected, the forecast models, and the trajectories that the drones are executing, yeah. So I can imagine this is not only adaptive in space and time; it is adaptive in a high dimensional attribute or latent space. Yeah, I believe that's right, yeah. Other questions? I'm gonna close this out with a question about a new movie coming out this summer. So I understand the new Twisters movie is coming out in July, and the trailer has a drone in it. So do you know anything about this? Yeah, so Twisters is the follow-on to Twister from back in the mid-90s. They're using the RAAVEN, the drone that we used. It's actually not pictured here, but in the picture from TORUS, that aircraft, it's actually that drone. Now, in the trailer it's probably a computer generated version of it, but the University of Colorado, who developed the RAAVEN, actually gave the movie an aircraft. And so I think there's one image in the trailer where some guy grabs it and, like, tosses it over his head. By the way, that's not how we launch the RAAVEN. But when they did that, I was like, does that work? But anyway, so that's actually the real aircraft, but then they have another shot where I think it's computer generated. So yeah, my one claim to fame with Twisters: I actually talked to the director at one point, because he was interested in TORUS, and I talked to the production designer.
And so I had conversations, I was not a paid consultant, but anyway, they were interested from the beginning in integrating drones into this, and it looks like it'll make it into the movie. So I'm excited to see the movie, excited to see how drones are actually used. If the drone is the enemy, that would be disappointing, but we'll see. Super cool, we'll all have to be sure to see it. Please join me in thanking Dr. Houston for an energizing Nebraska lecture.