And with that, I am going to turn this over to Christina, who's going to tell you more about what she does and take your questions. Christina. Hi, all. I'm Christina, and I'm a scientist at NCAR. I'll be talking today about one of the projects I work on. There are several projects I work on, but this one is focused on improving the prediction of severe thunderstorms. So let me get this started here. All right. Before I talk about what we do during this project, I'm going to start off with a little bit of background on me and how I got interested in science. I grew up in Cincinnati, Ohio, in the Midwest, and there's a picture of it here. If you haven't been to Cincinnati, it's known for a famous chili concoction that I have a picture of at the bottom. Growing up, I have an older brother and a younger sister, and they are both in science and engineering fields, as am I. My interest in weather started when I was really little. We had a porch at our house, and when thunderstorms would come through Cincinnati, I would go sit on the porch swing and watch them come in. This drove my parents crazy. But I was particularly interested in seeing the lightning associated with the thunderstorms, and also in the radar images of the thunderstorms coming in and moving across. So with that, I ended up going to the Ohio State University to do my bachelor's degree in atmospheric science. While I was in undergrad, I worked for a television meteorologist, whose picture I put up there, and I essentially helped with some of the graphics for the morning news. From that experience, I learned that I wasn't really interested in forecasting, but I was really interested in the different models and their performance. So I decided to go in a more research-focused direction, and I did my master's at Colorado State University. My thesis was on lightning.
And then I joined NCAR in 2009, in the Research Applications Laboratory. If you're unfamiliar with it, the Research Applications Laboratory does weather for specific applications; aviation weather, with turbulence and thunderstorms, would be an example of what they do. My work mostly focuses on using models and model output. And by model, I mean a weather model that's run on a computer, which gives you a likely scenario for what will happen in the future. These models are run on a grid, like you see here over the globe, and you get data at each of the points where the lines intersect. With the model data, I oftentimes do some work on the output. I may be combining different output fields to see how that improves the forecast, or I'll do some analysis looking at how well the model represented what actually happened, sometimes with statistics, sometimes with just visual analysis. The goal is to get a feel for what the model did well and what it didn't do well, and then apply that back. So this project that I've been a part of for the last couple of years focuses specifically on modeling thunderstorms and severe thunderstorms. With that, why do we want to predict thunderstorms? Why do we want to improve the prediction? If you have ideas, you can enter your answer in the chat on why we might want to predict thunderstorms. I'll give you a second to do that. No responses yet. Maybe no one really knows why we did this. We'll wait a couple more seconds, and then I can give the answer. Oh, save lives and property. Yes. Extreme rain events, risk and hazard mitigation, lightning threat, aviation interests, knowing to hide the pets from loud noises. Very important, yeah. Alerts for power outages, warnings to prevent risk. Yes, lots and lots of contributions. Risk during mountaineering, yes, if you're climbing. That's important, yeah. Knowing where to storm chase. There must be some storm chasers online.
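To make the "sometimes with statistics" part concrete, here is a minimal sketch of the kind of gridpoint verification described above, comparing a model's precipitation field to observations point by point. The grids and values are made up purely for illustration, not actual model output:

```python
import numpy as np

# Toy gridded verification: the model and the observations each give a
# value at every grid point, so we can measure disagreement directly.
# These 4x4 precipitation grids (mm of rain) are illustrative only.
observed = np.array([
    [0.0, 0.0, 1.0, 2.0],
    [0.0, 1.0, 5.0, 3.0],
    [0.0, 2.0, 8.0, 4.0],
    [0.0, 0.0, 2.0, 1.0],
])
forecast = np.array([
    [0.0, 0.0, 2.0, 2.0],
    [0.0, 2.0, 4.0, 3.0],
    [1.0, 2.0, 7.0, 5.0],
    [0.0, 1.0, 2.0, 1.0],
])

# Root-mean-square error: an overall "how far off were we" statistic.
rmse = np.sqrt(np.mean((forecast - observed) ** 2))

# Bias: on average, did the model rain too much or too little?
bias = np.mean(forecast - observed)

print(f"RMSE: {rmse:.2f} mm, bias: {bias:+.2f} mm")
```

Real verification uses much larger grids and more sophisticated scores, but the idea is the same: reduce "how well did the model represent what happened" to numbers that can be compared across models.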
And someone pointed out they're the most challenging events to simulate. All right, yeah. So we want to predict thunderstorms, especially severe thunderstorms, because there are hazards associated with them. A general thunderstorm has thunder and lightning and rain. But when we're talking about more severe thunderstorms, we're talking about additional hazards: flooding rain, which I don't have a picture of up here, but also tornadoes, strong winds not directly associated with a tornado, or even hail. One of the things with these events is you don't really want to be outside when they're happening. So here's a video of a hailstorm that I took from my office back in June of 2019. It almost looks like snow, but it's actually hail. You can hear the noise that the hail is making. This is a case where you wouldn't really want to be outside; you might get injured. And so we give people alerts to take cover for these severe thunderstorms, generally using either watches or warnings. So how do we get a watch or a warning to come out? Generally what happens is you start from a model or an observation of what's happening in the atmosphere currently. A forecaster at a place like the National Weather Service or the Storm Prediction Center takes that information, and then they produce a watch or a warning. So does everybody know what a watch and a warning is? If you don't, you can enter in the chat that you'd like to know. We don't have anyone asking, but we'd definitely like to clarify. Okay, so a watch means that the conditions in your area are favorable for a severe thunderstorm or a tornado or whatever the watch is issued for, whereas a warning means that the event is actually happening and it's time to take shelter.
And so if we can improve the forecasts of severe thunderstorms, we might be able to give people more lead time so that they can take shelter. Now, I don't put out watches and warnings myself, but I work on this model data back here, and on improving that model data so we can hopefully have better forecasts. So there's a project I've been part of for the last three years called the Hazardous Weather Testbed Spring Forecasting Experiment. This past year, it ran for five weeks, April 27th to May 29th. It takes place at the National Weather Center in Norman, Oklahoma, which I have some pictures of here. The purpose of this experiment is to get a whole bunch of different types of scientists together to test our models and see what works and what doesn't. So this is a case where I get to interact directly with forecasters and the people who use our models, and with other scientists who are producing different kinds of models, and we can all get together and discuss what worked and what didn't. Here's an example of what it looks like during the Hazardous Weather Testbed Spring Forecasting Experiment. This room is generally pretty empty on a daily basis, but you can see there are a lot of people who come for the Spring Forecasting Experiment. So what do we do during this experiment? Well, in the morning, we make our forecasts. We use the models and the products that people bring, and we create these forecasts at the bottom, where we're focusing on forecasting specific hazards. You can see on the left I have a forecast for hail, and on the right I have a forecast for wind. These look like a lot of squiggly lines, but what they are is regions where we think there's a higher probability of these events happening; more lines mean we think the event is more likely to happen. And then the afternoon is all about evaluating these forecasts.
So we evaluate the specific hazard forecasts that I spoke about on the previous slide, and we do that in this case using the image at the top, where we overlay the storm reports with how likely we said the event was, and we see how well they match up. We also have surveys where we can evaluate the output from models, so not forecasts that we drew or created ourselves, but how well each model is performing in comparison to the others. And then we get together and we talk about what worked and what didn't. So I have here on this next slide an opportunity for all of you to try analyzing the forecast output yourself. What I'm showing at the top is what actually happened, shown as precipitation. The bottom is output from three different models and what they thought would happen, in terms of a precipitation forecast. One of the most basic questions in analyzing how well a forecast performed is: which one looks the most like the top? So you can enter in the chat box which one you think performed the best, number one, number two, or number three. And just for a little bit of clarity, for people who might not be up on what goes into a model and what output looks like: what are we looking at in these images? What is it that's in the picture? So it's precipitation, like you would see on a radar, where the stronger or darker colors, the reds and the yellows, mean stronger precipitation, and the blues and the greens are lighter areas of precipitation. So these are like giant bands or pockets of thunderstorms; is that what we're seeing? Yeah, so this is like a giant region of thunderstorms. I mean, I guess it's not really giant, but the thunderstorms are where you see the reds and the yellows, and the green is just going to be rain on the outer edge of the thunderstorm. And what does the meteorologist enter into the model?
Is it a wide variety of different things that are happening? Is that what gets put into the model? So what gets put into the model is the conditions that the model starts with: the initial state of the atmosphere, temperature and so on. Then there are a number of different mathematical equations that calculate what we think will happen and where we think the storms will end up forming, and it all comes together to produce this output. Well, I think we've got a bunch of ringers online. I don't know. Most of the answers look like number one. Yes, and I would say the same thing; number one definitely looks the most like the top. And so then another question we can ask, when we're trying to think about improving our models, is: why didn't we pick numbers two and three? What about them wasn't as good as number one? In this case, you might say, well, the area of precipitation is just too big in number two; it's much bigger than what we saw, and it's over Texas, whereas the actual storm was partly in Oklahoma and partly in Texas. And so we can take this feedback and this analysis on what worked and what didn't, and we can go in and try to improve our model forecasts using this information, or improve which models we use. So that was a little example of what we do during the HWT spring experiment. So could we go back to that for a second? There were a couple of questions about it. One is: does the color represent model-simulated reflectivity, or precipitation per hour directly? So in this particular example, it's model-simulated reflectivity, but some models will also simulate precipitation per hour as well; that's just not what I'm displaying here. Okay, and the second question then is: are those limited-area model outputs?
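The "which one looks the most like the top" exercise can also be done numerically rather than by eye. Here is a minimal sketch under stated assumptions: the observed field and the three candidate model fields are randomly generated stand-ins (real evaluations would load actual gridded output), and each candidate is scored against the observations and ranked:

```python
import numpy as np

# Illustrative stand-ins for one observed precipitation field and three
# model forecasts of it; the model names and fields are made up.
rng = np.random.default_rng(0)
observed = rng.random((10, 10))

models = {
    "model_1": observed + rng.normal(0.0, 0.05, (10, 10)),  # small errors
    "model_2": observed + rng.normal(0.3, 0.10, (10, 10)),  # shifted too wet
    "model_3": rng.random((10, 10)),                        # uncorrelated
}

def rmse(forecast, obs):
    """Root-mean-square error between a forecast grid and observations."""
    return float(np.sqrt(np.mean((forecast - obs) ** 2)))

# Score every model against the observed field and rank from best to worst.
scores = {name: rmse(field, observed) for name, field in models.items()}
best = min(scores, key=scores.get)

for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: RMSE = {score:.3f}")
print(f"best match: {best}")
```

This mirrors the subjective exercise: model 1, which differs from the observations only by small noise, scores best, while the "too big, wrong place" and uncorrelated fields score worse.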
In this case, the model was run across the entire United States, but something I didn't talk about is that during HWT, during the spring experiment, we pick a region of interest each day that we want to focus on. And so this is just showing the region of interest. Okay, I think that's all our questions right now. Okay, well, so I have here just on the last slide an image of myself and some of the people who were part of the spring experiment back in 2019. It's a mix of forecasters, researchers, software engineers, that type of thing. And with that, if we have any more questions, I'd be happy to answer them. Oh yes, and we have a question from someone in the notes who wants to know: was that output from WRF? And before you answer that, could you just say a little bit about what WRF is? So WRF is the Weather Research and Forecasting model. It's a model put out by the US, and in this case, no, none of those images are from WRF. One of the things that we do in the spring experiment is use a lot of smaller models. I can't remember exactly which ones I'm showing here, but I know none of them are WRF. And while we're waiting for more questions, could you tell us a little bit more about your work in lightning? So, my master's thesis was on cloud-to-ground lightning polarity, so what type of strike it is, and on its relationship to various environmental conditions. Some of my other research in lightning: I had a project, which has since finished, where we were looking at how to represent the feedback from thunderstorms in a model. We were trying to represent in our model the upward current from a thunderstorm and how it went into the global electric circuit. So that was probably my most recent lightning-specific project. Okay, and we do have a question coming in.
And the question is: I was wondering whether you could provide any link from where I can get archived lightning data. That's a good question, because I don't know if the archived lightning data that I have is available to the public, or if I can just get it as a part of NCAR. Generally the lightning archives come from either WeatherBug or the National Lightning Detection Network. But like I said, I don't know what they have available specifically to the public. If you search either WeatherBug or the National Lightning Detection Network, you'll be able to see what you can get on their webpages. Okay, and we have a question about the experiment. The question is: if one of the forecast models is consistently better during the experiment, how does that information get used by the team? So that will get recorded in our surveys, and in some cases it impacts which models the team will select to weight heavily when we're drawing our hazard predictions. It also impacts which models come back the next year. If a model is consistently underperforming, they might not bring it back, or they might wait until they've made changes to that model before trying it again. I hope that makes sense. Okay, and we have a question. This is a statement and a question. The International Center for Lightning Research and Testing was closed in 2017 due to a lack of funding. Is their work being continued, especially the part where they shot rockets to trigger lightning strikes? Do you know anything about that? So I have not worked with the International Center for Lightning Research, and so I unfortunately don't know anything about whether their work is being continued or what their status is, because I have never been affiliated with them. Okay, and moving right along with another question. Are models being tested like this often in research, or just at the annual experiment? So in research, generally yes.
But the unique thing about the testbed is that we pull together way more models than you're usually working with, in one place and at one time. Normally when I'm testing models, I might have five or six that I'm looking at, and that's already a pretty large number. But during the spring experiment, there are 20 or 30 that we're all looking at together. And then the other unique thing is that the people who are behind those products are all together, so we can get feedback and have discussions directly. That's what's different about it. And I have a question for you. What is it that excites you about your job? What is it that draws you to this type of work? The understanding, and kind of figuring out what went wrong. I find it very interesting to look at different things, notice subtle differences, and say, okay, that didn't work, and then ask, well, why didn't it work? Is there a reason, or do we not know yet? That's what draws me in. Ooh, and we have another question in. Do you evaluate international models, or just American models? So there are both at the spring forecasting experiment. I believe this past year we had one from the ECMWF and one from the UK Met Office. And in some other projects, I also use some global models from various international centers. So both, yes. And we're just curious, wondering how many people here are students on this chat right now today. I think, Tina, you can see the results. Yeah, I have to pull it up. We have a lot of students here joining us today. Looks like one, two, three, four, five, one master's student. And a sixth, a seventh, and a tenth grader as well. So we've got a whole spectrum. Looks like we also have, what, a fifth and a seventh grader?
Any advice, Tina, for people on this talk who are probably interested in weather? Do you have any advice for someone who's in fifth grade or sixth, seventh, eighth, ninth, or even high school? Advice on how to get into weather, or some other kind of advice. How to get into weather and doing it. So it's very heavy on math and science; depending on what type of weather you go into, generally physics and math, and if you're interested in chemistry, then maybe some atmospheric chemistry. So focus a lot on those subjects and on getting really good at them. And depending on where you're located, I don't know about all countries, but in the United States, sometimes you can get tours of places like the National Weather Service or a TV station, to interact with more people in the field and see what they do. Those would be things you could do to move forward. Well, that's wonderful. And it looks like we have someone who might be a distant colleague of yours; he's a PhD student working on severe thunderstorms over the Canadian prairies. Interesting. There we go, yeah. Oh, and we do have another question. How are charge potentials of storms currently being measured? So for the most part, they're not, because they require very high-resolution electrical data, which is found in only a couple of locations in the United States. In those locations, and there's one in New Mexico, I believe, and one on the east coast, I can't remember if it's DC or a neighboring area, it's basically a time-of-arrival sensor network where they can get a 3D mapping of the charge in a thunderstorm. And so that's the best we have, but it's most certainly not every thunderstorm, or even every region. Excellent. We are a little bit over time, and it's been really fun to explore the job of Christina Kyle today.
Thanks for telling us more about your work, and thanks to everyone else for joining us. We are doing Meet the Experts sessions every other Thursday, so hopefully you'll join us again. That makes the next session October 1st at 10 a.m. mountain time, and it'll be a great follow-up to this one, focused on improving hurricane forecasts. We'll post the link in the chat to our Meet the Experts webpage, which has more information about upcoming sessions and links to recordings of our past sessions. So if you'd like more information, we will put that right into the chat and send it out to everyone. And there we go; hopefully we will see you next time. Thanks, everybody.