And then panel members will have the opportunity for follow-up discussion. However, because of time limitations, the panel and presenters should not be expected to entertain questions from members of the public. Anyone who wishes to submit written comments or other materials relevant to our charge should contact Ray Wassell, the responsible staff officer for this study. Before we begin the presentations, I'd like to ask the panel members to introduce themselves to the audience and indicate their affiliations. I will note in starting that I am Ted Russell and I am from the Georgia Institute of Technology. And I'm going to call on the other members of the panel alphabetically. And so if you'd introduce yourself with your affiliations, that would be great. So first, Newsha. Hi, this is Newsha Ajami from Stanford University. Roya. Hi everybody, Roya Bahreini from the University of California, Riverside. Pratim, Pratim, are you on mute? Yeah, yeah, hi, Pratim Biswas. I'm from Washington University in St. Louis. Valerie. Valerie Eviner from the University of California at Davis. Greg. Greg Okin from the University of California at Los Angeles. Scott. Oh, Scott Tyler. Scott Tyler from the University of Nevada, Reno. Scott Van Pelt. Scott, are you on mute? Do we have him on yet? We may not have him on yet. And Venky, I know you're on. Akula Venkatram from the University of California, Riverside. So Scott, we see your name on there. I just saw a message that I was unmuted by the host. Oh, very good. So I'm Scott Van Pelt, USDA Agricultural Research Service. Very good, thank you all. Our first speaker today is Ken Richmond. He's a senior managing consultant with Ramboll, and he led the air quality modeling that was done for the 2016 Owens Valley PM10 State Implementation Plan. So with that, I'm going to turn it over to Ken, and I trust you all can see his slide. Okay, is my screen up for everybody? Yes, and just as a comment, Ken, you've got 30 minutes. Good luck.
Well, I'll do try to stay to that. And then we've got 15 minutes for Q&A. Okay, my name's Ken Richmond. I work for Ramboll. My colleagues and I have been doing modeling for the district since the early 1990s. My presentation will provide a little bit of an overview of some of the previous studies, but mainly focus on the 2016 modeling approach, and we'll leave some time for questions at the end. I have 40-something slides, so I'll be going pretty fast, and I won't be talking about everything that's on every slide, but they can serve as reference material as well as an outline for the presentation. So first we'll start off with a little history, at least as far back as I've been involved, since the early 90s. And this is an overview of the area, where it's located in California. And the immediate domain around the lake is the domain that is used in most of the simulations that we've performed. In the early modeling, starting in 1995, the emission algorithm was based on wind tunnel measurements. It's a small portable wind tunnel, and measurements were done on many different areas on the lake in many different seasons. Sometimes we used episode-specific emissions, because the wind tunnel was running either just before the episode or during the episode. The model being used was one that was used for new source review in those times, a Gaussian model called ISC. And the region, even though it's 30 by 50 kilometers in size, we divided the lake up into regions, and each region had a meteorological station and sources in it, and all the regions were modeled separately. The first SIP that we did was in 1998; we did attainment modeling for it, and again it was the ISC model. We had three large regions. The regions were emitting uniformly; in other words, there were no intermittent sources. The sources were all emitting according to a wind speed algorithm that was based on the wind tunnel measurements.
But the fit of wind speed versus PM10 was a fit of the average; it wasn't a fit of the maximum emissions. There were two years of meteorology that were used. And following that initial SIP effort, the ARB and the EPA and DWP made some recommendations for how we could improve the SIP modeling. One of the things was, instead of using these large uniformly emitting areas, use sand motion measurements on the lake bed as a surrogate for emissions; that would help to better temporally and spatially resolve the emissions. And if you're going to be modeling a large domain like that, instead of dividing it up, use a model that can have 3D meteorology and can represent spatial variations and changes in time. The one we selected, and have used ever since, is the CALPUFF modeling system. And this is a picture of the concept that drives the emissions, or what's assumed: the PM10 emissions are driven by saltating sand, or sandblasting. We measure that with a Sensit and a Cox Sand Catcher, and that's the surrogate for our PM10 emission rate, with a unit like this within each of our emitting areas. And here's the basic algorithm, with journal articles that you could look up; if you don't have them, I could send them to you. But basically, the PM10 flux is equal to some constant times the sand flux, the horizontal sand flux as measured by the Sensit and Cox Sand Catcher. And then a large part of the analysis is determining what this K factor might be. This is a typical sand flux monitoring site, with the Cox Sand Catcher, and this is the Sensit. And all the data is telemetered in real time now, back to Keeler or anywhere. 2003 was the first SIP that we did that used this system. We used a different meteorological period. Again, we used CALPUFF. We divided the emitting areas up into 51 one-kilometer squares, each one-kilometer square with a Sensit in it.
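The emission relationship just described, PM10 flux proportional to the measured horizontal sand flux, can be sketched in a few lines. This is a minimal illustration; the units and numbers are arbitrary, not the district's fitted values:

```python
def pm10_flux(sand_flux, k_factor):
    """PM10 vertical flux as a constant (K) times the horizontal sand flux
    measured by the Sensit / Cox Sand Catcher. K is the empirically inferred
    proportionality constant, which varies by season and source area."""
    return k_factor * sand_flux

# Illustrative only: a sand flux of 5 (arbitrary units) with K = 2 gives a
# PM10 flux of 10 in the same arbitrary units.
print(pm10_flux(5.0, 2.0))  # 10.0
```

The whole emission model collapses into this one proportionality, which is why so much of the analysis effort goes into pinning down K.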
And we assumed that that Sensit was representative of the one-kilometer square. We did have some upper-air measurements with a radar wind profiler at two different locations. We inferred what that K factor constant was by comparing model predictions to observations. And we saw that there were some seasonal and spatial patterns that developed, so we assigned K factors by season and by source area. And we used those and tested model performance. It was pretty good, so that's what we went with in the SIP. Attainment is judged on the historical shoreline at 3,600 feet; that's where we placed receptors. And then we just started cranking in controls until we showed compliance. We used a background of 20, so only the on-lake sources were modeled and the background was 20. And this shows you the source configuration, showing the larger areas on the lake where the TEOMs and met sites were. And this is just a depiction of how the sources are divided up and simulated by CALPUFF. 2008 came after that; we did not attain the standard, so we had to do this again. More areas became erosive than were simulated in the 2003 SIP. And again, not all the controls were in place, so we weren't showing attainment. So we did another attainment modeling analysis. This again used CALPUFF, but used a five-year meteorological data set. It used assumptions about the upper-level winds based on our analysis of the radar wind profiler data, which was discontinued in 2004. Again, we had spatially variable K factors that varied by season. Instead of square areas, for the 2008 SIP we had irregular source areas that were based on GPS surveys or remote cameras, so they were meant to represent the actual outline of the source area. And in CALPUFF, those irregular source areas were divided up into squares and modeled as square area sources. And there were four different configurations used during the five-year period.
Again, we used a background of 20, so only on-lake sources and a background of 20 were used to assess compliance. The Keeler Dunes were not included in the simulations for the 2008 SIP. Again, compliance was assessed by comparing predictions at the 3,600-foot shoreline. The areas that were already controlled were assumed to be excluded; they were assumed to be 100% controlled. And the new areas, depending on what kind of control measure was being projected for them, might have ranged from 99% for the BACM areas to something less than that for some of the other areas. And this just shows you, in 2008, what the network of measurements was for met and the TEOMs. This shows you, early in the five years, what the configuration of the Sensits looked like, still more or less conforming to the one-kilometer grid. And then this shows you, later in the period, how we were starting to get away from that: we were assigning Sensits and putting them in places where the activity was occurring, and removing them from areas that were covered by water. Since the 2008 SIP, there have been a bunch of analyses called supplemental control requirement determinations. They're done roughly every year, except for the first period. And this was basically providing the district with modeling that looked at, you know, redoing the SIP modeling but looking at new sources, and providing lists of areas that might need further control. Again, source area delineations and sand motion and met data refer to specific periods. Each one had a model performance test. We started collecting five-minute sand motion and met data, whereas previously we had hourly data. And we did this a number of times, and each time we produced a candidate list of areas that the modeling at least suggested might need controls. Then the district used all kinds of other information that they were collecting to put together an order for the DWP to clean up. I seem to have lost my cursor here.
In addition to the SIP modeling, and we're still continuing to do this control strategy development about every year, we do an analysis of on-lake and off-lake days that are greater than 150 micrograms per cubic meter, and make an assessment of how much of the overall concentration was caused by wind directions from our network and how much was caused by wind directions from mainly off-lake sources. And we provide the district with source impact matrices that show the contributions of all the little sources to all the receptor concentrations. Those can then be scaled to test different control strategies any way the district wants. We provide lists that help prioritize some of the larger contributors. A stand-alone source is one that, by itself, excluding all other sources, contributes greater than 130 at the shoreline. So the list of those sources is the first thing the district might look at and consider for further controls. Okay, so now let's get into the 2016 SIP modeling. It basically followed the techniques that we were using previously. There's a five-year period from June 2009 to July 2014. We transitioned over to using five-minute meteorological data and sand motion data. We noticed that when we were using hourly data before and we did animations of predicted concentrations, it was very sporadic; it looked like a shotgun spraying in different directions, and it missed a lot of the chaos that you see when you look at the videos. Whereas the five-minute data really makes things more chaotic, and it seemed to make the simulations much more realistic, at least when compared to what the webcams are showing. Since we used the five-minute data, we needed to update our version of CALPUFF to a later version. Instead of four general areas on the lake, we used seven general source areas. Each one had a different K factor for the different seasons.
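The source impact matrices just described lend themselves to simple linear scaling: run the dispersion model once, then rescale each source's column by its surviving (uncontrolled) fraction. This sketch uses made-up numbers and source/receptor counts purely for illustration:

```python
import numpy as np

# Hypothetical impact matrix from the dispersion model: rows are receptors
# (TEOMs), columns are source areas; entries are contributions in ug/m3.
impact = np.array([[80.0, 40.0, 10.0],
                   [20.0, 90.0, 30.0]])

# Trial control efficiencies per source area (0 = uncontrolled, 0.99 = 99%).
control = np.array([0.99, 0.0, 0.50])

# Receptor totals under the trial strategy: scale each source's column by
# its surviving fraction and sum across sources.
scaled = impact @ (1.0 - control)
print(scaled)  # one scaled total per receptor
```

Because the model is linear in emissions, the district can test any control strategy this way without rerunning the dispersion simulations.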
Prior to submitting the final approach used in the SIP, we tried to simulate off-lake areas within two kilometers of the shoreline by using wind speed as a surrogate for the sand motion. Many of the sand motion sites are well predicted using wind speed with a model like the Gillette model, which basically ends up as a cubic relationship with the friction velocity. But when we did that, the model tended not to perform as well; in fact, it performed less satisfactorily than just using a constant background. So time was running out, and, based on EPA's recommendations, we switched modeling approaches. And we went to more of a hybrid model that's kind of trending towards rollback modeling, if you're familiar with that. So we went monitor-centric. We only assess compliance at the shoreline monitors, so not receptors all along the shoreline, just the TEOMs themselves, similar to what you would do in rollback modeling, and only on those days that exceeded the standard. Each one of those days was divided, using wind direction, into periods that were from our network of Sensits and those that were out of the network of Sensits. When the winds were from our network, the contribution is based on our CALPUFF simulations. When the winds were out of the network, it was based on the actual monitored value. So when I say hybrid, we're actually using some of the measured hours as background, and then some of the other hours, mostly hours from the larger episodes, are from CALPUFF. And then for attainment, we use the scaled CALPUFF in the future with different controls applied, and we add it back to the monitoring data. And we'll talk a little bit about how the monitoring data, or the background, is adjusted for future conditions as well. So the domain is the same as before, about 30 by 50 kilometers, a one-kilometer mesh for the met data, 10 levels. Like I said, we had to use a later version of the meteorological model.
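The Gillette-type wind-speed surrogate mentioned above is often written as q = C · u*(u*² − u*t²), which is roughly cubic in friction velocity well above the threshold. This is a generic sketch of that form with an illustrative constant, not the site-fitted relationships Ken's team tried:

```python
def sand_flux_gillette(u_star, u_star_t, c=1.0):
    """Horizontal sand flux from friction velocity u* (m/s) with a threshold
    friction velocity u*t: no saltation below threshold, and flux growing
    roughly as the cube of u* well above it. C is a site-tuned constant."""
    if u_star <= u_star_t:
        return 0.0
    return c * u_star * (u_star**2 - u_star_t**2)

print(sand_flux_gillette(0.2, 0.3))  # below threshold: 0.0
print(sand_flux_gillette(0.6, 0.3))  # roughly 0.162
```

As noted in the talk, predictions from this kind of wind-speed surrogate underperformed the measured sand motion as an emissions driver for the unmonitored off-lake areas.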
We're using pretty old land use; it's not reflective of the current land use, but the way we're calculating stability is not really sensitive to that. And again, we're using surface winds, temperature, relative humidity, and all the meteorological surface information from the district's network, plus any DWP sites or any sites we can get hold of on the lake for the different periods. Cloud cover is not measured on the lake. It's not that important a variable, because things are neutral, but it is a requirement that the model have it. So we grab that from Bishop or China Lake, using China Lake preferably, but when it's missing, we use Bishop. Upper-air profiles are needed for stability estimates in the upper atmosphere. Depending on the year, we use twice-daily soundings from different locations; currently, Las Vegas is used for all the simulations. For upper-level winds, we do not use the winds from the twice-daily soundings. We basically just use the surface winds and assume they are parallel aloft, and that's based on the fits to the radar wind profiler observations that we saw. During a high-wind episode at the lake, there's almost no turning of wind direction with height. In fact, there's very little increase of wind speed with height once you get above about 10 meters, because the lake is so smooth. During light winds, exactly the opposite occurs, but we're not interested in light-wind conditions. So I think the parallel-with-height assumption is not too bad. Maybe in the future you might switch to WRF modeling or something like that to fill in. But the radar wind profiler was a very expensive instrument to run, and we only had one of them at a time, so we went away from that. So this shows you the Dust ID network as it started for our five-year period. And the picture on the right is how it ended up.
Showing you where all the different sensors are; light gray shaded areas are the areas that were emitting during that period. And it shows you where the TEOM and the met sites were. It shows you the model domain, the terrain in the model domain, and a typical wind speed field from CALMET. CALPUFF: again, we used a version that could do five-minute data. We still ran the hourly versions for QA. Pasquill-Gifford dispersion curves, and my apologies to Venky. And we use partial plume path terrain adjustment for adjusting the winds around terrain based on the stability. We did consider mass depletion, using a particle size distribution from an old study that was conducted on the lake. The five-year period, and the five-year modeling simulation, was divided up into 13 periods. Each period had a different source configuration. The source areas are irregular shapes. The shapes are determined by quite a few different techniques, depending on how things evolved over time. And we also divided them up based on who owned them and what controls might be placed on them in the future, or were planned, so that when we were applying controls in the future, we could tease out how much was on one particular kind of land versus another. Each large source area is assigned a Sensit sand monitoring site that drives the emissions. Then you divide them up into little squares; there are up to 10,000 small squares near the shoreline. Some of the squares were 50 meters by 50 meters, and the others were 100 meters by 100 meters. And this will show you, again, here's an example of the network, and here's an example of how it's represented in the model, where the Keeler Dunes, some of the source areas, and some of the land divisions result in quite small areas. So those are divided up into 50-by-50-meter squares up here near Lizard Tail. It's real close to the shoreline, so again, we wanted a little bit better resolution up there.
That just gives you kind of an idea. These simulations take a lot of time, so we run them on our cluster, and we run each source area in parallel and sum them all up. The simulations use a constant K factor, and then we go in later and scale everything. So we only run the model once, and after that everything is done through post-processing. With these modeling analyses, we always do the K factor analysis. It tries to tease out, from the model residuals, whether there is a K factor difference between the general source areas and periods of the year. And we require, when we do this, that we have at least nine samples; if there aren't nine, then we use seasonal default values that were determined from previous analyses. The screening attempts to isolate a source and a receptor. So we try to make sure that, when we're looking at the TEOM concentration, we're pretty sure it comes from a particular source, and that's how we tag a source with a K factor that we calculate. And once we go through, we scale everything, and we use the scaled concentrations for model performance and, of course, the attainment demonstration. This shows you what the general source areas originally were. So this is what we started off with: there were four general areas, and each of these areas had a K factor that varied by season. In the 2016 SIP, we had seven different regions, so there's a little more distinction than before. This shows you an example of the K factors; these are the ones that were used for the last SIP. You can see they vary by about an order of magnitude. Typically the sandy areas, like the Keeler Dunes and those types of areas, have K factors on the order of one or two. The lake bed sources tend to be a bit higher, and they tend to be a bit higher in the December through April period. Sometimes when there's a more crusty or friable surface, there tends to be more PM10 generated per unit sand motion.
So that's why they're a little bit higher, but that's not always true. Model performance. Just some general things we noticed from model performance in the previous modeling. The unpaired-in-time model performance, assessed through Q-Q plots, is pretty good; we're pretty good at coming up with the distribution of the observed concentrations. We're not very good at explaining all the temporal variability in the observations. In other words, paired in time, we don't do nearly as well as we do unpaired in time. The bigger the event, and the larger the source area, the better we do. And we also do much better if the receptors are close to the edge of the source, so that if our trajectories are wrong, it has less influence on the model-predicted concentration. We switched to five-minute time steps because the simulations look more realistic, but it only resulted in slightly better model performance. Same with trying to tease out more spatial variability in the K factors: going to seven areas and updating the K factors tended to result in slightly better model performance, but not a lot. And overall, just my opinion, the delineation of the source areas and the sand motion assumptions are way more important than the K factors or the dispersion model. It's how you initially characterize the sources, what their outlines are, and what Sensit sand motion you assign to each area; those tend to drive the results more than the dispersion model and more than our assumptions about the PM10-to-sand-motion constants. We'll talk a little bit about the hybrid model performance. For the paired data, we separate out the exceedances greater than 150 into an in-network and an out-of-network component. The CALPUFF prediction becomes the in-network component, based on the wind direction at the TEOM. Remember, we're only looking at the TEOM concentrations, and the TEOMs each have a meteorological station right next to them.
And then the out-of-network concentration is the actual observed concentration for that hour. Some of the statistical measures we always use: Q-Q plots, log-log scatter diagrams, geometric correlation coefficients. We've done geometric mean, geometric scatter, all kinds of different measures. Ken, this is Ted. Just to let you know, you have only about five minutes left. Okay, I'll try to be quick. Thanks a lot. You've frazzled me. These are the exceedance days that we looked at. You can see what the maximum and design concentrations were. And you can see that the out-of-network component in some instances is significant. Typically it's about 18 micrograms per cubic meter, but there were 53 times when just the out-of-network component would have exceeded the standard. This shows you a log-log scatter diagram of the modeled portion, a Q-Q plot of the modeled portion, and this is the combined model performance result. We explain about 70% within a factor of two; the geometric correlation coefficients are not that great. But the frequency distribution, especially for the larger episodes, is pretty good. This is the combined prediction, observed plus predicted, for the 24-hour values. Again, this is a log-log diagram; it's okay. The Q-Q plot is okay. We're only looking above 150, because we're trimming it at 150. Whereas model predictions can be anything, the observations, by their nature, have to be at least greater than 150. And when you look at the paired statistics, it's quite a bit better; of course, for some of these you're actually comparing observation to observation, so of course it's better. So in actually applying the hybrid model, we only look at days exceeding 150, and only at the monitor locations. We scale future years by assumed control efficiencies for the 13 different source areas. Then, after we scale them, we combine the model with the background and calculate a new design concentration.
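The future-year step just described, scaling the in-network modeled contributions by assumed control efficiencies and then adding the background back in, can be sketched as follows. The source names, efficiencies, and background value here are hypothetical, not the SIP's figures:

```python
def future_design_value(modeled_by_source, control_eff, background):
    """Scale each source area's modeled (in-network) contribution by its
    surviving fraction under a future control scenario, then add the
    out-of-network background to get the future concentration at a monitor."""
    scaled = sum(conc * (1.0 - control_eff[src])
                 for src, conc in modeled_by_source.items())
    return scaled + background

# Hypothetical exceedance day at one TEOM: two source areas plus background.
conc = future_design_value({"area_A": 200.0, "area_B": 50.0},
                           {"area_A": 0.99, "area_B": 0.50},
                           background=20.0)
print(round(conc, 6))  # 200 at 1% remaining + 50 at 50% remaining + 20
```

The same calculation, repeated over every exceedance day, yields the new design concentration for each future year.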
One thing we do is we roll back, or adjust, the off-lake sources, assuming that most of them are secondary sources formed by deposition from on-lake sources. So over time, we would expect their contribution to go down. That seems to be the case for the Keeler Dunes, and the district performed an analysis at Dirty Socks, in the southern end of the lake, that showed that when the winds were from the south, the off-lake influence was tending to go down as the on-lake sources near Dirty Socks were controlled. But this is a big assumption in the attainment demonstration. This shows you the control efficiencies by year for the different areas. These are the assumptions in our SIP. These were not met. This shows you, ultimately, what the control on the lake looked like. Then this shows you the time schedule for when things were controlled. If you apply all that, this is what's in the SIP as the path to attainment. It shows we would attain the standard about the end of 2017. Lizard Tail is the one at the north. It had the highest concentrations to begin with, and also the controls were implemented there last. This just shows that graphically. So why didn't we attain the standard? That's what the model said would happen. Part of it was that we didn't expect we would show attainment. The EPA required that we show attainment, or they wouldn't accept the SIP. So we designed a modeling approach that would try to satisfy their minimum requirements, by only looking at the monitored exceedances, only looking at the TEOM locations and not everywhere, and then using this rollback of the off-lake sources with a half-life of about three years. So after 2015, three years later, the off-lake part that was influencing the TEOMs would be reduced by about a half. So that's quite a big assumption, and three years is probably too long for that decay.
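The off-lake rollback assumption described above is just exponential decay with a fixed half-life. A minimal sketch, with an illustrative starting background rather than any measured value:

```python
def rolled_back_background(bg_2015, years_after_2015, half_life_years=3.0):
    """Off-lake (out-of-network) contribution, assumed to decay with a fixed
    half-life as the on-lake sources feeding it by deposition are controlled.
    The 3-year half-life is the SIP's assumption."""
    return bg_2015 * 0.5 ** (years_after_2015 / half_life_years)

# An illustrative 40 ug/m3 off-lake contribution in 2015 would be assumed
# to drop to 20 by 2018 and 10 by 2021.
print(rolled_back_background(40.0, 3.0))  # 20.0
print(rolled_back_background(40.0, 6.0))  # 10.0
```

As the talk notes, this decay rate, and the premise that the off-lake material is mostly redeposited on-lake dust at all, are among the larger untested assumptions in the attainment demonstration.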
Also, probably some of the off-lake areas influencing the TEOMs are not deposition from previous on-lake sources. They could be lots of different things; they could be flash flood deposits or whatever. And most importantly, the on-lake controls that we assumed were not finished when we assumed they would be finished. For example, the Keeler Dunes we assumed would be 95% controlled by the end of 2015. And not all ordered areas that were controlled ended up having 99% control implemented when we assumed. And there were issues with culturally sensitive areas and ownership issues and lots of different things that came into it, that delayed the implementation and just delayed things for more than what was assumed in the SIP. So that's where we are now. Sorry, that was so fast. Oh, no, thank you. And it's right on time. So very good. So with that, what I want to do is open it up for questions from the panel. And we're going to not go with the hand-raising or whatever in Zoom. So if you've got a question, speak up and we will go from there, and identify yourself first. This is Greg. And I'm sorry, I'm going to go first because I missed something maybe at the very beginning. So you spent most of your time talking about tuning these K factors; you spent very little time talking about how you estimated Q, the horizontal flux. And I know there were Sensits that were measured. So a couple of questions. One is, were you using the Sensits as estimates of the amount of transport, or were you using them only as an indicator of whether there was transport, yes or no? Or were you using those as indicators of the amount of flux? Question number one. And question number two, how were those used with wind speed in a model context to estimate the horizontal flux Q? Okay. For the first one, in the model we use the actual sand motion. That's an input; that's the surrogate for the emission rate.
So the Cox Sand Catcher has a tube at 15 centimeters that measures the total sand motion, and then the Sensit resolves that total sand motion over five-minute periods. So the Sensit is what time-resolves the catch that's in the sand catcher, and that's the surrogate for the PM10 emission rate. And the constant K is the proportionality constant. So you were using the Sensit to actually estimate the amount of flux, not just whether it exists. Right. Okay. When we attempted to model the off-lake areas that didn't have Sensits, we took a look at their physical characteristics and said, oh, you know, that area there kind of looks like this area over here that does have a Sensit. And if we fit a wind speed or u-star relationship to the sand motion of the area that does have a Sensit, we could apply that to the other area that doesn't have one by using wind speed. So we have done that, but in general the actual sand motion is the surrogate that drives all the emissions. So I'd be curious to know how well the actual captured measurements of flux compared with your Sensit measures of flux. And I know they have totally different time resolution, but do they match? We use the Cox Sand Catcher as the total amount; say we collect that over a month, so that's so many grams per month. And then the Sensit has a signal that goes over the month, and we use the Sensit to time-resolve the monthly sand catch. Gotcha. No, that's great. Perfect. That was the answer to my question. And this is Venky here. I have a question. The last slide, where you explained why, when you predicted attainment in 2017, you didn't attain it, and you had a list of possible reasons that could have happened. Did you go back and remodel it and account for this, to see whether these speculations were indeed true? No, we have not tried to do another attainment demonstration.
We've been doing modeling since the SIP, looking at areas that have come up since then, and we've redone, you know, the source contributions, looking at areas that might need control, to see if the district might want to ask for more control in certain areas. But we have not adjusted all the assumptions here and redone the attainment demonstration. The reason I asked the question is, in a sense you have the facts, because this is 2019 and you predicted attainment in 2017. Wouldn't it be reasonable to go back and plug in, or at least correct, all the mistakes, and indeed show that if you had all these inputs, you would have predicted the correct concentration in 2017? That sounds reasonable to me. We just haven't done that. I guess we're waiting for the next SIP. The reason I asked the question is, then how do we believe that you can predict the future when you haven't even predicted the past? Well, we have predicted the past. We've simulated what was observed during a five-year period. But you're saying that we could go back and redo this and see how we did. Well, first of all, we won't have the same meteorology, but yeah, I think it's a worthwhile analysis to try to revise these assumptions and see if the results are more in line with what actually occurred. Correct. Yeah. To me it would seem that would be natural, because obviously model predictions and observations don't compare perfectly, but you had a whole list of things that could have contributed to the predictions being off in 2017. For example, the level of control you said was supposed to be 95%, but I suppose you do know what it actually was at the Keeler Dunes right now.
And you had assumptions about lake secondary sources and things like that, which you could in principle account for, and show that, if you had these facts in hindsight, you could have predicted what happened in 2017. Yes, we could do that. And just along those lines, this is Ted Russell. I know between 2016 and 2017 is when you get a tremendous amount of control simulated. The peak levels go from 1684 to 142. And maybe this is again this list of reasons, but is there some specific area that is contributing in this case? Well, the Keeler Dunes are a contributor. In fact, that shows up currently; when we do the simulations, there are a few areas that pop up, but the Keeler Dunes are the ones that tend to have the most influence of the modeled sources. But really what you could say is happening between 2016 and 2017 is that you have controls on the Keeler Dunes simulated, but those controls did not actually occur. Right. And so, just from your observation, having done it, would the primary cause right now be the lack of control on the Keeler Dunes? Well, other areas as well. I mean, the district can speak more to your question about why the control areas did not meet 99% and what caused the delay. But the Keeler Dunes aren't the only ones that didn't meet the controls that were assumed. There are other areas where land ownership issues delayed implementation of some of the controls, and some of the controls contain culturally sensitive areas. And maybe I'm just looking for your guidance on this: from the calculations you've done so far, you would have an idea of how much is being contributed by each of the source areas. We have that, yes. In fact, we have it for the actual periods that were modeled. Right.
And so after the SIP, we've still been modeling using the met data for those periods, not just the five years in the SIP. We've done this modeling analysis about every year, where we redo it and I give the district a list of source contributions. And then the district takes a look: if this one were controlled the way it's supposed to be controlled, then everything would be okay. We give them source contributions for periods after the SIP so they can see whether, you know, things would have been all right if controls had been implemented on time. Usually it's, "we're going to control this source in about a year, so we're not going to worry about the source right here, but this one is not in our network of future controls; maybe we ought to consider putting this one in the network as well." But yes, we give them source contributions for each year, so you can test different controls to see what would be needed to attain the standard for that year. This is Nusha. Just full disclosure, I'm not an atmospheric scientist, so my question might be a little off. But have you done a test to see how your model performs depending on the year? If you have a dry year or a wet year, does it respond differently during those years? Or do you not have climatic or hydrological parameters in your modeling? Not really. Like I tried to mention: the larger and more continuous the activity, the better the model does; the more intermittent and smaller the sources, the poorer we do. If there's some soil-moisture trend tied to that, I'm not sure. Usually, counterintuitively, things are more active after it's rained and you've had a cold winter; the next spring things tend to be really active. Whereas in the summer, you know, it's dry, temperatures have gone up, the crust sets in, and there's no activity.
I think the soil moisture and things like that mostly affect the emissions, and unfortunately we've never been able to predict that, other than in very general terms. We can't simulate soil moisture, or have not been able to. With the K factors, it would be ideal if we didn't have to infer them from the measurements — if we could predict that constant based on some soil-moisture or chemistry model of the soil, or of the crust, or something like that. But we've never been able to do that. And sorry — why is that? Is it because of limited data availability, or because that's not part of your mandate to do that kind of modeling, or because you don't have partners to do it? I'm just trying to clarify why that has not been incorporated. I guess we've kind of thrown it in the too-hard basket. I think somebody tried it in the 1990s — tried to come up with a hydro-geochemistry model that simulated more of what was happening during crust formation and things like that. But it's something we haven't been able to do, and I don't know of anything out there that is applicable. Certainly if somebody could come up with something like that, it would be useful to test whether we could use it instead of the more empirical model we're using now. This is Dave Allen. If I can follow up on the off-lake sources that you have up on the screen right now: you cite two different possibilities. One is the longer decay time scale of deposition from on-lake sources, and then second, potentially other off-lake sources. Do you have a sense of which of these is likely the stronger effect for the off-lake sources?
Or if we don't have a sense of which is more important, is there a way of trying to get at that question by either data collection or modeling? Yeah, I think that the time scale is definitely too short. And there are definitely other sources around the lake that may not be the result of deposition from on-lake sources, but they tend to be more sporadic, if you like. After a flash-flood event, there might be a deposit that causes wind erosion, but two years after that, it's no longer there. It's not something that will still be there 20 years in the future. It's almost a natural event versus an anthropogenic event. So I guess we would consider the on-lake sources anthropogenic, and the secondary sources formed by deposition from on-lake sources — I guess you'd consider those anthropogenic. So the SIP is meant to consider anthropogenic sources. When we've modeled the background before, or attempted to model it, we never show attainment, because close to any fugitive dust source, natural or man-made, you get high concentrations. So I think the focus of the SIP modeling was, as a first step: let's go after the sources that are gigantic, that cause concentrations of 60,000 micrograms per cubic meter, and let's set aside the ones that cause concentrations on the order of the standard. But now we're at the point where those large sources are gone, and these off-lake sources are becoming more and more influential. And at some point I think we should probably take a stab at actually modeling the off-lake sources, all of them. It's just that the undertaking is pretty big. Yeah, just to comment along those lines, Ken — this is Scott Van Pelt — we see exceedances in areas where there are no dry lake beds or agricultural operations and all that. So I think we're seeing the effect of some of those off-lake source areas.
Yeah, when there's a regional event coming from the north, some of the sources are, you know, 50 to 100 kilometers away, and you just see a dust cloud coming down the valley. So it's pretty hard to put that in the SIP modeling unless you try to model a gigantic region. So we tend to consider those, quote, exceptional events — natural events — and exclude them from consideration in the modeling. This is Scott Tyler. A question I have on bullet point three again: what does the topography or the terrain or the surface roughness look like in these areas where you're saying the time scale for removal of material deposited from on-lake sources is too short? If you look at the topography of the lake, right next to where the road is, or the historical shoreline, there's kind of a bench that goes around the lake. And the surface roughness length on that bench is about 0.1 centimeters, so that's about 10 times rougher than the on-lake sources. Of course, once you get multiple kilometers off-lake, you're into the Sierras on the west side, and the White Mountains and the Coso Mountains on the other side, and you're starting to get large-scale vegetation, and then surface roughness goes way up. Yeah, but I guess my question is: these sources under bullet point three — these are un-vegetated areas still? These are not scrub brush or anything like that? They have sporadic brush — it's not like a playa, but the density of the vegetation is pretty sparse in a lot of the areas. Okay. All right. And we'll see more next week. One last question, because we're going to have to move on. This is Venky. One last question, Ken: if the level of control was close to 99 percent, wouldn't rollback modeling be better than CALPUFF modeling? Yeah.
Yeah, I mean, that's right, but you have to figure out what source to apply the 99 percent to. Yeah, but let's assume that wherever and whenever you had control, it was close to perfect — of course, that's an assumption. Then all you have to do is basically rollback modeling, making it proportional to the level of control over the lake, correct? Right. Okay. So yeah, that was my question. I mean, we are doing a lot of complicated modeling, but in principle the single most important thing is the level of control, and of course the measured concentrations will tell you how to do the rollback. Right. I mean, we could fix control of the lake — you just fill the lake up. Exactly. Exactly. So the point is — I mean, I have a million questions, but obviously there's no time, and you're correct that we have to move on. So our next speaker is Grace Holder. And we practiced ahead of time. So Grace, can you bring up your screen? Very good. Everybody see your screen? Yes, we see it. Very good. Great. So I'm Grace Holder with the district, and we were given two basic topics to present on today. We have to figure out how to advance the slide. Right here. Right there. Okay, there we go. So here are the two basic topics, just to summarize, and each topic has five different points that we were supposed to present information on. The first topic, which is what I'm going to talk about now — then we'll allow some time for questions before going into topic two — is BACM performance. And there are five different things there: the fraction of Owens Lake subject to BACM as a function of time; the PM-10 emissions as a function of BACM and time; downwind monitoring concentrations over time; modeled PM-10 concentrations over time; and then the effects of climate variability on BACM performance. So that's what this first presentation is going to discuss.
So here's a table that shows Owens Lake dust control implementation over time. On the far left we've got the different phases of control that have been implemented on the lake, starting in 2001, all the way through Phase 9/10, which is the most recent completed phase of dust control implementation, finished in December of 2017. It has the area in square miles of each phase and the cumulative square miles of control. So as of the end of Phase 9/10, there were 47 square miles of controls on the lake. Then I just took the proportion of that relative to the amount of lake bed below elevation 3,600 feet, which is 110 square miles; that's what this column here that says "fraction of total lake bed with controls" indicates. That's for each individual phase, and then you've got a cumulative column next to it that shows the accumulated amount of control on the lake. So as of the end of 2017, there were 47 square miles, or 42.7 percent of the lake bed, controlled. And you can look at that in graphic form as well. Here, on the left, we've got the amount of control on the lake bed below 3,600 feet, shown in blue. The vertical columns in dark blue indicate the different phases, the accumulated amount of control relative to the total lake bed is the number above each bar, and the uncontrolled portion of the lake is shown over on the right side. This is a map view of the same thing — the dust control buildout over time, showing the extent of dust controls on the lake bed and their locations. Phases 1 and 2, the first phases implemented on the lake, were down on the south end of the lake as well as the northern end; they were almost 14 square miles. And you've got all the way through 2015-2017, the two latest phases of dust control, Phase 7a and Phase 9/10.
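As an aside, the cumulative-fraction arithmetic behind that table is simple to sketch. The 110-square-mile lake-bed figure and the 47-square-mile total are from the talk; the per-phase breakdown below is an invented placeholder for illustration:

```python
# Cumulative dust-control coverage, as in the implementation table.
LAKE_BED_SQ_MI = 110.0  # lake bed below the 3,600 ft elevation

def cumulative_fraction(phase_areas_sq_mi):
    """Return the running controlled fraction of the lake bed after each phase."""
    total = 0.0
    fractions = []
    for area in phase_areas_sq_mi:
        total += area
        fractions.append(total / LAKE_BED_SQ_MI)
    return fractions

# Hypothetical split: early phases ~14 sq mi, everything after ~33 sq mi = 47 total
fracs = cumulative_fraction([14.0, 33.0])
print(f"{fracs[-1]:.1%}")  # → 42.7%
```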
That takes you all the way up to 47 square miles of dust control on the lake, with the phases in between as well. This is a graph that shows the PM-10 emission trend from 1999 through 2019, a 20-year period. You've got the annual PM-10 emissions in tons per year on the left axis and the percentage of lake bed with controls on the right axis. There are two colored lines: the blue line indicates the total PM-10 emissions in the Owens Valley planning area over time, and the red line is just the emissions from the lake bed. This is taken from the 2016 SIP, so it includes modeled data through 2015, and the data after that is forecast. But it does show the decrease in emissions over time as the amount of lake bed controls increases. One thing that's notable on these plots is that the reduction in PM-10 emissions is not a straight line; there are high points and low points. You'll notice a real high point in 2005 and another in 2009, and those are directly related to controls that were ordered later. The highs in 2005 led to the 2006 settlement agreement and the areas that were ordered for control in the 2008 SIP, which resulted in this jump in dust control implementation on the lake four years later. And the highs in 2009 and 2010 were directly related to the controls associated with the Phase 9/10 project and the stipulated judgment here, five to six years later. One of the things that's important to understand about emissions on the lake bed is that they're really not uniform with time. For the areas that are uncontrolled, they change pretty dramatically from year to year and seasonally, as well as over time, based on meteorological conditions and surface conditions. Some years are windier than others.
Some years have stronger dust events, or higher precipitation. Another critical factor with precipitation is not only the amount but the timing. If you have snow events on the lake, or precipitation in the winter that then dries over a relatively cool period, it tends to create higher dust emissions than rain in the late spring, summer, or fall. The soil type and soil condition are also important in controlling the amount of dust from different areas. Here's a table from the 2016 SIP that shows the annual PM-10 emissions; this is basically the data used in the plot shown a couple of slides earlier. It has the lake bed emissions in tons per year from 1999 to 2019, the emissions in the Owens Valley planning area over the same period, and the amount of lake bed controls, in terms of the fraction of the overall lake bed, on the far right side of the table. Here's another plot showing PM-10 exceedances at the Owens Lake shoreline, both the average concentration and the number of exceedances, from 2000 to 2018. The blue columns show the average exceedance concentration per year, on the left scale, and the red line shows the exceedance-day count per year, on the right scale. You can see the same general trend: a decrease over time not only in the number of exceedances but also in the concentrations during exceedances, from 2000 to 2018. The horizontal gray dashed line is at 150, which is the federal PM-10 standard. One noticeable thing here is that in 2018 there were only eight exceedance days at the shoreline monitoring stations. Another way to look at it — sorry, it doesn't seem to show the numbers on the plot.
It just says "cell range," but I think you can still get an idea of what this is supposed to tell you; the cell range just gives you the year. So this dot down here with the black highlight around it is 2018, and this up here in the big red circle, I think, was 2001. The size of the bubble indicates the number of exceedances per year: this was, I believe, 42 in 2001, and we go down to eight in 2018. You can see there is some variability in the sizes, and hopefully in the copies you received you can see the years and the numbers for each of these bubbles. The average exceedance is on the horizontal scale, and the vertical scale is the maximum exceedance. So the maximum exceedance in 2001 was about 14,000, and the average for that same year was something like 1,200. In 2018, I don't remember the exact numbers, but I think the average exceedance was more like 260 and the maximum was around 500. This shows a map of the main monitoring sites that we have around the lake, on the shoreline areas and in the local communities. Five sites are highlighted on the map; those are the ones I'm going to show data for, since we have a relatively long record at those particular sites, so they show the trend over time better than some of the other sites. You'll notice the lines that extend from each of the monitoring locations; those refer to the wind-direction screens that indicate whether the data from that particular site reflect impacts from the on-lake direction — from the Owens Lake area — or from the off-lake direction. The wind-direction screens are shown in the table in the upper right. So we're going to start in the north and work our way around to the south, starting with Lone Pine. Here's the trend plot for Lone Pine from 1995 to 2018.
This is not one of the sites that shows our highest impacts, although early on in the record we did have a significant number of exceedances of the PM-10 standard in the Lone Pine community. It has gone down over time, so now the three-year average is below one exceedance per year. The three-year average for all of the data is shown in gray; I believe red is from the lake direction and yellow is from the off-lake direction. This is Keeler. Keeler has had a significant number of exceedances every year for pretty much the whole period we've been monitoring there with a TEOM, from 1995 to 2018. You can see that the overall trend, shown in the gray line, has decreased significantly over time, from a three-year average of 20 exceedances in 1995 all the way down to five in 2018. We also have the Keeler Dunes line shown here, in this green color. It actually increased quite a bit over time as the dunes moved closer to Keeler. And since we've been implementing controls in the Keeler Dunes — even though we haven't controlled to quite the level we want at this point — we have significantly decreased the number of exceedances. You also have the plot from the lake direction in red and from off-lake in yellow. This is the Shell Cut site, located along the southeast shore. The data record here is a little shorter; we didn't install this site until the early 2000s, but we have a three-year average record from 2003 to 2018. It shows a nice decrease over time, starting with a peak of about 16 overall in 2003 and decreasing to a low of about four in 2009-10. It has actually increased a little since then; you can see that in the overall trend as well as from the lake direction, and that's basically because of off-lake sources in that area that impact the site. Here's the Dirty Socks monitor.
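The three-year averages on these trend plots are trailing means of the annual exceedance-day counts; a minimal sketch of that calculation (the counts below are made up for illustration, not any site's actual data):

```python
def three_year_average(annual_counts):
    """Trailing 3-year mean of annual exceedance-day counts.
    The first two years have no full window, so the series starts in year 3."""
    return [sum(annual_counts[i - 2:i + 1]) / 3.0
            for i in range(2, len(annual_counts))]

# Hypothetical annual exceedance-day counts
counts = [22, 19, 19, 10, 7, 4]
print(three_year_average(counts))  # → [20.0, 16.0, 12.0, 7.0]
```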
You'll notice that, while all the other plots have the same vertical scale, this one is a little different. This is a site that was highly impacted in the beginning: we had three-year averages of over 40 exceedances per year in the early 2000s, before dust controls were implemented in the southern part of the lake. Those have decreased significantly over time, and in 2018 we still hadn't quite met the number we're looking for overall, although the count from the lake direction is down to one. We do have a data gap here from 2012 to 2017 in the three-year average data, because the site had been removed for two years due to disputes with DWP; it's now reinstalled, though. And here's the Olancha site, the community at the south end of the lake. It also shows a decrease over time. This is a site that has some impacts from Owens Lake, but it's a little out of the path of most of the dust plumes — it's shifted off to the west of most of the impacts coming down from the north off Owens Lake. You can see a decrease in the number of exceedances over time. So those are the plots for the downwind impacts; it just depends on which direction the wind blows and whether a site is going to be upwind or downwind. Those are basically the trends over time from the downwind monitors. In terms of BACM performance over time, one of the questions was the effect of climate variability on BACM performance. Basically, each BACM for Owens Lake, and each BACM variation, has specific compliance requirements that must always be met. They're really independent of climatic conditions and weather conditions — requirements that have to be met no matter what the weather is during the dust season. So if it's a dry year, they still have to have the same amount of wetness on the lake, the same amount of vegetation cover. It's a very managed system.
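The kind of always-met compliance rule just described can be sketched as a toy check: each satellite pass yields a wet-area fraction, and every observation must meet the target. The 0.75 target and 8-day cadence in the comment are placeholders; actual targets are set per area in the SIP:

```python
def shallow_flood_compliant(wet_fractions, target=0.75):
    """True only if every observation meets the wetness target.
    `wet_fractions` are wet-area fractions from successive satellite
    passes (nominally one every 8 days), regardless of weather that year."""
    return all(f >= target for f in wet_fractions)

print(shallow_flood_compliant([0.82, 0.79, 0.76]))  # → True
print(shallow_flood_compliant([0.82, 0.70, 0.76]))  # → False
```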
So they're operated and managed so that they meet those performance requirements, to ensure that they always achieve the specified control efficiency of those areas. However, as we talked about a few slides earlier, the dust emissions in the uncontrolled areas outside of the BACM measures do change over time, and you can see that in a lot of the PM-10 monitoring data. This is a table that shows some of the BACM performance requirements for each of the different BACM measures. We've got shallow flooding, managed vegetation, and gravel blanket as the three BACM measures. There are some variations of the BACM for shallow flooding: dynamic water management, which allows for shorter dust control seasons, and brine and tillage, two variations that allow for other measures to be implemented with a shallow-flooding backup. And then you've got the column here that says "compliance requirements," which goes over the specific requirements each of the areas has to meet. For shallow flooding, the main compliance requirement is meeting the wetness targets. We monitor the amount of wetness covering each of those areas on an eight-day basis, when the Landsat imagery passes over. The last column gives you the frequency, and then you have the different things that are monitored on a regular basis to ensure they meet the performance requirements. Very good. That's it for topic one. Yeah, we are going to take a 10-minute question-and-answer session on topic number one. So with that, questions from the panel. Yeah, this is Greg. I guess one question that's still lingering for me as we think about this — let's just take this slide that we have right here.
For the shallow-flooding BACM, there's a certain percent-wetness requirement for compliance — and the same goes for everything else in that third column. My question, which I still haven't seen the documentation for, is: how was that percent-wetness threshold established? Or the percent cover for managed vegetation — how was that established? In other words, what's the data that backs up those actual thresholds? Since, as I understand from our earlier conversation, they're strict thresholds, what's the data behind them? Yeah, there was testing done, mostly in the 1990s, that established those particular requirements. That's one of the main topics for the next presentation, so I can go over it in a little more detail, and if you have additional questions after that presentation, we can get into more of the information then. No problem. Great. This is Scott Tyler. Grace, thank you very much. First off — I don't think I said this at our last meeting — congratulations to both LADWP and the district on the plots showing the reductions in PM-10 mass and exceedance days. That is quite impressive. And Grace, could you just go back to the one that showed some significant reductions and then some significant increases, which you were saying were related to mitigation? A couple back. No, forward. Like this? I'm going the wrong way, sorry. That's okay. Where was it? Yeah, that one. So just for my understanding: do you have a sense of why the big peak in 2005 and the big decrease in 2006? I think you said you thought it was related to management activities — was it because there was more dust produced while surfaces were disturbed during the management activities? No, I don't think it's related to management activities, this peak in 2005.
As I recall, we did an analysis on that. It was a really windy year — we had more wind events, and some of the areas that hadn't opened up before, maybe because they're not as frequently active, opened up that year. A lot of the areas in the central part of the lake, in the heavy clay areas that have higher salt efflorescence, became more active during that time frame, and so we had a higher peak in emissions. Okay, good. I mean, it's great to see that the fluctuations have decreased over time. And similarly, following on that, 2006 is very low — and the explanation there is? I don't recall the exact explanation, but it was very low. I think the areas that had opened up the year before, in 2005, did not open up in 2006: we didn't have the winter rain, so we didn't get the salt efflorescence and the high PM-10 emissions from those areas. So there were probably 10 or 15 square miles of the lake that were emissive the year before and were not in the following year. Also in 2006, I think close to 30 square miles of dust control got implemented — that's this jump right here — so you had a large amount of shallow flooding that went into place as well. Yeah, but then you start to see a continuous increase until 2009. Yeah — I mean, there's no single factor behind some of these trends. Okay. This is Venky here. I have a more basic question: how are these emissions estimated? These are done from the modeling — so these are actually part of this table here. Through 2015, those are modeled based on the modeling that was done for the 2016 SIP, using actual data from the lake monitoring network; from 2016 on, those were forecast emissions.
The reason I ask the question is that the modeling of emissions depends on the K factor, which varies by an order of magnitude. These are not actual emissions; these are inferred emissions from the sand flux and the K factors. If the K factors vary by an order of magnitude, I'm not sure how you can estimate the emissions directly from the sand flux. Well, it's actually a measure of emissions. No, but you still have to multiply by a K factor, and the K factor itself shows large variations. Maybe I'm getting this wrong, but the K factor is estimated through some calibration — I'm not sure how it's done — but you're not actually measuring emissions; you're just measuring the sand flux, correct? Oh yes, we're measuring the sand flux, and then we're solving for the emissions that must have occurred to cause the concentrations on the shoreline, and thereby inferring the K factor. So you're doing some inverse modeling. Yes, it's inverse modeling. We're not calibrating per se, because we don't do it event by event; for a season there might be 40 samples, and we look at the residuals and say, well, the K factor for these 40 samples would be about 5 times 10 to the minus 5, or whatever. And that is the K factor. So it's inverse modeling — and of course we don't inverse-model every hour; we inverse-model over ensemble data sets. Okay, but then that would depend on the accuracy of the inverse model. You're not measuring actual emissions, so what would you estimate the uncertainty in the emissions to be? The reason I'm asking is that huge rise, then the drop, then the increase: how reliable are those trends? I think if you look at the plot that's on the screen now, you can see the same pattern. These are actual monitored data, and you can see the same peak in 2006 and 2009-10.
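One way to read the ensemble inverse-modeling just described: treat the K factor as a single unknown scale linking concentrations modeled with unit K (emissions set equal to sand flux) to the monitored concentrations, and choose the value that minimizes the residuals over the season's samples. This least-squares sketch is an interpretation, not the district's actual procedure; all numbers are synthetic:

```python
def infer_k_factor(c_unit, c_obs):
    """Least-squares K such that K * c_unit best matches c_obs over the ensemble.
    c_unit: concentrations modeled with emissions = sand flux (i.e., K = 1);
    c_obs:  corresponding monitored shoreline concentrations."""
    num = sum(u * o for u, o in zip(c_unit, c_obs))
    den = sum(u * u for u in c_unit)
    return num / den

# Synthetic ensemble: generated with a "true" K near 5e-5 plus noise
c_unit = [2.0e6, 1.5e6, 3.0e6, 2.5e6]
c_obs = [101.0, 74.0, 151.0, 124.0]
k = infer_k_factor(c_unit, c_obs)
print(f"K ≈ {k:.2e}")  # → K ≈ 5.00e-05
```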
That comes right from the actual monitored data. I think also, if you look at the data, you can see the same pattern in the actual PM-10 emissions: if you were just to plot total sand movement on the lake, you'd see the same relationship. And the K factor remains relatively constant for the same area — it tends to be repeatable, not always, but the factor-of-10 difference is between different areas on the lake. The Keeler Dunes, for example, have a much lower K factor than areas on the lake bed. So it's relatively small, you say, for the same area. Yeah. Okay. Maybe this is just related to that: the modeling and the measurement slides that show these up-and-down trends — they're a little bit shifted in year, correct? Or am I reading the plots wrong? I think you're reading it correctly. One of them is plotted by year ending, so all the data for 2005 would be plotted here, while the other is the actual year itself. Got it — it's consistent, it's just a matter of the plotting convention. Other questions from the panel? This is Venky here — I will let the other panel members ask questions, because I don't want to ask too many. Sounds like you're on. Okay. Yeah. What explains the trend in exceedances? I noticed you haven't plotted the actual average concentrations at the monitors; you've plotted the number of exceedances. At some monitors the exceedances decreased, and at some they're almost constant. What explains that? Are you referring to the graphs of the individual monitors? You showed different monitors — some showed a substantial decrease in exceedances, and some show almost the same number of exceedances year after year. Yeah, let me get back to the map here. Sorry — I hope you can all see my arrows.
So if you look at the map showing the locations of the different monitors, I think the position of the monitor affects that quite a bit. Lone Pine and Olancha are going to be more constant over time just because they're out of the path of most of the dust impacts from the lake. Lone Pine, for instance: when there's a south wind, the dust might impact Lone Pine some, but the main path of the dust plume goes to the east of Lone Pine. Same thing with Olancha: for a northwest wind coming down the valley, Olancha will be off to the side and won't get the direct impact from the lake. So I think a lot of the trends you see at those spots are a reflection of the location of the monitor. So then the follow-up question is: can the model explain the number of exceedances — can the CALPUFF model explain this exceedance trend at the different monitors? Because in principle it should be able to, correct? It does a pretty good job of predicting which locations should have the highest number of exceedances. I see — so you can predict the trend, then. Yes. Okay. At some point I'd like to see that. And I hate to cut us off, but I think we do have to move forward. There will potentially be some additional time at the end, so we can revisit some of these questions, but let's move to the second part of Grace's talk, looking at BACM testing and assessment. All right. So we've already gone over the three different BACM measures: shallow flooding, managed vegetation, and gravel. Those are the three currently approved BACM for Owens Lake. To go over a timeline of the BACM development: the first BACM were identified for Owens Lake in 1994, in the 1994 BACM SIP. It actually identified three BACM, which were shallow flooding, vegetation, and riparian corridors.
Further investigation of the riparian corridors determined they were an infeasible measure. So the 1997-1998 SIP that designated BACMs for Owens Lake differed from the 1994 SIP: two of the measures were the same, shallow flooding and vegetation, but gravel was determined to be the third BACM rather than the riparian corridors. So the 1997-1998 SIP really established the three current BACMs that we have for Owens Lake. There have been some modifications since that point in time. In 2003, the official procedure for developing new or modified BACMs was established and approved in the 2003 SIP. Then 2011 saw the first actual modification of one of the BACMs, and that was for managed vegetation. It was approved by the district governing board, and it reduced the amount of cover that was needed and added a spatial provision: rather than the cover being uniform over the whole area, it allows for some variability in the distribution over the control area. In 2013, modifications were made to reduce the thickness of the gravel, so it didn't have to be the full four inches anymore; there was a provision for reduced-thickness gravel. It also allowed for brine shallow flooding. That was through the 2013 SIP amendment and the 2013 stipulated order of abatement. The most recent modifications to BACMs have been in the 2016 SIP, which allowed up to 48 plant species. Before that it was just the one species, salt grass, that was allowed for dust control; now there are 48 different species that are allowed. It also allowed tillage with BACM backup (TWB²), brine with BACM backup, and the dynamic water management modifications to shallow flooding. So that's a quick timeline overview of BACM and its changes over time for Owens Lake. Going into actual BACM development, hopefully this answers some of Greg's questions.
The main BACM of shallow flooding was established through the North Flood Irrigation Project, which was done in 1994-1995. It was written up in a very comprehensive report by Hartebeck et al. in 1996. That was a pretty detailed study that looked at the water cover needed to get certain control levels, the sand flux, the PM-10 concentrations, the meteorology, and the water distribution over the area. A lot of different things were examined in that particular study, and it really established the bar for the shallow flooding BACM. Managed vegetation was established through several different studies: field tests done by Lancaster et al. in 1996, testing different vegetation covers and their control efficiency on Owens Lake, up in the Owens River Delta area; wind tunnel tests by White from UC Davis in 1997; and feasibility tests of actual implementation and how to grow plants on Owens Lake, as summarized by Scheidlinger in 1997. There was approval in 2011 to reduce the overall cover allowed for managed vegetation. Originally, through the combination of the Lancaster and White studies, it was determined in the 1998 SIP that you needed 50% plant cover over the surface on every acre to get the 99% control level. That was not reached in the first vegetation project out on the lake, which DWP implemented at the south end of the lake. So as a result of that, and of studying the actual conditions in that three-and-a-half-square-mile vegetation area, changes were made: the overall cover requirement was reduced to 37% average cover over the area, and some variability in the spatial distribution was allowed, so it didn't have to be uniform over the whole control area. And then, in number three here, the managed vegetation modification just allowed for additional plant species besides salt grass.
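To make the 99% control level concrete: as described above, control efficiency is defined relative to the flux from the same surface with no cover (bare, uncontrolled). A minimal sketch of that calculation; the flux values and function name below are illustrative, not the district's actual methodology or numbers:

```python
def control_efficiency(controlled_flux, uncontrolled_flux):
    """Fractional reduction in flux relative to a bare, uncontrolled surface."""
    return 1.0 - controlled_flux / uncontrolled_flux

# Illustrative only: a bare-surface flux reduced by two orders of magnitude,
# the kind of reduction reported in the field studies mentioned above.
eff = control_efficiency(1.0, 100.0)
print(f"{eff:.0%}")  # prints 99%
```

The wind tunnel protocol described above follows the same logic: measure flux with no plant cover first, then gradually add cover until this ratio reaches the 99% target.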
In terms of gravel development, the initial work was based on tests conducted on Owens Lake in a couple of different locations in 1986: very small scale, measuring different kinds of gravel, their degree of sorting, as well as their thickness. That's summarized by Cox in the 1996 reference. Ono and Keisler were looking at gravel in terms of the size needed for control, in terms of the threshold wind speed at the surface. And then the 2013 SIP amendment allowed for reducing the thickness of gravel from the four inches established in the initial BACM description to two inches, provided a geotextile fabric is placed underneath. Some of the metrics used to evaluate the BACMs during the testing process: for shallow flooding, PM-10 monitoring was conducted on an upwind-downwind transect through the project along the main wind direction. Sand flux was measured using Sensits as well as different kinds of sand catchers on the lake. The water cover was measured. Wind tunnel testing was conducted on the surface with a portable wind tunnel taken out onto the lake bed. We did a lot of hydrologic monitoring and soil monitoring, as well as meteorology, with wind speed and wind direction monitored within the test site. For managed vegetation, the things monitored and measured were the plant cover, the roughness density, wind speed, sand flux, and PM-10 in wind tunnel tests. The actual field tests were really too small to conduct much PM-10 monitoring in the field. And for gravel, the metrics were observations of the surface: whether or not the gravel blanket infilled with material from underneath or with material deposited from outside, as well as the gravel size and gravel sorting.
In terms of BACM modification, the requirements for modifying BACMs are provided in the 2016 SIP, in Attachment D of the Board Order, and it has the Board Order number given at the top there in green. There are really two different provisions within Attachment D for modifying BACMs. One is making adjustments to existing BACMs, whether shallow flooding or a non-shallow-flooding measure (that would be managed vegetation or gravel); the other is research on potential new BACMs. For shallow flooding, there's a specific provision that allows testing of changes to the shallow flooding wetness cover curve, the amount of wetness needed to get 99% control. There's also a provision that if all of the areas in the 2016 SIP total dust control areas (that's what TDCA stands for) have not exceeded for one continuous year, then a 10% reduction in the wetness cover is allowed in the 99% control areas. For the non-shallow-flooding BACMs, there are provisions for making modifications, with limits on the size of the area being modified, and it has to be a multi-step test, with the test design approved by the Air Pollution Control Officer. So DWP would submit a plan and test design to Great Basin, and it would be reviewed and approved by the Air Pollution Control Officer. They would then conduct the test, and the results would be reviewed and potentially approved by the APCO. Provided that happens, the city can apply to the district for a SIP revision to refine the BACM after three years of successful operation. So those are the provisions for making adjustments to existing BACMs that are out on the lake. Research on potential new BACMs includes allowing testing to be done at any time on the lake bed, but it has to be done outside of the total dust control areas; it can't be done within an existing BACM area.
It has to have three years of testing, and the reason for the three-year requirement is that conditions are variable from year to year, so the three-year period would capture the range of variability, whether it's wet, dry, windy, not windy, that type of thing. After that testing, the city can apply to the district and the Air Pollution Control Officer for a SIP revision to allow the new BACM measure to be implemented. If it is approved by the district and CARB, it then has to be submitted to EPA; before it's actually approved by EPA, but after approval by the district and CARB, it can be implemented on only up to a half square mile of new control areas. Once it's approved by EPA, it can be used in any of the control areas on the lake. On BACM effectiveness evaluation, as we talked about before, there are specific performance standards that need to be met for each BACM. Those are provided in the table in the previous presentation. They include things like percent cover, whether it's wetness cover, plant cover, or gravel cover; crust thickness for the brine areas; roughness within the TWB² areas; and sand flux. The amount of sand flux is measured within some of the control areas, especially within the areas that have BACM backup, so that would be dynamic water management, TWB², or the brine areas, where sand flux is an important criterion. We also have the dust identification program, the dust ID program, and the data provided from all the monitoring associated with that program are used to determine emissions from the BACMs. We have monitoring sites within the BACM areas, and that data is input into the model, so it can be used to determine how well the BACMs are performing, as well as other areas on the lake outside of the dust control areas.
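The exceedance counts discussed earlier in the Q&A come straight out of this kind of monitored record. A minimal sketch of the counting itself, using made-up daily values and the 150 µg/m³ federal 24-hour PM-10 standard as the cutoff:

```python
PM10_STANDARD = 150  # federal 24-hour PM-10 standard, micrograms per cubic meter

# Hypothetical 24-hour average PM-10 values (ug/m3) at one monitor; these
# numbers are illustrative, not actual Owens Lake data.
daily_pm10 = [42, 180, 95, 310, 150, 22]

# A day counts as an exceedance only when its average is above the standard.
exceedances = sum(1 for c in daily_pm10 if c > PM10_STANDARD)
print(exceedances)  # prints 2
```

The trend plots shown earlier are just this count aggregated per year per monitor, which is why monitor placement relative to the dust plume path dominates the shape of each curve.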
Some of the parameters being monitored as part of the dust ID program include wind speed, wind direction, sand flux, PM-10, precipitation, and surface conditions. We have a whole network of different cameras, as well as people on the lake or on the ground who do the dust observations and actually delineate different source areas when they become emissive, and who then evaluate the BACMs under the conditions that cause emissions. In terms of socioeconomic evaluation, those kinds of evaluations have been much more limited for the dust control project, or the Owens Lake Dust Mitigation Project, over time. We have some generalized impacts or effects of the dust project on the lake, but there haven't really been formal evaluations associated with these. General observations for these categories would include the project activities, which have been going on for 20 years now with nine phases of dust control, and the impacts, which have been improved air quality and reduced health impacts in the local communities. That would mostly be related to areas like Keeler, which have seen a significant reduction in both the number and the concentrations of dust impacts in the local community. It's also added quite a lot of jobs in the area. About 200 jobs were created during peak construction activities, and there have been nine phases of those construction activities over time, so that adds up to a lot of jobs over the years. There are also about 70 to 75 permanent DWP jobs that have been created on the lake, associated with the day-to-day operation and maintenance work that needs to be done in the 47 square miles of dust control.
There's improved health and safety along the highways, especially the highways right around the lake bed: 395, 136, and Highway 190. In terms of overall cost, over the 20-year period of dust control implementation, about $2.25 billion has been spent by DWP on the project since about 2000, and annual operational costs associated with those areas are about $25 million a year. In terms of water usage, since 2011, after some of the larger areas had been controlled on the lake bed, there was a peak of about 73,000 acre-feet of water used per year. Since then that's been reduced, so that by 2018, through water conservation efforts and just being more efficient in the project, the water usage had dropped to about 60,000 acre-feet per year. And then there are also things to consider in terms of the economic and social benefits of the water exports to Los Angeles, as well as the economic and environmental costs and impacts of water exports from the Owens Valley. So that's the last slide for this presentation, if there are any questions. Thank you very much, Grace. Questions from the panel? Yeah, I'll start off, since I was stuck on the stuff I wanted to hear. So Grace, I'm still a little confused. If we go to the page where you have those citations for the papers or reports that were used, one of them for vegetation, for instance, is Lancaster 96, and I assume that's work Nick Lancaster did out there with Andreas Baas. It would be interesting to see that paper. I'd personally like to have all of those papers in our box so that we can evaluate them, but I'm curious about how these things were established. The Lancaster data was published in a paper in 1998 that I'm familiar with, and in that paper they showed that a cover of 25% reduced the flux by two orders of magnitude, and yet the BACM requirement, as you said, was 50% cover. So once again, my question is: how was that 50% actually established,
given the data it's based on? I'm not pressing you on that particular issue, and I don't know who's responsible, but that 50% number: where did it come from? Because it didn't pop out of that paper, at least from the data that's available in the literature. Obviously, that's a good point. That number actually came from a combination of the Lancaster work done on the lake, which I think showed about 95% control with about 23% vegetation cover (that was work done up in the delta area, in areas that already had various amounts of cover established), and the work done by White at UC Davis in the wind tunnel. Material was actually taken from Owens Lake to UC Davis and used in the wind tunnel, and they tested various amounts of cover within the wind tunnel, and that showed, I think, that you needed 54% to get 99% control. So it was a combination of those two studies that resulted in the 50%, which is kind of a more conservative number. This is Ann, and I'll just add that all of these references Grace has in the presentation have been provided to the panel. In the May information request that was sent out, all of these reports were sent to the Academies to distribute to the panel, so you should already have them. This is Venky here. I have a question on the testing itself, and this is something that is a function of my ignorance about the subject. When you say 50% vegetation cover results in 99% control, does it mean that when you put in 50% vegetation, the concentration goes down by 99%? Is that what it means? Yes. So somebody made a measurement, and the concentration fell off? That was relative to an uncontrolled surface. For the wind tunnel testing they had the same material; they used fake plants, as is my recollection, to get the cover, but they started with the same material with no plant cover, and then they
gradually added cover to the surface to establish what was needed to get 99% control. So 99% control refers to downwind concentrations? Yes. And I suppose that depends on the height of the vegetation? Yes, the height of the vegetation would be a factor, but that was not a factor measured as part of this; it was just the horizontal cover over the surface. It doesn't depend on the height at all? For this particular work it was just protection of the surface, so it was using a very low-growing plant, salt grass, which does not typically grow very high, so it was just considered to be the horizontal cover across the surface, not the height of the plant. Thank you. The answer is it does matter, and in fact the Lancaster paper does have lateral cover in there; that's the projection of the cover. It's basically how much cover is presented to the wind per square meter of ground, and it does matter, and we can talk more about it. Oh. And this is Scott. As I recall, and again this is just recollection, they were trying to mimic in the wind tunnel testing something that looked like salt grass, so it had a length scale of 10 to 15 centimeters at most. So Greg, it does depend on the lambda-f and lambda-p parameters that we use to describe cover, that is, the projection in the horizontal plane and the vertical plane. That's right, so if you look at that Lancaster and Baas paper, most of the x-axes are actually lambda; they only actually go to the horizontal cover in the last figure, so they are mostly lambda. This is Nusha, I have a question. You mentioned you tested for three years before implementing the BACMs. I was wondering about that: three years is a very arbitrary number. You can test during three years that are all drought years, or three years that are all wet years. I understand if it's just been picked because that's the number of years you want to test something, but I'm not necessarily sure climate had anything to do with that decision, because you're not necessarily going to experience all sorts of climatic events within three years. The three years may have originated from a few places. The first is that that's the attainment demonstration period: when the SIP was written and those provisions were put in for approval and development of new BACMs, three years would allow for that attainment period to not have new BACM testing influence reaching attainment. It also brackets the testing period for LADWP, so that they won't feel like the district has an indefinite window. But the district certainly doesn't have to approve a new BACM after three years, and could require additional testing if they felt the meteorological conditions weren't representative of typical trends. This is Scott Tyler. It's also sort of the half time scale of a typical drought cycle for the central and southern Sierras, so there's a high probability of at least catching some of the wet and dry, just from a climatic standpoint down there. This is Scott Van Pelt, and I think what's also important is the windiness issue. As we saw in Grace's first presentation, one year was not windy at all and the following year might be very windy, so in three years you're probably going to catch a windy period. This is Ted. Our time is about over, so one or two last questions. This is Venky here. Ted, I assume that one of the objectives is to reduce water usage, so what is the preferred BACM now? Is it managed vegetation and gravel rather than shallow flooding? That's a complicated question. When we met in Los Angeles, one of the presentations covered the percentage of area covered by each BACM, and certainly, in terms of reducing water usage, the Los Angeles Department of Water and Power prefers managed vegetation, tillage, and brine. Those still don't represent as much of the controlled area as shallow flooding does, but they have increased over time. They do have additional regulatory requirements from other agencies, such
as Fish and Wildlife, and from state lands and the California State Lands Commission, that prevent them from moving totally to waterless or water-neutral BACMs. And isn't it true that the local communities also don't want to go to all gravel, or something that doesn't use water at all? That was my impression when we were down there. The California State Lands Commission as well as the tribes are not preferential to gravel. Different agencies and different groups have different preferences; the tribes have repeatedly said their preference would ultimately just be to refill the lake. That makes sense. This is Nusha, I have one other question for you. I think the last slide was an interesting way of showing the result, stating the economic value of this water to Los Angeles and then sort of the cost to the local community, and I find that a little bit inaccurate as a way of measuring this, because obviously Los Angeles has a big economy, so it's very easy to say, oh my god, the economic benefit is so high that what you're doing is right. So I wonder if there has been any kind of alternative way of doing this kind of measurement: measuring the environmental or social impacts, the cost to health, the cost to jobs, and trying to put it on a different scale, rather than comparing it to the economy of Los Angeles. As far as we know, there hasn't been that analysis. The district has really been focused just on air quality and making reductions in PM-10 emissions. I think recently there's been a big effort at the Great Salt Lake to do this type of analysis there, because they're proposing a water project that will impact lake level and create emissions, so they want to have this evaluation prior to the impacts on air quality. That's a different project, though. With that, we were planning to be done by eight, and it's just a little bit after that now, Eastern time, so I would like to thank
both speakers and everybody else who was able to make it this evening. We're going to have another session tomorrow evening, so with that, thanks again to everybody involved, and we'll see each other and talk again tomorrow and next week.