Let's see. I think we'll get started here. So, welcome to the session on the Future-Focused Research Program, Reimagine Research. I think some people are still coming in, and I know we have a few online. But I'm really excited about this program, and I wanted to spend a few minutes on its origin. I'm Ray, and I'm the director of research programs here at the NRC. Back in fiscal year 2020, the Commission provided us funding, approximately 700-some thousand dollars, to initiate this Future-Focused Research Program. Most of our work in research obviously supports the business lines in nuclear reactor regulation, NMSS, and NSIR, to help the licensing offices, of course. But what I thought research needed to do for the agency as well is to look ahead, farther ahead than what's on our immediate plate, to help get the agency ready for what might be to come in the area of advanced nuclear technology. So we initiated this program, where we used our senior level advisors to develop the program and evaluate proposals, and staff internal to the NRC, not just research but across our agency, would propose ideas. Projects are nominally to complete within 18 months, maybe up to three years, at several hundred thousand dollars at the most, probably more like a hundred thousand. Staff bring in ideas about technology areas we may need to look at as a regulator and what the regulatory implications might be. And with research, there's a chance that what we look at turns out to be ahead of its time and might sit on a shelf for a little while. But in other areas, it may turn out that we've got something here, and the business lines may want to fund it to develop it further because we start seeing demands on that technology from licensees.
This gives our staff a chance to bring up and work on new ideas that are exciting to them but for which funding might not necessarily be provided by the business lines yet. So that started the program, and we've really had great support from the Commission. We started very modestly; as I mentioned, FY20 was 700-some thousand, and that was just for that year. Then we put it in the budget for about a half a million dollars and two full-time equivalents. But here's what was exciting: this fiscal year, fiscal year 23, we asked for 500,000 and the Commission gave us a million, twice as much as we asked for. That tells us the Commission supports this, but we've now got to start showing what it does and what the results are, and that's the phase we're in now. What we have today are speakers from research and other areas, NRR and NSIR, who will talk about the research, which is in various stages; we'll see some that's been completed as well as some that's just getting underway. So I want to leave plenty of time. I'm sorry, the lighting is a little bit bad here. I wanted to let you know who our speakers are, and then I'll introduce them as they talk. We have James Chang, who's in my office; he's going to talk about the use of unsupervised machine learning to prioritize inspections. Then we have, joining virtually, Mike Maseca, who's going to talk about the applicability of current atmospheric dispersion models for extreme, persistent cold locations. And L. Tarrif, from our nuclear security and incident response organization, will talk about integration of safety, security, and safeguards during design and operations.
And last but not least, Derek Halverson, who's in our Office of Research; he's going to talk about model-based systems engineering applied to digital instrumentation and control. So the first presenter, again, is James Chang. James is a risk and reliability engineer in the Office of Research performing diverse risk assessment activities on safety and security of operating reactors, decommissioning reactors, spent fuel pools, you name it. He's a risk and reliability expert in our group. Before joining the NRC, he worked for the University of Maryland, the Paul Scherrer Institute in Switzerland, and the Institute of Nuclear Energy Research in Taiwan, where he developed advanced dynamic probabilistic risk assessment tools and performed thermal-hydraulic experiments for nuclear plant safety. So with that, I'll turn it over to you, James. Thank you.

Thank you. Good morning. The objective of my future-focused research is to perform a feasibility study on the use of unsupervised machine learning to prioritize nuclear power plant inspections. The motivation was that in the past few years, because of COVID-19, the NRC was not able to conduct on-site inspections as scheduled in the inspection plans. That led to the thought that if we had a system that could identify inspection urgency, it would help us decide, if we can only do one on-site inspection, which inspection that should be, and also which area the inspection should focus on. That's the initial thought. The idea of how to do it comes from Netflix. Netflix uses unsupervised machine learning techniques to analyze its customers' movie-viewing histories. From there, it identifies patterns, which we call clusters, in movie-watching behavior, and it's able to use that information to recommend movies to individual customers.
As Ray was saying, the NRC oversees the nation's operating power reactors, and each reactor has a performance history represented in its inspection reports. Can we use this unsupervised machine learning technique to analyze the inspection report information, identify safety clusters or signatures, and then inform individual reactor oversight about areas of focus or trends, and from there develop an inspection urgency for specific areas? That's the general idea of this research project. Before getting into that: you may have heard a lot about machine learning, but here we want to focus on unsupervised machine learning rather than supervised machine learning. In the terms IBM uses, unsupervised machine learning uses machine learning algorithms to analyze and cluster an unlabeled data set. The algorithms discover hidden patterns or data groupings without the need for human intervention. Now, "without human intervention" describes a mature algorithm. From first constructing the algorithm to the point where it is mature, there is certainly a learning process that requires human intervention to shape the algorithm. And the beauty of this, in the Netflix example, is that when it provides a recommendation to one individual customer for which movie to watch, the recommendation is not based on just that customer's movie-watching history. It is based on all of Netflix's customers and their movie-watching histories; from there it identifies a pattern to provide a recommendation for this one customer. That's the same thing we're trying to do here: use this unsupervised machine learning technique to go into the inspection reports and identify hidden patterns that may be a safety indication for inspection. So how does this compare with what we do now?
Currently, the NRC has dedicated staff to do this job. We have dedicated staff who review operating experience incidents as they come in, provide analysis, and share it through operating experience communications to a broader NRC staff group. These staff also analyze the incidents for trends. From time to time they see that a certain system, say the backup power supply system, has incidents caused by human error during maintenance trending high. So this group might provide a recommendation that if we perform a maintenance inspection, we might want to focus on this backup power supply system and pay attention to the maintenance program in this area. That is a very good benchmark for this future-focused study: can we use artificial intelligence to achieve the same level that NRC staff currently achieve doing the job? The NRC currently does not have in-house experts in unsupervised machine learning, so we relied on outside expertise; the idea was to issue a contract to find experts, real experts in this area, to perform the study. The NRC will provide the inspection reports; we have put these inspection reports, in PDF format, on the NRC public website for all the commercial operating power reactors. In addition, for the subset of inspection reports with findings, the NRC also maintains a database that has more labeled and unlabeled information about those findings. So we want to use this data as the initial point to feed into the study and see what we can get.
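As a rough illustration of the clustering step, not the contractor's actual system, here is a minimal bag-of-words k-means sketch in plain Python. The "report" snippets, vocabulary, and cluster count are invented for illustration; a real system would use far richer text features and far more data.

```python
import math
from collections import Counter

def vectorize(text, vocab):
    """Bag-of-words vector: count of each vocabulary word in the text."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(vectors, k, iters=20):
    """Minimal k-means with deterministic farthest-point initialization."""
    centroids = [vectors[0]]
    while len(centroids) < k:
        # Next centroid: the vector farthest from all chosen centroids.
        centroids.append(max(vectors,
                             key=lambda v: min(dist(v, c) for c in centroids)))
    labels = [0] * len(vectors)
    for _ in range(iters):
        # Assign each vector to its nearest centroid.
        labels = [min(range(k), key=lambda j: dist(v, centroids[j]))
                  for v in vectors]
        # Move each centroid to the mean of its members.
        for j in range(k):
            members = [v for v, lab in zip(vectors, labels) if lab == j]
            if members:
                centroids[j] = [sum(col) / len(members)
                                for col in zip(*members)]
    return labels

# Hypothetical mini "inspection report" snippets -- not real NRC text.
reports = [
    "maintenance error on backup power supply",
    "backup power supply maintenance procedure error",
    "fire protection barrier inspection finding",
    "fire barrier seal degraded finding",
]
vocab = sorted({w for r in reports for w in r.lower().split()})
labels = kmeans([vectorize(r, vocab) for r in reports], k=2)
print(labels[0] == labels[1], labels[2] == labels[3], labels[0] != labels[2])
# -> True True True: the reports group by topic without any labels being given
```

The point of the sketch is the "unsupervised" part: nobody tells the algorithm which reports concern maintenance errors and which concern fire barriers; the grouping falls out of the word patterns alone, which is what the study hopes to exploit at fleet scale.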
The reason I call this a feasibility study is, first, that we don't know much about this technique, and second, that from studying the problem before issuing the contract, we identified issues in the data, in the technology, and in the combination of data and technology. From the data perspective, the NRC currently has 93 commercial power reactors distributed across 55 sites. Taking the Arkansas units 1 and 2 site as an example, it has 232 inspection reports covering 2002 to 2022. So across the fleet we are looking at on the order of 10,000 inspection reports. That's a lot of reports. But from a machine learning perspective, that is a very small amount of information. Last year EPRI published a technical report titled Automating the Corrective Action Program in the Nuclear Industry. The topic of that report was using machine learning techniques to analyze condition reports for the corrective action program. The idea was to use machine learning to screen out the unimportant condition reports, to save manpower in reviewing the rest of the condition reports. That effort had a fair degree of success. In one of the examples, a utility used 600,000 condition reports to train its machine learning algorithm. Now, inspection reports cannot be compared to condition reports on a one-to-one scale, but this shows the scale of information typically needed to feed into these algorithms to train them to maturity. So that's the data-side issue. On the technical knowledge side, this project is not intended to develop a new algorithm or do custom analysis. Instead, we want to look into the off-the-shelf, available AI systems to do the job.
So what are they, and how good are they? Combine that with the current state of technology and the data resources we have, and the question is how far we can get; that's where we are. As for project status, we just awarded the contract to a company two weeks ago. And this was a learning process. We issued the first solicitation a few months ago and received two offers, but we rejected both of them. The reason is that the solicitation required the contractor to first demonstrate knowledge and skill in performing unsupervised machine learning, and second, demonstrate knowledge about nuclear power plant systems. Both offers were good on the first item, but on the second item, the nuclear power plant system knowledge, we could not accept them. So we withdrew the solicitation, revised the criteria, and weakened the requirement on nuclear system knowledge, and this time we received nine offers. After review we selected the current contractor, and that's moving forward. So that was a learning experience in awarding this type of AI research. The contract has two tasks. The first is to provide an overview of the commercial AI systems we're targeting, including Amazon's SageMaker, Microsoft's Azure, Google's Google AI, and Mandat. So the first task is a preliminary analysis of these four AI systems: given our data sources and end goal, verify their functions and how well they can perform the job, including natural language processing and clustering all the way to profile recommendation; how good the user interface is and how easy it is to use; and what the cost is. That's the first task.
The second task is to pick one system and perform a detailed analysis, processing the data from beginning to end, to demonstrate how well it does and what resources are needed for it. The contract schedule expects both tasks to be completed in four months, so maybe next year, in the same session, I can present the results to you. Looking at anticipated results, we expect this to tell us about the usability of this kind of product for the NRC. First, it certainly complements the NRC's current efforts in operating experience, and as the technology matures it may have other uses. Second, we know licensees are using machine learning for some tasks, like the corrective action program mentioned earlier. So what we learn from this process may help NRC staff communicate more intelligently with industry on the use of this type of technology for regulatory applications. That concludes my presentation. Thanks.

Thanks, James. I had a question for you about this. I know it's at the early stages and you're working with a contractor on this. You mentioned you planned on using the inspection reports and inspection data to test this out. Are you thinking, maybe as a follow-on project, that there's more data out there that we traditionally might not use to prioritize inspections? Of course it could all depend on accessibility of data, maybe maintenance data or operations data, that could help refine it further. What are your thoughts on that? Yes, the initial thought was to use the inspection reports, and when I talked to the contractor, as mentioned, more data is better. After that we could provide the licensee event reports, which contain a lot of information.
A broader thought I have is that once we demonstrate this algorithm works, we might want to talk to INPO about their consolidated event report database, and yesterday I was talking with a gentleman from Germany's GRS about IAEA's database, which would provide a broader data set for trying this algorithm. Thank you.

Thanks, James. Next: I forgot to mention at the beginning, remember, if you have questions, use the QR code associated with our session, and for those of you here, please submit them. We have four presenters, so we might not have a lot of time for questions, but please do submit them, and the presenters can get back to you later. The next presenter is going to be virtual, so we'll run the slides from here, I think; I'm not quite sure how it will work, but we'll give it a shot. He's with the NRC, and he is a meteorologist with over 14 years of experience here. He's a member of the external hazards branch in the Office of Nuclear Reactor Regulation, where he conducts safety- and environment-related climatology and meteorological monitoring reviews. This includes accident and routine release atmospheric dispersion evaluations for new reactor design certifications, site-specific reactor deployments, and license amendment requests. Before joining the NRC, Mike worked for nearly 30 years for two engineering, procurement, and construction firms, supporting both domestic and international facilities for manufacturing and for nuclear and fossil plant generation. Mike began his career participating in field research for the state of Maryland, the U.S. EPA, and private industry, dealing with the characterization of actual plume dispersion in the environment. So with that, I'll turn it over to you, Mike.

Thanks. Thank you, Ray. I echo Ray's welcome to the folks attending the RIC in person and to those attending remotely like me. I'm the empty chair at the platform.
Your hours, depending on where you are, may be all over the place. The title of this future-focused research project is a bit different from most of the other conference topics. It's regulatory in nature but falls outside specific engineering-related details; what matters more is where some advanced reactor designs might be deployed: where cold conditions prevail much of the time. If you could flip to the next slide, please. Okay, I'm not seeing the slides on my screen. Yeah, it went to the acronym slide; we're on page 2. I call this my alphabet soup slide. These acronyms are going to appear on other slides, and some of you will recognize agency names and organizations: EPA, NASA, NOAA, NRC of course, ANS, and NEI. Others may be more familiar to dispersion modelers and health physicists, the chi-over-Qs (χ/Q). I use the acronym MET here because it's a lot easier to say one syllable than seven for the term meteorological, so I've shortened that. I do want to mention seven acronyms in particular that are NRC licensing documents: ESP, COL, CP, OL, DC, SDA, and ML. Alphabet soup, but some relevant differences are pointed out along the way. The context for this FFR project focuses primarily on the US. Our friends in Canada, the Nordic countries, and elsewhere, who may have different permits, licenses, and procedures but may also experience extreme and persistent cold conditions, should recognize that some of these topics may also apply to them. Back a little bit to my bio: as Ray said, I've been in the business for about 45 years. I started my career doing field research, chasing power plant plumes on the ground and in the air, along with measuring MET conditions. Some of that went into larger EPA and state of Maryland research programs and provided input to dispersion models, and there's a connection to a photo on one of the last slides in my presentation that I'll talk about later. And as Ray also said, I moved on to engineering, procurement, and construction firms.
I worked on a lot of things, including nuclear waste siting studies, and supported guidance development and licensing along the way. But most importantly, in my opinion, I supported on the industry side about a third of the new reactor applications in the nuclear renaissance of the 2000s. I've been with the NRC now, as Ray said, for about 15 years, and the big benefit for me as a regulator, as I see it, is perspective, having been on both sides of the industry. Slide three, please. Let's move on. This slide shows the primary aims of this project. The fourth item should have been listed first, and that's my fault. We need to better understand the limits of current atmospheric dispersion models, modeling approaches, and issues related to model inputs, especially MET data, so that NRC ends up with comprehensive, discipline-specific guidance for applicants and staff to use; that guidance can then be used by all for better planning and, in the process, hopefully minimize the need for custom analyses by applicants and custom reviews by the NRC staff. On to slide four. This slide shows the types of advanced reactor designs the NRC is seeing. It may not be a complete list, but its purpose isn't to identify specific designers and names; rather, the list gives a reasonable idea of some categories possibly relevant to this project. I'd like to point out phrases in the middle three bullets that on their face may be important in extreme cold locations: liquid metal cooled, high temperature gas cooled, and molten salt. To me, that's hot. These phrases describe internal design characteristics, but dispersion modeling, on the other hand, looks at accident and routine operational releases to the atmosphere. So to me, as a meteorologist and dispersion modeler, knowing more about the characteristics of a release, if any, is important.
Understandably, the initial engineering focus appears to be on the design and whether there could be a release of radioactive material, and less so on the characteristics of a release to the outdoor environment. In my mind, then, that represents a potential modeling gap to be looked at and, if needed, filled. The last bulleted item, as noted, applies to stationary microreactors, not mobile ones like Project Pele. The latter type, though, may have some similar concerns depending on where that type of design might be deployed, but I'm not looking at that here. Slide five, please. This slide is a result of that industry and regulatory experience. I see the process for deciding whether dispersion modeling and MET monitoring are needed for a particular design and deployment as following one of three pathways, and I'm noting that these are not official NRC positions. In the left and center columns, no accident or routine radiological releases are expected, or they would be minimal. Those determinations are first made by an applicant's dose analysts. The NRC's role as a regulator is to evaluate those claims technically and against the regulations, and if they are found to be acceptable, then modeling and MET monitoring don't appear to be needed. But note the last bullet in each of those two columns: consent-based siting is moving forward right now for nuclear waste sites, and it may come to pass that that approach might also be used for siting and operating advanced reactors. I don't know, but if so, that could result in a dispersion modeling and MET monitoring loop coming into play that triggers the potential issues I'll talk about. The rightmost column is somewhat easier: either accident or routine release emissions are not minimal, or the NRC's evaluation doesn't agree with an applicant's claim, so dispersion modeling and MET monitoring are needed. Slide six, please.
Here I followed a who, what, when, where, and why approach in putting together the next three slides. Slide six covers the first three of these questions. For who, remember the seven types of permits or licenses under NRC's regulations from my alphabet soup slide. They basically fall into two groups. The first group is site specific and includes early site permits and combined licenses, which are regulated under 10 CFR Part 52, and construction permits and operating licenses under Part 50. The second group is more design and manufacturing related: design certifications, standard design approvals, and manufacturing licenses under Part 52. There are some nuances to be aware of under each group. For the what, we'll get into more details as we go along, but here it's necessary to see how things fit together. MET monitoring is a key upstream activity that provides input, but not the only input, to dispersion modeling. The modeling results are in turn an input to dose calculations. MET monitoring is no trivial matter, as you can see from the first sub-bullet, and here's one of the nuances I mentioned: because DC, SDA, and ML applications are probably not site specific, a MET monitoring program would usually not be required, but for DC, SDA, and ML applications it's also important to note that MET data representative of the locations where a given design might be deployed should be considered. The other three sub-bullets under what tell a bit more about how the dispersion modeling results are used, and I'll leave those for you to scan. The last major bullet, when, is worth paying close attention to because it has time implications by application type and specifies the regulatory guidance that's to be followed. A minimum of two years of representative MET monitoring data is needed for ESP, COL, and OL applications, while a minimum of one year is needed for a construction permit (CP) application. The ANS and NEI already recognize MET monitoring as a long-lead-time item for project planning.
Move on to slide seven, please. On to where. The key driver behind this project, as hopefully you'll see, is Alaska, often called the last frontier. And my apologies on this map: I'm an old ice hockey goaltender, and the graphic doesn't show our friends to the east, the Canadian territories and provinces. So, sorry, eh. Nevertheless, fast facts: you can see Alaska's stretch in latitude and longitude; there are more than 30,000 miles of coastline, which has its own dispersion modeling approaches; and there are numerous mountain ranges, used in part in NOAA's defining of 13 climate divisions. The Western Regional Climate Center further reduced these to five zones, called maritime, maritime continental, transitional, continental, and Arctic; you can probably guess where they are from the map. And temperature and precipitation, both rain and snow, follow patterns that vary in range and amount depending on where you happen to be in the state. To varying degrees, all of these characteristics can affect how any dispersion modeling and MET monitoring, if necessary, should be done. Next slide, slide eight, please. So the who, what, when, and where lead into the eight bullets that explain the why. The existing US fleet of nuclear reactors is in the lower 48 states, and the dispersion models and regulatory guidance were largely developed around those deployments. Some of the dispersion models are old, but that's not necessarily a bad thing. They were conservatively designed based on NRC's mission to protect people and the environment. And to their credit, the models and guidance were also forward looking, with some caveats on their limitations and alternatives that might have to be considered. The current modeling guidance, most NRC guidance in fact, is flexible. It allows an applicant to use different approaches as long as they are adequately justified. EPA and international labs have different models and MET monitoring approaches that are worth considering.
So in practical terms, for project planning purposes, alternate models and approaches likely mean custom analyses by an applicant and custom reviews by the NRC staff, and that's why meeting with us early on is very important. The last three bullets give some examples of what I mean: different dispersion conditions like persistence, variation by location and season, and logistical issues in adequately monitoring MET in what could be a very harsh environment. Slide nine, please. Let's turn the page and take an even deeper look. A key point to remember here is that we may not even know what we don't know. Nevertheless, I've listed eight items, and we only have time here to dig into the first four. Two slides that appear later are related to the first bullet and the characteristics of solar radiation in Alaska. This is followed by a set of four slides that look deeper into how calm winds vary by time of day and by season. And that set is followed by a great photo that illustrates the third bulleted item. The first three bullets lead into the fourth, as they directly relate to dispersion modeling and MET monitoring approaches that may be needed depending on a potential site's location within Alaska. That's not to say that the other four bullets are less important; they're just not discussed in detail here. Quickly, though: bullet five may relate to potential sites along Alaska's vast coastline. Bullets five and six may both relate to deposition of radioactive material, apparently a concern when NRC supported the Fukushima accident response. Occasionally, when conditions were right there, we saw TIBLs, thermal internal boundary layers, which formed even in Japan's cold season. Bullet seven is concerned with the effects of temperature and precipitation variation across the state. And the last bullet deals with the differing geography across Alaska; remember the five mountain ranges, but it's not just mountains.
It could include open and often frozen bodies of water along the state's coastline and its rivers. Next slide, slide ten, please. At this point, a very short primer on atmospheric dispersion is appropriate, I think. Basically, dispersion has two parts: transport and diffusion. Transport tells you where a release goes and how fast it gets there, or not. The MET and engineering conditions during an accident or routine operational release that are important to dispersion are identified there. Diffusion accounts for the volume of air that a release mixes into and is diluted by, again, or not. Diffusion, in turn, is characterized in a model by thermal and mechanical turbulence. Next slide, please, slide eleven. This deals with slides eleven and twelve. These two slides were prepared using data and an app developed by NASA Goddard. They list monthly values for the duration of sunshine and the average intensity of solar radiation for four locations that span Alaska, from what was once called Barrow, in the north in the Arctic, to Annette in the south, along the state's southeast coast. Slide eleven shows that durations increase from north to south, due to the Earth's curvature. January to March and October to December are transition periods. Note that there's zero sunlight at Barrow in January and December. In contrast, the sun is up at Barrow for 24 hours in June and July, leading to the nickname land of the midnight sun. Durations reach their peak at all locations from June to August. Slide twelve, please. Average solar intensity by month is shown here for the same four locations. The units of measure are the same as used in one alternative approach for characterizing turbulence. What's important to note here from a modeling standpoint is that, if that approach is used, its built-in defaults sit at the most stable threshold from October to February. Stable means higher concentrations, and that's true at all locations.
Further, the hours used to define day and night may also need to be modified if an alternate model or modeling approach is used. The bottom line is that off-the-shelf models or modeling approaches shouldn't just be used without first evaluating their applicability and their limitations. One other quick point: not surprisingly, the average solar intensity reasonably parallels the duration of sunlight on the previous slide. Next slide, slide thirteen, please. The next four slides correspond to my earlier bullet on the frequency of calm winds or low wind speeds. They're based on data from the Western Regional Climate Center, the WRCC, whose territory covers Alaska; it's one of six regional climate centers across the U.S. The WRCC aggregated the wind data into nighttime and daytime periods. They define nighttime as from 1 to 7 a.m. and daytime as from hours 11 to 18. The WRCC staff looked at the available data from about 200 remote stations and screened out about half because of possible siting influences on the data. This eliminated the few stations located north of the Brooks Range, which leads to the North Slope and the Beaufort Sea, resulting in the lack of contours in the north on these maps. Now, the occurrence of low wind speeds or no wind is an important characteristic to understand, because these conditions usually translate into the slow movement of a release or even stagnation of the air. In turn, this usually results in higher pollutant concentrations or radiological doses. Again, poor dispersion is poor dispersion. Slides 13 and 14 plot the frequencies of calm winds at night during January and June, and I apologize for the small print size. Nevertheless, at night one could normally expect lighter winds. You can see higher frequencies in January, sometimes reaching more than 85% of the time in some places, with slightly lower values in June.
The areas of relatively higher frequency are located in the east-central part of the state; they decrease slightly towards the center of the state and decrease a bit more, but extend towards the southern portions of the main body of Alaska. Now, if you will, please toggle to slide 14 and spend about 10 seconds there so that the viewers can see the differences. Slide 15 please. Slides 15 and 16 plot the frequencies of calm winds in January and June, this time during the daytime as the WRCC defined it. Although the frequency is generally a bit less than the nighttime calm winds in January, notice that the areas of occurrence are about the same. In June, on the other hand, the frequencies are noticeably lower. Not surprising, though, considering that it reasonably correlates with the earlier sunshine duration and solar intensity slides. Again, if you will, please toggle to slide 16 and spend about 10 seconds there for our viewers to look at. Now, understand that all of these plots are for nighttime and daytime as defined by the WRCC, but low wind speeds and calm conditions may also occur during the adjacent hours. Next slide, slide 17, please. Slide 17 is my favorite of the bunch. The photo at right is from a power plant; a tip of the hat to Ralph Turcott, the photographer, for his excellent vision. In this image, the plumes from two stacks of different heights are traveling in two different directions. The plumes also rise at different rates. Experience shows that this is likely due to different release temperatures, exit velocities, and met conditions that change with height. The effects are not unique to this location. You may recall from my bio that I started my career doing field observations. I saw the same phenomenon from stacks of different heights at a power plant about 20 miles up the road from NRC's headquarters in Maryland. My role then was to help launch and track weather balloons during the night into the morning and to measure how met conditions changed with height.
Similar to the photo in slide 17, the plumes from each stack generally traveled in different directions at night. But what the image in slide 17 doesn't show, and what we were able to observe in the field, is that as the ground surface and the air above it heated the following morning, the two plume directions merged as wind conditions mixed and became more homogeneous in the vertical. A neat thing to watch, or I'm easily entertained, or both. So what does this mean in Alaska? Well, you've seen the sunshine and solar radiation slides and the plots of frequencies of calm winds; those same met conditions and that same phenomenon wouldn't be unexpected in Alaska either, although the temperatures should be much colder and the differences with height may be more intense. And there's a direct analogy that's long been observed and studied in and around Fairbanks, Alaska, in that high particulate concentrations occur under similar conditions because of what is known as plume trapping. Emissions simply accumulate, and again, poor dispersion is poor dispersion. Of course, the conditions and effects will vary by location, but what this points to, when dispersion modeling is called for, is the need for an appropriate model and modeling approach to be selected, representative met conditions to be acquired, and engineering info on the release characteristics for accident and routine releases to be defined, in order to tell, and this is important, where an impact will occur and what the resulting doses are expected to be. The three major bullets on this slide summarize what each represents. Heights decrease as you go down the list. The dispersion-related characteristics of releases and where a design might be deployed are key in Alaska. This info is needed to determine the heights, which measurement data are needed, and how the data are to be acquired. Next slide please, slide 18.
The last detailed slide shows a number of takeaways that I consider to be important in getting to where we want to go when considering dispersion modeling and met monitoring in extreme and persistent cold locations, and because of time I'll leave them for your review here or on the slides that should be available to you. Please hold on this slide for about 15 seconds to give our readers time to review it, and then switch to the final slide. On the last slide is my contact information, but I want to thank the research division for the opportunity to present this important info to those in attendance or listening in remotely today, and the folks who have worked very hard putting on this year's RIC. With that, I'll turn it back to Ray for our next presentation. Thanks. Thank you, Mike, and I appreciate your presentation. I think we're going to want to move on to our next presentation. I want to give the presenters plenty of time to talk, and then we'll come back to questions, Mike, if we have time. Thanks again for being with us today, and I appreciate the work you're doing on the Future Focused Research Program. Next we have Mr. Al Tarrif. He's our Senior Security Specialist in the Office of Nuclear Security and Incident Response. He's been with the NRC for 22 years, and his background includes eight years with the U.S. Army Chemical Research, Development and Engineering Center. He's responsible for several patents and invention registrations related to gas mask improvements. After joining the Department of Energy, Al spent two years with the Office of Environmental Management developing radiography and waste process monitors. He next spent two years with the DOE's Office of Security Affairs managing state-of-the-art programs in material control and accounting. Mr. Tarrif completed his DOE career by working four years at DOE's Rocky Flats field office as the site security systems engineer responsible for overseeing all physical protection devices. Mr.
Tarrif holds a BS in chemical engineering and an MS in technology management, both from the University of Maryland. Al, we're looking forward to your presentation, so back to you. Good morning. Today I'll be talking about a new future-focused research project titled Integration of Safety, Security, and Safeguards During Design and Operations. This is the visualization of the concept of the integration of safety, security, and safeguards. We may find both synergistic and conflicting elements in this research of considering safety, security, and safeguards all in parallel. The components of safety, security, and safeguards are essential for day-to-day operations at a nuclear facility. When they are comprehensively addressed in parallel during design, more efficient operational conduct can be a result. In addition, addressing safety, security, and safeguards early in the design process may circumvent extensive redesign efforts in the later stages of design finalization. Motivation and drivers: integration of safety and security during the design phase can provide adequate protection in an efficient manner. Additionally, identification of the synergies between safety, security, and safeguards in the design and operations of an advanced reactor could enhance its economic viability. Integrating safety, security, and safeguards in the design may result in greater efficiencies and a reduced human-action burden. The NRC staff is working to increase recognition, by the advanced reactor community and the designers of nuclear facilities, of the value of safety, security, and safeguards integration. Some background: a 2008 Commission advanced reactor policy statement, which was published as a Federal Register notice, expressed the NRC staff's expectation that designs include considerations for safety and security requirements together in the design process, through facility design and engineered security features, with reduced reliance on human actions.
This concept of safety, security, and safeguards builds upon the 2008 Commission approach, which recommended pursuing safety and security in design, as well as safeguards attributes, all through the implementation of engineered features. The objectives: what capabilities address the assessment of the interdependencies and integration of safety, security, and safeguards? Exploration will be performed to identify what methods, analyses, or advanced modeling and simulation tools may be utilized to assess the interdependencies and integration of safety, security, and safeguards. Technical challenges: so far, integration of safety, security, and safeguards has been challenging, and material control and accounting methodologies for some advanced reactor designs are in the early development stages. Exploration will be performed to identify the key working interfaces that influence the effectiveness of each safety, security, and safeguards component. We are looking to identify the metrics for assessment of adequate integration. We're asking: can the silos of safety, security, and safeguards be circumvented or reduced? Can the correct balance of safety, security, and safeguards be achieved with minimal effort? Can other methods be identified which meet the NRC's performance-based regulations? The strategy is to identify analysis and modeling and simulation methods for the integration and assessment of safety, security, and safeguards, their interdependencies, influences, and synergies, and to build the NRC knowledge base. We want to identify regulatory issues, and to identify tools with their limitations and capabilities defined. We want to conduct internal coordination. We also want to conduct external coordination with the Department of Energy and international entities. NRC staff is coordinating with Department of Energy staff on ongoing projects being conducted under the DOE Light Water Reactor Sustainability and Advanced Reactor Safeguards program activities.
Expected outcomes and impacts: we expect key interfaces to be identified, synergies and conflicts to be identified, regulatory challenges to be identified, and tool capabilities and limitations recognized. We want to be ready for future licensing reviews. We wish to have staff competencies in modeling and simulation for safety, security, and safeguards. We expect to see broad applicability beyond fixed-site reactors. We also expect to see expanded international and inter-agency cooperation on safety, security, and safeguards. We're looking to identify key safety, security, and safeguards interfaces, synergies and conflicts, and specific regulatory challenges when integrating safety, security, and safeguards. We're also looking to find modeling and simulation tool capabilities and their limitations. We're also looking to have the NRC prepared for future advanced reactor reviews of applications that integrate safety, security, and safeguards. In addition, we have the desire to build staff expertise on modeling and simulation tools that articulate safety, security, and safeguards. We believe that safety, security, and safeguards integration may have broad application past advanced reactors, for example, to fuel fabrication facilities. And furthermore, we are looking to boost international and inter-agency cooperation on safety, security, and safeguards considering the NRC regulatory framework. In summary, we are exploring the value of safety, security, and safeguards integration in the design of nuclear facilities, including advanced reactors. We're looking to identify the modeling and simulation tools that can be utilized to examine the safety, security, and safeguards integration advantages and conflicts. And we are looking to expand the NRC staff expertise in safety, security, and safeguards process integration and assessment. And that concludes my presentation. Right, thank you, Al. Hey, I had a question as you were talking.
You mentioned that you're exploring the technical challenges and the regulatory challenges. It's kind of a loaded question, but what's more of a challenge to you? Is it ourselves and our regulations, or is it a real technical challenge? I'd say the technical challenges are identifying the interfaces and the interdependencies and finding tools that can perform those assessments accurately. And where do you see the regulatory challenges, and I know you're just starting on this, where do you see the main ones being? Regulatory challenges, maybe to regulate the interfaces, like we did for nuclear power plants, where we had the regulation for the safety and security interface. We may have to develop regulations, if necessary, for the interfaces of safety, security, and safeguards. All right, thanks, Al. I appreciate your presentation there. Next, our final presenter is Derek Halverson. Dr. Halverson is a digital instrumentation and control engineer in my organization in research; he's in the Division of Engineering. Derek's been with the NRC since 2008. His background is in computer engineering and magnetic manipulation of colloids at the micro and nano scale. At the NRC, his current focus areas are embedded digital devices, operational experience with assurance of digital systems, verification and validation of digital systems, and model-based systems engineering. He serves on our staff as a point of contact for digital instrumentation and control efforts with the Halden Human Technology Organization in Norway and the Electric Power Research Institute here in the U.S. Dr. Halverson received his bachelor's degree in computer engineering from Iowa State University and his doctorate in electrical engineering and electrophysics from Drexel University. Welcome, and it's yours. Thanks, Ray. So my future-focused research project was looking at model-based systems engineering applied to digital instrumentation and controls.
For us, this was a future-focused research project. It's new to us, but this is something we're aware of existing in other industries. And as I've started talking about this, I'm finding that it's actually already in ours; we just haven't worked with it in licensing applications at the NRC. So if you're hearing me and you're in industry, I'd appreciate finding out if you're working in this area. So what is model-based systems engineering, and who does it apply to? Well, at a high level the term is used very broadly, and it applies to all of you. Each of my colleagues today has talked about models as part of what they do. And to show the driver behind our project: this slide is not ours; it's from the Systems Engineering Research Center's presentation on a drone, something they're working on for the DOD. But this is the sort of thing we were seeing that led to the future-focused research project. They're bringing together the concepts; they're bringing together the different multi-physics models. It's not shown here, but, since I think I'm seeing some risk folks in the audience, they've got fault trees modeling risk, and they've got that connected to the cost modeling they mentioned there. Ironically, the digital instrumentation and controls were a latecomer to the digital models of everything else, but they're definitely included now as the DOD drives to have a single source of truth to model their systems before they make them. Again, the slide there is doing everything in models to demonstrate the art of the possible, but our focus for this project was on the digital instrumentation and controls portion. So this is the definition we're working from: model-based systems engineering is the formalized application of modeling to support system requirements, design, analysis, verification, and validation activities, beginning in the conceptual design phase and continuing throughout development and later life cycle phases. The definition is from NASA.
There are similar definitions used by INCOSE and by the Object Management Group. Again, we're more focused in this project on the digital I&C. So in this presentation I'm going to go through the goals and what we are seeing in modern development practices leading into the future-focused research, the actual project itself, called HARDENS, and then discussions of potential regulatory utility and some resources for you outside of the presentation. All right, so some background issues we were looking at going into the project. With digital I&C you have the risk of systematic, non-random, concurrent failures. People in DI&C will know what I'm talking about there; we've been dealing with it for a long, long time. Digital I&C brings some new issues compared to traditional I&C, just from the increased complexity and also especially from interactions, not just between digital systems but between digital systems and the physical environment, that you may not have anticipated. So there are several approaches to addressing that. Currently we rely on simplicity; that's explicitly mentioned in some of our guidance. But moving forward you can look to modern systems thinking, including things like hazard analysis and STPA, and simulation, such as model-based systems engineering and models. And also we just want to be a regulator that uses the best modern practices in determining the acceptability of systems. So this is some of the modern development practice we were seeing. This is, again, also not our project, just some generic pictures of SysML from Wikipedia. But the flavor is that we're moving from the waterfall, paper-based process that we've used in the past to electronic development environments and models. Part of that is also that we're aware there's a greater reliance on the development tools themselves versus human checks throughout the process.
So one of the issues was, in discussing this and thinking about it, there really wasn't a concrete example in our domain that we could share, or look at ourselves, and discuss. So we got some external experts to produce an actual example for us, and that was Galois with their HARDENS project, High Assurance Rigorous Digital Engineering for Nuclear Safety; they like acronyms. And the purpose is just to provide an education and a discussion point for stakeholders to look into these technologies. And something I want to emphasize is that even though this is made to resemble a reactor protection system, it's in no way an indication of endorsement by the NRC. So within HARDENS, Galois designed and built what they call a reactor trip system; you'll see that acronym, RTS, throughout the presentation. It's meant to be representative of a DI&C system in a nuclear power plant. It actually runs on an FPGA and has sensors and digital twins. And I want to talk about that just a little bit, because people talk about digital twins a lot. Oftentimes what they mean are digital twins in the operation and maintenance phase, where you have actually connected your physical process through sensors to your models and they're working together in various ways. These digital twins are what NASA would consider digital twins for prototyping. They're used in the earlier life cycle phases, but often they would later be connected, so these aren't competing approaches; they could be very synergistic. And this is a completed project. It finished in October, and it is available on GitHub. It is open source. There's also a report there. So as I go through this, I'm really just kind of teasing you about the tool chain, but the information and the actual code are all there in the repository. So for the physical hardware, they got just one channel; the rest of it's in models.
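To make the flavor of such a trip system concrete, here is a minimal, hypothetical sketch of trip logic with sensor setpoints and a bypass. This is my own illustration, not code from the HARDENS repository, and the setpoints and channel names are invented.

```python
# Hypothetical reactor-trip-style logic for illustration only.
# Not from the HARDENS repository; setpoints are invented.

TRIP_SETPOINTS = {           # illustrative engineering units
    "high_temp": 350.0,      # degrees C
    "high_pressure": 17.0,   # MPa
    "low_flow": 0.8,         # normalized coolant flow
}

def should_trip(readings: dict, bypassed: set) -> bool:
    """Actuate if any non-bypassed trip condition is satisfied."""
    trips = {
        "high_temp": readings["temp"] > TRIP_SETPOINTS["high_temp"],
        "high_pressure": readings["pressure"] > TRIP_SETPOINTS["high_pressure"],
        "low_flow": readings["flow"] < TRIP_SETPOINTS["low_flow"],
    }
    return any(fired for name, fired in trips.items() if name not in bypassed)

normal = {"temp": 300.0, "pressure": 15.0, "flow": 1.0}
hot = {"temp": 360.0, "pressure": 15.0, "flow": 1.0}

print(should_trip(normal, bypassed=set()))          # False
print(should_trip(hot, bypassed=set()))             # True
print(should_trip(hot, bypassed={"high_temp"}))     # False
```

In the actual HARDENS work, behavior like this is specified in Lando and FRET and refined into Cryptol rather than written directly; the sketch is only meant to show what "three trips plus bypass" means functionally.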
So you've got an FPGA board, and a lot of development boards come with those these days, that the models can be applied to; we'll talk about that later. A temperature sensor, a pressure sensor, solenoids. The system just has three trips, but it also has bypass, it's got a maintenance mode, and it has a self-test. And then it has a number of requirements from IEEE 603: completion of protective action, independence, and we wanted to have them prove completeness of requirements. So this is that tool chain I started to mention; this is what they consider the specification portion of it. It starts with just Markdown. That's just plain text, mostly pulled from the contract we put together. Then they start getting more precise. You have Lando; that's where they describe all the behaviors of the system and all the components within the system, and they combine those into scenarios which will eventually be used in the testing of the system. Lobot is their feature model. So sometimes you'll want to compile all of this code to something actually on the physical FPGA board, sometimes you'll want to do it as an emulation, sometimes you'll want to make a digital twin; you set that there. And then you become more formal as you work through the tool chain. So you take all of those requirements in Lando, or at least the ones related to behavior. They also included some stakeholder requirements, like "identify regulatory gaps in our report"; you can't formally model that as a behavior. For the behavioral ones there's a NASA tool, and that starts letting you perform simulations, model checks, and diagrams. I don't show it here, unfortunately; there are issues with IP in showing these tools, but it also just is not readable if I put it on the slide, so I'm afraid this is going to be more text than graphics. Although that is largely how, in their rigorous process, they operate: it is actually much less visual models and a lot more actual code or outputs, almost looking more like a compiler. So you have FRET.
And that lets you do various checks. You're already able to do some formal checks on realizability, completeness, and consistency of the requirements. And then they go to SysML v2, which is more formal than SysML had been in the past, although some of the visualization isn't there for it yet; that is coming. And that allows you to include features of the hardware and the physical processes, connect them to the I&C logic portions, and have executable behavior models. You've got Cryptol, which comes out of the crypto realm; that's why it's got that name. But this is where you can actually compile and run the programs, even at very early life cycle phases. There are a lot more formal checks in Cryptol and in ACSL C, which is C but with preconditions, postconditions, and other restrictions that let you use it with other analysis tools. And they also include some of the test benches as part of the specification. So you notice me talking a couple of times about formal methods. That was something that wasn't initially in our contract, but it is very much a part of how Galois performs their rigorous digital engineering process. So we are aware of formal methods; they've been around for a long time. But in the past they've been very laborious, something you had to perform by hand, and very limited in what they could really do. And now it turns out that there is very sophisticated tool support that automatically provides a great deal of theorem proving. If you look up Galois papers online, you'll find a number they have with graduate programs where they would sort of have students use formal methods without realizing it. You start off with your requirements, you get more specific and explicit in your requirements until you really have them sorted out, and by the time you get to that point you can just hook in these formal methods tools, without the students really realizing they were doing formal methods in the first place.
And it's that automatic connection to requirements that is leverageable by Galois for producing correct code, but it may also be very useful for a regulator in assuring that the code is correct. So every FRET requirement is refined into Cryptol. Again, I already talked about some of the satisfiability tests; you can also do dynamic testing. Frama-C is a suite of analysis tools for C; I'll talk more about that later. There are also other formally verified pieces out there. So when they were doing this project, they needed a system on a chip to run on that FPGA, and they were able to get one that was itself formally verified and just out there in the open-source world. So this slide is on the implementation of the RTS, and a key thing I want to emphasize here, since I'm limited on time, is that as you go from left to right, you're starting off from Cryptol and automatically compiling to C or SystemVerilog, and you'll also see some handwritten code in there. And that is done in parallel. So they emphasize that as part of a rigorous process to make safe software, you also want to include multiple tool chains and be able to check them against each other. So you can compile using, for portions of the system, handwritten C or handwritten SystemVerilog, or use the types composed from Cryptol. And that also gives you some rigorously defined oracles for testing later. And these can be combined down into emulation or onto the board. So these are some assurance artifacts: code, and again, you can compile from the Cryptol or the ACSL C into tests. There are some that are also handwritten. But an advantage of the ones that are automatically generated is that it gets you around some of the issues with regression testing. If you make a change, it can automatically propagate down into your test scenarios, and you may catch things because, again, you're doing different decompositions as you go through.
Verification and validation: again, you have the executable behavior models, so very early you have things you can actually run and test based on the logic, simulate the hardware, the digital twins I was talking about. I want to talk a little bit about what they call compositional assurance, because they have a claim that implicit traceability is superior to explicit, and usually we consider it to be the other way around. We're very big on requirements traceability matrices allowing you to go from high-level requirements all the way down into the code. Their point is that creating those matrices is a human process outside of the actual process of the code development, and if you're making changes, you've got to make sure you track those properly. The implicit traceability is the traceability through the actual development process, compiling one thing into another, refining one thing into another, which I'll talk about later. And the claim is that since that's got these formal proofs on it and automation, it's a superior form of evidence. And of course, formal static verification gives various guarantees about the absence of runtime errors from things like out-of-bounds arrays, null pointers, that kind of thing. I mentioned it a couple of times, so I'll be able to go through it quickly here, but you have the idea of equivalence. They have this SAW tool I mentioned earlier that will prove that some handwritten C is equivalent to a Cryptol model or the Cryptol-developed C. You also have refinements, which formally relate something earlier in the life cycle and the tool chain to something coming later. Other forms of evaluation: we were looking at this system as a regulator, so the models are able to give us various pieces of evidence to work with. You have the dynamic modeling; they have their ways of defining what a rigorous test set is, again being able to leverage these digital twins. I'll come back to correct-by-construction a little later.
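The equivalence idea can be illustrated with a plain testing analogue: treat a specification function as an oracle and check a handwritten implementation against it over the whole (small) input domain. This is only a conceptual sketch in Python with an invented example function; the actual HARDENS work uses Cryptol and the SAW tool to prove equivalence symbolically rather than by exhaustive testing.

```python
# Illustrative oracle-style equivalence check, not the formal proof
# that a tool like SAW performs. Spec and implementation are invented.

def spec_saturating_add(a: int, b: int) -> int:
    """Specification: 8-bit saturating addition (the oracle)."""
    return min(a + b, 255)

def impl_saturating_add(a: int, b: int) -> int:
    """Handwritten implementation to be checked against the spec."""
    s = (a + b) & 0x1FF          # keep 9 bits so the carry is visible
    return 255 if s > 255 else s

# Exhaustively compare over the full 8-bit input domain. A formal
# tool proves this symbolically; here we simply brute-force it.
equivalent = all(
    spec_saturating_add(a, b) == impl_saturating_add(a, b)
    for a in range(256) for b in range(256)
)
print(equivalent)  # True
```

The appeal of the formal route is exactly that it scales where brute force cannot: for 32- or 64-bit inputs the input domain is no longer enumerable, but a symbolic proof still covers it completely.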
And then, because it's a system on a chip on an FPGA, much of the hardware can also be formally verified. And the digital threads refer to this ability to walk through the tool chain with this implicit traceability. So I want to talk about potential regulatory utility, but again, a caveat: this is a future-focused research project. It's not intended to be a technical basis, much less guidance. And this is only a discussion we're starting, even within the NRC. So take these with a grain of salt, but these are the discussions I want to have moving forward. So some of what we're talking about today can be used for verification and validation. IEEE 1012-2016, which is not yet endorsed by the NRC, explicitly mentions formal methods and structured analysis methods. We may be able to meet our existing regulations and guidance without changing them at all, just by using models, although we would need some guidance to connect the two. The issue that I mentioned earlier of systematic faults: this is a way to catch them or confirm their absence, given that you have something else that gives you correct requirements to start with. And it may allow us to reach a safety conclusion at an earlier point in the life cycle, which is desired by industry. So again, this is already a completed project. You'll find the tools online. There's a report online. These slides should be provided as part of the RIC, so you'll be able to find them. There are further public resources that will help you actually get into the details to understand what I've been talking about. These are quite good; I encourage you to go check them out. And again, for us this is a future-focused research project, but I think for a lot of folks out there, it's something they're already working with, or there may be ways that your models could work with our models. That might be something else, outside the scope of this small project, that we look at going forward.
So I hope you'll reach out and contact me. Thank you. Yeah, thanks, Derek. Thanks, everybody. We have a few minutes for questions, and the first one I'm going to start with is kind of centered on Derek and James, having been in research for a while, so I'll let you start with the answer. There's a question with respect to the research in our office in general: with respect to NRC research, is it standard to have that research published in peer-reviewed literature prior to incorporation into NRC guidance, that sort of thing? So from your experience, how does that work? In the research that we do or we sponsor, how does that get peer reviewed and published before it goes into, let's say, a NUREG or a reg guide? I think, from my limited perspective, oftentimes it is; I'll be doing a paper on this at the NPIC conference. It does get vetted in various ways. And then, of course, for some of it that's a technical basis, that may or may not receive a public comment period, but certainly anything being turned into guidance will have public involvement. We'll be interested in this one in particular, we'll see as we move forward, in maybe having a public meeting, so you'll have those as well. Also, I'll give a shout-out, since I am the point of contact for Halden: they'll occasionally have workshops too, if you're part of the Halden Human Technology Organization. So there are various ways that information is discussed out there. Yes, for products that will be used in the NRC regulatory arena, it's certain they will go through a series of review processes. I'll use the example of the human reliability analysis method IDHEAS that we developed. It's not only to go through the internal review, but we also need to have a field check that makes sure that the targeted users, our senior risk analysts, find that the method fits the way they work.
And then after that, as we go through this process, we also work with industry, making the effort to get their inputs, and then put it into the public domain through a Federal Register notice to seek public comment. We even have an NRC public meeting for the public to comment on the method. And once through this process, we address the comments, and now the managers want to use this method to replace the existing method for the NRC's future HRA applications. That's the series of reviews that we went through to have a product for regulatory use. Thanks, James. The next question, Mike, I'll put to you. It's sort of a general question, but you can apply it to the dispersion models you work with. The question is: does the U.S. NRC develop models independently to verify applicant models and their outputs when the applicant's model is unique or heavily customized? Thank you for the question. The NRC basically does confirmatory analyses with its own models. I guess we would look at whether those results are equivalent. If our models do not have that capability, then we would have to get into the guts of the alternative approach. That may be another agency's model, it may be one from the national labs, or it may be one developed by an applicant themselves, but the greater the differences of the proposed model from the NRC models, the more review time, I think, for the NRC. The models, as I mentioned during the presentation, that we use for design basis analyses are fairly old. Again, that doesn't make them bad, but we would have to look at the differences. I hope that answered the question. Thanks, Mike. We have one for you, Al, on what means you have for representing the integration of 3S to make it easily comprehensible and accessible. Looking at the project, that's something that we're looking to identify. We want to see what the metrics for adequate integration are and find the conflicts and synergies. This is a new project.
That's something we want to find out going forward, and it looks like we'll have results toward that objective in a year or two. Right now, that's what the project is set up to do: find those interdependencies, the synergies and the conflicts, and assess when integration is better than keeping these areas siloed.

Thanks, Al. Derek, one for you on your presentation: do you have any trust considerations in applying a method like HARDENS, and if yes, where?

I'm not sure exactly what the question is getting at. HARDENS is something that's open and meant for discussion; it's not meant to be actually loaded into a plant, so I'm not worried about cyber concerns for this project. Obviously, cyber concerns come in if you're talking about something that would actually run. As for trust, in the sense of whether we trust the tools, that's going to be a big part of future work. Currently, you can rely a lot on either the simplicity of the system or human checks; we've got a variety of ways to develop a certain level of trust. As you lean more on tools, we'll need ways to certify or otherwise build up trust in the tool chain itself, in a reasonably timely way, in order to trust its outputs.

You mentioned in your presentation that some of the human checks maybe get lessened, which perhaps ties into the trust question. From what you're seeing so far in this type of methodology, where are the human checks? Where do you see those as still being vital?

Well, we talked about the implicit traceability; I think we'll still be doing thread audits. They'll just look a little bit different, but we'll still be checking requirements end to end. Much of the review I expect will stay the same: we'll still be checking the requirements, and we'll still be checking overall system properties.
It's just a question of how much we can lean on the formal methods to really believe that, if we trust the initial requirements, everything downstream is correct, and how much we can trust the simulations and digital twins for prototypes at early life-cycle phases, which may allow us to say at that point that we have very high confidence in the system.

Okay, thanks, Derek. I think we have time for one last question, and it's for you, James. Going back to your presentation: why not use simple, well-established statistical analysis principles to analyze the data? That approach would seem much more applicable for the limited data we have on inspections.

Yes, we could certainly use existing statistical methods; that would be easier if we had a clear goal. But this project has a broader, longer-term purpose: understanding machine learning and this technique, about which there is a lot we don't understand. Looking long term, what is the potential of this technology from the NRC workforce side? And also, how will industry use this technology, and how will the NRC regulate it down the road? So this goes beyond the specific task of analyzing inspection reports to prioritize inspection activities; it's about the broader perspective on machine learning technology.

Right, thank you, James. I think our time is just about up, so I'd like to thank all the speakers for participating in the panel and in our Future Focused Research Program. We're certainly looking for input on how to make it better, and if you go to our website, we're going to try to make things publicly available. We're still developing the program, getting projects finalized, and getting those reports done, but my intention as research director is to make everything we do, to the extent possible, publicly available. It's the best way to make the program better; we need that feedback.
That's what research is about. So again, I'd like to thank the presenters, and thank the audience for attending today. And with that, I'll close the session. Thanks, everybody.