Welcome. My name is Terri LaLaine. I'm the Deputy Director of the Division of Systems Analysis in the Office of Nuclear Regulatory Research here at the NRC, and your session chair for "Am I a Robot? How Artificial Intelligence and Machine Learning Are Impacting the NRC and Nuclear Industry." Next slide, please. AI is one of the fastest-growing technologies globally and is the next frontier of technological adoption in many industries, including the nuclear industry. As a modern, risk-informed regulator, we must keep pace with technological innovation while ensuring the safe and secure use of AI in nuclear facilities. Our expert panel today brings a range of AI perspectives and experience, from the domestic and international nuclear industry to our federal partners and their approaches to similar questions, along with the NRC's current activities. Following the briefings, we'll have an open discussion and an audience question-and-answer period, so please be sure to submit your questions throughout the session. Next slide, please. It's my pleasure to introduce our panel today. Welcome to Mr. Gene Kelly, Senior Manager at Constellation Generation. Mr. Kelly has over 40 years of experience in the nuclear industry, including design, analysis, and licensing. He is a Senior Manager in Risk Management for Constellation Generation, responsible for risk-informed initiatives across the Constellation fleet. He was also the technical lead responsible for the license renewal of the Limerick Nuclear Station, managed engineering programs and designs at Limerick, and previously worked at the NRC as a branch chief and a senior resident inspector. Mr. Kelly holds a bachelor's degree in physics from Villanova and a master's degree in mechanical engineering from the University of Pennsylvania. Welcome to Ms. Aline des Cloizeaux, recently appointed Director of the Division of Nuclear Power in the IAEA's Department of Nuclear Energy. Ms. des Cloizeaux has extensive experience as a program director of several new-build projects. She managed large investment projects for conversion and enrichment facilities, the Flamanville 3 EPR, and a portfolio of nuclear civil and equipment activities, including SMR development. She is also engaged in gender balance and diversity actions, notably as president of WiN (Women in Nuclear) France, and is an active member of WiN Global. Ms. des Cloizeaux holds a master's degree in science and engineering technology from the École Polytechnique, a master's degree in civil engineering technology from the École des Ponts et Chaussées, and an MBA from the Collège des Ingénieurs. Welcome, Mr. Ben Schumeg. Mr. Schumeg is the software quality lead in the Quality Engineering and System Assurance Directorate of the U.S. Army Futures Command DEVCOM Armament Center in the U.S. Department of the Army. He leads research in test and evaluation and verification and validation capabilities for artificial intelligence, machine learning, automation, and other technologies, and assists the Quality Engineering and System Assurance Directorate in developing policies and procedures to be used by the Armament Center. He currently leads the Army AI software safety subgroup, focused on the test and evaluation and verification and validation of AI systems and data. Mr. Schumeg also spent a year with the Safety and Mission Assurance Office at NASA's Johnson Space Center, assisting in software quality assurance for commercial visiting vehicles to the International Space Station.
He holds a bachelor's degree in computer engineering from the Pennsylvania State University and a master's degree in computer engineering from the Stevens Institute of Technology. And welcome to Mr. Luis Betancourt, chief of the Accident Analysis Branch in the U.S. Nuclear Regulatory Commission's Office of Nuclear Regulatory Research. Mr. Betancourt leads highly skilled data scientists in developing the NRC's artificial intelligence (AI) strategic plan to enable the safe and secure use of AI in nuclear facilities and accelerate AI utilization across the NRC. Mr. Betancourt joined the NRC in 2008 as a digital instrumentation and controls engineer in research. Since that time, he has held several positions, including technical assistant, acting chief of the Instrumentation, Controls and Electronics Engineering Branch, instrumentation and controls engineer, and new reactor project manager. Throughout his career, he has been a key proponent of science, technology, engineering, and mathematics education, and continues to volunteer and represent the agency at multiple annual youth outreach events in the Washington, D.C. area. Before joining the NRC, he worked as a controls engineer for GE Aviation and a new product engineer at Stryker Endoscopy. Mr. Betancourt has a BS in electrical engineering from the University of Puerto Rico and a professional certificate in public sector leadership from Cornell University. He is a senior member of the Institute of Electrical and Electronics Engineers and a registered professional engineer in the state of Maryland. With that, I welcome all of our presenters, and I will start our briefings with Mr. Gene Kelly's presentation, "Stay in Your Lane, Dude." Thank you, Terri. Good afternoon, everyone. I'm very honored to be on this panel with an excellent group of panelists and experts in this area. What I'm hoping to share with you today, as we put the slides up, are some of the lessons learned that we've garnered at Constellation Energy as we've deployed some of these new technologies in artificial intelligence. Next slide, please. You're probably wondering why I chose this picture. It turns out I was watching one of my favorite movies, The Big Lebowski, with Jeff Bridges, John Goodman, and Steve Buscemi, and I happened to be talking to one of our project experts and leads. He had been driving home in his new car, and it was a very difficult trip up I-95; it was raining heavily and he could hardly see. He said the technology in the car enabled him to stay in his lane even though he could barely see the road. And it occurred to me, in the theme of this conference, that there is sometimes a concern that we will go to full autonomy with artificial intelligence and machine learning. But the reality is, when you look at automotive applications, there are various levels of autonomy, and we're far from a totally autonomous vehicle. The applications we've developed thus far at Constellation are really intended to keep the users fully engaged and, in essence, keep them in their lane so they can focus on what's important. We're going to walk you through some examples in the subsequent slides. So that's the reason for the humor and The Big Lebowski. Next slide, please. Now, this slide is pretty interesting in its sequences, so I'm going to ask you to advance it a little bit.
But we started out this way, with the initial ideas of: here's what we're going to do; we're going to automate certain aspects of our corrective action process and our work control process. And then we sat down and engaged the end users. That's really our first, and maybe most important, lesson: you really find out what problem you need to solve when you sit down and engage the end users. There's just no substitute for doing this due diligence. It takes some time and effort, but it's worth its weight in gold because it tells you the problem you really need to solve. So if you hit the next button, what you'll see is that once we sat down with them, we found out there were other things they wanted to add. That's when we started to understand what we could really do for them to reduce the effort and help them do their jobs every day. If you hit the button again, you'll see this slide fill in as we started to learn more, on the left-hand side, about what we were going to do with our corrective action screening and prioritization. And if you hit the button again, on the right: we sat down with work week managers and what we call cycle managers, and you can see that we eventually filled in the blanks with all the things we want to do. We ended up designing 11 different algorithms and models. But this is worth its weight in gold, because this is where we honed in on where the savings are going to be. Next slide, please. Many times people ask: why CAP data, corrective action process data? First of all, it's a big data source. In the nuclear industry, we generate a large number of condition reports every year, on the order of 5,000 to 6,000 per site. It's also an important cornerstone of the NRC's Reactor Oversight Process. The way I would put it is that just about everything important that happens at a plant is reflected in that CAP data. You can see from the statistics that we have a scheme for significance, severity, and type. Thankfully, very few highly significant things happen that require extensive investigations; the vast majority of the data, almost 99 percent of it, is of low-level significance. The message on this slide is that our algorithms, and what we're doing to automate aspects of the process, will allow us to focus on the really important conditions, which is where our focus should be. Next slide, please. I bring this up because this is an application we've already had in place, and it has been very successful. We've had it in place for two years now at Constellation. It's used for our maintenance rule process, and we've been able to automatically identify potential maintenance rule functional failures. The users have provided excellent feedback. And I think it's worth pointing out, in that second bullet, that the software really isn't making the failure determination; all it's doing is flagging those condition reports that are worthy of human review. So the message here is that the end user is still fully engaged.
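To make the screening pattern Gene describes concrete, here is a minimal illustrative sketch of a text classifier that scores condition reports and flags only those worth human review. This is not Constellation's actual tool (Gene later attributes that to Jensen Hughes); the training examples, threshold value, and function names are invented for illustration. Note the deliberately low review threshold, which mirrors the bias toward zero missed functional failures he describes next.

```python
# Hedged sketch only: NOT the fielded Constellation/Jensen Hughes tool.
# Illustrates flagging condition reports (CRs) with a confidence value,
# leaving the actual failure determination to a human reviewer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled history: CR text -> potential maintenance rule
# functional failure (1) vs. low-level condition (0).
train_texts = [
    "pump 1A tripped on overcurrent during surveillance",
    "housekeeping: oil rag found near ladder",
    "diesel generator failed to reach rated speed on start",
    "label on breaker cabinet faded",
]
train_labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Deliberately low threshold: bias toward flagging so misses are near zero,
# at the cost of more items for the human to screen.
REVIEW_THRESHOLD = 0.30

def screen(cr_text: str) -> dict:
    """Score one condition report; the human still makes the call."""
    p = float(model.predict_proba([cr_text])[0][1])
    return {"text": cr_text, "confidence": round(p, 2),
            "flag_for_review": p >= REVIEW_THRESHOLD}

print(screen("auxiliary feedwater pump failed to start on demand"))
```

The design choice worth noticing is that the model's output is a confidence score surfaced to the end user, not a verdict; lowering the threshold trades human screening effort for a lower miss rate.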
And even more so, they're fully backstopped, because our system engineers and strategy engineers still monitor the day-to-day traffic in that system for their systems and the components in those systems. So this is fully backstopped such that you're not just relying totally on software. We've gained confidence with this over two years through continuous feedback from the users. And lastly, I would point out that we've biased the software toward high-safety-significant component failures so that we have very few, if any, misses. In fact, our miss rate has been zero for two years. So we think this has been very successful, and the key is that we've now built subsequent applications based on this first successful one. Next slide, please. This slide probably bears some close looking. If I were to pick the one slide that is most important in the whole presentation, this is it, because this is the graphical user interface; this is what the end user sees as a result of the algorithm we built. And it's really awesome. I don't have time to explain all the details, but it shows the confidence values and why certain condition reports were flagged. It has textual comments to provide context on how the decisions were reached. It shows what are called the word grams, which reflect how the artificial neural networks are built. And finally, you have to revisit this; you can't just walk away after you build it, because you may have procedure or rule changes in your process, and your plant performance data may change. So it's really important that humans continue to validate the model's predictions. And again, the time with the end users is very well spent in developing that graphical user interface. Next slide, please. Just a few words about the business case; everybody involved in these innovations knows you have to make the business case. I would point out that our industry has many processes, so there are lots of opportunities to apply these technologies. We see that we can improve data quality, improve our organizational decision making, and increase employee bandwidth. I think one of the commissioners talked about that this morning. Particularly for us as a new company that has just split and is getting into new areas, you want to be able to deploy your resources and your people where the new priorities and work are, and this is really going to give us the opportunity to do that. Probably one of the most important bullets here is that this is an opportunity for us to eliminate low-value work. We talk about that a lot in our workplaces; it's easy to say, hard to do, and hard to let go, but this has given us a golden opportunity to eliminate low-value work. Next slide, please. And I should say, as we go to the next one, that the key message from that last slide is that it's really helping us focus on what's important. If there's any one theme throughout this whole presentation, that's the one I would continue to reemphasize: this technology is helping us focus on what's really important. We have worked and collaborated with the Department of Energy and Idaho National Laboratory.
And what we're finding, and it was a surprise to me, as I'm not a data scientist, is that there are a variety of methods and all sorts of approaches and hybrid approaches, supervised and unsupervised. What we're finding is literally what the slide says: one size doesn't fit all. I love this quote from the article; I've read a lot over the journey of the last year or so. The algorithms and techniques you pick are going to depend on the kind of data you're working with, the problem you want to solve, and what you want to get to. So the bottom line, and another lesson we've learned, is that as you get into these, you'll find there are many ways to do this; it's not just one or two approaches. An interesting lesson learned thus far. Next slide, please. So finally, where are we headed? I would point out that with each successive application, we've learned a little more and built upon it. That first one, with maintenance rule functional failures, has been pretty successful, and we're going to build on it with the next two: we're going to start the pilots for the corrective action and new work screening later this month. Then we're going to set our sights on some other processes, and like I say, there are a lot of processes you can aim this at. But one of the biggest challenges, when you read the literature, is integrating this into your systems and your processes. So we're going to continue to look at additional areas; we have a lot of good ideas on where we can apply it. But we start first with small things and work up from there. Next slide, please. I guess I'd end today by sharing a feeling that's been with me the whole time I've been involved with this, for the better part of a year or so. When I think about artificial intelligence and machine learning, it's really not a matter of if, it's only a matter of when; I think we're all going to be there. And the picture here, of course, is to say that it's probably only a matter of when we're going to be driving autonomous vehicles as well. But I really do think this technology allows us to focus on what's important, and that's just so valuable in our business for safety. The second bullet is very fascinating to me. A lot of us in our companies struggle with the challenge of knowledge retention, retaining tribal knowledge as people leave and retire and new people come in. The use of this gives you a solution in that regard, in that you can continue to make the algorithm smarter, and it retains the wisdom. So perhaps there's a solution there for all of us on how to solve knowledge retention issues in various processes. And again, there's probably an opportunity here for a very powerful industry outcome. As one of the DOE directors, Dr. Curtis Smith, said to me, and I think he aptly described AI and ML: it's the new math. With that, I think I'll stop. Thank you, Terri; that concludes my presentation. All right. Thank you, Gene. Our next panelist is Ms. Aline des Cloizeaux, with the presentation "AI for Nuclear Energy." Okay, thank you, Terri.
So, do you see my presentation? I am very honored to be part of this panel today. I am Director of the Division of Nuclear Power in the IAEA's Department of Nuclear Energy. It is really part of the agency's mission to share knowledge among all our member states about new technologies and to define the conditions necessary to enable the development of these technologies, and artificial intelligence is very much part of that task. Next slide, please. Yes. So, I will tell you where we are today, because it's quite a long journey. This slide shows you, in a broad view, what artificial intelligence is in common language: it leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind. So, where can we apply this in the nuclear industry? In several fields, as you can see on this slide. Regarding machine learning and deep learning, on the top left part of the slide, we can support predictive analysis. For example, on nuclear power plants, we can use it to improve modeling and simulation capabilities, as well as enhance the performance of digital twins by adding simulation tools to those twins. Another part is natural language processing, a branch that enables machines to understand human language. We can use it in support of classification, translation, and data extraction. For example, we can use it in the analysis of nuclear-power-specific requirements; it's a field where quality assurance can benefit, for example, by ensuring a product or service meets its specified requirements through natural language processing techniques, as in the sketch that follows. Another field is expert systems, which emulate the decision-making ability of human experts. They can be used for knowledge representation and for the generation and processing of models, particularly for diagnosis, and this can have wide application to nuclear safety. If we go to technologies like computer vision, these are also quite interesting technologies for taking meaningful information from digital images. We all have in mind the images coming from regular inspection and non-destructive inspection, for example, and computer vision can provide insights that would be missed by human manual analysis alone. Automation and robotics are not really new technologies; however, these techniques can be greatly enhanced by artificial intelligence, for example, by using computer vision technologies. And last but not least, all these basic algorithms could potentially also be used for the design and optimization of nuclear reactor cores. So this is quite a broad view. Next slide. Now I will go a little deeper into what we do at the IAEA. Next slide. We have had several working groups and technical meetings, and this slide shows you where we are, the state of the art in AI and where it is applied. This is really taken from the experience of our experts participating in these technical meetings. As I've said, one of the first, quite obvious fields is automation, because automated processes can really reduce the human factor in nuclear work activities; they increase reliability and also reduce operation time. Optimization is also an area where we can optimize complex processes, for example, plant strategies for inventory management, outage scheduling, and fuel cycle parameters. So it can help to process a lot of data.
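The natural language processing use case above (machine-assisted triage of free-text requirements) can be illustrated with a deliberately simple sketch. This is purely illustrative and not an IAEA or vendor tool; the extraction rule, categories, and keywords are invented, and real requirements analysis would use trained language models rather than keyword rules.

```python
# Hedged sketch of NLP-assisted requirements triage: pull "shall" statements
# out of free text and tag a coarse, invented category for reviewer routing.
import re

CATEGORIES = {
    "seismic": ["seismic", "earthquake"],
    "cooling": ["coolant", "cooling", "heat removal"],
    "electrical": ["power supply", "voltage", "breaker"],
}

def extract_requirements(text: str) -> list[dict]:
    """Find sentences containing 'shall' and assign a rough topic tag."""
    findings = []
    for sentence in re.split(r"(?<=[.;])\s+", text):
        if re.search(r"\bshall\b", sentence, re.IGNORECASE):
            topic = next((c for c, kws in CATEGORIES.items()
                          if any(k in sentence.lower() for k in kws)),
                         "uncategorized")
            findings.append({"requirement": sentence.strip(), "category": topic})
    return findings

doc = ("The residual heat removal system shall maintain coolant temperature "
       "below design limits. Operators should log each test. The structure "
       "shall withstand the design-basis earthquake.")
for r in extract_requirements(doc):
    print(r)
```

Even this toy version shows the quality assurance angle Aline mentions: binding normative statements ("shall") are separated from informative prose so that a human can verify each one against the delivered product.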
AI is also in use in building information modeling and in verification and validation. Another field where we see many applications is analytics: for model validation and advanced computer simulation, and, as I said at the beginning, it is of use in digital twin applications. Another part is prediction and prognosis: by looking at events, we can reduce failures, or at least detect failures in advance, assess current asset conditions, and estimate, for example, the remaining useful life of components. All these insights help to extract and use data from multiple knowledge sources, data collected from thousands of reactor-years of operating experience and massive libraries of scientific benchmark and validation experiments. So all these techniques are now more and more commonly deployed. However, next slide please, we all know that there are deployment challenges, and this is, I think, today's topic. First of all, the results of AI can be hard to interpret; there is a question of trust and of the robustness of AI performance. We cannot use the traditional verification and validation approaches for AI because of its quite limited transparency. High-level regulatory safety assessment principles and guidance may need to be developed; this is not yet really recognized worldwide. And of course, all the security and cybersecurity issues, with data and with adversarial attacks, are already there, but we also have an increased cybersecurity risk when using artificial intelligence, due in part to the limited transparency of what is inside the machine learning tools. So what's next? Can you change the slide, please? Yes. We work on different aspects. First, on less mature technology, what is called technology development: we need further development of the technology before applying it to nuclear power plants; that's our view, at least. We have also categorized some technologies that are in a deployment stage, for example, the automated analysis of non-destructive examinations, which I think is more and more commonly used, and everything related to predictive maintenance procedures. And then there is a field where we work on technology enabling: developing legal regulation for these applications, and developing common requirement databases and common requirements that are understandable by AI, for use in optimization, simplification, and specification, because that is not how requirements are written today; they depend mainly on the user and the operator. We also have to develop algorithms that are accessible, to give more transparency to the algorithms and make them understandable. Next slide. Next two slides. Yeah. So what do we do, for example, in terms of activities? Last year we had a big technical meeting on artificial intelligence for nuclear. You can see that there are many fields: it's not only nuclear power, but it also relates to ethics, food and agriculture, health, and nuclear physics. Next slide. We are also part of the International Telecommunication Union of the United Nations, and we participate in webinars like this one, AI for Good, or AI for Atoms. And next slide, please. Every year there is a publication from the ITU, not specific to nuclear only, but in it we have quite a few examples, and we share the development of AI for nuclear technology and applications.
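The prediction-and-prognosis application described above, estimating the remaining useful life of a component, can be sketched in a few lines. This is a hedged toy example, not a qualified prognostics method: the degradation indicator, failure limit, and linear-trend assumption are all invented for illustration, and real plant prognostics would use validated physics-of-failure or data-driven models.

```python
# Hedged sketch of remaining-useful-life (RUL) estimation from a trend.
import numpy as np

# Hypothetical monthly measurements of a degradation indicator
# (e.g., a vibration amplitude), drifting upward with some noise.
months = np.arange(12)
indicator = 0.8 + 0.05 * months + np.random.default_rng(0).normal(0, 0.01, 12)

FAILURE_LIMIT = 2.0  # assumed level at which the component must be replaced

# Fit a linear degradation trend and extrapolate to the failure limit.
slope, intercept = np.polyfit(months, indicator, 1)
rul_months = (FAILURE_LIMIT - indicator[-1]) / slope

print(f"degradation rate: {slope:.3f}/month, estimated RUL: {rul_months:.1f} months")
```

The point of the exercise is the workflow Aline describes: condition data feeds a model whose output (months of useful life remaining) supports maintenance planning, rather than replacing the engineer's judgment.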
It's also accessible on the internet. Before finishing, I would like to mention one point that for me is very important, especially on International Women's Day, and it relates also to ethics: the developers are mainly men, I can say, and in computer and IT science we are lacking women. So if we could do everything possible to attract women, that would be very good, because I think that diversity in developing algorithms and in looking at the requirements is very important to have something which is very close to the human brain, with all its diversity. And I would like, Terri, to offer you this, because the question is, am I a robot? I don't know if I am a robot, but if I were one, I would choose this image with a nice picture. And I think we should share this to attract more young girls into our domain. Thank you very much. Thank you, Aline. A reminder: you can submit your questions for our Q&A session, so if you have any questions for our speakers, please make sure to submit them. Our next panelist is Mr. Ben Schumeg, with a presentation from the U.S. Army Combat Capabilities Development Command Armament Center. Thank you. Good morning and good afternoon, everyone. As Dr. LaLaine mentioned, my name is Ben Schumeg. I'm representing the DEVCOM Armament Center, specifically our Quality Engineering and System Assurance group. Thank you for having me today. I know I'm perhaps the slight oddball in the group, as I'm from the DOD, but hopefully, going through this presentation, I can give you an idea of why we feel it's important that we talk and work together on some of these challenges with artificial intelligence, especially when it comes to the safety of those systems. Next slide, please. This first slide talks a little bit about some of the reasons why the DOD specifically has been very aware of and tracking what's going on with artificial intelligence, and especially some of those challenges. Probably the biggest thing that came out was the NSCAI, the National Security Commission on Artificial Intelligence, which was, I believe, congressionally led and congressionally funded research into what artificial intelligence means, not only for the DOD but for the whole federal government. That report pointed out many key areas that need to be followed, and I've highlighted a couple here that really impact myself as part of our quality engineering group: data science, verification, validation, reliability, safety, and, of course, human system integration. A lot of the other reports you can see on the screen speak to these very same aspects, especially safety, one of the reasons I'm here today. And I wanted to point out that last one on the bottom right, a little hard to see, but that is the responsible AI memo released by the Honorable Secretary Hicks concerning how we are going to ensure that the systems developed by the DOD maintain those five ethical principles. Next slide, please. So, a little bit about why I'm here and who I am. The Armament Center is the primary development organization, a development command, looking at conventional weapons systems and ammunition for the Army.
As with any new, novel technology, there are ways AI and ML could revolutionize the way these technologies are developed by the Armament Center. But of course, that brings challenges and things we want to ensure we're looking at. Some of these challenges: what does continuous learning mean? What do these very complex statistical algorithms mean for us, and how are we going to ensure configuration management? What kind of new methods, procedures, or processes will we have to implement to make sure we can assure, and I'll talk about that in a second, that what we are developing meets the intent and the needs of what we are developing it for? We're looking at different sensors and different inputs, and, as you'll see, data is very critical from a machine learning perspective: how can we assure that the data is unbiased, correct, and accurate, and that it fits the context of the environment it's being used in, while still maintaining reliable, ethical, safe, and robust system capabilities? So what the Armament Center did is look at what's called the Armament Material Release Process, which is the final gate a system must pass through before it can be deployed and utilized in the field. Next slide. I'll briefly talk about that for a second. We want to ensure that anything released by the DOD meets what we call the three S's: safe, suitable, and supportable. I won't go through each question here, but as you can imagine, safety is one of our top priorities, so we have a lot of stakeholders and a lot of different milestones, documentation, and deliverables that have to be met; those are listed on the left side. Suitable: is it the right system? Was it developed correctly? Does it meet verification requirements? Does it meet validation requirements? We have a lot of independent testing and safety assessments that take place to make sure the system meets that suitability requirement. And lastly, supportability: can the system be supported in the field? Do we have the right logistics in place? Do we have the right fielding plans and the right training for the operators of our systems? This applies to any system released by the Army that goes through our office; it must meet all of these requirements before it can be, as we say, put in the hands of a soldier. Next slide, please. I wanted to touch briefly on one of those aspects. We're working on a lot of different things, and I'll show you that in a second, but I wanted to touch on safety, because I feel that's probably where we'll have a lot of cross-collaboration and good technical discussions with the NRC and its partners. I think it goes without saying that the safety challenges are significant when you're thinking about AI/ML systems. There's a lot of complexity to the design; there can be changing, differing, and off-nominal environments; and we're looking at the cognitive interaction of the human in the loop of the system.
And what kind of perceptions are they going to have about different, possibly unexpected, behavior of that system? So we're looking at how our levels of rigor for different software-intensive systems need to change. Some of the things we're looking at: different safety methodologies and safety precepts; ways to adjust or recommend new approaches to functional hazard analysis and general safety requirements; what artifacts might be needed; and identifying AI safety-critical functions and any data that feeds those functions, be it as part of design or as part of what we call inference, when the actual model is active. Of course, understanding the concept of operations, the environments, the enabling technologies, and what kind of autonomy may or may not be involved in the system. Taking all of that in, we think about what levels of rigor must apply, what metrics and measures must be developed, and what artifacts can be delivered. Lastly, we're looking at hazard mitigation guidance, as well as adjustments to our safety risk assessment approaches for AI and the different levels of autonomy and rigor, summarizing it all into what we believe would be good practices and possible regulation or policy changes. That's why I have that little blurb there about MIL-STD-882E. That is the safety standard we follow within the Army, and it is undergoing revision; we plan to submit a lot of suggested changes and work with that group to make sure that any needs arising from AI and ML technologies are appropriately included. Next slide. And I believe this is my last slide. I just touched on that one point about safety, but we're looking at a lot of different things at the Armament Center. We're reviewing a lot of the policies and identifying the gaps in them, across many Army regulations, DOD directives, and DOD instructions, doing our analysis to see where we are and where we think improvements can be made. Looking at data science: as I said already, ML specifically is very dependent on data science, on making sure you have the right data and that you analyze and understand that data, since it is what will be developing that system for you. Verification and validation, of course, is a very important, very critical part of any system development, so we want to ensure that whatever methods might need to be adjusted, created, or developed in collaboration with developing organizations are addressed as well. Safety, I spoke to already, but again, we're trying to ensure that the systems that are developed are safe and remain appropriate for their use. Material release is, as I mentioned, our final gate, where we culminate a lot of these data points into a material release package that can be reviewed by stakeholders and panels, very similar to today, to ensure the system is good to go for deployment.
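One configuration-management idea implicit in Ben's remarks can be sketched simply: fingerprint the trained model artifact and its training data so that the exact configuration that passed safety review is the one that gets fielded. This is a hedged illustration only, not Army process; the file paths, manifest format, and function names are hypothetical.

```python
# Hedged sketch: tie a model artifact to its training data with hashes so a
# material-release-style gate can confirm nothing changed after assessment.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def release_manifest(model_file: str, data_dir: str) -> dict:
    """Build a manifest fingerprinting the model and every training file."""
    return {
        "model": {model_file: sha256_of(Path(model_file))},
        "training_data": {p.name: sha256_of(p)
                          for p in sorted(Path(data_dir).glob("*.csv"))},
    }

# At the release gate, recompute the manifest for the artifact being fielded
# and compare it byte-for-byte with the manifest archived at assessment time;
# any mismatch means the reviewed configuration is not what is shipping.
```

This addresses only the "lock down the configuration" piece of the problem; it says nothing about whether the locked-down model is correct, which is where the V&V and safety analyses Ben describes come in.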
And lastly, it brings us to trust. We want to have what we're calling assured trust in the system: not over-trusting and not under-trusting, but finding the right level of trust through things like human system integration and what we call soldier touch points, to make sure the system is going to be used the way it was intended to be used and that the soldier or operator trusts it and will abide by what they need to do to utilize it. And I believe that was my last slide, so thank you for your time. Thank you, Ben. Our next panelist is Mr. Luis Betancourt, with the presentation "Increasing NRC Readiness in Artificial Intelligence Decision Making." Over to you, Luis. Thank you, Dr. LaLaine. Good morning and good afternoon, everyone. As Dr. LaLaine said, my name is Luis Betancourt, and I am the branch chief and champion for artificial intelligence. I am pleased to be here today to discuss what we are doing as an agency to increase our readiness in evaluating AI technologies. Next slide, please. As Dr. LaLaine mentioned in her opening remarks, AI is one of the fastest-growing technologies globally, and it's the next frontier of technological adoption for the nuclear industry. It has the potential to transform the industry by providing new and better insights into the vast amounts of data generated during the design and operation of a nuclear facility. It offers new opportunities to potentially enhance safety and security, improve operational performance, and potentially implement autonomous control and operation. As a result, we have been seeing the industry research and use AI applications to meet future energy demands. It is critical for us as an agency to focus on how these external factors are driving an evolving landscape and growing interest in deploying AI technologies. Over the last year, we have seen that landscape steadily evolving, and AI is currently being used in a wide range of nuclear power operations, including what you heard today from Gene: from mining nuclear data for predictive maintenance to understanding core dynamics for more accurate reload planning. We as an agency recognize the potential for using data science and AI in regulatory decision making. But at the end of the day, what we are interested in is understanding the possible regulatory implications of using AI within a nuclear power plant. What we want to do is ensure that these technologies are deployed safely and securely. We see an opportunity today to start shaping the norms and values that enable the responsible and ethical use of AI, so we as an agency must be prepared to evaluate these technologies. Next slide, please. We are anticipating that the industry will be deploying AI technologies that may require regulatory review and approval in the next five years and beyond. As such, we are proactively developing an artificial intelligence strategic plan to better position the agency for AI decision making. The plan outlines goals for AI partnerships, like what you see here today; cultivating an AI-proficient workforce; and utilizing AI tools to enhance our agency processes, but at the end of the day, to assure our readiness for AI decision making. We want to use this plan as a tool to increase our regulatory stability and certainty.
The plan will also facilitate communication, enabling the staff to provide timely regulatory information to our internal and external stakeholders. When we were developing the plan, we formed an interdisciplinary team of AI subject matter experts from across the agency. To increase awareness of AI's technological adoption in the industry, we hosted three public workshops in 2021 that brought the nuclear community together to discuss the current and future state of AI. We also initiated dialogues within the nuclear community and with our international counterparts, gaining valuable insights and identifying potential areas of collaboration. One thing to note, as you heard from Ben: the agency is not alone when it comes to overseeing the safe and secure deployment of AI. The topics of explainability, trustworthiness, bias, robustness, ethics, security, and risk apply to any entity that wants to deploy AI technologies in designing and operating a nuclear facility. That's one of the reasons we're meeting with other government agencies, including the Department of Defense: to identify new partnerships and leverage their expertise and experience with AI. Lastly, we are committed to providing opportunities for the public to participate in a meaningful way in our decision-making process. As we continue developing this plan, we plan to solicit comments from the public and feedback from the Advisory Committee on Reactor Safeguards in the summer of 2022. Next slide, please. As I mentioned earlier, we recognize the public interest in the potential regulatory implications of AI, and we want to provide opportunities for the public to be heard. That's one of the reasons we try to meet the principles of good regulation, to be open and transparent in everything we do. To ensure stakeholder engagement, we have developed the timeline shown on the slide of our current activities for the remainder of the year. I encourage everybody here to participate and provide comments on our plan. Our team is planning to host an AI workshop in the summer of 2022 to remain aware of the fast pace of technological adoption of AI in the industry, and also to communicate with our stakeholders about the agency's progress on AI activities. Lastly, our plan is to issue the strategic plan by the fall of 2022. I want to mention that early communication, dialogue, and pre-planning are key for us to increase our regulatory readiness and provide stability for the industry as it deploys these technologies. As you heard today from one of the commissioners, we don't want to become a barrier; we want to become an enabler for this technology if the industry decides to move forward with it. So early engagement and information exchange are important for building that shared understanding, to enable timely deployment and execution of the strategy. Next slide, please. In closing, here's our contact information if you want to reach out to us after the break. That basically concludes our presentation, and I would now like to turn it over to Dr. LaLaine so we can commence the Q&A session. Dr. LaLaine, back to you. All right. Thank you, Luis. We're now going to move into the question-and-answer portion. You can continue to submit questions, so please do so as we chat this afternoon. The first one, Luis, I'm going to hand over to you.
Are you finding any unique skills necessary in the area of AI and data analytics, and how are you addressing skill needs? That's a really good question. I think data science itself is a unique skill set that the agency really needs to have, but that field spans several sub-domains, as we know: computer science, mathematics, and statistics. For data science skills, I think it's important for a person to know a language like Python or Java, which are very commonly sought after. One of the things we're doing as an agency, in developing this AI strategic plan, is pursuing the goal of cultivating an AI-proficient workforce. As part of that, we're trying to identify the pipeline of data science staff needed to evaluate an AI technology coming down the road, and also to develop AI tools internally to improve our own processes. To that end, we developed a data science training and qualification plan, which provides on-the-job training as well as some of the skill sets we believe our staff need to evaluate these technologies. Thank you, Luis. Gene, a question for you came in: what happens to the reports that are not worth human review? Yeah, so the analytic looks at the probable failures or probable outcomes we're interested in, assigns a confidence level, and allows the end user to make the call, if you will. The ones that aren't shown usually have very low confidence; that's why they're not shown. However, as I mentioned, there are backstop processes that still provide feedback, for example, if we were to have misses. And what we've learned is that it's important to have those backstop processes, so that if you do have a miss and it's not shown to the end user, you still get the opportunity to understand why you missed it and then go correct the algorithm. That's indeed what we've done on our first application, with maintenance rule functional failures, and so far we've had zero misses since we've done that. But again, you rely on the backstop processes to see those misses, as they're called. Thank you. A question for Ben: on your slide for the path to assured AI, I'm interested in understanding a bit more about the V&V frameworks for AI/ML. Any suggestions? Sure. I will say, of course, that V&V of AI systems is always going to be fraught with challenges, especially when you're talking about, say, a machine learning deep neural network: understanding what each of those nodes achieves, what is being activated, and how that impacts your final result is going to be challenging. But some of the things we're looking at, and I jotted a few down: modeling and simulation. I think that's always going to be a factor in the V&V of an AI system, putting it into a simulated environment and seeing how it reacts. Along with that, thinking about design of experiments and Monte Carlo simulations, again putting systems through a simulated environment to see how they react. And I should clarify, this is not necessarily just for images. You could put images, classification, linear regression, decision making, even decision trees, through simulations of data inputs and map them to their outputs.
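The Monte Carlo style of V&V Ben describes can be illustrated with a short, hedged sketch: drive a model with randomly perturbed inputs and check that its outputs stay within an acceptance band. The "model" below is a stand-in function, not any real fielded system, and the nominal values, perturbation ranges, and bounds are invented.

```python
# Hedged sketch of Monte Carlo V&V: sample perturbed inputs, map them to
# outputs, and count any excursions outside an assumed acceptance band.
import random

def model(temperature_c: float, pressure_mpa: float) -> float:
    """Stand-in for the system under test (e.g., a trained regressor)."""
    return 0.4 * temperature_c + 12.0 * pressure_mpa

random.seed(42)  # fixed seed so the V&V run itself is repeatable
NOMINAL = {"temperature_c": 290.0, "pressure_mpa": 15.5}
OUTPUT_BOUNDS = (250.0, 350.0)  # assumed acceptance band from requirements

failures = 0
for _ in range(10_000):
    t = NOMINAL["temperature_c"] * random.uniform(0.95, 1.05)  # +/-5% input spread
    p = NOMINAL["pressure_mpa"] * random.uniform(0.95, 1.05)
    if not OUTPUT_BOUNDS[0] <= model(t, p) <= OUTPUT_BOUNDS[1]:
        failures += 1

print(f"{failures} of 10000 sampled input combinations left the acceptance band")
```

The value of the approach is statistical coverage of the input space rather than proof: it builds evidence about behavior under off-nominal conditions, which is why Ben pairs it with design of experiments rather than treating it as a complete V&V answer.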
Something else we're looking at is explainability of AI, not necessarily as a way to prove how something is working, but as a way to help validate what an AI system might be trying to achieve or decide; the answer it's trying to arrive at can give us some guidance as to how it's getting there. And the last thing I'll mention is instrumentation of the AI. We may not know exactly why, say, a node has activated in a deep neural network, but maybe we can compare it to other nodes, or to other similar systems that don't use AI, to try to understand how those lower-level functions are impacting the decision and give us confidence during a V&V assessment. Thank you, Ben. All right, next question, for Aline: how is your organization identifying areas where AI or data analytic approaches are applicable and have the potential for the greatest positive impact? Well, that's what I explained in my presentation. We have a methodology based on organizing technical meetings, where we define a mandate with our member states and then deploy these methods. The technical meeting we organized last year really served that purpose, which was to provide an international prospective forum to discuss and foster cooperation on artificial intelligence applications, methodologies, and the enabling infrastructure that has the potential to advance nuclear technology and applications. It's quite a long title. Through this meeting, we are able to understand the state of the art, we identify our role in the acceleration of AI in the nuclear field, and we get quite a broad view, from R&D to technologies that are already deployed. It includes nuclear data, nuclear fusion, nuclear physics, as I showed in the picture, and also nuclear power, security, radiation protection, and nuclear safeguards, because I was more or less speaking about nuclear power, but AI applies to all these domains as well. And this AI methodology can have a very positive impact in improving modeling and simulation capabilities. So that's how we organize ourselves. And of course, it's available as public information. Wonderful. Thank you. All right, Luis, the next question is for you: how does the strategic plan fit in with the NRC's hierarchy of documents, and what's next after the strategic plan is released? That's a good question. We are looking at that right now. The strategy will be an NRC report, similar to the rest of the agency's strategy documents. The strategy itself is no longer than 15 pages; however, there's a companion document we're developing, called an AI roadmap, and the roadmap covers the what and the how of what we are going to do. One of the things we want to do is start research on an AI methodology to establish a technical basis, because during the last workshop the industry mentioned that they are interested in the NRC providing some type of regulatory guide or guidance document. But in order for us to develop that guidance document, we need some type of white paper or technical basis that we can put into that regulatory guidance. So after the strategic plan, we will do some research, but at the same time we want to keep engaging the industry on their deployment plans, because in order for us to develop guidance, we need a better understanding of where industry is planning to use this. Is industry interested in autonomous control?
Is industry interested in using AI for safety systems? Depending on what we hear in those discussions, we'll do more research, and the idea is for us to be agile. We want to have the framework available in the next five years. All right. Next one for Gene. I'm going to combine a couple of questions here. This is around the CAP tool: whether it's off the shelf, and how your data science team is set up and built around your capabilities. Yeah. So the tool we're using right now was developed by Jensen Hughes, a company we've worked with for many years at Constellation. They've done a lot of our probabilistic risk assessments and models, and they have great capabilities in the area of AI and ML. They started with this first application two years ago, so they had already developed an algorithm; they understood our interfaces with the IT systems, databases, and servers; and they had relationships with our IT people. So they were, in essence, the perfect storm. They've developed this algorithm, they call it Data Advisor, and we're now starting to look at other applications for that particular technology. We think this has real benefits because we already have contractual arrangements set up with them; they're very familiar with our programs, processes, and procedures; and many of their engineers hold Constellation technical qualifications. So we find that working with them is very seamless and smooth. On the second question, I think this goes a long way toward answering it. It would become expensive if you went outside to various vendors, but we're finding that by utilizing them and working with our own IT people, it's been very efficient thus far. But these are small applications we've started with; we haven't really tried big yet. If you read some of the literature, they advise against big moon shots: take small steps, small bites of the elephant, and look to achieve adoption and confidence as you move into the bigger applications. For example, as Ben said, we don't have deep learning algorithms yet; those would present bigger challenges for V&V and things like that. Right now we're trying to stay small, get some wins, and build on that as we move forward. So I'll stop there, Terri; I think that answers the question. Thanks. So, Ben, can you talk a little more about repeatability, especially in the context of AI and ML, and what might be achievable within that framework? Sure. From my perspective, repeatability is going to be paramount. I think for anyone, for the DOD, for the NRC, and for their customers, no one wants a system that they don't feel is going to be repeatable in the way it operates. So we're taking the position that whatever system is presented, it has to be repeatable, and we have to be able to prove that to the best of our abilities. One of the things I feel we can achieve with AI and ML systems is that if we are able to identify all of the inputs a system will receive when it reaches a decision, that gives us a good step toward meeting that repeatability. We're not going to be, at least I don't believe we're going to be, looking at systems that come up with new methods of completing tasks or change the way they work on their own.
We call that online learning, I think, though I don't know if that's an official term, because that's where you do start to run into those repeatability issues, if something has been retrained or relearned. But if you have what I'll call a static AI/ML system, and you can lock down that system and that training, and truly understand, and that's the key point, truly understand the inputs to that system, I believe you can obtain that repeatability. And I think we are going to have to get to that achievable state of repeatability. If not, then we have to start thinking about risk mitigation, risk assessment, and possibly bounding of system capabilities, to make sure that if it's not going to repeat exactly the way it should, we have hard stops and the ability to bound the system, so that even if it isn't repeatable, it still stays within that bound. So I would say the objective is a fully repeatable system, but the threshold is repeatable with some guidance and some bounding, in the off chance that we encounter something that makes it no longer repeatable. So, Aline, a question for you: does the international environment have unique challenges for AI development and use? Yes, I can say unique in the sense that, yes, indeed, there is a lot of AI now in the industrial world, and we have to apply it in the nuclear industry, where we know there is a lot of conservatism, especially linked with the nuclear safety we are trying to achieve. So yes, it's a unique challenge, but I would say it's really multiple challenges, because AI covers a lot of techniques and a lot of applications, and I guess that some are easier to use than others. What is important for me is to have a kind of framework where, even if step-by-step V&V is not possible, we define the outer conditions necessary for the safe deployment of AI, meaning the physical constraints on running the model such that we are sure its results do not exceed certain limits. One part of the challenge also is to have uniform requirements to feed the system, because with AI, at least with deep learning machines, systems that build themselves while running as data is fed in, you cannot feed them all with the same requirements. And what was said before, that feeding in the same data gives the same results, is true only if you don't change the system inside. So we also have to work to develop internationally recognized standards on how to settle the requirements and input data to the system, so that it is repeatable not only because we have the same data and the same system, but because, given the same data, we get more or less the same results. I don't know if I make myself understood, but it's not only a question of the internal system; it's also standardization of what the requirements and input data, the format itself, should be. Oh, great, thank you. All right, we have a question for all the panelists, so we'll go around on this one: your thoughts on cyber. As we work in the area of AI, how do we know the AI hasn't been cyber compromised? How do you build trust in the AI given the known cyber landscape? I'm going to start with you, Gene. Yeah, when I saw the question, my first thought was that where it's embedded and used is within internal systems that are already cyber protected.
So this is not external and separate from the databases and software Constellation already uses; I would say we simply rely on the existing cyber protections. Ben, your thoughts? Sure. I agree with Gene; a lot of cyber hardening is going to be dependent on the system. But something to keep in mind, which we're looking at as well, is the cybersecurity of your supply chain. For an AI/ML system, the supply chain includes your data. So it's not only about the security and cyber resiliency of your development environment, but also your data: has the data that will be used to train the system been compromised? Has there been an injection of bias or poisoning into that data stream during training? Like Gene, I would like to think that we have good cyber assessments and assurance on systems that are actively in use. But something we want to start looking at is whether, before use, during development, there is enough cybersecurity on the development side to ensure that what we get at the end is still a cyber-secure product. If I can add to that: that's one of the things we are looking at in the AI strategic plan; we're working with our cybersecurity colleagues to answer that question, and it's a hard question for us to answer. I think the situation right now is that industry is starting to do this little by little, integrating AI into plant operations. The questions then become: how is the system going to be used in plant operations? Is the system going to be doing a lot of the decision making? How is that data being used and transmitted to the outside? Those are the questions we're asking, along with what the regulatory implications would be, and that's one of the first things we have to start thinking about. I know when we met with the ACRS back in the summer, they were concerned about this question as well. So it's a hard question to answer at this point, but that's part of why we're developing the AI strategic plan, to be able to tackle it. Yeah, Terri, if I can add to what Luis said: we're not even remotely thinking about operating plant systems or equipment with AI and ML. That might come down the road in the future, but I would call that one of those moonshots that you're advised not to go after too quickly; you start small. Right now we're looking at processes, portions of processes, and tasks. As one of our folks put it to me just yesterday, we're using it as a decision support tool, not a decision making tool. It's still something where the human has override capability and understands completely, from an explainability perspective, where the results came from. So we're not at the fully autonomous stage by any stretch yet, and I think you won't see that for a while, until we first gain confidence in the smaller projects. And yes, if I may add something: that's why we distinguish technology for development from technology for deployment. The technologies for deployment, as Gene said, help and support decisions; neither we nor the machine decides alone. I also think we need to define acceptable limits for the performance of the system, and if the result is outside those limits, then we go back to a manual process. That's also a way to do it.
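The fallback Aline just described, accept the AI's output only when it falls inside predefined limits and otherwise route to a manual process, is easy to sketch. This is a hedged illustration; the limits and predictor are invented, and any real acceptance band would come from the plant's engineering analysis.

```python
# Hedged sketch of a limit-based guardrail: use the AI value only when it is
# inside assumed engineering limits; out-of-bounds results fall back to a human.
from dataclasses import dataclass

@dataclass
class Decision:
    value: float | None
    source: str  # "ai" or "manual_review"

ACCEPT_LOW, ACCEPT_HIGH = 0.0, 100.0  # assumed acceptance limits

def guarded(ai_prediction: float) -> Decision:
    """Gate an AI output: in-band values pass, anything else goes to a human."""
    if ACCEPT_LOW <= ai_prediction <= ACCEPT_HIGH:
        return Decision(value=ai_prediction, source="ai")
    return Decision(value=None, source="manual_review")

print(guarded(42.0))   # Decision(value=42.0, source='ai')
print(guarded(250.0))  # Decision(value=None, source='manual_review')
```

The guardrail does not make the model trustworthy; it bounds the consequences of an untrustworthy output, which is the same "hard stops and bounding" idea Ben raised for repeatability.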
Gene, over to you. How do you see AI and data analytics providing a positive safety benefit for nuclear power plants?

Well, that's a common question we get, and simply stated, this is a golden opportunity for us to, first, eliminate low-value work and, second, better focus on what's important or significant. That's really it in a nutshell. When it comes to CAP screening and prioritization, you want to focus on the more significant conditions, and this enables us to get to them quickly and spend the time to understand them completely, while not entirely ignoring the lower-significance items. In the area of work screening, work management, and work requests, we're able to look at the higher-priority equipment failures and quickly understand how to code them, get them properly sequenced out to the work groups, and start ordering parts. The sooner you fix things like that, the better and safer your plant is. So in a nutshell, it's enabling us to better focus on what's really important. I think that's the big benefit to safety right now.

Thank you. Louis, you've got a question about how autonomous systems might be used in the decommissioning of nuclear power plants.

That's a good question. I'm inferring that the person is asking about the use of drones for inspections. At the end of the day, we shouldn't be a barrier if industry wants to use that for doing some inspections. I think it boils down to the level of autonomy of the system being used in decommissioning. If the system is used more for improving operational performance, that doesn't have much of a tie to safety, so I don't see us as a regulator having much impact there. But if it is impacting a safety system and autonomy is involved, then, like what Ben mentioned, can we have assured trust that the system is able to do what is intended? That's where the regulatory implications come in. At the end of the day, the regulator should not be a barrier; it should be an enabler if industry wants to do that. But we need trust and assurance. If they want to pursue that, we need a better understanding of how the system was trained. Can we trust the system to be fully autonomous, or where is the human in the loop in this case? Those are the things we need to consider if industry wants to go there. I don't know if anybody else on the panel wants to comment on that.

Yeah, definitely, to drive that home: that's something we have to consider with any of the systems we develop. How is the human integrated into that system, and how is that oversight maintained? Because we need to ensure trust in the system, and we need to ensure that the use of that system still meets the intent of its design. I think that's going to be very critical moving forward, for sure.
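The "decision support, not decision making" posture Gene and Ben describe can be pictured as a thin approval layer like the sketch below; the model, the explain hook, and the reviewer interface are hypothetical stand-ins, not anyone's production design:

    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("decision_support")

    def human_in_the_loop(model, case, reviewer):
        # The model only recommends; the human decides and can override.
        suggestion = model.predict(case)  # stand-in model interface
        rationale = model.explain(case)   # hypothetical explainability hook
        log.info("Model suggests %s because %s", suggestion, rationale)
        final = reviewer.decide(case, suggestion, rationale)  # human authority
        log.info("Final decision: %s (human-approved)", final)
        return final

Logging both the suggestion and the human's final call is what preserves the oversight trail Ben mentions.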
This one's for all the panelists. We've been talking about AI at this stage of use being much more of a decision tool; I think we'll get to data in a bit. Some of the questions that came in are along this theme: when could the algorithm advance to a point where it could change its code on its own, a model that would learn and further develop itself? And how would we go about moving into an area where the AI may be different from what it was originally programmed or coded to be? I'll go ahead and start with Gene.

Yeah, you know, when I talk to our experts (and I'm not a data analyst, but I know a lot of them now), we're far away from using deep learning right now. Deep learning, of course, would be where it's almost fully autonomous: it's learning on its own, it's getting smarter, and it's doing things on its own, including perhaps changing its code. And as Ben mentioned earlier, that creates unique challenges for verification and validation. Speaking for what we're doing right now, I think we're still a long way from being there. Do I think we can get there? I think we can. My reading tells me the smart people say: start slow, start small, build for adoption and credibility. Don't build for the big hits and the big solves. Gain confidence and build on it as you move. So I think we still need to build on the smaller projects before we tackle those types of challenges, Terri.

Aline, what are your thoughts?

Well, yes, that's what you call deep learning. I expect all these systems will be used as support systems, for example when we do predictive analysis across a lot of data. So we can have machine learning and deep learning by feeding in existing data and trying to predict what's going on. But it will be used as an additional support system for the operator, for example, or for the designer, or whoever. We don't see it as a direct application, at least for the time being, because we need to understand what's going on inside the AI system. What I also wanted to point out is that even for normal safety I&C systems, there is a requirement, when developing the system, to have independent verification and validation. So even if the traditional V&V methods won't work for these AI systems, I suppose, though it's not for me to answer, it's for the specialists, that the regulator will impose some kind of independent verification and validation when we get to real safety systems. So that's another way to have more trust in what comes out of the AI systems.

Yeah, and I think you hit the nail on the head: trustworthiness comes to my mind. How can we trust that AI? Because at the end of the day, what we care about at the agency is whether we can understand how that AI made a decision. What were the factors it took into account in making that decision, so that it operates as intended and in compliance with the regulations? I think that's what we need to care about if that ever happens. I don't think it's going to happen in the near term, but it's always in the back of our minds that if industry wants to deploy these kinds of technologies in the field, we need to be asking these questions about explainable AI, trustworthy AI, ethical AI, and even the data and the bias that may be included, like Ben mentioned, which also has implications at the back end for testing and evaluation. So hopefully we're not going there soon, but that doesn't mean we shouldn't be thinking about it.
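On explainable AI: one common, simple way to ask which factors a model leaned on is permutation importance. The sketch below is generic, assumes only a hypothetical model object with a score method, and is not the agency's or any panelist's actual tooling:

    import numpy as np

    def permutation_importance(model, X, y, n_repeats=10, seed=0):
        # A feature's importance = how much the model's score drops when
        # that feature is shuffled, breaking its link to the outcome.
        rng = np.random.default_rng(seed)
        baseline = model.score(X, y)  # stand-in scoring interface
        importances = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            drops = []
            for _ in range(n_repeats):
                X_perm = X.copy()
                rng.shuffle(X_perm[:, j])  # shuffle column j in place
                drops.append(baseline - model.score(X_perm, y))
            importances[j] = np.mean(drops)
        return importances  # larger value: the model leaned on it more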
Yeah, Terri, if I may add: in the nuclear industry we have plenty of processes we can look to for applying this technology. And I think the solution that comes to mind when I hear the other speakers talk is a very well-designed user interface. When you have a really well-defined user interface, it's explainable, and the end users understand how you're getting to that decision. To come back to one of the earlier questions, along with Ben's point: we use a multi-metric method, where four or five different things combine to give us a confidence level that this is a potential failure, or this is a potential 3B condition report. So I think a really well-defined user interface goes a long way toward achieving what we're talking about here. And there are plenty of processes in our business we can turn to and start applying this technology. One idea, for example, would be causal analysis. We have plants, equipment fails, and what do we do? We scramble our resources, go to DEFCON 2, and work out why it failed: support/refute matrices, failure modes and effects. This technology could help us quickly establish cause, because that data is out there just waiting to be interrogated. So there are plenty of processes in our business we can apply this to, to help support the decision-making process, without having to worry about being fully autonomous. Just some added thoughts there.

And I just wanted to speak for a second on part of that original question as well, concerning what I called online learning. I do think that's something that's going to be far off. I can't predict the future, but if we still want to achieve that safety assurance, that repeatability, those different aspects that are important to all of us here on the call, it's going to be a while. We need to build that trust, and we need to build the capability to have confidence in that system, from safety, from V&V, from reliability, all those "ilities," as they say. And I think that's going to be very difficult when you start thinking about systems that can adjust themselves as they go.

Great points. Thank you, panel. For the next question that came in, I'm going to start with Luis and then go over to Ben. As you both were talking about your teams working on the AI initiatives, a question came in asking whether social scientists are included on your teams.

That's a really good question. At the moment our core AI team does not include social scientists, but that doesn't mean we aren't talking to social scientists across the agency. For example, one of the areas where we're engaging is human factors; they're looking at the strategy right now and providing comments. So they're not part of the core team, but eventually we'll need someone from that field on the team.

Sure. And from my perspective, we have human-systems integration experts who are part of our teams. We also have ethicists who can be part of the team, and legal authorities who can review different things. So we try to make sure that we keep that broad review of these systems, AI or not. Even for non-AI/ML systems, we try to ensure, as part of that mature release process I showed earlier, that those reviews still take place regardless. We'll simply adjust them or integrate new aspects for AI and ML technologies. And that's definitely something we're looking at right now: identifying those gaps and looking for ways to fill them.
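Gene's multi-metric method above, several signals combined into one confidence level, can be sketched generically as a weighted blend; the metric names and weights are hypothetical placeholders, not Constellation's actual scheme:

    # Hypothetical signal names and weights, for illustration only.
    WEIGHTS = {
        "text_similarity": 0.3,
        "equipment_history": 0.3,
        "failure_mode_match": 0.2,
        "priority_keywords": 0.2,
    }

    def combined_confidence(metrics):
        # Blend several independent 0..1 signals into one confidence score.
        assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
        return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

    # Example: four signals agree moderately, giving mid-range confidence.
    score = combined_confidence({
        "text_similarity": 0.8,
        "equipment_history": 0.6,
        "failure_mode_match": 0.5,
        "priority_keywords": 0.7,
    })
    print(f"confidence = {score:.2f}")  # 0.24 + 0.18 + 0.10 + 0.14 = 0.66

A well-designed interface would then surface both the combined score and the individual signals, which is what keeps the result explainable to the end user.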
Thank you. So we've gotten several questions around data, and I'm going to steer this one to the panel from your different perspectives. Of course, AI is built on data, and models are only as good as the data they're based on. How do we go about developing the data sets for the AI that may be used in our respective organizations? Aline, can I start with you?

Well, data sets: as I said, we participate in standardization for that. There is work ongoing in the IEC, and I expect there are other standardization bodies working on this as well, because, as I said, AI systems develop quite fast. There are numerous startups, and we cannot adapt all the data to every system that's available. Of course, standardization comes a little later than what's on the market, but it's really necessary, and we participate in this field. It's an important part of the development that we follow at the IAEA.

Almost made it through without doing that. Luis, your thoughts?

Yeah, I think data quality is very important, both for the data you train the model on and for the data that comes in after that. For us as an agency, the question that comes to my mind is: are we going to be requiring data from licensees in some of these submittals? My gut feel is no, but that's one of the things we need to consider in evaluating these technologies. The other thing that comes to mind, for internal purposes: before we go to the data, we need to step back and ask what problem we're trying to solve. What process would benefit the most from using AI? And in that case, do we have the right data? Is the data unstructured, or is it already structured? Those are the questions we're going to be looking at going forward. Because, as you know, ADAMS is a huge repository of information, but the data is not structured. How can we structure that data so it becomes machine-learning-ready, not only for NRC staff, but also for industry and members of the public to use?

Right. Ben, any thoughts?

Yes, many thoughts. Data quality, in my opinion at least, is so important and also so challenging. AI has been around for a while, but I don't think everyone realized that the data you have for your AI system might not be of the quality you need it to be. We've been collecting data, in industry and everywhere, for a very long time. But does that data have all the metadata? Does it have all the features? Does it have all the extra information you really need to create a quality AI system? You might claim that you have big data, that you have all this information, but is it the right data? Is it unbiased? Does it have the right amount of context and the right amount of diversity, of environments, let's say, within it? Assessing that is going to be one of the first big challenges when you're thinking about data quality. Something we're looking at is developing a data safety management plan, or some sort of data assessment (I don't want to use the word certification) to understand the, quote, "ilities" of your data: is it appropriate, is it the right data, and do you actually have enough of it, at a high enough quality, to make it usable? I'm concerned that may be a challenge for a lot of organizations once you really look at the depth and breadth of your data. That's my thought, but I think a lot of people feel the same way, and we'll see what the data looks like.
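Ben's notion of a data assessment to understand the "ilities" of a data set could start with checks as simple as the sketch below; the specific checks and the 10% imbalance threshold are illustrative assumptions, not a proposed standard:

    import hashlib

    import numpy as np

    def assess_dataset(X, labels):
        # First-pass readiness checks: completeness, duplication, balance.
        n = len(X)
        counts = np.unique(labels, return_counts=True)[1]
        return {
            "rows": n,
            "missing_fraction": float(np.mean(np.isnan(X))),
            "duplicate_rows": int(n - len(np.unique(X, axis=0))),
            # Any class under 10% of rows may signal imbalance or bias.
            "possible_imbalance": bool((counts / n < 0.10).any()),
            # Fingerprint the data so later tampering is detectable.
            "sha256": hashlib.sha256(X.tobytes() + labels.tobytes()).hexdigest(),
        }

Real data programs would go much further (provenance, metadata completeness, representativeness of operating environments), but even checks like these separate "we have big data" from "we have the right data."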
Great. Gene, any quick thoughts? I know we're starting to run out of time.

Yeah, I'd say amen to what Ben said. This is why we picked CAP data, right? CAP data is a good source because it's big data. We've got a big fleet, 12 plants we can draw upon, and we recently gave 600,000 records of data to DOE for them to do research and play with. I think it's a good data source in the sense that it's a structured one: it operates by rules, there are procedures, so it's not unstructured and all over the place. Now, that said, I think you'd be fooling yourself if you thought it was consistent from plant to plant to plant. So one of the real values of what we're doing is that we're going to improve data quality, because we're going to achieve a level of consistency with the algorithm that we perhaps didn't have before; each station, each plant is different, with different cultures, different performance, different people. So I think we're going to improve data quality, and I think CAP data is the ideal place to apply these techniques. Frankly, it screams for these AI/ML techniques.

Thank you. What a great session. AI today is definitely a multifaceted area, with lots of things to look into as we move forward. If we could get the contact slide on the screen: I want to thank our panelists; our session coordinators, Matt Dennis and Trey Hathaway; all the support from the RIC team for this session; and the research AI team for keeping tabs on this dynamic area. And thank you to all of you who participated today. The presentations are available on the RIC website under the program agenda for this session, and they'll be in the agency's document repository following the RIC event. I've been pleased to be your session chair today, and you have my contact information. With that, I will close the session. Thank you, everyone. Have a great day. Thank you very much. Thank you. Thank you.