Good afternoon. My name is Paul Krohn. I'm the Session Chair for today. I'm a Deputy Director in Region 1 in the Division of Operating Reactor Safety. Before we start to get into the program, how AI became our friend and the next steps for the AI journey, I just want to go through some notes about the session. We will have a Q&A session at the end for those in the room. You've got the QR code that will come up at the appropriate time. Using that on your phone, you'll be able to submit a question to the group here when we get to the Q&A period. For those joining virtually, once you're logged into the session, you'll find a tab for electronically submitting a question, and we'll get to it that way. The other thing we're going to do is have a couple of live polling questions between the speakers. I think you'll find those interesting and engaging for the topics we're going to talk about: some real-time data and feedback. Similarly, you'll get a code and some instructions on the screen for those in the room to respond to the live polling questions, and for those online, you'll have a tab on this session that will say polls. Also, when this session is over, since we're the last session today, we'll be available here to talk, should you have any follow-up questions for any of us. And finally, as has been mentioned in other sessions, we value your feedback. On the web page for this session there's a tab for feedback, so please take some time to give us some feedback. Okay, so I went and pulled four quotes offline. Take a look at these four quotes. Can you tell me which one is AI generated? So maybe a little humorous, but it's the third one, "seek success but prepare for vegetables." It kind of goes to show the potential of AI in our applications, but it's not ready for prime-time nuclear applications yet, although it's got some potential. So I think you're going to hear that idea from our speakers today. There is potential there.
We're here to make sure it promotes safety, gets appropriate regulatory reviews, and takes us to a better place. Incidentally, the first quote comes from Amit Ray, an AI scientist who wrote a book called Compassionate Artificial Intelligence. The second quote actually is from Elon Musk, and the fourth quote is from Alan Kay, an American computer scientist. So let's talk a little bit about our session today. Certainly the NRC is committed to enabling the use of nuclear technologies that promote safety. That's our mission. So there are a lot of potential uses for artificial intelligence, and you should get a good sense of that today. Some of the things we're going to talk about are applications you might well see in the U.S. domestic industry; some international AI initiatives, with somebody joining us from the United Kingdom's regulator; internal NRC AI activities to prepare for potential applications; and, interestingly enough, a speaker from elsewhere in the federal government on AI experience, the U.S. Food and Drug Administration. I think you'll find that interesting. As I said, we will have some polling questions in between. So briefly now, I'll go ahead and introduce some of our speakers. Their full bios are on the webpage, so I'll just give you a synopsis here. Rick Zock will be our first speaker. He's a senior manager for Nuclear Innovation for Constellation. Rick will be followed by Andrew White, Superintending Nuclear Inspector, UK Office for Nuclear Regulation, who will join us virtually; Luis Betancourt, Branch Chief, Accident Analysis Branch, NRC Office of Research; and then Sean Forrest, Digital Health Specialist, Food and Drug Administration. A very distinguished panel, and I think you'll enjoy everything they have to provide. Okay, at this point, let's go into our first live poll, and after that, Rick will take over. So let's queue up the first live poll question, please. Okay, so you can text the response to those numbers on the screen.
The first live polling question, and we'll get the results in real time here: what nuclear technology areas could potentially benefit most from using AI applications? Go ahead and vote. We'll give it a chance to stabilize and see where it comes out. Interesting. Okay? Looks like the first answer is the favorite: nondestructive testing, predictive maintenance, or condition monitoring. Good. Appreciate that. Okay, we'll do something similar between the other speakers. All right? I'm going to hand it over to Rick. Thank you. Good afternoon. Hey, as Paul mentioned, and I like that last poll, we're going to talk a little bit about maintenance optimization, what we've done within Constellation Energy regarding that, and the use of analytics and machine learning. Those terms are sometimes used interchangeably in the industry, and I'm going to continue to do that; I'll use those general terms throughout. As some of you may or may not know, Constellation owns 21 of the 93 reactors in the U.S. So we have a lot of internal operating experience that we can take advantage of and learn from, both internally and with the rest of the industry, with regard to analytic development and really the use of digital technologies to not only operate but maintain our nuclear reactors. I wanted to share with you, first of all, at a high level, what our vision is for what we call our digital generation business. And, again, I think the polling kind of pulled it out: we're looking at, and we're doing, by the way — I said vision, but a lot of this is reality today that continues to develop. So we're automating work processes using some business process automation as well as the use of analytics, together what we call smart processes.
In other words, we're taking our hand-held, hard-copy procedures, and those are being digitized so that a worker can actually take that procedure to the field in a digital format, put in data, and have that go right into the proper, I'd say, house or database or data repository, where previously that was done manually. Trending is also done automatically when you collect data that way; you can instantly get historical trends from that information. Decision making through analytics is what I really want to focus on today. And then you can read the rest, but that predictive plant performance is key there at the bottom, all with a centralized information backbone. In other words, what does the infrastructure look like to collect data? That needs to be built as well. And even more important is how to drive that whole culture of innovation across your organization, so that every worker is an idea maker, if you will, with regard to developing new technologies and ways to do work more efficiently using analytics and other technologies. Here's a pictorial view of what I just went through. This term, station digital twin, is used throughout the nuclear industry. Everybody has their own definition of it, and beyond the nuclear industry there are different definitions. So I've taken the liberty to define it for our company. And if you look to the left of that clouded area that represents the digital twin — by the way, that silhouette to the right in that cloud, where you see the zeros and the ones, that's the digital version of a physical plant — you'll see those arrows pointing in, the very top one being process automation. I just talked about that. The middle: using data analytics to aid decision making, using data.
And the third one at the bottom is that mobile worker concept, where, just as I mentioned with the example of digital procedures, the worker has information, drawings, and access to information in the palm of his or her hand, versus having done that with hard copies in the past and traveling back and forth to the shop to do work. All three of those, I'd say, definitions or inputs into what we call a digital twin are supported, to the left, by what we call a data hub. That's the infrastructure that I talked about, built in order to collect data from multiple inputs and sources and have it all in one place, such that you can explore it and use it to provide insights that a human being normally may not be able to reach with regard to establishing correlations across large groups of data. At the very bottom, you see this user interface. And if you look down at the six o'clock position, it talks about the corporate station monitoring. That's a central monitoring station that we have in our corporate office outside of Philadelphia that looks at all 21 of our reactors from an equipment performance perspective. So with the use of analytics and automation, we can actually look at equipment health well in advance. I'd say in the old-school way, it would use experienced engineers and data gathered manually throughout the years to determine when a piece of equipment is either failing or about to fail. With the use of analytics, that can be predicted well in advance, months in advance, so we can take the appropriate corrective action and avert equipment failure. And then the other two representations also represent the mobile worker, to the left of that and to the right, down there around the five o'clock area: each worker having access to this same data at the workplace or out in the field.
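The kind of months-ahead equipment-health prediction described above can be sketched as a residual-based monitor: learn how two correlated sensors track each other during known-healthy operation, then flag a sustained deviation as an early warning. This is a minimal illustration with invented sensor names, data, and thresholds, not Constellation's actual pattern-recognition tooling:

```python
# Minimal sketch of an advanced-pattern-recognition style equipment monitor.
# Assumption: one sensor (pump discharge pressure) tracks another (motor
# current) nearly linearly when the equipment is healthy. All data and
# thresholds here are illustrative.

def fit_baseline(x, y):
    """Least-squares slope/intercept relating two correlated sensors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

def faint_signals(x, y, slope, intercept, tol=0.5, run=3):
    """Indices where the residual exceeds tol for `run` consecutive samples."""
    flagged, streak = [], 0
    for i, (a, b) in enumerate(zip(x, y)):
        streak = streak + 1 if abs(b - (slope * a + intercept)) > tol else 0
        if streak >= run:
            flagged.append(i)
    return flagged

# Known-healthy history used to learn the baseline relationship.
current  = [10.0, 11.0, 12.0, 13.0, 14.0]
pressure = [20.1, 21.9, 24.0, 26.1, 27.9]
slope, intercept = fit_baseline(current, pressure)

# New data: same currents, but pressure drifting low -> sustained residual.
drifting = [20.0, 21.0, 22.0, 23.0, 24.0]
print(faint_signals(current, drifting, slope, intercept))  # -> [3, 4]
```

The point of the sustained-residual check is exactly the "faint signal" idea from the talk: a single noisy sample is ignored, but a persistent small drift away from the learned healthy correlation is surfaced months before a hard failure threshold would trip.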
The value statement off the top right, I think, is really important, and I'll talk about that a little bit more as well, but the bottom line is we want to maintain or improve our safety margins where applicable. Megawatt-electric generation output is important, so we can increase that or gain efficiencies; and hours, what I call hours back to the business, work efficiency, such that we can reinvest those hours and savings from a worker perspective into other areas of the business. A couple of examples of our current analytics: the first one I touched on a little bit when I talked about the corporate monitoring station outside of Philadelphia. That particular analytic, used in many of the commercial reactors in the U.S., uses this advanced pattern recognition technique where, again, we can avert equipment failures well in advance of previous methods. A second one I'll touch on in a little more detail: all of our operators go through a very rigorous training program, as many of you know, 12 to 18 months long, and we have come up with an analytic that looks at each student on an individual basis and can determine strengths and competency weaknesses through how they answered each question on all the tests they take throughout that license qualification period. We can identify those weaknesses well in advance of when an instructor would have in the past, and it's automatic; it's basically the press of a button, and the idea is to remediate much sooner so we can avert student failures in our accredited training program. I'll go through the rest very quickly, but there's the maintenance rule, which is a regulatory process. We've used some analytics there to make maintenance rule decisions, or to inform those decisions, for engineering. We're doing the same with what we call our condition reports or issue reports, where we can automatically categorize, rank, and assign them using analytics. This is the advanced pattern recognition I talked about earlier.
I'll just make one more note on that: when we see what I'll call a faint signal, if you will, or a piece of equipment that just doesn't seem right using this advanced pattern recognition technique, our central monitoring station will call the reactor or the power plant and talk to maintenance and/or engineering and give them that heads-up. It starts off a dialogue of, hey, this is what I'm seeing; it starts the validation process, and we've identified and prevented several failures that, if left unattended, as they normally would have been, would have resulted in equipment failure, which could have led to a plant transient or some other disruption for the units. This is an example of one of the screenshots that we would have in our license training analytic that I talked to you about. And by the way, the names on here are fictitious; they're not real students. But those little blips, if you will, show students that may have an area of focus, what we call, and again I'll use that term, a faint signal. It's an indication that a student may need help in, say, the calculus area, or system knowledge, or electrical theory, whatever it may be, and we can get that well in advance.
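At its simplest, the student analytic described above amounts to scoring item-level exam responses by knowledge area, so that a weak topic surfaces long before an overall exam score would dip. A minimal sketch, where the topic tags, the sample responses, and the 70% cutoff are all illustrative assumptions rather than the real training program:

```python
# Sketch of per-topic weakness detection from item-level test responses.
# Assumption: every exam question is tagged with a knowledge area, and a
# student is flagged in any area where their accuracy falls below a cutoff.

from collections import defaultdict

def weak_areas(responses, cutoff=0.7):
    """responses: list of (topic, correct: bool) pairs for one student."""
    totals = defaultdict(lambda: [0, 0])        # topic -> [correct, attempted]
    for topic, correct in responses:
        totals[topic][0] += int(correct)
        totals[topic][1] += 1
    return sorted(t for t, (c, n) in totals.items() if c / n < cutoff)

student = [
    ("electrical theory", True), ("electrical theory", False),
    ("electrical theory", False),                      # 1/3 -> weak
    ("systems knowledge", True), ("systems knowledge", True),  # 2/2 -> fine
    ("thermodynamics", True), ("thermodynamics", False),
    ("thermodynamics", True),                          # 2/3 ~ 0.67 -> weak
]
print(weak_areas(student))  # -> ['electrical theory', 'thermodynamics']
```

Because the signal is computed per question rather than per exam, a student can pass every test overall and still get flagged early in one area, which is the "remediate much sooner" behavior the speaker describes.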
I talked about the maintenance rule a little bit with you, and this is what we call an analytic: this is a screen and an output that we use to automatically, I won't say predict, but determine condition report status, ranking, and assignment. This has been in place for about two months now, and what's really exciting about the use of these tools is to see the users get comfortable with them and gain confidence in them. Not only does it save a lot of hours of research and prep, but you actually get a more accurate, more reliable output, because data always tells the truth versus relying on experience, and we can continue to tweak these analytics as we build and use them over time. By the way, this particular analytic is self-correcting, so over time we can correct it and continue to fine-tune it. As you do that, you gain more confidence, and again you end up with a more accurate and streamlined approach to getting there. Some future ideas that we have in what we call our pipeline: I'm going to start with, let's see, the third one down, advanced equipment performance analytics. I talked a little bit about the predictive analytics currently in place in our monitoring center. There are other tools out there as well that can predict different types of equipment failures even further in advance, so we can eventually not only notify the station but allow enough time to automatically connect that to the work order process. So if a piece of equipment is failing, it identifies the failure, the type of failure, the probable fix, and then automatically generates the work order and puts that into the process to fix that piece of equipment. This has actually been done in some non-nuclear applications, and we're looking to fine-tune that for our industry.
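The automatic condition-report screening described above can be sketched with a toy keyword-scoring model: score the report text against per-department vocabularies, route it to the best match, and bump the priority on safety-significant words. The vocabularies, departments, and urgent terms below are invented for illustration; a production analytic would be trained on historical reports and, as the speaker notes, self-correct as users disposition its output:

```python
# Toy sketch of condition-report categorization, ranking, and assignment.
# Assumption: each department has a characteristic vocabulary, and certain
# words warrant a higher screening priority. All terms are illustrative.

VOCAB = {
    "electrical maintenance": {"breaker", "relay", "motor", "fault"},
    "mechanical maintenance": {"pump", "valve", "seal", "vibration"},
    "chemistry": {"ph", "boron", "sample", "resin"},
}
URGENT = {"leak", "trip", "inoperable"}

def screen(report):
    """Return (assigned department, screening priority) for a report text."""
    words = set(report.lower().split())
    dept = max(VOCAB, key=lambda d: len(words & VOCAB[d]))
    priority = "high" if words & URGENT else "routine"
    return dept, priority

print(screen("Pump seal vibration observed with small leak at coupling"))
# -> ('mechanical maintenance', 'high')
```

A real system would use a trained text classifier rather than hand-built word lists, but the shape is the same: the report's own words drive the categorization, ranking, and assignment that previously took hours of manual research.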
On the second from the bottom, outage scheduling predictions: we've used analytics in the past, again using historical data, to predict outage challenges throughout the evolution of a refueling outage for a reactor, and we've done that with some success. More fine-tuning is required there to get the users, what we call our OCC, our outage control center, to gain confidence in that tool. That's a control center that we put in place, and a lot of the commercial plants have this, that oversees and manages an outage. So that's in development; more work to be done there. In summary, I would say that we've applied analytics to areas where the human isn't totally removed but fewer are required to make decisions, and as that confidence is gained, we can apply these analytics with more confidence to other areas of our business. And I do want to stress, and we've had a lot of discussions on this within the industry as well as outside of Constellation, that there's no near-term plan, at least for Constellation, and I would say in general for the industry, to apply analytics to the main control room, where the operator would rely on analytics to make a decision. Right now we're preserving and reserving the use of these tools for applications where we can gain efficiencies and maintain and improve safety margins, but we're staying away from that control room decision-making process. And from a business standpoint, it makes sense for the industry and for Constellation as well, because the gain would be minimal: the control room is well coordinated and well designed, the training is robust, and the number of operators is about right, so there's little benefit to be gained by retrofitting analytics. That's how Constellation views it in that specific area. That's it, Paul; can I hand it back to you?
Thank you, Rick. Appreciate that. Outstanding industry perspectives, and more to come: you're going to hear from the NRC next and FDA after that. But let's go to our second polling question, which is: how likely do you believe it is that new nuclear and energy technologies will include aspects of AI in their design and operations? So a range of choices there. How likely do you think that new nuclear will use AI in their design and operations? And by the way, Andy in ONR on the other side of the pond, you're on deck here in a moment. Okay, thank you. Okay, that's pretty favorable; looks like we got "very likely." That's good to hear. Okay, Andy, are you ready? Yes, I am. The floor is yours, sir. Right, okay, you've got my slides, I can see; that's good. Okay, so my name's Andy White. I'm the Superintending Inspector for nuclear safety working for the Office for Nuclear Regulation; that's the equivalent of the US NRC in the UK. I'm leading for ONR on AI and on regulating AI, so I suppose it's my fault if it all goes pear-shaped. What I'm intending to do with this presentation (I'm just trying to move on a slide; here we are, got it in the end) is, first, talk about why as regulators we should encourage the use of AI for nuclear safety applications. I also want to talk about how we in the UK regulate conventional computer systems, because it is rather different, and I think it bears comparison with AI systems. I also want to look at the differences between AI systems and conventional computer systems; because of the complexity of conventional computer systems, they can actually quite often not do what we expect them to do. I want to think about regulatory approaches and how they may vary according to the different applications of AI systems, rather than applying a blanket approach. I also want to explore a range of different regulatory options to license AI, and then outline our current approach in the UK for how we're going to regulate AI.
So why should we encourage the use of AI for nuclear safety applications? I've given two reasons here, but in essence I think history, looking back, will show that if we didn't use AI, we had a drop in safety. So I think AI is giving us opportunities to improve safety, and I want to be sure that as regulators we maximise those benefits whilst ensuring that the risks are adequately controlled. So my tagline at the bottom is: there is a clear need for strong and effective regulation. ONR regulates conventional computer-based safety systems for nuclear using what we call a two-legged approach. Effectively, one leg is using processes which have been developed and are codified in standards, and using that as a means for saying that the systems are safe. But we also use a separate leg called independent confidence-building measures, where you actually confirm that those processes have been successful in achieving the level of safety required. So this process is actually something that we want to apply to AI, but it is rather more complex as a consequence of the way that AI is designed. So how are conventional computer systems different from AI systems? Obviously, conventional computer systems use a requirements-based strategy in terms of deciding how the various components are going to work, how they're going to interact, what they're going to do and what they're not going to do. AI is designed using a data-based approach, which is the use of data to infer things. It's inherently complex. It's not something which can necessarily be decomposed. And one of the things I want to talk about later is architectures, and whether architectures can actually help us. It's clear that both systems can suddenly fail due to a fault. But I think it's also clear that there are differences in the way that the two systems may fail.
And in fact, it may be very difficult to actually identify whether an AI system has failed, because you don't actually know what it should be doing in the first place. We've done some research here in the UK, and I think interaction with humans could be potentially quite challenging, because the humans may not be able to detect failure. So how might applications of AI affect regulatory outcomes or regulatory approaches? First of all, and I think the previous speaker spoke about this, if the AI provides only marginal benefits and there are significant challenges to the use of AI, and particular challenges in terms of the safety achieved, then really, should we be using AI? I think it's going to be an evolving picture. I think there's going to be some confidence which is going to be gained through the use of AI. But I think there are also going to be some situations where AI fails and we have difficulties actually understanding how those failures have occurred. In some ways, the automotive industry is going through this to some extent at the moment with self-driving cars. So it may be that we're in a position where a hazard might take some time to cause damage or loss, in which case it may be that AI can be used because it is offline, and it may be that we can intervene in the event that a failure has occurred. So I think the timing, the online or offline nature of the AI, may affect the way that we look at it as a regulator. I think also looking at it from a very strongly consequence-based view will give us some clear indications as to whether we should be relying on AI where the failure of that AI could cause some significant consequences. And I've given an example of criticality or offsite release here, where I think that if those failures could occur, then it would be quite difficult to justify the use of AI. Of course, AI could actually be implicated in improved designs. It could actually drive better designs.
I'm mindful of the fact that as a regulator, I see designs which are sometimes, and quite often, decades old, and getting more modern designs, and designs which consider a wider range of faults, may be achievable using AI in a way that is not easy using humans. Also, it's clear, I think, that AI could improve analytical techniques. So what potential approaches could be taken to regulating AI? We could regulate against the requirements of standards. But the problem we're facing with AI at the present time is that the standards are lagging way behind the development of the technology. And is it reasonable to wait until standards are in place before we push the button on AI? I don't think so. Testing is, of course, a way that, in particular, the automotive industry has used to try to demonstrate correct behavior, using iteration to correct faults. But there's a big question, I think, which is: how much testing is enough, particularly bearing in mind the fragility of some AI systems? We could insist that AI systems are built so that they can be verified and validated. But at the present time, that technology doesn't appear to exist, so it may not be reasonable to expect that. We could, of course, throw in the towel and say, OK, AI failure cannot be allowed to lead to a hazardous event. That may limit the applications that we can use AI for, and that might limit its benefits. So what we've done in ONR so far, because we want to find our way through this, is we've engaged with licensees. We've worked with licensees to develop an approach for using AI, and in fact, one of our licensees has produced an AI strategy which has highlighted quite a number of potential applications. We've also worked with other UK regulators; I've given a list here of the wide range of regulators, to understand what they're doing. Some areas are very advanced. Maritime autonomous ships are quite common now, so there is some real regulatory experience there.
We're also working at government level, and we're working with academia, trying to influence the approaches to the design and development of new AI techniques so that they can be validated in some way. We're also working, of course, with institutions, standards-making bodies, etc. We've commissioned some research and produced a research report, which is available to you at the web address given. We've worked at the AI guidance level to try to develop some initiatives in this area. We're also working with the NRC and the CNSC to identify some principles that can allow us to move forward with AI regulation. So what we're trying to do in the UK is to develop a route map to regulating AI. We're trying to develop a way by which we can engage early with operators and provide advice. So we're using an innovation cell and a sandboxing arrangement to allow us to work in a very soft regulatory way, to make improvements and to give people the opportunity to learn, including ourselves. We also wish to start with AI applications that have no direct impact on safety but will be beneficial. Some of those could be, for example, analysing scans and identifying defects in parallel with humans doing what humans have always done. We also want to progress applications where the benefits of AI are significant but where we can use conventional approaches to achieve safety. This could be robots for clean-up operations, where existing safety approaches can be used. We'd like then to progress to applications where the benefits clearly outweigh the disbenefit of the AI going wrong, and accept that actually nuclear consequences may arise, but ensure that those are acceptable. Then, when we're ready, we want to progress to applications with more significant consequences, and then to progress further into areas where AI uses continuous learning and where that's necessary and beneficial to produce safety results.
So I'll finish there. Thank you very much. Great, Andy. Thank you very much for those perspectives. Always enjoy hearing that from a peer regulator. So we're going to move on to our third polling question. Wait a moment for it to come up. There we go. How soon do you think commercial nuclear will be using AI applications in an NRC-regulated activity? This is a timing question; the answer should be interesting. So go ahead, text your answers. One: one to five years. Interesting. Very good. All right, at this point I'm going to hand you over to Luis Betancourt. He's going to provide you some NRC perspectives and where we are in preparing for AI applications in the nuclear industry. Thank you, Paul. So good afternoon, everyone. As Paul mentioned, my name is Luis Betancourt, and I am the branch chief and champion for artificial intelligence at the NRC. What I plan to speak about today is: what are our next steps in the AI journey? As you can see from the AI landscape, we have seen in the last couple of years an increase of AI applications in the industry to improve operational performance and mitigate operational risk. In order for us to prepare for this future, as you can see from the polling questions, we developed an AI strategic plan so we can be ready when the industry decides to put this in an NRC-regulated activity. But we also recognize that there are some factors that we need to be aware of. One of them is Executive Order 13960, which is basically pushing the agencies to consider how we implement AI internally. There's also the Evidence Act, which is basically necessitating making this data more publicly accessible for industry as well as the rest of our stakeholders to do their own analysis. And even though these are important factors, these are outside of the scope of the strategic plan. We're basically trying to prepare the agency for potential uses of AI applications.
But we also recognize that there's a benefit for us in doing our business better as an agency by using AI for internal applications. So with that being said, last summer we issued a draft AI strategic plan that we went out to solicit comments on. One of the things that we wanted to do is to enable the safe and secure use of these technologies in the nuclear industry. And in order for us to do that, we came up with this strategy that has the five strategic goals that you can see on the slide. Goal number one is basically: what is that framework that we need to have in place so we can be ready to review those applications? Goal number two: we need to develop an internal structure to be able to review these applications. Goal number three, which is one of the reasons we invited FDA as well as ONR: we don't want to do this in a vacuum; we want to be able to leverage the expertise on AI across the nuclear sector as well as outside of the nuclear industry. Goal number four: we need to prepare our workforce, not to train everyone on AI, but basically to have the right people at the right time to review these applications. And goal number five: we want to pursue some use cases to be able to build that foundation for how to review these technologies. And there's an appendix, which is basically to acknowledge: yes, we understand that we need to use AI internally, but that's outside of the scope of the strategy. We're in the process of finalizing the strategy at the moment. We are incorporating all of the stakeholder feedback, and we expect to issue the final strategy by the end of the spring. So as part of our next steps in executing the strategy, one of the first things that we wanted to do is to build that organizational framework. In the last two weeks, we had our first AI steering committee meeting, so that part is basically done.
And one of the things that we want to do next is to have an AI community of practice internal to the NRC, and that should complete one of the first strategies under that goal in the summer. In order for us to have early dialogue with industry, as you heard from Andy in the last presentation, one of the first things that we wanted is to have a summer workshop with industry to start having a discussion of what AI is. One of the things we did in the strategy is that we put in different levels of autonomy in AI, and we want to initiate that dialogue with industry: what is it that we should consider at every single one of those levels? What are the attributes and characteristics that we need to consider while we're developing the framework? And we want to do this early and often. One of the things that we want to do after this workshop is to have a regulatory gap analysis workshop where we can consider: what are those gaps that we need to address? On the right side of the slide, I'm not going to go through all of them, but these are some of the research priorities that we have identified for the next four or five years. This year, we plan to initiate a regulatory gap analysis, and one of the things that we want to start doing is embarking on some use cases so we can start building that foundational experience at the NRC. One of the other things that we want to do: we need to start surveying what methods and tools we can consider acceptable, that we can put out in a future regulatory guide or in the standard review plan. You heard from Andy regarding the use of industry standards. We know that outside of nuclear there are some standards that might be applicable to us, so we need to start surveying and identifying if there's something that we can endorse at the end of the day. And finally, we want to continue to have those partnerships to be able to leverage that AI expertise.
As I mentioned before, we are not working in a vacuum, and one of the things needed for us to be successful is to continue maintaining awareness of the activities in the nuclear industry as well as outside of the nuclear industry. One of the things we are maintaining awareness of is the NIST AI Risk Management Framework and developing standards. And as I mentioned before, we want to have early engagement with industry through a series of public workshops so we can start having the conversations early and often. The other thing that we want to do: we don't want to keep our knowledge all to ourselves, so we want to participate in different communication forums and have multiple public meetings to disseminate that knowledge to the nuclear industry, as well as to bring information to our staff through a series of seminars. And lastly, we are participating in a series of working partnerships. One of them is the NRC-DOE MOU, where we're trying to maintain awareness of what industry is doing. In the international arena, we are participating in a gamut of IAEA activities; that's an area that we are actively working on, as Andy mentioned in the last presentation. And lastly, one of the things that we learned through our interactions with other federal partners like the FDA is that they developed a series of principles that is guiding their industry, and we thought that was something that was important to bring to the nuclear industry. So we just embarked on a joint collaboration with the Canadians and the United Kingdom to develop a white paper on the overarching principles that we should consider as three nations in evaluating AI technology into the future. That will provide harmonization, as you heard from a couple of the commissioners, and we expect that it could become a good technical basis for a discussion with industry.
So, moving ahead, 2023 is a very important year for us as we embark on this journey, and for us to be successful it's important for the industry to maintain its focus on safety. That is paramount as we move forward. It's important that we do things differently in developing this AI framework. We want to avoid imposing new, undue burden on the industry; we don't want to stifle innovation, we want to enable it. So we welcome that discussion with industry. It's also important for us to keep learning about what everybody is doing inside and outside of the nuclear industry. And lastly, to the members of the public and industry: please engage with us early and often. We want to understand how you use an AI application so we can better tailor the AI framework for the early adopters of AI, and that way we can enable this technology in the nuclear industry. Lastly, this is my contact information. Through the AI public website listed at the bottom, we're trying to keep information up to date and current as we continue on our journey, and we're trying to put as much information as we can on the public website. So I encourage you to visit the website; there's a link there if you want to contact us. With that, thank you, and I look forward to the questions later today. All right. Thank you, Luis. We're going to do one more polling question before we go on to Sean. Okay, the fourth polling question: what industry do you view as leading AI safety and security assessments? Very interesting. A lot less spread than some of the other categories. Okay, that'll be our last live polling question. With that, let me hand it over to Sean. Sean is from the FDA, and we'll hear about some novel ways the FDA is using AI. 
Thank you for that introduction. I had a little bit of a runny nose, and I do not want to be the public health official who comes to this conference and gets somebody sick. So I'm going to talk to you a little bit about our experience in putting together a regulatory framework for AI. I've been at the Food and Drug Administration for about 16 years now, and the last couple of years have been with the Digital Health Center of Excellence, which I'll talk more about as I go. This is a disclaimer they tell me to have. First, as I mentioned, I'll talk about our Center of Excellence, then a little about artificial intelligence and machine learning enabled medical devices, and most of the time I'd like to spend on our AI/ML action plan. Our Digital Health Center of Excellence is within our Center for Devices and Radiological Health. We are focused on empowering the advancement of public health through innovation, and we do that by making connections and partnerships with important stakeholders, sharing information across those stakeholders, and looking for ways to innovate our regulatory processes, specifically for these types of technologies that are not as effectively regulated through traditional means. I'm going to go quickly past this slide, but as with Rick's, you have the slides and can look if you're interested in more detail. This tells a little more about what we've done and are doing within our Center of Excellence. For now, I'll just sum it up by saying there are a number of topics we work on, and one of those is artificial intelligence and machine learning. So, moving to AI/ML-enabled medical devices: one of the first steps we took was settling on definitions and terminology so that we're speaking the same language, and that was a bit of a challenge. 
There's an international medical device regulators forum, and we worked with a number of different countries in developing a consensus on some of this terminology. I think this is a theme Luis mentioned that they're pursuing as well: doing things internationally to make the work we do as efficient and cross-cutting as possible. These are a few examples; I'm not going to spend time describing them in detail, and there are links in the slides for more information. But suffice it to say there are a number of different applications of AI/ML technology within medicine. I didn't take the time to update these with more recent ones, but we do have a website with a list of over 500 AI/ML-related medical device submissions going all the way back to 1995. So we have a long history of working with these types of technologies, many times in fairly simple ways that are not too different from your typical computerized systems. This gives you an idea of the work we've done in this area; it's not comprehensive, but it's helpful to point to some examples. With all these examples, we see, as has been talked about earlier, and by Andy in particular, a number of opportunities that AI/ML technology offers us. And there are challenges that come along with them, in some cases unique to AI/ML. But the opportunities are significant enough that it's worth figuring out how to work with these technologies and regulate them well. Some of the challenges: the technology is very data hungry; it's difficult to identify all the bias that may be present in systems; they're often black-box systems; and as we get into adaptive algorithms that change over time, that becomes more challenging to regulate. 
And ensuring transparency where it's needed is important with these systems, to be able to use them as intended. So I'll move on now to our action plan. This is an overview of some of the milestones over the last several years that we followed to develop this regulatory paradigm. We started with a discussion paper in 2019. I'll just focus you on, I guess I don't have the pointer, the two blue documents at the top: first our discussion paper, and then we followed that with an action plan. In between, there were several interactive, collaborative events where we got feedback on the concepts we proposed, and I'm going to go into a little more detail on what that feedback was. Specifically, we heard that we need to develop a better framework, particularly for these continuously changing algorithms that can be applied to medical devices. Developing and harmonizing good machine learning practices is important. Transparency: knowing what information each of the different stakeholders within a medical scenario needs, and ensuring that information is available. Regulatory science, particularly on the topics of algorithmic bias and robustness; I'll talk a little more about each of these. And then real-world performance: how do you use real-world data in these systems for follow-up, to ensure they're working as intended? From these, we created an action plan: build an AI/ML framework for that change control process; develop good machine learning practice (GMLP) by being involved with standards development and other initiatives that determine and harmonize those practices; create a patient-centered approach, with a workshop on transparency; develop regulatory science methods for bias and robustness; and advance some real-world pilots, many of which are already underway. So these are the five aspects of that plan. 
I'm going to touch on each of them one by one, starting with the regulatory framework. The basis of this figure is something that's been around for many years: a total product lifecycle. Even though we review something before it goes on the market, you don't just put it on the market and forget about it. There's continuous maintenance that has to happen to make sure these systems are safe, and that's true for any device, but particularly for AI/ML devices that change and are updated with data. It's important to have a framework that uses this total product lifecycle. So we layered onto this framework a change control plan that includes a software pre-specification describing what aspects are going to be changed. We look at what things are appropriate for us to review ahead of time in a premarket application and understand how they're going to affect the entire system, and then have an algorithm change protocol that tells how you're going to ensure that the risks resulting from those changes are mitigated. We're developing a draft guidance, actually very close to being published in draft, and we'll get feedback on that. So that's the first aspect of our plan, and these are some of the considerations that were key in shaping it: making sure we have prompt availability of high-quality digital health products, assuring safety and effectiveness, allowing rapid improvement (because these things are developed over time, even after they're on the market), and being least burdensome, which is part of our mandate for regulating these systems. Moving to the second aspect, good machine learning practice: these are accepted practices in AI/ML algorithm design, covering development, training, and testing throughout the process, and they draw from the disciplines of quality systems, software reliability, and machine learning. 
These are disciplines that have established practices; they just need to be applied to medical devices, in some cases specifically, and in other cases they can be layered right on. There are different groups involved. There are a lot of acronyms here, and I apologize for that; you may recognize ISO and some of these, but many are medical-specific. There are standards development organizations, a number of which are developing medical-device-specific as well as general-use standards for artificial intelligence. There are collaborative communities, usually in specific areas of health care but some of them general, which allow a community to discuss what the needs are, and allow us to be at the table without necessarily driving the conversation. And I mentioned IMDRF, the International Medical Device Regulators Forum, which is a way for us to discuss with other regulators and be consistent and collaborative on these topics. We put out a set of 10 guiding principles a couple of years ago that describe, at a high level, what we're looking for in these practices; it's intended to inform the development of GMLP and promote harmonization. These 10 principles span the development, testing, and monitoring phases of the lifecycle. We did them with MHRA and Health Canada, and there are several others in the international community working with us to develop them. Then there's patient-centered transparency. As I mentioned earlier, it's important for these devices that the different stakeholders have the information they need. We held a Patient Engagement Advisory Committee meeting first to get this input, and followed that up with a workshop, also a couple of years ago, to understand a number of different perspectives on this topic. These are some of the topics that were covered in that workshop. 
I'm going to jump to regulatory science methods. As I mentioned, the key focus we heard in feedback was algorithmic bias and robustness: we need improved methods to evaluate these systems, where it's often difficult to know exactly how they're making decisions. The other half of that is understanding how well they perform when their inputs change a little, or when their context of use changes a little, because they can be brittle. So we need methods to explore that and demonstrate it in regulatory submissions. We have research efforts within the FDA, in our Office of Science and Engineering Laboratories, and we also have Centers of Excellence in Regulatory Science that we work with collaboratively on some of these topics. These are some of the regulatory science gaps and challenges that our research group identified, which are driving their program of research. I'm not going to go into them in detail, but these are the topics our internal research group is focused on in actual projects, broken into data challenges and validation challenges with these technologies. And these are the two Centers of Excellence in Regulatory Science working on bias and on change control. Lastly, the last aspect is real-world performance. How do you use real-world performance? This again highlights the total product lifecycle: as a system is used in the real world, its performance will sometimes differ from the data it was trained and validated on. So in these pilots we're looking at how you understand when those changes happen. These are some of the questions that have come up in our discussions so far, and I think I have one more slide on the actions we're taking to support these pilots and coordinate with some of the other programs already in place on this topic. 
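The real-world performance monitoring described above, comparing field performance against a premarket baseline and flagging degradation, could be sketched in a few lines. This is a minimal illustration, not an FDA method; the window size and tolerance are illustrative assumptions.

```python
# Minimal sketch of real-world performance monitoring: compare a rolling
# window of field accuracy against the premarket baseline and flag when
# it degrades beyond a tolerance. Window size and tolerance are
# illustrative assumptions, not regulator-specified values.

from collections import deque

class PerformanceMonitor:
    def __init__(self, baseline_accuracy, window=100, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)   # rolling window of hits/misses

    def record(self, prediction, truth):
        self.outcomes.append(prediction == truth)

    def drifted(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                       # not enough field data yet
        field_accuracy = sum(self.outcomes) / len(self.outcomes)
        return field_accuracy < self.baseline - self.tolerance

monitor = PerformanceMonitor(baseline_accuracy=0.95, window=10)
for _ in range(10):
    monitor.record(1, 0)                       # every field prediction wrong
print(monitor.drifted())                       # -> True: performance drift flagged
```

In practice the hard questions Sean raises are upstream of this loop: where the field "truth" labels come from, and what action a drift flag triggers.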
So those are the five aspects of our action plan, a whirlwind tour of them. I'll end on this overview slide. There have been several interactive efforts in collaborating on this approach. We are working with a number of groups, especially in developing these practices and identifying the next steps in guiding the industry through this process. So I'm open to questions, and for any questions we don't have time for today, I'll direct you to this digital health inbox where we can respond offline. Thank you, Sean. Very, very informative from all the panelists. We have about 25 minutes left for Q&A. A couple of questions have been coming in; please keep them coming. We'll start with the first one, and I think it's most appropriate for Rick Zock. It reads: for issue report or condition report screening analytics, what training or requirements are in place for the people who write them, to ensure they put in the level of detail the AI system or application needs to review them? I think the intent is the quality and thoroughness of the information coming in, Rick. Right. Great question. The answer is yes, but not because of the analytic: we require that level of training and detailed write-ups in condition reports and issue reports regardless of whether we're going to use analytics to categorize the information. However, from an analytics perspective, the tool is designed using various techniques, one of them word-search recognition, so that it can take in various levels of detail. And using years and years of historical information in condition reports (as I mentioned, we have 21 reactors, so we have lots of data), we were able to train these analytics to incorporate a broad range of levels of detail and come up with very accurate results. 
In addition, the analytic is designed to give you confidence numbers. It will give you an output and a degree of confidence in that result. In other words, the analytic, or our display, will tell you: we think it should be categorized as such, because of this, but I've only got a 66% confidence, whatever the number is. That would cause us, the users, the humans, to go in and ask: what would it take to get to 90% or 100% confidence? So that's a good check; it validates the accuracy of the tool, and it helps us fine-tune it. But the short answer is that the analytic is savvy enough and accurate enough to take in a broad range of inputs. Great. Are there any other panelists who would like to comment on that, or touch on the issue report, condition report, problem statement aspect of what licensees are using out there? Andy, I don't know if you had anything on that. Yeah, for me, I think this is quite challenging. I think bias is a potential problem where you are reliant on the user actually giving all the information that's necessary or important. I can see that people who maybe don't fully understand what the problem is may leave out vital information, and there's a need for proper interaction. If you were talking to an expert, they would ask for clarification. So I see it as a problem of positive interaction, to ensure the issue is fully understood and then the correct action is taken. Oh, thanks, Andy. I think for us as a regulator, it's more about having that questioning attitude. What I mean by that is that we, as humans, may tend to rely on the AI's answer as if it must be right, and there could be times when the AI is wrong. So having that questioning attitude is really important for us. Yes, it's important to have data-driven decision-making, but at the same time: does the data make sense? 
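The confidence-gated screening Rick describes, categorize the report, attach a confidence, and route low-confidence results to a human, can be sketched as follows. The keyword model, category names, and the 0.9 threshold are all illustrative assumptions, not Constellation's actual analytic.

```python
# Sketch of confidence-gated condition-report screening: a classifier
# categorizes the report, and any prediction below a confidence
# threshold is routed to a human reviewer. Categories, keywords, and
# the threshold are hypothetical.

from collections import Counter

# Toy keyword model standing in for the trained analytic.
KEYWORDS = {
    "equipment": ["pump", "valve", "motor", "leak"],
    "procedure": ["step", "procedure", "signoff"],
}

def classify(report_text, threshold=0.9):
    """Return (category, confidence, needs_human_review)."""
    words = report_text.lower().split()
    scores = Counter()
    for category, terms in KEYWORDS.items():
        scores[category] = sum(words.count(t) for t in terms)
    total = sum(scores.values())
    if total == 0:
        return None, 0.0, True            # no signal: always escalate
    category, hits = scores.most_common(1)[0]
    confidence = hits / total
    return category, confidence, confidence < threshold

print(classify("pump seal leak found during valve lineup"))
# -> ('equipment', 1.0, False): high confidence, no escalation needed
```

The human-in-the-loop step lives in that third return value: anything below the threshold goes to a reviewer, which is also where the "what would get me to 90%?" feedback Rick mentions comes from.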
Has something changed that we haven't seen before? So it's important for the industry to keep that questioning attitude in mind: does this make sense or not? Very good. The next question I'll have a combination of Luis and Sean address, and it's about AI workshops. Luis, for you: how does the industry get involved in the workshops in the AI strategic plan that you talked about in your remarks? And Sean, I'm wondering if you could add, when you held workshops from an FDA perspective, how did you make them successful? Any tips you could give us about how to get people involved, and what groups should be involved, to make them successful? Let's start with Luis. Sure. The last series of workshops we had was in 2021, so we have a good list of people who are interested. All of this information is on our AI public website, so I do encourage you to go to the website. There is an area that goes to a mailbox, so if you are interested in participating in the workshop and giving a presentation, please send us an email at that mailbox so we are aware of it. Obviously, we issue meeting notices well in advance, and we're planning to advertise this workshop at least a couple of months ahead so we can have ample representation across the industry. Not having personally been involved in the actual planning of our workshops, I think my first hint is to have amazing colleagues who do these things. But I know we have leaned heavily on the partnerships we have to identify experts who bring a perspective, start the conversation in a really good place, and identify the questions at the root of the challenges so we're focused in those sessions. Keep them pretty short and focused on the questions at the root of the challenge. Here's another one for you, Sean, that just came in. I think this is kind of interesting. 
It goes along the lines of: how is medical device AI data gathered universally across many applications, and how do you gather the data while still adhering to patient health privacy laws? Yeah, this is certainly something that's playing out a great deal in our industry, because the data is being gathered constantly in a number of different health systems. There are registries gathering data that can be used at least for specific purposes. Patient-identifiable information is usually extracted from the data you actually analyze, and in certain use cases you can get the information you need without that PII attached to the data. I think this is going to play out over time, and the legal structure around data is in flux right now, but it's a very important point for AI because of the need for a large amount of data. The other aspect that plays into this need is interoperability: having standards in place so that when you do gather data, it's in a form that can be shared with other health systems and doesn't have to be translated to a great degree, so that studies aren't prevented from happening because it's too expensive to transfer a large enough amount of data to avoid producing a biased model. Okay, thanks, Sean. Anybody else want to comment on that before we go on to another question? Can I just say, I think there's a potential issue with user acceptance of AI if people feel they are going to be identified, or called out for their behavior or what they've done. We've certainly come across this with vision systems which are recording data and transferring it somewhere else, and where AI systems are interacting with people. So I think this could potentially be an issue with user acceptance. Oh, very good. Thanks, Andy. All right, we're going to move on to another question. This one is an industry-type question, back to Rick. 
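The de-identification step Sean describes, extracting patient-identifiable information from the data before it is analyzed, can be sketched minimally. The field names here are hypothetical, and real de-identification regimes (for example, HIPAA's Safe Harbor method) cover many more identifier classes than this.

```python
# Hedged sketch of de-identification before analysis: strip direct
# identifier fields from a record, keeping only analytic content.
# Field names are hypothetical; real schemes remove far more.

IDENTIFIER_FIELDS = {"name", "mrn", "date_of_birth", "address", "phone"}

def deidentify(record):
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

raw = {"name": "Jane Doe", "mrn": "12345", "age_band": "60-69",
       "diagnosis_code": "I25.10", "image_finding": "stenosis"}
print(deidentify(raw))
# -> {'age_band': '60-69', 'diagnosis_code': 'I25.10', 'image_finding': 'stenosis'}
```

Field-dropping alone does not guarantee anonymity (quasi-identifiers can still re-identify people), which is part of why Sean says the legal and technical picture is still in flux.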
AI is particularly useful for making data-driven decisions on nuclear safety. Is it possible in the future to utilize AI for nuclear security enhancement? So it's kind of flipping the paradigm, asking whether there is a place for it in a security environment. Yes, I think there could be. In fact, we've talked about that within our fleet of reactors: hey, how about security? And I think across the industry, if I may speak for the industry, security people, like all of us technical types, are great innovators. They come up with great ideas, and since 9/11, all of us have innovated quite well to streamline and provide a very secure environment for our stations as well as our communities throughout the U.S. And it's not only AI; there are other technologies that we are using and have been using throughout the industry. I see that continuing to grow and develop in lots of different ways, all the way from the concept of an event through the application of actual physical security at the end. So, a broad range. One thing to keep in mind, as I spoke about earlier, is whether there is true business value in doing that. Just like for the control room: today, for an operating reactor, we see small business value in implementing AI inside the control room. That would have to be considered in the security area as well: is it really bringing value to the business? Great, thanks, Rick. So the next question has to do with benchmarks to validate AI applications. I'm going to throw this to Luis and Andy, and then Sean, if you could tag on at the end with any FDA experience. Here's the question: what benchmarks will you need to validate AI applications for use in safety systems and components? How will AI methods be validated and verified for use? So I think the question is asking what the regulator will look for in terms of benchmarks for safety-related applications. Luis, I don't know if you can start with some thoughts and hand it off to Andy. 
I can start. I know Andy has a lot of thoughts on this one, but it's mostly about what claims need to be validated. And that's one of the areas where we want to embark on research early: to start identifying the methods we need, to make sure they could be found acceptable through a regulatory guide at the end of the day. Unfortunately, I don't have an answer at the moment, because that will be an active area of research, but it's certainly one of the first things we want to look at. Andy? Yes, so in the UK, what we look at is risk balance. We look at the benefit versus the disbenefit, considering what might happen if the AI system goes wrong. We're always asking the question, as regulators: how could it go wrong? What could the consequences of it going wrong be? And can you mitigate those consequences? Sometimes that results in the vendor going back and creating a new architecture, or going about things in a different way. One of the questions we're going to be asking explicitly is: are you using the right technology? And Rick, I think you covered this in saying, look, are you using AI in a way which lets you get those benefits? That would be one of the first questions we would ask. If you can use a simple fence to keep people safe, then why on earth are you using an AI system and a camera to try to identify people? It seems obvious, but people get carried away with technology and with what technology can do, and start focusing on, oh, well, it's nearly 100% reliable, so how on earth can you not accept it? So we ask those questions and say, okay, right, what else could you do? Ultimately, in the UK, it is for the courts to decide if somebody wishes to contest something. And this is challenging, certainly in the area of autonomous driving. 
This is challenging because at present there's very little evidence out there, and very few decided court judgments actually saying this is acceptable or that's not acceptable. And ultimately it is for society to decide what it considers acceptable. As regulators, we need to be able to respond to that and approach it in a way which allows those benefits to be achieved while, if possible, avoiding the worst risks that can occur. Great, thank you, Andy. Sean, anything you want to add? Yeah, I would echo some of what Andy said: our approach is very much benefit-risk, and there are applications where you mitigate risk through other measures as you apply AI. The other aspect is that this technology, and our abilities with it, are changing by the year. I mean, really fast. In the current state of the art, there are methods to fully verify AI models when they have a limited number of parameters. This is not the case for self-driving cars, where the parameter space is huge; but for models using a small number of parameters, and this is fairly new, you're able to fully guarantee the output over the input space. So the science is developing, and I think we need to be cognizant of where it is, and not discount it, as we look at methods for the application of AI. Great. Sean, I'm going to throw another question your way. It starts with a compliment, actually: terrific presentation; the FDA seems years ahead in thinking about regulating AI. What do you consider the most important lesson learned from the FDA's journey so far? Well, it's certainly that there's a lot to learn. There's a long road ahead of us before we really feel like we have a handle on this technology and feel comfortable with it. I don't think there's anybody out there who is at that point. 
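Sean's point about fully verifying small models can be illustrated with one such method, interval-bound propagation: push a whole box of inputs through a tiny network and obtain a guaranteed enclosure of every possible output. This is a generic sketch under assumed toy weights, not any real device's model, and the bounds it produces are sound but can be loose.

```python
# Sketch of verifying a small model: interval-bound propagation pushes
# an input box through a tiny ReLU network and yields guaranteed output
# bounds. Weights are illustrative toy values.

import numpy as np

def interval_linear(W, b, lo, hi):
    """Sound bounds for W @ x + b when lo <= x <= hi elementwise."""
    center = (lo + hi) / 2
    radius = (hi - lo) / 2
    mid = W @ center + b
    spread = np.abs(W) @ radius      # worst-case deviation over the box
    return mid - spread, mid + spread

def verify_bounds(W1, b1, W2, b2, lo, hi):
    """Guaranteed enclosure of W2 @ relu(W1 @ x + b1) + b2 over the box."""
    lo1, hi1 = interval_linear(W1, b1, lo, hi)
    lo1, hi1 = np.maximum(lo1, 0), np.maximum(hi1, 0)   # ReLU is monotone
    return interval_linear(W2, b2, lo1, hi1)

W1 = np.array([[1.0, -1.0], [0.5, 0.5]]); b1 = np.zeros(2)
W2 = np.array([[1.0, 1.0]]);              b2 = np.zeros(1)
out_lo, out_hi = verify_bounds(W1, b1, W2, b2,
                               lo=np.array([0.0, 0.0]),
                               hi=np.array([1.0, 1.0]))
print(out_lo, out_hi)   # every output over the input box lies in this range
```

For a model this small, the enclosure is computed exhaustively over the entire input space, which is the kind of full guarantee Sean contrasts with huge-parameter systems like self-driving cars.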
But I think the nice thing is that there are applications you can learn from along the way. As I mentioned, we found through our analysis that we've had AI systems, at least depending on how you define them (one challenge is how you define these), all the way back to the '90s, and we've been learning for a long time how to apply these systems and use mitigations in medical contexts so that a person is overseeing the system, with enough information to determine whether the AI output is believable and to make choices based on it. So we learn along the way; we don't have to make a step change and say, okay, now we're going to take AI fully autonomous. And I think that's a much easier road to follow. All right, thanks, Sean. Andy, this one is for you. Has ONR reviewed and approved any AI applications submitted by its nuclear licensees? And if you haven't reviewed any to date, do you have any in the process of being reviewed? That is the question. Right, so yes, we have. I've reviewed an application for an autonomous underwater vehicle for use in surveying some waste ponds. Unfortunately, the application was very poorly presented and didn't clearly identify the hazards, which is the essence of understanding exactly what the application needed to do and what it needed to avoid doing. It provided some very useful information on the actual AI system, but it didn't cover all aspects of the system that needed to be considered. For example, it didn't consider the hardware, both electronic and physical, and it didn't include the operating systems or the application, the AI application itself. So we provided a determination on that, which was a negative determination. 
Basically it wasn't a basket case, but it was certainly a case of going back: you need to provide us with more information in these areas. Subsequently, that actual AI engine was redeployed on a maritime vessel, worked straight out of the box, and has worked faultlessly since. So maybe we need to be careful about what we are looking for in terms of evaluating evidence. However, the consequences in this particular case were the potential for waste to be disturbed, for the evolution of hydrogen, and for a hydrogen explosion. So there was quite a significant potential consequence. The answer is yes, then, and it was a negative determination, which perhaps doesn't help anyone other than to highlight how much information is needed to make that determination and make it solid. Thank you, I appreciate that, Andy. Luis, a question for you. Do you see AI adoption by U.S. utilities as being driven primarily by (a list of choices) individual utilities, suppliers, or central entities like EPRI or another government or non-government organization? I think that's a hard question. At the end of the day, the utilities have to decide what the business case is and where they want to use AI. What I have been seeing most, at least in the nuclear industry, is that they're using AI to streamline a business process within the existing regulatory framework, like, for example, the automation of the CAP program that Constellation is doing. It's basically driven by the business case for the industry. We cannot dictate how industry uses AI, but if they want to come in and tell us, okay, this is how we want to do a task, then we will engage. I don't have an answer for how or where industry should be using AI; I don't think it's our role to dictate that to the industry. We should be an enabler of whatever they decide and wherever they want to use it. Any other panelists' comments on that? The biggest driver of AI applications? 
I agree, and you don't hear that often. For me, I think there are certain applications which would be very challenging to put in the AI arena. One example I give is the use of AI in a reactor protection system, where the consequence of failure is so significant. I think Rick probably covered that in terms of saying the business benefits are not worth it, but I think it would be really good to think about the risk benefits versus the risk disbenefits so that a balanced judgment can be made. Good point. Rick? If I can add to your comment, Luis: I think one of the reasons for the success we've had with the CAP program, where we were able to apply analytics, as well as the maintenance rule, is that we did it in collaboration with the regulator, with the NRC. As we developed and learned, we shared the learnings both with the region and with headquarters: what we were doing, what we were thinking, how we were going about it. I think that's going to be a key to our success as an industry in the future: openness, 100% transparency, and learning together. We've already been able to demonstrate that, and this is just the tip of the iceberg. The applications can be much more broad-reaching if we can achieve much greater business value and maintain or further improve nuclear safety at the same time, in partnership with the regulator. That's how we've started, and I see that doing nothing but getting stronger as we go. I think that's super important. I agree. We have one last question. We'll address it to everybody on the panel, and then I'll make some closing remarks. It's hard to talk about AI without the context of cybersecurity. So I'd ask for each of your views on the role you think cybersecurity should take in AI, where we are at this early stage in AI, and what we should be thinking about in terms of cybersecurity going forward. Sean, let me start with you. 
You are probably farthest down the road in that arena. Yeah, so I think this is one of the biggest things that motivates me in our exploring explainability in AI, because you need to be able to understand what your model is doing, and so does the manufacturer. This is something that we're exploring: how much do manufacturers need to make transparent to the user, versus do we need to make sure, as a regulator, that they understand how their model is working? Because there are some significant challenges in keeping AI models secure. Now, we talked a little bit earlier about the use of AI in security. I think AI is very well positioned to be used for monitoring for security, because it's very good at identifying things that are different that maybe people don't notice, small things that are different enough. But as you're talking about the security of the AI model itself, there need to be windows into it that allow you to see what's going on, because we've seen in the research that very small perturbations in images can make a model do something completely wacky. And so you need the ability to monitor what the model is seeing and how it's seeing it. And the good news is that there are methods out there. These have been developed, and are being developed even more so now, to counter these security threats. Good to know. Andy, same question over to you. Yeah, so for me, I think the fundamental is that cybersecurity should be considered at a very early stage in the development of the AI system, so that the challenges posed by cybersecurity can actually be dealt with in a structural way rather than by backfitting some technique or approach because it wasn't thought about early enough.
So, on having the right architecture for the AI: Sean made a really good contribution about having a view into the AI. I think for me that's particularly challenging because of the complexity of AI and the way in which the data progresses through an AI system; it means that you cannot view it in a human way. So it may be that we have AI looking at AI. I could certainly see that arising. But for me, as a regulator, I think we will be asking questions about, okay, how could cybersecurity attacks damage, for example, the data or the data sets that you're using? How could they damage the actual systems that are running the AI? And what are you doing about it? That way, I think we can ensure that cybersecurity, and how it might affect the AI system, has actually been considered, and that there is actually a plan for either detecting an attack or preventing it from causing a failure. Great, thanks, Andy. We are right at five, so we're going to wrap it up here quickly. But before we do, Rick, your thoughts on cyber, and then Luis. One thing I would add is that it's really important to have a very robust cybersecurity review process that's integrated into the design process. We have that, and it is robust, but it's evolving. So we have to be careful and be ready to evolve the cybersecurity approach that we take. And it's not just AI; it's all nuclear data that could end up in different types of sources: monitoring only, automation, analytics, et cetera. So it's bigger than AI, it's data-centric, and it's something that, of course, the industry is very focused on, and we'll continue to evolve on that together. Again, that's another collaboration opportunity where we learn together how to make sure that our plants are secure with regard to the transmission of data. I agree with what Rick mentioned. For me, AI is the engine, but data is the fuel at the end of the day.
So as part of our process, we're planning to leverage our cybersecurity program, but I think it's bigger than cybersecurity; there are other security aspects that we need to be looking at. One of the things that Sean mentioned is data poisoning. That's an area that we need to be looking at: if an adversary poisons the data, does the data still make sense or not? So as we go on this journey, obviously, at the end of the day, we want to leverage our existing programs as much as possible, so industry can use those programs, like the cybersecurity program. But if we find that there's a potential gap, obviously we'll work with industry on identifying those gaps and addressing the challenge. Okay, please remember to fill out the feedback forms. Again, you can access them from the session's page, and I'll give you my reflections on this in 15 seconds or less; you can hold me to that. First, the NRC will be ready for the reviews when they come in. Luis talked about the strategic plan we have. That will be in place. We will be ready. Second, Rick and all of us talked about this a lot: collaboration is essential, domestically and internationally, if we're going to do this well and get the successes we want out of it. And finally, many of you probably heard Commissioner Wright's discussion at the plenary session. He was very poignant when he said, where would you rather be? And that idea of where would you rather be is important for us here now, because I think AI, from that standpoint, is at a very incipient stage, right? I think it's about to take off; some of the poll responses showed that. So I think we are where we want to be right now. And we'll probably be back here in a year or two talking about a huge amount of progress. I look forward to that. So with that said, let me close this session. Thank you for your attendance.