Alright, good afternoon. Well, I've got news for you: you're in the right place. It is, what are we, Wednesday today? Wednesday, 3:30, we're deep into the RIC. It's been a long day, coffee's been served, cookies have been served. It is 70 degrees outside and you are not outside. You are definitely in the right place. Thank you for coming to today's session. We're going to be talking about AI, artificial intelligence, and autonomy. And we have over 1,000 registrants for this session. Let me repeat that: we have 1,000 registrants for this session. I went to a rock concert last night; that's why my voice is messed up. It's not COVID. I almost have to give that caveat these days. And there were less than 1,000 people at that concert. I'm really blown away by the interest, but not surprised at all. I mean, as soon as you get on your news feed, whichever news feed you go for, AI is popping up nonstop. They're talking about AI on Jimmy Kimmel, on every TV show. So the fact that you're here today is heartwarming, and we're going to make you a part of today's session. So thank you so much to the 1,000 people that signed up to join us today. A couple of housekeeping items before we dig in. By now everyone should have the Wi-Fi. We're going to be playing with our phones today, so we'll ask you to use your phones for polls and Q&A. This is your tool for today, but please put it on silent for today's session. That's going to be my ask of you. And for awareness, all our sessions are recorded, and all the slides that you see today will be up on our website. So for those folks who are joining us virtually, thank you; you're making up part of that 1,000. For the questions and the polls, you should have a tab online that shows you where to click on polls and questions. And if you're in the room and you haven't already scanned the QR code, we can flash the QR code if we get a sec.
The QR code is going to be your key to the website where you'll have the polling and the questions. So please make sure you sign in for this session with the QR code, and I see some folks with your phones out. You're winning. You're doing it. Thank you. All right. And one quick tip. Here's a hot tip. So we're talking about AI, right? You've heard about prompt engineering, hopefully. This is the new hot job. If you haven't, it's the way to go; if you're a prompt engineer, you're going to make some money in Silicon Valley. My tip for you on prompt engineering: if you say the name of the person you want the question to go to, like "Elbert," hyphen, "my question is, what's your favorite color?", that's an easy way to get the question to Elbert. Otherwise, you're relying on me to distribute the question, and you don't want to rely on me. So, prompt engineering 101. All right, let's dig into it. Let's jump to slide three. Okay. So my name, by the way, is Vic Hall. I work for the Nuclear Regulatory Commission. I am also the responsible AI official, but more importantly today, I am your emcee. I am your host, and I want to tell you a quick story. One of my very good friends, and I'm not going to name names, right there, told me about a fantastic podcast. If you know me at all, I don't do podcasts, not my thing, but she said, check it out. And I spent three hours listening to this podcast. It was a fantastic podcast that broke down artificial intelligence. It was a six-month-old podcast, so it was already out of date, but it had some really good points in it. And my favorite part of it was the way it broke down the world of AI and the way that people are viewing AI. And I see Commissioner Apostolakis there; it was great to chat with you before the session to get your feeling on it, because there are groups of people who are not quite there on AI, and there are groups of people who are really rolling and embracing AI.
And the podcast talked about this divide. If you're on one side, you're like, oh my gosh, it's amazing what AI can accomplish, but you can generate images and videos that could scare the heck out of people, and it could be the Terminator coming. So there's a group of people who think, hey, AI is going to lead to the Terminator. And then you've got the other group saying, man, AI is amazing: I could use AI to generate my RIC speech, or I can use AI to cure cancer or solve world peace and hunger. So you have these two divided camps. So for the sake of our wonderful panelists today, I want to gauge the temperature in the room, where this room is at. This is where you're going to play along, and I'm going to mandate participation. You have to vote: either AI is going to be the Terminator, or AI is going to cure cancer. So, okay, here's what we're going to do. If you're with me that AI is going to rock and roll and we're going to solve the world, you're going to go rock and roll, rock and roll, okay? And if you're not on the bandwagon, and AI is this little scary Terminator, you're going to wave bye-bye to AI, okay? So, with me in the room, mandatory participation, okay? On one, two, three, we're going to go AI with me or AI bye-bye, okay? I want to see some hands. Let's go, let's go, let's go. Okay, what do you got? What do you got? Okay, okay. We're gathering data. Come on, keep it up. One more. One more. Come on, folks, come on, folks. All right. So what do you guys think? I'd say it goes about 20% Terminators and 80% rock and roll. So based upon that, you can gauge where the crowd is going to be today, okay? All right. Thanks for participating. I do appreciate that. I think the reality is, you know, AI is moving quickly, but we need to focus on where AI is today and where the near future is going to be. And when it comes to nuclear safety, I think the two most important things we can do are building our technical capabilities and collaborating, okay?
The nuclear industry and the NRC are not known for adopting new technologies on the fly. We're not going to jump into something, and rightfully so, because of the possible consequences. But when we're talking about safety, it's imperative that we start now to coordinate with other industries, and we have a fantastic panel today with diverse industries, that we have these conversations now, so we're ready for what can come. And your role as participants, as the thousands of participants that are interested, is going to be critically important today. We're going to rely on you to ask some difficult questions with a questioning attitude. And I want to thank you for being here today, on a very sunny, beautiful Wednesday, in this room, to talk about AI and be part of the process. All right, so with that, I'm going to do quick introductions of our speakers, then we're going to do an actual poll, and then we'll do a little more on the bios as we go, for each speaker. So, I mentioned our fantastic panel, and I want to thank you all again for flying in from different parts of the country to join us here today and taking time out of your schedules. We have a good mix. We have Dr. Darren Cofer, who is a principal fellow at Collins Aerospace, representing the aerospace industry and what they're doing with automation. On the big screen we're going to have Chris Dixon, joining us remotely from Canada. Chris is the facility operations director at Global First Power. If you're walking these halls, you may have heard of Ultra Safe Nuclear; they're in the nuclear industry. Chris, good to see you. Chris is going to talk to us about the work that he's doing with small modular reactors. Third, we'll have a look to the future with Elbert van der Bael. How'd I do? Close. Okay, close, close, close. It's a tough name, it's hard, okay. And so Elbert's going to be speaking on the future.
Elbert has had some interesting experience with chemical processing at the Nakagawa plant, and the press release I saw said it ran autonomously for 35 days, and I think that's old news now; you're much beyond that. So I'm looking very much forward to hearing about autonomous operation using AI that has worked. And finally, I do not play favorites, but my favorite speaker of the day is Matt Dennis. Matt works with me at the Nuclear Regulatory Commission. I'm incredibly impressed by Matt because he can not only code in Python but he can explain what he's doing in plain English, and I'm thankful for that. He'll talk to us today about the AI strategic plan that the NRC has in place. So with that, let's go to our first poll, and this is where I need you folks to pitch in and give us some information, give us some data, as we get rolling. And as we do that, I'll introduce Darren; he'll be up next here. So let's run the first poll, please. There's step one. All right. So the question for you folks here in the audience is: to what extent do you believe AI-enabled autonomy can positively impact the safety and reliability of the nuclear industry? So please spend a couple minutes doing that, and as you're doing that, I'm going to introduce Dr. Cofer. Dr. Cofer is a principal fellow at Collins Aerospace. He earned his PhD in electrical and computer engineering from the University of Texas at Austin. His area of expertise is developing and applying advanced analysis methods and tools for verification and certification of high-integrity systems. His background includes formal methods for systems and software analysis, the design of real-time embedded systems for safety-critical applications, and the development of nuclear propulsion systems in the U.S. Navy. Dr.
Cofer has served as principal investigator on government-sponsored research programs with NASA, NSA, the Air Force Research Laboratory, and DARPA, that's the Pentagon's research folks who are responsible for giving us the iPhone, the Internet, and all kinds of fun stuff. So, a pretty impressive group of folks you work with. I know he's also been a member of the SAE committee on artificial intelligence in aviation and the Aerospace Control and Guidance Systems Committee, and he's also a senior member of IEEE. So I'm going to ask you, you can rock and roll, you can wave, but please do put your hands together in a round of applause for Dr. Cofer. Thank you. Thanks, Vic. And thanks for inviting me here today. It's been a while since I've been to the RIC. I do get to dabble a bit in this area because we have a lot of commonality between our industries. We're both dealing with safety-critical systems in highly regulated industries. One difference is you guys can shut down your plant. We can't do that; got to keep flying. And I'm also glad to be here. I'm a Naval Reactors alum, so any other NR folks, come say hi afterwards. So why are we thinking about AI and machine learning in aviation? There are a number of driving factors. We've got increasing demand for new kinds of aviation: more commercial air travel, more cargo and supply chain activities, new services in aviation. We've got highly dense urban areas, new kinds of vehicles, electric vehicles, just a lot going on there. Simultaneously, we're facing a shortage of trained pilots, and that's going to go on for the foreseeable future. So there are definitely proposals for having reduced numbers of crew in the cockpit, going to maybe single-pilot operations. That's going to require automation of tasks that are currently done by human pilots, and many of those are safety critical. And so many of those tasks are going to be implemented using machine learning and AI technology.
But just to be clear, we're not talking about the Terminator or sentient ChatGPT kind of AI here. We're on the far other end of the spectrum. We're talking about neural networks, supervised machine learning, much more concrete, down-to-earth applications of AI/ML technologies. But it's a starting point, and we definitely want to start with the easier problems. And what I want you to understand is this is happening now in our industry, in the aviation industry. So we're a little bit ahead of the curve from where you all are, and maybe there are going to be some good lessons learned, good regulations and standards and things put into place, that might be helpful. It also turns out there are lots of other use cases for AI and ML that really don't have anything to do with autonomy. It just turns out that a neural network is a super-efficient way to approximate a complex function, and we can save lots of computing resources, CPU time and memory, in a lot of our platforms. So why is this a problem at all? You've probably seen lots of stuff in the news. It's easy to pick on Tesla and whatever the crash of the day is. Tesla cars run into emergency vehicles. All right, that seems bad. Maybe this is another one that would have been harder to anticipate: in the drive-through at the Whataburger in Texas, there are some dudes on horses in front of you, and the car doesn't know what to think of that, because it wasn't trained on it, right? So one of the key concerns for machine learning systems is: what is it going to do when it encounters inputs that it wasn't trained on? How do you detect and prevent unintended behaviors? That's the biggest safety concern that we have. Here's one with phantom braking. Why? Because there was a stop sign on the billboard there, and the car went: that's a stop sign, I'm going to stop. Here's another one that's harder to explain.
The car is signaling a left turn into the wall of the tunnel, and then stopping, and then it causes a huge pile-up. So, don't be this, right? In aviation we don't want to be this. We don't want to be in the news. We don't want to cause crashes. Just like you guys, we don't beta test on our customers. All right, so things are going to go down a lot differently in our world. Okay, so from a regulatory standpoint, everything traces back to Title 14, Part 25 for transport category airplanes. That's what y'all would have, you know, if you fly anywhere, that's probably what you fly on. The key elements are highlighted here. We have to show that the airplane and all its systems perform their intended function under any foreseeable operating conditions, and any failure condition that could, you know, prevent continued safe flight has to be extremely improbable. And we have a whole bunch of industry standards that serve as a means of compliance that regulators use when evaluating whether, you know, we as an applicant come to them and say, hey, I've got a new airplane, I've got a new system, I want you to approve it. Industry and the regulators sort of agree that you can use those standards as a way of complying with the regulations, and these cover everything from safety analysis to system design to hardware and software design. And I'm really thinking about, you know, the computational parts of the airplane. But these are all aimed at showing that the aircraft or the system satisfies its requirements and has no unintended behavior, no surprises. So, thinking back to the pictures of the Tesla accidents and such, those kind of help you see that there are some technical problems, but there are also certification problems for us. These cert standards make certain assumptions about the kind of system that you're building, the nature of the software.
And it turns out that data-driven functions learned from data break a lot of the assumptions that we have. So there's an SAE publication that came out a couple years ago where our committee that's working on new standards in this area identified what the concerns are from a regulatory standpoint and from a certification process standpoint. A lot of the kinds of testing that we do, structural coverage analysis, traceability analysis, those cause problems. Here's an XKCD cartoon that you may have seen, if you look at such things, that actually is a really good summary of what some of these challenges are, right? This is your machine learning system. Yep, pour the data into this big pile of linear algebra and collect the answers on the other side. What if they're wrong? Stir the pile until they start looking right. Okay, haha, but this is too close to the truth, as it turns out. "The answers are wrong": that's the verification question, and it's actually hard to tell; a lot of our verification processes don't work. "What does it mean for them to start looking right?": it's actually challenging to come up with requirements for a lot of these systems that we can independently verify. And the big pile of linear algebra speaks to some implementation concerns related to languages. So here's kind of a summary of what those are. But as I mentioned, the structural coverage metrics don't work in this world, and those are one of our key tools for detecting and eliminating unintended behavior, eliminating those surprises. So we have to bring other technologies to bear in order to deal with that. Now, the authorities in our world are engaging with this problem in different ways. The FAA equivalent in Europe is EASA. They are anticipating lots of applicants in this area, and so they've been trying to get out in front of it and push for standardization to put up some guard rails.
There are a lot of really good reports that you can search for. These are really high quality, with a lot of good information on what objectives you should use to evaluate these kinds of systems and what possible means of compliance might be. The FAA is taking a little different approach, more of a bottom-up approach. They want applicants to bring them actual candidate systems, use the issue paper process to evaluate them on a one-by-one basis and come up with individual means of compliance, learn from that, and then figure out what their process and the standardization should be. Hopefully we'll meet in the middle somewhere, and in that middle ground is this SAE industry committee, G-34, that's developing a new standard that will fill this gap where our current cert processes don't work. That's something else to keep an eye on. We have actual applications, actual regulations, actual industry standards that are under development right now that might be super useful. I'll just say that Collins just completed one of these issue paper approvals for a very, very simple neural-network-based machine learning system and got FAA approval for one of these issue papers, the first one to do it as far as I know. But we picked something super simple so we could just focus on what the AI/ML-unique aspects of this function are and work out what the approval process should be. So we'll build on that. I mentioned new technologies that might be required. One is what's called formal methods, which is really just using mathematical-logic-based analysis tools to analyze these systems. We have these kinds of tools that allow us to analyze and make proofs of correctness about traditional software systems. There's a new category of tools that allow us to analyze neural network models and basically propagate inputs through the system and comprehensively analyze all of their behavior over the entire input space in order to avoid these unintended or unexpected behaviors.
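The bounding analysis Darren describes can be sketched with simple interval arithmetic: push an input interval through each layer of the network and you get guaranteed output bounds, covering even the infinitely many inputs that were never in the training set. This is a minimal illustration only; the two-input network, its weights, and the input range below are invented for the example, and production verification tools use much tighter relaxations than plain intervals.

```python
def affine_bounds(lo, hi, W, b):
    """Bound y = W @ x + b given elementwise bounds lo <= x <= hi."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(row))
        h = bias + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so interval bounds pass straight through it."""
    return [max(l, 0.0) for l in lo], [max(h, 0.0) for h in hi]

# Toy one-layer network: y = relu(2*x0 - x1 + 0.5), inputs in [-1, 1] x [-1, 1]
lo, hi = affine_bounds([-1.0, -1.0], [1.0, 1.0], [[2.0, -1.0]], [0.5])
lo, hi = relu_bounds(lo, hi)
# Every input in the box is now guaranteed to produce an output in [lo[0], hi[0]],
# with no testing of individual points required.
```

Scaling the same idea to real networks, and keeping the bounds tight enough to be useful, is exactly the research challenge mentioned above.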
What that means is, even though the system was trained on some hundreds of thousands or millions of data points, there's an infinite number of points that we didn't train it on. This kind of tool allows us to say that even on those untested points, we can bound what the output is. There are scalability limits on the size of systems that we can approach there, but it's a really good method. Another methodology that we're developing is runtime assurance architectures. This is essentially building trustworthy monitors and backup functions that you embed in or wrap around the less trusted machine learning system, in order to detect when it is leaving its safe region of operation and intervene with some safe backup action. We've done some nice flight testing with this in an application where the neural network was generating collision avoidance trajectories for two aircraft that were flying at each other, and our system would detect whether or not the neural network was actually providing a safe trajectory, and intervene. There's a cool video of that on this website. The last thing I want to leave you with here is this picture that emphasizes the fact that there's no one size fits all. The way I like to break this down is this axis of criticality on the X axis and complexity on the Y axis. If we're just looking at low criticality, where these are not safety-critical systems and the impact is small, we can treat a machine learning system as a black box, use a lot of our existing tools, and verify it through testing. But if we want to do something more critical, then for the higher-criticality applications that are of limited complexity, we can apply these new formal methods tools, which allow us to possibly use smaller systems in high-criticality applications. When I say small, I'm talking about things that have thousands, tens of thousands of neurons or parameters in them.
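The runtime assurance architecture described above can be sketched as a thin wrapper: run the untrusted ML controller, check its proposed command against a verified safe envelope, and fall back to a trusted backup action when the check fails. The altitude example, the toy controllers, and the 500-foot floor below are hypothetical placeholders for illustration, not anything from the actual flight tests.

```python
def assured_step(state, ml_controller, backup_controller, in_safe_envelope):
    """Runtime assurance wrapper: trust the ML controller only while its
    proposed command keeps the system inside a verified safe envelope;
    otherwise substitute a simple, fully verified backup action."""
    proposed = ml_controller(state)
    if in_safe_envelope(state, proposed):
        return proposed, "ml"
    return backup_controller(state), "backup"

# Hypothetical example: altitude control with a hard floor at 500 ft.
ml = lambda alt: -200.0      # untrusted ML controller proposes a descent
backup = lambda alt: +100.0  # verified backup action: climb
safe = lambda alt, cmd: alt + cmd >= 500.0

print(assured_step(2000.0, ml, backup, safe))  # ML command stays in the envelope
print(assured_step(600.0, ml, backup, safe))   # monitor intervenes with the backup
```

The point of the architecture is that only the monitor and the backup need full certification rigor; the ML component inside the wrapper can be held to a lower bar.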
For those larger systems, there's a lot of research right now on scaling those tools up to hundreds of thousands and millions of neurons. However, we're still not going to get to vision-based perception kinds of systems, which is where there's a lot of interest. So that's where new research is really important. So where does that leave us? AI and ML are going to be used to meet demands for increasingly autonomous aircraft. There are a lot of technical challenges and certification barriers in the aviation world, dealing with unknown unknowns. New cert guidance is being developed, and new assurance technologies are being developed, but there's never going to be any one size fits all; we can make progress by focusing on these very specific applications. There are a couple of interesting papers if you're interested in reading more. Thanks. Thanks again, Darren, appreciate that. All right folks, we're going to go to our second poll and gather some more data. May we have the second poll, please? All right, so the question for poll number two: what do you consider the biggest potential risk associated with AI-enabled autonomy in nuclear operations? Please take a minute to fill out the poll, and while you're doing that, I will introduce our second speaker, who will be joining us virtually in a minute. Chris Dixon is the facility operations director at Global First Power. Again, Global First Power, if you don't know them, is a joint venture between the Ultra Safe Nuclear Corporation and Ontario Power Generation, OPG. Global First is currently working to construct and operate a small modular reactor called the Micro-Modular Reactor. They're doing it at the Chalk River Laboratories in Ontario, a site owned by AECL, the Atomic Energy of Canada Limited, and managed by the Canadian Nuclear Laboratories, CNL.
The project is looking to serve as a model for future nuclear energy projects in remote communities and heavy industry, so it's good we have a forward-looking part of the nuclear industry joining us here today. Chris brings over 29 years of nuclear operations leadership experience as a licensed shift manager at the Pickering Nuclear Generating Station and assistant operations manager for the Pickering and Darlington Nuclear Generating Stations. Chris is a diverse and inclusive, teamwork-focused leader. He's dedicated to the integration of new technological innovations that will enable small modular reactors to be an inherently safe, carbon-free, and cost-effective electrical generation technology for communities, supporting positive climate change action. So with that, Chris, I hope you can hear the applause in the room, because we're going to give you a nice warm round of applause and wish you were here joining us in Rockville, because again, it's 70 degrees here, and you're up north in Canada where it's probably a little chillier. So on that note, everyone please put your hands together and welcome Chris. Hi everybody. I can't hear everybody's applause. Can everybody hear me? Loud and clear, yeah. And again, I really wish I could have actually come down, but here in Canada it has actually officially gone above the freezing mark, so it's not quite 70 degrees here. We're now exiting our hibernation phase, and it's essentially federally mandated patio season now from here until October, so I need to participate in that as well. So again, thank you for having me here. I am Chris Dixon, facility operations manager for Global First Power. You can go to the next slide, please. As said before, we are a joint venture: Ultra Safe Nuclear Corporation provides the design knowledge, and Ontario Power Generation, my, as I say, home office, provides project management and, in my case, operational experience.
And we have kind of a unique business model: especially here in Canada, and also looking at island states, we're looking to provide inherently safe, clean nuclear energy for off-grid capabilities, which presents us a really unique challenge, and that's why we really dug deep into looking at autonomous operation and AI. Move to the next slide, please. So, where we are in the nuclear power industry right now: unfortunately, we are kind of lagging behind a lot of other industries when it comes to how we're looking at automation and the data management that's coming. We exist in the operational technology, or OT, world. We pull data in, but up until relatively recently, we haven't done a lot with it. Data and performance information is being pulled into monitoring and diagnostic centers, but that information is generally just manually reviewed by engineers or analysts to understand potential trends or problems within components or systems. And again, it's very heavy, very labor-intensive. More recently, in the last couple of years, there's been a little bit more scratching at the surface of the utilization of digital twins, but even that's very much in its infancy. There are a few projects here and there in engineering space and design space.
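The manual trend review Chris describes is the kind of screening that is straightforward to start automating. Here is a minimal sketch, assuming a simple rolling z-score is an acceptable first-pass detector; the window size, threshold, and readings are made up for illustration, and real monitoring and diagnostic centers would use far more sophisticated models.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate sharply from the recent trend --
    the screening that engineers currently do by eye."""
    flags = []
    for i, x in enumerate(readings):
        recent = readings[max(0, i - window):i]
        if len(recent) >= 3 and stdev(recent) > 0:
            z = (x - mean(recent)) / stdev(recent)
            flags.append(abs(z) > threshold)
        else:
            flags.append(False)  # not enough history to judge
    return flags

# Hypothetical component sensor trace: steady, then a sudden excursion
readings = [10.0, 10.1, 9.9, 10.0, 10.1, 15.0]
print(flag_anomalies(readings))  # only the final spike is flagged
```

Even a screen this crude shows the shape of the idea: the machine watches every channel continuously and surfaces only the excursions, so the engineer reviews candidates instead of raw trends.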
There are obviously industries which are much further ahead of us, say the oil and gas industry, in the utilization of digital twins, but at this point we're just starting to dig into that. And the final part of that is really the actions and the recommendations that we get from our digital twins, or from our AI, or from our machine learning: essentially, what knowledge can we gain from the machine to understand what kind of predictive analytics, or even operational decision-making, the machine can assist us with, to remove a lot of the burden of decision-making from an individual operator. And unfortunately, right now this is essentially future tech. Move to the next slide, please. So I'm sure everyone has seen this. This is essentially the five levels of automation from NUREG-0700. Really, what I want this as is a baseline for the discussion. Although it feels very future-tech, there are elements, especially where I grew up, which was essentially the CANDU industry, there are levels of automation that approach what we would consider almost autonomous operation. And there's a big distinction between automation and autonomous. For the purposes of the operational decision-making kind of guidance that I'd like to propose here, I'd like to look at item number five, and that's autonomous operation; in CANDU space, it's essentially our reactor control system. So next slide, please.
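As a rough aid for the discussion that follows, the five-level scale can be written down as a simple enumeration. The labels here are an illustrative paraphrase for this sketch, not the exact wording of the regulatory guidance:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Illustrative paraphrase of a five-level automation scale;
    the terminology in the actual guidance differs."""
    MANUAL = 1        # operator senses, decides, and acts
    SHARED = 2        # automation assists; operator decides and acts
    BY_CONSENT = 3    # automation recommends; operator must approve the action
    BY_EXCEPTION = 4  # automation acts; operator can veto or intervene
    AUTONOMOUS = 5    # automation senses, decides, and acts; operator monitors

# Chris's reactor-control-system example sits at the top of this scale
print(AutomationLevel.AUTONOMOUS.name)
```

The distinction Chris draws between automation and autonomy is essentially the jump from levels 2 through 4, where a human stays in the loop, to level 5, where the human becomes a monitor.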
So, if anybody's had the opportunity to be able to run a CANDU, they are relatively complex machines. There are hundreds of points of data which come into the reactor regulating system, where multiple algorithms actually control reactor power: you know, the reactivity control devices, the liquid zones. And of course, it's bound within its own failure mechanism: once a number of parameters go beyond, not a design limit, but sort of a safety margin, it fails safe and shuts down the reactor, as expected. However, there are also elements of input failure, a faulted signal for example, that wouldn't actually reach the tolerance for shutting down the reactor but may cause what we call an unrequested power change. Now, as an operator, I can't possibly understand the hundreds of inputs and diagnose each one specifically. But what I'm trained in is to understand how the algorithms work, what the main control functions are, what the major failure modes are, and ultimately what the response is, on me and on the reactor. So, knowing that, as I said, there are hundreds of points of data which come in in very quick succession, essentially my operational decision-making is procedurally enabled, and it falls into some standard sets: obviously, the alarm and parameter monitoring; safe-stating the reactor, being able to reduce reactor power to give you margin to safety; then moving into a diagnosis and understanding. And that diagnosis is time-limited, of course, in an unrequested power change: it's either understand the failure mode, safe-state that particular parameter, and be able to determine where the operability is at, or, if that knowledge isn't immediately reached, safe-state the reactor, put it into a shutdown and guaranteed safe state. Our overarching priority, of course, is the safe
state of the reactor, rather than the diagnosis of the reactor or the failure mechanism itself. Next slide, please. So, as we start to move into AI technology, or machine learning, the AI and automated systems will ultimately start to perform a lot of the safe-stating and diagnosis response that normally I would be doing as a licensed operator. And that will put me, or my operators, into a very different kind of position: we will ultimately be the verifiers of actions by the machine, rather than the initiators of the action itself. And as such, the AI control systems must be designed with an AI risk management framework, to allow for the responsible development of AI and, more importantly, to increase the trustworthiness of AI systems. For me as an operator to be able to use and trust an AI system, it needs to be designed with the four NIST principles here for AI development. So, starting off with the AI risk management framework: it clearly has to be safe; it has to prioritize safe response. It has to be secure; cybersecurity is without a doubt the most important aspect of this. It also has to be resilient, in terms of brief power interruptions or communications interruptions. And most importantly, it has to be understandable to the operator. If you go back to the concept of how I would respond to a very complex system, ultimately I need to understand why a machine has done what it's done, for me to verify that's the correct action and to ensure that the machine continues to run, or safe-states. And the understanding of that really then goes into the system design principles for AI: the explanation, first and fundamentally, has to be human-centered. It has to be understandable to the operator itself. So it's not just why an action has happened, but the rationale: what inputs drove the machine to take the automated action that happened. And
of course it has to be accurate in terms of the explanations. And the final part, like any AI, is that there have to be fundamental limits on what its operability is: once it goes beyond a certain parameter, it will automatically safe-state, without operator intervention. Next slide, please. So, automation doesn't happen in a day, because Rome wasn't built in a day either. This is going to be a long, drawn-out evolution that is going to require the collaboration of everybody in the industry. If you take a look at the automation evolution here, right now we are essentially at the far left, under reporting: data comes in from instrumentation and control in the field, and currently we are starting to analyze it, but that's done on a very manual basis. The first step that we're going to be looking for is essentially the predictive analytics. The idea is that the machine can diagnose, through algorithms or through the AI itself, to predict when components may fail, to do a predictive analysis of when a component will fail so that you can replace it in advance and obviously reduce the forced loss rate. Ultimately, the changeover from the OT to the IoT world is how you start to integrate that asset management into essentially your IT systems, your architecture, your enterprise resource planning or your enterprise asset management, essentially your APM. The machine then feeds into your business management systems to automatically schedule maintenance, to be able to establish maintenance in advance of failure. And then, as we ensure the safety of the systems, as we give it more OPEX to be able to generate, and as we become more comfortable with it, then we start to really move into more of the recommended actions for the operator. You know, not just that a failure may happen, but, "I have swapped over a feed train because of these parameters." The operator initially will confirm and allow that action to
happen, but then ultimately what we're looking for is the autonomy of that machine: the machine may take the action as long as it is inherently safe, the operator can verify that the action and the response are correct, and the operator essentially becomes a monitor of that. Next slide, please. So initially, this first phase here is essentially how you actually do the predictive analytics in your asset management. It has nothing to do with control of the system yet, but ultimately this is how you're going to be able to protect the asset on an economic basis. As I said, ultimately it will feed automatically into the APM, which will feed into EAM and ERP and schedule maintenance. Next slide, please. And finally, obviously, there are the demonstration phases. It doesn't happen just with the first reactor; it will have to happen with multiple iterations of future phases of reactors. We start wide and move in closer, but essentially once you can prove the inherent safety or the inherent security of the reactor design, as you start to pull more information off the machine, you can start to automate outputs of it, starting with relatively simple non-safety-related ones and then ultimately, potentially, moving into safety-related systems. That clearly will only happen after a lot of time, a lot of experience, and with fixed algorithms rather than an open machine learning framework. That's all I have; questions and answers I will take after the presentations. Thank you very much. Thank you, Chris, appreciate the presentation. Anybody who has systems drawings on his board in the background means serious business, that's a guarantee. All right, we're going to go to our third polling question, and then we're going to glimpse into the future with Elbert's presentation. So if I could ask our folks to bring up poll question number three, please. And again, this is where I'm asking our wonderful audience to participate, both online and here in
person. OK, let's go for question number three; we're going to go for the third one, please. OK, that looks like number two still. There we go. So, question number three, and again we're going to glimpse into the future with Elbert's presentation, so this is again a good temperature measurement of the crowd: how soon do you think commercial nuclear will be using AI applications in an NRC-regulated activity? Looking forward to your responses there. As you think about that, we'll welcome Elbert van der Bael (I think I got that right; van der Bael, that's a tough name) to the stage. Elbert is the director of marketing and solutions consulting at Yokogawa, the Yokogawa Corporation of America. He joined Yokogawa in 2008 and has held various roles in marketing and sales throughout his tenure, but don't be fooled: he's extremely technical and on his game. He holds a master's degree in chemical engineering from Delft University of Technology and has been working in the industrial automation industry for 25 years. In addition to his passion for co-innovating with end users in the process industry, Elbert has had the pleasure of working closely with the headquarters organization in Japan, as well as several research and development centers within Yokogawa globally, on new technology developments, including artificial intelligence and supervisory control and data acquisition. And again, he's going to be giving us this glimpse into what is possible, at least in the chemical industry. So please, again, you can rock out and you can wave, please put your hands together. Thank you very much, and good afternoon, everybody. It's a pleasure to be here on this stage and to share some of the experiences Yokogawa was able to generate back in Japan, and I'm happy to share with you some of the approach we took and what the results were with this specific chemical customer. I will be talking about a use case in the chemical industry, and hopefully that will appeal to you in the nuclear industry as well. We are familiar
with the nuclear industry: we sell some recorders, and we actually sell some safety systems up to SIL 4 classification, which is solid-state technology, and that's a little bit far away from artificial intelligence, I guess, but nonetheless I think this is an interesting use case, so let's see what we can get out of it. I'll start off with a picture of where we see the future going in the process industries in general, and I think a lot of this has been reflected already in earlier presentations today, also in the previous session when they talked about human factors engineering, and we were challenged to think about the most interesting things that will influence our future; this picture reflects some of that. If you look at the state of the industry, most of our customers are currently running in an automated state, and there is a lot of drive in the industry to go to a higher level of autonomy. For the last 15 years we heard a lot about the retirement wave, but what we are actually facing as an industry now, after COVID, is that we are actually suffering from it, and we really are starting to lack the skill sets that are required to continuously operate our facilities. That's really what we see now as an uptake in the industry to go to that higher level of autonomy, and that's depicted on the top side of the picture. Then, looking back at my 25 years of history in automation: when I joined the industrial automation segment, we had proprietary systems, nothing was connected, air-gapped technology; people would actually say, "No, our systems, our operational systems, are not connected to anything." But then the industry wanted us to move to commercial off-the-shelf technology; we embraced Microsoft software and we started making systems more open. That's all great, and that's really the basis for IT/OT integration. What you see at the bottom of this slide is really the drive in the industry to get more connected, and
what you see happening now is that it's not even limited to the companies themselves; it's actually going into the supply chain and even into society. I always like to give the example of the Shell refinery in Rotterdam, which has excess carbon dioxide and excess heat, and they're actually putting that into the greenhouses and the district heating. That's great, that's a good example of sustainability towards the future, but you need to do a lot with data integration and security to make that happen. So I think the previous session actually touched upon this ever-increasing complexity that we see, and that's also what we see, and we hope as a company to be ready to help our customers going forward. But I think this is the journey we're all on: our world is becoming more and more complex, and we're all struggling to get the right resources in place to be able to deal with that complexity. So let's dive in a little bit and talk about AI and what is required in general to get to that higher level of autonomy. AI is essential to get to that higher level of autonomy. It's really about getting people out of harm's way, making the industry a safer place to live near, and preventing incidents from happening. Please do not send humans to inspect a gas leak and those types of things; send a robot that's already using artificial intelligence. And I think I saw 20% of the respondents actually stating that they're already implementing AI, so I'm really curious, and would invite those 20% to also speak up and share those use cases, because I think it's those types of use cases that enable faster adoption. A lot of the things here were already emphasized by Chris and are related to asset management and predictive maintenance, and it's not new; we're already doing that as an industry. We have had soft sensors in place for the last 15 years already; AI is just enabling us to do it even better. And yesterday I was present at the AFPM, the
American Fuel and Petrochemical Manufacturers, the North American association of refiners and petrochemical companies, and they were also talking about some of the successes they have had deploying AI to speed up early detection, for example for heat exchangers: it now allows them to detect an upcoming issue 44 days in advance and to address that issue before the heat exchanger would actually trip the facility. So I think it's taking models and technology that we already have and augmenting them with artificial intelligence to make them even faster, more efficient, and more capable of getting better results. What I will be talking about in this pitch is mainly the advanced operation side of things. This is where it becomes a little bit scary, because are we really going to deploy AI on reactor control in nuclear, or on chemical processes? I think there are ways to do that safely, even with artificial intelligence. Before I go into the actual use case, I just want to talk about this one, and maybe, by a raise of hands, who is familiar with this basic control dilemma that is given to students? There are probably some people here. This is the three-tank challenge, and it's just hard to control; that's why it's always given to students starting to work on control, to learn how you can get to a stable situation where you're not oscillating, for example, in the outputs. I have a small video on how we are deploying what we call FKDPP, factorial kernel dynamic policy programming, which is the reinforcement learning algorithm that we developed with the Nara Institute of Science and Technology in Japan, and that is used on this simple use case before I talk more about the industrial use case. So I'm going to show the movie, and I will narrate through it. This is showing the three tanks, obviously, and it's also showing how we actually go through that process of learning. FKDPP allows you to get to an optimized AI control model within 30 iterations. So this is just the first trial, and you see there's
not really a stable situation; the process is not getting to a stable state, it's just overflowing one of the tanks. But then, after 20 iterations, it actually becomes more interesting: you see that the AI control model is filling up the tank we're looking at more slowly, and then you see it reaches a state of oscillation, which is not preferred, because you're overshooting and undershooting. After 25 iterations of learning, you will see that the AI control model is getting a little bit more intelligent, and even though there is still a little bit of oscillation, you see that the model is already able to start controlling the process very quickly and efficiently. And at the 30th iteration, this is where it starts to be a little bit amazing, and stable. This is where we noticed that the AI control model, which was developed and trained on a simulator and then deployed on the actual unit, was able to do it 50 to 70% faster and more efficiently than any of the other control algorithms we had in place. So I think this movie is just an illustration of what is possible with AI, and hopefully it prompts you to take a look at the possibilities of deploying such a thing in a safe way. Let me talk a little bit about the actual chemical use case that we were dealing with in Japan. The customer was ENEOS, a chemical customer who had a distillation process where the operator needed to make a manual intervention every 15 minutes. This is challenging: you don't want to have manual operation 24/7, and every 15 minutes doesn't make sense, so the customer selected this distillation process to try out the AI control model. First of all, we started developing the simulator model, because there was no simulator for this unit, so we had to go through months of creating the simulation model, and then we were able to use reinforcement learning: we penalized bad
behavior and outcomes and rewarded good behaviors and outcomes, and that's how you train the model. What is interesting is that we're not controlling the setpoint value: the AI control model is directly controlling the outputs, and that is also why it's capable of doing things faster and more efficiently than the advanced process control or PID control that we're used to. In the AI control model we actually set two objectives. One was the quality of the distillation process, to make sure that enough separation is taking place in the distillation process, but the other one was really energy optimization. As you could see in the previous slides, we were actually controlling the two waste-heat process streams to make sure that you can manipulate the actual energy efficiency of the process. This is really what we did, and it is a butadiene process (do I say that correctly? I'm always struggling with that, and I am a chemical engineer, so forgive me). I think the outcome of this learning process is that we were able to actually do autonomous control. At first it was only 35 days, and people ask me all the time, why only 35 days? Well, they had a scheduled maintenance stop. OK, so that's interesting, so the next question is, how long have you been running autonomously now? The answer is 22 months without any manual intervention. So this is a clear example of what can be achieved with an AI control model in a safe way, and I think that's the main thing for you as a nuclear audience: how can you deploy something that is not a generative AI model that could cause a runaway in a facility, but an AI control model developed offline and then deployed online? It also means that if something major changes in the process, you have to retrain the model; however, small changes, or even upsets, you can train in a simulated environment. And guess what: most simulators today you can speed up by a factor of 60, so the whole
training of the actual AI control model is not years, as with traditional APC where you have to go through step-change procedures in your process; it's actually a matter of weeks to months to come up with these types of AI control models. So I think the case is made that this can be implemented in a safe way. Obviously, there are a lot of things you need to take care of when you go through this implementation: we had a lead time of one and a half years, and the last phase was really how to integrate the AI control model into the existing controls and into the safety interlocks that are actually in the plant. And that is without question: when you start using AI control models, or AI technology in general, it's not just deploying some new technology in your facility. It comes with all the change management steps that you need to take to train the people and make sure the operators understand what you're doing, and the operators were part of this process, because in the end they need to rely on this. So, in summary, I think the ENEOS case is a clear example of the adoption of AI in the process industries. I think it is proven that you can deploy it in a safe way in the process industries, and I would really advise everybody to take a look at what is possible with AI, even for control or advanced reactor control on the nuclear side. I'm really looking forward to any discussion we can have, maybe during the panel, on some of the other use cases in the nuclear industry and how they resemble what I've just presented. Thank you so much. Thank you, Elbert. We're going to run to our fourth and final polling question before we introduce Matt, so if I could ask the kind folks at AV to bring up our fourth and final polling question, please. And again, a reminder: the phone is the best way to ask questions. We have a few questions coming in, thanks, and let's keep those coming, because we're going to have plenty of time for questions after this. So, our fourth and final
question: what process would be most effective in promoting responsible development and deployment of AI-enabled autonomy in the nuclear industry? Folks, have a think on that and please respond. As we're doing that, we're going to welcome our anchor to the stage, Matt Dennis. Matt Dennis is a data scientist here at the Nuclear Regulatory Commission, in the Office of Nuclear Regulatory Research, my favorite office in all the agency. He leads the agency's effort in developing and implementing the NRC artificial intelligence strategic plan, which is no small feat. Additionally, Matt supports the development and maintenance of the MACCS consequence analysis suite of codes and conducts severe accident consequence analyses. Prior to joining the NRC, Matt held positions at Northrop Grumman and Sandia National Laboratories. He also has a BS and an MS in nuclear engineering from Missouri University of Science and Technology in Rolla. All right, so please put your hands together for a big rock and roll welcome to Matt Dennis. Thank you, Vic. Since I'm the last speaker and I want to give us plenty of time for questions and panel discussion, I will keep my presentation hopefully brief. And I will say, on that last question, I was actually disappointed to see that the top vote wasn't for federal regulation; that was the lowest one there, so I'm guessing people don't want me to over-regulate the use of AI in the nuclear industry. Disappointing, but I'll survive. In my presentation today I'm going to talk about what we're doing at the NRC. If you've heard my remarks at some of our previous public workshops, a lot of this shouldn't come as a surprise to you, but I will disappoint you all: I don't have any AI-generated photos of the royal family, and I don't have any panda pictures or Tesla crashes. My other slide decks do, and I do talk about ensuring safe adoption of AI and ML with respect to some Tesla crashes, but not today. So today I'm going to talk a little bit about where we're at and where
we're going, and give you a little bit of a story arc of the NRC's progress in artificial intelligence and machine learning. We recognized three to four years ago that the industry wants to use AI, and that shouldn't come as a surprise, although three years ago we didn't have ChatGPT, so our environment was quite different from what it is today, or even a year ago; as I reflect back on the AI session we had last year, things have changed quite a lot. That is part of why the AI strategic plan was developed: to be flexible and look forward to the next few years, so that we can prepare the staff for this ever-changing technology, should the day come when there is something the NRC needs to evaluate with respect to AI usage in an NRC-regulated activity. In the second box in the middle, you'll notice some discussion about the federal agencies. From the U.S. federal government there is a slew of stuff that has come out from the current administration: there are executive orders, there is Office of Management and Budget guidance. So it's not only us at the NRC thinking about how we would review and evaluate AI technology; that same thing is being applied to us. There are guidance documents like the NIST AI risk management framework, so the federal government at large is considering this issue too, and I'll make a comment about some of the stuff we've seen across the federal agencies in a little bit. We have been involved in a number of activities: we're involved in IAEA working groups, and we're involved in some trilateral engagements with the Canadian nuclear regulator and the UK nuclear regulator, not to mention that at any one time you can find an AI conference coming up, so there's a lot of material out there for us to learn from. And then, finally on this slide, internally we are also, like I said, grappling with this issue: just as our industry wants to use AI for beneficial things, so do we. The chair recently put out a tasking memo, and the group that's been working on that
project has gotten a slew of ideas recently, and they're working through those. So I guess my point is there's no shortage of topics where AI could be used, both internally and externally, and the interest is not just in our area. A couple of my colleagues and I went to a DoD conference; the Department of Defense is also grappling with this, and in the plenary at the very end of the session, one of my takeaways was when a gentleman stood up and said, "We're collecting our use cases, and it should not come as a surprise that 60% of our DoD use cases are: build me a chatbot." So the industry is wanting to do this, and we're wanting to look at using generative AI, so it's definitely something that's front and center. Now, following a discussion that happened in the previous session, and one that's been brought up here as well: clarifying some things about automation, autonomy, and AI. AI is but one way to get to that panacea of autonomy, and so we're looking at AI as an enabling technology; it's just one way to get there. Not all uses of AI are fully autonomous, and as we put in the strategic plan, and as I'll talk about on the next slide, we're taking a graded approach. There are going to be use cases where AI is really a tool, an enabling tool for a design engineer or an operator to get information and then make a more informed decision. So we're looking at a spectrum, from early use cases where it's an enabling tool for decision making, all the way up to the area where one day it could be, as Chris mentioned in his remarks, used for autonomous operation. Multiple definitions exist, but I'm not here today to define what AI is; we made a valiant attempt in our strategic plan to describe what we consider AI to be, and I will tell you that took a lot of back and forth and going round and round. So there are a lot of definitions of AI; what I want to discriminate on here is automation and autonomy, clarifying that automation is based on prescriptive and
predefined rules, while autonomy allows the system to respond to situations that were not pre-programmed or anticipated. We're looking at this, like I said, as a spectrum of integration with human decision makers, and the last session, if you were able to attend, talked a little bit about how AI might come up in remote operations. As you're aware, there is going to be a graded approach to human involvement in AI integration, and the table here is taken directly out of our AI strategic plan. The original intent was to start the conversation on where your use case falls in this spectrum. These categories are not that different from NUREG-0700 on the levels of automation, and Chris had a similar thing in his slide earlier. We just wanted to start the conversation about what that would look like, where your use case falls: is it insight, or fully autonomous? From our data science and AI regulatory applications public workshops, which we've been holding since 2021, I can fairly confidently say that most use cases fall in the level one category, maybe level two. So where are we going with all of this? For the end goal, you'll see the purple line on the slide that basically shows where we're at today, March 2024; we put out the project plan in September 2023, and the goal down here is to start looking in FY 2025 at developing a framework for AI-enabled autonomous operations. Part of that is being fed by a regulatory analysis that we're currently doing, and then a technical evaluation framework in these tasks. All of this is leading towards that end state where we would potentially be considering what the future looks like if you were to have an AI-enabled autonomous reactor system (and I should caveat this: not just reactors; it could be fuel cycle facilities, it could be anything under the NRC's umbrella of things that we regulate). So moving forward, we must remain vigilant, and
we're trying to keep tabs on this technology. As I mentioned, two years ago ChatGPT wasn't a thing, and that totally changed our way of looking at this, and we had to pivot. The NRC has been proactively looking at this technology and trying to address the gaps that may exist in our regulations or guidance, and that's part of why we started doing the regulatory gap analysis, which will be a topic of discussion at our next public meeting in September of this year. We're working on training programs for staff, so that we can upskill all the very smart people that we have at the agency and prepare those that are interested in the topic to participate in the technical discussions on evaluating the technology. And lastly, we continue to encourage stakeholders to reach out to us. Our public workshops have been a great engagement tool, as is the RIC; every day here we hear about some new use case, so I continue to encourage our stakeholders to reach out to us and let us know what is going on in this area. With that, I conclude my remarks, and I think we transition back over to Vic and Q&A. Thanks. All right, thanks, Matt. I think we're going to do questions and answers now, so if I can ask the AV folks to bring up the QR code again for folks to ask questions. We do have some questions coming in, so we'll spend about 25 minutes or so doing questions. All right, let's start. Here we go, let's start with question one. This is going to be for Darren, Elbert, and Chris: how do you prepare for unforeseen circumstances, and how effective is AI in scenarios it is not trained for? So, Darren, I can ask you to go first, and then we'll go to Elbert and then Chris. Why don't you come on?
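The "trip to safe state beyond a certain parameter" behavior that Chris described earlier can be sketched as a guard wrapped around any controller, AI or otherwise. This is a minimal illustration, not anyone's actual plant logic: the `Envelope` class, the parameter name `tank_level`, and the bounds are all hypothetical, chosen only to show the pattern of applying an AI recommendation inside a predefined operating envelope and safe-stating outside it.

```python
from dataclasses import dataclass

@dataclass
class Envelope:
    """Operating envelope: allowed (low, high) range per input (hypothetical)."""
    bounds: dict  # name -> (low, high)

    def contains(self, inputs: dict) -> bool:
        return all(lo <= inputs[name] <= hi
                   for name, (lo, hi) in self.bounds.items())

def guarded_step(envelope, ai_action, safe_state_action, inputs):
    """Apply the AI's recommended action only inside the envelope;
    otherwise fall back to the predefined safe-state action."""
    if envelope.contains(inputs):
        return ("ai", ai_action(inputs))
    return ("safe_state", safe_state_action(inputs))

# Usage with made-up numbers: trip to safe state when level leaves [0, 0.8].
env = Envelope(bounds={"tank_level": (0.0, 0.8)})
action = guarded_step(env,
                      ai_action=lambda x: {"valve": 0.5},
                      safe_state_action=lambda x: {"valve": 0.0},
                      inputs={"tank_level": 0.95})
# action[0] == "safe_state"
```

The point of the pattern is that the AI's limits of authority are fixed, inspectable rules outside the model itself, which is also what makes the trip condition recognizable to the operator.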
Sure. As Matt mentioned, responding to situations that weren't explicitly trained for, generalizing if you will, is really one of the main strengths of AI and machine learning systems, so it's a capability that we rely on. The key is determining those circumstances: what is it supposed to do, what is it generalizing to or from, what's the correct behavior, and can you bound that? Depending on the application (remember, there's no one size fits all), there are different ways to do that. One critical thing for us is to define what's called the operational design domain: what is the region of input space over which this system operates, and have you sufficiently covered that space, either through testing, to show that your data sets are complete and representative relative to that ODD, or through analysis: do you have tools that allow you to completely assess what the behavior of this system can be? Again, you can do that kind of thing for smaller systems; we reach scalability limits for larger vision- and perception-based systems, where it's actually very difficult to define exactly what it means for a data set to be complete relative to the scenarios that you expect to encounter. So it's not a solved problem by any means, but there are certain use cases, which actually are operationally of interest to us, where you can put precise bounds on what the system will do, even under what are supposed to be unanticipated scenarios or untrained inputs. Thanks for highlighting that, and I want to dive a little bit deeper into the example I gave on how we actually deployed the AI control model. I think I mentioned in my presentation that if you look at, for example, advanced process control, you have to do step changes in the process, and there are things you can do with an AI control model learning on a simulator where you can actually make the model more robust than what you're capable of creating today. So I think AI is actually allowing us to create a far more robust model
rather than one more vulnerable to the unforeseen, if you will. That was also why, after those 35 days, the Japanese customer wanted to go through wintertime and summertime, to look at the impacts on the facility: would the maintenance stop actually disrupt the AI control model, or was the AI control model immediately able to pick up the control again? The answer was yes: the AI control model was trained so well on the historical data and the training data sets that it was able to deal with a lot more unforeseen circumstances than regular control would allow you to, and you can even include accident situations and unsafe situations that you wouldn't be able to exercise on real systems. For myself, I would say that having clear limits of authority on what the AI can actually do, and having those defined and recognizable by the operator, so I know when a parameter is out of spec such that the AI needs to trip off, is essential to define up front, with the understanding that the operator essentially will become the verifier and the decision maker over the initial aspects of the AI; and then if for any reason we would consider that the AI is outside of those parameters, obviously the training for the operator will allow them to safe-state it. Great, thank you, Chris. All right, the next question is going to go to Matt. Matt, how can AI explain its decision to the operator, given the black-box concept? Do you need a separate AI explanation of the AI's operation? From the regulatory perspective, short answer, yes. One of the tripping points right now is explainability, and I was thinking about this in light of the comments of the panel here today: industries, from what we've heard, can be as skeptical of this technology as we are, maybe, as a regulator. The person on the end-user side of the technology can actually be quite skeptical about the feedback it's getting. I don't know if anyone remembers the first time they ever drove an autonomously
operated vehicle, or maybe one that at least had emergency braking. I was quite skeptical the first time, thinking, is it actually going to stop? Is my car going to slow down when I approach the vehicle ahead of me? So until there's a level of trust that's built up, or a method to explain to the user, not just the regulator, in a very transparent way why the system is making the decision it's making, this is no different from a traditional control logic system: if you can't interpret it, make a decision, and trust that it's doing what it's doing, then that's a hard sell to get someone adjusted to it. I think about even some of the systems that have been demoed to me; my first questions were, why did it do that? Why are the uncertainty bands this big? What if I pick a different model? Those are all the questions that come up, and I think they are quite important in developing, up front, a user interface, the model, and all that goes into it, to make the AI system explainable to the end user and the reviewer. Yeah, I would like to build on that, and I want to emphasize the importance of human factors engineering and the whole change management process, right? We have done that for the last 20 years, going from PID control to advanced process control, and customers that already went through that process have a lot of learnings on how to bring the operator along and explain what the APC is doing. I think those are at an advantage, because they will already be used to that process and will be much more ready to accept the AI control model. In that sense, people who are going from pneumatic or PID control immediately to AI will, I think, experience a big culture shock, and you really have to think about how you set up your program to involve all the people at the site to gain that trust and confidence in the AI control model. I want to push back on that just a little bit, because I like to start a fight, and I think you said this, Matt: there are
aspects of this where explainability doesn't have to be some new AI-specific problem. It's true for any system that we design that it may or may not require explainability; the operator or the pilot may or may not need to know or anticipate what the system is going to do. For many of the things that we're working on now, the pilot has no interest whatsoever in how the altimeter came up with the number that it's going to display, or the recommended most efficient altitude that it should fly at, or whatever the function is; they just expect to be given that information. The explainability is kind of baked into the requirements for the system, and that includes the human factors aspects of it. There are probably some applications where we have to deal with that specifically, but a lot of times, in my experience, it's just part of the requirements for the system. So I want to help out Matt a little bit: if you look at generative AI, and ChatGPT was referenced, and natural language processing methodologies, that's actually where you get a lot of that black-box feeling, and that's where validation and verification become more important. Yeah, I agree; I'm not putting that on an airplane, though. I hope that makes you happy. Chris, don't you feel left out of the explainability argument here? Well, I was just thinking that operators have always been really good at breaking great designs, so I think it's really important to bring the operations team in early in the development of the AI, because, and I'm going to defend Matt here, you have to build trust with operations. Operators will tend to overthink situations; it's only when they can piece the elements of information together, and have the confidence that the machine is working correctly, that they'll actually follow what it's doing. Otherwise, you know, they're going to get skittish and try to safe-state it, and that, I think, goes to Matt's point. All right, thanks
for the discussion, guys, that was good. Yeah, keep it going. Darren, a question for you: what from the aviation domain is most amenable to adoption by nuclear? I know (I think you said when we were chatting earlier) that nuclear was your hobby, so draw a parallel, maybe, between aviation and nuclear.

Yeah, so clearly a lot of the things we're working on probably don't have direct analogies, like the perception-based systems where we need to automate a pilot looking out the window, or something like that. One that comes to mind is that we're working on systems now to detect pilot fatigue, especially as you start thinking about having fewer pilots in the cockpit; if you only have one pilot, you want them to be awake and alert, and I think this is a concern for nuclear operators as well. Some of the really useful technology for that is camera-based systems that monitor mouth and eyelid position and blink rates and are able to correlate those with a fatigue level. So we're bringing this technology into our world, oddly enough, from the automotive world, where it's fairly well advanced, and we're looking at what has already been done there that we could take credit for in some of these automotive and trucking industry fatigue detection systems, which we will harden and reimplement and recertify in our world. So that's one good example, I think.

Thank you. We'll take a little different tack, and this is directed at you, Albert, but I think it's going to touch everybody: is AI being used to provide cyber security? So I think that if you look at cyber security, it is everywhere, right?
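The eyelid-position and blink-rate monitoring described above is often distilled into a PERCLOS statistic: the fraction of recent video frames in which the eyes are mostly closed. Here is a minimal sketch in Python, assuming an upstream vision model already supplies one eye-openness value per frame; the class name, window length, and thresholds are illustrative assumptions, not taken from any certified system.

```python
from collections import deque

class PerclosMonitor:
    """Rolling PERCLOS: fraction of recent frames where the eyes are mostly closed."""

    def __init__(self, window_frames=1800, closed_below=0.2, alarm_at=0.15):
        self.samples = deque(maxlen=window_frames)  # ~60 s of frames at 30 fps
        self.closed_below = closed_below            # openness under this counts as "closed"
        self.alarm_at = alarm_at                    # PERCLOS level that flags fatigue

    def update(self, eye_openness):
        """Record one frame's eye openness (0.0 closed .. 1.0 open); return current PERCLOS."""
        self.samples.append(eye_openness)
        closed = sum(1 for s in self.samples if s < self.closed_below)
        return closed / len(self.samples)

    def fatigued(self):
        """True once the closed-eye fraction over the window reaches the alarm level."""
        if not self.samples:
            return False
        closed = sum(1 for s in self.samples if s < self.closed_below)
        return closed / len(self.samples) >= self.alarm_at

# Illustrative use: 7 alert frames, then 3 near-closed frames, in a 10-frame window.
monitor = PerclosMonitor(window_frames=10, closed_below=0.2, alarm_at=0.3)
for openness in [0.9] * 7 + [0.05] * 3:
    perclos = monitor.update(openness)
```

A PERCLOS threshold around 0.15 over a one-minute window is often cited in the automotive drowsiness literature, but any deployed system would tune these values against validated fatigue data rather than use the placeholder numbers above.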
The threats of being hacked, of having intruders in your software somewhere: everything is connected, so it becomes more and more important that the design of the products is in compliance with the standards. IEC 62443 specifically is very important in that respect. I haven't seen too many AI deployments on our side for cyber security, but I think AI really adds value in being able to detect intrusions, all those types of things; just from my personal experience, I haven't seen a lot of AI there. Similarly, on the safety side of things, I think AI can help us out so much with the validation process. If you look at the current process, with the thick manuals we have, with failure-rate data and mean time between failures, all essential for your safety loop calculations, we are sitting on a tremendous amount of data, and AI can make our lives so much easier. So I see it as a possibility to make our lives easier and protect us from whatever is out there, but I myself am not that familiar with implementations to date specifically for cyber security.

I'll add one comment: it's clearly stated in our strategic plan that we will look at safety and security uses of AI technology, and our Office of Research actually has at least one, if not more, research projects on this very topic of using AI and ML to detect cyber security intrusions. Our other offices are very keenly concerned with this exact topic as well, both using AI as a tool to detect intrusions and considering it as a new vector for attacks. Those two topics are definitely front and center. I think our presentation tends to skew toward the operating reactor fleet, but as I said at the end of my remarks, that's not the whole story. We have materials facilities, we have cyber security and physical security, and we have the operating fleet, so it's not just power reactors that we're thinking about when we look at the umbrella of our AI strategic plan.

Chris and Darren, if you want to jump in on cyber security: any thoughts or experience, or what you've seen?

Well, it has to be the centerpiece; it's the question we have to be able to answer, especially as we move into autonomy and start to move toward more safety control systems, and I think it's up to the collaboration of the industry to make that the principal focus, the highest one.

Yeah, and I'm not involved in this kind of research, but in our research organization we definitely have people looking at evaluating data patterns to detect attacks. DARPA ran a program a few years ago, the Cyber Grand Challenge, where they had robots attacking robots and defending against the other robots, so that was an interesting interaction on both sides: using advanced techniques from AI both to create attacks and to defend against them. But the new threat vectors are very much on our radar as well.

Maybe to add, coming from the chemical industry: if you look at some of the majors, they actually pay a lot of attention to cyber security and to making sure all the layers of protection are in place. I also deal with a lot of the smaller and medium-size companies, and often they'll say, no, no, our operational system is air-gapped, it's not connected. That's not true. There's a lot of misconception about cyber security, and not all sites are as secure as they believe they are today. So that also comes back to education and cross-industry collaboration; I think it's about finding each other at conferences like this and sharing experiences to make sure we're doing the right things.

I appreciate that, guys. So we have a lot of questions; you really did your job on getting us questions, thank you. We have too many, so I'm going to do a choose-your-own-story. We have questions about hallucinations and questions about bias, so
again, we have a wealth of experience here. Maybe you can each share a story of what you may have experienced: a hallucination in an AI system, or a case where bias has thrown a monkey wrench into what you've done. Chris, you're thinking, so we'll come back to you. Darren, do you want to go first? Can I put you on the spot?

Sure. The term hallucination refers to the way generative AI, or large language models like ChatGPT, will often give you a response, with high confidence, that turns out to be wrong. Sometimes this can be related to simple mathematical facts that it wasn't able to memorize, things that actually required deductive reasoning. What that points to is that if we want to use that kind of generative AI technology in safety-critical or safety-adjacent applications, we have to have some way of detecting these hallucinations, these basically just wrong answers. So the ideal architecture is one where you use the generative AI to give you lots of really interesting, creative potential solutions, exploring a design space, but you still have some independent way to verify whether those solutions are correct and meet your requirements. We have a lot of applications internally for doing design exploration, even developing mathematical proofs that we can independently check, where generative AI is really great at coming up with candidate solutions, but the key is having an independent way to validate their accuracy.

I think the hallucination issue comes exactly from the generative AI side, so no discussion or debate here; I'm in agreement with you. It's sometimes just scary if you get the wrong suggestions, right, and you would be taking the wrong actions as a consequence. On the positive side of generative AI, I've seen the possibility to create your own protected environment where you can upload all of your technical documents, instruction manuals, general specification sheets. It's like an improved search engine: search engines can find stuff, and now, strengthened with AI, you can actually start asking questions. Sometimes I get questions from my customers like, hey, why do you have four power supplies in this junction box? I don't know, and it takes me an hour to find that answer hidden somewhere in an instruction manual. We have now seen that by uploading all those documents into the database and training the AI model, you can not only get that answer, it also points you to the document where it found it, the source that actually gave you that answer, and how trustworthy that source is. So I think these are tremendous benefits on the generative AI side specifically.

Thanks, Albert. Chris, go for it, please.

Yeah, I definitely see generative AI never entering the actual control space. I just don't see it. You know, I can see machine learning, I can see how it can kind of refine itself, but because of the risk of the machine essentially having a hallucination, producing a run that says "this is the answer" but being wrong, I don't think that's a risk we can ever really accept. Within control it has to be much, much more refined, much better understood, than just allowing, as I said, the AI to run rampant and make a decision for us.

Thanks, Chris. So we're getting short on time. Matt, I'm going to throw one more question at you, but I love the way the question was written, so I'm going to give credit to Taquia for asking it. I love this: "People who possess expertise in both nuclear engineering and information technology are likely to be highly valued in the market." I could not agree more; Matt, you've got to stay with us, don't leave us. "Securing such rare talents within the government often results in a significant discrepancy in compensation compared to their market value. What strategies could be considered by the government to secure such talents?" How do we keep our
talented people, how do we recruit the good folks?

Remote work. I'll put in a plug for remote work. Joking aside, I throw that out because there was recently a posting by the Department of Homeland Security for 50 positions that was shared with me, because I'm part of the GSA AI community of practice, and one of their selling points was remote work: highly paid positions, a big breadth of high-security job opportunities. But going back to the question: I really do think that, for our industry as a whole (and I've informally polled people as we talk about this), we have a lot of smart people, so training someone from the ground up is maybe not the best solution. One tack I see a lot of people taking is upskilling their existing staff, because this is an area where people are interested in both their personal and professional lives. I've noticed a lot of industry utilities doing the same thing, and other industries, the DOD, all over the place: basically putting training programs in place such that you can take someone who knows a lot about math, science, engineering, statistics, law, all sorts of things, and focus them on this area and upskill them into these positions. And then the retaining-them part, that's the harder question.

Yeah, but money: you've made them very attractive to poachers.

It is. As Vic said at the outset of the session, prompt engineering is a job now; that didn't exist two years ago, a prompt engineer. So I guess my point is that the Googles and Metas of the world can throw money at highly educated people; the rest of us have to be a little more mission-oriented, with some nice incentives, to really keep people around, I guess.

Just to build on that, and I fully agree, actually: we just completed an engagement survey within our organization, and the things that stood out were autonomy (not autonomous operation; autonomy for people, being able to make decisions), work-life balance, and career development and training. So I believe if you take care of your people in a proper way, then remuneration might actually become even less of an issue, right? No, I'm serious. I don't know, I like money. Let's see what I've got here. In our industry we've definitely lost people to the Googles and the Amazons of the world, but you know, what would you rather do: build airplanes or build web servers?

Well, if you get stressed and burned out, you might want to reconsider.

Yeah, exactly. I don't know.

I also think another aspect of it is being able to entice new people into the industry in the first place. I mean, for 20-plus years my office was a stale green control room with no windows, some very old dials, pen and paper. I've never been more engaged in my life than I have been in the last few years embracing new technology. I think the nuclear industry has been slow to adopt that; we're very conservative, we like to do the safe and assured thing, and as we should. But I think getting people excited about what we're trying to do with autonomous operation, whether it's about AI or a new nuclear technology, is the key. There are just not enough of us in the industry to meet the demand that is happening now to scale up, and I think the only way to do that is to start bringing other people in, and that's done through technology and just the excitement behind it.

Awesome, thanks, Chris, and thanks, all. So I do want to bring us back to what I opened with, the podcast, because that podcast made a point about the industry moving so fast in AI that there really are no AI experts out there; it's just moving too fast. When I first heard that, I said, yeah, that makes sense. But as we put this session together and I heard you all speak, I said, that's wrong. The fact that you brought your expertise, and that we have this level of expertise on the panel, warms my heart: to know we have incredible folks working on this, collaborating, talking. So, really, a heartfelt thank you for joining us today. Thank you for joining us here at the RIC, and thank you to the crowd for one of the best-attended sessions of the day and of the entire RIC. And with that, I'll say thank you. Let's rock and roll. Let's go out.