I'm going to go ahead and get started again here. Do we have any questions from anything we talked about already today? Please. OK, let me repeat the question so everyone hears it, and I'll answer it at a fairly high level. The question was: what model is used for independent verification of the safety assessment? And then you turn the microphone off by pressing it again, right? OK, I'm just going to turn this off for now. There we go, off. OK, good.

I'm going to defer part of the answer to next week, because we have quite a bit on verification and validation then. Is your question more about the kind of process to put in place, or more about how to actually do it? More on the process side, OK. That's a good question; I like that question. I should have written that one myself. OK, a model for verification. Let me put this back over here. Well, I repeated the question, Barbara, so we don't need the microphone; hopefully I didn't just break it.

Look, I hate to put the answer in this perspective, but there really is no one way to do it, and that's probably why you're having this discussion back and forth with your regulatory authority. But if we're thinking about process, let me highlight a couple of things from experience that I'd recommend focusing on. One, again, is the interface between you and your vendors. And this gets at the fact that you're not talking about just verification of the codes; you're talking about verification of the model itself, the computer model. The reason I want to focus on the interface issues is that there are many opportunities for mistakes in how the information flows from you to your fuel vendor, for example, and then back to you when they're designing a fuel product.

Your fuel will change over the years; you're not going to be using the exact same fuel in 15 years that you're using now. So you need to have a process in place to capture that change in your safety analysis and in all of the computer codes. These can be simple changes, things like changes in enrichment. They can be larger changes: changes in the actual diameter of the fuel itself, or changes to the spacers in the assembly. You might go to higher-performance spacers, things that give you more thermal margin in the reactor. People use those for power uprates, for example: they want to make more power, and you have to have more thermal margin so you can increase the thermal power of the reactor. There are ways of doing that by modifying the spacer grids in the fuel assembly. These are examples of changes that you have to capture in your safety analysis to make sure that the codes are actually modeling the real plant. So that's one area where I would definitely recommend having a process in place.

The other part, I think, is independent verification. I'm not sure how your organization functions, whether you do your own calculations in-house or whether you have a contract to do them. Well, that's a very typical process.
In terms of process, I would advise that you have a procedure in place so that, upon acceptance of the product from your contractor, you independently verify the results of the calculation, the input to the calculation, and how the calculation was conducted. There are several ways of doing that. Some organizations physically have their own suite of computer codes and run an independent calculation. Some people don't have that; in some cases they go through and physically inspect the numbers to make sure they're correct. Sometimes I've seen people require a contractor to run calculations against a known set of results, to compare against, so you have confidence in what they're doing. So there are many different ways of approaching it. But that's another area where I would advise having a process written down, where you actually go through and can demonstrate that you, as the operator, physically checked the input from the contractor.

Now, I've seen cases where people physically write that into the contract, to ensure that you, as the operator, get enough information to make that assessment. I've also seen situations where organizations say, well, I can't do that verification, because my contractor doesn't give me the information that I need. Well, I would argue that you're the customer: write it into the contract that they will supply the information you need, or make it accessible to you. They don't need to physically hand it over; you can get on a plane, go to their office, and do an audit of their information. But there should be a process in place to control that information.

You also need to think about how you go from the results of the safety analysis to how they're used in the plant. For example, when you do a new calculation with a new fuel product, that could impact your technical specifications. It could change your DNBR limit; it could change the allowable insertion limits for your control rods, et cetera. Make sure that information is properly translated to your operations department, so Ops has the right numbers, and when the operators are physically controlling the plant, they're doing it within the boundaries of the safety analysis. This gets back to the man-machine interface question: you should have a process in place to ensure that that transfer is checked and correct, because the calculation is worthless if the plant doesn't operate as assumed in the calculation.

And the tech specs do change. If you look at the technical specifications of power plants over the 40-year lifetime of a plant, they've changed a lot, and not just in the numbers. Sometimes new limits are established, depending on the situation. If you change fuel vendors, for example, you're going to get a whole new set of tech specs, because vendors do things differently, and they're not going to change their ways for you; you're going to have to adapt to them. So these things do happen. It's all about the process. So that's what comes to mind in terms of oversight of the process, and this has nothing to do with the codes themselves.

If you're talking about computer code verification, then as the customer, I would advise that when your supplier uses a computer code, you should have access to their verification documentation for that code.
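To make the "compare against a known set of results" idea above concrete, here is a minimal sketch of what an acceptance check of contractor results might look like. The parameter names, the reference values, and the 2% tolerance are all invented for illustration; a real procedure would define plant-specific parameters and acceptance bands.

```python
# Minimal sketch of an independent acceptance check of contractor results
# against reference values (an in-house calculation or an accepted benchmark).
# Parameter names, values, and the 2% tolerance are illustrative only.

reference  = {"peak_clad_temp_C": 1035.0, "min_dnbr": 1.42, "max_pressure_MPa": 16.8}
contractor = {"peak_clad_temp_C": 1041.0, "min_dnbr": 1.39, "max_pressure_MPa": 16.9}

TOLERANCE = 0.02  # relative difference allowed before a manual review is triggered

def acceptance_check(ref, vendor, tol):
    """Compare each reported parameter and flag anything outside tolerance."""
    flagged = []
    for name, ref_value in ref.items():
        diff = abs(vendor[name] - ref_value) / abs(ref_value)
        status = "OK" if diff <= tol else "REVIEW"
        if status == "REVIEW":
            flagged.append(name)
        print(f"{name:20s} ref={ref_value:10.3f} vendor={vendor[name]:10.3f} "
              f"diff={100 * diff:5.2f}%  {status}")
    return flagged

if __name__ == "__main__":
    needs_review = acceptance_check(reference, contractor, TOLERANCE)
    print("Parameters needing follow-up with the contractor:", needs_review)
```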
With that verification documentation you can go and independently oversee what they did. You don't have to redo it, but make sure that they have the verification manual and the QA manual, that they're using the code correctly, and that it's been verified and validated. Again, you as the customer, and certainly you as the operating organization, have the ultimate responsibility for safety, so it's incumbent upon you to oversee that activity. You don't have to do it yourself, but you should definitely audit it. And this gets back to how you set up your relationships with your suppliers: I would argue there should be some agreement that gives you access to that information.

Don't just take RELAP5 as a black box. That's just the example I personally know; there are many, many codes out there. But RELAP5, speaking from personal experience, has a lot of problems. That's not to say it's a bad code, but you have to know how it's being used. And you're responsible for it — well, not you personally, but your organization is responsible for it. So there has to be a relationship where you can access the information from the vendor or the contractor, take a look at the verification and validation documentation, and use your own independent judgment about whether it's adequate or not. So that's another area. That's a good question, though. Thanks.

Any other questions? OK, yeah. Hold on, let me get the mic back up here. Yeah, thanks.

What you are saying — we have this kind of procedure in Brazil now, because we have a contractor from industry in Argentina. In my department we do the analysis of their documents, and when I started this analysis, we found some mistakes — nothing too unexpected — so we needed to have some conversations between the contractor and our analysis team. And another review will be performed by the Brazilian commission later. So during the project we need to have this kind of exchange of experience at checkpoints, to check whether what we are expecting to receive as a design is the same thing we were imagining at the beginning, when we contracted with the industry and the project manager.

Good, yeah. Let me turn this off so we don't have any feedback here. This isn't about malice, and it's not about blame. I can tell you, I've made a lot of mistakes with computer codes over the years personally, and I never took it personally when someone said, this is wrong. We're all human beings, we all make mistakes. The point is, we have to have processes in place to try to capture those mistakes, because once they're in the computer model, in the computer code, they can live on for decades. Again, from personal experience: I was involved in the creation of the new NRC code called TRACE, which is essentially the RELAP5 replacement. One of the things we found when we started using the new code with the old input decks was that the answers were totally screwed up. So what was wrong? What we uncovered was an error in the older software that had been there for 35 years, and it had just been covered up by the analysts, who went in and corrected the error with input. But you put that same input into a new code and something goes completely wrong.
So again, the point is exactly that: you need to find these errors early. Because if you don't, they can take on a life of their own and be there for a very long time. After a while, you begin to assume that the information you get is correct, because it's been through so many people. So it's very easy to have these kinds of mistakes. So again, this is good. All right, thanks. Good questions. Any other questions? Thanks for that thought.

Okay, so let's move on. Now we're gonna go a little bit deeper into what we call the fundamental safety concepts. Again, these are embodied mainly in SF-1, which is the high-level IAEA document, but we're gonna get into some other concepts here as well as we go through the rest of this presentation. The point of the next hour is to go through and understand the fundamental safety functions and some key safety principles.

I wanna point these documents out again, and you'll hear about them quite often throughout the next two weeks. Let me apologize up front for the repetition, but only to a point, because we do this for a very good reason: these documents truly embody decades of experience. You could almost argue decades of mistakes that you then don't have to make, if you're able to implement these concepts in your power plants and your facilities. They are the representation of the best practices of the international community from many, many years of experience. Okay, so this is what I'm gonna go through for the next hour.

I've already talked about this, but I wanna emphasize it one more time: we are all safety professionals, every one of us. At least, that's my argument. And our job is to protect people and the environment from the harmful effects of ionizing radiation.

Okay, fundamental safety functions. Does anybody know them? There are three. Come on, you gotta know these. Anyone know the three of them? What's the first one? Reactivity, right — control of reactivity. What's the second one? Pardon? Well, that's the concept overall, but we have three actual specific functions. Cooling — I heard someone say cooling, right? Yeah, cooling the core. What's the third one? Containment, confinement? Okay, confinement of radioactive materials. Again, high-level concepts, pretty easy to say, right? Three of what seem to be very simple concepts — not easy to do. And that's what we're here to talk about.

Okay, so control of reactivity. Those of you who have worked in power plants know this very well; those of you who have read about power plants or seen designs will also know this. There are two main ways of doing it: we use control rods, or some sort of soluble poison concentration, boron being the most typical, which we put into our reactors.

When we talk about cooling the core, we have to think about many, many things. We can't just stop with the reactor itself. We have to think about how we're gonna get the heat from the fuel — the actual physical fuel pellet itself, the really small pellet inside each individual fuel rod — all the way out to the ultimate heat sink. We have to think about how we can do that reliably. And as we learned from Fukushima Daiichi, we have to think about how we can do it under many conditions, including accident conditions. How can we maintain cooling of the fuel?
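To give a feel for why cooling has to be maintained long after shutdown, here is a rough sketch of the decay heat involved. The formula is a textbook-style approximation (a Way–Wigner-type fit), not a design correlation, and the 3000 MWt power level and one-year operating time are assumed purely for illustration.

```python
# Rough illustration of decay heat after shutdown. The 0.066*t^-0.2 form is a
# textbook-style approximation, not a design correlation; the power level and
# operating time below are assumptions for illustration only.

def decay_heat_fraction(t_s, operating_time_s):
    """Approximate decay power as a fraction of full power, t_s seconds after shutdown."""
    return 0.066 * (t_s**-0.2 - (t_s + operating_time_s)**-0.2)

P_full_MW = 3000.0            # assumed thermal power
T_op = 365 * 24 * 3600.0      # assumed one year at power before shutdown

for label, t in [("1 minute", 60.0), ("1 hour", 3600.0),
                 ("1 day", 86400.0), ("1 month", 30 * 86400.0)]:
    frac = decay_heat_fraction(t, T_op)
    print(f"{label:>8s} after shutdown: ~{100 * frac:4.1f}% of full power, "
          f"~{frac * P_full_MW:6.1f} MW still to be removed")
```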
All right, that is a fundamental safety function, and it's not a simple thing to do. It involves many potential components and many, many potential accident sequences that we have to think about. Okay, so here is a basic picture of your standard, typical pressurized water reactor configuration and the main components.

All right, confinement. We have unfortunately seen several examples in the past of a failure to confine radioactive material. So it's incumbent upon us to think about how we confine radioactive materials inside, ultimately, the containment. We take a defense in depth approach here; we don't just rely on one system. We have a containment system, which is sort of the ultimate system in place to confine radioactive material. But we also have the primary coolant system itself — that includes the pipes and the pressurizer and the steam generator tube sheets and all those parts, and the reactor vessel itself. And then inside there, obviously, we have the fuel cladding, which I showed a picture of earlier. So again: easy to say, not so easy to do.

Okay, so this is the point where we're lining up some of the material from the documentation with the basic principles. I'm not gonna read every one of these; this is really more for your reference in the future. But it shows you that we have developed material to assist in demonstrating that we meet the fundamental safety principles, and it points to some of the concepts that are in the document.

All right. Just to go a little deeper into the control of reactivity, I wanna use a boiling water reactor example, because boiling water reactors really are kind of a reactivity machine — that's a very large part of the day-to-day operation of a BWR. I don't know if you're familiar with boiling water reactor technology; it's one of the things I've worked on mainly in my own career. In a BWR you don't have any soluble poison. It just can't work: you're boiling the water right in the core, so obviously you can't put an acid in there, because it's going to plate out on the fuel surface. It's not gonna work. So boiling water reactors are controlled by control rods, and you also control them with the speed of the recirculation pumps in the reactor itself. Again, these are different ways of approaching reactivity control.

But the point is that from a safety perspective, what we're concerned about is not so much the operational side. We're concerned about how we ensure that the reactor is always shut down, and can always be shut down, with margin to safety under all conditions. And there are many different ways in a boiling water reactor to do this — several different systems. One of the complexities in the BWR is that the control rods actually come in from the bottom, so they have to have the capability to reliably defy gravity, and there are systems in place to ensure with high reliability that that works, okay? These are just some examples. Reactivity control operationally in a BWR, for those of you who aren't familiar with how these reactors operate, is done with a combination of the actual reactor flow itself and the control rods. So we're not just keeping the reactor pumps running at the same speed and changing the boron concentration; we're physically changing the speed of the pumps as the reactor operates to account for reactivity changes.
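As a side note on the quantities behind "always shut down with margin to safety": reactivity is conventionally defined from the effective multiplication factor, and the shutdown requirement is usually expressed as a minimum shutdown margin. A minimal sketch of the definitions follows — the 1% Δk/k figure is only an illustrative placeholder, since the real requirement is design- and regulator-specific.

```latex
% Reactivity, defined from the effective multiplication factor k_eff:
\rho = \frac{k_{\mathrm{eff}} - 1}{k_{\mathrm{eff}}}, \qquad
\rho < 0 \ \text{(subcritical)}, \quad \rho = 0 \ \text{(critical)}, \quad \rho > 0 \ \text{(supercritical)}.

% "Shut down with margin" is typically expressed as a required shutdown margin,
% evaluated with the most reactive control rod assumed stuck out of the core:
\mathrm{SDM} = \left| \rho_{\text{shutdown, worst rod stuck out}} \right| \ \ge \ \rho_{\min},
\qquad \text{e.g. } \rho_{\min} \approx 1\%\ \Delta k/k \ \ (\text{illustrative value only}).
```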
This next example may be a system you're more familiar with. This is actually a Westinghouse-type system, which is what I personally worked on over the years. The point here is to show that reactivity control is more than just having enough control rods to ensure that we can shut down. We have to think about whether we're controlling the reactivity throughout the operation of the reactor, to ensure that we don't have any hot spots in the reactor — things that we don't expect, things we didn't analyze. So we have to have in-core instrumentation. We can't simply assume that the reactor is functioning fine and never take a look at it. So it's very important that we're constantly surveying the reactor to make sure that it's performing as we analyzed.

And again, this gets back to the question we had earlier from our colleague from Pakistan about how we transfer information from the safety analysis to the real machine, the actual operational machine that the operators are looking at every day in the control room. They need to have a set of instrumentation, and in this reactor the instrumentation happens to come in from the bottom. There are many different ways of approaching this. Some designers put fixed instrumentation inside the core; some have probes that move up and down in the reactor; some have instrumentation outside of the vessel, ex-core instruments combined with in-core instruments — all different ways of approaching it. The point is, we have to be able to demonstrate that we are staying within the calculation, which is the point I wanted to make here, getting back to the question.

We have a safety analysis, which is really intended to constrain operations. We want to constrain operations within an envelope that we've analyzed, so that if I have an accident, I will be able to continue to protect the public and the environment. That's the bottom line here, okay? We can run our reactor above the envelope that we've analyzed — it will do it. But what happens if there's an accident? At that point, I cannot say that I have confidence that I will be able to protect the public and the environment, because I'm no longer within the boundaries of my safety analysis.

Let me pause here and ask whether that makes sense, because this is an important point — and that actually was a great question that sparked my thinking on this, so thank you. Does this make sense, what I'm saying? This is a very important concept. Yes, no? I'm not getting a lot of confidence here. Let me try again.

Okay, so the reason why we have to have instrumentation is really quite simple. I run the computer calculation, I do the calculation, I get an answer. Let's say I calculate that I'm allowed to have a radial peaking factor of 1.25 in my reactor, right? If you think about the neutron flux itself radially, it's not just flat. It's gonna follow some kind of chopped cosine, or some kind of strange multi-humped pattern depending upon our fuel management strategy, but it's not just going to be one flat distribution — that's not physically possible. So I have to constrain that to a number. Let's just pick 1.25, the maximum allowed peaking. Well, how do I know that? Again, running the calculation is the easy part; the code is the easy part, I can get the number. How do I use it? How do I know that the reactor is actually within that boundary?
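Before the answer below, here is a minimal sketch of the kind of check being described. The 1.25 limit is the value used in the example above; the "measured" relative assembly powers are made-up numbers, and a real plant would reconstruct the power distribution from its in-core and ex-core instrumentation.

```python
# Minimal sketch of checking a measured radial peaking factor against a
# technical-specification limit. The 1.25 limit comes from the example above;
# the relative assembly powers are invented numbers.

TECH_SPEC_LIMIT_FXY = 1.25   # maximum allowed radial peaking factor (illustrative)

# Relative power of each fuel assembly, normalized so the core average is 1.0,
# as it might be reconstructed from in-core flux measurements.
relative_assembly_powers = [0.82, 0.95, 1.08, 1.21, 1.17, 0.99, 0.87, 1.03, 0.88]

core_average = sum(relative_assembly_powers) / len(relative_assembly_powers)
peaking_factor = max(relative_assembly_powers) / core_average

print(f"Measured radial peaking factor: {peaking_factor:.3f}")
print(f"Technical specification limit : {TECH_SPEC_LIMIT_FXY:.3f}")

if peaking_factor <= TECH_SPEC_LIMIT_FXY:
    print(f"Within the analyzed envelope, margin = {TECH_SPEC_LIMIT_FXY - peaking_factor:.3f}")
else:
    # In a real plant the technical specifications spell out the required
    # actions, up to and including shutdown -- here we only flag it.
    print("Limit exceeded: predefined technical-specification actions apply")
```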
Instrumentation. I measure the flux in the reactor at given points, and I have a set of limits on the power plant. Most countries that I'm familiar with call them technical specifications; there is other terminology out there as well. Basically, it's like the speed limit sign, if you will. The reactor cannot operate beyond this number. If it does, there is a set of predefined actions that should be taken, up to and including immediate reactor shutdown, depending upon the severity of the problem. That should already be thought out beforehand, it should be in the technical specifications, but every single thing in there always goes back to the calculations, goes back to the safety analysis. And the key, again, is the linkage — and how do I know that I'm staying within my boundaries? Instrumentation. Okay, any other questions on this? Let me move on.

Okay, so core cooling. These are just some personal examples that I'm familiar with, once again. There are many different ways of approaching the question of core cooling. In the more modern plants being designed by vendors, and in some cases currently being constructed by vendors, you see more passive systems. The bottom line is that a passive system relies on gravity to move water around the reactor — that's really what it does. As opposed to the old way of doing things, where you had to rely on a pump and other active systems: the valve has to open, the actuator has to open the valve, the pump has to start, the pump has to run, all the valves on the inlet and discharge of the pump have to open. There are a lot of moving parts that have to work. So this is an evolution in design, and it certainly leads to a more reliable result when you think about passive systems. These are just two examples; there are many, many other ways of doing this.

Okay, confinement of radioactivity. Again, I'm just gonna bring in my own personal experience here; there are many different ways of doing this, and this happens to be the AP1000 approach. When we think about confinement of radioactive materials, we're ultimately trying to protect this barrier right here, which is the containment dome itself. So we think about what we have to do in order to do that. We have to be able to remove energy from the containment — it can't just sit there forever, or it will eventually over-pressurize and fail. So we have systems; this is a passive system, which is designed to cool the containment itself through natural circulation and remove heat.

We also have to think about the fact that, under certain situations, we could have severe damage in the core. Again, look at our Fukushima Daiichi example. It's not yet known exactly what the condition of all three of those reactors is, but the general consensus is that all three of them suffered damage to the lower head, and the fuel is somewhere in the containment, on the basemat of the containment in some form. We don't know exactly what at this point, but that's the general consensus. So the more modern thinking is: well, let's just flood the containment under those circumstances. Now, this particular approach here is intended to try to protect the vessel integrity. This is a system called in-vessel retention.
So we have water here, which is intended to transfer heat from the lower head into this water, which ultimately comes up, makes a heat-transfer interaction with this surface, and rejects the heat to the environment. Other systems do it other ways. Here is a situation where I'm just going to assume that the lower head fails. So I'm going to try to capture the molten corium in what's called a core spreading area, but I also have to think about cooling — I can't just let the material sit there, I still have to remove the heat. So I have a system where I have water inside the containment, and I'm providing cooling of the material, with the heat ultimately, again, rejected to the environment. Now, there are other systems. I don't have a picture of this, but there's what's called a core catcher. It's a system under the vessel lower head, and if we assume that the lower head fails, the material will fall into this core catcher. It has material in it which will retain the corium, plus it has mechanisms, similar to what you see here, to remove heat from the core catcher to the environment.

Again, it's an evolution in thinking from the early reactor designs. The point is, we're trying to address the concerns with lower head failure, and ultimately the loss of the containment function, by providing more passive systems — any of the three different ways that I mentioned, and there may be other ways as well. I don't want to prejudge the outcome; the point is to think through the process of fuel in the lower head, either trying to keep it there, or accepting that the head fails and then getting the heat out to the environment. And the evolution in thinking, again, is to rely more on passive systems, natural forces, which should make that process much, much more reliable.

Okay. So this is a picture that — I don't know where Joseph Mieshek got this, but I love this picture. It's sort of a picture of defense in depth. You can see our mailman, who unfortunately encounters the mailbox being guarded by three progressively larger, very mean looking dogs. The idea here, again, is that we're not just gonna have one dog protecting our mailbox, we're gonna have multiple dogs, because we really don't want that piece of mail in our mailbox. In this case, if we carry the analogy to the next level, we wanna keep the radiation inside the mail pouch itself.

Okay. There's been many, many things written about defense in depth. INSAG-10, to my knowledge, is really the first place where the IAEA made a definitive statement on defense in depth. This was a document written by the INSAG group. They wrote about the hierarchical deployment of different levels of equipment and procedures to maintain the effectiveness of the physical barriers. Well, when I read that, what I think of is: cladding, vessel, containment. I think of engineered systems in place to provide levels of defense in depth. Okay, that's what I see there.

We also have to think about margins. This business has many different factors, and one of them is that we never just assume that something is gonna perform exactly as written down in the specification. We have to think about the uncertainty in those parameters, and the uncertainty in how the operators are going to perform.
Again, put yourself in the control room at Fukushima Daiichi on that day in March 2011. How would we have performed? I don't know. But I can tell you that it was a high-stress environment, and the question of human performance needs to be considered. We have to think about that possibility, and we have to try to model it in our safety analysis.

And then we have to think about the provision of different levels of defense and the independence of those levels. As the thought process has evolved over the years, we think about the installed equipment that we put in the plant to deal with our suite of design basis accidents — the one most commonly used that I'm familiar with is the large break loss-of-coolant analysis. We design equipment specifically to address that accident. Okay, but it's not the only equipment in the plant. So then the question becomes, and some of the current thinking is: how do we put systems in place to deal with beyond design basis accidents? I'm not gonna go into the details on that now, but it's the evolution in the thought process. And when we think about independence, the safety standards argue that there should be independence in that set of equipment. That can mean that I physically install extra equipment to deal with the more severe accidents, for example; there are other ways of thinking about that, and we'll have more on it later in the workshop. But the argument in the safety standards is that you have to have independence between the levels of defense in depth — that's the main point.

Okay, I've already talked about this. This is what I'd call the traditional INSAG-10 model, if you will: we have our three barriers, starting with the fuel cladding itself, then the vessel, and then ultimately the containment structure. So I'm not gonna dwell on that; we've already talked about it, so let me move past it. This is basically the INSAG five-level model of defense in depth, and I know there has been some evolution in this, and I'm deferring on it, as we have some experts who are gonna come and talk definitively about the current state of IAEA thinking on this — thinking that comes from our member states and from the interaction we've had working on the latest revision of SSR-2/1, with the introduction of design extension conditions, et cetera. So I'm gonna go through this very quickly, because I wanna defer the conversation to our experts — one of whom happens to be in the room for you today, Marco, and I know at the end of the week Javier will also be here to talk about this in more detail. So this is more the traditional INSAG-10 approach, is really the point I wanna make, and this next one is also more the traditional INSAG-10 approach. I'm gonna move past this as well.

Okay, so let's move into this idea of safety margins. Do we have any experimentalists in the room, people who've done testing, in the lab or in large facilities? Anybody done any testing work in your career? No? Okay, well, I've been involved with some testing over the years, and I can tell you that if I sit down and run a test twice, I'm not gonna get the same measured answer twice. It's not gonna happen. It's not possible for any real process that I'm gonna measure for a reactor. Let's think about a simple process — what should be a simple process — the process of boiling water.
We do it probably every day in our house when we cook, and yet it's one of the most complicated physical processes that I can even imagine — the simple process of boiling water. And then you think about applying that to a reactor system, which is an added level of complexity, because it's not just boiling water: there are also nuclear effects, structural effects, material effects. But the point is, if I'm gonna measure — let's pick one of the most important and critical parameters, one that limits reactor operations and also provides us a measure of safety. Let's look at the DNBR, or if you're a boiling water reactor person, it's called the critical power ratio, or the MCPR. It's basically the same physical process. What it really is, is the point, physically, where I stop having liquid water on the surface of the fuel cladding. That's really what it is physically. And what happens there is that my heat transfer drops almost instantaneously, orders of magnitude lower, and the fuel will rapidly heat up. That's the physical process. It sounds really simple, right? I can guarantee you it's extremely difficult to measure that parameter. If you look at a plot — I don't have one here with me, unfortunately — but if you look at a plot of measured critical power over a range of parameters, it literally looks like a cloud in the sky. It is not a straight line by any stretch of the imagination.

There are many reasons for that, one of which is instrumentation uncertainty. Instruments are uncertain; under these conditions, instruments actually have quite a bit of uncertainty in them. Obviously, over the years the instrumentation has improved, but there's still uncertainty in the measurements. There's also uncertainty in how I take the data from the measurements and then actually go in and correlate it to the value that I'm gonna put in the computer code. I've seen data spreads where you literally just have a big blob of data, and you could draw any number of different lines through it and mathematically have exactly the same confidence in the fit of that data. So what are you gonna do with that data? This is a question for the regulators as well. What are you going to accept with that data? What level of confidence do you need? So the point, and where I'm gonna go with this, is that we have to provide for that uncertainty. It is not a fixed, known value — that's the point. And there are many ways of doing it. I'm gonna show you one example here which is quite typical. Let me just go through to the end of this and I'll talk through it.

Okay. So the safety limit — let me work this from the top down; that's usually the easiest way to do it. The safety limit itself: this is where the barrier physically fails. Now, there are a couple of different ways of treating it. Again, let's go back to our 1200 °C example. We assume that is the safety limit itself — not everybody does, but — actually, let's go back to DNBR. That's a better example, because generally, I think, everyone accepts 1200 as an accepted value. Let's go to DNBR.
Okay, a DNBR of one means I have violated my departure from nucleate boiling criterion, I have gone into a critical boiling regime, and I should presumably expect the fuel to just instantaneously collapse, right? Is that true or not? Actually, it's not. Just because the fuel physically goes through the DNB threshold does not mean that it's going to fail. It's an assumption made for a couple of reasons. One, it keeps me out of the extremely uncertain range of not knowing what's gonna happen: I have confidence that if I stay with a DNBR larger than one, the fuel will be safe. And two, it's a number that I can measure pretty well in the reactor — I can take my instrumentation and use it to determine relatively precisely what the actual DNBR in the reactor is. So those are really two very good reasons for choosing that parameter. But the point is that the safety limit itself, the actual damage assumption, still has a level of conservatism in it. It does not necessarily mean that the physical barrier will really collapse, and this gets back to the cliff edge effect that we talked about. If I set my safety limit at the absolute failure point, I've just established a cliff edge effect in my reactor. We don't wanna do that. So again, the point is that the safety limit is an assumption based on measurements, and it's an assumption that we use as a sort of surrogate for damage of the barrier.

Okay, now our regulators, being the good regulators that they are, will always require some kind of acceptance criterion. We're not just gonna take one as the value, right? What's a typical licensing DNBR? My experience is it's something like 1.25, 1.3, something like that, as the actual limit that the regulator will assume. That's really done for a couple of reasons. One, we're all engineers; we think about factors of safety. Plus, it allows us to account for some of the uncertainty in the parameters. You remember my description of the ability to physically measure the critical power in a reactor — it's a very complicated thing to do. There is always uncertainty in that parameter; it is never precise. We have to account for that uncertainty, and one of the ways we do that is to build it into the acceptance criterion itself. This is what our regulator is gonna accept.

Okay, so let's keep going down. Now, margin — let me make sure I understand what this is. Okay, calculated conservative value. We'll get more into this later, but the point is that there are a couple of different ways of doing safety analysis: what we call the conservative approach, and then the best estimate plus uncertainty approach. I don't wanna get into the details here because we have much more on that next week. This is just intended to give some visual clarity on the kinds of expected values. Now, if we make a conservative assumption, we would expect the calculation to lead to a higher result, right? By definition, that's what making a conservative assumption should do: it should force the result to a higher value. That's what you see here, because the operating parameter itself is gonna be down here, well below what's on the plot, and if we think about the change during a transient, we're moving up on this plot.
So the conservative value should always be higher than our best estimate plus uncertainty values, which, if they truly are best estimates, should bound the real value, okay? I don't wanna dwell on this, because we have much more on it next week, but the points I wanna make really are: A, we think about a safety limit, and it does not necessarily mean a cliff edge effect — we don't wanna have cliff edge effects. The safety limit itself is based on testing and measurement, and that information is used to derive the limit, but that does not necessarily mean that it is the ultimate failure point. And then the acceptance criterion itself, which is applied by the regulatory authority, is intended to encompass the uncertainty in some of the measurements.

So, any questions? I know this was an awful lot of words, and I promise there's more on this later, so it will make more sense. Yeah, please — hold on. Actually, questions throughout the presentation are great; that's actually what I prefer.

I'm confused about this slide, because the safety limit is a theoretical number.

Right. Well, it depends. I wouldn't call it a theoretical number. In my experience, the safety limit is always derived from some level of testing; it's an experimentally derived number. What's an example of a safety limit that you can think of?

For example, when you analyze an accident, you have the general safety criteria. Those numbers are the safety limit.

Can you give me a specific example of a number?

For example, the DNBR. When you analyze one accident by deterministic safety analysis, you have the initiating event, then you analyze it with RELAP, and then you have some results. These results are not the safety limit.

Right. No, the results are these — these are the calculated numbers, depending on what approach you take. The actual result of the calculation, if you will, is what's in blue here. Look at the blue lines, the calculated results. These lines up here are the limits that are established by your regulator and by your testing, and those are the numbers you wanna make sure you stay on the right side of. Does that make sense? Are you sure, not just saying yes? Okay. So again, if I run RELAP5 and, let's say, I get a DNBR of 1.36724, whatever — that's this number right here. The calculated conservative value is the number I physically get from RELAP5, okay? Now, the point is, I wanna make sure that that stays above 1.3, which is this number. That's a number that was established by my regulator, and they tell me I cannot go below it; I need to stay above it.

Well, whether it's below or above depends on the parameter.

Yeah, exactly, right. I have to stay on the correct side of the acceptance criterion there.

Yeah, and this is the safety limit, but there is another number that I saw in another figure, about an allowable limit, and another one — an allowable safety limit, and a lot of numbers in the instrumentation slides.

Yeah, right, okay. All right, well, let me try to take it from a calculational perspective again. Thank you. Let me step back and try another approach here. Let's take a look at the large break loss-of-coolant accident. Let's try that, okay? We establish safety limits for the large break loss-of-coolant accident.
Now, the safety limits are in place to ensure the integrity of the barriers, ultimately, and the ability to protect the public and the environment for the large break loss-of-coolant accident. Let's look at the common parameters. I have the 1200 centigrade limit for the peak cladding temperature. I have limits on oxidation, typically a 17% local oxidation limit. And I remember there's a core-wide oxidation parameter — I want to say two or three percent, I can't remember the exact number — which applies to the entire reactor core. Okay, so those are safety limits.

Now, those limits have been established based on literally hundreds of tests, experiments which have been conducted to determine the conditions at which the fuel reaches the point where I would expect it to begin to fail — in other words, where I'm gonna lose a barrier, in this case the cladding itself. Now, 1200 C has been established as the peak cladding temperature limit. The reason for that is really that it happens to be the point at which, roughly, the zirconium oxidation reaction begins to become extremely rapid. I start having this extremely exothermic reaction in my fuel which, realistically speaking, if it keeps going and I don't stop it, will fail the fuel. And if you look at a highly oxidized piece of fuel, I can literally go up with my fingertip and flick it like this and it will just crumble. It has absolutely no strength left in it. So I wanna avoid that, because my fuel will fail and I will no longer be able to contain the radiation within the fuel cladding. So we establish a limit, and the limit is 1200 C. That is this number right here — the safety limit, the assumed failure point of the barrier, which in this case is the fuel cladding.

Now, the point I wanted to make is that if you look physically at fuel cladding, you can heat cladding up above 1200, it can subsequently be quenched, and the cladding can still maintain its integrity. So in other words, just because we set the limit at 1200, that does not mean that if I go to 1201, I will immediately have massive fuel failure. And the point there, again, is to avoid a cliff edge effect. I don't wanna have my barriers right on the edge of the cliff, so that if I move forward a little bit, I'm gonna fall right off. That's the point of that limit. Now, some regulators — not so much for 1200 C, but in some cases — will establish an acceptance value which is a little bit below that, so you have a little bit of margin in there, and they will not allow the calculation results from RELAP5 to exceed that value. And that gives you this little bit of margin in here between the acceptance value and the real, physically measured value at which I assume fuel failure. So does that make sense? Maybe? Okay, we have more on this next week. The point I wanted to make here, again, was really about the top two numbers here. Okay, are there any other questions?

Single failure criterion. This is getting more into how we do our calculations. Oh — what you have to do is hold the red button down, and that'll turn the mic off. You got it? There's a red button and you just hold it in. Okay, so, the single failure criterion. This is the single failure criterion in words, but let's talk about it with a physical example. Okay, let's think about it again.
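Before the single-failure example, here is a minimal numeric sketch that ties together the margin hierarchy just discussed, using the large-break LOCA peak cladding temperature. Only the 1200 °C safety limit comes from the discussion above; every other number is invented for illustration, and real values are plant- and regulator-specific. (For a parameter like DNBR the inequalities run the other way: the calculated value has to stay above the acceptance criterion.)

```python
# Illustrative margin hierarchy for the large-break LOCA peak cladding
# temperature (PCT). Only the 1200 C safety limit is taken from the lecture;
# the other values are assumptions for illustration only.

safety_limit_C           = 1200.0  # assumed barrier-failure surrogate (from the lecture)
acceptance_criterion_C   = 1150.0  # illustrative regulator value set a bit below the limit
conservative_calc_C      = 1060.0  # illustrative result of a conservative code calculation
best_estimate_plus_unc_C = 980.0   # illustrative best-estimate result plus uncertainty

checks = [
    ("acceptance criterion below safety limit",            acceptance_criterion_C < safety_limit_C),
    ("conservative calculation below acceptance criterion", conservative_calc_C < acceptance_criterion_C),
    ("best estimate + uncertainty below conservative value", best_estimate_plus_unc_C < conservative_calc_C),
]

for description, ok in checks:
    print(f"{description:55s}: {'OK' if ok else 'VIOLATED'}")

print(f"Margin from conservative calculation to acceptance criterion: "
      f"{acceptance_criterion_C - conservative_calc_C:.0f} C")
print(f"Margin from acceptance criterion to safety limit:             "
      f"{safety_limit_C - acceptance_criterion_C:.0f} C")
```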
Once again, let me go back to my large break loss-of-coolant accident, because it's really a very good example. I have a physically capable system in my power plant — a diesel generator, a series of high pressure pumps, large sources of water, valves, actuators, components, all these parts — that allows me to inject enough water into the reactor to keep the fuel cooled if I assume that the largest pipe in the reactor just instantaneously breaks in half. That's the design point that we put into our power reactors. Now, in order to meet the single failure criterion, I can't just put one such system in the reactor. I have to put in at least two. Some designers put in more — three or four levels of redundancy in their equipment in the plant. Because when I apply the single failure criterion, I have to assume that one of my entire systems simply doesn't work. And again, this is a defense in depth concept. It's a conservative assumption, and it's there to add that extra level of confidence in the results of the safety analysis. I think I have more on this here. Okay.

This can be applied in many different ways. We think about electrical systems — we assume a single component is faulty. We'll have more on this later this week, but think about electrical and I&C systems. I'm not just gonna have one instrument, right? That's not gonna work. I have to assume that one of my instruments fails if it's a critical component in the analysis. So I have to have extra instrumentation, multiple sources measuring the same parameter. We talked about mechanical systems already — that was the ECCS example. Think about failure of active components: pumps, valves, emergency diesel generators. And think about passive systems. One example: let's assume that I have a blockage of flow. Let's look at our passive heat exchanger systems. If I'm gonna put a heat exchanger in a passive system — in other words, driven just by natural forces, gravity forces — where do I wanna put the heat exchanger tubes in the system, physically? Where is the point I want those tubes to be in order to have the largest potential for flow? Anybody have any ideas? The highest point, exactly. Okay, but that has a problem associated with it. Do you know what the potential risk of having a set of heat exchanger tubes at the very highest point is? Well, what happens if I get a bunch of non-condensable gases in my reactor — and there are a whole bunch of non-condensable gases in reactor systems? Air; any hydrogen, for example, generated during an accident; gases from accumulator injection, which is usually nitrogen. Where is that gas gonna go in my passive system? It's the lightest substance, right? It's gonna go right up into those tubes. Okay, and what's the worst thing you can do to a heat exchanger? Non-condensable gases. So I gotta think about that; it's a potential flow blockage. So what happens when I put non-condensable gases into my passive heat exchanger and I have no means of getting them out of there? I have just failed my entire passive heat exchanger system. It will no longer work. So designers have to think about that. They have to think about the single failure approach to that.
How do you design around that — basically making the assumption that this passive heat exchanger is going to fail because I foul it with non-condensable gases? Again, these are the thought processes that we have to go through. Okay. All right, let's see here. I'm gonna move past this; this is an example of a single failure, but I think we talked through this already.

Think about applying the single failure criterion to human reliability. Is that what this is talking about? Yeah — the time here must be 24 hours; unavailability, human error. Okay. This isn't just human error; we think about ways to account for the single failure criterion in our human actions. We think about independence. This gets back to: let's not just install electrically driven pumps in our plant — put some steam driven pumps in. Then I have an independent means of injecting water. If I lose electricity, you can still run a steam pump, by physically running out and opening the hand crank on the pump. It will work. They did that at Fukushima Daiichi. The reactor core isolation cooling system there worked for 72 hours in Unit 2 with zero electricity. It's a steam driven system; it just kept running, because they had the injection valve open, and it just kept injecting water, doing its job, for 72 hours. So that's an example. Okay, redundancy: again, people put multiple levels of redundancy into their systems. And then another concept is diversity — which, I should say, is actually what the steam versus electrical example illustrates; I misspoke there. Independence is having a physically independent set of equipment.

Okay, here, this is a pretty busy picture, so I'm not gonna dwell on it too much. I wanna focus on this a little bit here, though, because it gets into more of the physical concepts of separation. Here you see that equipment is physically located in different parts of the plant itself. And this gets at the idea: what if I have some sort of a challenge at the plant in one particular area — maybe a fire, or something local within the reactor building? If I have all of my equipment in one place, it could all be failed by the fire. So let's think about the layout of the plant; let's make sure we have that independence built into the design of the plant itself. And these are just pictures of the different kinds of equipment in this particular facility. Redundancy and diversity of the ECCS — in the interest of time, I'm gonna go through this pretty quickly, because again, we have much more on this later. Independence — again, this is just a picture from the US ABWR of how they approach it in their reactor. Okay.

So, the application of the single failure criterion: how do we actually apply it? It's not as easy as you might think. We have to think about what the worst possible failure is — what has the worst impact. There are a couple of ways of approaching it. The more traditional approach over the years has been to run a series of sensitivity calculations, and to use that base of information, coupled with engineering judgment, to make a decision about whether or not a particular failure is the worst one. Now we're beginning to use more PSA insights, bringing that into the picture. Now that we have that information available to us, we can use the PSA to inform us about what the worst single failure is.
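As a minimal sketch of what a single-failure enumeration can look like for a simple two-train injection function: the trains, the shared support system, and the success criterion (at least one complete train available) are all invented for illustration, and a real evaluation rests on plant-specific sensitivity calculations, engineering judgment, and PSA insights as described above. The shared dependency is included deliberately, to show how one assumed failure can defeat both trains when independence is missing.

```python
# Illustrative single-failure enumeration for a two-train injection function.
# Trains, components, and the shared support dependency are invented examples.

trains = {
    "train_A": {"pump_A", "valve_A", "diesel_A", "shared_instrument_air"},
    "train_B": {"pump_B", "valve_B", "diesel_B", "shared_instrument_air"},
}
all_components = sorted(set().union(*trains.values()))

def trains_available(failed_component):
    """Trains whose components are all still working after the assumed failure."""
    return [name for name, components in trains.items() if failed_component not in components]

print("assumed single failure  ->  trains still available")
for component in all_components:
    available = trains_available(component)
    # Success criterion (assumed): at least one complete train remains available.
    status = "success criterion met" if len(available) >= 1 else "SAFETY FUNCTION LOST"
    print(f"  {component:22s} ->  {available or 'none'}  ({status})")
```

Running this, every single failure of a train-specific component leaves the other train available, but the assumed failure of the shared instrument air defeats both trains at once — which is exactly the kind of dependency the independence and separation arguments above are meant to eliminate.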
The PSA is just another source of information that we have to use. Now, we also have to think about the sequence itself. For each individual sequence — whether we're talking about fuel rod cooling calculations or about over-pressurization of the containment — the phenomenology is different, the sequences are different, and most likely the worst-case single failure is going to be different. So this is not as simple as assuming that I lose one train of ECCS for every potential calculation and declaring that I've covered the single failure criterion. This is not a simple thing. We have to put quite a bit of thought into it, quite a bit of preparation. And again, going back to the earlier parts of this presentation, this material needs to be documented, written down, so it can be independently evaluated, independently audited. We don't wanna have just one group of people making these decisions without somebody else taking a look at it. So again, the point is that it's not as simple as it may seem at first look. All right.

Safety system classifications. There are a couple of points to make on this, but I wanna be a little hesitant going through some of it, because, again, there is some current evolution in this area of equipment and safety classification as we think about it within the context of the IAEA safety standards. A lot of this reflects current thinking coming out of the lessons learned from Fukushima Daiichi, and even some of the work that had been going on prior to the accident.

So let's start at the top. We have a body of plant equipment. This can be anything — pumps, valves, actuators, water tanks, anything you can think of. Let's just put it all in this body of plant equipment. Now, what has traditionally been done is to break that down into things we consider items important to safety — structures, systems or components — and things that are not safety related. This is the area where I wanna be a little hesitant, because there are some thoughts about whether we need to consider how we can use some of this non-safety-related equipment in certain types of accident mitigation. Again, I'll defer that discussion to the experts who will be here later in the week, but I wanna be a little careful here. Then we can go down the left-hand side, under what we call items important to safety, and we have what we call safety related items, and then safety systems.

Now, I think safety systems are pretty easy to define. Those are the items that are necessary to mitigate our set of design basis accidents, okay? If I make an assumption in a computer code that I need a particular valve to mitigate that accident, that valve is now part of a safety system, because that valve must perform as I have assumed in my calculation. And this gets back, again, to our discussion about going from the calculation to the real operating machine. The only reason we run calculations is so we can establish a box within which we want the plant to run. That means that all of those components — the safety systems — have to be demonstrated to perform as I assumed. That's really the distinction. Now, practically speaking, what that means is that I have to establish technical specifications for them, and I'm gonna have extremely high levels of quality assurance in manufacturing and procurement.
I'm also going to need to do regular maintenance, regular surveillance, regular testing. That's what it means physically in the plant, and for the operators in the room, you'll know this better than I will: you have to do constant testing. And then we have this category of systems we call safety related items. Now, this is a little bit hard to define specifically, and I've heard various approaches over the years. Let's take, for example, the high pressure air system in our power plant. Is high pressure air a safety system? Is it safety related? There are various interpretations, which is why I'm asking the question. Let's go back to the Fukushima Daiichi example. The safety relief valves on a BWR-4 Mark I system require two things to work: one is DC power, and two is high pressure air. If they don't have high pressure air, the valve will not open. So is that a safety system, or is it a safety related system? I can tell you that in the BWR-4, high pressure air is a safety related system. It is not a safety system. But it is necessary for the safety system to function. I must have some source of high pressure air — be it a local attached accumulator, be it the regular instrument air system in the plant — I have to have some means of physically opening that valve. So that's an example, from my own personal experience, of a safety related system. But a very important one: it must work, or you must have some capability of providing for the function, or the valve will not work.

Okay, so if we go back to the safety systems and go down to the next level, then we're thinking about the protection systems — the reactor protection system, with various sources of nuclear instrumentation, for example — necessary to actuate the safety systems automatically and shut the plant down if I exceed a technical specification. Very simple. Actuation systems are necessary for the protection systems to be able to physically actuate the systems themselves, and then I have a series of support systems. Again, some people would lump instrument air into a support system; some would not. It's a question open to interpretation. These are just examples. Okay, any questions on this? Yeah — which one of us has the mic there? It's on, go ahead.

What is the difference between safety related items and support systems?

Okay, that's a good question, and I cannot directly answer it. That's a question that has to be judged based on the individual requirements of each member state — what the regulatory body of the member state requires. But high pressure air, for example, is one that I know has gone either way; certain countries treat it one way or the other.

That's a support system to a safety system, though.

I know — exactly, it is a support system, but it has a safety function. So it's approached in different ways. For example, if I know that I have a functional requirement for high pressure air on a safety system, maybe I'll require, as a backup, a high pressure accumulator — an air source there that will be reliable and will allow the valve to function if I lose instrument air. These are just examples.
But there is no one way to answer your question. The point is that it's a member state question that has to be worked out as the licensing process evolves for a given facility. That's just an example. I wish I could give you a direct answer, but there really is no direct answer; it's really a member state function. Go ahead, yeah.

Now this, of course, is the terminology. I will come back to this during my presentation, and maybe I can give you some details, because this represents what is in our safety standards. Some member states use different terminology; in the U.S., for instance, safety-related items have a slightly different definition. But staying with the agency's definition: safety systems are those systems that are necessary, as Tony said very clearly, to mitigate, to deal with design basis accidents, nothing else. A safety system itself is a complex structure. There is a protection system, normally I&C; an actuation system, like a control rod or a valve; and support systems, like compressed air, cooling of the oil, cooling of the components. In the terminology of the agency, these parts are also part of the safety system. When we say safety-related, we mean a component or a system that, if it fails, can have an effect on safety, but it is not there for design basis accidents. Example: the control system. The control system is important to safety, because if the control system doesn't work, you have to call the safety systems into operation, and that is a challenge to safety, but it's not a safety system. The fire protection system is important for the safety of the plant, but is it a safety system? No. So there are systems and structures that have an effect on the safety of the plant, but they are not there to deal with design basis accidents. As I said, this is IAEA terminology; I know that there are other interpretations in different countries, of course.

Yeah, thanks, Mark, I appreciate that. You make the point that when these items are interpreted, different member states take different approaches, and that's a great example you brought up; I hadn't thought about the control system example. Would you grab the microphone, please? Thank you.

So, according to Mark's explanation, a safety-related system is not a safety system, right? And it would not affect the safety or... the operation of the... It's not a safety system. Okay, but if a safety-related item fails, would the safety system still operate or function? So why do we call them safety-related items?

I think an example might be, let's go back to this high-pressure air question. High-pressure air is essential for certain valves to function, but high-pressure air itself is not, traditionally, a safety system. In my personal experience, for valves that are safety systems, I have another source of highly reliable high-pressure air that will be there to allow the valve to perform its safety function if I need it. The main source of air for the safety relief valve, for example, is instrument air; that's where it gets its primary source. But the safety valves have a backup source of high pressure, which I think is usually nitrogen or something, not air, to operate the valve if the instrument air system is not available. So instrument air is not a safety system, and I don't have to shut the plant down.
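To make the hierarchy just described a bit more concrete, here is a minimal sketch in Python. The decision rule, the attributes, and the example components are simplified, hypothetical illustrations of the discussion above, not IAEA definitions; remember that in the agency's terminology the support systems of a safety system are themselves part of that safety system, and that member states interpret these categories differently.

```python
# Minimal, hypothetical sketch of the classification hierarchy discussed above.
# The decision rule and attribute names are invented for illustration only.
from dataclasses import dataclass
from enum import Enum, auto


class SafetyClass(Enum):
    SAFETY_SYSTEM = auto()        # credited to mitigate design basis accidents
    SAFETY_RELATED = auto()       # its failure affects safety, but not credited for DBAs
    NOT_SAFETY_RELATED = auto()


@dataclass
class Component:
    name: str
    credited_in_dba_analysis: bool   # assumed to perform in a DBA calculation?
    failure_affects_safety: bool     # e.g. its failure challenges the safety systems


def classify(c: Component) -> SafetyClass:
    """Apply the simplified decision logic from the discussion."""
    if c.credited_in_dba_analysis:
        return SafetyClass.SAFETY_SYSTEM
    if c.failure_affects_safety:
        return SafetyClass.SAFETY_RELATED
    return SafetyClass.NOT_SAFETY_RELATED


# Hypothetical examples echoing the session:
srv = Component("safety relief valve", credited_in_dba_analysis=True,
                failure_affects_safety=True)
instrument_air = Component("instrument air", credited_in_dba_analysis=False,
                           failure_affects_safety=True)

print(classify(srv))             # SafetyClass.SAFETY_SYSTEM
print(classify(instrument_air))  # SafetyClass.SAFETY_RELATED
```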
If the instrument air system fails, I can still perform my safety functions, but it's going to challenge the system if it's called upon, because then I have to use this backup equipment; I guess that's kind of the point. That's just another example.

Exactly. These concepts have extremely practical reasons behind them. We cannot make every single component in the plant a safety-related piece of equipment. How many tens of thousands of little pieces of equipment are in your power plant? It's just not physically possible, because you cannot test every single component; we would use up our resources doing it. So we have to define a body of safety systems that we will continually test and evaluate to ensure that they are highly reliable and that we can rely on them to perform their safety functions. We have to make those practical determinations; we simply cannot make every single part of the plant a safety system. That's another practical reason why this classification is done.

Okay, thanks. Excellent questions. All right, well, I'll tell you what, that's actually the end of the presentation. Not bad; we're a little bit early, but that's okay. Do we have any other questions? Anything we want to go back over? Again, the whole point of this morning was just to introduce some concepts. We will go into every one of these items in more detail later. And the areas where I was a little cautious are really because I want to give you the benefit of the real experts speaking about these issues, because we have been doing quite a bit of work in the agency on updating our safety standards to reflect some new thinking. We have the experts here this week who will be able to share the current thinking and the current practices in these areas. That's why I was a little hesitant to get into some of these details right now. Oh, a question. Do we have the mic somewhere? Can you pass it up there, please? Thank you.

Sir, I have a doubt about the single failure criterion. Is the failure of one whole system called a single failure, or is the failure of a component within the system called a single failure? For example, suppose SDS-1 fails; there is another shutdown system there, shutdown system two. Do we call that a single failure, or would a failure of some rods within SDS-1 be the single failure?

Yeah, well, the way I would interpret that, I believe you're referring to the CANDU example, right? They have SDS-1.

Right, so there are two shutdown systems, but within one shutdown system there are some elements, and if one element fails, that shutdown system can still work.

Okay, all right, I think I understand the question. The way that I would interpret the single failure criterion as applied to the reactivity control system, or the shutdown system, is that we would assume the entire system fails, and we need to have a backup system with a completely 100% redundant capability of providing shutdown. That's how we would interpret the single failure criterion in that situation, yeah.

And then, sir, that means if one system fails, there is another backup system there, and that is called a single failure. Now suppose, in the case of the fast system, one component fails, but the system can take care of itself and still perform its intended function. What is the... which system, PAA?
Suppose there's one system, say a protection system. In the protection system there are many mechanical rods, so even if some rods fail, the protection system can still function.

The reactivity control systems are difficult to talk about in terms of single failure, because regulatory requirements typically demand that extra level of redundancy; reactivity is kind of a unique thing. Let's talk instead about, say, the water injection systems; that's a better example for discussing single failure. Let's say we assume the single failure of the worst component in the emergency core cooling system. One of the obvious ones that comes to mind is a failure of the emergency diesel generator that powers that system; the entire system will not function. That depends on the electrical configuration, but if we make the worst-case assumption, we're assuming that one of our entire ECCS trains is now not available. So I have to have a completely redundant system in place in order to address that. And the reason there's a distinction between that and the reactivity control system is that reactivity control systems tend to have a specific regulatory requirement which demands the extra level of redundancy you're talking about, in other words, having a backup system. When you get to the other components in the plant, they don't typically have that kind of requirement, and what usually leads to this level of redundancy is the application of the single failure criterion. That's the distinction. Does that make sense at all? It's hard to talk about reactivity systems, because then you're asking whether a single failure is the assumption of one control rod not working or two control rods not working, for example, and that's a very difficult example to bring in. I think it's better to talk about the other systems. Any other thoughts? Questions? Everybody's hungry. Oh, yeah.

My question is about the single failure criterion: is it applicable to all plants in the world, or is there also, for example, a two-failure criterion at any plant?

Well, the single failure criterion should be applied at all plants in the world; that's what the IAEA safety standards require, and certainly good practice requires it. You don't need the safety standards to come to that conclusion. Now, the implementation of it is certainly where the discussions come in, and it's obviously the member state's responsibility to implement the single failure criterion using their own experience and the safety guides, to the extent possible, to come to a consensus within the member state on how to apply it. But it is certainly an absolute requirement, yeah. Questions? No, yeah, up there.

You previously mentioned that the regulatory body accepts a safety margin, the margin concept. So what is the current practice of the regulatory body on the margin, in terms of percentage, with respect to a single failure in reactors?

Yeah, I think we're mixing up two concepts there. The single failure criterion doesn't really have an impact on the margin. In other words... actually, let me go back to this margin picture. Maybe that's not a good idea; let me try this here. There we are.
When we talk about safety margins, we're not talking about the single failure criterion at all. In other words, we're going to make these assumptions regardless of the single failure criterion; I have to be able to meet them even while assuming the single failure. So in effect it's an additional level of defense in depth, if you will, which we're adding. We're adding it on top of, for example, this margin imposed here by the regulatory authority between the acceptance criterion and the safety limit, to account for uncertainties and for good engineering practice. I have to meet this margin even while assuming the single failure criterion in the calculations. So does that answer your question? They are two different concepts, and one is not related to the other, because we want to have that additional level of conservatism; that's basically what it comes down to. Yeah. We have a question in the back. Can you pass the mic up, please?

I just wonder if you have a presentation about how the IAEA classifies systems and components.

We do, yeah. Safety class one, two, and three. I don't know what level of detail you're planning on going into, Marco, but I know you have a presentation on it. Yes. Okay, thank you. Tomorrow, yeah, definitely. Again, we wanted to introduce concepts this morning, and we'll let the real expert talk about it tomorrow. Okay. Question?

Well, on this single failure criterion, is there any discussion or statistical data about at how many plants, because of this single failure criterion, the second system coped for the safety system? I hope you understand what I am saying.

I understand you.

Around the world the single failure criterion is mostly applied, so many backup systems are there. Have we ever analyzed, or do we have any statistical data showing, that one system failed and the second one coped?

Not to my knowledge directly, but I know this is the kind of information that would be captured in the PSA model. We look at equipment reliability, and that information would be captured in that process and could then be fed back into the process to help us identify whether I properly captured the worst-case single failure, because equipment reliability, for example, might impact my thinking. It may or may not, but this is the kind of information I would expect to see in a PSA. I don't have any specific knowledge of statistics looking at that exact question, though. Any other questions?

Okay, so with that, I think we're done, actually right on time. We're ready for the lunch break; we'll come back at 2 p.m., back to this room, right, Barbara? 2 p.m. here, and the cafeteria is... Okay, so thank you very much. See you at 2.
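As a closing illustration of the single failure and margin discussion above, here is a minimal sketch. The two-train ECCS, the success criterion, and all of the numbers (including the stand-in for a real safety analysis code) are invented assumptions; the sketch only shows the shape of the check, namely that each postulated worst-case single failure must still leave enough equipment to meet the acceptance criterion, which itself sits below the safety limit.

```python
# Minimal, hypothetical sketch of the single failure check discussed above.
# All train names, criteria, and temperatures are invented for illustration.

ECCS_TRAINS = {"A", "B"}          # hypothetical two-train ECCS
TRAINS_REQUIRED = 1               # hypothetical success criterion: one train suffices

SAFETY_LIMIT = 1200.0             # illustrative limit, e.g. a cladding temperature in deg C
ACCEPTANCE_CRITERION = 1100.0     # illustrative regulator-imposed margin below the limit


def peak_clad_temperature(available_trains: set) -> float:
    # Stand-in for the real safety analysis code; returns made-up numbers.
    return 950.0 if len(available_trains) >= TRAINS_REQUIRED else 1500.0


def single_failure_check() -> bool:
    """Assume each train is lost in turn and confirm the acceptance
    criterion is still met with the remaining equipment."""
    for failed in ECCS_TRAINS:
        remaining = ECCS_TRAINS - {failed}
        if peak_clad_temperature(remaining) > ACCEPTANCE_CRITERION:
            return False
    return True


# The regulatory margin itself: the acceptance criterion sits below the safety limit.
assert ACCEPTANCE_CRITERION < SAFETY_LIMIT
print("Single failure criterion satisfied:", single_failure_check())
```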