Hello, everyone. Thank you for joining us today for our virtual meeting on the Extremely Low Probability of Rupture, or XLPR, Probabilistic Fracture Mechanics Code, Advanced Methods. My name is Matthew Homiak. I am the NRC's lead in the Office of Nuclear Regulatory Research for the XLPR program. And I'm joined by my counterpart today, Craig Harrington, from the Electric Power Research Institute. The purpose of today's meeting is to help new XLPR users by showing them some advanced methods. The meeting today is the last in the four-part technical seminar series that we've been sponsoring surrounding the public release of XLPR version 2.1. In the first seminar, we reviewed all the models that are programmed into XLPR. In the second seminar, we showed how to set up the inputs directly using the input set file and using the companion SIM editor. In the third seminar, we showed you how to run an XLPR simulation and look at the results. In today's seminar, we will be covering a range of what we would consider to be more advanced methods. This will be an extension of the prior seminars on inputs, running the simulation, and reviewing results. These are some additional tools that you'll want to keep in mind as you progress beyond initial training and familiarization with the code and into running your own analyses for your own problems. If you're not there yet, then no worries. You can consider these topics for awareness and later review when you're ready. Here's our agenda for today; we're currently in the introduction and opening remarks portion. We'll talk about some advanced methods related to inputs, cover sampling strategies, look at some advanced ways to extract results and look at the outputs, and then we'll cover ways you can run your simulations more efficiently, primarily using multiprocessing. Throughout, we'll have questions and remarks, and then we'll end the meeting with a few closing remarks. We're recording all these seminars for later viewing, so don't worry if you miss a detail. You can watch it again. You can view those by going to youtube.com and searching for XLPR and looking for our logo. You can also find them through the NRC's YouTube channel. The videos for the first two seminars are posted, and we've also posted the video from our public release meeting that was held back in April. I'd expect us to have the last two videos up within the next few weeks or so, and I'll send an announcement to all the XLPR users when that happens. This is an NRC Category 3 public meeting, which means the public is invited to participate by providing comments and asking questions throughout. We're using the WebEx platform to deliver the meeting today, and this is how we plan to take your questions. You can submit your questions and comments at any time using the Q&A feature. Shown here on the slide is a short review of how you can display that feature, depending on whether you're using WebEx in your internet browser or through your desktop client. At designated points, we may also invite you to ask questions verbally. If you'd like to ask a question then, please use the raise hand feature. We'll call on you and unmute your line. Today, our main presenter is Marcus Burkhart from Dominion Engineering. He was also a presenter on the XLPR models seminar, and backing him up is our usual team of experts. That includes Craig and myself, Cedric Sallaberry from Engineering Mechanics Corporation of Columbus, Nathan Glunt from the EPRI staff, and Marjorie Erickson from Phoenix Engineering Associates.
Giovanni Facco from the NRC staff is our WebEx host. You can reach out to him through the chat feature if you're having any difficulties with your WebEx setup. Right now, Giovanni is going to share a short poll with you all. This is the same poll as we shared last time, and it's just to help the presentation team here gauge where you're at at this point. So, please take a minute or so to complete that. We'd appreciate it very much, and don't forget to hit the submit button at the very end to enter your responses. Craig, while everyone is completing that poll, would you like to add anything? Thanks, Matt. I just want to express my appreciation to this team that has put these training sessions together, and to emphasize the fact that they will be available as all of our users work through the process of understanding XLPR and how to run it. Hopefully, these will provide useful guidance and input on that journey through this code. And as we've said many times, please, if you have issues or general questions as you use the code, please reach out to us. Let us know how it's going, what your experience has been, challenges and opportunities. We want to keep up with what you're doing and how it's going for you. So, we appreciate everyone joining the call today, and I'll turn it back to you, Matt. All right. Thank you, Craig, and thanks to everyone for responding to our poll. We just have a few references here in the beginning. These are pertinent to today's webinar, and we've shown these to you in the past, so there's really nothing new. With that, I'm just going to turn it over to Marcus to do the presentations. Thanks, everybody. So, now I'm going to go through an assortment of topics that cover various different advanced methods, and I'm grouping them by category. This first category will be focused on inputs-related topics. So, the first item is related to log normal distributions. One item that has led to some confusion in the use of XLPR is the definition of log normal distributions within GoldSim. Typically, log normal distributions are defined by mu, or the mean of the log-transformed data, and sigma, or the standard deviation of the log-transformed data. These are sometimes also referred to as log mean or log sigma. Instead, GoldSim defines the log normal distribution either based on the true arithmetic mean and standard deviation, or the geometric mean and standard deviation. In the XLPR input set, there's a flag with a value of zero or one that is used to select whether the true arithmetic or geometric mean and standard deviation are input to define the log normal distribution. So, on the slide, I show how the true arithmetic or geometric mean and standard deviation can be calculated from the more classical log mu and log sigma. These are the equations you see on the left. In the figures on the right, you can see how the probability density function for a log normal distribution changes if you switch from true arithmetic to geometric while leaving the mean and standard deviation inputs unchanged.
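As a rough illustration of the conversion just described, here is a short Python sketch of the standard relationships between log mu and log sigma and the true arithmetic and geometric mean and standard deviation. The function name and example values are illustrative only and are not part of XLPR or GoldSim.

```python
import numpy as np

def lognormal_from_mu_sigma(mu, sigma):
    """Convert the classical log-mean (mu) and log-sigma of a log normal
    distribution into the true arithmetic and geometric mean/standard
    deviation, using the standard log normal identities."""
    # True arithmetic mean and standard deviation
    arith_mean = np.exp(mu + sigma**2 / 2.0)
    arith_sd = arith_mean * np.sqrt(np.exp(sigma**2) - 1.0)
    # Geometric mean and standard deviation
    geo_mean = np.exp(mu)
    geo_sd = np.exp(sigma)
    return (arith_mean, arith_sd), (geo_mean, geo_sd)

# Example: mu = 0.0, sigma = 0.5 (arbitrary values for illustration)
arith, geo = lognormal_from_mu_sigma(0.0, 0.5)
print("arithmetic mean/sd:", arith)
print("geometric mean/sd:", geo)
```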
XLPR also gives the user the option to pairwise correlate sampled values for several of the inputs. Inputs that can be correlated include several probability of detection model parameters on the properties tab, as well as a few general material properties, initiation model parameters, and growth model parameters, which are on the left pipe, right pipe, welds, and mitigation tabs of the input set. At the bottom of these tabs, there's a box labeled correlations, which identifies which pairs of inputs can be correlated. The user can specify the strength of the correlation by inputting a rank correlation coefficient, which can have a value ranging between negative one and one. Automated error checking for input values is also performed by the input set. The allowable range of input values is shown in the leftmost column of the input set. This generally ensures that input values are physical, so for example, ensuring that the wall thickness is positive. Any values that are outside of the allowable range are then automatically highlighted in red in the input set. You see an example of that below. Some of these ranges are set up to be dynamic and can change based on other inputs. So for example, the allowable range for values in the EFPY input can be no greater than the value of the plant operation time input. So in the example below, you see that the value gets flagged. Here the range is from 0 to 60, since the plant operation time is set to 60, and the EFPY deterministic value may not be greater than that value. However, if you set the plant operation time to 80 years, then this would be an acceptable input value. The SIM editor also performs similar input range checks as those performed by the input set. This was demonstrated in seminar number two, which was focused on the inputs. In addition to checks for physical values, models implemented in XLPR may have additional ranges of validity or applicability. To check over which ranges the models are defined, you can look at appendix E of the XLPR user manual. Here, specific input limits are provided for each model. If any of these input ranges are violated, that module will issue an error that's captured in the run log. Details on model-specific errors are also provided in the user manual. Furthermore, module subgroup reports provide documentation on the range over which a given model is validated, as well as further details on the model validation itself. Although the module subgroup reports are currently not available, they will be released to users at a later date. There are also several tools available for comparing input sets. You may want to use such tools to determine what changes have been made to a given input set or to compare two different cases that you've run. Some Microsoft Office licenses include the Microsoft Spreadsheet Compare program, which is a separate program from Excel, and I've included a link here that gives a description of that program. There's also XL Compare, which offers a 30-day free trial version. Again, I've included a link so you can look into that program further if you're interested. You can also use conditional formatting or Boolean logic within Excel to identify any differences in input values between two different input sets. There are also many other options. For example, you could write a script that reads in values from two Excel files and then identifies any differences between the two.
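As one possible version of such a script, here is a minimal Python sketch that compares two input set workbooks cell by cell using openpyxl and reports any differences. The file names and the sheet name are placeholders; substitute the tab names from your own input set files.

```python
# Minimal sketch (not part of XLPR): compare two input set workbooks cell by
# cell and report any cells whose values differ.
from openpyxl import load_workbook

def compare_sheets(path_a, path_b, sheet_name):
    wb_a = load_workbook(path_a, data_only=True)
    wb_b = load_workbook(path_b, data_only=True)
    ws_a, ws_b = wb_a[sheet_name], wb_b[sheet_name]
    max_row = max(ws_a.max_row, ws_b.max_row)
    max_col = max(ws_a.max_column, ws_b.max_column)
    diffs = []
    for r in range(1, max_row + 1):
        for c in range(1, max_col + 1):
            va = ws_a.cell(row=r, column=c).value
            vb = ws_b.cell(row=r, column=c).value
            if va != vb:
                diffs.append((ws_a.cell(row=r, column=c).coordinate, va, vb))
    return diffs

# Placeholder file and sheet names for illustration.
for coord, old, new in compare_sheets("case_a.xlsx", "case_b.xlsx", "Left Pipe"):
    print(f"{coord}: {old!r} -> {new!r}")
```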
I'll now jump into a couple of topics related to sampling. There are several sampling methods available within XLPR. These sampling options are available to the user for both the epistemic outer and aleatory inner loops. So first, the user chooses between simple random sampling and Latin hypercube sampling. Then the user can decide if they want to apply importance sampling to one or several of the inputs. Additionally, the user has the option to decide if they want to apply a discrete probability distribution. So as the name implies, simple random sampling is the simplest of the sampling options. It is the easiest method for analyzing results, comparing results across runs, and calculating sampling uncertainty. Latin hypercube sampling improves sample space coverage versus simple random sampling, without increasing computation time or the complexity of the processing. Importance sampling can be used to target tails of distributions, helping estimate very small probabilities in a reasonable computing time. You won't typically start off running simulations with importance sampling enabled, but it may be activated after some preliminary sensitivity studies have been performed. Finally, discrete probability distribution results in samples that are always uniformly distributed over the sample space, but it results in fewer unique sampled values. This may be useful if the simulation sample size is limited. However, adding additional realizations may not improve estimates impacted by the distribution tails. Over the next few slides, I'll go over each of these sampling schemes in a little bit more detail and show you how to activate them in XLPR. For simple random sampling, all inputs are randomly sampled from their distributions. This makes simple random sampling easy to implement, easy to explain, and easy to analyze. However, a greater number of realizations is needed to achieve reasonably low sampling uncertainty. Latin hypercube sampling forces the sampled values to be spread out across an input distribution. For the same number of realizations, Latin hypercube sampling results in a lower sampling uncertainty than simple random sampling. It's also fairly easily analyzed. However, estimating sampling uncertainty is a bit more difficult than for simple random sampling. Generally, we recommend using Latin hypercube sampling for a majority of simulations in XLPR. Latin hypercube sampling better covers the distribution tails, which may have a more significant impact for low probability events. However, if you're using 10,000 realizations, the difference between simple random sampling and Latin hypercube sampling may not be very significant. These differences tend to be more significant for smaller numbers of realizations. As you can see in the figure below, for an example of 100 realizations, Latin hypercube sampling more consistently covers the sample space than simple random sampling. This results in a more converged result for the same number of realizations. So how do you switch between simple random sampling and Latin hypercube sampling? There are two sets of simulation settings, one for the epistemic outer loop and the other for the aleatory inner loop. To get to the epistemic outer loop simulation settings, you can click run on the menu bar and then simulation settings, followed by the Monte Carlo tab. Or you can click the setup epistemic sample size and random seed button on the XLPR global settings dashboard. To open up the aleatory inner loop simulation settings, you go to the model root, right click main model, and then go to the Monte Carlo tab. Or from the XLPR global settings dashboard, you can click the setup aleatory random seed button and then go to the Monte Carlo tab. I'll go over those again in our demo shortly. You will then see a check box with an option to use Latin hypercube sampling for either of these simulation settings. If the box is unchecked, simple random sampling will be used. If the box is checked, Latin hypercube sampling will be applied instead. Keep in mind there are inputs within the input set for the user to record whether or not Latin hypercube sampling is used for a given run. Although GoldSim doesn't directly read these input values, it is good practice to record the simulation setting options for a given run within the input set.
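To illustrate the coverage difference described above, here is a small Python sketch, entirely outside of GoldSim, that compares how many equal-probability bins each scheme actually hits for the same number of samples. The distribution, sample size, and seed are arbitrary choices for the example.

```python
# Illustration of why Latin hypercube sampling covers the sample space more
# evenly than simple random sampling for the same number of realizations.
import numpy as np

rng = np.random.default_rng(123)
n = 100

# Simple random sampling: draw quantiles uniformly at random.
srs_q = rng.uniform(0.0, 1.0, n)

# Latin hypercube sampling (one variable): one random point per
# equal-probability stratum, then shuffle the order of the strata.
lhs_q = (np.arange(n) + rng.uniform(0.0, 1.0, n)) / n
rng.shuffle(lhs_q)

# Count how many of the n equal-probability bins each scheme actually hits.
bins = np.linspace(0.0, 1.0, n + 1)
print("SRS bins covered:", len(np.unique(np.digitize(srs_q, bins))))
print("LHS bins covered:", len(np.unique(np.digitize(lhs_q, bins))))  # always n
```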
Importance sampling focuses a number of the samples in a specific region of interest. This results in a better estimation of low probability events, for example, if you want to focus in on the tail of a given distribution. It's noted that importance sampling is harder to implement and can make analysis of the data more difficult. A poor implementation can actually increase sampling uncertainty. XLPR accounts for any bias introduced by importance sampling when the final results are calculated, so that is not something that XLPR users need to worry about. So how does one turn on importance sampling? The user decides whether to apply importance sampling for each variable within XLPR. For each variable where you activate importance sampling, half of the samples for that variable are concentrated within a region about a user-specified quantile. This quantile can be different for each importance-sampled variable. The exact width of this region depends on the number of inputs that are selected for importance sampling. In general, we recommend that if importance sampling is performed, it is applied to only one or perhaps a few inputs. The more inputs that are selected for importance sampling, the more the impact of importance sampling for a given variable is diminished. Now I'll show you how to activate importance sampling for a given input. First, on the user options tab, you need to activate importance sampling either for the epistemic loop, the aleatory loop, or both. Then for the specific input you want to importance sample, you need to switch the importance sampling input from no to yes and then specify a region of importance. So for the example I'm showing here, I'm importance sampling input 2543, which is a multiplier on the proportionality constant A for direct model 1, and I'm applying the importance sampling around the quantile 0.995. As this variable is set to be sampled in the epistemic loop, I've also activated importance sampling for the epistemic loop. Discrete probability distribution discretizes the domain into a number of equally probable strata. The number of strata is a user-specified input, and there is a separate input for both the epistemic and aleatory loops. After the sample space is partitioned, the simulation is executed by applying the conditional mean of each stratum. So if five levels are defined, the quantiles are split into 0 to 0.2, 0.2 to 0.4, 0.4 to 0.6, and so on. Any sampled value within the quantile range 0 to 0.2 will then be set to the distribution mean over that range. If DPD is turned on for either the epistemic outer or aleatory inner loop, discretization is applied to all variables within that loop. So for example, if you wanted to turn on DPD for the aleatory loop, you would set input 0111 to 1, and then set input 0112 equal to the number of strata you want to use to partition the sample space for inputs within that loop. Note that the number of strata must be greater than 1 and less than the sample size for that loop.
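Here is a rough Python sketch of the DPD idea just described, not GoldSim's internal implementation: the quantile range is split into equally probable strata and every sample falling in a stratum is replaced by the conditional mean of the distribution over that stratum. It is illustrated for a standard normal variable with five strata; the distribution, sample size, and seed are arbitrary.

```python
import numpy as np
from scipy import stats

dist = stats.norm(loc=0.0, scale=1.0)
n_strata = 5
edges = np.linspace(0.0, 1.0, n_strata + 1)          # 0, 0.2, 0.4, ...

# Conditional mean over each stratum, approximated by averaging the inverse
# CDF over a fine grid of quantiles inside the stratum.
cond_means = []
for lo, hi in zip(edges[:-1], edges[1:]):
    q = np.linspace(lo, hi, 1001)[1:-1]               # avoid exact 0 and 1
    cond_means.append(dist.ppf(q).mean())
cond_means = np.array(cond_means)

# Any sampled quantile is then mapped to the conditional mean of its stratum.
rng = np.random.default_rng(1)
sampled_q = rng.uniform(0.0, 1.0, 10)
strata = np.clip(np.digitize(sampled_q, edges) - 1, 0, n_strata - 1)
dpd_values = cond_means[strata]
print(np.round(dpd_values, 3))
```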
Although XLPR includes both epistemic outer and aleatory inner loops, it is also possible to run a more classical single loop Monte Carlo simulation. If you want to sample all variables in the epistemic outer loop, first, in the input set, you need to change all sampled variables to epistemic. The Monte Carlo simulation settings for the aleatory inner loop can then be adjusted to run only one realization. This is done by checking the run one realization only box and specifying the realization number. If using Latin hypercube sampling, running a single loop Monte Carlo simulation in the epistemic outer loop allows for larger sample sizes than doing so in the aleatory inner loop. If you want to sample all variables in the aleatory inner loop, first you need to change all sampled variables to aleatory in the input set. Then in the Monte Carlo simulation settings for the aleatory inner loop, you check the box to use a different random seed for each realization of the parent model. You can then use any number of samples in the epistemic outer and aleatory inner loops to run the simulation. For example, setting the number of epistemic realizations to 100 and the number of aleatory realizations to 100 would then result in a total of 10,000 realizations, all of which are sampled in the aleatory inner loop. Thus, if using simple random sampling, the aleatory inner loop allows for larger sample sizes in a given run. For a deterministic single realization run, you want to run only one realization in both the epistemic outer and aleatory inner loops. This is done by changing the Monte Carlo settings for both the epistemic and aleatory loops. For the epistemic loop shown on the left, you'll want to set the number of realizations to one. For the aleatory loop shown on the right, you'll want to check the run the following realization only box. Furthermore, so that you know which input values are being used in your deterministic run, all inputs should also be set to constant within the input set. Keep in mind, there are inputs within the input set for the user to record the number of realizations, which should also be updated. For the aleatory number of realizations, setting this to two, which is enforced by the input set error checking, but then running only one realization is considered okay. Now, I'll jump into a quick demo within XLPR on the simulation settings. So first, I'll show you how to modify the Monte Carlo settings for the epistemic loop. One way to reach those is by going to the setup epistemic sample size and random seed button on the global settings dashboard. You then go to the Monte Carlo tab. We always execute runs using the probabilistic simulation option. Below that, you see the input for the number of realizations for the epistemic outer loop. If you'd like to run only one of those realizations, for example, if looking into specific results or debugging, you can then specify below that which realization you would like to run. So for example, if I have 100 realizations and I want to run realization number 10, I can do so that way. You then have the option to switch back and forth between Latin hypercube sampling and simple random sampling. As I mentioned, when this box is unchecked, simple random sampling is applied, and when the box is checked, Latin hypercube sampling is applied. You then have a dropdown to select whether to use random points in the strata or midpoints in the strata. I always use the random points within the strata when applying Latin hypercube sampling. You can then also specify whether sampling sequences should be defined using a specific random seed or not. If you specify the random seed, then the results are repeatable and can be recreated. At the bottom of the window, you see an estimate of the result size for the loop.
You also see this number go up and down as you increase or decrease the number of realizations. So as I take the number of realizations from 100 to 1,000, you can see that size increase, and as you run more or fewer realizations, you see the size increase and decrease accordingly. So there are two ways to get to the epistemic loop settings: either through the button I showed you here, or through run and then simulation settings and then going to the Monte Carlo tab. For the aleatory loop settings, you can either go to the setup aleatory random seed button and then Monte Carlo, or from the main model, you can right click the main model, go to properties, and then Monte Carlo. The settings in the aleatory inner loop look very similar to those in the epistemic outer loop. Again, there are two ways to get to these settings, as I already showed you. Here, the number of realizations is tied to input 0107 in the input set. Again, you can specify that a specific realization should be run. You can specify whether simple random sampling or Latin hypercube sampling should be applied, and you can specify the random seed for that set of realizations. One additional option that is available is to use a different random seed for each realization of the parent model. This means that for each epistemic realization, a new random seed is selected for the aleatory loop within the epistemic realization. So I will briefly pause for questions. Do you have any questions? Please feel free to submit them. You can use the raise hand feature here. We'll just take a short break to get everybody caught up. Of course, if you have a question that's not related to the presentation today, we could entertain that as well. Again, I'm now catching up on the Q&A, and it looks like all of the written questions so far have been responded to. Okay, I think I will go ahead and proceed with the presentation. There will be several additional points where we'll pause for questions, and so at those points, we'll go ahead and catch up on anything that we haven't responded to yet in the chat. So the next set of topics is focused on the results and outputs and advanced methods related to those. After each run, the GoldSim environment creates the run log. The user should inspect the run log for warnings and error messages. Warnings are also printed when certain models are activated; the user should just be aware of these. If there are any errors that are reported, the user should look at the run log to obtain more information. After a run is completed, a pop-up box asks the user if they want to view the run log. Another way to view the run log is to go to run in the menu bar and then select view run log. It is also saved in the same directory as the GoldSim file, and it is named GoldSimRunLog.txt. The run log captures a few other pieces of information in addition to warnings and errors. These include the file name, simulation start time, simulation run time, the number of epistemic outer loop realizations, and the epistemic outer loop random seed.
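As a simple post-run check, here is a Python sketch that scans GoldSimRunLog.txt, which as noted above is written to the same directory as the GoldSim file, and prints any lines mentioning warnings or errors. The keyword matching is a plain substring search and is not part of XLPR; adjust it to the wording in your own run logs.

```python
from pathlib import Path

log_path = Path("GoldSimRunLog.txt")
flagged = []
for lineno, line in enumerate(log_path.read_text(errors="ignore").splitlines(), 1):
    # Flag any line containing a warning or error keyword (case-insensitive).
    if "warning" in line.lower() or "error" in line.lower():
        flagged.append((lineno, line.strip()))

if flagged:
    for lineno, line in flagged:
        print(f"line {lineno}: {line}")
else:
    print("No warning/error keywords found in", log_path)
```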
In the prior seminar on running the simulation, Cedric talked about exporting results from GoldSim. In addition to using copy-paste for specific results, you can also export results to a specific Excel or text file. This is not done by default, but it can be activated in the properties for time history result elements in the model. To do so, use the export results to drop-down menu to select whether you'd like to export to an Excel or text file. In the export tab, you can then specify the file name, when to export, and, for the case of Excel files, where in the Excel file the results are exported to. You can either use this to export results after completing a run, or you can set this up prior to a run and then have the results exported when the simulation completes. Conditional results can also be generated by creating new results elements. Adding a new result element is only possible within GoldSim Pro. However, once results elements have been created, the GoldSim model file can be saved in player format and then viewed and run with GoldSim Player. New results elements can be inserted by right clicking and selecting insert element and then results. Then you select the desired result type. Of those shown in the figure, the time history result, distribution result, and array result are the ones most commonly used in XLPR. Note that the saved results in any GoldSim element can be displayed by left clicking on the green output port. This causes an output interface to pop up which displays the outputs of that element. If you then right click on a given output, you can view it as a time history result or a distribution result. It is noted that these outputs, if within the main model, only represent the last value that is calculated, that is, the value for the aleatory realizations of the last executed epistemic realization. If you want an added output to be calculated for all epistemic and aleatory realizations, you need to first get the calculated results out of the main model. This is something you need to do before running a simulation. Again, it is only possible to make these changes in GoldSim Pro. Once completed, the file can also be saved in the player format. To get the results out of the main model, you have to add that result to the main model interface. I'll go over this again in a demo shortly. From the model root, you right click main model, then go to properties, and then the interface tab. Additional output variables can then be added using the green plus sign under the output interface definition. From here, you can then browse to select which output to export, as well as what format that output should use: a time history, where the options are statistic history or realization history, or a final value, where the options are statistic or distribution. So if you're running a simulation with a large sample size, it may be difficult to extract all of the results, and in some cases you may want to view results on a realization-by-realization basis. GoldSim may display values only from the first thousand realizations at a time. However, a screening feature exists that allows you to view realization-specific results for other realizations. Screening is performed at the epistemic, or outer, loop level. A description of how to perform such screening is provided in volume two of the XLPR user manual, and I'll go over this in the demo as well. There are several ways to access the result screening options. One is by going to run and simulation settings, Monte Carlo, and then selecting results options. Alternatively, in a results element, you can click the edit properties option, and then Monte Carlo results options. You can also right click the results element, select properties, and then Monte Carlo results options. Once in the Monte Carlo result display properties, you can change settings for realization classification and screening. By default, screening is set to include all realizations.
Additional conditions can be added and applied for screening. This is done by clicking on add to add a new category. When you do so, a new category pops up, but at first it does not have any conditions. The condition fields can then be edited by double clicking on the corresponding cells in the new category. So in the example here, I've entered a condition to include only epistemic realizations less than 30. I then unchecked the all realizations category to exclude any other realizations. The gross percent and net percent numbers on the right tell you what percentage of the realizations fall within each category. So then if you open up a results element, it will only show you the results for the first 29 epistemic realizations, if that's all that you have screened in using these conditions. When it comes to screening of results, there are a few important things to note. Conditions can also be used to screen out results. So for example, here by checking category one and unchecking category two, only results with epistemic realizations greater than or equal to 30 will be displayed. When screening is activated, it is applied to the entire GoldSim file. As a reminder of this, the status text in the lower right corner of the main GoldSim window changes from result mode to result mode screened. Unchecking all categories may lead to GoldSim crashing, so it is recommended to always save once a simulation has been run, prior to doing any screening. Multiple conditions can also be combined using operators such as and. In the example shown here, I've screened in realizations greater than 10 and less than 31. Now I'll walk through an example of how you would screen results based on realizations that have had cracks initiate. I will work through this example again as part of a demo. So first, we would add an additional output to the main model interface. We right click the main model and select properties to go to the interface tab. Then we use the green plus sign under the output interface definition to add an additional output variable. We then point that new output variable to the occurrence of crack output, which is located in XLPR model, post processing, occurrence of crack, and occurrence of crack. We'll call this new variable is cracked. We then need to set the output type to final value and the output definition to statistic and mean. At the model root, we then add a data element that links to main model dot is cracked. We do this by right clicking the background and selecting insert element, inputs, and data. We then change the element ID to is cracked and the data definition to main model dot is cracked. Then when we set up the screening, we can access this newly added variable, is cracked. Using this, we can now screen in or out epistemic realizations that have had cracking by screening based on whether the values of this variable are non-zero. Note that if you're using multiple aleatory realizations within an epistemic realization, and any one aleatory realization associated with an epistemic realization has cracking, then all aleatory realizations associated with that epistemic realization are screened in. As I mentioned, I'll go over all these steps again in the demo. Once you've extracted results from GoldSim into your desired format, you can then perform post-processing to calculate any outputs not directly calculated in XLPR.
For example, some outputs that were considered as part of the NRC and EPRI piping system analyses included the leak before break ratio, which was defined as the ratio between the critical crack size and the crack size at detectable leakage, as well as the time from detectable leakage to rupture. For the post-processing, any desired software tool can be applied. Some examples that have been used in the past include Excel and Python. Other examples of post-processing that have been performed are sensitivity analyses. Sensitivity analyses can be used to better understand the relationship between model inputs and outputs. For example, sensitivity analyses can be used to identify the inputs that have the most significant impact on the results of a given model. The knowledge of the most important inputs can be used to identify inputs where more information collection could decrease uncertainty, or it could help inform additional simulations with importance sampling active on the inputs which drive the model results. Many statistical methodologies exist to determine which sampled inputs have the greatest influence on outputs of interest. Some include regression techniques, other statistical modeling techniques, or adjoint methods. Keep in mind that in order to apply these sorts of methodologies, sampled values for all inputs as well as the outputs of interest need to be saved. The figures on the right show an example of two sampled inputs, one of which is highly correlated with the occurrence of crack output, and the other of which is not highly correlated with the occurrence of crack output.
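Here is a hedged post-processing sketch in Python, using pandas and scipy on results that have been exported from GoldSim to a spreadsheet or text file. The file name and column names ("critical_crack_size", "crack_size_at_detectable_leakage", "occurrence_of_crack", "input_x") are placeholders for whatever names appear in your own exported results; the rank correlation is just one simple sensitivity measure among the methodologies mentioned above.

```python
import pandas as pd
from scipy import stats

# Placeholder file name for results exported from GoldSim.
df = pd.read_csv("xlpr_exported_results.csv")

# Leak-before-break ratio per realization, as defined above.
df["lbb_ratio"] = df["critical_crack_size"] / df["crack_size_at_detectable_leakage"]

# Simple sensitivity measure: Spearman rank correlation between a sampled
# input and an output of interest (here, the occurrence of crack).
rho, p_value = stats.spearmanr(df["input_x"], df["occurrence_of_crack"])
print(f"rank correlation = {rho:.3f} (p = {p_value:.3g})")
```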
Now I will jump into the demo on how to set up screening for results. So first, we need to set up a file that adds the additional output that we'll use for the screening, and this is something that we need to do before a simulation is run. We'll go to the main model, right click, select properties, and go to the interface tab. Then under the output interface definition, we'll press the green plus button to add an additional output variable. The value for occurrence of crack can be found under XLPR model and post-processing, and under occurrence of crack, we select occurrence of crack and press OK. We'll rename the output to is cracked, and then we verify that the output type is final value, statistic, and mean. Again, there are several options for the output type: you can select final value or time history, and for the output definition, you can select last calculated, statistic, or distribution, and then you can either specify mean or some other specific percentile. Then press OK and close. At the model root, we now add a data element that links to the main model dot is cracked item that we just added to the interface. We do this by right clicking the background, selecting insert element, inputs, and data. We'll then change the element ID to is cracked, and the data definition we'll point to main model dot is cracked. Once the variable is a valid variable name, it turns green. We then press OK. So now we can perform screening on the newly added output, is cracked. For an output, we go to properties, Monte Carlo results options, and now we have access to the realization classification and screening. We can add that new output for screening by pressing add, double clicking here, and selecting is cracked with a condition of greater than zero. We can then uncheck all realizations to only screen in realizations where cracking has occurred, or you can run with all realizations being shown, and then when you investigate the results, you can perform screening at that time. So to save time for the demo, I'll now switch to another GoldSim file where I've already completed these steps, run the simulation, and saved the results. It will just take a moment to open. Since GoldSim saves all results within the GoldSim file, if you run larger realization numbers or save a lot of the inputs and outputs, doing certain operations on the file can take a little bit of time. So now let's look at the comparison detection output. Currently, I do not have screening applied. You can see it just says result mode, and so we should see all realizations when this result table opens up. Here you can see all individual realizations for this run, and for this run, I ran 2,500 realizations in total. For the occurrence of crack output, if you look at the values, most of them are zero, indicating that no cracks initiated. There's a handful of them where cracks did initiate. So now let's use screening to more closely look at the realizations where cracking did initiate. We'll go to edit properties, Monte Carlo results options. We'll uncheck all realizations to only look at the realizations where cracking occurred, which is actually only a fraction of a percent for the inputs that I used, and press OK. So now it shows result mode screened. It should just take a moment, and then I'll be able to close this window. While we're waiting for GoldSim to continue thinking, I just wanted to point out that to access the export results options, you can do so here through the drop-down for export results to, and under that drop-down, you can select either a text or Excel file as the option for the export, and then close this. At the bottom right, we see that screening is now active, and now we see the six realizations where cracking has occurred. By scrolling here, you can more clearly see the occurrence of crack output, which goes from zero to one for each of those realizations, indicating when the crack has initiated. So now if we wanted to look into any further details for other values calculated in XLPR associated with those realizations, it's a lot easier to do so. As you can see, I closed the comparison detection output and screening remained active. So now we can go to the total leak rate history output, and the screening remains active, showing those six realizations where cracking has occurred. We can then scroll down again and see where and when leakage occurred for each of these realizations, as well as what the leak rate was for those realizations. So that's all for the screening demo. I'll open up the next file here in the background while I go back to the presentation. I'd like to pause again for questions to see if there are any additional questions that are unanswered before I jump into the next topic. Yeah, I think, Marcus, we're all caught up on the Q&A questions. If anybody wanted to raise their hand right now and ask a question on the topics up until this point, please go ahead. We can take your question verbally if you'd like. I'm not seeing anything, so I'll wait for another moment, and then if not, I'll roll on to the next topic. I'll start talking about certain ways that you can improve efficiency in XLPR. So by default, time histories or final values are saved for many of the GoldSim elements. In the player version, you can disable some results elements. In GoldSim Pro, you can also edit the settings of some of the elements to disable them as well.
So the options for disabling look like one of these, where you have various checkboxes that allow you to determine if you want to enable or disable those elements. To see what is being saved, you go to view and highlight saved results, then check the time history results and the final value results. Any value that is being saved is then shown in bold text below. You can also look at the result size in the simulation settings for the epistemic outer and aleatory inner loops. GoldSim containers allow some settings, such as results saving, to be changed recursively. So you only need to change the settings for a container, and those settings changes are then propagated to all contents of the container. Even if elements are disabled, the calculations are still being performed; it's just that the results aren't being saved directly in the GoldSim file once the simulation is complete. GoldSim is a 32-bit program, and it retains all saved information in RAM. This means that the 32-bit memory limitation also limits the number of realizations that can be executed in a given run. So by disabling outputs or intermediate variables, you reduce the amount of information being saved, and thus you can run an increased number of realizations in a given run. Now if you do exceed the GoldSim memory limits, there are several errors that might pop up. For example, you might be warned in the simulation settings with a triangle and exclamation point. This doesn't necessarily mean you've exceeded the GoldSim memory limits, but it means that you're getting close, and you should be careful. You might also see other errors that just don't make quite as much sense, and so I've included two examples of those: one where you just get a black box error and you can't read anything, and another where you see an application error and then need to terminate the program. Finally, GoldSim may also just crash during the run if you exceed memory limits. GoldSim saves the results at every time step for every realization. So in addition to disabling outputs or intermediate variables, you can also reduce the frequency at which values are saved. For example, you could decide to only save every other time step rather than every time step. To do so, in the model root, you right click on the main model and select properties. You go to the time tab. You can then adjust the output frequency to be every several time steps rather than every time step. An estimated result size is also shown at the bottom of that window to help you determine how frequently you may want to save those results. In the main model properties, that is, the properties for the aleatory loop, you also have the ability to change the time step settings. You get there by going to the setup aleatory random seed button, then going over to the time tab, and then here under the basic step you can change the time step in months. In XLPR, the default time step is set to one month. This is where you would vary the time step, for example, if you want to investigate temporal convergence for a simulation.
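As a back-of-the-envelope illustration of how those saving choices interact with the 32-bit memory ceiling mentioned above, here is a short Python sketch. It assumes roughly 8 bytes per saved value and is not GoldSim's internal accounting; the specific realization, time step, and output counts are illustrative only.

```python
def estimated_result_size_gb(n_realizations, n_time_steps, n_saved_outputs,
                             save_every=1, bytes_per_value=8):
    """Rough estimate of saved-result size, assuming one value per saved
    output per saved time step per realization."""
    n_values = n_realizations * (n_time_steps // save_every) * n_saved_outputs
    return n_values * bytes_per_value / 1e9

# Example: 10,000 realizations, a 60-year simulation at 1-month steps (720
# steps), 50 saved outputs. A 32-bit process has roughly a 2 GB ceiling.
print(estimated_result_size_gb(10_000, 720, 50, save_every=1))   # save every step
print(estimated_result_size_gb(10_000, 720, 50, save_every=2))   # every other step
```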
Another thing you can do to decrease runtime is to run XLPR in parallel, and I'll go over the steps to do this in another demo shortly. With GoldSim Pro, up to four GoldSim slave processes can be run in parallel. If you have the distributed processing module, then there is no limit to the number of slave processes that can be run. First, you'll want to run the code without distributed processing and with a small number of realizations. The fastest option is to perform this with a single aleatory and single epistemic realization, and I've shown here one realization for each loop. This is to confirm that all input values from the input set have been updated. Although this should have been done automatically, we've seen a bug where inputs were not updated if going directly to running XLPR in parallel. GoldSim has been made aware of this issue. Then when we're ready to run the simulation using distributed processing, we can do so on up to n minus one slave processes, where n is the number of cores in the computer's processor. So for example, if you have eight cores and you have the distributed processing module, then you can run up to seven slave processes. That's what I'll be showing in the demo. The slave processes can be started using the Windows Run utility, which you access by pressing the Windows key and R on your keyboard. Inside the Run utility, you can then enter a command pointing to the location of the GoldSim executable with a dash s appended to the end, to specify that a GoldSim slave process should be run. Each time this command is executed, one slave process is started. These steps can then be repeated for as many slave processes as you'd like to run, be it four or seven or however many. Once the slave processes have been started, we then need to connect the master GoldSim file with the slave processes. You do this by selecting Run and then Run on Network. You can then connect the master to the slaves. For slave processes on the same computer, you can use the localhost address to point to the slaves. Once all slaves have been added to the list, you can click an update status button, which is behind this text box here, to confirm the link between the master and the slaves. You can see that button here. Parallel execution is only applied to the epistemic outer loop. Another thing that you have control over is the number of epistemic realizations that are passed to a slave at a given time. Adjusting this number can also help improve runtime. Too few at a time, and more processing time is spent transferring data back and forth between the master and the slaves. Too many, and you start to lose some of the benefits of parallel execution, plus the data transfer takes a lot longer for each chunk of realizations. As a rule of thumb, we would recommend somewhere between a hundred and a thousand realizations per transaction. In the example shown here, I've selected 500 realizations at a time. Then once this is all set up, to start the simulation, you press the Run Simulation button.
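As a small convenience, here is a Python sketch for starting several GoldSim slave processes at once, equivalent to repeating the Windows Run command described above. The executable path is a placeholder for wherever GoldSim is installed on your machine; the dash s switch for a slave process is the one described in the presentation.

```python
import subprocess

# Placeholder installation path; adjust to your own GoldSim install location.
GOLDSIM_EXE = r"C:\Program Files (x86)\GTG\GoldSim\GoldSim.exe"
N_SLAVES = 7   # e.g., cores minus one on an 8-core machine

for _ in range(N_SLAVES):
    # Launch one GoldSim slave process per iteration, same as entering the
    # executable path with "-s" in the Windows Run utility.
    subprocess.Popen([GOLDSIM_EXE, "-s"])
print(f"Started {N_SLAVES} GoldSim slave processes.")
```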
So now I'll go through those steps again as part of the demo. First, we want to run the simulation without multi-processing and ensure that all inputs have been updated. In this case, I'll do a quick run with one epistemic and one aleatory realization, and we won't need to update the input set. So I go to the setup epistemic sample size and random seed button and verify that one realization is set. We're good there, so press OK. Then for the setup aleatory random seed button, go to Monte Carlo. I only want to run one realization, so I select one here and then press OK. We then go to run the model: I go to run and then run model, using the run controller. Now it's importing data from the input set, and it gives us the option to display the run log file. For this case, I'm not going to bother with opening that up, but if you wanted to, you would press yes. If you close out of this and then do want to look at the run log file later, you can select run and then view run log. All right. So now that we've done this, we're certain that the inputs have been updated correctly, and we will go ahead and clear the results and return to edit mode. Now we'll update the input set with the number of realizations that we want to run. In this case, we'll do 2,500 epistemic realizations and an epistemic random seed of one, two, three. We'll do one aleatory realization, but I need to enter two in the input set. If I set this to one, the dynamic highlighting shows this in red as an invalid value, so we'll go ahead and change this back to two. And then the aleatory random seed we'll set to four, five, six. Once we've made these changes, we'll go ahead and hit save. Now we need to update our simulation settings for both the epistemic and aleatory loops. First, for the epistemic loop, we'll do 2,500 realizations, we'll use Latin hypercube sampling, and we'll use a random seed of one, two, three. For the aleatory loop, we just want to run the one aleatory realization, and for that one, we'll use a sampling sequence of four, five, six, and press OK. At this point, I like to save the model to ensure that all of my changes to the simulation settings have been saved, so I go to save. Then I want to spin up the number of GoldSim slaves that I want to run. I'll just move GoldSim here, and then I'll press the Windows key and R to open up the Run utility. So here you have the path to the GoldSim executable with a dash s for slave. Press OK, and that starts up one GoldSim slave. I'll then repeat that process until I have a total of seven GoldSim slaves started up: two, three, four, five, six, seven. Since you can see the communication output from each of these slaves, I usually like to tile them so that I can see the output from each one, or at least partially see it. Then we go to run, run on network, to set up the settings for the multi-processing. If you haven't already added the addresses for the slave processes, you can do so by pressing the add button. I'll remove one of these and then demonstrate: first add, and then for slaves that are on the computer that you're running GoldSim on, you can use the localhost address. To connect to the slaves, you can click one of these check boxes. Each time I check a box, I connect to an additional slave, and the status tells me which slave I've connected to. I can then press update status to make sure that the status for each of the slaves is up to date. I can then set the number of realizations to process per slave transaction. For this example, we'll just use 100. Then here you have a drop-down option for saving files on the slave computers; you can save all files, auxiliary files, or none. I'm going to go ahead and select none. Once you've done that, you're ready to start the simulation, so you press run simulation. You can then see that the slaves start receiving files and start loading the model files, and here you can also see that the connections have been established and files are being transferred. You then see which realization each slave is working on. Then when the simulation starts running, the simulation timer will also start up, as shown here, and you can also see the gears turning to help verify that the slave is performing calculations.
So that's pretty much it for distributed processing. Now I'm going to switch back to the presentation. That's it for my topics, so I'm going to go ahead and turn it back over to Matt. All right. Thank you, Marcus. Did anybody have any questions there on the last portion about efficiency gains in XLPR, or any of the topics, actually? We can do Q&A. I know we're all caught up on the questions here. All right. So, analyses are run with larger sample sizes with some modifications, and I went over those modifications as part of the presentation. They were focused on enabling and disabling specific outputs and intermediate variables, as well as adjusting how frequently certain values are saved. And so by reducing the number of elements that are saved per run, you can then execute greater numbers of realizations in XLPR. Thank you, David, for your question. Anybody else have any other questions about the presentations today or any of our webinars, or just general questions about the XLPR code? Yeah, certainly the presentations today were on more advanced methods, and most folks, I would guess, probably haven't gotten that far yet, so I'm not surprised we don't have a ton of questions on the presentations today. And we've also provided a lot of information over the last couple of weeks now, so it's a lot of information to take in. But we've gone through it here and we're recording this, so you can have it handy for later viewing to re-review any of those topics. All right. I'm not seeing any other questions come in, so I'll go ahead and take us to the closing remarks. So in today's meeting, we covered some of the advanced features of the XLPR code, and these built upon topics introduced at our previous seminars. Specifically, we covered advanced topics related to setting up the inputs, selecting appropriate sampling strategies, screening and creating custom results, and maximizing the efficiency of XLPR simulations by altering the outputs saved and using multi-processing. So with this being our last seminar, it may seem like it's the end, but I think from the NRC and EPRI perspective, this is just the beginning. We are currently working on setting up an XLPR user group, and some of the framework for the user group has already been established, so I'll give you a few details here. It will be fee-based and offered under a separate license agreement, and it will provide access to things like the source code and our entire library of the XLPR software quality assurance documentation, which is north of 3,000 pages. It's also going to allow for external use of the code, meaning you can use it to do consulting for others. There are other features as well that we haven't determined yet, but some things we have in mind are real-time bug reporting, other services like user support, training like we've been doing in these meetings, access to beta releases, a user forum, those kinds of things. So those are all in the works right now, and we're going to use some of the feedback we've been getting in these webinars to help establish that. We'd also like to send a survey in the next month or two to all our XLPR users to get some feedback on those kinds of things and the features we may want to have included in the user group. So please look out for that survey. It will be coming soon, and we'll be doing other XLPR communications as well, so look out for those. Craig, did you want to add anything else on that? No, I think you've covered that well.
We are working to put that together and have it available, and we'll try to structure it in a way that will most appropriately suit the needs and interests of our audience of users. So we look forward to continuing to interact with everyone using XLPR. Okay, I'll pause here. Are there any questions on the user group before we take us to the end? Okay. Well, if you have any questions, of course, you can reach out to us here at the usual contact information. I want to thank everyone again for their time, and after the meeting ends, if you'd like to provide feedback to the NRC on this meeting and how it went, you can go to the nrc.gov public meeting schedule, find this meeting listed there, and click on a feedback form and submit that to us. We'd appreciate it. So just some final reminders for you. Again, I suggest that you complete the introductory training, which is available in the training materials included in the XLPR version 2.1 release package. You're welcome to re-watch, of course, any of these videos that we're posting up on YouTube. And then, of course, like we said, look for our future announcements and information on the user group. I want to thank everyone again for staying with us through these webinars, and we look forward very much to engaging with you in the future on XLPR. Thanks again, and with that we'll adjourn.