Welcome to the second lecture of today. We will have a closer look at value of information analysis and at decision analysis types. When we think of the scheme we have for this training school, we are now taking the part where we have models and where we are aware of our actual performance and reality. What I will talk about are the types of value of information which have been stated in the early works on value of information, going back to Raiffa and Schlaifer. I will talk about analysis types, that is, the extensive and the normal form of analysis. It is a form of decision analysis, specifically pre-posterior decision analysis. Then we look at decision rules, and we have a look at a few examples where we apply the analysis types and think about how to simplify the calculations. This is done on an exemplary basis. You may have already gone through the lecture slides. I will probably not be able to finish on time at 12, but we have the lecture at 1 with Karl Meilings and a live connection to Matteo Pozzi in Pittsburgh in the US, so we are not going to postpone that lecture. We will probably have to split up this lecture and take the rest in the afternoon. OK, so when we look in the book of Raiffa and Schlaifer, we find the value of information analysis described there with four elements. Remember that on the first day we talked about value of information analysis and identified it somewhere in these sheets; here, on the first page, we developed this. This is what we talked through, and then I introduced this sketch, which is basically how the value of information analysis is described in Raiffa and Schlaifer. And I think this was a very important point yesterday: the information we are dealing with, the SHM information, is characterized by its type. Data, right? It's the type.
That goes to what it means for the structural system performance; more specifically, its precision or accuracy or uncertainty, and its cost. Over the last years we have been adapting this approach to the value of SHM information; here it should read SHM. We may have the case where we have no SHM information, and then we have the choice of our actions and the chance of the life-cycle performance. With SHM we get additional information, which may be described with an information acquirement strategy, or SHM strategy, and the chance of the outcomes. So this is the adaptation of this decision tree to the value of SHM information. What we have basically introduced here is the decision whether to perform SHM or not to perform SHM; this choice is very clearly added here. And this decision tree corresponds basically to this one. We have also explicitly illustrated our prior decision analysis, which is here. This already implies, as we will see a little later, that we are after the expected value of information, because this is a pre-posterior decision analysis and this is a prior decision analysis. If we subtract the expected benefits of having SHM information and of not having SHM information, say the B1 here and the B0 here, then we are calculating the expected value of information. But we will see that there are other value of information types. So this is what I just introduced: the value of information is B1 minus B0, where B0 corresponds to the expected benefit given the optimal action in the prior decision analysis. If I take this decision tree and formulate it mathematically, very close to how Raiffa and Schlaifer formulate it, then it is the expectation operator for the benefits associated with the optimal action, taken in regard to the life-cycle performance.
And this is how the optimal action is defined mathematically: it is the a which maximizes this expression. I can also write an equation for this part of the decision tree. Here I also take the expectation in regard to the life-cycle performance, but it is the posterior expectation: I am using the SHM information, the outcomes. Then there is an expectation in regard to the outcomes of our SHM. Again it is with the optimal action, but now, in general, also with the optimal SHM strategy. That is my B1: I may have different SHM strategies, like it is shown here, and I would identify the optimal strategy. That gives my B1, and the difference is the value of information. OK, another aspect: the value of information calculated here is an absolute value, but I could also relate it to B0; then I have a relative value of information. So this is how we start out. I will try to make this clear in the lecture, and we will work through the meaning step by step and try to identify some ways of easing these calculations. In this notation alone, we can hardly see what it means and how we can ease the calculations. So let's introduce the value of information types. We are basically after an expected value of information analysis, more precisely an expected value of sample information analysis, abbreviated EVSI. A value of information analysis is the quantification of a utility gain; here it is the utility gain between pre-posterior and prior decision analysis. We could also, and this has been introduced in Raiffa and Schlaifer, calculate the difference in the utilities, the utility gain, where we connect the posterior and the prior decision analysis. And sample information refers to information with finite precision: there are uncertainties associated with this information.
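To fix the notation, the quantities just described can be sketched as follows; here b denotes the benefit, a the action, e the SHM strategy, X the life-cycle performance and Z the SHM outcome (the symbols are assumed for this sketch, in the spirit of Raiffa and Schlaifer's notation):

```latex
B_0 = \max_{a} \, \mathrm{E}_{X}\!\left[\, b(a, X) \,\right]

B_1 = \max_{e} \, \mathrm{E}_{Z \mid e}\!\left[\, \max_{a} \,
      \mathrm{E}_{X \mid Z, e}\!\left[\, b(e, a, X, Z) \,\right] \,\right]

\mathrm{EVSI} = B_1 - B_0
```

The inner expectation in B1 is the posterior one, conditional on the outcome Z; the outer one runs over the outcomes themselves, which is what makes this a pre-posterior analysis.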
Taking these concepts, in what situation would we calculate a conditional value of sample information? Does anybody have an idea in what situation we would calculate a conditional value of information? Yes: when we have the data, then we can do a conditional value of information analysis. But then we only see afterwards, a posteriori, whether it was worth it or not to acquire the information; we cannot influence the information acquirement and the strategy at that stage any more. That is why we are rather aiming at an expected value of sample information analysis here. Another distinction, which is in the textbook of Raiffa and Schlaifer, is the distinction between sample and perfect information. Perfect information is infinitely precise information: there are no uncertainties associated with it. Why could it then be useful to calculate the value of perfect information? What do we get if we calculate it? Pardon? Yes, we are optimizing this, right? We get boundaries. Yes, and exactly what boundary? Yes: we get an upper boundary of the expected utility gain, or expected benefit gain. OK, so this is to illustrate the two types. For the expected value of information, I am asking: will the information acquirement be cost efficient? It is a pre-posterior decision analysis. And for the conditional value of information, which goes to the first two points here, I am asking: has the money spent for acquiring the additional information been cost efficient? So it comes after the information. OK, let's come to an example, and we will go through this example a few times. Let's think of a wind turbine. There is a lot of control data retrieved in operation from the machinery, and the control data revealed that there is, or may be, a problem with resonance.
It is estimated that with a probability of 20% there is a resonance problem. So what does that mean for a wind turbine? Resonance, any idea? If a wind turbine is in resonance, what does that mean? Where is the excitation? Pardon? Yes: if there were no damping, the amplitude would become infinite. So it is very dangerous. But where is the excitation here? The blade passing? Yes, it is basically the rotor, and then it is the blade passing frequency and the multiples of the blade passing frequency. Yes. And what is excited? The structure, exactly. Fine. So when we normally design a structure, we just keep the structural modes apart from the excitation frequencies. Can we do that for a wind turbine? Is it that simple? The point here is the rotor revolutions: the rotor speed is varying, so we have varying excitations, and we cannot keep the natural frequencies out of the excitation range; it needs to be passed. This is the illustration: here we have the rotor revolutions, and the rotor excitations at 1P, 3P and 6P, and these excitations are varying. Then we have the first natural frequency of our structure here. The important thing is the operation range, this range of rotor revolutions here: there is a cut-in wind speed and a cut-out wind speed, and the rotor revolutions associated with them. In this operation range, the turbine should not be in resonance. That is the important thing. But when the energy production starts, the rotor needs to pass through the first natural frequency. OK. So it is very important to know the first natural frequency exactly. It is also true for the higher natural frequencies, but they are not so critical.
So what happens if we are passing through here and we are operating just a little higher, where the excitation is a little higher and the rotor revolutions are very near to the starting point here? Is there resonance or no resonance? There will be resonance; it will be amplified. Yeah, it is not so straightforward. If the excitation and the natural frequencies just about coincide, the system is in resonance; and then the rotor speed increases, so the frequencies are separated, but the system may still not be completely out of resonance. This phenomenon is called the Sommerfeld effect. And actually, there has been a wind turbine where this was a problem, but nobody knew, until there were very precise analyses, and also an SHM system on it, so that it could be found out that the turbine was already in production but there was still resonance. And there was even a problem with the rotor revolutions: they correspond to a wind speed, and this area here was around the mean wind speed. So very often this turbine was in a resonance state; not in a state where the frequencies exactly coincide, they were a little apart, but due to the Sommerfeld effect the system needs an energy input to get out of the resonance state, and if that does not happen, the turbine stays in resonance. So that is the background for this task. You have basically two action options. One is to do nothing. The other one is to modify the operational range: you basically shift this line here a little to here, and you do not start power production; you just wait, and the rotor is only released if there is enough wind speed so that the rotor revolutions will be somewhere here. But then you lose energy production. And this is reflected in the system states we have here: if there is no resonance, we have a benefit of 100, and if there is resonance, there is a risk of minus 200.
If you modify the operational range, it is safe, but we have a lower benefit, a lower energy production. And of course we can do something about knowing the first natural frequency very precisely. You can calculate the first natural frequency with a model. For this type of structure, you will be able to do this with a beam model; you just need to have the stiffnesses and the masses right. For the first natural frequency, you can take almost any finite element model. If you want the higher frequencies, it is more complicated; then you basically need a shell model, and the mass distribution becomes more complex, because when the rotor is turning and the nacelle is yawing, the mass distribution will be different. But for the first natural frequency, almost any model where the masses and the stiffnesses are properly represented will give you a relatively good estimate. Still, you have an uncertainty here for the first natural frequency. So you can do SHM, meaning you can do an experimental modal analysis, and from experience you know its precision. Can you see the last one? Yeah. Just go one back, I think, for clarification, so that it is clear for everybody. I think you should relate this horizontal line, the natural frequency, and the associated uncertainty. This is directly related to the statement that we have a resonance problem with probability 20%. You can imagine that this horizontal line is somehow distributed, probably with a density function, and the exceedance probability that we are in the critical range is 20%. That depends on the width of this density function. And now, with the updating, we want to make this more precise. I think that is important, because that is the a-priori uncertainty we have about whether we have a problem or not. And I think in practical considerations this is very critical. Yeah, we can think of it like this.
At least, as I understand it, it is very nice and sound. Yeah, well, we can think of it like this. OK, thank you. OK, so we can do a modal analysis, and the modal analysis indicates the proper system states with high probabilities, but there is also a probability of a false indication. And it has a cost of 10. So we can readily do a prior decision analysis. We have the probabilities of our system states, and we have the consequences; they would be here. By the calculus we have just seen, we can calculate the expected benefits here and here, and we take the maximum, because we are maximizing the benefit. So we take the 50 here; this is our B0. OK, so this was this part here. Now we can do the same analysis, but we say we have the additional information z1, an indication z1 that the system is in state x1; this is the posterior decision analysis. It is the same decision tree, but now we have z1, and we would calculate here updated probabilities of the system states. The same we can do with z2, and that together would be the pre-posterior, or even the complete value of information analysis: we have the pre-posterior part here and the prior part here. If we do this decision analysis, we can put in the benefits as we had them on the slide, and the costs. Here we have the cost of the action a1, which was 20. And we are doing an experiment, the modal analysis, so we have costs of 10 throughout here. When we take the action a1, we add the 20; that is accumulated, so we have the total consequences, benefits and costs, here, and we just work through our decision tree. The updating here with z1 gives the posterior probability of the system state x1, so the probability of x1 given the information z1, and that has been calculated to 0.96. Then we have the complementary event with 0.04. And it is the same if we take the action a1; the probabilities are the same.
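As a small numerical sketch of this updating step: the prior probabilities (0.8 no resonance, 0.2 resonance) are from the example, while the indication probabilities P(z|x) below are my assumption, chosen so that they reproduce the quoted posterior of 0.96 and an indication probability P(z1) of 0.75.

```python
# Prior probabilities of the system states (from the lecture example):
# x1 = no resonance, x2 = resonance.
p_x = {"x1": 0.8, "x2": 0.2}

# Assumed indication model P(z | x) for the modal analysis; these values
# are not stated on the slides, but they reproduce the quoted results.
p_z_given_x = {
    ("z1", "x1"): 0.90, ("z2", "x1"): 0.10,
    ("z1", "x2"): 0.15, ("z2", "x2"): 0.85,
}

# Total probability theorem: P(z) = sum_x P(z | x) P(x).
p_z = {z: sum(p_z_given_x[(z, x)] * p_x[x] for x in p_x)
       for z in ("z1", "z2")}

# Bayes' rule: P(x | z) = P(z | x) P(x) / P(z).
p_x_given_z = {(x, z): p_z_given_x[(z, x)] * p_x[x] / p_z[z]
               for x in p_x for z in ("z1", "z2")}

print(round(p_z["z1"], 6))                  # 0.75
print(round(p_x_given_z[("x1", "z1")], 6))  # 0.96
print(round(p_x_given_z[("x2", "z1")], 6))  # 0.04
```

This is exactly the point made about yesterday: the indication probability P(z1) in the denominator depends on the prior probabilities of the system states.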
If we have the indication z2, then we have different posterior probabilities here. And then we need the probability of the indication z1, and that is the point from yesterday: the probability of the indication z1 depends on our prior probabilities of the system states. Here we do it simply with discrete probabilities; yesterday we did it with continuous probability density functions. And then, here again we have the decision nodes. We take the maximum of these two, that is 40, and the maximum of these two, that is 78. Then it is 0.25 times 40 plus 0.75 times 78, and this gives us the B1, which is 68.5. So this is mainly to establish the decision tree; it is an example of the decision analysis which we will use to see something of the concepts introduced at the beginning. So this was the decision tree for the pre-posterior decision analysis. Here we have basically this tree: this is the pre-posterior part, and we have also added the prior part here. We see that the expected benefits here are higher than here. So this is the decision node, and the decision is that the experimental modal analysis should be performed, as we have higher expected benefits. We take the maximum of this and of this. We should also recognize the branches which lead to the optimal utilities, or optimal expected benefits. For the prior decision analysis, it is: we need to modify the operational range. That means we produce less energy, but that is optimal, given that there is a 20% chance of a system state we do not want to have. So this is the branch where this 50 comes from; that is very obvious. Here we also have a decision node, and if you have a decision node, it is either this branch or this branch. And we can read off the optimal actions: if you get the indication z1, the optimal action is a0.
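The whole extensive form calculation fits in a few lines. As a sketch: the prior probabilities, the consequences (100 / minus 200 for doing nothing; a net 50 for modifying the operational range, i.e. a 70 benefit minus the action cost of 20) and the SHM cost of 10 are taken from the example, while the indication probabilities P(z|x) are my assumption, chosen to reproduce the posteriors quoted on the slides.

```python
p_x = {"x1": 0.8, "x2": 0.2}                             # x1 = no resonance
p_z_given_x = {("z1", "x1"): 0.90, ("z2", "x1"): 0.10,   # assumed values
               ("z1", "x2"): 0.15, ("z2", "x2"): 0.85}
# Benefits b(a, x): a0 = do nothing, a1 = modify the operational range
# (70 energy benefit minus 20 action cost = 50, independent of the state).
b = {("a0", "x1"): 100, ("a0", "x2"): -200,
     ("a1", "x1"): 50, ("a1", "x2"): 50}
c_shm = 10                                # cost of the modal analysis

# Prior decision analysis: B0 = max_a E_X[b(a, X)].
B0 = max(sum(p_x[x] * b[(a, x)] for x in p_x) for a in ("a0", "a1"))

# Extensive form pre-posterior analysis: update the probabilities,
# maximize over the actions, then take the expectation over the
# indications.
p_z = {z: sum(p_z_given_x[(z, x)] * p_x[x] for x in p_x)
       for z in ("z1", "z2")}
B1 = sum(p_z[z] * max(sum(p_z_given_x[(z, x)] * p_x[x] / p_z[z] * b[(a, x)]
                          for x in p_x)
                      for a in ("a0", "a1"))
         for z in ("z1", "z2")) - c_shm

print(round(B0, 6), round(B1, 6), round(B1 - B0, 6))  # 50.0 68.5 18.5
```

The branch values 78 (for z1) and 40 (for z2) and the expected value of information of 18.5 come out exactly as on the slides.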
And this is the branch which provides this result here. If you get the indication z2, then the optimal action is, again, to modify the operational range. Here we do not need to modify the operational range, because the indication is that the structure is OK; and here the indication is that the structure is not OK, so we need to modify the operational range. So one could say: we should not have to calculate the complete decision tree, we could somehow make it a little easier. But before we come to that, let's have a look at our types of value of information analysis. In this decision tree for this example, we find the expected value of information as the difference between B1 star and B0 star. This is the expected value of information, which we are primarily after. For the conditional value of information, we always use the maximum expected benefit of the prior decision analysis, the B0 star. Then we can calculate the conditional value of the information z1: given the indication z1, we can calculate the conditional value of information. Or we can calculate the conditional value of information given z2. So that is the illustration of the different types of value of information, and with this decision tree we can really calculate them. The value of information given the indication z1 is 28; here the indication is that the structure is OK, and this has a positive value because we do not have to do anything. That is the mechanism. But if we get the indication z2, then it has a negative value, because we have to modify the operational range. When we calculate the expected value of sample information, it is still positive, 18.5. That has to do with the fact that the probability of the indication that the structure is intact is relatively high. And we should be aware that here we have just one SHM strategy.
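The conditional values just quoted can be checked the same way; this sketch reuses the consequence model from the example and the same assumed indication probabilities as before.

```python
p_x = {"x1": 0.8, "x2": 0.2}
p_z_given_x = {("z1", "x1"): 0.90, ("z2", "x1"): 0.10,   # assumed values
               ("z1", "x2"): 0.15, ("z2", "x2"): 0.85}
b = {("a0", "x1"): 100, ("a0", "x2"): -200,
     ("a1", "x1"): 50, ("a1", "x2"): 50}
c_shm = 10

# Prior optimum B0 (modify the operational range: 50).
B0 = max(sum(p_x[x] * b[(a, x)] for x in p_x) for a in ("a0", "a1"))

def cvsi(z):
    """Conditional value of sample information, given the indication z."""
    p_z = sum(p_z_given_x[(z, x)] * p_x[x] for x in p_x)
    posterior_benefit = max(sum(p_z_given_x[(z, x)] * p_x[x] / p_z * b[(a, x)]
                                for x in p_x)
                            for a in ("a0", "a1"))
    return posterior_benefit - c_shm - B0

print(round(cvsi("z1"), 6))  # 28.0  -> positive: structure indicated OK
print(round(cvsi("z2"), 6))  # -10.0 -> negative: range must be modified
```

Weighting these conditional values by P(z1) = 0.75 and P(z2) = 0.25 recovers the EVSI of 18.5, which is exactly the relation between the conditional and the expected value of sample information.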
We just have this modal analysis; we did not vary any factor in the modal analysis, and we did not consider any other approach. So our e1 is basically associated with just one experiment e1. But there could be another experiment e2, and we could have another branch e2 here, also with expected benefits. And of course, the conditional value of sample information here is also conditional on the experiment e1. We could have another experiment e2, and also more outcomes, like we have seen in the previous examples. OK. So these were the types of value of information analysis. What am I after here? OK, I would like to introduce now the decision analysis types. That goes to the way we work through the decision tree. You may have noticed that we have gone through this decision tree from this side, calculating from this side to here; this was the last result. How is it done in practice? When we think of a practical situation, we have a structure and we have our measurement equipment, and then we go there. What happens? We go there, we measure, we get an indication. So we are coming from this side. That is the practical way. And we will need to know, if we have the indication, what we should do: we need decision rules. OK. The decision analysis also knows about this. We have been working through the example in the extensive form, where you need to work through all the branches; you cannot miss a branch, basically. We have been writing it like this, but we could also write it in the normal form. Here we see that the expectation operators are exchanged, and here we have a conditional expectation. Again we are after the optimal SHM strategy. And there is also a difference: here we have the optimal action, but here we are looking for the optimal decision rule. And this is what we just found out: I need to know, for each indication, what I should do.
So there is a decision rule which, in dependency of the indication, leads to the action. This is the decision rule. OK. This is very close to what we find in Raiffa and Schlaifer, and it was very hard, at least for me, to imagine what this really means; probably it is also hard for you. So let's go through this. We have the extensive form, where we work through the tree in this direction. Let's now do the extensive form analysis by applying the introduced formulas to the example; then it looks a little better. Basically, we describe here the probability of the z1 branch, the probability of z1, and then we have a maximization operation over this expression, referring to the a0 branch, and the other part, the a1 branch, is here. Plus the probability of z2 times the maximization of this branch and of this branch. So this is the extensive form. Now we just need to rewrite this expression here; that is the Bayesian updating. It is the posterior probability in relation to z1, because we are in this branch: the probability of x1 given z1, and this is the Bayesian update. We now replace all the posterior probabilities with this expression; meaning, of course, here it is the probability of x2 given z1, and here the probability of x2 given z2. When we do this, we note that the denominator is the probability of the indication z1, but we also multiply by it here. So we can multiply it in, and then we come to this expression: that is the normal form, very simple. And now also this arrow has changed; that is the normal form analysis, very, very simple. I notice that I am much better than yesterday with the number of slides; yesterday it was 10 after one hour. OK. And now let's say we have done this decision analysis in extensive form.
Then we know the optimal branches, the branches we need to calculate to come to our result. And we need to define here a decision rule, because we work through the decision tree in this direction, so that we can connect the outcome to the action. Because we have done the extensive form analysis, we know the decision rule, the optimal decision rule, the decision rule which leads to optimality. Let's highlight it here; let's say this is our decision rule d. We should also be aware of the other way around: if we do the normal form analysis without having done the extensive form analysis, we need to define the decision rules so that all branches of the decision tree are covered. This is the comment here. OK. So we work with this decision rule, because we already identified it. And then we see we can cross out this branch here and this branch here; these correspond to the branches we do not need. Then we can also cross out the maximization, and it becomes much easier: it is this expression in the normal form analysis. And this was the expression in the extensive form analysis, which we already calculated. But we could have it simpler, like this; we would, of course, need to know the decision rules which lead to optimality. And if we do not know the decision rules leading to optimality, we have this expression. But we see that there is no Bayesian updating in here: we do not have to update in the normal form analysis. Actually, in the extensive form analysis, we first divide by the probability of z1 and afterwards multiply by the probability of z1, so that is maybe one operation too many. OK. So if we really do this for our example now, we get the same result: the B0 is 50, and for the B1 we just input the expressions here, like we have done before.
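In code, the normal form analysis enumerates decision rules instead of updating probabilities; this sketch uses the same assumed indication model as before. Note that no division by P(z), i.e. no Bayesian updating, appears anywhere:

```python
from itertools import product

p_x = {"x1": 0.8, "x2": 0.2}
p_z_given_x = {("z1", "x1"): 0.90, ("z2", "x1"): 0.10,   # assumed values
               ("z1", "x2"): 0.15, ("z2", "x2"): 0.85}
b = {("a0", "x1"): 100, ("a0", "x2"): -200,
     ("a1", "x1"): 50, ("a1", "x2"): 50}
c_shm = 10

# Normal form: B1 = max_d E_X[ E_{Z|X}[ b(d(Z), X) ] ] - c_shm,
# maximizing over all decision rules d: indication -> action.
best_value, best_rule = float("-inf"), None
for a_z1, a_z2 in product(("a0", "a1"), repeat=2):
    rule = {"z1": a_z1, "z2": a_z2}
    value = sum(p_x[x] * sum(p_z_given_x[(z, x)] * b[(rule[z], x)]
                             for z in rule)
                for x in p_x) - c_shm
    if value > best_value:
        best_value, best_rule = value, rule

print(best_rule, round(best_value, 6))  # {'z1': 'a0', 'z2': 'a1'} 68.5
```

The optimal rule is exactly the one read off the extensive form tree, do nothing on z1 and modify the operational range on z2, and the value 68.5 matches the B1 from before; if the optimal rule is already known from an extensive form analysis, only that single rule needs to be evaluated.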
And it is the same expected value of information; the analysis forms are equivalent. It is about Bayesian updating, conditional probabilities and the total probability theorem; that is basically all we need. OK. Just as a summary: in the extensive form we work from the right-hand side to the left-hand side, and Bayesian updating is required. In the normal form it is the other way around. It can be computationally more efficient, because there are no unnecessary operations, and if the optimal branches are known beforehand, it can be very powerful. This comment goes to what you also find in a practical situation: when we go out there, we will get an indication, and we will need to know what it means and what we should do. That is why, for the practical implementation, we also need the decision rules. So what have we just done? Branch eliminating. Can we try one more idea on branch eliminating? Because it can ease our calculation, as we have found out. So now we separate the decision analysis into two decision trees. Decision tree one is only associated with the benefits and costs associated with the system states and the SHM. We do an experiment and we have a cost, so we have the 10 throughout this decision tree; and then there are the costs for the system states, which are associated with the system states and are the same throughout the complete decision tree. And here we have decision tree two, with only the action-dependent costs: we have this action here and these actions here. The summation of these two trees must give the decision tree we had previously, also for the consequences on the right side of the decision tree; that is why there are these numbers here. So then we take decision tree one, the one where we have only the benefits and costs associated with the system states and the SHM, and we can write this expression.
Here the 10, that is the cost associated with the SHM experiment; we have the 10s here, so we can just take it out. Then there is still the maximization operation, and we write this in normal form analysis. But now the b is only dependent on the system state; there is no dependency on the outcome or on the action. And then something must already be visible. What is it? OK, we can reformulate this equation to this one, and now we work with the conditional probability and the commutative law of the intersection operator. That is basically the set of laws which are responsible for the Bayesian updating: this is a set operation, this is the definition of the conditional probability, and due to the commutative law we can write it the other way around; that is basically the Bayesian updating. If we put this into this expression here, then we have a very simple expression for calculating the expected benefit of our decision tree one: it depends only on the prior probabilities. And then we come up with 30 here. Now we take the second decision tree, and we also define a decision rule here, because we already know the decision rule leading to optimality; we have identified it in the extensive form. It is basically: if we have the indication z1, we take a0. But there are zeros here, so this branch vanishes, and we just have to calculate one branch here. If we do this in normal form, it is 38.5. And 30 plus 38.5 is the result of the original decision tree. In between we had two decision trees, but we could solve them extremely easily. It is also interesting to note that the decision analysis, basically what we are doing here, is about the actions, and some costs may be completely independent of the actions. This can also be exploited for the system states, which goes to the first decision tree here.
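To reproduce the 30 plus 38.5 split numerically, one way (a sketch; the decomposition values below are inferred for this illustration, not taken from the slides) is to write each total consequence as a state-dependent part s(x) plus an action-dependent correction t(a, x), such that s(x) + t(a, x) gives back the original consequences:

```python
p_x = {"x1": 0.8, "x2": 0.2}
p_z_given_x = {("z1", "x1"): 0.90, ("z2", "x1"): 0.10,   # assumed values
               ("z1", "x2"): 0.15, ("z2", "x2"): 0.85}
c_shm = 10

# Decision tree 1: state-dependent benefits plus the SHM cost.
s = {"x1": 100, "x2": -200}
# Decision tree 2: action-dependent corrections t(a, x), chosen
# (inferred) so that s(x) + t(a, x) equals the original consequences;
# a1 yields a net 50 in either state, hence -50 under x1, +250 under x2.
t = {("a0", "x1"): 0, ("a0", "x2"): 0,
     ("a1", "x1"): -50, ("a1", "x2"): 250}
d = {"z1": "a0", "z2": "a1"}   # optimal rule from the extensive form

# Tree 1 needs only the prior probabilities -- no updating at all.
tree1 = sum(p_x[x] * s[x] for x in p_x) - c_shm
# Tree 2 in normal form, with the decision rule d; the a0 branch
# contributes only zeros and effectively vanishes.
tree2 = sum(p_z_given_x[(z, x)] * p_x[x] * t[(d[z], x)]
            for x in p_x for z in d)

print(round(tree1, 6), round(tree2, 6), round(tree1 + tree2, 6))
```

The three printed values are 30, 38.5 and 68.5: the first tree collapses to a prior expectation, the second has a single non-zero branch, and their sum is the result of the original decision tree.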
The benefits and costs associated with the system states then depend only on the prior probabilities. OK. It looks like I can finish shortly after 12. This lecture has 53 slides; yesterday it was 40 slides. OK. Are there any questions at this point? There is one about the trees. Yes. Repeat the question, if you could. Sorry, could you speak louder? Here, this one? Yeah. Wait. This is the action here. And this decision tree, or this table, plus this table, must give something we have had some time before. Yeah, here, this one: it must give the original decision tree, the original consequence set. That is tricky. Yeah, but you go with the definitions: here you put in the benefits and costs associated with the system states and the SHM, and here you put in only the action-dependent costs, this is the 20, plus whatever needs to be there so that these consequences plus these consequences give the original problem. So it is fine. OK. Let's talk a little bit about reliability- and risk-based inspection planning from a decision analysis perspective. We have seen these pictures yesterday. Inspection planning is about making an inspection plan, normally for a few decades. It is a pre-posterior decision analysis problem, and it can also be formulated as a value of information analysis; but here we look at it from the perspective of a pre-posterior decision analysis. Let's work through a decision tree for a component with eight years of service life and two inspections. For the inspection planning, the probabilities of the safe and failure states are usually described with a fracture mechanics model; that is the underlying modeling for our events. And the inspection provides information about the presence of a crack at a location. We have heard yesterday how this can be modeled; that is the NDT performance modeling.
And here, this is the very important thing in the decision analysis, and we have had it yesterday for continuous damages: the probability of no indication depends on the prediction of our damages. So the decision tree looks like this. It grows exponentially with the number of inspection times: the branches increase exponentially with the number of inspections, and proportionally to the number of outcomes and the number of actions. So it is two outcomes times two actions, to the power of two inspections; that is four to the power of two, 16 branches. This is what we end up with here. What we have here is: in year zero it is safe, and then it is always failure or safe, failure or safe. Then we can do an inspection, and we may have no indication, where we can do nothing or repair; if we have an indication, we can also do nothing or repair. So even for eight years, just two inspection times, two outcomes and two actions, we already have a decision tree which we cannot work through like I have just done. OK, so what can we do? Any ideas? Yes. Can you reduce the number of branches? Failure, safe; you have one each year at the moment. Could you calculate a confidence level for the period up to the next inspection? Yes, OK, you could calculate from here directly to here, right? Yes, yes. But still, in this branch you would need to account for the no-indication information, and here for the no-indication information and the fact that it was repaired. What else can we do? Suppose you are going out with your inspection equipment and you get an indication. What will you do? Yeah, OK. Basically, if you get an indication, you should repair; that is the best information you have, because as the inspection operator you do not really know how critical it is. So that is one reason for just doing the repair. And then there is another reason.
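The branch count quoted here can be written down directly; a small sketch (counting only the indication/action combinations at the inspection times, not the intermediate safe/failure branches):

```python
def n_branches(n_outcomes: int, n_actions: int, n_inspections: int) -> int:
    # Each inspection time multiplies the tree by (outcomes x actions).
    return (n_outcomes * n_actions) ** n_inspections

# Two outcomes (indication / no indication), two actions (do nothing /
# repair), two inspection times: (2 * 2) ** 2 = 16 branches.
print(n_branches(2, 2, 2))    # 16
# The same component inspected ten times already gives over a million.
print(n_branches(2, 2, 10))   # 1048576
```

This exponential growth is exactly why the decision rules and simplifications discussed next are needed in practice.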
So if you think of offshore structures: you go to the structure anyway, you have a vessel, and you do not go there again later with slightly different equipment. So you repair. This is, basically, a decision rule. And this decision rule is in a sense already implemented: in the offshore oil and gas industry, repair actions are performed right after the inspection. And we have shown this year that it is also the optimal action: in a value of information analysis, it is optimal to do the repair just after the inspection if you get an indication. That is the red dots here. If you detect the damage, then from here on, for a service life of 30 years, it is optimal to repair right after the indication. In the very beginning it is optimal to repair later, but for most of the service life the optimum is to repair right after the inspection. So this is super. Now, with this decision tree, we can shift along the time axis from year 0 to year 8, and we have a decision rule in our decision tree: if we have no indication, we do nothing; if we have an indication, we repair. And now we have four branches left. That's super. What else can we do? Still four branches. I think there is something to note: on this tree, between year 5 and 6, you can have indication or no indication, and if you have an indication and repair, you have only the survival option; but between year 2 and 3, I see that after repair you can have survival and failure. That was just at the beginning; probably this has been forgotten here. So let's think about what the repair means. How does a component behave after the repair? These are fatigue cracks, and if you know there is one, it will be ground out and then it will be welded. That is the procedure, and it can also be done underwater. Ideally, it would be a new component.
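The pruning effect of the repair-on-indication decision rule can be sketched directly: out of the 16 branches of the full tree, only those where the action matches the rule at every inspection survive, leaving one action per outcome and hence 2^2 = 4 branches. This is an illustrative sketch, not code from the lecture.

```python
# Apply the decision rule "repair if and only if there is an indication"
# to the 16-branch tree: the tree collapses to the 4 outcome branches.
from itertools import product

outcomes = ("no indication", "indication")
actions = ("do nothing", "repair")

def decision_rule(outcome):
    """Repair upon indication, otherwise do nothing."""
    return "repair" if outcome == "indication" else "do nothing"

all_branches = list(product(product(outcomes, actions), repeat=2))
pruned = [b for b in all_branches
          if all(a == decision_rule(o) for o, a in b)]
print(len(all_branches), len(pruned))  # 16 4
```

This is the extensive/normal-form idea in miniature: identify the optimal decision rule once, then evaluate only the branches consistent with it.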
It would behave like a new component, so we would go here. Yeah. The other idea, and you probably already know these simplification rules, is that the repaired component behaves like a component that had no indication. If you use this rule, then we have just one branch left. And if you use the other simplification rule, we also have only one branch, but it is a little more complicated, because we need to go back to decision trees with different service lives. Simplification rule 1 was basically introduced in 2000 by Michael Faber, and simplification rule 2, which goes to what you said, was worked out in the PhD thesis of Daniel Straub. So, to conclude, was there anything else? OK. We will go through the rest of the slides in the afternoon; that is rather the exercise part. But to conclude: for a value of information analysis, for a decision analysis, we need a decision tree and a clear definition of the events, and the events are defined with our limit state functions. If we have that, we need to work through the decision tree, and we need to find a way of solving it efficiently, because it can easily explode. We can work through it efficiently by being aware of the extensive and normal form analyses: by identifying optimal decision rules with the extensive form and then utilizing them in the normal form. That is one procedure. There was another approach where we eliminated branches: we could create zeros and obtain two separate decision trees. And we need to relate our decision tree to the real world: how can we simplify it? Some decision rules are already there, in the decision scenario, in the integrity management: they go there anyway with the inspection and repair equipment.
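Simplification rule 2 (a repaired component behaves like one with no indication) can also be sketched in code. This is an illustrative assumption-laden sketch: after applying repair-on-indication, both inspection outcomes lead to the same post-inspection state, so the four outcome branches collapse into a single distinct damage history.

```python
# Sketch of simplification rule 2: a repaired component is treated like a
# component with no indication, so all outcome branches merge into one.
from itertools import product

outcomes = ("no indication", "indication")

def post_state(outcome):
    # An indication triggers a repair (decision rule), and the repaired
    # component is then treated as if it had shown no indication (rule 2).
    return "no indication"

raw = list(product(outcomes, repeat=2))  # 4 outcome branches after pruning
histories = {tuple(post_state(o) for o in b) for b in raw}
print(len(raw), len(histories))  # 4 1
```

Rule 1 (repaired component is as good as new) achieves the same single-branch result, but there one has to restart the tree with a reduced remaining service life, which is why the lecture calls it a little more complicated.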
So the decision rule is already there, and we can simply make use of it. And then we can think of finding physical relations to avoid some branches. So this was the last part. OK. Thank you very much for your attention.