Okay, so it's 8:30, so we start with the lecture. Well, how to begin? You have seen the lecture plan. I will give the introductory lecture, and then I will stay a little more in the background, leaving the floor to Daniel Straub and Jochen Köhler.

So let's start with a rather general introduction to the topic of quantifying the value of structural health monitoring. When we started to work on this, we found in a literature review summarizing the literature from 1996 to 2001 the sentence that the ultimate goal of structural health monitoring is perceived as damage prognosis, associated with tremendous economic and life-safety benefits of these technologies. But what we did not find at that time was anyone actually quantifying those economic and life-safety benefits. The field of structural health monitoring emerged from the technologies and their possible application to structures; the researchers in this field knew what it was about, but the benefits were not really quantified.

But there is the Bayesian decision theory of Raiffa and Schlaifer. This is actually copied out of the book by Raiffa and Schlaifer from 1961, which has received attention throughout the scientific community. The background: Raiffa and Schlaifer were mathematicians working in statistics, and they wrote a book formulating management principles in mathematical terms; the concept of the value of information was introduced in this book. The concept has been appreciated over the decades in several different fields of science. But putting it together with structural health monitoring — there we only saw a few scientific studies, published around 2010 and in later years.
The publication from 2011 by Pozzi and Der Kiureghian explicitly addressed the Bayesian decision theory, as did the 2013 papers in which I was involved together with Michael Faber. Then there were papers with similar ideas of performing such a decision analysis, but without explicitly addressing the Bayesian decision theory. This goes back to work by Michael Todd from 2010, very early, and also to my PhD thesis, which I finished in 2011. And if we think a little more broadly about structural health monitoring, in terms of inspections, then we find very early works by Daniel — at least 2004 or earlier — where the concept of the value of information was associated with inspection planning.

If we go with this ultimate goal and the economic and life-safety benefits, then it is Bayesian decision theory we can use for the quantification. This has been explored from very diverse directions, as I just described. And of course I stopped this list in 2013; since then we have seen, I think, an exponential increase of papers on this topic. 2013 was also the year when we decided to apply for a COST Action on quantifying the value of structural health monitoring, and we were fortunate to win the COST Action TU1402. It started late in 2014 and runs until the end of April next year. It is a scientific networking project: we basically organize activities such as training schools and workshops with a large network consisting of researchers and engineering consultants. We have industry representatives from industrial companies here, but also from infrastructure operators, authorities, standardization bodies and so on, and we have about 130 participants from Europe. It is a European project, so the core work is done in Europe, but we also have an international network: we are connected to Australia, to Professor Mark Stewart, and to Professor Dagang Lu in China.
In the application phase we also had Armen Der Kiureghian involved, and from our international network, Michael Todd is here again — so this is great.

A short overview: we have been working on the theoretical framework for quantifying the value of structural health monitoring. We have been exploring what is available on the side of SHM strategies and structural performance models. These are what we need to put together, and this is one of the main steps, in science but also in engineering: the SHM information we obtain needs somehow to be connected to an action. This is very important for quantifying the value of structural health monitoring, or the value of information — information only has a value if it can change the way we do things. At the same time, there may also be a value of information in terms of knowledge gain, but even then it is related to future actions. We have also been exploring methods and tools, we are still working on the case studies in our network, and we are developing guidelines: one focusing more on theoretical aspects, within the JCSS, and one more practical guideline.

Okay, so this is an overview of the scientific program, the topics we have been working on over the last four years. And what did we do? We have been organizing special sessions at a variety of conferences. In 2015 we started with the ICAST conference, where we had I think five to six contributions and one special session. And you can see we have been organizing more and more special sessions, so we have built up the field of quantifying the value of structural health monitoring in the scientific community. There was just the UPSI conference in Nantes, where we had a special session, but there are many more conferences in 2018, and we partially have rather large sessions receiving around 15 to 20 papers and presentations.
Of course we have published, and we have been organizing workshops, larger and smaller ones. Part of TU1402 are the training schools: last year we had one in Italy near Lake Como, and this is the training school of 2018. And we have short-term scientific missions — this is what we are still doing and what we have budget for. If you are a member of the COST Action TU1402, you can go to another institution and work on a topic, and you get financial support for travelling and for your living expenses there. You can use this as a COST Action member; or, if you have a very interesting idea and a partner somewhere, you can apply to become a member of the COST Action — write me an email directly, we will have a look, and if the idea is really good we would be more than happy to support you. And I should also say there is the option that someone from TU1402 goes to Michael Todd — we can support this too. He is officially involved in the COST Action, so someone can go to Michael Todd and work there for a few weeks or even months. And San Diego is not the worst place to be. Maybe we go the other way around, I don't know — we have to find out.

So, it is still 8:30, so let's start with the value of information: concepts and decisions. One of the main things I have learned when working on this topic is that when we talk about the value of information, we talk about a structural performance model, we talk about an SHM performance model, we talk about actions, and we talk about all of it together — and we have spatial boundaries, but also temporal boundaries. When we talk about decisions and decision analysis, and the special case of value of information analysis, we need these first steps: the scope and the scenarios. So basically: what is our decision scenario? What exactly are we talking about?
Especially when we talk about case studies, we say we have a wind turbine or we have a bridge — this is fine, but we need much more to talk about the value of information. We need the complete scenario: what information do we get, what actions are we supporting, how are these actions planned, and how does the information interact with the structural performance and with the planning of these actions. The analysis part is basically here — this is the fourth step; there are many more steps before, which is what I just covered. And then there is also the assessment of the preferences of the decision makers. This is often termed a utility model, and the utility model can be associated with decision attributes like expected costs, expected benefits, or risks. I think there is a very good introduction — even more than that — in the lecture notes of Daniel Straub on how to model the preferences of the decision maker. Okay, so a very important point: before we start the analysis, we need clarity about the scope, our decision scenario and our utility model, that is, the preferences of the decision maker — and only then can we start the analysis.
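To make this "scope first, analysis second" point concrete, here is a minimal sketch — with my own illustrative names and numbers, not taken from the lecture — of what has to be fixed before any analysis can start: the system states with their prior probabilities, the action space, and the utility (benefit) model.

```python
# Minimal sketch of a decision scenario, written down before any analysis.
# All state/action names and numbers are illustrative assumptions.

STATES = {"no_damage": 0.8, "damage": 0.2}   # prior probabilities P(x)
ACTIONS = ["do_nothing", "repair"]

# utility model: benefit b(a, x) for each action/state pair
BENEFIT = {
    ("do_nothing", "no_damage"): 100.0,
    ("do_nothing", "damage"):    -60.0,   # assumed risk of failure / production loss
    ("repair",     "no_damage"):  70.0,   # repair cost reduces the benefit
    ("repair",     "damage"):     70.0,
}

def expected_benefit(action):
    """Prior expected utility E_X[b(a, X)] of one action."""
    return sum(p * BENEFIT[(action, x)] for x, p in STATES.items())

def prior_analysis():
    """Prior decision analysis: maximize the expected utility over the actions."""
    return max(ACTIONS, key=expected_benefit)

print(prior_analysis(), expected_benefit(prior_analysis()))
```

Only once these three ingredients are pinned down does a statement like "the optimal prior action" have a defined meaning; the value of information analyses later in the lecture all build on exactly this structure.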
When we do the analysis — or even in the steps before, when structuring the decision problem — we may refer to the early work of Raiffa and Schlaifer, where they wrote about the value of information and came up with a modeling scheme called the decision tree. The basic approach: we have experiments — this is a D, a decision, so we decide about the experiments; we have a chance node for the outcome of the experiments, that's the C; then we have actions, that's A, another decision about what we can do; and there is another chance node for what they called, I think, the true states of nature — and here, when we talk about the value of structural health monitoring, the structural performance is modeled. As I said in the introduction, this concept is very widely applicable: it is applied throughout the sciences, heavily in the medical sciences, but also in agricultural modeling. Where it is not so much applied is engineering — but we do.

So this is our original, generic decision tree. What we have developed in the COST Action to get hold of our decision scenarios is this scheme. It was developed in a workshop — the eighth TU1402 workshop at the Technical University of Munich, which Daniel organized — and it is now documented in a fact sheet on the framework and categorization for value of information analysis. What is in here? We have a real-world system, and we model this real-world system — we need to be aware that there is the real world, and there are our models. We can observe the performance of our real-world system: these are the true indicators and observations, and they can be fed into our models with the methods of structural reliability and with Bayesian updating. What we observe, and which indicators we go for, is basically a question of how to collect information, and there are different strategies for doing this. So basically this is all — let's say from data to indicators to observations to information — and essentially this is about gaining knowledge.

But we cannot just observe; the other way of influencing the system is to perform actions, like repair, maintenance, strengthening and renewal. This is basically the way of making the system perform, and of course it should be based on our model, on our current knowledge and on the observations. What we are after with our information is knowledge gain, so that our models perform better. And yes, you have to decide, before you observe, what you observe. There is a large variety of indicators of the structural performance: you can think of very basic mechanical measurements like deflections, but there may also be very abstract measurements, so-called damage indicators — all types of deviations of mechanical properties, or analyses of the static and dynamic behavior over time, and so on. In this figure it is just indicated that you have an indicator, and it might be observed or not.

[Question] So an observation is a subset of the indicator, and the indicator is either observed or not observed?

I wouldn't say it is a subset — I mean, it can be a subset. In some cases it is an indicator and you can directly observe it. But you can also be interested in, say, the stiffness of the structure itself: you might not directly observe the stiffness, but you observe deformations instead.

[Comment] So we don't observe the indicator directly — the indicator can be the stiffness, and the observation the deflection?

You can say that the deflection is the indicator, but you might also want to say that the stiffness is the indicator and you have different observations to learn that indicator — it might not only be the measured deflection. It is a kind of observability criterion: what can you observe, and how well does what you observe correlate with what you actually care about?

Okay — we had a complete discussion based on that slide, and this is exactly the kind of interaction we need these days; this was not a joke, I am serious. I think one important point here is that there may be several perspectives, and the most important thing is to fully understand these perspectives and to say clearly what exactly we mean. So: information is about knowledge gain, and actions are about making the system perform.

We have only talked about the upper part so far. The main approach of decision theory is that you maximize the expected utility. This was first formulated by Daniel Bernoulli — in 1738, still without a full mathematical framework — and it is the very origin of utility theory. The mathematical groundwork came not just with Raiffa and Schlaifer in 1961, but before that with von Neumann and Morgenstern, with the utility theory and the axioms which are now a basis of our economy — that's game theory. Okay, so this is the very origin: we optimize the expected utility. Specific objectives can then be that we maximize the expected benefits, minimize the expected costs, or minimize the risks. In civil engineering it is always about risks: our design concepts — whether semi-probabilistic or fully probabilistic — are largely driven by risk, and there may also be historical reasons for this; but let's not go too deep here. Optimization requires, first, an objective function, and second, decision variables. So we want to optimize — but what can we change? In the context of structural health monitoring, this can be the measurement locations, the measurement period, the technology — all of this may be in here. Another example is the repair strategies — then we are going with the actions, basically.

Okay. There are several other ways of getting hold of the scenario for a value of information and decision analysis. We can categorize as shown here: what type of structure do we have, what life-cycle phase are we looking at — the design phase or the operation phase — and what structural performance models do we have. And then we have the decision scenario: who is the decision maker, so for whom do we perform the analysis? I think this is a very important point. We think of decision analyses, and we can optimize and find our parameters, but in the end we do not make the decisions: we analyze, we do decision support. You and we will usually not be the decision makers. Why do I say this? Our analyses and our models are limited; they work within their limitations, and we will see, when we model specifically, that there will be all kinds of assumptions. Only if our decision analyses are very comprehensive, the assumptions are well defined in the decision context and the decision scenario, and the simplifications are not too large, are we actually able to really identify a decision — and then there may be a chance that the decision maker does exactly that. But if our modeling involves too many assumptions, there may be an offset between the decision maker and the analyst. This is to illustrate, again with the distinction between our models and the real world, what we need to be aware of: we are aware of it when modeling our structural performance and the observations, but it also holds for the decision analysis itself. Okay: who is the decision maker, what is the decision point in time — so when do we decide — and what is the objective (we had this on the slide before)? And what are the decision variables? Here: actions and
information, basically. And this is a way of summarizing our analyses: in this context, what value of information did we quantify, and what decision rules can we derive? Yes, we can do decision analysis — we as researchers can spend a long time analyzing — but in the real world, so to say, decision makers do not have time. So we need to focus our efforts in a way that others have it simpler: we do the complicated things as researchers, but the ones we want to provide decision support for must have it simpler than us.

Maybe to say very clearly what a decision rule is: I have an experiment, I have several possible outcomes, and a decision rule says, for each outcome, what the optimal action is. The connection between the outcome and the action — that is a decision rule. Deriving it requires the decision analysis; it requires, as we will see in a few minutes, a maximization of the expected utility at this point. And when we have done this with comprehensive models, we can easily derive the decision rules: if you have this outcome, you do this. Or, transferred to the decision here about our experiments: in this situation, when the structure is highly deteriorated, you do this type of SHM strategy. This is the decision support we have to provide. Another example is Bayesian networks — Daniel Straub will say more about that; a comprehensive influence diagram is shown here, but I will leave this to Daniel and to the next days to go through. I think we planned until 9:30 — this lecture is until 10, the coffee break is at 10, so I have until 10 — this is not too bad.

Now, we have seen the generic decision tree from Raiffa and Schlaifer with its four nodes — 1, 2, 3, 4. This picture is the translation of that decision tree to the value of structural health monitoring information. We can have the situation that we decide — that's a choice here, a decision — about the information type; then the chances of the outcome, the adaptive action, and the life-cycle performance. But we also have a basic choice here of whether to perform structural health monitoring at all. So there is another decision: either we go for structural health monitoring and implement a strategy, or it may even lead to higher expected utilities not to do it — then we just have our actions and the chances here, and this is the life-cycle performance, or the system states. And there is another node introduced here: the life-cycle benefits, or the expected utilities.

Some of you may already know this very well: if we decide which SHM strategy should be performed, we do the analysis before we install the SHM system. In the context of decision theory this is called pre-posterior decision analysis. It is the most powerful concept we have in decision analysis, but it is also a little challenging to do — that's why we have the training school. And why do we have these two basic branches? We will see later: the value of structural health monitoring is the utility gain, the difference between the expected utilities of these two branches. If you can gain expected utility by SHM, then you have a value of information.

We may also have the situation that we already have the information. This part of the decision tree stays as it was before, so in this situation we can obviously also quantify a utility gain with the decision tree, as the difference between the expected utilities of the two branches. But now we have already obtained the SHM information and we have an outcome — we don't have a decision here, there is only one branch, and the outcome has also already been obtained. Then only the decision about the optimal action is left. It is still a decision analysis, since one decision is left, and here we have two branches. In this case — we have already performed the SHM — we can do a value of information analysis, but it is then conditional on the information we have already obtained: a conditional value of information analysis. What is it good for? We can quantify, after we have performed a measurement, whether it was worth doing or not. We may also have the situation where we have different SHM strategies and have not performed them yet, or we work with infinitely precise information. Infinitely precise information is also termed perfect information; we will see a little later what that means.

So this was to introduce the types of value of information analysis: we have an expected or conditional value of sample information. For the expected value of sample information we have the pre-posterior and the prior decision analysis — this decision tree is a pre-posterior decision analysis, and this is a prior decision analysis. We also have the concept of the conditional value of sample information: the difference of the expected utilities of the posterior and the prior decision analysis — this decision tree is the posterior decision analysis. So we know the difference between expected and conditional value now. Sample information refers to information with finite precision — we have uncertainties here. But we can also quantify the expected and conditional value of perfect information: there we work with the rather theoretical case of infinitely precise information, without uncertainties, and this is good for quantifying the maximum value of information we could obtain if the information were perfectly precise.

When we do an expected value of information analysis, we ask — and answer — the question: will the information be cost-efficient? It is a pre-posterior decision analysis, before we obtain the information. With the conditional value of information, as I said, from the perspective that we already have the information, we can determine whether the money spent for acquiring this additional information was cost-efficient; for this we need a posterior decision analysis.

This is one of the shortest ways to write a decision analysis mathematically. It is about the expected utility: we have an expectation operator, and it is about a maximization of the expected utility or benefit. In the prior decision analysis, our benefit depends on the actions and the system states — this is this branch here, we have A and the life-cycle performance — and we maximize over the possible actions. For the pre-posterior decision analysis we have two expectation operators, and now our expected utility or expected benefit depends on the SHM strategy, on the outcome, and again on the actions and the system states. We take the posterior expectation with respect to the system states first — what that can mean, we come to in a few moments — then we take the expectation with respect to the SHM outcomes, and again there is the maximization of this expression, now with respect to the actions and the strategies. With these two expressions you can quantify the value: the expected benefit or utility gain you quantify here is the value of structural health monitoring information, in the context of an expected value of sample information analysis. You can also normalize this value by B0, which gives an expression of significance: it will be very low, close to zero, if the gain is small relative to B0, but if the expected benefit with SHM is much larger, then for our problems it will be in the order of 10, 20 or more percent. It can be very high, but only for very specific situations — so it is rather a measure of significance.
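In symbols — using my own notation, kept consistent with the B0 and B1 on the slides: $X$ are the system states, $Z$ the SHM outcome, $a$ the actions, $e$ the SHM strategy, and $b$ the benefit — the two analyses and the resulting value read:

```latex
% Prior decision analysis
B_0 = \max_{a}\; \mathrm{E}_{X}\!\left[\, b(a, X) \,\right]

% Pre-posterior decision analysis (strategy e is chosen before Z is observed)
B_1 = \max_{e}\; \mathrm{E}_{Z \mid e}\!\left[\; \max_{a}\; \mathrm{E}_{X \mid Z, e}\!\left[\, b(e, Z, a, X) \,\right] \;\right]

% Expected value of sample information and its normalized (significance) form
\mathrm{EVSI} = B_1 - B_0 , \qquad
\widetilde{\mathrm{EVSI}} = \frac{B_1 - B_0}{B_0}

% Conditional value of sample information, for an already observed outcome z
\mathrm{CVSI}(z) = \max_{a}\; \mathrm{E}_{X \mid z}\!\left[\, b(a, X) \,\right] - B_0
```

The inner expectation in $B_1$ is the posterior expectation with respect to the system states, and the outer one runs over the possible SHM outcomes — exactly the two expectation operators described above.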
Okay: expected value of sample information. If we formulate this for the conditional value of sample information, then we do not maximize to find the optimal strategy, because the strategy has already been performed; we just maximize, over the actions, the expectation with respect to the system states. So this operator is gone, and the prior analysis part stays the same. The expression is very similar, but we no longer have the strategy parameters in it, and the maximization over strategies vanishes. This is the conditional value of sample information. So, the analysis types: expected and conditional value of sample and of perfect information.

Now, how do we go through the decision tree? Maybe let's think jointly about the situation that we would like to perform a measurement, and maybe we do it experimentally. Where do we start in the decision tree? Let's say, in this branch, we have three technologies and a specimen in the laboratory — how would we do it? Maybe the question is not too clear, I observe myself; let me try it another way, or maybe I just tell you, and I will try to find more interaction later in this lecture when I still have time.

The two basic ways of analyzing this tree are to go from this direction — that is the analysis direction — or to go from this direction, keeping in mind what is basically happening here. So there are two basic ways of formulating the analysis. In practice — and this is where our decision rules come into play again — a measurement is performed, there is an outcome, and people then need to know what to do with this outcome. They need to know the action: repair immediately, or repair a little later. So in practice we always go from this side. And maybe to answer the question I posed myself: if we want to do this decision tree experimentally, we go to the lab with the three SHM strategies, we start with one, we take the outcome, and we try all the different actions — so we need enough specimens, and we also model the costs and the benefits, and what we need to repair, for all three strategies. So in practice you go from this side, and that is why the decision rules are so important: given an outcome, what is it optimal to do — that is the decision rule — and, the step before: in what situation is it optimal to do SHM at all?

So — you may of course have a photographic memory; have you seen this slide before? Yes. What I have introduced is the extensive form. The extensive form basically goes from here: we first take the expectation with respect to the system states — the posterior expectation — so we come from here and work this way, and we then take the expectation with respect to the SHM outcomes later. The normal form goes the other way: here we first take the expectation conditional on the life-cycle performance, so we work through our decision tree from this side. The analysis forms — normal form and extensive form — are related to the side from which we go through the decision tree, but this is only relevant for the pre-posterior decision analysis (and maybe for the posterior, too — but let's say, for now, for the pre-posterior decision analysis). Okay: two analysis forms, extensive form and normal form.

Let's make it a little more illustrative with an example; some of you know it, I often refer to it in my lectures. Behind this example is the situation that a wind park is operating, but there may be a relatively high probability of a resonance problem between the support structure and the rotor excitations. Especially in the beginning of offshore wind energy this was a problem, and I know very well a case where it occurred. The resonance problem itself was solved, but there is something else, namely the Sommerfeld effect: if a system was in resonance, the excitation frequency and the natural frequency may be separated again, but the system still needs some energy input to get out of the resonance. This is the Sommerfeld effect, and it was actually a problem for one prototype of a wind turbine; there are two papers from 2015 about this.

Okay — this is a Campbell diagram. Here we see, as a function of the rotor revolutions, the rotor excitation frequencies: this is 1P, so one blade passes, and 3P is the frequency at which any of the blades passes. In the operational range, the natural frequency lies somewhere between these two excitations, so the wind turbine basically needs to pass through the resonance when it goes into operation. If it is not in production this is fine, but there can be a resonance problem in this area if the frequencies are not well separated. That is the background. You can do nothing about it, or you can modify the operational range: you basically shift the operation to a range where the excitation is better separated from the first natural frequency. This is actually what they did — but only after we found out. The operational people had been operating it, and the way we found it out was that we looked at strain measurements which were not expected; then we looked for the reason, and it was the Sommerfeld effect.

Coming back to this problem: we have two action options and the system states — x1 is no resonance, x2 is resonance. If we have no resonance and do nothing, we have a benefit of 100. If we do something, we have a benefit of 70: we reduce the operational range and can produce less energy, so we have less benefit. If we have resonance and do nothing, there is a high risk of production loss; but if we do something about it, we do not have this problem and we have the same benefit as for the no-resonance case.

Then we can ask: the way of finding out the natural frequency is basically to do an experimental modal analysis. This is rather precise, but not fully precise: if there is no resonance, we find it out with 90% probability, but there may also be an indication of no resonance while we actually have resonance, in 15% of the cases — this is the information precision we are modeling here — and the cost of the analysis is here assumed to be 10. Then we can draw the full decision tree, and this is actually already the analysis result: if we do not do the experimental modal analysis, we have an expected benefit of 50; if we do it, we have an expected benefit of 68.5. So the optimal solution is to do the experimental modal analysis.

But what we are actually after is an illustration of how to quantify the value of information. We see here: if we calculate B1 − B0, this is the expected value of information — the difference between here and here — and the conditional value of information we find here. The prior decision analysis always stays the same, so we always have B0 here, but for the conditional value of information: in case we observe z1, it is these two; if we observe z2, it is this value here. And these are the equations: the expected value of sample information is 18.5. If we observe z1, the information would, in hindsight, have a value; if we observe z2, it does not. So the conditional value of sample information analysis — yes, we can do it, but it is better to do the expected value analysis, because then we can really decide, or we can provide more comprehensive decision support. If we look at the branches which lead to optimality — the ones with the red lines — this example is very simple, and the operation you do here is basically choosing the maximum of these two values: this decision node is only a maximization operation. So we see that we do not need all the branches.
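This resonance example can be assembled into a small decision-tree calculation. The sketch below uses my own assumed prior probabilities and resonance/do-nothing benefit, since these are not fully stated above, so the resulting numbers do not reproduce the 50 and 68.5 from the slide; the structure of the calculation is the point, and it also shows that the extensive and normal forms give the same pre-posterior value.

```python
# Decision-tree sketch of the resonance example. Priors and the
# resonance/do-nothing benefit are illustrative assumptions.

P_X = {"x1": 0.7, "x2": 0.3}             # assumed prior: x1 = no resonance, x2 = resonance
ACTIONS = ["do_nothing", "modify_range"]
BENEFIT = {("do_nothing", "x1"): 100.0,   # full operational range, no problem
           ("do_nothing", "x2"): 0.0,     # assumed production loss under resonance
           ("modify_range", "x1"): 70.0,  # reduced range -> less energy produced
           ("modify_range", "x2"): 70.0}
# likelihoods P(z | x) of the experimental modal analysis (z1 = "no resonance" indicated)
P_Z_GIVEN_X = {("z1", "x1"): 0.90, ("z2", "x1"): 0.10,
               ("z1", "x2"): 0.15, ("z2", "x2"): 0.85}
COST_EMA = 10.0

def prior_value():
    """Prior decision analysis: B0 = max_a E_X[b(a, X)]."""
    return max(sum(P_X[x] * BENEFIT[(a, x)] for x in P_X) for a in ACTIONS)

def preposterior_normal_form():
    """Normal form: B1 = sum_z max_a sum_x P(z|x) P(x) b(a, x) -- no Bayes' rule needed."""
    return sum(max(sum(P_Z_GIVEN_X[(z, x)] * P_X[x] * BENEFIT[(a, x)] for x in P_X)
                   for a in ACTIONS)
               for z in ("z1", "z2"))

def preposterior_extensive_form():
    """Extensive form: Bayesian updating per outcome, then expectation over z."""
    total = 0.0
    for z in ("z1", "z2"):
        p_z = sum(P_Z_GIVEN_X[(z, x)] * P_X[x] for x in P_X)
        post = {x: P_Z_GIVEN_X[(z, x)] * P_X[x] / p_z for x in P_X}
        total += p_z * max(sum(post[x] * BENEFIT[(a, x)] for x in P_X) for a in ACTIONS)
    return total

B0 = prior_value()
B1 = preposterior_normal_form()
evsi = B1 - B0
print(B0, B1, evsi, evsi > COST_EMA)   # monitoring is worthwhile if EVSI exceeds its cost
```

Under these assumed numbers the expected value of sample information exceeds the assumed cost of the modal analysis, so performing it is optimal; and the two analysis forms return the same B1, as they must.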
This is what we keep in mind. Now we have a closer look at what it means when we do the extensive form analysis. We take the lower part of the decision tree; the extensive form analysis is only applicable to the preposterior part of the decision analysis, not to the prior part. So we take only this part, and if we write it down and replace the discrete probabilities in the original formulation, we still have this expression here. This goes to the e1 branch, then z1, and then x1: we multiply the posterior probability that the system is in state x1, given the information z1, with the benefit here, and this can be derived for all the other branches, and then we have the maximization operation. Now we simply rewrite: this is the posterior probability that the system is in state x1 given z1, and for this we can write the expression for Bayesian updating with discrete probabilities. If we replace this, we arrive at the normal form analysis. So this is the first observation: they are equivalent if we have decision rules. You could ask where the decision rules come from; they may come from an extensive form decision analysis, and we know that they also hold for the same decision scenario, we just have different numbers to quantify. So suppose we have fixed decision rules: if the indication is that there is no resonance problem, we do nothing; if the indication is that there is a resonance problem, we modify the operational range; and the decision rule also incorporates the decision to do a modal analysis in the first place. If we apply these decision rules in the normal form decision analysis, we see that some branches do not have to be calculated at all, and we do not even need the maximization operator anymore, so it becomes simpler. And this is, so to say, the proof that we get the same value for the extensive and the normal form analysis.

So extensive and normal form are equivalent. Going from right to left, to the extensive form, we need to do Bayesian updating; going from left to right, to the normal form, we need to define decision rules. The normal form can be computationally more efficient, and maybe I did not point out one important point here: you see that we have the multiplication with the probability of z1, and if we replace the posterior with the Bayesian updating expression, the probability of z1 appears in the denominator, so the multiplication and the division by the probability of z1 cancel out. This means we do not have to do any Bayesian updating in the normal form analysis, and that is why it can be computationally more efficient. But we have to be a little careful: there are quite some limitations for the normal form and extensive form analysis to be equivalent. One concerns the actions: if the actions change the probabilities of the system states, then we need to be very careful with the modelling, because then the equivalence of the normal and extensive form, in the way it is written here, may not hold anymore. So the normal form can be computationally more efficient, and if we know the problem a little, it can be very efficient, because we only have to care about the optimal branches. And this is the very important point I was trying to make a few times: in practice we need the decision rules. There will be someone doing the experimental modal analysis; they will get an indication, and then they need to know what to do in practice. That is why the decision rules are required in practice. Thank you for your attention. I have overrun a little, but are there any questions?

[Question] You talked about the equivalence of the normal and extensive form, and you said that it may not work if the probabilities change significantly... or no, if the actions influence the probability of the system states?

[Answer] It works because the probability of the system states is unchanged by the actions; it is only changed by z. You can see it best in this expression, which we always have in the normal form and in the extensive form: the probabilities of the system states are only influenced by the outcomes. If there is another influence, like from the actions, then we need to be very careful about the modelling, because we then have a different system. The way I have done it here, the actions do not influence the probabilities; the actions influence the benefits and costs. If the actions only influence the benefits and costs, then the equivalence in this formulation holds. So let's go to the coffee break, thank you.
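The normal form analysis discussed above can be sketched in the same way. This sketch enumerates the fixed decision rules (mappings from indication z to action a) instead of computing posteriors; the prior and the resonance / do-nothing benefit are illustrative assumptions, since the lecture does not state them. Note that no Bayesian updating appears anywhere: P(z) cancels out of the expected value of a fixed rule, which is exactly the efficiency argument from the lecture.

```python
# Normal form analysis of the resonance example: enumerate decision rules
# d: outcome z -> action a and take the rule with the highest expected
# benefit. Values marked ASSUMED are not given in the lecture.
from itertools import product

prior = {"x1": 0.7, "x2": 0.3}                  # ASSUMED prior probabilities
benefit = {"a0": {"x1": 100, "x2": 0},          # x2/a0 loss ASSUMED as 0
           "a1": {"x1": 70, "x2": 70}}
likelihood = {"z1": {"x1": 0.90, "x2": 0.15},   # P(z | x) of the modal analysis
              "z2": {"x1": 0.10, "x2": 0.85}}
cost_ema = 10

def rule_value(rule):
    # Expected benefit of a fixed decision rule: no posterior is needed,
    # because P(z) cancels out of the Bayesian-updating expression.
    return sum(prior[x] * likelihood[z][x] * benefit[rule[z]][x]
               for x in prior for z in likelihood) - cost_ema

# All 2^2 = 4 rules mapping each indication to one of the two actions
rules = [dict(zip(likelihood, acts))
         for acts in product(benefit, repeat=len(likelihood))]
best = max(rules, key=rule_value)
print(best, round(rule_value(best), 2))
```

For the same inputs, the value of the best rule coincides with the extensive form optimum, illustrating the equivalence of the two forms when the actions influence only the benefits and costs, not the state probabilities.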