If you have read the books and looked through the PowerPoint presentations, then we can talk about this figure here, which you probably cannot see in full detail, but I would like to introduce it. In the past days you have been looking at probabilistic methods, and the focus of this course is of course value of information and structural health monitoring, but in order to appreciate what we can do with this domain of research, it has been necessary to talk about more basic concepts in probabilistic modeling and reliability analysis: looking at systems, looking at components, looking at the background for probabilistic modeling and analysis in engineering. Part of the big challenge is to get a hold on how we actually model the performance of the systems which we are looking at and trying to engineer, and trying to engineer involves a number of issues which I will come back to. Generally, the Joint Committee on Structural Safety has proposed a kind of model basis where we look at what we call exposures acting on the system; the exposures may lead to effects on the system which we are looking at, and those effects can generate consequences. We will come back to that briefly, but we separate between what we call direct and indirect consequences. The degree to which we have direct consequences we measure through what we call the vulnerability of the system: to what degree does it get damaged due to the exposures? Then there is the degree to which these damages propagate further and develop what we call indirect consequences. The connection between the direct and indirect consequences we try to describe by the concept of robustness: will a small damage to the system, or a certain amount of damage to the system, propagate into further consequences and failures in the system?
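As a rough numerical sketch of this decomposition (all probabilities and consequence values below are invented for illustration, not taken from any real system), the direct and indirect risk contributions can be written out like this:

```python
# Minimal sketch of the exposure / vulnerability / robustness decomposition.
# All numbers are made-up illustrative values.

p_exposure = 0.01                # annual probability of the exposure event
p_damage_given_exposure = 0.2    # vulnerability: chance of direct damage given exposure
p_failure_given_damage = 0.1     # lack of robustness: chance the damage propagates to system failure

c_direct = 1e5                   # direct consequences (e.g. local repair cost)
c_indirect = 1e7                 # indirect consequences (e.g. loss of the whole system)

# Expected direct and indirect consequences: the two contributions to the total risk
risk_direct = p_exposure * p_damage_given_exposure * c_direct
risk_indirect = p_exposure * p_damage_given_exposure * p_failure_given_damage * c_indirect
risk_total = risk_direct + risk_indirect

print(risk_direct, risk_indirect, risk_total)  # roughly 200, 2000, 2200
```

With these numbers the indirect contribution dominates, which is exactly why robustness, the link between direct damage and follow-on failure, matters so much in the framework.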
Now, when we talk about integrity management and structural health monitoring, we can of course observe the system at different levels: we can look at the exposures, at what is happening to the system, and at how the system is performing in terms of damages and the propagation of failures due to those damages. So there are different levels at which we can look at the system; we can monitor, we can measure, we can achieve knowledge on the performance of the system at these three stages, and we can control the risks. That is the whole idea. Knowledge facilitates that we can do something, but, and this is very important and most of us probably do not fully appreciate it, knowledge is only of value if we connect it with actions we may take. It is the actions we may take, based on the knowledge we are able to obtain, which facilitate management of the system: its performance, the risks, the reliability, the safety, the robustness, the vulnerability, or whatever characteristic describing the performance of the system we would like to manage.
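The point that knowledge only has value through actions can be made concrete with a toy example (the numbers are hypothetical): a perfect inspection is worth something only because a repair action is available to respond to it.

```python
# Toy illustration: information has value only through the actions it enables.
p_damaged = 0.1      # prior probability the component is damaged
c_failure = 1000.0   # cost if a damaged component is left in service
c_repair = 50.0      # cost of repairing a component (repair removes the failure risk)

# Without information: pick the action with the lowest expected cost a priori.
cost_do_nothing = p_damaged * c_failure   # 100.0
cost_always_repair = c_repair             # 50.0
prior_optimal = min(cost_do_nothing, cost_always_repair)

# With a perfect inspection: repair only when damage is actually revealed.
posterior_optimal = p_damaged * c_repair

# The value of the (perfect) information is the expected saving it enables.
value_of_information = prior_optimal - posterior_optimal
print(value_of_information)  # 45.0
```

If the repair action were not available, the expected cost would be `p_damaged * c_failure` no matter what we observed, and the very same inspection would be worth exactly nothing; the knowledge only becomes valuable through the action it enables.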
So, looking at this: I call it a desktop model because that is what it is; it is only something we can formulate and describe on the desktop, and of course it relates to a real structure which we are trying to model. We do the best we can to make a model, and then we are able to observe how the actual structure is doing, relate those observations through indicators to our desktop model, and then start playing around: based on our desktop model we can get an idea of what would eventually be good to do to the real structure or system. Of course we should not forget that it is only a desktop model; even though we can make quite nice and advanced models using mechanics and probabilistic mechanics, it will only be like a shadow of reality. Therefore, whatever we can observe we can feed into our desktop model to make it better: we can do Bayesian updating, we can adapt models, and in this way we get closer and closer to the reality we are interested in. Then, by combining observations with decision rules, we are able to optimize strategies for actually making physical changes to the system, like making a repair, strengthening a part of the system we are dealing with, or trying to reduce the direct contact between the possible exposures and the system we are trying to manage. Or we can decide, among the decision alternatives at stake, on those which only aim to collect additional information: not changing the physics, but trying to get more knowledge about what is going on. Most engineers have a tendency here, and it is just like medical doctors: there are medical doctors who focus on diagnosing and then finding some pills, and there are medical doctors who were born with a knife in the hand, so they want to make physical changes, and any disease, be it a flu or just a cold, has a medication with the knife. Normally engineers focus on exchanging parts of the system or structure, making repairs, or building something new, but in many cases it actually makes sense simply to try to get more knowledge and feed that knowledge into the models, which then facilitates that we can evaluate, let's say, physical means for managing the performance of the system. What this course is ultimately about is establishing such models, which facilitate that we can see, based on the knowledge we have about the system we are trying to assess, to what extent additional knowledge on these different parts of the model complex would provide value, by facilitating that we are able to rank, in a better way, decision alternatives on physical changes of the system; and of course that also always includes the option of doing nothing. So that is basically where I suppose you are now; you have a good idea. You had an exam yesterday? No? Okay, anyway, you will have an exam one of these days on how well you are able to do this modeling. What we will talk about today concerns how to throw a formalism of decision analysis on this modeling, facilitating that we can actually see what value potentially achievable information could provide for the management of the system. That is the idea. Okay, any questions? Now, since you have read the books and looked through the lecture material for today, I suggest that instead of me lecturing we simply have an hour and a half of questions. That didn't fly? That didn't fly. Okay. Yes, you see, I had a list of contents, and it is very advanced: some important information, or preamble, then a little bit about literature, and then one point, actually two points; these are the two main points here. And this is slide number three of God knows how many, so I didn't want to make you scared, and I didn't want to write the number. I don't know if you normally have a break at some point in time, but
maybe we could aim for a break in 45 minutes, okay? Are there any possibilities for coffee at that point in time, somewhere? Okay, that's after my lecture, right; we will try for a five-minute break in 45 minutes and then we will just continue again. Okay, yes, the preamble: "The important thing is not to stop questioning. Curiosity has its own reason for existing." That was a really clever guy who came up with that, among many other good observations. And you should be curious, so after today, maybe when you get back home, wherever you are, please remember to buy on Amazon an old version (they can be bought relatively inexpensively) of the Raiffa and Schlaifer book on applied statistical decision theory. This one here we have available as a PDF, so you do not need to buy it; if you are interested you can have it there. Some of the material which I am going to talk about today is also covered by this lecture note. This book here you already know of, and if you want to see, let's say, a more practical example of using value of information analysis in structural health monitoring, then one of the examples which illustrates that is collected in this paper. Of course there are lots of other papers; I believe that there have been two phases in the development of value of information analysis in structural health monitoring, and the second wave started around this point in time, or a little bit earlier, around the group of Armen Der Kiureghian in Berkeley. Well, the context of decision analysis in engineering is that we have facilities, involving a lot of structures, and they are subjected to a number of scenarios where what people call exposure events are acting on them. These can lead to wear and tear and different types of damages and failures in different types of situations, and we need to be able to manage that in a safe way for the people who are exposed in one way or the other. But of course it is also
really important that these facilities are operating efficiently, optimally, to provide whatever functionality they are supposed to provide, and that really matters. This is not so easy, because things actually do fail or get worn out and need to be exchanged or repaired, and these facilities are quite big and complex in many cases, so it is a tremendously important decision problem how to do this optimally. Well, many things can happen: accidents of different types, and, as I indicated, also degradation phenomena, like fatigue. Here I just take the offshore application area as an example, but I also wanted to point at the offshore application area because not too many of you are from that domain, and I want to highlight that probabilistic modeling and analysis, and decision making based on risk information, were utilized quite early in this particular application domain. So if you try to look at other systems in engineering and you cannot find too many examples or too much literature in that particular area, then have a look at what they did in offshore engineering, because they were maybe 15 or 20 years earlier in that domain than in other domains. Now, what we are able to do using risk-informed decision support is that we can actually quantify the knowledge which we have related to the structural performance, and we can update this based on information we obtain in any way. What we also can do, and this is what we are going to look at much more today, is to quantify the value of additional information which has not yet been obtained, and we have a very formalized scheme on how to optimize and rank decision alternatives for the management. What is often a little bit forgotten is an important aspect of all this: only based on a clean and straight and honest management of the knowledge we have about the systems, only then are we
able to actually also document why we are doing what we are doing, and why we are not doing what we could have done. This is something we should always be able to do: when people ask us why we are recommending this, we cannot just say, ah well, experience shows that this is a good idea, or, I feel that this is the right thing to do, or, if we don't do it I cannot sleep at night, and things like that. You are laughing, but this is just so often the case. One typical example, which gives me goosebumps, is when people go out to observe something, or go to the laboratory to do some tests, and you can always ask: why are you doing this, and why are you doing it in the way you are doing it? Why in this way, why not in another way? It is basically almost impossible to get reasonable answers to that. After today, I hope you will be able to help turn the community around a little bit, to focus more on how and why we are collecting the information we are collecting. Yeah, as I indicated, this is the modeling of consequences which we are generally using when we are looking at systems. This could be any system; it is completely generic, any engineered system. We are looking at exposure events acting on the system: exposure events could be extreme waves, could be wind, could be chloride trying to ingress into a concrete structure, could be some stresses caused by time-varying loading, having the effect of changing the system physically. So this is a system change, and in the figure I showed you initially we had direct and indirect consequences, described by the performance characteristics vulnerability and robustness, and that leads to contributions to the total consequences. We call this part of the consequences event-imposed consequences, but then, due to the
system change, due to what actually happens to the system we are dealing with, there may also be, and this is tremendously important, although we will not talk about it in this lecture, the question of how what happens is perceived by people. So we are losing things here: we are losing assets, we are losing lives, we are losing a platform, we have severe damage down on the seabed at the well, we are damaging the entire Gulf of Mexico for I don't know how many years, and other industries, like the shrimpers on the American side, basically disappeared for a very long period of time, etc. So there are lots of consequences here, but the perception of what happened had a lot of additional consequences for society, and of course we also need to take those into account when we are trying to manage the system. What we want to avoid for sure is that the perception consequences are out of tune with what actually happened. We do not want any overreactions, but what we see in reality is almost always overreactions and very bad political decisions made in the aftermath when something goes seriously wrong. How can we try to manage that? Just give the true information: communication is the way to manage this type of risk, and I am not saying more about that. Yeah, so again we have the real world over here; we are able to make observations and measurements, and that feeds into the possibility of optimizing decisions, and there are lots of things we can do in order to try to manage the system, representing our picture of the real world. That is what we talked a little bit about before, and many of those activities, many of those measures for the management, involve collection of new information. Again, what you have learned about is how to formulate probabilistic models which are relevant when we are trying to model the real world here: a joint probabilistic description of the exposure
events. Then, fundamentally, what we need to be able to do is to model all the individual scenarios which lead from exposure events to direct consequences and to indirect consequences. I am not doing this with you, but keep in mind that what you really want to do is to be able to model these scenarios of evolving consequences probabilistically. Now, I know that there sometimes has been a little bit of confusion on the concepts of observations and indicators and so on, but one way of looking at the picture here is that you have a facility, and you would like to observe something, to utilize information from this facility to try to manage it. You can imagine that some event takes place, and this event may be observed, and that results in some sort of data which can be stored. Now, what we need to be able to do in order to benefit from these data is to establish a relationship between the data and some sort of indicator related to the performance of the structure. An indicator can be a change in the stiffness, or a change in a dimension, or an increase of crack lengths, and things like that. The next step is from this indicator to the performance of the structure which we are interested in, and there are many different types of performances we could be interested in: reliability, safety, life cycle costs, or repair costs for the next year; we can formulate any type of performance characteristic. What we would like to be able to do is to relate this indicator, which we have eventually obtained through the event and the observations of the event, to this performance characteristic, and the way we do that is by formulating likelihoods, and that requires probabilistic modeling. Now then, using this information, this likelihood
describing how the observed indicator relates to the performance characteristic, we can feed that into what you could term the knowledge, and the knowledge of course also has to take basis in all past experience. You can imagine that this is driving around in a loop: what we accumulate as knowledge at one point in time becomes experience at the next point in time, and then feeds in in the context of new information. That knowledge facilitates decision making, and decision making is about taking actions, always including, of course, the action of not doing anything, and the actions are what actually manage the facility we are concerned about. This part here, you can say, is the part where we have the Bayesian updating: it goes from the indicator to the development of the knowledge. We can do this at many different scales depending on the decision context, and it is very important to appreciate that the level of detail of the scale at which we look at an engineered system is actually decided, or determined, by the decision context. People always ask me: at what level of detail do we need to look at our system? Decision analysis theory actually tells us exactly how to do that: you need to identify a scale of detailing which facilitates that you are able to consistently differentiate, let's say, the benefits associated with the different decision alternatives for the management of the system. That means that the representation of the system should be able to capture the characteristics and the performance of your system, at a level where you can see, in your representation, how your different decision alternatives can affect that performance. So the scale really has to facilitate that you are actually building those decision alternatives into your modeling of the facility. That was not, let's say, very concrete, so it is not like a recipe on how to decide on the
scale, but it gives you good guidance: the scale should be able to represent your decision alternatives. We will come back to the ranking of decision alternatives, but the ranking needs to be consistent. Yeah, we have quite straightforward means for ranking decision alternatives, and you have all heard about cost-benefit analysis; in reality, what people understand to be cost-benefit analysis is strongly related to, let's say, simple forms of decision analysis. But there are other formulations of decision analysis which are a little more involved; they are stronger on the identification of optimal decision alternatives, especially with respect to collection of additional information, and they are not so well known. What I am stating here is that our field of engineering, the probabilistic representation of the performance of structures and engineered facilities in general, very much took its start with the people around Freudenthal in the forties, and then, let's say, was based on further developments by Raiffa and Schlaifer, and then some pioneers from the United States, Benjamin and Cornell. If you can find that book on Amazon, I would strongly recommend you to do so; it is a super book, and these guys were so clever at this early point in time. Just a few years after the publication by Raiffa and Schlaifer, which I mean is at the level of Nobel prizes, some engineers took this over completely and showed: this is the way you do it in engineering. I find that to be pretty remarkable. Yet despite all these years, the merits, the strong potential, of especially what we call pre-posterior decision analysis have not really been appreciated, and this value of information analysis, which we are also focusing on in this course, is a type of pre-posterior analysis. It is one of the interesting perspectives facilitated by this particular
analysis, and you need to know this. Then there are a few points on this slide here; what I would like to emphasize the most is this point, namely that the management of structural safety, reliability, and any other characteristic you might be interested in with respect to the structure or the system is actually a very hardcore information management problem. I would like to stress that, because my background, and the background of the people who teach courses like these, typically comes from structural engineering somehow, and then we got involved in probabilistic modeling of mechanical systems and so on from the corner of probability analysis, from the concepts of probability and Bayesian statistics. This is where we come from; this is our origin. And we are failing a little bit, in our community, to take the step into the domain of information theory. It could be really cool if some of you guys could find the energy, and also, let's say, a less damaged mind, to see what could be picked up in that domain of research and how it might actually contribute to our domain. We have been failing on that; it is up to you, or also up to you; we will also make an attempt, but we are probably a little bit too late on the biological clock to really get into that. But it is enormously important to understand that what we are actually doing is simply managing zeros and ones. This is what we are doing: anything we know about structures and engineering systems is knowledge. When we decide to build something by choosing a material of a particular grade, and we want to buy that in order to construct, for instance, a structure, then you can swap the picture and look at it in the way that we are actually not buying a concrete structure of a grade with compressive strength 40 MPa; no, we are
buying information, which tells us something about the performance of the structure which will be built with that material. We are buying information, and then we are combining this information with all the other types of information, comprising, let's say, all the information of relevance for describing the performance of this system, the structure, or whatever engineering system we want to establish and manage. Therefore it is fundamentally an information management problem. We could probably talk much more about that, but there are lots of choices relating to information during the service life of structural systems: site investigations early on, laboratory experiments, and then the choice of design methods, construction concepts, structural concepts. This is maybe where engineers typically are: they choose a static system, then they choose some materials and dimensions, then they draw two lines under the result, and this is our structure. But there are a lot of other issues which are really important, namely quality control, and later on in the service life, assessments, maintenance strategies, monitoring strategies, and then of course you also need to take into account how to replace the whole thing when it needs to be replaced and recycled. These choices really define the prior knowledge we have regarding structural performance: risk, safety, service life, cost, but also whatever options there are to influence these characteristics over time. In the context of codes: in these days we are looking quite deeply into, let's say, the revision of the Eurocodes, which is ongoing, and this is just an illustration. Here on the y-axis we have consequences of failure; you can imagine that you have different classes of structures or structural systems, and depending on their use, the consequences of failure might be different, so here we have increasing consequences of failure. And then here, in this
direction we have the level of knowledge which is available in the best practice at the point where you make the design. You can imagine that the different types and classes of structures can somehow be organized within this square. If we know a lot, so if it is a very common type of structural system, then we are in this end of the figure: maybe the consequences are low, or maybe they are high, but we know a lot about it, we have a lot of experience, the performance of the structures is well understood, the materials are well known, we have used them a billion times before, and there is not much new to be said about it. This is a very good domain for simplified design concepts like semi-probabilistic safety formats; deemed-to-satisfy rules can also apply in many cases, and you do not really need to do much engineering, because it has all been done before. But when we go out in this direction, we have less and less experience, and we know less and less about the real performance of the structures. Maybe it becomes highly non-linear, with all sorts of discontinuities in the performance and the responses, be they geometrical or other types of discontinuities; it becomes very complex, and some of these systems which we are actually trying to model are super complex. When we are dealing with really non-linear performance of structural systems, this is definitely not easy, and the uncertainty associated with what we know, as we go up in this domain, just increases and increases. That also means that for the design, or say for the management of the safety and reliability of structures, as we go up into this domain, if the consequences are high or moderately high, then no simplified way of doing that is adequate; then you simply need to look a little more at the details and try to utilize the information you have in a more consistent manner, to see how
those uncertainties really affect what we know about the performances. You see, in the code, what we try to do is that as we go up in this direction, we try to impose requirements which increase, let's say, the necessary efforts to know and utilize the knowledge we have: a little more control, a little more rigor as we go up into this domain, sometimes a lot more. Okay. There are different contexts where we can take benefit from structural health monitoring and value of information analysis. I will not have time to go through them in detail, but it is all provided in the overheads, so you can go back to it. Prototype development is one of them, and prototype development is a little bit of a curiosity, a strange thing in structural engineering, because normally structural engineering is really about making a new structure, fundamentally, every time: the context is new, the structure looks slightly different from the latest structure constructed using the same concepts, it is located in a different place, the loadings are different, the use will be slightly different. So, let's say, mass-production quality management rarely applies in structural engineering. But when we look at some classes of structures, like for instance wind turbines, then we get closer to something you could call mass production, and some of the information you can collect over the service lives of these things can actually be used as part of, let's say, an experimentally supported adaptive design optimization of these systems. This is what I mean by prototype development: you build 50 of these, you put them out somewhere, you observe, and then you realize that some of the performance characteristics are not quite optimal; the next one we will make with slightly different dimensions, for instance here in this joint, right, and
then you can do the same again: you observe and see, and then you adapt your design concepts, and in this way you develop design concepts. Then, of course, code making and code calibration for design and assessment of structures is another context. There are lots of things we can observe from the performance of structures in real life which have a potential for calibrating our management of safety in regulations and in design codes. Typically we develop all these models trying to manage the safety, and we have targets, let's say reliability targets and so on, in the code. But what we can do, under certain conditions, is to go out and see how the structures actually perform: how many failures do we get of this particular category, etc., and we can relate that to the basis for the design which we have incorporated into the code. If there is any important discrepancy, we are able to try to analyze why, and that might be a reason to change a little bit the modeling behind the design codes, and also some of the targets, or a little bit the safety format. Then, if we are dealing with engineered systems where we are able to observe how the systems are performing over time, there is an obvious application of value of information and structural health monitoring: observing how cracks initiate and develop over time, and the same applies for any type of corrosion phenomenon or wear-and-tear phenomenon. Of course, you can imagine that all sorts of things can happen to an engineered facility, but for those things which can happen where we have a prior idea of what it is that can happen, like when we are looking at fatigue crack growth, we have models describing how these damages may evolve as a function of the loading conditions, and based on all this knowledge we are also able to focus on where to look for that type of damage, and actually also when to start looking. So this is one of the
situations where we have an idea, and as I said, there are many other cases: corrosion of concrete structures of different types is also one of these things, and scour on the foundations of bridges; we also have models and understanding which can guide us to look at the right places at the right points in time. All of this facilitates that we can manage the performance of the systems better; we can reduce risks, at a cost. So this is obvious, and of course it also applies to optimizing schemes which may already be risk-based or risk-informed schemes for maintenance planning; they can be enhanced by collecting additional information, like from monitoring. Okay, we are getting closer to the very short break, but before we actually go to that, I would like to emphasize the following. Raiffa and Schlaifer developed this Bayesian decision analysis, and a fundamental result they were relying on was actually a postulate from Bernoulli a long time ago, namely that the ranking of decisions, or decision alternatives, should take basis in the expected value of the utility associated with these different decision alternatives. Daniel Bernoulli postulated that in 1738, and then he was fighting for a few years to try to prove it, but he did not manage; it was only in 1947, through von Neumann and Morgenstern and the axioms of utility theory, that we actually came to know that Daniel Bernoulli's postulate was right. This is a very strong thing, so please don't forget it, and don't get confused. If you have a complete model of all the relevant consequences for your system, which basically also means that you are able to evaluate all the different aspects of the risks in the system and the risks associated with the different decision alternatives, then, when you are evaluating which are the best and second best and third best decision alternatives for managing the performance of the system, what you need to do is to calculate the
expected value of utility. I will talk a little more about utility in a few minutes, but what you need to focus on is the expected value of utility associated with the different decision alternatives: you calculate them, you see which decision alternative gives you the highest expected utility, and that is the right choice. Never forget that starting point: once you have a complete model for all the consequences in your system, it is only about expected values. If you look at how people have tried to benefit from decision analysis in all sorts of areas — from the insurance sciences, to mathematical modeling and decision making in the financial markets, to engineering applications — you see all sorts of deviations from the fundamental principle of the axioms of utility theory, and you see people trying to accommodate somehow the idea that the uncertainty associated with the utility, and not only the expected value of the utility, could in some way play a role in which decision alternatives are optimal. I am telling you all this so that you don't make any mistakes. This uncertainty associated with the value of utility is a little confusing for many people, but the reason it is confusing is that they do not have a complete model: they do not take into account all the possible consequences which may arise from their decision making for the system they are trying to manage. In those cases, yes, you can argue that it might be an idea to look a little at the uncertainty associated with the utility, because you have an incomplete model and you would like to see how the incompleteness of your model influences your decision making. We can also do that, but we try not to. What we really try to do is to get the relevant complete model up and standing for the decision context we are in — then everything is very easy. Don't get confused: expected values, nothing else.

Now, these people in the Western world came up with a very nice framework for decision analysis, but I would also like to highlight that similar thinking was expressed in China under the Emperor Qianlong of the Qing dynasty, who reigned from 1735 to 1796 — a little interesting that it is the same period of time. There is an inscription from the Forbidden City which reads, roughly: the way of heaven is profound and mysterious, and the way of mankind is difficult; only if we make and follow a unified plan, and follow the doctrine of the mean, will we be able to rule the country well. The doctrine of the mean I of course immediately interpret as the expected value — so this is the expected value principle, and they agree.

I think this is a good point to introduce three different types of decision analysis, namely the prior, the posterior and the pre-posterior decision analysis. The objective is to optimize decisions by maximizing the expected value of utility, as we talked about before. We will not have much time to go into the details of utility modeling, so for simplicity, associate with the term utility a term that is more frequently used, namely benefit: whatever benefit we can draw out of our decision making. Typically it is a good idea to express the benefit in monetary terms as net income — any benefit net of expenses. This is what we want to maximize. And it is fundamental here
that we are buying information, and the prior probabilistic model is our tool: as I indicated before, we are managing information rather than physics, but the information comes to us through the physics, and through the other ways in which we collect it. So we have a variety of decision alternatives, as illustrated here. Then we have the outcome of the system which we are trying to manage, which is associated with uncertainty, illustrated by the random vector X — a space of possible events which may take place. At this point you may remember the system model with exposures and direct and indirect consequences: this is where it comes into play, because here you have all the different scenarios which may take place and the total consequences — for now, the benefits — associated with these scenarios. So we have complete scenarios of decisions, outcomes of the universe, sequences of exposure events, direct and indirect consequences, and how all of that leads to benefits. What we want to do is find the best decision alternative a, maximizing the expected value of benefit. I write one prime here to indicate that this is based on the prior probabilistic model we have for describing how the system works. Since this is an expected value, we can write it as the benefit multiplied by the prior joint probability density function of all the random variables used to describe the performance of the system, integrated over the entire outcome space of all uncertainties; then we optimize over the decision alternatives. So this is the prior decision analysis.
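As a minimal sketch of this prior decision analysis — with discrete states in place of the density, and hypothetical numbers of my own, purely to show the mechanics:

```python
# Prior decision analysis in discrete form: rank the decision
# alternatives a by their expected utility under the prior model,
#   E'[b(a, X)] = sum over x of b(a, x) * p'(x),
# and pick the maximizer.  All numbers are hypothetical.

prior = {"good": 0.6, "bad": 0.4}         # prior probabilities p'(x)

benefit = {                                # hypothetical benefit table b(a, x)
    ("a0", "good"): 10.0, ("a0", "bad"): -5.0,
    ("a1", "good"): 4.0,  ("a1", "bad"): 2.0,
}

def expected_benefit(a):
    """Expected benefit of alternative a under the prior model."""
    return sum(benefit[a, x] * p for x, p in prior.items())

best = max(("a0", "a1"), key=expected_benefit)
print(best, expected_benefit(best))        # a0 4.0
```

The integral over the outcome space becomes a sum over the discrete states; nothing else changes.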
A good illustration, after the dinner yesterday, is the hangover dilemma. Imagine that you are lying in bed in the morning with a horrible headache, and you want to find out whether it is a good idea to move to the bathroom, find an aspirin, take it, and see if it helps. You feel so bad that getting out of bed and moving to the bathroom is a real barrier — a large inconvenience. You also know by experience that if you do nothing, it will take approximately six hours before you feel well again, and the disutility associated with those six hours is one unit per hour of suffering. Fundamentally it is a decision problem: should you move out there? If you do go and take an aspirin, you know from experience that it does not always work — but in 80% of the cases it does. If the aspirin works, it cuts off four hours of discomfort, so you only suffer two hours, and then you can get up and have a life again. But everything depends on how you value the discomfort of getting out of bed, and you are a little in doubt — that is always the dilemma: do I want to move?

So here I have put up the decision tree, with the utilities as I just explained. If you do nothing, you have a utility of minus six hours, one per hour. If you do get out and try an aspirin, it may work, with probability 0.8 — you cut the suffering by four hours, but you have to get out of bed, so you have the disutility d there — and it may not work, with probability 0.2. You multiply 0.8 by the one branch, add 0.2 times the other, and that is the expected value of utility associated with the decision alternative to get out of bed. Likewise, if you stay in bed, you have with probability one the disutility of minus six. Now you can write up the balance equation, and you find that the balance point is a discomfort of 3.2: if you value the inconvenience of getting out of bed at around 3.2 hours of suffering, you would not know what to do; but if the discomfort is smaller than 3.2 hours, you should get out of bed, take that aspirin, and get on with your life. That was maybe a little bit of decision analysis for dummies — or for normal people, I don't know; it is close to real life, anyway.

Now, the posterior decision analysis. We start out with our prior description of the universe, described by the prior joint probability distribution of all the uncertainties affecting our decision problem. If we get some additional information, we want to use it to establish a new probabilistic model, and you all know how that works, using the scheme of Bayes: we formulate the likelihood function, which relates the information to our prior model, and we calculate an updated joint probability density function for our uncertainties, which we indicate with a double prime. So the posterior is the likelihood multiplied by the prior, normalized by the total probability of the observation we have made. And of course the information z can also be indexed with an e, describing the experiment by which we obtain it. So now we get a little into the planning of the collection of information: if we write the likelihood like this, then the experimental planning enters the picture.
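Returning to the hangover dilemma for a moment, the expected utilities and the 3.2-hour balance point can be checked with a few lines (utilities in hours of suffering, using the numbers just given):

```python
# The hangover dilemma as a prior decision analysis.  Aspirin works
# with probability 0.8, cutting the suffering from 6 hours to 2;
# d is the disutility of getting out of bed.

p_works = 0.8

def u_stay():
    return -6.0                                # suffer the full six hours

def u_get_up(d):
    # disutility d of getting up, plus the lottery over the aspirin working
    return -d + p_works * (-2.0) + (1.0 - p_works) * (-6.0)

# Balance point: -d - 2.8 = -6  =>  d = 3.2 hours of suffering
for d in (2.0, 3.2, 4.0):
    print(d, round(u_get_up(d), 2))            # compare with u_stay() = -6.0
```

Below a discomfort of 3.2 hours, getting up has the higher expected utility; above it, staying in bed does.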
You already see that we are trying to link the information achieved about the performance of the system with the planning of how to get that information in the first place — and this is clearly a decision alternative: the plan for collecting additional information. Once we have the posterior probability density function, we can simply do the decision analysis again. Of course, this analysis now takes into account the new information — we have a better picture of reality — and we do the same thing as the first time, but now everything carries a double prime; nothing else has changed. When we talk about collection of information, people speak about monitoring as if it were something very special, distinct from other types of information collection, but fundamentally it is all the same: whether we call it this or that, it is collection of information, and it is just about how we plan to collect it. The same goes for what other people call assessments, or inspections — it is all information collection.

Now, looking at the aspect of actually planning the collection of information, we can add that to the decision tree, as illustrated here. This is what we had before, and I am just adding the component on how to collect the information: here you have the plans e — the decision alternatives relating to how we want to collect the information — and here you have the random vector Z, indicating that we have an outcome space of possible observations from the experiments. It is clear in this concept of prior and posterior decision analysis that we can use this information to update our prior models into posterior models, and then optimize decisions based on the posterior probabilistic model, just as we did before. But because we do not know the outcome of the observations — we only have a probabilistic description of them — we have to establish, conditional on each possible outcome, the conditional expected values of utility, identify the optimal decision alternatives based on those, and then integrate over all possible outcomes of the experiments outside. We can write this as follows: we optimize the choice of experiment in an outer optimization, outside the posterior decision analysis optimization which sits inside. Inside we just have the usual posterior decision analysis, but conditional on the outcome Z — which is a random variable, because we do not know the outcome — and therefore we maximize outside, taking into account all possible outcomes of the experiments through the expectation over Z. This is what we call the pre-posterior decision analysis. Conceptually it is like pulling yourself up by your own hair, in the sense that we take into account the outcomes of information collection even though we do not really know what those outcomes are. The central thing is that we model those possible outcomes using the prior model of the universe we are dealing with — the best available knowledge we have about the possible outcomes of an experiment, given an experiment plan — so there is no hocus-pocus here. The reason this actually makes a difference is that, depending on the outcome of the observations, we are able to make different optimal choices with respect to a: the optimal actions depend on what the outcomes tell us about the performance of the system, and thereby we can optimize — we are actually changing something in the system depending on what we observe in the experiment.
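The nested structure just described — an inner posterior analysis per outcome, an outer expectation over outcomes weighted by their prior predictive probabilities — can be sketched generically. The function names and all the numbers below are my own illustration, not from the slides:

```python
# Pre-posterior decision analysis in discrete form:
#   sum over z of P'(z) * max over a of E''[b(a, X) | z]
# The outcomes z are weighted with their PRIOR predictive probabilities,
# since the experiment has not been carried out yet.

def posterior(prior, likelihood_z):
    """Bayes' rule: posterior p''(x|z) and prior predictive P'(z)."""
    joint = {x: likelihood_z[x] * prior[x] for x in prior}
    p_z = sum(joint.values())
    return {x: v / p_z for x, v in joint.items()}, p_z

def preposterior_value(prior, likelihood, benefit, actions):
    """Expected utility of deciding after observing the (yet unknown) z."""
    value = 0.0
    for z, lz in likelihood.items():
        post, p_z = posterior(prior, lz)
        value += p_z * max(sum(benefit[a, x] * p for x, p in post.items())
                           for a in actions)
    return value

# Tiny hypothetical exercise of the functions:
prior = {"good": 0.5, "bad": 0.5}
likelihood = {"ind_good": {"good": 0.9, "bad": 0.2},
              "ind_bad":  {"good": 0.1, "bad": 0.8}}
benefit = {("act", "good"): 5.0, ("act", "bad"): -5.0,
           ("skip", "good"): 0.0, ("skip", "bad"): 0.0}
print(preposterior_value(prior, likelihood, benefit, ("act", "skip")))
```

The inner `max` is the posterior decision analysis conditional on z; the outer sum is the integration over the outcome space of the experiment.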
Otherwise — if the information cannot trigger a change in how we treat the system we are trying to understand better — collecting it would not lead to anything: there has to be a specific decision associated with, or triggered by, what we observe.

That leads me to the possibly-defect-car dilemma, so we have two dilemmas today. Imagine that you have a car, and suddenly Fiat announces on the radio: if your car was built between 2000 and 2002, it is really old, but you may also have a problem, because we found a mistake in the engineering. This mistake is not in all the cars built in that period, but in some of them. If your car has the problem — if it belongs to a bad batch — then there is a certain probability that it will fail, and this failure is associated not with any loss of life, but with severe engine damage, which will cost you 10,000 euro if it happens; it can be fixed, but it is quite costly. They also say that, if your car belongs to the bad batch, the probability that it actually fails is 0.2, and that it survives is 0.8; and that approximately half of the cars produced in this period have this type of defect, while the other half are okay. If your car is in a good batch — which has probability 0.5 — you may assume it will just continue as normal. Now, Fiat does not offer you to come to the workshop and have the check done for free, so it is up to you to decide whether you want to spend money on it. To find out whether it makes sense, you do a decision analysis, and this is really an information-collection problem, because you can choose to drive your car to the workshop, where they do the check; immediately, and without any possibility of mistakes, they identify whether it is a good batch or a bad batch. If it is a bad batch — which of course comes out with the same probability as before — they will do a repair. But you know workshops are not perfect, so the repair may be successful or unsuccessful. (I think I may have swapped these probabilities here — or maybe I was really thinking of Fiat; I am very biased, because this is how my workshop works.) Never mind, the message is clear: if the repair is unsuccessful, you still have a certain probability of failure or no failure, and if it is successful, everything is good. And of course, if you take the branch of driving to the workshop, you have the disutility associated with the cost of driving there. Now you can do the same thing again: multiply the probabilities along the different branches by the corresponding utilities, add them up for the two decision alternatives — to do it or not to do it — and, keeping the cost as an unknown, find the balance point. If the cost of driving there and having the check done is around, I think, 100 — compared to the 10,000 — then it actually makes sense for you to do it; so it needs to be a relatively inexpensive workshop. This is an information-collection problem: you are simply evaluating whether collecting the information makes sense. Note also that this is a situation with perfect information: the workshop can immediately identify whether the car belongs to the bad batch or the good batch. In reality there would typically be uncertainty associated with that, and that has to do with the quality of the collection of information. In engineering it is important to realize that this quality plays a very significant role: we can go out and collect information about the condition of basically anything, but there is a certain probability that we are not getting the real picture — we can make different types of errors and mistakes — and we need to model that in the decision event tree. Still, by assuming perfect information we get a picture of what can at best be achieved; this is the ultimate value of achievable information, and reality may reduce it, because reality may not be as generous as the perfect information we have here.

A question from the audience: so that result of 100 — does it mean, in common-sense terms, that if the quoted price of the check is above 100 euro, it does not make sense to go for the repair? Yes, exactly, that is the idea: you can start calling the workshops, ask what they charge for the check, and if they say it is above 100, perhaps you decide to just wait for the thing to crash — or hope it doesn't. That is the decision: when you call, you compare the quoted cost with the threshold you computed from the analysis. But here I want to make a remark regarding complete versus incomplete system models, because that is really important. You could imagine a household where, if you lose 10,000 euro, you are really in deep trouble, because you do not have it in your bank account — it is a lot of money, and you might have to go out and take a loan. So this is an incomplete model,
because we do not take that into account — we have an unlimited-budget assumption here. If you wanted to make a complete model, for which the expected value of utility really applies, you would need to augment your utility modeling with all the possible follow-on, indirect consequences: the 10,000 would become 10,000 plus something, depending on the situation. So that result mainly holds for the case where the budget limitation does not bind — where this is already your full system.

Now, when we talk about value of information analysis, what we really do is compare the optimal decisions and the corresponding optimized expected value of utility from the pre-posterior analysis with the optimized expected value of utility from the prior decision analysis. The difference between the two shows to what extent an optimized experiment for collecting additional information will add value, and how much. There is a principle in decision analysis which says that information cannot hurt. Information cannot hurt — but it can be costly: it will provide value, but the value it provides may be less than the cost of collecting it, and we need to understand this difference before we can decide whether we should collect the information. This is what we do, and you will hear much more about that later.

What else would I like to highlight? Here you see the prior decision analysis again, but in this figure I show different decision event trees, and what I want to illustrate is that typically we model the world as if we actually know
what world to model. Sometimes we do not fully understand which world we are in — which is the actual system we are trying to optimize decisions for — and we need to take into account the possibility that there are alternative systems: a range of possible, fundamentally different universes, if you like. There are many situations in practice where this concept is relevant. Imagine one universe in which you assume your system is influenced by the development of fatigue cracks, which reduce stiffnesses in the system. When you measure the system you can observe these reduced stiffnesses, you can monitor them, and based on this system understanding — that changes in stiffness originate from fatigue cracks — you can do all sorts of modeling: you can identify where you think the cracks are, and you can optimize decisions on their management. But the changes in stiffness could also originate from, say, systematic changes over time in the support conditions of your structure: changes in the stiffness of the soil, or dirt in the abutments of a slab, which simply changes the static system and therefore also changes what you observe and interpret as stiffness changes. In a way it is still a change of the stiffness of the system, but it has completely different causes. These two situations correspond to two competing systems, which may, with different degrees of appropriateness, be able to describe what you observe — but the actions you need to take in order to manage them are completely different. The actions relevant in the case where your static system is simply changing due to changed support conditions are not the same type of actions with which you can manage a fatigue-deteriorating system. You cannot take the optimal decisions from the fatigue context, scale them up or down, and find something useful for the changing-support-condition system; they do not belong to the same universe. Therefore it really is important that we take those non-affine mappings between different possible universes directly into the decision making. One context where this is also very relevant is climate change: within one assumption for societal development — one of the scenarios of the Intergovernmental Panel on Climate Change, which works with different scenarios of societal development — there is a lot of uncertainty about what actually happens, but it is conditional on the scenario. You can imagine that this corresponds to optimal decision making for one scenario, this for another, and so on, accounting for all the involved uncertainties; and what we need to do is optimize decisions with due consideration of the uncertainty about which system we are actually dealing with. When we look at systems in this way, we call it small-world systems modeling and decision analysis; if we try to look at everything, you will find in the literature what is called the big-world type of representation and decision analysis.

Good. Well, I am giving you the choice: we have a small example, well documented in the PowerPoint presentation, on how to do the prior, posterior and pre-posterior decision analysis for a case where the problem is to decide how long the piles used to support a
structure should be. It is a difficult decision because we do not know the thickness of the soft-soil layer down to firm ground, and we want piles which fit nicely, standing on firm ground somewhere down there. The thickness is uncertain, but in this example it has only two discrete states: the soft-soil layer is either 40 feet or 50 feet — we do not know which — and therefore we have a choice between piles which are 40 feet long or 50 feet long. That is fundamentally the decision problem. Besides these two choices, we also have the option of performing a test in order to get an estimate of the actual thickness of the soft soil. We can treat all of this by decision analysis, and we start with the prior decision analysis. If we choose a pile which is 40 feet long and the soil layer thickness is 40 feet, we have no disadvantage — everything is right. If we choose a 40-foot pile and the depth is actually 50 feet, we have a problem: we have driven the pile down, but it has not reached firm ground, so we need to splice it, and that is associated with a disadvantage of 400 monetary units. If, on the other hand, we choose a 50-foot pile and it turns out that the soil layer is only 40 feet, then when we cannot get it further down we just cut it off, at a disadvantage of 100. And if the soil layer is indeed 50 feet, everything is good and there is no disadvantage. So it is very easy to calculate the expected value associated with the two different choices. In this example we then introduce a testing scheme with imperfect information, whereby we can update the probabilities of ending up in either of the two branches, and we do the posterior analysis — and we can also do the pre-posterior. It would take 10 or 12 minutes to show you, and we are already well into the coffee break, so I give you the option of taking the coffee break instead — you can easily read this yourselves — but I am very happy to pull you through it in 10 or 12 minutes. What would you like? Continue? Okay, your own choice; we are now on slide 41 of God knows how many.

So, I have already described the problem. We label the two choices a0 and a1; the true state of nature — a very funny nature, with only two possible states; it would be nice if reality were ever like that, but it never is — is either 40 feet or 50 feet. The prior probabilities of the two states of nature come from experience and the judgment of geotechnical engineers — they are fantastic at making such judgments; we never quite know where they come from, but this is what they do for a living — and then we have the utilities I explained before. What we typically do is put up a table for the different decision alternatives showing their expected values: 0.7 times 0 plus 0.3 times 400 is 120, and 0.7 times 100 plus 0.3 times 0 is 70. These are the expected values of utility — here, costs — associated with decision alternatives a0 and a1, and it is obvious from this little analysis (maybe we would also know it by gut feeling) that the right choice is to take a 50-foot pile, and if something goes wrong we just cut it off. We can write this up formally as a minimization problem over the decisions, choosing the decision with the lowest expected cost: the 50-foot pile. Next comes Bayes: here we have the prior probabilities of the two states of nature, and the likelihoods of the experiment outcomes given the two possible states of nature.
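The prior stage of the pile example takes only a few lines of code, using the numbers just given:

```python
# Prior decision analysis for the pile example.
# States: soft-soil depth 40 ft or 50 ft, with prior 0.7 / 0.3.
# Actions: a0 = drive a 40 ft pile, a1 = drive a 50 ft pile.
prior = {40: 0.7, 50: 0.3}
cost = {("a0", 40): 0, ("a0", 50): 400,   # splice needed: 400 units
        ("a1", 40): 100, ("a1", 50): 0}   # cut-off: 100 units

expected = {a: sum(cost[a, s] * p for s, p in prior.items())
            for a in ("a0", "a1")}
print(expected)   # {'a0': 120.0, 'a1': 70.0} -> choose the 50 ft pile
```

Minimizing expected cost over the two alternatives reproduces the 120 versus 70 comparison and the choice of the 50-foot pile.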
Then we normalize; what we really want to achieve is a posterior probability distribution — a probability assignment over the different states of nature — using the information. Now, this testing scheme here is a fantastic one. The idea is that we set off a small explosion at the surface of the soil, with a microphone at the same location. The microphone records the exact point in time when the explosion goes off; the sound wave travels down through the soil, hits — not after very long — the intersection between the soft and the hard soil, and a reflection comes back up and hits the microphone. By analyzing this, with some averaging and some modeling of the velocity of sound through different types of soft soil, and with the knowledge we have about this particular soft soil, we are able to estimate the thickness of the layer from this type of measurement. Here we have the likelihoods conditional on the true state of nature. Imagine that we have conducted millions of tests with this equipment and found that, given the true state of nature is 40 feet, the likelihood of actually getting a 40-foot indication is 0.6 — so it is not perfect, you see; I will come back to that. The equipment can also give a 50-foot indication, with likelihood 0.1 given that the true state is 40 feet. And — I do not know why they devised the equipment this way; it was probably a geotechnician who developed it — it can also come up with an indication of 45 feet, which really fits neither state (you never get something that really fits), and there is actually a relatively high likelihood, 0.3, of getting such an indication when we have a 40-foot soil layer. The true state of nature can of course also be 50 feet, and based on all the testing of the equipment we have the associated likelihoods: 0.1 of getting a 40-foot indication, 0.7 of getting a 50-foot indication — that is relatively good — and 0.2 of getting this 45-foot indication we do not really know what to do with. Do these numbers make sense? Well, the likelihoods for each true state sum to one — that is good. What else can we say? This 0.6 is lower than the 0.7, and that physically makes sense: if the true state of nature is 50 feet, the uncertainty associated with estimating the velocity of the sound wave through the soil and back probably reduces — the variability in the soil characteristics matters more over a shorter distance than over a longer one. So to me this is a plausible model. And this is exactly the type of model you really need in order to assess the value of information: you need the likelihoods of all the possible indications the equipment can provide, relative to all the possible true states of nature. This is a likelihood function, and we need it. If providers of monitoring or inspection equipment cannot deliver something like this, it is not good — we should not be satisfied. This is key: without it, we have no real means for assessing the value of information. It is fundamental — like going to the doctor who says, "We have a positive finding for you": then you would like to know with what probability that means you will die next year, or tomorrow, or in 100 years. Now we can update, using Bayes'
formula, our prior model with these test results. You have the prior probability, and then you have the likelihoods. What you see here is actually the likelihood of getting z2, and z2 was a 45-feet indication, given that the true state of nature is 40 feet; so it is 45 given 40, and going back to the table, that is 0.3. This is multiplied by the prior probability which we have, and then I skipped the normalization; we can always normalize, so we just focus on the products here and then normalize to make sure that the sum is equal to one. This is now our posterior model, and based on a test result which gives us a 45-feet indication we can make a posterior analysis. So we imagine that we have conducted one experiment, the result is a 45-feet indication, this is our posterior model, and this is our updated decision analysis. We see that the numbers change a little, but the decision is still the same: choose the same pile type as before. We can write this up a little more formally. In the pre-posterior analysis, what we do is that we imagine not only that we have conducted one experiment with a 45-feet indication, but also the experiment where we get a 40-feet indication and the experiment where we get a 50-feet indication. For those additional two cases we also update using Bayes' formula, we conduct the posterior decision analysis, and we identify for each of those cases the optimal decision alternative. This is what we have done here: we minimize the expected value of the costs given the different possible outcomes of the experiments. What we then need to do is to integrate out over all the possible outcomes of the
experiments, and this is what we do here with the sum; it is just an integration. We weigh the expected values of benefits, or let us say costs, with the prior probability that we will actually get the experiment outcome which we have conditioned on, and we do that for the 40-, 50- and 45-feet indications. So what we need to do first is to establish the prior probability of having these different outcomes of the experiment, and this is what we do here using the total probability theorem. We say that the prior probability of getting a particular indication can be written as the likelihood of getting this indication given the true state of nature is 40 feet, multiplied by the prior probability of having a 40-feet true state of nature, plus the likelihood of having the indication of interest conditional on the true state of nature being 50 feet, multiplied by the prior probability of having a soil layer of 50 feet. We do that for the three different possible indications, and you get the corresponding prior probabilities of the different experiment results. That is pretty easy. Now we can do the optimization, the pre-posterior analysis. We have evaluated the prior probabilities of the different indications, and we have calculated, not only for the case of the 45-feet indication but also for the other two, the corresponding optimized expected values of utilities. For the 45-feet indication, you remember, it was 78, and that was obtained by choosing the 50-foot pile; this one here corresponds to the choice of the 50-foot pile and this one to the choice of the 40-foot pile, and these are
the corresponding probabilities of getting a 40-foot indication, a 50-foot indication and a 45-foot indication. These can be added up, and then you get an expected value of losses or benefits, what we call utility, which is equal to 40. That was for the pre-posterior analysis, where we had the option of making the experiment, so where we actually looked at the optimal decisions we could make given that we conduct an experiment. You remember that in the prior decision analysis, where we did not account for any possible experiments or experiment results, the optimal choice was the 50-foot pile, and that gives us an expected value of costs equal to 70. What we do here is that we simply subtract this number from the 70, and then we get 30. Those 30 monetary units correspond to the maximum the experiment may cost: if it is more expensive, it does not make sense to conduct it; if it is less expensive, it makes good sense. So this is the value of information in this simple case, and now I think it is high time for a break.
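The whole chain above, Bayes' update, total probability over the indications, posterior decision analysis per indication, and the final subtraction, can be sketched in a few lines of Python. The likelihood table is the one quoted in the lecture; the prior probabilities of the two soil depths and the cost table for the two pile choices are not given in this excerpt, so `prior` and `cost` below are hypothetical placeholders chosen only to illustrate the mechanics, and the resulting numbers will differ from the 78/70/40/30 quoted above.

```python
# Value-of-information sketch for the pile / soil-depth example.
# Likelihoods P(indication | true depth) are the ones from the lecture;
# the prior and the cost table are HYPOTHETICAL placeholders.

states = ["40ft", "50ft"]               # true depth of the soft-soil layer
indications = ["40ft", "45ft", "50ft"]  # possible test outcomes

# P(indication | state); each row sums to 1
likelihood = {
    "40ft": {"40ft": 0.6, "45ft": 0.3, "50ft": 0.1},
    "50ft": {"40ft": 0.1, "45ft": 0.2, "50ft": 0.7},
}

prior = {"40ft": 0.5, "50ft": 0.5}      # hypothetical prior on the depth

# Hypothetical costs: a 40-ft pile must be spliced if the layer is at 50 ft,
# a 50-ft pile must be cut down if the layer is at 40 ft.
cost = {
    "pile40": {"40ft": 0.0, "50ft": 400.0},
    "pile50": {"40ft": 100.0, "50ft": 0.0},
}

def expected_cost(action, p):
    """Expected cost of an action under a probability assignment p."""
    return sum(p[s] * cost[action][s] for s in states)

# --- prior analysis: choose the pile minimizing the prior expected cost
prior_cost = min(expected_cost(a, prior) for a in cost)

# --- posterior for one observed indication z (Bayes' formula);
#     the normalizing constant is P(z) by the total probability theorem
def posterior(z):
    joint = {s: likelihood[s][z] * prior[s] for s in states}
    p_z = sum(joint.values())
    return {s: joint[s] / p_z for s in states}, p_z

# --- pre-posterior analysis: for each possible indication, take the
#     optimal posterior decision, then integrate out over the indications
preposterior_cost = 0.0
for z in indications:
    post, p_z = posterior(z)
    preposterior_cost += p_z * min(expected_cost(a, post) for a in cost)

# The difference is the most the test is worth paying for
voi = prior_cost - preposterior_cost
print(prior_cost, preposterior_cost, voi)
```

With these placeholder numbers the prior optimum is the 50-foot pile; the pre-posterior expected cost comes out lower, and the difference is the maximum worthwhile price of the test, exactly the subtraction done at the end of the lecture (70 minus 40, giving 30, there).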