Thanks a lot, Sebastian, for the invitation. I have heard a lot of very nice stories about this event, and I'm very glad to finally be present at one of the meetings. I also wish to thank Trinity College for hosting; I'm really honored to speak in such an ancient and honorable institution.

Today I have prepared a presentation about my recent, and not so recent, work on the value of information, which is relevant to the idea of quantifying the benefit of monitoring infrastructure components.

First of all, acknowledgments. Most of what I show today is the work of two great PhD students: Milad, who graduated a bit more than one year ago, and Carl, who graduated very recently; and then Shaolin, whose picture is smaller because he is just a master's student, but who knows, maybe it will grow. I also acknowledge the financial support of some US agencies: the National Science Foundation, and PITA, an agency that tries to facilitate collaboration with industry, as well as initiatives for smart cities and for the energy sector inside Carnegie Mellon University, the institution where I work, in Pittsburgh, Pennsylvania.

The basic motivation of my research is the bad state, in some sense, of infrastructure systems in the US. This is the famous report card provided by the American Society of Civil Engineers, which essentially claims that huge investments are needed to upgrade many kinds of infrastructure systems: for roads, about 200 billion dollars of investment is needed. For bridges, the problem is mainly related to their age: the assets become older and older because of the lack of replacement. And the same story goes for the energy system.
There are many possible lines of research to address these issues, but specifically I, and of course the large community of structural health monitoring, try to use data and technology to optimize the use of resources for improving the state of these infrastructure components. For example, one of the current buzzwords is "cyber-physical systems": integrating computation and sensing into physical assets to improve their management. There is a company in Pittsburgh that uses these "pigs", pipe inspection gauges, to assess the condition of sewer pipes, and there is activity at the university on using unmanned aerial vehicles to inspect bridges.

The core of my approach is the concept of value of information, which I think this community knows very well, so here is just a small recap. You have a decision-making problem: you are dealing with some part of the world, indicated here by F, whose state you may not know, say the state of an infrastructure component. But you can observe it, maybe indirectly, with some noise, in an imprecise way, and get some information about the state. Given this information, you then have to take a decision, say about the maintenance of this component, and then you pay a loss: a cost that is a function of both the action that you take and the hidden state of the world.

To solve this kind of decision-making problem under uncertainty, you first have to process the information, and this is what Bayesian inference allows you to do: given the observation Y, you compute the posterior probability of the state of the world given that observation, and then, for any given action, the expected loss.
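As a minimal sketch of this pipeline, the following computes the posterior, the expected loss of each action, and the resulting value of information for a toy two-state component. All the numbers here (prior, likelihood, loss matrix) are illustrative assumptions, not values from the talk:

```python
import numpy as np

# Illustrative two-state problem: hidden state x in {intact, damaged},
# noisy observation y in {good, bad}, actions {do-nothing, repair}.
prior = np.array([0.97, 0.03])            # P(x)
likelihood = np.array([[0.8, 0.2],        # P(y | x=intact)
                       [0.3, 0.7]])       # P(y | x=damaged)
loss = np.array([[0.0, 100.0],            # L(do-nothing, x), in k$
                 [4.0, 4.0]])             # L(repair, x)

def posterior(y):
    """Bayes' rule: P(x | y) is proportional to P(y | x) P(x)."""
    unnorm = likelihood[:, y] * prior
    return unnorm / unnorm.sum()

def expected_loss(y):
    """Expected loss of each action under the posterior given observation y."""
    return loss @ posterior(y)

# Expected cost WITHOUT the sensor: best single action under the prior.
cost_prior = (loss @ prior).min()

# Expected cost WITH the sensor: best action per observation, averaged over P(y).
p_y = likelihood.T @ prior                # P(y) = sum_x P(y|x) P(x)
cost_posterior = sum(p_y[y] * expected_loss(y).min() for y in range(2))

voi = cost_prior - cost_posterior         # non-negative by construction
```

With these numbers, acting on the prior alone costs 3.0 k$ in expectation, while reacting to the observation costs 1.76 k$, so the sensor is worth about 1.24 k$ per decision.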
Then you can optimize your action given this noisy posterior state of mind, in which you still have some uncertainty about the condition. And if you marginalize over all the possible scenarios of what information the sensor, or sensor network, can give you, you obtain the expected cost of managing this structural component with the sensor. When you compare that with the higher expected cost of not having the sensor, that is, of having to take a decision under higher uncertainty about the state of the infrastructure, you can define the value of information as the difference between the (supposedly higher) expected cost without the sensor and the lower one with the sensor. Under some conditions it is guaranteed to be non-negative, but I will say something more about this at the end of my talk if I have time.

Computationally, this is already a challenging quantity to assess. Consider that you are designing a sensing system, a network made of different sensors. First of all, you have to ask yourself: what kind of measures will these sensors give me? How shall I react to those measures, how can I process this information, what will be the optimal action? If you integrate over all the possible information that you can get, you obtain the value of information. But then, if you are at the stage of designing the optimal configuration of your sensor system, you essentially have to repeat this exploration again and again: what if I remove one sensor, what if I add another, what would be the additional value of doing that? If you are able to compute the value for alternative configurations of the sensor system, you can figure out the optimal way of exploring. So it is very challenging, but if you are able to do it, the benefit is pretty high. For example, the value of
information essentially tells you the maximum amount of money you should be willing to pay for getting this information, because if you pay more, the overall gain becomes negative: you should not pay an infinite amount of money, and you should not overpay for information. On the other hand, you can also compare exploratory and exploitative actions. This is terminology from computer science, but it essentially means that there are actions like repairing a bridge and actions like installing a sensor, with different costs, and in the asset-management setting you have to figure out whether it is better to invest in sensors or in exploitative actions, in some sense. And again, you can use this metric for ranking alternative exploratory actions: is it better to monitor this component or that one, or, within a component, to use this kind of sensor or that one? You can rank them in terms of value of information and figure out the optimal way to proceed.

Personally, I have used this concept for spatially distributed systems, for example a network of bridges under seismic risk, and, I think I will also have a slide about temperature fields, for temporal models, in which the problem is more complicated because you have to take a sequence of actions one after the other, and you somehow have to predict the future relevance of collecting a piece of information right now. I will not say much about spatially distributed systems, but here is one idea: we now have a project about the urban heat island effect, the effect of temperature on the city. The idea is that we have to combine the uncertain information that we have about the distribution of temperature with the risk that high temperatures pose to people. So we have calibrated some probabilistic models of how temperature will evolve in time, in collaboration with
Princeton, and in turn we recommend the placement of sensors, in this case weather stations, in the city of Pittsburgh. Maybe the only thing really worth seeing here is this classical graph of how the value of information grows as you place more and more sensors. If you place just one sensor you get a high benefit, and as you keep placing sensors, if there is no cost in doing so, you get a monotonically increasing function, because the more information you have, the better. But you get this diminishing-returns curve: you start receiving more and more redundant information, because if the temperature field is rather smooth and you place many, many thermometers, at some point the additional readings are redundant. This is why the derivative goes down. And if you include a cost, say a linear cost for adding more and more thermometers, you see that there is a peak and then the net benefit goes down. So if the cost of a thermometer is this one, maybe the optimal number of thermometers is nine; if the thermometers are more expensive, the optimal number is four.

The final work of my student Carl is about the following idea. Suppose you have a physical phenomenon that changes in space, say contamination, or again a temperature field, and also changes in time. This is just one realization of what can happen: the field changing in space and time, and when the field goes above a specific threshold, when this surface goes above this threshold, you see this area that is contaminated, a dangerous area. Given that, since in reality you cannot observe the entire field, you have to place sensors to observe it. So the question
would be: how can you locate the sensors, and move them adaptively depending on your current observations of the field? In this simulation, essentially, you see that by processing the current information we have, we place the sensor, and then this is our decision, in some sense, about mitigating this area, while this is the true, correct area to be mitigated. Maybe it is not so easy to follow, but it is interesting to notice that the value of information is generally high where you do not know what to do. It is not high where the field is high or where the field is low; it is really high where you are more or less at the threshold, because in that case you do not really know what to do: should I expand my mitigation area or not? This shows, for this general problem of sensor placement and scheduling, what features make the value of information high. If you have high uncertainty, there is a good benefit in measuring. You may also think that where the field is high there is a high benefit. And then there is the idea that when you measure one point you learn a lot if the field is smooth, because you learn about a wide area; that is another feature that can make the value high. In temporal processes it becomes more complicated, because it may be worth taking a measure now to learn something useful about what will happen in the future.

So this was one piece of work, but now I want to talk about something more directly related to the idea of monitoring structural components. To give the general landscape: this concept of value of information, which again this community may already know very well, has been developed probably since the 1960s. I
think there is some controversy about whether the Russians got there first, but I am not really an expert on that; my key references are Howard Raiffa and Ron Howard in the 60s. And then, of course, there has been a lot of work on this in our community over the last decades. I will not trace all the references, but I think Michael Faber started working on it, Daniel also worked on it in his PhD, I started working on it with Daniele Zonta, and then with Arman, and then Sebastian and James: these are probably among the most active researchers in this area for civil engineering and infrastructure systems in general.

For what I am presenting now, I will give you the simplest possible problem one can face in infrastructure management. You have a system with a binary state, which can be intact or failed: a binary variable that defines the state of this component. Then you can take a binary action, do nothing or repair, and the question is: what shall you do, repair or do nothing? And there is a loss function, a very simple loss matrix. If the component is undamaged and you do nothing, you pay nothing, which is very nice; but if you do nothing and the component is damaged, you are going to pay the cost of failure, which is potentially very high. As an alternative you can repair the component, and if you do that, just by paying the cost of repair, which generally is much less than the cost of failure, you essentially eliminate the risk of failure. So that is the setting, and the question is what you should do. This graph is a function of the probability of failure: what you know about this binary component is summarized in just one number, ranging from zero to one, that tells you the probability that the component is going to fail. And if you decide
to repair, the cost you pay is flat, in this case, say, four thousand dollars, no matter what your belief is, because no matter whether the component is undamaged or not, this is what you are going to pay. But if you are more aggressive and you do nothing, you have to face a risk that grows linearly from zero: if your probability of failure is zero, you are absolutely sure the component is fine, so by doing nothing you pay nothing; on the other hand, at one you are absolutely sure the component is going to fail, and you pay the cost of failure; and in between, the expected cost is just linear, the probability of failure multiplied by the cost of failure. So the minimum cost corresponds to the optimal policy, which is essentially doing nothing until it is too risky, where "too risky" means the probability of failure exceeds a threshold that is just the ratio between the cost of repair and the cost of failure. You do nothing up to here, but if you find yourself believing that the probability of failure is too high, you repair. This is the fairly obvious optimal policy.

Now, if you can observe the state of this variable before you take a decision, you are much better off, because if the information is perfect you can implement a very simple policy: if you observe that it is undamaged, do nothing; if you observe that it is going to fail, repair. In this case the cost you pay grows linearly between zero and the cost of repair. Why? Because failure goes completely out of the window: the component will never fail, since you can always prevent the failure. You are just going to repair, and pay the cost of repair, whenever the component is about to fail, so the risk is really the product of the probability of failure and the cost of repair. You see that this expected cost is always less than or equal to the higher
expected cost paid by the decision-maker without information, and the difference between the two is the value of information, in this case the expected value of perfect information. It is also a function of other factors, but here it is plotted just against the probability of failure. Of course it is zero when there is nothing to learn: if you already know that the component is perfect, or if you already know it is going to fail. If there is uncertainty, there is some value in knowing before you take a decision, and specifically the maximum value is not where the probability of failure is very high; it is at the point where you essentially do not know what to do. At this point the two actions have the same expected cost, so you really do not know what to do, and there is very high value in learning what is going on.

What if you have to take many decisions, one after the other? Unfortunately, it becomes much more complicated. You can think of a tree of alternating actions and observations: action, observation, action, observation, and so on. Unfortunately, this grows exponentially with the number of steps in the future, as the depth grows: if you have, say, ten steps in the future, with many observations and actions available at every step, the number of leaves in the tree grows very, very fast. But if you are able to do that, it is perfect: you can figure out exactly what the optimal strategy is. The idea of using a Markov process is based on the observation that, when you look at the tree, you may realize that at different points in the tree there is some sufficient statistic, some knowledge, that is the same. Think of a simple case in which the component state can just range between one and five, and you can observe it exactly, one
being perfect and five being horrible. Now look at what happens after, say, three years. If after three years I am in state three, no matter how I arrived there, from this path of the tree or from that path, the optimal actions after observing that the component is in state three after three years are, let us suppose, the same. So if you rely on this assumption, which is related to some Markovian principle, then hopefully the complexity grows much more slowly than in the full tree, because you essentially only have to take care of the possible beliefs you may have at the different steps. This is related to the use of partially observable Markov decision processes for solving this kind of problem, which is something my student Milad did with me in the past.

The idea here is that instead of facing just one decision, you face a sequence of decisions. You have an infrastructure component whose state changes in time: this is a hidden chain describing how, year by year, the state of the infrastructure component changes, affected by your actions. Every year you take an action, such as repair, do nothing, and so on, and these affect the evolution. But you cannot observe the state directly; you can only observe it indirectly, for example through some sensor, and then you pay a cost that is a function of your action and the hidden state. To give an example: suppose there are three possible states, undamaged, damaged, and collapsed, describing how the infrastructure component can be; three possible actions, do nothing, inspect, repair; and four possible observations, which are noisy versions of what the state is. Your goal is to minimize the expected discounted cost of this process, where you have to take an action right now and then a sequence of actions, with the aim of minimizing this expected discounted
cost. The key observation here is that there is a sufficient statistic, called the belief. It may not sound like a technical term, but it is used technically, as most of you may know: at any moment in time, it is the posterior probability of the current state given all the observations you have collected so far. For example, at one moment in time you may think that with 70% probability the state is undamaged, with 10% probability it has already collapsed, and with 20% probability it is damaged. Because there are three possible states, the belief is a pie with three slices, and the idea is that your current action should be based on what you believe.

This is a representation of the possible beliefs: a cube with side one, so all vectors like (10%, 20%, 70%) are represented by points inside this cube. But among those vectors, only the ones normalized to one are relevant, because the belief has to be normalized to one: the pie chart has to sum to one. So this triangle represents all the possible, well-normalized beliefs. If you look at this triangle from above, this is how it looks: if your belief is one-third, one-third, one-third over the three possible states, undamaged, damaged, collapsed, you are here in the middle, while if you are at a vertex, it means you have certainty: 100% probability that the component is undamaged, damaged, or collapsed. So, for example, this specific pie chart is represented by this point: if this is what you think, you are here in this domain.

Then, essentially, you are able to predict how your belief will change depending on the observations you receive. For example, under the action "do nothing", if you receive some bad
symptoms, this is how your belief will change, with a certain probability, towards certainty of collapse, while if you receive some good symptoms, your belief will update in this direction, towards undamaged. If you repair, you can also think, in a kind of Bayesian sense, that the act of repairing has, first of all, an effect on the physical infrastructure, but also an effect on your belief: if you repair, even before observing anything, you think the state will be much better, because repairing has a good track record of improving the state. And then maybe you can take a visual inspection, and if you do, the belief will collapse towards one of the corners: you go there, you inspect, it may be expensive, but the effect is that you update your belief towards certainty about one of the states.

Given that, the question is: I see how the belief evolves stochastically in time, so what is the optimal policy, what shall I do, once I specify the costs and so on? To answer that, you have to solve the Bellman equation. This, for example, would be an outcome of the Bellman equation, defining the optimal policy. The optimal policy depends on your belief: maybe you should do nothing only if you are here, very close to this corner, where the probability of damage or collapse is pretty low and you are very confident that the component is undamaged; on the other hand, if you have a significantly high probability of collapse or damage, you repair; and in between, maybe you do a visual inspection, under these specific circumstances. So this is the optimal policy: given your belief, this is how you should behave in confronting this component. The optimal policy is optimal because
it is guaranteed to give you the minimum cost. So think of a surface here: depending on where you are in terms of belief, it tells you the expected cost-to-go. If you manage a component starting from this point and going on, you can figure out what the cost is. For example, along this line, since it is not easy to plot surfaces, this is how the cost goes: if you are certain the component is damaged, the cost is pretty low, forty thousand dollars, and it is higher if there is only some probability of damage.

Now you can ask: what if I install better sensors, what would be the effect of installing better sensors on this component? If you do that, you essentially just have to solve another POMDP, another partially observable Markov decision process, a slightly different problem in which the relation between the state and the observation is tighter: the observations have higher quality. If you do that, two things happen. First, the policy changes: maybe you do not have to do visual inspections anymore, because now you have such good sensors that you can avoid them. Second, the expected cost-to-go goes down, and in this context that drop is the value of information: the idea that if I install better sensors, not just for one decision but for a very long management process, the cost goes down, and you can measure by how much. That is the value of information. I think I can skip this detail, it is not really important. As a remark, unfortunately these partially observable Markov decision processes can be pretty complicated to solve, depending on the context, but luckily there are effective approaches and software able to do it. This
is a solver developed at the National University of Singapore a few years ago, based on what is called point-based value iteration, which is a nice approximate way of solving POMDPs.

How much time do we have? More? OK, good. So I will now give you two more sections. One is recent work that Shaolin has done, which will probably be presented at IWSHM, so it is pretty new, but it is essentially a parametric analysis of how the value of information depends on the specific context. We start with a very simple problem: again, a component that can be in three states, intact, damaged, and failed. The idea is that without a monitoring system you do not see the damage; when it fails, you see it, but you cannot distinguish intact from damaged. Suppose the damage is inside the component: without sensors there is no way to notice it, but of course you notice a failure. Then, if you repair, you pay some cost of repair, and no matter whether it was damaged, or after a failure, you go back to the intact state. There are some numbers: say a cost of failure of half a million dollars, a cost of repair of 10k, and some probabilities of deterioration.

Without the sensor, what you can do is what you could call an open-loop policy. Remember, you do not see the damage, so you have to rely on some prior model telling you that the damage will randomly occur sooner or later, and you have to base your repair schedule on that. It is similar to what you do with the timing chain of your engine: after some number of miles it is better to replace it, because otherwise it is going to break. So this is what you do: the component becomes damaged, you do not notice, but then you repair periodically, and maybe you repair when it was not even damaged, because you do not know the state when you repair. And then sometimes the
transition from damage to failure is so quick that, unfortunately, with a certain probability the component fails before you have implemented your periodic open-loop repair. But if you have sensors, every now and then you get a measure of what the state is, damaged or undamaged, and maybe it is perfect, or maybe sometimes the sensor gives you a wrong answer, an incorrect detection. Now you can base your decisions on what the sensor tells you: if you receive a good symptom you postpone the repair, if you receive a bad symptom you repair. And now you are able to compute the cost in open loop and the cost in closed loop; the difference is the value of information.

So we tried to see, making the problem a bit more general, how this is influenced by the accuracy of the measures, by how many measures we collect, by the information we have on top of the sensor, by the repair cost, and so on. To give you some graphs: this is how the value of information changes depending on the availability of the sensor, essentially how frequently you inspect the component, how frequently you collect information. Quite intuitively, it is monotonic in that: the more information you collect, the higher the value. But again, as in the temperature example I showed before, when you collect temperature measures in a city there is generally a high benefit in the first measure, and then it flattens out: if you collect more and more frequently, there is not such a huge additional benefit.

What if, in the same graph, you change the degradation rate? What if you consider, keeping all the other parameters the same, a component that degrades faster, or a component that degrades more slowly?
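The open-loop versus closed-loop comparison just described can be sketched with a small Monte Carlo simulation. The cost figures match the ones quoted above (failure $500k, repair $10k), but the transition probabilities, the five-year repair period, and the assumption of perfect yearly detection in the closed loop are all illustrative assumptions:

```python
import random

# Three-state deterioration chain: intact -> damaged -> failed.
P_DAMAGE, P_FAIL = 0.1, 0.3            # assumed yearly transition probabilities
C_FAIL, C_REPAIR = 500_000, 10_000     # cost figures from the talk
HORIZON, RUNS = 50, 2_000

def simulate(policy, seed):
    """Life-cycle cost of one component under a given repair policy."""
    rng = random.Random(seed)
    state, cost = "intact", 0.0
    for t in range(HORIZON):
        if policy(state, t):           # decide first, given (observed) state / time
            cost += C_REPAIR
            state = "intact"
        if state == "intact" and rng.random() < P_DAMAGE:
            state = "damaged"          # hidden damage appears
        elif state == "damaged" and rng.random() < P_FAIL:
            cost += C_FAIL             # failure, then rebuild to intact
            state = "intact"
    return cost

open_loop = lambda state, t: t % 5 == 4            # periodic repair, state unseen
closed_loop = lambda state, t: state == "damaged"  # perfect yearly detection

mean = lambda policy: sum(simulate(policy, i) for i in range(RUNS)) / RUNS
voi = mean(open_loop) - mean(closed_loop)          # value of the sensor per lifetime
```

Under these assumptions the closed-loop policy repairs only when damage is actually present and never lets the component fail, so the value of information comes out clearly positive.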
Generally, the idea is that if a component degrades faster, there is higher value in inspecting it, and in inspecting it very frequently, while if the degradation is very slow, even if you inspect it rarely you will be able to catch the damage before it is too late. The fact that this curve, for example, is very flat tells you that there is some benefit in inspecting every ten years, say every ten steps of this component, but if you inspect more frequently there is not much additional benefit, because the process is so slow that you do not need to check every day what happened: when the damage arises, you have time to take an action before it is too late.

Next, inaccuracy: this tells you how good the sensor is, what the mismatch is between the true state and the outcome of your sensor. This would be a perfect measure: if the inaccuracy is zero, the sensor tells you there is a damage if and only if there is a damage, with no missed detections and no false alarms. But when you have inaccuracy, you potentially have some misinformation, and the value of information, quite intuitively, is monotonically decreasing in that: the worse your sensors, the lower the value of information, up to the point where the sensors are so bad that the measure is, sorry, independent of the state of the component, and in that case there is no value at all.

Then, unpredictability. Even without sensors, there is a long tradition of developing models that tell you something about the time to failure: if I do nothing, how long does it take for my component to fail? The uncertainty of that is what we call unpredictability. The idea is that if this is
very small, you are doing pretty well even without the sensor, because the open-loop policy works pretty well: your bright colleagues in, say, solid mechanics are able to predict how the deterioration will go even without sensors, and in that case the value is low. If, on the other hand, you have high uncertainty, so essentially you do not know what is going on without the sensor, then the value of the sensor is high. If you think this is mathematically obvious, note that it is not really a theorem: it is not always true that the value of information is monotonic in the prior uncertainty, but in this case it is.

Then we investigated two more things. One is reaction time: in order to get some value, you should be able, within some limited time depending on the problem, to get the information and react by implementing some policy. In some contexts you cannot do that: you collect information, but then if you want to repair, it takes many months or years. In that case, of course, the value of information goes down, because even if you have very precise, very good sensors, the fact that you need so long to react in some sense defeats the whole purpose of the monitoring system. And here we show, and this really is mathematically proven, that if you have the constraint that when you decide to repair you have to wait T_R years, say five or seven, then the value of information is monotonically decreasing in that delay.

And then, how does the value of information change with the repair cost? The idea is that if the repair cost is zero, there is no value, because you always repair, it is so cheap; if the repair cost is very, very high, you never repair, you just take your chances with a failing component. So the value of information is not monotonic in the cost of repair.
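For the single-decision case, this non-monotonic dependence on the repair cost can be checked directly with the perfect-information argument from earlier in the talk. The failure cost matches the $500k quoted above, but the prior failure probability is an assumption for illustration:

```python
C_FAIL = 500_000.0   # cost of failure (figure quoted in the talk)
P_F = 0.2            # assumed prior probability of failure over the horizon

def vopi(c_repair):
    """Expected value of perfect information as a function of the repair cost."""
    cost_without = min(P_F * C_FAIL, c_repair)   # best fixed action under the prior
    cost_with = P_F * min(c_repair, C_FAIL)      # with perfect info: pay the cheaper
                                                 # option, and only when it would fail
    return cost_without - cost_with

# vopi is zero at c_repair = 0 (you always repair anyway) and again once
# c_repair >= C_FAIL (repair is never worth it); it peaks at the indifference
# point c_repair = P_F * C_FAIL, where the two actions cost the same a priori.
```

So the value vanishes at both extremes and is largest exactly where, without information, you would not know what to do, which is the shape of the curve described in the talk.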
There is an optimal region where the cost of repair is such that you do not really know what to do; again, that is where the value is high. So it is not true, even on the first branch, that the higher the cost of repair, the greater the benefit of collecting information.

Then the discount factor, which is essentially a way of saying how long your management process lasts: with a short horizon the value is low, and as the discount factor tends to one, which turns out to be equivalent to saying that the monitoring process is very long, the value of information goes up.

Here is another application. What I showed before was the value of a monitoring system that is there forever; I have also done some research on the case where you have many components and you must decide, right now, which one to inspect. I will mostly skip this, but the basic idea is the following: you track the current value of information of inspecting each component, and the components in a sense compete among themselves. If I monitor the first component I get some value; if I monitor the second one, some other value. A brand-new component has low value; as uncertainty accumulates, its value goes up; and in this auction-like model, when one component wins, you inspect it, you learn its state, its value drops, and so on. This gives a framework for adaptive scheduling of inspections across components.

The last thing I want to mention (maybe I can skip the risk part, since I already discussed it) is work with Sebastian's section, where we are now investigating a problem you may be familiar with. We may call it information avoidance: the fact that sometimes people prefer not to know. And there can be some rationality behind that; not always, but there can be. To give an example: suppose you are subject to a building code, and the code says you must repair when the probability of failure is too high. There is a threshold: if the probability of failure is above it, you have to repair, even if you would prefer not to because you are a risk-seeking agent who does not much care if the component fails. Morally that may be very bad, but you can have this different utility function, and in that case you would prefer not to repair, yet you are forced to.

If that is your situation, with the probability of failure currently above the threshold, you may find the mandated repair very inconvenient: to obey the rules of the building code you must invest a lot of money in repairing your infrastructure component, and by inspecting you might escape this, for you, unfair constraint. But the opposite case is the interesting one. Suppose the current probability of failure is below the threshold, so according to the code your building is safe enough: say the threshold is 10^-3 and your probability is 0.5 x 10^-3. You are safe and happy, until someone knocks on your door and wants to install a sensor. Should you install it? The sensor is free, there is no cost, so by the non-negativity of the value of information you should say yes, why not. But
actually there is a reason to say no. You say: what if I receive bad news, and the sensor tells me the probability of failure is higher than my prior? In that case I suffer a negative effect: the updated probability of failure puts my component under the constraint of the code, and as a consequence I have to repair it. So I can figure out that it is better for me not to install the system, even though it is free; and, somewhat paradoxically, if I compute the value of information it is negative, meaning I should be willing to pay this person not to install the sensor.

So we tried to see what happens with a very simple model. These are the same slides I showed you before, but with imperfect information; you can see the value is lower under imperfect information. Now take essentially the same setting as before, but add society, which has a different cost matrix from yours: for society, for some reason, failure is a really bad event, much worse than it is for you, so the cost of failure judged from the societal point of view is much higher than yours. What society then does, through the building code, is implement a policy more restrictive than yours. Your own policy would be fairly aggressive, tolerating failure probabilities up to this level; but because failure is so bad for society, society says: no, you must repair whenever you are above this threshold. You do not agree, but you have to comply. And the question, in this context, is: someone offers you a piece of information, you have to accept or reject it, and so you have to compute its value.

Think about it: this curve is the optimal cost when you are free of society's rules, and this one is the cost when society's rule is active. In the free case the cost is continuous, which, if you think about it, is quite remarkable: even there you choose between two very different actions, repair or do nothing, so as you update your probability of failure there is a point where a tiny perturbation takes you over the decision boundary and completely changes your action. Instead of doing nothing and staying in your office all day, you decide to repair the bridge: completely different actions, yet the expected cost is continuous. One consequence is that if the information is so noisy that it moves your probability of failure only by a small amount, its value is small. Under the societal constraint things are very different: society imposes a threshold, say 10^-3, and being just a bit below or just a bit above it makes a huge difference, because there is a finite gap in cost when you cross it. Because of that, the value of information becomes more complicated and can genuinely be negative. It means there really are people who, even rationally, prefer not to know. This is just one explanation (there are many reasons why people prefer not to know), but here the idea is that the observation exposes you to the risk of falling under the control of society, and you prefer to avoid that because you are, essentially, a greedy anarchist.
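This negative value of information can be reproduced in a minimal numerical sketch. Again, this is my own toy parameterization, not the numbers from the talk: the owner's failure cost is low, so left alone he would essentially never repair, while an alarm pushes his posterior above the code's threshold and triggers a mandatory repair.

```python
def owner_cost(p_fail, c_repair, c_fail, threshold):
    """Owner's expected cost at belief p_fail, under a building code
    that forces repair whenever p_fail exceeds `threshold`."""
    if p_fail > threshold:
        return c_repair                        # code-mandated repair
    return min(c_repair, p_fail * c_fail)      # owner chooses freely

def voi_under_code(p, e, c_repair, c_fail, threshold):
    """Value, to the owner, of a free but noisy binary inspection."""
    prior = owner_cost(p, c_repair, c_fail, threshold)
    post = 0.0
    for alarm in (True, False):
        like_dam = (1 - e) if alarm else e     # P(outcome | damaged)
        like_ok = e if alarm else (1 - e)      # P(outcome | intact)
        p_y = p * like_dam + (1 - p) * like_ok
        p_post = p * like_dam / p_y
        post += p_y * owner_cost(p_post, c_repair, c_fail, threshold)
    return prior - post

# The prior (0.2) is safely below the threshold (0.3), but an alarm
# lifts the posterior above it and forces a repair the owner considers
# too expensive: the free sensor has NEGATIVE value to the owner.
v_constrained = voi_under_code(p=0.2, e=0.1, c_repair=1.0,
                               c_fail=1.2, threshold=0.3)

# Remove the constraint (threshold = 1): non-negativity is restored.
v_free = voi_under_code(p=0.2, e=0.1, c_repair=1.0,
                        c_fail=1.2, threshold=1.0)
```

The same preposterior computation gives a negative value only when the forced-repair branch is active; with the threshold removed, the sensor can never hurt the owner.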
So at the moment we are analyzing this problem, and in terms of public policy the question is: how can society alleviate it? A building code that forces you to repair above a certain threshold is very good for preventing people from being too risky with their own assets, but it has this negative consequence of suboptimality in information collection: in certain specific circumstances, people will try to avoid collecting information. We are trying to work out how to solve that; if you think about it, maybe you have better ideas than I do. One direction would be to require the collection of information inside the code itself, and there are some examples of that. It seems to me a bit complicated, because the clean version would mean agents are no longer even free to decide about collecting information: the building code would say, follow this formula for the value of information, and whenever it exceeds the cost of information you are forced to acquire it. That seems hard to implement. The other direction would be to remove the constraint and put the responsibility truly in the hands of the owners, perhaps through insurance mechanisms or incentives, so that there is no forced constraint. And when there is no constraint, everything is smooth, and the value of information is guaranteed to be non-negative. With this, I conclude my presentation. Thank you.