So let's continue. You have heard a lot already, and now the idea is that you hear some more. Daniel in particular has gone through all kinds of modeling: Bayesian network modeling and different ways of thinking about, and accounting for, information with different characteristics. This is of course very important for our decision analysis and value of information analysis, but, so to say, it only concerns the first two branches of our pre-posterior decision analysis, which is part of the value of information analysis: the information type, direct or indirect for instance, and the outcome, which can be probabilistic or, in the case that we have perfect information, rather deterministic. So in this decision tree we have covered those two branches. Maybe we even have a look back at the first lecture: when we look at this flow chart, we have been covering these aspects, and we somehow already worked with a structural reliability model on the first day. But now we shed a little more light on the methods behind modeling the performance of a system in terms of structural reliability and structural system reliability, and we also look at the interface: how do we update our structural reliability and our structural system reliability? I will introduce the main aspects which need to be considered. So the first point is structural reliability of structures and systems, and the question to you is: what is structural reliability, and to which type of the structures you see here does structural reliability apply? Yes, exactly.
That was the answer to the definition, or one definition. But if you look at these structures: this is a bridge; this is obviously an offshore wind turbine, or an offshore wind park, with a topside where the conversion equipment sits. So, yes, this is also an aspect we will consider, but to what parts of these structures does structural reliability apply? We have the cables, okay, so that looks like it can be covered with structural reliability. How about the wind turbine? Yes, that is structural reliability, it belongs to a structure. And how about this part here, the machinery and the control unit, is that structural reliability? No, it isn't, and this is what we need to be aware of: structural reliability is about the structures. It requires a special way of thinking to calculate structural reliability, and it is different from machinery reliability. Here we observe that this bridge, and many other bridges, are rather unique, although there are types, like cable-stayed bridges.
So structural components and systems are unique. What we are doing, when we think of the probability of failure, is to look at failure mechanisms, and we often have extreme loadings, here for instance the traffic loads in combination with wind. Then there is an event which may lead to the situation that the loading exceeds the capacity, and then we have a failure. So basically we consider the loading and the resistance. We have probabilistic models for the loading and the resistance, which are derived from data; we estimate the distribution parameters, and we calculate the probability that the loading exceeds the resistance.

This is different from what is called classical reliability analysis, which applies to machinery systems. There the situation is different: I usually have a large number of identical components, and I have statistics of component failures and even of system failures. So I have data on the time to failure; then I can derive a model for the time to failure, estimate the distribution parameters, and then I have the statistics on the time to failure. That is largely how it is done for machinery systems; you find it for trains, cars, power plants. A limitation of this classical reliability is that we consider only one dimension, the time to failure. We consider an entire population where we assume that all components have exactly the same boundary conditions: they are exactly the same, they have the same resistance, they have the same exposure to load, and that only makes sense as an approximation. Even in the machinery part of an offshore wind park we could, in principle, model the loading of each gearbox individually, but this is hardly done.
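As a contrast to the structural load-resistance approach, the classical time-to-failure procedure just described can be sketched as follows. This is a minimal illustration, not the lecture's example: the observed times to failure are synthetic, and the exponential model and the rate value are assumptions chosen for simplicity.

```python
import numpy as np

rng = np.random.default_rng(5)

# Sketch of the classical (machinery) reliability approach: many
# identical components, observed times to failure, a fitted time-to-
# failure model. The data are synthetic; the true rate is an assumption.
true_rate = 0.1                          # failures per year (hypothetical)
t_fail = rng.exponential(1.0 / true_rate, size=5_000)

# Maximum likelihood estimate of the exponential failure rate.
rate_hat = 1.0 / t_fail.mean()

def reliability(t, rate):
    """Survival probability R(t) = exp(-rate * t) of the fitted model."""
    return np.exp(-rate * t)

print(rate_hat, reliability(5.0, rate_hat))
```

Note how the whole analysis lives in the single time-to-failure dimension: load and resistance never appear separately, which is exactly the difference to the structural reliability formulation.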
All gearboxes are taken together and considered as one population, and the statistics are made over this population. This is a big difference to structural reliability, where we separate the problem into loading and resistance. I think Sebastian has already shown in some publications that when we apply the philosophy of structural engineering to classical reliability, you can normally gain a lot, because you differentiate the loading in different states and the exposure of the different components, and you can make a much more differentiated judgment. We can say that we, as structural reliability engineers and researchers, know much more about the failure and damage mechanisms than the people from classical reliability engineering usually do. There, most likely, there is no clear knowledge of exactly how the component fails and what the mechanism behind it is. So in that respect the structural engineers and reliability researchers are better off.

Okay, these are the points. Maybe for the wind turbine this is not the classical structural reliability problem; it is rather an interaction between the structure and the machinery. Normally we have to consider environmental loads, but here there is a control system, so the loading is variable, and how the control is done, how the rotor is turned and the blades are pitched, influences the loading.

Okay, let's go on; there are a lot of interesting things we could talk about, but let's be a little more focused. When we look at the failure event R minus S lower or equal to zero, we can reformulate its probability as an integral of the CDF of the resistance times the PDF of S, integrated from minus infinity to infinity. This can be visualized in a PDF diagram.
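Written out, and assuming R and S are independent, the convolution integral just described is:

```latex
P_f \;=\; P(R - S \le 0) \;=\; \int_{-\infty}^{\infty} F_R(s)\, f_S(s)\, \mathrm{d}s
```

Here $F_R$ is the CDF of the resistance and $f_S$ the PDF of the load effect, exactly the two factors named in the lecture.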
We can visualize it like this, or we can introduce the joint probability density function, and then the probability of failure looks like this. We may have the situation that there is only one random variable on each side, but that is a large simplification, and I think this goes back to the discussion in the lunch break: we have to be very careful with such analyses. It is also possible to take this expression and derive a safety margin. This is a simple calculation with random variables: M is defined as R minus S, and if we integrate the density of the safety margin from minus infinity to zero, we have the probability of failure; the integration from zero to infinity provides the probability of survival.

So we can visualize this equation with the two PDFs, or we can draw it with a joint probability density function, where one dimension is the resistance and the other one is the loading. Does anybody know what the difference between this diagram and this diagram is, what we neglect here compared to this? Exactly, super: we neglect that we may have dependencies, which arise through common influencing random variables. So the most general way of writing the structural reliability problem is that the probability of failure is defined as the integral of the joint probability density function over the domain where the limit state function becomes lower than or equal to zero; this is the domain we need to integrate over.

There are several ways of solving this problem. We have FORM and SORM, the first order reliability method and the second order reliability method. The first order reliability method relies on a linearization of the limit state function, here called g, and on finding the closest distance to the origin in the standard normal space. The idea of the second order reliability method is that I do not approximate the limit state with a
linear function but with a second order function; that is how SORM approximates the limit state function. These are standard solution methods; FORM in particular is often used. I don't want to go into detail on the FORM and SORM methods; you can find them in structural reliability textbooks and in almost any lecture notes. But let's have a closer look at simulation methods.

The procedure of working with a simulation method is that you generate realizations of your random variables; for every realization you compute the value of the limit state function; then you count how often you observe a failure and divide by the m realizations, and then you have the probability of failure. This looks rather straightforward, but maybe some more background on why it works. It works because of this expression here, which is also close to our intuition: we can approximate the expected value of a function g of X with m samples, by evaluating the function for each sample, summing up, and dividing by the number of samples.

Now let's get back to the probability of failure problem. We have been writing it like this: we integrate over the domain where the limit state function is lower than or equal to zero. But the problem is that we don't know this domain. That is why we need to replace the integration boundaries: it would be good if we could just sample and integrate from minus infinity to infinity, without limitations. In order to achieve that, an indicator function is introduced: it is one if there is a failure and zero if there is no failure. That is the very straightforward mathematical formulation, and with it we arrive at this formula. This is the background of Monte Carlo simulation. A few more words on the steps behind Monte Carlo
simulation. One point is the generation of the random samples. Today this is not such a problem anymore, but at the beginning of the development of computer technology the generation of random samples was a problem. We need to be aware that there is an algorithm behind the randomness, one which should provide randomness without short periods of repetition. You can find this in the software documentation: if you use MATLAB, you can look at the random number generators, where you also give the seed. Today the problem is rather that random number generators may be compromised by people who are interested in regenerating the random samples, so that the numbers are not random anymore but only look random.

Okay, so we need random numbers. Random number generators usually provide numbers between zero and one, and how do we come to our distribution function? This is done with the inverse transformation method. We have our uniformly distributed random numbers between zero and one, and if we take the inverse cumulative distribution function, we can generate samples which have the distribution we want. Of course, in Python and MATLAB you can very easily generate realizations directly for many distribution types with one command, but this is basically what is implemented behind the scenes. Where you may need to implement the method yourself is if you want to generate correlated samples, or if you come across a distribution function which is not implemented in MATLAB or Python or any other programming language; then you can work with the inverse transformation method.

What we always need to be aware of is the precision of our calculation, and here the rule is, and there is a derivation for it, that the standard deviation of a Monte Carlo estimate equals the square root of the
probability of failure divided by the number of samples. So this means: if you are calculating a probability of failure in the order of 10 to the power of minus 3, and you want to estimate it with a coefficient of variation of 0.1, then you need 10 to the power of 5 samples; if you want a precision of 1% coefficient of variation for the same problem, then you need 10 to the power of 7 samples. This is very important. A limitation is that if you have any updating in your code, or any intersection operations, then this does not necessarily hold anymore; it depends on how your intersection and union operators work.

So we can use Monte Carlo, and there is a rather straightforward mathematical operation for shifting the integration domain to minus infinity to infinity, so it is very easy to implement; but the price is the computational effort. Most of the samples are useless: they fall in the safe region, while we would basically like to know about this region here; but of course the highest probability density, and therefore most of the samples, are over there, so most of the samples are useless. In order to overcome that, and this is introduced here only very basically, we could try a different sampling density: we would like to decouple the joint probability density function of our random variables from the sampling density, so that we can sample more in this region. This can be introduced with a random variable V and its joint probability density function. We can still do Monte Carlo, but we are much more precise. Okay, it doesn't go any further here; the hint is that the basic idea is to decouple the sampling density from the probability density function of our random variables. Over the decades a lot of methods have emerged; they are called adaptive sampling, importance
sampling, subset sampling, and so on; they all basically aim to overcome the resource challenge we have with Monte Carlo simulation. There are also response surfaces and surrogate models; the idea there is likewise to overcome the computational burden of Monte Carlo simulation.

Does anybody know the criterion from computer science for whether a problem can be solved or not? What is the criterion from computer science for the solvability of a problem? Does anybody recognize these letters, P and NP, and their meaning? In computer science a problem is considered solvable if it can be solved in polynomial time. If the problem cannot be solved in polynomial time, it is classified as an NP problem, non-polynomial time. Such a problem, for instance, leads to the situation that there is exponential growth of the problem size, which requires exponential time to solve. This is a distinction from computer science, and this is what Daniel referred to in the last lectures: decision trees can explode exponentially, and then we have a problem which is considered by computer science as not solvable, or at least not completely solvable; of course, for very small problems we can find a solution. Okay, so this is the distinction made by computer science.

Does anybody know the growth of computational power over the last 50 years? What is that called? That's Moore's law, right, and what shape does it have?
Exactly, so computational power grows exponentially. Well, I haven't had the chance to confront computer science people with this question: why is there this P versus NP distinction if there is exponential growth of computational power? Somehow, in addition to the P and NP classification, there should be a few more boundaries, temporal boundaries, because if we take away the temporal boundaries and say that computational power grows exponentially, why does the distinction remain? Anyway, if you want to solve very complex problems today, we need to keep this in mind, but maybe not in the future. Good.

So, several solution methods for structural reliability: FORM and SORM are approximate methods, and they are precise for normal distributions, especially FORM. Any time you have skewed distributions or extreme value distributions, FORM may not be precise anymore; you have to be very careful. FORM may also stop working if you have a large number of random variables. If you take Monte Carlo, it is basically independent of the number of random variables and independent of the distributions; it will give you a precise solution if you have enough samples and can computationally handle the problem. These are the important points, and then there are all kinds of schemes for making a Monte Carlo simulation more efficient.

So these are the main points for structural reliability. We have quite some challenges, because we have very small probabilities of failure; structures do not fail very often. That is good for the structures, but it is a challenge for the computation, especially with Monte Carlo, as the precision depends on the probability of failure and the resources depend on the probability of failure.
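The pieces discussed above, sample generation with the inverse transformation method, crude Monte Carlo with an indicator function, and the precision rule, can be sketched together in a few lines. This is a minimal illustration; all distribution types and parameter values are hypothetical choices, not the lecture's numbers.

```python
import numpy as np

rng = np.random.default_rng(42)

# All parameter values here are hypothetical, chosen for illustration.
N = 1_000_000

# Inverse transformation method for the load S: uniform samples mapped
# through the inverse CDF of an exponential distribution,
# F^{-1}(u) = -ln(1 - u) / lam.
lam = 1.0
u = rng.random(N)
S = -np.log(1.0 - u) / lam

# Resistance R: sampled directly with the library's normal generator.
R = rng.normal(8.0, 1.0, size=N)

# Crude Monte Carlo: average of the indicator of the failure event
# g = R - S <= 0 over the N realizations.
fail = (R - S) <= 0.0
pf_hat = fail.mean()

# Precision rule from the lecture: the estimator's coefficient of
# variation is roughly 1 / sqrt(N * pf).
cov_hat = 1.0 / np.sqrt(N * pf_hat)

# Required sample size for pf = 1e-3 at 10% coefficient of variation,
# N = 1 / (cov^2 * pf), about 1e5 samples as stated in the lecture.
n_required = 1.0 / (0.1**2 * 1e-3)
print(pf_hat, cov_hat, n_required)
```

With a probability of failure around 5e-4, the million samples here give a coefficient of variation of a few percent, which illustrates why small failure probabilities make crude Monte Carlo expensive.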
This is quite a challenge, and another challenge is that we have systems. An R minus S formulation for one component, a beam maybe, is solvable, and there are enough modeling options; but when we go to such a system, it is extremely complex.

For structural reliability there are software packages. Of course you can work with a programming language, but there are also STRUREL with COMREL, UQLab, and SARA; SARA is a tool which is connected to a finite element software. And there is FERUM, from the University of California, Berkeley. There are various kinds of methods and various kinds of software packages, mainly coming from the academic field, but a few commercial softwares are also available.

Structural system reliability: why do we talk about systems? The very important points here are the determination of target reliabilities and risk analysis. The determination of target reliabilities we find in Eurocode 0 and in the JCSS Probabilistic Model Code, basis of design, and they are determined with a risk analysis as part of a decision analysis; Jochen is the expert here. If you do a risk analysis, you have probability times consequences, and the largest risks are associated with the situation of system failure. There are not such big consequences if one subcomponent of the bridge fails: maybe one lane is blocked, the other lanes are open, the bridge can operate, and there will be a few more traffic jams around, but these are not large consequences. There are very large consequences in the case where the bridge fails, because the failure itself may involve considerable consequences: the bridge is gone, basically, but there may also be fatalities and injured people. And in this case, that must be the Storebælt or the Øresund bridge, I don't know.
Maybe it is the Storebælt bridge; then the connection between some islands of Denmark is gone, and even the highest points in Denmark would be gone, because the highest points in Denmark are the tops of the pylons of the Storebælt bridge; there is no hill higher than that in Denmark.

Okay, there are large consequences for system failure, so the target reliabilities are determined on the basis of system failure. There are of course some other aspects of systems. System design is very important: a system should somehow be robustly designed; it should not be the case that the failure of one component, maybe a minor component, leads to system collapse. That is why system reliability is important.

Some basic modeling of systems: logical systems. We can have series systems and parallel systems. A series system is like a chain: if one component fails, the system fails. If you have maybe ten chains, and every chain is a component, and one chain ruptures while the remaining nine chains hold, that is an example of a parallel system. And then there are mixtures of these two systems.

For a series system and a parallel system we can calculate the probability of failure. For a series system it is the union of the failure events, and this corresponds to this expression here: we take the minimum of the limit state functions.
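The min/max formulation of series and parallel systems can be sketched directly inside a Monte Carlo simulation. This is a hypothetical three-component example with invented parameters, a common load and independent resistances, not the lecture's six-component case.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical three-component system: limit states g_i = R_i - S with
# independent resistances R_i and one common load S (all parameters
# invented for illustration).
N = 500_000
S = rng.normal(3.0, 0.6, size=N)
G = np.column_stack([rng.normal(5.0, 0.5, size=N) - S for _ in range(3)])

# Series system (union of failure events): take the minimum limit state.
pf_series = np.mean(G.min(axis=1) <= 0.0)

# Parallel system (intersection of failure events): take the maximum.
pf_parallel = np.mean(G.max(axis=1) <= 0.0)

print(pf_series, pf_parallel)
```

The shared load S automatically induces correlation between the component failure events, so no extra machinery is needed to handle dependence within the sampling.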
So we may have here six limit state functions; if you do Monte Carlo sampling and calculate the limit state values, we take the minimum of these six limit state functions and calculate the probability that it is smaller than or equal to zero; then we have the probability of system failure. The same can be achieved for a parallel system with the intersection operation, or, in terms of limit state values, the maximization operator. Here we can work with any correlation. There are simplifications for the case where there is no correlation between the failure events, and a simplification if there is full correlation between the failure events, but you can find these elsewhere.

If you only have normally distributed random variables, or normally distributed safety margins, then the probability of system failure for a logical system, a series system or a parallel system, can be calculated with the multivariate normal distribution and the betas, that is, the reliability indices of the components, together with a correlation matrix describing how each component is correlated with the others. So this is logical system modeling.

For the topic of structural reliability, structural system reliability and updating there could be three lectures, so this is the version that puts it into one lecture. Now to another kind of system modeling; it goes back to a publication of Daniels, who published on the statistical theory of the strength of bundles of threads, for the cotton industry. They were interested in the reliability of the strength of an accumulation of threads in
They were interested in the reliability of strengths of accumulation of threads in In the cable of wool Every remarkable in this publication is that Yes It has been published and then there were public comments To the research work so when we publish we first have the review process and we get the comments And then the article is published and the comments are not public, but here If you find this article You see also Comments by colleagues of Daniels. Okay So what is the Daniels system it yeah, it's a bundle of strands and This is a represents a pearl it system, but additionally he found expressions which take into account the behavior of the components. So whether they behave somehow Somehow a duck tile so that After the maximum of the strength maximum is reached that they are That they can still carry some load or the case That they react brittle so after reaching the maximum the capacity is gone. So this is the basic idea and Then there is for these cases Approaches how to calculate the probability of system failure if you have a duck tile system So that means we have some capacity After failure, then it's the sum of the resistances minus the system loading Rather simple straightforward and For the case that there's a brittle failure. 
so that the components have no post-peak capacity, we have a maximization operation which takes into account the number of components: there is an expression multiplied with the ordered realizations of the resistance, we take the maximum of this product, and then again subtract the system loading. This provides the probability of system failure if the components behave ideally brittle.

Structural system reliability with varying number of components: let's have a look at this. We just saw how a series system and a parallel system can be calculated, and, as special cases of a parallel system where we also consider the component behavior, ductile and brittle, we have the Daniels system modeling. The assumptions here are that all the components have the same reliability and that there is no correlation between the component failures; so I am adding components with the same reliability to the system one by one. This is actually from Gollwitzer and Rackwitz, who have been working with Daniels system modeling, to an extent which I do not show here. If we increase the number of components for an ideal series system, we see that the system reliability decreases. If you take an ideal elastic-brittle system, the probability of system failure, or the reliability, does not change largely.
It drops a little and then goes up a little. If we have an ideal parallel system, then the system reliability largely increases with the number of components, and for an ideal ductile system there is still a quite steep increase of the system reliability. So this is a summary of what we have just gone through.

Now we vary the failure correlation. Think of a chain: you have one case where there is no correlation, and one case where all the components are fully correlated. Correlation means there are dependencies: the components may be produced with the very same resistance properties, and the loading is fully correlated because I am taking the chain in my hands and pulling it straight. If I pull harder, the load goes up in all components; if I release a little, there will be less load, but again in all components; it does not vary from component to component. So the loading is fully correlated, and let's say the resistance is fully correlated; then the failure events are also fully correlated. That is one case.

The other case is that it is somehow random. Imagine this is a chain I am holding, but, depending on the number of components, let's say I have two components in my hand and there are eight components left, so eight of you are also grabbing the chain and making random movements; that will lead to a random load in the chain. We also assume that the production of the chain members was completely random, so the resistance is also distributed and not correlated.
So these are the two cases, and each of the components has the same reliability: the distributions of the loading and the resistance are the same, but they are not correlated. We cannot say that if I have a higher realization of the resistance in one component, there must be a higher realization of the loading in another component; that is randomness, uncorrelated probabilistic properties.

Taking these two examples, an uncorrelated chain with ten components and a fully correlated chain with ten components: for which system is the probability of failure higher? It is a series system, yes; we are in the context of a series system, and the question is, as I vary the correlation between the component failures, for which situation the probability of system failure is higher, the case of no correlation or the case of full correlation. Okay: for an ideal series system with no correlation we have a low reliability and a high probability of failure, and if we have a high correlation, we see that the probability of system failure is lower and the reliability is higher. It is the other way around for an ideal ductile Daniels system, and for the case of an ideal elastic-brittle Daniels system the system reliability first drops and then increases. What we also see here is that if all the components are fully correlated, then the system behaves basically like one component: the system reliability index here is equal to two, and two is the component reliability index.

So the system reliability depends on the type of the system, the reliability of the components, the number of components, and the dependencies.
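The two chain cases just discussed can be checked with a small simulation. This is a sketch with hypothetical numbers: each link is given the same marginal safety margin, normal with mean 2 and standard deviation 1, so the component reliability index is 2 as in the lecture's example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Chain (series system) of n links; each link has the same marginal
# safety margin M ~ N(2, 1), i.e. a component reliability index of 2
# (numbers are hypothetical).
n, N = 10, 1_000_000

# Case 1: no correlation, independent margins for each link.
M_indep = rng.normal(2.0, 1.0, size=(N, n))
pf_indep = np.mean(M_indep.min(axis=1) <= 0.0)

# Case 2: full correlation, one margin shared by all links, so the
# chain behaves like a single component.
M_corr = rng.normal(2.0, 1.0, size=N)
pf_corr = np.mean(M_corr <= 0.0)

print(pf_indep, pf_corr)
```

The uncorrelated chain fails roughly ten times as often, while the fully correlated chain reproduces the single-component probability of failure, which is exactly the behavior described above.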
These are the main influencing factors. Now let's think a little about system reliability. In nuclear power plants we have, for instance, a redundancy requirement of four for the emergency power generators. If we now think in terms of system reliability, what can this mean, and is such a redundancy requirement sufficient or not? I have seen cases where, with a redundancy requirement of four, the four emergency power generators are simply put side by side. Here, let's say the requirement is that they are in at least two different rooms, and identical generators have been put in one room and identical generators in the second room. What can we say about the power supply reliability? The power supply works if one of the generators produces power, so it is a parallel system. Exactly: basically we have one parallel system for each room, and within a room the generators are fully correlated, so the system reliability equals the component reliability, the reliability of one emergency generator. So the requirement of four emergency generators is formally fulfilled, but the system reliability is not that of a parallel system of four, because the identical units are correlated, and this somewhat compromises the safety. Of course it depends on the scenario, but if the scenario is flooding of a room, then the generators in that room are fully correlated; or if you think of deterioration, of aging of these power supply generators, it is the same: they can be fully correlated, because it may be the same aging mechanism which leads to failure of the power supply.

Okay. Failure is an event, but you can combine all kinds of events, either as an intersection or as a union, the intersection being the parallel system and the union the series system. Especially in the context of inspections, an observation that some value is larger or smaller than a
threshold, is also an event. If we have to represent the union or intersection of different events, we can use exactly the same system reliability formulations; we can use FORM analysis, and we can explicitly consider the correlation, because normally, when we have several observations, they are also correlated with each other, and that makes updating with observations a system reliability problem.

When we relate this to structures, it is important that a series system does not always have to look like a chain: it is any combination in which one component failure leads to system failure. A statically determinate truss, for instance, is a series system, while a redundant, statically indeterminate system has parallel characteristics, or can be a Daniels system. Thank you for switching to the next slide.

These are the most important points; now back to structural reliability and the updating of structural reliability. This is Bayes' theorem, written with events, and here also with the total probability theorem to calculate the probability of A, which is symbolized here. This is with discrete probabilities, and if you work with Bayesian networks and discrete probabilities, this is, so to say, the simple formulation. But we can also write, and we have seen this already with Daniel, that we can determine the posterior probability density function by multiplying the likelihood and the prior probability density function; the integral expression is basically there for normalizing the area under the posterior probability density function to one. So we don't have to solve this integral, or, as Daniel wrote it, there is a proportionality operator here.
So this posterior function is proportional to the product of likelihood and prior. This is Bayesian updating for continuous random variables. We can also formulate Bayesian updating for the parameters of random variables: we may have a random variable whose parameters, like the mean and the standard deviation of a normal distribution, themselves follow a distribution, and then we can update the distributions of these parameters; that is the formulation here. In short, what happens when we work with distributions: we may have a prior distribution like this and a likelihood with a rather low probability density. We could say the likelihood information is more imprecise, because its standard deviation is larger, while the prior information here has a lower standard deviation and is thus more precise. The posterior will then be close to the more precise information, in this case the prior. The other way around, if we have a prior which is rather wide, where the density is rather wide,
so low density and low information content, and a likelihood that has a low standard deviation, then again the posterior follows the more precise information, and this time it is the likelihood, not the prior. This is what we need to be aware of when we work with Bayesian updating. Sometimes we need to ask ourselves: are our measurements, our likelihoods, really comparable with our prior information, or is the prior information more comprehensive and we are just observing a special case? So there is also some thinking about the boundaries required when we update. Okay, we have numerical approaches for solving the updating, also the updating of events, and you should also know that there are analytical solutions if the prior and the posterior distribution belong to certain special families: that is the concept of conjugate priors, you find it in textbooks, and there can be a straightforward analytical formulation. When we look only at the updating, we take our NDT reliability modeling, the inspection modeling from Daniel's lecture: we can determine a probability-of-indication curve and describe the event of indication or no indication in dependency of the defect size. Then we can calculate conditional probabilities, and these conditional probabilities are calculated with Bayesian updating as it was written for events, or rewritten using the definition of the conditional probability of events, and this can be implemented in Monte Carlo simulation. We need to know what our indication refers to: a damage or a component failure. Then we can update the probability of damage with the inspection outcome, the indication outcome. And if you draw a decision tree, you should be aware that you can update both with a no-indication event and with an indication event.
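For the conjugate case, the textbook normal-normal formulas make the "posterior follows the more precise information" point explicit. The numbers below are assumptions for illustration:

```python
import math

# Normal-normal conjugate updating of a mean with known observation
# std.  Assumed numbers: a precise prior N(10, 1) and an imprecise
# measurement of 14 with std 3; the posterior mean stays close to
# the more precise source of information, here the prior.
def normal_update(mu0, s0, x_obs, s_obs):
    """Posterior (mean, std) of mu after one observation x_obs ~ N(mu, s_obs)."""
    w0, w = 1.0 / s0**2, 1.0 / s_obs**2   # precisions act as weights
    mu1 = (w0 * mu0 + w * x_obs) / (w0 + w)
    s1 = math.sqrt(1.0 / (w0 + w))
    return mu1, s1

mu1, s1 = normal_update(10.0, 1.0, 14.0, 3.0)
print(mu1, s1)   # mean 10.4: pulled only slightly toward the vague measurement
```

Swapping the two standard deviations (vague prior, precise measurement) pulls the posterior mean close to the measurement instead, which is exactly the symmetry discussed above.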
Both are possible; for damage and for failure, these are the options, and this is the example for updating the probability of damage with no-indication information. We may have a limit state function for damage with a model uncertainty relating to the damage resistance and a model uncertainty relating to the damage loading; that is an easy extension from r minus s to describe damage of a component. It is important here that we describe the event of no indication. We are in the context of measuring on a structure for which we have a prior model: we know the probability that there is a damage, and we also know the distribution of the damage size. That is why we calculate the probability of indication in this way, dependent on the realizations of the damage: we go in with the distribution of the damage size. I have it in the example; maybe there is a sheet left. Yes, here: this is the indication and this is the damage size D, so that is the probability of indication in dependency of D. The point is that we have a random variable corresponding to this D which goes into our structural reliability analysis; this random variable D has a density, and we need to go in with each realization of this density to calculate the probability of indication. That is basically this expression here, which then goes into the inverse CDF of the normal distribution, together with, I think, a uniformly distributed random variable; I thought it was somewhere there, maybe on another slide. So this is our limit state function for modeling this event, and it is dependent on our prior model; this is the important thing. And then, just a moment, yes: if this is smaller than or equal to zero, it is another Monte Carlo simulation.
You will then get the probability of no indication in this case. The derivation can be done at home, but is there a logical explanation behind it? Well, there is a derivation for getting to this formula, that is mathematics, and it is described more comprehensively in Hong (1997); you can just Google for that. The somewhat logical explanation is this: we have an indication event, we need a limit state function, and the probability of indication or no indication depends on the damage state of our structure. But we know the damage state of our structure: this is our prior model. That is the kind of logical explanation. Okay, thanks, Lidia. But let's discuss two points here. This curve is often referred to as the POD curve, although more precisely it models the probability of indication. Let's first discuss this point: what value do we have here, one? And why? Okay, that can be a reason, but let's assume we have an extremely large damage, and still the probability of indication of that damage is not one. Why? Yes, it can be the detection system, but it can also be the operation; it is often the operation, the human errors. Even if the detection system is reliable, there may be human errors which lead to the situation that the probability of indication is never exactly one. And how is such a curve derived?
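A crude Monte Carlo sketch of this updating with a no-indication outcome: the limit state, the PoD curve and all numbers below are illustrative assumptions, not the lecture's model. An auxiliary uniform random variable realizes the indication event, as in the limit-state formulation just discussed:

```python
import math
import random

# Assumed damage margin g = r - s with r ~ N(3, 1), s ~ N(1, 1);
# damage is the event g < 0, and the damage size d is taken as the
# exceedance -g.  Assumed PoD curve: PoD(d) = 1 - exp(-d / 2).
# An auxiliary uniform u decides indication: indication iff u <= PoD(d).
random.seed(2)
n = 400_000
n_dmg = n_noind = n_dmg_and_noind = 0

for _ in range(n):
    g = random.gauss(3, 1) - random.gauss(1, 1)
    d = max(-g, 0.0)                   # damage size (0 if undamaged)
    pod = 1.0 - math.exp(-d / 2.0)     # probability of indication given d
    no_ind = random.random() > pod
    if g < 0:
        n_dmg += 1
    if no_ind:
        n_noind += 1
        if g < 0:
            n_dmg_and_noind += 1

p_prior = n_dmg / n
p_post = n_dmg_and_noind / n_noind     # P(damage | no indication)
print(p_prior, p_post)                 # the no-indication outcome lowers it
```

The posterior is the conditional probability estimated by counting, exactly the definition of conditional probability for events applied inside the simulation.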
Yes, it is called round-robin testing: experiments, and there are even standards for that, especially in aviation. When there is a new method, there are tests in different laboratories with different teams using the same technology on blind test specimens: the teams do not know what to detect, they just apply the technology, and then it is counted who succeeded in detecting a given defect and who did not, for different defect sizes. This is well established, and it is very comprehensive in covering the uncertainties of the operators, of different teams and even different laboratories, but also specifically of the technology, since the measurement uncertainty is covered too. There is always a measurement process: with a strain gauge you are measuring a voltage difference caused by a resistance difference, and this is amplified, so it is an electrical process; you can model it probabilistically and extract the measurement uncertainty. For all other measurement processes there is similarly a conversion, often between electrical and mechanical units, so you can model the process, and every technology will have a variation in its outcome. So the uncertainty related to the measurement process, to the human performance, to the operation, all of this is quite comprehensively covered in the determination of the POD. Now, when we look at this point where D equals zero: what is the probability of indication given D equals zero? Yes, that is the probability of false alarm. Will this be zero? If we start the diagram at zero, we have the probability of detection given a defect, which is by definition the POD; and if we start at zero, we have also included here the probability of false alarm.
So there must be a finite value here; it should not be zero. Okay. The difference between this slide and this one is that there is an index S here, which refers to information I may have on system level, not on component level. For an inspection I have very local information, but I may also have information which refers to the complete system. Does anybody have an example? Dominic, when do we have the next coffee break? Okay, I think I may finish in ten minutes. And regarding the coffee breaks, I cannot keep this secret anymore: you can get real espresso here from this machine; you just have to ask for it. Okay, what did we talk about, information on system level? Right: something based on vibration information. There is a huge field in structural health monitoring, I am looking at Michael Todd here, working with vibration analysis and determining damage indicators. This is information we have on system level, here on structure and system level for a bridge: the vibration is caused by the complete structure. Any other ideas for information on system level? Pardon? Survival, yes, exactly, that is proof loading, super. So here we need to update on system level: we may update the system damage state, or it may also be the system failure event, but very careful modeling is required here. We need to put the system modeling, the logical system modeling from Daniel, together with the updating. And of course, for our indication event,
we would need to develop a limit state function like we had for the inspection. For proof loading it would rather be on the system level, and now not with the loading due to wind and waves, our environment, but with a rather certain loading, the proof load; there will obviously be only very low uncertainties related to the proof loading load. Okay, ten minutes. We can also exploit, for structural health monitoring, a property of the model uncertainties. For understanding: we are now leaving the indications, by proof loading, which is survival, or by an inspection, which may detect the damage, and we now think of model uncertainties. So what is a model uncertainty? Can anybody explain? Okay, yes, there may also be model uncertainties for probabilistic modeling, but let's change the context and say we have a finite element model predicting, let's say, the failure load. So our finite element model predicts the failure load of a beam; I can calculate for days, weeks, months, and find out that the failure load is maybe 10. How do I determine the model uncertainty? Yes, exactly: I do 10 or 20 experiments, and each of the experiments will lead not exactly to what I calculated, 10, but there will be a variation, and that comes from the assumptions, the simplifications of our models. So in order to account for the precision of our finite element model prediction, I can do experiments and quantify the precision of the model, so there will be a distribution. Very precise models may have an almost negligible model uncertainty, but usually, and especially for fatigue modeling, the model uncertainty is rather large. Okay, so I have a prediction and I have these 10 experiments.
Now I am taking one of these specimens, or I make 20 new specimens and consider just one of them. Does the distribution of the model uncertainty apply here, or can we play, so to say, a little trick? Do we understand a little more? So again: one prediction with the finite element model, 20 experiments, and the difference between the prediction and the observed distribution is captured by the model uncertainty. Now I cast just one more beam. What can we say about this beam? Is the distribution of the model uncertainty still applicable to this one beam? Yes, maybe that goes in the right direction: to trace the variation between the prediction and the experimental results, you would also need to modify the finite element predictions. Yes, this goes in the right direction. You could do some measurements on the one beam you are considering. And where will that lead? What will we observe, and how can we connect it with the model uncertainty? Okay, the answer is: the new beam will be a realization of the model uncertainty, like all the other beams were when we determined the distribution. It will be a realization, but we have to find out which realization; we don't know it, and that is why we can measure. And we may not measure as precisely as in our laboratory experiments.
It may be an in-situ beam; that is why we have here a measurement or SHM uncertainty. But we will find out the realization of the model uncertainty, because the procedure is very similar to determining the model uncertainties in the first place. And then we can exploit this property: thinking of all possible realizations and weighting them with the probability density, we are per definition at the expected value of the model uncertainty, though we may have some additional SHM uncertainties on top. Of course it need not be the case that we find the expected value, because there is a variation; it can basically be any realization, and we need to find out which one. But it is a way of predicting what we can expect from a measurement. This is rather crude modeling, but we can think of it like this: we could find out this realization, and then our component behaves as designed or even better. Or it may be the case that we find a high realization of the model uncertainty, say the loading model uncertainty, so there is an unexpectedly higher loading. It is still covered in our distribution, it can happen, and if we find it, we may do something like strengthening. Then we can distinguish between two ranges of realizations of model uncertainties and connect these to consequences, to costs, in the context of decision analysis. Okay, so this was the last point of modeling.
Yes, this is often very challenging, because in the experiment we are only able to measure the inputs of the models indirectly, not directly. Take your example of the finite element model: we want to determine the model uncertainty of the finite element model, but in order to do that we have to do a model calculation for given material properties and given loads. We would actually even have to represent the real spatial distribution of the material in the finite element model. The only thing we can do is take a little sample from the specimen and put this in as an input. So the difference between the experiment and your model prediction does not only include the uncertainty about the model, but also the uncertainty between the material measurement you have from your experimental specimen and the real properties everywhere in the specimen. You know what I mean? Yes, that's a big problem. Normally, when we make these experiments to find out the model uncertainty, for instance for these very nice finite element programs for plate buckling that we use for offshore platform design, it would be very interesting to know the model uncertainty, but how do we measure the material properties? We don't know them exactly, so the model uncertainty we can find out in these kinds of experiments includes both the model uncertainty and the material uncertainty. That is rather tricky. In the specifics of the modeling, the determination of the model uncertainties can indeed be a little more complicated than I just outlined: you can also have a probabilistic finite element model and predict the failure load with a distribution, and then you have the distribution from the experiments. But I understand your point.
It is very important that we are aware of the boundaries of the tests for the model uncertainties, and of reality. Okay, so to finish and get to the coffee break: we can have indications, and we then work with Bayesian updating, with likelihood functions or limit state functions which model our indication event, to update the structural reliability. We may update on component level, but there is also structural health information on system level, like proof loading, like vibration-based methods, damage indicators relying on a distributed sensor system. So we are also confronted with system reliability and the updating of the system reliability. And then there is the point of exploiting the model uncertainties, in the sense that we can think of a structure as a realization of a model uncertainty. Okay, thank you for your attention; I think we make a break and then we can continue our discussion, and I also have an example. So: we have been going through structural reliability analysis with different methods, FORM and Monte Carlo, and then we observed, okay, this is fine, but just for one component, and real-world systems have a lot of components. So on top of that we also need the structural system modeling, and this is what we have been building up in the last one and a half days. We have structural information, and we have an idea how we can model this structural information. One essence of the value of information analysis is that we couple our structural performance model and the SHM performance model, because the value of the information is in a way determined by how the information is used: either for getting better knowledge about the performance of a real-world system, which we describe with our models, and this could be in terms of risk reduction, or so that we can better plan our actions, so that we have to intervene less to make the system perform.
This can also be seen as better knowledge of our real-world system, related to the actions we are planning. So I have here a very simple system; I recognize some of you who may know this problem already, maybe one or two more. It is one of the simplest systems we can think of: it consists of two components, since one component would not be a system, but two is a system. So let's start with this. Our system performs as described here: we have a damage and the model uncertainty of the damage, and then we also have a damage resistance. We work with uncorrelated damage development and fully correlated model uncertainties, and the damage resistance is also fully correlated. This would represent a system that has been produced in a batch: there is a certain variation, but it is the same batch, and that is why it is correlated. The resistance is fully correlated, but the damage mechanism is not correlated at all, so damage can occur either here or here; it is not necessarily the case that it occurs to the same extent in both components, that is rather random. The first step is to design the structure so that we get hold of our component damage probability, and we work here with a target damage probability. A very important point here is to understand the connection of this problem with Jochen's presentation. Jochen presented a case where we have one parameter of the structural resistance, and in the design phase we optimize; that is a real decision analysis. Then there is reliability-based design, and in reliability-based design we work only with our probabilistic models.
We don't have consequence or utility models; instead we compare the reliability of the component performance, or of the system performance, with the target reliability, and here the decision analysis, as Jochen presented it and as you have been implementing it, is replaced by reliability-based design. This is always the case when you work with target reliabilities; you find them in Eurocode 0, or better described in the Probabilistic Model Code, Basis of Design, from the Joint Committee on Structural Safety. So here we circumvent, or replace, the decision analysis and optimization by accounting for the target reliability. For your understanding: you don't do an optimization and a decision analysis yourself, but the determination of the target reliability is a decision analysis in itself; the target reliability has basically been derived by economic and societal optimization for a large set of structures, which was behind the calibration of the Eurocodes. Okay, so we circumvent the decision analysis and calibrate the component probability of damage so that it equals one times ten to the power of minus two. We can then also calculate the probability of damage for the system. What is the system model? The statically determinate structure goes to the situation where one component can fail and then the structure is gone, so for failure it is a series system. But now we just have a damage model; it is still right to consider a series system, here for damage rather than failure. So if I model this as a series system, what do I find out, what situation do I model? What is the meaning?
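A Monte Carlo sketch of this two-component damage system can make the correlation structure concrete. The distribution types and all numbers below are my assumptions for illustration; the model uncertainty is shared (fully correlated) between the components, the damage loads are independent:

```python
import random

# Assumed component limit state g_i = x_m * r - s_i:
#   x_m ~ N(1, 0.1)  shared (fully correlated) model uncertainty,
#   r = 1.6          deterministic design value, tuned so that the
#                    component damage probability is near 1e-2,
#   s_i ~ N(1, 0.2)  independent damage loads per component.
random.seed(3)
n = 500_000
r = 1.6
n_c1 = n_sys = 0

for _ in range(n):
    xm = random.gauss(1, 0.1)          # one draw, used by both components
    d1 = xm * r - random.gauss(1, 0.2) < 0
    d2 = xm * r - random.gauss(1, 0.2) < 0
    if d1:
        n_c1 += 1
    if d1 or d2:                       # series system: damage anywhere counts
        n_sys += 1

pf_comp, pf_sys = n_c1 / n, n_sys / n
print(pf_comp, pf_sys)   # pf_comp < pf_sys < 2 * pf_comp (partial correlation)
```

Because the shared model uncertainty partially correlates the two limit states, the series-system damage probability lies strictly between the single-component value (full correlation) and twice that value (independence).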
The meaning is obvious: if I have damage in one component, I have system damage, so you basically model the occurrence probability of a damage in the system. If you have a larger system, like a wind turbine, and you take a series system, then you can model for instance the probability of occurrence of a fatigue damage for the complete system: in any of the hotspots a damage can develop, and the series system accounts for all the components. So we can model this as a series system. Then I would like you to derive a probability of indication in dependence of the damage size. You take as a basis the slides you have seen from Daniel. We have a signal distribution for the situation where there is a damage a, with this mean and this standard deviation of a normally distributed signal, and it changes with the damage size, which goes from zero to ten; please consider this range. Then there is the noise distribution: the noise is the situation where there is no damage but there is still a signal, and it is normally distributed with a mean of one and a standard deviation of 0.5. You set the threshold to 1.5, and then you can calculate the probability-of-indication curve. Can you also calculate the receiver operating characteristic? For the probability of indication, or what is often referred to as the probability of detection, you are changing the damage size. What is changed for the receiver operating characteristic? Yes, you compute the probability of detection and of false alarm, but what is the parameter? It is the threshold. You don't fix the threshold; if you calculate the ROC, you fix the damage size. So the ROC is calculated for one fixed damage size, and the threshold is varied. Okay, you are welcome to play around with this.
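A sketch of this task: the noise distribution N(1, 0.5) and the threshold 1.5 are from the slide, but the exact dependence of the signal mean and standard deviation on the damage size a is not reproduced here, so the linear forms below are assumptions (mean growing, standard deviation shrinking with a, as described later in the lecture):

```python
from statistics import NormalDist

# Given: noise ~ N(1, 0.5), threshold t = 1.5, damage size a in [0, 10].
# Assumed signal model: signal ~ N(1 + 0.5 * a, 0.5 - 0.04 * a).
t = 1.5
noise = NormalDist(1.0, 0.5)
pfa = 1.0 - noise.cdf(t)              # probability of false alarm

def poi(a):
    """Probability of indication for damage size a (assumed signal model)."""
    signal = NormalDist(1.0 + 0.5 * a, 0.5 - 0.04 * a)
    return 1.0 - signal.cdf(t)

print(pfa, poi(0.0), poi(5.0))        # the PoI curve starts at the PFA at a = 0

# ROC: fix ONE damage size and vary the threshold instead.
a_fix = 2.0
signal = NormalDist(1.0 + 0.5 * a_fix, 0.5 - 0.04 * a_fix)
roc = [(1.0 - noise.cdf(th), 1.0 - signal.cdf(th))
       for th in (0.5, 1.0, 1.5, 2.0, 2.5)]
print(roc)                            # (P_false_alarm, P_detection) pairs
```

The two loops make the distinction from the discussion explicit: the PoI curve varies the damage size at a fixed threshold, while the ROC varies the threshold at a fixed damage size.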
It is rather simple, but if you play around you can derive the ROC and the probability of indication, and the probability of false alarm is here, right, at zero. So now we have a structural performance model, in the sense that we model the damage and the damage probability of the components and the system, and now we also have the inspection performance. This is the simple part: an inspection can inspect component one, or component two, or both components, and you then basically implement the updating; this is the one slide from before. You could also monitor this component, doing damage load monitoring; that is the variable D, and this can be implemented rather straightforwardly, being aware of the boundaries, with the one slide I had, using the characteristics of the model uncertainty. Okay, now I stop talking and you start to work; feel free to ask anytime. You know how it works. Good point: for the ones who know this problem already, please think about how we can model the steel beam example Jochen presented, so that we somehow connect the modeling in this example to the steel beam example. This is the task for the ones who know this problem already. To give a little hint: we have one component and we have a probability of failure, but what we measure here is the damage. So what does the damage mean for the probability of failure? This is what you need to think about first. We have here a very generic damage model, basically like this one here, and how may this be related to the resistance?
That is the first step, and when we have this relation and can model it, then we can think of updating with information about the damage state, like a damage loading measurement and an inspection. Are you referring to the second part of the task or to this task? Please ask again. You update the component damage probabilities with inspection information, and you update the component probabilities with load-monitoring information. Yes, exactly. Maybe you can also help the others with this aspect, because you basically know the direction. I should put this on the file sharing, right. Maybe the most straightforward approach is to multiply something onto the resistance, like one minus a times t, t being the time and a the damage rate, which can be normally distributed; over time we then have a reduction of the resistance, which we don't know exactly. The problem is that if we want to keep it simple from the perspective of the reliability assessment, then maybe we don't use a multiplicative reduction of the strength but just subtract a times t. That is then simply a new term in the limit state, still normally distributed, so you can still use the straightforward formulation. Then it is the question how we connect our observation to a. So: we have a task from the lecture, the one you see on the slides; we have an alternative task for the ones who are familiar with this example, to work further and connect this example to the steel beam example; and there is point number three, as introduced by Jochen. Maybe it is a good point in time to introduce the report task, so that you already have an orientation on it.
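The additive variant just mentioned keeps everything normal, so the time-dependent damage probability even has a closed form. All distributions and numbers below are assumed for illustration:

```python
import math
from statistics import NormalDist

# Assumed linear limit state g(t) = r - s - a * t with independent
# normals r ~ N(5, 0.5), s ~ N(3, 0.4), degradation rate a ~ N(0.05, 0.02),
# t in years.  Because g stays normal, P(g < 0) is exact: no simulation
# and no FORM approximation error.
def p_damage(t):
    mu = 5.0 - 3.0 - 0.05 * t
    sigma = math.sqrt(0.5**2 + 0.4**2 + (0.02 * t) ** 2)
    return NormalDist().cdf(-mu / sigma)

for t in (0, 10, 20):
    print(t, p_damage(t))   # the damage probability grows with time
```

Connecting an observation to a then means updating the distribution of the rate a, after which the same closed form gives the posterior damage probability.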
This is what Jochen talked about. We would be very happy if you could use the methods you have learned here and apply them to a topic of your own interest, of your own research. This would be the ideal case, and then we would also help you to progress in your PhD research work; for us this is the ideal case, where we both profit most. So, to avoid misunderstanding, there are three different things: this task here, the connection between this example and the steel beam, and then the report task. The ideal report task is one you do on a topic of your own research, but in case you cannot find such a topic, we have an alternative task, and this is basically task number two, where we relate this example to the steel beam example. So if you cannot find something in your research where you can apply this, then you take the steel beam example from Jochen and this example as a basis, and you try to connect them. There are no constraints beyond scientific and good reasoning, and we would be happy to support your new thinking. You are free to choose the order and to select the methods you would like to work with, but for the report task it must be a value of information analysis. What we do here is just working with the reliability.
There is no decision scenario here. The report task is about the value of information analysis, utilizing the methods you have been learning here, but you start, like we started in the first lecture, with setting up a decision scenario, so that you have the structural performance, the structural information model, the consequences and the actions; you need all of these. I think it was good to talk about this already now; we will come back to it a few times tomorrow, and we will also help you to set up your case for the report task. Now, calculating the probability of indication in dependence on the damage size: on the slide with task two you have the signal distribution and the noise distribution, and the signal distribution depends on the damage size a. So for each damage size a you have a distribution, and you have a fixed threshold. If you integrate this part of the distribution, it is the probability of no indication given a, and if you integrate this part of the distribution, you calculate the probability of indication for the given damage size. This is written explicitly here, and of course no indication and indication are complementary events, so you can have it even simpler: you don't have to integrate twice.
Then you have a state where there is no damage, and there you have the noise: this is this distribution, and it is usually below the threshold. I think Daniel had it the other way around, because for his specific technology it was the other way around: a high voltage for the passive case and a low voltage for the active case. But the basic principle is the same; you need to understand your threshold, because the threshold defines whether there is damage or no damage, whether there should be an indication or no indication. Here, for this technology, we have the case that the signal, as you see from the distributions, increases: the mean of the signal increases for a larger damage, and the standard deviation decreases for a higher damage. And in the reference state, the undamaged state where a is zero, we simply have a fixed distribution of the signal.