There are three subjects in this next lecture. The first one is the modeling of uncertainties: how to do that within structural engineering, and what the typical models to be used are. That will mainly be an overview of what can be done, with some references to other publications where you can find more information. Then there will be something about the modeling of systems, including Daniels systems, but in general series and parallel systems: how they can be modeled and how we can estimate the probability of failure given a model. And then at the end, something about target reliabilities, that is, which reliability level we should design for. There are some considerations about that; it is not something that all agree on, so you will see different proposals, also for different applications. So those are the three parts of this lecture. And there is a lot of literature that you can look at, and most of it has already been distributed, I think in the email that was sent around. Some of it is more for later, but these are documents that can be relevant to look at if you want more background, especially for the uncertainty models.

Yes, so the first part will be on modeling of uncertainties related to loads and resistances. In general we have three groups of uncertainties that belong to our models. There is one related to loads, and loads can be wind load, snow load, wave load and so on, and they can be related to different types of structures: buildings, bridges, wind turbines and so on. So there will be some information related to that. Then we have the strengths, and we also have the model uncertainties, which are very important to model and to know something about. But it is also very important to note that model uncertainties are always related to a given model. It is not so that when you have one model for a model uncertainty you can use it for all models, for example for the buckling strength of a steel column; it depends on which model it is connected to, and that is important to remember.

In general we model all the uncertainties by a vector X, in the same way as Jochen did, and we have these four types of uncertainties that we generally need to consider and include in the model. It is important that we do not only include the physical uncertainty, because the other uncertainties can be quite important and of the same size as the physical uncertainty. And then I have written here at the bottom that what is not included in this is another aspect that is in fact maybe even more important, and that is human errors: gross errors made during design and during execution, and maybe also during operation. We know that most of the failures that really happen are due to human errors of some kind, but they are generally not included in the uncertainties that we model here when we calculate the probability of failure, so they have to be added or considered in another way. Typically what we do is that we have quality control regulations in the codes and in the building regulations that make sure we limit these human errors as much as possible. So in a way what we assume is that they are handled in other ways, but they are important, very important.

Okay, then a little bit about some general considerations on how to model the different uncertainties.
If you look at extreme loads from wind, snow, temperature and so on, then typically a Gumbel distribution is used, and if you look into the Eurocodes you will see that, maybe not written directly, but if you look at some of the equations in the wind, snow and temperature codes, a Gumbel distribution is what has been assumed. So that is generally used, and we also know from theory that a Gumbel distribution is a very good candidate for extreme loads. Of course there are other extreme value distributions that can be relevant to look at: there is also the generalized extreme value distribution, which is a generalization of the Gumbel distribution, and that can contain an upper bound on the loads that can occur. Then of course it can always be questioned whether there really is an upper bound on the wind speed; that is discussed quite a lot, but the Gumbel distribution is used very much. For waves the starting point is typically a Weibull distribution, but other, more general extreme value distributions can also be relevant to look at; in many cases a Weibull distribution is the one to be used.

What is typically done is that we specify the distribution function for the annual maximum load, so there is always a reference period connected to a distribution function for an extreme load. Sometimes we want the distribution function for the 50-year maximum load, and then what is very often assumed is that the annual maximum loads are statistically independent from year to year. Then we can easily obtain the distribution function for the 50-year or 100-year maximum load simply by taking the distribution function for the annual maximum load and raising it to the power of the number of years (a small sketch of this reference-period conversion is given at the end of this passage). If you consider only the physical uncertainty related to a load, you can make that assumption. But it may be worth mentioning that if you include model uncertainty, then the assumption of independence from year to year is only in very rare cases fulfilled, so you cannot use this simple relation, and you need to make more careful considerations.

Then we also have fatigue loads, and for those typically lognormal or Weibull distributions are used to model the loads, but I will come back a little more to that later.

Then we have material strengths, and there are three candidates, I would say, for distribution functions to be used for material strengths. One of them is the normal distribution, and it can be shown theoretically that if you can consider the strength of a material as a sum of a number of contributions, and you have enough of those and none of them is dominating, then the strength will be normally distributed. That is typically what you can assume for a ductile material, so there you could argue that a normal distribution would be a good candidate. But it has the drawback that you can get negative realizations from a normal distribution, and of course that has no meaning for a strength, which is why the normal distribution is almost never used, only in, you could say, crude calculations. Instead the lognormal distribution is the one that is used a lot, and theoretically you can show that that is what you would obtain if the strength can be considered as a product of a number of contributions: then theoretically you will get a lognormal distribution.
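A minimal sketch of that reference-period conversion, assuming independent annual maxima and an illustrative Gumbel model (the location and scale values below are made up for the example):

```python
# Sketch: converting an annual-maximum Gumbel distribution to a 50-year
# reference period under the independence assumption mentioned above.
import numpy as np
from scipy.stats import gumbel_r

T = 50
annual = gumbel_r(loc=28.0, scale=2.5)   # illustrative annual-maximum model

x = np.linspace(20.0, 60.0, 500)
F1 = annual.cdf(x)          # CDF of the annual maximum
FT = F1 ** T                # CDF of the T-year maximum: F_T(x) = F_1(x)^T

# For a Gumbel distribution this is again a Gumbel, shifted by scale*ln(T):
shifted = gumbel_r(loc=28.0 + 2.5 * np.log(T), scale=2.5)
assert np.allclose(FT, shifted.cdf(x))
```

So for the Gumbel case the conversion just shifts the distribution; for other distributions the power relation still holds, but the result is generally not of the same family.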
That product argument is maybe not always so easy to use, but the other argument, that you will never get negative realizations, is quite a good one. Then of course you also need to look at the data themselves to see if they really fit a lognormal distribution or a normal one, but the lognormal distribution is the one that is used almost everywhere in the Eurocode system, so the material strengths there are typically assumed to be lognormally distributed. You could also consider using a Weibull distribution for strength parameters: theoretically you will get a Weibull distribution if you have defects in the material and the strength depends on the largest defect. That is for example what you could expect for some ceramic materials, and also for the glass fiber used in wind turbine blades the Weibull distribution is, for some of the strength parameters, the theoretically correct distribution function to use, so that one is also relevant to look at. And then of course there are many, many more distribution functions that in principle can be used, but you will see that those mentioned here are typically the ones used within structural reliability.

If you want more information about all this, there is the Joint Committee on Structural Safety Probabilistic Model Code, which has three parts. There is a part related to basis of design, which is more or less what we are describing here from a theoretical point of view, but it also contains a lot of information about how to model loads and resistances from a probabilistic point of view. You can see there are many, many different loads and materials considered, and for most of them there is a probabilistic model that you can apply, and all of that you can find on the web page of the Joint Committee on Structural Safety, so that is a good place to look for more information. It should be said that this Probabilistic Model Code is being updated all the time, so of course it is not stationary; we get new knowledge every day, and that is put into it. There are also some loads and resistances that are not yet covered, but we hope that will be done in the near future; it takes quite a lot of time to develop those.

Okay, then an example, which I will use to illustrate some aspects of how to model strengths by a stochastic model. The example is related to timber, to some samples of Norway spruce, which in general are categorized or divided into different groups. There is one called LT20, which is the one considered here, that is strength class 20, and there are 194 data for that one, and for those I will show you some candidate distribution functions, stochastic models. If we just look at the data, we can calculate the mean value, we can calculate the coefficient of variation, which is 0.26, and the minimum and maximum values. Typically we are also interested in the characteristic value, which is the 5% quantile, and here we get 21.6 if we just look at the data without fitting a distribution function. Then fits have been made, using a normal distribution, a lognormal distribution, a two-parameter Weibull distribution and a three-parameter Weibull distribution, where the threshold has been chosen a little smaller than the data values.
Two types of fit have been made. One where we fit to all the data using the maximum likelihood method, as we saw yesterday how that works. But also another fit where we only fit to the tail, the lower tail, because that is the one that is really interesting to know something about for a reliability analysis; all the large values are without much interest, because failure will of course not occur there. So it is important to fit the tail, and if we have enough data, which we have in this case, then we can do that; here the 30% smallest data values have been used to fit the distribution functions (a small numerical sketch of such a tail fit is given at the end of this passage). The results are shown here, but it is maybe more interesting to look at the plots. If you take the normal distribution and fit it to all data, you get something like this, and you could say it looks quite nice; if you fit only to the lower tail, it fits a little better down here than what we have over here, but the fit at the top, for the large values, is not that good. That becomes even clearer for the lognormal distribution: the overall fit is in general okay, and we get quite a good fit for the lower tail and a very bad fit for the upper part, which is not so important, because we will never really be interested in the values up there when we are looking at structural reliability. You can do the same for the two Weibull distributions: we get quite a good fit for the lower tail if we use the tail fit, and an average fit if we take all the data. So I use this example just to illustrate that it is important to fit the lower tail; that is the one we are interested in, also for obtaining the 5% quantile, because that is the value that is specified if you want to do calculations based on codes using partial factors. For a reliability analysis it is the whole lower part that is of interest.

Then I could have spent a lot of time discussing how to model loads, because we have different loads and there are different stochastic models connected to them, which differ depending on the type of load, but I will only say a little about that, because we do not have much time. Permanent loads are constant in time and modeled by a normal distribution. For wind loads the basic time interval is a 10-minute period, where we have a mean value, and statistics are made based on that. If we look at snow load, then at least in some countries we can say that the snow load is constant within periods of around 14 days; at least that is the case in Denmark: it disappears again, and maybe some months later there will be a little snow again, so we can make a model based on that. If you look at imposed loads, they are typically divided into two groups. One group is called sustained load: that is when you have, for example, a storage room and you put something into that room; it will be there for some time, typically 5 to 10 years, and then you change to something else and get a new load, and so on. So there is always a load, but it changes with quite a long time interval. The other type of imposed load is transient loads: there you have a high load in a short time period, and after that no load until the next load arrives. That is typically when many people are together in a room: you will have a high load for maybe some hours, but sometimes up to some days.
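Going back to the tail-fit idea from the timber example, here is a minimal sketch, assuming synthetic lognormal data in place of the real 194 measurements; the fitting method used here (least squares on the empirical CDF of the lowest 30%) is one reasonable choice, not necessarily the exact procedure from the lecture:

```python
# Sketch of the two fitting strategies: maximum likelihood on all data
# versus a fit to the lower tail only. Data are synthetic (lognormal,
# mean ~29, COV ~0.26), standing in for the 194 bending-strength values.
import numpy as np
from scipy.stats import lognorm
from scipy.optimize import minimize

rng = np.random.default_rng(1)
cov = 0.26
zeta = np.sqrt(np.log(1 + cov**2))          # lognormal "sigma"
lam = np.log(29.0) - 0.5 * zeta**2          # lognormal "mu"
data = np.sort(rng.lognormal(lam, zeta, 194))

# 1) Maximum likelihood on all data (floc=0 keeps it two-parameter):
s_all, _, scale_all = lognorm.fit(data, floc=0)

# 2) Tail fit: least squares between model CDF and empirical CDF,
#    using only the smallest 30% of the observations.
k = int(0.3 * len(data))
p_emp = (np.arange(1, len(data) + 1) - 0.5) / len(data)

def tail_error(theta):
    s, scale = theta
    return np.sum((lognorm.cdf(data[:k], s, scale=scale) - p_emp[:k])**2)

res = minimize(tail_error, x0=[s_all, scale_all], method="Nelder-Mead")
s_tail, scale_tail = res.x

# Compare the 5% quantiles (the characteristic value in the codes):
print(lognorm.ppf(0.05, s_all, scale=scale_all),
      lognorm.ppf(0.05, s_tail, scale=scale_tail))
```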
There are statistics, or at least some proposals, for stochastic models related to these two kinds of imposed load. If we look at the mean wind speed, then typically there are two applications, two needs, for modeling wind speed. One is if you want to do fatigue calculations, where you need to account for all the wind load through the whole lifetime, and that is also what you need if you want to look at the energy produced by wind turbines, for example; there you simply use all the wind data over the whole year, the whole lifetime, and that is typically modeled by a Weibull distribution. Whereas when you are looking at the extreme wind, it is a Gumbel distribution.

And then I have some tables here, and they should be considered as proposals for how to model the different loads and materials that are typically used within structural reliability. There is a model here, an old one from 1999, from a Nordic project called SAKO. I will not go into detail with it, but what you can see is that typically the permanent load is normally distributed; the variable loads, that is the environmental loads and the imposed loads, are modeled by a Gumbel distribution; we have some strength parameters that are lognormally distributed; we have some geometrical uncertainties that are normally distributed; and here the model uncertainty is normally distributed. There are some numbers for the coefficient of variation, which is the really important number characterizing how uncertain a given parameter is. I show you these tables so you can get an impression of where to find information on these uncertainties, and some of them we are actually discussing now in relation to the revision of the Eurocodes and the calibration of partial factors.

There is another one here, taken from another project, which is related mainly to steel structures. That report is also in the references you have received, so you can take a closer look; it is just shown as an example of how this can be done. It also contains models for the uncertainties for wind load and for snow load, and here it is also shown that sometimes we have the stochastic model for a one-year reference period, the maximum load within one year, and sometimes for the maximum load within 50 years, and then of course we need to do the transformation from one year to 50 years, so that they are hopefully consistent. Here you can again see that the Gumbel distribution is the one used for the variable loads and the normal distribution for the permanent load, and for the material strengths the lognormal distribution is used, with some coefficients of variation, and also some model uncertainties as shown here, and geometrical uncertainties.

And there is also a recent model, where Jochen and I have been involved in calibration for the Eurocodes, so there is another model here which is not far from the others but a little different, also giving uncertainty models for the loads, for the strength parameters and for the model uncertainties. One thing that is important to mention: if we look at the model uncertainties, we can have this bias, which yesterday was called B. We can also consider that as a kind of hidden safety in the design equations that we use in the Eurocodes, and that bias is very important to include in these probabilistic models in order to have something that is consistent with the models that we use to calculate resistances and loads.
Without going into detail with that, I will just mention that this is an important thing to look at. In all these tables it is also shown how the characteristic value is defined. Although we are not looking here at how to calibrate partial factors, the definition of the characteristic value, by which the partial factor is multiplied or divided, is also very important to know, so that is also indicated in these tables. There is even more that I will not go into detail with, but you have the documents where this is described in more detail.

And then there is also one here which is not related to, you could say, the Eurocode system or buildings and bridges, but to wind turbines, which is another interesting application of the decision-making tools you learn about in this course, because there is really a lot that can be done within wind turbines. Partly because there you can do real economic optimization: the consequence of a failure typically will not influence human lives, and there is typically no pollution if you have a failure. So it is an area where cost optimization can be done, and you have a lot of possibilities to do measurements, and there is already a lot of information available from a wind turbine. So this is an interesting area, and I am working a lot with wind energy, so I have been involved in quite a lot of these aspects, also the probabilistic modeling of the loads. So there is a table here, and it is part of the background for the calibration of the partial factors to be used for wind turbines. It just shows that also within that area there are stochastic models that can be used, written in the background document, so they can be used as a starting point for reliability analysis. It follows more or less the same ideas as for buildings and bridges; there is not a big difference. The only difference is that a wind turbine is special in the way that it is a controlled machine, so there is also the possibility that we can have some errors in the control, faults, so that we are not able to control the wind turbine. That gives some additional failure modes to be considered, which can be quite interesting. So this is also for information.

And then finally a table here related to fatigue, because of course also for fatigue we need a stochastic model for the uncertainties. Typically we need a stochastic model for cases where we use the S-N approach together with Miner's rule to assess the fatigue life, and we need probabilistic models when we use fracture mechanics; both of those are included in this table, which is a table from the Joint Committee on Structural Safety Probabilistic Model Code.

One of the material parameters in here is always deterministic; is there a reason you don't model the m, I guess that's the slope exponent, is there a reason that's not modeled probabilistically?
Yes, you could do that, so you model both m and log K, which is the other parameter in the S-N curve, as stochastic, and then you make a fit. Typically what you will find is that they are highly correlated, very close to one, so that is one reason that you can in fact choose one of them deterministic and put all the uncertainty on the other one. Or you can account for the correlation; you could do that too. But this is typically what you will find, and you can say that when the slope is put deterministic, then maybe the uncertainty on the other parameter will be a little larger, because then you will not be able to fit the data quite as well. But this is typically what is done: the slope parameter is put deterministic. And you know, generally in regression analysis you get a high correlation between the slope and the intercept: of course you have this cloud of points, and if the intercept goes higher then the slope must also get higher, so these two end up with a correlation very close to one, maybe 0.99, and you can get all kinds of numerical problems, so there is really no need to model both (see the small illustration at the end of this passage). It is the same for the fracture mechanics model: when you look at the connection between delta K and the increase in crack size, you can have a linear or a bilinear curve, but there the exponent is also put deterministic. So there are a lot of numbers here that I am not going into in detail, but you have the information here. Any more questions? No? I think that is all good then.

The next part is related to systems, and this connects very much to what Jochen told you about before the break. We start with a number of components, and Jochen showed you how to calculate the probability of failure in principle for one component, where you have a limit state equation that can be written in this way: if this g value is negative, then you have failure. So you can say this is one component, and we can have a number of components, which can model real structural components but can also model other failure modes that are not necessarily structural components; part of what is coming in the next slides is more general than just structural applications, but most of it is structural. We have some stochastic variables, and then what can be important is the mechanical behavior of the components. We can have two principally different behaviors: a brittle behavior, which is the one shown here, and a ductile behavior, which is the one shown here. If you have a brittle failure, you completely lose the load-bearing capacity when the component fails, whereas if it is ductile you still have load-bearing capacity, and for parallel systems that is very important, as we will see in the following.

And basically we have two different kinds of systems. We can have a series system, like this chain here: if one of the components fails, then you have failure of the system, so that can be modeled as a system of this kind. Or we can have a parallel system: that could be a cable consisting of a number of wires, where if one of the wires fails we do not have failure of the whole cable; they all need to fail, so that can be shown by a model like this one. But real systems are typically a combination, and that is what is indicated here: we can have a number of failure modes, where each failure mode is modeled by a parallel system.
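Before continuing with the system models, here is a small numerical illustration of that regression point from the fatigue discussion, with synthetic fatigue data (the parameter values, stress-range interval and scatter are made up for the example):

```python
# In a regression of log N on log S (log N = log K - m log S), the
# estimated intercept (log K) and slope (m) come out almost perfectly
# correlated (|rho| close to 1; the sign depends on the parameterization).
import numpy as np

rng = np.random.default_rng(0)
log_S = rng.uniform(np.log(50.0), np.log(200.0), 40)   # stress ranges
m_true, logK_true = 3.0, 12.0
log_N = logK_true - m_true * log_S + rng.normal(0.0, 0.2, 40)

# Least-squares fit and the covariance of the estimated parameters:
A = np.column_stack([np.ones_like(log_S), -log_S])     # columns: [log K, m]
theta, res, *_ = np.linalg.lstsq(A, log_N, rcond=None)
sigma2 = res[0] / (len(log_N) - 2)                     # residual variance
cov = sigma2 * np.linalg.inv(A.T @ A)                  # parameter covariance
rho = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
print(theta, rho)                                      # rho comes out ~0.99+
```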
Each of the parallel systems here models in principle one failure mode, and there can be many of those, and if one of them fails then of course we have failure of the structure, so a more general model could be one like this. What we want to be able to do is to estimate the probability of failure for such a system.

One example of a series system is the statically determinate truss system shown here: if just one of the components fails, then we have failure, and failure can be yielding, buckling and other failure modes. Then we can write up an expression for how to obtain the probability of failure. Here is a quite long list of equations, but it is not so complicated. What we want to calculate is the probability of failure for the series system we are looking at, and we have failure if just one of the components fails: that means the probability that component one fails, or component two, or component three, so it is the union of the failure events of all the elements in the system, and that can be written in this way. Then we can introduce the limit state equation for each of the failure modes, and if we want to make calculations using the first order reliability method, as Jochen also showed, we can make a transformation to the u-space, where we have normalized stochastic variables that are normally distributed with mean value zero and standard deviation one. We can do that by a transformation which is here called T, and there are different possible transformations; the most general one is the so-called Rosenblatt transformation, by which you can transform any vector of stochastic variables X to the u-space. So we can now formulate the limit state equation in the u-space, and then we can apply the first order reliability method to find the design points and the associated reliability index, which is the shortest distance from the origin to the limit state surface. You can also find the alpha vector giving the sensitivity with respect to the stochastic variables, and if you linearize in the beta point you get this expression here, beta minus alpha transposed times u. So here we have introduced an approximation, and we still have a union over all the possible failure modes.

Then we can reformulate that, because the probability of failure is also the same as one minus the probability that the system does not fail, which is the probability that component 1 does not fail, and component 2 does not fail, and so on. So instead of having the union here, we have the intersection, because now they all have to be safe, and we have to change the inequality sign correspondingly, so that this expression here expresses the event that we do not have failure. This is the so-called De Morgan's law, and it is not that difficult to understand that it has to be like this. Then we reformulate a little, and since all of this is formulated in the standard normal space, we can express the probability we have here using the m-dimensional standard normal distribution function, instead of the one-dimensional one that we normally use. The input to that is the vector containing the reliability indices for each of the components, the ones we have obtained, and then we also need the correlation matrix.
That one can simply be obtained from the alpha vectors: if you take the dot product of the alpha vectors corresponding to the two components you are looking at, that gives the correlation coefficient between the two components, and that is what we need here. So in that way we can calculate the probability of failure of our system by this expression, but it is an approximation, because we have the step where we linearize in the beta point.

The next question is of course how we can calculate this m-dimensional normal distribution function; that cannot be done by hand, and you cannot find a table in a statistics book for it. But there is a quite good approximation, the Hohenbichler approximation, by which it can be calculated quite fast and quite accurately, and that is used in many of the computer programs for system reliability. So this can be calculated, and in that way we can obtain the system probability of failure.

And here is an illustration of the failure domain for a series system. In this case we have three limit states and two stochastic variables, so we are in the u-space with u1 and u2, and we have three failure modes: one here, one here and one here, three limit state equations. The failure domain is then the union of the failure domains of the three components, and that gives the probability of failure that you want to obtain. What we do is linearize in each of the components' beta points, so we get these linearized models of the failure domain, and the probability of failure we estimate is then the one for the linearized version.

We can also calculate bounds on the probability of failure, which is very useful and much, much faster than calculating the probability itself. There are simple bounds, and they are really simple, because the lower bound on the probability of failure is the maximum of the probabilities of failure of the components; that is of course a lower bound, because then we only look at one of the components, and we know the real probability of failure will be larger. And the upper bound is the sum of the probabilities of failure of the components. There are better bounds, the so-called Ditlevsen bounds, shown here; they are more complicated to calculate, but they are also much better. The difference between the simple bounds and the Ditlevsen bounds is that you now also take into account the joint probabilities of failure of pairs of components, so in addition to the component failure probabilities we also use these joint failure probabilities, and then it can be shown that we obtain these bounds here. I will not go into detail with how that can be done, but you will see in the example on the next slide that these bounds are very, very good in most situations, and they are not that difficult to calculate, although not so easy by hand, I would say.

The example is shown here: we have four limit state equations, shown in this figure with the first axis here and the second one here, u1 and u2, and the different failure modes here. Then you can do the calculations: for component one you will find the reliability index is 3.5, with the corresponding probability of failure and the coordinates of the beta point, and that you can do for each of the four components, so we have the probability of failure for each component, and the alpha vectors are put into the columns here.
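A minimal sketch of that series-system machinery, with illustrative betas and alpha vectors (not the numbers from the slide); the correlations are rho_ij = alpha_i · alpha_j, and the joint probabilities needed for the Ditlevsen bounds come from the bivariate normal distribution:

```python
# Series system: component betas + alpha vectors -> correlation matrix,
# simple bounds and Ditlevsen (second-order) bounds on Pf_sys.
import numpy as np
from scipy.stats import norm, multivariate_normal

betas = np.array([3.5, 3.6, 3.4, 3.7])          # illustrative values
alphas = np.array([[0.8, 0.6],
                   [0.6, 0.8],
                   [1.0, 0.0],
                   [0.7071, 0.7071]])           # unit vectors in u-space

rho = alphas @ alphas.T                         # rho_ij = alpha_i . alpha_j
pf = norm.cdf(-betas)                           # component probabilities

def pf2(i, j):
    # joint failure probability of components i and j (bivariate normal)
    cov = [[1.0, rho[i, j]], [rho[i, j], 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(
        [-betas[i], -betas[j]])

# Simple bounds: max pf_i <= Pf_sys <= sum pf_i
simple = (pf.max(), pf.sum())

# Ditlevsen bounds, components ordered by decreasing failure probability:
order = np.argsort(-pf)
p = pf[order]
m = len(p)
lower = p[0] + sum(
    max(p[i] - sum(pf2(order[i], order[j]) for j in range(i)), 0.0)
    for i in range(1, m))
upper = p.sum() - sum(
    max(pf2(order[i], order[j]) for j in range(i)) for i in range(1, m))
print(simple, (lower, upper))
```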
Then we can calculate the correlation between the components, simply by taking the dot products of the alpha vectors, and we get a correlation coefficient matrix like the one shown here, indicating how correlated the failure modes are. Then we can calculate the simple bounds, and here they are shown in terms of reliability indices. That means that if we calculate, for example, the upper bound on the probability of failure, then we can also calculate a corresponding reliability index, that is, find the reliability index that would give the same probability of failure. If that is done, we find the bounds here as 3.28 and 3.51, and the 3.51 is the one you can see here: that is the smallest of the component reliability indices, so that is the upper bound on the system reliability index, and the lower bound is then 3.28. But if we calculate the Ditlevsen bounds, we find that the bounds are 3.381 and 3.383, so very, very close, and much better than the simple bounds, but also more complicated to calculate.

Then there is another example here, related to the case where we have a series system of components that all have the same correlation rho between each other. There is a solution to that problem for obtaining the probability of failure, which we do not need to look at, but what is interesting here is this result, showing the probability of failure as a function of the correlation coefficient on the x-axis, for different numbers of components in the series system. If we choose a correlation coefficient of maybe 0.2, we can see of course that the probability of failure increases when we have more components; that is also what we would expect. But we also see that when we increase the correlation coefficient, the probability of failure goes down. So it is a good idea to have a high correlation between the elements in a series system, because then the reliability will increase, the probability of failure goes down, and the curves end at the probability of failure of one component if the elements are fully correlated, of course, because then if one has a high strength they all have a high strength, so they simply behave as one component. A little bit later you will see the completely opposite behavior for a parallel system, but I think this is a good picture to show the effect of the correlation between the components and the number of components for a series system.

Sometimes you can also be interested in doing sensitivity analysis, to find the sensitivity of the probability of failure, or the reliability of the system, with respect to a given parameter. That can be relevant if you want to do design and would like to see the effect on the system reliability of increasing one of the cross-sectional dimensions by, say, 1%. That ends in some slightly complicated expressions, but it can also be calculated: we can quite easily estimate the sensitivity of a component reliability index with respect to a parameter, there is a solution for that, and you can also do it for the system. I will not go into detail with that, but it can be done.

Instead let us continue with parallel systems. For a parallel system we need to take the mechanical behavior more into account; that was not important at all for the series system, where a component could be brittle or it could be ductile without any influence on the probability of failure that we calculated.
But that will be the case for a parallel system, so now we will look more at the effect of these aspects. A parallel system is simply what we obtain if we have a statically indeterminate system, because then more than one component needs to fail before we have failure of the structure; that could be something like this. But in general, if we have a structural system, one way to construct the parallel system is to see, one by one, what happens if a component fails. We could model a parallel system by saying that component one here corresponds to failure of the weakest element, and formulate a limit state equation using the whole system but looking at that specific component. Then we say that this component has failed, and we look at the rest, at the second weakest element, and formulate the limit state equation corresponding to that, but assuming that the first component has failed, and then we continue in that way. So we get a sequence of limit state equations corresponding to a sequence of failed components, such that at the end we have failure of the whole system. This is one way to formulate a parallel system for one failure mode. And then of course you do not always know which element is the weakest one, so you need to look at different combinations in order to at least try to find the most significant failure modes, and then include those in the whole system model, so that each failure mode is a parallel system, and the failure modes then need to be put into a series system model. Another failure mode could be that only one element fails and the whole structure fails; that is a really critical failure mode, and if we look at the Eurocode system, that would be a key element, because failure of that element gives failure of the whole structure. So in principle we can formulate a series system like this that we need to analyze.

Again we have models for brittle failure, that is the one here, and for ductile failure, here, but we can also have something in between, where we still have some load-bearing capacity after failure, and you will see that there are general models for that in connection with both parallel and series system modeling; that comes a little later in the presentation.

But first, two very simple examples to illustrate the principal behaviors of a ductile system and a brittle system. In the first one we look at a very simple system with n components in parallel, which are assumed to be ductile, and it is also assumed that they have the same correlation coefficient between any two components. That could be a fiber bundle with n fibers. We assume the strengths are the same for all components, normally distributed with a mean value mu and standard deviation sigma, and all pairs correlated with the same correlation coefficient rho. We assume the load is deterministic and shared equally among the components, so the total load can be written as n times s, where s is the load on one fiber. Then we can calculate the reliability index for one of the fibers: that is the mean value of the strength minus the deterministic load, divided by the standard deviation of the strength; we have no uncertainty on the load here, and that is why it becomes this simple expression.
Then we know, because it is assumed to be a ductile system, that the strength of the whole fiber bundle will be the sum of the strengths of the fibers, simply a summation of the individual strengths. That means the mean value of the bundle strength is n times the mean strength of each fiber, and the standard deviation, or more generally the variance, of the bundle strength can also be obtained. Here we need to take into account that we have correlation between the fibers, so it can be written with the covariance matrix, where we have the standard deviations squared in the diagonal, and outside the diagonal the standard deviations squared multiplied by the correlation coefficient. It ends in this expression here, where we have n times the variance of each fiber strength, corresponding to what we have in the diagonal, plus the correlation term: the number of elements outside the diagonal is n times (n minus 1), and each is multiplied by rho. That gives the total variance of the bundle strength, including the effect of the correlation. Now we can calculate the reliability index for the fiber bundle, for the whole system: that is the mean value minus the total load, divided by the standard deviation, the square root of the variance. Introducing the expressions we have here and doing a little calculation, we end up with this expression, which gives the reliability index of the parallel system, the fiber bundle, as a function of the reliability index of each of the components, the number of components n, and the correlation coefficient (a reconstruction of these expressions is given at the end of this passage).

That is what is shown in this figure, which is very illustrative for the behavior of a parallel system. Again we have the correlation coefficient on the x-axis, curves for the number of components going from 1 to 5 and 10, and the probability of failure on the y-axis. Of course, when we increase the number of components, the probability of failure goes down, and it goes down to a very small number: with 10 components in parallel and no correlation the probability of failure is very, very small. But when we increase the correlation, the probability of failure increases, and it goes up to the probability of failure of one component if they are all fully correlated; that is of course also what we would expect, but it is seen very clearly here. So it is a very good idea to have a parallel system with as low a correlation as possible, because that gives a high reliability, a very low probability of failure. That is also a consideration to keep in mind when looking at robustness of a system, because there you can really gain a lot if you can have something that acts as a parallel system with low correlation between the components; so if that is possible, it is a good idea. High correlation is not good, of course, because if one of the components is weak, has a low strength, then all the others will too if they are highly correlated.
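A reconstruction of those fiber-bundle expressions, under the stated assumptions (this is my reading of the slides, not a verbatim copy):

```latex
% n equally loaded ductile fibers, common mean mu_R and standard deviation
% sigma_R, equicorrelation rho, deterministic total load n*s shared equally:
\beta_1 = \frac{\mu_R - s}{\sigma_R},
\qquad
\mu_{\text{sys}} = n\,\mu_R,
\qquad
\sigma_{\text{sys}}^2 = n\,\sigma_R^2\bigl(1 + (n-1)\rho\bigr),
\qquad
\beta_{\text{sys}} = \frac{n\,\mu_R - n\,s}{\sigma_{\text{sys}}}
                   = \beta_1\sqrt{\frac{n}{1 + (n-1)\rho}} .
```

For rho = 1 this gives beta_sys = beta_1 (the bundle behaves as one fiber), and for rho = 0 it gives beta_1 times the square root of n, consistent with the behavior seen in the figure.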
Okay, so that was one example. One aspect, then, is the correlation between the stochastic variables: we have the stochastic variables X here, which can be a long vector of stochastic variables, and these can be correlated. If you have strengths in different components, they can be correlated for different reasons; if they come from the same producer, for example, you could expect some correlation between the strengths, so that if one of the strengths is large, the others will typically also be large. So there can be correlation between the stochastic variables in the X vector, and that is taken into account in the calculation here, because when we do the transformation into the u-space, that correlation is accounted for by the Cholesky factorization that Jochen also mentioned.

But then we also have the correlation between the failure modes, between the components. That of course includes the correlation between the stochastic variables, but it will also be influenced by the way we have formulated the limit state equations: if there are common stochastic variables in the limit state equations of different components, then the components will automatically be correlated, and in addition the stochastic variables can be correlated themselves, which adds to the correlation between the components. So the correlation between the components is what is accounted for by this correlation coefficient here, and that is not at all the same as the correlation between the stochastic variables, which is another input to the calculation; the way the limit state equations are formulated in itself gives a correlation. That can maybe be understood from this figure here: here we have the limit state equations, and if the components were all fully correlated, the linearized hyperplanes would all be parallel, parallel to this one here; and if they were fully independent, they would be perpendicular to each other. So it is due to what the limit state equations look like that we get that correlation.

So is that correlation then a consequence of the correlation between the stochastic variables?
No, the stochastic variables could be independent or they could be correlated; the way we have formulated the limit state equations, which stochastic variables they contain and how they are combined, will in itself give a correlation. If we have a common load, which is typically what we have, it is the same wind load acting on the building, and that will influence almost all failure modes, all limit state equations; just due to that common wind load there will be a correlation between the failure modes.

This diagram is a little simple because there are only two variables, so the only way to imagine two uncorrelated failure modes would be, for instance, one limit state that is totally horizontal and one that is totally vertical: the vertical one would only depend on u1 and the horizontal one only on u2, and then they would be totally independent. But as long as they are not perpendicular to each other, these limit states are correlated, and in a multidimensional space you of course have many more possibilities to get a correlation.

So I think the important thing to remember is that there are these two levels of correlation: something between the stochastic variables, and something that is related much more to the failure modes, due to common stochastic variables in the formulation. But all of that is in the way it is formulated here, so when you calculate this probability of failure, you account for both types of correlation. And this is not only for structural systems; it is general, also if the failure mode is something completely different, as you will probably see in the next lecture, at least tomorrow: these failure modes can also model other events.

So, I think we ended here, and now a Daniels system. A Daniels system is related to brittle failure, which means that if we have failure of a component, we completely lose its load-bearing capacity; that could be stability of a column, which would typically behave like that. Just to show how we can obtain a model for the system reliability, there is a very simple starting example here. If we assume that we know the strengths of the fibers, and we arrange them so that component one is the weakest, that is r1, component two is the second weakest, and so on, ordered in this way, then, because we have brittle failure, we can say that the strength of the whole system will be the largest of the following: n times r1, because they are all still there and we know the strength is at least equal to the weakest one, and there are n of those; or the weakest one has failed, so we have no load-bearing capacity included from that one, the second weakest is r2, and the strength will then be at least (n-1) times r2; or the two weakest have failed, and then it is (n-2) times r3; and so on, up to 2 times r_{n-1}, which is the second strongest, or r_n, which is the strongest one. So the system strength will be the largest of all these. A simple example is shown here: if you have three components with strengths 3, 7 and 10, then we can do the calculation, and 2 times 7 is the largest of the three possibilities, so that is the strength of that system if the components are brittle. And if the same type of argument is used when the strengths are not known deterministically, but are random and modeled by stochastic variables with a distribution function that we assume is known, then we can go further.
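A tiny sketch of that ordered-strength rule, reproducing the 3, 7, 10 example from the lecture:

```python
# Brittle (Daniels) bundle: with the fiber strengths sorted ascending,
# the system carries max over i of (n - i) * r_sorted[i] (0-based i).
def brittle_bundle_strength(strengths):
    r = sorted(strengths)
    n = len(r)
    return max((n - i) * r[i] for i in range(n))

# Three fibers with strengths 3, 7 and 10: max(3*3, 2*7, 1*10) = 14,
# i.e. 2 * 7 governs, as in the lecture example.
print(brittle_bundle_strength([3, 7, 10]))
```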
It can then be shown that the strength of the system will asymptotically be normally distributed, with a mean value mu_R and a standard deviation sigma_R calculated from these two expressions here: the mean value is n times a value r0 times one minus the distribution function evaluated at r0, where r0 is obtained as the maximum of this expression here, the strength times one minus the distribution function. You could ask why it should be like this, but the maximization we do here is simply the same type of maximization as in the deterministic case, generalized to a large number of components: the factors n, n-1 and so on correspond to the strength times one minus the distribution function, so finding the maximum of this expression corresponds to finding the maximum of the deterministic one. This maximizing strength r0 is then the one to be used to obtain the mean value, and in the same way you obtain the standard deviation squared, where this r0 value also has to be used (see the numerical sketch at the end of this passage). There is a simple example here: if you have a distribution function that looks like this, then r times one minus that function is what is shown in the figure below, and it has a clear maximum, as shown here, and that is the r0 value to be used. So this is a completely different type of consideration compared to the ductile system, and such a system is called a Daniels system.

Then of course this can be generalized to something more complicated, more general, and maybe much closer to what you could imagine being used. There is a paper by Gollwitzer and Rackwitz which is very good to look at to get a general idea about the behavior of a more general system. It considers a system where we have a number of components and a load here, and we assume we have, in a way, a very stiff beam here, so that we have the same load effect in each of the components, and each of the components can then be brittle, or ductile, or something more general. That is described in more detail in the paper, which is also one of the papers that you have received, so that is something for self-study. But there are two very interesting figures in that paper, this one and this one, and I will say a few words about those, because they show in a way the same as the two figures I showed before illustrating a series system and a parallel system. What is shown here is the number of components in the system, and on the y-axis the system reliability index, so it is not the probability of failure of the system but the system reliability index; they are of course closely connected. There are a number of curves shown, going from an ideal series system: if it is a series system and you do not take into account any mechanical behavior at all, you get this curve here, which is the same as the one I showed you before, so if you increase the number of components, the system reliability goes down. At the other extreme we have an ideal parallel system: if we increase the number of components, the reliability of the system goes up quite a lot. And then in between there are some curves showing the behavior if you now take the mechanical behavior into account.
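Going back to the asymptotic Daniels result above, here is a minimal numerical sketch, assuming an illustrative Weibull fiber-strength distribution (the parameters and the number of fibers are made up). The variance expression used is the standard asymptotic Daniels form, sigma_sys^2 = n r0^2 F(r0)(1 - F(r0)):

```python
# Asymptotic Daniels system: find r0 = argmax r * (1 - F(r)), then
#   mu_sys    = n * r0 * (1 - F(r0))
#   sigma_sys = r0 * sqrt(n * F(r0) * (1 - F(r0)))
import numpy as np
from scipy.stats import weibull_min
from scipy.optimize import minimize_scalar

F = weibull_min(c=4.0, scale=10.0)   # illustrative fiber-strength model
n = 100                              # number of fibers

res = minimize_scalar(lambda r: -r * F.sf(r),   # F.sf(r) = 1 - F(r)
                      bounds=(0.01, 30.0), method="bounded")
r0 = res.x
mu_sys = n * r0 * F.sf(r0)
sigma_sys = r0 * np.sqrt(n * F.cdf(r0) * F.sf(r0))
print(r0, mu_sys, sigma_sys)
```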
Back to the figure: there is an ideal ductile behavior, and an ideal elastic-brittle behavior, which is close to the brittle behavior I had before, and then there is something in between, where a medium brittleness is included, meaning that now a real stress-strain relationship is included, one that looks much more like what we normally see, and it is the ductility related to that which is expressed by these curves; we see that we then get something in between the two extremes. So that is related to the behavior of this system we have here. The other figure shows the effect of the correlation between the components: again we have the ideal ductile system here and the ideal series system here, but they all converge to the same point as the correlation goes to one, because then they behave like just one component. And we can see that in some situations there is a really large effect of the correlation on the system reliability; this is for a system with five components and an element reliability index of two. So without going into details, these figures also illustrate important aspects of series and parallel systems. At the end of the slides I have in fact added a number of additional slides giving some more information about this, some slides that Sebastian has provided for this presentation, but I am not going through those.

Instead I will say a little about how to obtain the probability of failure for a more general parallel system, where we do not explicitly take into account whether it is ductile or brittle, but assume that we have a number of limit state equations which can be derived taking the mechanical behavior into account. We simply say that one element fails, and then we see the consequence of that; it could be a ductile behavior of that element, or brittle, and that would influence the limit state equation for the next component to include, but all of that is assumed to be contained in the limit state equations here. Then we can set up an expression for the probability of failure of a parallel system in the same way as before, but now failure of the parallel system requires that component one fails, and component two, and component three, and so on: we have the intersection, an "and", between the failure events, and that can be written in this way. If we introduce the limit state equations and make the transformation into the u-space, we have this model here for the failure event, intersections of the failure events of the components, formulated in the u-space. If we show the limit states in the u-space, as done here with u1 and u2, one failure mode here and another failure mode here, then the failure domain for the parallel system is the intersection area here, and it is the probability of that domain we need to estimate. Of course we could use the same approach as for the series system: we can find the reliability index for each of the two components. It is not very clear here, but there is a beta point here; this is, for the blue limit state equation, the point which is closest to the origin, so the reliability index for this component is this distance here, and we can linearize in that point. And we can do the same for the red one: its beta point is here, and we can linearize there.
Then we can use these two linearized failure modes to obtain an approximation of the probability of failure for the parallel system; that corresponds to calculating the probability that we are within this domain here, the failure domain of the linearized expressions, and we end up with this expression for the probability of failure. It is still the m-dimensional normal distribution function, but now evaluated at minus the beta vector, still with the matrix accounting for the correlations in the same way as before. You can see it is different from the expression for the series system, where it was one minus the m-dimensional normal distribution function at plus the beta vector, with the correlation matrix as the second argument (a small numerical sketch follows at the end of this passage).

This is called a crude solution, because you can see that the approximation around this point here is not very good, and this point is the interesting one: it is the one that contributes most to the probability of failure, so we would like a very good approximation around it. What can be done to obtain that is, instead of linearizing in the beta point of each component, to linearize in the joint design point: the point where all the components are in failure, but which is closest to the origin. If we linearize in that point, we of course get a much better approximation of the real failure domain, but then we have to solve a more general optimization problem, because now we have to make sure that all components are in failure. That can also be done, and then you can calculate the probability of failure corresponding to this domain, which is what is shown on this slide, which I do not need to go into in detail. One thing you should realize is that not all components always need to be active: there could be a certain failure mode here, maybe something like this, that is not active, because it will always be in failure when you are at this point; so it is not all the components that are necessarily active.

Also for parallel systems we can obtain bounds on the probability of failure, basically by assuming that we have full correlation or no correlation. The lower bound is zero and the upper bound is the smallest of the component failure probabilities; those are general bounds, not very good, but simple. If we know that all the correlation coefficients are larger than zero, then instead of using zero as the lower bound we can use the product of the component failure probabilities, which simply corresponds to assuming that they are all independent; but that can only be used if the individual correlation coefficients are all positive. We can also obtain a second-order upper bound, but not something like the Ditlevsen bounds for a series system.

And then we have the general system again here: if that is formulated in the same way as before, we get this combination of unions and intersections, and in general that is what we want to be able to calculate. That is for sure not something that can be done by hand, and you would need to program for some hours before you would be able to calculate this union. But there are approximations implemented in computer programs: there is a program called PROBAN, from Det Norske Veritas, and there is also STRUREL, from RCP, which can do these calculations in an approximate way but quite accurately. We will not show you exactly how to use those, but there are commercially available programs that can do this.
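A small sketch of the first-order parallel-system estimate, Pf approximately Phi_m(-beta; rho), with two illustrative linear limit states; a crude Monte Carlo check on the same linearized system is included:

```python
# Parallel system, first order: Pf ~ Phi_m(-beta; rho), with rho from the
# alpha vectors. The betas and alphas below are illustrative.
import numpy as np
from scipy.stats import multivariate_normal

betas = np.array([2.0, 2.5])
alphas = np.array([[1.0, 0.0],
                   [0.6, 0.8]])                  # unit alpha vectors
rho = alphas @ alphas.T

pf_form = multivariate_normal(mean=np.zeros(2), cov=rho).cdf(-betas)

# Monte Carlo on the linearized limit states g_i(u) = beta_i - alpha_i . u:
rng = np.random.default_rng(0)
u = rng.standard_normal((2_000_000, 2))
g = betas - u @ alphas.T                         # g[k, i] for sample k
pf_mc = np.mean(np.all(g < 0.0, axis=1))         # all components fail
print(pf_form, pf_mc)
```

Note the sign difference from the series system, where the estimate was 1 minus Phi_m evaluated at plus beta with the same correlation matrix.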
Yes, sorry — yes? A question: about the graph before, where we had the FORM solution for the parallel system with two components — what if we have more components, is there then still a single point? Yes, there will still be a single point where all components are in failure which is closest to (0,0); there will always be such a closest point. We can imagine drawing the contour lines of the standard normal density function, which are circles around (0,0); the density becomes smaller and smaller as we move outwards, so the point in the failure domain which is closest to the origin is the point with the largest value of the density function, and it is the one that contributes most to the probability of failure. So it will still be this point which is the interesting one, and if you solve this general optimization problem you will find that point regardless of the number of components. But of course this first-order reliability approach is not always good, because it should be mentioned that the limit state is not always as nice as the one I have drawn here: it could look like this and go like that, but then go down again over here, because there is another type of critical behavior of the component in this area. So sometimes there is more than one beta point, more than one design point or local minimum, and that can happen; then you have to be careful. Typically it is not a problem, but it can be.
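Using the same distance minimization as in the earlier sketch, an invented limit state with two local beta points illustrates why a single optimization run can be misleading: depending on the starting point, the search finds one local minimum or the other, and only the smaller distance is the governing reliability index.

```python
import numpy as np
from scipy.optimize import minimize

# Invented limit state with two disjoint failure regions:
# g(u) <= 0 when u1 >= 3 or u1 <= -4, giving local beta points at
# u* = (3, 0) with beta = 3 and u* = (-4, 0) with beta = 4.
def g(u):
    return -(u[0] - 3.0) * (u[0] + 4.0)

for start in ([2.0, 1.0], [-3.0, 1.0]):
    res = minimize(lambda u: u @ u, x0=np.array(start),
                   constraints={"type": "eq", "fun": g})
    print(f"start {start}: u* = {np.round(res.x, 2)}, "
          f"beta = {np.linalg.norm(res.x):.2f}")
```

If the search only returns the beta = 4 branch, the failure probability is underestimated, which is exactly the danger pointed out above; a multi-start search is a simple safeguard.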
And then we are close to lunchtime, but I have a few slides on target reliability. That is in a way a different topic, but also very important, because when we have to do what we could call probabilistic design, we need a reliability index that we aim for, one that we consider to be sufficient, and there are some considerations about that; I will only give a very short introduction. One way to look at this is through the levels of decision making introduced in the ISO standard 2394, the ISO standard on general principles on reliability for structures. That is in fact a very good standard that you should take a look into, because it gives the basis for what we are doing, covering the most general way of making decisions using a risk-based approach, which is what we will learn much more about in the following lectures. That level is called risk-informed decision making; it is the most general one, where you take into account both the probability of failure of the structure and the consequences, basically the product of the two. Then we have the reliability-based decision-making approach, which we could call probabilistic design: you calculate the reliability and make sure that it is at a certain level, a target or minimum reliability level. And then there is the simplest one, the semi-probabilistic method, which is the same as the partial safety factor approach used for example in the Eurocodes. So we have these three levels of decision making: the simplest one is to use safety factors, the next one is to calculate reliability indices and make sure they are sufficiently high, and the most general one is to make a risk-based design where you also take the consequences directly into account. It is also so that you can use the risk-based approach to obtain the target reliability level that you need on level two, and you can use the reliability-based decision principles to find the partial safety factors to be used in, for example, the Eurocodes: we can calibrate the safety factors so that a given reliability level is obtained, using the methods on level two. So there is a connection between the three levels.

What this part is about is what can be said about the reliability level to be used at level two when doing probabilistic design, and you can also find information about that in the Joint Committee on Structural Safety Probabilistic Model Code. The most general considerations are shown here: there are basically two approaches, or two aspects, to be looked at. One is that you can formulate an optimization problem where you take all consequences into account, so that is really to use a risk-based approach, and that is the same as saying that you formulate a utility function, so it is a cost-benefit consideration. That is what is illustrated in this figure: if you have a decision parameter here, which could be a cross-sectional parameter, and you increase it, then the probability of failure goes down, so the expected losses decrease — the same kind of figure as Jochen showed earlier today — but you also have the cost of the structure, which becomes more and more expensive, so that curve typically goes like this. It is the sum of the two that is interesting, so the total cost is the curve shown here, and of course you are looking for the minimum of the total cost, and there is an associated probability of failure corresponding to that minimum. So you can say that this gives a target, or nominal, reliability level corresponding to the cost-optimal decision; that is one way to look at it. The other aspect that you need to take into account is the risk with respect to human life, and there the considerations on the life quality index and the marginal life-saving costs give a minimum value for the reliability level that must be satisfied in order to meet the requirements with respect to risk to human life. So that gives a minimum acceptable level, the one shown here, based on principles that I will not go through in detail, but you can find them in the documents that were sent to you. That means we have an acceptable region here, but within it we do not always have to go down to the minimum value, because of course we will choose the cost-optimal level if that is possible, and then we go for that reliability level. So there are these two aspects that are important to remember.
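A minimal sketch of the cost-optimization idea, with all numbers invented for illustration: the construction cost grows with a design parameter z, the expected failure loss falls as the reliability index β(z) grows, and the cost-optimal z implies a nominal target reliability.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Illustrative cost model (all numbers invented): z is a design
# parameter, e.g. a cross-section dimension, and beta grows with z.
def beta(z):
    return 2.0 + 1.5 * z                 # hypothetical reliability index

def total_cost(z):
    construction = 1.0 + 0.12 * z        # marginal cost of safety measure
    expected_loss = 100.0 * norm.cdf(-beta(z))   # failure cost times Pf
    return construction + expected_loss

res = minimize_scalar(total_cost, bounds=(0.0, 3.0), method="bounded")
z_opt = res.x
print(f"cost-optimal z = {z_opt:.2f}, "
      f"nominal target beta = {beta(z_opt):.2f}, "
      f"Pf = {norm.cdf(-beta(z_opt)):.1e}")
```

In this toy model the optimum lands around beta ≈ 3.5; in practice the ratio of failure cost to marginal safety cost drives where the optimum sits, which is why the code tables below condition on the relative cost of safety measure.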
I do have a question: how is this risk to life assessed? Yes, that is done — what you can do is find numbers for the value of a human life; that is quite delicate, and not something everyone agrees on, but the insurance companies do have some numbers. You can also rather formulate what it costs to save a human life, the marginal life-saving cost; the insurance numbers are not of much use for that. Another way to consider it is that if you are in a given country, you have a decision to make: should we improve the reliability of all structures so that the probability of structural failure is very low, or should we use the money on hospitals and schools and so on? So it depends a little bit on what the possibilities are in a given country; it is related to the gross national product, and that is what the life quality index is about: it makes this relation to the possibilities of a given country. So the target reliability level, or this minimum level here, is not the same in all countries, and it can differ quite a lot. I think it really is a decision whether to use the money on making all structures very safe, or to use some of the money on hospitals, schools and so on; one needs to make that decision, and that is what is hidden behind this lower level here. Within Europe there is not a big difference, but if you go outside Europe you can find larger differences.

What you can really find is tables like the ones shown here, which indicate what level of reliability can be used for the design of structures. There is a table from the Joint Committee on Structural Safety, which is also in the ISO standard 2394, giving the target reliability level connected to an annual probability of failure — so the reference period is one year — and it gives a reliability index of 4.2 if the consequences of failure are moderate and the so-called relative cost of safety measure is normal. So this table introduces two aspects in the choice of target reliability level. The first is the consequences of failure: it is clearly not the same if it is a house like this or a high-rise building, the consequences of failure of a column will be different, and that needs to be taken into account; if the consequences are larger, we need a higher reliability level, as you can see here, and these consequence classes are more or less the consequence classes we know from the Eurocodes. The other aspect is the relative cost of safety measure, which is related to how much it costs to make the structure a little bit safer: if that is very costly, you can accept a slightly lower reliability level, but if, on the other hand, it is very cheap to make the structure a little safer, then you should design for a higher reliability level, and it sounds reasonable that there is such a relation. One application of this consideration is existing structures, where we typically know that it is quite expensive to increase the reliability once the structure is already there; therefore, for existing structures we can typically accept a somewhat lower reliability than for a new structure, and this table, and the aspect of the relative cost of safety measure, can be used as an argument for doing so. Then there are two other tables here which are from the Eurocodes: if you look into the present Eurocodes, in Annex C and B, you will find tables showing that the reliability level corresponding to a one-year reference period for ultimate limit states is 4.7, and that is different from the 4.2 in the first table; that just illustrates that there is no common agreement about what exactly this target reliability level should be.
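The two targets translate into annual failure probabilities through Pf = Φ(-β); a quick check of the numbers quoted here:

```python
from scipy.stats import norm

# Annual failure probability corresponding to a target reliability index.
for beta in (4.2, 4.7):
    print(f"beta = {beta}: annual Pf = Phi(-beta) = {norm.cdf(-beta):.1e}")
# beta = 4.2 gives about 1.3e-5 per year, beta = 4.7 about 1.3e-6 per year,
# i.e. the two tables differ by roughly a factor of ten on Pf.
```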
But it is something in between 4.2 and 4.7, and then it depends on different arguments; there is a range, as you can see. It can also be given with a 50-year reference period: the Eurocodes then specify 3.8 as the reliability level, but that corresponds to the probability of failure within 50 years, not within one year, so of course the reliability index is smaller. That number has been derived from the annual one assuming independence between the failure events in the individual years, and that is not completely correct, because we know that, for example, the strengths will be the same for all years, and the model uncertainties will also typically be the same for all years, so the years are not independent; but that is what is behind these numbers. There are also some numbers for serviceability limit states over here, and those reliability levels are much smaller, of course, because the consequences of exceeding a serviceability limit state are much smaller. I have also included another slide here which brings in another aspect that is taken into account in some countries: we have the different consequence classes, here called safety classes, but we can also have different failure types, brittle and ductile failure, and of course that can also influence which reliability level to choose. In these tables, which are from an old report from a Nordic committee on reliability of structures, the failure type is taken into account: failure types 1, 2 and 3 correspond to brittle failure, ductile failure, and ductile failure with additional load-carrying capacity, as we have it for normal steel structures, and you can see that the influence of the failure type is that we jump by factors of 10 on the probability of failure, going one class up or down. So that can also be taken into account. And then this is maybe the final slide, just to show that we also have reliability levels for other types of structures: here it is for wind turbines, and the standard used for the design of wind turbines is the IEC standard 61400-1, which has just been updated, so there is an almost finished version of it, and in it you can find information about the reliability level to be used for the design of wind turbine components. It indicates that for wind turbine components, where the consequences of failure are much smaller than for a building or a bridge, the reliability level is also smaller: the target is a probability of failure of 5 × 10⁻⁴ per year, a reliability index of 3.3, and that is smaller than the numbers we saw before, but that is because the consequences are smaller, and it is based on more directly economic considerations. So it is just to illustrate that there are different reliability levels for different types of structures. Yes, that was what I had.
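Two of the quoted numbers can be checked directly: the 50-year Eurocode value follows from the annual one under the independence assumption just mentioned, and the wind turbine probability of 5 × 10⁻⁴ per year corresponds to the stated index of 3.3.

```python
from scipy.stats import norm

# 50-year value from the annual target, assuming independent years:
# Pf(50) = 1 - (1 - Pf(1))^50.
pf_1 = norm.cdf(-4.7)
pf_50 = 1.0 - (1.0 - pf_1) ** 50
print(f"beta_1 = 4.7 -> Pf(50 yr) = {pf_50:.1e}, "
      f"beta_50 = {-norm.ppf(pf_50):.2f}")          # about 3.8

# Wind turbine target: annual Pf of 5e-4.
print(f"Pf = 5e-4 -> beta = {-norm.ppf(5e-4):.2f}")  # about 3.3
```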
Can I have a question on the previous slide — can you go back one, to that one? In the last table we have a limit state related to serviceability, but at the top the bullet point says the target reliability is for ultimate limit states? Yes — the first bullet refers to the target reliability for ultimate limit states, and that is the first table; the second table is for serviceability limit states, and it says that for serviceability limit states you can also define a reliability index. Normally we talk about this only for ultimate limit states, for failure, but in serviceability you can have, for example, a crack — for bridges it is often mainly serviceability that governs — and those values you can find in a table as well. And here it is written that it is for an irreversible serviceability limit state; if it is reversible, it would be different. So there are these levels also for situations where you have, say, a crack, which is not a failure of the structure but mainly a serviceability issue, and that is exactly what we can see here; it is of course still important, because the consequence in the end could be large — yes, if you do not control such serviceability problems, you could perhaps end up with a failure. Yes.

I have a question about existing structures. I know it is a huge topic, but say we have a structure that has been standing for 100 years and we want to assess it with the Eurocode system; we can determine the material properties with more certainty — they are probably higher than the 5% characteristic values assumed in design — so if we are doing the assessment of the existing structure, what is your recommendation on how to approach this problem? First I would say that there is a new Eurocode under development related to existing structures — not a real Eurocode, but a so-called technical specification — and it gives some hints about what you can do for an existing structure. What you can do is a reliability analysis where you update the knowledge on the stochastic variables with the information you have obtained. So if you start with some of the tables I had in the beginning, with stochastic models for the strength parameters — they are general models, you can say — then, if you have specific knowledge for a given structure, you can update the model to take that into account, for example by a Bayesian approach, or you can simply use statistics if you have enough data from the structure, so that you have better stochastic models for the strength, and then you can estimate the reliability based on those. You can also take into account that the structure has survived until now. You might think this is very good information, but then you need to know which loads it has been exposed to, and you will find that you need really high loads before you get a real benefit from it; but it is something that can be taken into account. And if you use a safety factor approach, then it is also possible — at least that is indicated in this new Eurocode — to modify the partial safety factors if you know that the uncertainty is lower than what was originally used when the safety factors were derived; of course it can also go the other way, so that if you find the uncertainty is larger, you need to increase the safety factors. But in general you can use a reliability-based approach by simply updating based on the information you have, and that can be related to each of the stochastic variables, but it can also be events, like the fact that no failure has occurred. There is also, from the Joint Committee on Structural Safety, a guideline or recommendation on what to do with existing structures, as well as other publications, and there are a number of papers and reports on the assessment of existing structures. So that is the most general answer I can give; this is also an area where there is a lot of discussion about the reliability level to require for an existing structure. Yes.
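As a final sketch of the Bayesian updating mentioned in the answer — all numbers invented — here a lognormal strength has the mean of its log-value updated with a conjugate normal step from a few tests on the existing structure, giving an updated characteristic value:

```python
import numpy as np
from scipy.stats import norm

# Prior on the mean of ln R, from a generic code model (numbers invented).
mu_0, tau_0 = np.log(30.0), 0.10   # prior mean and std of the ln-strength mean
sigma = 0.15                       # assumed known std of ln R

# Hypothetical test results from the existing structure (MPa).
tests = np.array([36.0, 38.0, 34.0, 37.0])
y = np.log(tests)
n = len(y)

# Conjugate normal-normal update of the mean of ln R (known sigma).
tau_n2 = 1.0 / (1.0 / tau_0**2 + n / sigma**2)
mu_n = tau_n2 * (mu_0 / tau_0**2 + y.sum() / sigma**2)

# Posterior predictive of ln R is normal(mu_n, sigma^2 + tau_n^2).
sd_pred = np.sqrt(sigma**2 + tau_n2)

# Updated 5% characteristic value of the strength.
r_k = np.exp(mu_n + norm.ppf(0.05) * sd_pred)
r_k_prior = np.exp(mu_0 + norm.ppf(0.05) * np.sqrt(sigma**2 + tau_0**2))
print(f"characteristic strength: prior {r_k_prior:.1f} MPa, "
      f"updated {r_k:.1f} MPa")
```

With these invented numbers the characteristic value moves up by a few MPa, which is the kind of gain the updating of partial safety factors or reliability analysis for an existing structure would then build on.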