Today we will talk about something that is much closer to what you do as a classical structural engineer: structural integrity and structural reliability. So let's start with something everybody knows. This is a simple, simply supported beam with a span L and a load in the centre; let's call this load G. This is something we know, and now we want to elaborate on this beam: we want to calculate the probability of failure of such a beam, and for that we have to develop a limit state function. This is a very central concept, and the limit state function for such a beam, given it has a rectangular cross section, looks like this: we have the moment resistance of the beam and we have the load effect, both expressed in units of moments. And this is important: of course we make this comparison in the same unit. We also formulate the limit state function so that it contains a decision variable. The decision variable in this case is the height of the beam, as we would normally choose in practice; as you will see, it is much more effective to change the height of the beam in order to increase the moment resistance. Then we compare these two quantities, resistance and load effect, and we can start with our elaborations. This lecture is about how to solve this problem: how to compute the failure probability based on such a limit state function. (A sketch of what such a limit state function might look like is given at the end of this passage.) You can imagine this is a very simple example; practical engineering problems are normally more complicated, but in order to illustrate the concepts this is absolutely sufficient.

So we have to deal with uncertainties. If we knew all the properties in this limit state function exactly, then it would just be a matter of deciding whether, for the problem at hand, the limit state function is positive or negative: when it becomes negative we have failure, and when it is positive we have survival. If we knew everything, this would of course be no problem. Which variables would you say are uncertain in a typical problem here, which properties do we not know exactly? The span? The span may be something we do not know to the millimetre or the tenth of a millimetre, but we can control it, just as we can control the height and the width of the beam. But which ones then? The material property and the load, right. The load here is just a person, and the weight of the person we of course do not know when we do not know the person; and even if you knew it was me, you would not know exactly how much I weigh. That is the uncertainty. We have uncertain properties. Under certain assumptions we could assume that the geometry is known, but the load and the material property are unknown; the material property could for instance be the bending strength of a timber material. So we have to deal with these uncertainties; we actually have to integrate over random variables in order to solve this problem, and that is the link to yesterday.

And now just a little historical view: this is also a very young science compared to other things we do in our daily life. It was first introduced in 1926 by a German engineer, but he was widely ignored by the engineering community, and the topic only came to the surface of the general discourse at the end of the 60s.
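Coming back to the beam: as a hedged illustration of what such a limit state function might look like, assume a simply supported beam of span L with a point load G at midspan and a rectangular cross section of width b and height h, made of a material with bending strength f_m (these symbols are assumed notation; the lecture only states the structure of the function in words):

g(h) = \frac{b\,h^2}{6}\, f_m \;-\; \frac{G\,L}{4}

where the first term is the moment resistance, the second the midspan load-effect moment, and failure corresponds to g(h) \le 0.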
So this is a rather fresh topic, though of course not that fresh anymore. There was also a Swedish engineer at about the same time; the two did not communicate, but he wrote a paper arguing that we actually have to minimize cost when we construct a building, that we have to find the balance between the investment into more safety and the risk that the structure induces. In our example, the cost of more safety is related to the material we use when we increase the height h. So we have a clear relationship between the choice of the height of the beam and the reliability: the beam becomes safer when it becomes higher, and therefore we have an interrelationship, a cost to pay in order to reduce the risk, and we have to find the balance.

That is again the example: the decision variable is the height h, and now it is about finding an h that we find satisfying. So the search for the right h is not only a problem of structural reliability; it is actually a problem of how we can optimally invest our money into the structure. The reliability itself is only an intermediate result that, seen on its own, does not have such a big importance. How should we judge whether a yearly failure probability of 10 to the power of minus 4 is too high or too low? We have to look at what it means cost-wise. So we can use this limit state function and find a way to calculate the failure probability in order to solve such an optimization problem. That is the objective. This is for the design of a new structure; when we do structural health monitoring we construct, from the same principles, a much more complex objective function where we also want to optimize costs. But here it is only about finding an optimal height, that is our decision parameter, where we balance the expected costs, that is the risk. If we choose a higher beam, we reduce the risk, that is the blue line, but at the same time we invest more in material, that is the green line. And there we find an optimum where both cost components are in balance. That is what we actually want to find. But in order to do that, because we have to scale the damage costs, the failure costs, by their probability so that we obtain the expected failure costs, we have to calculate the failure probability.

This, by the way, is not the main focus of today, but it is the definition of risk: when certain events can occur, risk is defined as the sum over the risk contributions of these different events, that is, the sum over the probability that an event happens times its consequences. That is the definition of risk, and that is also what is called expected costs, expected failure costs, here in the graph.

So, to conclude, we have to look at an objective function, and a very central part of this objective function is the probability of failure, as we see here. This is the objective function expressing the optimization problem (written out in symbols after this passage): we have the expected benefit, being the benefit of the structure times the probability that the structure survives, minus the cost of the design, which is what we invest in the structure, minus the expected failure costs, that is the failure consequences times the failure probability. So the failure probability appears here twice; it is very important to know the failure probability in order to find this optimum. We find the optimum by taking the derivative with respect to the decision variable, the investment into safety. That is the optimization problem. We now leave this problem and concentrate on how to solve the probability of failure.
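For reference, the objective function just described can be written compactly (the symbols are assumed notation, following the verbal description above):

Z(h) = b\,\big[1 - P_f(h)\big] \;-\; C(h) \;-\; C_F\, P_f(h),
\qquad
\text{Risk} = \sum_i p_i\, c_i

where b is the benefit of the structure, C(h) the design (construction) cost, C_F the failure consequences, P_f(h) the failure probability for a chosen height h, and p_i, c_i the probability and consequence of event i.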
So, I hope you have seen this before: calculating the probability of failure for an academic, classroom problem, where we assume both the resistance and the load to be normally distributed and independent. Those are the two conditions we need. Then we can construct a rather easy and straightforward scheme. Based on these two normally distributed random variables we can define a new normally distributed random variable, the safety margin M, and we have to work out the mean value and the standard deviation of this safety margin M. We can then calculate the probability that R minus S takes a value smaller than or equal to zero, which is equivalent to the event that M takes a value smaller than or equal to zero. With the mean value and the standard deviation of the safety margin inserted into the standard normal distribution function, we express the non-exceedance probability of zero for a normal distribution with mean mu_M and standard deviation sigma_M. And this is already a very interesting result: the argument is minus the mean value of the safety margin divided by the standard deviation of the safety margin, and this is equal to minus beta. So beta is defined as the mean value of the safety margin divided by its standard deviation. That is the definition of beta in this context.

Now we look at a very, very simple problem. Who of you has seen this before? Very good, not everybody. So, back to the example here on the board: we have a limit state function where we assume that R and S are independent, normally distributed random variables with known statistical characteristics, so we write S and R as normally distributed, each with a mean value and a standard deviation. Then we can first draw a large graph to get an illustrative idea of how the problem looks. The resistance tends to be a little bit higher than the load, that is nice; then we can construct the safety margin with its mean value and standard deviation, and the probability of failure is this little part below zero. It is the part integrated from minus infinity to zero, and for the normal distribution this is expressed with the standard normal distribution function evaluated at zero, standardized.

The reliability index also has a geometrical interpretation: if we take the density function of the safety margin, the beta index indicates how many times the standard deviation of the safety margin fits between the mean value and zero. So if you have a beta value of four, which is more or less the range where we operate for structural failure, you can put four times the standard deviation of the safety margin between zero and the mean value. In reality, once you have a little bit of experience, you see that for a beta index of around four this curve is far away from zero: you can put four times the standard deviation between the mean value and zero, and you get yearly failure probabilities in the range where we normally operate.
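Collected in one place, the relations just described are:

M = R - S, \qquad
\mu_M = \mu_R - \mu_S, \qquad
\sigma_M = \sqrt{\sigma_R^2 + \sigma_S^2}

P_f = P(M \le 0) = \Phi\!\left(\frac{0 - \mu_M}{\sigma_M}\right) = \Phi(-\beta),
\qquad
\beta = \frac{\mu_M}{\sigma_M}

valid under the stated assumptions that R and S are independent and normally distributed; \Phi is the standard normal distribution function.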
So now we put some numbers on this example; it is really meant as an introduction. We have a timber bending strength with typical values: a mean of 40.8 newtons per millimetre squared, and a standard deviation corresponding to a 25% coefficient of variation. On the other side we look at the load, the force induced by the weight of a human body, and we assume a mean value of 0.18 and a standard deviation of 0.15. We have a span of 8 metres, and the properties of the cross section, 10 by 10 centimetres. Then we can calculate the probability of failure following the scheme: we have the mean value and the standard deviation of R and of S, we compute the mean value and the standard deviation of M, and having those we can compute beta, which comes out equal to 3.

Now we can do the following: we measure, we weigh the person that is standing on the beam. Say this person induces a force of 0.85 kilonewtons, meaning approximately 85 kilos. Now we know that, so we have introduced information into the problem, and let's recalculate. With 0.85 kilonewtons, the mean value of the load effect, of the induced moment, becomes 1.7, right? 1.7, and its standard deviation is 0, because we know it. So memorize the 1.7; on the resistance side we do not change anything. The standard deviation of the safety margin now becomes much easier, because it is simply the standard deviation of the resistance, right? So here we no longer have 1.73, we have 1.7. And now we look at 6.8 minus 1.7, that is 5.1, so we get 5.1 divided by 1.7; calculate this... 3, yes, very good, you did that in your head. So 3. That is interesting, that is an interesting result, right?
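For reference, the arithmetic of this recalculation with the measured load (a worked restatement of the numbers above; the section modulus expression b h^2 / 6 is an assumption consistent with the 10 by 10 cm rectangular section):

\mu_R = 40.8 \cdot \frac{100 \cdot 100^2}{6}\ \text{Nmm} \approx 6.8\ \text{kNm},
\qquad
\sigma_R = 0.25 \cdot 6.8 = 1.7\ \text{kNm}

S = \frac{G\,L}{4} = \frac{0.85 \cdot 8}{4} = 1.7\ \text{kNm} \ (\text{known, so } \sigma_S = 0)

\beta = \frac{\mu_R - S}{\sigma_R} = \frac{6.8 - 1.7}{1.7} = 3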
So the reliability becomes only slightly lower, even though we know that the person we put on the beam is heavier than the mean. We have a person heavier than the mean, but we know the weight exactly, so we have introduced information, and the reliability remains essentially the same. That is due to the fact that these two probabilities of failure are conditional on different sets of information, which is exactly what we tried to discuss yesterday. At the same time we could, for instance, measure the stiffness of the beam, and of course the stiffness of the beam is correlated to the strength of the beam, so we could update our model for the resistance: if we have a very stiff beam, we can somehow conclude that the strength is maybe also higher than the average. So we can introduce new information and we get a different probability of failure, a different reliability index. We get two different reliability indices for the same beam, because we have two different sets of information. That is the lesson learned from this very simple example: the reliability index, or the probability of failure, is not a property of the beam; it is a property of what we put into our analysis, of what we know about the beam. That is very important, and that is the entire principle of the decision theory that comes later today.

Now, in the real world, in most problems we can still somehow classify things into resistance and load, but the resistance and the load are very often functions, more sophisticated functions, of other basic random variables. The capital X stands for a vector of random variables that we have to consider. So in general it is not sufficient to look at it as simply as we just did; in general we formulate a limit state function, traditionally called g of a vector of random variables, and we study the event that this g becomes smaller than zero, which is how we define failure. When we have formulated this more general, more generic limit state function, we can look at the two-dimensional case, because it is still convenient to draw an illustration of it. So here we now have a graph with the load and the resistance as axes; that is a different way of illustrating the problem. We still have two variables, and we consider the joint probability density function of these two variables. So we look at the joint probability density function: the bell-shaped curves from yesterday, but now a two-dimensional density function, so it is a hill, like the mountains we have around here. We have this hill, and now we draw our limit state. The limit state in our case was r minus s equal to zero, which is the same as r equal to s, so in this diagram it is just a 45 degree line. This 45 degree line is used as the limit that separates the plane under this density function into a domain where we have failure, on one side, and a domain where we do not have failure, on the other. In more mathematical terms this can be expressed as follows: we have the joint probability density function of our variables, in this example r and s, and in a more general problem it could be 10 different variables, in which case it would be very hard to draw; and we integrate this density function over the domain where the limit state function becomes smaller than or equal to zero.
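In symbols, the quantity just described is (standard notation, matching the verbal description):

P_f = P\!\left(g(\mathbf{X}) \le 0\right) = \int_{g(\mathbf{x}) \le 0} f_{\mathbf{X}}(\mathbf{x})\, d\mathbf{x}

where f_X is the joint probability density function of the basic variables (here R and S) and the integration runs over the failure domain.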
This is exactly what we do here when we draw this line and integrate on that side. In this two-dimensional case we have to find the volume that lies in the failure domain; that is this integral. The structural reliability methods we look at today are essentially just two strategies for solving this integral.

This is a nice illustration from my time in Zurich; in the group there we produced a graph like this to illustrate what the problem is about. In more sophisticated problems, you see here the contour lines of the hill, and this hill is now not built from two normal distributions, it could be other distributions, so it gets a little bit skewed. And what we also have here is a non-linear limit state function, which is also very typical for more sophisticated problems. So now we cut a piece from this cake, and this piece is the solution of the integral. We could of course use numerical integration; it is always used by people who do not want to spend time on more appropriate methods, they just integrate with their nice computers. But this integration has many numerical pitfalls, because we are operating on very, very small probabilities; the volume we are talking about is very small, and numerical integration is very sensitive in such cases, especially when we have many variables and have to solve the integration problem in many dimensions. So we need more appropriate methods.

These are the structural reliability methods; some of them are often used in combination. First of all we have the first order and the second order reliability methods; we will look at the first order reliability method. Then we have surrogate models, response surfaces, on top of which we normally operate again with first and second order methods. And then we have Monte Carlo simulation. We will look first at Monte Carlo simulation, which is maybe the most straightforward and intuitive way to go, and then we look at the very elegant iterative solution scheme that is the first order reliability method.

Monte Carlo simulation is widely used. It was developed in the 1930s and 40s, particularly during the Second World War, and it is based on the principle of trial and error; that is what makes it so intuitive, since many times in our daily life we do this trial and error thing. Monte Carlo can be called exact in the sense that, if we assume an infinite number of virtual experiments, it always converges to the exact solution; that is a very good property. There are different approaches to implementing Monte Carlo simulation; we will look today at the inverse transformation method. Some other methods are just listed here for information, we do not have time to look into them. The inverse transformation method is based on the principle that, using the cumulative distribution function, we can produce a random realization from the distribution starting from a random number between 0 and 1. Let's look at that in this graph. Here we have the cumulative distribution function; just to be sure we are on the same page, the cumulative distribution function is related to the density function: we have the probability density function of a variable x, we integrate it up to a certain x, and we get the cumulative distribution function.
The drawing is a little bit out of scale, but this area here corresponds to this value here; this y and this y are the same, that is the cumulative distribution. From the axioms of probability theory from yesterday we know that the cumulative distribution function takes values between 0 and 1, and most computer programs, at least the ones you are supposed to use, can generate random numbers between 0 and 1. A random number generator between 0 and 1 produces uniformly distributed numbers: every number between 0 and 1 appears with uniform probability. So we can generate uniformly distributed random realizations, and with such a realization we go into our cumulative distribution function and read off a realization that follows this distribution. By doing that, we transform the incoming uniformly distributed stream into a stream that is distributed exactly according to our cumulative distribution function.

Here are some suggestions for implementing this in MATLAB. If you want to do it in the old style, you can use rand, which produces a uniformly distributed number between 0 and 1. But we can also directly use randn; then we do not even need this transformation, we directly produce a normally distributed random variable, the so-called standard normal distributed random variable with a mean value of 0 and a standard deviation of 1. There are also functions that produce other kinds of random variables directly. But to introduce the concepts we rely on this first, simple, old-school methodology: we first produce a uniformly distributed random number and transform it to the distribution we want to have. Step number 3: we simulate many of these realizations, realizations of the resistance and of the load, and plot them in our graph. This is the same graph as before, where we had the hill, but now seen from above, so the hill would be here, and we observe the density of dots; this density reflects the density function we had before. So this is just a way to integrate, and the way to integrate is to count the number of points beyond the line and compare this number with the total number. That is a valid way to integrate something for which we know that the total volume is 1: the joint probability density function is a probability density function, it has a volume of 1, so it is completely coherent with this integration to do it with random draws and count the numbers. In this case we have 3 observations in the failure domain, and in total we did 1000 simulations, so the probability of failure is 0.003.
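As a minimal MATLAB sketch of this inverse-transform Monte Carlo procedure (crude Monte Carlo for the independent, normally distributed R and S of the example; the parameter values are illustrative assumptions, and norminv needs the Statistics Toolbox):

muR = 6.8;  sigR = 1.7;        % resistance moment [kNm], from the example
muS = 1.4;  sigS = 0.3;        % load-effect moment [kNm], assumed illustrative values
N   = 1e6;                     % number of virtual experiments

u1 = rand(N,1);                % uniform random numbers between 0 and 1
r  = norminv(u1, muR, sigR);   % realizations of R via the inverse CDF
u2 = rand(N,1);                % fresh draws, so R and S are independent
s  = norminv(u2, muS, sigS);   % realizations of S

nf = sum(r - s <= 0);          % count observations in the failure domain
pf = nf / N;                   % Monte Carlo estimate of the failure probability
fprintf('pf = %.2e, estimated beta = %.2f\n', pf, -norminv(pf));

Applying norminv to rand is literally the inverse transformation method; in practice one would write r = muR + sigR*randn(N,1) directly, as mentioned above.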
So now we can go home, right? Do you believe this result is true? This is an interesting point, because this is pure frequentist statistics. What would happen if we did another 1000 experiments? We would maybe get a different result; maybe we are lucky and find the same result, but there is a certain variation. If we make the experiment 10 times with 1000 simulations each and compare our probability-of-failure estimates, there is variability, and we have to reduce this variability by making sufficiently many experiments. Based on these results, if we did 1 million simulations we would get around 3000 failure observations, at least approximately, and this number of 3000 would be much more stable. So the variability decreases the more simulations we do, and we will look later at some approximate formulas for how many simulations you should do.

But we also had an analytical solution from before; this here is the Monte Carlo simulation of exactly the same problem we solved before with the two normal distributions, and for this simple problem it was possible to solve it exactly, as we did. Now, just out of interest, we apply the Monte Carlo simulation to this simple problem, and we see there is quite a big difference from the, so to say, exact result. We have statistical uncertainty: for a Monte Carlo simulation, you treat the result you get as an expected value, a mean value of something, and you quantify the coefficient of variation of this result; then you can find out, and we will discuss this later, how many simulations you should do in order to reduce this coefficient of variation to, say, 10%.

So now we do this step by step. We first generate a random number, something that comes totally randomly out of our computer, then we transform it to our resistance variable, and then we do the same for the load. It is important to say that we make a completely fresh draw of a new random number, meaning that we assume the load and the resistance to be independent. So we have, say, 0.69 and get our realization of the load, and then if r minus s, the realization of the resistance minus the realization of the load, is bigger than 0, this experiment survived; we can put it in our graph and see that it lies far in the safe domain. Based on this one observation we would say the probability of failure is 0, which is of course total nonsense; but if we repeat this 1000 times, we get some observations in the failure domain. And the coefficient of variation of our result depends only on the number of observations in the failure domain, only on that. So when we want to detect something that is very unlikely, a very low probability, this means we have to make a lot of simulations; and structural reliability problems are about very low probabilities of failure, which sets a very high demand on the number of simulations.
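As a hedged aside, the standard approximation for the statistical uncertainty of the crude Monte Carlo estimate (not derived in the lecture, but consistent with the statement that it depends only on the number of failure observations) is

\mathrm{CoV}\big[\hat{P}_f\big] \approx \sqrt{\frac{1 - P_f}{N\, P_f}} \approx \frac{1}{\sqrt{N\, P_f}}

so that a coefficient of variation of about 10% requires roughly N \approx 100 / P_f simulations, i.e. on the order of 100 observations in the failure domain.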
Now, what we discussed so far was an easy start; it is not very complicated to do, only a few lines in your MATLAB sheet. But of course reality is a little bit more complicated: random variables are very often correlated with each other, and we have to take that into account somehow, because it has a big effect on the structural reliability. Here we have a clear correlation between the variables r and s; it is a negative correlation, and you see that a negative correlation means more observations in the failure domain: negative correlation means that when we coincidentally have a very high realization of the load, we coincidentally get a very low realization of the resistance, and that is not good for us; we do not want negative correlation. When it is positively correlated it is the other way around, and that is very nice, you may not even have any observation in the failure domain, because positive correlation means that when we have a high realization of the load we also have a high realization of the resistance.

So we have to take this correlation into account in our Monte Carlo simulation, and that can be done by a method where, in principle, we rotate the coordinate system; forgive me if I am a little bit long-winded about this. When we have a correlated random variable pair in our coordinate system, it might look like this: these are the contour lines of the hill, and they are inclined. What we do is simply rotate the coordinate system in order to get non-inclined contours, and non-inclined contours mean no correlation. That is what happens here: starting from our original coordinates we get coordinates x' and y', and, as a simplification, considering that the correlation coefficient is the cosine of the rotation angle alpha, we can express the new x' directly in terms of this correlation coefficient. So the algorithm we follow to get a bivariate normal random variable with a correlation described by the correlation coefficient rho is: we generate x and y as standard normal variables, and then we calculate the new x based on the old x and the old y, incorporating rho according to this formula; the original x and the new x then form a correlated pair of standard normal distributed random variables. So here we have uncorrelated realizations of the bivariate normal variable, here we transform them into correlated ones, and then we can transform this x and this x' into a normal distribution with any mean value and standard deviation.

Now we have to introduce something that is very important also for the remaining part of this lecture: the transformation from the standard normal distribution. The standard normal distribution is a very boring distribution, because it always has a mean value of zero and a standard deviation of one; it looks more or less like this, centred at zero. Many solutions are based on the standard normal distribution because it is very convenient to operate with, and from the standard normal distribution we can always generate any normal distribution by shifting from zero to the real mean value and scaling the standard deviation from one to the real standard deviation. So we shift and we scale, and that is exactly what is done here: based on the standard normal distributed variable we shift by the mean value and we scale by the standard deviation. This is written a bit ugly, let me give it a little more room: we have our variable x, we shift by its mean, and we scale by the standard deviation of our new variable. So from the standard normal distribution we can produce any normal distribution by shifting and scaling.
This is how it can be implemented. These are our uncorrelated realizations of the standard normal distribution; boring, centred around zero. Then we produce the correlated set: we replace this axis with the new x', using a correlation coefficient of 0.8, and it looks like this, now clearly correlated. Along one axis we have exactly the same realizations, but we shift them, we spread them out in this direction. Then we want to produce two random variables with a mean value of 3 and a standard deviation of 1; that is the transformation to the normal distributions we want. It is a little bit boring in this example, both get a mean value of 3 and the standard deviation remains 1; if we wanted a standard deviation of 2 instead, we would just set it here. So now we have correlated, normally distributed random variables. This is how it is done on some data: we have x and y taken from the standard normal distribution, and we transform them into correlated variables with a mean value of 3 and a standard deviation of 1.

Now of course we also want to go beyond the normal distribution. With Monte Carlo simulation we are actually not bound to the normal distribution at all; we can work with any kind of distribution, we just have to transform through the distribution function. We might now have the idea of doing the same with an exponential distribution: we produce two-dimensional, exponentially distributed random realizations and apply the same formula, and we get something like this. This is of course nonsense; it is not allowed. This transformation is only allowed for normally distributed variables. But of course the people dealing with this are very proud and have good ideas, and the idea is the following: we generate a bivariate normal random number, transform this bivariate normal distributed random variable into correlated uniformly distributed random numbers using the CDF of the normal distribution, and then re-transform them into the variables we want to have. Once we have a correlated set of uniformly distributed random variables, we can transform them into whatever distribution we want, for instance the exponential distribution. This is called NORTA, "normal to anything": we always apply our correlation on the standard normal distributed random variables, then we transform these realizations into uniformly distributed ones, and we use these uniformly distributed values to obtain, by inversion, the realizations of the variable of interest. So we generate, as before, a pair of correlated standard normal variables x and x', we transform them into correlated uniform random numbers, and with these correlated uniform random numbers we can generate the random numbers for our problem, for instance exponentially or gamma distributed. Here is an example: we again start with realizations of uncorrelated standard normal random numbers, transform them into correlated standard normal numbers, then into correlated uniformly distributed random numbers, and finally we make the transformation into, for instance, exponentially distributed random numbers. So we do the same example as before, but now going through these several steps over the standard normal and uniformly distributed random numbers, we apply this "normal to anything", and we get a correlated set of exponentially distributed random numbers.
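A minimal MATLAB sketch of this "normal to anything" idea for two exponentially distributed variables (the means of 2.0 and 0.5 are assumed for illustration; normcdf and expinv need the Statistics Toolbox):

rho = 0.8;  N = 1e4;               % target correlation on the normal scale
x  = randn(N,1);                   % standard normal realizations
y  = randn(N,1);                   % independent standard normal realizations
xc = rho*x + sqrt(1 - rho^2)*y;    % correlated standard normal partner of x

u1 = normcdf(x);                   % correlated uniform numbers between 0 and 1
u2 = normcdf(xc);
x1 = expinv(u1, 2.0);              % exponential with assumed mean 2.0
x2 = expinv(u2, 0.5);              % exponential with assumed mean 0.5
% note: the resulting correlation of x1 and x2 is close to, but not exactly,
% rho, because rho is imposed on the underlying normal variables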
So Monte Carlo simulation is a very straightforward tool, even if you do not have the strongest background in this topic. This part about correlation might seem a little bit awkward to you, but it is very important that you can at least implement it, because in practical problems we very often have correlation, and it would be a real pity if the consideration of correlation in a Monte Carlo simulation were an obstacle for you. This was of course a very fast exposure, and you will have to look at it again, but if you follow these different transformation steps you are able to consider correlated random variables in your Monte Carlo simulation. Here is another, less boring example: we have X1 and X2, where X1 follows a Gumbel distribution and X2 follows an exponential distribution, and we have a correlation coefficient of 0.8.

When you do this for more than two dimensions, you can generalize the formula we had, with the correlation coefficient and this x and x', by using the Cholesky decomposition. The correlation structure of our multiple random variables is defined by the correlation matrix C; if we look at, say, four random variables at the same time, we have a four-by-four correlation matrix. We then search for a matrix R with the properties indicated here, and this R is found by the so-called Cholesky decomposition. With this R we can very straightforwardly transform the vector of uncorrelated random variables X into a vector of correlated random variables Y. That may sound like algebra we do not want to do by hand anymore, but in MATLAB this is just a command: you have the Cholesky decomposition as a command, and the vector operation is equally straightforward. Compared with what we looked at before, more or less Excel-based with these tables, this is the much more straightforward way once you have understood it; for implementing it in your code this is the way to go, it is just two lines. Here is an example, I have to speed up a little bit: with this decomposition we always get a set of correlated standard normal distributed random variables from an uncorrelated set.
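The "just two lines" can be sketched like this in MATLAB (the 3-by-3 correlation matrix is an assumed example; the lecture's factor is called R, here named L since chol(...,'lower') returns the lower-triangular factor):

C = [1.0 0.8 0.3;
     0.8 1.0 0.5;
     0.3 0.5 1.0];          % assumed target correlation matrix (must be positive definite)
L = chol(C, 'lower');       % lower-triangular factor with C = L*L'
X = randn(3, 1e4);          % uncorrelated standard normal realizations, one column per sample
Y = L * X;                  % correlated standard normal realizations with correlation matrix C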
So now we talk about the accuracy of the Monte Carlo estimate. The accuracy is, as I said, dependent on the event we want to detect with the Monte Carlo simulation; it depends on the size of the piece of the cake we want to cut out of the joint probability density function. We are talking about event probabilities that are very low, of this order of magnitude, and that means that if we want a coefficient of variation of about 10% in the result, we have to do a corresponding number of simulations. This line here is for 10%; if we want to be even more confident, the line moves up. This is a guideline for how many simulations we should do. A question: when you talk about the number of simulations, is that the number within one simulation run, or the total number? The total number of virtual experiments, the total number of times you compute the limit state function based on your realizations; if you generate your random variables for 2000 experiments and then for another 2000, of course you count them together, so it makes no sense to make 10 times 1000, you just do 10,000. The other question was whether there is a difference between one simulation with 10,000 samples and several simulations with fewer samples, in the sense that the 1000 realizations you generate the first time are different from the 1000 you generate the second time, so whether in Monte Carlo simulation it is better to have one run with many samples or many runs with fewer samples. That is an interesting question: when your realizations are independent, 10,000 simulations are the same as 10 times 1000 simulations; you just accumulate your experiments and always take as many as you can to calculate the failure probability.

So this is how the estimate from the Monte Carlo simulation behaves as a function of the number of simulations: here we have 10 to the power of 4, so after 10,000 simulations we can compute this failure probability; after only 1000 simulations we are here. We see that our result converges towards the true result; Monte Carlo simulation only provides convergence towards the true result, and now the question is when we can stop, because when we stop we do not know these two lines. You can basically analyze beforehand how much computer time you can afford in order to increase the precision of your result; in the end it is the same principle, we have to stop somewhere. We do not know this red line, so we would always stop here; this is our expectation. Is the estimate supposed to get better and better, even though it does not look like that? Yes, it converges; in theory, with an infinite number of simulations, it would give the exact result. Is there an expression for choosing the number of simulations, or only these graphs? Yes, there is an approximation: you can approximate the coefficient of variation of your Monte Carlo result, and for 10% this gives you roughly how many simulations it should be. Does it also depend on the number of random variables you have? No, it depends only on the number of observations in the failure domain.

Just briefly, because we do not go into it: there are variance-reducing methodologies, methodologies for the case where the evaluation of the limit state is computationally very expensive. What would that be? Our limit state r minus s is very fast to compute, but what kinds of limit states do we also know, especially for more advanced problems, what do we do in modern structural engineering? We do a finite element calculation. For more advanced problems we might do a finite element calculation, and then we no longer have this simple calculation of the load effect, the central load times L divided by 4; we do something numerical, and numerical means it costs time. In our simple problem the evaluation of the limit state takes maybe a millionth of a second, it is very fast, we have modern computers, and we get a result like this after one million simulations.
But given that you have a finite element model and you need 0.1 seconds to evaluate the limit state function once, and you want to do 10 million simulations, then you need 1 million seconds, and you can easily calculate that this gives you a problem if you have to submit your report next week. And 0.1 seconds per finite element run is optimistic; you know models that are much slower. Another issue is that very often our reliability estimate is only part of a bigger optimization problem, as in the decision analysis we discuss later today: when the probability of failure is part of an objective function that we want to optimize, we may have to evaluate this failure probability 10,000 times within one algorithm in order to find an optimum, and then, even for simple problems, we reach the boundaries of what we can do with plain Monte Carlo simulation. When we have complex problem settings or computationally costly evaluations of the limit state, we have to think about more effective methods, methods that make the Monte Carlo simulation converge faster. Such methods exist, and I just give you the keywords, importance sampling and stratified sampling, and then I explain the principle of importance sampling without going into more detail; after that we make a little break.

Importance sampling works like the Monte Carlo simulation we did, but plain Monte Carlo is very inefficient, because we sample points mainly in the region around the mean, where we do not expect failure at all, and we even sample points where we know for sure there is no failure; we produce all these points only to find these two guys here. Very inefficient. The idea of importance sampling is that we move the focus of our point cannon, so to say: we shoot the points around the region where we expect failure to be, and then we count those points, and there will be many more points in the failure domain, but we reweight them by the low probability that points from the original density would actually fall there. That is a very efficient method. So we have the joint probability density function describing our variables, and we formulate a so-called proposal density function, which is here. Now we count a lot of points and find a very large integral on this side, maybe of the order of magnitude of 0.5, but we have to weight this by the low probability that our original density function actually produces realizations there. So we correct this large number of hits by the low probability that the original points would end up there. That is just a little teaser, you can ask me more in the break, we will not have time to go into more detail. I want to conclude with a warning: when you use these variance-reducing methods you should be very careful, because you open the problem up for possible bias, and bias is nothing we want to have when we do these virtual experiments.
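For completeness, the reweighting just described corresponds to the standard importance sampling estimator (not written out in the lecture, but this is the usual form):

\hat{P}_f = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}\big[g(\mathbf{x}_i) \le 0\big]\, \frac{f_{\mathbf{X}}(\mathbf{x}_i)}{h(\mathbf{x}_i)},
\qquad \mathbf{x}_i \sim h

where h is the proposal density centred near the expected failure region and f_X / h is the weight that scales the many hits down by the low probability of the original density there.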
So now, after a small break, we will talk about the second strategy for solving this integral, the so-called FORM analysis. The idea is now different: before, we just used the cannon and threw balls in order to represent this density, and then used a very simple relation, the number of points in the failure domain divided by the total number, in order to solve the integral. Now we take a different approach. First of all we have to recognize our graphical representation of the two-dimensional case: we look at a joint probability density function represented by its contour lines, like in a topographical map when you go hiking, and here we have the limit state as a line. This is the case with two random variables; with three random variables it already becomes very hard to illustrate, and the limit state becomes a surface, and with a higher number of variables we talk about a hyperplane, so the limit state is then described by a hyperplane. We stick to this graphical representation and look at it again.

We have the problem here, where we have the random variables with their real means and standard deviations and the limit state function in the original space, and now we transform the entire thing into the standard normal space, where we have mean values of 0 and standard deviations of 1, so we always have these concentric circles. These circles in the standard normal space always look the same, but we still represent our individual problem in this space. So how is that possible, where has all the information from the distribution functions in the original space gone, where is it? In the limit state function, very good. And how does it come into the limit state function? Here we have the limit state function g(r, s) = 0, and here we have the limit state function in the standard normal space, which is a function of u_R and u_S, and these are standard normal distributed random variables. We remember the transformation rule and write down g(r, s) = r - s, and we know that R and S are normally distributed: R is a random variable, normally distributed with mean value mu_R and standard deviation sigma_R, and S is normally distributed with mean value mu_S and standard deviation sigma_S. So how do we get from this g(r, s) to g(u_R, u_S)? We transform the limit state function itself into the standard normal space: we represent it not as a function of r and s anymore, but as a function of the standard normal distributed u_R and u_S. And how do we do that?
We use the transformation from before. So we write g(u_R, u_S) = mu_R... and then you continue; sorry, not u, I am a little bit tired, I write it again for you very clearly: g(u_R, u_S) = (mu_R + u_R * sigma_R) - (mu_S + u_S * sigma_S). Now the entire information of our problem, the individual information from the mean values and standard deviations of our two variables, is contained in this equation, and this allows us to carry out the whole continuation of our solution in the standard normal space, where the probabilistic information of the space itself is always the same; the probabilistic characteristics of our particular problem go directly into the limit state equation. This transformation is very important, sorry for repeating it, this transformation is very important. Here in the original space we are still talking about normal distributions, but, without going into more detail, when we have non-normally distributed variables in the original space we can also use a kind of transformation into the standard normal space; it is a little bit different from this one, but it is always possible. For instance, in one of the examples I distributed you find the solution for the Gumbel distribution: when one of the variables is Gumbel distributed, typically the load, we can make such a transformation, but then our limit state function in the standard normal space becomes non-linear through this transformation. You are asked to look that up in one of these examples.

So now we are in the standard normal space, and normally in the standard normal space we do not draw these circles anymore, because they are always the same, so they contain no information; this once, while saying it, we still have them there, but later on we skip the circles. Now we look at our limit state equation in the standard normal space, and the limit state function could be non-linear, due to the transformation from non-normally distributed random variables, but also due to the fact that our mechanical problem itself is a non-linear one. The beta index now gets a new graphical interpretation: beta is the distance from the origin to the point where the failure surface is closest to the origin in the standard normal space. So now it is about finding an algorithm, in this space, to detect the smallest distance between the origin and the failure surface, and we do that with the first order reliability method: an approximation where we put a tangent at this point and assess the failure probability based on a surrogate limit state that is linearized at the design point. The point of the limit state closest to the origin is called the design point, and we approximate the limit state linearly at this design point. So we have the standard normal distribution here, we have a linear approximation of the limit state function, and we have this distance between the origin and the design point, and you see the graphical interpretation is still the same as before: this distance is how many times the standard deviation, which in this case is 1, fits between the origin and the design point. So it is quite similar to what we did before, but now in the two-dimensional standard normal space.
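Summarizing the last two steps in symbols (a restatement of what was just written on the board, with the minimum-distance definition of beta added in standard notation):

R = \mu_R + u_R\,\sigma_R, \qquad S = \mu_S + u_S\,\sigma_S

g(u_R, u_S) = (\mu_R + u_R\,\sigma_R) - (\mu_S + u_S\,\sigma_S) = 0

\beta = \min_{\mathbf{u}} \lVert \mathbf{u} \rVert \ \ \text{subject to} \ \ g(\mathbf{u}) = 0,
\qquad P_f \approx \Phi(-\beta)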
Now it is worth mentioning how this relates to your normal engineering practice, where you use characteristic values and partial factors for design. What is the characteristic value of the resistance divided by the partial factor, how do we call it? The design value. And the design value of the resistance and of the load is found exactly here; this is what we want to represent also in the partial factor design format. This is the so-called design point: in one dimension we have the design point for the load, in the other the design point for the resistance. So the partial factor design format is directly related to these considerations.

Now, let's not look at these formulas yet, I will explain them in a minute; we look at the standard normal space, and as I indicated we skip the contour circles, we just have our limit state function equal to 0 in the standard normal space. A question about the previous slide, about the vector there: yes, that is the direction vector, the normal vector to the tangent of the curved surface here; we will come to that now. So here we have the limit state function equal to 0, that is what is missing here: g(u_1, u_2) = 0, this line here. And we already agreed that beta is found where we have the closest distance between this limit state and the origin; that is the definition of beta. Now, how do we find this beta? We could of course make a drawing like this and use a ruler, but in a mathematical, generic and, especially, multi-dimensional case this is not possible. We need an algorithm that we can illustrate and exemplify on this easy two-dimensional example, but that can also be used in a 12-dimensional space. Therefore, even though it would be straightforward to find it by hand here, we define a generic algorithm that can be implemented and that finds the closest point on the failure surface to the origin surprisingly fast and efficiently.

So this is what we are searching for, and now we introduce the algorithm. We have a problem at hand, and we do not know what it looks like, but we can define the failure surface, the limit state equal to zero, and we start with a first estimate of the alpha values of a direction vector. This is a unit normal vector with a certain direction: it has length one and two coordinates, alpha_1 and alpha_2, corresponding to u_1 and u_2. This is just a direction we have to provide as a starting value of the algorithm, and typically, looking at the whole coordinate system, we have four directions, negative and positive, in the load direction and in the resistance direction of the standard normal space. So we have to think about where to shoot first, where we expect the design point to be. Do we expect the design point to be in the lower domain of the resistance, is it a low realization of the resistance that leads to failure? Yes. And is it a high realization of the load that leads to failure? Right. So we expect failure at a high realization of the load and a low realization of the resistance; that means, and this is not indicated here, that we actually have to shoot in the negative resistance direction and in the positive load direction, that is where we suppose the design point to be. This is the first direction we shoot in, and then with this equation, for the given direction, we search for the point that lies on the limit state equation
exactly in that direction. That means we insert alpha times beta, the direction vector times beta, into the limit state function such that the limit state function becomes 0; in practice we just search for the point that lies on the limit state function in this direction, because the function is defined by g(u_1, u_2) = 0, and from this equation we get a beta out. So we now have a point, beta*alpha_1 and beta*alpha_2, that lies on the failure surface, but it is not yet our design point, not yet the point closest to the origin, do you agree? It is some point in some direction, because this was just the direction we initially shot in, and we do not know where the design point is. What we do now is put a tangent at that point, and that is what happens here: with the partial derivatives of the limit state function with respect to u, evaluated at beta times alpha, we can compute the direction vector, the normal vector to this tangent. So this is the direction vector orthogonal to the tangent, and it defines the new direction we shoot in: you see, this is orthogonal to the tangent here, and that is our new direction. In this new direction we search again, we iterate; we move to that point, we again search for the point that lies exactly on the limit state, we linearize again, that is this part, we get the green line, which is the tangent at that point, and based on this tangent we have the new normal vector, the green one, it comes out very nicely in this scheme. Then again we search for the point on the limit state and repeat. It is actually surprisingly fast: after a few iterations you arrive at the closest point to the origin, and this is then our design point, and the beta we find in the last iteration, where we have some defined criterion for stopping the algorithm, for being satisfied with the result, is the beta we use as the result. (A compact sketch of this iteration is given at the end of this passage.)

The time constraints do not allow us to go into more detail, but in the folder you received from Sebastian you find this presentation and also a folder with two exercises, nice PDF files with some text and the principles: one document introduces an example of the Monte Carlo simulation, and the other an example of this FORM analysis. It is of course not possible that you know how all these things work right after this lecture; the time constraints of this course do not allow it. With my students in Trondheim I have four or five lectures for this, but you are adults, you are PhD students or postdocs, so you go to these documents and try to implement these principles, and as you saw yesterday, once you have implemented something it somehow loses its fanciness; it is really not rocket science, and the FORM algorithm is explained there, even with graphical animations, based on an example of a serviceability limit state, so the probabilities of failure that come out are much higher, but it is a very illustrative example. If you really want to learn this, you should spend some time this evening, or later, going through this exercise; it is really an opportunity, now that you have already started thinking about it, now is actually the time to go through it.
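As a compact MATLAB sketch of the iteration just described, applied to the linear g(u) of the example (the numerical values are assumed for illustration; for this linear case the scheme converges in a couple of iterations to beta = mu_M / sigma_M):

% limit state in the standard normal space: g(u) = (muR + u1*sigR) - (muS + u2*sigS)
muR = 6.8; sigR = 1.7;                 % example resistance moment [kNm]
muS = 1.4; sigS = 0.3;                 % assumed load-effect moment [kNm]
g = @(u) (muR + u(1)*sigR) - (muS + u(2)*sigS);

alpha = [-1; 1]/sqrt(2);               % first guess: low resistance, high load direction
beta  = 0;
for k = 1:20
    betaNew = fzero(@(b) g(b*alpha), max(beta, 1));   % point on the limit state in this direction
    u  = betaNew*alpha;                               % current design point candidate
    h  = 1e-6;                                        % step for a numerical gradient
    grad = [g(u + [h; 0]) - g(u); g(u + [0; h]) - g(u)] / h;
    alpha = -grad / norm(grad);                       % new direction, normal to the tangent
    if abs(betaNew - beta) < 1e-6, beta = betaNew; break; end
    beta = betaNew;
end
pf = normcdf(-beta);                   % FORM estimate of the failure probability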
I am sorry for this brevity, but it was not possible to give Monte Carlo and FORM analysis more time than this. I will not go through this text, but as a matter of principle, what we should understand is: when we have a non-linear limit state function and we use the first order reliability method, we make an error; we know that, because it is an approximation. You see here the part of the cake we are not assessing correctly: the reliability analysis was about finding the volume of the piece we cut out of the mountain, and with the first order reliability method we cut with a straight knife, that is the linear assumption, we just cut it off; but the real limit state function is actually curved, so this volume here in between we do not count. In this case the real failure probability is lower than the failure probability we calculate with the first order reliability method, because the part we cut off is a little bit larger than the part that should be cut off by the actual limit state function. So this is the error. Luckily, the distance between the two cuts is very small in the region that matters most; it only gets a bit bigger further out on the flank of the mountain, where there is not much volume left to count anyway, because we are already in the valley. But we have to be aware of this inaccuracy. There is another method, and you might have wondered what the second order reliability method is: that is not a linear approximation of the limit state but a quadratic one, a little bit slower, with a little bit less error.

Just a final concluding remark about this FORM analysis: it is an approximate method, you do not get exact results. With Monte Carlo simulation, if you have an easy problem where it is easy to run 100 million simulations, you will get more accurate results. But FORM is very fast; you will be surprised how fast this iteration converges, that is the strength of the algorithm. So when we have a problem where we need a fast result inside a complex optimization problem, this is the way to go. Any questions? Sorry, just a question about those alpha factors, whether they are the same ones you use, and whether there are as many alphas as random variables: yes, they are the same, and there are as many alphas as random variables. The alpha values are also used in the Eurocode for design assisted by testing; they are the same alpha values. They are called sensitivity factors, and they indicate the importance of the variance of a variable for the reliability problem; if you want to go back to the very simple safety margin case we introduced before, it is the contribution of the variability of one variable to the variability of the safety margin. I am a little bit over time, but we have maybe 20 minutes for a break, so we meet at five minutes past half; 20 minutes is enough to eat at least two pieces of cake.