The talk is called Bayesian updating of statistical parameters and probability models for ice peak loads. It is based on a previous study which was presented at the last workshop as one of the examples. The objective is to present some probabilistic models for peak ice loads acting on a ship in Arctic areas, and to illustrate Bayesian updating of the statistical parameters for a simplistic data set, just to make it more transparent. We also look at Bayesian updating for a mixture of different models, which are members of a selected model space or model universe. This is similar to what we heard from Costas earlier, where different models compete for the highest posterior probability. Finally, we compute the probability of failure for a fixed capacity threshold for the different probability models, based on the posterior distribution functions. So the purpose is to illustrate the Bayesian updating procedure.

The data behind the study come from an expedition with the coast guard vessel Svalbard in 2007. Fibre optic sensors were mounted on the internal frames: there are five instrumented frames, four in the bow area and one at midship, but most of the ice peak loads typically occur in the bow area. This is the route of the ship; I guess you know where Svalbard is, far north in the Arctic. The measurements are used to estimate the load based on shear deformation: from the estimated shear forces you can estimate the external load. This gives the local loading between two frames, so it is a line load estimate.

This is what a typical record looks like: you have a lot of these peaks, and the intensity is quite high. The scale on the horizontal axis is about five minutes between the marks, 7:55 in the morning, 8:00, 8:05 and so on, so there are many impacts on the hull even within five minutes.

Now, what is the statistical model for this process? The data show two statistical populations. The upper data are due to hard inclusions in the ice; they show a different, smaller slope than the lower part, where most of the data lie. If you try to fit a single straight line, it misses what happens in the extreme cases, which is where the interest lies if you want to calculate the extreme loading. Fatigue is different, but for extreme loading it is the upper part, the hard inclusions, that really matters. That means you should fit two slopes: if you are using, for example, an exponential model, you need a mixture model with two slopes and a mixing coefficient for the exponential parts.

This is what you get if you use a single exponential model. The lambda, the intensity parameter of the exponential, depends on the ice thickness, so you can plot it as a function of ice thickness. There is a large scatter, but a decreasing trend: the thicker the ice, the smaller the lambda parameter of the exponential. The Weibull distribution is also widely applied for ice peak loading. It has two parameters, the shape and scale parameters k and theta, so this is the two-parameter Weibull.
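As a quick reference, here is a sketch of the general textbook forms just mentioned; the symbols p, lambda_1, lambda_2, k and theta are generic (mixing coefficient, the two exponential intensities, and the Weibull shape and scale), and no values from the study are implied:

```latex
% Mixture ("three-parameter") exponential exceedance probability:
% p is the mixing coefficient, \lambda_1 the intensity of the bulk of the
% data and \lambda_2 the intensity of the upper tail (hard inclusions).
P(X > x) \;=\; p\,e^{-\lambda_1 x} \;+\; (1 - p)\,e^{-\lambda_2 x}

% Two-parameter Weibull distribution with shape k and scale \theta:
F(x) \;=\; 1 - \exp\!\left[-\left(x/\theta\right)^{k}\right]
```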
The two Weibull parameters give more flexibility to fit the data set, and they also show some trend with ice thickness, a tendency to change as the ice thickness changes.

The third model is the three-parameter exponential, the mixture. Here we have three parameters: the mixture coefficient, lambda one and lambda two. This is very much applied in the design of ships, but in fact only the upper part is used, because that is what matters, so you frequently see a one-parameter exponential fitted only to the upper part, which covers the highest loading. That is what is applied for design. Then there is another community applying the Weibull distribution. So in a way there are two schools in this field, two different approaches to estimating the extreme values: one is the Weibull, the other is the upper slope of the exponential distribution. The true mixture distribution, the three-parameter exponential, is what you would use for fatigue analysis.

Now we look at Bayesian updating for three different probability models: the exponential, the exponential fitted to the upper-tail data only, and the Weibull, and then a combination of these. You can start from the position that all three are possible models, none of them excluded and all equally probable to start with, and then look at what the data tell you about which model is the best one. We also look at the probability of failure for a fixed threshold, and we apply a simplistic data set to make it more transparent: I have picked nine samples which are representative of the line loads in the bow area, ranging from 33 kilonewtons per metre up to 195.

This is the exponential case, shown on exponential probability paper. A straight line is basically fitted, and the slope is 0.0123 for the fitted regression line, so lambda becomes 0.0123 from the regression parameter estimation. You also see that the lower part has a high slope and the upper part a smaller slope, so clearly the data do not fit very well to the upper part of the model, nor to the lower part, but on average the fit is quite good.

Then we look at the posterior distribution for this case. We have a uniform prior, taken between lambda min and lambda max, and when we form the posterior the uniform distribution disappears because it occurs both in the normalizing factor and in the numerator. We end up with a normalization factor, an exponential term involving the sum of the samples, and a product term in lambda, that is lambda to the power n. The posterior is shown as a function of lambda, and the peak corresponds to the maximum likelihood estimator because we have a uniform prior. The normalization constant is very small here, so this is the unnormalized posterior.
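A minimal numerical sketch of this updating step, assuming illustrative inputs only: the nine samples below are hypothetical placeholders that merely span the 33-195 kN/m range mentioned in the talk, and the prior bounds on lambda are assumed, not taken from the study.

```python
import numpy as np

samples = np.array([33., 48., 61., 75., 90., 110., 135., 160., 195.])  # hypothetical, kN/m
n = samples.size
lam_min, lam_max = 0.001, 0.05        # assumed uniform-prior bounds on lambda [1/(kN/m)]

lam = np.linspace(lam_min, lam_max, 2000)
# Exponential i.i.d. likelihood: prod_i lambda * exp(-lambda * x_i);
# the uniform prior cancels out, as noted in the talk.
unnorm = lam**n * np.exp(-lam * samples.sum())

# Normalization constant (grid integration) and the normalized posterior
norm_const = np.sum(unnorm) * (lam[1] - lam[0])
posterior = unnorm / norm_const

# With a uniform prior the posterior mode coincides with the maximum
# likelihood estimate lambda_hat = n / sum(x_i)
print("posterior mode:", lam[np.argmax(posterior)])
print("MLE n/sum(x):  ", n / samples.sum())
```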
This is what you get after dividing by the normalization factor. Now, if you pick only the upper part of the data, the three highest values, you get a different posterior, and the maximum likelihood estimator of lambda is smaller, now closer to 0.01. The normalization constant is also very different, 2.8 times 10 to the minus 10 in this case.

Then we look at the Weibull model. This is the PDF, and we again have a uniform prior between two limits, one interval for the shape and one for the scale parameter. The likelihood function is a bit more complex, with products involving the two parameters. We can plot the posterior PDF for this model as well; the normalization factor is now 0.38 times 10 to the minus 18, also very small. Again there is a peak which corresponds to the maximum likelihood estimates of the shape and scale parameters, since we also here have uniform prior distributions, so that is the peak of this posterior density function.

Then you can look at the extreme values based on the posterior. Here we use the basic initial distribution raised to the n-th power, because the peaks are assumed independent, and for the large n we have here this can be approximated by an exponential expression (sketched in the formulas below). We can then introduce the posterior distribution, and we get the probability of failure conditional on the shape and scale parameters; integrating over the posterior gives the posterior probability of failure.

This is for the exponential and this is for the Weibull. These are conditional on the parameters, but we then combine them with the likelihood function, which here is proportional to the posterior since the priors are uniform. For the exponential case, to the left, you see an increasing probability of failure for smaller and smaller lambda, and for the upper tail it is the same, but of course with different shapes and different maximum values: for the upper tail we have a maximum of five, while for the other one the maximum value of the conditional probability of failure is very small. For the two-parameter model we likewise see an increasing probability of failure for lower shape and higher scale, which is not unreasonable; this is what you would expect.
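In symbols, a sketch of the standard construction described above; F is the parent peak-load distribution, theta its parameter vector, n the number of independent peaks, x_c the fixed capacity threshold and f'' the posterior density, none of which are given specific values here:

```latex
% Distribution of the largest of n independent peaks and its large-n approximation:
F_{\max}(x \mid \theta) \;=\; \bigl[F(x \mid \theta)\bigr]^{n}
   \;\approx\; \exp\!\bigl\{-\,n\,\bigl[1 - F(x \mid \theta)\bigr]\bigr\}

% Conditional failure probability for threshold x_c, then integrated over the posterior:
P_f(\theta) \;=\; 1 - F_{\max}(x_c \mid \theta), \qquad
P_f \;=\; \int_{\Theta} P_f(\theta)\, f''(\theta)\,\mathrm{d}\theta
```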
The reason the surfaces stop here is that the uniform prior is limited; there is an upper and a lower limit on the parameters. You can also look at the unconditional probability of failure by integrating out the parameter lambda in one case and the Weibull parameters in the other, and we get very different values: for the exponential it is of the order of 10 to the minus 8, for the exponential upper tail 10 to the minus 3, and for the Weibull one times 10 to the minus 3, which is comparable to the upper-tail exponential. So the exponential upper tail and the Weibull correspond quite well, but the single-parameter exponential fitted to all the data is not so good.

Then you can also look at a combination of different probability models based on Bayesian updating; we saw these formulations from Costas earlier. The traditional Bayesian parameter updating is the upper expression. The one below is for a combination of probabilistic models M_j with different a priori probabilities: you find the joint posterior of the parameter vector and the model, and then integrate out the parameter vector, which gives the marginal posterior of the model. You can do the same if the model space is continuous, which is the lower expression, for example a structural model with a continuous parameter.

Now, putting the numbers into these expressions (a small numerical sketch of this weighting is given below), we try first with equal prior probabilities for the combination of all three distributions, one third on each. For this small data set the posterior probabilities become binary: with the very different normalization factors, the exponential upper tail gets all the weight and the others get zero. If you start with the exponential and the Weibull at 0.5 each, the Weibull gets all the weight and the exponential zero. If you start with the upper tail and the Weibull at 0.5 each, you end up with the upper-tail exponential getting everything and zero for the Weibull. This is due to the different magnitudes of the normalization factors, but probably also to the very small data set; with a larger data set the result might not be as binary as in this case.

The resulting failure probabilities also comply with the weighting of the different probabilistic models: for the exponential alone you get two times 10 to the minus 8, as mentioned; for the combination of all three distributions 3.1 times 10 to the minus 3; for the exponential and the Weibull 1.0 times 10 to the minus 3; and for the exponential upper tail and the Weibull you get the same as when all three models are in your toolbox, in your model universe, at the same time.

So when you look at different candidates for statistical models, you should of course not start with a single one; you should have different alternatives, and they will then get posterior weights according to the data you have. There are of course other ways of comparing probabilistic models as well, such as regression coefficients or chi-square tests, but this is the Bayesian approach, applied here to a small data set.
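A minimal sketch of the model-weighting step, with assumed numbers rather than the study's: each model's evidence P(D | M_j) is the normalization constant from its parameter updating, i.e. the integral of likelihood times prior over the parameters, and only the large differences in order of magnitude matter for the point being illustrated.

```python
# Hypothetical evidences for the three candidate models (all values illustrative).
evidence = {"exp_all_data": 1e-14, "exp_upper_tail": 3e-10, "weibull": 4e-19}
prior = {m: 1.0 / 3.0 for m in evidence}          # equal prior model weights

unnorm = {m: evidence[m] * prior[m] for m in evidence}
total = sum(unnorm.values())
posterior = {m: w / total for m, w in unnorm.items()}

# Hypothetical conditional failure probabilities per model for the fixed
# threshold; the unconditional value is the posterior-weighted mixture.
pf_given_model = {"exp_all_data": 2e-8, "exp_upper_tail": 3e-3, "weibull": 1e-3}
pf = sum(posterior[m] * pf_given_model[m] for m in evidence)

print("posterior model weights:", posterior)   # nearly binary, as in the talk
print("unconditional P_f:", pf)
```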
The natural next step is to look at larger data sets and see how these numbers change for a data set larger than just nine points.

Here is the summary of results, which is just what we went through. The conclusions are that we have looked at different probabilistic models for ice loads, and Bayesian updating has been performed for a simplistic data set. The failure probability for a fixed capacity threshold is computed for the different models, and mixtures of different models, which are members of the model universe, are assessed by posterior probabilities. For the present data set there is a strong ranking of the three models based on the posterior probability, with binary-type probabilities.

This is also what was presented last time: the long-term extreme values based on real measured data. The upper-tail exponential is to the right, the green one is the Weibull, and the single exponential is the yellow points to the left, for different return periods. Again we see that the three-parameter exponential gives higher predictions of the extreme loads than the other two models. So the overall data also seem to indicate that the three-parameter exponential, or rather its upper-tail exponential part, is a very good model for the peak ice loads when predicting extremes. These are the three models: one, the exponential; two, the Weibull; three, the three-parameter exponential, with return periods of one year, five years and 20 years, and the tendency is of course the same for all of them. If you look at the factors, the Weibull predicts 604 for the 20-year extreme, while the three-parameter exponential predicts 1061, which is almost a factor of two, roughly 75 percent higher than the Weibull prediction.

Concluding remarks: we have looked at the initial parent distribution of the ice peak load process, Bayesian updating of its parameters, Bayesian updating of the probabilities or credibilities of the different models, and also posterior failure probabilities for a fixed threshold for the peak ice load. I should mention that the threshold is twelve and a half times the minimum lambda value, so it is a pretty high threshold; that is to get failure probabilities of a proper order of magnitude, and it means the design of the ship would clearly satisfy that threshold, and even higher design thresholds, to get a sound ship hull travelling in Arctic waters. Thank you for your attention.