So, I will do the presentation, and he will take the questions at the end. The title is, essentially, the design of an offshore wind turbine support structure using pre-posterior decision analysis. This is supposed to be one of the case studies in this course section. The focus here is on using site-specific information to inform decisions for design, rather than for operation and maintenance. This is the agenda: I will start by describing the scenario, then introduce the methods used, look at the results, and finally mention the challenges and the implications.

The case study focuses on one specific point in the design of an offshore wind support structure. This point is usually covered as avoiding the resonance hazard from the excitation frequencies caused by the blade passing (3P) and rotor passing (1P) frequencies, and the margins in the frequency check are quite significant. In the way the design regulations treat this, the idea is to check the natural frequency: the uncertainty is covered by assuming plus or minus five percent variability in the computed natural frequency, plus an additional five percent margin to the excitation frequencies. This approach does not quantify the consequences of the resonance hazard; it focuses on avoiding it, which is fine in theory. But there are cases of structures already in operation where this caused complications, and the consequences are usually that the structure needs to be inspected differently than planned, or the operational regime needs to be changed during the operational life. There is a paper reporting on an offshore wind farm in the German North Sea that has been monitored since 2007. Through the monitoring system, they found that dynamic amplification at the rotor passing frequency was present: the damage the structure was accumulating was higher than estimated in the design, and the consequence was that the operational regime had to be changed. And of course, the objective of an offshore wind turbine is not just to be there; it is to produce energy, so changing the operational regime is costly.

So the idea is to treat the hazard of dynamic amplification using the pre-posterior Bayesian decision framework: more specifically, to quantify the consequences of a certain design, that is, of placing the natural frequency in a certain position, and to quantify how site-specific information on the soil conditions feeds into our design decisions.

This is just a quick explanation for those not that familiar with offshore wind. The left figure shows the rotor speed: a given RPM can be related, depending on the control of the turbine, to the wind speed that produces it. So for a certain wind speed there is a certain rotor speed, and for wind speeds above rated the rotor speed is kept constant. You might expect the critical rotor speeds to correspond to rare, strong winds, say 15 metres per second, but they are not rare: the wind turbine actually operates there. Especially in winter, if you look at how often the rotor is rotating at this RPM, it is very often.
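As a side note, the deterministic frequency check described above is easy to express in code. The following is a minimal sketch; the rotor speed range, blade count and margins are illustrative assumptions, not the values from this case study:

```python
# Hypothetical sketch of the deterministic frequency check described above.
# All numbers are illustrative assumptions, not case-study values.

def excitation_bands(rpm_min, rpm_max, n_blades=3, margin=0.05):
    """Return the 1P and 3P excitation bands [Hz], widened by +/- margin."""
    f1p = (rpm_min / 60.0, rpm_max / 60.0)          # rotor passing (1P)
    f3p = (n_blades * f1p[0], n_blades * f1p[1])    # blade passing (3P)
    widen = lambda lo, hi: (lo * (1 - margin), hi * (1 + margin))
    return widen(*f1p), widen(*f3p)

def frequency_check(f_nat, rpm_min, rpm_max, comp_var=0.05):
    """Check that the computed natural frequency, with +/- comp_var
    computational variability, stays outside both excitation bands."""
    band_1p, band_3p = excitation_bands(rpm_min, rpm_max)
    f_lo, f_hi = f_nat * (1 - comp_var), f_nat * (1 + comp_var)
    overlaps = lambda band: not (f_hi < band[0] or f_lo > band[1])
    return not (overlaps(band_1p) or overlaps(band_3p))

# Example: a soft-stiff design sitting between the 1P and 3P bands.
print(frequency_check(f_nat=0.29, rpm_min=6.9, rpm_max=12.1))  # True
```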
So this means that we are on this line here, and there can be implications if the design tends to be more to the left. One of the constraints is that the structure needs to be designed in this region, which is called soft-stiff, for economic reasons, while avoiding resonance. But we have uncertainty in how we estimate the natural frequency. The uncertainty can be handled in two ways. One is to invest in steel, a stiffer design that moves the natural frequency further to the right. The other strategy is to acquire better information, get a better picture of the soil-structure interaction, and thereby manage or treat the uncertainty. And we can expect, of course, the optimal decision to be a trade-off between these.

The original idea was to build influence diagrams containing most of the parameters present in a realistic scenario. I then started checking whether that was feasible to treat computationally, and ended up with this influence diagram, which is rather simplistic but still captures the essence of the problem. The main idea is that we need to relate the hazard of dynamic amplification to something that can be associated with losses. That something is structural failure due to fatigue: dynamic amplification yields larger fatigue damage, and therefore a higher probability of failure due to fatigue, and that failure probability can be used within a Bayesian decision framework. This version of the influence diagram focuses on modelling the soil through its stiffness. Here we have the decision about which design to use, and the decision about whether to acquire information. Each design is related to a certain cost, which I take here as proportional to the volume of steel. That is of course a simplification, but for a first simple model it is reasonable.

The scenario starts with non-site-specific information: think of tests performed by a certain company in that area, which you can acquire. Based on those data, we can get a picture of how the soil at that particular location might be, and that can be used as prior information. Then there is the decision to perform further tests at the specific point where the structure will be installed; if we decide to test, we will observe some outcome, and performing those tests also has an associated cost. So, depending on the uncertainty we have on the soil and on which design we decide to build, we will have a certain probability of failure, and that lets us compute the expected utility for each design (a small numerical sketch follows below).

Now I will talk about the model. The idea was to build a model as simple as possible, so that it would be manageable within the pre-posterior framework; sometimes, when you try to make things as simple as possible, you end up doing more scripting, because you need to program the specific parts yourself. So I decided to build my own model, in which I could capture the dynamic behaviour of the structure; due to some nonlinearities, the simulations need to run in the time domain, solving the equation of motion. The model is probably the simplest one that still captures the dynamic behaviour of the structure.
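To make the decision structure concrete, here is a minimal numerical sketch of the prior analysis. The designs, soil states, probabilities and cost factors are invented for illustration; none of these numbers come from the case study:

```python
import numpy as np

# Illustrative sketch of the prior decision analysis described above.
designs = {"D1": 1.00, "D2": 1.15, "D3": 1.30}     # steel volume (relative)
c_fail_factor = 20.0                                # failure cost = 20 x construction cost

soil_states = np.array([0.8, 1.0, 1.2])             # discretized soil modulus (relative)
prior = np.array([0.3, 0.5, 0.2])                   # prior probabilities

# Hypothetical probability of fatigue failure per (design, soil state).
p_fail = {"D1": np.array([0.05, 0.02, 0.01]),
          "D2": np.array([0.02, 0.008, 0.004]),
          "D3": np.array([0.008, 0.003, 0.002])}

def expected_utility(d, probs=prior):
    c_constr = designs[d]                           # construction cost ~ steel volume
    risk = c_fail_factor * c_constr * np.dot(probs, p_fail[d])
    return -(c_constr + risk)                       # utility = -(cost + failure risk)

for d in designs:
    print(d, round(expected_utility(d), 4))
print("prior optimal design:", max(designs, key=expected_utility))
```

With these invented numbers, the lightest design carries too much fatigue risk and the heaviest too much steel, so the intermediate design is prior-optimal; the point is only to show the trade-off mechanics.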
I kept the decisions discrete: I proposed 12 designs based on two decision parameters. I also assumed that the interaction between the soil and the monopile, following this reference, could be treated as linear; only then was the approach computationally possible. These are the 12 designs I considered. For each of them I can map the relation between the first natural frequency and the soil modulus of the system, as we see here. And this has an important first outcome: because the relation is concave, the expected value of the natural frequency is always smaller than or equal to the natural frequency computed at the expected soil modulus (Jensen's inequality). This means that, compared with the deterministic recommendation, the frequency actually moves further to the left, which is exactly the kind of effect we want to capture. These are the two excitation regions, and this is our prior relation; based on it, we also have the prior estimate of the first natural frequency.

I considered 11 fatigue load cases based on the K13 shallow water site design basis, in which load states are defined by the 10-minute mean wind speed, associated with a certain turbulence intensity, significant wave height and peak period, and the probabilities of occurrence of those cases are given. In order to simplify the dynamic computations, I integrated the rotational sampling of the turbulence into the excitation spectrum. That way I avoid performing aeroelastic simulations, which are usually done for these computations in an integrated process. These are the spectra I used as the stochastic loading, and these are the equations I used for the dynamic motion, which we have covered in the past. The simplification here is that I used the generalized mass, stiffness and damping based on the first mode of vibration.

This is how I calculated the fatigue damage accumulated over the lifetime of the system, assuming a design life of 20 years for the structure. I ran one-hour simulations for the 11 load cases and 50 realizations of the soil modulus for each design, and this is the mapping between the lifetime fatigue damage and the first natural frequency for each of the 12 designs. We see that the fatigue damage is highest where the natural frequency approaches the upper boundary of the 1P region, where the rotor rotates at that particular frequency and where the environmental spectrum also has more energy; the damage then comes down as the frequency increases. Combining this with the probabilistic prior definition of the soil modulus, you can also calculate the prior distributions of the first natural frequency and of the lifetime fatigue damage. The probability of failure is calculated using this limit state function, taken from the JCSS definition of fatigue failure, in which the critical damage is modelled as a lognormal variable with mean equal to one and a given standard deviation. So, based on prior knowledge, the failure probabilities I calculated give us a first attempt to see, for this preliminary configuration, what would be the best design given our constraints.
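The limit state just mentioned is simple enough to sketch. Below is a Monte Carlo version of g = Δ − D, with the critical damage Δ lognormal with mean one in the spirit of the JCSS fatigue model; the coefficients of variation and the lifetime damage distribution are assumptions made for illustration:

```python
import numpy as np

# Minimal sketch of the fatigue limit state g = Delta - D.
rng = np.random.default_rng(0)

def lognormal_params(mean, cov):
    """Convert mean and coefficient of variation to (mu, sigma) of ln X."""
    sigma2 = np.log(1.0 + cov**2)
    return np.log(mean) - 0.5 * sigma2, np.sqrt(sigma2)

n = 200_000
# Critical damage Delta: lognormal with mean 1 (JCSS-style); CoV assumed.
mu_d, s_d = lognormal_params(mean=1.0, cov=0.3)
delta = rng.lognormal(mu_d, s_d, n)

# Lifetime fatigue damage D: scatter from the uncertain soil modulus,
# summarized here as an assumed lognormal fit to simulation results.
mu_D, s_D = lognormal_params(mean=0.4, cov=0.5)
D = rng.lognormal(mu_D, s_D, n)

pf = np.mean(delta - D <= 0.0)                      # P(g <= 0)
print(f"probability of fatigue failure ~ {pf:.4f}")
```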
Now we are trying to model how to quantify the value of site-specific soil information. Given that this type of testing provides imperfect information, the test gives an estimate of the soil modulus, and this estimate needs to be related to the true state. The way to do that is the likelihood function. This one here is not based on data; people may have differing opinions about it, but for now it is assumed, and based on this assumption we can create our likelihood function. To model the information within the framework, we need to discretize this likelihood function and compute the posterior distribution for each considered test outcome. So, from our prior knowledge and the likelihood function, we obtain these posterior distributions, as many as there are outcomes.

This is already an important thing to consider: if you don't model the prior information, then when you acquire information you have no way to quantify its value. So even if you are not actually applying value-of-information analysis, doing this provides you further knowledge, because it lets you map between prior and posterior while taking the quality of the testing into account.

We have already solved for the expected utility under prior information; you can do the same for the different test outcomes. Through Bayesian inference we obtain the posterior optima, that is, the optimal design and its expected utility for each test outcome. Of course, depending on the test outcome, different optimal decisions can be triggered: if the outcome is very bad, the strongest design is needed; if the test outcome is very good, a lighter design suffices. So we can see already that information can trigger different decisions, and that its value is strictly linked to the different decisions that can be triggered by it.
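Continuing the invented numbers from the earlier sketch, here is the pre-posterior step: a discretized likelihood, a posterior per test outcome, and the resulting expected value of information. The likelihood matrix and test cost are assumptions:

```python
import numpy as np

soil_states = np.array([0.8, 1.0, 1.2])
prior = np.array([0.3, 0.5, 0.2])
designs = {"D1": 1.00, "D2": 1.15, "D3": 1.30}
c_fail_factor = 20.0
p_fail = {"D1": np.array([0.05, 0.02, 0.01]),
          "D2": np.array([0.02, 0.008, 0.004]),
          "D3": np.array([0.008, 0.003, 0.002])}

def utility(d, probs):
    c = designs[d]
    return -(c + c_fail_factor * c * np.dot(probs, p_fail[d]))

# Discretized likelihood P(test outcome z | true soil state s): rows = z.
# An imperfect test that tends to indicate the true state.
L = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])

c_test = 0.01                                        # assumed cost of testing
u_prior = max(utility(d, prior) for d in designs)

u_preposterior = 0.0
for z in range(L.shape[0]):
    p_z = float(np.dot(L[z], prior))                 # evidence P(z)
    posterior = L[z] * prior / p_z                   # Bayes' rule
    d_star = max(designs, key=lambda d: utility(d, posterior))
    u_preposterior += p_z * utility(d_star, posterior)
    print(f"outcome {z}: P(z)={p_z:.2f}, optimal design {d_star}")

voi = u_preposterior - u_prior
print(f"VoI = {voi:.4f}; test worthwhile: {voi > c_test}")
```

With these numbers a bad outcome triggers the strongest design while the other outcomes keep the intermediate one, so the test has positive value, exactly the mechanism described above.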
To end the presentation, I just want to point out the main challenges I found. One is combining pre-posterior analysis with FE models: the computation time quickly blows up once you start using more realistic modelling. The other is that the likelihood function is probably the most determining part of the whole value of information analysis. If you have any questions, please ask. That's all.

Question: How does the model relate the measurements to soil stiffness? Even if you measure soil stiffness, there is the question of whether it is a real property or a model quantity, different for different strain levels of the soil; this is a modelling question.

Answer: Yes, it's true that what you measure as stiffness differs based on the test you are doing, and in fact for the soil we need richer information for the model, so we would need to sample the soil and test it in the laboratory. And that is already an important point: as you say, stiffness is not intrinsic to the soil; it is actually a reaction of the soil, a model quantity, so it depends on how you push the soil. This should definitely be reflected; since we are not using real data at this point it is difficult, because we are not there yet. In addition, there are also rheological aspects of how the soil behaves: under the dynamic behaviour of the structure the soil will change its state over time, and that is difficult to predict at the design stage. There will be many aspects for which, at the beginning of the structure's life, there is no indication at all. So the idea is to try to reflect all of that in the uncertainty definition.

Question: From my understanding of the example, you are dealing with a new structure, not yet installed. Is it one structure, or a group of structures at the same site, where the same soil applies? And to the students: what do you think, does it matter whether you are dealing with several structures or with one?

Answer: Here it is one structure, and the framework needs to reflect that. The performance indicator is the fatigue lifetime, the measurement is the stiffness of the soil, and it maps onto the natural frequency of the structure. Now, what do you think the design lifetime should be, 30 years? And what should the target probability of failure be? That is a big question.

Question: One question, or a comment: what values of cost do you use? For example, how much does it cost to do the testing of the soil, and what are the costs if failure happens? And do you treat those costs as uncertain? When you have uncertainty around the costs, you can include it as input uncertainty.

Answer: That would be the correct way; there is uncertainty here, and I am just assuming some values: the cost of construction is proportional to the volume of the
steel, and the cost of failure is proportional to the construction cost. So far these are just assumptions.

Follow-up: At some point you would want some sensitivity analysis, because one main problem is that the cost of failure is hard to obtain directly; we have good models for the mechanics, but much less certainty in this part. For my understanding: how do you deal with shared costs in general, costs that are going to be the same regardless of the design? They shift the utility up or down, but the cost of shutdown, for example, will be the same for every alternative, so the shift will not change the outcome.

Answer: Some of the direct costs of failure will be associated with the choice of design, and for how that number is estimated the approach is, as you say, to make guesses and see how sensitive your decision is to that estimate. In this case I assumed that the cost of failure is proportional to the construction cost, and as long as that proportionality constant stays in a reasonable range, the decision does not change. Many of the costs are the same independent of the decision, and in that case only the cost differences matter (see the sketch at the end of this transcript).

Question: What conclusions can one extract from this example? What are the problems, and what are the effects on the design decision?

Answer: This is a particular example where you can see why you would go for a certain design directly, right or wrong; you should also ask whether your model was good enough, and it depends on what you want to extract. You could get a better soil model, come back to this kind of analysis, and find a better relation between soil stiffness and natural frequency.

Comment: What is missing, I think, is that with this type of model and the site-specific information we have, we could also get information directly from the structure itself, from its performance; here we do not get any such information. So it is interesting that even if we had measurements from different sites, we learned that we can use them for future studies. So you mean this information is just for the design stage? Yes, this is just for the design stage. We structure our standards in three categories:
a new structure; an existing structure, but always a single one; or a group of structures, as we said. But you don't inspect all of them, that's too expensive; you inspect two, three, four, we understand. So we have to select which ones, and that is a problem in itself. I like this, the whole framework; I like how the framework lays out what is needed and how it is going to work.
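As referenced in the cost discussion above, here is a tiny sketch of why design-independent (shared) costs cannot change the decision. The utilities reuse the invented numbers from the earlier sketches, and the shutdown cost is an assumption:

```python
# A cost identical for every design shifts all expected utilities by the
# same amount and therefore cannot change the optimal decision.

utilities = {"D1": -1.54, "D2": -1.3984, "D3": -1.4118}   # from the earlier sketch
shared_cost = 0.75                                         # e.g. an assumed shutdown cost

best_before = max(utilities, key=utilities.get)
shifted = {d: u - shared_cost for d, u in utilities.items()}
best_after = max(shifted, key=shifted.get)

assert best_before == best_after    # the ranking is invariant to the shift
print(best_before, best_after)
```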