Okay, thank you. So, welcome from my side. I'm the last one standing between you and the lunch, I know, but remember that I'm not to blame if we are late. I hope I can still keep you awake, and I will speak louder than the sounds your stomachs make. This presentation summarizes, or presents some insights into, the things we have done in working group three, which was led by myself together with Eleni Chatzi, who cannot be here today; she sends her regards. Parts of the presentation were also put together by Elizabeth Bismut, a PhD student of mine, who also cannot be here. So some people have to work while we have fun here. I will stick to the structure that was given: go through the introduction and the aims, what we actually wanted with this working group originally, which was not always so clear at the beginning; then look at some of the achievements; and close with some lessons learned. I don't think I have to repeat the general motivation for VoI analysis. I just want to point out this last point here, which is the specific purpose of working group three: in the end, we said we aim at providing an overview of, and to some degree also links and access to, the analyses and computations that are involved in a VoI analysis, and to give guidelines on how to implement those methods and tools. We have seen that a VoI analysis consists of many different parts, and I will go a bit more into that on the next slide. For some of these parts we have quite elaborate methods, but very seldom have these been put together to actually perform a VoI analysis.
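For orientation, the quantity that all of these pieces ultimately feed into can be written in standard preposterior decision-analysis notation. The symbols here are generic placeholders, not the notation of the slides: C is a cost function, a an action, θ the uncertain system state, Z the monitoring outcome.

```latex
% VoI = prior optimal expected cost minus expected posterior optimal cost
\mathrm{VoI}
  = \min_{a}\, \mathbb{E}_{\theta}\!\left[ C(a,\theta) \right]
  - \mathbb{E}_{Z}\!\left[ \min_{a}\, \mathbb{E}_{\theta \mid Z}\!\left[ C(a,\theta) \mid Z \right] \right]
```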
That's one of the challenges. So we have been trying to collect, identify, further develop, and overview these methods and tools, so that people coming from different disciplines can access the whole set. Along the way we also realized that not a small challenge lies in the vocabulary: people coming from different fields speak different languages. People from health monitoring don't use the same terms as people coming more from the decision-analysis point of view, and then again people who deal with reliability of structures as their main focus. If you say "indicator", it means something very different to those groups, or at least it meant something very different; that's a challenge we recognized during some of the workshops we held, and we tried to address it. Okay, next: achievements. Personally, I don't like the word "achievements" very much, maybe because I'm a scientist and I prefer to look at what the challenges were and to what degree we have worked on them. We know the work is not finished, and the more you know, the more you realize that you still have things to do. But of course we have achieved a few things, and this presentation goes through some of them. By the way, I forgot to mention that not only was Eleni Chatzi leading this group together with me;
she also made the first draft of this presentation, so she actually did her work even though she's not here. Looking at the pure output, in terms of hard and soft outputs, we have produced fact sheets that reflect the work that was done at conferences and in the meetings of the group, and I'm going to look at a few of those in a moment. One thing we have done in the working group is to come up with a slightly different scheme than the one working group two presented. We worked together quite a lot, initially also because the split was not always clear to us, and there is still no hard border between what working group two and working group three have done. In doing so, we realized that for our purpose, looking at methods and tools, it might be useful to structure the problem in terms of an influence diagram that follows the decisions and the uncertainties throughout the process. The colors here are the same ones you saw in the presentations before, so we use the same structure; not exactly the same RGB codes, those will be slightly different, but still green, red, and so on. What we have in this influence diagram is a structured way of formulating a decision problem, in this case the VoI problem for SHM. These greenish-colored parts here are typically physics-based models describing the performance of our structural systems, together with the associated, what I would here call, indicators. Again, we had this discussion about this term; it's not always used in exactly the same way. Basically, these are things that one can observe and measure to learn something about the condition of the underlying system.
Then we have here, in a somewhat red-orange color on my screen, the things related to the modeling of, and the decisions on, the health monitoring. What you can observe depends on your system indicators and on the technologies that you use to make the observations. And then, as we know, there is no benefit in all this health monitoring if we don't take any actions based on the monitoring results; that's what you see here. The actions will then change your system, hopefully make it better, reduce uncertainty, and in particular reduce risk. All of this comes at a cost, or also brings a benefit, and in the end we try to optimize it. There is a time dimension here: this is done in multiple time steps, and I will come back to that. The challenge lies in the fact that we actually have a sequential decision problem, which is somewhat hidden in this slide. Now, the main point of our working group was to look at the techniques here that seem to be less important but are actually crucial: the different methods and tools used in the different parts of the analysis. As I said before, we have to bring different methods together. We need methods to assess the structure and its deterioration models, we need uncertainty quantification and propagation methods, and we need structural reliability methods.
As was also mentioned, nowadays we have all these machine learning tools, and those might go here as well. We have Bayesian analysis, and we also have economic assessments that go there. So there are many different types of analysis and methods needed if you really want to carry out this process, and there is basically nobody who is really an expert in all of these fields. Maybe it would be possible, but I think nobody can claim to be the in-depth expert on all of them. So the challenge was to bring this together, and the fact sheets helped with that in a way. Let me very quickly mention just a few of those fact sheets, shown here in the colors of the parts they correspond to. Here we speak of indicators. In this case we have the paper by, actually, the colleague who asked a question earlier. Why did you ask about this before, when you wrote a paper about it yourself? Ah, you wanted to know whether people are aware of your own fact sheet; the thing is, your fact sheet is in working group three. This is basically the review of vibration-based damage identification methods, which is the most commonly studied type of approach to SHM. It also contains a survey of algorithms for the extraction of condition indicators. Then there is another fact sheet that is much more directly applied: a real case study where the methods are applied and tested, and where in particular the effect of environmental influences on the health monitoring is evaluated. Here, maybe just to mention it quickly, we also had a fact sheet on a large data set that is available and can be used to transfer the methods we develop into practice, looking at data from bridges, from roofs, from all types of structures. And then there are also fact sheets on specific applications; these are some of them,
dealing with very specific applications, from footbridges to ship structures. We also have fact sheets that looked more broadly at the methods themselves as they were applied. We have mentioned Bayesian analysis many times already, and I'll come back to it here, too. Bayesian analysis is challenging in the context of real structural systems, where you deal with heavy models and with complex systems. We also have a fact sheet on robust identification and prediction, which was mentioned before by Costas. There are limits there: there are some caveats to Bayesian analysis, and if you're not careful, then depending on what you do and how you choose your model, you might end up with very different results. So people have also tried to find alternatives to Bayesian analysis, and Ian Smith from Lausanne wrote the fact sheet on one such approach. Then there is a very nice fact sheet by Bernd Lehrer, who again uses these methods and applies them to two examples: one is offshore riser response monitoring, and the second, if you remember, is ice-induced strains in a ship hull. There is also a fact sheet that relates directly to methods for optimizing the decisions: the POMDP method, which is one approach to solving the sequential decision-making problem. Kostas Papakonstantinou, who is probably the best expert on this in our field, wrote that fact sheet together with Eleni. All these fact sheets are available, or will be. I don't want to go into the details, particularly because I'm not the one who wrote most of them. And the collection is not complete: as you see, there are so many different methods that we need, and for each of them one could write multiple fact sheets. So for sure we are not complete.
It's also not possible to be complete without writing, let's say, thousands of pages. But I think we have a good starting point for somebody who is new to the field and wants to learn about the key methodologies and tools in this area. There were, of course, also dissemination efforts: special sessions at conferences, and papers related to the COST Action, inspired by the COST Action, or coming out of scientific missions. This list is much, much longer; I realized that the list on the home page is actually not up to date, and I know of many other papers that are not there yet but will be. So there has been a lot of activity. Now, in thinking about how to do this presentation, I figured I could just go on and give you an overview, but that is a little boring without getting into some depth. So I decided to give a little insight into two of the works that have been done here, and you will forgive me that these are two works I have been involved in myself, because those are the ones I can present best. So maybe I'll give you five minutes of insight into some of the things that were developed during the COST Action, supported also by other sources, but inspired to a large extent by discussions that we had here. One thing that is really central when we speak of methods and tools is that, in the general case, at least if you want to model the whole process relatively realistically, you end up with an enormous decision problem. Why? You start with the monitoring decisions, where you already have many options. But then, in the preposterior decision analysis, you have to model the whole lifetime and, at least in principle, all the possible ways things could turn out in your system.
It starts with the deterioration process, in discretized time steps: year one, year two, and so on. You might have some deterioration, and in a real structure it is not just deterioration yes or no; there could be different types of deterioration at different locations. So this is already a potentially high-dimensional parameter vector. You have a system condition, which you can maybe simplify to safe or failed, maybe not. Then there are inspections that you can do; you can also do additional monitoring, or decide to add monitoring later. You have outcomes of those monitoring and inspection activities, which again form a high-dimensional, possibly infinite-dimensional, parameter vector. You can then make decisions on repair, and maybe you can also change the operation of the structure. There are many decisions you can take, and this is just for one year; then it continues. Such a model really has the problem of exponential, or polynomial, growth, depending on what you are looking at, and already a long time ago people realized that in the general case you cannot find an optimal strategy in such a situation. That also gives us difficulties in identifying the value of information, because the underlying assumption is that we act optimally, and if we cannot actually identify the optimal strategy, that's a bit of a challenge; the problem is PSPACE-complete. So what do we do? Well, this is nothing new, and I will show you that it is actually quite old, but we can give it a smart name that comes from the artificial intelligence community. What we can do, to approach this polynomial or exponential complexity from having so many possible decisions, is to define so-called heuristics, or decision rules,
as they are called in classical decision analysis. So we parametrize the problem by a set of heuristics. Some of you might be familiar with inspection optimization, where value of information has been used since the 1980s or 1990s. Some of these rules for inspecting components would be, for example: whenever the probability of failure of your component or system reaches a certain threshold, you decide to inspect. Or, more practically: you decide to inspect every five years, which is what is implemented in many rules today. Now, that is not going to give an optimal solution, because most likely the optimal solution is not to inspect exactly every five years, but it has been shown quite nicely in some publications that this gives a very good solution for a simple problem. And it is a simple problem, because it concerns not a system but just a component, and just inspection. We have now looked at extending this heuristic approach, or direct policy search as I call it, a term coming from the artificial intelligence community, to more complex systems. So we don't only have a fixed inspection interval or a threshold; we also need a heuristic for the number of components to inspect, for which components to inspect, for how we do repairs, and for when we inspect based on monitoring results. One can add all of these. We have looked a bit into how to optimize, for example, the locations of inspection or monitoring, using a heuristic that says: we should inspect or monitor those components that give us the highest value of information. Now, that is something we cannot compute directly, but we can find an approximation of the value of information.
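Such heuristics are easy to write down once the problem is parametrized. Below is a minimal sketch; the rule forms and the parameter names (dT, pf_thresh, eta) are illustrative assumptions, not the exact rules from any fact sheet:

```python
def inspect_now(t, pf_estimate, dT, pf_thresh):
    """Inspection rule: inspect every dT years, or earlier if the current
    probability-of-failure estimate exceeds the threshold pf_thresh."""
    return (t % dT == 0) or (pf_estimate > pf_thresh)

def rank_components(pf, importance, eta):
    """Rank components by a crude value-of-information proxy combining how
    unreliable a component is (pf) with how important it is for the
    system (importance); eta is a tunable heuristic parameter."""
    scores = [(p ** eta) * w for p, w in zip(pf, importance)]
    return sorted(range(len(pf)), key=lambda i: -scores[i])
```

The point of direct policy search is that dT, pf_thresh, and eta become the optimization variables, replacing the intractable search over all possible decision sequences.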
That approximation depends, among other things, on how reliable the component is and on the importance of the component within the structure. We parametrize this dependence, and we then optimize the whole thing. Okay, now, how does this heuristic help? Here we come back to the challenge I showed with the decision tree. In the end, we want to find the strategies that minimize life-cycle costs. The operator comes to you and says: tell me how I minimize my cost. This of course includes the risk and the reliability, and they might have constraints on those. So you have to calculate the expected cost, which is the sum of the risk, the inspection and monitoring costs, and the repair costs. The risk, which again seems trivial, is the failure cost times the probability of failure. Everything here is conditioned on the strategy, and that is important: we fix the strategy and evaluate the cost of that strategy. But even that still has this polynomial or exponential character, because of the observation outcome Z, which is a high-dimensional, potentially infinite-dimensional, vector. However, once we fix both the heuristic and what we will observe in the future, the good thing is that no decision remains open, because the decisions are determined by the heuristic and by what we observe. So when we simulate, we can directly evaluate the risk for a given S and Z: the probability of failure here depends on your actions, but the actions are prescribed by S and Z. And here we still need methods, because we have a reliability problem: there is a structural system, so you need structural reliability methods that can actually deal with this, and it is a Bayesian analysis.
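The evaluation step just described can be sketched in a few lines of plain Monte Carlo. Everything concrete below, the one-component deterioration model, the damage-to-failure-probability mapping, and the cost figures, is invented for illustration:

```python
import random

def simulate_lcc(params, n_samples=500, horizon=20, seed=0,
                 c_insp=1.0, c_rep=10.0, c_fail=1000.0):
    """Monte Carlo estimate of the expected life-cycle cost of a fixed
    heuristic strategy S = (dT, d_thresh): inspect every dT years and
    repair when the measured damage exceeds d_thresh. The one-component
    deterioration model and all cost figures are toy assumptions."""
    rng = random.Random(seed)
    dT, d_thresh = params
    total = 0.0
    for _ in range(n_samples):
        damage, cost = 0.0, 0.0
        for t in range(1, horizon + 1):
            damage += rng.expovariate(5.0)   # random annual damage increment
            pf = min(1.0, damage / 10.0)     # toy damage -> failure probability
            cost += c_fail * pf              # risk term: failure cost x Pf
            if t % dT == 0:                  # action fixed by the heuristic...
                cost += c_insp
                if damage > d_thresh:        # ...and by the simulated outcome Z
                    cost += c_rep
                    damage = 0.0             # repair restores the component
        total += cost
    return total / n_samples
```

Because the inner loop already works with the conditional probability of failure rather than sampled failure events, the variance across samples stays moderate, which is the point made above about a few hundred Monte Carlo samples being sufficient.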
So this is a very challenging problem, but at least it is a field where efforts exist and methods are being developed, for example by Costas, and you can plug those methods in. By the way, it looks trivial, but some things are not trivial at all: here you have conditioning on Z, and here you have conditioning on the observations up to time t, whereas this term depends on observations into the future. So in principle it should also depend on the future observations, but we can show that it doesn't. There are some things behind these simple-looking formulas that are not so simple, but this is correct, and in the end we arrive at this expression. Then we have to deal with the fact that we don't yet know what we will actually observe, because that is a random, high-dimensional quantity. But we can handle this with a Monte Carlo approach. Why? Because the variability of this function is not so large. Remember that for probability-of-failure calculations we need a lot of samples, because the variance of the indicator function is very large. Here, though, we have already done the integration over the indicator function; we have already calculated the conditional probability of failure, and the variance of that quantity is not so large. With a few hundred samples you can solve this high-dimensional problem, and there is no better method than Monte Carlo at this point. Then you have to do an optimization. I'm not going into detail, but it is a stochastic optimization: your objective function is evaluated by Monte Carlo, so it is noisy, and it is better to use stochastic optimization. We have been using a cross-entropy-based approach that gives quite good results, which you see here, and then you optimize all your parameters.
Here, for example, you have, okay, I'm not going to explain in detail what these mean, but the number of inspections, the time, and some other parameters. In the end you have your objective function, you can find an optimal strategy, and that then enters the value of information analysis. So this complexity of the decision process, together with the fact that we also have a complex physical system to represent, is a challenge. This is one approach, but it is not a complete solution yet; there are many things still to do, and there are also other approaches. As I said, the POMDP is another approach, which works better in some cases than this one. So that is one of the challenges. Now, I'm looking at the time. Just very briefly, I want to mention another challenge that we see. We have seen many applications, but often we are dealing with simplified problems, while the real problem is a structural system. Typically we need a structural system reliability analysis, because we need to calculate the probability of failure in order to estimate the consequences and the effect of the monitoring system. And there are actually not that many methods that can realistically calculate reliabilities for real structural systems.
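One family of methods that can handle such high-dimensional reliability problems is subset simulation. Here is a minimal generic sketch for independent standard-normal inputs; it is a textbook variant, not any specific implementation from the Action, and it omits refinements such as adaptive proposal scaling:

```python
import math
import random

def subset_simulation(g, dim, p0=0.1, n=500, max_levels=20, seed=2):
    """Estimate P[g(X) <= 0] for independent standard-normal X via subset
    simulation with a simplified component-wise Metropolis resampler."""
    rng = random.Random(seed)
    norm_logpdf = lambda u: -0.5 * u * u
    samples = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n)]
    pf = 1.0
    for _ in range(max_levels):
        scored = sorted(((g(x), x) for x in samples), key=lambda t: t[0])
        n_seed = int(p0 * n)
        thresh = scored[n_seed - 1][0]      # p0-quantile of the g-values
        if thresh <= 0.0:                   # final level reached
            n_fail = sum(1 for v, _ in scored if v <= 0.0)
            return pf * n_fail / n
        pf *= p0
        seeds = [x for _, x in scored[:n_seed]]
        samples = []
        for x in seeds:                     # MCMC inside {g <= thresh}
            cur = list(x)
            for _ in range(n // n_seed):
                cand = list(cur)
                for j in range(dim):        # component-wise Metropolis step
                    prop = cur[j] + rng.gauss(0.0, 1.0)
                    if math.log(rng.random() + 1e-300) < \
                            norm_logpdf(prop) - norm_logpdf(cur[j]):
                        cand[j] = prop
                if g(cand) <= thresh:       # stay in the intermediate set
                    cur = cand
                samples.append(list(cur))
    return pf  # fell through after max_levels: crude lower bound
```

With g(x) = 3 - x[0] the true failure probability is about 1.3e-3; the sketch should recover the right order of magnitude with 500 samples per level.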
So there has been work performed during this COST Action, and also reported in it, on calculating the system reliability conditional on information, which we always have to do with a Bayesian approach on top. One of the approaches was developed by Ronald here at BAM. He looked at the case where a system is represented by a set of components, each of which can be in an okay condition or in a failed condition; you can see immediately that you then have two to the power of n possible system states. He then looked for efficient methods for this type of system representation, to calculate the conditional probability of system failure over time. He uses a subset-simulation-based approach to deal with the high dimensionality; again, this problem has many, many random variables, and subset simulation works quite well here. We will still have a lot of uncertainty in the result, but this is computational uncertainty, not modeling uncertainty. So these are some of the approaches that were developed, discussed, and worked on during this COST Action. There was also an attempt, not complete yet, and you can still give input and feedback, at a first version of a database of software that is available for uncertainty quantification and, more specifically, for value of information analysis. Now, you will not find dedicated software packages for VoI analysis.
What you find is software packages for UQ, uncertainty quantification, and for reliability. Of course, those toolboxes contain many of the methods that we need for the VoI analysis, not all, but many, so you can find this information there. Just to show one example of one of the tools described there; this is available to the community. Since we are going to finalize this in the coming period, please have a look at it and let us know if we have forgotten something. Okay, finally, and this is mostly the work of Eleni Chatzi and her group, there is a benchmark: not so much for the decision part yet, but for the SHM part and, to some degree, the structural part. Eleni is not here, so I cannot really give you many insights into the details, but generally it is quite simple. There is a relatively simple case of a beam on three supports. You have the possibility to introduce damage in the elements indicated here, so there are six possible damages one can introduce, and the structure is analyzed using plane-stress elements. There are many things one can vary that are also present in real-life systems. For example, skipping forward here: you can choose different deterioration models, you can choose different environmental conditions, and these environmental conditions affect the properties of the model; you can change material properties and loads. It is also open source,
so you can modify whatever you like, but there is also a Python-based GUI, which helps people who are not so comfortable working directly with the code. The points you see here are the points where you can really get measurement data. So you can assume that there are sensors there, and you can generate hypothetical sensor data. You can then use the results to test your vibration analysis and your SHM algorithms, and you could also extend that, and this is something we are trying to do, to test your whole VoI framework: let's say you put some sensors there, can you calculate the value of information of those sensors? As with every benchmark, it can only represent what you put inside. It represents the uncertainties and the environmental factors that are actually encoded in it; it does not include, of course, what you sometimes find in real data, the things you don't expect at all. But I think it is a very nice tool for the community. Since I can really feel the hunger piling up here, I'm concluding with a few lessons learned that I want to mention. These are very personal; if you ask the people who participated in the working group, they might make very different statements, but I guess at least some of these are common agreement. Not surprisingly, and we knew this before, you need a multitude of methods to do a VoI analysis, and that is one of the challenges we face: if you want to do it, you actually need to spend quite a lot of time. One thing we didn't do in this COST Action, but which I keep thinking one should do, is the VoI of a VoI analysis: what is the value of information of doing a value of information analysis?
And that might not always be positive, I would think, particularly if you look at a small problem. So there is a challenge. Methods exist for basically all the individual tasks I showed you, but they are not always known to everybody, and there is a problem with terminology. There is also the problem that we cannot simply say: the best method in structural reliability is this, the best method for evaluating cost is this, the best tool for doing the preposterior analysis is this; just put them together. That would be great, but the methods and tools that are best suited are very much application specific, and that is a bit of a challenge. We cannot say: for this type of problem use this, for that type of problem use that. It depends a lot on the application and on the modeling that we do. This brings me to maybe the fourth point, which is that, from my point of view, the challenge in real applications typically lies in the modeling. How do we model the problem? You can make a model that puts a lot of burden on the computations, or you can make a model that is much simpler in terms of computations. So the real challenge, and that is very difficult to put into a cookbook-type document that says do this, then this, then this, is the modeling. And we engineers always say: this is what we can do, we are good at modeling, we know which models to use. But in VoI, this is really the challenge. Was that the signal for the lunch break?
No? Yes? Okay. My math teacher also always went five minutes past the bell, so I will do the same, or three minutes. Okay, so: I spoke about the heuristics, so I'll skip that. System reliability, I also mentioned, is a challenge. And, as we saw, even the common vocabulary is a challenge. Remember the workshop we had in Munich: we spoke about indicators, and after five minutes we realized that if you say "indicator", you mean something very different than when I say "indicator", and somebody else means yet another thing, but we didn't realize that at the beginning. When we write papers in VoI we always have this problem: we start out and call "a" an action or a decision, then we go on and find that people in mechanical modeling use "a" for something very different, the crack length, and then you go further and read a paper about the statistical assessment of health monitoring data, and they use "a" for yet another thing. Then you have to put these together and come up with a different symbol, but then the reader from the SHM community will not understand, because "a" means something else to him or her. This is, of course, always a problem of interdisciplinary work, but it is an issue we have to address somehow. We also tried in this COST Action to improve that, to find a common ground in which we can communicate, or at least to create the understanding that "a" may mean something different than it does in my own narrow field. All right, these were some of the lessons learned and conclusions. So, as a repetition: we need to combine different models, methods, and tools; this is the interesting but also the challenging part.
We have provided a collection of methods and tools that is now available, or will be available, to the public, as a starting point for future work and for people who are interested in this field. And participants of the working group have, I think, at least in my case, further developed methods and tools based on the discussions we had during this COST Action, including the development of the benchmark. That concludes this presentation, and I'm sure we have one or two questions. If you want to make yourself unpopular with everybody else, you can now ask one or two questions. All right, you really like to be unpopular!

[Audience] Yeah, exactly; nothing to do with that. In your description of this heuristic approach, you mentioned that the selection of which details to monitor or to inspect would depend on a number of factors, among them the importance of these details, I assume for the overall integrity. In this context I just wanted to mention, as you well know, but since you didn't mention it, I will, that the dependency between the phenomena at the locations which you observe and the really critical spots in the structure also plays a huge role. There is a strategy where you actually design parts of your structure to be really weak at a location where failure doesn't matter at all, so that you have a very strong dependency between deterioration at that point, which may be an easy-to-inspect or easy-to-monitor location, and the really critical, and maybe more expensive to inspect, locations. Like these... I can't remember what we call them, Daniel, but like...

[Presenter] Yes, basically dummy points or something?
[Audience] Yeah, like dummy components, yes; a bit like a fuse: it doesn't really matter there, but it contains a lot of information about places where it does matter.

[Presenter] Yes. So in this context, maybe just to add to this: a factor I did mention is that the probability of actually having some defect matters. The reason is that the structures we are dealing with are high-reliability structures, so 99% of the time when you are looking for something, you will not find anything, because they are doing well. It turns out that those components which are weaker can give you more information, because it is more likely that you find something there, and even if they are not the critical ones, they can serve as an indication for other components, through the dependence that you have: they are built by the same manufacturer, they are under the same environmental conditions, and so on. So, thank you for allowing me this clarification: this is one of the reasons why a component with a higher failure probability, or lower reliability, potentially gives a higher value of information when you look at that part of the structure.

[Audience] I would also like to address something else. You commented on Ian Smith's approach for robust estimation, right?

[Presenter] Well, I didn't really comment on his approach; I think I mentioned the general problem that he is trying to address. I must admit that I don't feel too comfortable speaking about his approach, because I don't feel I have understood it well enough.
So we will not go into a discussion of that without Jen actually being here. But there was also a comment earlier about the significance of the choice of priors, and how that can seriously affect the ranking of decision alternatives in the end. With these two comments I just want to highlight that I still think this community is too determined to find the one system, and is missing the importance of realizing that different assumptions, for instance different assumptions on priors, actually constitute different possible universes, different possible systems, and that you need to pull the effect of those possible systems into the decision analysis in order to account for them consistently. In many, many cases you can debate assumptions, you can debate, let's say, model adequacy, and in the end you will not come out with one absolute model which can explain it all, right? Therefore you need to account for that, and robustness has to be seen relative to the ranking of decision alternatives. On this aspect I think we are failing; we need to make a slight turnaround, wake up to reality, and realize, exactly as you also mentioned in the beginning, that the more we learn, the more we appreciate what we don't know. This topic was definitely one of the eye-openers for myself.

Yeah. But there is also an imbalance between, let's say, this nice theoretical framework, which is not just nice...
I mean, I believe in that framework very much, but we also have to make some amendments for the reality of real life: we are often not able to consider all the models, because we have only a limited amount of time and have to do something very fast, or, even if we had more time, because we are dealing with problems in many different areas. My point is that the concerns raised in this area, not just by Jen Smith but by other people in other communities as well (I first came across this in the calibration of hydrological models), address issues that we observe in real life, and there is also a Bayesian way to deal with them. But I think we should not make this into a discussion among ourselves, which we can have over dinner. Actually, I once took a course in how to handle nasty questioners, back when I was a PhD student. But I don't want to cut this discussion short.

Yeah, just maybe one more comment. What you see in much of data-driven modeling right now is that, in appreciation of the fact that you might have these different models popping out of your data, schemes for model averaging are also being applied. But that averages out the effect of the potentially different models at the interface between the modeling and the decision making, and this is where we need to be very careful: we need to do the model averaging within the decision making. This is like a new type of paradigm; it is something to pay attention to. Okay.
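The distinction just raised, averaging the models before deciding versus averaging over the models inside the decision analysis, can be shown with a toy example. This is a sketch with invented numbers, not a model from the talk: two hypothetical models predict a load demand, and a design capacity must be chosen under a nonlinear (exceedance-penalty) cost.

```python
# Two hypothetical models with equal posterior weight predict a load demand.
MODELS = {"M1": 10.0, "M2": 20.0}      # predicted demand under each model
WEIGHTS = {"M1": 0.5, "M2": 0.5}       # posterior model probabilities
CAPACITIES = [10.0, 15.0, 20.0]        # candidate design alternatives

def cost(capacity, demand, c_fail=100.0):
    """Construction cost proportional to capacity, plus a penalty if exceeded."""
    return capacity + (c_fail if demand > capacity else 0.0)

# (a) Average the model predictions first, then decide on the averaged prediction.
avg_demand = sum(WEIGHTS[m] * d for m, d in MODELS.items())
a_avg_first = min(CAPACITIES, key=lambda a: cost(a, avg_demand))

# (b) Carry the model uncertainty into the decision: average expected costs per action.
def expected_cost(capacity):
    return sum(WEIGHTS[m] * cost(capacity, MODELS[m]) for m in MODELS)

a_decision = min(CAPACITIES, key=expected_cost)

print(a_avg_first, expected_cost(a_avg_first))
print(a_decision, expected_cost(a_decision))
```

Because the cost is nonlinear in the demand, the two orders of operation disagree: the averaged model makes the middle capacity look safe, while averaging inside the decision analysis reveals its exposure under the pessimistic model and prefers the larger capacity.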
Yeah, but okay, that is a formal question. If I may just say one sentence: the one issue with this is that you are, in fact, assuming that you know the utility function throughout your process, and that is something you often don't. But let's discuss this later. Okay, one more question, by York.

A very short question, or maybe actually two questions. The first one is: bridge design, or structural design in general, is currently based on the safety factor concept, so it is not a full Bayesian approach. Now there are two parts. The first is: if you were the one doing this generalization and standardization work, would you move to a full probabilistic approach, in terms of looking at what data we actually have and how the design concept works? And if you answer that question with yes, what is still missing to get to that point? You already mentioned modeling issues, for example what the prior distributions are. So if we want to do the full probabilistic approach, how, from your point of view, should we get to that goal at some point?

Okay. Well, first of all, I don't believe that in the near future we should go to a full probabilistic approach; maybe not even in the medium term, maybe only in the long term. I think there are also different steps. One thing, and here I'm looking at Jochen Köhler because he is really the expert in this, is the code development that is done now.
There is a level behind the code which is based on probabilistic analysis, and we try to calibrate the code against it. In principle, that basis could be used for actually designing structures. The reason why we don't do it is that it would mean, first of all, that everybody in the structural engineering community would have to be an expert in these methods, or at least reasonably proficient, so as not to make mistakes. And then there is the question of the benefit: what do you gain by doing everything probabilistically? There is a balance to this. Today we should stick to the format as we have it now. But maybe in 20 years the computer is doing everything by itself, and for the computer, using the standardized probabilistic models, which are not necessarily what we would call nominal models, might actually be better than what we have today. What we realize is that the current code and the current practice are a strange mixture of empirical knowledge, things that have gone into them through a huge community of people with a lot of experience, and lessons from things that have gone wrong, where we realized: okay, there is a problem here, we have to do something. The issue is that in many cases this is not very well documented, although we would like to think that the structure we have is semi-probabilistic. It was a very short question but a very long answer, I realize, and I'm afraid I'm not yet reaching the point where I see the closure to the question, so let me try to give a very short answer, and then we can continue afterwards. The very short answer is: I believe the goal is good, and we should try to go there. I don't believe we should implement that in the near future, but we should try to make the current code-based design more clearly documented.
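The "level behind the code" mentioned here, a probabilistic analysis against which the semi-probabilistic format is calibrated, can be illustrated with a deliberately simplified sketch. Assuming lognormal resistance and load with illustrative logarithmic standard deviations, a single global safety factor on the medians maps one-to-one to a reliability index beta; none of the specific numbers below come from the talk or from any actual code.

```python
import math

SIGMA_LN_R = 0.15   # assumed uncertainty in resistance (log std. dev.)
SIGMA_LN_S = 0.20   # assumed uncertainty in load effect (log std. dev.)

def beta_for_gamma(gamma):
    """Reliability index when the median resistance is gamma times the median load.
    With lognormal R and S, ln R - ln S is normal and failure means R < S."""
    return math.log(gamma) / math.hypot(SIGMA_LN_R, SIGMA_LN_S)

def gamma_for_beta(beta_target):
    """Calibrate the safety factor to a target reliability index (inverse of above)."""
    return math.exp(beta_target * math.hypot(SIGMA_LN_R, SIGMA_LN_S))

def failure_probability(beta):
    """P(failure) = Phi(-beta) for a normal safety margin."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

gamma = gamma_for_beta(3.8)   # beta = 3.8 is a common code target value
print(f"calibrated safety factor: {gamma:.2f}, "
      f"pf = {failure_probability(beta_for_gamma(gamma)):.1e}")
```

A real calibration works on partial factors, characteristic fractiles, and many design situations at once, but the principle is the same: the deterministic factor is the visible residue of a probabilistic target sitting behind the code.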
We need to understand why certain factors are there and what the underlying assumptions are, and we should move away from, or to the extent possible get rid of, those empirically based factors that are there somehow without anybody knowing why or where they came from, so that we are able to slowly move into the process of formalizing and better understanding the design process. And then we will probably be doing fully automated, fully probabilistic design 100 years from now.

Okay, thank you very much. Thank you very much for your presentation, the insights, and the discussion.