Thank you, Sebastian, for the kind introduction. I'm happy to present to you the developments of Working Group 2. I'll do this together with Michael, who will join me in the presentation. The presentation was also prepared in close collaboration with Marios Chryssanthopoulos from the University of Surrey, who unfortunately could not be here, but he sends his regards to all of you.

As the title of the Working Group says, what we are looking at in Working Group 2 is the link between structural health monitoring technologies and structural performance. What I'll do is briefly recapitulate the aims of this Working Group as they were set out in the memorandum of understanding, and summarize the achievements and some dissemination activities. The two main outcomes of this Working Group, the framework that we have delivered and the more recent work on the treatment of uncertainties, will get some more attention in this presentation, and at the end we'll wrap up with some conclusions.

So we had two aims. When this proposal was written, looking at the large variability of structural health monitoring technologies, a clear need was identified to somehow structure and categorize them. So categorizing SHM technologies, looking at what type of observations and what type of information they provide, relating this to structural performance, and somehow identifying best practices was the first aim of this Working Group. The second aim: when we discuss information provided by observations, information provided by structural health monitoring, you can only quantify the information content when you identify and treat all uncertainties in a consistent way. So quantifying the links between measured quantities and structural performance, and treating all uncertainties therein consistently, was the second aim of the Working Group.

Now, at the start of this COST Action, we were very happy to attract the attention of a large number of people; many of you who are now in the room have contributed to Working Group 2. We've had input from people working on different structural types, not only bridges but also antenna towers and historical masonry structures; on different SHM technologies, interpreted in a wide sense as including, apart from accelerometers, also corrosion sensors, fiber optics and many other possible technologies; on data analytics, so how we can translate those observations into information, or at least take a first step in that process; and a number of other things. This was basically established or identified in the first phase of the COST Action. We've had significant activities in the first four workshops, with more than 20 presentations and fact sheets that record current practice dealing with all this.

Having this wide range of input, dealing with different structural types and technologies, helped us to set up a categorization framework to structure all these contributions. The aim was to have something that promotes the use of a common language and terminology: what is performance, what are performance indicators, observations, technologies and so on. As was very well explained in the first presentation, the aim at the end is to come to informed decisions. So the starting point is a system or a structure and its types of performance, and from there we want to come to informed decisions, or a ranking of the decisions that can be taken.
The framework also needed to enable the identification or setting out of different generic paths, different types of technologies and strategies that could be compared and then evaluated. And the framework of course also needed to be useful for the overall theoretical framework from Working Group 1, and should allow Working Group 3 to identify the different points at which the methods and tools intervene. This categorization framework was described in a first fact sheet on SHM technologies and structural performance.

For the second part of the work, we also started by looking at current practice. We basically asked people in the Working Group to fill in a questionnaire that allowed us to identify the current practice in the treatment of uncertainties. This work was recently finalized, or brought to a next step, in a summary fact sheet on the classification and treatment of uncertainty, on which we'll say a few words more.

The dissemination is of course partly through the fact sheets. Those from the first workshop are available on the website, and a number of others are available to all people in the COST Action. We have of course also contributed to some of the sessions that were listed by Sebastian.

Now, this framework has been presented a number of times, so I'll be very brief about it and just focus on those points that are also important in view of the discussion on uncertainty that follows next. Basically, what we've realized is that if we're talking about performance, the different types of performance are also determined by the different structural types, and as said, we've had contributions dealing with different types, not only bridges. Structural health monitoring, or the experiments from the first presentation, provides us with information. It allows us to make observations that link back to the indicators, with the aim, at the end, of making informed decisions or ranking decisions.

I'll give a few examples next, but let me focus on this link here between the observations and the indicators. In what follows, we'll make a distinction between direct links and indirect links, in the sense that in some cases there is a straightforward link from the observations to the indicators. For example, if we consider data from accelerometers that are deployed on a structure, from the vibrations that are picked up we can extract natural frequencies and mode shapes with system identification algorithms. So that's a more direct link.
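To make that direct link concrete, here is a minimal sketch, on synthetic data with assumed values, of extracting natural frequencies from an ambient acceleration record by Welch spectrum peak picking. This is only a simplified stand-in for the system identification algorithms mentioned above; operational modal analysis methods used in practice, such as stochastic subspace identification, are considerably more involved.

```python
# Minimal sketch of the direct link: ambient accelerations -> natural
# frequencies. Synthetic data; all values are assumed for illustration.
import numpy as np
from scipy.signal import welch, find_peaks

fs = 100.0                        # sampling rate [Hz] (assumed)
t = np.arange(0, 600, 1 / fs)     # 10 minutes of monitoring data

# Stand-in for measured accelerations: two modes at 2.0 Hz and 5.5 Hz
# excited by ambient vibration, plus broadband sensor noise.
rng = np.random.default_rng(0)
accel = (np.sin(2 * np.pi * 2.0 * t)
         + 0.5 * np.sin(2 * np.pi * 5.5 * t)
         + rng.normal(scale=0.8, size=t.size))

# Averaged power spectral density, then pick the dominant peaks.
freqs, psd = welch(accel, fs=fs, nperseg=4096)
peaks, _ = find_peaks(psd, height=0.1 * psd.max())
print("identified natural frequencies [Hz]:", freqs[peaks])
```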
But of course the observations can also be used in an indirect way. If we think of the same vibrations and the same mode shapes and natural frequencies, those can also be used to update or calibrate a model, which would involve solving an inverse problem. This would be an indirect link, and, as I've already made reference to the uncertainties, the way in which the uncertainties are treated could also be somewhat different, or perhaps different types of uncertainties will intervene.

So, two examples from the fact sheets that were collected within the frame of the Working Group. This one, by Marios Chryssanthopoulos and his team, is on the Great Belt Bridge, a bridge type where fatigue is of concern, and what you can see here are two optional paths that could support decisions on life extension, also taking into account possible actions such as strengthening. Fatigue can be monitored by keeping track of the stress ranges; stress ranges can be identified by measuring strains, and this can then be used in the process. Alternatively, the indicator is the crack width: we can try to measure crack widths and then also come to decisions.

Another example is the Z24 bridge, which was also presented a couple of times in the course of this Action. The Z24 bridge was tested within the frame of a Brite-EuRam European research project. It was monitored for one year, vibrations were picked up, and at the end of this year the information was used to set up a baseline model. That model was then used to see whether damage that was deliberately applied to the bridge could be identified from changes in mode shapes and natural frequencies. If such a monitoring system were deployed on a structure actually in use — and the Z24 was a real structure — it would help to plan inspections or repairs and, in the end, ensure the safety of the structure.

Now, when it comes to the uncertainties: this life cycle assessment finds its basis in the general framework of structural reliability. The distinction shown here comes from a publication by Michael; the reference is not included in these slides, but you'll find all references in the two fact sheets that we have written. In structural reliability, a distinction is made between different types of uncertainties: physical uncertainties, which could be considered as inherent, and statistical and model uncertainties, which could be considered as epistemic or reducible. These make the failure probability itself uncertain, and of course this is also something which can and will evolve over time, as will the distinction between the different types of uncertainties.

So that's the general framework, and what we've tried to do is see how structural health monitoring adds a layer of uncertainty on top of that. I'll start here by discussing the uncertainties that are involved in model calibration — this is the indirect link between the observations and the indicators that I just referred to — and then Michael will take over to discuss the uncertainties that pop up when, let's say, we have a more direct link between observations and indicators.

The idea here is that the information or observations that we collect from a sensor network can help us to update a model of the system, and thereby also the predictions that we make. In the field, natural frequencies and mode shapes are often used for updating or calibrating models, simply because they can be extracted from the structure while it's in operation: the excitation we have under ambient vibration is sufficient for identifying them. This model calibration is usually done by solving a nonlinear least-squares problem using optimization methods. What is important is that it's an inverse problem, and often an ill-posed one, which means that uncertainties will have a big effect on this process. Now, of course, the framework that we can apply to treat all these uncertainties in the model calibration process is the Bayesian framework that was already referred to in the first presentation.
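As an illustration of this indirect link, the following sketch calibrates a hypothetical beam model by nonlinear least squares, matching predicted to measured natural frequencies. The beam formula, the measured values and all parameters are assumptions for illustration only; a deterministic fit like this is the simplest counterpart of the calibration discussed above and, unlike the Bayesian treatment, yields no uncertainty estimate by itself.

```python
# Minimal sketch of the indirect link: calibrating a model parameter so that
# predicted natural frequencies match measured ones (an inverse problem).
# Hypothetical example; all values are assumed for illustration.
import numpy as np
from scipy.optimize import least_squares

L, m = 30.0, 2000.0   # span [m] and mass per unit length [kg/m], assumed known

def predicted_frequencies(EI, n_modes=3):
    """Natural frequencies [Hz] of a simply supported beam:
    f_n = (n^2 * pi / 2) * sqrt(EI / (m * L^4))."""
    n = np.arange(1, n_modes + 1)
    return (n**2 * np.pi / 2) * np.sqrt(EI / (m * L**4))

f_measured = np.array([2.05, 8.3, 18.1])  # e.g. identified from ambient data

def residuals(theta):
    # theta[0] is the bending stiffness EI; the frequency misfit is minimized.
    return predicted_frequencies(theta[0]) - f_measured

sol = least_squares(residuals, x0=[1.0e9], bounds=(1e7, 1e11))
print(f"calibrated bending stiffness EI = {sol.x[0]:.3e} N m^2")
```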
It's important to know, or to realize, that many different types of uncertainties are involved. This is from a paper by Kennedy and O'Hagan that is cited a lot in the literature; they distinguish between parameter uncertainty, model inadequacy, residual variability, parametric variability, observation error and code uncertainty. Now, distinguishing between all of these is a challenge, because there is a limit to what you can identify from data, and accounting for all of them is also highly challenging. What we see is that although this general framework is available, and everyone will acknowledge the importance of these uncertainties, in a lot of work the focus is only on a few of them, using a quite simplified representation of the uncertainty. And here, I think, lies also the risk of bias that was already referred to in the first presentation. So this is definitely a challenge for us to continue working on in the future.

I can be brief on these, the non-probabilistic methods, since we've already heard that the probabilistic methods are superior. But when you review this work, you'll see that the probabilistic approach is not the only approach, especially when it comes to epistemic uncertainty. A lot of different alternatives have been proposed, with little or no consensus existing on the subject, and the preferred method often depends on the background of the person: everyone with a background in reliability engineering will of course advocate the use of probabilistic methods. Also in this context, the probabilistic methods seem to be the most widely applicable, as they accommodate the different uncertainties that arise in the process. With this, I'd like to hand over to Michael.

Yeah, thank you very much, Geert. So, statistical uncertainties appear everywhere in SHM where we use data and calculate indicators based on data. There are three main sources of statistical uncertainties. First, any observations that are measured with SHM technology are just noisy versions of the underlying physical quantities: if we're measuring accelerations, we don't get exactly the accelerations, but slightly perturbed versions of them, due to noise from the sensors. Second, the observations are only obtained in a finite time window, while the exact computation of, say, frequencies from acceleration data would require an infinite amount of time to converge to the true values if we just have ambient excitation. Third, we have insufficient information. In some cases we could compute our indicators exactly if we just had more information; for example, if we knew exactly the forces acting on the structure, we could also compute the frequencies of the structure exactly. But since we only have ambient excitation, we can only assume some statistical properties of the unknown or insufficient information, and then we have statistical uncertainty due to that.

So nearly all indicators that are computed from data are actually random variables, having some probability distribution and thus some statistical uncertainty. The quantification and treatment of this uncertainty is crucial for monitoring in general: we need to know whether a change in an indicator is just due to natural statistical variability, or whether there is actually a significant change in the structure, indicating some abnormal behavior.
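The following sketch, on synthetic data with assumed values, illustrates this point: the same "true" structure is observed in 200 independent finite-length windows with sensor noise, and the resulting frequency estimates scatter around the true value, i.e. the indicator is a random variable. The sub-bin parabolic interpolation is just one simple way to obtain a continuous estimate from the spectrum.

```python
# Minimal sketch: a frequency estimate from a noisy, finite-length record is
# itself a random variable. Synthetic data; all values assumed.
import numpy as np
from scipy.signal import welch

fs, T, f_true = 100.0, 120.0, 2.0  # sampling rate [Hz], window [s], true freq [Hz]
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(1)

estimates = []
for _ in range(200):                               # 200 independent windows
    x = np.sin(2 * np.pi * f_true * t + rng.uniform(0, 2 * np.pi))
    x += rng.normal(scale=2.0, size=t.size)        # sensor noise
    freqs, psd = welch(x, fs=fs, nperseg=2048)
    k = int(np.argmax(psd))                        # coarse spectral peak
    y0, y1, y2 = np.log(psd[k - 1:k + 2])          # parabolic peak interpolation
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    estimates.append(freqs[k] + delta * (freqs[1] - freqs[0]))

est = np.asarray(estimates)
print(f"frequency estimate: mean {est.mean():.4f} Hz, std {est.std():.4f} Hz")
```

The spread of such estimates across windows is exactly the statistical uncertainty that the Gaussian description discussed next is meant to capture.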
So, for uncertainty quantification, in the majority of cases indicators are simply assumed to be Gaussian distributed, which is the simplest kind of distribution. That can often be justified through the convergence properties of the indicators that we compute: if we take time series and have some averaging in the statistics, then from the central limit theorem we can actually show that some of the indicators are indeed Gaussian distributed. In these cases, the covariance that we can obtain contains all our uncertainty information, which would not be the case with more complicated distributions. We can compute that covariance directly, for example as a sample covariance, if we have several instances of our indicators. And if that is complicated to do, we can also use some kind of sensitivity-based propagation: starting from a sample covariance that is directly linked to the data, we propagate it to the computed indicators. In other cases, indicators may originate from pattern recognition or more sophisticated statistical time series analysis, and then we can derive the distributions from those methods.

Regarding uncertainty treatment, the main point is to obtain confidence intervals on our indicators, which is very easy in the Gaussian case. When we know the covariance, and hence the standard deviation, we know that the three-sigma interval contains the true value of the indicator with 99.7% probability. For scalar indicators, we can directly use that to set up thresholds. For multivariate indicators, we can use other tools like Mahalanobis distances or control charts to treat the uncertainty involved in the indicators, just to name a few. And in a general setting, we can use hypothesis tests, where we check for parametric changes in our system based on the underlying distributions of the indicators, and set up thresholds for decisions from there, getting confidence intervals based on the properties of such test statistics.
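Here is a minimal sketch of these two threshold ideas, with hypothetical numbers: a three-sigma interval for a scalar indicator, and a chi-square threshold on the squared Mahalanobis distance for a vector of indicators, both under the Gaussian assumption.

```python
# Minimal sketch of uncertainty treatment via thresholds. Hypothetical
# baseline data: repeated estimates of two natural frequencies [Hz].
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
baseline = rng.multivariate_normal(mean=[2.00, 5.50],
                                   cov=[[4e-4, 1e-4], [1e-4, 9e-4]], size=100)
mu = baseline.mean(axis=0)
cov = np.cov(baseline, rowvar=False)   # sample covariance of the indicators

# Scalar indicator: three-sigma interval (99.7% coverage) on the first frequency.
sigma = np.sqrt(cov[0, 0])
print(f"f1 in [{mu[0] - 3 * sigma:.3f}, {mu[0] + 3 * sigma:.3f}] Hz, prob. 0.997")

# Multivariate indicator: the squared Mahalanobis distance of a new estimate
# is approximately chi-square distributed with dim = 2 degrees of freedom.
new = np.array([1.95, 5.40])           # new monitoring estimate (assumed)
d2 = (new - mu) @ np.linalg.solve(cov, new - mu)
threshold = chi2.ppf(0.997, df=2)
print(f"d^2 = {d2:.1f} vs threshold {threshold:.1f} ->",
      "alarm" if d2 > threshold else "no alarm")
```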
So much for the general framework. During the activities of Working Group 2, we then launched a questionnaire among the participants to assess the current practice in the quantification and treatment of uncertainties, and in the links between measured quantities and structural performance indicators. We asked the following questions: what is the context of the work on uncertainties, what sources of uncertainties are present in the work of the participants, how can these uncertainties best be described, how are they taken into account in the work, and which methods are used to quantify them, to propagate them and to use them for decisions.

We received 18 responses from the participants, covering many different aspects of the proposed framework. The main contexts of the contributions fall into five points: analysis of the measurement uncertainties of the used technology; uncertainties in data-driven performance indicators, which are mainly used for damage detection; model-based performance indicators, where uncertainties are due to unknown material characteristics and unknown model parameters; fatigue or reliability analysis, where we have both model uncertainties and measurement uncertainties; and some contributions on decision-making.

Then, to give an overview of the contributions that we got: the contributions on measurement uncertainties analyze the measurement uncertainties of optical fiber sensors, or set up a probabilistic model for measurement and inspection uncertainties in general. The uncertainties in data-driven performance indicators are mainly linked to vibration-based SHM, quantifying the uncertainties of damage indicators; here the main sources of uncertainty are measurement noise and the ambient excitation. We have a couple of contributions on model-based performance indicators, where the uncertainties are due to unknown material characteristics like unknown soil properties, or to unknown environmental models, and there is FE model uncertainty involved. For the contributions on fatigue and reliability analysis, both model uncertainties and data-based uncertainties are involved. And there are two contributions on decision-making.

To sum up, the sources of uncertainties present in the works on which we got feedback are mainly modeling uncertainties, due to unknown material properties, imperfect models for changing environmental and operational conditions, imperfect models for soil-structure interaction, et cetera; measurement uncertainties, as mentioned previously; and estimation and statistical uncertainties in general. The majority of the contributions describe these uncertainties through probabilistic models and statistical inference, that is, the description of indicators as random variables and random processes. We got, I think, only one or two contributions that also mentioned fuzzy or interval-based methods or scenario-based models. The methods used by the participants to quantify or propagate uncertainties are mainly statistical methods and Bayesian inference, plus structural reliability methods for fatigue and reliability analysis. Some contributions also used uncertainties cast in bounds by engineers, which are not strictly based on probabilistic principles.

And then, I guess, the most important question was: are these uncertainties actually taken into account in the works, and do they actually have an impact on decision-making? The presence of very different kinds of uncertainties is widely acknowledged, and we also had a few contributions on the resulting statistical uncertainty of the indicators. However, although the uncertainties are at least partly quantified in the works, they are not always taken into account explicitly in further analysis. So we know that there is some uncertainty, but in some cases it is not actually treated or taken into account for decisions. Overall, there is probably a lack of consistency in how uncertainties are classified, the methods for their quantification and treatment are chosen on a case-specific basis, and a holistic framework or approach to globally classify and treat them is currently missing. In general, we also found that the concept of confidence intervals should play a more prominent role for decisions, given the varied sources of uncertainties that are present in all parts of SHM and in all parts of the treatment of the information that we get.
So, there is really a wide range of techniques that has been used, and the scope for categorizing them is to improve consistency and transparency, also with respect to the framework for structural performance that was developed in Working Group 2.

To conclude: as mentioned, a formal, holistic and consistent treatment of uncertainties is required. We have uncertainties in all parts of SHM, in particular regarding the observations from diverse SHM technologies, the propagation of uncertainties from the data to the actual performance indicators — directly, or indirectly with models, taking model uncertainties into account — and then also the uncertainties to be accounted for in actual decisions in life cycle management. The questionnaire that we launched among the participants has shown that the importance of the various types of uncertainties is widely recognized. But decision theory tools, and the decisions that we make, should then also include the uncertainty quantification and treatment, to move towards decisions with confidence levels.

In general, Working Group 2 has evolved in accordance with the objectives set in the memorandum of understanding. The extensive feedback that we got from the participants during the Action revealed that SHM applications in civil infrastructure are growing fast, with different technology readiness levels in different sectors. There are big efforts in linking monitoring data to structural performance indicators for their evaluation, but the assessment of SHM benefits beyond the component level is still in its infancy, so a more global approach should also be invested in. The frameworks developed in the Working Group can improve common understanding and achieve the desired levels of transparency and consistency, also with respect to the uncertainties that are involved in all parts of that framework. And as we have seen, the treatment of uncertainties is still patchy, and more holistic approaches are required. So there is probably quite some room for development. With that, thank you very much for your attention.

Thank you very much, Geert and Michael, for the overview of the Working Group 2 activities, with the focus on uncertainties, especially on the SHM side. This clearly provides the basis for one part of the guidelines we are currently producing. Of course, some of your points may be debatable, but I'm looking forward to that in the future. For the moment, a few questions; we have some time. Michael?

Yeah, thanks for the nice presentations. I should say it was a very good overview and quite informative. I was a little bit puzzled by the second-to-last slide, where you mentioned that the decision ranking should account for, or should be undertaken with, confidence levels. I don't really understand what you mean there; maybe you can elaborate a little bit.

Well, I guess the idea is that if we get some performance indicators, they have some uncertainty. So we go to some thresholds where we know that, with some confidence, everything is in order for our structure. We should take decisions based also on these confidence levels, in order to see with which probability things are actually in order or not, for example.

So, fundamentally, von Neumann and Morgenstern told us that we should rank decision alternatives in accordance with the expected value of utility, or benefit, or whatever we call it.
And that implies that all uncertainties are accounted for in the assessment of the expected value, and that we don't, let's say, simplify or — I don't know how to express it — develop uncertainty representations which do not allow for a consistent evaluation of the expected value, like the introduction of confidence intervals and corresponding values. So my proposition would really be to not account for uncertainties by introducing confidence intervals at any step of the process, but simply to include all the uncertainties consistently, as you have also underlined many times, and then, based on that, to do the expected value operations.

I guess if the uncertainties are already treated consistently during the analysis for the expected utility, then that's fine. Since the uncertainties are already included, what we get at the end — the expected utility — comes with some confidence level as well. So the uncertainty propagates through, and if you take decisions, those decisions are already based on those uncertainties, with respect to some thresholds that are involved, so there is also some...

But if there are uncertainties associated with the threshold, just include those uncertainties in the general model.

Yeah, but if they are included, then we get some confidence levels with which we take...

Confidence levels? Okay, I think we will take this up in some of the discussions and projects afterwards.

Not on yet. No, it's on. Yes, this was actually one of the debatable points, and there may be something in it, in my view — I'm sorry, Michael. But at this level, I think we need to be very aware of what Michael said about the foundations, which are the expected value of the utility and the axioms of utility theory. We should be aware of that; that's the first point. Second, I think it's good to discuss and also to work on this basis. So in this sense, I would like to put these debatable comments or statements in that context, and I'm actually looking forward to taking this up later in the project. Thank you very much. Okay, there's one more question; there's still some time this morning. I think Costas raised his hand before — I'm sorry, Maria Pina.

Actually, I wanted to make one point that you are probably very well aware of: once you use these Bayesian approaches, another issue is that you introduce a complexity. Everything depends on how you build your likelihood. You assume a prediction model: you could assume it uncorrelated, or give it any correlation structure that you want, and you'll get completely different posterior uncertainties. This is, I think, a problem that we really have to consider. I don't know if you have any thoughts about that; I know Geert knows very well about that.

I fully acknowledge that, and in the fact sheet we've added a bit more detail on exactly this: the model that is postulated for the prediction uncertainty. This is a key point, because it's important to consider all possible sources of uncertainty, and what you'll see in the literature is that in many cases — also in our own work — a simplified representation is used that does not cover everything. We need to be well aware of all the limitations therein if we introduce those in a decision-making framework, so I fully support the statement.
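To illustrate the point just raised, here is a minimal linear-Gaussian sketch with all values assumed: the same marginal error variance, but two different assumed correlation structures of the prediction error, give clearly different posterior uncertainties for the calibrated parameter.

```python
# Minimal sketch: the assumed correlation structure of the prediction errors
# changes the posterior uncertainty. Linear-Gaussian model y = H*theta + e
# with a standard normal prior on theta; all values are assumed.
import numpy as np

n = 50                                   # number of observations
H = np.ones((n, 1))                      # trivial "model": every y measures theta
sigma, rho = 0.1, 0.9                    # error std and assumed error correlation

def posterior_std(Sigma_e, prior_var=1.0):
    """Posterior std of theta for a given prediction-error covariance."""
    precision = 1.0 / prior_var + (H.T @ np.linalg.solve(Sigma_e, H)).item()
    return np.sqrt(1.0 / precision)

# Case 1: errors assumed independent (i.i.d.).
Sigma_iid = sigma**2 * np.eye(n)
# Case 2: same marginal variance, but AR(1)-type correlated errors.
idx = np.arange(n)
Sigma_corr = sigma**2 * rho ** np.abs(idx[:, None] - idx[None, :])

print(f"posterior std, i.i.d. errors:     {posterior_std(Sigma_iid):.4f}")
print(f"posterior std, correlated errors: {posterior_std(Sigma_corr):.4f}")
```

With these numbers the posterior standard deviation under correlated errors comes out several times larger than under the i.i.d. assumption, even though the marginal error variance is identical.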
Okay, one last question. Maria Pina? It's a question that is a request for information. In the papers that you collected, is there already something about the probabilistic modeling of indicators from vibrations, vibration-based indicators of damage?

For the probabilistic framework of vibration-based damage?

No, I mean...

What you're asking is whether statistical models for the indicators have been...

Yes, indicators from vibration data.

Yeah, sure. There are works that quantify the uncertainty of the indicators based on the properties of the data, so we basically propagate the uncertainty of the data to the indicators. A statistical model then tells us that we have probability distributions of these indicators, which are Gaussian in most cases, with a certain covariance. That is the outcome of the direct propagation of the uncertainty to the indicators.

So there are papers about this?

Yes, there are papers about this.

Thank you.