We'll now come to Working Group 3, which deals with methods and tools. We follow a slightly different scheme here. Eleni Chatzi and myself are giving a presentation that, from our point of view, should cover most of the aspects, and in addition to that we will also have a discussion where you can tell us everything we forgot. The idea is that we get some kind of overview of the methods and tools that are out there and that are relevant to the value of information of structural health monitoring.

The one and a half hours is split into the following sections. First comes the overview presentation, approximately 50 minutes, which we have divided into two parts. The first part deals with methods and tools for translating data into information. That is not the actual core of this COST action, but we found that many of the contributors and experts in the room here are actually working in that direction. The second part is then really on the quantification and the optimization of the value of information, which I think most of the presentations in the morning were about.

So this will be the talk, and then we come to the goals. This is really the discussion that we want to have with everybody here, not just the people who are in Working Group 3 tomorrow: what should be the goals of this COST action and in particular of Working Group 3? What should we do and how should we do it? This should be a general discussion. The organization, I think, will mostly be deferred to tomorrow, but if you have some input on the organization, we might look at that as well.

Finally, we have a poster session. We have contributions from different people, so we decided to present these as posters. We will all go outside together, people will briefly present their posters, and afterwards you can discuss their work with them over a coffee. I will also introduce some of these posters during the talk.
Let's just start. I put up this picture not because you should now read all the details, but to show that there are obviously a lot of frameworks. Also this morning we have seen multiple examples of how to look at SHM and the whole value chain that is involved, from the actual measurement to the decision at the end about what to do with my system. The nice colorful one up here is from Eleni, and it highlights that you have different types of physical models involved; we'll speak about that. You might also have non-physical models, which is also something that we have to consider. Sometimes you might not have models at all, which is actually also something that is quite difficult to handle in the framework of preposterior analysis. Then we have to interpret the data and ultimately come up with decisions, and the decisions themselves might involve making new inspections or installing new monitoring systems. So it is a continuous chain that never really ends, which sometimes makes it difficult to quantify these things.

These are two examples from posters that you will find outside. One is by Timo, difficult last name, Schwagen-Dieg, who has a nice example of an application. Another one is on aircraft SHM by Giulio Cottone. There are many different frameworks, and all of these are obviously targeted towards specific applications. The problem is that depending on the application, we will need different tools and methods to model and to calculate what we need. Therefore, in this presentation, we will present quite a large variety of different methods and tools that are out there, by no means complete, which allow us in a specific case to do the optimization and, hopefully, the quantification of the value of information.

Okay, enough of that. I'll start with the first part: we have systems that measure something, and this is pure data, in my understanding.
But then we want to translate that into information, and we have a variety of methods and tools that help us here. First of all, of course, we have the tools that actually give us the data. This is not really a focus of this COST action, I think, but of course we have to consider that depending on which tool we use, we get different sorts and types of data: time series, individual points, high-dimensional data, big data. And that will again change what we have to do. There's one poster outside from people in Geneva presenting the testing methods that they have in their lab, and you see the variety of things that you can do.

Then, and this is a nice poster outside that we will see later, from Carmen Andrade and her group: this is already something where we think about the value of the information, even though it's mostly done qualitatively. The question is: I can measure different things, but typically I cannot directly measure what I need, what I want to know. I typically cannot measure the defect directly; I can often only get indirect information, but some of that might be very useful. So there is this concept of indicators, and the question is what the appropriate indicators are, and what the techniques to monitor them are, for a specific problem. At first this is a qualitative problem. Then, hopefully in this part of this COST action, we also get some quantitative indication as to which of these indicators is more useful than others. Comparing that with the cost of obtaining these indicators, we can again assess what we should do. But typically we will combine different indicators, and this is also something that has to be made possible by our tools: how do we combine different indicators, and how do we quantify that?

Then, another poster actually addresses the next step. We measure something, but even if we measure directly or indirectly, we never measure exactly what we want to know.
There's always measurement uncertainty, noise. And if the measurement is indirect, we obviously have uncertainty associated with the quantity that we want to predict. In part of their work, this poster by Obayron and several contributors who are here today looks at the quantification of this quality. What we typically use are tools such as the probability of detection: how good are we at detecting some defect, for example, a discrete event? But also, not to be forgotten and very important, is the probability of getting it wrong in the other direction: what is the probability of predicting something when there is nothing there? This can apply directly to the monitoring device, but it can also apply to the whole system, which is not just the device itself but also the interpretation of the data and so on. And if you don't know this curve: this is the so-called receiver operating characteristic (ROC) curve, which is the combination of probability of detection and probability of false alarm. Even though I haven't heard it this morning, I think it's a very important concept that we should not forget in the context of this COST action. There are methods, such as the one presented here, that allow us to obtain these PODs, PFAs, and ROC curves, so that we have quantified the quality of the data with respect to what we want to predict.

Then we come to our core part. In most of these applications, and if you look at Bayesian decision analysis and preposterior analysis, we typically assume that we have a model. It can be a very simple model, an empirical one or a Markov chain or something simple, or it can be a very complicated finite-element-based model with many parameters. But we assume that we have a model, we have inputs that are stochastic, and the model itself might be stochastic as well, and then we have an output. If the input is stochastic, the output will also be random, subject to uncertainty, epistemic as well as aleatory.
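As a toy illustration of the POD, PFA, and ROC quantities mentioned above, here is a minimal sketch with entirely hypothetical numbers: a scalar damage indicator with assumed Gaussian scatter for the healthy and the damaged state, from which POD and PFA follow for any alarm threshold, and sweeping the threshold traces out the ROC curve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measurement model: a noisy scalar damage indicator.
# Healthy structures yield values around 0, damaged ones around 2.
healthy = rng.normal(loc=0.0, scale=1.0, size=10_000)
damaged = rng.normal(loc=2.0, scale=1.0, size=10_000)

def pod_pfa(threshold):
    """Probability of detection and probability of false alarm for a threshold."""
    pod = np.mean(damaged > threshold)   # alarm raised when truly damaged
    pfa = np.mean(healthy > threshold)   # alarm raised on a healthy structure
    return pod, pfa

# Sweeping the threshold traces out the ROC curve (PFA on x, POD on y).
thresholds = np.linspace(-4, 6, 101)
roc = np.array([pod_pfa(t) for t in thresholds])
```

Raising the threshold lowers both POD and PFA, which is exactly the trade-off the ROC curve makes visible.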
Now, in monitoring we might measure the input, and if we do that, we can then learn about the input. That is kind of trivial, I'd say, even though it might not always be trivial, but this is statistics; we can use basic statistics, all types of statistics, but it is nothing special, I think. In most cases, however, we measure some kind of response of the system. And then what we have to do is an inverse analysis, where we start at the response and go back to the input. This is what we call the inverse problem, and it can be done using a Bayesian approach, which is what we mostly use, but not exclusively. It is also practical to use Bayesian here because, of course, it ties in well with the Bayesian decision analysis framework.

But there are, of course, alternatives. Sometimes people who use Bayesian methods, and I belong to some extent to those people as well, believe that if you do things in a Bayesian way, everything is perfect. Bayesian is the right thing, and it's almost like a philosophy, like a religion. But you have to be aware that, while the theory might be perfect, the models that we have, and even the models of the uncertainties that we use in this framework, are not perfect, and that can sometimes lead to a completely wrong result. This morning on the bus, I was discussing some of these cases with Costas Papadimitriou. So you have to be careful, and there are alternative approaches; I just want to mention one example of a nice approach that looks at this from a different perspective. So we cannot only do this inverse analysis in a Bayesian framework; you can do it in different frameworks.

But if you do it in a Bayesian framework: everybody has seen this picture, and if not, then you should. In a Bayesian context, we start with a prior model, which might be highly uncertain, and then we get information, and this likelihood function represents my information.
Some of these probabilities of detection that you've seen before are likelihood functions, for example. So this describes my measurement; a measurement error can look like a likelihood function. And if you combine the two, prior and likelihood, we get the posterior, shown here in green in this nice figure by one of my students. Basically it's a multiplication of the two (not an addition, as I said), and then we get to the green distribution, which is the posterior model. It combines what I knew before with what I have measured, and typically it reduces the uncertainty: you see the width here is slightly smaller than the width of both likelihood and prior.

There are many methods of what I call the general-purpose type: you have a numerical model, you have a response and an input, and you want to go back from the response to the input. Among these general-purpose methods are analytical solutions, Markov chain Monte Carlo, approximation methods, sequential methods, and rejection sampling methods. And actually the poster by Costas Papadimitriou shows some of these methods; he's really one of the experts in this.

Okay, I'm just going quickly through some of these, but I want to motivate the beauty of Bayesian analysis. Besides the fact that it ties in well with decision analysis, it also has some other advantages. Here is a very simple example we did to test a new method: assume that we have this beam and we measure the deformation with some kind of video device at each point. So we have many measurements, and they are subject to some uncertainty.
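For the simplest case, the prior-times-likelihood multiplication can be written down in closed form: with a normal prior and a normal measurement error, the posterior is again normal, and its width is smaller than that of both prior and likelihood. A minimal sketch with made-up numbers:

```python
# Conjugate normal-normal update: prior N(mu0, s0^2), one measurement y
# with known noise std s_e. The posterior is again normal, with smaller std.
mu0, s0 = 10.0, 2.0      # prior mean and std (what we knew before)
y, s_e = 13.0, 1.5       # measurement and measurement-error std (likelihood)

# Precision-weighted combination (the multiplication of the two densities):
prec = 1 / s0**2 + 1 / s_e**2
mu_post = (mu0 / s0**2 + y / s_e**2) / prec
s_post = prec**-0.5

# mu_post lies between prior mean and measurement; s_post < min(s0, s_e).
```

The posterior mean is pulled from the prior towards the data in proportion to the relative precisions, which is exactly the "reduced uncertainty" seen in the green curve.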
We assume we know the truth; in this case it's hypothetical. We can then, with this inverse analysis, back-calculate an estimate of the flexibility, which is one over the modulus of elasticity. This is the true value, which is a random process, or a random field if you want, and which we assume we know. If you do a maximum likelihood estimate, so you just fit your model, which has 50 parameters and is therefore completely overfitted to the data, you get this estimate. If you use a Bayesian approach, it actually regularizes the problem because of the use of the prior. So the Bayesian estimate is what you see here, this credible interval, which is a 95% interval. It is a distribution: it doesn't give me one value, it gives me a distribution, which tells me that with 95% probability my true solution is somewhere in this 95% credible interval. This is an example where we made everything linear, which is why we use the flexibility, and Gaussian, so we get an analytical solution; this is for reference purposes.

In reality, of course, that's not the case, so what people often use, and it was mentioned in the morning, is Markov chain Monte Carlo, MCMC. Essentially, it produces correlated samples of a Markov chain which, if you wait long enough, follow the posterior distribution. What "long enough" means is not always clear in practice, which is one of the problems. This figure was actually made by Jesus, from one of his examples. You see here: if you don't have data, we generate these samples, and they already follow the posterior distribution, so they all follow the same distribution.
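The mechanics of MCMC fit in a few lines. This is a minimal random-walk Metropolis sampler for a hypothetical 1-D posterior (here simply a standard normal, known only up to a constant): propose a local move, accept or reject it, and after a burn-in period the chain samples follow the posterior.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(x):
    # Hypothetical 1-D posterior: standard normal up to a constant.
    return -0.5 * x**2

# Random-walk Metropolis: propose a local move, accept with probability
# min(1, post(x') / post(x)); after burn-in the chain follows the posterior.
x, samples = 5.0, []          # deliberately poor start, far from the mode
for _ in range(20_000):
    x_prop = x + rng.normal(scale=1.0)
    if np.log(rng.uniform()) < log_post(x_prop) - log_post(x):
        x = x_prop                         # accept the proposed move
    samples.append(x)                      # otherwise keep the current state
samples = np.array(samples[5_000:])        # discard burn-in
```

The deliberately bad starting point mimics the convergence behavior described above: the early part of the chain first has to find the posterior and must be discarded.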
If you now include monitoring data, you can do the same, and the chain at the beginning has to find the location of the posterior distribution, so you have this kind of behavior here, and eventually it finds the true distribution and converges to the solution. So this is a method that is very often used; it comes from statistics, not so much from our field. But the problem really is the convergence behavior, which in our problems is often very poor. This is a simple example, but the issue definitely shows up in practice.

This method is also included in many more advanced methods that are more targeted towards what we do. One such class is sequential Monte Carlo methods, which includes TMCMC, which is actually not an MCMC method, even though the name is misleading. This is very popular in our field, and as I said, Costas can explain this better, but the idea is that here is our posterior distribution, we start sampling from our prior, and we gradually approach the posterior distribution by doing sequential importance sampling. That is one method that is quite commonly used.

Another approach is rejection sampling: you just sample from the prior, and we assume here that the prior distribution looks like this, so these black points are from the prior distribution. We additionally sample these auxiliary variables, we then plot the likelihood function here, and all the samples that fall within this green area follow the posterior distribution. And if you don't believe me: here is the true solution against the approximate solution. This is a classical principle, but you can also frame it in the context of reliability methods.
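The rejection idea can be sketched in a few lines, with a hypothetical prior and likelihood: sample from the prior, draw an auxiliary uniform variable, and keep the sample if the auxiliary point lands under the scaled likelihood curve; the accepted samples follow the posterior.

```python
import numpy as np

rng = np.random.default_rng(2)

# Rejection sampling for Bayesian updating: draw from the prior, draw an
# auxiliary uniform variable, and accept the sample if it falls under the
# (scaled) likelihood curve. Accepted samples follow the posterior.
prior = rng.normal(0.0, 2.0, size=200_000)            # prior N(0, 2^2)
def likelihood(x):                                    # measurement y=1, noise std 1
    return np.exp(-0.5 * (x - 1.0)**2)
u = rng.uniform(size=prior.size)                      # auxiliary variable
posterior = prior[u < likelihood(prior) / likelihood(prior).max()]
```

The catch mentioned next in the talk is visible here: if the likelihood is very concentrated relative to the prior, almost all samples are rejected, which is where structural reliability methods come in.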
So we can transform everything into standard normal space, as some of you know from reliability methods, and then we can use structural reliability methods to do this more efficiently, because sometimes the probability of landing in this domain is 10 to the minus 100, or 10 to the minus 10. You would have to produce many, many samples before you find some that are useful. But if you use structural reliability methods, you can do it more efficiently.

Here is an example again. I want to identify the stiffness of these two stories, this is a two-story building if you want, and you want to identify these two stiffness values, first floor and second floor. We sample from the prior distribution, which has a very large uncertainty, so many different values are possible. We now want to get to the posterior solution by rejection sampling. So we use a combination of this with subset simulation, and again we approach the posterior distribution sequentially. If you know structural reliability, you can implement this very easily, and it also works nicely in high dimensions: if you have hundreds of parameters, this works. You also see the sequential solution here. Many of these methods work sequentially: they start from the prior and sequentially approximate the posterior, which is my updated model. This is now my updated model, and you see the problem is so-called locally identifiable: we have two possible regions of solutions.

Okay, just very briefly, another possibility is to use Bayesian networks for doing Bayesian analysis, and we have one poster on this, so I'll do it very, very briefly. In Bayesian networks, we model the problem graphically through nodes. Actually, I will just make one sentence, because I'm really watching the time and I think we're already over time.
The one sentence is this: the Bayesian network works very efficiently because we account for the conditional independence among the random variables. In this way, we can deal much more efficiently with potentially very many random variables, and we can solve them either exactly, if you discretize (analytically generally not), or again through sampling methods. I will just refer to the poster for a bit more information. With that, I turn over to my co-presenter.

Thank you for handing over the floor. I will now take you back to a bit more of the basics of, let's say, the structural health monitoring concept, where, as in the case of reliability, uncertainty also lies at the core of acquiring our data. So there is a point in understanding how we can take this into account in the employed methodologies, but also how we can translate what is extracted into indices or features that can then be better exploited by the methodologies described by Daniel.

Starting from the tools that we use for the monitoring of such systems: I mention here the system identification framework, which, as you all know, is divided essentially into the forward and the inverse problem. In the forward problem, typically we would like to simulate a system in order to get some kind of confidence with respect to its response. In the inverse formulation, we monitor the system, acquiring measurements of different types, and then try to calibrate our initial models so that the representation matches reality. In system identification, we typically have two ways of acquiring the models of the system. They may be obtained analytically, meaning we set up the forward problem formulation and then proceed to experimental fine-tuning in the inverse framework; this has more to do with what we call finite element model updating, and Bayesian methodologies offer a very nice framework for its implementation.
The models can also be obtained directly experimentally, through structural identification, usually resulting either in physics-based representations or sometimes in purely data-driven representations. In recent years, there is some tendency to prefer the data-driven representation, because it doesn't require a system model that we precisely know.

And why is this relevant? Because the identification of these systems comes with a number of uncertainties already from the time of acquisition. We have uncertainties relating to the loads themselves, which are usually either unknown or simulated under simplifying assumptions. We have the system itself, which in simulation is usually considered deterministic, but in reality is anything but that, since we hardly ever precisely know the properties of the system, or even the behavior of special components where energy is typically dissipated. And of course, in the acquisition of measurements there is the issue of noise, which is inherent to these instruments. So in order to take all of this into account, we need appropriate methodologies that can account for these noise terms, whether they are model-based errors or, let's say, errors from acquisition.

What you see here is the separation I mentioned earlier: we can use a physics-based approach, which could mean some sort of finite element model that we might have of the system, or some sort of lower-order model that we could fashion from measurements, expressed in the time or frequency domain, but which usually comes with some parameters; or we could use a data-driven modeling approach.
That means I'm just using the data, without really any knowledge of the structure at hand, which however is still in a position to provide me with the information that I need for monitoring the condition of the system. These basic separations pertain to the linear system case, but the corresponding extensions already exist for the nonlinear and non-stationary case, and this is very important, especially when discussing newer systems such as, for instance, wind turbine structures, where for sure you are in the non-stationary case.

Now, for solving these systems, and doing so in a first processing stage in a rather real-time manner, we can again use a Bayesian approach. This means we operate in a two-stage process: in a first stage we predict the response of our system, given that some sort of approximate model exists, and in a second stage we update the model based on the measurements that we receive, commonly in real time. So in the framework I'm describing here, the goal usually is to be fast and not to do this offline, but to do it as the data is obtained. One typical Bayesian approximation used within such a framework is the well-known Kalman filter, which in the linear case expresses exactly what I said before: a prediction step, where you use your model to predict the state, and an update step, where you use the discrepancy of the measurement to add a weighted correction to your prediction, and this is your posterior estimate at this time step. When discussing nonlinear systems, which is the more challenging and also more realistic case, since hardly any system is linear, you can use some of the available alternatives for treating nonlinear models.
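The predict/update cycle above can be sketched for a scalar state, with made-up numbers; the "model" here simply says the state stays constant, and the Kalman gain weights the measurement discrepancy (the innovation).

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar Kalman filter tracking a constant state x from noisy measurements.
x_true = 4.0
Q, R = 1e-4, 0.5**2          # process and measurement noise variances
x_hat, P = 0.0, 10.0         # initial estimate and its (large) variance

for _ in range(200):
    y = x_true + rng.normal(scale=0.5)
    # Predict: the model says the state does not change; uncertainty grows by Q.
    x_pred, P_pred = x_hat, P + Q
    # Update: the Kalman gain weights the measurement discrepancy (innovation).
    K = P_pred / (P_pred + R)
    x_hat = x_pred + K * (y - x_pred)
    P = (1 - K) * P_pred
```

The estimate converges towards the true state while the posterior variance P shrinks, which is the real-time behavior the framework relies on.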
Now, the available alternatives are not far from the Bayesian framework explained earlier. They work on the concept of using particles that approximate our distributions. As we typically don't know this distribution, we usually make an assumption on the prior distribution for our initial step, which is updated step-wise until you reach the posterior estimate, which gives you some confidence in the state evolution of your system.

Let me just show a few applications of these types of tools. The very first application, and the one most commonly met in the literature so far, is the case of joint state and parameter estimation, whether we're discussing linear or nonlinear systems. What this means is that we are typically dealing with systems whose parameters are not known. If you have a vibration-based monitoring system mounted on these structures, then this is usually a system with limited observations; we can hardly ever have a very dense grid of sensors. What we typically would like to have is a prediction of the overall state response, also at unmeasured locations, but also an estimate of the characteristics of the structure, and this can be done with the methods I explained earlier. Here is a numerically generated case of a nonlinear hysteretic system. In the plots you see the prediction of the state response; the real system is in blue, and then there are several approximating methods, some underperforming, some performing very well. All of them work in real time, which is quite important, and they also predict the parameters of the system. A second implementation, which makes it clear why we need things that run in real time, is the application of the above framework for control.
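The particle idea can be sketched as a bootstrap particle filter, here with hypothetical linear dynamics for brevity (the same loop works unchanged for a nonlinear model): propagate the particles through the model, weight them by the measurement likelihood, and resample.

```python
import numpy as np

rng = np.random.default_rng(4)

# Bootstrap particle filter: particles approximate the state distribution.
n = 2000
particles = rng.normal(0.0, 5.0, n)         # prior over the unknown state
x_true, R = 3.0, 0.5**2                     # true state, measurement noise var

for _ in range(50):
    x_true = 0.95 * x_true                               # hypothetical dynamics...
    particles = 0.95 * particles + rng.normal(0, 0.1, n) # ...with process noise
    y = x_true + rng.normal(0, 0.5)                      # noisy measurement
    w = np.exp(-0.5 * (y - particles)**2 / R)            # likelihood weights
    w /= w.sum()
    particles = rng.choice(particles, size=n, p=w)       # resample
x_est = particles.mean()
```

The same weighting/resampling loop also carries static parameters if they are appended to the state vector, which is how the joint state and parameter estimation mentioned above is set up.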
In the case where you're trying to use a magnetorheological damping device, for instance, for vibration mitigation of a structure: here we studied a laboratory experiment on a scaled shear frame, where we try to mitigate the vibration using an MR damper mounted here on the first floor. We additionally assume, unlike what typically happens in control, that the system we have in our hands is not known precisely, so we don't really know the stiffness properties of the system. By using the methods I explained earlier, in this case the unscented Kalman filter, and coupling them with control strategies, one can come up with adaptive control tools that are able to take the uncertainty of the system into account and not only achieve efficient control but also provide an estimate of the characteristics of the system at hand, which is quite important.

In a further implementation, I'm referring here to work that we did jointly with Costas Papadimitriou on an issue that is quite pertinent to the latest strands of identification, in the sense that we most commonly operate under unknown loading. So what do you do in the case where the loading is unknown but cannot really be modeled as ambient, or through another simplifying assumption, as is commonly done? In this case you can adapt the filters I explained earlier to try to achieve a joint input and state estimation, meaning you estimate not only the response of the system but also the acting load. Having said that, what we care about mostly is actually the accurate response of the system, which would then be exploited in order to get estimates of the strain, and therefore of the fatigue accumulation that you might have in your structure.
This is also quite important, since the estimation of fatigue is inherently related to condition assessment, and in recent years it has become apparent that we need methodologies for predicting effects like deterioration or fatigue, rather than just the promise of detecting extreme events or the damage associated with them.

Coming now to the detection of damage, or in any case the detection of a change of the state of the system: we can also look at the implementation of what I mentioned before as data-driven methods, so methods that do not require an underlying simulation model, be it of finite element type or in the time domain. There is quite a lot of literature on the topic of detecting damage under varying conditions, and it exists because it is really a hard task to separate what really is damage to the system from what comes from variations of environmental effects or other kinds of operational conditions, such as traffic or wind loads. A wide class of models exists in the literature for this: multi-model approaches, feature extraction methods, and functional models. I will show you here something we use, which is basically a combination of the two latter categories. This is an approach relying on polynomial chaos expansion in order to take into account statistical information on input loads that are not precisely known, meaning temperature, traffic, or any other type of load that you cannot actually measure precisely, but whose statistics you have some idea of. These are then combined with measurements from the system in order to derive a functional representation between the response of the system, which could be in the form of natural frequencies or mode shapes, and these influencing factors.
Once you know this functional representation, you can actually take it out of your response, and you can fashion indices. That's where, as I said, it is important to come up with quantities that can be related to condition assessment: you can fashion indices that can tell us whether the response lies within regular operating conditions or not.

I don't want to take up too much time, so I'll just briefly explain how this polynomial chaos tool works. Assume that you have some input parameter that follows a given statistic, and that you also have some unknown nonlinear dependency (in this case we have generated it, but let's say it is unknown): a function or variable that depends in a nonlinear manner on this input variable. Then you can use the projection tool of polynomial chaos, using polynomials that are orthogonal to each other, but also with respect to the probability distribution of this variable. In this case it's a Gaussian distribution, so I would use an expansion in Hermite polynomials like the ones you see here, and I would combine these in a multivariate way using different orders. It turns out that just a fifth-order approximation already gives you something quite close to the original unknown relationship, so I can use this, without really knowing what the underlying relationship is, to approximate the response.

And that's what we did in a number of field cases. You see here the case of a bridge in Zurich, where we have plots of the evolution of the first four natural frequencies of the bridge with respect to temperature; these are the blue points. In green you see the prediction of our tool, which follows the evolution very well despite the effects of temperature.
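A minimal sketch of this surrogate idea, with a made-up nonlinear relationship standing in for the unknown temperature dependence: probabilists' Hermite polynomials, which are orthogonal with respect to the Gaussian density, fitted by least squares to noisy samples.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(5)

# Hypothetical "unknown" nonlinear dependence of a natural frequency on a
# standardized Gaussian input (e.g. temperature); we only see noisy samples.
T = rng.normal(0.0, 1.0, 500)
f = np.tanh(T) + 0.1 * T**3               # the hidden relationship
y = f + rng.normal(0, 0.01, T.size)       # noisy observations

# Polynomial chaos surrogate: expand in probabilists' Hermite polynomials
# (orthogonal w.r.t. the Gaussian density) and fit coefficients by least
# squares on the training data, up to fifth order.
Psi = hermevander(T, deg=5)               # design matrix, orders 0..5
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

def surrogate(t):
    return hermevander(np.atleast_1d(t), deg=5) @ coef
```

Once the coefficients are trained, the surrogate reproduces the response driven by the environmental input, so that its contribution can be subtracted out when building a damage index.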
What you see in red is basically the training period we need: we need the training period to train the model and to figure out what the functional representation is, but once this is established, you can use it for prediction. The same thing here with data that were given to us by Alvaro Cunha on the Infante D. Henrique Bridge in Portugal, and also here for the more challenging case of a wind turbine facility, where things are a bit more complicated because you need to take the non-stationarity of the problem into account. That is just to show that you can actually do this; you vary the tool a bit, but the same thing can be done.

Once you have that, you can actually detect damage, and what you see here is the verification of the method on a bridge that was destroyed, or damaged, in Switzerland, near Zurich. These are data that were given to us by our colleagues in Leuven, who were participating in the particular program that was monitoring the bridge for a period of some months, after which a set of damages was inflicted on the bridge. What the method does is provide an index, for which we can prescribe given thresholds or ranges (and it's usually the case that infrastructure operators prefer the form of ranges rather than probability distribution functions), to be able to tell when damage occurs in the system or not.

The overarching question we're trying to answer is basically: what are these indices that you can actually take into account and whose evolution through time you can monitor, and what do you do next with them in order to devise the appropriate maintenance strategies or intervention schemes, to prolong the life of your structure and to maximize the return on the investment that we have made? With this, I give the floor back to Daniel.

You know, I have this colleague at my university, and after one hour of class he always does yoga with his students.
So maybe I almost feel like we should do the same, but unfortunately I am not a yoga master, so I will just continue. We have now bombarded you with techniques and tools that are available to get from data to information, and there are quite likely others as well. The idea was to give you a spectrum of the tools that are available; all of these are valuable tools that are useful in some context and that we have to consider as part of this COST action. Now, when we come to the second part, the actual quantification of the value of this information, you see the difficulty: on the one hand we should consider all these possible cases and make these analyses as precise as possible, and on the other hand, if it already takes a whole PhD to do the posterior part, taking the data and turning it into information, how can we get to the second part, which, as you nicely showed, Piotr, is the outer bubble? That part probably requires what in Germany we call a habilitation: you do a PhD to build your structural health monitoring system and understand it, and then you do a habilitation, another four years, to quantify its benefit. But that is not the time we have, so this is the conflict we are in. Okay, this diagram you have already seen many times; it stems originally from economics and was taken up by Howard in the engineering community. The point to make here is that we have to solve this. This is of course a very simple problem; in reality things are much more high-dimensional and so on. But this is the posterior problem. 
So we try, given that I have some measurement and data, to update my uncertainty, and then I try to find the optimal action under that information, on my updated distribution. We have seen tools to calculate this updated distribution. But now we want to figure out: if I do not yet have this data but want to obtain it in the future, how much use is this data actually to me? When I am here, I have to consider all the possible outcomes, this is the set here, all the possible outcomes that I might potentially measure with my system in the future, and I have to integrate over all of them. And if I want to maximize on top of that, if I want to figure out my optimal sensor configuration or my optimal inspection interval, I have to maximize over that as well: maximization plus integration, which is very difficult. You can also present this in graphical form, as so-called influence diagrams. The computation ends up being the same, but sometimes it is easier to see what is going on. In particular, if we have a sequence of decisions that has to be made over a lifetime, showing it in an influence diagram is quite useful. It is an extension of the Bayesian network, in which we indicate the causal relations between things. Many people think that the monitoring outcome determines the system state, but in reality the causal relation is the opposite: my actual system state determines what I will measure. The inference process runs the other way around: I observe this and infer about that, but the causal relations are as shown, and the influence diagram should follow them. My monitoring outcome will then determine what I do, and so on. 
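The integration over possible future outcomes described above can be made concrete with a minimal discrete example (all numbers hypothetical): two system states, two monitoring outcomes, two actions, and the value of information computed as the drop in optimal expected cost:

```python
import numpy as np

# Two system states (intact, damaged) with hypothetical prior probabilities
p_state = np.array([0.9, 0.1])

# Likelihoods P(z | state) for outcomes z = (no alarm, alarm): an imperfect system
p_z_given_s = np.array([[0.95, 0.05],    # intact  -> no alarm, alarm
                        [0.20, 0.80]])   # damaged -> no alarm, alarm

# Cost of action a in state s: rows are (do nothing, repair)
cost = np.array([[0.0, 100.0],   # do nothing: free if intact, failure cost if damaged
                 [10.0, 10.0]])  # repair: fixed cost either way

def expected_cost(p):
    """Optimal expected cost under state probabilities p: min over actions."""
    return min(cost[a] @ p for a in range(len(cost)))

# Without monitoring: act on the prior alone
c_prior = expected_cost(p_state)

# With monitoring: integrate the optimal posterior cost over all outcomes z
p_z = p_state @ p_z_given_s                          # marginal P(z)
c_post = 0.0
for z in range(2):
    posterior = p_state * p_z_given_s[:, z] / p_z[z]  # Bayes' rule
    c_post += p_z[z] * expected_cost(posterior)

voi = c_prior - c_post   # value of information: non-negative by construction
```

With these made-up numbers the monitoring system reduces the expected cost from 10 to 3.25, so its value of information is 6.75, to be weighed against the cost of installing it. In real problems the outcome set is continuous and high-dimensional, which is exactly where the integration becomes difficult.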
Okay, but that said, it helps us in the representation and understanding of the problem; the computations still remain equally difficult, the same integration. So there are many challenges associated with that. We have seen in the morning that we can solve this problem for nice and relatively simplified problems, and I know we can, I mentioned some examples, solve it for real problems as well, but many challenges arise when we come to real problems. One of them, and this again comes from a poster outside, is what I call identifying the decision context. Often people have some system and they know there is uncertainty, so what do they do? Don't look at this figure in too much detail; the point is that they know there is uncertainty, so they say: let's get some more certainty, let's put in a monitoring system. That is often very useful, but at that point it is often not yet clear, at least not explicitly, what we are going to do with this information. However, if we want to make a pre-posterior analysis, if we want to calculate the value of that information, we need to understand what we are going to do with it. That means that before we have the information, we already have to model what we are going to do with it. That is often difficult to do in practice for real systems, I have observed. The second challenge is that in reality there are so many possible things we can do. This is a simple decision tree for just one component of a structural system, and we already have many, many possible options for how this can go. 
We might do an inspection, we might find something, we might do a repair or a maintenance action, at different points in time, so this decision tree, for the real thing, gets very big. It grows exponentially, as you can guess, as you go over the life cycle. And this is just for one component, with the relatively simple idea that the component is either damaged or not. In reality the system can have many states, and that brings me to the next challenge: these systems are large. This is just a simple structural system, steel, offshore, quite simple, but it has many elements. We could inspect this one, and that one, and that one, and then we could repair this one. There is an essentially infinite number of possible combinations of system states and decisions. So all these trees that look nice in theory explode as we go, and this is a difficult problem. Next, which I also illustrate with these offshore structures (you can guess that this is some of the work I did, but it was also nicely seen in the previous presentation): if you build realistic models, these models can be computationally demanding. They are not like the climate models used in some other fields, but they are still demanding. The point is that it might not be possible to do Monte Carlo simulation with perhaps 10^9 runs if we have these more realistic models of structures. We have to reduce the number of times we actually run our model. So these are some of the challenges, and there are actually more, but these are perhaps the ones most related to methods and tools. 
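To make the exponential growth concrete, a quick back-of-the-envelope count with made-up numbers: if each stage of the life cycle offers a choice among a few actions, each followed by a few possible inspection outcomes, the number of scenario branches multiplies at every stage:

```python
def n_branches(n_actions: int, n_outcomes: int, horizon: int) -> int:
    """Branches in a full decision/event tree: at each stage we pick one of
    n_actions and then observe one of n_outcomes."""
    return (n_actions * n_outcomes) ** horizon

# Even a single component with 3 actions (do nothing / inspect / repair) and
# a binary outcome (damage found or not) explodes over a 20-stage horizon:
for horizon in (1, 5, 10, 20):
    print(horizon, n_branches(3, 2, horizon))
```

Over 20 stages the tree already has more than 10^15 branches for a single binary-state component, which is why exhaustive enumeration is hopeless for multi-component systems and why the simplification and approximation strategies discussed next matter.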
There is also a modeling challenge: here we always assume that we have a model and know what we are looking for, but often you install a monitoring system precisely because you do not have a model, and how to deal with that may be something we should also consider. But coming back: what are some of the solutions? One thing I have found to be a very good solution in many cases is that we can actually simplify quite well. Even though the decision problem is complicated, or seems to be, with so many possibilities that we see all these fancy plots and get lost, you can sometimes reduce it to quite simple problems. These may be approximations of the real world, but they are sufficiently precise to tell us whether monitoring system A or B is better. In this case, for example, we can reduce the tree by considering only those branches that have a probability of more than 0.001 or so. So we can make a simple example, and I will not go through it here but will again point to two of the posters outside. One is by Dr. Cotone, and I will just briefly explain it: he is looking at a health monitoring system in an aircraft. This was a project where the people developing these monitoring systems, who are very good people with very sophisticated models of the aircraft wing and so on, realized that they did not know how to quantify the effect of the system on the reliability. So they came to ask us: can you quantify the reliability? 
The first answer is no, because the reliability does not depend on the monitoring system alone; it depends on what you are going to do with the information. To figure that out, we had to sit down with the people and think: what are you actually going to do with this information? We assumed that it simply gives you a green or red light: green means you can fly, red means you cannot fly. On that basis, which is probably not far from the truth, the high-dimensional outcome of the monitoring system is reduced to a binary number: the interpretation is either that I think I can fly or that I think I cannot fly. In reality you have infinitely many possible outcomes, but if you reduce them to this binary interpretation, you get a decision tree that is actually manageable, and Dr. Cotone can tell you about this; we calculate the value of information here. The other example, which I find very nice and which was already mentioned this morning, is by Dr. Schweckendiek. I will let him explain his model, but again, you can show that it is possible to get an engineering answer, of course not precise to the last digit, as to whether it is worthwhile to install a monitoring system or not in a real case. So: simplify the actual problem. I think this is key. It is not really a tool, maybe not even a method, but a modeling strategy: don't get lost in the fancy things you hear from people doing SHM; try to extract the essentials and go back to basics. Often you can find simple things. The next point I just want to mention briefly, because I think it was already covered this morning as well. 
A possibility is to look into the calculation of these integrals, and that is what I tried here as a first attempt: to see if we can make them more efficient, to at least limit the number of times we have to evaluate our model. Instead of plain Monte Carlo, we can use importance sampling, where you focus on the region of actual interest. If you have a system and want to use it for checking whether you have damage, then 99% of the time no damage occurs and everything is fine; you might focus only on those cases that actually lead to damage. You can do that in a formal way using importance sampling or other techniques from structural reliability. An outcome could look like this: with a few thousand samples you can get the value of information as a function of the number of measurements and the measurement error, things like that. However, this is a hypothetical example. Now, strategy number three, and with this we are almost there. So I am picking up with an issue we think is relevant in devising a framework for structural health monitoring that is robust with respect to uncertainties, and this is the issue of optimal sensor placement. In the past, a variety of methods have been used for sensor placement, and they were mostly deterministic, essentially trying to optimize the detectability of the modes based on the distribution of instruments. But the issue is in fact more complicated than what a deterministic formulation can handle. 
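Returning briefly to the importance sampling strategy above, here is a minimal sketch, with a toy limit state rather than a realistic structural model, of how shifting the sampling density toward the damage region lets a few thousand model evaluations estimate a rare damage probability that crude Monte Carlo would essentially miss:

```python
import numpy as np
from math import erf, sqrt, pi

rng = np.random.default_rng(1)

beta = 4.0     # damage occurs when the standard-normal load effect u exceeds beta
n = 5_000      # far fewer evaluations than crude Monte Carlo would need

# Crude Monte Carlo: with P(u > 4) around 3e-5, almost no samples hit the region
u_mc = rng.standard_normal(n)
p_mc = float(np.mean(u_mc > beta))

# Importance sampling: draw from N(beta, 1), centered on the damage region,
# and correct each sample with the likelihood ratio of the two densities
u_is = rng.standard_normal(n) + beta
phi = lambda x: np.exp(-0.5 * x**2) / sqrt(2 * pi)   # standard normal density
w = phi(u_is) / phi(u_is - beta)                      # target / proposal
p_is = float(np.mean((u_is > beta) * w))

p_exact = 0.5 * (1 - erf(beta / sqrt(2)))             # exact tail probability
```

With the same 5,000 samples, the importance sampling estimate lands within a few percent of the exact tail probability, while the crude estimate is dominated by zeros. The same reweighting idea carries over to value-of-information integrals, where most simulated monitoring outcomes correspond to the uninteresting no-damage case.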
Given that in recent years we have a number of monitoring technologies available, which can be combined in different ways to trade off cost against the information returned from these systems, this now becomes a problem of increasing complexity. You could have systems with a single type of sensor installed on the structure. You could have heterogeneous sensors, as we call them, fusing different types of information, such as collocated acceleration and displacement measurements. Or you could have an even more complex problem where the sensors are non-collocated. The issue is how to come up with the optimal placement of these instruments so as to maximize the amount of information they deliver. For that we have to turn to approaches that can handle this question in a statistical manner. Some early approaches were based on the Fisher information matrix, but now there is the Bayesian approach, and I guess Costas or Gert, who have previous work on this, could elaborate on the topic. This is something worth looking into, especially since operators need to be convinced through solutions that strike the best possible compromise. Now, to conclude, I would like to go over an approach we have developed, keeping in mind that operators need a clear process that can more or less be standardized for decision-making regarding the maintenance of civil structures. For this I go back to something that Michael pointed out: that you also have to be alert and look for solutions in other fields. This is a method originally borrowed from the field of robotics; it is how you plan the route of an agent, a robot, in robotics. 
You can use what is called the partially observable Markov decision process (POMDP) framework. In contrast to fully observable Markov decision processes, where the world is assumed known and the states and actions of an agent are completely defined, we now introduce uncertainty into the system through this symbol b here, which is the belief we have in the state of our system. The belief is something that is updated, once again, through Bayes' rule. Without tiring you with the details, I will just say that this belief is related to the reward we can get from the system, and this reward is calculated from the cost of the inspection or observation methods and the cost of the repair actions foreseen for such a system. I will use an example from a specific structure, say a bridge system, where you have a finite set of available observations and a finite set of actions you can perform, and for which you would like to devise a maintenance framework. The approach assumes that the state of your system is described by a belief, which you have to calculate, or rather update, as part of this algorithm, and you have a number of decision steps, or horizons as they are called, over which to take decisions and organize your maintenance framework. For this particular case we make the simple assumption of two types of observation. One is a coarse visual inspection with a not-so-refined level of accuracy: basically a yes-or-no, good-or-bad assessment with something intermediate. The other is a more refined monitoring system, which naturally would cost more but can give you a more refined level of information, depending on the state of your system X here; this is essentially the index that quantifies the system response. And this you can now relate to what I presented earlier. 
For instance, the polynomial chaos tool gives you an index which is a condition indicator. You also have a set of actions that are of course not exactly deterministic, because our simulation models can only give a probabilistic interpretation of what the effect of an action would be on the structure. So how do you take these into account in a framework that can be easily deployed? You start with a belief about your system at a given time instance t, for which the method gives you a plot such as the one you see here, where the colors relate to specific interventions and the different symbols, maybe not so clearly visible, relate to the different monitoring methods used. My belief tells me that I am somewhere here: the mean and variance of the belief place my system about here. That would mean I have to choose an action, which here you cannot see, but it is basically welding, one of the intermediate actions for improving the system, and that my next observation should be the more refined method instead of visual inspection. When you perform this, you find through the observation that your belief is shifted, because it of course gives you new confidence in where your system lies at the next time step. Then the process is repeated, and in this way you can go about devising a maintenance strategy. Of course there are things to take into account: this becomes more computationally intensive as you put more dimensions into play, and because it is a Markov-based approach it has no memory, which is something that has already been improved upon in the work of Papakonstantinou and Shinozuka. 
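The belief update at the core of this framework is just Bayes' rule combined with the transition model. A minimal sketch with made-up numbers (three condition states, a hypothetical deterioration matrix, and the two observation qualities mentioned above):

```python
import numpy as np

# Three condition states: good / fair / poor
# Transition matrix under "do nothing" (hypothetical deterioration model);
# T[i, j] = P(next state j | current state i)
T = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])

# Observation likelihoods O[s, z] = P(reading z | state s) for the two methods:
# a coarse visual inspection and a more refined (and accurate) monitoring system
O_visual  = np.array([[0.70, 0.20, 0.10],
                      [0.20, 0.60, 0.20],
                      [0.10, 0.20, 0.70]])
O_monitor = np.array([[0.95, 0.04, 0.01],
                      [0.03, 0.94, 0.03],
                      [0.01, 0.04, 0.95]])

def belief_update(b, T, O, z):
    """One POMDP step: predict with the transition model, then apply
    Bayes' rule for the observed reading z."""
    b_pred = b @ T                 # belief after one step of deterioration
    b_new = b_pred * O[:, z]       # likelihood of reading z in each state
    return b_new / b_new.sum()     # normalize to a probability vector

b = np.array([0.80, 0.15, 0.05])              # current belief over good/fair/poor
b = belief_update(b, T, O_monitor, z=1)       # refined system reads "fair"
```

After the refined system reports a "fair" reading, the probability mass shifts decisively to the fair state; had the coarse visual inspection been used instead, the same reading would move the belief far less, which is exactly the trade-off against its lower cost.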
But it has a lot of potential for implementation, given the clear structure it can be related to. And with this, I guess, I return to Dan. We are almost done, but this kind of framework can also be related to the limited memory influence diagram (LIMID). As you know, Nielsen from Aalborg, who is part of this action, has been working a lot on this. The problem, remember, is that these trees grow exponentially, and one way to cut that is to use this POMDP, but then I have to introduce this belief state. The other option is to assume that the agent, the decision maker, forgets, which is actually how we deal with too much information in reality: we just forget, and then we can focus again. That is how this helps. So there is this idea of using influence diagrams in which the decision maker can forget, and that reduces the tree problem: it no longer grows exponentially. As I said, this is taken from the paper by Nielsen and Sørensen. And just to show that you can actually do something with this: on this last slide, we have used such an influence diagram for optimizing a warning system for natural hazards. This is a different problem, but there are a number of sensors, and we want to optimize the interpretation of the sensors and their placement. We can do that with such an influence diagram, which you can actually download from the website, and we come up with this. Now we come back to the beginning: we come up with this probability of protection versus the false alarm probability, and these are the possible optimal solutions. Then, using the value of information, we can find which of these is the optimal configuration. So we actually did not do badly: we have left the time we wanted to leave for the discussion. 
Of course this forced us to go fast, but it leaves us some time for discussion. The first thing, as I said at the beginning: we want to use this forum for you to tell us, and you can also answer afterwards, by email or however, the question: did we leave out something that you think should be included? And I don't mean your own paper, but more general methods or approaches that belong here, or a whole dimension that we forgot. Also the focus: it is not yet clear what the focus of this working group is, and it is not entirely clear how to separate this working group from the others. Some of the things we mentioned are more related to the theoretical aspects, some more to the applications, to the health monitoring itself; it is not clear where to cut, and for sure it is all related. The theoretical framework will largely determine what problems we have to solve, and that means the tools and methods we need will depend a lot on the choice of framework. Okay, but I should not be the one speaking now; you should. So any comments at this point are very welcome.