Okay, Bayesian analysis is used in engineering and other fields to make inferences. My objective is to present some of the challenges, theoretical, algorithmic and computational, that arise when you apply it to structural dynamics. With Bayesian inference, given a structure, we try to select a model for the structure and its components and to estimate the parameters of that model. We can use all this in model calibration, finite element model updating, structural health monitoring and damage detection. Once we have quantified the uncertainties in the parameters and ranked models with Bayesian inference, we can propagate this uncertainty in structural dynamics; we are also interested in updating structural reliability. We can further use this uncertainty for optimal experimental design and decision making. The Bayesian computational tools are asymptotic approximations and sampling techniques; I will give a brief review of some challenges there and I will try to give you a couple of applications.

Here, the model I am interested in is usually a finite element model. Parameters of the model can be stiffness-related parameters or boundary conditions. If I have a model, the values of the parameters and the excitation, then I can predict the response. These are the usual equations of motion in structural dynamics. Observations that I collect from a system could be response time histories, frequency response functions, modal frequencies and mode shapes. I will quantify uncertainties using probabilities, where probability represents a degree of belief, and I will be using the calculus of probability for consistent plausible reasoning.

Once I have a model, say a finite element model with some parameters, one can use Bayes' theorem, as you see here, to update the uncertainty in these parameters, combining the likelihood and the prior; this term is the evidence. In order to build up the likelihood, one has to build up a model for the model prediction error, which represents the difference between measurements and predictions. This is one form, where you have an additive error. The error could be due to measurement or to model error, the fact that whatever model you assume is not an exact representation of reality. Usually you make assumptions for this; here the assumption is an error with zero mean and some covariance structure. Once you have done that, you can build up the posterior in this form, where these are the predictions from your finite element model given the values of the parameters.

You can also do model selection. If you have several model classes M_i, you can select the best one or rank them, but to do this you need to compute the evidence, which is a multidimensional integral.

The problem here is the covariance of the model prediction error. Depending on what you assume, you get different uncertainties. I am not going to solve this problem for you, but I am going to at least illustrate that different covariance matrices give you different results. If you look in the literature, most people use uncorrelated prediction errors. If you have, for example, a finite element model and you assume that the prediction errors are uncorrelated, you are implying that two sensors placed very close to each other give you independent information, which is not really true. We have looked into this problem and we have done Bayesian model class selection to select the best prediction error model out of the models that we postulated.
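[To make the additive prediction-error construction concrete, here is a minimal sketch of how such a Gaussian log-posterior could be coded. The names predict, log_prior and the fixed covariance Sigma are illustrative assumptions, not the exact formulation from the talk.]

```python
import numpy as np

def log_posterior(theta, data, predict, Sigma, log_prior):
    """Log-posterior for an additive prediction-error model:
        data = predict(theta) + e,   e ~ N(0, Sigma).
    `predict` stands in for the finite element prediction and
    `Sigma` is the assumed prediction-error covariance matrix."""
    r = data - predict(theta)                 # prediction error: measurement minus model
    _, logdet = np.linalg.slogdet(Sigma)
    quad = r @ np.linalg.solve(Sigma, r)      # r^T Sigma^{-1} r
    log_like = -0.5 * (r.size * np.log(2.0 * np.pi) + logdet + quad)
    return log_like + log_prior(theta)        # Bayes: posterior ∝ likelihood × prior
```

[The evidence is the normalizing constant of this unnormalized posterior; computing it is the multidimensional integral just mentioned.]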
I will illustrate all this with a problem where we have a beam, and we assume that the real system behaves as a 3D solid element model. We use the 3D solid element model to analyze this beam and produce simulated data that we treat as experimental data. For inference we use a model class that is just a beam model, with that many elements and nodes. We use three different model prediction error correlation structures: one is uncorrelated, the second is spatially exponentially correlated, and the third is an exponentially damped cosine correlation. We try to identify damage at this location, and here I plot the resulting uncertainty in this parameter, which is a stiffness parameter characterizing the damage.

The uncertainty for model A, the uncorrelated one, tells you that as you keep adding sensors, you get more and more information. I am assuming here that only four modes are contributing, no more. If only four modes contribute, you do not expect to add 150 or 170 sensors and keep gaining information; something is wrong. The assumption of an uncorrelated model is actually wrong, because two neighboring sensors spaced closely together do not give you the extra information that the model implies. If you assume the exponentially spatially correlated model, which postulates only positive correlation, you again get similar results. If you assume the third model, which can represent both positive and negative correlation, you get to the point where, after about 30 sensors, you do not get extra information: your uncertainty becomes fixed, and by adding more sensors you do not gain anything.

So one message is that how you assign the model prediction error correlation is very important. How to do that in reality, I am not sure; I am just giving you an example of its importance in an obvious problem, where I know that I cannot keep adding sensors and keep getting more and more information. Here is a result where the model evidence tells me that the correlated models are much more probable than the uncorrelated one; and, of course, the exponentially damped cosine correlation wins.

Let's go now to the Bayesian computational tools for structural dynamics problems. This is the posterior that I have to represent. I can do that with asymptotic approximations, where I represent the posterior approximately by a Gaussian distribution. I can use gradient-based techniques to find the optimum by minimizing the minus log of the posterior, and also find the Hessian. Such gradient-based techniques are sequential. I can do better using stochastic techniques like covariance matrix adaptation, which you see over here. But there are challenges: with multimodal PDFs this does not work properly, although you can improve it. The computational effort in gradient-based techniques depends on how many times you solve the system. The nice thing about stochastic methods for doing the optimization is that you can run them in parallel, so you can exploit parallel computer architectures.

Then there are the sampling algorithms. The main algorithm is Markov chain Monte Carlo (MCMC), but these algorithms are also sequential: unless you have parallel chains, you are going to spend a lot of time generating samples. The challenge here comes from the model complexity: a large number of degrees of freedom and nonlinearities.
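[As an illustrative sketch of the three prediction-error correlation structures compared in this example; the correlation length L and wavenumber omega are hypothetical tuning parameters, not values from the study.]

```python
import numpy as np

def error_correlation(x, model, L=1.0, omega=np.pi):
    """Prediction-error correlation between sensors at 1D positions x.
    'A': uncorrelated, 'B': spatially exponential (positive only),
    'C': exponentially damped cosine (positive and negative lobes)."""
    d = np.abs(x[:, None] - x[None, :])       # pairwise sensor distances
    if model == "A":
        return np.eye(len(x))
    if model == "B":
        return np.exp(-d / L)
    if model == "C":
        return np.exp(-d / L) * np.cos(omega * d)
    raise ValueError(f"unknown model {model!r}")
```

[Under model 'A' every added sensor contributes a statistically independent error term, which is why the posterior uncertainty keeps shrinking no matter how many sensors you add; under 'C' the information saturates, consistent with the results described above.]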
It might take a few minutes to run a model. A few minutes, if you do it 10,000 or 100,000 times, is a lot of time. If it takes one hour to run a model, then the computational effort is simply excessive; you cannot do it. You need methods to reduce this cost. For sure, single-chain MCMC algorithms are sequential; they do not help you at all. These sampling algorithms should also handle multimodal PDFs, unidentifiable cases, and peaked posteriors in high-dimensional spaces. You really have to devise methods to find where the support is.

For all of these, an algorithm that works well is Transitional MCMC (TMCMC), a very effective algorithm with an annealing property. I do not have time to go through all the details of this algorithm. I would only like to mention that what you do is run a large number of very short chains. These chains can be run independently, so you can exploit parallelization. Because it has annealing properties, and without going into details, you can use adaptive kriging to gain one order of magnitude in computational effort, but no more, if you want to be careful and do it correctly. In optimization you can usually reduce the computational time by about 50%; here you can get close to 90%, depending again on the support.

We have worked on these methods, on parallelization, on surrogate methods, and we have introduced software. We have also improved on TMCMC: we have the X-TMCMC version, and more than a couple of published papers discuss all that. A further improvement on TMCMC can be found in the paper by the co-authors of this talk. Software that does this parallelization efficiently in a multi-host environment of heterogeneous compute workers can be found on this slide; it was developed with my collaborators at ETH. This Π4U software solves the problem for you either with asymptotic approximations using stochastic methods or with sampling methods using the improvements in TMCMC.

A third thing you can do, in addition to parallelization and surrogates, is to reduce your model, if you can. We can reduce the model using the well-known method of component mode synthesis. It turns out that if you redo the reduction for each value of your parameters, you spend much more time on the reduction than on just solving without reducing. So if you have to reduce your model and run it for many different values of the parameters, it is best to do the reduction once and then reuse it. This is not trivial; it is not doable unless your model has certain characteristics. If your components are consistent with the parameterization scheme, then you can drastically reduce your model and the computational effort without sacrificing accuracy.
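[For orientation only, here is a compact sketch of the TMCMC mechanics described above: temper the likelihood from prior to posterior, resample by plausibility weights, then run one short Metropolis chain per sample, which is the trivially parallel part. This is a bare-bones illustration under assumed defaults, not the Π4U or X-TMCMC implementation.]

```python
import numpy as np

def tmcmc(log_like, log_prior, sample_prior, N=1000, cov_max=1.0, scale=0.2, seed=0):
    """Bare-bones Transitional MCMC: move N samples through the tempered
    targets p_j(theta) ∝ prior(theta) * like(theta)^q_j, 0 = q_0 < ... < q_m = 1."""
    rng = np.random.default_rng(seed)
    theta = sample_prior(N)                            # (N, d) prior samples
    ll = np.array([log_like(t) for t in theta])
    q = 0.0
    while q < 1.0:
        # pick the next tempering exponent by bisection, keeping the
        # coefficient of variation of the weights near cov_max
        def cov(dq):
            w = np.exp(dq * (ll - ll.max()))
            return np.std(w) / np.mean(w)
        if cov(1.0 - q) <= cov_max:
            q_new = 1.0
        else:
            lo, hi = 0.0, 1.0 - q
            for _ in range(50):
                mid = 0.5 * (lo + hi)
                lo, hi = (lo, mid) if cov(mid) > cov_max else (mid, hi)
            q_new = q + lo
        w = np.exp((q_new - q) * (ll - ll.max())); w /= w.sum()
        mu = w @ theta                                 # weighted sample mean
        C = scale**2 * ((theta - mu).T * w) @ (theta - mu)   # scaled proposal covariance
        idx = rng.choice(N, size=N, p=w)               # resample by plausibility weight
        theta, ll = theta[idx], ll[idx]
        for i in range(N):                             # N short, independent chains
            cand = rng.multivariate_normal(theta[i], C)
            llc = log_like(cand)
            if np.log(rng.random()) < (q_new * (llc - ll[i])
                                       + log_prior(cand) - log_prior(theta[i])):
                theta[i], ll[i] = cand, llc
        q = q_new
    return theta                                       # approximate posterior samples
```

[Each of the N inner chains depends only on its own seed sample, so they can be farmed out to separate workers; that independence is what the parallel software exploits.]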
We have three or four papers on this issue, but we need to demonstrate it, and we do so with this bridge, a real bridge in Greece, 530 meters long, for which we have constructed a high-fidelity finite element model. The size of a finite element is limited by the thickness of the box cross-section; with the soil also modeled by finite element blocks, we get on the order of 100,000 degrees of freedom. One can take the components of this bridge that behave linearly and reduce them using component mode synthesis: you expand the solution within each component in a reduced basis of fixed-interface normal modes, and you keep only a number of these modes.

It turns out that if, within each component, the stiffness matrix depends nonlinearly on one parameter and the mass matrix depends nonlinearly on the same parameter, through nonlinear functions h and g, then your reduced stiffness and mass matrices can be written in the form shown here: K(θ) = K0 + Σ_j h_j(θ_j) K_j and M(θ) = M0 + Σ_j g_j(θ_j) M_j. The K_j and M_j are reduced matrices that do not depend on the parameter θ, and the h_j and g_j are known functions that give you the dependence on θ. That is your method to reproduce the reduced stiffness and mass matrices for different values of the parameter θ without redoing the component mode synthesis (a small sketch of this reassembly is shown below).
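[A minimal sketch of this reassembly step, with hypothetical names; K0, M0 and the K_j, M_j pieces are assumed to come from a single component mode synthesis done offline.]

```python
def reduced_system(theta, K0, M0, K_parts, M_parts, h_funcs, g_funcs):
    """Reassemble the reduced stiffness/mass matrices for a new parameter
    vector theta without redoing the component mode synthesis:
        K(theta) = K0 + sum_j h_j(theta_j) * K_j
        M(theta) = M0 + sum_j g_j(theta_j) * M_j
    K0, M0, K_parts, M_parts are precomputed, parameter-independent
    reduced matrices; h_funcs, g_funcs give the theta-dependence."""
    K = K0 + sum(h(t) * Kj for h, t, Kj in zip(h_funcs, theta, K_parts))
    M = M0 + sum(g(t) * Mj for g, t, Mj in zip(g_funcs, theta, M_parts))
    return K, M   # e.g. pass to scipy.linalg.eigh(K, M) for modal frequencies
```

[Each posterior evaluation then costs only a few matrix additions on the small reduced matrices, instead of a full reduction of the original model.]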
We have applied these ideas; you can also reduce the interface degrees of freedom if you want. For this structure, if you want to compute the first 20 modes accurately, you can reduce the model from one million degrees of freedom to just 500, simply by breaking the structure into components (I have broken the structure into components arbitrarily here) and keeping only a few modes of each component. It turns out you can do even more than this: for some of the components, only the static contribution of the retained modes is important, so you can take advantage of that and reduce these 500 degrees of freedom by one more order of magnitude. It complicates the analysis a bit, but the idea is the same, and the accuracy is very good; the error is smaller than 0.1%.

These are model updating results for this bridge, for the deck, pier and soil stiffness parameters, three parameters, using actual data from 110 instrumented points where we obtained acceleration time histories and extracted modal frequencies and mode shapes, and we do the updating. The main point here is that for the soil stiffness we are completely uncertain about its value, and it turns out to be close to 0.4 of the nominal value that we assumed.

Now you can use all these ideas of Bayesian model selection for structural damage detection, and that is what I am going to do here. I assume that I have the bridge and different damage scenarios, and I do not know which one is true. For each damage scenario I have a finite element model, and with that finite element model I try to monitor just one area that I assume could be damaged; I do not know. The same here: one area here, two areas here, five areas here. Using data, I am trying to see where the damage is; that is, I am trying to rank these models and find the best one. The best model will tell me where the damage is, and the parameter estimation associated with the damaged area will tell me what the damage is. I am assuming here that damage is a very simple stiffness reduction, but you could assume something more complicated than that.

I am trying to run all this using data. What happens is that without model reduction, without parameterization and without surrogates, it would take you one month and some days to do it; these are 8 models. If you do model reduction, you reduce the time to 40 minutes; of course, here we have a database of the 8 models kept in reduced form (we had done that work before), so these are models ready to use. If you use surrogate models, you reduce the time to 5 minutes, and if you use parallelization you can bring the time down to seconds. This is very important.

Now, once you have the posterior uncertainties calculated with your Bayesian technique, you can propagate them, and of course in structural dynamics you can propagate them to find quantities of interest: the mean, the standard deviation, credible intervals and so on. Or you can update your structural reliability. This is the integral that you saw: this here is the indicator function, taking values 0 or 1 (though it does not have to be an indicator), and this is the prior distribution if you do not have data, or your posterior distribution if you have data. An efficient method to solve it for the prior distribution is subset simulation and its improvements, and we can do this for the posterior distribution as well. This ζ includes the uncertain structural parameters plus the loading parameters, if your load is stochastic; so there are two sets of parameters here that you can work with (a minimal sketch of this indicator average follows below).
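[As a sketch of the indicator-function integral just described, estimated by plain Monte Carlo over posterior (or prior) samples of ζ; limit_state is a hypothetical performance function, with failure taken as a negative value. For rare events one would switch to subset simulation, as mentioned.]

```python
import numpy as np

def updated_failure_probability(zeta_samples, limit_state):
    """Estimate P(F | data) = ∫ I_F(ζ) p(ζ | data) dζ by averaging the
    indicator over samples of ζ (structural parameters plus stochastic
    loading parameters). Failure is taken as limit_state(ζ) < 0."""
    g = np.array([limit_state(z) for z in zeta_samples])
    return np.mean(g < 0.0)    # Monte Carlo average of the indicator
```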
I am not going to present any results for that. The only final thing I am going to present is some results on Bayesian parameter estimation and model selection for estimating the tension in the hangers of this arch bridge. I am going to take the third hanger, for which I have measured data: six modal frequencies in the transverse direction and six in the longitudinal direction, only modal frequencies, because I have placed two sensors, one in the transverse and the other in the longitudinal direction. I used two kinds of models: simple beam models, as everybody uses, and complicated finite element models.

The problem here was the following. This hanger has somewhat peculiar plate supports at the ends, and if the hanger were fixed at the lower and the upper end, it would have to have exactly the same modal frequencies in the transverse and in the longitudinal direction, due to symmetry. The experimental data tell you that this is not the case, so the supports have to be flexible. So we tried to see what you get if you use: a finite element model with fixed supports; a finite element model with flexible supports (the first is a one-parameter model, the second a five-parameter model); a beam model with fixed supports, but with the length of the beam left free to be determined from the data; the same model, but with the length of the beam in the longitudinal and in the transverse direction determined separately from the data, two different lengths (these are just models we could assume); or a beam model with springs at the upper end, where I only use the upper end because, when I analyzed the finite element model, I realized that only the upper end in the transverse direction is flexible.

It turns out that the model evidence tells me that, between the fixed and the flexible finite element model, the flexible one gives much better results: the evidence is higher, and it actually gives a much better fit. Among the three beam models, it tells you that the model where you have freed the length of the beam in the transverse and in the longitudinal direction, to be determined from the data, gives very good results. And if you compare the axial load estimates for these five models, the four models here give you approximately the same result, while the fixed model underestimates the axial load by 25%. These beam models, of course, run in a few seconds or a few minutes; the finite element model runs in several hours or a day. Despite the fact that we have been using six modal frequencies in the transverse and six in the longitudinal direction, Bayesian inference tells you that the tension load is something like this, with an uncertainty of 8 to 12%, which has to be considered when you try to look at the safety of this hanger.

I will skip the conclusions in order to accelerate this process. Okay, thank you very much.