I would like to give some context for this work. It was developed during a Short-Term Scientific Mission, a joint collaboration between the BRISA Group and TNO. My host at TNO is here, and he welcomed me very kindly when I arrived. We have already completed a six-week collaboration, and I will briefly show what we are able to do. As the title says, this is about proactive SHM tools: damage identification, finite element analysis and probabilistic methods. We have three main tasks. First, we want to investigate the most relevant damage scenarios and key performance indicators for bridge structures. Then we apply this idea to a case study, the Lezíria Bridge. The final task, which we will do later this summer, is to quantify the value of these proactive tools. So what I will present here is mainly the first two tasks, plus some introduction to the last one, which I have already been thinking about because it is the most challenging, at least personally for me as a structural engineer. So let's start: how could we anticipate damage for the Lezíria Bridge by using the available data? And what is the available data? Before that, just to show the outline of my presentation: I will briefly present the structure, the monitoring system, the finite element model, and the damage scenarios and performance indicators that were selected. Then the methods that were applied, which were mainly, and this is the more technical part, response surface modelling and Bayesian updating, which give the framework its exploratory power. Then I will present some results and get into the idea of the value of all this information based on the SHM data.
And finally, some open questions that I would like to address, not only to BRISA, the owner of this bridge, but to the audience in general. So this is the Lezíria Bridge. It is one of the longest bridges in Europe, and even in the world. Some figures about the monitoring system are impressive; I think the most impressive is that every year around one million records arrive in the database. And this is a real problem for BRISA: what to do with one million records per year, on a decision basis, for this bridge? We started working with this monitoring system during my PhD, which I concluded in 2012. We did a lot of finite element models for this bridge, not only for the main bridge presented here, but also for the approach viaducts. Several models. And I just highlight how accurate the model was in reproducing all the phases: the construction phase, the loading test, and the long term. These are very nice results; of course I selected the ones that impressed me most, but I think the most challenging thing to predict is the long-term behaviour, where the trends are mainly driven by long-term effects like creep, shrinkage and thermal effects. Overall, these results give us confidence on both sides, in the monitoring data and in the finite element model, because things match, and there is no calibration in the model. We are confirming, both via modelling and via observation, that things make sense, which brings confidence in the system. In this context, the question is how to take advantage of such information for this case, where we have a very important bridge in socio-economic terms. The impact is huge, because it is one of the critical links around Lisbon.
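The long-term comparison mentioned above depends on separating thermal effects from creep and shrinkage in the records. As a minimal sketch of one common way to do that, not the method actually used on the Lezíria data, one can regress the measured response on temperature and keep the residual as the long-term trend. All numbers below are synthetic placeholders:

```python
import numpy as np

# Sketch: separate the thermal component of a monitored response from
# the slow creep/shrinkage trend by linear regression on temperature.
# The signal is entirely synthetic, standing in for real records.

rng = np.random.default_rng(2)
days = np.arange(365.0)
temp = 15.0 + 10.0 * np.sin(2 * np.pi * days / 365)   # seasonal temperature (degC)
thermal = 0.8 * temp                                  # assumed thermal response (mm)
creep = 5.0 * (1 - np.exp(-days / 200))               # slow long-term trend (mm)
signal = thermal + creep + rng.normal(0, 0.1, days.size)

# Least-squares fit of response vs temperature (slope + intercept)
A = np.column_stack([temp, np.ones_like(temp)])
slope, intercept = np.linalg.lstsq(A, signal, rcond=None)[0]

# Residual after removing the fitted thermal part: the long-term trend
residual = signal - slope * temp
print(round(float(slope), 2))
```

Note that because the trend and the seasonal cycle partly overlap in time, the fitted slope is slightly biased; more careful approaches fit both components jointly.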
We have very comprehensive information about the structure: not only the monitoring data, but also the characterization of the materials, the loading, the construction sequence, and the finite element model. So how could we extract benefit from this? There was a discussion in one of the previous meetings of this COST Action that I had with Wim and his colleague Agnieszka, who is not here: given these impressive results, perhaps we could use this finite element model as a kind of virtual bridge. I am not saying it is a perfect model, but it is quite precise in the characterization of all the important aspects of the structure, its geometry, materials, loading and time history, which is proved by these results. One of the difficulties in civil engineering, compared to mechanical engineering, is that we cannot build 100 identical bridges and then select five to demolish and test, whereas in mechanical engineering one can produce 1,000 cars and select some of them for destructive testing. So the idea was to get around this by using the model as a virtual bridge: the model lives in the computer, but we look at it as a bridge inside the computer. So we started thinking: let's explore some damage scenarios which we think might be among the most critical for this bridge, and basically we came up with four. The first concerns problems in the bearings, for example some abnormal movements. The second scenario is a loss of stress in the external prestressing tendons. The third, which is also quite important because this bridge is located in the seismic zone of Lisbon, is the settlement of two piers; the criterion for choosing them was mainly that the navigation channel is defined between these two piers.
So it is critical to follow this particular zone of the bridge. And the fourth scenario: let's suppose that, for whatever reason, we have some loss of stiffness in some critical sections, perhaps above the piers and somewhere in the mid-spans, where cracks may decrease the stiffness of the sections. So we took these four scenarios into the model. From the big database that we have from the Lezíria Bridge monitoring system, we selected a group of sensors measuring displacements, rotations and strains, and then we built a huge database of model responses, which is what was explained before about response surfaces. For example, for the damage scenario of loss of tendon stress, expressed as a percentage from no loss up to total failure, and not only in one tendon at a time but in two, we try to understand how the sensors on the bridge would react. Basically, this database tells you how the sensor readings will change for each damage scenario, and that is what is included in this framework. I just highlight that this information comes from the virtual bridge, because we cannot produce damage on the real bridge; but we can use the finite element model, because it is so accurate, to understand how the sensors will respond to each damage scenario. On top of these simulated measurements we added some uncertainty, which is in fact derived from the data in the system, because we really are measuring all these quantities in the field. So, with these scenarios and uncertainties in place, let's see how well this can detect damage.
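The database of simulated sensor responses can be condensed into response surfaces: cheap surrogates mapping damage severity to sensor readings. The following is a minimal sketch with one sensor and one damage dimension; the FE-model outputs are faked with a smooth synthetic function, not Lezíria results:

```python
import numpy as np

# Sketch: fit a response surface for one sensor as a function of
# damage severity (e.g. fraction of tendon stress lost), using
# synthetic "virtual bridge" evaluations in place of real FE runs.

rng = np.random.default_rng(0)

# Severity grid: 0 (intact) .. 1 (total loss), one FE run per point
severity = np.linspace(0.0, 1.0, 11)

# Stand-in for FE output: mid-span deflection change (mm), a smooth
# nonlinear function plus small numerical noise
deflection = 4.0 * severity + 6.0 * severity**2 + rng.normal(0, 0.05, severity.size)

# Quadratic response surface fitted by least squares
coeffs = np.polyfit(severity, deflection, deg=2)
surface = np.poly1d(coeffs)

# The surrogate now predicts the sensor response for any severity
# without re-running the FE model
print(round(float(surface(0.5)), 2))
```

With one surface per sensor and per damage dimension, the whole catalog of simulated responses can be queried instantly during identification.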
My colleagues at TNO are developing an automatic probabilistic damage identification in a Bayesian framework. Let's look, for example, at the pier settlement. Suppose, in this case, that my damage dimensions are a settlement of pier 1 in this direction, a settlement of pier 2 in this direction, or any combination of the two, each from zero to one in terms of severity as a fraction of an assumed maximum value. For this illustration we assumed a maximum settlement of 50 millimetres. I did not tell my colleagues the magnitude of the settlement; I only told them that the damage was a pier settlement, and gave them the database that I presented before, and they made their inference. What their approach tells you, with a given confidence, is that the settlement of pier 1 is around 35% of the maximum value and that of pier 2 around 25%. Then we had a meeting, and this really happened: I told them what damage I had introduced, and in fact it was not far off; it was within the region they predicted. Which is interesting; this is what we call proactive. But what was also interesting to produce, as an illustration, is this double-entry table of pairs of sensors, which tells you, for all the 12 measurements that we have on the bridge, which pair is the most informative for this specific damage scenario.
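The blind-test identification above can be sketched with a toy Bayesian update on a severity grid. The linear "sensor model" H and the noise level are invented stand-ins for the FE-based response surfaces, not the actual TNO implementation:

```python
import numpy as np

# Hedged sketch of Bayesian damage identification on a severity grid.
# Damage dimensions: settlement severity of pier 1 and pier 2, each in
# [0, 1] as a fraction of an assumed 50 mm maximum.

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 1.0, 51)
S1, S2 = np.meshgrid(grid, grid, indexing="ij")

H = np.array([[10.0, 2.0],    # sensor in span 1: mostly sees pier 1
              [4.0, 9.0]])    # sensor in span 2: mostly sees pier 2
sigma = 0.3                   # assumed measurement noise std (mm)

# "True" damage used to simulate the measurements: 35% and 25%
theta_true = np.array([0.35, 0.25])
y = H @ theta_true + rng.normal(0.0, sigma, 2)

# Gaussian likelihood over the whole grid; uniform prior -> posterior
pred0 = H[0, 0] * S1 + H[0, 1] * S2
pred1 = H[1, 0] * S1 + H[1, 1] * S2
loglik = -0.5 * (((y[0] - pred0) / sigma) ** 2 + ((y[1] - pred1) / sigma) ** 2)
post = np.exp(loglik - loglik.max())
post /= post.sum()

# Posterior mean severity per pier lands near the (unknown) truth
mean1 = float((post * S1).sum())
mean2 = float((post * S2).sum())
print(round(mean1, 2), round(mean2, 2))
```

The point of the sketch is the mechanics: the identifier never sees the true severities, only the noisy measurements, yet the posterior concentrates around them.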
For example, here we can see that combining the vertical displacement in the second span with that in the third span, or the first with the second, is the most informative, which of course makes sense: if the settlement occurs at these two piers, these vertical displacements are quite sensitive to the change. And remember, they did not know the magnitude of the settlement. What is more interesting is that, based on this type of result, we can build more useful information, in my view. As I said, we measure different things: bearing displacements, vertical displacements, rotations, strains in the concrete. So perhaps we can break that previous table down into something that tells you that, for this damage scenario of pier settlement, it is perhaps not worthwhile to invest in strain gauges in the concrete, nor in clinometers, nor in measuring bearing displacements; perhaps the most effective choice is to measure the vertical deflections, because they give you the most information about this specific damage. If we enlarge this analysis to all four damage scenarios and keep these groups of measurements, we end up with a kind of catalogue which tells you, depending on the damage scenario you are analysing, which type of sensors you should use to identify that specific damage most accurately. For example, if BRISA, or another owner, has bridges with scour problems and perhaps some pier settlements, this catalogue might be interesting to consult: for pier settlement problems, the best investment might be vertical displacement measurements rather than, say, dozens of strain gauges embedded in the concrete.
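The catalogue idea can be sketched as a small scenario-by-sensor-type table, where each entry scores how informative a sensor type is for a scenario and the best type is picked per scenario. All scores below are invented placeholders, not Lezíria results:

```python
# Sketch: build a damage-scenario / sensor-type "catalogue" by ranking
# sensor types per scenario with a simple informativeness score
# (e.g. signal-to-noise of the response change). Numbers are invented.

scenarios = ["bearing movement", "tendon stress loss",
             "pier settlement", "stiffness loss"]
sensor_types = ["vertical displacement", "rotation",
                "strain", "bearing displacement"]

# score[i][j]: assumed informativeness of sensor type j for scenario i
score = [
    [0.4, 0.3, 0.2, 0.9],
    [0.5, 0.4, 0.8, 0.2],
    [0.9, 0.6, 0.3, 0.2],
    [0.6, 0.5, 0.8, 0.1],
]

# For each scenario, record the most informative sensor type
catalog = {}
for scen, row in zip(scenarios, score):
    best = max(range(len(sensor_types)), key=lambda j: row[j])
    catalog[scen] = sensor_types[best]

for scen, best in catalog.items():
    print(f"{scen}: {best}")
```

An owner consulting such a table for a suspected pier settlement would read off that vertical displacements are the best sensing investment, exactly the kind of lookup described in the talk.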
So the idea, building on this catalogue, and what we want to do now in August, is to quantify the value of information. I have been thinking about it, and in fact last March BRISA was invited to give a presentation at BASt in Germany; I went there, met some people, and came across this nice paper by Sebastian. I found its decision tree very interesting, at least as a starting point. Looking at this picture, which I am extracting from the paper cited here, we have two options: no inspection, or inspection supported by the SHM system on the bridge. Then, regarding the damages, we can test this for our four damage scenarios. For these combinations we need to quantify the benefits and the costs along this decision tree; once we quantify those costs, we can go back and quantify the value of information for each damage scenario we are exploring. And the final idea, at least at this stage, and I might be wrong, is that we can end up with a catalogue for decision-making. Basically, something like this: I presented this table before; we have different damage scenarios in this catalogue, in this case only four, but this can be extended, and four different types of measurement, which can also be enlarged if you want. Then, based on the value of information, we can add a column which tells you, as a decision maker, what value you get if you use this information and these sensors, hopefully as a benefit for better maintenance of the structure: identifying the damage proactively rather than leaving the structure deteriorating until the next inspection.
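The value-of-information column can be sketched with a minimal two-branch decision tree: compare the expected cost of acting without monitoring against the expected cost of acting optimally on each monitoring outcome. All probabilities and costs below are placeholders chosen for illustration:

```python
# Minimal value-of-information sketch in the spirit of the decision
# tree discussed above. VoI = expected cost of the best decision
# without monitoring minus the expected cost with monitoring.

p_damage = 0.1                 # prior probability the damage is present
C_repair = 1.0                 # cost of a (possibly unnecessary) repair
C_failure = 50.0               # cost if damage is present and ignored

# Without monitoring: choose the cheaper of "repair" vs "do nothing"
cost_no_info = min(C_repair, p_damage * C_failure)

# With monitoring: an imperfect detector (assumed rates)
p_detect = 0.95                # P(alarm | damage)
p_false = 0.05                 # P(alarm | no damage)
p_alarm = p_detect * p_damage + p_false * (1 - p_damage)

# Posterior damage probability after each monitoring outcome (Bayes)
p_dmg_alarm = p_detect * p_damage / p_alarm
p_dmg_quiet = (1 - p_detect) * p_damage / (1 - p_alarm)

# Act optimally given each outcome, then average over outcomes
cost_alarm = min(C_repair, p_dmg_alarm * C_failure)
cost_quiet = min(C_repair, p_dmg_quiet * C_failure)
cost_with_info = p_alarm * cost_alarm + (1 - p_alarm) * cost_quiet

voi = cost_no_info - cost_with_info
print(round(voi, 3))
```

As the speaker notes later, the absolute cost magnitudes act as gradients here: scaling C_failure from thousands to billions changes which branch is optimal and hence the value of the monitoring system.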
So, finally, I have some questions. The first, maybe for BRISA: is this relevant as a complement to the visual inspection and maintenance operations for this specific bridge? Something that is quite important to discuss, also for the quantification of the value of information of this approach, is the type and magnitude of the costs involved. I think it is not only about maintenance and repair; for this type of damage on this type of bridge there can also be a cost to the reputation of the company, and the question is how to quantify that cost. It is also important to get the correct order of magnitude of these costs, because they act as gradients in this problem: if you are working in thousands, millions or billions, things will change. And, as a very clear objective of this Short-Term Scientific Mission: how relevant is it for the decision maker, for example BRISA, to have a catalogue of ways and solutions for tackling the most likely damage scenarios, mainly from the point of view of an infrastructure park? Not only for one bridge, but for a park with 1,000 or 2,000 bridges: could such a catalogue, as I presented it, be useful for gaining more knowledge and a more effective, proactive approach to better management of the structures?

Hélder, thank you very much. Let us address the questions directly, maybe the catalogue question first. What do you think? Is such a catalogue possible? Can it be valid for a class of structures? Is it useful? What is your idea?

We actually made a catalogue like this for a class of bridges in our country, and it was very useful, because it ended up with us changing the maintenance strategy: from maintaining them to letting them expire and replacing them with new ones. And that was a class of bridges of which we have several hundred.
So it was a very useful exercise to do. It was something like this: the visualization was similar, and the decision patterns were similar.

Thank you; I think that answers it. Any other reflections?

My name is António Reis; my company was the designer of the Lezíria Bridge. First of all, I would not worry much about the Lezíria Bridge's behaviour, now that we have such a good grasp of it. Now, let's think a little about monitoring systems. First, as it was in this case, the monitoring system must be conceived at the design phase; that is very important. Second, we must define the alarm levels clearly; it is very important for each owner to have the alarm levels perfectly defined. A specialized firm, laboratory or university can conceive and supply the monitoring system, but after construction there must be some entity responsible for the collection and treatment of the data, because one million records per year really is big data. It is also important to separate the effects in that data, for example the effects of temperature from creep and shrinkage, so that we can clearly understand the bridge behaviour. After that, I think it is very important for the designer to be consulted, to compare the real behaviour of the bridge with his model. Perhaps Hélder now knows the bridge better than I do, but I think my team and I know very much about it; the bridge was opened ten years ago, and since then I have consulted the database twice. So I don't know whether the bridge is okay or not, and I think it is important to know. So for you, who work on the investigation phase, it is important to know that you have to transform the million records into several alarm levels that must be clearly defined for the owner to act, because once those alarms are reached, it is very important to have clearly identified what action should be taken.
One other thing that I noticed in this presentation, and in other presentations I have seen: I think it is very important to work on increasing sensor life cycles. For example, if we have corrosion sensors with a useful life of five years, that is not enough; we must have sensors that remain reliable for 20 years or more. So thank you very much for your presentation. Just one more thing: after all this, it is very important that the collected data, after treatment, be analysed by the code authorities, to help calibrate the prescriptions of the codes.

Just a brief answer, because I think this deserves a lot of attention; I realize that Professor António prepared this discussion very well. I can tell you that in fact I am working with BRISA to make this data available again, so that all the data is properly analysed, because I think you are, in a certain way, the father of the bridge, and you are right about the thresholds: they should be very clearly defined, and the designer of the bridge should take an active part in the discussion of these thresholds. It makes sense to have this discussion with you. Another important point, though it is a separate discussion: as an expert, I do not believe that simply putting thresholds on all the sensors is the best approach for the utilization of this system. You have around 400 sensors; if the approach is only thresholds, then whenever something exceeds a threshold there must be someone able to understand, from the data, what is happening, and then verify it with the model. I think there are two levels here: first, an operational level, which BRISA can handle by themselves.
There are some algorithms that help them understand the patterns in what is happening; then, if something goes really wrong, depending on the severity, they can go to a second level and consult the specialists. If it is really severe, there is a third level. But this should be gradual, and I think there is a lot of work to be done at the first level, the processing of the information from the data, which is not at all restricted to putting thresholds on the sensors. But I am happy to continue this discussion with you.

So who should define the thresholds? You recommend the designer? I am saying that this should be discussed with the designer. I think so, because the knowledge that the designer has of the structure is quite relevant. Of course. In my opinion, we should not confuse thresholds with automatic actions; we are not talking about automatic actions. All the data should be analysed. As I said before, the team in the best position to analyse the information is the one that has the model and can interpret all the information. We are not talking about automatic thresholds or automatic actions; that kind of complexity is another matter. We are talking about analysing the information and defining a set of alert levels. In my opinion, this could work very well if we are able to put together all the teams: the investigators, the sensing technology, which is crucial, and, as António said, and I agree, the designer, who should be consulted on what to analyse and where to put the sensors. After that, thresholds should be defined, at least to provide levels that trigger alerts. We are not talking about automatic actions; we are talking about a framework for decisions.
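The graded-alert idea discussed above can be sketched as a simple mapping from a reading to an alert level that triggers a human process rather than an automatic action. The threshold values and level names below are invented placeholders, not the Lezíria alarm definitions:

```python
# Sketch of graded alerts: each level triggers a human process
# (operator review, specialist consultation, detailed assessment),
# never an automatic action. Threshold values are illustrative only.

def alert_level(reading_mm: float) -> str:
    """Classify a deflection reading (mm) into an alert level."""
    if reading_mm < 20.0:
        return "normal"      # routine operator monitoring
    if reading_mm < 35.0:
        return "level 1"     # operator reviews data patterns
    if reading_mm < 50.0:
        return "level 2"     # consult specialists / designer
    return "level 3"         # detailed assessment, possible action

readings = [12.0, 28.0, 41.0, 55.0]
print([alert_level(r) for r in readings])
```

In practice the levels would be defined per sensor group, in discussion with the designer, and each level would name the responsible entity and the action to take, as both speakers emphasise.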
The point is that an alert should trigger a decision process, to analyse whether we should act earlier or can postpone action. And the target in the end, this is the third part, is to capture all the benefits of this sensing information in order to take the best decisions along the life cycle of the bridge. The target is that, over the whole life cycle, we save the most while decreasing the risk and making the most of all the information we are in fact gathering. Thank you very much. So, summarizing, I think we all agree.