Well, thanks for the invitation. I probably won't be contradicting anything you've heard, but in some ways I wish I could have talked yesterday morning, only because I want to start this presentation by giving a bit of an overview of what design in engineering is, which we all know, tie that to what SHM requires, talk about how people are doing it now, and then explain why the approach this COST action is proposing is very synergistic with what makes the most sense. To conclude, I'm going to give some examples of where we've done simple design of SHM using some of these principles.

Okay, so stop me anytime if you have a question or a comment. I promise to go through the mathematical parts of the slides as fast as possible, so hopefully everybody's caffeinated.

Everybody's probably had a design course or five in your undergraduate studies. I took a formal definition of the word design, in general, nothing specific to SHM; it could be anything: design is the specification of something, manifested by an agent, intended to accomplish goals, using a set of components, satisfying requirements, subject to constraints. Don't a lot of those words show up in our discussions of the last couple of days? You start to see the relationship. It defines specifications, plans, parameters, costs, actions. I'm not making this up.
This is from the official source, okay: processes too, and all of it within legal, political, social, environmental, safety, and economic constraints while achieving those goals. Here's the reference (Ralph). So that's what design is.

Now, I believe Sebastian defined SHM yesterday, and I'll get there with a redefinition in this context. But there are basically two ways people have done design in history; you can argue that this is too simplistic. The first method is known as the rational model. It's basically an optimization problem involving known constraints and objectives, with lots of planning. It's understood in terms of discrete steps, maybe with interdependence, based on a rationalist philosophy with technical rationality at its core. As engineers, this is what we're most familiar with, and here's a quote to describe it: design is informed by rational thought, research, and knowledge in a controlled manner. That's how the civil and mechanical engineers in the room have mostly always done design.

Then came software engineering, and that changed everything. This is a big part of how software engineering is done today: it's much more creative and emotional, if you can imagine, in creating design candidates. It's improvised; nothing, or very little, is planned other than some loosely defined objectives. Analysis, design, and execution are contemporary and linked, not sequential. Objectives, constraints, and requirements aren't definable, because they're always evolving and (I should have underlined this word) have lots of uncertainty, another word you've heard a lot in the last two days. So those are the competing design philosophies.

So now let me define SHM and see what should motivate us.
This is very consistent with Sebastian's definition yesterday: SHM is the process of developing an in situ damage assessment capability for any component or system. In situ means we're not going to take the component out of service, unlike traditional non-destructive testing, although many of the techniques all of us are working on overlap tremendously with structural health monitoring; SHM and NDT are very related technology groups. Damage is defined as changes to the material or geometric properties of that system that affect performance. I make performance a bold word because I think we're all engineers in this room; are there any material scientists? I didn't pay attention yesterday. A material scientist would argue with this, right? A material scientist has a very different understanding of what damage is, but as engineers we care about performance. It's all about the goals of how the structure or system is going to perform.

Now, I separate SHM from something called damage prognosis, and that's a big part of what we're doing here too. That's the process of combining assessments from SHM with predictive loading and failure modes to make performance-level, risk-informed decisions regarding future actions. Again, you're starting to see the same kind of synergy; this is motivating why we're doing what we're doing. So what does SHM require, then, if that's the definition? Well, all of those things.
You've got to measure something: you cannot do SHM without an in situ measurement. A key step for the last 30 years in SHM has been the second one, which essentially means taking that data and turning it into features that you're going to use, leading to what I think Sebastian, Jochen, and Daniel called indicators or metrics. That process, this one bullet in my presentation, is 30 years of research. For some of you working in vibration, a feature might be a mode shape. If you're an ultrasonics expert (that's my area of most expertise), it might be a time of arrival or a wave speed or something like that; it doesn't matter. And then finally, the big one that's much less recognized in SHM: a framework for decision-making using the information from the feature extraction. You've got to direct decisions or actions, and I'm going to make fun of our field in a minute. And of course there are challenges.

So the question always becomes: can we have a generalized design principle that accommodates all the things I just said? Well, both design philosophies are needed, as it turns out, mostly rational, but a couple of elements of action-centric design are actually important for SHM. I don't need to read all of these. I would claim that SHM actually began action-centric in the early times, because here is what we used to do in our field when I started in this business in the 1990s: every single person who wrote a proposal said, hey, I want to put a thousand sensors on a bridge. That's the end of the proposal; it's all they would say. Oh yes, there's lots of money, go put a thousand sensors on the bridge. No constraint. No definition.
Just go see what you can see. That's how SHM started. Now, 30 years later, we are evolving much more toward a balanced, mostly rational approach to this design. Actually, I said that backwards a moment ago: it wasn't started by engineers. The people who started the modern paradigm of SHM actually had computer science backgrounds; they were machine learning experts with structural engineers kind of looking over their shoulders.

So what is the objective, the unifying principle? Risk. It was defined earlier today, and I think yesterday as well. I'm going to define it very simply: the unifying principle that should, in my opinion, guide SHM design is risk, and I'll explain what that objective function is later. It's the product of the probability of an event and the consequences of that event, more likely a series of events or a series of interdependencies. And the amount of risk that you can tolerate is known as safety, and that's up to the customer, right? Here are some very risky high school students standing at their dance during a tornado; they accommodate lots of risk. Going fishing, that's risky too, I suppose, a different kind of risk. Beachgoers in St. Martin, whatever it is. It's up to the customer, the user of the technology, what they can accommodate as a risk profile.

So if that's the case, why not keep doing what the entire field thinks it's been doing, and in my opinion hasn't been, for 30 years? In the USA we have something called the medical field risk assessment. When you go to the doctor, there's quite a series of questions they will ask, or they will ask themselves, maybe not you. Have you ever noticed the field is called structural health monitoring? Health. We should keep borrowing from the medical field to complete the transition to risk-based design. What can go wrong? What's the likelihood that it can go wrong, and what are the consequences and associated time scales? What can be done?
What are the trade-offs in terms of costs and benefits, and what are the impacts of current decisions on future decisions and options? Well, isn't that interesting? That's maybe 95% of what we've been talking about. So we should continue to borrow.

These medical questions can be turned easily into the kinds of engineering questions that we want for SHM design. They're the right questions. We want minimum risk, which is what I'm going to call optimal design under uncertainty. Now, what are the units of risk? Does it have physical units? Well, what are the units of a probability? Any answer is acceptable, but all answers are probably wrong: none. Probability is a dimensionless quantity, right? What about the units of consequences? Good point, I agree with you, but I would say: just talk to your insurance company. There is a price on your head. I would argue it almost always comes down to economics; you can almost always project consequences into a monetary unit. So risk is a money function. Ultimately it can be converted into whatever units of currency you want, but it's a money function, okay?

So here's where we are now in SHM, and if you believe the point I just made, here's why we're missing out in the design world. Like I said, I'm the editor of the SHM journal, have been for two years, and I was on the board for ten years prior, so I've seen a lot of papers. Every single paper goes like this. Section one: introduction. Section two:
Let's make a measurement of something. Section three: let's use a little bit of knowledge and lots of trial and error, try this, try that, try this again, and then throw away everything that doesn't work, because we don't want to publish the negative results. Then we get to something that kind of works, and we call it a feature. We take the sum of the squares of that feature and we call it a damage index. Every single paper I see in the SHM journal looks something like this. And I'll bet every paper has a graph that looks like this: a "damage index," in quotes. Everybody loves to show charts like this. Oh look, the damage index goes up; wow, I've got a correlated feature. Of course, but they never tell you, as you well know, that the error bars on this thing are enormous. And then, of course, chapter five: future work. How many times have you seen that? They leave all the hard stuff to somebody else. Guess what: I fail those papers.

First of all, this kind of trial-and-error approach, without thinking in a forward way about uncertainty modeling and so forth, throws away valuable information. It generates highly volatile, noisy, non-generalizing features with which you're going to make your decisions. There was a wonderful long-term study done in Europe, led by Bart Peeters (I'm sure some of you know him), on the Z-24 bridge in Switzerland. For features they were using, I believe, mode shapes and natural frequencies of the bridge, which they tested repeatedly. They showed that those sets of features varied about 25% over the course of a single day. And because they were able to do a controlled destructive test, they damaged the bridge down to half of its load capacity, and the features changed only two to three percent. Would you call that a good feature?
Think about it. Damaging the bridge to half its rated load capacity changed what they were using for decision-making by two to three percent, but over the course of one day, due to boundary condition changes from the environment, the same features changed around 25 percent. It turns out that just taking the sum of squares is rarely optimal. So I'm going to introduce some of you to a field that gets us to the optimal way to process data, because without the decision-making framework you've already been learning about, you can't evaluate performance in a meaningful way.

So I would claim this is a better way to do it. I'm not going to go through every block, but this is the part where we're going to make a big change that's going to help in the design process. We still have to measure something and use as much knowledge as we can to get features, but we need accurate feature statistics models to get those likelihoods (I know I'm getting ahead of myself here), and then derive what's known as a detector. How many people in this room have taken a formal class in detection theory? I'm going to be surprised if it's any. I didn't think so; I don't think we have any electrical engineers in the room. I'm not one either, but I took this class as a professor from a colleague, and it changed my life. I encourage all of you: sit in on a class in detection theory, Bayesian or classical. I will show you the optimal detector, which then leads you to the optimal decision boundary. You weigh that by decision costs, and that suggests the action, as we've been talking about.

So how do we do that? If that's all the case, we have to start the process by defining what to do. What did I say was the worst thing I hated about SHM in the 90s? It was all just "put sensors out there," right?
So instead, if we're going to do risk-based optimal design, we actually have to start by defining what it is we want to do. These are the four questions that every design process should start with.

What are we actually designing to monitor? I guarantee you that monitoring for corrosion is not going to get you the same results with the same system as monitoring for a fatigue crack; they're not the same thing. So you have to define what that is, and you have to have some definition, even if it's somewhat stochastic, of the limit states of those critical failure modes.

You also have to know what specific action and decision structure your SHM system is going to direct. Is it just going to be a simple binary decision: stop operation and inspect, or continue operation? It could be that simple; it could be more complicated.

What are the costs (you've been seeing this a lot) associated with the decisions and actions? What's the cost of a bad decision? What's the cost of a good decision? All those things that were shown in the decision tree analyses. What's the cost of the system itself? That's all part of the framework you saw.

And what are the constraints present in the design space? I can't put a thousand sensors on an airplane; no one's going to allow that, from the weight penalty alone, not to mention maintenance.

So, a little bit of math now, I apologize, but this is basically what Daniel was talking about, with different notation. Very generally, once you identify the constraints and the target limit states, what you need is the probability of observing those states given that you measured features, right? That's what we want.
We want the probability of observing whatever target state or states we care about, given that we've measured features (this could be raw data or something more sophisticated like mode shapes), for some fixed design. The design space is all the possibilities that go into the design of the SHM system: where you might put sensors, what kind of sensors, how often you query them, etc. That's what we want, and part of the reason we are all interested in a Bayesian framework is that this quantity is very hard to obtain directly. So we take advantage of Bayes' theorem, because it allows what we want to be converted into things we can actually estimate and/or model, using prior knowledge when appropriate. I don't need to go through the details here; you all know this now after two days. While the posterior is what we want, it's equal to these things: the probability of observing the features in a known state, which is a good model, or, if we have the fortune (which we rarely do) of actually observing failures, and in some fields this is possible, something you can use supervised learning to get, times the prior information. That gets us to what we want. This is the fundamental Bayesian mathematics, the physics if you will, behind what we're philosophically trying to do.

If that's the case, now I've got to bring in the consequences, the loss, the cost, of all of that probability. This is just a definition; I didn't make this up either. The expected cost for any given design is just the evidence weighted by some cost function. Now, some people define this in terms of a utility instead, and that's exactly how it's being defined in your notes. They're very related.
You can look at it very similarly: do you get utility out of a decision, or do you take a loss from a decision? They're two sides of the same coin. So it's just some well-defined utility or loss function that depends on the customer. The customer, or maybe the insurance company, tells us what the consequence, the cost, of a bad decision is, and then I can use Bayes' theorem to convert that into the things we can model and/or measure. So I just went from the general definition, applied Bayes' theorem, and got to this: a measure, as a function of design, of the cost associated with that design. It's a very general definition where I've made very few assumptions at this point.

But I'm missing one critical ingredient before I go any further: how do you get from data, the features, whatever you measured, to a decision? What's that link? That is where I have to bring in detection theory. This is why you should take that class. This is where I made the link in my mind, and I think it's very interesting. Basically, detection theory is just a method to quantify the ability to discern between information-bearing patterns under inevitable noise. It's a rigorous probabilistic way to do this, and it involves hypothesis testing, which we've all heard of. Bayesian detection theory, which is one form of detection theory beyond Neyman-Pearson (which I'll differentiate from in a minute), basically says: minimize the cost, the loss, of decision errors. That's the fundamental design principle.
That's what Bayesian detection theory tries to do: minimize the cost, or the loss, whatever units those are in, of a decision error; all possible errors you could make in the decision, expressed probabilistically from that previous equation, weighted by the loss function, whatever it is. It's not easy to do, but it can be done. And let's be honest about it: many real SHM applications involve binary decisions; as Daniel showed, it could be multiple, a series of composite binary decisions. Do I stop to inspect at this point, or not? And if you do the mathematics, you don't have to do it again, it's been done for us, and it all comes down to a likelihood ratio. The optimum detector that minimizes Bayes risk, or cost, is a ratio of two likelihoods: the likelihood of observing our data (I keep saying data, but I also mean features) in state i, divided by the likelihood of observing our data in state j, or between any two states. And the threshold (remember Sebastian asked a good question earlier: how do you get the threshold?) in Bayes theory is itself defined by the prior information on the two states you're discriminating, weighted by the costs of those decisions. You see where the business case comes into the engineering here, because this is where you choose that magical threshold at which you make the decision between a distribution of features that represents damage and a distribution of features that doesn't. And that's where cost comes in. Some structural owners have higher risk profiles.
They can tweak all of these numbers. Let me interpret them. This, by the way, is a simplified version where I assume a constant cost of decision in the binary case; it doesn't have to be constant, that just makes the math more complicated. C10 means: what is the cost of saying it's damaged (1) when in fact it's not (0)? Someone give me an example of what that would be, say, with an aircraft. If we're talking about monitoring an in-service commercial aircraft, the cost of saying it's damaged when it's not might involve a replacement, though it's going to be more than that; that's only part of it. You might replace a component you didn't need to. Delayed flights: you've lost a lot of money from taking the flight out of service. Inspection time. So there's a cost associated with that. C00 is saying it's not damaged when it is not damaged; that's just the cost of doing business, normal operation, whatever it is; usually you can normalize by that and let it be one. C01 is saying it's not damaged when it is. That's your insurance company saying, well, that's 187 lives times however many million dollars per life. And then the cost of saying it's damaged when it really is damaged: you still have to take it out of service, you still have to inspect, you still have to do maintenance and so forth. That's the simplified version, plus any prior knowledge. And you can see that if you have no prior knowledge between the two states, if I really don't know how often I should expect to observe a critical fatigue crack, these two priors would cancel out, wouldn't they? Fifty-fifty.
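Putting the threshold formula and those four costs together, here is a minimal sketch of the binary Bayes-risk detector; the costs and prior are made-up illustrations, in the C_ij notation above (cost of deciding state i when the truth is state j):

```python
# Binary Bayes-risk detector: decide "damaged" when the likelihood ratio
#   p(features | damaged) / p(features | undamaged)
# exceeds a threshold fixed by the priors and the decision costs.
# c_ij = cost of deciding state i when the true state is j
# (0 = undamaged, 1 = damaged). All numbers below are hypothetical.
def bayes_threshold(prior_damaged, c00, c01, c10, c11):
    prior_undamaged = 1.0 - prior_damaged
    return ((c10 - c00) * prior_undamaged) / ((c01 - c11) * prior_damaged)

def decide_damaged(likelihood_ratio, eta):
    return likelihood_ratio > eta

# Missed damage is catastrophic, false alarms merely expensive, so the
# threshold drops far below 1 and weak evidence already triggers inspection.
eta = bayes_threshold(prior_damaged=0.01,
                      c00=1.0, c01=1_000_000.0, c10=100.0, c11=200.0)
```

With a fifty-fifty prior the two prior terms cancel and the threshold is set entirely by the costs, which is exactly the cost-driven behavior described in the talk.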
That's what's called an uninformed prior: the priors would cancel out, and it's all cost driven. Your risk profile, where you choose that decision point for the likelihood ratio, is completely cost driven in the Bayes risk formulation.

Sure. Not necessarily; it might be for a given application, but it doesn't have to be. I mean, the cost of taking something out of service when it isn't damaged may not actually lead to replacement of the part, because you may look at it quickly and say, oh, the engine's fine. That costs less than actually taking it out of service and replacing an entire engine or a fan blade or something. So they don't have to be the same; it depends on the application.

So if you know that transformation function, known as the detector (that's why it's called detection theory), from continuous features to discrete decisions, we can use the standard change-of-variables formula, nothing special about that, to go to a global loss function: the discrete global loss function among all the states. Basically, those are all the pieces, and then the final step is to minimize that global cost over the design space. And there's a design principle. This isn't easy, but it's a design principle that will get you the minimum cost, defined by decision-error weightings, over the design space.

Okay, now I have an interesting question for somebody. Do you think the minimum-risk design is necessarily the same as the design.
I mean, the design that gives you maximum probability of detection or classification? You could probably guess from the way I'm asking the question what I think the answer is: no. They might coincide, but it's rare, and it's not guaranteed at all. Maximizing the probability of detection is what's called Neyman-Pearson, classical detection theory, where you choose a false alarm constraint you're willing to tolerate; that sets your gamma, your decision boundary in feature space, and then you maximize the POD, the probability of detection. That's what classical Neyman-Pearson detection theory says. We're not doing that; we're minimizing decision cost error in this design principle.

The pieces are all there, all those questions that have to be asked. I color-coded where they appear in that equation, so you can study that on your own time. All of those questions get mapped to variables in the design problem.

So this is the ugly workflow, the math where everybody's eyes glaze over and we go to sleep, but we don't have to: it's what I just showed you, step by step, of what the design workflow could be. Now, I will say this last little step, optimizing over the design space, is incredibly hard; as you'll see, even for simple examples it can be challenging, because it's a very discontinuous design space, and so forth.

So let me do an example, a very simple one first; I'll do a computational example, then a real one. My PhD students and I worked on this a few years ago: a simple example of design using this philosophy, optimal sensor placement. I'm going to constrain the problem: assume a wing structure undergoes bending and torsion overloading and is subject to impact on the leading edge, so it might hit debris, plus it has the usual bending and torsion fatigue. What's my allowable design space?
I'm going to optimize the number and placement of sparse-array ultrasonic transducers, piezo-type transducers. So we're not going to use vibration here; it's a different SHM technique. We're constrained by the customer: Airbus said, no, you have to use ultrasonics, because we're comfortable with it. Okay, let's do it.

So there's our problem. We have to start by figuring out how to get those likelihoods. I'm going to take this wing and consider a set of local damage modes defined by the location, orientation, and type of the damage: could it happen here, could it happen there? Remember what Daniel said: this is where the likelihood modeling starts. So let me define the problem very clearly and see what all the variables are.

Prior knowledge: fatigue cracks at 0 and 45 degrees. There, I've clearly stated the states I want; usually they have to be of a certain size, so many millimeters or whatever. Cracking is distributed over the structure but biased to the leading edge; that's prior knowledge. I expect crack initiation, perhaps on the leading edge, due to impacts.

I have to define my actions. This one is going to be a simple binary, one step: do we stop operation of the aircraft to inspect on the next cycle, or do we let it keep going in normal operation?

The constrained design space in this case is just the number and location of piezos. I define the decision and action costs; I make these numbers up, but that's okay for the purposes of illustration. The cost of inspecting an aircraft when there's no damage: one dollar per square centimeter (it's much more than that, but okay). Not inspecting when there is damage: $1,000,000 per square centimeter. So this would be called my false-positive cost. Okay, I'm sorry.
Well, it depends how you define negative and positive here. This first one is basically: it's not damaged, but we're spending some dollars to inspect it because we think it is. The false negative is where it's damaged, but we thought it was fine. And I'll say the sensor cost is $2 per transducer; that's probably the most accurate of the three numbers.

All right, and then I've got to form the likelihood functions. That's challenging, so let me show you step by step what I did. You've got to define the problem, though; you see I went through all the steps here. I thought this was a brilliant cartoon that I found, with a software engineer and a manager. The engineer asks, "What are we trying to accomplish?" "I'm trying to get you to design my software." "I mean, what are you trying to accomplish with the software? I don't know what I can accomplish until you tell me what the software should do." "I'm trying to get this concept through your thick head: the software can do whatever I design it to do." "Can you design it to actually tell me my requirements, then?" This is the cycle that SHM has been in for a long time between the engineer and the customer. Really, the customer, the constraints, and the problem have to be defined first.

So how am I going to build the likelihood function in this problem? I'll show you an example, very step by step. I'm going to use ultrasonics, so I apologize to anybody who has never done an ultrasonic sparse-array test; I'm going to show you the signal processing steps you go through to get your likelihood function. They may not be the same if you're doing strain monitoring or acceleration or mode shapes, and the math won't be, but the idea is exactly the same. What we do in ultrasonics is send energy from every sensor to every other sensor: we look for pulse-echo or pitch-catch arrivals of ultrasonic waves at some specified frequency, right?
One piezo actuates, the energy propagates out, all the other sensors see that energy, and the idea is that if there's a defect in the structure, it will scatter the ultrasonic energy and we're going to see that. So the first thing we have to do is subtract off the background, because everything scatters ultrasonic energy. Then we do some band-pass filtering, because we only put one tone in and we don't care about conversion or nonlinearity to other tones. And because it's impossible to estimate phase in a sparse array (the wavelengths are so small compared to the distances), we get rid of phase: if this is what the received waveform looks like, we're only going to look at the amplitude, the red curve.

Now, if I subtract the background and do it perfectly, so I do a test in a pristine condition, whatever that means, and then I do the exact same test again with no damage, what should that red curve look like? Zero. I'd better get zero, because I'll get the exact same red curve again, the same pattern of reflections bouncing off the boundaries, etc. If there's a defect somewhere, there's going to be a mismatch between the received and reference signals, and you'll see bumps. And you'll always see some, because, as we've been hearing, everything is uncertain. What if the temperature changed just a little bit between the two tests? What if your transducer did something? What about other random processes that can creep into the system? So you can imagine that the red curve is not flat; it's full of noise, and we've got to find the so-called needle in a haystack: that first blip that tells us there's a scatterer present in the wing.

So how did I get the detector? Here's the math. I made an assumption; you can always kill me based on assumptions, right?
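A toy version of that baseline-subtraction-and-envelope chain, using a sliding-RMS envelope as a stand-in for the band-pass filter plus envelope detection (the real processing is more careful than this):

```python
import math

# Toy baseline-subtraction + envelope step: subtract the pristine record,
# then take a sliding-RMS envelope of the residual. The sqrt(2) factor makes
# the envelope of a pure tone roughly equal its amplitude.
def residual_envelope(signal, baseline, window=8):
    residual = [s - b for s, b in zip(signal, baseline)]
    env = []
    for i in range(len(residual)):
        seg = residual[max(0, i - window):i + window + 1]
        env.append(math.sqrt(2.0 * sum(v * v for v in seg) / len(seg)))
    return env

# Identical pristine records: the residual envelope is exactly zero.
flat = residual_envelope([0.3, 1.2, -0.7, 0.5], [0.3, 1.2, -0.7, 0.5])

# A scattered tone present only in the second record shows up as a bump.
tone = [math.sin(2 * math.pi * n / 8) for n in range(64)]
bump = residual_envelope(tone, [0.0] * 64)
```

The "needle in a haystack" problem is that in practice the residual is never exactly zero, only noisy, which is why the statistical model comes next.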
But we have to start with a model somewhere. So I assume that the underlying waveform, that blue waveform, which is what I measure with my piezos, has Gaussian noise sitting on top of it. Now, if you assume the blue curve is Gaussian and go through all of these steps up to the second one, it's still Gaussian. I won't go through the reasons why, but the subtraction of two Gaussians is still Gaussian, and linear filtering doesn't change the process. Something funky happens when I take the envelope, though. Envelopes can only be positive, so it's no longer Gaussian there, and I have to model that. I'm also going to assume that each data point is independent. And if I actually do the mathematics to see how Gaussian data transforms into my feature space, it turns out it's Rician: it looks like a Rayleigh signal with a bias, a standard probability density function form.

So I have my hypothesis test between undamaged and damaged. What should my feature look like? It should be Rayleigh noise (a Rician with zero signal) if there's no signal, and Rician noise with an unknown amplitude, and unknown phase or variance, if there is a signal, because I don't necessarily know the size of the signal; I just know there's a defect.

So I have my feature statistics, and if I have my feature statistics I can plug into that detector formula, and look what pops out. I never in a million years would have guessed this is what I should do with my data: I should sum, over all the data, the log of the Bessel function of the data times the expected amplitude divided by the variance of the noise. Who knew? But that's what rigorous detection theory gets me to. Some of you may have heard the term for this.
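Under those Rician assumptions, the resulting log-Bessel statistic can be sketched numerically; the series expansion of the modified Bessel function I0 is only there to keep the sketch dependency-free (in practice something like scipy.special.i0 would be used), and the amplitudes are hypothetical:

```python
import math

# The log-Bessel detection statistic described above:
#   T(x) = sum_i log I0( x_i * A_i / sigma^2 )
# where x_i is an envelope sample, A_i the expected scatter amplitude
# (e.g. from a beam-spreading model), and sigma^2 the noise variance.
def bessel_i0(x, terms=40):
    # Power series I0(x) = sum_k ((x/2)^(2k)) / (k!)^2, fine for moderate x.
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= (x / 2.0) ** 2 / (k * k)
        total += term
    return total

def matched_filter_statistic(envelope, expected_amp, noise_var):
    return sum(math.log(bessel_i0(x * a / noise_var))
               for x, a in zip(envelope, expected_amp))

# The statistic is zero for a zero envelope (I0(0) = 1) and grows when the
# data line up with the expected scatter amplitudes.
quiet = matched_filter_statistic([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], 1.0)
loud = matched_filter_statistic([2.0, 2.0, 2.0], [1.0, 1.0, 1.0], 1.0)
```

Comparing this statistic against the Bayes threshold from earlier is what turns the envelope data into a decision.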
This is known as a matched filter. We're essentially taking our data, weighing it against the ratio of what we expect the data to be divided by the variance of the noise, taking the log of the Bessel function, and summing over all the data.

So how do I get the expected scatterer amplitude and noise level? Either from prior experiments, which is unusual, or from a model. I used a very simple model; finite elements weren't even needed. You just have to capture the relevant physics, and it turns out that in ultrasonic phased-array work you mainly need beam spreading: the amplitude of the wave from a scatterer should go down as one over the square root of the distance, times some factor that depends on the shape of the scatterer you've defined, called a scattering matrix. It's all calculable. So essentially we discretized that wing into little pieces and looked at every possible path.

And then I created an ROC, like Daniel showed us. Once I had that detector formed from the mathematics, I looked at how different models affected the detection performance. So another way you can use ROCs, in addition to the way Daniel was showing you earlier, is to compare detectors, or to compare pieces of how you got the detector.
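Here is a hedged sketch of that detection statistic, together with the empirical ROC construction used to compare detectors. The function names and arguments are mine; the statistic is the standard sum of log I0(x·A/σ²) terms that the Rician likelihood-ratio test produces, assuming the per-sample expected amplitudes and the noise variance come from your model:

```python
import numpy as np
from scipy.special import i0e

def rician_statistic(envelope, expected_amp, noise_var):
    """Sum over samples of log I0(x_i * A_i / sigma^2): x_i is the
    measured envelope, A_i the modeled scatterer amplitude, sigma^2
    the noise variance. i0e is the exponentially scaled Bessel
    function, so the log never overflows:
    log I0(z) = z + log(i0e(z))."""
    z = np.asarray(envelope) * np.asarray(expected_amp) / noise_var
    return float(np.sum(z + np.log(i0e(z))))

def empirical_roc(scores_h0, scores_h1):
    """Sweep a threshold over the pooled detector scores to trace
    out the ROC (false-alarm rate vs. detection rate) used above
    to compare detectors built from different models."""
    thr = np.sort(np.concatenate([scores_h0, scores_h1]))[::-1]
    pfa = np.array([(scores_h0 >= t).mean() for t in thr])
    pd = np.array([(scores_h1 >= t).mean() for t in thr])
    return pfa, pd
```

A detector whose ROC sits closer to the top-left corner (high detection at low false alarm) is the better one, which is exactly the comparison made on the slides.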
Down here is my ROC, false alarm rate versus detection rate, for the case where I assumed I knew nothing about the scatterer: I just said the scatterer itself was a random variable of unknown amplitude, uniformly distributed. No physics involved, and this is the ROC I got. Then I made the simple beam-spreading assumption and calculated, from a physics model, the scattering matrix for fatigue cracks at 0 and 45 degrees, and look how the ROC improved just from a little bit of physics: it went all the way up to here. Then I did a finite element model, thousands of minutes of calculation watching waves propagate through the thickness, and got negligible improvement. Remember, Daniel showed us that perfect performance on an ROC would be a line up here: perfect detection independent of false alarms, always 100 percent detection. So what this shows, as a way of comparing detectors, is that this one is a lot better than that one; it's closer to the best.

Okay, once I had the detector, I could do the global optimization problem: figure out how many sensors to use and where to put them. Now I'm going to do the minimization over all the possibilities of where to put sensors on that wing. I decided to use evolutionary algorithms, because it's a tough problem, where my design gene for each candidate is a location (x, y) on the wing plus whether I should use that transducer or not. Because if the algorithm says to put the sensor 20 meters away from the wing, well, that's pretty stupid, but it could do that, right? So we did all that, and here are the results that pop out, depending on sensor cost.
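Before looking at those results, here is a toy sketch of the evolutionary search just described, where each gene is a set of (x, y, use-it-or-not) triples. The wing dimensions, mutation rates, and the risk-function interface are all invented for illustration; in the real problem the risk function would be the Bayes risk computed from the detector:

```python
import numpy as np

rng = np.random.default_rng(0)
WING_W, WING_H, MAX_SENSORS = 2.0, 1.0, 8   # illustrative wing planform

def random_gene():
    # one (x, y, use-it-or-not) triple per candidate sensor
    xy = rng.uniform([0, 0], [WING_W, WING_H], size=(MAX_SENSORS, 2))
    on = rng.random(MAX_SENSORS) < 0.5
    return np.column_stack([xy, on])

def mutate(gene, scale=0.05):
    child = gene.copy()
    child[:, :2] += rng.normal(0, scale, size=(MAX_SENSORS, 2))
    child[:, :2] = np.clip(child[:, :2], 0, [WING_W, WING_H])  # keep sensors on the wing
    flip = rng.random(MAX_SENSORS) < 0.1
    child[flip, 2] = 1 - child[flip, 2]          # toggle use-it-or-not bits
    return child

def evolve(risk, generations=50, pop=20):
    """Keep the lowest-risk half of each generation (elitism) and
    refill with mutated copies; return the best gene found."""
    population = [random_gene() for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=risk)
        survivors = population[:pop // 2]
        population = survivors + [mutate(g) for g in survivors]
    return min(population, key=risk)
```

The clipping step is one way to handle the "sensor 20 meters off the wing" problem mentioned above; penalizing off-structure locations in the risk function is another.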
This is the optimum arrangement that minimizes Bayes risk, given all the costs that I put in. You can see that when sensors are cheap, the algorithm favors global coverage; when sensors get expensive, it tends to bias them toward the leading edge, in accordance with prior knowledge. You can put these things in terms managers like to see: for a given transducer cost, how many transducers give the minimum risk and what that cost was. You can flip that around and look at the total risk as a function of transducer cost. This can be used as design information, and in fact, if you have a reasonable model of your design before you build it, you can do this before building anything.

If you don't believe me, here's an experiment. It's not a wing; it's a piece of metal with a bunch of sensors. I asked a slightly different question, because I didn't want to take sensors on and off and move them around: I just put a bunch of sensors on there and said, choose the best 4, 8, 16, or 32. But I think you can appreciate that's a very similar problem to the previous one. So we did the same thing, with 192 potential damage locations, which I simulated by putting a little piece of lead on the plate, because that scatters ultrasonic energy pretty well. It's not real damage; it's just a way of moving damage around. Then I went and drank beer while my students stayed up all night and did the experiment.

Here are the results. On the left, before any experiments were done, is the predicted optimal arrangement if your budget is four sensors. On the right, after the 10,000 experiments were done, is the best four found by actually checking all 10,000 possible arrangements, brute force. Almost exactly the same. And this is the distribution of cost, or risk, associated with all 10,000 of the designs that were possible; in both cases we found the minimum one. And you can see the same thing as we add more and more sensors.
It's not precise, but all that shows is the robustness of the approach: the minima don't have sharp peaks in this case; the minimum cost is a little soft. So there's robustness, and that's probably a good thing from a design perspective. At the end of the day it's a design tool, and again, this design came out of Bayesian experimental design with minimum risk.

We did it on other structures, too. Here's a bolted frame, more of a civil structure, where damage was defined two ways: quarter turns on a bolt, and a magnet. We still used ultrasonics, because it's what I'm most familiar with. And here, for example, you can turn the problem around: instead of asking for the optimum design, you can ask a similar question, which is what we did here: in what order should you remove sensors to maintain a certain level of risk, or a certain risk profile? The numbering on the sensors is the order in which you should remove them: this is the least important sensor, and this is the most important. The blue five sensors were defined to be the minimum needed to achieve, in parallel, some nominal probability of detection, because sometimes, if you just use Bayes risk without checking what equivalent detection rate that gives you, you may irritate your customer.
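The talk doesn't spell out how that removal ordering was computed; one plausible way to generate such an ordering is greedy backward elimination, dropping at each step the sensor whose removal keeps the risk of the remaining set lowest. A sketch, with an invented risk-function interface:

```python
def removal_order(sensors, risk):
    """Greedy backward elimination: repeatedly drop the sensor whose
    removal leaves the remaining set with the lowest risk. Returns
    the sensors in removal order, least important first, so the last
    entry is the most important sensor. `risk` maps a list of sensor
    ids to a scalar (e.g., a Bayes risk); it is a stand-in here."""
    remaining = list(sensors)
    order = []
    while len(remaining) > 1:
        drop = min(remaining,
                   key=lambda s: risk([r for r in remaining if r != s]))
        remaining.remove(drop)
        order.append(drop)
    order.append(remaining[0])
    return order
```

Greedy elimination is not guaranteed optimal, but it produces exactly the kind of ranked list shown on the slide and only needs one risk evaluation per candidate per step.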
"Oh, the minimum risk solution gives you five percent probability of detection?" That may not meet some critical criterion. Here's the optimal number of sensors as a function of different cost ratios. Remember, you can put a different cost on a false positive than on a false negative; you can see the ratios up there. And you can see, depending on the damage we're targeting (only the bolt loosening, only the magnet, or both failure modes in the likelihood function), what the minimum number is. So here, no value is being added by adding more sensors, for this particular cost ratio; and here, for these other cost-ratio risk profiles, it takes more sensors. That's just what the optimum arrangements were for those four profiles. If you fix the number of sensors instead, you can play around with whatever your constraints are; the only difference between this and the previous slide is that we changed the objective function slightly, a fixed number of sensors at minimum cost. It could be anything you want.

So the last thing I'm going to say (I told you this wouldn't take an hour and a half) is how synergistic this design philosophy everybody's talking about is with a very well-established field: control. We are basically, equivalently, solving a stochastic control problem, so I want to show you just a few quick slides on that. What is control? Some of us have taken a controls class.
We know, broadly speaking, that it's the application of some form of feedback to a system in order to have the system behave the way we want. There are many strategies for doing it: pole placement, adaptive control, hierarchical control, stochastic control, optimal control, robust control, among many others; there are a thousand strategies out there. But I honed in on optimal control. It's a strategy that seeks to minimize a generalized cost index, a cost functional of all the state variables. It's very equivalent to what we're thinking about: there's an optimality criterion, a control law (that's what we're trying to find), and constraints, just like in SHM. It's originally attributed to Pontryagin and Bellman; in fact, the equation I'll show you in a moment is known as the Bellman optimality equation, and Bellman developed it to solve a rocket speed-and-fuel problem and some problems in economics and game theory.

Simple example: a driver on a fixed hilly road going from one place to another. Let's say we all have to go to the restaurant, each by ourselves. How should the driver accelerate and decelerate? That's the control law. A fixed amount of fuel could be a constraint: there's no petrol station between here and the restaurant. The known path is a constraint: there aren't too many roads from here to the restaurant. And the cost functional could be whatever you want it to be. Minimum time:
I've got to be the first one to get to the beer, because I know the students will have drunk it all by the time I get there. Minimum fuel: you've got to pay for fuel, and that's important. Or minimum total expenditure: that would be closest to the Bayes risk, the total cost we've been talking about today.

So, the Bellman optimality principle: an optimal policy has the property that, whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. It's recursive, just like what we're talking about here. Assuming discrete time and discrete control, we have a state equation that evolves our system. We may not know it perfectly, but we can model it or estimate it. And we have our decision constraints defined in terms of the state variables, which could be our features. Bellman's principle is expressed like this. Again, there's a utility, or equivalently, if you take the inverse, a loss function; in control it's normally done with a utility function, but again, there's the synergy. Find the decision that minimizes the global loss. We split out the first control decision and we get this: here's the first decision, here are the subsequent decisions, and the second term is just the loss. Then everything can be defined recursively: each further decision, or I should say loss, is defined in terms of the previous one. The idea is to solve for W, which then yields the optimal control decision. That's known as optimal control.

And it can even deal with uncertainty: if the problem contains uncertain parameters or noise, those can be included in the loss function. And all of a sudden this is starting to look just like the Bayes risk equation, isn't it?
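Bellman's backward recursion, W_k(x) = min over u of [L(x, u) + W_{k+1}(f(x, u))], is easy to write down in the discrete case. The sketch below applies it to a toy version of the driver example; the states, losses, horizon, and the terminal penalty standing in for "you must reach the restaurant" are all invented for illustration:

```python
def backward_induction(n_steps, states, controls, step_loss, dynamics, terminal):
    """Bellman's optimality principle in its simplest discrete form:
    W_k(x) = min_u [ L(x, u) + W_{k+1}(f(x, u)) ], recursing backwards
    from the terminal loss. Returns the cost-to-go from step 0 and
    the optimal decision rule for each step."""
    W = {x: terminal(x) for x in states}
    policy = []
    for _ in range(n_steps):
        W_new, rule = {}, {}
        for x in states:
            costs = {u: step_loss(x, u) + W[dynamics(x, u)] for u in controls}
            rule[x] = min(costs, key=costs.get)
            W_new[x] = costs[rule[x]]
        W = W_new
        policy.insert(0, rule)
    return W, policy

# Toy driver problem: position 0..3 along the road, control is speed
# (0 or 1), loss is fuel spent per step, and a large terminal penalty
# plays the role of "you must reach the restaurant" (position 3).
W, policy = backward_induction(
    n_steps=5,
    states=[0, 1, 2, 3],
    controls=[0, 1],
    step_loss=lambda x, u: float(u),
    dynamics=lambda x, u: min(x + u, 3),
    terminal=lambda x: 0.0 if x == 3 else 100.0,
)
# W[0] is the minimum fuel to reach the restaurant from position 0: 3.0
```

Replacing the deterministic `dynamics` with an expectation over noisy transitions turns this into stochastic optimal control, which is where the parallel to Bayes risk becomes explicit.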
Okay, that's another minimization problem, and look how equivalent they are. The top equation is the design principle we're proposing here, which incorporates all the things we've been talking about; the other is the Bellman optimality principle. Very, very similar. There are a lot of parallels between them; I'm color-coding the parallels between optimal control and structural health monitoring. So I think there's a lot we can learn from and use out of the control field, although I would claim almost nobody is publishing with this way of thinking, almost nobody. It's still new to SHM people, I think, to exploit the power of optimal control as a parallel to structural health monitoring.

Okay, some final comments. What I like about Bayesian experimental design is that it's a rigorous, consistent method for executing a risk-based design for SHM, because all the pieces that I think are needed are there. And it puts SHM design in the right sweet spot of trading off performance, risk, and cost. I mean, if all clients and customers had infinite capacity to pay us, there'd be no need for this; it would just be probability of detection or probability of classification. But that's not the real world. And I've found recently that optimal control is a very synergistic field that integrates the same elements, where the decision space is analogous to the design space; it's the same kind of policy question.

So I hope this lecture offered a somewhat different perspective, but I would claim a lot of the same things you're learning about and doing now are in it. I think I'll stop there and see if there are any comments or questions. Has anybody seen Bayesian experimental design in a statistics class before? No? Bayesian material just isn't taught much in what you had.
Yeah, you don't count. It's not taught very much, but in my opinion it's a rigorous, powerful tool that's the underpinning behind a lot of what Daniel taught. Any other comments or questions? It's a lot to absorb.

Audience: It's very interesting how you presented it, and you said that a lot of this is more or less the same material we've been discussing, just using different terms and theories. At the end you mentioned optimal control. Tomorrow I'm going to speak about sequential decision-making using Markov decision processes, and this Bellman equation comes up there too. It's the same thing.

Yeah, there is some literature on that, just very little.

Audience: People who come from optimal control do the same thing, but with their own terminology, and yet they certainly understand that it's always the same thing we're talking about. You call something a feature; they call it something else. It can be challenging for students; they get confused. It was helpful for me to find one theory, one approach (I take the decision-analysis view), stick to it, and really get deep into it, rather than trying ten different theories and staying on the surface of all of them. You want to have your feet on the ground, because at the end of the day you have to implement it; you have to do it yourself.

Yep, that's exactly what I've found too. And it was very interesting to watch all the lectures here; yesterday I kept thinking, yep, yep, the steps you're doing are very much a lot of this. Hopefully this is giving you a language and a toolbox to think about it, and maybe showing a little more detail about how to tie it directly to SHM.
So I hope that's what we accomplished, a little bit. Yes, a question?

Audience: So those features are just not optimal?

Let me show you something, one slide where we compared all the literature features to the optimal one, just to show you exactly what I'm talking about. I didn't put it in this presentation, but it's in another one that has some of the same information. This is a lecture I give on the details of Bayesian experimental design; let me find the right slide in this mess. Maybe it's under the detector chapter. See all that math I left out for your benefit? You can thank me later. Ah, here it is.

On that frame I showed you, the mechanical bolted frame, I went to the literature for everybody who uses a damage index for bolted-joint detection, took their algorithms, and replicated them myself. Then I derived, for the bolt problem, what I consider to be the optimal detector. Now, it's not the same optimal detector I just showed you, because this is a problem where I assumed I knew nothing about the reflections. In that bolted joint there is no way I can model the ultrasonic energy through all those turns and impedance changes; it's not a simple plate, it's just too hard. So I said, okay, I will treat my expected amplitude as a random variable. I won't go into the details of how that changes things, but it does, a little bit; same idea. And the optimal detector I derived was very close to the sum of squares for that problem. It's called the energy detector, and that's the red ROC curve. These others are what everybody else in the literature gave for the same data set, the same problem. So you can see it's truly optimal.
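For concreteness, the energy detector just mentioned is simply a sum of squares, and the near-equivalent literature feature computes energy only in a band around the excitation frequency. A small sketch of both (my own illustration, assuming NumPy; the function names are mine):

```python
import numpy as np

def full_energy(x):
    """The 'energy detector': total sum of squares of the waveform."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def narrowband_energy(x, fs, f0, bw):
    """Energy only in a band of width bw around f0, via the FFT. For a
    narrowband excitation nearly all of the energy lives there, which
    is why this band-limited feature ends up statistically equivalent
    to the full energy detector."""
    x = np.asarray(x, dtype=float)
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    band = (f >= f0 - bw / 2) & (f <= f0 + bw / 2)
    return float(np.sum(np.abs(X[band]) ** 2))
```

For a tone-burst excitation the two statistics are monotone functions of essentially the same quantity, so they produce the same ROC, which is the point being made here.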
Now, you can also see that the blue dashed curve is very close, and that feature is essentially calculating the energy at the center band of the excitation, which is where most of the energy is anyway. It's statistically equivalent; it's the same thing. We calculated the full energy from the waveform; they calculated the energy in a narrow frequency band, and they got almost the same number. My point in showing you that is that there is an optimum. Again, the energy detector for the rib, which was a BAe 146 fuselage frame where we had to find blind-test rivet holes, was also optimal, and the detectors that come very close to it turn out to be essentially just different ways of calculating energy. So they have the same statistical performance, and that's all that matters; I think that point was made in the lectures.

Let's say I ask twenty of you to come up with what you think the best feature is. You're going to try your vibration method, I'm going to try my ultrasonics, you're going to measure strain, you're going to measure temperature, you're going to dip it in water and dance, whatever you want to do for your inspection, and we're all going to create an ROC. The only thing that matters is not how many steps or how much processing you did, but the statistical model of that feature: what are the likelihoods? That's all that matters. And what I've found in a lot of studies in the literature is that people are reinventing damage indices and metrics that have exactly the same statistical performance as work that was done 25 years ago. That's not really new, is it? Maybe it's a new feature, oh look, the Mike Todd feature.
This is the coolest thing ever, but it has the same statistical performance as something some strain-energy guy wrote about in 1977. All that matters down the road is the statistical performance.

So I'll make a final point about that. If any of you want research projects someday: how do you actually process data, from raw data to a feature, so that you can show it has the right, optimal statistical performance, some maximum-likelihood separation between those two distributions? What are the signal processing steps I should apply to an acceleration time history to achieve that? That's what optimal detection theory is getting at, but it doesn't tell you what to do to go from an acceleration record to a feature; you just have to do it by brute force. Can you actually solve that inverse problem? I don't know. Maybe machine learning will help. I don't know.

Anyway, anything else? Okay, good. Well, we'll see you guys at the bar.