Let us start today's lecture of this NPTEL video course on geotechnical earthquake engineering. We are going through module number 7, which is seismic hazard analysis. A quick recap of what we learnt in the previous lecture. We have seen the shortcomings of deterministic seismic hazard analysis, which lead to the requirement of another method, probabilistic seismic hazard analysis, because in this case we take care of all the uncertainties involved in the earthquake event to estimate the hazard for a particular location, region or site. The major characteristics of deterministic seismic hazard analysis are that only a single magnitude, m max, is considered, a single source-to-site distance is considered, and the effect of that single magnitude and distance is assumed to estimate the DSHA hazard. Whereas in probabilistic seismic hazard analysis we consider all the magnitudes involved, all the distances involved and all the effects involved; that is, the probability or uncertainty involved in the event is completely taken care of. Cornell, through a 1969 paper in the Bulletin of the Seismological Society of America (BSSA), first introduced this concept of probabilistic seismic hazard analysis, or PSHA. Since then there has been rapid growth in the area of PSHA, and it is still continuing today. So, in PSHA we have already learnt the input data: the seismicity model, the seismicity distribution in space and time, the magnitude-frequency distribution, the maximum possible earthquake m max, the ground motion prediction equations or GMPEs, which are nothing but attenuation relationships in terms of magnitude and hypocentral distance, and the site response model. Finally we get the output, that is, the probability p of exceeding a given ground motion level within a time period t. So, the four major steps of PSHA,
which we have already seen, are: first, to identify all the sources and their distances from the site; then the recurrence relationship, that is, the number of earthquakes greater than a certain size as a function of earthquake magnitude; then the ground motion prediction equations (GMPEs), accounting for the uncertainty involved in those relationships; and finally, to obtain the probability of exceedance of a particular level of a given ground motion parameter. Then we have seen the uncertainty involved in the source-to-site distance; that is, we can have various source-to-site distances if the fault or the rupture covers a large area or a large length. So, what is the mode of considering this uncertainty? We can find out r min, r max and all the other distances from the site to the source, and obtain the probability distribution function of r through a histogram. Then we have seen how to proceed with that: if we have a linear source, from the site we can divide that linear source into a number of segments, either using the concentric-circle approach or using equal increments along the length of the linear source, and then find out the individual distances, which finally give us the probability distribution of distance, that is, distance versus probability of occurrence. Similarly, for an area source we subdivide the area into a number of small segments, and for a volume source we divide it into a number of small volumes. If we have an unequal source, that also we have seen: we can divide it into unequal areas and weight each distance accordingly while calculating the histogram, using the weighting factor of each of them. Suppose one sub-area is a 1 compared to the total area A; then a 1 by A will be the weighting factor for that distance; similarly, suppose another sub-area is a 10,
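The weighting-factor idea just described can be sketched in a few lines of Python. This is a minimal illustration; the function name and the (distance, sub-area) data layout are my own, not from the lecture:

```python
def distance_histogram(segments):
    """Discrete distance distribution for a subdivided source.

    segments: list of (distance, sub_area) pairs; the probability
    assigned to each distance r_i is its weighting factor a_i / A,
    where A is the total source area.
    """
    total_area = sum(area for _, area in segments)
    return {r: area / total_area for r, area in segments}

# A hypothetical source split into three unequal sub-areas
hist = distance_histogram([(10.0, 2.0), (20.0, 5.0), (30.0, 3.0)])
# the weighting factors 0.2, 0.5 and 0.3 sum to 1
```

The same construction works for linear or volume sources, with segment lengths or sub-volumes in place of sub-areas.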
So, a 10 by A will be the corresponding weighting factor for that distance, and like that we can build the histogram including the weighting factors. Next, we have seen how to characterize the maximum magnitude. It is similar to the DSHA process: either using empirical relationships with the length, area or surface displacement of the fault, or theoretical determination through the seismic moment concept or slip-rate correlations. And we often find the distribution is such that earthquakes of low magnitude occur quite often, while large-magnitude earthquakes are very rare. Based on that, Gutenberg and Richter proposed an earthquake recurrence model which looks like this: if we take the earthquake events at a particular region and plot the number of occurrences against magnitude, both on normal scales, it follows a decaying distribution, which on a log scale of the number of events versus a normal scale of magnitude becomes the linear relationship proposed by Gutenberg and Richter. It is expressed through the mean annual rate of exceedance lambda m, which is nothing but the number of events exceeding magnitude m divided by the time period t. The recurrence interval is nothing but the inverse of the mean annual rate of exceedance. So, if we plot log of lambda m on this axis, the other side can show log of T R increasing in the reverse direction of lambda m, but they still follow the same relationship. Hence the Gutenberg-Richter law for such earthquake data looks like log of lambda m equals a minus b m, where a is the intercept on this axis at m equals 0, with the value 10 to the power a from here, and b is the slope of this recurrence line. These a and b coefficients need to be obtained for various regions based on the collected historical earthquake data.
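The Gutenberg-Richter law and its recurrence interval can be evaluated directly. A minimal sketch in Python, where the coefficient values a = 4.0 and b = 1.0 are assumed purely for illustration, not taken from the lecture:

```python
def gr_exceedance_rate(m, a, b):
    """Mean annual rate of exceedance from log10(lambda_m) = a - b*m."""
    return 10.0 ** (a - b * m)

def recurrence_interval(m, a, b):
    """Recurrence interval T_R = 1 / lambda_m."""
    return 1.0 / gr_exceedance_rate(m, a, b)

lam_m6 = gr_exceedance_rate(6.0, a=4.0, b=1.0)  # 10^(4-6) = 0.01 events/year
tr_m6 = recurrence_interval(6.0, 4.0, 1.0)      # 100-year recurrence interval
```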
So, the Gutenberg-Richter law can further be expressed in terms of natural logarithms, with alpha and beta obtained by converting the base-10 coefficients a and b to natural-log form. And the earthquake magnitude is expressed in terms of a lower threshold magnitude m naught, below which we engineers are not interested. That is why there is a lower bound, and hence lambda m has been expressed by the expression proposed by McGuire and Arabasz in 1990. Now, the probability distribution function can be expressed like this. If we look at an example of worldwide data for the circum-Pacific belt, with the proposed equation, for various magnitudes m we can easily calculate the corresponding recurrence intervals T R. But we are not quite sure whether it gives the correct result beyond a certain value, as shown over here. So, we have to select an upper bound also, up to which this equation is valid or applicable. To know the validity or applicability of the equation, we have to bound it by a maximum magnitude m max for that region, whatever maximum value was accounted for from the known earthquake data. That is known as the bounded Gutenberg-Richter recurrence law; hence the equation changes to include not only the threshold value m naught but also this m max. The modified McGuire and Arabasz equation for the mean annual rate of exceedance is given accordingly, and hence the probability distribution function is given by the expression considering both maximum and minimum. That means the magnitude should lie within the range between the threshold value m naught and the maximum value m max. Then we have seen that for the distribution of earthquake magnitude we require all these characteristics; to arrive at the earthquake recurrence law, all this information is necessary.
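The bounded recurrence law and the magnitude distribution it implies can be sketched as follows. This is my own rendering of the standard bounded Gutenberg-Richter form in terms of alpha, beta, m naught and m max; the sample parameter values in the test are assumed for illustration:

```python
import math

def bounded_gr_rate(m, m0, mmax, alpha, beta):
    """Bounded Gutenberg-Richter mean annual rate of exceedance,
    valid only for m0 <= m <= mmax."""
    nu = math.exp(alpha - beta * m0)  # rate of all events above threshold m0
    num = math.exp(-beta * (m - m0)) - math.exp(-beta * (mmax - m0))
    den = 1.0 - math.exp(-beta * (mmax - m0))
    return nu * num / den

def magnitude_pdf(m, m0, mmax, beta):
    """Probability density of magnitude, bounded between m0 and mmax."""
    return beta * math.exp(-beta * (m - m0)) / (1.0 - math.exp(-beta * (mmax - m0)))
```

Note the two sanity properties: the rate drops to zero at m = m max, and the density integrates to one over the bounded range, which is exactly what "the magnitude should lie between the threshold and maximum values" means.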
Hence the seismicity data and geologic data can all be clubbed together to obtain a characteristic earthquake recurrence law for a particular region. Also, for the predictive relationship we should know the conditional probability, considering the standard error involved in the attenuation relationship, for a given value of m equals m star and r equals r star. Now, for the temporal uncertainty, that is uncertainty in the time of the event, we take care of it through the Poisson model of probability distribution, which is expressed in the standard form. So, the occurrence of an event at least once, which is nothing but N greater than or equal to 1, has probability 1 minus e to the power minus lambda t using this Poisson relationship. We have seen in the previous lecture, through an example, the probability of at least one occurrence of an event using this Poisson probability distribution: suppose an event occurs once in 1000 years on average, then lambda will be 1 by 1000. The probability of occurrence of that event at least once in 100 years can then be computed, which comes out to be 9.52 percent. But for the same problem, the probability of occurrence at least once in 1000 years is not 100 percent, as a layman would generally say, but 63.2 percent. Now, expressing this lambda in terms of the probability p, we can get the annual rate of exceedance, or the corresponding return period, for a 10 percent probability of exceedance in 50 years. That gives us a return period of 475 years; the same calculation with a 2 percent probability of exceedance in 50 years gives a return period of 2475 years.
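All the numbers quoted in this recap follow from the Poisson model; a short sketch reproducing them, with function names of my own choosing:

```python
import math

def prob_at_least_once(lam, t):
    """Poisson model: P[N >= 1 in t years] = 1 - exp(-lambda * t)."""
    return 1.0 - math.exp(-lam * t)

def rate_for_probability(p, t):
    """Invert the Poisson model: lambda = -ln(1 - p) / t."""
    return -math.log(1.0 - p) / t

p_100 = prob_at_least_once(1 / 1000, 100)       # about 9.52 percent
p_1000 = prob_at_least_once(1 / 1000, 1000)     # about 63.2 percent, not 100
tr_10pc = 1.0 / rate_for_probability(0.10, 50)  # about 475 years
tr_2pc = 1.0 / rate_for_probability(0.02, 50)   # about 2475 years
```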
And, as we have mentioned, we use these values for the practical design of any structure in earthquake-prone areas, depending on its importance: that is, whether we consider the earthquake event occurring with a return period of 2475 years, that is with a smaller probability of exceedance, or with a somewhat higher probability of exceedance and a smaller return period. With that we summarized in the previous lecture that the four types of uncertainty involved are: with respect to location, that is the site-to-source distance; with respect to size, the magnitude probability distribution function; with respect to effect, considering the standard error in the attenuation relationship; and with respect to timing, based on the Poisson model. Now, in today's lecture we should look at how to combine all these uncertainties involved in an earthquake event together, because these are not independent events; all these uncertainties occur together. So, we need to look at the conditional probability, that is the dependency of one uncertainty on another. So, let us look at combining uncertainties: probability computations. We start combining all these uncertainties using the total probability theorem. What does the total probability theorem give us? From basic probability theory we all know that, if this is the domain, the probability of occurrence of an event A will be nothing but the probability of A intersection B 1, plus the probability of A intersection B 2, and so on up to the probability of A intersection B n. Which is nothing but: the probability of occurrence of A equals the probability of A given B 1 multiplied by the probability of B 1,
plus the probability of A given B 2 multiplied by the probability of B 2, and so on up to n events. So, applying this total probability theorem, we can write that the probability of the hazard parameter Y, which we are going to compute, exceeding some value y star will be given by the probability of Y exceeding y star for a given X, multiplied by the probability of that X, integrated over the entire range of X. That means we have to integrate over the probability distribution function of X. Here X is a vector of parameters, which is nothing but all the uncertainties involved. We assume that M and R are the most important parameters among all the uncertainties. So, we need to find out the dependency on them in this form: the probability of the parameter exceeding y star is given by the double integration of the probability of exceedance for given values of m and r, multiplied by the probability distribution function of M and the probability distribution function of R, which come from the uncertainties involved in the magnitude and in the distance. Like that, we can compute the conditional probability, combining the probabilities together. Clear? So, writing the same thing again, the above equation gives the probability that the given value y star will be exceeded if an earthquake occurs. That is, suppose we are interested to know, as a hazard, the probability that the peak ground acceleration at a site exceeds, let us say, 0.3 g.
So, we need to find the probability of that earthquake PGA exceeding 0.3 g for all the given conditions of probability, that is, with the magnitude uncertainty taken care of and the source-to-site distance uncertainty taken care of; then we can find out the combined probability. It can be converted from a probability to an annual rate of exceedance by multiplying the probability by the annual rate of occurrence of earthquakes; that means lambda of y star can be computed in this fashion as well, as nu times this probability, where nu is nothing but e to the power alpha minus beta m naught, the mean rate of events above the threshold magnitude. Now, if the site of interest is subjected to shaking from more than one source, that is N S sources, then while obtaining lambda of y star, the mean annual rate of exceedance of y star, you should consider the effects coming from all the sources, that is, the sum over i from 1 to N S of this integration of the probability. Now, for realistic cases the probability distribution functions for M and R are too complicated to integrate analytically. So, what do we do? We integrate numerically; that is, we do not integrate analytically, but numerically; we will discuss that very soon through examples also. So, now we divide the range of possible magnitudes and distances into N M and N R increments; that is, instead of doing the analytical integration, we are doing numerical integration. For the numerical integration we are dividing the magnitude range, whatever range it spans, into N M increments, and the source-to-site distance range into N R increments, in addition to the N S sources. That is what it says; it should take this shape when we do the numerical integration.
So, this is the numerical integration: the sum over i equals 1 to N S, the sum over j equals 1 to N M, by which we take care of the integration over the magnitude uncertainty, and the sum over k equals 1 to N R, by which we take care of the uncertainty due to the distance, of nu i times the conditional probability of exceedance for a given m equals m j (because this is ranging over j) and r equals r k (because this is ranging over k), times the probabilities P of M equals m j and P of R equals r k. Each of these individual probabilities we have to find out, and that gives the final value of lambda of y star which we are interested in. Now, what does it mean? Let us look at it very carefully. We have already mentioned that lambda of y star, the mean annual rate of exceedance of a particular value of, say, 0.3 g of PGA, is expressed by this equation, combining all the uncertainties; that is, we have taken care of the number of sources, the number of earthquake magnitudes and the number of distances. So, what does each of them indicate? Let us go one by one. The term i ranging from 1 to N S means all sources are considered. What does the next term, j equals 1 to N M, refer to? Look at this blue-colored box: all possible magnitudes are considered, and the contribution of each is weighted by its probability of occurrence. That is, we are not neglecting any magnitude; we are taking all magnitudes between the threshold magnitude and the maximum magnitude. As we have discussed through the McGuire and Arabasz equation, the probability distribution of M should lie between m naught and m max. So, all the magnitudes come with their weighting factors. Why does the weighting factor come into the picture? Because, of course, it depends on the number of occurrences; that is taken care of in this term.
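The triple sum just described can be sketched as a small Python function. The data layout here (a per-source rate nu, discrete magnitude and distance probabilities, and an exceed_prob callback standing in for the GMPE) is my own illustration of the lecture's formula, not a standard API:

```python
def hazard_rate(y_star, sources, mags, dists, exceed_prob):
    """Numerical PSHA sum:
    lambda(y*) = sum_i nu_i * sum_j sum_k
                 P[Y > y* | m_j, r_k] * P[M = m_j] * P[R = r_k]

    sources: list of dicts with keys 'nu' (annual rate of events above
    the threshold magnitude), 'p_m' (probability of each magnitude bin)
    and 'p_r' (probability of each distance bin).
    exceed_prob(y_star, m, r): conditional exceedance probability from
    the ground motion prediction equation.
    """
    lam = 0.0
    for src in sources:
        for m_j, p_mj in zip(mags, src['p_m']):
            for r_k, p_rk in zip(dists, src['p_r']):
                lam += src['nu'] * exceed_prob(y_star, m_j, r_k) * p_mj * p_rk
    return lam
```

If the conditional probability is 1 everywhere, the sums over the magnitude and distance bins collapse to 1 each, and lambda reduces to the sum of the source rates, which is a useful sanity check.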
What does the next term, k equals 1 to N R, take care of? Let us look at this red box: all possible distances are considered in this probability, and the contribution of each is weighted by its probability of occurrence. That distance probability also we have seen: dividing the distances in terms of r min, r max and all the other distances from site to source, whether the source is linear, areal, volumetric or of unequal size. Combining them, we get all the information. Then the last term takes care of all possible effects: each is weighted by its conditional probability of occurrence, corresponding to the given values of m j and r k. That is what it says: the probability of occurrence of the event Y greater than y star for a given m j and a given r k. That is the combining of uncertainties. Now, how to understand it more easily, or in a better fashion? When we do the numerical integration, what have we done? We have divided the ranges basically into N M magnitude segments and N R distance segments. So, we have to look at a two-dimensional grid of boxes where all N M times N R possible combinations have to be taken care of. Each produces some probability of exceedance of y star, and we must compute the probability of the event Y greater than y star for given values of m equals m j and r equals r k, for all values of m j and r k. So, suppose this axis talks about the probability distribution in terms of magnitude, and this axis gives us the probability distribution in terms of distance. We have different histogram bars for different magnitudes like m 1, m 2, m 3 and so on. Obviously, the lower magnitudes will have more occurrences and the higher magnitudes will have fewer occurrences; like this, the histograms come into the picture.
Similarly, for the distances also we can get the histograms, taking the weighting factors into account. Now, for each combination, say m 2 together with r 3, there will be a combined probability contribution; each of these boxes has to be taken care of when we combine the uncertainties to calculate the probability of the event exceeding y star. Clear? So, compute this conditional probability for each element on that grid, as we have just mentioned, and enter it in a matrix, that is, in a spreadsheet of cells: the value of probability you get for each of these cells is recorded, and then we have to combine them to get the total probability. So, now we consider the effect of the attenuation relationship: say for m equals m 2, this is your attenuation relationship; you will have a different attenuation curve for each magnitude. It is not a single one as in deterministic seismic hazard analysis, where we have taken only m equals m max; here we are considering all magnitudes, remember, m 1, m 2, m 3, for each of the sources. Here you get the mean value; let us say y equals y star is the level we are interested in. At various distances, that is at the distance values r 1, r 2, r 3, each prediction will have some standard error. Now, from that probability distribution, the area above the value y star for a given m equals m 2 with the condition r equals r 1 is this green shaded region I am showing now. Whereas the probability of Y greater than y star for the same m equals m 2, but with r changed to r 2, will be this shaded portion. And the probability of the event exceeding y star for m equals m 2 and r equals r 3 will be this shaded portion; similarly for r 4, r 5 and so on.
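The shaded areas being described are values of the conditional exceedance probability. Assuming, as is common for attenuation relationships, that the natural log of the ground motion parameter is normally distributed about the GMPE mean with the quoted standard error, a sketch is (the function name is my own):

```python
import math

def prob_exceed_given_mr(y_star, mean_ln_y, sigma_ln_y):
    """P[Y > y* | m, r], assuming ln Y ~ Normal(mean_ln_y, sigma_ln_y):
    the area of the distribution lying above ln(y*)."""
    z = (math.log(y_star) - mean_ln_y) / sigma_ln_y
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# At the median prediction the exceedance probability is exactly 0.5
p_median = prob_exceed_given_mr(0.3, math.log(0.3), 0.5)
```

As the distance r grows, the GMPE mean drops, so the shaded area above the fixed y star shrinks, which is exactly the progression from r 1 to r 2 to r 3 in the figure.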
So, all of these together give us the probability of occurrence of the event for a given m equals m 2 over all values of r. That means, if we look at the grid, for m 2 all the values of r are the boxes you are taking care of in the numerical integration. Got it? Similarly, you can do it the other way: for a particular r, you can go over the various values of m. Now, you will have different attenuation relationships; this one is for m equals m 2, and you will have others for m equals m 1, m equals m 3, up to the last magnitude bin, and for each of them you will have a probability distribution function that also needs to be considered. That is why, as we have just mentioned, each of them contributes to the hazard by computing this conditional probability for each element, that is for r equals r 1 with m 2, r equals r 2 with m 2, r equals r 3 with m 2; these are the boxes where you get the conditional probability. Clear? Now, you have to repeat this process for each source and place the values in the corresponding cells. When it is complete, the sum of all of them will give you the lambda value. Then we need to choose a new value of y star and repeat the entire process, is it not? Because to develop the curve you need to compute the conditional probabilities for another value of y star and obtain another pair. Each pair of log of lambda of y star versus y star builds up the final seismic hazard curve. Earlier, in deterministic seismic hazard analysis, we got the result as a single value; here, in probabilistic seismic hazard analysis, what you get is a curve of log of lambda of y star, with log of T R on the other side increasing in the reverse direction, against the value of y star.
So, this one single point you get by doing all this analysis: by combining all these things you get only one value corresponding to one y star, let us say 0.3 g. We have to repeat it for another value, say 0.4 g, and so on; if you generate all of these, the combined result gives you the seismic hazard curve in that fashion. So, it is an iterative process of computing the probability of exceedance. That is why most of the time you cannot do it by hand; you have to use some computer program to repeat this process over the probability distributions and compute the seismic hazard curve. Now, how do we use this seismic hazard curve? Let us see. The seismic hazard curve shows the mean annual rate of exceedance of a particular ground motion parameter. A seismic hazard curve is the ultimate result of a probabilistic seismic hazard analysis; as I have said, this is the ultimate output you get from a PSHA. Now, if we want to use it, let us see how. Say we want to know the probability of exceeding an a max value of 0.3 g in a 50-year period from a given probabilistic seismic hazard curve. Suppose we have derived this seismic hazard curve for a region. How do we use it for our design? The result is used in this fashion. We want to know the probability of exceeding a max equals 0.3 g. So, on the known probabilistic seismic hazard curve, we go to the value corresponding to 0.3 g, drop it onto the curve, and read the value of lambda from the curve. Clear? Then use that value of lambda in the time-related uncertainty expression: take the lambda from your probabilistic seismic hazard curve and put it there. Now, you are interested to know, in a 50-year span of time, the probability of exceeding that 0.3 g.
So, put t equals 50 in the equation; the probability comes out to be about 4.9 percent, a very low probability. Whereas, if for the same result you want to know the probability over a 500-year period, you will get 39.3 percent, because the lambda value remains the same and only t changes in the equation. So, it will obviously have a higher probability when your time scale increases to 500 years. That depends on the design life of your structure. So, that is the way we use the probabilistic seismic hazard curve for seismic design. Are you clear now where we use this probabilistic seismic hazard curve? Now, let us see the application in the other direction: what peak acceleration has a 10 percent probability of being exceeded in a 50-year period? This is the other way of looking at, or using, the probabilistic seismic hazard curve. What does it say? For your structure, suppose you have decided to consider a 10 percent probability of exceedance over a 50-year time scale. You want to know what value of peak acceleration, that is, what design acceleration, you should use. This is the more common use of probabilistic seismic hazard analysis in design practice. Now, how will you get 10 percent in 50 years? For a particular region you already have the seismic hazard curve developed. A 10 percent probability in 50 years corresponds to T R equals 475 years, or a lambda value of 0.0021 per year, which we have already seen in our example using the Poisson distribution, is it not? It is known to us already. So, on that curve, look for the lambda value of 0.0021, or the T R value of 475 years, draw that horizontal line to where it intersects the curve, and drop down from there; whatever value of a max you get is your design value. Suppose here you get a max equals 0.21 g.
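Both directions of use follow from the same Poisson relation. In this sketch the rate lambda equals 0.001 per year is an assumed value that happens to reproduce the 4.9 percent and 39.3 percent figures quoted above; in practice it would be read off the hazard curve at a max equals 0.3 g:

```python
import math

def prob_in_window(lam, t):
    """Probability of at least one exceedance in a t-year window."""
    return 1.0 - math.exp(-lam * t)

lam = 0.001                        # assumed rate read off the hazard curve
p_50 = prob_in_window(lam, 50)     # about 4.9 percent
p_500 = prob_in_window(lam, 500)   # about 39.3 percent

# Reverse direction: choose 10 percent in 50 years, get the target rate,
# then read the design a_max off the hazard curve at that rate
lam_target = -math.log(1 - 0.10) / 50   # about 0.0021 per year, T_R ~ 475 years
```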
So, that means the peak acceleration corresponding to a 10 percent probability on a 50-year scale should be taken as 0.21 g. When you want to design your structure with this percentage of probability on a 50-year scale, you have to take the peak acceleration for design as 0.21 g. Is it clear how we make use of this probabilistic seismic hazard curve in our practical design procedure? Now, let us talk about contributions from various sources. When we talk about contributions from various sources, we have to use various seismic hazard curves. We can break the lambda value down into the contributions from each source; that is, this is the combined or total curve, which we obtained considering the combined probability over the N S sources. Now, if you want to know about an individual source, how does it affect your seismic hazard curve? Suppose a geologist or seismologist gives you the information that, say, source 1 has been more active in the recent past, over a 100-year period, whereas, let us say, sources 2 and 3 have not been so active in the last 100 years. So, you should always look not only at the total probabilistic seismic hazard curve, but also at the individual source representation. That means, instead of considering all the sources together, if you break them up and consider each one of them, each will come out something like this: say source 1 comes like this, source 2 like this, source 3 like this. In this probabilistic seismic hazard analysis you have taken care of only the magnitude and distance probabilities, not the relative activity of each source, got it? So, we can break the lambda values down into the contributions from each source and plot the seismic hazard curve for each source, with the total seismic hazard curve equal to the sum of these source curves; and the curves need not be parallel, quite obviously. They may cross each other, and that shows which source is most important; that is, suppose one curve crosses another,
obviously that will show the significance or importance of that particular source compared to the other sources. Clear? So, let us look at an example here. We can develop the seismic hazard curve for different ground motion parameters: this lambda value we can generate for a given y star, and we have mentioned that y can be the PGA, the spectral acceleration, the spectral velocity, anything. That is what is mentioned: you can generate it for peak acceleration, spectral acceleration or any other parameter. Now, choose a desired value of lambda to be used and read the corresponding parameter values from the seismic hazard curves. One of them will be the total curve and the others will be the individual ones; from that you can get the a max value if you are expressing your hazard in terms of peak acceleration, or the S a value, the spectral acceleration, if you are deriving the probabilistic seismic hazard curve in terms of spectral acceleration. Now, one example for peak acceleration is shown over here, say for 2 percent in 50 years, for different source types: if it is an inter-plate event, this is the curve; if it is an intra-plate event, this is the curve; if it is a crustal event, this is the curve. Like that, for different sources you can identify different curves, and you can always say, for a particular region, which curve is more predominant, depending on their criss-crossing nature, that is, which one gives the higher design value of a max or S a. For example, how do we know that for peninsular India it is not inter-plate but intra-plate seismicity that dominates? From this probabilistic seismic hazard curve itself. Clear? This is the way you can easily find out, for a particular region, which source or which type of source dominates: whether it is fault movement or crustal movement, and if it is a plate movement, what type of plate movement, and so on.
Similarly, in terms of S a also, that is, when you are talking about spectral acceleration, you can find it out for a given natural period. It can be estimated for, say, T equals 3 seconds, and for other periods also. Why is this period important? Because it relates to your superstructure: whatever structure you are going to develop or construct, its natural period guides your design value of S a, and the corresponding probabilistic seismic hazard curve has to be taken care of. Next is the uniform hazard spectrum, or in short UHS. A UHS typically looks like this: it is a plot of spectral acceleration versus natural period T. We find the spectral acceleration values for different periods at a constant value of lambda, so that all the S a values have the same lambda, that is, the same probability of exceedance; with the same probability of exceedance you have to find this uniform hazard spectrum. What does it mean? It means that if we consider a 2 percent probability of exceedance in 50 years, with a return period of 2475 years, we get one uniform hazard spectrum; if we take a 10 percent probability of exceedance in 50 years, that is a return period of 475 years, we get another uniform hazard spectrum curve. Clear? So, we get different uniform hazard spectrum (UHS) curves corresponding to different probabilities of exceedance, and accordingly, based on the importance of your structure, you can select which S a versus T curve, that is which UHS curve, you should use for design. Clear? Now, let us come to another subtopic, which is known as deaggregation (also called disaggregation).
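Building a UHS amounts to reading one ordinate per period off that period's hazard curve, all at the same lambda. A sketch follows; the hazard-curve data here are entirely hypothetical, and log-log interpolation between curve points is my own assumption:

```python
import math

def hazard_ordinate(curve, lam_target):
    """Log-log interpolate one S_a value at a target rate from a hazard
    curve given as (sa, lam) pairs with lam decreasing as sa grows."""
    for (sa1, l1), (sa2, l2) in zip(curve, curve[1:]):
        if l2 <= lam_target <= l1:
            f = (math.log(lam_target) - math.log(l1)) / (math.log(l2) - math.log(l1))
            return math.exp(math.log(sa1) + f * (math.log(sa2) - math.log(sa1)))
    raise ValueError("target rate outside the curve")

# One hypothetical hazard curve per natural period; the UHS takes one
# ordinate from each curve at the SAME lambda (here 10 percent in 50 years)
lam_475 = -math.log(1 - 0.10) / 50
curves = {0.2: [(0.1, 0.01), (0.5, 0.0001)],
          1.0: [(0.05, 0.01), (0.3, 0.0001)]}
uhs = {T: hazard_ordinate(c, lam_475) for T, c in curves.items()}
```

Repeating the same loop at a different target lambda (say 2 percent in 50 years) gives the other UHS curve mentioned above.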
Now, a common question arises: what magnitude and distance does that a_max value correspond to? That is, as we have mentioned, from a probabilistic seismic hazard curve you are getting some design value of a_max which you can use for your design. Now, you are interested to know from which source and which magnitude this value mostly arises. Why? Because obviously it has contributions from all sources and all magnitudes, but there is a weighting factor, and we are interested to know which one dominates. So, what do we need to do? We need to deaggregate this result, that is, go back and look for which magnitude and which distance are most influential for this design value. Why is it necessary? Suppose we can avoid that source, or retrofit in some way, or achieve some kind of isolation from that source; that will be very good, and that is the need for this concept of deaggregation. So, let us look at deaggregation here. The total hazard includes contributions from all combinations of m and r, as we have already mentioned, but we can break that hazard down into contributions to see where the majority is coming from. So, suppose the chosen or design value comes out to be 0.09. If we break it up in terms of distance and in terms of magnitude, we will automatically see that it comes mostly from the bin corresponding to a distance of 75 and a magnitude of 7, as shown over here. Different values are there, as you can see, and among these we have already chosen only the maximum one. So, now we are deaggregating that: we are looking at different distances and different magnitudes and asking what their contributions are. We find that the major contribution comes from this distance and this magnitude, so this bin will be dominating when we deaggregate our data.
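The deaggregation step just described — splitting the total hazard into magnitude-distance bins and picking the dominant one — can be sketched as below. The bin contributions are invented for illustration (chosen so they sum to the 0.09 of the lecture's example); real contributions come out of the PSHA integration itself:

```python
# Hypothetical deaggregation: contribution (annual exceedance rate) of
# each magnitude-distance bin to the total hazard. All values are
# made up for illustration only.
mags = [5.0, 6.0, 7.0]        # bin centres, magnitude
dists = [25.0, 75.0, 125.0]   # bin centres, km

contrib = {
    (5.0, 25.0): 0.010, (5.0, 75.0): 0.004, (5.0, 125.0): 0.001,
    (6.0, 25.0): 0.012, (6.0, 75.0): 0.015, (6.0, 125.0): 0.003,
    (7.0, 25.0): 0.008, (7.0, 75.0): 0.025, (7.0, 125.0): 0.012,
}

# Total hazard is the sum over all (m, r) combinations.
total = sum(contrib.values())

# Percentage contribution of each bin, and the dominant (modal) bin:
percent = {b: 100.0 * lam / total for b, lam in contrib.items()}
m_dom, r_dom = max(contrib, key=contrib.get)

print(f"dominant scenario: M {m_dom} at R {r_dom} km "
      f"({percent[(m_dom, r_dom)]:.1f}% of total hazard)")
```

Here the modal bin comes out as magnitude 7 at 75 km, matching the dominant combination the lecture points to.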
So, in that fashion, if you want to use the USGS site: immediately, or a little after, any earthquake, they deaggregate and publish on their site the deaggregation data for various earthquakes and various regions, so that it identifies which source actually dominated. It may happen that during a major earthquake not only one source was involved; maybe multiple sources were involved. Now, to identify the major source, this deaggregation process can be very useful. So, one example is shown over here: this is for Seattle in Washington, US, with 2 percent probability of exceedance in 50 years time, that is, with respect to a return period of 2475 years, with spectral acceleration corresponding to a time period of 0.2 seconds, that is, a lower time period; you can see the deaggregation over various magnitudes, right. Similarly, for another region, Olympia in Washington, you can see different histograms, can you see over here? Various histogram values are given, again for the same T_R value and the same S_a corresponding to a T value of 0.2 seconds. Then, for the same location but another natural period, look at here once again: a higher value of time period. Why are we interested in different time periods? Because it depends on what type of buildings are generally there in that locality. As a thumb rule, which we will see later, the number of storeys of a building divided by 10 is considered as the natural period of the building; I am telling this again as a thumb rule. Suppose we are talking about a 20-storey building: 20 divided by 10 will be 2 seconds. So, 2 seconds is typically the natural period of a 20-storey building; I am saying again, it is typical, not the exact period. How to obtain the exact period we will see later on in another subtopic. So, typically, T equals 1 second will denote a 10-storey building, right, whereas T equals 0.2 seconds will denote just a 2-storey building.
That means this deaggregation will give information about which source, which magnitude, and which distance are most effective in affecting shallow or low-rise structures, whereas the other deaggregation data will give us the information about which source and which distance are important to consider for a tall or high-rise structure. Can you see the use of them, clear? Now, another deaggregation parameter, by which we can account mathematically for the uncertainty in the attenuation relationship: suppose we have these values, how to obtain this value of epsilon, the standard normalized residual? The probability of exceedance involved is the area under the normal distribution curve beyond y star, as I have already mentioned, and epsilon itself is nothing but ln of y star minus ln of y bar, the mean value, divided by whatever standard deviation is involved in that regression analysis. So, for a low value of y star, mostly these epsilon values will be negative, as you can see over here; if you select the y star value pretty low, obviously these values will be negative, and if it is above the mean it will come out positive, quite obvious; for high values of y star these are mostly positive and large, clear. Now, let us come to another subtopic which is important, known as the logic tree method. What is the logic tree method? Now, we have talked about various uncertainties, majorly four uncertainties, involved while computing this probabilistic seismic hazard analysis. Now, all these uncertainties are not equally important, am I right? There can be different importance for the different uncertainties involved in the process.
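The epsilon definition just stated, epsilon = (ln y* − ln ȳ)/σ_lnY, is easy to check numerically. The median and sigma below are hypothetical illustrative values, not a real GMPE's coefficients:

```python
import math

def epsilon(y_star, ln_y_mean, sigma_ln_y):
    """Standard normalized residual of the attenuation relationship:
    the number of (log) standard deviations by which the target level
    y* lies above or below the predicted median,
        epsilon = (ln y* - mean ln y) / sigma_lnY."""
    return (math.log(y_star) - ln_y_mean) / sigma_ln_y

# Hypothetical numbers: suppose the GMPE predicts a median PGA of 0.1 g
# with sigma_lnY = 0.6 for the scenario of interest.
eps_low = epsilon(0.05, math.log(0.1), 0.6)   # y* below the median -> negative
eps_high = epsilon(0.30, math.log(0.1), 0.6)  # y* above the median -> positive
```

This reproduces the sign pattern noted in the lecture: low y star gives negative epsilon, high y star gives positive and large epsilon.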
So, we have to find out the most appropriate model, and it may not be clear which attenuation relationship or magnitude distribution is most appropriate, because for the effect we are considering through the attenuation relationship we do not know which model is best: various attenuation relationships may be available for a particular region. For example, for India, for north-east India, and for the Himalayan region of India, we have already learnt that there are several attenuation relationships proposed by various researchers. Now, which one is most correct and which one is least correct, we do not know. So, we have to give them different importance based on experience and expertise, and that gives us this concept of the logic tree to handle the different uncertainties. Similarly, for the magnitude distribution also: how the magnitude is distributed depends on what model you are considering and which equation you are using, whether it is the empirical relation of Wells and Coppersmith, or a seismic moment based calculation, or a plate tectonic movement based relationship; all are empirical relations, so there also uncertainties are involved. Now, which model should be given more priority and which one least priority, you do not know; that consideration comes through this logic tree method. Similarly, experts may disagree on the model parameters also, like fault segmentation and the maximum magnitude. There will always be different schools of opinion: one expert will tell you this attenuation relationship is good, another expert will say no, that attenuation relationship is good, another will say no, my attenuation relationship is good. So, how to propose a better, more realistic, or mathematically more defensible seismic hazard value? That is why this logic tree comes into the picture.
So, let us look at this: suppose, when we are computing, we have various attenuation models; in the example given, the B-J-F model and the A-and-S model. Let us give equal weightage to them; say we do not want to get into any controversy over which is more correct and which is less correct, so let us give equal weightage. Now, within them you can use different magnitude distributions: one can use the Gutenberg-Richter magnitude distribution, another can use the characteristic earthquake magnitude distribution. Now, based on your experience you can give different weightage; let us say the Gutenberg-Richter recurrence relation is considered more correct, so 70 percent weightage is given to that and 30 percent weightage is given to the characteristic model; it depends on the engineer, of course. Similarly for the other model. Now, when you are computing the M_max value, you can see that the different values get different weighting factors: 0.2, 0.6, 0.2 here, and different values there also. How are these values arrived at? You can see the example over here. Suppose we go through this branch: we have selected this attenuation model, let us say we have selected the Gutenberg-Richter model; then for M_max 7.5 it comes to 0.5 times 0.7 times 0.2. So, you should consider the weighting factor as 0.07 when you are using this logic tree branch in your probabilistic seismic hazard calculation. The final value of y, the design parameter, is then obtained as the weighted average over all the values at the terminal branches. These w's are nothing but the weight factors: if you go through this model, this recurrence relation, and this M_max value, your weighting factor is 0.07; if you go through that model, that relation, and that magnitude, your weighting factor is 0.21; like that, you obtain the different values by going through the different attenuation models, magnitude distributions, and magnitude values.
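The terminal-branch weights described above are simply products of the weights along each path, and they must sum to 1 over the whole tree. A sketch using the lecture's weights (the branch labels and M_max values are assumptions for illustration):

```python
from itertools import product

# Illustrative logic tree following the lecture's numbers: two
# attenuation models (equal weight), two recurrence models (0.7/0.3),
# and three M_max values (0.2/0.6/0.2). Branch names are placeholders.
attenuation = {"BJF": 0.5, "AS": 0.5}
recurrence = {"Gutenberg-Richter": 0.7, "Characteristic": 0.3}
m_max = {7.5: 0.2, 7.0: 0.6, 6.5: 0.2}

# Terminal-branch weight = product of the weights along the path.
weights = {}
for (a, wa), (r, wr), (m, wm) in product(
        attenuation.items(), recurrence.items(), m_max.items()):
    weights[(a, r, m)] = wa * wr * wm

total = sum(weights.values())  # must equal 1.0 over all branches

# e.g. BJF -> Gutenberg-Richter -> M_max 7.5 gets 0.5 * 0.7 * 0.2 = 0.07
w_example = weights[("BJF", "Gutenberg-Richter", 7.5)]
```

The final design parameter is then the weighted average y = sum of w_i times y_i over the terminal branches, where each y_i is the hazard value computed with that branch's model combination.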
So, by using these logic tree weighting factors, finally you can get the final value of y in the probabilistic seismic hazard estimation. With this we have come to the end of today's lecture; we will continue further in our next lecture.