Hello everyone, welcome to what is probably the last presentation for you. In this session I will introduce the concept of statistical STA (SSTA); this is where the STA methodology is moving, given the amount of complexity in technologies below 45 nanometers. We are not yet at a stage where we do 100 percent SSTA, but we are already using some tools and methodologies that are statistical in nature. So I would say STA methodologies are divided into two types: one is still OCV based, and the other is beyond OCV. In the last session we discussed what OCV is and how it is configured, and we saw the methodology where we select one operating condition per run and, within that operating condition, apply timing derates to model variation. In this session, which is a comparatively shorter presentation, we will see why that is not good enough, why it is becoming a lot more complex for technologies beyond 45 nanometers, and what sort of technique is replacing it.

Let us first look at the Gaussian, or normal, distribution. The Gaussian distribution is a continuous probability distribution used to describe a real-valued random variable. The concept of a random variable is essential to the whole idea of statistical analysis. Consider a non-semiconductor scenario first. Say one of you goes ahead and measures the height of all the males in one particular city. For a big metro the data will run into lakhs of samples, and you will find that the data is concentrated around an average value: it will be a bell-shaped curve, a Gaussian curve, centered on that average. Height is obviously a function of genetics and other factors, but when you measure it over a big sample the variable behaves as a random variable clustered around one mean value. The same thing happens in fabrication. For a 45 nanometer process there are millions or billions of transistors on a chip; not all transistors will have exactly 45 nanometer channel length, some will have less and some will have more, but they tend to cluster around a single mean value. The probability density function is

\[ f(x) = \frac{1}{\sqrt{2\pi\sigma^{2}}}\, e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}} \]

where \(\mu\) is the mean and \(\sigma^{2}\) is the variance, the measure of the width of the distribution; the square root of the variance is called the standard deviation.
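Here is a minimal numerical sketch of that density function, evaluated for a hypothetical channel-length distribution; the 45 nm mean and the 1.5 nm sigma are illustrative assumptions, not real process data.

```python
# A minimal sketch (illustrative values only) of the normal PDF defined above,
# evaluated for a hypothetical channel-length distribution centered at 45 nm.
import math

def normal_pdf(x, mu, sigma):
    """f(x) = 1/sqrt(2*pi*sigma^2) * exp(-(x - mu)^2 / (2*sigma^2))"""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

mu, sigma = 45.0, 1.5          # assumed mean and standard deviation in nanometres
for L in (42.0, 45.0, 48.0):   # lengths below, at, and above the mean
    print(f"L = {L:4.1f} nm  ->  pdf = {normal_pdf(L, mu, sigma):.4f}")
```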
You do not need to remember this formula; you just need to understand that any such variable in a fabrication process will have a distribution of this kind. In fact it might not have an exactly normal distribution, but it is modeled as such; it is an approximation. The idea is that this continuous probability distribution is used as a first approximation. The variable under consideration might actually follow a different distribution, but most of the variables that affect the performance of the chip, like channel length, thickness of the oxide and so on, can be approximated as normally distributed. There are other probability distributions too, but we will stay focused on the normal distribution.

The graph looks like this. In the upper half of the slide you can see three normal distributions with \(\mu = 0\). The blue curve has mean 0, it is centered around 0, and its variance is 0.2, so the standard deviation is the square root of that. The red one has the same mean but a larger variance. The yellow one again has mean 0 but a still larger variance. You can see that the greater the variance, or the greater the standard deviation, the wider the curve: the standard deviation is a measure of the width of the Gaussian curve. The green curve has mean minus 2 and variance 0.5. As an engineer working in fabrication, the idea is to keep the standard deviation under control, because the greater the standard deviation, the greater the variance; that is literally what it means.

Now examine the yellow curve and suppose it represents the channel length of NMOS devices, centered on the target length. This is not what you want. The y-axis here is the probability density, or you can even say the number of devices. Yes, the maximum number of devices sit at the target channel length, but a lot of devices show channel lengths well above and well below it, and the curve is quite wide. We do not want that; we want a curve like this other one, sharper and less wide, with less variance and less standard deviation. This is what manufacturing tries to control.

One more interesting statistic: if you take the area around the mean, 68 percent of the area under the curve lies within one standard deviation, 95 percent lies within two standard deviations, and 99.7 percent lies within three standard deviations. This third fact, that 99.7 percent of the area lies within three standard deviations, is the idea behind corner-based analysis: we try to choose corners for STA that cover up to plus or minus 3 sigma. If you plot the devices, the minus 3 sigma side will be the faster devices and the plus 3 sigma side will be the slower devices. So if you want a yield of more than 99 percent, you try to sign off at plus or minus 3 sigma. That is the important figure here: plus or minus 3 sigma.
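As a quick check of those 68/95/99.7 figures, here is a small sketch using the standard normal CDF via math.erf; nothing in it is specific to any process.

```python
# A minimal sketch verifying the 68-95-99.7 rule with the standard normal CDF.
# Phi(x) = 0.5 * (1 + erf(x / sqrt(2))); coverage within +/- k sigma is Phi(k) - Phi(-k).
import math

def phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

for k in (1, 2, 3):
    coverage = phi(k) - phi(-k)
    print(f"+/-{k} sigma covers {coverage * 100:.2f}% of the area")
# +/-1 sigma ~ 68.27%, +/-2 sigma ~ 95.45%, +/-3 sigma ~ 99.73%
```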
Let us go to the next slide. The idea is that whatever corner is chosen for corner-based STA should cover up to 3 sigma: the slow corner on the plus 3 sigma side and the fast corner on the minus 3 sigma side. This is only an example, because there are hundreds of parameters in a process; here we have plotted one parameter, channel length, against the number of samples, and again we get a Gaussian curve around a mean value. We choose a corner somewhere around here and here, and this is the way we target a yield of more than 99 percent, almost 100 percent. The yield is never exactly 100 percent, but that is the idea.

Now let us talk about process variations: how many types there are and how we model them. Process variations are of two types, systematic and non-systematic. Systematic variations are, I would say, problems in the fabrication process; they are something unacceptable, such as the thickness of the oxide being lower than some critical limit, or the channel length being beyond a permissible value. Systematic process variations are not handled in STA; they are handled in the fabrication process, which should be corrected to get rid of them. Non-systematic variations are the ones that STA, or SSTA, tries to model.

Non-systematic variations can again be die-to-die or within-die. Die-to-die means they vary from die to die; they affect one die more and another die less. In a particular wafer, the dies lying at the center of the wafer might behave somewhat differently from the dies lying around the periphery. Within-die means the parameter affects the performance of different devices on the same die differently. Within-die variations are again of two types: spatially correlated and independent. For example, take two parameters, the channel length of the MOS devices and the thickness of the oxide. If these two parameters show no correlation to each other, meaning the distribution of one is entirely independent of the distribution of the other, then they are independent. But if we find that in a particular area the devices that have a larger channel length also have a thicker oxide, then they may be spatially correlated. This is very important for SSTA: SSTA needs to know whether the parameters in question are independent or spatially correlated.

Now let us look at die-to-die variation, the non-systematic die-to-die category. As I told you, systematic variations require correction in the fabrication process, and some of the within-die variations cannot be fixed in the fabrication process, so we have to handle them in timing analysis.
Let us look at die-to-die variation first. Die-to-die variations are also called global variations; global is the more common term for die-to-die or inter-die variation. They are variations in process parameters that affect all devices on a given die similarly: within a particular die they affect everybody in the same way, and the variation shows up from one die to another. Say there is some issue with the fabrication where the dies at the corner of the wafer have slower PMOS devices; it can happen that all the PMOS on those corner dies are slower compared to the dies at the center of the wafer. That is an example of die-to-die variation.

Here is an example where we plot the number of samples against a parameter called GP1 and get a Gaussian curve. In practice GP1 may correspond to, for example, the device threshold, the Vt value, of a standard PMOS. We already know GP1 is a global parameter, so PMOS devices in all locations of one die correspond to the same value of GP1. That means if you plot GP1 for one particular die, it will look like this, and if you plot GP1 for a different die, it might look almost the same but slightly shifted: the distribution is nearly identical, but the mean may be shifted. This is die-to-die variation.

Non-statistical STA is also called deterministic STA. In this case the slow process model will belong to the plus 3 sigma corner, and the fast process model will belong to the minus 3 sigma corner. Die-to-die variation is handled in deterministic STA by assigning a particular corner, a worst-case corner or a best-case corner. What will that worst-case corner be? Look at this curve a bit differently: assume the curve is still GP1, but now the number of samples is the number of dies, so the GP1 plotted here is the average value of GP1 on each die. Say I plotted GP1 across 1000 dies: for each die I took the average Vt of the PMOS on that die, and plotting these I again get this type of curve, because there is a global variation. Now I choose a plus 3 sigma corner and a minus 3 sigma corner and do deterministic STA. This ensures that all these samples are captured in my STA. This is how deterministic STA handles it: the corner represents the global variation. That is why, in corner-based STA, we call the operating condition a global corner, because it corresponds to global values. That is what we discussed in the earlier sessions of this series whenever we talked about operating conditions: the operating condition represents the worst case across all dies; it is a global corner. The slow process corner corresponds to the worst-case condition, which is the plus 3 sigma condition. Similarly, the fast corner represents the fastest condition, where a particular parameter yields the fastest devices across all dies. This again is a global corner.
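To make this picture of a shifted mean per die concrete, here is a minimal simulation sketch; GP1's nominal value, the die-to-die sigma and the within-die sigma are all made-up illustrative numbers, not foundry data.

```python
# A minimal sketch (illustrative values, not foundry data) of die-to-die vs within-die
# variation: each die gets one global shift of the mean, and every device on that die
# then varies locally around the shifted mean.
import random

random.seed(0)
MU_GP1 = 0.45           # assumed nominal parameter value (e.g. a PMOS Vt, in volts)
SIGMA_D2D = 0.02        # assumed die-to-die (global) spread
SIGMA_WID = 0.005       # assumed within-die (local) spread

def sample_die(n_devices):
    die_mean = random.gauss(MU_GP1, SIGMA_D2D)          # one global shift per die
    return [random.gauss(die_mean, SIGMA_WID) for _ in range(n_devices)]

for die_id in range(3):
    vals = sample_die(1000)
    mean = sum(vals) / len(vals)
    print(f"die {die_id}: mean GP1 = {mean:.4f} V  (shifted by the global variation)")
```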
The second type of variation we saw was within-die, or local, variation; this is also called intra-die variation, and local is the more common term for it. It refers to process parameters that can affect devices on a given die differently. This is the effect where all the devices on a particular die are not uniform, and it is intended to capture the random process variation within the die: even if the fabrication process is correct, it cannot be perfect; no process can manufacture millions of devices on a die with exactly the same parameters.

This is the graph where we have plotted, say, inverter delay against the number of samples. The global variation is obviously a much wider curve, because it is across dies and takes a lot of dies into account. The local variations are the narrower curves: this curve can be for one particular die, this one for another die, and this one for yet another die. Typically, this die will be a typical die, this one a slow die and this one a fast die, because we are plotting delay: the typical one lies somewhere in the middle, the slow one lies around plus 3 sigma and the fast one around minus 3 sigma.

We saw that within-die variations are of two types: purely random, where one effect is independent of the other, and spatially correlated. Imperfections in the process tend to affect closely spaced devices in a similar manner; if that happens, it is a spatially correlated within-die variation, which makes closely placed devices more likely to have similar characteristics than those placed far apart. So please remember these two things: random variations are completely independent of each other, while correlated ones depend on proximity; devices that are closely spaced will show similar variations. Some of the effects can be purely random and some of the variations can be linked to one another. That is the essence of local variation: there are two types, random and spatially correlated.

We have already discussed that corner-based STA addresses the global variations. What about local variations? Local variations are taken care of by OCV, as we will see. Now, the problem with corner-based STA as we go to smaller and smaller nodes is that there are so many types of variation that we can no longer ignore them. There are metal variations which can range from minus 10 percent to 25 percent; environmental variations such as voltage islands and low-power design, IR drop and temperature rise; NBTI, which stands for negative bias temperature instability; and hot-electron effects. These effects were negligible at 90 nanometers and above, but as we go smaller they become very pronounced. Then there are Vt and Tox tracking, hardware and modeling uncertainty, mathematical errors, N/P mistrack, fast-rise/slow-fall and fast-fall/slow-rise behavior, and PLL effects. All these variations that were earlier very small are now becoming bigger and bigger.
So, in corner-based STA, if you want to make sure you take care of everything and cover all the corners, it comes to something like 2 to the power 20 timing runs; I am not sure exactly where that 2 to the power 20 comes from, but the quoted guard band is minus 65 to 80 percent, a huge guard band, and you cannot verify all of this with corner-based STA perfectly. One way or the other you will miss out some particular corner. What I am saying is that earlier I used to do corner-based STA on two corners; now I am doing STA on 6 to 8 corners. The number of corners keeps increasing because of these variations; earlier, variations like these were not even a consideration. This is where the problem arises.

So let us look at corner-based analysis and see how it tries to model the local variations. We saw that OCV is a way to model the local variations; how does it do it? Say I am working in a session where OCV is set and timing derates are also applied, and consider this path. Each AND gate here has a typical delay which comes from the library, a min delay because of the timing derate (set_timing_derate) we applied for OCV, and a max delay, again because of the timing derate. So each gate has a min, typical and max delay, and cumulatively we add up all the min delays, all the typical delays and all the max delays: this is the min delay reported and this is the max delay reported. The basic STA analysis is conservative in the sense that it overestimates the delay of long paths. Why? Because when it computes the max path it assumes that all 4 AND gates are at their slowest, and for the min path it assumes that all 4 AND gates are at their fastest. But what is the reality? In reality this gate may be fast, this one may be slower, this one may be the slowest and this one may again be fast, because with local variations not all the devices will be at their slowest. This is why we say corner-based STA overestimates the delay of long paths: the actual delay will lie somewhere in between, maybe somewhere here, but the analysis will overestimate the delay on the max path and underestimate the delay on the min path. This makes the analysis safe: it guarantees that the design will function at least as fast as predicted on the setup side and will not suffer from hold violations on the min side. So this is the guard band, which is very wide, because the analysis overestimates the delay of the max paths and underestimates the delay of the min paths. That is how it makes sure the analysis is safe from the performance point of view and that there are no hold violations. And this is why we say that corner-based STA is pessimistic in most cases.
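As a concrete illustration of the min/typical/max accumulation just described, here is a small sketch for a 4-stage path; the library delays and the early/late derate factors are assumed, illustrative values, not from any real library.

```python
# A minimal sketch (assumed numbers) of how corner-based STA accumulates min/typ/max
# delays along a 4-stage path. The max path assumes every gate is at its slowest and
# the min path assumes every gate is at its fastest, which is why the bounds are wide.
typ_delays = [0.10, 0.12, 0.11, 0.10]     # library (typical) delays of the 4 AND gates, ns
derate_min, derate_max = 0.95, 1.05       # assumed early/late timing derates (OCV style)

min_path = sum(d * derate_min for d in typ_delays)
typ_path = sum(typ_delays)
max_path = sum(d * derate_max for d in typ_delays)

print(f"min path = {min_path:.3f} ns, typ = {typ_path:.3f} ns, max = {max_path:.3f} ns")
# The real path delay lies somewhere between min and max; corner STA signs off on the bounds.
```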
Traditional STA is deterministic: it works at a defined corner, and the slack it produces is a single value, either positive or negative; there is only one value, which is why we call it deterministic. Traditional STA will only tell you whether a path is met or violated, that is it. It computes the circuit delay for a specific process condition, because it is corner-based STA: all the parameters that impact the delay of the circuit, gate length, oxide thickness and so on, are assumed to be fixed and uniformly applied to all the devices. Operating voltage and temperature are applied uniformly because we choose a corner for them. The device gate length and oxide thickness do not enter STA directly; they are part of the process component of PVT, while the operating voltage and temperature are the V and T. STA takes care of global variation by creating multiple corner files, and it has no statistical way of modeling within-die variation. There is a way, OCV, but OCV is not statistical in nature; OCV is deterministic, meaning it assigns fixed guard bands and then tells us whether the timing is met or violated. What that means is that the physical effects are statistical in nature, but the modeling is not statistical: we are using a fixed, guard-band-based technique to model physically statistical effects.

What would be ideal? Ideally, if the variation is physically statistical, could we also model it statistically in the tool? That is what SSTA tries to do: it models non-systematic variation as random variables. Remember, it models only the non-systematic variations; let us go back and see the categories again. It does not model the systematic ones; systematic variation, as I told you, has to be corrected in the fabrication process itself. These random variables can be uncorrelated, partially correlated or perfectly correlated, and deterministic STA and SSTA differ in the way they treat correlations. This is the new thing about SSTA. Deterministic STA assumes that all the variables are perfectly correlated: for the max analysis it assumes that all the gates have their maximum delay, that is, all the parameters that make a device slower apply at the same time. For example, the channel length is bigger and the Vt is higher, and both of these make a device slower; deterministic STA assumes all devices have the higher Vt and the bigger channel length at once. It assumes the worst for all devices in the max-delay calculation, and for the min-delay calculation it assumes the best condition for all devices. In other words, it assumes that all the parameters causing the variation are perfectly correlated. This is where SSTA is different: SSTA can model the variables as perfectly random, that is uncorrelated, or partially correlated, or perfectly correlated, and this is what happens in the real world; not all variables are perfectly correlated, and this is what SSTA models.

Let me first clarify a couple of things about correlation. Correlation represents the statistical relationship between two or more random variables or observed data values. For example, what is the correlation between the height and the shoe size of a person? The correlation would be pretty high; they would be perfectly correlated if the correlation were 1.
I am not sure what the exact number would be, but the correlation will be high, maybe around 0.8 to 0.9: the taller the person, the greater the shoe size. The demand for a product and its price are also correlated; supply and demand go hand in hand, so if the demand for a product is high its price will be higher, and if the demand is low its price will be lower. What about the weight and IQ of a person? Obviously not correlated; IQ has nothing to do with weight. So there can be very little correlation, negative correlation, or essentially none at all; weight and IQ appear to be uncorrelated. Correlation does not imply causality: a greater shoe size is not caused by a greater height alone. Correlated means they move together; it does not mean one causes the other.

Now, what SSTA will do is model the delay of a single path in terms of a mean and a standard deviation. For an \(n\)-stage path, the path mean is \(\mu_{path} = \sum_{i=1}^{n}\mu_i\), and for equal per-stage sigmas with a single correlation coefficient \(\rho\) the path variance is \(\sigma_{path}^{2} = n\sigma^{2} + n(n-1)\rho\sigma^{2}\). The standard deviation increases as \(n\) increases. \(\rho\) is the correlation: if \(\rho = 1\) the effects add up linearly with the number of stages, and if \(\rho = 0\) the effects can cancel each other out as the number of stages increases.

Let us see what this means with an example. Look at this path; it is a multi-stage path with \(n\) combinational stages of NAND gates. For a particular device, say this first NAND gate, I know the standard deviation of its delay; this data can be used by SSTA, and obviously we need some data from the foundry for it. In the perfectly random case, where the delay of this device is independent of the delay of the next device, the square-root law follows: \(\sigma_{path}^{2} = \sigma_{1}^{2} + \sigma_{2}^{2} + \dots + \sigma_{n}^{2}\), and assuming the sigma is the same for all the NAND gates, which is reasonable since they are identical cells, \(\sigma_{path}^{2} = n\,\sigma_{NAND}^{2}\), so \(\sigma_{path} = \sqrt{n}\,\sigma_{NAND}\).

So for the delay of a single \(n\)-stage path you will have two values: one is the sigma, the other is the mean. The mean is simply the sum of the mean delays of each cell in the path, so if you have \(n\) NAND gates in this example, the mean is \(n\) times the typical delay of one NAND gate. The calculation of sigma depends on whether the delays are correlated or not. For now we will assume the delays are perfectly random, not correlated; that is, the delay variation of gate 1 due to process parameters is independent of the delay variation of gate 2. In that case the sigma of the total path follows the square-root law: it grows only as the square root of the number of stages. Let us see an example; please note we are considering the case where \(\rho = 0\), where the effects tend to cancel each other.
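Here is a small sketch of that path-sigma formula for equal per-gate sigmas and a single correlation coefficient; the 50-stage, unit-sigma numbers are illustrative and anticipate the example that follows.

```python
# A minimal sketch of the path-sigma formula above for an n-stage path where every
# stage has the same per-gate sigma: rho = 1 gives sigma_path = n * sigma, while
# rho = 0 gives sigma_path = sqrt(n) * sigma (the square-root law).
import math

def path_sigma(n_stages, sigma_gate, rho):
    # var = n*sigma^2 + n*(n-1)*rho*sigma^2, with equal sigmas and one correlation rho
    var = n_stages * sigma_gate ** 2 + n_stages * (n_stages - 1) * rho * sigma_gate ** 2
    return math.sqrt(var)

for rho in (0.0, 0.5, 1.0):
    print(f"rho = {rho}: sigma_path = {path_sigma(50, 1.0, rho):.2f}")
# rho = 0 -> ~7.07 (effects cancel), rho = 1 -> 50.0 (effects add up linearly)
```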
Now let us take a path of 50 gates, each a NAND gate with delay 20 plus or minus 1: 20 is the mean value mu and 1 is the sigma. In a deterministic STA, your regular PrimeTime run, the OCV derate would represent 3 sigma; that is why we multiply the sigma, which is 1, by 3. So OCV in deterministic STA tries to bound the guard band between minus 3 sigma and plus 3 sigma: the worst-case delay per gate would be the mean plus 3 sigma, which is 23, and the best-case delay would be the mean minus 3 sigma, which is 17, and this applies to every gate. If all the gate delays are perfectly correlated, which is what the OCV method effectively assumes, all of them show the worst-case value for the max path at the same time. Then the total max path delay would be 50 gates times 23, which is 1150; equivalently, the combined mean of 50 gates is 20 times 50, which is 1000, and the combined sigma is simply the linear sum, 1 times 50, which is 50, so the max delay is 1000 plus 3 times 50, again 1150. Both ways the value is the same; this is what deterministic STA does. It assumes the worst-case value for every gate.

But in reality that will not happen. In most cases some gates will be faster and some gates will be slower, and the effects tend to cancel each other out over a long path; that is mathematically modeled by the square-root law. Now the sigma of the combined path will not be 50; it will be the square root of 50 times 1, which is about 7.07, according to the rules for independent random variables. So what is the max path delay? It will be 1000 plus 3 times 7.07, about 1021. This is what we mentioned when we said that deterministic STA overestimates the delay of a long path: it estimated 1150, whereas in SSTA, if we assume the gate delays to be perfectly random, we get a value of about 1021. This is less, and I would say more accurate, though obviously under the assumption that all the gate delays are independent of each other.

So SSTA removes pessimism. The worst-case process corner is still determined by the die-to-die and systematic variations; SSTA does not target die-to-die variation or systematic variation, it only targets the random within-die effects, nothing else. Everything else remains the same: for die-to-die, corner-based STA uses the corner operating condition, and the same thing happens in SSTA. But for the within-die random variation, SSTA reduces the pessimism by using the square root of n times sigma instead of n times sigma. That means it depends on the particular path: the longer the path, the more pessimism it removes; the shorter the path, the less pessimism it removes, because the pessimism builds up as the number of stages in the path increases. Do not worry if you do not fully understand this now; when you start working in industry you will come across it.
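Going back to the 50-gate example, this minimal sketch reproduces the two numbers derived above, the deterministic 1150 and the statistical roughly 1021, under the stated assumption of independent gate delays.

```python
# A minimal sketch of the 50-gate example: each NAND gate has mean delay 20 and sigma 1.
# Deterministic (OCV-style) STA assumes every gate at mean + 3*sigma, while SSTA with
# uncorrelated delays uses the square-root law for the path sigma.
import math

N, MU, SIGMA = 50, 20.0, 1.0

det_max = N * (MU + 3 * SIGMA)                # 50 * 23 = 1150
ssta_mu = N * MU                              # 1000
ssta_sigma = math.sqrt(N) * SIGMA             # sqrt(50) ~ 7.07
ssta_max = ssta_mu + 3 * ssta_sigma           # ~ 1021

print(f"deterministic max path delay : {det_max:.0f}")
print(f"SSTA path mean / sigma       : {ssta_mu:.0f} / {ssta_sigma:.2f}")
print(f"SSTA 3-sigma max path delay  : {ssta_max:.0f}")
```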
In an ideal SSTA system, the effect of a particular variation on delay is modeled statistically, and the correlation between variables is identified; this is the most difficult and most challenging step of SSTA. And this is where the thinking changes: slack is no longer a deterministic value but a probability distribution. It does not simply say whether your timing is met or violated; it says that your timing is met with a certain probability.

A normal static timing engine needs a netlist plus assertions, that is constraints, and delay and slew models; what it gives you is the slack value, along with some means of diagnosis. What does SSTA add on top of that? It needs statistics of the sources of variability, that is, data from the foundry: the mean and sigma values for the process variations, and also the dependences, meaning how the variables are correlated. This is the most difficult part; the foundries do not have a good way of giving this data and are not very willing to give it away, since a lot of testing would be involved. In return, apart from the slack, SSTA also gives you the yield: it tells you that with this particular timing violation, this is your yield. It tells us that a path reported to us as violated will probably not violate at the typical corner, so even if we do not fix some timing violation we will still have some amount of yield, although it will be lower. So it gives us a yield number as well.

As an example, take channel length variation. The tool needs information on the distribution of channel length, and from it the tool calculates the distribution of gate delay: a lower channel length gives less gate delay, a higher channel length gives more gate delay, and the mean corresponds to some typical length. So if we give the tool the distribution of channel length and tell it the formula to calculate delay, it will calculate the delay distribution from the channel length distribution. That is what an ideal SSTA system will do.

A delay report from SSTA will look something like this: for every gate it has a mean and a sigma value, and combined it gives you the path delay, again as a mean and a sigma, along with a graph like this, where the slack curve is again a normal distribution. You might see that at minus 3 sigma the slack is minus 0.12. So if you want to sign off at minus 3 sigma you will have to move this curve to the right; you will have to fix the paths so that the slack at minus 3 sigma becomes 0, and only then can you say that you have signed off your chip for minus 3 sigma. This is where SSTA differs: it gives the yield numbers in terms of sigma values. Here, for example, the yield is somewhere between minus 2 sigma and minus 3 sigma, that is, somewhere between 95 percent and 99 percent. So for every path it tells us the sigma and the mean, and whether it violates at mean plus 3 sigma or mean minus 3 sigma, depending on whether it is a setup path or a hold path.
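To show how such a slack distribution turns into a yield number, here is a minimal sketch; the mean and sigma of the slack are hypothetical values, and the only assumption is that the slack is normally distributed as described above.

```python
# A minimal sketch (assumed slack statistics) of turning an SSTA slack distribution
# into a yield number: the yield is the probability that slack >= 0 under a normal
# distribution with the reported mean and sigma.
import math

def yield_from_slack(mean_slack, sigma_slack):
    # P(slack >= 0) = Phi(mean / sigma) for a normally distributed slack
    z = mean_slack / sigma_slack
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mean_slack, sigma_slack = 0.10, 0.04   # hypothetical numbers in ns
print(f"estimated yield on this path = {yield_from_slack(mean_slack, sigma_slack) * 100:.2f}%")
# If the -3 sigma slack (mean - 3*sigma) is negative, the path violates at 3-sigma sign-off.
```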
You can report timing up to 0 sigma, up to 1 sigma, and so on: report timing at 0 sigma means you get only the mean value; you can report timing at plus 3 sigma for the setup case and at minus 3 sigma for the hold case, for example. This is how things are done in SSTA.

So what does this mean? With the increasing number of corners, pessimism keeps increasing, and if there is a lot of pessimism you are unnecessarily wasting time and effort closing paths burdened with extra pessimism. SSTA reduces the pessimism due to random process variation and helps you close timing more easily; the closure process becomes easier. Obviously there is a shift in thinking: earlier we thought in terms of slack being 0 or better, now we think in terms of slack being met up to plus 3 sigma or minus 3 sigma and so on. That is the difference. These are the references I used to make this presentation; you can go through them if you are more interested in the topic. I will now just speak about a couple of things which are not on the slides.

What is the situation in the present day? SSTA is not being used 100 percent. In the last session we saw that there is an advanced version of OCV called AOCV, and there is also one more technique called POCV. Both AOCV and POCV try to bring some of the advantages of SSTA into a regular STA tool. How do they do that? In AOCV, the single OCV derate is replaced by derates that depend on two things: one is the number of stages, so the derate applied to a particular path depends on the depth of that path, and the other is how far apart the cells are from each other. PrimeTime needs two tables for this. One is a depth-based table. What does the depth-based table do? It mimics the SSTA standard-deviation calculation: as the number of stages increases, the derate decreases. If the number of stages is small, say two, the derate will be larger; if the number of stages is ten, the derate will be smaller. It is mimicking the square-root formula of SSTA; it is telling us that as the number of stages increases, the random effects tend to cancel each other out, so a smaller derate is needed, and relatively more pessimism is removed as the number of stages increases. This is how random variation is modeled in AOCV.

The second factor is a location or distance based table. The table says that if two cells are this far apart, the derate changes by this much. So now it is modeling the spatially correlated variation: if the cells are closer together the derate is less, and if the cells are farther apart the derate is more. Cells that are close to each other get less derate and therefore less added delay; cells that are farther from each other see more of the effects that are spatially correlated, so the derate and the delay are more. So AOCV is a combination of two things, location-based derating and path-depth-based derating: location-based derating tries to model the spatially correlated variation, and depth-based derating tries to model the random variation. This is the approach we are taking.
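To make the two-table idea concrete, here is a rough sketch of an AOCV-style derate lookup; the table entries, the floor-style lookup, and combining the depth and distance factors by simple multiplication are all illustrative assumptions of mine, not the actual PrimeTime table format or interpolation rules.

```python
# A minimal sketch (hypothetical table values, not from any real library) of how an
# AOCV derate could be looked up from a path-depth table and a distance table, and
# the two factors combined. Real tools read such tables from AOCV side files.
DEPTH_TABLE = {1: 1.12, 2: 1.10, 5: 1.06, 10: 1.04, 20: 1.02}    # derate shrinks with depth
DIST_TABLE = {0: 1.00, 100: 1.01, 500: 1.03, 1000: 1.05}          # derate grows with distance (um)

def lookup(table, key):
    """Pick the nearest table entry at or below the requested key (simple floor lookup)."""
    best = min(table)
    for k in sorted(table):
        if k <= key:
            best = k
    return table[best]

def aocv_derate(depth, distance_um):
    # Illustrative combination only: depth factor times distance factor.
    return lookup(DEPTH_TABLE, depth) * lookup(DIST_TABLE, distance_um)

print(f"2-stage, tightly placed path : {aocv_derate(2, 50):.3f}")
print(f"10-stage, spread-out path    : {aocv_derate(10, 800):.3f}")
```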
So, since right now we are not 100 percent into SSTA, we are taking the good ideas of SSTA that can be modeled easily inside a regular STA tool, and the EDA companies keep refining these techniques. Once you are comfortable with this course, and once you are comfortable with the idea of SSTA, you can try your hand at AOCV or even POCV; POCV is a slightly different variation of AOCV, and you can read more about it. These are the resources about statistical STA. Unfortunately I do not have any lab for this; it is not easy to set up because you need a lot of data and tables which I do not have access to. The first article, "The Good, the Bad and the Statistical", is a very nice one, and you can read more about the topic there. For all practical purposes you will be using OCV for some time, and then when you go into industry you will start using more and more of AOCV and POCV. Thank you.