Welcome to the on-chip variation session of PrimeTime. This is the session where we start digging deeper into what goes into sign-off: how can we make sure that all the effects that slow a device down, or make it faster, on silicon are captured in PrimeTime. I will talk about operating conditions a bit; then there is an option to operating conditions in PrimeTime called the analysis mode, and I will describe the types of analysis modes available. One analysis mode, called on-chip variation, is used very extensively across the industry; up to 65 nm I would say it was the standard technique, and although things are changing as we go to still smaller geometries, we will focus on on-chip variation as a very important technique for modeling variation on the chip. On-chip variation, in short form, is famously called OCV. We will see how the setup-checking and hold-checking equations change with OCV in place, we will look at a concept called CRPR, which is clock reconvergence pessimism removal, and then we will look at the rest of sign-off: what all forms part of sign-off and what we need to check. Finally I will introduce a very interesting concept called path-based analysis.

Let us review the operating conditions. We already know that P is the process factor; this value relates to the scaling of device parameters depending on whether the fabrication process results in a faster or a slower device. If it is SS, both nMOS and pMOS are slower compared to the nominal. Now, the important thing to understand is that in older technologies this process factor used to be an actual scaling factor: there was just one library for a typical process, and if you wanted to model a faster process you multiplied the delays by a number less than 1; for a slower process, by a number greater than 1. So there was literally a scaling factor. But in the present day, all over the industry, we use one library per process corner and we do not scale. In deep submicron the delay behaviour is not a linear scaling with process, so the library teams choose to characterize timing with the process corner built in. There is no longer anything called a process scaling factor as such: for an SS corner at a given voltage and temperature there will be one library, and if you want to do SS analysis you read the SS library. We are not taking one library and deriving SS or FF numbers from it; the numbers for each process corner come in a separate file.

Then there is the ambient temperature, which also affects delay. There are two things to consider here, and in fact the same applies to process and voltage as well, all of P, V and T. One is at the chip level: the specification. Let us say we state that the chip should work over a temperature range of minus 40 to 125 degrees C, or 0 to 125 degrees C. That defines the extreme boundaries of the worst-case and best-case corners, so the worst-case corner becomes slow-slow, 125 C and a lower voltage. What is this lower voltage? Let us say you have a one-volt chip and the specification says it should tolerate up to a 10 percent voltage shift; then you are qualifying the chip from 0.9 to 1.1 volts. So these pairs, for voltage, for temperature and for process, represent the two extreme corners. Now let us say I am doing setup
checking at the worst-case corner. So I read the library characterized at SS, at the lower voltage, 0.9 volts, and at 125 C. Now, this is the worst-case corner for the whole chip; but what about different parts of the chip, and the differences between them? Are we assuming that all the cells are at their worst? Are all the cells SS devices? Do all the cells get exactly 0.9 volts? Do all the cells sit at 125 C? No. This is the second effect I am talking about, and it is called on-chip variation. We never assume that all the devices on the chip behave in an identical fashion; not every device will be equally slow or equally fast. We will see how we capture this effect. But please remember the two roles of the operating condition: first, the operating condition defines the absolute maximum or minimum of a particular corner, and at the implementation level it defines which library you should be reading when doing STA. At the very basic level you read one worst-case corner along with one best-case corner; then, within that analysis, you capture the effects of variation across different devices. There is one more thing attached to it, the interconnect model type; we have already seen that this is for pre-layout analysis, where PrimeTime needs to estimate net delays, so it is not important for us here.

Now let us see what the analysis mode is. We have seen the operating conditions and chosen one, say the worst-case corner; now I want to choose what type of analysis to perform. This is what I was talking about: semiconductor device parameters can vary with conditions such as fabrication process, operating temperature and power supply voltage. The set_operating_conditions command specifies the operating condition for analysis, so that PrimeTime can use the appropriate parameter values from the library. After choosing the operating condition you choose which analysis mode you want: single operating condition, on-chip variation, or advanced on-chip variation. In single operating condition mode, PrimeTime uses a single set of delay parameters for the entire design, based on whatever library you read and whatever PVT corner it corresponds to. What this means is that if you are reading the worst-case library at worst-case conditions and you do a setup check, every cell in the complete data path will be at worst case, because PrimeTime is doing delay calculation on the worst-case library. Even for the hold check, all the delays come from the same library, without anything special done about it. So in single operating condition mode PrimeTime uses one operating condition for all the timing reports it gives you. But this does not represent reality, because not all the devices will be slow-slow, not all will be at a high temperature, and not all will see the same voltage. How do we capture that? That effect is captured in something called on-chip variation mode. This is a conservative analysis which allows both maximum and minimum delays to apply to different paths at the same time. This statement is very, very important: minimum and maximum delays applied to different paths at the same time. I will not clarify it here; I will wait for the next slides, where there are good figures that explain it. But one thing is common
to both single and OCV modes: we are reading just one library, and just by selecting OCV we enable something which models the actual on-chip variation. Now, on-chip variation was very popular till 65 nm, but as we went deeper into submicron it was found that this analysis is on one hand very pessimistic and on the other hand can even be optimistic. To solve these problems a new method was proposed by Synopsys, called advanced on-chip variation, AOCV. We will not go deeper into this, but there are actually a number of methods like it to reduce the pessimism. OCV was pretty much the industry standard, and then improvements over OCV happened in different directions: there is AOCV, supported by Synopsys, and there was a tool called GoldTime, from Extreme DA, which proposed a method called POCV. Synopsys has since acquired Extreme DA, so GoldTime is also part of Synopsys, and PrimeTime now supports both AOCV and POCV. And if you keep going in this direction, STA is getting replaced by SSTA, statistical STA: truly, the variation in device parameters is a statistical problem, so the field is moving toward statistical STA. I will have one presentation where I give some basics of SSTA, but for the labs and for the scope of this course we will work with plain OCV. Just to introduce you to the concept of AOCV: where OCV applies fixed scaling factors, in AOCV the scaling factors change depending on the logic depth of the path and on the physical distance travelled by the path, so the derate depends on the path length, that is, the number of stages in it. This will make more sense once we have discussed OCV.

There is also one more analysis mode, called bc_wc, best-case/worst-case, where the worst-case corner is used for max (setup) analysis and the best-case corner for min (hold) analysis in the same run. But here let us compare: what is the difference between single operating condition and OCV mode? There are two columns here, single and OCV, and two rows, setup and hold, so let us see single first and then what OCV improves over it. Assume we are working in a big design, where from register to register there can be multiple data paths; one of them will be the longest, the most critical path for setup, and one will be the shortest, the most critical for hold. Similarly, a clock can reach the registers through different paths in the clock network, if you have a lot of muxing or logic in it. So there are distinct path types to consider, and since we are now working post-layout, the clock trees are also in place. First is the launch clock path, the path the clock takes from its definition point to the launch flop. Second is the data path, which starts at the clock-to-Q of register one, goes through the combinational logic, and ends at the data pin of the capture register. Third is the capture clock path, the clock path from the clock definition point to the capture flop. Post-layout, the launch clock path will not be exactly the same as the capture clock path; it can have more or fewer elements, or different elements. We will see a lot of this in the lab. So for setup, when is the situation worst? In STA you always take the worst case possible; this is how we do things, this is how we verify that the design will work in the absolute worst conditions.
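Putting the corner and mode selection above into commands, a minimal PrimeTime sketch might look like the following; the library, design and condition names here are made-up placeholders, not anything prescribed by the course material.

```tcl
# Read one worst-case library and link the design
# (file and design names below are hypothetical)
set link_path "* ss_0p9v_125c.db"
read_verilog top.v
link_design top

# Choose the worst-case operating condition and enable
# on-chip variation analysis
set_operating_conditions -analysis_type on_chip_variation WCCOM

# For bc_wc mode, both a min and a max library would be read
# and the mode selected instead as:
#   set_operating_conditions -analysis_type bc_wc ...
```

The same script would then be rerun with the best-case library for the second corner, since OCV mode still analyzes one corner per run.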
So remember that the launch clock path and the data path are similar in this regard: together they form the launch side of the delay, and for setup they are taken at their worst. For the launch clock path, PrimeTime will always take the late clock; late here means the longest path through the clock network, with maximum delay. Late also means that if you have defined a clock latency with a -late option, it will take that late value of the latency. Maximum, again, means the longest path with maximum delay: we have seen before that at each node PrimeTime keeps a max transition number and a min transition number, and for this case it takes the maximum, that is, the worst transition numbers, to calculate the delay, resulting in the maximum delay through the clock path. In single operating condition mode there is no derating; derating is just a scaling applied with a command called set_timing_derate, and "no derating" means PrimeTime does not apply any such factor when you select single operating condition mode. Similarly, the data path takes the maximum delay, considering the longest path at the worst conditions. The situation changes for the capture clock path: the capture clock is taken early, so if you have set a clock latency with an -early option it will use that; it takes the minimum delay, the shortest path, at the best conditions, with no derating. For hold the situation reverses: now the data path has to be early and the capture path has to be late. So for hold, the launch clock path is early clock with minimum delay, the data path is minimum delay, and the capture clock path is late clock with maximum delay.

Now consider a very, very simple design. Assume, for simplification, that the launch and capture clocks branch from exactly the same point; say a single buffer at the end of the clock tree drives both flops. And let us say the data path is very simple: clock-to-Q plus one combinational path, a series of gates, with no alternative paths. In this case there will be very little difference between the setup-style and hold-style delay calculations, because there is no difference between a late clock and an early clock (assuming no clock latency is set), and no difference between the maximum and minimum delay of the data path; at most there will be a slight difference from the min and max transition values used in delay calculation. So single operating condition mode assumes that all the cells in the path have the same operating condition, because you have only one operating condition, and it cannot see any variation.
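To actually see which launch and capture clock paths PrimeTime picked, the clock network can be expanded cell by cell in the report. A small sketch of the reporting commands, run after the design is linked and constrained:

```tcl
# Setup (max) report with the clock network expanded,
# so the launch and capture clock paths are visible cell by cell
report_timing -delay_type max -path_type full_clock_expanded

# Hold (min) report for the same kind of path
report_timing -delay_type min -path_type full_clock_expanded
```

Comparing the expanded clock sections of the max and min reports is a quick way to confirm whether the two checks are really tracing different clock branches.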
So, in effect, what single operating condition mode assumes about the chip is that all the cells are operating at the worst condition, even for the hold check, if you are reading the worst-case library. This does not really match the on-chip variation that takes place, and that is why the OCV mode was introduced, to capture these effects. Now, the most important thing in OCV mode is that we apply something called a timing derate, a scaling factor. You can apply different scaling factors for the data path and the clock path, for nets and for cells; we will see an application in the lab. What we mean is this: let us reason it through for the worst-case operating condition. We already know that there all the cell delays are at their worst. So for the worst-case corner the late side can remain as it is, but it may happen that some cells in the data path are slightly better than worst case; maybe one device is not completely SS, so it can be faster than the others. Will all the devices in a particular path really be at the absolute worst? So we say: the late side is already at the worst for this operating condition, and if a cell can by any chance be faster, it will be faster by at most, say, 10 percent; for the faster side we use the early derating factor. What the late derate does is take all the delays in the launch clock path and data path and multiply them by that factor. If you are at the absolute worst case, the late derating factor can be 1; or, if you set it to more than one, say 1.1, it will inflate each delay in the path by 10 percent, adding 10 percent on top of the absolute delay. On the other hand, for the capture clock it applies the early derating factor; say you chose the early derate to be 0.9, it will make the capture clock come a bit earlier. For hold it is the other way around: the complete data path comes a little earlier, based on the early derate, and the capture clock becomes later. I understand this is a bit confusing, but the next two slides will explain it further and make it clear.

So, for min/max delay calculation, set_timing_derate is the command unique to OCV mode. We will not worry about single operating condition mode any more; we will only consider OCV mode for our explanation and work in this mode from here on. The set_operating_conditions command defines the operating condition, and its -analysis_type option specifies the analysis mode. By default it chooses single operating condition mode, though this can change from version to version; I have seen that newer versions take OCV mode as the default. Using these modes we need to perform multiple analysis runs to handle multiple operating conditions; typically we need to analyze at least two, best case and worst case, in OCV mode. And this OCV mode, very importantly, is a very conservative analysis that allows both minimum and maximum delays to apply to different paths at the same time. We have seen that for the setup check it makes the data path still slower, while for the hold check it makes the data path somewhat earlier, less slow; the key point is that both minimum and maximum delays apply to different paths at the same time.

Let us see why we do such a conservative analysis: because the process and environmental parameters may not be uniform, and in fact are not uniform, across the die. Not all the devices are identical: even within one corner, some devices will see a lower or higher voltage, a lower or higher temperature. Besides the variation in process parameters, different portions of the chip may see different voltages depending on IR drop. These differences arise from many factors. IR drop. Vt variation: if you have learned about fabrication you know that Vt is set by the process and cannot be exactly the same for every device; there is a slight change in Vt from device to device, and the problem with Vt is that only a small change is needed to produce a significant variation in delay and power. Channel length variation: the channel length is effectively a random variable and follows roughly a Gaussian distribution around the technology's nominal value, so individual devices will have channel lengths slightly lower or higher than that. Temperature variation due to local hotspots, hotspots being the areas where power dissipation is higher and correspondingly the temperature rises. Interconnect metal etch and thickness: the metal layers have a specified thickness, but it can vary in the fabrication process. All these effects come under the heading of on-chip variation, and the OCV method is a deliberately conservative way to make sure we model all of them in the timing calculation.
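In PrimeTime these derates are applied with set_timing_derate. A minimal sketch of the kind of settings discussed above; the numeric values are purely illustrative, not recommendations, since real derates come from the technology and back-end flow:

```tcl
# Inflate all late (max) delays by 10% and shrink all
# early (min) delays by 10% -- illustrative values only
set_timing_derate -late 1.1
set_timing_derate -early 0.9

# Clock trees usually see tighter variation than data paths,
# so separate factors can be given for clock and data networks
set_timing_derate -clock -late  1.05
set_timing_derate -clock -early 0.95
set_timing_derate -data  -late  1.10
set_timing_derate -data  -early 0.90

# Review the derating currently in effect
report_timing_derate
```

Cell and net delays can be derated separately as well (the -cell_delay and -net_delay options), which matches the point made in the lab that different object classes get different factors.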
So now let us look at a diagram and see how OCV handles setup. Say we have set some derate values: the late (max) derate is more than one and the early (min) derate is less than one. Look at this logic: this is the clock, this part is the launch clock path, this part is the data path, and this part is the capture clock path. One important thing first: this initial segment of the clock network is common to both launch and capture. Please note this; we will see its significance later on. After the common point, the launch clock path is anyway different from the capture clock path, so they are going to have different delays in any case. Now let us say we have applied a derate of 1.1 for max and 0.9 for min. PrimeTime will take whatever delay is in the launch clock section and multiply it by 1.1; for the data path, whatever delay there is from clock-to-Q plus the combinational logic, it will also multiply by 1.1; and the capture clock path it will multiply by 0.9, making the capture edge come earlier. So, as mentioned on the slide: latest delay for the launch clock path and max data path, earliest delay for the capture clock path. You can also apply derates to the timing checks themselves, but the setup check value remains as it is unless you do something special, because it is not a delay, it is just a timing check for the particular flop. There are ways to derate it too, which we will not go into; just make sure you understand that the launch clock path and the data path get slower and the capture clock path comes earlier. In this example the setup timing check does not include any OCV derating: the check value of 0.35 remains as it is. PrimeTime does not touch it because it is not part of the delay, it is a check; unless you do something special it is not modified.

So this is the equation now. The launch clock path delay plus the max data path delay forms the total path delay, and the modified constraint is:

launch clock path + max data path <= clock period + capture clock path - Tsetup

Earlier, when we considered ideal clocks, the launch and capture clock path terms were absent and the equation was simply max data path <= clock period - Tsetup, which is what we have been studying till now. But post-layout the two clock paths become different, so we need to capture that effect. Pre-layout we approximated it with set_clock_latency and set_clock_uncertainty; post-layout we have the actual delays available, so the equation is modified and the clock delays come into play. You can also rearrange the equation to calculate the minimum clock period:

minimum clock period = launch clock path + max data path - capture clock path + Tsetup

From the figure, let us put in some values: the launch clock path is 1.2 plus 0.8, the data path is 5.2, the capture clock path is 0.6 and Tsetup is 0.35, and the slide plugs these into the equation to arrive at the minimum clock period (with these readings, 2.0 + 5.2 - 0.6 + 0.35 = 6.95). This next statement is very, very important: the path delays shown correspond to the delay values without any OCV derating, and we have to derate them using set_timing_derate; that is the command we use for this. Please remember, OCV without derating at a single operating condition does not make a lot of sense. So whenever you are doing timing analysis you must make sure of certain things: which operating condition you are reading, how you select the on-chip variation mode, and, last, applying the timing derates. One question: how do I come up with these derate values? They are usually calculated for you: the back-end tools that do place and route have a method of computing these derates for a particular technology, so usually they come from the back end; they are not arbitrary numbers. In fact the numbers differ for cell versus net, and for clock versus data. The clock tree is usually built from a very uniform set of cells, with particular cell types used only for the clock tree, so the clock tree usually has a tighter limit on variation; the data path, being general combinational logic, uses any number of cell types, so its variation band becomes wider compared to the clock. I have usually seen that the data derate band is much wider than the clock derate band. So this is how the setup check is performed. Again, three things: the proper operating condition, selecting the on-chip variation mode, and setting the proper derates; these three will help you do proper checking, proper modeling of the on-chip
variation. Now let us look at OCV analysis at the worst PVT condition. If the setup timing check is being performed at the worst-case PVT, no derating is necessary on the late paths, as they are already the worst possible. This is very important; this is why the timing derate for late is 1 and the derate for early is less than 1, say 0.9 or below. Please take a moment to understand this: the cells are already at the worst-case operating condition, so why make the data path have still more delay? They are already at the worst delay; they cannot have more. So the late derate will be 1 and the early derate, yes, will be less than 1. The derate specification on the capture clock path then looks like an early derate below 1 with a late derate of 1. In practice, many people will still apply a late derate greater than 1, because it may happen that the worst-case corner you have is not the absolute worst case. It totally depends on what technology and library you are working with and what your sign-off flow says; these things are not set in stone, and they vary even from division to division within the same company, depending on the history of the chips. In the end it comes down to how many chips fail your tests after manufacturing.

Okay, now let us see hold. Hold is the other way around; once you have understood setup, hold is the same thing, the only difference being that the launch clock path will be at minimum, the data path at minimum, and the capture clock path at maximum. Previously, with ideal clocks, the equation was simply that the data path delay should be greater than the hold time; now it becomes:

launch clock path + min data path >= capture clock path + Thold

From the figure: 0.85 plus 1.7, the launch clock path plus the data path, minus 1, which is the capture clock path, minus the hold time, which I believe is 0.25, should be greater than 0; the idea is simply that the launch side must be greater than the capture side. This is again where you apply the early derate, and if you ever want to derate the timing check itself there is the -cell_check option of set_timing_derate. A caution here: I do not recommend doing this to relax the checks. To stay on the safe, more conservative side, ideally I would not change the values of Tsetup and Thold, and certainly not make them smaller: the higher the value of Tsetup and Thold, the more conservative the analysis, because they are check values, not delay values. Under no condition would I want a check value lower than what is given in the library. So I would never apply an early derate below 1 to the checks; if you want to make the analysis more conservative, apply a late derate above 1 to them, but do not shrink them. Many people make the mistake of assuming that a check value, say Thold, should get the same derate as the early derate on the data path, but that is not true: Tsetup and Thold, as checks, are boundaries. If you reduce them, the window around the clock edge during which the data must be stable is reduced, and you are no longer in safe territory; you are making the analysis less conservative.

So if you give the -cell_check option, do not give it an early derate below 1 unless and until you are very, very sure you want that; if anything, use it to make the checks larger and the analysis more conservative. Be careful, and understand the difference: this option derates a check, not a delay. Now let us come to CRPR; I will explain its significance. We have seen that there is something called a common clock path, the initial segment common to both the launch clock and the capture clock. By default, when PrimeTime produces a report_timing report, it shows a launch clock path and a capture clock path, and it applies whatever derate values are given to everything: it derates the launch path late and the capture path early, including the common segment. But the common clock path should not be derated both ways. Why? Because it is common to launch and capture, and applying both derates means this one path has more delay in launch and less delay in capture at the same time. That cannot happen physically: the same cells, within one single timing check, cannot have two different delays. So PrimeTime will apply the derates, but then it gives the difference back, through something called CRPR, clock reconvergence pessimism removal. Clock reconvergence pessimism is the extra pessimism from derating the common clock path both ways, and the removal step is the one that removes it. Take, for example, this common path with delay x, and let us say the derates are 1.1 and 0.9.
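In PrimeTime the removal step is controlled by a tool variable; a sketch of enabling it before the timing update:

```tcl
# Enable clock reconvergence pessimism removal before the timing
# update. CRPR is compute-intensive, which is why it is a
# separate, optional step rather than always on.
set timing_remove_clock_reconvergence_pessimism true
update_timing -full

# The report now includes a clock reconvergence pessimism line
report_timing -path_type full_clock_expanded
```

With the variable set, the pessimism credit appears as its own row near the end of each timing report, as described below.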
So, in the launch path, for a setup check, the common clock path delay becomes 1.1x, and in the capture path it becomes 0.9x. What CRPR does is give you back this difference, which is 0.2x. At the end of the timing report you will see a row which says clock reconvergence pessimism removal, and it will give a value; whatever that value is, it will increase your slack by that amount, so if your slack was negative it improves by exactly this credit. Why is this a separate step? By default PrimeTime simply derates the launch clock path and the capture clock path; that is easy for it. But for CRPR, PrimeTime needs to identify, for every single timing path, the common clock path, and this common portion keeps changing from path to path. So the calculation of CRPR is a very compute-intensive job, and if CRPR is enabled, the timing analysis takes more time; this is why it is a separate step. To summarize: CRPR addresses an accuracy limitation that occurs when two different clock paths partially share a common physical segment, and this shared segment is assumed to have minimum delay for one path and maximum delay for the other. This condition occurs whenever the launch and capture paths use different delays on the shared segment, which happens when we apply the derate command, and PrimeTime performs an automatic correction of this inaccuracy, called CRPR. Here is an example. We have selected the analysis type to be on-chip variation, so we have min and max libraries, and the early and late derates are in effect. Look at the common point: this is where the delays are written. The min delay is 0.64 and the max delay is 0.8. The cell here is showing us two different delays.
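The CRPR credit in this example is simple arithmetic, and a tiny sketch makes it concrete. The two delay values are the ones from the lecture's example; the variable names are mine.

```python
# CRPR credit for the example above: under OCV the shared clock cell shows
# its max-library delay on the launch side and its min-library delay on the
# capture side, which cannot both be true physically for the same cell.

launch_common  = 0.80   # late/max delay seen by the launch clock
capture_common = 0.64   # early/min delay seen by the same cell on capture

crpr_credit = launch_common - capture_common   # pessimism to give back
print(round(crpr_credit, 2))                   # 0.16, added back to the slack
```

With generic derates of 1.1 (late) and 0.9 (early) on a common path of delay x, the same subtraction gives 1.1x minus 0.9x, i.e. the 0.2x credit mentioned earlier.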
So, this cell is common to both the launch and the capture paths, and it is showing two different delays, which is wrong. PrimeTime will therefore give a credit of 0.8 minus 0.64 back to you at the end of the timing report. Look at the report: clock network delay (propagated). Propagated is important: it means we have applied set_propagated_clock on this clock and we are in post-layout mode. Here the difference is 0.16, that is, 0.8 minus 0.64, and this is where PrimeTime gives it back. The propagated capture clock edge now arrives at 7.16. This is a setup check: if the clock comes earlier there is a problem, and if it comes later it is good for you. So with this credit the capture clock effectively comes a bit later, which is good for setup; it is a credit, not a debit. If this 0.16 were not there, the slack, I think the value shown is 1.92, would have been lower by 0.16; the slack is increased by this 0.16. This removes the inaccuracy present in the delay calculation when we are using OCV and set_timing_derate. Now, here is one more example where there can be clock reconvergence, and in this case it is a real problem. Let us see. What is the launch clock path? If there is no case analysis setting on this mux, the launch clock path will be the longer path through the mux, and the capture clock path will be the shorter one, because PrimeTime takes the longest path for the late case and the shortest path for the early case. The figure should really show the clock diverging at the mux and reconverging after it. The point is that this mux select will have some definite value, 0 or 1, in operation. This is a wrong setup: you have logic in your clock path, you have not set anything on it, and PrimeTime is taking the longer path for one case and the shorter path for the other. In all probability this is not correct.
So, what you should do here is set a proper case analysis on the mux select and thereby choose either this path or that path. This is one case where there is reconvergence: the clock diverges here and converges back there; it is reconverging. You have to make sure you select only one path. These are the problems that come up during clock-path logic design, where you have a lot of muxes: some clock will be taken from a test source in test mode, or a divide-by-2 clock will be selected in some mode, and so on. There are a lot of such cases, and you have to make sure that in all of them you set a proper case analysis. This is where case analysis becomes important: to select a particular mode for timing analysis. If you do not do that, you will have big violations, because there is so much artificial variation in the clock path itself. Now, let us talk about sign-off methodology. What do we do as part of sign-off? STA can be run for many different scenarios; the three main variables are the PVT corner, the operating mode, and the parasitic (RC interconnect) corner. What is the RC interconnect corner? In this course we have never mentioned this so far. Parasitic interconnect corners are governed by the variations in metal width and thickness in the manufacturing process. I have already mentioned that there are tools that do the extraction, that is, that calculate the values of R and C from the layout. One corner is called typical, where nominal values of interconnect resistance and capacitance are adopted; another is max C, where the capacitance is maximum; min C, where the capacitance is minimum; max RC, where the RC product is maximum; and min RC, where both R and C are minimum. Usually one of these corners is selected.
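The danger of an unconstrained mux in the clock path can be sketched numerically. The branch delays below are hypothetical; the point is only that, without a case analysis, an OCV-style analysis behaves as if the mux selected both inputs at once.

```python
# Sketch (hypothetical delays): an unconstrained clock mux under OCV timing.
# Without set_case_analysis, the tool takes the longest branch for the late
# clock and the shortest branch for the early clock -- a physical impossibility.

func_clk_path = 1.2    # delay through the functional clock branch
test_clk_path = 3.5    # delay through the test / divided clock branch

# No case analysis: launch and capture disagree about the same mux.
late_clock  = max(func_clk_path, test_clk_path)   # 3.5
early_clock = min(func_clk_path, test_clk_path)   # 1.2
phantom_skew = late_clock - early_clock           # 2.3 of fake clock variation
print(phantom_skew)

# With a case analysis forcing the functional branch, both sides agree:
late_clock = early_clock = func_clk_path          # phantom skew disappears
```

That phantom 2.3 of skew would show up directly as setup and hold violations, which is why the lecture insists on setting a proper case analysis on every such select pin.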
So, usually the worst-case sign-off corner will be worst PVT plus, probably, max RC, and the best-case condition will be the fast process, the higher voltage and the lower temperature, plus the parasitic corner where the combination of R and C gives the least delay; probably the min C corner, which gives the shortest delays and can be used for min (hold) analysis. So, apart from knowing the operating condition, the PVT, you should also know what the worst parasitic corner is. Again, this usually comes from the back-end team, from the people doing the physical design; a front-end engineer might not know about this. Then comes the operating mode. So, let us say you have selected a worst-case corner, that is, a PVT plus parasitic corner which gives you the worst delay. For that corner, on a full chip, you might need to run many modes: functional 1 (high speed), functional 2, functional 3, functional 4, scan 1, scan 2, JTAG. This is where STA becomes such a big problem; it becomes so compute-intensive. Let us say we have 2 corners for setup and 4 for hold; in fact, at all 6 operating conditions we have to check both setup and hold. Now, with 6 operating conditions, and 3 functional modes plus 3 test modes, that is 6 corners times 6 modes. The PVT plus parasitic combination is called the corner, or operating condition; the mode says whether the chip is functional or in test. So we have 6 corners and 6 modes, which makes 36 runs: we have to run PrimeTime 36 times, each run with an individual corner and an individual mode. So one run will be functional mode 1 at worst-case operating condition 1.
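The run matrix described above is just a cross product, and it can be sketched in a few lines. The corner and mode names below are illustrative labels, not real library or mode names.

```python
# Sketch of the sign-off run matrix: every corner paired with every mode.
from itertools import product

corners = ["wc1", "wc2", "bc1", "bc2", "bc3", "bc4"]            # 2 worst + 4 best
modes   = ["func1", "func2", "func3", "scan1", "scan2", "jtag"]  # 3 func + 3 test

runs = list(product(corners, modes))   # one PrimeTime run per (corner, mode)
print(len(runs))                       # 36
```

Each of the 36 entries is a full STA run, which is exactly the compute problem the lecture is pointing at; and the matrix only grows as newer technologies add corners.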
Another would be functional mode 1 at worst-case operating condition 2, and so on; remember, I have got 2 worst-case corners and 4 best-case corners. This is where STA becomes much more challenging. In corner-based STA, the number of corners keeps on increasing as the technology gets more and more complex. It is very difficult to define one absolute worst-case or one absolute best-case corner; there can be, and there are, more than one. So, as an STA engineer, if you want to sign off this design, you have to run these 36 scenarios and make sure that your timing does not violate in any of those corners and modes. The PVT corner dictates at what condition the STA takes place. This is the combination I was talking about: these are modes and these are corners. Usually I am not worried about typical; typical lies between best and worst. But you can have multiple best-case and multiple worst-case corners. So I hope you now appreciate the challenges in the life of an STA engineer: the biggest challenge is to verify everything in so many scenarios, 36 in our example, in limited time. Now, let us move ahead; let us spend some time reviewing transition propagation, which is a very important concept. Say there is an AND gate with timing arcs from A to O and from B to O. The problem is one of slew calculation: the output slew is a function of the input slew, but does it use the slew from A or from B? Let us say A is arriving late, but B has the worse slew value; which output slew should PrimeTime propagate? PrimeTime can only store one max value and one min value per pin; it cannot store a value that is separately dependent on A or on B. Even if there are 5 input pins, the output pin will still have one max slew and one min slew, not one value per arriving input.
PrimeTime can only choose one of these slews out of all the inputs; it will not store all the values. This type of analysis is called graph-based analysis, and it is the default analysis that PrimeTime performs: whatever report_timing you do, in pre-layout or post-layout mode, the type of analysis you are doing by default is graph-based analysis. There is one variable to control whether it should take the worst transition value among all the inputs, or the transition value corresponding to the worst arrival. Always use the default, the worst slew; do not use the worst-arrival slew, because it can become optimistic sometimes. So the default is worst-case slew propagation; worst-arrival slew, as I said, can be optimistic and should not be used. PrimeTime will keep only one rising and one falling transition at any given point; in OCV mode, four values: rise max, rise min, fall max and fall min, because with the timing derates one analysis is max and one is min. If it did not do this, a pin at the end of a logic cone would have thousands of possible transitions because of the number of stages. Let us say you have three stages here, and these are not simple buffers; there is some logic with multiple inputs at each stage. Now, what about the transition at this pin? If PrimeTime were supposed to store all the transition values, it would have to trace back a long way, and it would end up with thousands of values. This is why, to keep the analysis reasonable and the data reasonable, not thousands of values per endpoint, PrimeTime keeps only these values, four in OCV mode.
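The per-pin bookkeeping just described can be sketched as a small merge function. This is an illustration of the idea, not PrimeTime's implementation; the function name and the slew numbers are hypothetical.

```python
# Sketch of GBA slew bookkeeping: per pin, only four slews survive in OCV
# mode -- {rise, fall} x {max, min} -- merged over all fanin arcs instead
# of one stored value per arriving input.

def merge_slews(arc_slews):
    """arc_slews: list of (rise_slew, fall_slew) tuples, one per input arc.
    Returns the four values GBA keeps at the output pin."""
    rises = [r for r, _ in arc_slews]
    falls = [f for _, f in arc_slews]
    return {"rise_max": max(rises), "rise_min": min(rises),
            "fall_max": max(falls), "fall_min": min(falls)}

# A 5-input gate: only 4 numbers are kept, not 5 per-input values.
pin = merge_slews([(0.10, 0.12), (0.25, 0.08), (0.18, 0.30),
                   (0.05, 0.11), (0.22, 0.16)])
print(pin)
```

The max entries feed the late (setup) analysis and the min entries feed the early (hold) analysis; everything else about which input actually produced each slew is thrown away, which is exactly the source of the pessimism discussed next.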
So, it will propagate the worst-case slew from input to output. What this means is that graph-based analysis is not specific to a particular timing path; please do not get worried if you do not fully understand this, it takes a little experience as well. Now, say there is a register-to-register path with combinational logic of depth 3; at each point PrimeTime will store only the worst-case slew. This worst-case value might not be in line with the actual data path: there may be some other pin, not even in the data path, which is one of the inputs and happens to have a very bad transition, and you will still get the bad value. This is why we say that graph-based analysis is comparatively pessimistic, but we have no other choice, because PrimeTime will not store thousands of values per output pin. So, what do we do? We have something called path-based analysis. Path-based analysis is more accurate; please understand, it is not optimistic, it is more accurate than graph-based. In path-based analysis, PrimeTime will actually calculate the transition values depending on the path, that is, path-dependent rather than generic.
Please note: in graph-based analysis the calculation of the transition at a cell output takes into account all the inputs, that is, the worst-case value over all the inputs. It is not path-based; it does not depend on which timing path we are checking; it is a generic thing, and that is why it is pessimistic. In path-based analysis, PrimeTime will actually calculate the transition depending on which path is being traversed; that is why it is called path-based. It is only used at the very end of the project, when you have a very small number of violations left and it is becoming difficult to fix them, and you want to first see whether the path meets timing in path-based analysis. If PBA tells me the path is meeting, I am good; I do not need to do anything more, because we have moved to a more accurate mode. The important point: worst-slew propagation creates pessimism on most timing paths. When there are few timing violations left, these paths can be re-run in more detail, propagating the actual slews from the startpoint and reporting the revised slack. If the violations are small in magnitude, this method can clear them. This is very important: it is only used for the cases where you have only a few violations left and they are proving difficult to fix.
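The relationship between the two modes can be sketched with a toy delay model. Everything here is hypothetical (the linear slew-to-delay model, the slew numbers); the only claim being illustrated is the one from the lecture, that PBA removes slew pessimism and so its path delay is never worse than GBA's.

```python
# Toy illustration: GBA times each stage with the worst stored slew,
# PBA with the slew actually produced along the path being timed.

def stage_delay(input_slew):
    """Hypothetical delay model: a slower input edge gives a larger delay."""
    return 0.5 + 0.4 * input_slew

path_slews  = [0.10, 0.15, 0.12]   # slews along the real path (PBA)
worst_slews = [0.30, 0.35, 0.40]   # worst slews stored at each pin (GBA)

gba_delay = sum(stage_delay(s) for s in worst_slews)
pba_delay = sum(stage_delay(s) for s in path_slews)

assert pba_delay <= gba_delay      # PBA removes the slew pessimism
print(round(gba_delay - pba_delay, 3))
```

The printed difference is exactly the pessimism that a PBA recalculation gives back on this toy path; when the remaining violations are smaller than this kind of margin, PBA alone can close them.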
How do you do PBA? The process is a recalculation. report_timing has an option for it, which I will show you in the lab; there is also the get_timing_paths command, whose result you can ask PrimeTime to recalculate, but do not worry about that command right now, it is for more advanced use. With report_timing, you can ask PrimeTime to recalculate a path. Note that you cannot do it for the complete design: you cannot do update_timing in PBA mode; PrimeTime will not do that. You have to do it path by path: you run report_timing, you get a path, and you give the option which tells PrimeTime to recalculate that path. The setup and hold constraints will also be recalculated, because everything is affected by the transitions: the path delays, the setup and hold check values, and so on are all affected by the transition calculation. So once you give a path to PrimeTime and ask it to recalculate, it will do it, but only do this for a very small number of paths; if you include a large number of paths for PBA, it will again take a lot of time, and it might even hang. So, in this section we discussed the sign-off strategies; things will become more clear when I show you the actual timing reports in the lab. We discussed the operating conditions and the parasitic corners; we also saw what modes you can have, functional mode, test mode and so on. We tried to appreciate the problem of STA sign-off in terms of the number of operating conditions and operating modes, and we saw the interesting concept of path-based analysis. Again, I will try to include this in the lab if possible.
So, study and understand the slide on how transition propagation takes place; if you understand transition propagation and how it works, you will understand PBA as well. In the next session we will look at another interesting concept: signal integrity and noise. Signal integrity and noise are not part of the lab; they are slightly more complex compared to timing. So we will see the concepts behind signal integrity and the noise calculation. Thanks a lot.