Good day. Welcome to this module on VLSI testing for the course on advanced VLSI design. I am Virendra Singh from the Indian Institute of Technology Bombay, and I will take you through the various challenges in VLSI testing and the solutions for them. The outline of this lecture is: an introduction to VLSI test, the VLSI design flow, test challenges, test economics, and the basics of VLSI test. I acknowledge the help and support of Professor Vishwani, Professor Rajith Singh, Professor Fujiwara, Professor Saluja and Professor Sameha, who helped me in preparing this course material.

As Professor Chandorkar mentioned at the very beginning, chip design has gone through various phases. If you look at the history of microprocessors, it started from the 4004, and now we have dual-core or quad-core processors implemented with nearly a billion transistors. As he mentioned in his introductory lecture, the cost of designing and manufacturing a chip is nowadays governed more or less by the cost of testing it. He quoted a rough figure of about 200 dollars per hour of tester time; that is, if you happen to test your chip for an hour, you need to spend 200 dollars, which is a huge amount and may not be feasible for a commodity processor or a similar device. So now I will take you through why this kind of cost is incurred in testing a system chip.

As we know, many of the chips we fabricate have some kind of fabrication defect. Because of this, we have to carefully test each and every device: if a device is faulty we have to reject it, and if it is fault-free we ship it and earn revenue from it. Some devices are not faulty, but they have weak spots, latent defects that are not pronounced at the time of manufacturing; as the device functions in the field these defects become pronounced, and the device fails after working for a few weeks to a few months. We also want to reject these weak devices, and for that we generally take them through a stress test, often referred to as a burn-in test in the VLSI test domain.

If you look at the kinds of defects that may be incurred in the manufacturing process, some are processing-related defects introduced while printing the circuit on the chip: formation of parasitic transistors, missing contact windows, or oxide breakdown. There may be material-related defects, such as surface impurities or crystal imperfections. There are also time-dependent failures that are not present at the moment of fabrication, such as dielectric breakdown or electromigration, where metal migrates due to heat while the device operates and may eventually create an open. Another effect that dominates nowadays is negative bias temperature instability (NBTI): whenever a PMOS transistor operates under negative bias its threshold voltage increases, the delay of the transistor increases, and hence the circuit operates slower than desired. Finally, there are packaging-related failures, such as contact degradation, and so on and so forth.
So, many defects may occur when you are manufacturing a chip, and some defects are time dependent, occurring while the chip is in operation. Now let us look at how difficult the test process is. I guess all of us have gone through this kind of exercise: to test whether a chip is faulty or fault-free, what we often do is apply all possible inputs to it and check whether we get the correct output. For example, to test a 3-input NAND gate, I need to apply all 8 input combinations and check whether each resulting output is correct. If there are N inputs, I need to apply 2^N input combinations; this kind of test is known as a functional test. Once I apply this test and find the device correct, I do not need to bother about which particular defect may have occurred during fabrication, though the time-dependent defects might still be of concern to us.

In today's devices we may easily have hundreds of inputs. With a hundred inputs you would need to apply 2^100 input combinations, which is roughly 10^30. Even with a very fast tester operating at 1 GHz, you would need about 10^21 seconds, which is roughly 400 billion centuries. That means a chip you fabricate today would be ready to use only after 400 billion centuries, which is impractical. This tells you the difficulty of testing. Our aim is to test a chip in a reasonable amount of time, not in billions of centuries; a reasonable amount of time, and I will come back to this point a little later, is a few seconds to a few minutes. That means I can apply only a very small subset of those 2^100 inputs. The biggest challenge is to find the subset that can give me a similar level of confidence that the device, the chip, is defect-free and ready to use.

This applies to all the high-end circuits; I say high-end because some chips you may not even test at all, since testing is expensive. So the test time is a few seconds to a few minutes, and every manufactured chip must be tested, which means we have to apply many thousands of test patterns within that time. We therefore have to choose these patterns carefully so that they can be applied in this reasonable amount of time. This has an impact on economics: the cost of applying a test for a few minutes reaches roughly the total design and manufacturing cost. And despite all this cost, testing is still imperfect, because I can never apply the exhaustive test set, yet I still want a confidence similar to what I would have obtained by applying all 2^100 tests. This slide, from the ITRS in 2002, speculated that some time around 2012 or 2013 the cost of testing a transistor would become nearly equal to the total cost of designing and manufacturing it, which is really worrying.
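Before moving on, the exhaustive-test arithmetic above can be made concrete with a minimal sketch (Python, my own illustration, not part of the lecture; the 1 GHz tester rate is the lecture's assumption):

```python
# Exhaustive functional test time for an N-input combinational circuit,
# assuming one test pattern is applied per tester cycle.

TESTER_RATE_HZ = 1e9  # 1 GHz tester, as assumed in the lecture

def exhaustive_test_time_seconds(n_inputs: int) -> float:
    """Seconds needed to apply all 2^N input combinations."""
    return (2 ** n_inputs) / TESTER_RATE_HZ

patterns = 2 ** 100
seconds = exhaustive_test_time_seconds(100)
centuries = seconds / (365 * 24 * 3600) / 100
print(f"patterns: {patterns:.3e}")              # ~1.3e30, i.e. ~10^30
print(f"time: {seconds:.3e} s = {centuries:.3e} centuries")
# Comes out to about 4e11 centuries, the "400 billion centuries" figure.
```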
So the ITRS projection means that test has an equal share with design and manufacturing in the cost of an IC, and if it exceeds them, the total cost will be dominated by test. That means we should be careful and must have a good test methodology, so that we can get very high confidence at very minimal cost. Engineering is all about economics: you always have to develop a cost-effective solution. We have to devise a methodology so that we can test a device in reasonable time and still get a confidence similar to what exhaustive testing would have given.

Another thing that affects the economics is time to market. If some of the defects are systematic, something is wrong with your process; you have to set the process right, and in this exercise your time to market may be delayed. The area under this revenue curve tells you how much revenue you can get: in the beginning you can earn a very high revenue per product, but as time passes the revenue per product decreases, because there are now many competitor products in the market; your market share reduces, you get less and less revenue, and therefore you have to reduce the price. Initially you can sell a device at a higher price. Now, if the time to market slips by a few months, say delta t, the revenue reduces accordingly; companies have reported that a six-month delay in launching a product can cause a revenue drop of roughly 30 percent, which is a really big number. So companies are very aggressive in launching their products, but they also want to make sure the quality is very good; otherwise it is not only a revenue loss, their image is also at stake, and that gives a bad impression in the market. So there is always a trade-off.

Now let us come to the VLSI realization process and see where exactly test appears in the design flow. The design flow starts from a customer need; the customer has some requirement. He may say: I want to build a microwave oven controller, and if I put milk in the oven it should operate at, say, 300 watts for 5 minutes; if I want to cook rice it should operate at, say, 250 watts for 15 minutes; and so on. From the customer's need, the engineers must analyze the requirements, determine what exactly is needed, and prepare the specifications. These specifications are most of the time written in a hardware description language (HDL), which may be Verilog, VHDL, or SystemC, and from that HDL we often use CAD tools that can synthesize the design. That means from the specification you write RTL, from the RTL you synthesize a gate-level netlist, then you do place and route, and finally you tape out, producing GDSII. Here I want to point out that at every level of synthesis you need to verify whether your design, the synthesized RTL or the gate-level netlist, is functionally correct with respect to the specifications you have written down. Once you have the GDSII you send it to the fab; the fab fabricates your circuit and gives it back to you. Now you want to make sure that every device has no manufacturing defect, that it is defect-free, because only then can you sell it to the customer.
So, after fabrication you have to test each and every chip. As we discussed, you cannot apply an exhaustive test set to every chip. What you need to do is find a small test set that can test your chip and give you very high confidence that the manufactured chip is defect-free. The process of generating this test is referred to as test development, and we generally need the gate-level netlist for it; we can therefore start this process before fabrication. Test development is a one-time process for a chip design, whereas manufacturing test application is a recurring cost, because you have to test each and every chip.

There are a couple of definitions we often use in VLSI. Design synthesis is defined as: given an input/output function, develop a procedure to manufacture a device using known materials and processes. Verification is the predictive analysis of a design to ensure that the synthesized design, when manufactured, will perform the given input/output function. Test is a manufacturing step that ensures that the physical device, manufactured from the synthesized design, has no manufacturing defects.

People often confuse verification and test, so let me briefly tell you the difference. Verification is responsible for the correctness of the design; that is, it ensures that the design you have made is correct with respect to the specifications. Test ensures the correctness of the manufactured device. Verification is mostly performed by simulation; I guess all of you are familiar with simulation-based verification, but simulation is a much slower process, so companies often also use emulation, where they map the design onto some reconfigurable platform, exercise as many vectors as they can, and try to ensure the correctness of the design. The more rigorous way is formal methods; some other lectures in this course will deal with formal techniques and how you can use them to verify your design. Test, by contrast, is a two-step process. The first step is test development: you develop a small test set that can be applied in reasonable time, a few seconds to a few minutes. The second step is test application: it is recurring, because you have to apply these vectors to each and every manufactured chip using automatic test equipment. Verification is performed prior to manufacturing, so it is more or less a software process working on your Verilog or VHDL design, whereas test has two parts: the first is a one-time process, while the second is applied to each and every manufactured device. Verification is responsible for the quality of the design, how good a design you are producing, whereas test is responsible for the quality of the devices you are manufacturing. In this module we will look at the issues related to test.

Now let us look first at the problems with the ideal test. An ideal test is supposed to detect all possible defects that may occur during the manufacturing process.
An ideal test would pass every functionally good device and reject every functionally bad device. That means you would need to test for a very large number of possible defects, which is a really difficult process; defect-oriented test is still an open problem, and some of you might take it up as a research interest and develop a defect-oriented test methodology that advances the state of the art in test.

Now look at the real test. The ideal test is supposed to detect all possible defects, but defects are numerous; I listed some of them in the beginning, and you cannot target every possible defect. One thing, however, is very clear: all these defects affect the functionality of the device in one way or another, which means they manifest as logic errors. So we model the impact of a defect as an error at the output of the circuit, and that model is referred to as a fault model. A fault model may or may not map to every real defect, but by and large it is capable of detecting a large number of defects. Because of the high design complexity, it is almost impossible to get 100 percent coverage even of all modeled faults, so we end up with incomplete coverage, and that is another thing which always bothers us; at the very least we want to test a chip for all modeled faults, the faults whose effects appear as logic errors.

I will come back to this point a little later, but due to a bad design or test methodology, some good chips may be classified as bad and rejected; this leads to yield loss, and hence an increase in the cost per chip. On the other hand, since we apply only a very small subset of the total tests we would ideally apply, a fraction of the bad chips may escape the test; those chips are classified as good even though they are faulty. The fraction of bad chips among the chips shipped as good is known as the defect level, and every company wants to keep it as low as possible. The kind of figure people look at is roughly 100 defective parts per million (DPPM) parts manufactured, though again it depends on the application where the chips will be used.

So testing is something like a filtering process: you have some good chips and some bad chips, and you want the test to classify the good chips as good, but some of the bad chips may also be classified as good. The probability of classifying a bad chip as good contributes to the defect level. Conversely, we want all the bad or defective chips to be classified as bad, but some of the good chips may also fail the test. I will come back to why some good chips may fail: because we can no longer apply the functional test, we have to work with the structural test, and the structural test does not care about functionality. Some vectors that can be applied only in a non-functional mode may uncover a defect, but since those vectors can never be applied in functional mode, that fault may never be excited in operation and hence would never cause any malfunction of the chip.
So what we can say is that if we classify such good chips as bad, we are losing those chips unnecessarily, and that results in yield loss.

This process is something like a student examination, and all the faculty and students struggle with this problem. If I want to test how well you people have learned VLSI test, ideally I would create all possible questions, and you would be expected to answer all of them, perhaps over 2, 3, or 4 months; but nobody has that much time to test. Still, we want to examine students and award grades; we want to say this student is of pass quality and that student is of fail quality, by testing with a three-hour exam or even a one-hour exam. In one hour or three hours you can answer only a limited number of questions, much smaller than the total number of possible questions for the course. So it is a big problem for the instructor to design a small set of questions that gives a confidence similar to what he or she would have obtained by asking all possible questions.

For example, suppose a class has 100 students, of whom 75 percent are pass-quality students, while 25 percent are either not sincere, not attending classes, or do not have this course as a priority, so they are fail-quality students. Ideally I want to classify the former as pass and the latter as fail. To design a question paper, every instructor uses some kind of error model, the errors students often make, and sets a paper that can uncover those errors. Now suppose that, among the 75 percent, the probability of passing the exam is 95 percent; that means 95 percent of the pass-quality students will write the right answers and pass, whereas 5 percent, who were under stress or had not read that portion the previous night, cannot answer the questions and fail. Along similar lines, 95 percent of the fail-quality students will fail, but some students may be smart: they read only selected topics the previous night, happen to be able to solve exactly those questions, and hence pass the exam. By a simple probabilistic calculation you can compute that the overall probability of passing is about 72 percent and the probability of failing is about 27 percent. This is of great concern to the instructor, because some fail-quality students are passing the exam, and of even greater concern to the students, because some pass-quality students are failing it. Among those who pass, the contribution of fail-quality students can be computed as the conditional probability that a student belongs to the fail-quality subgroup given that he or she passed the exam: the probability that a fail-quality student passes, times the fraction of fail-quality students, divided by the total probability of passing, and that comes out to be about 1.7 percent.
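As a quick check on the numbers just computed, here is a minimal Python sketch (my own illustration, not part of the lecture), using the lecture's figures of 75 percent pass-quality students and a 95 percent correct-classification probability on each side:

```python
# Teacher's risk and student's risk for the examination analogy.
p_pass_quality = 0.75      # fraction of pass-quality students (lecture figure)
p_fail_quality = 0.25
p_pass_given_good = 0.95   # a pass-quality student passes the exam
p_fail_given_bad = 0.95    # a fail-quality student fails the exam

# Total probabilities of passing and failing the exam.
p_pass = (p_pass_quality * p_pass_given_good
          + p_fail_quality * (1 - p_fail_given_bad))
p_fail = 1 - p_pass

# Bayes' rule: who ends up on the wrong side of the result?
teachers_risk = p_fail_quality * (1 - p_fail_given_bad) / p_pass  # fail-quality, yet passed
students_risk = p_pass_quality * (1 - p_pass_given_good) / p_fail # pass-quality, yet failed

print(f"P(pass) = {p_pass:.1%}, P(fail) = {p_fail:.1%}")  # ~72.5% / ~27.5%
print(f"teacher's risk = {teachers_risk:.1%}")            # ~1.7%
print(f"student's risk = {students_risk:.1%}")            # ~13.6%
```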
This 1.7 percent is referred to as the teacher's risk, because these are fail-quality students who are passing the exam. You can reduce the teacher's risk by making your question paper tougher, but if you do that, some more pass-quality students may fail. Then you have to evaluate the other risk: compute the conditional probability that a student belongs to the pass-quality class given that he or she failed, and that comes out to be about 13 percent. That is the risk the students bear from a harder exam. So we need a trade-off between the teacher's risk and the student's risk. Things are very similar in VLSI test, where the teacher's risk corresponds to the consumer's risk, because consumers are likely to receive bad parts, and the student's risk corresponds to the foundry's or the company's risk. So you have to make a trade-off. There is one small difference: in an examination we give some benefit of the doubt; if a student solves a reasonably large number of problems correctly, we call him or her a pass-quality student. In VLSI test, a chip has to pass all the tests; it is not that a chip passing 80 percent of the tests is classified as good. That is the small difference between student examination and VLSI test.

Now look at the roles VLSI testing plays. It plays a role in the detection of faults; it also plays a role in diagnosing faults, determining what kind of fault has occurred. If the same fault occurs repeatedly, or in multiple chips, it is a concern and we have to investigate why that fault occurs so often; that is known as failure mode analysis. So test also helps you in diagnosis and failure mode analysis, finding what went wrong in the process while manufacturing the chip, so that when many chips come out faulty you can set the process right.

Now the question is: how well must we test a chip? This has a direct relationship with the test cost and the quality of test, because ideally we want the quality to be as good as with the exhaustive test. Let us say we have a system with roughly 100 chips; in reality, nowadays there are systems with more than 100 chips. What kind of defect level can you accept? Say I can accept one system out of 100 as defective; if it is defective I can send it back to the company, and the company may replace it, though a cost is involved. A system is defect-free only if all its parts are defect-free. So if I can accept 1 percent of systems as defective, then at most one chip out of 10,000 can be defective, which is 100 defective chips per million chips produced. That means with an acceptable 1 percent system defect rate, my requirement is roughly 100 DPPM. For almost all commercial chips 100 DPPM is admissible, whereas for some applications, such as automotive, no defective parts are wanted at all.
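The 100 DPPM figure can be derived with a small check (Python, my own illustration; the 100-chip system and the 1 percent acceptable system defect rate are the lecture's assumptions):

```python
# A system works only if all its chips are good, so
# P(system defective) = 1 - (1 - p_chip) ** n_chips.
n_chips = 100
acceptable_system_defect_rate = 0.01   # 1 system in 100 may be defective

# Solve (1 - p) ** n >= 0.99 for the allowed per-chip defect probability p.
p_chip_max = 1 - (1 - acceptable_system_defect_rate) ** (1 / n_chips)
print(f"max per-chip defect probability = {p_chip_max:.2e}")
print(f"= {p_chip_max * 1e6:.0f} DPPM")   # ~100 DPPM, the lecture's figure
```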
Automotive makers ask for zero defects. Zero defects is almost impossible to achieve, but the DPPM must then be very low, very close to zero, because nobody accepts a chip that goes into the braking system of a car with a definite possibility of failure; nobody wants to buy that kind of car. Hence those applications have very strict requirements on the quality of the chips.

Now let us assume I produce 2 million chips and my manufacturing yield is 50 percent; that means 50 percent of the chips are good and 50 percent are bad, which is quite representative of today's manufacturing. Depending on whether the process is immature or mature, quality varies: the yield may go as high as 80 to 85 percent, or as low as 20 percent. Say I have a reasonably good yield of 50 percent: 1 million chips are good, which we want to ship, and another 1 million chips are bad, which we want to reject. Out of the 1 million bad chips, if my DPPM target is 100, then only about 100 bad chips may be shipped; that means 999,900 of the bad chips must be detected as bad, while 100 may go unnoticed. The test coverage required is therefore 99.99 percent; that is the kind of requirement 100 DPPM imposes. The required coverage goes even higher if the DPPM requirement is below 100, and can be relaxed if the DPPM requirement is relaxed. At 100 DPPM, a board with 100 ICs has roughly a 1 percent failure probability, and a board with 500 ICs roughly a 5 percent failure probability. For the automotive industry, as I said, they are targeting zero defects, but in practice they look for fewer than 10 defective parts per million.

Now look at the yield model: why do we have 50 percent or 70 percent yield? It comes from manufacturing defects, and the defects may be gross area defects, systematic defects of the process, or random defects. In a well-controlled process, die yield is limited mostly by random spot defects, not by systematic process defects or material defects; and the elimination of random defects is almost impossible. If I look at a wafer, it carries many dies, some defective and some good. Say on this wafer I have 10 defects spread over 22 dies; if these defects are randomly distributed over the whole area, the 10 defects may spoil 10 different dies, and we have to reject those dies. This yield can be captured by a simple Poisson model, which gives the die yield as e^(-lambda), where lambda is the average number of defects per die; that number equals the defect density times the die area. Defect density is somewhere between 0.2 and 1 per square centimeter, and if you look at manufactured devices right from the 60s and 70s, this defect density has stayed more or less the same. That means if you have a bigger die area you are likely to have more defects, and, if the defects are randomly distributed, to spoil more dies.
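Here is a minimal sketch of the Poisson yield model just described, together with the board-level failure probabilities mentioned above (Python, my own illustration; the defect density, die area, and DPPM values are the lecture's example figures):

```python
import math

def poisson_die_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Die yield under the Poisson model: Y = exp(-lambda),
    where lambda = defect density * die area."""
    lam = defect_density_per_cm2 * die_area_cm2
    return math.exp(-lam)

# Lecture's example: 0.5 defects/cm^2 on a 2 cm^2 die -> lambda = 1.
y = poisson_die_yield(0.5, 2.0)
print(f"yield = {y:.1%}")   # ~36.8%, the "37 percent" in the lecture

# Board-level failure probability at 100 DPPM per chip.
dppm = 100
for n_ics in (100, 500):
    p_fail = 1 - (1 - dppm / 1e6) ** n_ics
    print(f"board with {n_ics} ICs: {p_fail:.1%} failure probability")
    # ~1% for 100 ICs, ~5% for 500 ICs
```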
Let us put numbers in: with a purely random distribution, assuming 0.5 defects per square centimeter and a die area of roughly 2 square centimeters, the average number of defects per die is 1, and hence the yield is e^(-1), about 37 percent. That looks quite pessimistic: 37 percent of the dies are good and the remaining 63 percent are bad and have to be thrown away. In reality, though, the Poisson model does not capture the clustering phenomenon of defects. The defects are really not uniformly distributed everywhere; they are clustered, and the clustering density varies from process to process. You can figure out the clustering density of a process and compute the yield based on that, but certainly, when the defects are clustered they spoil fewer dies, and hence the yield is higher.

Now, recall that we were talking about 99.99 percent coverage and 100 DPPM, meaning 100 defective parts per million may escape the test. Assume you have a million parts and your yield is 10 percent: then 0.1 million parts, 1 lakh, are good, and those we ship, but 0.9 million parts, 9 lakh, are bad. Out of the 0.9 million bad parts, with 99.99 percent coverage, 90 parts may escape the test. If 90 bad parts escape, the resulting defect level is about 900 DPPM. This gives you the correlation between your fault coverage, your yield, and the DPPM: you need very high fault coverage to bring the DPPM down when the yield is low. Now assume instead that the process is mature and you have 90 percent yield: 9 lakh parts are good and shipped, and 1 lakh parts are bad, so with the same coverage only 10 parts escape the test. Those 10 escapes give you only about 11 defective parts per million, which is pretty low. So what this tells us is that the biggest challenge is for complex systems that have low yield. If you have high yield, you are really not very worried about the test coverage; even a lower coverage may give you a very low DPPM. It is the low-yield dies that are the big concern. The other thing is that, because of complexity, it is very difficult to achieve very high coverage, and the defect level does not grow linearly with complexity, complexity meaning the number of gates you have; it increases non-linearly, which is another concern.

So manufacturing defects come from flaws in the materials and flaws in the process. Variability is also becoming increasingly important: as the variability in, say, channel length or gate dimensions grows, the speed varies with it, and hence some devices may have no logic fault but still have timing faults. These faults are either permanent, hard faults, or transient faults: devices are becoming weaker and weaker, so if some environmental radiation strikes a device it may produce a glitch, and that glitch may propagate through the system and give you a wrong output. Those are the kinds of transient faults we have.
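Coming back to the coverage/yield/DPPM arithmetic above, a small sketch (Python, my own illustration; the yield and coverage values are the lecture's example figures) reproduces both cases:

```python
def defect_level_dppm(n_parts: int, yield_frac: float, fault_coverage: float) -> float:
    """Defective parts per million among the chips shipped as good,
    assuming escapes = bad parts * (1 - coverage)."""
    good = n_parts * yield_frac
    bad = n_parts - good
    escapes = bad * (1 - fault_coverage)   # bad chips that pass the test
    shipped = good + escapes
    return escapes / shipped * 1e6

for y in (0.10, 0.90):
    dppm = defect_level_dppm(1_000_000, y, 0.9999)
    print(f"yield {y:.0%}, coverage 99.99% -> {dppm:.0f} DPPM")
# Prints roughly 900 DPPM at 10% yield and about 11 DPPM at 90% yield,
# matching the lecture's figures.
```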
Now look at the test itself. You have a manufactured circuit, and you want to apply the input patterns that you developed during the test development procedure, a small set of vectors. Using a very expensive tester you apply the test, collect the test response, and compare that response with the golden response obtained through simulation; if there is a match you say the device is good and you ship it, and if it fails you have to reject the device. To apply the test we use expensive machines, automatic test equipment (ATE); the Advantest testers are one example, and these are really expensive machines.

Now look at the various costs involved in this process. First there is the design cost: not all chips, as designed, are well suited for developing a very compact test set. If they are not, we have to augment the design, and that augmentation comes with extra area; if the area increases, the yield decreases, and the augmentation also comes with a performance penalty. That is the design effort we need to put in, and it comes as a recurring cost per chip. Another cost, as I explained earlier, is the test development cost, which is purely a software process: you have to develop a small test set. Then you have to buy expensive test equipment capable of applying the test, collecting the test response, comparing it with the golden reference, and flagging whether it is a good chip or a bad chip.

This slide gives an example of design augmentation: say the output produced by logic block A passes through logic block B and is therefore not easy to observe. What you can do is add one extra bus that takes the output of block A directly, so you can simply observe a signal that was otherwise very difficult to observe through block B. This additional cost is what is typically referred to as design for testability; I will come back later to the other design-for-testability techniques we have.

The other cost is manufacturing test. Say you buy a very expensive tester that operates at 1 GHz; these figures are pretty old, maybe 10 years. You buy one instrument, the tester I showed you earlier: say it has 1024 pins, the base chassis costs 1.2 million dollars, and at 3000 dollars per pin the total comes to roughly 4 million dollars. Now say you will operate this tester for the next 5 years; after that it would be too slow and obsolete, so you depreciate it over 5 years at 20 percent per year, which is about 0.85 million dollars of depreciation per year. Then there is the maintenance cost, say 2 percent of the equipment cost per year, plus operating costs such as mains power, air conditioning, and the building. If you add this up it is something like 1.4 million dollars per year; distributed per second, that comes to roughly 4.5 cents per second, if you operate the tester 24 hours a day, 7 days a week.
So assume it costs you roughly 5 cents per second; this is just the test equipment cost. If you test your device for a minute, it costs you about 3 dollars; if you test your chip for an hour, it costs you about 180 dollars, which is roughly the 200 dollars per hour Professor Chandorkar mentioned earlier. These are the major costs involved, and that is what makes test a difficult process.

One more thing I would like to mention at the end, among the test challenges. When you manufacture devices, based on the yield some devices are good and some are bad. Among the good devices, some are weak devices, likely to fail in the first few months; for example, if a chip has a very weak contact or a very thin line, then as the chip operates, electromigration may make that line completely open after a while, and the chip fails. This phase is known as infant mortality, and it happens in the first few weeks; after that the device operates fairly well for the next couple of years, and then aging starts to appear in the picture. Due to the various aging effects, NBTI being one of the dominant ones nowadays, the device starts failing to operate at the rated frequency, or may fail completely.

So we need to take care of two more things. First, we have to weed out all the bad parts right after manufacturing, but during testing we also have to make sure that the weak devices fail, so that they do not fail in the field just after they start operating. For that we provide accelerated conditions, so that these devices experience that kind of aging effect right after manufacturing: we generally test them at elevated voltage and elevated temperature. The temperature is something like 125 to 130 degrees centigrade, and the voltage is raised roughly 40 to 50 percent above the nominal VDD. That takes care of infant mortality, making sure it happens right after manufacturing. Second, the devices that pass the test will work fairly well for a couple of years, and then we need some mechanism to detect a fault if it occurs while the device is in the field, especially if the required lifetime is long; for space applications you cannot replace a device every year, so it has to have a lifespan of 15 to 20 years. That means you have to have some mechanism that can detect a defect when it occurs and test for it in the field.

To conclude: I have tried to motivate why VLSI test is very important in the VLSI design flow, why you need to study this topic along with the advanced VLSI design course, what the various challenges in VLSI test are, and at what levels we need to take care of the various issues. With this I summarize this lecture. Have a good day; we will meet again in the next lecture, where we will discuss the various fault models and test techniques. Goodbye. Thank you very much.