Hello, welcome to the lecture series on the Advanced VLSI Design course. I have been talking about VLSI testing over the last couple of lectures, and today we will discuss built-in self-test in a little more detail. The background of built-in self-test was already covered in the last lecture. We have seen that one very important capability built-in self-test brings in is field testing: the manufactured device can be tested as and when you want, without using a very expensive tester. Among its other advantages, it reduces testing and maintenance cost. The test generation cost is very low, because we are not running an automatic test pattern generator that may take months to generate the tests. We do not need to store a large set of test vectors, or the corresponding test responses, for the device. We need only very simple automatic test equipment to apply the test; as I discussed in the last class, a single pin can initiate the operation, start the built-in self-test, and report whether the device is good or bad. And because the tester required is simple and inexpensive, we can test a large number of devices in parallel, and since the test is applied at speed, the test time is shorter. Another very important feature is that we can test at the functional system speed: we operate the circuit at its functional speed, and hence may discover many more defects. So these are the advantages we have. Now, if you look at the architecture, what do we need? We need three additional entities to help us test the device. You have the circuit under test, and you need a test generator, that is, a hardware test pattern generator.
You also need a test response collector that collects the response, compares it with the golden response, and tells you whether the chip is good or faulty. And to control all these activities, we need a test controller. These are the three essential parts of built-in self-test. Now, we have to apply the test patterns at the primary inputs and collect the response from the primary outputs. That means the patterns generated by the built-in self-test hardware must be multiplexed with the primary inputs using a multiplexer controlled by the test controller, and the response is collected directly from the outputs of the circuit. So what can it test and what can it not test? It can test all the faults in the circuit under test, and all the faults in the built-in self-test hardware itself. But because we are not driving the primary inputs directly, a fault on a primary input line itself, say the line is shorted to ground or to VDD, or is open, may not be testable. In the same way, since the output lines are not exercised externally, faults on them may not be testable either. These are the parts that cannot be tested if we use built-in self-test. Now let us look at these components one by one, starting with the hardware pattern generator; after that we will go to the output response analyzer. How can a hardware pattern generator work? There are many ways. It may sound a bit odd to say that we want to build a complete pattern generator on the chip that can generate deterministic tests. One of the simplest approaches one can think of is to generate the tests using an automatic test pattern generator (ATPG) and then store those patterns in a ROM.
If we do that and there are, say, millions of patterns, storing those millions of patterns in ROM may take a huge area on the chip, and that may be too expensive. Another way is to generate test patterns exhaustively with simple hardware. For example, a small circuit can generate the exhaustive set of 8 patterns for 3 inputs, and in the same way we can build hardware that generates exhaustive patterns for 16 inputs, 50 inputs, or more. But if we generate patterns exhaustively, the problem is test application: as we saw at the very beginning, applying an exhaustive set of millions of test patterns may take centuries. The same holds here. Hence we cannot apply these patterns in reasonable time, even though the generating hardware itself is cheap; this is not practical once the number of inputs grows beyond about 20. So this is not a very attractive scheme either. So far we have discussed two schemes: storing the patterns in a ROM, which is impractical, and exhaustive test, which is also impractical. The next alternative is pseudo-exhaustive testing: we do not generate a truly exhaustive pattern set, but it behaves like one. What do we do? We partition the circuit based on the fan-in cones of the outputs: we find which inputs affect each particular output, and generate an exhaustive test for just those inputs. For example, consider a circuit with two outputs, F and H, each with its own fan-in cone. The circuit has eight inputs in total, but only 5 of them affect output H, and only 5 affect output F.
So inputs X1 to X3 affect only H, inputs X6 to X8 affect only F, and the remaining inputs are shared between the two cones. If I generate an exhaustive pattern set for the five inputs of one cone and an exhaustive set for the five inputs of the other, I get a similar kind of confidence to what a fully exhaustive test would give. Now look at how many patterns we need. For 8 inputs we would need 2^8, that is, 256 patterns; but by dividing them into 2 sets of 5 inputs each, I generate 2^5 = 32 patterns per set, and with 2 sets that means 64 patterns in total. So there is a good reduction in the number of patterns. Still, if a cone has, say, 50 or 100 inputs, this approach again turns out to be impractical, so we cannot always use it either. What is the way ahead? If we want to implement built-in self-test, we have to generate test patterns that can be applied in reasonable time. One approach that can be explored is random pattern testing. As we discussed at the very beginning, one fault may have multiple test patterns, so if I generate patterns randomly, I may still hit one of them. For example, take a 4-input AND gate. To detect the output stuck-at-1 fault I need a 0 at the output to excite it, and there are various input combinations that do this: 0111, 1011, 1101, 1110, 0011, and so on. All of them detect this stuck-at-1 fault. That means if I generate patterns randomly, I am quite likely to detect this particular fault.
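The pseudo-exhaustive pattern-count arithmetic above can be sketched in a few lines. The cone membership below is an assumption matching the lecture's two-output example (the input names x1..x8 are illustrative):

```python
# Pseudo-exhaustive testing: each output's fan-in cone is exercised
# exhaustively, instead of exercising all inputs together.

def pseudo_exhaustive_count(cones):
    """Total patterns if every fan-in cone gets its own exhaustive set."""
    return sum(2 ** len(c) for c in cones)

cones = [{"x1", "x2", "x3", "x4", "x5"},   # assumed 5-input cone of output H
         {"x4", "x5", "x6", "x7", "x8"}]   # assumed 5-input cone of output F

print(pseudo_exhaustive_count(cones))  # 64, versus 2**8 = 256 fully exhaustive
```

The reduction grows with circuit size only as long as each cone stays small, which is exactly the limitation the lecture points out for 50- or 100-input cones.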
So random generation could be one of the approaches, and if you look at the fault coverage, you typically get a curve like this: very quickly you get good coverage, and after that the coverage starts to saturate. For many circuits the curve behaves this way, which is acceptable. For a particular circuit, 10 patterns may already give you 80 percent fault coverage and 100 patterns may give you about 99 percent, whereas to reach 100 percent you may need to apply 1000 patterns. So with random patterns you initially get very high coverage, and then it slowly saturates. For some circuits, however, the curve rises fast and then saturates quickly at a lower level, and that happens for circuits that are random pattern resistant. One source of random pattern resistance is the following. Take the same 4-input AND gate, and suppose I want to generate a test for the output stuck-at-0 fault. Now I need a 1 at the output, and there is only one pattern that can excite this, namely all inputs at 1. For 4 inputs there are 2^4 = 16 possible patterns, so the probability of generating this one pattern is 1/16. If I generate only a few patterns, say 5, the probability of hitting it is quite low, and hence we may not detect this particular fault. Such circuits are called random pattern resistant circuits. If a large number of such hard-to-hit patterns are needed, you may not be able to achieve very high coverage; but by and large, for most circuits, you are likely to get the favorable curve.
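The asymmetry between the two faults on the 4-input AND gate can be checked by brute force; this is a minimal sketch, assuming output stuck-at faults only:

```python
import itertools

def and4(a, b, c, d):
    return a & b & c & d

# A pattern detects output stuck-at-0 iff the fault-free output is 1,
# and output stuck-at-1 iff the fault-free output is 0.
patterns = list(itertools.product((0, 1), repeat=4))
detect_sa0 = sum(1 for p in patterns if and4(*p) == 1)
detect_sa1 = sum(1 for p in patterns if and4(*p) == 0)

print(detect_sa0, detect_sa1)  # 1 15
```

So a random pattern detects stuck-at-1 with probability 15/16 but stuck-at-0 with probability only 1/16, which is exactly why the latter makes the circuit random pattern resistant.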
A key issue, though, is that the patterns must really be random: if they are biased, you may not get this curve. Now there are two questions. First, how do we generate random patterns? Second, if I can generate truly random patterns, is that sufficient? The answer to the first is that it is very difficult to generate purely random patterns, because you always generate them with some algorithm, and hence they carry some structure; that is one issue. Second, even if I had a perfect random pattern generator, how would I compare the response of the circuit to decide whether it is good or bad? I do not know the fault-free response, because I do not know in advance what patterns the generator will produce on the fly. So even a very good truly random pattern generator does not help us. What is the way out? We need patterns that are, by and large, random in nature, but generated algorithmically, so that they are repeatable: then we can simulate these patterns, generate the golden response of the circuit, and eventually compare against that golden response. This approach is pseudo-random pattern generation. Let us look at how we generate such pseudo-random patterns: the mixture of 0s and 1s should look random, preserving the randomness, while the sequence itself is deterministic. One very simple circuit that does this is the linear feedback shift register.
Suppose you have a shift register with three flip-flops, and a feedback from the last flip-flop, XORed with an intermediate stage, fed back to the first flip-flop. Say I initialize it to 1 1 1; let us see how it progresses. This is a synchronous circuit. Initially the state is 1 1 1. The XOR of the tapped stages gives 0, so in the next cycle the state becomes 0 1 1, the 1s shifting along. In the following cycle the state becomes 0 0 1, then 1 0 0, then 0 1 0, then 1 0 1, then 1 1 0, and then we are back at 1 1 1. How many states does it generate? 1, 2, 3, 4, 5, 6, 7: seven distinct states. So with a very small circuit we can generate a sequence of 7 vectors, whereas a fully exhaustive generator for 3 inputs would produce 8. That means we are generating a nearly exhaustive pattern sequence. And if you look at the placement of 0s and 1s, it is fairly random, quite different from what a counter would generate. So a linear feedback shift register can serve as a pseudo-random pattern generator. It generates the patterns algorithmically, which means they are repeatable (very important, because we can simulate them), and it has the most desirable random-number-generation properties.
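The walkthrough above can be reproduced in software. This is a minimal sketch, assuming a Fibonacci-style 3-bit LFSR whose feedback XORs the second and third stages (the taps for the 1 + x + x^3 polynomial discussed later):

```python
def lfsr_states(seed=(1, 1, 1), steps=8):
    """Return successive states (s1, s2, s3) of a 3-bit maximal-length LFSR."""
    s1, s2, s3 = seed
    states = []
    for _ in range(steps):
        states.append((s1, s2, s3))
        fb = s2 ^ s3             # XOR of the tapped stages feeds the first flip-flop
        s1, s2, s3 = fb, s1, s2  # synchronous shift
    return states

for st in lfsr_states():
    print(st)
# Walks through all 7 non-zero states and returns to (1, 1, 1)
```

Running it shows the same sequence as the lecture: 111, 011, 001, 100, 010, 101, 110, and back to 111 on the eighth clock.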
Given these properties, we want to generate as long a sequence as possible, because that is good for fault coverage; a long sequence means the pattern repeats only after a large number of test vectors. But we really do not need the exhaustive 2^n sequence, only a long one. In general, you have a feedback path from one end of the shift register to the other, with n flip-flops in the register, and you can tap inputs from some intermediate stages. You can represent this structure by a polynomial or by a matrix. As a polynomial, it is the characteristic polynomial 1 + h1·x + h2·x^2 + ... + h(n-1)·x^(n-1) + x^n. As a matrix, the next-state values at time t+1 are obtained from the current flip-flop values x0 to x(n-1) by multiplying with a matrix called its companion matrix. This matrix has certain properties: in the first column, the last entry is 1, corresponding to the feedback connection across the register; the entries h1 to h(n-1) are 0 or 1 depending on whether that stage is tapped; and the rest of the matrix is an identity sub-matrix. So x(t+1) = Ts · x(t), where Ts is the companion matrix. This works over Galois field theory, where multiplication by x corresponds to a shift in the LFSR, and addition is simply the XOR (modulo-2) operation. With the companion matrix properties I just explained, this gives you a near-exhaustive sequence: other than the all-0s state, it may produce all the states.
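The matrix form x(t+1) = Ts · x(t) (mod 2) can be checked directly. This sketch uses one common convention for the 3-stage companion matrix of 1 + x + x^3, with the tap coefficients in the first row; other texts write the transpose:

```python
def step(T, x):
    """One clock of the LFSR as a matrix-vector product over GF(2)."""
    return [sum(t * xi for t, xi in zip(row, x)) % 2 for row in T]

T_s = [[0, 1, 1],   # s1' = s2 XOR s3  (tap row)
       [1, 0, 0],   # s2' = s1         (identity sub-matrix: plain shift)
       [0, 1, 0]]   # s3' = s2

x = [1, 1, 1]
for _ in range(7):
    x = step(T_s, x)
print(x)  # [1, 1, 1] again after 7 steps: the maximal-length cycle
```

The matrix reproduces exactly the shift-register walkthrough, which is the point of the companion-matrix representation: the LFSR is a linear system over GF(2).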
That means the cycle length may reach 2^n − 1; it excludes the all-0 state, because once the register reaches all 0s it can never leave that state, and hence that state can never be generated. This is one implementation, the standard LFSR, where the XOR gates sit in the feedback path. In the worst case, if you tap every stage, there can be a chain of n XOR gates between the last flip-flop and the first, which makes the circuit slower. To improve on this, another LFSR structure, the modular LFSR, is proposed, in which the XOR gates are placed in the forward shift path; it generates a sequence with the same properties. If you look at its companion matrix, the companion matrix of the modular LFSR is the transpose of the previous one, and you read the taps from left to right, whereas in the standard LFSR you read them from right to left; that is the difference. The characteristic polynomial remains the same: 1 + h1·x + h2·x^2 + ... + h(n-1)·x^(n-1) + x^n. If you want to achieve a very long sequence, the characteristic polynomial given by this expression should be a primitive polynomial. What are the conditions for a primitive polynomial? One condition is that it must be monic: the coefficient of the x^n term must be 1, so the polynomial always contains the 1 and x^n terms, while the intermediate terms may or may not be present. The other condition is that the characteristic polynomial must divide 1 + x^k (equivalently 1 − x^k in modulo-2 arithmetic), for the appropriate k.
In the previous example, I can write out the characteristic polynomial: we tap after the first stage, so h1 = 1; we do not tap the second stage, so h2 = 0; and the degree is 3. That gives 1 + x + x^3. Let us check whether this is a primitive polynomial; since it generated the longest sequence, of length 7, it should be. The condition says it must be a factor of 1 + x^k (the same as 1 − x^k, since we follow modulo-2 arithmetic), where k = 2^n − 1 and n is the number of flip-flops. Here n is 3, so k is 7, because we are using 3 flip-flops. So the polynomial must be a factor of 1 + x^7, and it must contain the mandatory terms 1 and x^n, here x^3. Let us factorize: 1 + x^7 = (1 + x)(1 + x + x^3)(1 + x^2 + x^3); keep in mind we are using modulo-2 arithmetic, so x + x = 0 because addition is XOR. Now, which factors qualify? The factor (1 + x) does not qualify, because it has no x^3 term; it is disqualified. The factor (1 + x + x^3) contains both 1 and x^3, so it qualifies, and likewise (1 + x^2 + x^3) qualifies. Each of these is a primitive polynomial for an LFSR, and hence each can give you the longest sequence. So I can build a modular LFSR that generates it.
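The "longest sequence" claim can be confirmed by brute force: simulate an LFSR and measure its period. This sketch assumes a Fibonacci LFSR whose feedback XORs the stages listed in `taps`; the tap-to-polynomial labels in the comments follow the convention used earlier, and in any case the reciprocal polynomial yields the same period:

```python
def period(taps, n):
    """Number of clocks before an n-stage LFSR revisits its start state."""
    start = (1,) + (0,) * (n - 1)
    state, count = start, 0
    while True:
        fb = 0
        for t in taps:
            fb ^= state[t - 1]          # XOR the tapped stages
        state = (fb,) + state[:-1]      # shift, feedback into stage 1
        count += 1
        if state == start:
            return count

print(period([2, 3], 3))  # 1 + x + x^3   -> 7 (primitive: maximal length)
print(period([1, 3], 3))  # 1 + x^2 + x^3 -> 7 (its reciprocal, also primitive)
print(period([3], 3))     # 1 + x^3       -> 3 (not primitive: short cycle)
```

The two qualifying factors of 1 + x^7 both give the maximal period 2^3 − 1 = 7, while the non-primitive choice cycles after only 3 states.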
So one LFSR implements 1 + x + x^3, and another implements 1 + x^2 + x^3; the two give different sequences, but the length of each is the same, 7. In this way you can generate a long sequence. So we have discussed how to generate the test sequence. This gives you a nearly exhaustive sequence; as I said, we are not interested in exhaustiveness as such, but in a long sequence, so that our fault coverage is very good, or in other words, so that we quickly achieve very high fault coverage. The curve we were looking for rises steeply toward 100 percent as the test vectors are applied. In practical systems, however, because of random pattern resistant faults, we may not reach very high fault coverage even after running the LFSR for a long time. So what we do is run it up to some reasonable point; the remaining faults are not covered because they are resistant to the pseudo-random patterns, and almost all of them need a unique test. In practice, we generate tests using the LFSR and achieve, say, 90 or 95 percent fault coverage, and after that we do fault simulation. We find out which faults remain, generate tests for those remaining faults using ATPG, and burn those vectors into a ROM on the chip. So first we run the LFSR to exercise its test patterns on the chip, and then we apply the top-up patterns from the read-only memory where the deterministic tests are stored.
Because these top-up patterns are very few in number, we do not need a big ROM. This is how we generate tests in built-in self-test. Now the second question: how do we collect the response, and how do we compare it with the golden response? As we know, we are generating pseudo-random test patterns, which are repeatable, so we can do simulation and we know what the output of the fault-free circuit will be. One way is to store these responses in a read-only memory and compare. But again, if you store everything in ROM, the number of bits needed becomes very large, and that consumes a lot of area. What we can look for instead is a reduction in the volume of that data: rather than storing the raw data, we can compact it and generate a signature. For example, when you go to a bank nowadays, the bank has your photograph on record, but when you write a cheque they do not examine the photograph; they just compare the signature. Signatures are representative: you can store them compactly and compare them quickly. Now, as an example, say 5 million random patterns are generated and there are 200 outputs; then you need about 10^9 response bits, and it is very uneconomical to store a billion bits. Hence we need compaction.
What could go wrong when we generate a signature? Even at the bank, somebody else may sign, and their signature may look the same as yours; there is a small possibility that another person's signature matches yours. In our context this is known as aliasing: due to information loss, the signatures of a good and a bad circuit may match. So when we generate signatures, we have to be careful that aliasing is as small as possible, or ideally absent. Here we define two terms: compaction and compression. Compaction is a drastic reduction in the number of bits of the original circuit response, in which some information is lost; it is an irreversible process, and from the compacted response we cannot get back the original response. The other term, which people are well familiar with, is compression, as in zip: you compress the data and can later expand it back to exactly the original data; compression is lossless. Signature analysis is another term we define: compact the good-machine response into a good-machine signature, and compare the actual signature generated during testing with that good-machine signature. So we compact the good-machine response and store the signature somewhere, which needs much less memory space; then we generate the signature on the fly from the circuit under test and match it against the stored one.
One approach is to use the transition count of the output bit stream. For example, suppose we apply a bit sequence to a circuit over 5 cycles and record the responses when the circuit is fault-free and when it is faulty. The two responses have different numbers of transitions. On output x1, say, the fault-free stream has only one transition, from 0 to 1, whereas the faulty stream has three transitions: 0 to 1, 1 to 0, and 0 to 1 again. On the second output, however, the fault-free stream has one transition and the faulty stream also has one transition. So on output x1 the transition counts of the fault-free and faulty machines are distinguishable, while on output x2 they are not. If you happened to observe only output x2, then using the transition count you could not tell whether the machine is faulty or fault-free; that is aliasing. If you observe output x1, you can distinguish them. What we want is for aliasing to be as small as possible. I do not want to go into the detailed aliasing analysis here, but the aliasing probability behaves like this: when the number of transitions in the stream is very low or very high, near 0 or near the stream length, only a few streams share that count, so aliasing is rare; but when the number of transitions is somewhere in the middle range, the aliasing probability is very high.
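The transition-count compactor is simple enough to sketch directly; the two streams below are illustrative, chosen to show both detection and aliasing:

```python
def transition_count(bits):
    """Signature = number of 0->1 and 1->0 transitions in the stream."""
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

good   = [0, 0, 0, 1, 1]   # assumed fault-free stream on one output
faulty = [0, 1, 0, 0, 1]   # assumed faulty stream on the same output
print(transition_count(good), transition_count(faulty))  # 1 3 -> fault detected

# Aliasing: two different streams with equal counts are indistinguishable.
print(transition_count([0, 1, 1, 1, 1]) == transition_count([1, 1, 1, 1, 0]))  # True
```

The signature is just one small integer per output, which is why the method is cheap, and also why so much information is thrown away.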
Transition counting is easy to implement: you XOR the previous bit with the current bit using one XOR gate and count the result with a counter. So the transition count is one of the easy mechanisms, but its aliasing probability is high. Another simple solution is to compact all the outputs through a single XOR gate. The problem with the XOR gate is this: if the fault effect propagates to exactly one of its inputs, you always see the fault effect at the output; but if the fault effect propagates to two inputs, they mask each other, the generated signature equals the good-machine signature, and the fault goes unnoticed. If the effect propagates to an odd number of inputs, you get a distinguishable signature and can detect it; if it propagates to an even number, you cannot. If you work out the aliasing probability, it comes to roughly one half, 50 percent, which is far too high. So most of the time this is not a very usable method either, and we have to devise something better. The other approach is to use cyclic redundancy codes, which have been in use in communication systems for a long time.
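The odd/even masking behaviour of the single-XOR compactor can be seen in a few lines; this sketch folds a whole output stream into one parity bit, with made-up streams:

```python
from functools import reduce

def parity_signature(bits):
    """Fold a bit stream into a single parity bit via XOR."""
    return reduce(lambda a, b: a ^ b, bits)

good       = [1, 0, 1, 1, 0]   # assumed fault-free stream
one_error  = [1, 1, 1, 1, 0]   # fault effect at one position: parity flips
two_errors = [1, 1, 0, 1, 0]   # fault effect at two positions: parity unchanged

print(parity_signature(good) != parity_signature(one_error))   # True: detected
print(parity_signature(good) != parity_signature(two_errors))  # False: aliased
```

Any odd number of corrupted bits flips the parity and is caught; any even number cancels out, which is exactly the 50 percent aliasing the lecture mentions.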
What do they do? When transmitting a bit stream, the sender generates some redundant bits from the data bits and sends them along with the data; at the receiving end you regenerate them, and if there is a match you can say the transmission was good, otherwise there was an error in the transmission. We can use the same concept here: we generate the CRC code from the given bit stream. We treat the data bits from the circuit's primary output as the coefficients, in decreasing order, of a polynomial, and compact them using an LFSR. So the compacting circuit is again an LFSR-like circuit, a linear feedback shift register, and when you scan the stream through it, it generates the redundant code. Say we have an LFSR whose characteristic polynomial is x^5 + x^3 + x + 1, and through an XOR gate we combine in the input bit stream; initially we set the feedback shift register to all 0s. Keep in mind that an ordinary LFSR has no input bit stream, but this one accepts an input. Let us look at what it generates. It starts from the all-0 state, receives the bit stream 1 0 0 0 1 0 1 0, and steps according to the LFSR's characteristic polynomial. At the end, after these 8 bits, the contents of the flip-flops will be 1 0 1 1 0; we have 5 flip-flops, and this is the data left in them.
I can write this content as a polynomial: 1·x^0 + 0·x^1 + 1·x^2 + 1·x^3 + 0·x^4, which comes to 1 + x^2 + x^3. In the same way I can write a polynomial for the input bit stream 1 0 0 0 1 0 1 0, namely x^7 + x^3 + x. The same result can be obtained by a division operation: the input-stream polynomial x^7 + x^3 + x is divided by the characteristic polynomial of the LFSR, x^5 + x^3 + x + 1, and if you use modulo-2 arithmetic and carry out the division, you get the remainder x^3 + x^2 + 1, which is exactly the content left in the flip-flops. So dividing the stream by the characteristic polynomial gives you the remainder as the signature. If this remainder matches the correct response obtained from logic simulation, your circuit is good; otherwise the circuit may be bad. This may still result in some aliasing, but the aliasing is much better now, because you have multiple flip-flops. Let us analyze it. Say the bit stream is n bits long and there are k flip-flops. There are 2^k possible signatures and 2^n possible input streams, so 2^n / 2^k = 2^(n−k) streams map to the same signature. Of the 2^(n−k) streams sharing the good signature, one is the good stream and the others are aliased streams.
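The lecture's worked example can be verified as plain GF(2) polynomial division, with polynomials held as bit masks (bit i is the coefficient of x^i); this is the arithmetic the signature register performs in hardware:

```python
def gf2_remainder(dividend, divisor):
    """Remainder of GF(2) polynomial division; subtraction is XOR."""
    while dividend.bit_length() >= divisor.bit_length():
        shift = dividend.bit_length() - divisor.bit_length()
        dividend ^= divisor << shift   # cancel the leading term
    return dividend

stream   = 0b10001010   # x^7 + x^3 + x   (bit stream 1 0 0 0 1 0 1 0, MSB first)
charpoly = 0b101011     # x^5 + x^3 + x + 1

r = gf2_remainder(stream, charpoly)
print(bin(r))  # 0b1101 -> x^3 + x^2 + 1, matching the flip-flop contents 1 0 1 1 0
```

The remainder x^3 + x^2 + 1 is the same polynomial, 1 + x^2 + x^3, that was read out of the five flip-flops above.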
So, out of the 2^n total patterns, one is the good pattern that compacts correctly, and 2^(n−k) − 1 bad patterns alias to the same signature. The probability of aliasing, or probability of masking, is therefore (2^(n−k) − 1)/(2^n − 1), and if n is much greater than k this is approximately equal to 2^(−k). This gives us a very good observation: if you use an LFSR, the masking probability does not depend on the number of response bits; it depends only on the number of flip-flops in the register. So, if you want to reduce the masking probability, you can put more flip-flops in the register. Now, what we have discussed so far means that for every primary output you need one LFSR, or one signature register, typically known as a single-input signature register, SISR. So, for one circuit output you have one single-input signature register, for another output you have another one, and so on; you receive n bits in n cycles and compact them into k bits, where k is the number of flip-flops in the SISR. With k flip-flops here, k flip-flops there, and k flip-flops for every output, the hardware overhead is too much. But if you look at the SISR, it is a simple linear feedback shift register; it is a linear system, and a linear system obeys the principle of superposition. Since it does, you can combine all these SISRs into one and form a multiple-input signature register, in which one input is XORed in at the first stage, another input at the next stage, another at the next, and so on.
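The 2^(n−k) − 1 aliasing count can be verified by brute force for a small case. The sketch below redefines the signature computation for the example register (k = 5 flip-flops, n = 8-bit streams) and counts how many wrong streams share the good stream's signature; the expected count is 2^(8−5) − 1 = 7.

```python
# Brute-force check of the masking count: for n-bit streams compacted
# into k flip-flops, exactly 2^(n-k) streams share each signature,
# so 2^(n-k) - 1 erroneous streams alias to the good one.

POLY, K, N = 0b101011, 5, 8     # x^5 + x^3 + x + 1; k = 5; n = 8

def signature(word):
    state = 0
    for i in range(N - 1, -1, -1):       # feed the MSB of the stream first
        state = (state << 1) | ((word >> i) & 1)
        if state & (1 << K):
            state ^= POLY
    return state

good = 0b10001010                        # the good response stream
good_sig = signature(good)
aliases = sum(1 for w in range(1 << N)
              if w != good and signature(w) == good_sig)
print(aliases)                           # -> 7, i.e. 2^(n-k) - 1
```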
So, if you have k flip-flops, you can multiplex in the inputs from k primary outputs of the circuit. Because this is a linear system, with C the companion matrix of the LFSR, you can write the next-state equation x(t+1) = C·x(t) ⊕ d(t), where d(t) = (d0, d1, d2, …) are the input bits arriving at that point in time. The companion matrix remains the same and follows the same properties that we discussed earlier. An LFSR used this way is known as a multiple-input signature register, MISR, and if you use it the masking probability remains the same, approximately 2^(−k), where k is the number of flip-flops in the register. So, this gives you a very compact response and needs very little hardware. Compare this with the transition-count approach that we discussed previously: take a circuit with three inputs, its good response, and the responses under a stuck-at-1 fault at input a, a stuck-at fault at input b, and a stuck-at fault at output f. A faulty response that happens to have the same transition count as the good response cannot be detected by transition counting, whereas the LFSR signature distinguishes it; this is the difference between transition counting and the LFSR. In summary, I can say that built-in self-test is a wonderful approach that gives you the facility to apply a test at any point in time; that means your chip can test itself, and hence we can use it for field test. It also allows at-speed test. This completes built-in self-test.
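A MISR and the superposition property it relies on can be sketched as follows. The register size k = 4 and the feedback polynomial x^4 + x + 1 are assumed example values, not from the lecture; the key point demonstrated is that, starting from the all-0 state, the signature of the XOR of two response sequences equals the XOR of their signatures, which is what justifies merging the per-output SISRs into one MISR.

```python
# MISR sketch: each cycle, k parallel response bits are XORed into the
# k stages of an LFSR. Being a linear system over GF(2), it obeys
# superposition: sig(A xor B) = sig(A) xor sig(B) from the zero state.
import random

K = 4
TAPS = {0, 1}   # internal-XOR taps for x^4 + x + 1 (assumed example polynomial)

def misr_step(state, d):
    fb = state[K - 1]                        # feedback from the last stage
    nxt = []
    for i in range(K):
        bit = (state[i - 1] if i > 0 else 0) ^ d[i]   # shift in + response bit
        if i in TAPS:
            bit ^= fb                        # feedback tap of the polynomial
        nxt.append(bit)
    return nxt

def misr_signature(vectors):
    state = [0] * K
    for d in vectors:
        state = misr_step(state, d)
    return state

random.seed(1)
A = [[random.randint(0, 1) for _ in range(K)] for _ in range(20)]
B = [[random.randint(0, 1) for _ in range(K)] for _ in range(20)]
AxB = [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
sA, sB = misr_signature(A), misr_signature(B)
assert misr_signature(AxB) == [x ^ y for x, y in zip(sA, sB)]  # superposition
```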
Now, so far we discussed tests for stuck-at faults in logic. Let us briefly discuss another kind of circuit: memory. Memory is a very special type of circuit in that it does a very specific job: it stores a value, 0 or 1, and it should retain that value until it is rewritten by the system. If you look at the memory structure, it has an array of cells, with a row decoder and a column decoder to enable one particular cell, and you read it using the sense amplifier, or you write onto that particular cell. Now, how complex an algorithm can I afford? One option is the logic-test kind of approach, where you model every fault as stuck-at-0 or stuck-at-1, but we do not need that for memory, because memory performs only restricted operations. Hence we can make a functional fault model for the memory test. Say I have n cells and assume that I do one operation with every cell; then the complexity of the algorithm I run is O(n), that is, n operations. If I run the test at, say, 16 MHz, then a 1 Mb memory takes about 0.06 seconds, a 4 Mb memory about 0.25 seconds, and since you easily have 2 Gb, 4 Gb, 8 Gb, 16 Gb of memory these days, even a 2 Gb memory already takes about 128 seconds on your tester. If the complexity grows to n log n, n^(3/2), or n^2, meaning up to n operations with every cell, the test time may go to several hours or more, and that is impractical.
So, you have to devise a mechanism to test memory whose complexity is O(n): a fixed number of operations with each and every memory cell. As I said, memory performs only a restricted function, so we can make a simplified functional model for it. You place an address on the memory, the address decoder decodes it and enables one of the memory cells, and then you read or write it; if you are reading, you get the stored value out. Now, there can be a fault in the decoder: the decoder may enable two or more cells, or it may not enable any cell at all. A memory cell may be stuck at logic 0 or stuck at logic 1, or there may be coupling between memory cells, meaning that reading or writing one memory cell also affects another memory cell. Looking at the functional fault model, I can categorize these faults into four categories. One is stuck-at faults: any cell may be stuck at logic 1 or stuck at logic 0. Second is transition faults: a cell may fail to make a transition from 0 to 1 or from 1 to 0; for example, if you have stored 1, it may not go to state 0. Third is coupling faults between two cells. And fourth, if there is some specific pattern around your cell, it may invert the value of the cell; these are pattern-sensitive faults. Now, to test these, the simplest approach that was explored is this: you initialize the memory to some particular state. Say you have a 3 × 3 array; you initialize all these bits to 0, and you can go in any order.
So, you are writing 0 to all the memory cells, and you can go in any order of addresses. This is like marching from the first location to the last location. Then what do you do? On the next pass you come back and first read the previously written value. If it is 0, that cell is good in the sense that it can store the value 0, so there is no stuck-at-1 fault there. So, you read the value 0 and then rewrite that cell to 1; you do the same thing with the second cell, the third, the fourth, fifth, sixth, seventh. If any cell is stuck at logic 1, it was not able to hold the 0, and you detect that immediately in this pass. So, in this pass you can detect stuck-at-1 faults, but you are not yet able to detect stuck-at-0 faults. For those, on the third pass you come back and read the values again: if you read 1 everywhere, the cells are correct, whereas a cell stuck at logic 0 was not able to make the transition to 1, and this read detects it. Again you march from the first location to the last. So, in summary: first you initialize the memory to all 0s (or all 1s), in any order of addresses; then, in one order of addresses, first to last or last to first, you read the previously written value, compare it with what was written, and write the inverse value, that is, read 0 and write 1; and when you finish that, the next time you again go through the addresses, either from last to first or first to last, and read.
So, let us say you come from the last location to the first and read all these values; this detects all the remaining stuck-at faults. Now, how many operations do you do? In the initialization you do n operations over all the cells; in the second march you do 2 operations with every cell, which is 2n operations; and in the final march n operations. So, in total you do 4n operations. That means, if 1 microsecond is the memory access time and you have, say, 9 memory cells, you need 4 × 9 × 1 microseconds as the test time for this memory. This is known as a March test, and in this notation we call the three passes March elements M0, M1, and M2. Various March tests have been proposed; you can go through them and see what they detect. The very basic test, known as MATS, which I just discussed, has complexity 4n and can detect all stuck-at faults; it detects some address decoder faults, but it may not detect all of them. MATS+ is augmented with a w0 in the third March element, so you write 0 back, and with one march going in the forward direction and the other in the reverse direction, which makes sure you also detect all the address-decoder-related faults. If you look at the complexities of the various algorithms, the test times follow from the example I have already given, so I will skip that. Now, if you look at memory testing, we are following a regular pattern, and whenever you follow a regular pattern it is always easy to implement it as built-in self-test.
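The March procedure above can be sketched as code. This is an illustrative model of MATS+ ({⇕(w0); ⇑(r0,w1); ⇓(r1,w0)}) run against a toy memory with injectable stuck-at cell faults; the `Memory` class and fault-injection scheme are constructs for this sketch, not part of the lecture.

```python
# MATS+ March test sketch over a simple memory model with injectable
# stuck-at cell faults: { up/down (w0); up (r0,w1); down (r1,w0) }.

class Memory:
    def __init__(self, n, stuck=None):
        self.cells = [0] * n
        self.stuck = stuck or {}          # address -> value the cell is stuck at
    def write(self, addr, v):
        self.cells[addr] = self.stuck.get(addr, v)
    def read(self, addr):
        return self.stuck.get(addr, self.cells[addr])

def mats_plus(mem, n):
    """Return True if the memory passes, False if a fault is detected."""
    for a in range(n):                    # M0: write 0 everywhere (any order)
        mem.write(a, 0)
    for a in range(n):                    # M1, ascending: read 0, write 1
        if mem.read(a) != 0:
            return False                  # stuck-at-1 cell detected
        mem.write(a, 1)
    for a in reversed(range(n)):          # M2, descending: read 1, write 0
        if mem.read(a) != 1:
            return False                  # stuck-at-0 cell detected
        mem.write(a, 0)
    return True

N = 9                                     # the 3 x 3 array from the lecture
assert mats_plus(Memory(N), N) is True            # fault-free memory passes
assert mats_plus(Memory(N, {4: 0}), N) is False   # stuck-at-0 cell caught
assert mats_plus(Memory(N, {7: 1}), N) is False   # stuck-at-1 cell caught
```

Counting the operations in the three elements gives n + 2n + 2n = 5n for MATS+, against 4n for the plain MATS sequence described above.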
So, what are we doing? We are generating addresses from the first to the last or from the last to the first, and writing a single bit or reading a single bit at each address. For that you need a pattern generator, and this pattern generator is nothing but an address generator. So, you need a counter, or an LFSR, that can generate the ascending addresses and the descending addresses, and then you write the value to that particular memory cell. What is important here for detecting a fault is that you go in a certain order of addresses, and when you come back you come back in exactly the reverse order of addresses. Memory is one of the circuits most favoured by built-in self-test, and practically all memories now come with built-in self-test; that is why, when you power up your laptop, you may have observed that it starts by testing the memory. So, you can always make sure that your memory is good. Sometimes we also have some additional redundant rows and columns; once you identify a bad row or a bad column, you can replace it with a redundant row or column, and hence you can fully make use of your memory. Thank you very much. Good day.
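The address generator's contract can be stated in a few lines. This sketch uses a plain binary counter (the lecture also mentions an LFSR as the cheaper hardware alternative); the one property the March elements rely on is that the descending pass visits the addresses in the exact reverse of the ascending order.

```python
# Memory-BIST address generator sketch: a binary up/down counter that
# yields the ascending address order and its exact reverse, which is
# what the ascending and descending March elements require.

def addresses(n, ascending=True):
    """Yield the n cell addresses in ascending or exact-reverse order."""
    rng = range(n) if ascending else range(n - 1, -1, -1)
    yield from rng

up = list(addresses(8))
down = list(addresses(8, ascending=False))
print(up)                                # -> [0, 1, 2, 3, 4, 5, 6, 7]
assert down == list(reversed(up))        # exact reverse order, as required
```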