Hello, welcome to the lecture series of the advanced VLSI design course. I am Virendra Singh from the Electrical Engineering Department of IIT Bombay. I will take you through design verification challenges in these couple of lectures. Today is the introductory lecture; I will discuss the challenges of VLSI design verification, and in due course I will take you through the various challenges, the techniques to handle them, and open research problems. As Professor Chandorkar mentioned very early in this course, VLSI circuits have gone through various phases. If you look at one of the examples, the microprocessor, it started in the early 70s with the 4004 and has taken a long journey up to the current, say, Intel i7 processor. The Intel 4004 was fabricated with 2300 transistors, whereas now we have billions of transistors on the same chip. So you can see the growth in transistor count; hence the complexity of design is increasing dramatically. The number of transistors is increasing, and the number of inputs and outputs is also increasing. Now the big challenge is how to verify the correctness of the design. If you look at the VLSI design realization flow, it starts from the customer's need. The need may be, for example, that one customer wants to build a chip for a microwave oven controller. He can give his requirement like: if you put milk in the microwave oven, it should operate for 10 minutes and run at 300 watts; if you want to cook rice, then it has to run for, say, 15 minutes at 350 watts, and so on. From this the design engineer has to figure out the various requirements, because the need is vague in nature. Based on the requirements, the design engineer has to write specifications which are more formal than the English-like language.
Once you write these formal specifications; or your requirement may come directly from an algorithm. For example, if you want to implement an image processing algorithm, the algorithm is known and your specification may be the C code. From that you have to synthesize your circuit, and it has to go through various steps of synthesis, which are fairly automatic. Once you synthesize the circuit, you do the place and route, and finally you have the GDSII; you send that to the fab, and the fab facility will fabricate your chip and give you the fabricated chip. Now you have to test each and every manufactured chip; for that you need to develop test vectors that can test your chip in reasonable time. So these are the various phases. Now, when you are synthesizing your circuit, you have the specification as your golden reference model, and from the specification you go to the RT-level transformation, from RTL to the gate-level netlist, from the gate-level netlist to the transistor-level implementation, and then you do the place and route and finally the layout. All these transformations have to respect the given specification. So what we want is that at every level we validate with respect to the laid-down specifications. If you zoom into this process a little bit, you can see that from the specification you generate the RTL design, which you write in VHDL or Verilog. From there you synthesize a gate-level netlist and optimize for various parameters; the parameters can be area, power, performance, or testability. Once you do that, you have to insert the design-for-test points and augment your circuit in order to make it easier to test.
After that you have to insert the I/O, do the place and route, and then the clock tree synthesis and routing; and if after routing you are still not able to meet your requirements, you have to issue an ECO, that is, an engineering change order. Now in this process the bigger challenge is that in all these transformations you have to make sure they respect the specifications. How do we do that? Let us first go through a couple of definitions that we use. The first is design synthesis. The design synthesis process is defined as: for a given I/O function, the development of a procedure to manufacture a device using known materials and processes. Verification is defined as a predictive analysis to ensure that the synthesized design, when it is manufactured, will perform the given input-output function. Test is defined as a manufacturing step that ensures that the physical device manufactured from the synthesized design has no manufacturing defect. So design synthesis essentially tells you how to obtain a given I/O functionality, because the customer is concerned about the I/O functionality; he is not concerned about how you implement it, whether you design a sequential circuit or a combinational circuit, whether you implement it using CMOS, NMOS, or TTL. The customer wants the given input-output functionality. Once you have done that, you have to analyze whether your synthesized design respects the specifications that were laid down for the synthesis. That is why verification is a predictive analysis that can ensure the correctness of the behavior.
Now look at the complexity of the design. As I mentioned earlier, current designs have billions of transistors, and that kind of design we cannot design as a flat circuit; we have to have hierarchy. That means we have to go through various steps, from the system-level design to the algorithm. For example, you have a system that can implement various algorithms: if you are processing images, there are various algorithms like segmentation, tracking, and so on. Now, for a given algorithm you have to write the RTL; from RTL you generate the gate-level netlist; from there you have the transistor-level implementation; then place and route; and finally you have the layout. If you look at the complexity, this is a pyramidal structure. At the system level the complexity may be a few lines of C code. If you go to RTL, you will have a few hundred or a few thousand lines of Verilog or VHDL code; at the gate-level netlist you may have several million gates; at the transistor-level implementation you may have hundreds of millions of transistors, and so on. So as you go down, the complexity increases, and at the same time the accuracy increases. What does this accuracy mean? When we design a circuit, we design for given specifications, and as engineers we want to optimize our circuit for a few parameters, namely area, performance, power, and testability. At every level of abstraction, when you do the RTL design or the gate-level design, you have to estimate how much area it will consume, how much power it will dissipate, what kind of performance you will get, and how easy it is to test. At a higher level of abstraction the estimation is very crude. If you go down, do the layout, and extract the parasitics, then you get more exact values of the power dissipation, area consumption, performance, and all these things.
So as you go down, your accuracy increases, but at the higher levels of abstraction the complexity is smaller and you can handle it in a better fashion. Conventionally, we start from the specification and go down to the implementation. You have the specification, which is a few lines of C code; from the specification you have to architect your system, which may include, say, a processor, a DSP, or a GPU, then some memory, then some glue logic or custom logic. If you go down into each of these blocks, you can write the RTL code for it, and the RTL code will tell you what kind of datapath you will have, what kind of controller you will have, and how the data flow will take place. This gives you the design flow. When you are transforming from the specification to the system architecture to the RTL, you have to make sure that it always respects the specification. If you look at the time consumed by the various activities in the design flow, these activities are listed here. They are based on a time survey, which is a bit old, done in 2000. You will see that most of the time is consumed by design verification, then design creation, then place and route, and then design rule checking, static timing analysis, and so on. If you add these numbers they may go beyond 100 percent, because a couple of activities are done in parallel. These are from the older generation of designs. In current designs, the design verification time has increased a little, from 50 percent to 60-plus percent, and design creation has shrunk a little, from 30 percent to, say, 25 percent, because most of the time we are not designing systems from scratch; we use IPs. The key observation from this slide is that most of the time is spent on design verification, which means this is the critical part of the design flow.
Hence we have to have a very efficient methodology to verify the design if we want to reduce the design time, and it is reported by a couple of companies that if your design cycle slips by 6 months, the total revenue decreases by 30 percent, which is huge. That means time to market is very important, and you have to design, manufacture, and ship the chip as fast as possible. If you look at the revenue, the revenue model goes like this: initially the company gets a very high profit margin, and then slowly it goes down. If there is a schedule slip, the curve shifts, and now only this much revenue can be earned from the product. That is understandable, because initially your new product has an edge in the market; then slowly your competitor will also launch similar functionality in the market, and then you have to reduce the cost, so the profit margin decreases. As I said, the key thing is that we are spending too much time on design verification and we have to reduce that time. How can we reduce it? In the pyramidal structure, as we discussed, the complexity at a higher level of abstraction is lower, whereas the accuracy is poor as well. Now, if you design a system at the system level and you get a bug, then because the complexity is low you need less time to find, locate, and fix that bug. Say at the system level fixing a bug takes 3 minutes. If you go down to the RT level, where the design complexity increases by orders of magnitude, then detecting and fixing a bug takes longer because the design is more complex, so fixing a bug may take 3 days. If you go down further to the transistor-level design, where you have millions of transistors, and you happen to detect a bug, then localizing and fixing that bug may take even longer, say weeks rather than days.
So you can see the kind of time taken at different levels of abstraction. What does this say? It says that we should detect and fix as many bugs as possible at the higher levels of abstraction, preferably at the system level or at the RT level, and that when we go from a higher level of abstraction to a lower level of abstraction, we should not introduce new bugs or errors. This is the key thing in verification: we have to remove as many design bugs as possible at the earlier stages, and we should not introduce new design errors while we are refining the design. When you go from a higher level of abstraction to a lower level of abstraction, you are adding more and more information; that process is known as refinement. Ideally, we should add zero errors while refining, though it is very difficult to make sure that no error is introduced while you refine the system. Formal verification can help you here, and we will discuss the formal techniques that can help in detecting the maximum number of bugs at a higher level of abstraction and in making sure that new errors are not introduced. Now look at how severe this problem is. Let us take a very simple example, which all of you might have gone through at some point: a DVD player, which all of you have used. Say the DVD player has 6 inputs: play, pause, stop, fast forward, rewind, and, if you are not pressing any button, it does nothing. To implement this, we build a small finite state machine that has, say, 5 states: stop, pause, play at normal speed, fast forward at 2x speed, and rewind at 2x speed. The finite state machine can be designed like this: you have these 5 states, stop, play, pause, fast forward, rewind. If you do not press anything, it stays in the same state; otherwise it moves to the new state.
So if it is in the stop state and it gets the play input, it goes to the play state. If you press the pause button, it goes to the pause state; again, if you press the play button, it comes back to the play state. So this is a very small finite state machine. Now let us see how difficult it is to verify this small machine. Say you have a display of 1024 x 768 pixels, and every pixel is represented in true color, so you can encode it in 32 bits. If you want to find the number of discrete states you can have, it would be (2^32)^(1024 x 768). How do we get this? Every pixel can be encoded in 32 bits, so one pixel can be in 2^32 states, and there are 1024 x 768 pixels. So this is the total number of states we can have. Here we assumed that the pixels are dependent on each other, that is, one pixel can affect the others. Now, if I look at the state transitions, a transition can go from any of these many states to any of these many states, so the number of combinations of current state and next state would be the square of this number. As I said, we were assuming that the pixels may have dependence; we can fairly assume instead that the pixels are independent of each other, meaning one pixel does not affect another pixel. If you assume independence of the pixels, then the total number of states would be the number of pixels times the number of possible colors for a pixel times the number of internal states: 1024 x 768 total pixels, each pixel encoded in 2^32 colors, and 5 internal states. So this is the total number of states you can have.
Now, if you press one button it can go to another state, so the total number of possible next states would be the number of pixels times the number of possible colors times the number of possible inputs you may get: 1024 x 768 x 2^32 x 6. These are the possible next states. Now, what would be the total number of transitions? Because you can go from any current state to any next state, the number of current states multiplied by the number of next states gives the total number of state transitions we can have. And we have to verify this system for all the state transitions; the number of state transitions comes to about 3.4 x 10^32, which is a huge number. Assume we can verify 1 million transitions per second using a very fast simulation tool. Even then it would take many trillions of years to verify this design. That means whatever design we create today would be ready to manufacture only after trillions of centuries, which is impractical. So if you want to exhaustively verify your design, you need this many years, and that is impractical. What we want is that the design should be verified in reasonable time, and reasonable time cannot be centuries. What could be the reasonable time? The design cycle time spans somewhere from 6 months to 2 or 3 years, and assume that 60 percent of the time goes to verification; in that case it can be, say, a few months to a year. So your reasonable time is a few months to 2 years, while the exhaustive verification time is trillions of centuries. You have to reduce this time from trillions of centuries to a few months, which is a huge reduction. Keep in mind that we want a similar kind of confidence in our design.
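The counting argument above can be reproduced in a few lines. This is a minimal sketch in Python of the lecture's arithmetic (the lecture itself uses no code), assuming the pixel-independence model just described:

```python
# Back-of-the-envelope arithmetic from the lecture: counting the state
# transitions of the DVD-player FSM when the whole 1024x768 true-color
# display is treated as part of the observable state, assuming pixels
# are independent of each other.

PIXELS = 1024 * 768          # display resolution
COLORS = 2 ** 32             # true color: 32 bits per pixel
INTERNAL_STATES = 5          # stop, play, pause, fast forward, rewind
INPUTS = 6                   # five buttons plus "no button pressed"

current_states = PIXELS * COLORS * INTERNAL_STATES
next_states = PIXELS * COLORS * INPUTS
transitions = current_states * next_states

print(f"transitions ~ {transitions:.2e}")   # ~3.4e32, as on the slide

# Even at 1 million verified transitions per second:
SECONDS_PER_YEAR = 365 * 24 * 3600
years = transitions / 1_000_000 / SECONDS_PER_YEAR
print(f"years at 1M transitions/s ~ {years:.1e}")
```

The result, around 10^19 years, is what makes exhaustive simulation of even this toy machine hopeless.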
That means the kind of confidence we can have by applying exhaustive simulation vectors, we should get from a limited number of simulation vectors. This makes it very complex. I guess this gives you a flavor of how complex the design verification process is and by how many orders of magnitude the verification time has to be cut down. I recall a statement made by a speaker from Intel India at VLSI Design 2011: he said that today it is the design verification or validation engineer who is the most important person in the entire design flow. He said that it would be the design verification engineer who would be able to buy real estate in the metropolitan cities; that means he is the person who would be earning the most money. Because, as I said, our current design process is IP based, we are integrating more and more IPs, which is making designs more complex, and it is very important to visualize the corner cases; I will come to the point of what the corner cases are and how important it is to visualize them. Now let us see how the design verification flow goes. You have a specification, created from the customer's requirements, and you want to implement your system or circuit, and this implementation should respect the specification; that means there should be equivalence between your specification and your implementation. How can I do that? I mentioned that there are a couple of synthesis steps to go from specification to implementation: RT-level synthesis, then gate-level synthesis, then transistor-level synthesis, then place and route, and finally you get the GDSII. Industry wants this to be an automatic process; they want the design process to be push-button, so that you feed in the specification and it produces the design.
Now, one of the ways: whatever transformations you are doing, to get the RT-level implementation from the specification, the gate-level implementation from RTL, and the transistor-level implementation from the gate level, if you can make sure that all these transformations are correct, you do not need to verify each design. If you believe that your automatic implementation or synthesis process is correct, you need to verify only once the implementation of the synthesis tools; this approach is called correct by construction. Then I can completely eliminate the cost of design verification, which consumes a lot of time and, in terms of man hours, contributes 60 to 70 percent. This is a beautiful approach. Now, what are the problems? The problems are as follows. Verifying a piece of software code is an even harder problem than hardware verification. Why? Because the design space in software is much bigger and more complex than in hardware. In hardware you declare a bit vector of exactly the width you need, say bits 0 to 4 or 0 to 5, but in a piece of software code you instantiate one variable, say int i, and this alone opens up a design space of 2^32, because this i can take any of those values. The verification of an entire software tool is extremely difficult, or rather, I can say it is next to impossible. Hence you cannot fully rely on the tools that you are using for synthesis, and you need to verify each and every created design. If you could make sure that the synthesis process is correct, you could completely eliminate the design verification cost, but that option is not available. So what are the options? One option is to simulate your design. What does simulation mean? Simply put, say you implement one XOR gate; there are several possible inputs. If it is a two-input XOR gate, the inputs can be 00, 01, 10, 11.
So you can apply these inputs and see whether you get the correct behavior or not: for 00 you should get 0, for 01 it should be 1, for 10 it should be 1, and for 11 it should be 0. One way is to simulate this exhaustively, but as I said, exhaustive simulation is not possible in general. So what you do is take a subset of the exhaustive input patterns, simulate for that subset, and based on that make a decision whether your design is correct or not. For this you have to build some checkers and drivers; this is called simulation-based verification. Because we are not simulating exhaustively, we cannot have 100 percent confidence in our design, and it is very time consuming. Again, this is not a complete method, because we cannot simulate exhaustively. Then what is the other alternative? The other alternative is to use mathematics, because ultimately you are going to build a circuit that follows Boolean algebra. You have a mathematical expression for the function you want to implement, so if you can write the specification in terms of a mathematical formula (we call these formal specifications; how you write them I will come to later), then you can reason about whether your implementation always respects the specification or not, and this is known as a formal verification technique. In the hardware domain, the simulation-based technique is referred to as simulation-based verification and the formal technique is referred to as formal verification, whereas in the software domain, simulation-based verification is referred to as software testing and formal verification is referred to as software verification. So sometimes these terms are confusing when you talk with the software people.
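The XOR simulation just described can be sketched as code. This is a hypothetical Python stand-in for the testbench idea, with the golden reference model and the design under test as plain functions (the names `xor_spec` and `xor_impl` are illustrative, not from the lecture):

```python
# A minimal sketch of simulation-based verification: drive a design
# under test with input vectors and compare each response against a
# golden reference model.

def xor_spec(x, y):          # golden reference model
    return x ^ y

def xor_impl(x, y):          # design under test (here, a correct XOR)
    return (x | y) & ~(x & y) & 1

def simulate(impl, spec, vectors):
    """Apply each vector and check the response against the reference."""
    for x, y in vectors:
        got, want = impl(x, y), spec(x, y)
        if got != want:
            return (x, y, got, want)   # first failing vector
    return None                        # no mismatch on these vectors

exhaustive = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(simulate(xor_impl, xor_spec, exhaustive))   # None: all 4 vectors match
```

For a two-input gate the exhaustive set has only 4 vectors, so simulation is complete here; the whole difficulty of the method is that real designs do not allow this.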
Let us look a little more at the challenges we have with simulation-based verification and the challenges in front of the formal techniques. With simulation-based verification, as I said, we cannot test exhaustively. This is your total design space, and there are a couple of bugs in your design. You start from some initial state and begin to traverse your design in some way, and you hit a bug; once you hit a bug, you localize it, fix it, and then again start your simulation-based verification, and you may hit another bug, and you keep on doing this. In this way, if you happen to hit a bug you can say that there is a bug; otherwise you just believe that there is no bug, but it gives no guarantee, because you did not explore the entire space. There are a couple of good things about simulation-based verification. One is that it can be applied across the design levels: at the system level, the RT level, the gate level, the transistor level, at any level of abstraction you can apply this technique. But as I said, simulation-based verification cannot exhaustively cover the entire design. If it cannot, then what are the problems? You have to pick a subset of the exhaustive set. Say we were talking about this XOR gate, with four possibilities: 00, 01, 10, 11. Now, in general even picking 75 percent of the cases is still a big number; as I said, for the DVD player exhaustive simulation takes centuries, and 75 percent of the cases would still take centuries, so you cannot pick even that. But assume for the XOR that we pick 75 percent of the cases, say 00, 01, and 10. Now assume that by mistake, in place of XOR, I have written only OR, and an OR gate got implemented.
Now, with the OR gate implemented, I take these three cases, 00, 01, 10. They give me 0, 1, and 1, which is an exact match with the XOR gate. Hence, though I have implemented an OR gate, I will say that my XOR gate is implemented correctly. So these 75 percent of cases are not sufficient. When I pick fewer cases, I have to pick them in such a way that the designs can be distinguished. What can distinguish the OR gate from the XOR gate? The input 11. That means when you are picking fewer cases, you have to at least pick this case. This is the situation with an OR gate; with different kinds of gates you may have different kinds of accidental matching. These are what we call corner cases. So it is very, very important to visualize the corner cases, and that is why people say that in industry we need the most intelligent people in verification, so that they can visualize the corner cases. Another example of a corner case: say you want to implement a FIFO with 8 entries. You store some value in the first entry, then you store some value in the second entry. How do I verify this? I write a value, read it back, and if I get the same value, it is verified. When you are writing to the first location or the second or the third, it behaves in the same way; once you have checked the first location, it behaves the same for the third, fourth, fifth. What happens when it comes to the 8th location? Once you have written to the 8th location, it should generate one specific signal, buffer full. That means after writing to the 8th location, you additionally have to check whether the buffer-full signal is generated or not; that is one of the corner cases, because it is different from the other cases.
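The OR-instead-of-XOR trap can be demonstrated directly. This is a small illustrative sketch (not from the lecture) showing that the 75 percent subset passes the buggy design and only the corner case 11 exposes it:

```python
# The lecture's point in code: three of the four XOR vectors cannot
# distinguish a buggy OR implementation from a correct XOR, so the
# buggy design passes a 75% "coverage" testbench; only the corner
# case (1, 1) exposes the bug.

def xor_spec(x, y):
    return x ^ y

def buggy_impl(x, y):        # the mistake: OR written instead of XOR
    return x | y

def passes(impl, vectors):
    return all(impl(x, y) == xor_spec(x, y) for x, y in vectors)

subset_75 = [(0, 0), (0, 1), (1, 0)]             # 75% of the input space
print(passes(buggy_impl, subset_75))             # True: the bug slips through
print(passes(buggy_impl, subset_75 + [(1, 1)]))  # False: corner case catches it
```

High raw coverage numbers mean little if the distinguishing vectors are the ones left out.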
Now, even if it generates the buffer-full signal, when you try to write one more value, say the 9th value, it should say that the buffer is full and you cannot write; that means it should generate an overflow signal. So when you try to write the 9th value, it should generate the overflow signal; this is another corner case. It may also happen that even though it generates the overflow signal, it still overwrites that location, so the 8th value is overwritten by the 9th value and you have spoiled your earlier written value. You have to check that as well: whether it has overwritten the value or still continues to store the 8th value. These are the corner cases. So when you use simulation-based verification, you have to visualize these kinds of corner cases, and that is very, very critical. As I said, most of the time we pick some random values, simulate for those random values, and then top this up with some of the corner cases. But these values are random, they do not cover all the corner cases, and we cannot visualize all the corner cases ourselves. The other problem with simulation-based verification is the simulation speed. Can you think of how fast or how slow the simulation process is? Whether it uses compiled-code simulation or event-driven simulation, the effective simulation speed is something like 1 to 2 hertz, which is very, very slow. Compare that with the actual device speed: the device can run at fairly high speed, say gigahertz. So your simulation runs 8 to 9 orders of magnitude slower than the actual device, and now you can compute how many vectors you can apply to the design even if you run your simulation for 6 months.
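The three FIFO corner cases above, full after the 8th write, overflow on the 9th, and no silent overwrite, can be captured as checks against a reference model. The `Fifo` class below is a hypothetical Python model written for illustration, not a real design under test:

```python
# A sketch of the FIFO corner cases described above: after the 8th
# write the design must raise "full", the 9th write must raise an
# overflow flag, and the 9th value must not overwrite stored data.

class Fifo:
    def __init__(self, depth=8):
        self.depth = depth
        self.data = []
        self.overflow = False

    @property
    def full(self):
        return len(self.data) == self.depth

    def write(self, value):
        if self.full:
            self.overflow = True   # refuse the write, keep old data
        else:
            self.data.append(value)

f = Fifo(depth=8)
for i in range(8):
    f.write(i)
assert f.full and not f.overflow   # corner case 1: full after 8th write
f.write(99)                        # attempt the 9th write
assert f.overflow                  # corner case 2: overflow is flagged
assert f.data[-1] == 7             # corner case 3: 8th value survives
```

A testbench that only writes and reads back the first few locations would pass a FIFO that fails all three of these checks.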
In order to expedite this, the other method exercised in industry is emulation. In emulation we try to implement our design on a reconfigurable fabric, and the FPGA is one such reconfigurable fabric. So we implement our design on an FPGA. The FPGA can run at a much higher speed, like 100 megahertz or so, which is significantly faster than your simulation on a general-purpose processor. This looks very interesting and very fast: maybe 2 orders of magnitude slower than your real chip, but still 5 to 6 orders of magnitude faster than simulation. Again, exhaustive simulation needs centuries, so a speedup of 5 to 6 orders of magnitude is not sufficient to go exhaustive; it is still based on the corner cases and on how good you are at visualizing them. Now, I comfortably said that you implement the entire design on an FPGA and it can work at, say, 100 to 200 megahertz. But what are the challenges? Can I actually do that? If you have a small design you can, but say you want to verify a current processor, a new processor that you have designed. You cannot implement the entire design on a single FPGA, so you need to partition it. Say I divide the design into two blocks: 100 million gates I implement on this FPGA and 100 million gates on that one. Earlier, when I was fitting everything on one FPGA, I needed I/O pins equal to the I/O pins of the design. When I split it, there are internal signals crossing from one partition to the other, and as you know there are a large number of internal signals; they may approach several hundred thousand or a million. You do not have 100,000 I/O pins. So you need to divide the design further and further, and here you are limited.
So even though you have a large number of reconfigurable resources available on your FPGAs, you cannot make full use of them because of the I/O pins. And now your emulation speed is determined by the I/O speed, which is much slower, some hundreds of kilohertz, making it two to three orders of magnitude slower than a single FPGA. So the real emulation speed you can achieve could be hundreds of kilohertz, which is still 4 to 5 orders of magnitude faster than simulation, but 4 to 5 orders of magnitude slower than the real chip. Again, in emulation you cannot apply exhaustive simulation vectors, so you have to rely on the corner cases that have been visualized. You know the famous Pentium bug reported against Intel: when dividing two numbers, there was an inaccuracy around the 8th or 9th decimal place, and it cost Intel about 500 million dollars. Since I mentioned emulation: Intel heavily uses emulation for design verification, and they use several hundreds of FPGA boards to verify one design. So simulation or emulation alone is not enough; you have to go for formal techniques. How do formal techniques behave? What is the difference between formal techniques and simulation-based verification? In simulation-based verification, you apply some test stimuli or vectors and look at the response; if it is correct, you say the design is correct, otherwise not. You rely heavily on the corner cases, but it is essentially very good for initial design debug.
Whereas with formal methods you need to supply the specification in terms of some mathematical or logical formula, and you have to specify some properties. For example, if you design an arbiter, you may have the property that the arbiter should not give multiple masters access to the common resource at the same time; or if you have designed a traffic light controller, it should not give a green signal to crossing roads simultaneously. These are the properties. You supply these to your formal verifier, and it will tell you whether the properties are valid for the design, or implementation, or not. If a property fails, the verifier gives you a counterexample; a counterexample is a trace of inputs under which you get the wrong result, and that helps you in debugging, or localizing, the bug. So a formal verification technique is equivalent to all-case simulation with respect to a given property. That means there are no corner cases; the result is always correct with respect to the given property. There is a famous quote by E. W. Dijkstra: program testing, which is equivalent to simulation-based verification, can be used to show the presence of bugs, but it can never show their absence. It is only formal verification which can show the absence of bugs with respect to a given property. Now let us look at a formal technique on a simple circuit; all of you know this is the NAND realization of an XOR gate. If it is an XOR gate, then formally, mathematically, I can specify the output as Z = X'Y + XY'. One way to check this is to use simulation-based verification.
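The arbiter property above can be illustrated with a tiny explicit-state model-checking sketch: enumerate every reachable state of a small arbiter model and check the mutual-exclusion property, returning the input trace (the counterexample) if it is ever violated. The fixed-priority arbiter model and all names here are my illustrative assumptions; real model checkers work symbolically on far larger state spaces.

```python
from collections import deque

def step(state, req0, req1):
    """Fixed-priority arbiter model: master 0 wins on a tie.
    State is the pair of grant outputs (g0, g1)."""
    g0 = 1 if req0 else 0
    g1 = 1 if (req1 and not req0) else 0
    return (g0, g1)

def mutual_exclusion(state):
    """Safety property: never grant both masters at the same time."""
    g0, g1 = state
    return not (g0 and g1)

def model_check(initial):
    """Breadth-first search over all reachable states. Returns None if the
    property holds everywhere, otherwise the input trace (counterexample)
    that leads to the violating state."""
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, trace = frontier.popleft()
        if not mutual_exclusion(state):
            return trace  # counterexample: sequence of (req0, req1) inputs
        for req0 in (0, 1):
            for req1 in (0, 1):
                nxt = step(state, req0, req1)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, trace + [(req0, req1)]))
    return None

print(model_check((0, 0)))  # None: mutual exclusion holds in every state
```

Because the search visits every reachable state, a result of None really is an all-case guarantee for this property, which is exactly the contrast with simulation drawn in the lecture.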
So for X and Y there are 4 possible combinations. I can simulate the circuit to get Z for each combination, evaluate the mathematical expression for the same inputs, and if they match in every case I say the design is correct, otherwise it is incorrect. That is simulation-based verification. On the other hand, I can verify it mathematically. The output Z I can express in terms of B and C: Z = B' + C'. Now what is B? B = X' + A', and C = Y' + A', and A = X' + Y'. If I substitute B and C into the expression for Z, and then substitute the expression for A, I get Z = (X' + A')' + (Y' + A')' = XA + YA = X(X' + Y') + Y(X' + Y') = XY' + X'Y. I can rewrite this as X'Y + XY', which is the same as the mathematical specification. So this tells us that the implementation always respects the specification, which is given as X'Y + XY'. All the transformations used here are based on axioms and theorems; that means we can use a mathematical proof of correctness. If you look at formal verification, there are three different approaches being practiced in industry. One is deductive verification, which means proving mathematical theorems. Deductive verification is a semi-automatic verification technique, because it is based on mathematical axioms and theorems; to prove a theorem you have to apply the axioms in some particular order, and the tools are not very intelligent, so you have to intervene in the execution of those tools and guide how they should progress to verify the property quickly. So it is a semi-automatic tool: it is based on axioms and rules to prove correctness, and it is difficult and time consuming.
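The four-NAND XOR example can be checked directly in a few lines of Python: build the implementation gate by gate, following the same intermediate signals A, B, C as in the derivation, and compare it against the specification Z = X'Y + XY' on all four input combinations (this is the all-case simulation the lecture refers to; the code itself is just a sketch).

```python
def nand(a, b):
    """Two-input NAND on bits 0/1."""
    return 1 - (a & b)

def xor_impl(x, y):
    """Four-NAND realization of XOR, mirroring the lecture's circuit."""
    a = nand(x, y)     # A = (XY)' = X' + Y'
    b = nand(x, a)     # B = (XA)' = X' + A'
    c = nand(y, a)     # C = (YA)' = Y' + A'
    return nand(b, c)  # Z = (BC)' = B' + C'

def xor_spec(x, y):
    """Specification: Z = X'Y + XY'."""
    return ((1 - x) & y) | (x & (1 - y))

for x in (0, 1):
    for y in (0, 1):
        assert xor_impl(x, y) == xor_spec(x, y)
print("implementation matches specification on all 4 vectors")
```

For a circuit with two inputs this exhaustive check is trivial; the lecture's point is that the same idea stops scaling long before realistic input widths, which is why the algebraic proof alongside it matters.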
There are other techniques: equivalence checking and model checking. Equivalence checking checks the equivalence of two designs. For example, suppose you verify the design of an adder against the specification, and that design is a ripple carry adder. Then you optimize it for timing and design a carry lookahead adder, and you want to make sure that the two adders are equivalent. Here equivalence checking can be used; it is a fairly automatic technique, it can handle very large designs, and it can be used at various levels of abstraction. The other technique we use is model checking. In model checking we specify the design behaviour using some mathematical formula, we model the implementation as a finite state machine, or finite automaton, and then we prove the correctness of properties which are specified using mathematical formulas. This is based on symbolic algorithms, like BDD-based or SAT-based techniques, and it is a fairly automatic technique. So model checking and equivalence checking are fairly automatic techniques, whereas deductive verification is a semi-automatic technique. In the next couple of lectures I will briefly discuss equivalence checking, model checking and deductive verification. Thank you very much for your patient listening; we will continue with this in the next lecture.
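The adder example above can be made concrete with a small sketch of combinational equivalence checking by exhaustive simulation: a 4-bit ripple carry adder against a 4-bit carry lookahead adder, compared on every input pair. Exhaustive comparison is feasible only at this toy width, and is my stand-in here for the BDD- or SAT-based methods that real equivalence checkers use; the bit-width and function names are illustrative.

```python
WIDTH = 4  # toy bit-width; 2**(2*WIDTH) = 256 input pairs to compare

def ripple_carry_add(a, b):
    """Ripple carry adder: the carry propagates stage by stage."""
    carry, total = 0, 0
    for i in range(WIDTH):
        ai = (a >> i) & 1
        bi = (b >> i) & 1
        total |= (ai ^ bi ^ carry) << i
        carry = (ai & bi) | (carry & (ai ^ bi))
    return total, carry

def carry_lookahead_add(a, b):
    """Carry lookahead adder: carries computed from generate/propagate
    terms, c[i+1] = g[i] + p[i]*c[i], instead of rippling through sums."""
    g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(WIDTH)]
    p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(WIDTH)]
    carry = [0]
    for i in range(WIDTH):
        carry.append(g[i] | (p[i] & carry[i]))
    total = 0
    for i in range(WIDTH):
        total |= (p[i] ^ carry[i]) << i
    return total, carry[WIDTH]

for a in range(1 << WIDTH):
    for b in range(1 << WIDTH):
        assert ripple_carry_add(a, b) == carry_lookahead_add(a, b)
print("the two adders are equivalent on all 256 input pairs")
```

The two functions differ exactly as the lecture describes: same arithmetic behaviour, different internal structure, which is precisely the situation equivalence checking is meant to certify.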