Hello everyone, welcome to the first lecture of unit 4. In unit 3 we went through the basic synthesis flow. Unit 4 requires that you are familiar with unit 3, that you are comfortable working with Design Compiler. We should be very comfortable reading the RTL design, analyzing the errors, analyzing the elaborated design, setting up the constraints and doing a basic compile. Now, many times there are designs that have very difficult goals, that are either very area intensive or very timing critical. Design Compiler does provide us with a lot of tools, commands and options so that we can tackle these problems on such designs. So it is very important, before going into this unit, to go back to unit 3 and clarify whatever doubts we have. The agenda is as follows. All the advanced features for getting lower area or better timing performance are majorly contained in the command called compile_ultra. compile_ultra is a much more advanced form of compile, so whatever we discuss in this session will be majorly about compile_ultra. Most of the things it does are under the hood and not visible to us, but some of the things you can control to suit your needs. So I will introduce compile_ultra, we will look at FSM optimization, we will look at the options of path groups and critical range, what they are and how they are used. Then there is something called advanced critical path resynthesis; we will see how near-critical paths are tackled. We will see auto-ungrouping, then cost priority and compile directives. There are quite a few slides about datapath optimization which will be very interesting. We will relook at the compile_ultra options again, and lastly the slightly more complex topic of register retiming. So, DC Ultra is compile_ultra.
So, DC with the regular compile is called DC Expert. DC Ultra was launched as a feature on top of that. Earlier there was a command called set_ultra_optimization that you had to run to enable it, but now all that functionality comes under the hood of the single command compile_ultra. So compile_ultra is the full strength of DC Ultra. Now, these slides are by Synopsys, so obviously some marketing comes in along the way. What does DC Ultra enable us to do? It enables ultra optimization under the hood, and extensive datapath optimization. Earlier, back in the early 2000s, there was a tool from Synopsys called Module Compiler. Module Compiler was specifically built to tackle complex datapaths. Datapath means all the arithmetic operations like addition, subtraction, multiplication, shifting, comparators and so on. Now all the functionality of Module Compiler is brought under compile_ultra. So it identifies arithmetic operations and provides a common engine to optimize the datapath. The optimization does not end at synthesis; the back-end tools also do some optimizations based on placement and routing data. Then there are some options we will see for controlling how synthesis optimizes; it also allows us to control the cost-function priority. compile_ultra also gives us the option to automatically ungroup modules based on either area or timing. Then let us see the DesignWare Foundation library. The latest DesignWare Foundation libraries have support for all these things: support for DSP datapath components, and microcontroller, memory and verification IPs of a similar purpose.
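As a sketch of what the lecture describes, the older enabling step versus the current single command would look like this at the dc_shell prompt (the legacy command name is as given on the slides and may differ by DC version):

```tcl
# Legacy flow (older DC versions): enable DC Ultra features first,
# then run a high-effort compile
# set_ultra_optimization true
# compile -map_effort high

# Current flow: one command brings all DC Ultra features under the hood
compile_ultra
```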
So, it has a large collection of adders, subtractors and multipliers with various implementations like carry-select and carry-lookahead. For multipliers it has both Booth-coded and non-Booth-coded implementations. The implementation codes that you see here, codes like rpl or clf, just note them down; they will be visible to you as part of the reports. Now, how do we make sure that the DesignWare library is read? We have to set the variable synthetic_library to the DesignWare Foundation library, and we have to append this to the link_library. You could set this in your .synopsys_dc.setup. I believe in the latest versions the DesignWare library is loaded automatically, so you do not have to do anything like this now; you can try it out in the lab. Then there are some marketing slides here which tell what the new DesignWare functions are; that is specific to a particular version, and these slides are dated a few years back, so these are no longer new features — these are things already available to you. I believe whatever DesignWare version you are using in the lab will have many more DesignWare functions over and above this. To know more about this you can go to the Synopsys documentation. So it has a lot more arithmetic functions and multipliers, absolute value, and there is a graphics alpha blender which is used for video applications, and so on. Video applications typically require a lot of datapath functions, and DesignWare has support for a lot of these. Let us look at FSM handling in the compiler now. Design Compiler by default has features for automatic FSM extraction. Extraction means that from the elaborated design, DC knows which flops are part of the finite state machine.
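A minimal sketch of the library setup described here, as it would appear in a .synopsys_dc.setup file or at the dc_shell prompt (the exact .sldb file name depends on your installation):

```tcl
# Point DC at the DesignWare Foundation synthetic library
set synthetic_library "dw_foundation.sldb"

# The link library must also see the synthetic library,
# in addition to the target technology library
set link_library [concat "*" $target_library $synthetic_library]
```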
And then, there is a variable called fsm_auto_inferring; you can verify its default value by doing a printvar on this variable. Usually the FSM flow is already enabled, and you can just do a compile and it will do the job; we do not have to do anything specific. However, there is something called state minimization. What it does is that if some states are unused, then DC has the capability of removing and optimizing away those states. This feature is not enabled by default. The reason is that state minimization is a complex problem, and it interacts with formal verification. Formal verification is a method by which we can verify that the gate-level netlist we have is functionally equivalent to the RTL. Now, it is not there to verify whether the tool is correct or not; the tool does its job based on our commands and constraints. It may happen that a mistake occurred while issuing a command to the tool. For example, I could force a port to logic 0 while compiling, by mistake. DC will then optimize away the logic connected to that port. But when I run Formality, when I do an equivalence check, I find that my RTL and my netlist are not matching. Why? Because by mistake I forced a constant 0 on one particular port. So formal verification is a very sophisticated, very good method to verify that the RTL matches the netlist. You can even compare netlist versus netlist, or RTL versus RTL. Formal means that it is a static approach: it is not based on any testbench, it is not a simulation. It just compares the truth tables. It matches logic cones in both the reference and the implementation design, and it has things called compare points; at each of those compare points it will verify whether the logic cone is equivalent or different. So, if we enable state minimization, then there are problems we can encounter in formal verification.
Because formal verification does not cope well with state minimization. So, if you are using formal verification, then you have to be very, very careful when you enable this; it might lead to some problems. That is why it is turned off by default. Design Compiler has some Formality support: it can write out side files which Formality, again a tool from Synopsys, can read to understand what the synthesis tool has done. There is a variable called fsm_export_formality_state_info for this. Again, it is recommended here that since formal methods do not support state minimization, we should use FSM minimization with great care. So, the recommendation: there is nothing special you need to do for FSM recognition, it will be automatically done under the hood. However, you can enable the variable for state minimization, which I would recommend not to use as a first step. Whatever methods we have discussed here are used for very, very special cases. If you are not meeting a certain constraint on a particular small part of the design, you can employ these techniques. Now, let us look at creating path groups. There is a mechanism by which we can focus optimization effort on certain paths. Usually what Design Compiler will do is break all timing paths into groups. The group is defined by the capture clock: the clock that captures the data — the name of that clock is the group name. If, let us say, there is no capture clock, then the path will go into a default path group. I have discussed this in the lab section also. So, all paths not associated with a clock are in a default path group.
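A sketch of the FSM-related variables mentioned in this part of the lecture (confirm the exact variable names against the man pages of your DC version):

```tcl
# Check whether automatic FSM inference is enabled
printvar fsm_auto_inferring

# Enable state minimization only with care -- per the lecture,
# it can break equivalence checking in Formality
set fsm_enable_state_minimization true

# Ask DC to export FSM state information for Formality to read
set fsm_export_formality_state_info true
```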
Now, if a design has complex clocking or complex timing requirements or complex constraints, in any of these cases one can create path groups to focus DC on specific critical paths in the design. By default, let us say you have five clocks; DC will create five clock groups, plus one default group for paths not coming under any of these five clock groups. Now, it will try to meet the timing requirement for each group; it will try to make the WNS 0 for each clock group. And if for a particular path group it is not able to meet timing, that does not stop the effort on the other groups: even if one group has a violation that DC cannot fix, it will still optimize the other groups. So, by default — and this is the statement to remember — Design Compiler works only on the worst violating path in each path group. The optimization can be controlled by creating and prioritizing path groups, which affect only the maximum-delay cost function. Beyond this, you can create a special group: let us say you know that one path is very critical in your design. You can create a separate group just for that special path, even though DC would otherwise place it into a path group based on the capture clock, and give it any name. Now, even if you do not set a higher priority on it, DC will try to make the WNS of this path group zero; so first try it without increasing the priority. The other option is that you can set path-group priority by assigning weights to each group. The default weight is 1.0; the weights can range from 0 to 100. This is one example: to indicate that the path from input IN3 to flip-flop FF1 is the highest-priority path, we can use this kind of command — group_path with a name (it can be any string), from IN3 to FF1/D.
This is the path we want to optimize with higher priority, and we have increased its weight. So this is the way you can pick and choose paths and tell DC to work more on some groups and less on others. There is a very good example in lab 4 of Design Compiler where I have discussed what I have done with this. First I put all the paths — starting from inputs, ending at outputs, and register-to-register — into one default group, and I have compared those timing reports with the case where I have three separate groups for the input-to-register, register-to-register and register-to-output paths. Please go through the example in lab 4; it is a very good example to understand how group_path works and how DC optimizes the paths. Okay, now another essential piece: optimizing near-critical paths. By default, in a particular path group, if DC finds that the WNS is zero it will stop the delay optimization process. What does that mean? It will proceed to area recovery; it will not do any more delay optimization because it has already met the goal. Now, let us say you do not want DC to stop at WNS = 0; let us say you want DC to also work on paths whose slack is within, say, 125 picoseconds of the worst path — it can be a special requirement for a particular group. For that there is a concept called critical range. The critical range is a parameter that changes the maximum-delay cost function: when we add a critical range to a path group, we change the cost function from the WNS alone to the total slack of all paths within the critical range. By default the target is for the WNS of each path group to be 0, but by applying the critical range we are changing the maximum-delay cost function so that DC will optimize all paths within this critical range of the worst path.
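A sketch of the group_path example from the slide (the port and flop names IN3 and FF1 are from the lecture; the weight value here is illustrative, within the documented 0-to-100 range):

```tcl
# Put the path from input IN3 to the D pin of flop FF1 into its own
# group, with a higher weight than the default of 1.0
group_path -name my_critical -from IN3 -to FF1/D -weight 2.0
```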
Obviously, specifying it means that DC has to work more, and therefore it is recommended to use critical range only during the final implementation phase, when you are very sure that you want this to happen. The guideline is that you should not specify a critical range more aggressive than 10 percent of the clock period. The critical range option is available to you as part of the group_path command. So in the earlier example, the -weight option tells DC that a group is of higher priority; further, you can add a -critical_range option there to tell DC up to what slack it should fix paths. Alternatively, instead of group_path you can simply use set_critical_range, which lets you set a critical range on a particular design, and so on; please read the man page for set_critical_range. Now, let us see a concept called advanced critical path resynthesis. Let us say you have compiled a design and found that the design is violating by some amount. In a second pass, what you can tell DC is: do a compile -map_effort high. This is an option to compile; compile_ultra has this advanced critical path resynthesis built in. Usually the map effort of compile is set to medium by default. So if with a medium-effort compile you are meeting your timing goals, you do not need to come here and use this option; but if you are not meeting your timing goals, you can use it. Two of the main strategies it applies are aggressive logic duplication and improved mapping along the critical path. Let us see what this means. This is one example: the green line represents the critical path, which goes through the logic — the blue bubble here — to the critical endpoint.
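A minimal sketch combining the two ways mentioned here to apply a critical range (values illustrative; see the man pages for group_path and set_critical_range):

```tcl
# Option 1: attach a critical range to a named path group,
# along with a higher weight
group_path -name my_critical -from IN3 -to FF1/D \
           -weight 2.0 -critical_range 0.125

# Option 2: set a critical range on the whole design
set_critical_range 0.125 [current_design]
```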
So, during advanced CPR, what DC can do is break this logic apart and restructure it in such a manner that the critical path becomes smaller in terms of delay. Say A and B are not critical: DC will duplicate the logic; the restructured logic at the top, which is critical, will be optimized for delay, and the copy below, since the path to A and B is not critical, is left alone — sharing will not be applied to it. So the logic is duplicated; the area will increase, but the timing gets solved on the critical path. One more example could be resizing along the critical path: let us say before, these drivers are all 1x and the path is critical; after, the drivers on the critical path are upsized, and the drivers that feed the non-critical fanout may be downsized. But these things only happen for critical paths, not for other paths. What are the critical paths? Critical paths are the paths with negative slack. How do we enable this? We give compile -map_effort high. Now, in many cases compile -map_effort high takes a large amount of time to process. So you should first do a plain compile or compile_ultra; if that works fine, good; if not, you can use compile -map_effort high to see whether, after doing critical path resynthesis, your design meets timing. It needs a DC Ultra license again. Then, compile_ultra supports auto-ungrouping; compile also supports auto-ungrouping. Auto-ungrouping allows DC to merge hierarchy and optimize across it. RTL modules create artificial logical boundaries, and if there are too many small combinational clouds of logic that talk to each other across these logical hierarchies, then DC cannot optimize them effectively. If we let DC ungroup them — remove the hierarchy — then the many small clouds of logic put together can help DC optimize more effectively.
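The two-pass flow described here can be sketched as:

```tcl
# First pass: default (medium) map effort
compile

# If timing is still violated, rerun with high map effort to enable
# advanced critical path resynthesis (DC Ultra license required,
# per the lecture)
compile -map_effort high
```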
It also improves the timing, because now logic can be shared — let us say there are adders which can be shared — so area and timing will both improve; any optimization like this which ungroups the hierarchy helps you in both timing and area. The graphic here shows it: there are three clouds of combinational logic between these two flops, separated by hierarchy boundaries. Rather than optimizing each of them individually, it is better to optimize them combined together; the area and timing of the second circuit will be better than the first. Further, you can extend this: you can combine all three together into one big combinational cloud — you can ungroup that level too. Manual ungrouping you can do by using the command set_ungroup, or you can use ungroup -all, which will remove all hierarchy from the design to make the design flat. You can use the ungroup command to ungroup a specific part of the design. Now, over and above the manual ungrouping that DC allows, there is an option called auto-ungrouping. One form is block-size-based ungrouping: you say that all the blocks that have a cell count smaller than some number should be ungrouped. You can use a variable to control this, compile_auto_ungroup_area_num_cells, whose default is 30. By default, area- and delay-based ungrouping count only the child cells in the immediate hierarchy; the cells in sub-designs are not considered. You can use the variable compile_auto_ungroup_count_leaf_cells to change this behavior. Now, compile has two options: compile -auto_ungroup based on area or on delay.
Before the compile, we can explicitly ungroup a particular module by using the ungroup command; on the other hand, if we want to prevent some block from being ungrouped, we can set a dont_touch on it, or we can do set_ungroup false on that particular design. Here is an example script. For compile_auto_ungroup_area_num_cells, whose default is 30, we have changed the value. In this design, we do not want the ALU to be ungrouped, so on the ALU we do set_ungroup false, and on the control block we do set_dont_touch. But please make sure, if we say dont_touch on the control block, that the control block is already mapped; it should not be in unmapped form, otherwise DC will do nothing with it. The set_ultra_optimization command is no more needed in the later versions; you can just use compile_ultra, or in this case compile -map_effort high -auto_ungroup area. So in this case we are enabling critical path resynthesis, and we are also telling it to ungroup based on area. We can report what was auto-ungrouped, and it looks like this: the control block remains as it is because we set dont_touch on it, the ALU remains as it is because we did set_ungroup false on it, and blocks B and C are ungrouped by auto-ungrouping. You can try it out again in the lab. Now, there is something called critical path auto-ungrouping. Earlier what we saw was mostly area-based ungrouping; critical path ungrouping happens with compile_ultra, where it is on by default. We have seen — in lab 2, I guess — that it happens even if we do not specify anything to compile_ultra, just run it plain.
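The example script walked through here would look roughly like this (block names ALU and CNTRL stand in for the slide's design; the raised threshold value is illustrative):

```tcl
# Raise the area-based auto-ungroup threshold (default is 30 cells)
set compile_auto_ungroup_area_num_cells 100

# Keep the ALU hierarchy: exclude it from ungrouping
set_ungroup [get_designs ALU] false

# Freeze the (already mapped) control block entirely
set_dont_touch [get_cells CNTRL]

# High-effort compile with area-based auto-ungrouping
compile -map_effort high -auto_ungroup area
```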
So, compile_ultra will ungroup the small designs based on the threshold value. To prevent this ungrouping — this is critical path auto-ungrouping — we have to set a variable, or we have to say compile_ultra -no_autoungroup. But obviously the advantages of ungrouping are many: lesser area and better delay; and it is applied only to critical paths, and only where DC flags a delay improvement from auto-ungrouping. Now, we talked about FSM extraction, we talked about critical path resynthesis, we talked about auto-ungrouping. Just to summarize: for FSM extraction you do not need to do anything special, compile_ultra is good enough. Again, for critical path resynthesis you have the option compile -map_effort high, but better than that I would recommend going for compile_ultra. Same thing for auto-ungrouping: compile_ultra supports auto-ungrouping by default, and to prevent it you have to give compile_ultra -no_autoungroup. So for all three things we have seen, using compile_ultra gets everything under the hood and you do not have to worry about the individual knobs; but it is good to know how things were done years back and what options are available to you — it is always good to know the options available to you. Now, let us look at something which is independent of either compile or compile_ultra. The command set_cost_priority lets you change the cost-function priority list according to which DC will optimize. Usually the priority runs like this: connection class — you can read more about it in the man page, I will not explain it here — then multiple port nets, then min fanout and min capacitance; these are the top four, which are usually the highest priority. For a simple design with a single supply, violations of any of these cost functions — connection class, min fanout or min capacitance — are usually encountered less in the case of simple designs and simple logic.
Connection class comes into the picture when we have multi-voltage designs — when you use ports of multiple voltage types and have signals crossing voltage domains. Multiple port nets are again design specific; usually I do not see a lot of min fanout and min capacitance either. The major things are these: max transition, max fanout, max capacitance; min capacitance does not usually apply to our flows, it is mainly used downstream. Then max delay and min delay, max power, max area — this is the default priority, in decreasing order, and out of these the ones marked in blue are user-prioritizable. This is the default ordering, but using the command set_cost_priority we can change the priority, and this is the syntax. No need to do anything if you are using the default; the -default option restores the fixed default order. Giving -delay means max delay gets higher priority than the max design rules — you could actually try and use this: let us say your design has timing violations and it also has DRC violations; you can try it the other way around, set_cost_priority -delay, and see if your design does better. Again, you can specify that min delay has higher priority than max delay but lower priority than the max design rules — a very strange feature to have; I have never had to use it till now, and I am not sure who would want to solve hold-time problems so early in synthesis, but still it is there, so be careful about using it. I would recommend never using it unless there is some special case. And again there is a very customizable part where you can set the priorities among all the customizable checks. This is the example where usually max fanout comes before max delay, but now we have set it so that max delay comes before max fanout. Again, this is something we would use only in very special cases — where probably you want to see whether the design will pass if you prioritize max delay.
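A sketch of the set_cost_priority usages described here (check the man page for the exact option names and list syntax in your DC version):

```tcl
# Restore the built-in default priority order
set_cost_priority -default

# Give max delay (timing) priority over the max design rules
set_cost_priority -delay

# Custom ordering among the user-prioritizable costs:
# here max_delay is placed ahead of max_fanout
set_cost_priority {max_delay max_fanout}
```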
So these are mostly for experiments; please be careful before using this capability, as it will definitely affect your optimization. Now, DC also allows greater control over individual optimization strategies; a typical optimization consists of a lot of them. Some of the strategies are: constant propagation, deletion of unconnected gates, local optimization, and critical path resynthesis. You can individually turn each of these four on or off; by default all are on. The command to do this is called set_compile_directives. One of the things you can turn on or off is critical path resynthesis. Usually critical path resynthesis is set to true, but when you do a plain compile — the default medium effort — the extent of critical path resynthesis is comparatively less, and that is what is controlled here. If you specifically give compile -map_effort high, you are asking DC to do more critical path resynthesis. So both of these controls come into play: the directive says whether it happens at all, and the map effort controls the extent of the optimization. So here, in this case, we can set critical path resynthesis to true or false; by default it is set to true. It works something like this: let us say F is critical, and there is a cloud of logic before F implementing some Boolean function. After critical path resynthesis, these two clouds are combined and F is re-implemented; the function remains the same, but the critical path is grouped and re-implemented like this. A portion of the critical path is grouped: these two clouds are grouped because F is the critical endpoint, and they are re-implemented to improve timing.
This will be enabled during a high-effort compile; you could turn it off if you want, but it is again a question: if you are using compile -map_effort high, that means you want DC to perform critical path resynthesis anyway. Then there is something called constant propagation. Let us say you have forced a signal to 1 or 0; then you have a 1 or 0 on the inputs of some gates. DC will propagate that, and it will optimize away gates based on those values. For example, here one input of the AND gate is tied to 0, so this output becomes 0, this becomes 0, this becomes 1, this becomes 1, and the whole function collapses; DC will do this kind of thing. These are very basic optimizations; ideally it is on, and it should be on, because constant values drive a lot of optimization in a design. So by default it is true, but somebody who might want to keep the gates that are tied to constants in the netlist may turn it to false — and obviously the area will be worse. Next, deletion of unconnected gates: if a gate's output is unconnected, what DC will do is simply remove it. Here, for example, in simulation there would be an X at this point because the value is unknown; since that X is not used further, DC removes all this logic. Again, this is one of the basic optimization techniques; you could turn it off by using set_compile_directives with the delete-unconnected-gates option set to false. The fourth is called local optimization: DC performs optimization within the neighborhood of a given cell to improve timing and area. The option is called local optimization; an example is fanout optimization. For example, this gate here has a large fanout and it is on a critical path.
So what DC could do is clone this gate and put one more copy here, with the same input condition, and drive the critical fanout with one copy; each copy now sees a smaller fanout, so it can probably upsize the one driving the critical path and downsize the other, and so on. These are the optimizations which occur on the gates that are in the neighborhood of the critical path; again, you could turn it off, but I would not recommend it. So, in summary, they all default to true and any one can be turned off. Again, the recommendation is not to play with these; the idea is to first synthesize the design with the defaults. These are the escape routes — when would you use them? You would obviously not do this if the design is meeting your timing goals: if the design is meeting your timing goals and you do not have any special requirement, you will not do this. In what cases, then? Take constant propagation: why would I need to turn it off? Let us say I am not yet sure about a pin, whether it will be tied to 0 or 1, and it is connected to something. If I turn constant propagation off, DC will keep all these gates, and then I can later change the value and see how it behaves; but if I let it stay at the default, all the gates would be removed and I would not have the flexibility to check what happens when this pin is 0 or 1. So again, there can be very special cases where we want to turn one of these compile directives off, and obviously in those cases we are not worried about area, because in all of these cases, when we turn one off, the area will be worse and timing might also be worse.
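A sketch of the four directives listed above, using set_compile_directives as named on the slides (the option names below follow the lecture's description and may differ in your DC version — check the man page; the cell name U1 is hypothetical):

```tcl
# Turn individual optimization strategies off for a specific cell.
# All four directives default to true.
set_compile_directives -constant_propagation false      [get_cells U1]
set_compile_directives -delete_unloaded_gates false     [get_cells U1]
set_compile_directives -local_optimization false        [get_cells U1]
set_compile_directives -critical_path_resynthesis false [get_cells U1]
```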
Now let us look at DC Ultra datapath optimization. compile_ultra, compared to the compile command, delivers a better QoR for designs containing arithmetic operators. It has comprehensive datapath extraction — multiplication, addition, subtraction, comparators — it employs timing-driven resource sharing, and it has support for carry-save implementations. We will see examples of this. And the best thing is that you do not have to do anything: compile_ultra does everything for you; you just have to see what it actually does. After the design is read, analyzed and elaborated, during the stage at which compile_ultra runs, DC Ultra will run datapath extraction, then it will start choosing the DesignWare library components which best suit this datapath, and then it will proceed to logic optimization before going on to mapping. So Ultra datapath optimization is something that runs under the hood during compile_ultra; there is no separate command to do this. For example, let us look at the carry-save adder transformation. In conventional arithmetic, say you are summing several operands; the additions are done in groups of two, and if you draw it on paper there will be carry-propagate adders. You will understand that it is the carry bit that takes the longest amount of time to go from the input to the output. So, let us say in this expression there are three adders — one, two, three; it is the carry that will take the maximum delay. Typically the computation completes at the end of each operator: at the end of each operator, the output represents the true value of that partial sum. Carry-propagate adders are either of a ripple kind or carry-lookahead, so they are either slow or large — those are the traditional adders. What Ultra supports is something called the carry-save adder.
Now, in this case additions are still done in groups, but the outputs here are only partially computed; they are not the final sums. They are partially computed and fed to another carry-save adder, and only in the end do we employ a traditional carry-propagate adder, which does the final addition. So in this scheme, the intermediate points do not represent actual sums; they are partial computations. What this does is improve speed: it makes sure the carry chain does not become the critical path. But the flip side is that the intermediate nets no longer carry the values you might expect; the intermediate point here is not actually A plus B, for example. The final value of Z, though, is exact. Next, an example of datapath transformation where there are two multiplications and an addition: Z = A*B + A*C. What DC will do is try to do the addition first and then a single multiplication: B and C are added, and the result is multiplied by A. From your expression it knows it can do this transformation, and it will do resource sharing to make sure the circuit meets the timing requirement. Then there is repetitive addition optimization: for example, Z = X + Y where, if you expand the expressions, it comes to 2A + 4B; 2 times A means A left-shifted by 1, and 4 times B means B left-shifted by 2. Based on these expressions, DC is able to calculate the optimal number of operators needed. One more example is partial constant optimization. Let us say you have constant bits: for example, B left-shifted by 3 will be padded with 0s, and again C left-shifted by 3 will be padded with 0s. So the logic here in the circle is redundant; you do not need an adder to add two zeros. So this is transformed.
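A minimal Python sketch of these transformations (illustrative only; the operand values are made up). A carry-save adder is a 3:2 compressor: three operands in, a redundant (sum, carry) pair out, with each output bit depending only on the three input bits at that position, so there is no carry chain until the single final carry-propagate add. The same block also checks the two algebraic rewrites from the slide:

```python
def csa(x: int, y: int, z: int):
    """3:2 compressor: the pair (s, c) is a redundant encoding of x+y+z.
    Neither s nor c alone is a meaningful sum -- they are the 'partial
    computations' at the intermediate points."""
    s = x ^ y ^ z                                  # per-bit sum, no carry ripple
    c = ((x & y) | (x & z) | (y & z)) << 1         # per-bit majority = carry
    return s, c

# Sum six operands with a CSA tree and only ONE carry-propagate add at the end.
A, B, C, D, E, F = 3, 7, 11, 19, 23, 42
s1, c1 = csa(A, B, C)          # intermediate: s1 is NOT A+B+C by itself
s2, c2 = csa(D, E, F)
s3, c3 = csa(s1, c1, s2)
s4, c4 = csa(s3, c3, c2)
assert s4 + c4 == A + B + C + D + E + F   # the final CPA recovers the exact value

# Sharing: do the addition first, multiply once.
assert A * (B + C) == A * B + A * C
# Strength reduction: constant multiplies become shifts.
assert (A << 1) + (B << 2) == 2 * A + 4 * B
```

Note how every `csa` level preserves the invariant x+y+z == s+c, which is why deferring the carry propagation to the last step is safe.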
So this logic is removed, the constants are optimized, and the adder tree is reduced: the final result needs one adder instead of four. Here there are 1, 2, 3, 4, four additions taking place, but after partial constant optimization the redundant ones are removed and only one level of adders remains. So obviously, a great area saving is made by DC. Enabling this is simple: a DesignWare license is required for datapath optimization, and the DesignWare library is enabled automatically once you run compile_ultra; it runs under the hood and is on by default. And we saw the relevant command in lab 4 of the Design Compiler unit, unit 3: report_resources, which lists out all the datapath resources inferred by Design Compiler. It will tell you what implementation it chose, what the module name of the datapath block is, what operations are going on in that module, and so on. So the synthesis methodology is: avoid expensive carry propagation. When you do additions of more than two or three operands, it is the carry bit that takes the longest to arrive at the output, right? So avoid expensive carry propagation, use redundant representations like carry-save and partial products, apply high-level arithmetic optimization, and try to apply these techniques to the largest possible datapath. We will come to what "largest possible datapath" means. What DC will try to do is extract the largest possible datapath block from the elaborated RTL. One example can be a sum-of-products type of expression; in this case, any arbitrary sum of products can be implemented as one datapath block. So one datapath block means that, let us say, you write an expression like Z equal to A multiplied by B, that multiplied by C, plus a constant times E, plus F, minus B, plus a constant.
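Here is a toy Python model of the methodology just stated, under the stated assumptions (the operands, the 8-bit width, and the non-Booth partial-product scheme are all made up for illustration, not taken from the slides). An entire sum of products, a*b + c*d + e, is kept in redundant carry-save form; partial products of both multiplies and the standalone addend all feed one reduction tree, and carry propagation happens exactly once, at the very end:

```python
def csa(x: int, y: int, z: int):
    """3:2 compressor: reduce three operands to a redundant (sum, carry) pair."""
    s = x ^ y ^ z
    c = ((x & y) | (x & z) | (y & z)) << 1
    return s, c

def partial_products(a: int, b: int, width: int = 8):
    """Shift-and-add partial products of a*b (simple non-Booth recoding)."""
    return [a << i for i in range(width) if (b >> i) & 1]

def sop(a: int, b: int, c: int, d: int, e: int) -> int:
    """Z = a*b + c*d + e as ONE datapath block: every operand stays in
    redundant form until a single carry-propagate add at the end."""
    terms = partial_products(a, b) + partial_products(c, d) + [e]
    while len(terms) > 2:              # compress the tree 3 -> 2 at a time
        s, cy = csa(terms[0], terms[1], terms[2])
        terms = terms[3:] + [s, cy]
    return sum(terms)                  # the only carry-propagate addition

assert sop(5, 9, 7, 3, 12) == 5 * 9 + 7 * 3 + 12
```

This is the sense in which "the CPA is always the last step": the multiplies never individually resolve their carries before feeding the adds.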
Now, such a sum of products can be very compute-intensive: there are so many multiplications and additions in the expression. DC will determine that all of these can be done inside one module. So just by writing this expression, DC will instantiate one module; and if you recall the last lab of the Design Compiler unit, when we told DC not to ungroup the DesignWare parts during compile_ultra, it actually showed us that module, and it gave us the module name as well. So it chooses one module in which it implements all of these operations. Now, the benefit of choosing one module is that all such operators are already ungrouped inside it, and DC can do aggressive and efficient optimization across them. Again, a limited product of sums can also be implemented in one datapath block. What it will do is employ the carry-save addition technique we saw earlier and make sure it needs only one CPA, in the final stage, to implement it; all the other intermediate points carry partial, redundant values. So a sum of products fits in one datapath block with one CPA, and likewise a limited product of sums fits in one datapath block with one CPA. Selects of operations are also considered to be datapath: DC will try to implement such an expression in a manner where there is no carry propagation before the select; the carry propagation is always the last step. Comparisons are similar: for example, T1 = A + B, T2 = C * D, and Z = (T1 > T2). Comparisons will be implemented on the redundant internal representation, that is, without a CPA before the comparison; so the CPA is again the last step to be done. So the largest possible datapath blocks are extracted from the RTL code, and expressions and operators are merged into a datapath block; we will see an example.
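The "select of operations" case can be sketched in the same toy Python style (operand values invented for illustration): both candidate sums stay as redundant (sum, carry) pairs, the mux selects a pair, and the one carry-propagate add is placed after the select, never before it.

```python
def csa(x: int, y: int, z: int):
    """3:2 compressor returning a redundant (sum, carry) pair."""
    s = x ^ y ^ z
    c = ((x & y) | (x & z) | (y & z)) << 1
    return s, c

def select_then_cpa(sel: bool, abc, deff) -> int:
    """Z = sel ? (A+B+C) : (D+E+F).
    Neither branch resolves its carries; the mux steers the redundant
    pair and the single CPA is the last step, as in the lecture."""
    pair_true = csa(*abc)
    pair_false = csa(*deff)
    s, c = pair_true if sel else pair_false     # mux on the redundant pairs
    return s + c                                # the only carry-propagate add

assert select_then_cpa(True,  (1, 2, 3), (10, 20, 30)) == 6
assert select_then_cpa(False, (1, 2, 3), (10, 20, 30)) == 60
```

A comparison such as T1 > T2 follows the same pattern: the difference is formed in redundant form and only the final carry resolution decides the sign.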
So, extraction can occur only when operators are directly connected, with no non-arithmetic logic between them: chains of arithmetic operations and sum-of-products type expressions are extracted. Those are the operators that are extracted as part of datapath optimization. What is not extracted is shifters and equality comparators. Equality comparators are implemented differently: they use XOR gates, and XOR gates are a different kind of gate altogether; they are not very similar to AND and OR gates in terms of what optimizations can be done on them, and therefore they are not extracted. Now, the benefits of extraction: when we say extraction, it means that DC is able to understand the functionality and tries to implement that functionality as part of a single module. The benefits are that it can share datapath operators, it utilizes carry-save arithmetic techniques, high-level arithmetic optimizations are carried out on the extracted datapath block, and it explores better solutions that might involve a different resource-sharing configuration. So there might be some resource-sharing cases which we did not envision at the time of writing the RTL, but DC will try and figure out whether it can do better resource sharing by looking at the extracted datapath, and it will implement that. The recommendation is to use report_timing to check timing and optimization results, and report_resources to determine which operators were absorbed into the datapath. Obviously, checking the warning messages is always recommended. Then this slide tells us about the generation strategy: there are some intelligent generation strategies for better QoR, mapping the datapath onto special cells such as 4:2 compressors, Booth encoders, carry-select adders, inverting full adders, and so on. It automatically selects Booth or non-Booth encoding for multipliers based on the design goals.
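A quick Python sketch (illustrative, not a DC construct) of why an equality comparator is different from the arithmetic operators: it is just a bitwise XOR followed by a zero-detect, with no carry at all, so there is nothing for carry-save techniques to improve and DC treats it as random logic rather than datapath.

```python
def equal(a: int, b: int, width: int = 8) -> bool:
    """a == b built gate-style: XOR each bit pair, then NOR them together.
    No carry chain anywhere -- every bit position is independent."""
    diff = a ^ b                                # one XOR gate per bit position
    return (diff & ((1 << width) - 1)) == 0     # wide NOR: all-zeros detect

assert equal(0b1011, 0b1011)
assert not equal(0b1011, 0b1010)
```

Contrast this with a magnitude comparison (a > b), which does need a subtraction and hence a carry, and therefore does participate in datapath extraction.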
So, now we will see the example, but the most important point here is that by using compile_ultra, if you have a DesignWare license available with you, then as an RTL designer you need not be concerned about what the implementation will be. You should only be concerned about your logic. Your code will be driven by the specification and the architecture: you just start coding; you can use adders, you can use multipliers, you can freely use plus and minus and do everything you want. And obviously, follow the guidelines we have talked about regarding logical partitioning, not creating artificial hierarchy boundaries, and so on. Then let DC extract the datapath. Again, you do not have to do anything special for it: the compile_ultra command will extract the datapath for you. You can use report_resources to list down all those datapath blocks, to list what DC extracted. And then, after extracting the datapath and doing resource sharing, DC will go on and choose the implementation based on the design goals, right? That is what this says: for example, for multiplication it will select Booth or non-Booth encoding, and the decision is always based on the design goals.